The Upgrade: Strengthening the Adversary with AI

Overview
The mind of an experienced and dedicated cyber-criminal works like that of an entrepreneur: the relentless pursuit of profit guides every move they make. At each step of the journey towards their objective, the same questions are asked: how can I minimize my time and resources? How can I mitigate risk? What measures can I take that will return the best results?
Incorporating this ‘enterprise’ model into the cyber-criminal framework explains why attackers are turning to new technology in an attempt to maximize efficiency, and why a report from Forrester earlier this year revealed that 88% of security leaders now consider the nefarious use of AI in cyber activity to be inevitable. Over half of the respondents to that same survey expect AI attacks to become publicly apparent within the next twelve months – or believe they are already occurring.
AI has already achieved breakthroughs in fields such as healthcare, facial recognition, voice assistance and many others. In the current cat-and-mouse game of cyber security, defenders have started to accept that augmenting their defenses with AI is necessary, with over 3,500 organizations using machine learning to protect their digital environments. But we have to be ready for the moment attackers themselves use open-source AI technology available today to supercharge their attacks.
Enhancing the attack life cycle
To a cyber-criminal ring, the benefits of leveraging AI in their attacks are at least four-fold:
- It gives them an understanding of context
- It helps to scale up operations
- It makes attribution and detection harder
- It ultimately increases their profitability
To demonstrate how each of these factors surfaces, we can break down the life cycle of a typical data exfiltration attempt and show how AI can augment the attacker at every stage of the campaign.
- Reconnaissance: CAPTCHA breaker
- Intrusion: Shellphish and SNAP_R
- C2 establishment: FirstOrder and unsupervised clustering algorithm
- Privilege escalation: CeWL and neural network
- Lateral movement: MITRE CALDERA
- Mission accomplished: Yahoo NSFW
Figure 1: The ‘AI toolbox’ attackers use to augment their attacks
Stage 1: Reconnaissance
In seeking to garner trust and make inroads into an organization, automated chatbots would first interact with employees via social media, leveraging profile pictures of non-existent people created by AI instead of re-using actual human photos. Once the chatbots have gained the trust of the victims at the target organization, the human attackers can gain valuable intelligence about its employees, while CAPTCHA-breakers are used for automated reconnaissance on the organization’s public-facing web pages.
Forrester estimates that AI-enabled ‘deep fakes’ will cost businesses a quarter of a billion dollars in losses in 2020.
Stage 2: Intrusion
This intelligence would then be used to craft convincing spear phishing attacks, whilst an adapted version of SNAP_R could be leveraged to create realistic tweets at scale – targeting several key employees. The tweets either trick the user into downloading malicious documents, or contain links to servers that facilitate exploit-kit attacks.
An autonomous vulnerability fuzzing engine based on Shellphish would be constantly crawling the victim’s perimeter – internet-facing servers and websites – and trying to find new vulnerabilities for an initial foothold.
Stage 3: Command and control
A popular hacking framework, Empire, allows attackers to ‘blend in’ with regular business operations, restricting command and control traffic to periods of peak activity. An agent using some form of automated decision-making engine might not even require command and control traffic to move laterally. Eliminating the need for command and control traffic drastically reduces the detection surface of existing malware.
Stage 4: Privilege escalation
At this stage, a password crawler like CeWL could collect target-specific keywords from internal websites and feed those keywords into a pre-trained neural network, essentially creating hundreds of realistic permutations of contextualized passwords at machine speed. These can be automatically entered in periodic bursts so as not to alert the security team or trigger resets.
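To make this concrete, the sketch below expands a handful of CeWL-style keywords into contextualized password candidates using simple hand-written rules. It is only a rough, illustrative stand-in for the pre-trained neural network described above (a trained generative model would produce far likelier guesses), and the keywords, years and suffixes are hypothetical.

```python
from itertools import product

# Hypothetical keywords, as CeWL might scrape them from an internal site
keywords = ["acme", "phoenix", "quarterly"]
years = ["2019", "2020"]
suffixes = ["", "!", "123"]

def candidates(words, years, suffixes):
    """Expand keywords into contextualized password guesses with simple rules.
    A trained generative model would replace these hand-written permutations."""
    for word, year, suffix in product(words, years, suffixes):
        for variant in (word, word.capitalize(), word.upper()):
            yield f"{variant}{year}{suffix}"

print(len(set(candidates(keywords, years, suffixes))))  # 54 candidates
```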
Stage 5: Lateral movement
Moving laterally and harvesting accounts and credentials involves identifying the optimal paths to accomplish the mission while minimizing intrusion time. Parts of the attack planning can be accelerated by concepts such as those in the CALDERA framework, which applies automated planning methods from AI. This would greatly reduce the time required to reach the final destination.
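Conceptually, that planning step can be framed as a shortest-path search over a graph of reachable hosts, with edges weighted by the estimated effort of each pivot. The sketch below shows the idea with Dijkstra's algorithm on an invented toy network; it is not CALDERA's actual planner, and the host names and weights are assumptions.

```python
import heapq

# Toy network graph: edges weighted by estimated effort/time to pivot.
# Host names and weights are illustrative only.
graph = {
    "workstation": [("fileserver", 2), ("printer", 1)],
    "printer": [("fileserver", 4)],
    "fileserver": [("domain_controller", 3)],
    "domain_controller": [],
}

def cheapest_path(start, goal):
    """Dijkstra's algorithm: the lowest-effort pivot chain from start to goal."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph[node]:
            heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

print(cheapest_path("workstation", "domain_controller"))
# (5, ['workstation', 'fileserver', 'domain_controller'])
```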
Stage 6: Data exfiltration
It is in this final stage that the role of offensive AI is most apparent. Instead of running a costly post-intrusion analysis operation and sifting through gigabytes of data, the attackers can leverage a neural network that pre-selects only relevant material for exfiltration. Because the neural network is pre-trained, it has a basic understanding of what constitutes valuable material and flags it for immediate exfiltration. The neural network could be based on something like Yahoo’s open-source project for content recognition.
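As a crude illustration of that pre-selection step, the sketch below scores documents against a hand-written list of 'valuable' terms and flags only the highest scorers. A real pre-trained classifier (Yahoo's open-source model is an image-classification analogue rather than a document classifier) would learn these signals instead of relying on keywords; the file contents, terms and threshold here are hypothetical.

```python
# Illustrative only: a hand-written scorer standing in for a pre-trained
# classifier that decides which documents are worth exfiltrating.
VALUABLE_TERMS = {"password": 3, "contract": 2, "salary": 2, "invoice": 1}

def relevance(text: str) -> int:
    """Score a document by the weighted count of 'valuable' terms it contains."""
    words = text.lower().split()
    return sum(weight * words.count(term) for term, weight in VALUABLE_TERMS.items())

documents = {
    "minutes.txt": "weekly stand-up notes and action items",
    "payroll.txt": "salary bands and contract renewal dates",
}

flagged = [name for name, text in documents.items() if relevance(text) >= 3]
print(flagged)  # ['payroll.txt']
```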
Conclusion
Today’s attacks still require several humans behind the keyboard making guesses about the sorts of methods that will be most effective in their target network – it’s this human element that often allows defenders to neutralize attacks.
Offensive AI will make detecting and responding to attacks far more difficult. Open-source research and projects exist today which can be leveraged to augment every phase of the attack lifecycle. This means that the speed, scale, and contextualization of attacks will increase exponentially. Traditional security controls are already struggling to detect attacks that have never been seen before in the wild – be it malware without known signatures, new command and control domains, or individualized spear phishing emails. Traditional tools stand no chance of coping with future attacks as offensive AI becomes the norm and easier than ever to put into practice.
To stay ahead of this next wave of attacks, AI is becoming a necessary part of the defender’s stack, as no matter how well-trained or how well-staffed, humans alone will no longer be able to keep up. Hundreds of organizations are already using Autonomous Response to fight back against new strains of ransomware, insider threats, previously unknown techniques, tools and procedures, and many other threats. Cyber AI technology allows human responders to take stock and strategize from behind the front line. A new age in cyber defense is just beginning, and the effect of AI on this battleground is already proving fundamental.
More in this series
Protecting Prospects: How Darktrace Detected an Account Hijack Within Days of Deployment



Cloud Migration Expanding the Attack Surface
Cloud migration is here to stay – accelerated by pandemic lockdowns, there has been an ongoing increase in the use of public cloud services, and Gartner has forecast worldwide public cloud spending to grow by around 20% in 2023, to almost USD 600 billion [1]. With more and more organizations utilizing cloud services and moving their operations to the cloud, there has also been a corresponding shift in malicious activity targeting cloud-based software and services, including Microsoft 365, a prominent and oft-used Software-as-a-Service (SaaS) platform.
With the adoption and implementation of more SaaS products, the overall attack surface of an organization increases – this gives malicious actors additional opportunities to exploit and compromise a network, necessitating proper controls to be in place. This increased attack surface can leave organizations open to cyber risks like cloud misconfigurations, supply chain attacks and zero-day vulnerabilities [2]. In order to achieve full visibility over cloud activity and prevent SaaS compromise, it is paramount for security teams to deploy sophisticated security measures that are able to learn an organization’s SaaS environment and detect suspicious activity at the earliest stage.
Darktrace Immediately Detects Hijacked Account
In May 2023, Darktrace observed a chain of suspicious SaaS activity on the network of a customer who was about to begin their trial of Darktrace/Cloud™ and Darktrace/Email™. Despite being deployed on the network for less than a week, Darktrace DETECT™ recognized that the legitimate SaaS account, belonging to an executive at the organization, had been hijacked. Darktrace/Email was able to provide full visibility over inbound and outbound mail and identified that the compromised account was subsequently used to launch an internal spear-phishing campaign.
Had Darktrace RESPOND™ been enabled in autonomous response mode at the time of this compromise, it would have been able to take swift preventative action to disrupt the account compromise and prevent the ensuing phishing attack.
Account Hijack Attack Overview
Unusual External Sources for SaaS Credentials
On May 9, 2023, Darktrace DETECT/Cloud detected the first in a series of anomalous activities performed by a Microsoft 365 user account that was indicative of compromise, namely a failed login from an external IP address located in Virginia.

Just a few minutes later, Darktrace observed the same user credential being used to successfully login from the same unusual IP address, with multi-factor authentication (MFA) requirements satisfied.

A few hours after this, the user credential was once again used to login from a different city in the state of Virginia, with MFA requirements successfully met again. Around the time of this activity, the SaaS user account was also observed previewing various business-related files hosted on Microsoft SharePoint, behavior that, taken in isolation, did not appear to be out of the ordinary and could have represented legitimate activity.
The following day, May 10, however, there were additional login attempts observed from two different states within the US, namely Texas and Florida. Darktrace understood that this activity was extremely suspicious, as it was highly improbable that the legitimate user would be able to travel over 2,500 miles in such a short period of time. Both login attempts were successful and passed MFA requirements, suggesting that the malicious actor was employing techniques to bypass MFA. Such MFA bypass techniques could include inserting malicious infrastructure between the user and the application to intercept user credentials and tokens, or compromising browser cookies to bypass authentication controls [3]. There have also been high-profile cases in recent years of legitimate users mistakenly (and perhaps even instinctively) accepting MFA prompts on their token or mobile device, believing it to be a legitimate process despite not having performed the login themselves.
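The geographic reasoning behind this kind of detection can be sketched as a simple 'impossible travel' check: compute the great-circle distance between consecutive login locations and compare the implied speed with a plausible travel speed. The snippet below is a minimal illustration of that check only, not Darktrace's detection logic; the coordinates and speed threshold are assumptions.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))

def impossible_travel(loc_a, loc_b, hours_apart, max_mph=600):
    """Flag consecutive logins whose implied speed exceeds a plausible airliner speed."""
    distance = haversine_miles(*loc_a, *loc_b)
    return distance / max(hours_apart, 0.1) > max_mph

# Illustrative coordinates only: a point in Virginia, then one in Texas, two hours apart
print(impossible_travel((37.4, -78.7), (31.0, -100.0), hours_apart=2))  # True
```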
New Email Rule
On the evening of May 10, following the successful logins from multiple US states, Darktrace observed the Microsoft 365 user creating a new inbox rule, named “.”, in Microsoft Outlook from an IP address located in Florida. Threat actors are often observed naming new email rules with single characters, likely to evade detection, but also for the sake of expediency, so as not to expend any additional time creating meaningful labels.
In this case, the newly created email rule included several suspicious properties, including ‘AlwaysDeleteOutlookRulesBlob’, ‘StopProcessingRules’ and ‘MoveToFolder’.
Firstly, ‘AlwaysDeleteOutlookRulesBlob’ suppresses or hides warning messages that typically appear if modifications to email rules are made [4]. In this case, it is likely the malicious actor was attempting to implement this property to obfuscate the creation of new email rules.
The ‘StopProcessingRules’ property meant that any subsequent email rules created by the legitimate user would be overridden by the rule created by the malicious actor [5]. Finally, the implementation of ‘MoveToFolder’ would allow the malicious actor to automatically move all outgoing emails from the “Sent” folder to the “Deleted Items” folder, for example, further obfuscating their malicious activities [6]. The utilization of these email rule properties is frequently observed during account hijackings, as it allows attackers to delete and/or forward key emails, delete evidence of exploitation and launch phishing campaigns [7].
In this incident, the new email rule would likely have enabled the malicious actor to evade the detection of traditional security measures and achieve greater persistence using the Microsoft 365 account.
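From a defender's perspective, these rule properties surface in Microsoft 365 audit events for operations such as New-InboxRule and can be screened for suspicious combinations. The sketch below illustrates that screening over a simplified, hypothetical event shape; it is not Darktrace's logic, and it does not reflect the full unified audit log schema.

```python
# Minimal sketch: screen New-InboxRule audit events for properties commonly
# abused during account takeover. The event dict below is a simplified,
# hypothetical shape, not the real Microsoft 365 unified audit log schema.
SUSPICIOUS_PARAMS = {"AlwaysDeleteOutlookRulesBlob", "StopProcessingRules", "MoveToFolder"}

def is_suspicious_rule(event: dict) -> bool:
    """Flag inbox-rule creations with single-character names or risky parameters."""
    if event.get("Operation") != "New-InboxRule":
        return False
    params = {p["Name"] for p in event.get("Parameters", [])}
    single_char_name = len(event.get("RuleName", "")) <= 1
    return single_char_name or len(params & SUSPICIOUS_PARAMS) >= 2

event = {
    "Operation": "New-InboxRule",
    "RuleName": ".",
    "Parameters": [{"Name": "StopProcessingRules"}, {"Name": "MoveToFolder"}],
}
print(is_suspicious_rule(event))  # True
```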

Account Update
A few hours after the creation of the new email rule, Darktrace observed the threat actor successfully changing the Microsoft 365 user’s account password, this time from a new IP address in Texas. As a result of this action, the attacker would have locked out the legitimate user, effectively gaining full access over the SaaS account.

Phishing Emails
The compromised SaaS account was then observed sending a high volume of suspicious emails to both internal and external email addresses. Darktrace identified that the emails were attempting to impersonate the legitimate service DocuSign and contained a malicious link prompting users to click on the text “Review Document”. Upon clicking this link, users would be redirected to a site hosted on Adobe Express, namely hxxps://express.adobe[.]com/page/A9ZKVObdXhN4p/.
Adobe Express is a free service that allows users to create web pages which can be hosted and shared publicly; it is likely that the threat actor leveraged the service to host a page for their phishing campaign. When clicked, such links could result in a device unwittingly downloading malware hosted on the site, or direct unsuspecting users to a spoofed login page attempting to harvest user credentials by imitating legitimate companies like Microsoft.

The malicious site hosted on Adobe Express was subsequently taken down by Adobe, possibly in response to user reports of maliciousness. Unfortunately though, platforms like this that offer free webhosting services can easily and repeatedly be abused by malicious actors. Simply by creating new pages hosted on different IP addresses, actors are able to continue to carry out such phishing attacks against unsuspecting users.
In addition to the suspicious SaaS and email activity that took place between May 9 and May 10, Darktrace/Email also detected the compromised account sending and receiving suspicious emails starting on May 4, just two days after Darktrace’s initial deployment on the customer’s environment. It is probable that the SaaS account was compromised around this time, or even prior to Darktrace’s deployment on May 2, likely via a phishing and credential harvesting campaign similar to the one detailed above.

Darktrace Coverage
As the customer was soon to begin their trial period, Darktrace RESPOND was set in “human confirmation” mode, meaning that any preventative RESPOND actions required manual application by the customer’s security team.
If Darktrace RESPOND had been enabled in autonomous response mode during this incident, it would have taken swift mitigative action by logging the suspicious user out of the SaaS account and disabling the account for a defined period of time, in doing so disrupting the attack at the earliest possible stage and giving the customer the necessary time to perform remediation steps. As it was, however, these RESPOND actions were suggested to the customer’s security team for them to manually apply.

Nevertheless, with Darktrace DETECT/Cloud in place, visibility over the anomalous cloud-based activities was significantly increased, enabling the swift identification of the chain of suspicious activities involved in this compromise.
In this case, the prospective customer reached out to Darktrace directly through the Ask the Expert (ATE) service. Darktrace’s expert analyst team then conducted a timely and comprehensive investigation into the suspicious activity surrounding this SaaS compromise, and shared these findings with the customer’s security team.
Conclusion
Ultimately, this example of SaaS account compromise highlights Darktrace’s unique ability to learn an organization’s digital environment and recognize activity that is deemed to be unexpected, within a matter of days.
Due to the lack of obvious or known indicators of compromise (IoCs) associated with the malicious activity in this incident, this account hijack would likely have gone unnoticed by traditional security tools that rely on a rules and signatures-based approach to threat detection. However, Darktrace’s Self-Learning AI enables it to detect the subtle deviations in a device’s behavior that could be indicative of an ongoing compromise.
Despite being newly deployed on a prospective customer’s network, Darktrace DETECT was able to identify unusual login attempts from geographically improbable locations, suspicious email rule updates, password changes, as well as the subsequent mounting of a phishing campaign, all before the customer’s trial of Darktrace had even begun.
When enabled in autonomous response mode, Darktrace RESPOND would be able to take swift preventative action against such activity as soon as it is detected, effectively shutting down the compromise and mitigating any subsequent phishing attacks.
With the full deployment of Darktrace’s suite of products, including Darktrace/Cloud and Darktrace/Email, customers can rest assured their critical data and systems are protected, even in the case of hybrid and multi-cloud environments.
Credit: Samuel Wee, Senior Analyst Consultant & Model Developer
Appendices
References
[2] https://www.upguard.com/blog/saas-security-risks
[4] https://learn.microsoft.com/en-us/powershell/module/exchange/disable-inboxrule?view=exchange-ps
[7] https://blog.knowbe4.com/check-your-email-rules-for-maliciousness
Darktrace Model Detections
Darktrace DETECT/Cloud and RESPOND Models Breached:
SaaS / Access / Unusual External Source for SaaS Credential Use
SaaS / Unusual Activity / Multiple Unusual External Sources for SaaS Credential
Antigena / SaaS / Antigena Unusual Activity Block (RESPOND Model)
SaaS / Compliance / New Email Rule
Antigena / SaaS / Antigena Significant Compliance Activity Block
SaaS / Compromise / Unusual Login and New Email Rule (Enhanced Monitoring Model)
Antigena / SaaS / Antigena Suspicious SaaS Activity Block (RESPOND Model)
SaaS / Compromise / SaaS Anomaly Following Anomalous Login (Enhanced Monitoring Model)
SaaS / Compromise / Unusual Login and Account Update
Antigena / SaaS / Antigena Suspicious SaaS Activity Block (RESPOND Model)
IoC – Type – Description & Confidence
hxxps://express.adobe[.]com/page/A9ZKVObdXhN4p/ - Domain – Probable Phishing Page (Now Defunct)
37.19.221[.]142 – IP Address – Unusual Login Source
35.174.4[.]92 – IP Address – Unusual Login Source
MITRE ATT&CK Mapping
Tactic - Techniques
INITIAL ACCESS, PRIVILEGE ESCALATION, DEFENSE EVASION, PERSISTENCE
T1078.004 – Cloud Accounts
DISCOVERY
T1538 – Cloud Service Dashboards
CREDENTIAL ACCESS
T1539 – Steal Web Session Cookie
RESOURCE DEVELOPMENT
T1586 – Compromise Accounts
PERSISTENCE
T1137.005 – Outlook Rules

Darktrace/Email in Action: Why AI-Driven Email Security is the Best Defense Against Sustained Phishing Campaigns


Stopping the bad while allowing the good
Since its inception, email has been regarded as one of the most important tools for businesses, revolutionizing communication and allowing global teams to become even more connected. But while organizations rely heavily on email for their daily operations, threat actors have also recognized that the inbox is one of the easiest ways to establish an initial foothold on a network.
Today, not only are phishing campaigns and social engineering attacks becoming more prevalent, but the level of sophistication of these attacks is also increasing with the help of generative AI tools that allow for the creation of hyper-realistic emails with minimal errors, effectively lowering the barrier to entry for threat actors. These diverse and stealthy types of attacks evade traditional email security tools based on rules and signatures, because they are less likely to contain the low-sophistication markers of a typical phishing attack.
In a situation where the sky is the limit for attackers and security teams are lean, how can teams equip themselves to tackle these threats? How can they accurately detect increasingly realistic malicious emails and neutralize these threats before it is too late? And importantly, how can email security block these threats while allowing legitimate emails to flow freely?
Instead of relying on past attack data, Darktrace’s Self-Learning AI detects the slightest deviation from a user’s pattern of life and responds autonomously to contain potential threats, stopping novel attacks in their tracks before damage is caused. It doesn’t define ‘good’ and ‘bad’ like traditional email tools; rather, it understands each user and what is normal for them – and what’s not.
This blog outlines how Darktrace/Email™ used its understanding of ‘normal’ to accurately detect and respond to a sustained phishing campaign targeting a real-life company.
Responding to a sustained phishing attack
Over the course of 24 hours, Darktrace detected multiple emails containing different subjects, all from different senders to different recipients in one organization. These emails were sent from different IP addresses, but all came from the same autonomous system number (ASN).

The emails themselves had many suspicious indicators. None of the senders had any prior association with the recipients, and the emails generated a high general inducement score. This score is generated by structural and non-specific content analysis of the email – a high score indicates that the email is trying to induce the recipient into taking a particular action, which may lead to account compromise.
Additionally, each email contained a visually prominent link to a file storage service, hidden behind a shortened bit.ly link. The similarities across all these emails pointed to a sustained campaign targeting the organization by a single threat actor.
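One way to picture this correlation step is to group inbound messages by shared traits, for instance the originating ASN combined with the presence of a shortened file-storage link, and to flag any cluster of otherwise unrelated senders. The sketch below is a simplified, hypothetical illustration of that grouping, not Darktrace's actual correlation logic; the addresses and ASNs are invented.

```python
from collections import defaultdict

# Hypothetical message metadata; in this campaign the senders, subjects and
# source IPs all differed, but the ASN and the shortened-link trait matched.
messages = [
    {"sender": "a@example.net", "asn": "AS64500", "has_shortened_link": True},
    {"sender": "b@example.org", "asn": "AS64500", "has_shortened_link": True},
    {"sender": "c@example.com", "asn": "AS64500", "has_shortened_link": True},
    {"sender": "d@example.com", "asn": "AS64501", "has_shortened_link": False},
]

campaigns = defaultdict(list)
for msg in messages:
    if msg["has_shortened_link"]:
        campaigns[msg["asn"]].append(msg["sender"])

# Flag any ASN with several otherwise-unrelated senders as a likely campaign
suspected = {asn: senders for asn, senders in campaigns.items() if len(senders) >= 3}
print(suspected)  # {'AS64500': ['a@example.net', 'b@example.org', 'c@example.com']}
```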


With all these suspicious indicators, many models were breached. This drove up the anomaly score, causing Darktrace/Email to withhold the suspicious emails from the recipients’ inboxes, safeguarding the recipients from potential account compromise and preventing the threats from taking hold in the network.
Imagining a phishing attack without Darktrace/Email
So what could have happened if Darktrace had not withheld these emails, and the recipients had clicked on the links? File storage sites have a wide variety of uses that allow attackers to be creative in their attack strategy. If a user had clicked on the shortened link, the possible consequences are numerous. The link could have led to a fake login page designed to capture victims’ credentials, or it could have hosted malware that would automatically download once the link was clicked. With the compromised credentials, threat actors could even bypass MFA, change email rules, or gain privileged access to a network. The downloaded malware might be a keylogger, might hijack resources for cryptojacking, or could open a back door for threat actors to return to at a later time.


The limits of traditional email security tools
Secure email gateways (SEGs) and static AI security tools may have struggled to identify this phishing campaign as malicious. While Darktrace was able to correlate these emails to determine that a sustained phishing campaign was taking place, the pattern among them is far too generic to capture with the specific rules used by traditional security tools. If we take the senders’ use of freemail accounts as an example, setting a rule to block all emails from freemail accounts may lead to more legitimate emails being withheld, since these addresses have a variety of uses.
With these factors in mind, these emails could have easily slipped through traditional security filters and led to a devastating impact on the organization.
Conclusion
As threat actors step up the sophistication of their attacks, prioritizing email security is more crucial than ever to preserving a safe digital environment. In response to these challenges, Darktrace/Email offers a set-and-forget solution that continuously learns and adapts to changes in the organization.
Through an evolving understanding of every environment in which it is deployed, its threat response becomes increasingly precise in neutralizing only the bad, while allowing the good – delivering email security that doesn’t come at the expense of business growth.