Blog
Finding the Right Cyber Security AI for You







AI has long been a buzzword – we first saw it utilized in the consumer space: in social media, e-commerce, and even in our music preferences. In the past few years it has started to make its way into the enterprise space, especially in cyber security.
Increasingly, we see threat actors utilizing AI in their attack techniques. This is inevitable given the advancement of AI technology, the lowering barrier to entry, and the continued profitability of being a threat actor. In a survey of security decision makers across industries such as financial services and manufacturing, 77% of respondents expected weaponized AI to lead to an increase in the scale and speed of attacks.
Defenders are also ramping up their use of AI in cyber security – more than 80% of respondents agreed that organizations require advanced defenses to combat offensive AI – resulting in a ‘cyber arms race’, with adversaries and security teams in constant pursuit of the latest technological advancements.
The rules-and-signatures approach is no longer sufficient in this evolving threat landscape. Because of this collective need, we will continue to see AI innovation pushed in this space as well: by 2025, cyber security technologies are expected to account for 25% of the AI software market.
Despite the intrigue surrounding AI, many people have a limited understanding of how it truly works. The mystery of AI technology is what piques the interest of many cyber security practitioners. As an industry we also know that AI is necessary for advancement, but there is so much noise around AI and machine learning that some teams struggle to understand it. The paradox of choice leaves security teams more frustrated and confused by all the options presented to them.
Identifying True AI
You first need to define what problem you want the AI technology to solve. This might seem trivial, but security teams often forget to come back to the fundamentals: what problem are you addressing? What are you trying to improve?
Not every process needs AI; some processes simply need automation – these are the more straightforward parts of your business. More complex and larger systems require AI. The crux is identifying these parts of your business, applying AI to them, and being clear about what you want to achieve with these AI technologies.
For example, when it comes to factory floor operations or tracking leave days of employees, businesses employ automation technologies, but when it comes to business decisions like PR strategies or new business exploration, AI is used to predict trends and help business owners make these decisions.
Similarly, in cyber security, automation is great at keeping track of known threats such as known malware and malicious hosting sites; workflows and playbooks are also best handled with automation tools. However, when it comes to unknown unknowns like zero-day attacks, insider threats, IoT threats and supply chain attacks, AI is needed to detect and respond to these threats as they emerge.
Automation is often presented as AI, and it becomes difficult for security teams to differentiate between the two. Automation helps you quickly make a decision you already know you will make, whereas true AI helps you make a better decision.
Key ways to differentiate true AI from automation (a brief sketch contrasting the two approaches follows this list):
- The Data Set: In automation, what you are looking for is very well-scoped. You already know what you are looking for – you are just accelerating the process with rules and signatures. True AI is dynamic: you no longer need to define which activities deserve your attention, because the AI highlights and prioritizes them for you.
- Bias: When you define what you are looking for, you inherently impose human biases on those decisions. You are also limited by your knowledge at that point in time – this leaves out the crucial unknown unknowns.
- Real-time: Every organization is always changing, and it is important that AI takes all of that data into consideration. True AI that operates in real time and evolves with your organization’s growth is hard to find.
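To make the contrast concrete, here is a minimal, hypothetical Python sketch (not any vendor’s implementation): the first function applies a static signature list, the way an automated rule engine does, while the second scores how far an observation deviates from a baseline learned from the organization’s own history.

```python
import statistics

# Automation: a static, well-scoped check -- you already know what you are looking for.
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # hypothetical signature list

def signature_match(file_hash: str) -> bool:
    """Flag a file only if its hash is already on the known-bad list."""
    return file_hash in KNOWN_BAD_HASHES

# Anomaly detection: score an observation against a baseline learned from history,
# so previously unseen behaviour can still stand out.
def anomaly_score(observed_value: float, history: list[float]) -> float:
    """Return how many standard deviations the observation sits from the baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(observed_value - mean) / stdev

# Example: 2 GB uploaded today vs a typical 30-60 MB per day for this device.
baseline_mb = [38.0, 45.0, 52.0, 40.0, 61.0, 33.0, 47.0]
print(signature_match("not-a-known-hash"))      # False -- the rule engine stays silent
print(anomaly_score(2048.0, baseline_mb) > 3)   # True  -- the deviation is flagged
```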
Our AI Research Centre has produced numerous papers on the applications of true AI in cyber security. The Centre comprises more than 150 members and holds more than 100 patents and patents pending. Featured white papers include research on Attack Path Modeling and on using AI as a preventative approach in your organization.
Integrating AI Outputs with People, Process, and Technology
Integrating AI with People
We are living in a time of trust deficit, and that applies to AI as well. As humans we can be skeptical of AI, so how do we build trust in AI so that it works for us? This applies not only to the users of the technology, but to the wider organization as well. Since this is the People pillar, the key factors in achieving trust in AI are education, culture, and exposure. In a culture where people are open to learning about and trying new AI technologies, trust in AI will naturally build over time.
Integrating AI with Process
Next, consider the integration of AI and its outputs into your workflows and playbooks. To make decisions around that, security managers need to be clear about their security priorities, or about which security gaps a particular technology is meant to fill. Whether you have an outsourced MSSP/SOC team, a 50-strong in-house SOC team, or just a two-person team, it is about understanding your priorities and assigning the proper resources to them.
Integrating AI with Technology
Finally, there is the integration of AI with your existing technology stack. Most security teams deploy different tools and services to help them achieve different goals – whether it is a tool like a SIEM, a firewall, or an endpoint agent, or services like penetration testing or vulnerability assessment exercises. One of the biggest challenges is putting all of this information together and pulling actionable insights out of it. Integration on multiple levels is always challenging with complex technologies because these technologies can rate or interpret threats differently.
Security teams often find themselves spending the most time making sense of the output of different tools and services. For example, taking the outcomes from a pentesting report and trying to enhance SOAR configurations, or looking at SOC alerts to advise firewall configurations, or taking vulnerability assessment reports to scope third-party Incident Response teams.
These tools can master large volumes of data, but ownership of the knowledge should ultimately still lie with the human teams – and the way to achieve that is with continuous feedback and integration. It is no longer efficient for human teams to carry this out at scale and at speed.
The Cyber AI Loop is Darktrace’s approach to cyber security. Its four product families each address a key aspect of an organization’s cyber security posture: Darktrace PREVENT, DETECT, RESPOND and HEAL feed back into a continuous, virtuous cycle, constantly strengthening each other’s abilities.

This cycle augments humans at every stage of the incident lifecycle. For example, PREVENT may alert you to a vulnerability that holds particularly high risk potential for your organization. It provides clear mitigation advice, and while you act on it, PREVENT feeds into DETECT and RESPOND, which are immediately poised to kick in should an attack occur in the interim. Conversely, once an attack has been contained by RESPOND, it feeds information back into PREVENT, which anticipates the attacker’s likely next move. The Cyber AI Loop helps you harden security in a holistic way, so that month on month, year on year, the organization continuously improves its defensive posture.
Explainable AI
Despite its complexity, AI needs to produce outputs that are clear and easy to understand in order to be useful. In the heat of the moment during a cyber incident, human teams need to quickly comprehend: What happened here? When did it happen? What devices are affected? What does it mean for my business? What should I deal with first?
To this end, Darktrace applies another level of AI on top of its initial findings that autonomously investigates in the background, reducing a mass of individual security events to just a few overall cyber incidents worthy of human review. It generates natural-language incident reports with all the relevant information for your team to make judgements in an instant.
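Darktrace has not published the internals of Cyber AI Analyst, but the general idea of collapsing many events into a handful of reviewable incidents can be illustrated with a toy sketch. All field names and the grouping heuristic below are hypothetical: events are grouped per device within a time window, and each group is summarized as one plain-language line.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical raw security events; in a real system these come from upstream detections.
events = [
    {"device": "laptop-042", "time": datetime(2023, 5, 9, 10, 2), "detail": "unusual external login"},
    {"device": "laptop-042", "time": datetime(2023, 5, 9, 10, 9), "detail": "new email rule created"},
    {"device": "laptop-042", "time": datetime(2023, 5, 9, 10, 40), "detail": "account password changed"},
    {"device": "printer-07", "time": datetime(2023, 5, 9, 14, 0), "detail": "unusual outbound connection"},
]

def group_into_incidents(events, window=timedelta(hours=2)):
    """Collapse individual events into per-device incidents bounded by a time window."""
    incidents = defaultdict(list)
    for event in sorted(events, key=lambda e: e["time"]):
        key = (event["device"], event["time"])  # default: open a new incident
        for device, start in list(incidents):
            if device == event["device"] and event["time"] - start <= window:
                key = (device, start)           # fold into the existing incident
                break
        incidents[key].append(event)
    return incidents

# Four raw events become two human-readable incident summaries.
for (device, start), grouped in group_into_incidents(events).items():
    details = ", ".join(e["detail"] for e in grouped)
    print(f"Incident on {device} starting {start:%Y-%m-%d %H:%M}: {details}")
```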

Cyber AI Analyst takes into consideration not only network detections but also activity across your endpoints, cloud environments, IoT devices and OT devices. It also looks at your attack surface and the associated risks to triage and surface the highest-priority alerts – those that, if left unaddressed, could cause maximum damage to your organization. These insights are not only delivered in real time but are also unique to your environment.
This also helps address another topic that frequently comes up in conversations around AI: false positives. This is of course a valid concern: what is the point of harnessing the value of AI if it means that a small team now must look at thousands of alerts? But we have to remember that while AI allows us to make more connections across vast volumes of logs, its goal is not to create more work for security teams, but to augment them.
To ensure that your business can continue to own these AI outputs – and, more importantly, the knowledge behind them – Explainable AI such as that used in Darktrace’s Cyber AI Analyst is needed to interpret the AI’s findings, ensuring human teams know what happened, what action (if any) the AI took, and why.
Conclusion
Every organization is different, and its security should reflect that. However, some fundamental challenges of AI in cyber security are shared amongst all security teams, regardless of size, resources, industry vertical, or culture. It is their cyber strategy and maturity levels that set them apart. Maturity is not defined by how many professional certifications or years of experience the team has; a mature team works together to solve problems. They understand that while AI is not a silver bullet, it is a powerful bullet that, used right, will autonomously harden the security of the complete digital ecosystem while augmenting the humans tasked with defending it.
Blog
Inside the SOC
Protecting Prospects: How Darktrace Detected an Account Hijack Within Days of Deployment



Cloud Migration Expanding the Attack Surface
Cloud migration is here to stay – accelerated by pandemic lockdowns, there has been an ongoing increase in the use of public cloud services, and Gartner has forecast worldwide public cloud spending to grow by around 20% to almost USD 600 billion in 2023 [1]. With more and more organizations utilizing cloud services and moving their operations to the cloud, there has also been a corresponding shift in malicious activity targeting cloud-based software and services, including Microsoft 365, a prominent and oft-used Software-as-a-Service (SaaS) platform.
With the adoption and implementation of more SaaS products, the overall attack surface of an organization increases – this gives malicious actors additional opportunities to exploit and compromise a network, necessitating proper controls. This increased attack surface can leave organizations open to cyber risks like cloud misconfigurations, supply chain attacks and zero-day vulnerabilities [2]. To achieve full visibility over cloud activity and prevent SaaS compromise, it is paramount for security teams to deploy sophisticated security measures that are able to learn an organization’s SaaS environment and detect suspicious activity at the earliest stage.
Darktrace Immediately Detects Hijacked Account
In May 2023, Darktrace observed a chain of suspicious SaaS activity on the network of a customer who was about to begin their trial of Darktrace/Cloud™ and Darktrace/Email™. Despite being deployed on the network for less than a week, Darktrace DETECT™ recognized that the legitimate SaaS account, belonging to an executive at the organization, had been hijacked. Darktrace/Email was able to provide full visibility over inbound and outbound mail and identified that the compromised account was subsequently used to launch an internal spear-phishing campaign.
If Darktrace RESPOND™ were enabled in autonomous response mode at the time of this compromise, it would have been able to take swift preventative action to disrupt the account compromise and prevent the ensuing phishing attack.
Account Hijack Attack Overview
Unusual External Sources for SaaS Credentials
On May 9, 2023, Darktrace DETECT/Cloud detected the first in a series of anomalous activities performed by a Microsoft 365 user account that was indicative of compromise, namely a failed login from an external IP address located in Virginia.

Just a few minutes later, Darktrace observed the same user credential being used to successfully log in from the same unusual IP address, with multi-factor authentication (MFA) requirements satisfied.

A few hours after this, the user credential was once again used to login from a different city in the state of Virginia, with MFA requirements successfully met again. Around the time of this activity, the SaaS user account was also observed previewing various business-related files hosted on Microsoft SharePoint, behavior that, taken in isolation, did not appear to be out of the ordinary and could have represented legitimate activity.
The following day, May 10, however, there were additional login attempts observed from two different US states, namely Texas and Florida. Darktrace understood that this activity was extremely suspicious, as it was highly improbable that the legitimate user would be able to travel over 2,500 miles in such a short period of time. Both login attempts were successful and passed MFA requirements, suggesting that the malicious actor was employing techniques to bypass MFA. Such MFA bypass techniques could include inserting malicious infrastructure between the user and the application to intercept user credentials and tokens, or compromising browser cookies to bypass authentication controls [3]. There have also been high-profile cases in recent years of legitimate users mistakenly (and perhaps even instinctively) accepting MFA prompts on their token or mobile device, believing it to be a legitimate process despite not having performed the login themselves.
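The “geographically improbable” reasoning can be made concrete with a simple calculation: if two successful logins on the same account imply a travel speed no traveler could plausibly achieve, the second login deserves scrutiny. The sketch below is a generic illustration using the haversine formula; the coordinates and speed threshold are illustrative assumptions, not Darktrace’s actual model.

```python
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class Login:
    time: datetime
    lat: float
    lon: float

def haversine_miles(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations, in miles."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 3959 * asin(sqrt(h))  # Earth radius is roughly 3,959 miles

def improbable_travel(a: Login, b: Login, max_mph: float = 600.0) -> bool:
    """Flag a login pair whose implied travel speed exceeds a commercial-flight ceiling."""
    hours = abs((b.time - a.time).total_seconds()) / 3600 or 1e-6  # guard near-simultaneous logins
    return haversine_miles(a, b) / hours > max_mph

# Illustrative coordinates only: a Texas login followed 90 minutes later by a Florida login.
texas = Login(datetime(2023, 5, 10, 9, 0), 32.8, -96.8)
florida = Login(datetime(2023, 5, 10, 10, 30), 25.8, -80.2)
print(improbable_travel(texas, florida))  # True: ~1,100 miles in 1.5 hours is not plausible travel
```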
New Email Rule
On the evening of May 10, following the successful logins from multiple US states, Darktrace observed the Microsoft 365 user creating a new inbox rule, named “.”, in Microsoft Outlook from an IP located in Florida. Threat actors are often observed naming new email rules with single characters, likely to evade detection, but also for the sake of expediency, so as not to expend any additional time creating meaningful labels.
In this case the newly created email rule included several suspicious properties, including ‘AlwaysDeleteOutlookRulesBlob’, ‘StopProcessingRules’ and ‘MoveToFolder’.
Firstly, ‘AlwaysDeleteOutlookRulesBlob’ suppresses or hides warning messages that typically appear if modifications to email rules are made [4]. In this case, it is likely the malicious actor was attempting to implement this property to obfuscate the creation of new email rules.
The ‘StopProcessingRules’ property meant that any subsequent email rules created by the legitimate user would be overridden by the email rule created by the malicious actor [5]. Finally, the implementation of ‘MoveToFolder’ would allow the malicious actor to automatically move all outgoing emails from the “Sent” folder to the “Deleted Items” folder, further obfuscating their malicious activities [6]. The utilization of these email rule properties is frequently observed during account hijackings, as it allows attackers to delete and/or forward key emails, delete evidence of exploitation, and launch phishing campaigns [7].
In this incident, the new email rule would likely have enabled the malicious actor to evade the detection of traditional security measures and achieve greater persistence using the Microsoft 365 account.
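As a generic illustration of how a defender might audit inbox rules for the traits described above, the sketch below flags single-character rule names and the suspicious property combination. The rule object is a hypothetical dictionary, not an actual Exchange or Microsoft Graph API response.

```python
def suspicious_rule_indicators(rule: dict) -> list[str]:
    """Return the reasons an inbox rule looks suspicious, if any."""
    reasons = []
    if len(rule.get("name", "").strip()) <= 1:
        reasons.append("single-character or empty rule name")
    if rule.get("AlwaysDeleteOutlookRulesBlob"):
        reasons.append("suppresses client warnings about modified rules")
    if rule.get("StopProcessingRules"):
        reasons.append("prevents later (legitimate) rules from being processed")
    if rule.get("MoveToFolder", "").lower() == "deleted items":
        reasons.append("silently diverts mail to the Deleted Items folder")
    return reasons

# Hypothetical rule resembling the one described in this incident.
rule = {
    "name": ".",
    "AlwaysDeleteOutlookRulesBlob": True,
    "StopProcessingRules": True,
    "MoveToFolder": "Deleted Items",
}
for reason in suspicious_rule_indicators(rule):
    print("suspicious:", reason)
```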

Account Update
A few hours after the creation of the new email rule, Darktrace observed the threat actor successfully changing the Microsoft 365 user’s account password, this time from a new IP address in Texas. As a result of this action, the attacker would have locked out the legitimate user, effectively gaining full access over the SaaS account.

Phishing Emails
The compromised SaaS account was then observed sending a high volume of suspicious emails to both internal and external email addresses. Darktrace identified that the emails were attempting to impersonate the legitimate service DocuSign and contained a malicious link prompting users to click on the text “Review Document”. Upon clicking this link, users would be redirected to a site hosted on Adobe Express, namely hxxps://express.adobe[.]com/page/A9ZKVObdXhN4p/.
Adobe Express is a free service that allows users to create web pages which can be hosted and shared publicly; it is likely that the threat actor here leveraged the service to use in their phishing campaign. When clicked, such links could result in a device unwittingly downloading malware hosted on the site, or direct unsuspecting users to a spoofed login page attempting to harvest user credentials by imitating legitimate companies like Microsoft.
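One heuristic layer of that analysis can be sketched generically: compare the brand a link’s display text implies against the domain it actually resolves to, and treat free page-hosting services as higher risk. The domain lists and the example URL below are illustrative assumptions, not a complete or vendor-specific implementation.

```python
from urllib.parse import urlparse

# Illustrative lists only; real tooling relies on much richer, continuously updated intelligence.
FREE_HOSTING_DOMAINS = {"express.adobe.com", "sites.google.com", "weebly.com"}
BRAND_DOMAINS = {"docusign": {"docusign.com", "docusign.net"}}

def _matches(host: str, domain: str) -> bool:
    return host == domain or host.endswith("." + domain)

def link_looks_suspicious(display_text: str, url: str) -> bool:
    """Flag links whose destination does not match the brand implied by the display text."""
    host = (urlparse(url).hostname or "").lower()
    if any(_matches(host, d) for d in FREE_HOSTING_DOMAINS):
        return True  # legitimate brands rarely deliver documents via free page-hosting services
    for brand, domains in BRAND_DOMAINS.items():
        if brand in display_text.lower():
            return not any(_matches(host, d) for d in domains)
    return False

# Hypothetical example mirroring the lure described above.
print(link_looks_suspicious("Review Document - DocuSign",
                            "https://express.adobe.com/page/EXAMPLE/"))  # True
```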

The malicious site hosted on Adobe Express was subsequently taken down by Adobe, possibly in response to user reports of maliciousness. Unfortunately though, platforms like this that offer free webhosting services can easily and repeatedly be abused by malicious actors. Simply by creating new pages hosted on different IP addresses, actors are able to continue to carry out such phishing attacks against unsuspecting users.
In addition to the suspicious SaaS and email activity that took place between May 9 and May 10, Darktrace/Email also detected the compromised account sending and receiving suspicious emails starting on May 4, just two days after Darktrace’s initial deployment on the customer’s environment. It is probable that the SaaS account was compromised around this time, or even prior to Darktrace’s deployment on May 2, likely via a phishing and credential harvesting campaign similar to the one detailed above.

Darktrace Coverage
As the customer was soon to begin their trial period, Darktrace RESPOND was set in “human confirmation” mode, meaning that any preventative RESPOND actions required manual application by the customer’s security team.
If Darktrace RESPOND had been enabled in autonomous response mode during this incident, it would have taken swift mitigative action by logging the suspicious user out of the SaaS account and disabling the account for a defined period of time, in doing so disrupting the attack at the earliest possible stage and giving the customer the necessary time to perform remediation steps. As it was, however, these RESPOND actions were suggested to the customer’s security team for them to manually apply.

Nevertheless, with Darktrace DETECT/Cloud in place, visibility over the anomalous cloud-based activities was significantly increased, enabling the swift identification of the chain of suspicious activities involved in this compromise.
In this case, the prospective customer reached out to Darktrace directly through the Ask the Expert (ATE) service. Darktrace’s expert analyst team then conducted a timely and comprehensive investigation into the suspicious activity surrounding this SaaS compromise, and shared these findings with the customer’s security team.
Conclusion
Ultimately, this example of SaaS account compromise highlights Darktrace’s unique ability to learn an organization’s digital environment and recognize activity that is deemed to be unexpected, within a matter of days.
Due to the lack of obvious or known indicators of compromise (IoCs) associated with the malicious activity in this incident, this account hijack would likely have gone unnoticed by traditional security tools that rely on a rules and signatures-based approach to threat detection. However, Darktrace’s Self-Learning AI enables it to detect the subtle deviations in a device’s behavior that could be indicative of an ongoing compromise.
Despite being newly deployed on a prospective customer’s network, Darktrace DETECT was able to identify unusual login attempts from geographically improbable locations, suspicious email rule updates, password changes, as well as the subsequent mounting of a phishing campaign, all before the customer’s trial of Darktrace had even begun.
When enabled in autonomous response mode, Darktrace RESPOND would be able to take swift preventative action against such activity as soon as it is detected, effectively shutting down the compromise and mitigating any subsequent phishing attacks.
With the full deployment of Darktrace’s suite of products, including Darktrace/Cloud and Darktrace/Email, customers can rest assured their critical data and systems are protected, even in the case of hybrid and multi-cloud environments.
Credit: Samuel Wee, Senior Analyst Consultant & Model Developer
Appendices
References
[2] https://www.upguard.com/blog/saas-security-risks
[4] https://learn.microsoft.com/en-us/powershell/module/exchange/disable-inboxrule?view=exchange-ps
[7] https://blog.knowbe4.com/check-your-email-rules-for-maliciousness
Darktrace Model Detections
Darktrace DETECT/Cloud and RESPOND Models Breached:
SaaS / Access / Unusual External Source for SaaS Credential Use
SaaS / Unusual Activity / Multiple Unusual External Sources for SaaS Credential
Antigena / SaaS / Antigena Unusual Activity Block (RESPOND Model)
SaaS / Compliance / New Email Rule
Antigena / SaaS / Antigena Significant Compliance Activity Block
SaaS / Compromise / Unusual Login and New Email Rule (Enhanced Monitoring Model)
Antigena / SaaS / Antigena Suspicious SaaS Activity Block (RESPOND Model)
SaaS / Compromise / SaaS Anomaly Following Anomalous Login (Enhanced Monitoring Model)
SaaS / Compromise / Unusual Login and Account Update
Antigena / SaaS / Antigena Suspicious SaaS Activity Block (RESPOND Model)
IoC – Type – Description & Confidence
hxxps://express.adobe[.]com/page/A9ZKVObdXhN4p/ – Domain – Probable Phishing Page (Now Defunct)
37.19.221[.]142 – IP Address – Unusual Login Source
35.174.4[.]92 – IP Address – Unusual Login Source
MITRE ATT&CK Mapping
Tactic - Techniques
INITIAL ACCESS, PRIVILEGE ESCALATION, DEFENSE EVASION, PERSISTENCE
T1078.004 – Cloud Accounts
DISCOVERY
T1538 – Cloud Service Dashboards
CREDENTIAL ACCESS
T1539 – Steal Web Session Cookie
RESOURCE DEVELOPMENT
T1586 – Compromise Accounts
PERSISTENCE
T1137.005 – Outlook Rules

Blog
Darktrace/Email in Action: Why AI-Driven Email Security is the Best Defense Against Sustained Phishing Campaigns


Stopping the bad while allowing the good
Since its inception, email has been regarded as one of the most important tools for businesses, revolutionizing communication and allowing global teams to become even more connected. But while organizations rely heavily on email for their daily operations, threat actors have also recognized that the inbox is one of the easiest ways to establish an initial foothold on the network.
Today, not only are phishing campaigns and social engineering attacks becoming more prevalent, but the sophistication of these attacks is also increasing with the help of generative AI tools that allow for the creation of hyper-realistic emails with minimal errors, effectively lowering the barrier to entry for threat actors. These diverse and stealthy types of attacks evade traditional email security tools based on rules and signatures, because they are less likely to contain the low-sophistication markers of a typical phishing attack.
In a situation where the sky is the limit for attackers and security teams are lean, how can teams equip themselves to tackle these threats? How can they accurately detect increasingly realistic malicious emails and neutralize these threats before it is too late? And importantly, how can email security block these threats while allowing legitimate emails to flow freely?
Instead of relying on past attack data, Darktrace’s Self-Learning AI detects the slightest deviation from a user’s pattern of life and responds autonomously to contain potential threats, stopping novel attacks in their tracks before damage is caused. It doesn’t define ‘good’ and ‘bad’ like traditional email tools; rather, it understands each user and what is normal for them – and what’s not.
This blog outlines how Darktrace/Email™ used its understanding of ‘normal’ to accurately detect and respond to a sustained phishing campaign targeting a real-life company.
Responding to a sustained phishing attack
Over the course of 24 hours, Darktrace detected multiple emails with different subject lines, sent from different senders to different recipients in one organization. These emails were sent from different IP addresses, but all came from the same autonomous system number (ASN).

The emails themselves had many suspicious indicators. None of the senders had any prior association with the recipients, and the emails generated a high general inducement score. This score is generated by structural and non-specific content analysis of the email – a high score indicates that the email is trying to induce the recipient into taking a particular action, which may lead to account compromise.
Additionally, each email contained a visually prominent link to a file storage service, hidden behind a shortened bit.ly link. The similarities across all these emails pointed to a sustained campaign targeting the organization by a single threat actor.
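The correlation logic described here can be illustrated with a toy sketch: emails from unrelated senders and IP addresses are grouped by shared origin ASN and link domain, and large, high-inducement clusters are surfaced as a likely campaign. The field names, sample values and thresholds are illustrative assumptions rather than Darktrace’s actual models.

```python
from collections import defaultdict

# Hypothetical email metadata; in practice these fields come from upstream mail analysis.
emails = [
    {"sender": "alice.news@freemail.example", "asn": 64501, "link_domain": "bit.ly", "inducement": 0.92},
    {"sender": "promo.desk@freemail.example", "asn": 64501, "link_domain": "bit.ly", "inducement": 0.88},
    {"sender": "it-notice@freemail.example",  "asn": 64501, "link_domain": "bit.ly", "inducement": 0.95},
    {"sender": "newsletter@vendor.example",   "asn": 64510, "link_domain": "vendor.example", "inducement": 0.10},
]

def correlate_campaigns(emails, min_size=3, min_inducement=0.8):
    """Group emails by (origin ASN, link domain) and surface large, high-inducement clusters."""
    clusters = defaultdict(list)
    for e in emails:
        clusters[(e["asn"], e["link_domain"])].append(e)
    return {
        key: batch for key, batch in clusters.items()
        if len(batch) >= min_size
        and min(e["inducement"] for e in batch) >= min_inducement
    }

for (asn, domain), batch in correlate_campaigns(emails).items():
    print(f"Possible campaign: {len(batch)} emails via AS{asn}, all linking to {domain}")
```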


With all these suspicious indicators, many models were breached. This drove up the anomaly score, causing Darktrace/Email to hold the suspicious emails back from the recipients’ inboxes, safeguarding the recipients from potential account compromise and preventing the threats from taking hold in the network.
Imagining a phishing attack without Darktrace/Email
So what could have happened if Darktrace had not withheld these emails, and the recipients had clicked on the links? File storage sites have a wide variety of uses that allow attackers to be creative in their attack strategy. If a user had clicked on the shortened link, the possible consequences are numerous. The link could have led to a login page designed for unsuspecting victims to input their credentials, or it could have hosted malware that would automatically download once the link was clicked. With compromised credentials, threat actors could bypass MFA, change email rules, or gain privileged access to a network. The downloaded malware might be a keylogger, might lead to cryptojacking, or could open a back door for threat actors to return to at a later time.


The limits of traditional email security tools
Secure email gateways (SEGs) and static AI security tools may have found it challenging to detect this phishing campaign as malicious. While Darktrace was able to correlate these emails and determine that a sustained phishing campaign was taking place, the pattern among them is far too generic for the specific rules set in traditional security tools. Take the freemail sender accounts as an example: setting a rule to block all emails from freemail accounts would likely lead to more legitimate emails being withheld, since these addresses have a variety of uses.
With these factors in mind, these emails could have easily slipped through traditional security filters and led to a devastating impact on the organization.
Conclusion
As threat actors step up the sophistication of their attacks, prioritizing email security is more crucial than ever to preserving a safe digital environment. In response to these challenges, Darktrace/Email offers a set-and-forget solution that continuously learns and adapts to changes in the organization.
Through an evolving understanding of every environment in which it is deployed, its threat response becomes increasingly precise in neutralizing only the bad, while allowing the good – delivering email security that doesn’t come at the expense of business growth.