
Darktrace releases Darktrace / SECURE AI

Darktrace has introduced Darktrace / SECURE AI, a new behavioural AI security product designed to help enterprises deploy and scale artificial intelligence by understanding how AI systems behave, how they interact with other systems and with humans, and how they evolve over time.



Image courtesy Darktrace

Building on Darktrace’s long heritage in behavioural AI to understand intent and detect deviations, Darktrace / SECURE AI enables organisations to intervene when AI systems act abnormally, drift from intended behaviour, exceed authorised access, violate policy, or appear to be manipulated to perform unauthorised actions.


As organisations move rapidly from AI experimentation to production, traditional security controls are proving insufficient for managing dynamic, language-driven systems. With Darktrace / SECURE AI, Darktrace is bringing its proven behavioural AI approach to the challenge. Unlike static guardrails or policy-driven approaches, behavioural AI observes how generative AI and agentic workflows actually operate in the real world. Darktrace / SECURE AI continually analyses AI interactions across the enterprise, including prompt language and data access patterns, to detect emerging risks based on anomalous activity that traditional security tools and static guardrails often miss.
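The core idea here is baselining: rather than checking activity against a fixed rule, the system learns what is normal for each account or agent and flags sharp deviations. This is not Darktrace's implementation, but a toy sketch of the technique on one signal (daily upload volume); the function name and threshold are illustrative assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(daily_uploads_mb, threshold=2.0):
    """Flag days whose upload volume deviates sharply from the
    account's own historical baseline (simple z-score test).

    This is a toy illustration of behavioural baselining, not
    Darktrace's detection logic.
    """
    mu = mean(daily_uploads_mb)
    sigma = stdev(daily_uploads_mb)
    if sigma == 0:
        return []  # perfectly uniform history: nothing stands out
    return [i for i, v in enumerate(daily_uploads_mb)
            if (v - mu) / sigma > threshold]

# An account that normally uploads a few MB per day, then spikes:
history = [2, 3, 2, 4, 3, 2, 75]
print(flag_anomalies(history))  # → [6], the spike day
```

The point of the baseline approach, as the article describes it, is that the 75 MB day is only suspicious relative to this account's own history; a static guardrail with a fixed size limit could miss it or swamp other accounts with false positives.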

“AI systems don’t fail like traditional software – they drift, adapt and sometimes behave in unexpected ways,” said Mike Beck, Chief Information Security Officer at Darktrace. “Darktrace has taken a behavioral approach to understanding and securing the unstructured and unpredictable ecosystems of people, data and technology within enterprises for more than a decade. With Darktrace / SECURE AI, we’re applying our behavioral approach to give security teams visibility into what AI is doing, not just what it’s allowed to do, and enabling businesses to innovate with confidence.”

Darktrace / SECURE AI provides CISOs with a practical way to govern AI without stifling adoption. The product integrates with existing security operations and delivers actionable insights to both new standalone customers and existing Darktrace ActiveAI Security Platform customers. It is designed for enterprises operating AI across embedded SaaS applications, cloud-hosted models, and autonomous or semi-autonomous agents developed in low-code and high-code development environments. It helps security teams prevent sensitive data exposure, enforce internal access and usage policies, and govern autonomous AI activity across enterprise AI assistants and agents as well as AI development and deployment.

“Security has always been about behaviour,” said Jack Stockdale, Chief Technology Officer at Darktrace. “As AI becomes agentic, prompts become the behavioral layer, encoding intent, context, and downstream actions. If you can’t observe and understand prompt language at runtime, you can’t detect drift, misuse, or emergent behaviour. Securing AI without prompt visibility is like securing email without reading the message body. Prompts are to AI what traffic is to networks and identity is to users.”

AI adoption has become a board-level priority as organisations deploy AI tools at scale to boost productivity, growth and competitiveness across the enterprise. Across Darktrace’s customer base, more than 70% of organisations are already using generative AI tools¹. As adoption matures, many organisations are deploying AI agents that can log into systems, access data and take action on behalf of employees. But approved tools are only part of the picture. Among customers with a dominant generative AI tool in use, 91% also have employees using additional AI services¹, which likely represent shadow AI tools, leaving security teams without a clear view of which AI services are in use, where they are deployed, what data is leaving the business and where it is going.

This loss of visibility and data is already translating into real business risk. Over a five-month period, Darktrace observed unusual or anomalous data uploads to generative AI services averaging 75MB per account, equivalent to around 4,700 pages of documents, with some accounts averaging anomalous uploads of over 200,000 pages¹. Potentially sensitive data is leaving businesses at scale, entering AI environments where it can be retained, reused, or surfaced beyond organisational control. In the hands of threat actors, a single upload can be weaponised for targeted social engineering, impersonation, IP theft, or AI agent manipulation.
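The page-count equivalences above can be sanity-checked: 75 MB over roughly 4,700 pages implies about 16 KB of text per page, which also lets the 200,000-page outliers be expressed in bytes. A quick back-of-envelope script (the bytes-per-page figure is inferred from the article's own numbers, not a Darktrace specification):

```python
MB = 1024 * 1024

# Implied bytes per "page" from the article's 75 MB ≈ 4,700 pages figure
bytes_per_page = 75 * MB / 4700              # ≈ 16.7 thousand bytes, ~16 KB

# Express the heavy-uploader outliers (200,000+ pages) back in megabytes
outlier_mb = 200_000 * bytes_per_page / MB   # ≈ 3,191 MB, i.e. roughly 3.1 GB

print(round(bytes_per_page), round(outlier_mb))
```

So the accounts averaging 200,000+ anomalous pages correspond to multi-gigabyte flows of potentially sensitive text per account over the observation period.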


Darktrace / SECURE AI helps security teams safely enable and manage AI usage across the enterprise. As part of the Darktrace ActiveAI Security Platform, the solution enables visibility and analysis of the information and data entered into and sourced from generative AI tools, autonomous agents, AI development environments and shadow AI, allowing organisations to understand where AI systems operate, what they can access and how they behave over time.

With Darktrace / SECURE AI, security teams can:

  • Monitor and control generative AI usage in real time across enterprise AI assistants, low-code, high-code, and SaaS environments, providing visibility into prompts, sessions, and model responses in tools such as ChatGPT Enterprise and Microsoft Copilot, embedded AI features in business applications like Salesforce and M365, low-code agent builders like Microsoft Copilot Studio and high-code AI development platforms like Amazon Bedrock. By understanding how prompts and conversations evolve over time, security teams can identify sensitive data exposure, unusual prompt behaviour and attempts to manipulate AI systems.
  • Track and control AI agents and their access permissions by automatically discovering active AI agents operating across cloud platforms, internal systems and third-party environments, mapping the systems and data they can access and monitoring how they interact with other services, including Model Context Protocol (MCP) servers. This helps security teams identify over-privileged agents, unexpected interactions, signs of misuse or drift from intended behaviour and enables them to intervene when agents attempt unsafe or unauthorised actions.
  • Evaluate AI risks in development and deployment by gaining visibility into AI identities and their access across low-code tools, SaaS platforms, hyperscaler environments, and internal labs. Security teams can see how identities, permissions and data are configured and how AI components connect to critical systems, helping to surface misconfigurations, excessive access and unusual build activity. Those insights then feed directly into prompt oversight, linking how AI systems are created to how they behave once deployed. By correlating identity creation, build events and emerging agent capabilities with the prompts that define an agent’s logic, organisations can detect risk both before release and as agents begin operating in production.
  • Discover and manage shadow AI by identifying unapproved AI tools, unauthorised agent development, and unexpected AI-related activity across the enterprise. This helps security teams see where unmanaged AI usage is emerging, how data flows to external AI services and when legitimate tools are being used in risky or inconsistent ways. By correlating user activity with cloud, network and endpoint behaviour, security teams can contain unapproved tools, enforce policy and guide users toward sanctioned AI services before unmanaged adoption creates risk.
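The over-privileged-agent check described in the second bullet boils down to comparing what each agent is provisioned to do against what it is actually observed doing. A minimal sketch of that comparison, with hypothetical agent names and permission strings (not Darktrace's API or data model):

```python
def find_over_privileged(granted: dict[str, set[str]],
                         observed: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per agent, permissions that were granted but never
    exercised at runtime.

    granted:  agent name -> permissions provisioned for it
    observed: agent name -> permissions actually seen in use

    Illustrative sketch only; real systems would also weigh how
    sensitive each unused permission is.
    """
    return {agent: perms - observed.get(agent, set())
            for agent, perms in granted.items()
            if perms - observed.get(agent, set())}

# Hypothetical example: an invoicing agent that was also granted HR access
granted = {"invoice-bot": {"read:invoices", "write:payments", "read:hr"}}
observed = {"invoice-bot": {"read:invoices", "write:payments"}}
print(find_over_privileged(granted, observed))
# → {'invoice-bot': {'read:hr'}}
```

Surfacing the unused `read:hr` grant is exactly the kind of "over-privileged agent" signal the bullet describes: the permission is a latent risk even though no misuse has occurred yet.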

According to Darktrace’s 2026 State of AI Cybersecurity Report, released earlier this month, more than three-quarters of cybersecurity professionals surveyed are concerned about the security implications of AI agents (76%) and third-party generative AI tools (76%), citing sensitive data exposure and regulatory risk as their top concerns. Nearly half (47%) of security executives say they are extremely or very concerned, underscoring how quickly AI risk is becoming a top security priority.
