Published on June 13th, 2023 | by Adrian Gunning
Darktrace Addresses Generative AI Concerns with Introduction of AI Models That Help Protect Data Privacy and Intellectual Property
In response to the growing use of generative AI tools, Darktrace today announces the launch of new risk and compliance models to help its 8,400 customers around the world address the increasing risk of IP loss and data leakage. These new risk and compliance models for Darktrace DETECT™ and RESPOND™ make it easier for customers to put guardrails in place to monitor and, when necessary, respond to activity and connections to generative AI and large language model (LLM) tools.
This comes as Darktrace’s AI has observed employees using generative AI tools in the workplace across 74% of active customer deployments.* In one instance, in May 2023, Darktrace detected and prevented an upload of more than 1GB of data to a generative AI tool at one of its customers.
New generative AI tools promise increases in productivity and new ways of augmenting human creativity. CISOs must balance the desire to embrace these innovations to boost productivity against the need to manage risk. Government agencies including the UK’s National Cyber Security Centre have already issued guidance on the need to manage risk when using generative AI tools and other LLMs in the workplace. In addition, regulators in a variety of jurisdictions (including the UK, EU, and US) and in various sectors are expected to lay out guidance to companies on how to make the most of AI without exacerbating its potential dangers.
“Since generative AI tools like ChatGPT have gone mainstream, our company is increasingly aware of how companies are being impacted. First and foremost, we are focused on the attack vector and how well prepared we are to respond to potential threats. Equally as important is data privacy, and we are hearing stories in the news about potential data protection issues and data loss,” said Allan Jacobson, Vice President and Head of Information Technology, Orion Office REIT. “Businesses need a combination of technology and clear guardrails to take advantage of the benefits while managing the potential risks.”
At London Tech Week, Darktrace’s Chief Executive Officer Poppy Gustafsson will be interviewed by Guy Podjarny, CEO of Snyk, in a fireside chat on ‘Securing Our Future by Uplifting the Human,’ where they’ll discuss how we can future-proof organisations against cyber compromise and prepare teams to fend off unpredictable threats.
Commenting ahead of London Tech Week, Poppy Gustafsson said:
“CISOs across the world are trying to understand how they should manage the risks and opportunities presented by publicly available AI tools in a world where public sentiment flits from euphoria to terror. Sentiment aside, the AI genie is not going back in the bottle and AI tools are rapidly becoming part of our day-to-day lives, much in the same way as the internet or social media. Each enterprise will determine their own appetite for the opportunities versus the risk. Darktrace is in the business of providing security personalised to an organisation, and it is no surprise we are already seeing the early signs of CISOs leveraging our technology to enforce their specific compliance policies.
“At Darktrace, we have long believed that AI is one of the most exciting technological opportunities of our time. With today’s announcement, we are providing our customers with the ability to quickly understand and control the use of these AI tools within their organisations. But it is not just the good guys watching these innovations with interest – AI is also a powerful tool to create even more nuanced and effective cyber-attacks. Society should be able to take advantage of these incredible new tools for good, but also be equipped to stay one step ahead of attackers in the emerging age of defensive AI tools versus offensive AI attacks.”
To complement its core Self-Learning AI for attack prevention, threat detection, autonomous response, and policy enforcement, the Darktrace Cyber AI Research Center continually develops new AI models, including its own proprietary large language models, to help customers prepare for and fight back against increasingly sophisticated threats. These models are used across the products in Darktrace’s Cyber AI Loop™.
“Recent advances in generative AI and LLMs are an important addition to the growing arsenal of AI techniques that will transform cyber security. But they are not one-size-fits-all and must be applied with guardrails to the right use cases and challenges,” said Jack Stockdale, Chief Technology Officer, Darktrace. “Over the last decade, the Darktrace Cyber AI Research Center has championed the responsible development and deployment of a variety of different AI techniques, including our unique Self-Learning AI and proprietary large language models. We’re excited to continue putting the latest innovations in the hands of our customers globally so that they can protect themselves against the cyber disruptions that continue to create chaos around the world.”
*Based on data obtained on June 2nd, 2023, from active customer deployments with Call Home enabled, where Darktrace detected generative AI activity at some point.