Securing the AI Pipeline Using Cranium
Now that AI-based tools such as conversational chatbots (see ChatGPT) have generated so much attention, it's time to count the ways in which applying artificial intelligence to modern enterprise computing will demand new approaches to cybersecurity.
This past week, our TAG Cyber analyst team had the wonderful opportunity to spend time with the leadership team of Cranium, a new KPMG spin-off that is focused on delivering cybersecurity and trust to the AI pipeline.
Our tour guide was Cranium CEO Jonathan Dambrot, who designed KPMG's Third-Party Security Program. The discussion centered on two primary areas – namely, how AI creates the need for advanced cyber protections, and how Cranium seeks to fill this gap.
"We have learned, based on surveys, that roughly 90% of companies have no tools in place to secure their machine learning systems," Dambrot explained. "This is a concern because three quarters of companies will shift from AI pilot programs to fully operationalized AI by 2024."
The Cranium team shared a basic taxonomy of the most significant security issues for AI. The first category involves data poisoning and backdoor detection, where adversaries poison the training data, a topic often brought up in the context of ChatGPT usage.
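To make the first category concrete, here is a minimal sketch of label-flipping data poisoning, using toy data and a simple nearest-centroid classifier. This is an illustrative example only, not a description of Cranium's tooling or of any real training pipeline:

```python
# Illustrative label-flipping data poisoning: an adversary who controls part
# of the training set flips labels to corrupt what the model learns.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two well-separated 2-D Gaussian clusters.
X0 = rng.normal(loc=-2.0, size=(200, 2))
X1 = rng.normal(loc=+2.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

def train_centroid_classifier(X, y):
    # Nearest-centroid "model": the mean point of each class.
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def accuracy(c0, c1, X, y):
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    pred = (d1 < d0).astype(int)
    return (pred == y).mean()

# Model trained on clean labels.
c0, c1 = train_centroid_classifier(X, y)
clean_acc = accuracy(c0, c1, X, y)

# Adversary flips 40% of class-1 labels to class 0 before training.
y_poisoned = y.copy()
flipped = rng.choice(np.where(y == 1)[0], size=80, replace=False)
y_poisoned[flipped] = 0

# The poisoned class-0 centroid is dragged toward the class-1 cluster,
# shifting the decision boundary.
p0, p1 = train_centroid_classifier(X, y_poisoned)
poisoned_acc = accuracy(p0, p1, X, y)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```

The point of the sketch is that the attack requires no access to the model itself, only to the data it is trained on, which is why data provenance and hygiene controls sit so early in the pipeline.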
The second category involves the theft or ransom of AI models, which gives adversaries the opportunity to watch model behavior in a controlled environment, thus revealing how it works and how it might be targeted or duplicated.
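A related risk is model extraction, where an adversary with nothing but query access reconstructs a working copy of the model. The sketch below is a toy illustration (a secret linear rule stolen via random probes and a least-squares surrogate), not a real attack tool:

```python
# Illustrative model-extraction sketch: an adversary with only query access
# trains a surrogate that mimics the victim model's decisions.
import numpy as np

rng = np.random.default_rng(1)

# "Victim" model: a linear decision rule the adversary cannot inspect.
w_secret = np.array([1.5, -2.0])
def victim_predict(X):
    return (X @ w_secret > 0).astype(int)

# Adversary: query the victim on random probe inputs, keep the answers.
probes = rng.normal(size=(1000, 2))
stolen_labels = victim_predict(probes)

# Fit a least-squares surrogate on the stolen (+1/-1) labels.
w_surrogate, *_ = np.linalg.lstsq(
    probes, stolen_labels * 2.0 - 1.0, rcond=None
)

def surrogate_predict(X):
    return (X @ w_surrogate > 0).astype(int)

# How often the stolen copy agrees with the victim on fresh inputs.
test_inputs = rng.normal(size=(500, 2))
agreement = (victim_predict(test_inputs) ==
             surrogate_predict(test_inputs)).mean()
print(f"surrogate agreement: {agreement:.2f}")
```

Once the surrogate agrees closely with the victim, the adversary can probe it offline at leisure, which is why query monitoring and rate controls on model endpoints matter.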
The third category is focused on the evasion of an AI model, which involves an adversary finding a way to avoid detection by the data ingestion or classification stages of the AI. This is an especially relevant concern for security systems built on AI.
The final category involves an adversary extracting sensitive data from the AI pipeline, which could involve the AI training set, or any Big Data analytics being used in support of a machine learning program.
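One common form of this is membership inference: an overfit model behaves measurably differently on records it was trained on, leaking whether a given record was in the training set. A deliberately extreme toy sketch, using a memorizing nearest-neighbor "model" (purely illustrative, not any vendor's method):

```python
# Illustrative membership-inference sketch: a model that memorizes its
# training data leaks membership through suspiciously perfect scores.
import numpy as np

rng = np.random.default_rng(2)
train_set = rng.normal(size=(100, 4))    # records the model trained on
outside_set = rng.normal(size=(100, 4))  # records it never saw

# A 1-nearest-neighbor "model" memorizes its training data exactly;
# distance to the nearest training point acts as a loss proxy.
def min_distance(x, data):
    return np.linalg.norm(data - x, axis=1).min()

member_scores = np.array([min_distance(x, train_set) for x in train_set])
outsider_scores = np.array([min_distance(x, train_set) for x in outside_set])

# Members sit at distance zero; a simple threshold recovers membership.
threshold = 1e-9
member_hit_rate = (member_scores <= threshold).mean()
outsider_hit_rate = (outsider_scores <= threshold).mean()

print(f"flagged as member (true members): {member_hit_rate:.2f}")
print(f"flagged as member (non-members):  {outsider_hit_rate:.2f}")
```

Real models leak less dramatically than this memorizing toy, but the same principle applies: confidence or loss gaps between seen and unseen data can expose the training set.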
"Our observation is that no security team today is properly set up to deal with these threats," Dambrot said, "and this is what our purpose and mission is at Cranium. Our goal is to equip customers with a means for defending against exploitable weaknesses in AI pipelines."
The approach being taken at Cranium follows a familiar cadence in cybersecurity – namely, to implement three functional capabilities: (1) mapping of the AI pipeline, (2) validation of the AI pipeline security controls, and (3) monitoring of AI threat exploitation in production systems.
The Cranium platform includes controls focused on the AI training environment used during development. Security support is included for automated feature and data hygiene, as well as inclusion of vulnerability scans and user behavior analytics (UBA) controls.
The Cranium platform also supports evaluation of the AI testing environment, with emphasis on standardized execution and creation of unified repositories of security alerts and events generated from the AI pipeline.
Finally, the Cranium platform includes security and trust controls for the AI production environment, where the focus is on adversarial activity monitoring and mapping of anomalous events to frameworks such as MITRE ATLAS.
From a TAG Cyber analyst perspective, Cranium’s focus on security in the AI pipeline is a welcome step in the AI progression from a research activity to inclusion in production support systems. It stands to reason that threats will emerge in this context.
One big challenge for Cranium is that most companies have not standardized their AI pipelines. In its first few years of operation, the company will therefore need to deal with shifting models, implementations, pilot programs, and third-party usage across the various AI projects it supports.
Nevertheless, the AI pipeline will eventually converge on a well-defined series of steps – and it is our expectation that Dambrot and his Cranium team will be well-positioned to help support security and trust in the AI pipeline.
Let us know your thoughts and comments.