Insights and recommendations for security teams faced with mounting regulatory pressure

In the constantly evolving landscape of artificial intelligence (AI), policymakers are scrambling to catch up. The recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI brings a much-needed framework into play. With far-reaching implications for the industry, it’s imperative for C-suite executives, especially CISOs, to fully understand what this order means for their AI and cybersecurity practices.

What are some key takeaways?

In this overview, we’ve narrowed our focus to the sections of the executive order most pertinent to security practitioners in the private sector, found primarily in Section 4 of the order, ‘Ensuring the Safety and Security of AI Technology’. For the full breadth of the executive order beyond security considerations, check out this summary article from KPMG.

Developing Guidelines and Best Practices

The Department of Commerce, through NIST and in coordination with the Secretaries of Energy and Homeland Security, is spearheading the initiative to develop guidelines for safe, secure, and trustworthy AI systems.

This includes companion resources to the NIST AI Risk Management Framework and the Secure Software Development Framework, specifically targeted at generative AI (“AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content”) and dual-use foundation models (“AI models that are trained on broad data; generally use self-supervision; contain at least tens of billions of parameters; and are applicable across a wide range of contexts”).

Red Teaming

Companies engaged in AI development, particularly of dual-use foundation models, will need to conduct AI red-teaming exercises to gauge the security, safety, and trustworthiness of their AI systems. The Order defines AI red teaming as “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI”.

Reporting Requirements

Companies developing, or even intending to develop, dual-use foundation models must report to the Federal Government on an ongoing basis, covering their development activities, the measures taken to secure model weights, and the results of red-team safety tests. Companies that acquire, develop, or possess large-scale computing clusters also fall under these reporting requirements.

Risk Management

Agencies with oversight over critical infrastructure are required to assess the risks related to the deployment of AI in their sectors. The Secretary of the Treasury will issue a public report detailing best practices for financial institutions to manage AI-specific cybersecurity risks.

What does this mean for security teams?

For security teams, this is the moment to engage with the stakeholders involved in developing and delivering AI systems, such as data science teams, and begin working towards a secure and compliant AI lifecycle.

  • Asset Inventory and Management: If you have not already taken stock of all your AI assets, including datasets, models, and experiments, now is the time. A current inventory aids in compliance reporting as well as risk assessments. Begin by conducting discovery workshops with your data science teams to better understand the development environments, and consider using tools like Cranium to support automated AI asset discovery, security monitoring, and compliance reporting. A minimal inventory sketch follows this list.
  • Compliance Overdrive: The arrival of the Executive Order means ramping up the pace at which compliance measures are implemented. Organizations will need to adhere not just to industry standards but potentially also to government-defined ones. Start by aligning your AI systems and processes with the NIST AI Risk Management Framework.
  • Transparency is Paramount: With continuous reporting a cornerstone of the Executive Order, transparency in AI development processes and their related security measures will be non-negotiable. Look to create an AI Bill of Materials for your critical AI systems to stay ahead of transparency requests from regulators and clients alike; a sketch of what one could contain follows this list.
  • Red-Teaming as a Necessity: Previously considered an optional but recommended exercise, red-team testing of AI security will become the norm, requiring investment in internal capabilities or third-party services. Leverage the MITRE ATLAS knowledge base to better understand the current threat landscape and the adversarial techniques used to attack AI systems; a toy example of one such technique appears after this list.
  • Security Culture: While most organizations have accepted the inherent risk of deploying AI systems across the enterprise, these guidelines and policies could be a catalyst for a more cohesive, company-wide understanding of and strategy for AI security. Engage with risk owners across the organization to embed AI security into the organizational culture.
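
On the asset-inventory point, a machine-readable register is a good first deliverable. The sketch below is a minimal illustration in Python; the record fields, the example entries, and the ai_inventory.csv output file are all assumptions for illustration, not a prescribed schema.

```python
# Minimal AI asset register sketch; fields are illustrative, not a prescribed schema.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AIAsset:
    name: str           # model, dataset, or experiment identifier
    asset_type: str     # "model" | "dataset" | "experiment"
    owner: str          # accountable team or individual
    location: str       # registry path, bucket, or repo URL
    risk_tier: str      # internal risk classification
    last_reviewed: str  # ISO date of the last security review

# Example entries of the kind surfaced by discovery workshops with data science teams.
inventory = [
    AIAsset("fraud-scoring-v3", "model", "risk-ml-team",
            "s3://models/fraud/v3", "high", "2023-11-01"),
    AIAsset("transactions-2023q3", "dataset", "data-eng",
            "s3://data/txn/2023q3", "high", "2023-10-15"),
]

# Export to CSV so the same register can feed compliance reporting and risk assessments.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIAsset)])
    writer.writeheader()
    writer.writerows(asdict(asset) for asset in inventory)
```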
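
For the AI Bill of Materials, no format has been mandated yet. Under that assumption, the following sketch simply shows the kind of information an AI-BOM could capture for a single system; every field name and value here is illustrative.

```python
# Illustrative AI Bill of Materials for one system; field names are assumptions,
# since no AI-BOM format has been mandated.
import json

ai_bom = {
    "system": "fraud-scoring-v3",
    "version": "3.2.0",
    "base_model": {"name": "gradient-boosted-trees", "source": "in-house"},
    "training_data": [
        {"name": "transactions-2023q3", "license": "internal", "contains_pii": True},
    ],
    "dependencies": ["scikit-learn==1.3.2", "numpy==1.26.0"],
    "evaluations": {"red_team": "2023-11-01", "bias_audit": "2023-10-20"},
    "contacts": {"owner": "risk-ml-team", "security": "appsec@example.com"},
}

# Serialize for sharing with regulators or clients on request.
print(json.dumps(ai_bom, indent=2))
```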
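
To make the red-teaming bullet concrete, here is a self-contained toy example of one classic adversarial technique: a fast gradient sign method (FGSM) evasion attack against a contrived logistic-regression “model”. It is a minimal sketch of the evasion class of techniques catalogued in MITRE ATLAS, not a production red-team harness; the weights, input, and epsilon are all invented for the demonstration.

```python
# Toy FGSM evasion attack: a minimal illustration of one adversarial technique.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "model": logistic regression with fixed random weights.
w = rng.normal(size=20)
b = 0.1

def predict(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input the model scores confidently as class 1.
x = w / np.linalg.norm(w)
y = 1.0

# FGSM: step the input in the sign of the loss gradient.
# For binary cross-entropy with a logistic model, d(loss)/dx = (p - y) * w.
eps = 0.3
grad = (predict(x) - y) * w
x_adv = x + eps * np.sign(grad)

print(f"clean score: {predict(x):.3f}")            # ~0.99: confident class 1
print(f"adversarial score: {predict(x_adv):.3f}")  # drops sharply: evasion succeeds
```

Real AI red-teaming goes far beyond this, covering prompt injection, data poisoning, and model extraction among other ATLAS-documented techniques, but even a toy like this helps teams internalize how small, targeted perturbations can flip a model’s decision.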

The Executive Order makes it clear: the era of laissez-faire AI development and deployment is rapidly coming to an end. The AI landscape is shifting towards a more regulated environment, and enterprises need to adapt swiftly. Proactive engagement with these upcoming regulations and requirements will not only be a legal necessity but could serve as a competitive advantage in a market that values safe and ethical AI.

Prepare now, adapt swiftly, and stay ahead of the curve. The AI revolution is accelerating, and governance cannot be an afterthought. For more insight, consider leveraging the outputs of our collaboration with the Global Resilience Federation and KPMG: the Practitioners’ Guide for AI Security, as well as the Leadership Guide to Securing AI.
