Cranium will be one of more than 200 leading AI stakeholders to help advance the development and deployment of safe, trustworthy AI under new U.S. Government safety institute

(Short Hills, NJ) February 8, 2024 – Today, Cranium announced that it joined more than 200 of the nation’s leading artificial intelligence (AI) stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST), the U.S. AI Safety Institute Consortium (AISIC) will bring together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission.

“Cranium is thrilled to participate in the AISIC to aid in guiding the group’s work around AI security and creating more visibility into AI and how threat actors may be targeting AI,” said Jonathan Dambrot, Founder & CEO of Cranium. “With a distinct focus on securing AI, this consortium brings together the best minds in the industry to guide on a framework to protect both private and public entities.”

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said Secretary Raimondo. “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

The consortium includes more than 200 member companies and organizations that are on the frontlines of developing and using AI systems, as well as the civil society and academic teams that are building the foundational understanding of how AI can and will transform our society. These entities represent the nation’s largest companies and its innovative startups; creators of the world’s most advanced AI systems and hardware; key members of civil society and the academic community; and representatives of professions with deep engagement in AI’s use today. The consortium also includes state and local governments and nonprofits, and it will work with organizations from like-minded nations that have a key role to play in setting interoperable and effective safety standards around the world.

The full list of consortium participants is available at:

https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute/aisic-members

About Cranium

Cranium is the leading enterprise AI security and trust software firm, enabling organizations to gain visibility, security, and compliance across their AI and GenAI systems. Through its Cranium Enterprise software platform, organizations can map, monitor, and manage their AI/ML environments against adversarial threats without interrupting how teams train, test, and deploy their AI models. The platform also allows organizations to quickly gather and share information about the trustworthiness and compliance of their AI models with third parties, clients, and regulators. Originally incubated and funded in stealth inside KPMG Studio, Cranium helps cybersecurity and data science teams understand everywhere that AI impacts their systems, data, and services. Secure your AI at Cranium.AI.