Trustible Announces Participation in Department of Commerce Consortium Dedicated to AI Safety

Trustible joins some of the nation’s leading AI stakeholders to help advance the development and deployment of safe, trustworthy AI under a new U.S. Government safety institute

(Washington, DC) – On February 8, 2024, Trustible announced that it joined some of the nation’s leading artificial intelligence (AI) stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST), the U.S. AI Safety Institute Consortium (AISIC) will bring together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission.

Through its collaboration with the AISIC, Trustible will use its expertise in responsible AI to help establish robust safety standards within the consortium. The company will participate in research, policy discussions, and the creation of best practices, reinforcing its role as a thought leader committed to ensuring the technology benefits society. Trustible’s dedication to responsible AI advancement aligns with the consortium’s goal of creating a safe, transparent, and accountable global AI ecosystem.

“We are thrilled to be part of this crucial initiative,” said Andrew Gamino-Cheong, CTO of Trustible. “Collaborating with NIST and other AISIC members aligns with our mission to enable trustworthy and responsible AI. This opportunity not only amplifies our commitment to AI safety but also allows us to contribute significantly to shaping the future of AI policy in a way that benefits society at large.”

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said Secretary of Commerce Gina Raimondo. “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

The consortium includes more than 200 member companies and organizations that are on the frontlines of developing and using AI systems, as well as the civil society and academic teams that are building the foundational understanding of how AI can and will transform our society. These entities represent the nation’s largest companies; innovative startups; creators of the world’s most advanced AI systems and hardware; key members of civil society and the academic community; and representatives of professions with deep engagement in AI’s use today. The consortium also includes state and local governments, as well as non-profits. The consortium will also work with organizations from like-minded nations that have a key role to play in setting interoperable and effective safety standards around the world.

About Trustible

Trustible is a leading technology provider of responsible AI governance. Its software platform enables AI and compliance teams to scale their AI Governance programs to help build trust, manage risk, and comply with AI regulations.

Media Contact

Gerald Kierce

[email protected]
