AI Governance at Scale: Trustible Becomes Official Databricks Technology Partner

At Trustible, we empower organizations to responsibly build, deploy, and monitor AI systems at scale. Today, we are excited to announce our partnership with Databricks to bring together our leading AI governance platform with their trusted data and AI lakehouse, enabling joint customers to rapidly implement responsible, compliant, and accountable AI.

We believe AI practitioners and innovators should be dually focused on maximizing the benefits of AI and minimizing its risks. Their expertise and knowledge of these systems will be essential for complying with emerging regulations, which require collecting and maintaining a corpus of documentary evidence. Our AI governance platform translates complex legal requirements and responsible AI frameworks into actionable steps, enabling collaboration between AI and legal/compliance teams.

Trustible’s integration with Databricks emerged from a pain point that our team experienced while previously leading AI/ML teams: how can we leverage information already stored in the Databricks Lakehouse to accelerate compliance with emerging regulations like the European Union’s AI Act? Moreover, how do we set up policies and processes that are prepared for future requirements such as external audits, post-market monitoring, and public disclosure requirements?

Emerging AI regulations like the EU AI Act will require extensive documentation and disclosure about underlying models. Key model attributes such as training objectives, accuracy metrics, and bias/fairness statistics must be provided to users and regulators to properly convey key risks, limitations, and mitigation steps. It is best practice, and will soon be a regulatory requirement, to store these kinds of metrics and metadata in a model registry such as MLflow. In practice, organizations rarely have just one model; they have a whole set of model variants and experiments with different hyperparameters. Ensuring that the models that reach production carry the required documentation spares teams the costly task of retrofitting compliance post-deployment.
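As a hypothetical illustration of the kind of pre-promotion check this implies (the field names below are illustrative, not an actual Trustible or MLflow schema), one might verify that a run's logged metadata covers the documentation attributes regulators would expect before a model variant is allowed into production:

```python
# Hypothetical pre-promotion check: verify that a model run's logged
# metadata (as it might be pulled from a registry such as MLflow)
# covers the documentation attributes emerging regulations call for.
# Field names are illustrative only.
REQUIRED_FIELDS = {
    "training_objective",   # what the model was optimized for
    "accuracy",             # headline performance metric
    "bias_fairness_stats",  # e.g. demographic parity gap
}

def missing_documentation(run_metadata: dict) -> set:
    """Return the required documentation fields absent from a run's metadata."""
    return REQUIRED_FIELDS - set(run_metadata)

# A run missing its fairness statistics would be flagged before deployment:
run = {"training_objective": "fraud detection", "accuracy": 0.94}
gaps = missing_documentation(run)
```

Running a check like this across every candidate run makes it cheap to catch documentation gaps while the experiment context is still fresh, rather than reconstructing it after deployment.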

That’s where Trustible’s integration with MLflow on Databricks comes in. Our platform seamlessly generates regulatory model documentation by automatically mapping MLflow metrics and metadata to required fields in Model Cards, tailoring reporting to legal and governance needs. This is just the beginning, though. Going forward, we will extend integrations across the full machine learning lifecycle, empowering continuous monitoring, auditing, and transparency as regulations and customer needs expand.
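Trustible's actual mapping logic is part of its platform and not public; as a rough sketch of the idea, translating the metrics, params, and tags that MLflow records for a run into model-card sections might look like the following (all field and tag names here are assumptions for illustration):

```python
# Rough sketch (not Trustible's implementation) of mapping run data, of the
# kind MLflow exposes as a run's metrics, params, and tags, onto model-card
# sections. Field and tag names are illustrative assumptions.
def build_model_card(metrics: dict, params: dict, tags: dict) -> dict:
    return {
        "intended_use": tags.get("use_case", "unspecified"),
        "training_details": {
            "objective": params.get("objective"),
            "hyperparameters": {k: v for k, v in params.items() if k != "objective"},
        },
        # Split logged metrics into performance vs. fairness reporting sections.
        "performance": {k: v for k, v in metrics.items() if not k.startswith("fairness_")},
        "fairness": {k: v for k, v in metrics.items() if k.startswith("fairness_")},
    }

card = build_model_card(
    metrics={"accuracy": 0.94, "fairness_demographic_parity": 0.03},
    params={"objective": "binary_crossentropy", "max_depth": "6"},
    tags={"use_case": "credit risk scoring"},
)
```

The appeal of this pattern is that the documentation is generated from the same registry record the ML team already maintains, so the compliance artifact cannot silently drift from the deployed model.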

Many proposed regulations have specific requirements for testing, internal or external audits, post-market monitoring processes, and enforced internal AI governance policies. Trustible can help Databricks customers say it, do it, and prove it. For example, Trustible will help organizations say what risks are associated with a particular use case of AI, use integrations with Databricks to do the technical risk mitigations, and then export an analysis notebook as a compliance artifact to prove that the evaluation is in place. As auditing, record-keeping, and post-market monitoring requirements become clearer, Trustible can help Databricks customers identify what policies they need on their lakehouse, generate proof of enforcement, and connect auditors directly and securely through Delta Sharing.

The future of AI development will require visibility and collaboration between a broader set of stakeholders such as compliance teams, senior management, regulators, and the broader public. Trustible enables organizations to build trusted and accountable AI systems by connecting the needs and requirements of these various stakeholder groups. We’re excited to be working with Databricks as a Technology Partner and are looking forward to helping organizations navigate the regulated future of AI.
