An Applied Ethical AI Framework

In our new white paper, we discuss how AI governance professionals and their organizations can apply an actionable, flexible framework for evaluating ethical decisions about AI systems.

Many organizations talk about Ethical AI, but struggle to define it clearly. They often settle on a set of high-level principles or values, and then struggle to operationalize those philosophies. This gap between high-level principles such as ‘safety’, ‘transparency’, and ‘fairness’ and day-to-day practice makes it hard for teams to answer the question: ‘Is it ethical to create or deploy a specific AI product or service?’

At Trustible, we work with the teams charged with operationalizing ethics into AI governance for their organizations. One of the most common challenges we hear is, “How should we come up with an enforceable framework for ‘ethical’ AI use cases?” This is an increasingly pressing concern because upcoming regulations like the EU AI Act, as well as standards like ISO 42001, require algorithmic impact assessments that examine the ethical dimensions of AI use.

We have created a framework for thinking about applied ethical AI use to support our customers on this journey and help them begin to address this operationalization challenge. This framework aims to be a useful tool to help organizations translate their principles into actionable rules, identify the impacted stakeholder groups, and frame key tradeoff decisions about benefits versus harms. We call it an ‘applied ethics’ framework because ML teams face ethical challenges on a daily basis and need a clear set of principles to ensure they are maximizing benefits and minimizing harms.

Our framework is customizable based on an organization’s risk tolerance, industry, and organization type. For example, what is ‘ethical’ for a private company to do with AI may not be ethical for a government agency, and vice-versa. Similarly, cultural traditions or domain-specific standards can vary widely. Our framework provides flexibility in how stakeholders are identified, how specific kinds of benefits and harms are weighed, and how the benefits are compared to the harms. It can also be adapted and applied at multiple points in the AI lifecycle, each time taking into account additional information about the AI system, the likelihood of benefits or harms being realized, and its actual impact(s).

Our framework is structured into three primary parts: (i) identifying the relevant stakeholders; (ii) outlining potential benefits and harms; and (iii) establishing guidelines for balancing the benefits against the harms. We cover each below, as well as some of the clear limitations of our approach.
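
To make the three parts a little more concrete, here is a minimal, purely illustrative sketch in Python. The class names, scoring rule, and `harm_weight` knob are placeholders we invented for illustration; they are not a prescribed implementation of the framework, only one way an organization might encode stakeholders, weighted benefits and harms, and a balancing rule tuned to its own risk tolerance.

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Stakeholder:
    """Part (i): a group affected by the AI system."""
    name: str

@dataclass
class Impact:
    """Part (ii): a potential benefit or harm to a stakeholder."""
    stakeholder: Stakeholder
    description: str
    severity: Severity
    likelihood: float  # estimated probability the impact is realized, 0.0-1.0

@dataclass
class EthicalAssessment:
    """Part (iii): balance benefits against harms under org-specific weights."""
    benefits: list[Impact] = field(default_factory=list)
    harms: list[Impact] = field(default_factory=list)
    # Higher values make the organization more harm-averse (a risk-tolerance knob).
    harm_weight: float = 1.5

    def score(self, impacts: list[Impact]) -> float:
        # Simple expected-impact score: severity weighted by likelihood.
        return sum(i.severity.value * i.likelihood for i in impacts)

    def recommendation(self) -> str:
        benefit_score = self.score(self.benefits)
        harm_score = self.score(self.harms) * self.harm_weight
        return "proceed" if benefit_score > harm_score else "escalate for review"

# Example: assessing a hypothetical customer-support chatbot use case.
customers = Stakeholder("Customers")
agents = Stakeholder("Support agents")
assessment = EthicalAssessment(
    benefits=[Impact(customers, "Faster responses", Severity.MEDIUM, 0.9)],
    harms=[Impact(customers, "Incorrect answers", Severity.HIGH, 0.2),
           Impact(agents, "Reduced staffing need", Severity.HIGH, 0.3)],
)
print(assessment.recommendation())  # -> "escalate for review"
```

In practice, the weighting and comparison rules would be set per organization, and the same assessment would be revisited at multiple points in the AI lifecycle as better information about likelihood and impact becomes available.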
