United States, European Union, and China: A Comparative Look at AI Regulatory Approaches


As artificial intelligence (AI) continues to revolutionize every aspect of our lives, countries around the world are grappling with the question of how to regulate this powerful technology. In this post, we will take a comparative look at how the world’s three major economies – the United States, the European Union, and China – are approaching AI regulations.


United States 🇺🇸

Key Theme: non-interventionist, pro-innovation


Federal Approach is TBD

The United States Congress has taken a relatively hands-off approach to regulating AI thus far, though the Democratic Party’s leadership has expressed its intent to introduce a federal law regulating AI, and Republicans will likely present their own version as well. We consider the likelihood of such a law passing Congress to be low. The country's regulatory framework is largely based on voluntary guidelines, such as the NIST AI Risk Management Framework, and on industry self-regulation.


However, US federal agencies are likely to step in and regulate within their jurisdictional authority. For example, the Federal Trade Commission (FTC) has been active in policing deceptive and unfair practices related to AI, particularly by enforcing statutes such as the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and the FTC Act. The agency has released publications outlining rules for AI development and use, including training AI with representative data sets, testing AI before and after deployment to avoid bias, ensuring explainable AI outcomes, and establishing accountability and governance mechanisms for fair and responsible AI use. In addition, certain sectors, such as healthcare and financial services, are subject to specific regulations related to AI.


While the US generally favors a "light touch" approach to regulation in order to foster innovation and growth in the AI industry, the country is starting to align with the EU on international cooperation around AI, though the specifics remain unclear. Most of these initiatives revolve around trade, national security, and privacy.


State & Local Take the Lead

In 2018, California adopted the California Consumer Privacy Act (CCPA) in response to the European Union’s General Data Protection Regulation (GDPR). In the absence of federal action, we expect US states to enact their own AI legislation, creating a patchwork of state-level regulations for companies to comply with.


In New York City, Local Law 144 requires employers and employment agencies to obtain a bias audit of automated employment decision tools (a simplified sketch of the audit math follows below). In California, AB 331 requires impact assessments from developers and deployers of automated decision tools. Moreover, state legislatures in Texas, Vermont, and Washington are introducing legislation that requires state agencies to conduct an inventory of all AI systems being developed, used, or procured, which would likely push government contractors to disclose more clearly where AI is used in their public-sector contracts.
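

For illustration, a Local Law 144 bias audit centers on impact ratios: the selection rate for each demographic category divided by the rate of the most-selected category. The sketch below shows only the core calculation; the function names and sample data are our own, not taken from the law or its rules.

```python
from collections import Counter

def selection_rates(decisions):
    """Fraction of candidates selected within each demographic category.

    `decisions` is a list of (category, selected) pairs, where `selected`
    is True if the automated tool advanced the candidate.
    """
    totals, advanced = Counter(), Counter()
    for category, selected in decisions:
        totals[category] += 1
        if selected:
            advanced[category] += 1
    return {c: advanced[c] / totals[c] for c in totals}

def impact_ratios(decisions):
    """Each category's selection rate divided by the highest rate observed."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {c: rate / best for c, rate in rates.items()}

# Hypothetical audit data: (category, advanced by the tool?)
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(impact_ratios(audit))  # {'A': 1.0, 'B': 0.5}
```

A ratio well below 1.0 for any category signals the kind of disparity such an audit is meant to surface.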


We expect US states and localities to continue introducing legislation to regulate AI in specific use cases.


European Union 🇪🇺

Key Theme: consumer protection; fairness & safety


Global standard for AI regulation

Much as with GDPR, the EU’s AI Act is likely to become the global standard for AI regulation. The proposal includes a ban on certain uses of AI, such as facial recognition in public spaces, as well as requirements for transparency and accountability in the use of AI. Most importantly, organizations must assign a risk category to each AI use case and conduct a risk assessment & cost-benefit analysis before implementing a new AI system, especially if it poses a "heightened risk" to consumers. Controls to mitigate those risks should then be integrated into the business units where the risk can arise. From an enforcement standpoint, Europe has learned lessons from GDPR that it will likely apply to AI, such as member-state enforcement agencies and better incident response.
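

As a rough sketch of what such a use-case inventory and pre-deployment gate might look like in practice (the tier names follow the Act's four proposed risk levels; the class and function names are our own illustration, not from the regulation):

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """Risk categories proposed in the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned uses, e.g. facial recognition in public spaces
    HIGH = "high"                  # "heightened risk" uses requiring assessment
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no additional obligations

@dataclass
class AIUseCase:
    """One entry in an organization's AI use-case inventory (illustrative)."""
    name: str
    business_unit: str
    tier: RiskTier
    risk_assessment_done: bool = False
    mitigations: list[str] = field(default_factory=list)

def deployment_gate(use_case: AIUseCase) -> str:
    """Illustrative gate: banned tiers are blocked; high-risk needs assessment first."""
    if use_case.tier is RiskTier.UNACCEPTABLE:
        return "prohibited"
    if use_case.tier is RiskTier.HIGH and not use_case.risk_assessment_done:
        return "blocked pending risk assessment"
    return "permitted"

screening = AIUseCase("resume screening", "HR", RiskTier.HIGH)
print(deployment_gate(screening))  # blocked pending risk assessment
```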


Risk assessments will likely become standard practice for AI implementation, helping organizations understand the cost-benefit trade-offs of an AI system and enabling them to provide transparency & explainability to impacted stakeholders. Our partners at the Responsible AI Institute are one of the leading institutions helping organizations conduct risk assessments.


Conflicting Perspectives on AI innovation

The proposed regulation has been criticized by some as overly burdensome, creating additional costs and administrative responsibilities for organizations already overwhelmed by regulatory complexity. The EU argues that the regulation is necessary to protect individuals from the potential harms of AI.


Interestingly, according to a recent Accenture report, many organizations see regulatory compliance as an unexpected source of competitive advantage: 43% of respondents think it will improve their ability to industrialize and scale AI, and 36% believe it will create opportunities for competitive advantage and differentiation. Organizations in regulated sectors like healthcare and finance are wary of developing and deploying AI with few guardrails. Coherent AI regulations that clarify responsibilities and liabilities would allow organizations to adopt AI with confidence. The EU is betting on this.


China 🇨🇳

Key Theme: state control; economic dynamism


The Great (AI) Firewall

The Chinese government sees AI as a strategic technology that can help it achieve its economic and geopolitical goals, and as such it has been actively promoting the development and adoption of AI. However, China's approach to AI also raises concerns about privacy and civil liberties, as the government has been known to use AI for surveillance, censorship, and social control. Generative AI presents risks to state control beyond those posed by the internet.


According to The Economist, “rules proposed by China’s internet regulator on April 11th make clear the government’s concerns. According to the Cyberspace Administration of China (CAC), firms should submit a security assessment to the state before using generative AI products to provide services to the public. Companies would be responsible for the content such tools generate. That content, according to the rules, must not subvert state power, incite secession, harm national unity or disturb the economic or social order. And it must be in line with the country’s socialist values.”


China has also been building its bureaucratic toolkit to propose new AI governance rules quickly and iteratively, allowing it to adjust regulatory guidance as new use cases of the technology are adopted.


AI as an Economic Tool

Despite the Chinese government’s concerns about Generative AI applications, the country is deeply committed to investing in AI across sectors. China accounted for nearly one-fifth of global private AI investment in 2021, attracting $17 billion for AI start-ups. On the research front, China produced about one-third of both AI journal papers and AI citations worldwide in 2021. McKinsey estimates that AI can create upwards of $600 billion in economic value annually for the country. Expect China to continue investing in AI to support its transportation, manufacturing, and defense sectors. The manufacturing and distribution of semiconductors will also play a critical role in AI development.


China will ensure that information generated by AI aligns with the interests of the Chinese Communist Party (CCP), but as an economic tool, it will use AI in every way possible to advance its global commercial and technological priorities.


Parting Thoughts

One commonality across all models: disclosure

It is clear that each model (US, EU, and China) reflects its region’s societal values and national priorities. Over the coming years, governments, businesses, and citizens will ask themselves fundamental questions about the definitions of fairness, human values, and the economic trade-offs of AI. And while regulatory models and requirements will differ, all of them will require disclosure.


While each regulatory framework may be perceived as more or less innovative, fair, or safe, all models will require organizations leveraging AI to document certain information about the system. Transparency and explainability (at the organizational, use case, and model/data level) are key to complying with emerging regulations and to fostering trust in the technology.
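

To make that concrete, a disclosure record might capture all three levels in one structure. A minimal sketch, with field names of our own invention rather than drawn from any statute:

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    """Illustrative disclosure record spanning the three levels above."""
    # Organizational level
    owner: str                  # accountable team or officer
    governance_policy: str      # reference to the governing internal policy
    # Use-case level
    purpose: str                # what decision the system informs
    affected_parties: str       # who is impacted by its outputs
    # Model/data level
    training_data_summary: str  # provenance and representativeness of the data
    known_limitations: str      # documented failure modes and bias findings

record = AIDisclosure(
    owner="Credit Risk Analytics",
    governance_policy="internal AI policy v2",
    purpose="pre-screen loan applications",
    affected_parties="retail loan applicants",
    training_data_summary="five years of anonymized application data",
    known_limitations="lower precision for thin-file applicants",
)
```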


To learn more about what your organization can do to prepare for upcoming AI regulations, read our recent blog post on the subject.
