
Trustible’s Perspective: The AI Moratorium would have been bad for AI adoption
Jul 2

In the early hours of July 1, 2025, the Senate overwhelmingly voted to strip the proposed federal moratorium on state and local AI laws from the Republicans’ reconciliation bill. The moratorium went through several rewrites in an attempt to salvage it, though ultimately 99 Senators supported removing it from the final legislative package.
While the political pressure from state and local officials, industry, and trade groups won the day, it by no means guarantees that we will not see another attempt to freeze state and local AI regulations. The moratorium could be revived as an amendment to a must-pass bill (for example, a debt ceiling increase) or even tied into the annual National Defense Authorization Act. Regardless of whether another moratorium attempt is made, state and local lawmakers are now on notice about the potential consequences of being too aggressive with new AI laws.
Trustible thinks that states and localities should remain empowered to enact AI laws, especially in the absence of federal laws or regulatory clarity. It is our belief that imposing such a moratorium harms AI development and innovation by: (i) eroding trust in the technology; (ii) disproportionately shifting liability; (iii) creating more legal uncertainty; (iv) lacking necessity; and (v) reducing information sharing throughout the value chain. We explain each of these points below, along with perspectives on how federal lawmakers could address these concerns.
The AI Moratorium could have eroded trust in AI
One of the biggest challenges for AI adoption in the United States is the lack of trust in the underlying technology. A recent KPMG study showed that 59% of surveyed Americans do not trust AI systems. Similar studies have registered the same sentiment: a majority of Americans distrust AI systems, or at the very least remain skeptical. Imposing a moratorium would have exacerbated this distrust because there would be fewer barriers to prevent bad actors from entering the ecosystem. The threat of bad actors manipulating and harnessing AI with malicious intent can further degrade trust in AI systems, which can slow AI investment and innovation. And even if bad actors could be properly contained, the perception of AI being unregulated could erode trust, which may slow down AI adoption and the growth of AI companies.
Our Perspective: Federal policymakers should consider how best to mitigate the impacts of bad actors, reduce risk, and help instill greater trust within the AI ecosystem. Lawmakers could appropriate funds for AI literacy pilot programs, which could encourage partnerships between the federal government and local officials, as well as with industry and civil society. Congress could also set a federal standard for disclosing when people are interacting with AI systems, which could build on laws passed in Utah and Texas.
Leveling the playing field can help clarify liability
Clear and consistent rules help level the playing field for actors within the AI ecosystem. Imagine a scenario where an innovative healthcare startup is trying to sell its product to a hospital system. The startup has a capable product but limited resources, and may need to hold massive insurance policies to offset potential liability. The hospital system’s leaders face significant legal uncertainty on liability and risk issues and need to rely on their own processes for conducting in-depth due diligence on the product. In this scenario, both entities benefit from an ecosystem governed by clear, uniform, and consistent rules, especially when those rules can help accelerate the procurement process and allocate liability proportionally. Regulation can provide clear rules on how liability should be allocated, effectively shifting some of the risk away from the least capable actors (e.g., start-ups). Preventing rules that clarify how liability should be allocated may ultimately impose more liability on those least capable actors, which can hamstring investment and innovation.
Our Perspective: Federal policymakers should work to establish a pragmatic, common sense framework to allocate liability appropriately and proportionally through the AI value chain. A key component for such a framework is setting standards to streamline procurement processes while preserving the ability to demonstrate supplier robustness, as well as understand and mitigate risks throughout the AI supply chain.
Legal uncertainty would have caused AI adoption delays
The moratorium raised several legal questions, and subsequent efforts to prevent states or localities from regulating AI would almost certainly spur litigation, leaving even less clarity in the legal and regulatory landscape than there is today. The now-dead moratorium was opposed by a bipartisan group of State Attorneys General, as well as 17 Republican Governors. Presumably, subsequent efforts would face similar opposition, and the resulting litigation would lead to a period of further legal uncertainty that could take years to resolve. In the meantime, states with existing AI laws would be left in legal limbo. For example, 24 state insurance regulators have adopted the National Association of Insurance Commissioners (NAIC) AI Model Bulletin, and some states, like Colorado, have passed their own regulations. The extent to which these existing rules would have been affected by the moratorium (had it passed) was already ambiguous.
Our Perspective: If federal lawmakers are interested in reducing regulatory burdens on AI innovation, they should appropriately balance that interest with reasonable exemptions or waivers for existing state or local laws. Congress could take a more sectoral approach to AI oversight, setting a baseline standard that allows states to address additional issues unique to their residents (like the music industry in Tennessee). Legal uncertainty will be a turn-off for entrepreneurs exploring the space and for mature enterprises looking to mitigate risk.
The Moratorium was a solution in search of a problem
The arguments for the original moratorium were not necessarily supported by factual evidence. One particular claim asserted that the moratorium was necessary to stop the so-called “1,000 AI-related state bills” that would regulate AI. However, this claim was fairly disingenuous, as the criterion for an “AI-related state bill” was simply that it happens to mention the term “Artificial Intelligence.” A piece of legislation that merely mentions the term ‘AI’ is substantially different from one that imposes substantive duties or obligations. Trustible’s legislative tracking found a much smaller universe of high-impact bills advanced by state legislatures (e.g., New York’s RAISE Act), which hardly warrants such an extreme overcorrection by the federal government.
Our Perspective: Federal policymakers should objectively evaluate the need for such a ban before making another attempt to impose a moratorium. Congress should use the bipartisan reports generated by the House and Senate to identify the least burdensome, highest-impact AI issues and craft targeted, bipartisan legislation.
AI progress requires information sharing
The ambiguous legal issues surrounding AI, such as copyright concerns, allocating liability, and protecting data privacy, erect high barriers to basic information sharing that could otherwise accelerate innovation and growth. Information sharing in the proper context can be a net benefit for organizations, as we have seen in the cybersecurity space. Cyber vulnerabilities and exploits are identified, analyzed, and shared, which improves how organizations protect themselves against adversarial actors. This approach could be realized within the AI landscape too. But without certain assurances or legal protections, organizations are unlikely to voluntarily disclose information such as AI incidents. Unreported AI incidents can undermine trust in AI systems and deprive AI developers of essential feedback they need to improve their systems. Encouraging information sharing about AI systems would also have an outsized impact on AI innovation and development, particularly for startups that do not have dedicated research arms of their own. For instance, when model providers do not disclose their data collection practices or system limitations, downstream providers can end up building less trustworthy AI products or services.
Our Perspective: Federal policymakers should consider how to foster a collaborative environment where actors within the AI ecosystem feel empowered to share information, rather than shroud the landscape in further opacity. Federal agencies could build on the Cybersecurity and Infrastructure Security Agency’s voluntary cyber threat information sharing model to create confidential channels for model providers and AI system deployers to share sensitive information about malicious or unintended activity. Congress could also enact legislation similar to the Cyber Incident Reporting for Critical Infrastructure Act of 2022, compelling AI incident disclosures for certain critical infrastructure sectors.
We applaud the Senate’s near-unanimous decision to pull back from this immediate moratorium, but maintaining the status quo is not a long-term solution. Partnerships between federal, state, and local leaders, as well as industry, are necessary to arrive at common-sense regulatory frameworks that respect the unique nuances of each state’s perspective and specific industry risk profiles. We need this collaboration to build trust and drive faster societal adoption of AI, as well as to retain the U.S.’s global competitive advantage in AI innovation.
Trustible is here to help create the space for this type of dialogue, convening the right leaders and brightest minds across industry and government. We look forward to continuing to be a leading voice in this important dialogue regarding trustworthy & responsible AI.