In the background of debates over the pending One Big Beautiful Bill is a provision that would ban state and local regulation of AI for a decade. I wrote on this earlier this week for National Review:
Defenders of shutting down state and local regulation say that there needs to be a nationally uniform AI policy for the U.S. to stay at the forefront of that technology. And they’re right that it is a national security imperative for the United States to remain a leader in that field. AI has tremendous implications for the military conflicts, manufacturing, and data processing of the future.
However, at its deepest level, tech policy has to attend to more than maximal technological power; it also needs to ensure that this technological might can be disciplined for the sake of human flourishing. The very power of artificial intelligence means that its risks are not insignificant, and one of the essential insights of the conservative tradition is the need to temper disruption and preserve essential elements of the social compact.
You can read the rest here.
This provision highlights tensions between populists and libertarian-leaning folks on the right, and it also shows how the imperatives of certain tech accelerationists might be at odds with the aims of pro-family policy. After all, there are many reasons to believe that it might be a good idea to regulate some parts of AI (for instance, should medical providers have to disclose whether a chatbot or a real person is dispensing therapy online?).
I’ll include some takes for and against this moratorium on local AI regulation below, but first I’ll note one puzzling thing. As I mentioned above, supporters of the moratorium claim that there should be a single, national standard for AI. They say that a patchwork of different state policies on AI would hamper U.S. innovation in that sector.
That’s a claim not without merit. However, it’s not at all clear that the current version of the moratorium actually would secure uniform national standards. In part to get it through the reconciliation process, the moratorium is tied to a voluntary $500 million pot of federal AI funding: by taking that money, a state pledges not to enforce any regulations on AI for a decade after the law’s passage. (Or at least that’s the version that has just now been approved by the Senate parliamentarian.)
However, that policy mechanism completely undercuts the argument that this moratorium would promote uniform regulations. After all, a state or municipality could simply decline to accept those funds. Thus, some states would still be free to devise their own AI policy, while others would be handcuffed by the federal government.
The upshot of all that would be that some states (likely wealthier ones) would be in effect able to buy their own ability to determine AI policy by refusing to take that federal money, while those states (likely poorer and more rural ones) that could not afford to say no to federal dollars would be blocked from that self-regulation. If the current version of the moratorium were to pass, it would not provide a single national standard for AI. Instead, it would give tech companies certain economically disadvantaged pockets where they wouldn’t have to worry about local regulation, while allowing wealthy states to pass their own policies to help cope with the negative externalities of AI. That hardly seems like an ideal result for a party that claims to represent the working class.
It’s understandable why proponents of this moratorium might have settled on reconciliation as a legislative vehicle for it. As with many other things, Congress is divided on AI policy, and there does not seem to be anywhere close to 60 votes in the Senate for a moratorium on local regulation of artificial intelligence. Bundling this into a mammoth reconciliation bill could, its allies think, help get the moratorium to the president’s desk. However, the structure of this current proposal vitiates one of the major arguments on behalf of the moratorium.
Reconciliation might give a moratorium an easier legislative path, but it also takes away one of the principal rationales for a federal moratorium on local AI regulation in the first place.
Other takes on this proposed moratorium:
Owen Yingling: “But slowing the reckless march toward the digital eschaton also requires that traditional conservatives reclaim their own heritage of endorsing measured and incremental progress guided by tradition.”
Will Rinehart musters two cheers for this moratorium on local regulation of AI, though he thinks the period should be reduced from ten years to five: “Five years might be the sweet spot. It would give Congress enough time to study the technology, understand its implications, and craft thoughtful federal legislation without the pressure of competing state laws proliferating in the background.”
Michael Toscano and Grant Bailey share some polling on the moratorium. The youngs aren’t exactly fans of it:
Investing legend Paul Tudor Jones says that Washington is missing the warning signs on AI: “The warning is playing out in real-time, right before our eyes. As someone who has spent nearly half a century as a professional risk manager, every alarm bell in my being is ringing, and they should be in yours, too.”
Danial Cochrane and Jack Fitzhenry make the case for state self-governance: “Why not allow states to experiment with different means of addressing or mitigating the collateral costs that innovators would impose on society? Must we wait 10 years while these evils manifest before administering a remedy?”
Trevor Wagener: “The proposed temporary pause on state-level AI regulation in recent legislative proposals isn’t just sound policy—it emulates a proven strategy that helped unlock the internet’s economic potential, supported U.S. digital leadership, increased U.S. GDP and federal tax receipts…”
And if you want a deep dive into viewpoints on this proposed moratorium, see this colossal (and ever-growing) compendium by Adam Thierer.