A President Draws His Line in the Sand
On December 11, 2025, President Trump signed an executive order with a title as sprawling as its ambition: "Ensuring a National Policy Framework for Artificial Intelligence." Its stated purpose was to sustain American dominance in AI by establishing what the White House described as a "minimally burdensome national policy framework." Its practical effect was something far more provocative — a direct challenge to the authority of every state legislature that had dared to regulate the technology on its own terms.
The order's most consequential provision is the creation of a DOJ AI Litigation Task Force, which became operational on January 10, 2026. Its mandate is unambiguous: identify state AI laws that, in the Attorney General's judgment, unconstitutionally burden interstate commerce, conflict with federal regulations, or are "otherwise unlawful," and challenge them in federal court. The language is deliberately elastic. By granting the Attorney General broad discretion to determine which laws qualify, the administration has handed itself a tool that can be aimed at virtually any state regulation it finds inconvenient.
Beyond litigation, the order deploys a second weapon: money. The Secretary of Commerce has been directed to issue a Policy Notice within 90 days specifying conditions for state eligibility under the Broadband Equity, Access, and Deployment (BEAD) Program. States maintaining AI statutes that the DOJ characterizes as "onerous" will be disqualified from receiving undisbursed BEAD funds — billions of dollars in broadband infrastructure money that has nothing to do with artificial intelligence, now being leveraged as a cudgel against state-level AI governance.
The Federal Trade Commission, meanwhile, has been directed to issue a policy statement by March 11 classifying state-mandated algorithmic bias mitigation as a "per se deceptive trade practice." If issued, this would effectively reframe the very act of regulating AI fairness as itself a form of consumer harm — a breathtaking inversion of the regulatory logic that animated most state AI legislation in the first place.
There are carve-outs. The order expressly exempts state laws related to child safety, AI compute and data center infrastructure, and state government procurement of AI systems. These exceptions reveal a political calculus: the administration wants to appear tough on big-state regulation without touching the issues — protecting children, building data centers — where federal preemption would generate bipartisan backlash.
California and Texas: Unlikely Allies in the Crosshairs
When California's Transparency in Frontier Artificial Intelligence Act and Texas's Responsible Artificial Intelligence Governance Act both took effect on January 1, 2026, they represented something remarkable: two states on opposite ends of the political spectrum arriving independently at the same conclusion — that someone needed to write rules for AI, and Congress was not going to do it.
California's TFAIA targets what it calls "frontier developers" — companies that have trained AI models using more than 10²⁶ floating-point operations. For "large frontier developers" whose combined annual revenue exceeds $500 million, the law imposes enhanced transparency and accountability obligations, including mandatory safety testing, public disclosure of model capabilities, and a requirement to report any "critical safety incident" to California's Office of Emergency Services within 15 days. It is, in essence, an attempt to treat the most powerful AI systems with the same regulatory seriousness applied to nuclear facilities or pharmaceutical trials.
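The two-tier structure described above — a compute threshold that triggers coverage, and a revenue threshold that triggers the enhanced tier — can be sketched in a few lines. This is an illustrative simplification, not the statute's actual test: the 10²⁶ FLOP and $500 million figures come from the description above, while the function and constant names are hypothetical.

```python
# Illustrative sketch of TFAIA's two-tier thresholds as described in the text.
# The statute's actual definitions are more detailed; figures here are the
# article's, and all identifiers are hypothetical.

FRONTIER_FLOP_THRESHOLD = 10**26        # training-compute trigger for coverage
LARGE_DEVELOPER_REVENUE = 500_000_000   # combined annual revenue trigger (USD)

def classify_developer(training_flops: float, annual_revenue: float) -> str:
    """Return the tier a developer would fall into under these assumptions."""
    if training_flops <= FRONTIER_FLOP_THRESHOLD:
        return "not covered"
    if annual_revenue > LARGE_DEVELOPER_REVENUE:
        return "large frontier developer"   # enhanced obligations
    return "frontier developer"             # baseline transparency duties
```

The point of the sketch is the asymmetry it makes visible: compute alone brings a company under the law, but only the combination of compute and revenue triggers the heavier obligations.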
Texas took a different approach. The RAIGA applies broadly to any developer or deployer of AI systems that conducts business in Texas, provides products or services used by Texas residents, or deploys AI systems within the state. Rather than focusing on transparency, Texas drew hard lines around prohibited uses: AI systems that encourage self-harm, violence, or criminality; the creation or distribution of AI-generated child sexual abuse material; unlawful deepfakes; and communications that impersonate minors in explicit contexts. Where California said "show your work," Texas said "don't cross these lines."
Colorado, too, stands in the line of fire. Its Consumer Protections for Artificial Intelligence Act, scheduled to take effect in June 2026, was singled out by name in the executive order's supporting materials as an example of state legislation that prohibits "algorithmic discrimination." At least seventeen other states have AI-related statutes on the books, and more are in committee. The executive order's architects understand that if they cannot halt this tide now, the regulatory landscape will calcify into a patchwork that no single federal framework can easily replace.
The timing was not coincidental. By signing the order on December 11 and activating the task force on January 10, the administration ensured that the federal challenge apparatus was in place before California or Texas had brought a single enforcement action. The message to state regulators was clear: enforce at your own risk.
Federalism on Trial in the Age of Foundation Models
Beneath the policy arguments lies a question that will ultimately be settled by the courts: can a president preempt state law by executive order alone, without an act of Congress? The administration says yes, pointing to the Dormant Commerce Clause and the Supremacy Clause. Constitutional scholars are less certain, and the legal terrain is more treacherous than the White House appears to acknowledge.
The Dormant Commerce Clause — the implied prohibition on state laws that unduly burden interstate commerce — is the DOJ task force's primary legal weapon. The argument is straightforward: AI models are developed and deployed across state lines, and a patchwork of conflicting state regulations imposes intolerable compliance costs on companies that operate nationally. A California law requiring bias audits, a Texas law prohibiting certain outputs, and a Colorado law mandating algorithmic impact assessments create, in this view, an unconstitutional drag on commerce.
But the Supreme Court complicated this argument in 2023. In National Pork Producers Council v. Ross, the Court ruled that the Dormant Commerce Clause does not invalidate nondiscriminatory state laws merely because they force out-of-state industries to alter their business practices. High compliance costs alone, the majority wrote, do not constitute a substantial burden on interstate commerce. If a state law applies equally to in-state and out-of-state actors and serves a legitimate local purpose, the Commerce Clause objection is far weaker than the administration's rhetoric suggests.
The preemption argument faces its own obstacles. Under longstanding constitutional doctrine, state laws are displaced only by an act of Congress, whether through express or implied preemption — not through executive action alone. As the Harvard Law Review noted in a January 2026 analysis of the order, "executive preemption" is a novel and largely untested theory. The administration may argue that existing federal statutes, including the FTC Act and various commerce regulations, supply sufficient congressional authorization for the preemption claims the task force intends to bring. But that theory stretches statutory interpretation to its limits and beyond.
There is also the question of spending-power coercion. The BEAD funding mechanism — conditioning broadband infrastructure money on a state's willingness to repeal AI laws — resembles the kind of coercive federalism the Supreme Court struck down in NFIB v. Sebelius, where the Court held that Congress could not threaten states with the loss of all Medicaid funding to force compliance with the Affordable Care Act's expansion. The principle was clear: the federal government may encourage, but it may not coerce. Whether redirecting BEAD funds crosses that line will be among the first questions litigated.
California Attorney General Rob Bonta has already signaled that his office will defend the TFAIA "with every tool at our disposal." Texas Attorney General Ken Paxton, meanwhile, finds himself in the unusual position of potentially defending a state regulation against the federal government he has spent years championing. The politics of this fight defy the usual partisan mapping, and that alone suggests the courts will treat it with extraordinary care.
An Industry Divided, a World Watching
Silicon Valley's reaction to the executive order has been neither celebration nor mourning, but rather a prolonged and uncomfortable silence followed by carefully hedged public statements and furious private lobbying. The technology industry is not of one mind on this question, and the fissure runs deeper than most coverage has acknowledged.
For startups and smaller AI companies, federal preemption looks like liberation. Complying with fifty different state regulatory regimes requires legal teams that early-stage companies cannot afford. A single national standard — particularly one the administration has described as "minimally burdensome" — would dramatically reduce the cost of bringing new AI products to market. Several venture capital firms have publicly endorsed the order, arguing that regulatory simplicity is essential if American labs are to maintain their lead over competitors in China and the European Union.
But the largest AI companies — Alphabet, Meta, Microsoft, Amazon, and OpenAI among them — face a more complex calculation. Many spent the past year engineering their models to comply with California's specific safety and transparency requirements. They hired compliance teams, built auditing infrastructure, and redesigned training pipelines. The executive order does not merely remove a burden; it threatens to strand significant investments in regulatory compliance. These companies now face an agonizing choice: maintain the safety filters they built for California even without a legal mandate to do so, or pivot to the administration's looser "ideological neutrality" standards to remain eligible for federal contracts.
The concept of "legal limbo" has become the dominant metaphor in corporate boardrooms. Companies that comply with state laws risk losing federal funding under the BEAD provisions and related mechanisms. Companies that ignore state laws remain liable under those statutes until a court definitively strikes them down. The task force's litigation could take years to resolve. In the meantime, general counsels across the industry are advising their executives to comply with both regimes where possible and to document everything — the legal equivalent of building a bomb shelter and hoping it is never needed.
Civil society organizations have been more direct in their opposition. The Center for American Progress called the order "an unambiguous threat to states beyond just AI," arguing that if the executive branch can preempt state consumer protection laws through litigation and funding threats alone, the precedent extends far beyond technology. Environmental regulations, labor standards, data privacy laws — all could be vulnerable to the same mechanism. The ACLU has announced plans to intervene in any task force litigation that threatens algorithmic fairness requirements.
Internationally, the order has been received with a mixture of alarm and opportunism. The European Union, which spent years developing the AI Act, now faces the prospect that American companies may no longer maintain the compliance infrastructure that made transatlantic AI governance roughly interoperable. Brussels officials have privately expressed concern that a deregulated American AI sector will produce models that cannot be legally deployed in Europe without extensive modification — fragmenting the global AI market along regulatory lines rather than unifying it.
China, for its part, has said little publicly but appears content to watch its primary competitor in AI development consume itself in an internal governance battle. Beijing's own approach — centralized, opaque, and unapologetically state-directed — faces no such federalism complications. The irony has not been lost on analysts: the very federalism that has historically been a source of American innovation and resilience may now be the structural vulnerability that slows the country's response to the most consequential technology of the century.
What happens next depends on the courts, and the courts move slowly. The DOJ task force will likely file its first challenges within weeks. California and Texas will defend their laws. Amicus briefs will pile up from every direction. And in the meantime, the AI systems at the center of this fight will continue to advance, their capabilities outpacing the capacity of any government — state or federal — to fully comprehend what they are regulating. The patchwork problem, it turns out, may be less about the patchwork than about the problem itself: how a democratic society governs a technology that is evolving faster than its institutions.