AI in Employment-Related Decisions Part 2: State Strategies to Address Pressure and What It Means for Employers
State lawmakers across the country have been busy this year trying to curb the most consequential uses of AI in employment-related decisions. As those attempts have moved from idea to legislation, two powerful forces have pushed back.
The tech industry is concerned about a patchwork of state rules, and the Trump administration has prioritized removing barriers to AI use. States are reacting by shifting their strategies to narrow, revise, and/or delay legislation. Employers would be wise to stay abreast of these evolving strategies to ensure compliance in a rapidly shifting regulatory landscape.
Tech Industry Opposes State Legislation Regulating AI Use in Employment Decisions
Unsurprisingly, the tech industry is opposed to any attempt to regulate AI—particularly state-by-state. OpenAI and Google have asked federal legislators to enact a national regulatory framework that would preempt state laws. Lawmakers like Sen. Ted Cruz (TX) and Rep. Darrell Issa (CA) are working to avoid a patchwork of conflicting state laws.
In response to a February 2025 request for information on policy ideas for the AI Action Plan, OpenAI proposed that the federal government provide “domestic AI companies with a single, efficient ‘front door’ to the federal government that would coordinate expertise across the entire national security and economic competitiveness communities”—in part by preempting state law. OpenAI specifically stated:
“We propose creating a tightly-scoped framework for voluntary partnership between the federal government and the private sector to protect and strengthen American national security. This framework would extend the tradition of government receiving learnings and access, where appropriate, in exchange for providing the private sector relief from the 781 and counting proposed AI-related bills already introduced this year in U.S. states. This patchwork of regulations risks bogging down innovation and, in the case of AI, undermining America’s leadership position.”
The House of Representatives proposed a 10-year ban on state regulation of AI in the One Big Beautiful Bill Act, but that provision was ultimately dropped in the Senate.
Though federal preemption remains the goal, recent efforts are aimed at compromise. Big tech lobbyists and PACs are encouraging states to adopt transparency-focused laws, such as requiring notification when AI is used to review job applications, an approach less onerous than the mandatory impact assessments some states have considered. It may be working: Colorado has delayed its law requiring impact assessments until June 30, 2026.
Trump Administration’s AI Action Plan and Preemption Threat
In January 2025, President Trump issued an executive order entitled “Removing Barriers to American Leadership in Artificial Intelligence,” directing the U.S. to “develop AI systems … free from ideological bias or engineered social agendas” in order to solidify and sustain its position as a global leader in artificial intelligence.
The One Big Beautiful Bill Act included a proposed 10-year prohibition on state AI regulation, which could have preempted or frustrated existing state laws like Illinois’s HB 3773, which regulates the use of AI in employment decisions. The Senate, however, voted to remove this provision, allowing states to enact and enforce their AI laws.
In July 2025, the White House published “Winning the Race: America’s AI Action Plan,” which outlines how the federal government plans to promote and protect U.S. AI dominance.
The first pillar of the plan calls for a sweeping review of federal and state laws, regulations, and other rules to eliminate red tape and “onerous” regulation. This will be done through regulatory and procedural overhauls, funding incentives, and potential preemption. Federal agencies with AI funding programs are directed to consider a state’s regulatory environment when awarding funding. This is intended to discourage support for states with burdensome AI regulation while still respecting states’ rights to pass reasonable, innovative laws. Consequently, states with burdensome AI regulations could see grants and other federal funding affected.
The Federal Communications Commission is to evaluate whether state AI regulations interfere with its ability to carry out its obligations under the Communications Act of 1934. This may support preemption arguments wherever state or local regulations conflict with federal law or policy or hinder communications infrastructure. Additionally, the Federal Trade Commission is to review investigations and orders from the previous administration to ensure they do not advance restrictive theories of liability that unduly burden AI innovation.
In short, the White House aims to leverage existing federal law and funding to ensure states do not regulate AI in ways that interfere with or unduly burden its plan to bolster American AI dominance.