
States Take the Lead on AI Regulation
December 27, 2025 — State-level action on AI is surging. Since October, lawmakers in 47 states have introduced over 250 bills targeting AI in health care, 33 of which are now law in 21 states. These range from bans on AI in mental health decision-making (Illinois) to new rules for insurer reviews and patient-facing chatbots. Bipartisan support is strong, but industry leaders warn that a patchwork of laws could stifle innovation (source: WSJ).
Florida Governor Ron DeSantis introduced an “AI Bill of Rights” that would require companies to notify users when they are interacting with AI, restrict AI in therapy, and limit state subsidies for AI data centers. The proposal faces legislative hurdles and Big Tech opposition, but is framed as a states’ rights measure intended to avoid conflict with federal policy (source: Florida Politics).
New York Assemblymember Alex Bores is pushing deepfake safety laws and the RAISE Act, which mandates incident disclosures and model testing for frontier labs such as Meta, Google, and OpenAI. The bipartisan measure, signed into law last Friday, is seen as a template for national policy, though it drew backlash from pro-AI PACs (source: NYT).
AI Regulation: Innovation, Trust, and the Path Forward
The wave of state AI laws marks a new phase in US tech governance. Health care is a key battleground: while AI promises efficiency and better outcomes, lawmakers are wary of bias, privacy risks, and overreliance on chatbots for mental health. Illinois’s ban on AI in mental health decisions reflects a broader trend toward consumer protection, but it also underscores the risk that fragmented rules pose for providers operating nationally.
Florida’s AI Bill of Rights, with its focus on disclosure and parental controls, highlights the tension between state innovation and Big Tech resistance. As water and energy demands rise, states are also scrutinizing AI’s infrastructure footprint, seeking to balance growth with sustainability.
Deepfake risks are driving new safety laws. Alex Bores’s RAISE Act requires safety plans, incident reporting, and model testing, setting a precedent for other states and the federal government. The push for open content-provenance standards like C2PA aims to make verifying media authenticity as routine as HTTPS is for web trust.
For businesses and builders, the practical implications are clear: compliance is getting more complex, and transparency, security, and collaboration with regulators are essential. Platforms like CloneForce are helping organizations navigate this landscape with secure, scalable automation that adapts to evolving laws. As the regulatory race accelerates, success will depend on balancing innovation with trust and consumer protection, ensuring AI delivers value while safeguarding society.