AI is Coming to a Legislature Near You

by Jessica Chapman

Can you believe Siri is a teenager? The digital assistant first became available as an iOS app in 2010. Amazon’s Alexa started interrupting living room and kitchen conversations not long after, in 2014. These technologies exposed many people to artificial intelligence for the first time.

These days, “AI” seems to touch every part of our lives in some way or another. We’re exposed to it daily in online customer service, map applications, weather forecasting, email reminders, and autocomplete functions (when texting for example). AI-enhanced robots clean airplanes now. AI Note Takers join our meetings. We have self-driving cars!  

And while AI certainly predates Siri’s arrival, the decade and a half since her appearance on iPhones has seen a genuine groundswell of innovation in, attention to, and curiosity (plus anxiety) about artificial intelligence that continues to build to this day.

In tandem with these technological advances, state legislatures including our own have been working to wrap their arms around artificial intelligence from a policy perspective. It’s an unfolding situation, complicated by the fact that states and municipalities are considering bills and passing (or in some cases vetoing) laws pretty much every week.

In 2024 alone, 31 states – including Colorado – adopted resolutions or enacted legislation to study AI. HB 24-1468 created Colorado’s Artificial Intelligence Task Force consisting of a bipartisan membership of 26 legislators and industry and public interest experts and professionals.

At the task force’s October meeting, a presenter from TechNet, a bipartisan network of technology executives, shared that 17 bills related to artificial intelligence had passed in California since the previous task force meeting just over a month earlier. Just in California!

The National Conference of State Legislatures has been working to keep up too. The organization only began specifically tracking artificial intelligence legislation in 2019. Its task force on cybersecurity and privacy reorganized and expanded in 2023 due to what NCSL’s executive committee saw as the growing importance of AI.

State legislatures’ surging interest in artificial intelligence shows up in the numbers. NCSL records indicate there were about 125 measures involving artificial intelligence introduced across the United States and its territories in 2023. In 2024, that number more than doubled to over 300.

(These numbers pertain to AI generally and don’t include legislation related to specific AI technologies like facial recognition, deepfakes, and self-driving cars. NCSL tracks those issues separately.)

“There is a swelling interest from the states because of a lack of federal action,” says Chelsea Canada, program principal of NCSL’s Financial Services, Technology and Communications Program. “It’s been a flurry of activity that continues to increase.”

In addition to HB 24-1468, Colorado passed two other bills related to AI during the 2024 session: one related to the use of deepfakes in elections (HB 24-1147) and one that creates a regulatory structure for artificial intelligence (SB 24-205).

SB 24-205, commonly referred to as the Colorado Artificial Intelligence Act, or CAIA, has generated discussion and comment from the task force and news sources for being among the first laws of its kind in the United States to regulate what it terms “high-risk” artificial intelligence systems. (Utah passed the Utah Artificial Intelligence Policy Act in March 2024.)

“High-risk,” under Colorado’s CAIA, means AI systems that make, or are a substantial factor in making, a “consequential” decision.

CAIA offers Colorado consumers recourse to address what it describes as “algorithmic discrimination,” which can occur when high-risk AI plays a role in hiring practices, housing and loan applications, and other consumer services.

Tech giants including Google, Amazon, Microsoft, and IBM have appeared before the state’s task force to weigh in on provisions of SB 24-205, which does not go into effect until February 1, 2026.

So far in the 2025 session, there are two bills related to AI under consideration by Colorado’s General Assembly – SB 25-022, which contemplates the use of AI to fight wildfires, and HB 25-1212, which would create certain whistleblower protections for employees of artificial intelligence developers.

According to the NCSL’s Chelsea Canada, states have been looking to the federal government and elsewhere for guidance on how to approach AI regulation, as there is not yet a consensus even on the definition of artificial intelligence. The relative immaturity of the industry poses challenges since AI technologies operate across state and national borders.

Plenty of parties are eager to plant their flags though.

The White House issued an executive order in October 2023 laying out principles for “safe, secure, and trustworthy development and use of artificial intelligence.” The federal government followed up on that order in October 2024 with additional AI guidance aimed specifically at United States military and intelligence agencies.

The Organization for Economic Cooperation and Development has also taken a stab, adopting its own AI Principles back in 2019. And in mid-February, France hosted the Artificial Intelligence Action Summit – the third in a series of global AI summits – convening over 1,000 participants representing over 100 countries.

Trade organizations are getting in on the game too.

The American Bar Association issued a formal opinion in July 2024 on how lawyers should approach the use of AI. Industry insiders are closely following lawsuits working their way through the court system for direction on how to handle artificial intelligence.

A quick search reveals lawsuits that make accusations of price-fixing, health care denials, and even, sadly, wrongful death at the hand of artificial intelligence. How these cases resolve could influence future legislation.

We at the Office of Legislative Legal Services have been evaluating whether and how to incorporate artificial intelligence into our drafting and editing operations as well. The office convened a task force during the 2024 interim to survey our 60-plus employees on their awareness of and familiarity with AI tools.

As a result of its efforts, OLLS now has an in-house policy on the use of AI, which dictates that “OLLS employees shall not upload, enter, or otherwise incorporate confidential information into GAI [generative AI] or instruct GAI to generate confidential information.” Generative AI is a type of AI – the best-known example is ChatGPT – that creates new content based on user inputs.

“We’re interested in exploring how AI technology can be used to improve our work product and enhance the service that we provide to the members of the General Assembly,” says Director Ed DeCecco. “But the use of AI must also be consistent with our statutory and ethical obligations of confidentiality. With that perspective, we are continuing to review how and if AI can be effectively used in our work.”

Stay tuned, and welcome to the Wild West of AI!

Says NCSL’s Chelsea Canada, “It’s an area that is emerging and evolving, and the interest is expanding and the work is increasing because members are more and more interested.”

Artwork generated by magicstudio.com (a generative AI tool) in response to the prompt “artificial intelligence in the legislative environment.”