November 13, 2023

Connecticut moving toward state regulation of AI use in private industry

Photo caption: State Sen. James Maroney (D-Milford) has become a leading voice on AI legislation and oversight in Connecticut.

After adopting safeguards to prevent artificial intelligence-related discrimination in state services, lawmakers are working to develop rules for private industry.

Connecticut businesses that develop or use artificial intelligence could find themselves responsible for ensuring the technology doesn’t cause harm.

Gov. Ned Lamont, in June, signed legislation requiring state agencies to review their use of AI and perform ongoing tests to ensure these systems don’t lead to discrimination or disadvantages for people based on ethnicity, race, religion, age or other factors. The law also seeks to guard against disclosure of personal information.

Meanwhile, the law established a working group to develop further recommendations, including possible new requirements for private-sector businesses that use AI.

The 21-member group must deliver policy recommendations to state lawmakers by Feb. 1, likely making AI a hot-button issue during the 2024 legislative session, which runs from Feb. 7 to May 8.

Connecticut is not alone in attempting to regulate AI. Several states are working on similar legislative tracks.

And President Joe Biden, on Oct. 30, issued an executive order imposing safeguards on AI developers, and calling for the establishment of additional resources to mitigate the risks and maximize the benefits of artificial intelligence.

Potential new business requirements

In Connecticut, state Sen. James Maroney (D-Milford) has become a leading voice on AI legislation and oversight.

The co-chair of the General Law Committee also oversees the state’s newly formed AI working group, which will make policy recommendations to not only regulate, but also promote the use of AI by Connecticut businesses.

The group had met twice as of late October, and is bringing in various academic and private-industry experts for consultation.

During a recent meeting, the group heard from Washington’s chief privacy officer about that state’s generative AI policies, as well as from an expert at Big Four accounting and consulting firm Deloitte, Maroney said.

The working group has discussed various topics, including the use of AI within state government.

As of Oct. 26, officials were more than halfway through an assessment of the approximately 1,400 applications and programs used by various state agencies, and had found just five that currently use AI, Maroney said.

“So, out of the more than 800 applications that have been reviewed so far, five different systems are employing AI,” said Maroney, a Yale graduate who is the founder and director of Milford-based First Choice College Placement, a private tutoring and SAT prep company. “It’s not in widespread use at the moment.”

Of key concern, he added, are programs that use AI to make decisions on things like who qualifies for certain benefits, such as food stamps or unemployment.

In cases like that, there needs to be transparency in the data and methods being used to train the AI to ensure outcomes aren’t biased against a particular group of people, Maroney said.

Of less concern are AI uses like chatbots.

Maroney, who is also on a multistate AI task force, said artificial intelligence could have a positive impact on state government in various ways, including with customer service support, such as directing people or businesses to proper agencies or resources. It can also help improve workforce efficiencies, particularly as state government has experienced a retirement wave in recent years.

Beyond state government, the working group’s focus is on private sector oversight.

The group is considering ongoing tests for developers of AI applications and the businesses that use them, as well as transparency requirements, such as watermarks to clearly identify AI-generated media, he said.

“We’re going to look at requiring impact assessments to identify heightened risk of harm,” Maroney said. “And how do you define harm? That harm could obviously be physical, reputational and loss of opportunities for some areas of concern: housing, finance and others.”

Recommendations are likely to include creating disclosure requirements for private businesses that use AI, Maroney said.

“The developers and employers would self-certify that they had done the testing,” Maroney said. “So, they wouldn’t register or report it, and enforcement will be done by the attorney general.”

Maroney’s group isn’t just looking at ways to guard against potential harms.

He sees more promise than risk in the technology and said the working group is contemplating ways of boosting beneficial AI uses in medicine, education and business. That could include incentives for AI developers, he said.

There will also be workforce development-based recommendations to help train more residents on AI’s use.

The Connecticut Academy of Science and Engineering is conducting a survey of artificial intelligence-related education in the state, from elementary schools through higher education, as well as AI-related workforce development in the private sector.

Outlining the needs of the AI industry and where Connecticut sits on the talent-production spectrum will help the state steer policy and resources, said Nicholas Donofrio, a former IBM executive vice president and a co-chair of the AI working group.

“Connecticut was at one time a powerful innovation hub for the country,” said Donofrio, the CASE appointee to the working group. “How do we get back to that? Could we get back to that? Is this our moment in time to actually swing and pivot to be able to do that? We have 40-some-odd colleges and universities in the state. So, it’s not like we don’t have a skill base. The question is, is it fit for purpose? And we’re going to try to figure that out.”

Donofrio sees AI as a tool, bound by rules and data supplied by humans. The best way to avoid harm is to diagnose every step of AI-related creation and input, he said.

“There are people behind each one of these things,” Donofrio said. “And they’re good people. They’re good, technical people. They are not afraid to be accountable for their work. I think you know business needs to be accountable.”

Donofrio dismissed the notion that any legislative proposals would represent a heavy cost or competitive disadvantage for smaller companies.

“It’s not going to make anybody’s business harder to do,” Donofrio said. “It’s not going to make anybody’s business more complicated. It’s not going to cost them a tremendous amount of money to do anything as far as I’m concerned.”

Wariness of government regulation

Chris DiPentima, president and CEO of the Connecticut Business & Industry Association, said there is concern about “knee-jerk” legislative reactions to the rapidly evolving use of AI. But he also agrees with the AI working group’s efforts to consult with industry leaders and other experts.

DiPentima said his main concern is the potential costs of legislation that requires a broad impact analysis from businesses that use AI.

“Our businesses are hyper-aware of AI as a disruptor, or something that can help with efficiencies and the workforce crisis we have globally,” DiPentima said. “Folks are learning more about AI and embracing it as much as they can, but doing it with eyes wide open.”

N. Kane Bennett, managing partner of Middletown-based Aeton Law Partners LLP, co-chairs the Connecticut Bar Association’s newly formed generative AI committee with his partner Jon Shapiro.

His firm was an early adopter of AI. Bennett describes himself as “extremely skeptical” and generally opposed to government efforts to regulate AI.

“We have very robust laws against discrimination and other things in Connecticut,” Bennett said. “What is new now? And show me some use cases of where we’re getting harms that are not otherwise covered by existing laws. It’s not like you could get away with breaking the law with AI just because you are using AI.”

Bennett said it’s one thing if state government wants to catalog and analyze potential harm from its use of AI, but quite another if this is required of small businesses.

“But why should a small business have to do that?” Bennett asked. “Because the regulation only ends up favoring the larger corporations that have compliance departments, that have lawyers to spend money on, that have departments they can dedicate to this. Why should those types of regulations apply to a small business?”

HBJ Editor Greg Bordonaro contributed to this story.
