
October 16, 2023

As insurers embrace AI, they must also monitor ‘hallucinations’


While insurers are using generative AI to become more efficient, they’re also accounting for new risks of error.

AI may make more mistakes than humans in performing certain tasks.

Paradoxically, as AI improves, it can become smarter in some ways, but make more mistakes in others.

Christopher Sirota, product manager for emerging issues and innovation at Verisk, an analytics and risk assessment firm based in New Jersey, said “drift” occurs when AI generates different answers to the same question, including answers that are inaccurate.
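The kind of "drift" Sirota describes can be checked for in a simple way: send the same prompt to a model repeatedly and flag cases where the answers no longer agree. The sketch below is purely illustrative and is not Verisk's method; the function name `detect_drift`, the simulated responses, and the 70% agreement threshold are all assumptions made for the example.

```python
from collections import Counter

def detect_drift(responses, agreement_threshold=0.7):
    """Flag possible drift: when the same prompt is asked many times,
    the most common answer should still dominate. If it does not,
    the model's behavior on that prompt warrants human review."""
    counts = Counter(responses)
    top_share = counts.most_common(1)[0][1] / len(responses)
    return top_share < agreement_threshold

# Hypothetical responses to the same repeated question.
stable = ["Paris"] * 9 + ["Lyon"]          # 90% agreement -> no drift flag
drifting = ["Paris", "Lyon", "Marseille",  # 40% agreement -> drift flag
            "Paris", "Nice"]
```

A monitoring team would tune the threshold to the task: a claims-triage question might demand near-total agreement, while an open-ended drafting task tolerates more variation.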

For example, the newer ChatGPT 4 ostensibly "knows" more than its predecessor, ChatGPT 3.5, but the older version remains better at certain tasks.

“ChatGPT 3.5 is actually getting a little smarter for some things, but not so smart for others, and the same is true for ChatGPT 4,” Sirota said.

Insurers must account for AI's tendency to make mistakes, and for the need to have humans monitor AI output for hallucinations, Sirota said.

Human oversight remains an important component of emerging generative AI.
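One basic form of that oversight is routing an AI answer to a human reviewer when it is not clearly grounded in the source documents it was supposed to draw on. The sketch below is a hypothetical illustration, not any insurer's actual process; the function name `needs_human_review`, the word-overlap heuristic, and the 50% threshold are assumptions chosen for the example.

```python
def needs_human_review(answer, source_text, min_overlap=0.5):
    """Crude grounding check: if few of the answer's words appear in
    the source material, flag the answer for a human to verify."""
    answer_terms = set(answer.lower().split())
    source_terms = set(source_text.lower().split())
    overlap = len(answer_terms & source_terms) / max(len(answer_terms), 1)
    return overlap < min_overlap

# Hypothetical policy excerpt and two candidate AI answers.
policy = "the policy covers flood damage up to 50000 dollars"
grounded = "the policy covers flood damage"        # terms all in source
ungrounded = "earthquakes are excluded entirely"   # no support in source
```

Real systems use far stronger grounding signals than word overlap, but the workflow is the same: unsupported output goes to a person, not straight to a policyholder.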

A combination of physics, human oversight and machine learning has resulted in generative AI that can follow instructions and show human-like reasoning skills.

The key to using AI effectively involves a human asking the right questions and giving it the right prompts, using language it understands.

“How do you know exactly how to ask the ChatGPT or generative AI to do those things? What are the personas you have to train it upon? What is the corpus of data that you have to train upon? Those are the key areas that we’re seeing organizations starting to look at,” said Doug Vargo, who leads the U.S. emerging technologies practice at CGI, an IT and business consulting firm with a downtown Hartford office.
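The personas and prompts Vargo describes are often expressed as a template that fixes the model's role and restricts it to an approved body of data. The function below is a hypothetical illustration of that idea; the name `build_prompt`, the persona wording, and the template structure are assumptions, not CGI's actual practice.

```python
def build_prompt(persona, corpus_excerpt, question):
    """Assemble a prompt that pins down a persona and grounds the
    model in a specific excerpt of vetted corporate data."""
    return (
        f"You are {persona}.\n"
        f"Answer using only the following source material:\n"
        f"{corpus_excerpt}\n"
        f"Question: {question}"
    )

# Hypothetical usage for an insurance workflow.
prompt = build_prompt(
    persona="an experienced claims adjuster",
    corpus_excerpt="Water damage from burst pipes is covered; "
                   "gradual seepage is excluded.",
    question="Is a burst-pipe claim covered?",
)
```

Keeping the persona and the data corpus explicit in the template is what makes the resulting system auditable, which connects directly to the transparency concerns Vargo raises.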

But keeping humans involved can also mean removing their innate biases, which can become embedded in AI language models and in the data they are trained on.

Vargo said there will be a heavy focus on transparency and ethics, along with the trustworthiness of data, as generative AI is implemented.
