As artificial intelligence permeates all aspects of business and industry, new risks and questions of liability emerge
The current ChatGPT craze is only the newest rendition of artificial intelligence making its way into our everyday lives. AI has been around in many less obvious forms for years. With a role in everything from predictive text on our smartphones to route guidance that incorporates traffic into its decision-making, AI is pervasive in modern life.
A study by Pew Research Center found 55 percent of Americans are aware of their own daily interactions with AI. As for the 44 percent of respondents who believe they don’t “regularly interact with AI,” it’s likely that most of them are simply mistaken.
Because AI is a growing part of our personal and professional lives, it’s important to consider who’s responsible when (not if) something goes wrong. There’s no clear-cut answer to the question of who’s liable when AI makes a mistake, but in this blog we’ll explore some of the questions and considerations we should all keep in mind as we incorporate artificial intelligence into our lives and work, especially when it comes to ethical AI in the insurance industry.
This question isn’t unique to the insurance industry or the use of AI in insurance. But since we in the insurance industry tend to care a great deal about questions of risk and liability, it seems pertinent to our interests as bona fide insurance nerds.
Emerging technology creates new risks
Every new technological innovation brings with it never-before-seen risks and liabilities. There was no need for auto insurance before cars were invented. For a more recent example, no one would have imagined cyberliability insurance before the internet came into existence. These inventions created brand new ways for people and businesses to incur losses. Not coincidentally, they created new lines of business for insurance companies to mitigate risk.
AI and risk of bodily harm
Building artificial intelligence into existing technology (self-driving cars, for example) is intended to make things safer. In theory, a self-driving car would prevent car crashes caused by human error, drunk driving, driving while tired, and other distinctly human types of accidents. It very well may be that autonomous vehicles will reduce the number of traffic deaths each year, but there will still be cases when the use of AI itself causes harm that a human driver wouldn’t have.
In 2018, a self-driving car being tested by rideshare company Uber struck and killed a pedestrian because the technology wasn’t trained to recognize humans crossing the street without a crosswalk. This is a very AI-specific problem. It would be absurd for a human driver to not recognize that a person crossing the street was actually a person just because they were jaywalking. But in the world of AI, it’s common for the AI to make mistakes that no human would because of gaps in the technology’s training data set.
Another area where AI can cause serious injury, or even death, is when it’s used to diagnose illnesses and gets it wrong. Artificial intelligence is already widely used in the medical field and has proven to match top doctors’ accuracy in diagnosing some conditions, like skin cancer. But, like human diagnoses, AI’s diagnoses aren’t perfect. If a doctor misjudges a patient’s symptoms and an incorrect diagnosis leads to that patient’s death, there are laws in place to handle that scenario. So what happens when a doctor relies on AI for a diagnosis and gets it wrong? There aren’t yet clear answers on who’s responsible.
AI and risk of financial losses
As financial services firms bring AI into their offerings, there are bound to be instances when AI decisions cost someone money. In 2019, in the first documented lawsuit of its type, an investor blamed artificial intelligence for making a trading decision that cost him $20 million.
There are plenty of other ways AI can cost people and companies money. Just like the stock market responds to out-of-control C-Suite executives who say crazy things on Twitter, a company may find its stock price plummeting when its AI does something embarrassing in public.
When Google unveiled its new ChatGPT competitor, Bard, an AI mistake in the promo video caused the company to lose $100 billion in market value. Around the same time, Microsoft’s Bing search engine chatbot made headlines when it appeared to become sentient and fall in love with tech reporter Kevin Roose. Not surprisingly, Microsoft’s stock price fell almost 4 percent that week. These examples show AI doesn’t have to be making direct investment decisions to cost people money. And when people or companies lose money, they tend to want someone to be held accountable.
AI and the risk of illegal discrimination
Artificial intelligence is only as good as the data it’s trained on. Because that data is provided by humans, and humans hold inherent biases, AI can become subconsciously biased (if not overtly racist!). There are plenty of examples of AI propagating racially and gender-biased outcomes. A few of the most notable include:
- An algorithm used in the criminal justice system to assess how risky a defendant is, including their likelihood to commit future crimes, was biased against Black people.
- Amazon scrapped an AI tool it built to assist with recruitment because it preferred men over women.
- Facial recognition software is typically inaccurate when identifying people of color. This has led to mistaken arrests and an American Civil Liberties Union (ACLU) lawsuit.
As decent human beings, we should be appalled by racism and discrimination in any form. And beyond our personal feelings, discrimination based on legally protected categories like race, gender, sexual orientation, etc. opens up a whole can of risk that people and companies would rather avoid.
What causes AI to make mistakes?
We’ve discussed the types of damage AI can do when it fails, but how, or why, does AI fail to begin with? These are some of the most common reasons artificial intelligence can go wrong.
Biased, incorrect, or incomplete training data
Think of artificial intelligence like a student who doesn’t know anything except what you tell it. The reason AI and machine learning (ML) are often mentioned together is that machine learning is the method by which artificial intelligence becomes capable of performing tasks. This involves training the AI on a large set of (usually historical) data.
This can cause problems if the AI’s training data contains biases or is incorrect or incomplete. Thinking back to the self-driving car that hit a pedestrian, you can imagine the AI was trained to recognize what a person crossing the street looked like using thousands of examples. Unfortunately, the examples it was trained with didn’t include people crossing without a crosswalk (that is, jaywalking). So, in the AI’s “mind,” an object crossing the street without a crosswalk didn’t register as a person, and the car didn’t stop.
This also becomes an issue in examples like the criminal justice system’s risk assessments and Amazon’s recruiting algorithms. In those cases, AI was trained using historical data. If Amazon’s workforce was historically male-dominated, or the criminal population was historically made up of more people of color, the AI learns that those factors are predictors of the “right answer” it’s trying to find. In other words, AI trained with historical data can end up perpetuating historical discriminatory trends instead of accurately predicting outcomes.
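To make that mechanism concrete, here’s a minimal, purely illustrative Python sketch (synthetic data, invented feature names, and scikit-learn assumed to be available; this is not how Amazon’s actual tool worked) showing how a model trained on historically skewed hiring decisions learns to penalize a proxy feature rather than measure qualification:

```python
# Illustrative only: synthetic data showing how historical bias leaks into a model.
# All feature names and numbers are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Two inputs: a genuine qualification score and a proxy feature that happens
# to correlate with a protected characteristic (e.g., gender).
qualification = rng.normal(0, 1, n)
proxy_feature = rng.integers(0, 2, n)

# Historical "hired" labels: past decisions favored one group regardless of
# qualification. The model only sees the outcomes, never the reasons.
hired = (qualification + 1.5 * (1 - proxy_feature) + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([qualification, proxy_feature])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates who differ only on the proxy feature.
candidates = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(candidates)[:, 1])
# The second candidate gets a noticeably lower "hire" score: the model has
# faithfully learned, and now perpetuates, the historical discrimination.
```

Nothing in the training step is malicious. The model simply reproduces the pattern baked into the historical labels, which is exactly the failure mode described above.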
Unclear objectives and rules
A major difference between artificial intelligence and humans is our ability to take rules and apply them differently across different contexts. AI, on the other hand, needs very clear rules and objectives to succeed. It needs to be told “in scenario X, do Y” or “in scenario X, do Y, unless Z.” If the rules AI developers provide don’t account for every possible scenario (and how could they?), it’s likely that AI will make mistakes.
This is exactly what happened with Microsoft’s Tay, which was programmed to interact with 18- to 24-year-old Twitter users and learn from its interactions. Somehow, the developers didn’t clearly spell out to the AI that it shouldn’t learn and repeat offensive, inappropriate, racist, sexist, and antisemitic content. In a different AI fail, a healthcare tech company testing a chatbot in a medical setting found the bot told a (fake) patient that he should kill himself and that it could help, likely a case of the AI’s helpful and friendly programming rules not being clear enough, or not being generalizable across different contexts.
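To see how “in scenario X, do Y” breaks down, here’s a deliberately toy Python sketch (scenario names are invented and not drawn from Tay or any real chatbot) of a rule table that only covers the cases its authors thought to enumerate:

```python
# Illustrative only: a hand-written rule table with a gap. Scenario labels are
# made up for this sketch.
RULES = {
    "greeting": "Say hello and offer to help.",
    "product_question": "Answer from the documentation.",
    "complaint": "Apologize and open a support ticket.",
    "abusive_message": "End the conversation politely.",
}

def respond(scenario: str) -> str:
    # "In scenario X, do Y" only works for the scenarios someone thought to list.
    # Everything else falls through to a default written with ordinary small talk
    # in mind, which can be wildly inappropriate for the case that actually arrives.
    return RULES.get(scenario, "Be agreeable and keep the conversation going.")

print(respond("greeting"))           # covered by an explicit rule
print(respond("medical_emergency"))  # not covered: falls back to "be agreeable"
```

The gap isn’t a coding bug so much as a missing rule, and the space of possible scenarios is far too large for developers to enumerate every exception in advance.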
Unrealistic expectations for what AI can do
Back to the self-driving car example, this technology’s creators aren’t saying people in these vehicles should just go to sleep at the wheel. The AI in autonomous vehicles is intended to assist drivers but relies on the human driver to react in an emergency or in the case of machine failure. Accidents happen when the person tasked with monitoring the AI expects it to work perfectly without supervision. The deadly Uber crash, for example, occurred while the car’s human driver was streaming a TV show instead of paying attention to the road. This is why having realistic expectations for what AI can do on its own, and where it needs human supervision and intervention, is a vital part of reducing risks.
Who’s legally responsible if AI causes harm?
Is it the manufacturer, the end user or operator, the software developer, the person or group that trained the AI, or someone else? At the moment, the answer is largely still to be determined. New AI technology challenges current negligence laws, according to legal scholars, because it inserts “a layer of inscrutable, unintuitive, and statistically-derived code in between a human decision-maker and the consequences of that decision.” Ultimately, there are a small number of potentially liable parties, but new laws will be necessary before there’s any standard answer to the question of who’s responsible for AI’s mistakes.
A brave new regulatory world
Just as new laws have developed in response to the new risks brought about by cars, planes, and the internet, laws governing the use of AI are coming down the pike. The Colorado Division of Insurance (DOI) recently released draft regulations governing the use of AI in life insurance underwriting, an area near and dear to our hearts, as well as a prime ecosystem for artificial intelligence assistance.
To do this, the DOI worked with insurance industry stakeholders to develop regulatory guidance that gives the insurance industry freedom to build useful AI models while maintaining fairness and nondiscrimination protections. While the draft regulations currently apply only to life insurance carriers doing business in Colorado, the DOI has indicated its rules may eventually extend to all lines of business to ensure the industry’s use of AI technology doesn’t harm consumers.
If you work in the insurance industry and the idea of new and ever-changing state-by-state regulations gives you nightmares, see how AgentSync can help you sleep better. AgentSync is modern insurance infrastructure that streamlines compliance and saves costs while improving your employees’ and producers’ experience. Check out a demo today!