How Do You Ensure AI Ethics In Insurtech? 


What are some ethical considerations when using AI and generative AI in insurance?

Read on to learn more about the ethics of AI:

Artificial intelligence first surfaced decades ago, but more sophisticated AI technology has recently been on the rise in insurance, such as generative AI: systems that can create unique content based on the data and patterns they learn from.

As AI progresses, its prominence in property insurance will continue to grow. Why?

  1. AI can automate the claims processing workflow and detect fraudulent claims. 
  2. AI-powered technology can conduct virtual inspections and improve risk assessments for underwriters. 

The applications for generative AI in insurance continue to grow as well thanks to this technology’s ability to ingest enormous amounts of data and assist humans in making better, data-driven decisions. 

However, insurance providers must be mindful of the regulatory environments they operate within. There are compliance requirements — and therefore ethical considerations — that insurance companies must heed when using AI. 

It can be challenging to navigate this plethora of requirements as providers determine which AI technologies fit into a compliant, legal framework. That is why it pays to have a trusted advisor when considering the right investment for your business.

Challenges Using AI in a Highly Regulated Industry

In the U.S., each state has its own set of regulations and compliance requirements that insurance carriers must follow in order to operate legally.

Insurance laws in each state require carriers to submit rate filings to confirm that they comply with a certain set of standards. These standards dictate not only how insurance companies must establish rating systems and pricing models, but also how they must handle sensitive data. In turn, some state codes regulate the types of technology and models that insurers can use to make decisions about insurance policies and claims.

With different legal requirements in place from state to state, it can be complicated for insurers to determine how they can use AI. And while some states have laid out requirements pertaining to algorithms and risk models, others have not.

The National Association of Insurance Commissioners (NAIC) sets forth guidelines and recommendations for regulators about the use of AI. NAIC guidelines could be the basis for future laws, but for now, most state governments have not outlined specific requirements for how insurance carriers can leverage AI technology. 

There are no universal laws that regulate AI technology in the property insurance industry. 

As a result, a lot is left up to interpretation for insurance companies that operate across multiple states. Federal laws do not explicitly tell insurance carriers:

  1. How and when they can use AI
  2. The different types of AI they can leverage
  3. Which data variables AI can consider when making policy determinations

Although legal inconsistencies and uncertainties remain, there is too much to gain from using these solutions to avoid AI altogether. So, what is the best way for insurance companies to use AI in their day-to-day operations? 

Taking a Conservative Approach to AI: Ethics of Artificial Intelligence

We can only expect that more AI-specific laws will arise as more insurance carriers leverage this technology to conduct rate-making procedures and determine risk, and thus insurability.

So, for insurance carriers to ensure that they are building out compliant AI-powered processes even amid potential future legislation, it is best practice to take the most conservative approach when using AI to assist in underwriting functions and claims processing.

“AI has the power to transform insurance functions. As such, the regulatory environment is changing to keep up with advancements in AI,” explains Amy Gromowski, Executive of Science and Analytics at CoreLogic. “It’s important to manage your AI wisely. A governance program that is agile and comprehensive will ensure you are meeting a varying degree of regulatory standards. When in doubt, be conservative in your governance programs; put process and people in place to ensure the responsible use of AI.”

In the absence of AI-specific universal legislation, building an ethical, responsible data governance model that adheres to the regulations of the most conservative states in which your company provides coverage is a safe start.
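
To make the idea concrete, here is a minimal sketch of a "strictest state wins" check, assuming a carrier tracks which rating variables each state permits: the most conservative posture is simply the intersection of what every operating state allows. The state rules and variable names below are hypothetical placeholders, not actual regulations.

```python
# Hypothetical sketch of a "strictest state wins" policy check.
# State rules and variable names are illustrative, not real regulations.

PERMITTED_VARIABLES_BY_STATE = {
    "CA": {"roof_age", "construction_type", "wildfire_zone"},
    "NY": {"roof_age", "construction_type", "claims_history"},
    "TX": {"roof_age", "construction_type", "claims_history", "wildfire_zone"},
}

def conservative_variable_set(states):
    """Return only the variables that every operating state permits,
    i.e., the most conservative common denominator."""
    return set.intersection(*(PERMITTED_VARIABLES_BY_STATE[s] for s in states))

# A carrier writing in all three states would limit its models to the
# intersection: {"roof_age", "construction_type"}.
print(conservative_variable_set(["CA", "NY", "TX"]))
```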

A data governance model should consist of standards for gathering, storing, processing, and disposing of your data. These models will determine the AI technologies that you can use—and how to use them—because data is what trains AI to make decisions and take actions. 
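
As a rough sketch of what such standards could look like in machine-readable form, the hypothetical policy below encodes one rule for each lifecycle stage. Every field name and threshold here is an assumption for illustration, not a standard schema. Encoding the policy as data, rather than burying it in application logic, also makes it easier to audit and to tighten when a stricter state's rules change.

```python
from dataclasses import dataclass

# Hypothetical, minimal encoding of data governance standards. Field names
# and values are illustrative assumptions, not a standard schema.

@dataclass(frozen=True)
class DataGovernancePolicy:
    require_consented_source: bool    # gathering: only data collected with consent
    require_encryption_at_rest: bool  # storing: encryption requirement
    allow_pii_in_training: bool       # processing: may models train on PII?
    retention_days: int               # disposing: delete data older than this

CONSERVATIVE_POLICY = DataGovernancePolicy(
    require_consented_source=True,
    require_encryption_at_rest=True,
    allow_pii_in_training=False,
    retention_days=365,
)

def dataset_is_compliant(consented, encrypted, contains_pii, age_days,
                         policy=CONSERVATIVE_POLICY):
    """Check one dataset against the policy before it trains or feeds a model."""
    if policy.require_consented_source and not consented:
        return False
    if policy.require_encryption_at_rest and not encrypted:
        return False
    if contains_pii and not policy.allow_pii_in_training:
        return False
    return age_days <= policy.retention_days
```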

All technologies should manage data in accordance with your ethical and responsible data governance program.

Adhering to AI Ethics

Ethical AI is trained on a comprehensive set of accurate, unbiased data so that it won’t lead to decision-making that discriminates against certain communities or protected classes of people. With ethical AI, there should be transparency around the type of data used. Sensitive data should also remain private and secure.
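
One way to make that check concrete is a simple statistical screen on model outputs. The sketch below uses the "four-fifths" disparate impact ratio, a common fairness heuristic; the threshold, groups, and data are illustrative assumptions rather than a prescribed compliance test.

```python
# Hypothetical bias screen using the "four-fifths" disparate impact ratio,
# a common fairness heuristic. Threshold, groups, and data are assumptions.

def approval_rate(decisions):
    """Fraction of approvals; `decisions` is a list of booleans."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one. Values below
    roughly 0.8 are a conventional red flag for adverse impact."""
    low, high = sorted([approval_rate(group_a), approval_rate(group_b)])
    return low / high if high > 0 else 1.0

# Example: model approvals for two hypothetical applicant groups.
group_a = [True, True, True, False, True]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved
ratio = disparate_impact_ratio(group_a, group_b)
if ratio < 0.8:
    print(f"Ratio {ratio:.2f}: hold this model's output for human review.")
```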

To ensure that AI promotes unbiased decisions and actions, humans should always maintain oversight of the technology. Although some AI is capable of taking independent action and making decisions without the input of humans, ethical AI will always involve humans to be sure that policies are priced fairly and insurability is determined objectively.
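
In practice, that oversight can be enforced structurally: route every AI recommendation through a gate that sends denials and low-confidence cases to a human reviewer. The sketch below is a hypothetical illustration; the threshold and routing labels are assumptions, not a reference implementation.

```python
# Hypothetical human-in-the-loop gate: the model never finalizes a decision
# on its own. The threshold and labels are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.90

def route_decision(model_confidence, is_denial):
    """Route an AI recommendation either to a human underwriter or to
    auto-processing. Denials and low-confidence cases always go to a person."""
    if is_denial or model_confidence < CONFIDENCE_THRESHOLD:
        return "human_review"    # a person confirms fairness and accuracy
    return "auto_with_audit"     # automated paths are still logged for audit

# A confident approval can proceed automatically; a denial never does.
print(route_decision(model_confidence=0.95, is_denial=False))  # auto_with_audit
print(route_decision(model_confidence=0.97, is_denial=True))   # human_review
```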

Working With Technology Partners That Pursue Responsible AI

It might seem impossible to keep tabs on all the updates to all the AI regulations across all the states in which your organization conducts business. That’s why it’s important to work with technology partners that share a conservative, ethical view on AI and compliance and that have strict data governance models.

At CoreLogic, we deliver AI solutions trained on robust sets of unbiased data. We also have two governance programs for AI solutions. Both programs were developed with the input of legal experts to ensure that all our software manages and processes comprehensive, objective (read: unbiased) sets of data and aligns with compliance requirements across states.

For clients who have any uncertainties about how to implement AI so that it fits within their own data governance models, CoreLogic acts as a consultant to help them roll out our AI solutions ethically.

Understanding the Ethics of AI

Since there aren’t explicit, uniform rules pertaining to AI in the insurance sphere, it is important to use ethics as your compass to guide your AI approach.

As AI evolves, it will continue to push the boundaries of what can be done with data. Still, it is important to maintain human oversight and control over the data your AI technology leverages so that you can ensure it is secure and unbiased. To remain compliant as AI grows in sophistication and influence, your entire digital ecosystem must be designed with ethics in mind.

It takes a village to pursue AI ethically and conservatively. Not only do you have to establish conservative data governance models to guide data handling, but you must work with AI solutions providers that have the same priorities.

Learn More About How AI Is Powering Property Insurance

Ebook: The Role of Artificial Intelligence Across the Property Ecosphere

The CoreLogic statements and information in this blog post may not be reproduced or used in any form without express written permission. While all the CoreLogic statements and information are believed to be accurate, CoreLogic makes no representation or warranty as to the completeness or accuracy of the statements and information and assumes no responsibility whatsoever for the information and statements or any reliance thereon. CoreLogic® and Marshall & Swift® are the registered trademarks of CoreLogic, Inc. and/or its subsidiaries.


Author: Rayne Chancer