Insurance companies should step up their involvement in the governance of AI systems as regulatory pressure mounts. Every organization should have an AI governance platform in place to avoid violating privacy and data protection laws, being accused of discrimination or bias, or engaging in unfair practices.
“As soon as a regulation or similar legislation is passed, organizations are left in a precarious position, because [a lack of governance] can lead to fines, loss of market share and bad press. Every company using AI should have this on their radar,” said Marcus Daley, technical co-founder of NeuralMetrics.
NeuralMetrics is an insurtech data provider that assists in commercial underwriting for property and casualty (P&C) insurers. The Colorado-based firm’s proprietary artificial intelligence technology also serves financial services companies and banks.
“If insurance companies use artificial intelligence to process personally identifiable information, they should be monitoring very closely and understanding precisely how it is being used, because it is an area of responsibility that they may not be aware of,” Daley told Insurance Business.
How could AI regulations affect the insurance industry?
Last month, the Council of the European Union formally adopted its common position on the Artificial Intelligence Act (AI Act), becoming the first major body to set standards for regulating or banning certain uses of AI.
The act sorts AI applications into three risk categories: unacceptable risk, high-risk applications, and other applications that are neither banned nor considered high-risk. Insurance AI tools, such as those used for risk assessment and pricing in health and life insurance, are deemed high-risk under the AI Act and would be subject to more stringent requirements.
What is notable about the EU AI Act is that it sets a benchmark for other countries seeking to regulate AI technologies more effectively. There is currently no comprehensive federal AI legislation in the US, but in October 2022 the Biden administration released a blueprint for an AI “bill of rights,” which includes guidelines on how to protect data, minimize bias and reduce the use of surveillance.
The blueprint contains five principles:
- Safe and effective systems – people must be protected from unsafe or ineffective systems
- Algorithmic discrimination protections – people must not face discrimination from AI systems, which must be used and designed in an equitable way
- Data privacy – individuals must be protected from abusive data practices and have agency over how their data is used
- Notice and explanation – users must be informed when an automated system is being used
- Human alternatives – users should be able to opt out whenever they want and access a person who can address any issues
The blueprint for an #AIBillofRights is for all of us:
– Project managers designing a new product
– Parents seeking protections for their children
– Workers advocating for better conditions
– Policymakers seeking to protect voters https://t.co/2wIjyAKEmy
— White House Office of Science and Technology Policy (@WHOSTP) October 6, 2022
The “bill of rights” is seen as a first step toward establishing accountability for artificial intelligence and technology companies, many of which call the US home. However, some critics say the blueprint lacks teeth and are calling for stricter regulation of AI.
How should insurance companies prepare for stricter AI regulations?
Daley suggested that insurance companies need to step up their governance of AI technologies within their operations, and that leaders must incorporate several key attributes into their AI governance plans.
Daley stressed that carriers need to be able to answer questions about their AI decisions, explain the results, and ensure that AI models remain accurate over time. This transparency has the added benefit of supporting compliance by providing proof of data provenance.
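To make that idea concrete, the sketch below shows one way a carrier might log AI decisions so they can be explained and traced later. It is a minimal Python illustration only; the record schema, field names and log file are assumptions, not NeuralMetrics’ actual implementation.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable underwriting decision (hypothetical schema)."""
    model_version: str   # which model produced the decision
    input_hash: str      # fingerprint of the applicant features used
    data_sources: list   # provenance: where each feature came from
    decision: str        # e.g. "approve", "refer", "decline"
    explanation: dict    # top factors behind the score
    timestamp: str       # when the decision was made (UTC)

def log_decision(model_version, features, data_sources, decision, explanation):
    """Append an audit record so every AI decision can be explained later."""
    record = DecisionRecord(
        model_version=model_version,
        input_hash=hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        data_sources=data_sources,
        decision=decision,
        explanation=explanation,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open("decision_audit.log", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Hashing the inputs rather than storing them verbatim keeps personally identifiable information out of the audit trail while still proving which data drove a given decision.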
When it comes to working with third-party AI technology providers, companies need to do their due diligence.
“Many carriers don’t have the in-house talent to get the job done. So, they’re going to have to go out and seek help from an outside business entity. They must have a list of things they require from that entity before choosing to participate; otherwise, it could create an enormous amount of liability,” Daley said.
To stay on top of regulatory changes and improvements in AI technologies, insurance companies must constantly monitor, review and evaluate their systems, and then make any necessary changes.
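Continuous evaluation of this kind can be partly automated. The hypothetical Python check below flags when a model’s production accuracy decays beyond a tolerance relative to its validation baseline; the function name and threshold are illustrative assumptions.

```python
def accuracy_drift(recent_outcomes, baseline_accuracy, tolerance=0.03):
    """recent_outcomes: list of (predicted, actual) pairs from production.

    Returns the recent accuracy and whether it has fallen more than
    `tolerance` below the accuracy recorded at validation time.
    """
    hits = sum(p == a for p, a in recent_outcomes)
    recent_accuracy = hits / len(recent_outcomes)
    return recent_accuracy, (baseline_accuracy - recent_accuracy) > tolerance

# Example: three of four recent predictions were correct (75%),
# well below a 90% baseline, so the check flags the model for review.
acc, drifted = accuracy_drift([(1, 1), (0, 1), (1, 1), (0, 0)],
                              baseline_accuracy=0.90)
if drifted:
    print(f"Model accuracy fell to {acc:.0%}; trigger review or retraining")
```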
Rigorous testing will also help ensure that algorithmic biases are removed. “Governance is just a way to measure risk and opportunity, and the best way to manage risk is through automation,” Daley said. Automating the inputs and testing the outputs produced creates consistent and reliable results.
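As a hedged illustration of automated output testing, the following Python sketch computes a simple demographic-parity gap across groups and fails when the gap exceeds a threshold. The metric, threshold and data layout are assumptions; a real program would apply whichever fairness measures its regulators require.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

def test_model_outputs(decisions, max_gap=0.05):
    """Fail the pipeline if approval rates diverge too far across groups."""
    gap, rates = demographic_parity_gap(decisions)
    assert gap <= max_gap, (
        f"Approval-rate gap {gap:.2%} exceeds {max_gap:.0%}: {rates}"
    )
```

Run as part of every retraining pipeline, a check like this turns bias testing from a one-off review into the kind of repeatable, automated control Daley describes.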
To build trust with customers, regulators, and other stakeholders, insurance companies need to ensure that their AI processes remain accurate and free from bias.
Carriers also need to consider the sources of their data and whether those sources meet regulatory requirements. “As time goes by, you see that sometimes the source of the data is AI. The more you use AI, the more data it generates,” Daley explained.
“But under what circumstances can that data be used or not used? What is the nature of the source? What are the terms of service [of the data provider]? Making sure you understand where the data comes from is just as crucial as understanding how the AI generates the results.”
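One way to operationalize that advice is to attach provenance metadata to every data source and vet it before use. The Python sketch below is illustrative only; the field names, license categories and rules are assumptions rather than an industry standard.

```python
# Licenses this (hypothetical) carrier has approved for underwriting use.
ALLOWED_LICENSES = {"commercial", "public-record"}

def vet_source(meta):
    """meta: dict of provenance metadata describing one data source."""
    problems = []
    if meta.get("license") not in ALLOWED_LICENSES:
        problems.append(f"unapproved license: {meta.get('license')}")
    if meta.get("ai_generated") and not meta.get("ai_generated_reviewed"):
        problems.append("AI-generated data not yet reviewed for reuse terms")
    if not meta.get("terms_of_service_url"):
        problems.append("no terms of service on record")
    return problems

sources = [
    {"name": "state_filings", "license": "public-record",
     "terms_of_service_url": "https://example.com/tos", "ai_generated": False},
    {"name": "vendor_scores", "license": "trial",
     "terms_of_service_url": None, "ai_generated": True,
     "ai_generated_reviewed": False},
]
for s in sources:
    for p in vet_source(s):
        print(f"{s['name']}: {p}")
```

Note the `ai_generated` flag: as Daley points out, AI increasingly produces the data that feeds other AI, so provenance checks need to cover machine-made data as well as licensed and public sources.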