August 13, 2024
How Insurance Companies Can Assess and Mitigate AI Risk Factors
As artificial intelligence (AI) becomes integral to insurance operations, from customer service to fraud detection and underwriting, companies face an increasingly complex landscape of risks and opportunities. To put their organizations at a strategic advantage, CEOs, CFOs, and CIOs must understand both the possibilities and the vulnerabilities of these emerging technologies.
Opportunities for Insurers to Utilize Artificial Intelligence
Insurance companies can use AI to support a variety of processes. For example, AI can help insurers:
- Improve customer service by deploying chatbots that are trained to respond with the appropriate tone, leverage customer profile data, and offer personalized recommendations.
- Enhance fraud detection by analyzing claims and application data to flag unusual activity, reducing the number of fraudulent claims and applications (a brief sketch of this approach follows this list).
- Increase the accuracy of risk assessments for underwriters by analyzing historical data to identify patterns and predict future risks.
- Scale operations by handling large volumes of claims simultaneously, making the process more scalable and reducing bottlenecks during peak times.
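To make the fraud-detection item concrete, here is a minimal sketch of anomaly flagging on claims data using scikit-learn's IsolationForest. The file name (claims.csv) and feature columns (claim_amount, days_to_report, claimant_claim_count) are hypothetical assumptions for illustration; a production system would need careful feature engineering, validation, and human review.

```python
# Minimal sketch: flag unusual claims for investigator review.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical claims extract; real inputs depend on the insurer's data.
claims = pd.read_csv("claims.csv")
features = claims[["claim_amount", "days_to_report", "claimant_claim_count"]]

# contamination is the assumed share of anomalous claims; tune with care.
model = IsolationForest(n_estimators=200, contamination=0.02, random_state=0)
claims["flag"] = model.fit_predict(features)  # -1 marks an outlier

flagged = claims[claims["flag"] == -1]
print(f"{len(flagged)} of {len(claims)} claims flagged for review")
```

An unsupervised detector like this surfaces outliers; it does not establish fraud, so flagged claims should be routed to human investigators rather than denied automatically.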
AI Risks for Insurance Companies
While there are many opportunities, AI presents a number of risks, including:
- Data inaccuracy stemming from flawed algorithms or poor data quality. In addition, using third-party data without proper authorization can lead to legal complications and reputational damage.
- Data vulnerability for intellectual property (IP) and other sensitive company data used to train third-party models.
- Bias in insurance decisions when models weigh personal factors. For example, car insurance rates can be influenced by credit score, education level, income, occupation, and homeowner status, factors that are not directly related to the likelihood of a collision and that penalize low-income buyers.
- Violation of unfair trade practice laws and other legal standards when AI systems are not used in accordance with industry regulations.
How Insurers Can Mitigate AI Risks
Robust governance, risk management controls, and monitoring by internal audit functions play a core role in mitigating the risks posed by AI systems.
Look to Current Regulations and AI Frameworks
While there has been no federal legislation on AI to date, there is movement at the state level. In addition, frameworks and guidance have been published to help organizations implement and use AI responsibly.
- NAIC Updates: The National Association of Insurance Commissioners (NAIC) voted to adopt the Model Bulletin on the Use of AI Systems by Insurers on December 4, 2023. The model bulletin focuses on how insurers should govern the development, acquisition, and use of certain AI technologies.
- NYDFS Circular: On July 11, 2024, the New York State Department of Financial Services (NYDFS) issued a circular letter on the use of AI systems in insurance. Insurers are expected to develop, implement, and maintain a written program (an “AIS program”) for the responsible use of AI systems that make or support decisions related to regulated insurance practices, with a specific focus on the underwriting and pricing of insurance policies and annuity contracts. The AIS program should be based on a recognized risk management framework and should document the insurer’s risk identification, mitigation, and internal controls for AI systems at each stage of the AI system life cycle.
- NIST Risk Management Framework: The National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework (AI RMF), which addresses trustworthiness considerations in the design, development, use, and evaluation of AI products, services, and systems. The NIST AI RMF is among the most comprehensive and widely adopted frameworks; a sketch of how its structure can organize risk documentation follows this list.
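To illustrate the documentation expectation in the NYDFS circular alongside the structure of the NIST AI RMF, the sketch below shows one way a risk-register entry for a single AI system might be organized around the AI RMF's four functions (Govern, Map, Measure, Manage). The field names, the system name, and every value are illustrative assumptions, not fields prescribed by NIST or NYDFS.

```python
# Hypothetical risk-register entry for one AI system, organized around
# the four NIST AI RMF functions; all names and values are illustrative.
underwriting_model_entry = {
    "system": "life_underwriting_scoring_model",  # hypothetical system name
    "govern": {
        "owner": "Chief Underwriting Officer",
        "policies": ["AIS program policy", "model risk management policy"],
    },
    "map": {
        "intended_use": "risk classification for term life applications",
        "lifecycle_stage": "production",
        "identified_risks": ["proxy discrimination", "data drift"],
    },
    "measure": {
        "metrics": ["approval-rate parity across cohorts", "AUC by segment"],
        "testing_cadence": "quarterly",
    },
    "manage": {
        "mitigations": ["human review of adverse decisions", "drift alerts"],
        "internal_controls": ["change management", "access controls"],
    },
}
```

However an insurer chooses to record this, the goal is traceability: each identified risk should map to a control and to evidence an auditor can test at each stage of the AI system life cycle.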
Rely on Your Internal Audit Team to Assess AI Risks
Internal audit support of the AIS program is critical, and it is even required by some regulations. The internal audit team can help ensure financial, operational, and compliance risks are being effectively addressed by evaluating the design and operating effectiveness of the AIS program. Example assessment areas include:
- Appropriate policies and procedures
- Internal controls to address all phases of the AI system lifecycle
- AI system use and testing to determine whether validations are thorough and timely
- Generation and maintenance of the documentation required to support regulatory compliance
- Monitoring of internal controls
- Integrity of data used by the AI system
- Potential biases in data (a simple fairness check is sketched after this list)
- Reporting of AIS program adherence
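For the bias assessment item above, one concrete test an internal audit team can run is a disparate-impact style comparison of outcome rates across cohorts. In this sketch, the decisions.csv file and its approved and cohort columns are assumptions for illustration, and the 0.8 cutoff mirrors the "four-fifths rule" heuristic; it is a trigger for further review, not a legal or compliance determination.

```python
# Minimal sketch of a disparate-impact check on model-driven decisions.
import pandas as pd

# Hypothetical extract: one row per decision, with an approved flag (0/1)
# and a cohort label for the comparison groups.
decisions = pd.read_csv("decisions.csv")

rates = decisions.groupby("cohort")["approved"].mean()

# Ratio of the lowest to the highest approval rate across cohorts.
impact_ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # four-fifths heuristic, used here only as a flag
    print("Outcome rates differ materially across cohorts; investigate.")
```

A low ratio does not prove unlawful discrimination, and a high one does not rule it out; results like these should prompt deeper review with actuarial and legal input.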
By integrating AI, insurance companies can streamline operations, enhance accuracy, reduce fraud, improve customer satisfaction, and ultimately drive better business outcomes. However, the adoption of AI also introduces new risks, including challenges in maintaining regulatory compliance, ensuring data integrity, and managing potential biases in AI systems.
Johnson Lambert provides the insurance guidance and risk management support you need to confidently implement secure, compliant, and high-performing AI systems. Our internal audit solutions support your AIS program, ensuring that financial, operational, and compliance risks are effectively managed. By partnering with us, you can turn AI’s potential challenges into a competitive edge, backed by thorough evaluations of policies, internal controls, and system validations that meet regulatory standards and drive sustainable success.