Pennsylvania Insurance Department announces guidance on insurers’ use of AI
On April 6, 2024, the Pennsylvania Insurance Department (PID) issued Notice 2024-04 to all insurers operating in Pennsylvania concerning their use of artificial intelligence systems. The Notice reminds insurers that “decisions or actions impacting consumers that are made or supported by advanced analytical and computational technologies, including Artificial Intelligence (AI) systems . . . , must comply with all applicable insurance laws and regulations,” including “laws that address unfair trade practices and unfair discrimination.”
The Notice follows the issuance of model guidelines adopted by the National Association of Insurance Commissioners (NAIC) – a national standards-setting organization for the insurance industry – concerning the “Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers” on December 4, 2023. Our client alert on the NAIC model guidelines is available here.
Pennsylvania is the latest state to adopt the NAIC’s model guidelines, joining Alaska, Connecticut, New Hampshire, Illinois, Vermont, Nevada, and Rhode Island. Other states have also issued their own guidance or rules, including California and Colorado, while New York has proposed, but not yet finalized, an insurance circular letter on AI.
The Notice acknowledges the transformative use of AI in the insurance industry, including for product development, marketing, sales and distribution, underwriting and pricing, policy servicing, claim management, and fraud detection. While supportive of the use of AI by Pennsylvania insurers, the PID highlights that AI “present[s] unique risks to consumers, including the potential for inaccuracy, unfair discrimination, data vulnerability, and lack of transparency and explainability.” Given those risks, the Notice provides guidelines on appropriate governance frameworks, risk management protocols, and testing methodologies, all of which center around five key tenets:
- The fairness and ethical use of AI
- Accountability
- Compliance with state laws and regulations
- Transparency, and
- Safe, secure, fair, and robust systems and processes.
Those guidelines are summarized below.
AIS Programs
The Notice includes a non-exhaustive list of guidelines and best practices for the development and use of AIS Programs, which are written programs governing the use of AI systems. The guidelines include general information about the purpose and structure of an AIS Program, as well as recommendations related to program governance, risk management, internal controls, and the implementation of third-party AI systems.
The guidelines further outline a framework that insurers may follow to develop, implement, and maintain their own AIS Programs to help ensure that decisions made using AI systems meet all applicable legal standards. However, the Notice acknowledges that there is no “one size fits all” AIS Program, and that each insurer must design its own program based on an “assessment of the degree and nature of risk posed to consumers by the AI systems that it uses.”
In determining the risk posed to consumers by their use of AI, the PID encourages insurers to consider:
- The nature of the decisions being made, informed, or supported using the AI system
- The type and degree of potential harm to consumers resulting from the use of AI systems
- The extent to which humans are involved in the final decision-making process
- The transparency and explainability of outcomes to the impacted consumer, and
- The extent and scope of the insurer’s use or reliance on data, predictive models, and AI systems from third parties.
The Notice cautions that the guidelines are not intended “to be binding on insurers” or “in any way, restrict or limit the PID’s discretion to evaluate an insurer’s compliance with applicable laws or regulations.”
Regulatory oversight
The Notice also provides information concerning the PID’s oversight of insurers’ use of AI systems. At a macro level, the Notice makes clear that insurers should be prepared to respond to inquiries pertaining to the development, deployment, and use of AI systems, as well as any outcomes from the use of AI systems that impact consumers. Examples of the types of requests that insurers may receive include, but are not limited to, information and documentation relating to the insurer’s:
- AIS Program
- Implementation and compliance with its AIS Program, including documents relating to the insurer’s monitoring and audit activities respecting compliance
- Pre-acquisition/pre-use diligence, monitoring, oversight, and auditing of data or AI systems developed by a third party
- Contracts with third-party AI system, model, or data vendors, including terms relating to representations, warranties, data security and privacy, data sourcing, intellectual property rights, confidentiality and disclosures, and/or cooperation with regulators
- Audits or confirmation processes, or both, performed regarding third-party compliance with contractual and, where applicable, regulatory obligations, and
- Documentation pertaining to validation, testing, and auditing, including evaluation of model drift.
Differences from the NAIC model guidelines
While the Notice largely tracks the NAIC model guidelines, with some modifications to tailor those guidelines to Pennsylvania-specific laws, it differs from those guidelines in three substantive ways.
First, it applies not only to insurers that hold certificates of authority to do the business of insurance in Pennsylvania, but also to any insurer that is otherwise authorized to engage in the business of insurance in Pennsylvania.
Second, as noted above, the Notice states that it is not intended to be binding on insurers or to restrict the PID’s discretion to evaluate insurers’ compliance with applicable law. Nor is the Notice intended to provide an exhaustive list of items the PID will consider when assessing such compliance.
Third, with respect to insurers’ oversight of third-party AI systems and data, the Notice adds that any due diligence and methods employed by insurers to assess such third parties “may include human oversight.” This added language suggests that the PID will consider whether insurers employ human oversight as part of their diligence efforts, but the PID will not necessarily expect human oversight in every case.
Key takeaways
The Notice provides guidelines for Pennsylvania insurers to use when implementing AI systems to ensure that their use complies with all applicable federal and state laws and regulations, including the Unfair Insurance Practices Act (40 P.S. §§ 1171.1—1171.15), the Unfair Claims Settlement Practices (UCSP) Regulations (31 Pa. Code §§ 146.1—146.10), the Corporate Governance Annual Disclosure (CGAD) Requirements (40 Pa.C.S. §§ 3901—3911), and various other laws regulating insurance in Pennsylvania. Foremost, the Notice emphasizes implementation of a robust, written AIS Program that outlines AI governance and documentation practices.
The PID also recommends several key actions that insurers should implement (or continue) to ensure compliance with industry guidelines:
- Govern: Adopt robust governance, risk management controls, and internal audit functions to mitigate the risk that AI systems will violate unfair trade practice laws and other applicable legal standards.
- Document: Adopt a written program for the use of AI systems designed to assure that decisions impacting consumers that are made or supported by AI systems are accurate and do not violate unfair trade practice laws or other applicable legal standards.
- Verify: Use verification and testing methods for AI systems to identify the existence of unfair bias and potential for unfair discrimination.
DLA Piper is here to help
DLA Piper’s team of lawyers and data scientists assists organizations in navigating the complex workings of their AI systems to ensure compliance with current and developing regulatory requirements. We continuously monitor updates and developments in AI and its impact on industries around the world. Moreover, DLA Piper has significant experience helping insurers navigate the emerging global legal and regulatory landscape, including testing of their AI systems for bias or other harms.
As part of the Financial Times’s 2023 North America Innovative Lawyer awards, DLA Piper received the Innovative Lawyers in Technology award for its AI and Data Analytics practice.
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI strategy through our newly released AI ChatRoom series.
For further information or if you have any questions, please contact any of the authors.