Hjalmar Jesus Gibeli Gomez: Insurance Industry Highlights Inconsistent Reliance on AI
Artificial intelligence technology (“AI”) is poised to radically augment human capabilities, although some say the technology is quietly learning how to overtake them. In the meantime, the insurance industry has been using AI to save time, attain consistency and improve risk mitigation. However, while the industry looks forward to cost savings and better business through generative AI, some insurers have simultaneously cautioned policyholders about the risks that reliance on AI may pose. Insurers’ cautionary statements cast doubt on the integrity of their own reliance on the technology.
For example, Attorneys’ Liability Assurance Society Ltd. (“ALAS”) notified its law firm policyholders that ChatGPT—perhaps the leading publicly available AI platform—is “Not Ready for Prime Time.” In a bulletin issued to its policyholders, ALAS warns that use of the technology by law firm policyholders could result in legal malpractice claims for which there may not be coverage under professional liability insurance policies. While cautionary in nature, this type of warning from insurers forecasts coverage denials for claims arising from the technology’s use. It also demonstrates the differing perceptions of AI, even among members of the same industry.
According to insurance industry executives, AI is already being deployed in claims automation, product development, fraud detection and employee- and customer-facing chatbots. AI can process large amounts of claims information in a fraction of the time it currently takes humans and can eliminate human bias in the underwriting process. But the insurance industry’s anticipated reliance on AI is undermined by that same industry’s warnings about the inherent flaws in AI and its lack of reliability, especially when it comes to current and updated information. For example, ChatGPT readily admits that its information database is only current through early 2021. Other AI databases may contain more up-to-date information, but even those are only as current as the last information upload. Compare that to the presumptive knowledge and information available to a qualified claims adjuster, who is required to keep abreast of current events and of changes in the law and applicable regulations that affect the handling of a claim.
Lawyers and law firms are similarly required to stay abreast of relevant information. Yet, unlike insurers, lawyers who look to generative AI for innovation must do so while navigating the ethical rules that govern their profession. The American Bar Association Model Rules of Professional Conduct state that lawyers must provide competent and transparent representation while protecting client information. There is a growing awareness of the risks associated with using generative AI for legal work—most specifically, how a law firm’s privacy and client confidentiality will be protected, and whether the technology is accurate. Current generations of the technology have shown a tendency to “hallucinate”—to generate text that appears plausible yet is not factual—because the models predict the next word in a string of text by leveraging billions of data points rather than by verifying facts. Still, the legal industry—like the insurance industry—is very much aware of the benefits that will result from the technology’s adoption.
Whatever the level of use of ChatGPT and generative AI by law firms and the legal industry, policyholders should be wary of the potential risks as well as the insurance coverage consequences that will inevitably follow. In the case of a claim denial, policyholders should also inquire into the extent of AI used by the insurer and, where warranted, insist on human review and analysis of the claim decision. Experienced coverage counsel can help identify anomalies in insurer claim denials and determine potential pitfalls to avoid as insurers similarly adapt to the technology’s use.