Trending Insights

Beyond the Firewall: Tech’s Impact on the Industry’s Risk Landscape

When asked about the biggest risks to the industry, risk professionals continued to rank technology in the top spot, but cybercrime and fraud jumped up in ranking from last year. What factors are heightening their concerns?

  INSIGHTS

Top Concerns Shift: Fraud and Cybercrime

Risk professionals were surveyed about the biggest risks in 2024 and how prepared their companies are to address them. While technology remains the greatest perceived risk overall (as reported in 2023), cybercrime and fraud were second, jumping up four spots in ranking from last year.

2024¹
  1. Technology
  2. Cybercrime & Fraud
  3. Interest Rates
  4. Talent, Regulation, Reputation

2023
  1. Technology
  2. Talent
  3. Regulation
  4. Macroeconomy


Reputational risk also moved from the bottom of the list last year to tie for fourth along with talent and regulation. The visibility and repercussions of cybercrime and fraud issues may make reputational risk seem greater.

When asked about the biggest risk to the industry (without being provided a list), responses indicated that operational risks are gaining a lot of mindshare — cybercrime and fraud were most frequently mentioned. One reason for this could be the potential for artificial intelligence to automate fraud, thereby increasing the overall risk. 

When asked how prepared insurers were to manage these risks on a scale of 1 to 10 (with 1 being completely unprepared and 10 being completely prepared), respondents rated the industry at a 6.5.

AI May Be Driving Heightened Concerns

The industry is wary of AI being leveraged to perpetrate cybercrime. Even the most sophisticated bad actors face constraints: while their tactics have continued to grow more refined, human cybercriminals are limited by the resources and infrastructure at their disposal, and, unlike AI, they need to eat, sleep, and take breaks. AI does not. It will continue to learn vulnerabilities and seek new ways to exploit them.

The use of AI to generate deepfakes creates a whole new set of challenges, enabling account takeover (ATO) fraud as well as novel cybercrimes. Deepfakes are increasingly indistinguishable from reality and can accurately emulate someone’s voice and intonation.

The ability to spoof biometric authentication systems will result in a paradigm shift in security, privacy, and biometric authentication. From facial recognition to voice authentication systems, the industry will need to reconsider its approach to authentication.

  IMPLICATIONS


Tackling Risk Head On: How Can the Industry Respond?

Risks associated with fraud and cybersecurity include data breaches, phishing attacks, and ransomware. There is also risk associated with third-party services: IT infrastructure work, claims processing, customer support, and other functions are often outsourced, so strong security measures need to be enforced throughout the supply chain.

Companies can mitigate risk by:

  • Continually educating and training employees. Human error remains one of the leading causes of cybersecurity breaches.
  • Implementing robust security measures.
  • Conducting regular security assessments and audits.
  • Keeping abreast of U.S. agency recommendations.
  • Evaluating sophisticated AI-powered cybersecurity tools to protect against and prevent AI-based attacks.

Lack of Overarching Regulation Is Also Driving Concern

It’s also important to note that there isn’t yet any overarching regulatory guidance for the industry related to AI. The technology is ahead of regulation, which makes the industry wary. The industry looks to regulation to pattern its compliance practices, but with AI there is no overarching regulation that companies can model their compliance practices after. Regulatory frameworks exist, but they are not designed specifically for our industry; where mature frameworks do exist, they are focused on a specific use case or domain, as is the case with automated and accelerated underwriting using AI. Examples include Colorado bill SB-169, NY Circular 19, and the work of the NAIC’s Automated Underwriting Working Group (AUWG).

It is highly unlikely that there will be federal AI regulation. The two regulatory frameworks that the industry would do well to model compliance best practices after are the European Union AI Act and President Biden’s executive order on AI. The LIMRA and LOMA AI Governance Group (AIGG) is developing best practices predicated on these frameworks.

Carriers and vendors in the industry ecosystem need to ensure that their AI systems are explainable, documented, and transparent. Explainability of AI models and the underlying data is vital to assuage concerns that AI outputs may be incorrect, misleading, skewed, or biased.

Even if a company successfully lowers the risk in its own AI outputs by ensuring transparency and explainability, risk remains. AI is embedded in nearly every vendor technology, and it will become increasingly difficult for companies to de-risk the vendor supply chain when it comes to a vendor’s use of AI.

Companies can mitigate risk by:

  • Staying updated on the progress of the LIMRA and LOMA AIGG.
  • Reviewing and assessing third-party AI practices.
  • Monitoring regulatory developments.

 

1 Based on ranking of risks on a scale of 1–10.
Source: Top Risks for the Life & Annuity Industry Survey, LIMRA, 2024. Additional Input from Kartik Sakthivel, CIO, LIMRA and LOMA.
