Welcome to Insider Insights, where we dive into hot topics facing the financial services industry. Today, Kartik Sakthivel, Chief Information Officer at LIMRA and LOMA and John Keddy, of Lazarus AI, provide insights into the challenges and imperatives of AI in the insurance industry.
Alright, born ready John Keddy. Good to see you. Welcome back. We're going to have another conversation about AI.
How did you feel about our other podcast that we did on AI? Well, every session I have with the LIMRA team is a great session. So I'm glad to be back, and I look forward to some more challenging questions and topics.
That's awesome, man. You say the nicest things. I appreciate you, John Keddy. So, you know, the last time you and I talked, John, we talked about the evolution of AI: where we've been, where we are today, and why everybody is talking about generative AI all day, every day. And we started teasing out two things that I think are really important for our audience to hear. One of them was the concept of AI literacy and why we all need to educate ourselves on the fundamentals of AI. But even before we get there, we agreed to talk today about some of the downsides of AI, some of the challenges with AI, right?
So across the broad, sprawling field of study that is artificial intelligence — whether it's facial recognition, voice recognition, speech to text, text to speech, machine learning, or generative AI — the number one concern always remains making sure that it is fair, explainable, transparent, free of bias and proxy discrimination, and equitably applied across the value chain.
So John, at the highest level, talk to us about some of the downside. What are the challenges for our industry that you foresee with AI?
Yeah, and I'll actually link two of the themes together. So in our last session, we talked quite a bit about some of the things I share with executive teams and why it's so critical to be having those conversations today.
So if anyone's interested in going deeper, I strongly suggest you listen to the first in the series. But AI literacy is also absolutely, critically important, and executive teams must ensure their organizations have an understanding of it. And it's directly related, in my mind, to the downsides. It's not just 'Let's go out and create a whole bunch of policies and controls and block these tools on our network.'
You are not defending your organization if you're doing that. You need to help your organization understand how these tools can be used, including to reduce risk, and build the AI literacy, as you correctly term it, so that people in the organization understand which tools and technologies they can use to create value, which tools and technologies they should not be using, and why.
Yeah, and sometimes when we have fundamental change — and we are having fundamental change driven by AI — we forget that we not only have to have rules and approaches, we have to explain the why to our organization so that people can operate in a very, very fast-moving environment. And that fast-moving environment, as your question indicates, means there are risks.
So in our first session, we talked about some of the capabilities, some of the solutions, and some of the things we're already seeing today. But you can quickly turn some of those on their head and see a risk. Generative AI lets us produce computer code at blinding speed, which means our attackers are now using those same tools to produce threats at blinding speed.
Yep.
The ability to generate even my voice — God forbid — no one would want to generate my voice, but let's say they did for malicious reasons.
And so you say, well, 'What about voice-centric authentication methods?' Yes, those approaches exist, and they are absolutely under siege today. So, as we talk about all the possibilities, which are real and are only going to grow, as leaders we have to remember that with every one of those opportunities there is also a risk.
Now here is where, again, I'll repeat myself, as I did in the first session, leaders must be engaging in the AI conversation today to reduce risk. If you are ignorant of the tools and the possibilities and how they can be turned on their heads to create risk, you're putting your organization at risk. You really need to be driving your organization in a way to use these tools and technology to reduce risk.
So, John, what you're saying is not only do we need to educate our employees on how to use AI successfully, but we need to elevate their level of concern and diligence so they're vigilant against the malicious actors and bad actors who are using AI to perpetrate cybercrime. You raise a really good point, John, and I want to touch upon that really quick.
So, you know, at LIMRA and LOMA, we built the FraudShare platform. It is by the industry, for the industry, to fight back against insurance fraud. And one of the things we have talked about with FraudShare is account takeover fraud. Right? When we talk about voice authentication, imagine a scenario where you are an 86-year-old grandmother and you get a video call from an unknown WhatsApp number from your grandson, who says, 'Grandma, I made some poor choices.'
It's a video call. 'I am now in Tijuana without my passport and my wallet. Can you Venmo me 1,500 dollars to this other number — one you don't recognize — but that's because I don't have my phone with me?' Right? I mean, that is the next frontier of account fraud, of cybercrime. It's phishing and smishing taken to the next level. It's very important for our employees to be a lot more diligent.
Yeah, absolutely. And I'm glad you brought up FraudShare. That's an initiative I'm a huge fan of. And you're correct: the risks and the threats now get to that next level. You're exactly right.
And so, just some quick perspective on how quickly these risks may be moving. Let's pick ancient times — like a year ago. To take a snippet of my voice and really come up with a credible copy would have taken quite a bit of us talking on a session like this and quite a bit of machine processing. Come to this spring, and there are people reporting very short snippets and far less processing.
So, as AI advances and the capabilities advance, the threats advance exponentially. And — you know me, you know my background — I've been a CISO in the past. Sometimes we would think that threats or malicious actors were these giants, seven feet tall. In fact, that's not true. Often our cybersecurity people in the insurance industry were smarter than any attacker, but the attackers only have to be right once, right?
So they appear like they're these giants and geniuses. They're not. But they have to be right once, and now they have this whole new arsenal of tools where they can be very average, maybe even bumblers, but they can create so much stuff and create it so quickly, that a lot of our traditional defenses and approaches need to be taken to the next level.
I concur. You know, that's a really good point, John. What I've been telling industry leaders is that even the most sophisticated bad actors are generally unsophisticated, right? We have the instruments and the people to combat them, to meet them where they are and stay one step ahead. But these are still human beings. They need to eat, they need to sleep, they need to do other things in their lives.
AI has no such limitations, right? It's 24/7, 365. Especially when you think about something that will self-learn your vulnerabilities and try to capitalize on them, it's a terrifying landscape. So I think educating employees is going to be vital, because it usually all comes down to a single click, right? You can have the best cyber defenses in the world, but it all comes down to one errant click.
That being said, John, you know, I've talked about this in the past. Look, it's just like Google, right? It's an enabler. It's a technology platform. It's what you make of it. In my opinion, some of the greatest challenges in institutionalizing AI within the insurance value chain are going to come down to corporate culture, right? The rote, repeatable operational tasks that people perform as part of their jobs will change, right?
Things will look different in the next five years. And that's where the change management dimension comes up, right? So in my opinion, organizational culture is going to be, if not the number one, then close to the number one challenge that organizations need to traverse in their success with AI. What are some of the other challenges you think AI adoption will face in our industry?
Well, I think you've hit it right on the money. I think you're 100% right, and not enough thought is being given to organizational change. We just talked about AI literacy, educating people, and really defending your organization. And you're also exactly right that, as we look ahead, the next five years are going to be fundamentally different.
And so I often share this, because I often get asked, 'Are we in a bubble?' Yes, we are in a bubble. We are also in the age of AI. Both of those things are true. Sometimes leaders want to pick one or the other. It's like, no, we can't. We are in a bubble, and we're also in the age of AI.
And I liken it back to the Internet era. We look back now and make fun of Pets.com, all these stupid ideas, and the focus on eyeballs only. And it took years for the NASDAQ to recover. All of those are true statements, every one of them. But if you look at the way we did business in 1997 and the way we did business in 2002, the world had changed; it was fundamentally a different world.
That's where we are today. And leaders cannot get sidetracked with 'There's a lot of hype, there'll be a bubble, and they won't deliver everything people promise.' They're going to miss that the world is fundamentally changing, and we need to be part of that fundamental change. And I'll tell you, there are other thought leaders who would disagree with my Internet analogy.
They would say, 'No, this is like man discovering fire.' This is even more of a fundamental change to our society and our civilization, and something we as leaders really need to be spending time on and investing the right effort into.
It's interesting. We were talking about generative AI and the implications not just for our industry, but for our society and our species in general.
Right? I don't know if you saw it, but there's an article: 3,000 AI experts across the world were recently polled on when they believe AI will achieve the computational capacity to exceed the intelligence of the human brain — which, in my opinion, I'm not that smart, so that could be tomorrow. But by and large, they predict it's going to be around 2060, right?
2060 is when AI will achieve or exceed human-level intelligence, and some experts say that could happen by the end of this decade. So we're in for a wild ride, to be certain. I will also say, about generative AI specifically, John, it's not just about the tool. I think it's a massive business process re-engineering exercise. That's number one.
So I think it's important for our listeners to keep that in mind as they shepherd their firms through the age of AI.
Number two, I'll also say something we touched upon earlier: among the most prominent challenges with AI are explainability and transparency. I'll tell you why I'm passionate about having explainable AI, and then I'd love to hear your thoughts.
But look, say you build an AI model and call it John's AI, and your AI model reaches an output, right? An outcome, a decision, a prediction, whatever the result is. And if I say, 'Hey, John, how did your AI get from point A to point B?' — you know, it's performing trillions of calculations under the hood.
And if you can't explain it to me succinctly in business terms, I don't know if I'm going to be able to trust in your AI, John. I mean, I trust you implicitly, but I don't know if I'm going to be able to trust in AI. So that's where explainability and transparency of AI is going to be fundamentally important.
So that's my opinion. Love to get your thoughts on it.
Yeah, I think this podcast might be more interesting if we disagreed more, got into some fights. But you've hit some really important points that I can really only highlight, not disagree with. And first of all, you took my line about AGI — people are worried about artificial general intelligence.
And I often get asked, 'When will AI be as smart as a person?' And I say, 'Which person?' There are some people I know... like tomorrow morning at 9 a.m., I think you can make the argument.
Why'd you look at me when you said that, John?
Yeah, exactly.
So, you know, we're joking a little bit, but there is some truth to that: intelligence at which task, and in what way? As for, you know, AI rising out of the carpet, unplugging the computer, taking over our house...
We're a long ways away on that.
If at all.
Right.
People need to get a lot more focused when I say put time, effort, and energy into AI. Understand where it is today, how it can impact our business today, how hackers can use it today. That's where you should put time and effort, in my judgment, not hyperventilating over 2060.
Now, you're also very correct: as we think about what we can do today, think about it as the largest business process re-engineering opportunity of our lifetimes. I've been in discussions with multiple executives who are concerned because everybody's out digitally transforming, and they're feeling, 'But we didn't incorporate AI. Is everything we've done wasted? Do we need to stop?'
And my view is no. You need to incorporate AI into your roadmap and hopefully the investments that you've made in terms of data and process and other things set the stage well for you to take advantage of AI. If you haven't made those investments, you need to do that now. So those are things I'll agree with you on.
And finally, there's the point on explainability, which I violently agree with you on. You know, again, in ancient times — like a year ago — there were people running around in AI waving their hands a lot, saying, 'It's all a black box, you'll never understand it.' And I said then, that's just crap.
And that's not going to work in heavily regulated industries like insurance. A year later, 14 months later, I'm saying the same thing. And I would say to anyone listening who is engaging with partners and vendors who say it's all a black box, you'll never understand it, these things can't be explained: run like hell. Get away from those people. It's not going to work in our industry, nor should it. Work with people who take applying this to our industry seriously and put explainability at the forefront.
Well, you said regulation. Let's touch upon that, John. So, you know, what's interesting about AI at large is that there is no regulation, no regulatory framework, around AI in general. Where regulatory frameworks exist, they're very specific to a domain. So you've got the NAIC Accelerated Underwriting Working Group, New York's 2019 circular letter, and Colorado Senate Bill 169.
Right, frameworks that define how to approach it. And I'm really proud of the work the AI Governance Group is doing — 72 business and technology leaders. In the absence of regulatory frameworks, in the absence of regulation, we are going to define what good looks like for our industry. And we're predicating that on two things: President Biden's Executive Order on AI, which again has no regulatory teeth but I think is a wonderful framework to learn from, and the European Union AI Act.
Right? So it's a very simple pyramid of tiers that ranks acceptable and unacceptable uses of AI. We're going to map that across the insurance value chain, and in the process, John, we're also going to develop frameworks and tools for how you can implement AI successfully and responsibly within your firm. So, you know, I definitely wanted to touch upon the regulatory and compliance aspect. But if I could ask you, John, as we enter the final minutes of our podcast: what excites you the most?
It's a two-part question: what is the use case you're seeing that has you most excited in the industry today? And then, could you give us a crystal ball view of where our industry will be in the next three years? I think that would be perfect.
Sure. Let me, though, quickly comment on the regulatory piece, because I think you and I were at a LIMRA event. We were on a panel.
And one of the things we said is the importance that the industry leaders go out and build a future. Don't wait. So again, I applaud what your group is doing. Also, I applaud what the NAIC is doing. I think they've made a real effort to engage the entire ecosystem of insurance. And it's not easy because we have 50+ jurisdictions in the United States regulating insurance.
But I applaud what they're doing and making the efforts. Now, again, though, as I've said all along, all leaders need to be engaging in AI today. If anyone says, well, we're just going to wait because regulation is going to shut this down, that's not going to happen. We're in the age of AI, whether it's the Defense Department, McDonald's, or cybersecurity firms.
This is now embedded in our society and we need to deal with that reality. Regulation will be one reality, and we have a very ethical industry, so I'm sure that will be part of it.
Use cases I'm most excited about today. Go back to earlier discussions we've had where people are using it to create value today, not worrying about artificial general intelligence in the future.
We see very exciting things going on with processes where there's a plethora of different documents and information, maybe handwritten notes, where we can use AI to interrogate them and bring them to life as if you were having a conversation with a person. As for a forward-looking view three years out, what I'll say is: whatever we say today, we will undershoot it.
The world of AI is currently moving far faster than our predictions have been able to keep up with. So I think it's really important to focus on creating business value today, using these tools and technologies to advance our business, to advance our industry, to advance our defenses and lower risk. That's where we should focus.
I love it, John. You know, you and I could talk for hours. Days, no doubt, no doubt.
Well, I look forward to doing this again. You're going to come back and see us for another podcast episode on AI, or two, right?
Always, and we'll see you on the road.
I certainly shall, John. I look forward to it. Thank you for being here. Always a pleasure.
Thanks for listening to LIMRA’s Insider Insights Podcast Series. To hear future podcasts, subscribe at LIMRA.com/podcast.