
Insider Insights
Podcast Series

November 4, 2024 — AI Governance: Insights in Insurance


Kartik Sakthivel, chief information officer of LIMRA and LOMA, and Kelly Coomer, CIO of Sammons Financial Group and chair of the LIMRA and LOMA AI Governance Group, discuss the purpose and achievements of the group, including tools and frameworks to maximize the business value of AI and promote responsible usage.

Transcript

(Auto Generated)

Kelly Coomer, how's it going?

It's going well. Good to see you, Kartik.

Good to see you too. We've been seeing a lot of each other over the past year. Right? And especially at the awesome LIMRA Annual Conference a couple weeks ago, Innovate with Purpose.

So Kelly, you and I have been seeing a lot of each other. We've been spending a lot of time together, because you are the chair for the LIMRA and LOMA AI Governance Group. We have come a long, long way, in less than a year. So, Kelly, you know, just in your own words, because, you know, I am the hype man for the AI Governance Group — rightfully so. In your own words, what's the purpose of the AIGG and what have you accomplished in less than a year?

Yeah. I mean, it's great to see the industry coming together on a new technology solution. Right? I liken this to, you know, the big ones that have come before, whether it was when we finally went to, like, personal computers or the Internet. There's never been a case where we converged as an industry around one of those big game changers and said, “Okay, how do we share lessons and our thoughts on what this means for our industry?” So, it's exciting. The group is large. They're vocal. They've got a lot of good ideas, a lot of diversity as to where people are at and how much they're willing to, you know, experiment with it, or whether they're going to wait and see, and also talking about the governance aspect. So, lots of good dialogue happening amongst the industry partners.

No, one hundred percent, Kelly. And you know, you're going to undersell yourself. So, let me underscore the value proposition that you as the chair have brought to the group. Right? I mean, it's not easy for any organization to convene. When was that? When you and I first started this group, it was January. We had fewer than fifty individuals. Now we're quickly approaching double that. And this is a really, really senior group — senior business and technology executives — because we get the broad diversity of thought and opinion. And I think you should give yourself, Kelly, a ton of credit for being the steward, for being the chairperson for this group. So, you know, we operated in two phases. Phase one concluded a couple of months ago, around the June-July time frame. Three papers resulted from phase one: a current state assessment of our industry, where we are with use cases, and where we are with all aspects of governance. Those three papers are available, or will be by the end of October, on LIMRA.com. So LIMRA.com > Trending Insights > Artificial Intelligence. You'll find podcasts. You'll find papers. You'll find infographics. You'll find some benchmarks there as well. And we're going to be pivoting over into phase two, co-creating some of the tools and frameworks that we can all benefit from, so we can solve these problems together. So those are my key takeaways.

I think it's been a really fulfilling, gratifying journey thus far, Kelly. And you and I have talked about this, extensively. In your opinion — having come from P&C as well — right? In your opinion, what are some of the deliverables and activities that have been so important to the insurance industry at large, with this group?

Yeah. I think it's good you brought up phase one because I think phase one was important just to get real industry specific insights to how people were thinking about where AI would fit. There's a lot of consultancies out there giving guidance on where it could. But to be able to apply it to our specific industry is the hard part. And the group has done a good job of talking about how they're thinking about it and where they plan to use it. So those white papers, I think, are very insightful and give some good broad representation amongst the industry.

I think for this phase two, I like that it's divided into two parts. I really like the business value side. I mean, governance is important. And yes, we as an industry need to talk about ethical and responsible use of AI and what that looks like in the absence of, you know, regulations that will likely come in the future. But until then, this group helps us with that. But more importantly, this is one of those technologies that makes us all more productive in the use cases where it's proven to do so. But how do we turn that productivity into real bottom-line impact for our companies? That's the hard part, because it's a new, emerging technology. There's not a lot of industry research or, you know, case studies that you can pull from to say, “Oh, well, this company did that. And if I replicate that, here are the returns I should be able to expect.”

So we have to work together. We can't do our own experiments in bulk and mass. None of us are scaled to do that. But we can come together and talk about different things, like the use of tooling in development and what we can each share from our experiences to hopefully converge together on some recommendations of where it fits and where it'll give the most value and how to measure that value.

Yeah. Hundred percent, Kelly. And I think it'll be good for us to remind our listeners that across these two phases, the AI Governance Group broke ourselves into two subgroups, two focus areas. One of them, as you mentioned, is business value enablement. Right? We tried to get after all the use cases that have been pervasive across the industry, you know, AI and generative AI. Right?

What do we call it? The OG AI. Because AI has never been new in our industry. We've been doing AI and machine learning for a good long while. Obviously, with generative AI capturing hearts and minds, it's really heartening to see, including at Sammons Financial under your leadership, Kelly, the vibrancy of use cases that are out there and actually delivering tangible business value.

And the second aspect of that is all things governance. Right? So, governance is an interesting facet for us to explore, primarily because there is no federal regulation on AI in the United States. I don't anticipate that coming. So, for us to do something as, I won't say straightforward, but as important, vital, baseline as taking the EU AI Act and just plotting out what the acceptable uses of AI are across the insurance value chain, I think that's going to be remarkable for us. So, you know, Kelly, going back to business value use cases for a second, you have advised us — the industry as a whole — that generative AI is not just technology for technology's sake. What did I say? It's not a sandwich looking for a picnic.

But it's also going to necessitate potentially massive business process reengineering across our organizations. Talk to us a little bit more about that.

Yeah. Like I said, I liken it to, you know, moving from dumb terminals or desktop machines to laptops, or to the, you know, accessibility of information over the Internet.

Both of those were major game changers in productivity and enabling us to do our jobs more efficiently.

But at those points in time, never did we get asked, “Okay, well, how much more efficient are you and what are you going to do with that efficiency? Do I need fewer people? Can I get more done? You know, what comes out of those advancements?”

This is the first time I think that question is coming up, which is interesting in and of itself. But I always say when it comes to this technology, it is pretty easy to use and adopt out of the box, and you will get productivity gains in certain use cases right away.

But, that doesn't mean you'll be able to translate it into any impact. You'll be more productive, but what will you do with that productivity? To me, that's where process reengineering fits in. I liken it to the early nineties, where we did a lot of time and motion studies and process flows, looking at step-by-step processes and how to streamline them and make them more efficient. And that was a big thing; there were process re-engineers. That's how continuous process improvement teams formed. To me, this is now taking those same types of concepts and applying them through new tooling that's available.

So, the technology has come further along. It is a technology focused on productivity and freeing up humans for more human tasks, but only if we as humans learn how to adopt it and integrate it into how we do our jobs every day.

A hundred percent, Kelly. And you and I have talked about this, and we've actually said this to the industry, including the C-Suite. Right? Generative AI, at least for the near term, is not an FTE elimination play. Right? It is a hundred percent productivity play for us. One of the interesting things that we should talk about — because you and I had very similar experiences — coincidentally, on the same exact day. Right?

So you've got active use cases ongoing at Sammons.

We have active use cases for capitalizing on generative AI here at LIMRA and LOMA. And we were both asked individually, maybe we had to compare notes after the fact. Right? What did you say? What did you say? We were asked by our CEOs about the ROI for AI, the return on investment that we have for AI. And, you know, there are some things we can measure, and we are measuring. Right? So, we are measuring FTE savings by virtue of, you know, time saved. Right? Productivity gain. We can measure that. And then there are some lagging indicators. Right? So, for example, the eNPS. How much more satisfied am I doing fewer repetitive tasks at my job? Those are some of the lagging intangibles, the feeling-type indicators that we can measure.

But, you know, some of this is also partially a leap of faith. Right? So, we don't want to be sitting around trying to identify what the ROI of this thing called the website is in 1999. Right?

So, you know, by virtue of the AI Governance Group — I should point out to our listeners — we're also co-creating, by the industry for the industry, cost-benefit analysis templates, as well as ROI templates, for anybody to adopt.

What I love, Kelly, about the scorecards and the frameworks and the tools that we're developing is that they're generic. Right? They're a hundred percent generic. They're as broad as they need to be, but they're also incredibly turnkey.

So if you're a smaller carrier without the resources available to you, you can implement these frameworks pretty much in a turnkey manner. But for larger, more established organizations, for example, like Sammons, you can harvest, you can capitalize on these frameworks, and you can make them your own. Right? You can extend them, you can rework them, you can customize them.

You talk, Kelly, about the CBA — cost-benefit analysis — for these developer tools. You know, we have lots of business listeners on here as well. In your own words, how would you describe what we're trying to co-create with these cost-benefit analyses for code generation tools?

Yeah. Well, we had our first subgroup meeting on Friday on that very topic. And, you know, originally it was about, okay, how do you show the value of using these tools to prefill code or complete code? But the discussion amongst that group became much broader, and it became more about, you know, there are a lot of tasks in the development lifecycle that developers are involved in that aren't about writing code, and they take a lot of time and aren't viewed as the best use of a developer's time. So, think of having to, you know, write or translate requirements, or understand the context of what you're creating and then ensure that content makes its way with your code through the development life cycle.

Are there ways to use AI tools to automate some of that, you know, manual work that has to happen in the development life cycle? And, yes, it can also do some of the, you know, coding completion and/or create a draft of the code so the developer gets a jump start. It may not create all the code and then, you know, commit it and move it forward. We're not there yet, but it can definitely jump start. So, it was good to get practitioners, again, in the industry, with a specific problem we're going after to say, okay, how do we talk about the value of these tools? What measures might we use?

Because in the past, we've made mistakes in the broader IT industry about saying, well, for code efficiency, let's look at lines of code. And then you get very unwieldy, you know, code that isn't maintained well and is larger than it needs to be. So, I thought it was a very good discussion and good to tackle it together, and I'm hopeful that we'll come out with some good metrics that we can all use to help us articulate the value of these tools in the development life cycle.

A hundred percent. And what was interesting, Kelly, is lots of organizations are leveraging these, AI code generation tools.

Right? So, from a business perspective, if I'm a line of business head, it means that my IT partners, by virtue of leveraging these code generation tools, are going to deliver business value to me quicker. Right? Faster. Potentially cheaper. Because now you have increased the bandwidth and throughput of your development team, of your programmers. More secure code, more reliable code, more scalable code, more extensible code. I think the possibilities are endless. And I'm confident, actually, that the output of what we come up with is going to be able to explain to our business partners the value of investing in these code generation tools. Very exciting.

So, Kelly, pivoting over. So, you know, we've been at this since December; then in January we kicked off in earnest. You know, in the first few meetings we were still trying to get a handle on the generative AI explosion and how it all works.

You know, you've served as chair since day zero on this group. What are some of the key takeaways that you yourself, as the CIO of a large firm, have gathered during your time here?

Yeah. I think it's just making sure that we're thinking about all aspects of this differently, in light of some of the things we've learned from the past. So, for instance, being able to, you know, think about how much in your organization do you want governance and oversight and centralization of AI? And how do you enable it to be pursued in individual groups in your organization?

So, we didn't do that with the Internet phase at companies I've been in, in the past. It wasn't as much of a conscious decision to say, okay, how do we go about doing this? And I know at my old company, we had a bunch of different websites and mobile apps, and in the end realized, you know what? This is not what customers want. That's an inside-out view, not an outside-in one.

So, it's kind of nice to have an inflection point. You don't get many of those in your career where you can say, okay, I've learned a lot from the other things in my past. How do we, you know, how do I and my teams apply that knowledge to this situation? And how do we as an industry work together and learn from each other? So, yeah, those are some of the things I've identified thus far working with the group.

That's great, Kelly. So, you know, we've got a pretty ambitious roadmap that we've outlined, which the entire group of 85+ leaders has agreed to. On this roadmap as part of phase two, we're going to be publishing the cost-benefit analysis templates that organizations can use. We're going to have the ROI templates that we're going to push out. We have the acceptable uses of AI across the insurance value chain. All of those are very exciting. We also have outlined AI maturity models, AI governance frameworks.

One of the things that we're working on now, is how do you identify — as an organization — the roles, positions, jobs that are most influenced, if you will, not impacted, influenced by generative AI, and then develop a skilling, reskilling strategy for those. Right? So this is going to sail us well into 2025. And who knows what other advancements happen in AI. In your foresight, in your vision, Kelly, where is this group headed, into next year, into 2025?

I think it's going to be putting those into practice. For instance, taking what came out of the EU AI Act acceptable use framework and applying it to the industry. We talked about that today amongst my AI task force team. We need to help the organization understand where we want centralized governance or oversight and not a lot of sprawl in vetting different AI tools or solutions. And where we think, “Nope, this is a part of it that's lower risk, and we're going to enable teams, if they're working with different software providers and they offer AI within their solution, to go pursue it.” Just make sure we hit, you know, certain aspects like explainability, etc.

So we're taking that LIMRA draft guidance of the value chain and the risks and saying, hey, let's use that to create an internal framework for how we help our teams, our business units, and the architects they work with understand where they have the freedom to move forward at their own pace, and where we as a company don't want to be investigating, you know, fifteen different dev tools for how to do code generation. Right. We want to focus energy there. So I think that's where we are right now: these are individual deliverables. And I think as the industry starts to use them and matures, we're going to figure out how to use them collectively to guide our organizations down a better path than, like I said, we've taken in the past with other new technologies.

Kelly, I love it. You know, you and I have mentioned this to the industry before. Whether or not you have an intentional artificial intelligence strategic plan in place, every organization is going to be an AI consumer, by virtue of every vendor incorporating generative AI into their solutions. Right? So even though you might not intentionally invest in an AI program, I think it's important for organizations to have a sense of these acceptable uses of AI and a governance framework in place. And these aren't academic exercises for us.

We want organizations to implement them in practice. And our only ask, Kelly — you and I — the only ask that we have of the industry is: if you make it better and it's generic enough, please do contribute it back to the industry. Right? Download it, make it better, and give it back to us so everybody can benefit. So, Kelly, I think we're almost out of time. I'm looking forward to our next engagement together.

Kelly, I just wanted to say on behalf of the entire industry, your industry leadership — not just your stewardship of Sammons — but your industry leadership with this AI governance group has been phenomenal. You have been a terrific leader, and the industry is deeply, deeply appreciative. Thank you.

Kartik, you're very kind, but I really appreciate what you're bringing to the industry, your ability to pull everyone together, and you're doing a lot of the heavy lifting with a lot of these templates and getting our feedback on them. So same back at you. Really, really appreciate your leadership in the industry.

Thanks for listening to LIMRA's Insider Insights podcast series. To hear future podcasts, subscribe at LIMRA.com/podcast.
