Welcome to Insider Insights, where we dive into hot topics facing the financial services industry. Today, Kartik Sakthivel, LIMRA and LOMA’s Chief Information Officer, and Nirav Dagli, Chief Operating Officer at Spinnaker Analytics, offer insights into what is, and what is not, AI.
Hey, Nirav. Fancy meeting you here. How's it going?
Good.
It's, good to see you as always. Next time we should do this in person.
I completely concur. But you know, I don't think we can spend better time, better quality time, than two guys who grew up in Mumbai talking about all things AI and analytics.
Sounds good.
Alright. So let's, let's talk about, you know, you and I have had a conversation around AI vis a vis analytics, Nirav. Your organization, Spinnaker Analytics, has been in business for, gosh, what, twenty, twenty five years now?
Twenty five.
Twenty five years.
And literally, the word analytics is in your company's name. Right? Spinnaker Analytics. You know, it has become extremely alluring, extremely seductive, to call everything AI. Right? Just slap the word AI on it, and then people have guaranteed funding, you know, if you're a startup. Most vendors are trying to incorporate the whole spectrum of artificial intelligence offerings into their products. But there's a lot of confusion out there. Right? What is “analytics” vis-a-vis what is “AI”? Can you educate me on that?
Sure. Be happy to. And it was a great example you made: anything with AI in its name is getting funding, just like any company with the words “dot com” in its name was getting funding until the first dot com bubble crash in the early two thousands. Many of your listeners might be too young to remember that.
Hey. I might be too young to remember that, Nirav.
So, when I was in graduate school, speech recognition was AI. And now, after Siri, nobody thinks about speech recognition as AI.
Yeah.
So first, I'm answering your question in reverse. AI is a bit of an amorphous concept, and our understanding or definition of it changes every time we solve a specific problem that was previously called AI; then nobody calls it that anymore. So it's a moving target. I'll say the same thing about analytics: the word analytics wasn't coined until sometime in the mid two thousands. Until then, it was just statistical analysis and synthesis. And over time, what used to be data mining is now being considered AI, and data analytics is often limited, in common perception, to BI reporting and in some cases even data input. So you have large teams of data analysts that are just doing BI reporting, as opposed to developing algorithms analytically to solve problems and achieve business goals.
So that would be my short answer. You know, I think vendors are trying to sell very large AI projects, when the reality is that the one thing AI does, unlike most other technologies, and maybe I should say including other technologies, is really reduce the cost and simplify things. Just like we went from mainframes to PCs to tablets, AI is now available in many different modules that you can adopt and deploy without breaking the bank.
Yeah. No. That's great, Nirav. So, one of the things that we should talk about from an AI perspective. Right? People think about AI, to your point, as speech AI. But AI is not a monolith. Right? It's much more than generative AI, much more than large language models, much more than GPT.
Speech AI is one facet, one branch of the sprawling field, which is...
That's a great example, Kartik, which is that currently, if you ask people what AI is, ChatGPT and large language models are the flavor of the day or of the year.
Yeah.
There are many different techniques and technologies that are not being looked at as AI. Everybody's consumed by the fear of or the seduction of, large language models.
Well, fast forward this particular aspect five years. Right? So when you have generative AI capabilities baked into every vendor product that you use, whether it's Office, Salesforce, any utility, are we still going to consider generative AI a special field of AI study, or is it just going to be par for the course?
Two-part answer. I think longer term, you're right: it would just be considered par for the course, and nobody would be discriminating or classifying it separately. For the next two to three years, though, we'll see the expansion of large language models into not only natural language processing but sentiment analysis and even image processing, which people have started demonstrating. So in trying to understand the consumer better: we understand them better today than we did yesterday, and we'll understand them better tomorrow than today. I think that will happen before LLMs and the other capabilities of generative AI get subsumed into overall AI.
Okay. So let's do this. Right?
Let's, let's tease that out a little bit more because I think it's important for us to understand.
So I'm a business executive, right, in ACME Insurance Company, and I am trying to get my head wrapped around what is the difference between predictive analytics, prescriptive analytics, and artificial intelligence. Right?
You know, operational reporting I've got. Right? Business intelligence I've got, because we've been doing it for a while. If you remember the age-old operational reports, we could predict the past really well, but not really forecast the future. Right? So how would you help me understand the delineation between predictive analytics, prescriptive analytics, and artificial intelligence?
So, Kartik, you might be better at answering that question than me. The way I see this is more of a spectrum than a straight line that you can cut and say everything to the left of this is predictive analytics. Everything to the right of this is AI.
Yeah.
One clear example of that would be that we were using statistical techniques to do predictive analytics.
Availability of greater data and computational power means that we are now at a tipping point where explainability versus predictability is no longer an issue that CEOs struggle with. Right up to now, CEOs would tell me, "We would take explainability over predictability all day long."
But more recently, I'm beginning to see a shift: if it's performing accurately, I can live with less explainability, as long as I have confidence in the algorithms. And the confidence comes from proof, not from understanding. Think of it this way: to drive a car, you don't need to know how fuel injection works. What you need is evidence that it improves engine performance, even though you don't understand, or don't need to understand, the principles behind it. I think that's probably the biggest distinguisher, but that line is dotted, blurry, and will keep moving all over the place. But you may have a different opinion on that, Kartik.
Well, not necessarily a different opinion. Right? I have a nuanced opinion on it. So, the way I describe the delineation between predictive/prescriptive analytics and AI-driven outputs and decision recommendations is that the former, predictive and prescriptive analytics, requires people. Right? It requires people. So you have a statistician, or someone really, really smart with this modeling, who can explain to you how their output was arrived at. Right? How they get from point A to point B.
And then, obviously, that inspires some level of confidence in you because you have somebody who can explain that to you. Right? How I got from point A to point B. How my model operates.
Artificial intelligence, in my opinion, operates with a lot more autonomy, and not necessarily with a statistical analyst or subject matter expert managing it. Right? You might need to manage the model, but most of the time this engine is coming up with varieties of options and recommendations. The challenge then becomes: how do you make sure that the way an AI arrives at its recommendations is transparent and explainable? Right? If you can't explain to me how you got from point A to point B, I'm going to have a really hard time trying to believe in it. Right?
Because these AI systems can be black boxes. So, to me, the way I've been describing it is: predictive/prescriptive analytics require some level of if-then-else. Right? They require human expertise to model the outcomes. Artificial intelligence figures it out by itself, and then, obviously, we don't know how it figures it out. Right? There's correlation treated as causality, not necessarily things that you would want. But it does still require humans in the center, albeit in a very different capacity. But, yeah, that's my answer.
So what you're describing is a reason why AI models often hallucinate.
Correct.
Because they have biases, and there's no way to check on that. So even today, while Spinnaker is an analytics and an AI firm, what we do is when we test and train various AI models, we actually use a series of models so that when the results are different, we can reverse engineer the black box that you mentioned.
That's great. Yeah. No. That’s fantastic.
And focus on what some of the key variables are. So, I think that human involvement in design, testing, and training, in our view, definitely needs to continue. If anybody thinks that ChatGPT just learns everything on its own, that's not true. There are thousands of people involved in even the rule setting. And even currently, it just does text prediction; it doesn't know if what it's creating is true or false.
Yeah.
So I don't see the need for that going away quite yet.
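The multi-model cross-check Nirav describes can be sketched in miniature. Everything below is illustrative and not Spinnaker's actual tooling: three toy "models" score the same series, and an invented disagreement threshold flags results for human review, mirroring the idea that divergent outputs are what trigger reverse engineering of the black box.

```python
# Illustrative sketch: run several simple "models" over the same data and
# flag the cases where their outputs diverge, so a human can investigate.
# Model choices and the threshold are invented for illustration only.

from statistics import mean, median

def model_mean(history):
    return mean(history)

def model_median(history):
    return median(history)

def model_last(history):
    return history[-1]

def cross_check(history, threshold=0.25):
    """Return each model's prediction and whether they disagree enough
    (relative spread above `threshold`) to warrant human review."""
    preds = {
        "mean": model_mean(history),
        "median": model_median(history),
        "last_value": model_last(history),
    }
    spread = max(preds.values()) - min(preds.values())
    needs_review = spread > threshold * abs(mean(preds.values()))
    return preds, needs_review

# Stable series: the models agree, so no review is needed.
_, review = cross_check([10, 11, 10, 11, 10])
print(review)   # False

# Series with an outlier: the mean drifts away from the median; flag it.
_, review = cross_check([10, 11, 10, 11, 40])
print(review)   # True
```

The design point is the one Nirav makes: the value is not in any single model but in the disagreement signal, which tells the human where to look.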
Yeah. Humans in the center is so important, Nirav. Right? I think you and I can agree on that one.
Hey. So let me ask you something. Right? You guys at Spinnaker Analytics serve a variety of industries, and you've definitely been quite engaged in financial services. What do you think, Nirav, are the limitations of artificial intelligence in financial services?
And, you know, it's tantalizing, going back to where we started this conversation: if all you have is a hammer, everything looks like a nail. Right? So we're like, ‘Hey, AI can solve everything,’ but it can't. So describe for us: what kind of problems do you think AI is not well suited to solve?
I'll answer it the following way. If the perception of AI is what ChatGPT and large language models can do, then I think in financial services it's a wonderful tool, because the responses are limited. It doesn't have to create fake content without realizing it, because the content is made available to it in a factual manner. Having said that, when it comes to data and not text, there are many analytic techniques and many AI algorithms available that can be put to use, but I think people are chasing the text piece in a somewhat limited way.
Having said that, though, when it comes to decision making, bias and variance become very, very important, along with predictability and explainability. How you create the right trade-off and balance is what senior executives need guidance on; to just make it their problem would be unfair.
But I believe that as far as predictive and prescriptive techniques are concerned, I can't think currently of any single financial services problem or opportunity that cannot be solved. The question is not if it can be, but what is the best way? How do we solve it? We've seen there's a fear of missing out on AI, and some folks don't know how to move forward, so they're not doing anything. Other folks have a sense that if they build out multimillion dollar infrastructure and just collect a whole lot of data, that somehow everything would be figured out. And the answer is usually somewhere in a third place entirely. Where the businesses need to start is always with what the business problem is, what is the business priority.
Hundred percent.
What questions do I need to know the answers to in order to solve for that?
Yeah.
What information do I need? And then let's apply the best technique, whether it's predictive analytics, AI, or something else internally.
Completely concur. You know, at LIMRA and LOMA, we've got this AI Governance Group. Right? It's a vibrant group of almost eighty, eight zero, business and technology executives, and we're currently in phase two. And what's interesting is the way we have been discussing the age of AI, like the explosion of generative AI.
Generative AI is going to necessitate a broad business process reengineering exercise within the organization. So one of the pieces of guidance that they're giving the industry is: don't just think about the technology and its applicability. The underlying business process changes that you need to effectuate are going to be just as critical.
You know, is it true, Nirav, in your opinion, that generative AI is likely much better for unstructured data, and predictive and prescriptive analytics are much better for structured data, within financial services?
No, I don't think of it that way. While generative AI can deal with unstructured data, that's much more true from a query point of view: it has natural language processing and can interpret a very broad range of queries.
Yeah.
It has proven to be quite ineffective at coming up with the answers to those. So I think the answers need to be provided to those large language models in a fairly structured way. Now, unstructured data, as we get into sentiment analysis and things like that, is something I see in the future of that. As far as predictive analytics is concerned, and even the AI algorithms related to it, there's been significant movement forward in dealing with unstructured data.
Mhmm. Okay. That makes sense. Hey. Let me ask you something. Right? When everything looks like a duck, quacks like a duck, walks like a duck. Right? It's extremely tantalizing for us to think about OG analytics as AI today, as we talked about. Right? What are some of the things, from a tools and technologies point of view, that get mistaken for AI today but are not AI?
So, I think we already talked about confusing BI reporting
Yep.
As AI. That's definitely one of the strongest and most obvious areas.
Yeah.
Now, going back to your point, there are many statistical regression techniques, and AI largely uses those too; it just uses them on much larger datasets with many, many more iterations. That would be an example of where AI may even create a more precise forecast, but not necessarily a more accurate one. So, what do I mean by that? There are many different statistical techniques one can use. Some of them are more accurate, with less volatility in responses, and some of them are precise but very volatile.
AI will often create precise but volatile results, and that's why it needs to be constrained, either through human input or other rules that you can specify. And that's getting very, very sophisticated right now. So I don't know if I answered your question from a specific tool set point of view, but one example is that chatbots are often considered AI nowadays: ‘Oh, we're doing AI. We have a chatbot.’
Not everything you do with a computer or using math is AI. In fact, we at Spinnaker are much more focused on avoiding AS — artificial stupidity. Let's just focus on the intelligence part of it, whether it's artificial or human. What technique will solve the problem sustainably is what we're really focused on. And that's why I talk about it as a spectrum.
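Nirav's contrast between stable and volatile forecasts can be made concrete with a toy experiment: estimate the same quantity from repeated noisy samples using two methods, and compare how far each is off on average versus how much its answers swing between runs. The data and both methods are synthetic, invented purely for illustration; real model evaluation is far more involved.

```python
# Toy illustration of "accurate and stable" versus "precise but volatile"
# estimates: measure each method's average error (bias) and how much its
# outputs swing across repeated runs (volatility). Synthetic data only.

import random
from statistics import mean, pstdev

random.seed(42)
TRUE_VALUE = 100.0

def stable_method(sample):
    # Averages the whole sample: close to the truth, low volatility.
    return mean(sample)

def volatile_method(sample):
    # Uses only the last observation: an exact-looking point estimate
    # that swings heavily from run to run.
    return sample[-1]

def evaluate(method, runs=200, n=50, noise=10.0):
    estimates = [
        method([random.gauss(TRUE_VALUE, noise) for _ in range(n)])
        for _ in range(runs)
    ]
    bias = abs(mean(estimates) - TRUE_VALUE)   # error on average
    volatility = pstdev(estimates)             # spread across runs
    return bias, volatility

stable_bias, stable_vol = evaluate(stable_method)
volatile_bias, volatile_vol = evaluate(volatile_method)

# Both methods are roughly unbiased, but the volatile one swings far more
# across runs, which is why such outputs need constraining rules.
print(f"stable:   bias={stable_bias:.2f}, volatility={stable_vol:.2f}")
print(f"volatile: bias={volatile_bias:.2f}, volatility={volatile_vol:.2f}")
```

This is the shape of the problem Nirav flags: a method can look precise on any one run yet be volatile overall, and that volatility is what human input or explicit rules are meant to constrain.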
What was that adage? Artificial intelligence is no match for human stupidity. And by the way, for the listeners, when you said artificial stupidity, you did look me directly in the eye. So I'm just going to move past that.
No, listen, you raised a really interesting point, right? You talk about BI being mistaken for AI. Business intelligence firms have been doing BI for a while. What does that look like? If you can succinctly summarize: how do you go from BI to AI? Because there is significant value in business intelligence being able to derive insights, using artificial intelligence, that we might not have visibility into today. What does that journey look like?
So, one example of that, which we're actually working on for a client, is that, first of all, BI can become more accurate by using much more advanced techniques like time series modeling, which would be difficult to do without significant computing power. It's not unlimited; not everybody needs to go and buy NVIDIA chips out the wazoo, but it's computationally much more intensive, and it actually gets more accurate as a result. But then, looking forward, let's say you're doing forecasting and now you're showing some variance: we forecast that you'll come in ten million dollars below your target.
Okay, good, and the accuracy has been proven. So what does management do about it? The BI needs to follow up with: so what now?
And that management commentary, ‘We are going to do X, Y, Z over the next quarter to try and address this,’ that's where document ingestion and interpretation, ‘What does that context mean?’, is where I see the next wave coming in the next twelve to eighteen months. Some of it is already here.
Then we can check that and confront management commentary with, ‘Yeah, if you do all of these things, that will add five million, but it will not add ten million. What else are you going to do? And, also, did you do what you said you were going to do?’ Commentary is usually used to explain why something that already happened, happened, and why it will not happen again.
I think BI moving to AI is the inclusion of humans in that conversation, and then quantifying the impact of their actions, which BI usually doesn't do.
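The forecast-gap-commentary loop just described can be sketched as a back-of-the-envelope calculation. All figures and the naive moving-average forecast are invented for illustration; a real system would use proper time series models and ingest management commentary from documents rather than a hand-written list.

```python
# Sketch of the BI-to-AI loop: forecast against target, compute the gap,
# then net out the quantified impact of management's committed actions.
# Numbers and action names are hypothetical, for illustration only.

def moving_average_forecast(history, window=3):
    """Naive forecast: average of the last `window` observations."""
    return sum(history[-window:]) / window

def gap_to_target(history, target):
    """How far the forecast falls short of (or exceeds) the target."""
    return target - moving_average_forecast(history)

def remaining_gap(gap, planned_actions):
    """Net out the estimated impact of each committed action."""
    return gap - sum(impact for _, impact in planned_actions)

# Quarterly revenue in millions; the target implies a 10M shortfall.
revenue = [88, 90, 92]
target = 100
gap = gap_to_target(revenue, target)           # 100 - 90 = 10.0

# Management commentary, quantified: the committed actions only cover
# 5M of the 10M gap, so 5M remains unaddressed.
actions = [("new distribution deal", 3), ("pricing change", 2)]
print(remaining_gap(gap, actions))             # 5.0
```

The point of the sketch is the last step: quantifying commentary turns "we will fix it" into a number that can be checked next quarter, which is the human-in-the-conversation piece Nirav describes.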
So basically following the ‘what, so what, now what’ model, and then making sure that's incorporated continually. I mean, that's fantastic, Nirav. So, alright. Last question before we depart.
Where do you see AI in insurance in three years?
So I'll give a couple of examples outside of insurance first. I'm seeing a lot of improvement in the mechanical aspects of AI: being able to look at terrain data and then move arms and legs and wheels in real time. We're seeing a lot more improvement there.
I already mentioned generative AI moving from text analysis and natural language processing to sentiment analysis and image processing. This will further help from a customer service point of view. In the case of insurance, I see three areas where the use of AI will significantly improve what carriers have to offer to the market.
One, people are collecting a lot of data on variables like consumer behavior, but mostly that data is sitting around. It is not getting analyzed and incorporated into better product design and better underwriting decisions.
The second is intelligent workflow. People still use old-style staffing models. I think now work will be intelligently directed to the person most suitable to handle it. Again, think of it from an underwriting point of view, along with ongoing improvements in fraud detection.
The third area is having a longer view into comorbidities along with mortality on the life side. In the case of P&C, it'll be more on the loss control side. These are some of the areas where I see not only promise but potential.
Awesome. Well, you know, we should do this again in three years and just analyze how far we've come in the past three. I promise you, all of the things that you just mentioned are going to come to pass, and then some, because we will probably have a lot more innovation in this space than you and I can even foresee today. There's no predictive analytic or AI on the planet that can help us analyze that.
Which is why I'm glad you asked me about three years because after five years, who knows?
Who knows? Yeah. I completely concur with that. Hey, man. It's always good to see you, always good to catch up. You are so insightful; I just enjoy having a conversation with you. So, much appreciated. I hope our listeners got a ton of value out of this.
I have a red pen for everything you and I are going to be wrong about, so we'll keep track of that.
Oh, wait a second. Something you are going to be wrong about, but I might be appropriately evasive.
Great catching up with you, Kartik.
Alright, pal. Bye.
Thanks for listening to LIMRA's Insider Insights podcast series. To hear future podcasts, subscribe at LIMRA.com/podcast.