Redefining CyberSecurity

Predictive Risk, Data Integrity and the Role of Large Language Models in Cybersecurity | An RSA Conference 2024 Conversation With Edna Conway and Andrea Little Limbago | On Location Coverage with Sean Martin and Marco Ciappelli

Episode Summary

In an engaging pre-event episode of On Location coverage of the RSA Conference 2024, Sean Martin, Edna Conway, Andrea Little Limbago, and Marco Ciappelli explore the intricacies of predictive risk and the impact of large language models on cybersecurity. The discussion highlights the importance of data integrity, the role of human expertise in using AI technology, and the challenges of mitigating risks associated with advanced AI models.

Episode Notes

Guests: 

Edna Conway, CEO, EMC ADVISORS

On LinkedIn | https://www.linkedin.com/in/ednaconway

On Twitter | https://twitter.com/Edna_Conway

At RSAC | https://www.rsaconference.com/experts/edna-conway

Andrea Little Limbago, Senior Vice President, Research & Analysis, Interos

On LinkedIn | https://www.linkedin.com/in/andrea-little-limbago/

At RSAC | https://www.rsaconference.com/experts/andrea-little-limbago

____________________________

Hosts: 

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

____________________________


The conversation in this episode touches on the evolving landscape of AI technology, focusing in particular on the adoption of large language models (LLMs) and their implications for predictive risk analysis. The speakers make the case for a comprehensive framework that combines algorithmic advancements with robust policy guardrails to ensure AI models are used accurately and securely.

A key takeaway from the conversation is the critical role of data scientists and engineers in applying AI technologies effectively. While AI models can enhance productivity and streamline workflows, human expertise remains paramount for validating data, identifying potential risks, and steering decision-making in the right direction.

The discussion also covers the challenges posed by data integrity, potential attack vectors targeting AI systems, and the importance of implementing safeguards against data leaks and malicious manipulation. The speakers stress the need for stringent guardrails to uphold data accuracy and mitigate the negative impact of erroneous inputs.

Moreover, the episode explores the intersection of AI technology with military and diplomatic decision-making processes, highlighting the complex nature of forecasting risks and making informed strategic moves in response to evolving scenarios. The speakers reflect on the probabilistic nature of risk analysis and underscore the need for continuous refinement and insight generation to enhance predictive capabilities.

As the conversation unfolds, the panelists bring to light the nuances of AI utilization in different domains, from supply chain management to national security, underscoring the importance of tailored approaches and domain-specific expertise in maximizing the benefits of AI technologies.

In conclusion, the episode captures the dynamic interplay between human intelligence and AI advancements, urging a holistic approach to integrating AI tools while upholding data integrity, security, and accuracy in predictive risk analysis.

Be sure to follow our Coverage Journey and subscribe to our podcasts!

____________________________

Follow our RSA Conference USA 2024 coverage: https://www.itspmagazine.com/rsa-conference-usa-2024-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverage

On YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS-B9eaPcHUVmy_lGrbIw9J

Be sure to share and subscribe!

____________________________

Resources

Getting to True Predictive Risk: Will Data Accuracy Thwart AI’s Potential?: https://www.rsaconference.com/USA/agenda/session/Getting%20to%20True%20Predictive%20Risk%20Will%20Data%20Accuracy%20Thwart%20AIs%20Potential

Learn more about RSA Conference USA 2024: https://itspm.ag/rsa-cordbw

____________________________

Catch all of our event coverage: https://www.itspmagazine.com/technology-cybersecurity-society-humanity-conference-and-event-coverage

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Are you interested in sponsoring our event coverage with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Want to tell your Brand Story as part of our event coverage?

Learn More 👉 https://itspm.ag/evtcovbrf

Episode Transcription

Predictive Risk, Data Integrity and the Role of Large Language Models in Cybersecurity | An RSA Conference 2024 Conversation With Edna Conway and Andrea Little Limbago | On Location Coverage with Sean Martin and Marco Ciappelli

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Sean Martin: Marco.  
 

[00:00:03] Marco Ciappelli: Sean.  
 

[00:00:04] Sean Martin: I'm going to, uh, ask a prompt for how best to get to San Francisco this year.  
 

[00:00:13] Marco Ciappelli: And you're probably going to end up in, uh, New York.  
 

[00:00:17] Sean Martin: A different, a different San Francisco?  
 

[00:00:20] Marco Ciappelli: I don't know. There's probably another San Francisco.  
 

[00:00:25] Sean Martin: That's worth a prompt. I don't even know. Maybe there is.  
 

[00:00:28] Marco Ciappelli: And maybe there is a fake one. 
 

[00:00:31] Sean Martin: Certainly probably different addresses. I don't know. Who knows what I'm talking about. I don't know what I'm talking about half the time.  
 

The point I'm making is, uh, you can prompt something to get a response and, and think you'll get something back that you're supposed to. Um, And it may be accurate, may not be, um, but you, you're kind of relying on this random generator machine, I view it as, to, uh, to come up with some cool ways to, to, uh, look at things and explore things. 
 

And so we're on our, on our way to San Francisco. People didn't realize it's our chats on the road to RSA conference, where we get to talk to cool people who are presenting at the conference. And, uh, we have two, two cool cats with us today, Andrea and Edna, how are you?  
 

[00:01:20] Edna Conway: Great, thanks.  
 

[00:01:21] Sean Martin: The prompt, the prompt said that. 
 

[00:01:26] Edna Conway: Yeah, I don't know about cool cats. The only cool cat is the one you see walking around. There we go. I wanted to talk about, you know, wait, this is going to get technical. I'm out of here. This position.  
 

[00:01:37] Sean Martin: I'm out of here. Perfect.  
 

[00:01:39] Marco Ciappelli: Well, so, Sean, as we go rambling, the topic is that if you trust the source,  
 

[00:01:48] Sean Martin: you're screwed. 
 

No, I don't know.  
 

[00:01:50] Marco Ciappelli: And the source is not trustful, what do you end up with?  
 

[00:01:54] Sean Martin: Well, if you're making decisions on information and, uh, more specifically risk based decisions, you might, you might end up in, uh, in an exposed way that you don't want to be exposed. We're talking about a company here, of course. RSA Conference is all about security and privacy. 
 

See ya. And managing technology and data in a way that, uh, businesses can do what they want to, but, uh, not, uh, put their information, their company, their customers, their shareholders at risk. Right. And more and more companies are using data and large language models. And that's the topic, getting to true predictive risk. 
 

Will data accuracy thwart AI's potential? Uh, so we're going to get into that session where Andrea and Edna are our panelists, and they're going to be talking about that. How are you both? You done listening to me ramble? Somebody prompted me.  
 

[00:02:54] Edna Conway: Yeah, no, that is fantastic. Doing  
 

[00:02:56] Sean Martin: well.  
 

[00:02:57] Edna Conway: Delighted to be here. And yes, I'm done listening to you ramble. 
 

Um. Because I want to tell you about this dream. This is this dream to actually get to what I call true predictive risk. And a lot of people say, what is that? And does that mean we're going to get it accurately depicted every single time? 100 percent accuracy all the time? We use the words predictive risk. 
 

That means it's not going to be 100 percent all the time. It's still predictive. But for years, we've been waiting for things like, did we have high compute capacity? Did we have high volume storage? Did we have, there was a whole bunch of things that we needed to get to, and we were using things that you're both familiar with and that Andrea probably was, you know, part of the utilization teams who expanded its capacity, things like Monte Carlo simulations, right? 
 

Which are predictive risk, but predictive risk not at the highest degree of fidelity that we can have when we have high performance compute or, you know, storage capacity that we now have. And I could keep going. So part of the reason why we put that there was to point out something, which is could we achieve a higher degree of fidelity of risk analysis utilizing AI and machine learning? 
 

Sure, the answer is yes. There's this newcomer to AI and ML called LLMs. Um, and it, everybody loves it because it speaks their language, right? It doesn't speak the language of the computer. The result of that has revealed something that is a flaw that Andrea and I have talked about. Um, and she very graciously pointed out to me that I couldn't use my hello, I grew up in Brooklyn language. 
 

Um, so I'll use a different word, which is garbage in leads to garbage out. You can imagine my version, a little shorter word for garbage. Um, and I think we wanted to talk a little bit more about that and I'll turn it over to Andrea to make sure that she gets her two cents in because we're going to go somewhere with this, but I want to set the stage. 
 

Andrea, you want to want to tag in here?  
 

[00:05:11] Andrea Little Limbago: Yeah, no, absolutely. And I think it's interesting that the dialogue around LLMs has also changed a lot over the course of the last year, where data initially wasn't really discussed much at all, and everyone did kind of assume this magical thing happened, and you got the recipe that you wanted, and you had a workout that you wanted, and it was just amazing, and then as they started diving a little bit below that surface, they started realizing that the data does matter, that there's ways, both on the data that's put in on its own, there are concerns that go along with it, anything from regulatory risk, such as, was it trained on, you know, on intellectual property that was not allowed to be trained on. 
 

And so we've seen issues in that area. The fidelity, you know, like Edna talked about, uh, super important. If it is, you know, not accurate data, you're gonna get inaccurate outcomes. And so that becomes the bigger problem. But due to that, there's been a, you know, a larger and larger discussion because we've started seeing outputs that were nonsensical and that was even without any kind of, you know, prompt engineering and manipulation that the out, and when you have those nonsensical outputs, they start realizing, well, why might that be? 
 

And a lot of people really did start thinking, like, maybe it's the model. And it almost always goes back. It's the data that tends to get, you know, that's erroneous. And it's the same thing when you're doing any kind of modeling that. It is only as strong as the data, and so you do have to be very cautious of what data is going into it. 
 

And there are a whole series of current concerns that also go along with it that all relate back to predictive risk, and it's, you know, do you have even access to the data that can inform predictive risk, or is that proprietary data that no one else has access to? And so there's just a whole range of data considerations that go into it, and the, I think the nice thing about it is that I think the discussion is, is getting back to that a little bit now, whereas a year ago, it really was, you know, viewed more as this magical thing that was going to solve all of our problems, and We do want to get to predictive risk. 
 

Like Edna said, that is where, you know, that would be great to get to that point. Um, but we have to overcome this big data challenge that we're, we're facing right now, uh, to get there. So it's not going to be easy. Yeah. The easy button's not quite as easy as everyone was hoping.  
 

[00:07:08] Sean Martin: Is it that the, the models aren't that much different? 
 

Are there other models close enough alike that you'll, if you feed them all the same data set, you'll come up relatively with the same results. But, but the, uh, the converse isn't true where the data set obviously drives the, drives the outcome.  
 

[00:07:37] Edna Conway: Yeah, that's an interesting question. I, let me, let me speak to it this way. 
 

I don't know that it's a question of which LLM model you're using, to be honest with you. Where we're trying to get underneath is we need to just, you know, it's relatively new. We don't have ML, uh, is not relatively new, right? Neural networks. If you think about the first sort of, um, speech to text, right? 
 

With the exception of those of us who have, you know, a New York and New England accent, for which there is no application that understands us correctly. The truth of the matter is if you get beyond the reality of just a neural network in ML, which is what started this all, and get to this LLM, which has that natural language component, there was a perception, as Andrea pointed out, that we could just talk to it and it would do. 
 

It would do magic things, right? It would make our resume better. You, of course, you could give me six PhDs that I don't have that might make the resume better. Um, but we wanted to step back and say we need a framework and a policy, and we've seen some changes as Andrea pointed out correctly, right? There weren't the AI requirements that we're seeing coming out of Europe, and when you look at Europe and you look at the U.S., you see two very different things, right? Europe is, um, very authoritarian, very rigid, in a good way, and says, this is what we want you to do, and you will do it. The U.S. has more of a public private partnership model, which says, what are you thinking, enterprises? How are you using it, enterprises? You want to be constrained in this way. 
 

Our, our premise is that we need a combination of algorithmic and policy guardrails. To do one thing and one thing only, right? And it's the same thing we all taught our children. Look both ways before you cross the street. Validate the data. And when I say the data accuracy, we mean a couple of things. One, are you using the right data? 
 

Two, is the data that you're using vetted? And I'll give you a supply chain example after Andrea weighs in because it's a glaring example. Um, and then the last piece is Are you conveying the data in a meaningful fashion so that the conclusion can be drawn in a way that outputs the highest fidelity analysis? 
 

Because the win is we're going to rely on the analysis and say, good, that's great. We're going to do something about it now. Andrea, thoughts?  
 

[00:10:29] Andrea Little Limbago: Yeah, no, no, I agree. And I think You know, each of the models, and we kind of put LLM all under one big umbrella, but there are a bunch of different ones that are, some are focused more on the text, but then we also have a lot of that, you know, larger, broader AI models that are image detection and, and so forth, and the, the voice detection. 
 

And so for each of those, we, we still see, you know, some great advances, but also some of the same underlying problems that all tend to go back to the data. So the models for sure matter and they help get us closer and closer to that prediction, but. If there's a foundational problem, very often it is the data. 
 

And that gets to the notion of hallucinations, which I think we've heard a lot about at this point, um, that really weren't talked about for a while. And that's really just, you know, pushing forth erroneous information as an, as the output. And a lot of that has to do with the, uh, the data going into it. 
 

But what we're also seeing, you know, like we do with any new technology, uh, you know, starting to see some new attack vectors going onto the data. Because that's the easier way to start manipulating. And so attackers and researchers are starting to figure out how to basically mess with the data going into it. 
 

So you can get the outcome that they want to have. Or to, you know, inject malware. Or to do other kinds of, you know, nefarious activity. And I think the focus on so many of the, these different attack vectors that are targeting at the data kind of highlights just how important that data really is because it then impacts what the outcome's gonna be. 
 

And so I think if you, if you follow the, the, the attack vectors, um, I think that that alone gives a pretty good signal just to how important that data is. But like I was saying, we are starting to see those guardrails in place on the policy side, and we're starting to see some both on, you know, how can you secure these algorithms and secure the data, but then also what other kind of guardrails can you have in place because. 
 

We do see attackers trying to basically, you know, social engineer their way around those guardrails. So if it, you know, so if the model is trained on saying, you know, don't use any swear words as an output, they can quickly, in many cases, convince the model to output various kinds of swear words or don't show anyone's email as an output. 
 

And they can basically do prompt engineering to circulate or circumvent that kind of guardrail. So the guardrails are really important, but it's, it is a cat and mouse, just like it is, you know, across the broader cybersecurity landscape.  
 

[00:12:39] Marco Ciappelli: Well, it's just the way it is in human relationship in general. I mean, again, if you, let's say you go to college to become an engineer and you've fed books on how to become a chef, you're not going to build an airplane or a bridge. 
 

You're going to make a cake. So, I don't know if it's an Italian example enough for you, Edna, but, you know, the point is garbage in or wrong data in, wrong data out. And when you talk about cybercriminalism, Andrea, you were going through that. I'm thinking like, in a way, thank God that we have the criminal that are trying to break all the, you know, to find the sneaking in the social engineering and, and the, and the gaps. 
 

Because maybe it's what it makes us. Better, but to go in and trust some magic machine that is just going to have all the answer in the world is like, I don't know, believing in some volcano and, you know, because I don't understand it. And, uh, maybe Vulcano is, is my God. I don't know. Am I going philosophical enough for you? 
 

[00:13:48] Edna Conway: That, that was fabulous. Uh, we can have a session about God, um, and God and predictive risk. That would be intriguing. But I do think you raised something that's important to reference here, which is, you know, and we see, we saw the new standard that I haven't really fully dug into, right? We have an ISO standard 42, 001 now on, on AI, but are we just beating home something that is really important, which is. 
 

At the end of the day, I mean, you can, you can run your analysis in an enclave, right? You can try and control your data. There's a whole host of things you can do, but only if somebody tells you when utilizing this, control your environment, control your own data, your own data. Think about back in the days when we thought about network segmentation as if it was, Oh, we can segment the network. 
 

That's basic now, right? An LLM. Has everything in it in theory. Can we begin to start to use some of the same common techniques that we've deployed elsewhere to streamline, control, and bring a higher degree of fidelity? And the example that I always give everybody, which I'll probably talk about at RSA as well, and I meet Andrea Leff, is, um, you all know that, you know, I have this weird background of Wearing many hats in many different organizations. 
 

And I think about supply chain where, um, you know, COVID brought to the forefront, uh, a whole bunch of perception on how important supply chains are, right? Great. For those of us who knew that a long time ago, it's fabulous. And Marco, for the Italian analogy, it's, if you ricotta, my, my, my good. So it has to be a layered approach. 
 

All right. So if we think about. The fact that you're trying to get ahead of the risk, and you have an example I debated with Andrea is, I have a plant in XYZ location. Let's make it Malaysia. It's a rainforest environment. I worry about rain. We get rain all the time. They're used to it. I talked to my suppliers. 
 

It's really cool. They've done everything they can. They're prepared. Their workforce is prepared. They've moved electronics to a higher level. They have generators. They're cool, so they tell you. But at some point, when does everything break? Is it at 21 days of solid rain? Is it at 27 days of solid rain? Is it at 82 days of solid rain? 
 

What's possible? And you're trying to get your arms around this and you're also trying not to have your enterprise or your government engage in movement that causes more disruption than the analysis that you're doing, right? So, okay, you put it all in, you put your magic in and you go, you get data. And it says around 23 days, you should start moving some things over to Europe. 
 

Okay. That's a cost. You have to move people. Perhaps you have to start up some lines. And, you know, you're a smart supply chain executive, so you have another place, you're ready, but you still have to rev it up. Well, here's the problem. The data that went into your analysis came from some really great, high integrity source. 
 

Like NOAA. And some of the data came from a desire to always get, because we all do want to, I always call it eyewitness weather, when I listen to what the weather person says, and then I'm like, I go up to the door and see what's going on outside. Never really what they say, huh? Um, and so the other part of that came from the plant manager, who is telling you what's going on, and you know what, the plant manager is getting their data from a relative who has a tin can in the backyard, and they forgot to tell you about the tin can, not calculating the fact that they're really not thinking about evaporation. 
 

And they're also forgetting that there's several animals who have knocked the tin can over several times. So the number of inches of rainfall is actually not right. Well, it wouldn't be inches in, in Malaysia. Um, and so you weave that all in and the analysis comes out and you move 32 percent to Europe. 
 

And it cost you X amount of dollars. And as it turns out, the real number was 64 days. But you didn't know that because the data validity and integrity was not examined at the outset, or you didn't put a mathematical factor on high and low veracity of that. Right. And I don't ever want to get to that because that starts to sound to me like a Monte Carlo simulation. 
 

God help us. I don't want to go there. And, and Andrea has other examples where she's been thinking about what we can do. And I want to give her an opportunity to weigh in here.  
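
To make Edna's rainfall example concrete, here is a minimal, hypothetical sketch of a veracity-weighted, Monte Carlo style blend of two data sources. Every source name, number, and weight below is invented for illustration; this is not the model or the data discussed in the session.

```python
import random

# Hypothetical estimates of how many consecutive days of heavy rain the plant
# can absorb before operations break down, as reported by two sources: a
# high-integrity weather feed and the plant manager's informal "tin can"
# readings. Every name, number, and weight here is invented for illustration.
sources = {
    "weather_feed":  {"estimates_days": [58, 61, 66, 63], "veracity": 0.9},
    "plant_manager": {"estimates_days": [20, 23, 25, 22], "veracity": 0.3},
}

TRIALS = 10_000

def blended_breaking_point(weighted: bool) -> float:
    """Monte Carlo style blend: each trial samples one estimate per source and
    averages them, optionally weighting each source by its assessed veracity."""
    total = 0.0
    for _ in range(TRIALS):
        samples, weights = [], []
        for src in sources.values():
            samples.append(random.choice(src["estimates_days"]))
            weights.append(src["veracity"] if weighted else 1.0)
        total += sum(s * w for s, w in zip(samples, weights)) / sum(weights)
    return total / TRIALS

# Treating every source as equally trustworthy drags the estimate toward the
# bad data; weighting by veracity pulls it back toward the vetted source.
print(f"Unweighted blend:        ~{blended_breaking_point(False):.0f} days")
print(f"Veracity-weighted blend: ~{blended_breaking_point(True):.0f} days")
```

Run with equal weights, the bad tin-can readings drag the predicted breaking point down toward the premature, costly move Edna describes; weighting by assessed veracity pulls the estimate back toward the vetted source.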
 

[00:18:45] Andrea Little Limbago: Yeah. And I think what your example highlights actually just reminded me of why we're seeing many companies starting to at least temporarily ban, whether or not they're integrating Gen AI. 
 

into workflows. And it's not because they're, you know, Luddites that don't care about technology and just, you know, think it's, it's not gonna be helpful. It's partly because of these concerns about the data going into it and that the potential risks of the data going into it, then, you know, integrating across their, you know, their, their corporate proprietary data. 
 

Um, so there's some examples like that with the data validity that they're concerned about that. Cause for LLMs, think about it as both algorithm and the third party data risk. So it's, it's all of that coming together versus just, uh, an algorithm. And so there's a lot of growing concern about the data coming in and that regard on top of the security, you know, attack vectors. 
 

So they're also concerned that's going to be a way to, um, get the attack vectors coming in and then, and we got to work on an analogy as well within that, as far as leaks, because there could be, you know, LLMs also are, you know, especially with the, the chat interface are prone to data leaks as well, where, you know, Yes, a person could be using the interface and basically put in proprietary source code or other kinds of proprietary or personal identifiable information into it. 
 

And that ends up leaking the data because that thing goes back into the broader, uh, data pool and could then become a part of the training data and so forth. And at a minimum, you, whoever, whatever company creates that, uh, LLM now has access to that data. So there's additional data leaks that I think we can, we can integrate into that as well that are a growing concern. 
 

And we just are seeing, you know, we've seen a couple of companies that have, uh, figured out that that's part of the issue and they, uh, are thinking, you know, broader use of these kind of gen AI tools as a potential corporate risk. They just want to put the, again, it goes back to putting around the safeguards around it and how to use this new technology, which can be You know, really, you know, influential and can get us closer, ideally, to some sort of prediction with the good data, but also saying those guardrails in place so you're not, um, so you're basically protecting against some of the negative side effects of either bad data or some type of manipulation of the data itself. 
 

So there's a lot more going on. And a lot more companies are starting to think about it versus the, you know, it was a big flood. It was a big rush to it. Like we always thought see the technology is a big rush. Like how can we implement this as fast as possible? And now that we're learning a bit more about it, they're like, Oh, maybe we need to actually take a step back and figure out the secure way. 
 

And that's not everyone, by the way, some are still just going all in and we're going to start seeing. Some of the repercussions of it, but I think the more highly regulated, uh, industries are taking a closer stance at looking at the, some of the risks.  
 

[00:21:18] Sean Martin: Jeez. So, so many questions because the first one that comes to mind is, um, when I look at technologies and when we start to put abstraction layers on, which to me, an interface that gives you access to an LLM and data, right? 
 

You have an abstraction that lets many more people use the technology, and I don't know. Are you seeing where organizations are no longer relying on data scientists and, and maybe more, more engineering type folks to use some of these technologies and, and therefore the abstraction is giving them what they think they want without having the ability to do the assessment of what they're. 
 

What the data looks like and things like that. I don't know. I'm just,  
 

[00:22:12] Edna Conway: Yeah, the answer is yes. And no, uh, because I think you need both. I think you need the engineering community. And I also think you need the data scientists because they come at it from different angles, to be honest with you. Right. Um, and I mean, I don't, I haven't seen it, but I read it, but I just saw something come across my screen earlier today about Copilot coding everything the wrong way when I checked it against my checks. 
 

And I'm like, well. That may not be a co pilot problem. It may be a user problem. With all due respect, it might be a little bit of both. What I think you're starting to see is the desire to Engineer a way to utilize a large language model in the right environment. I'm going to use that word enclave again. 
 

So you're controlling the data and a reminder to people that this isn't a separate engine that's independent from human capability, right? Um, I always tell people if they'd come to ask me, which they didn't, when they named this, they wouldn't have called it artificial intelligence. They would have called it enhanced intelligence because it exists. 
 

To enable the human capacity to do things. And so if you take that approach, just setting up a framework that says at the beginning, Sean, let's talk to the engineers and see what they want out of it. Let's talk to the data scientists to think about what data we need. And then let's talk quite frankly, to the network architects. 
 

And the cloud experts to talk through what workflow do we need to enable this to happen in perhaps a closed enclave so that you can include some of the intellectual property that you wouldn't want to contribute elsewhere. So, I think you're going to start to see customization. of LLM utilization for your use, your community, and then slowly we'll let, we'll let third parties in. 
 

And that addresses a whole host of things. And I'm not going to try to address bias, because I think that's a whole nother separate topic. My prayer is that let's get the accuracy right, and inherent inaccuracy might be Recognition that the data is not pristine and therefore there will be an inherent bias, but bias is a little bit more complicated. 
 

[00:24:48] Andrea Little Limbago: I think the one thing I'd add is that I do think in some cases we have seen. The going assumption being that this can replace some data scientists and engineers, at least a little bit, and have seen some layoffs along those lines as executives rationalizing due to AI. Um, so I do think we are seeing some of that. 
 

Uh, I don't think, I think it's too early to see how that's all playing out, but back to what Edna said, I think that they're going to realize that they still need the various kinds of domain expertise, uh, to ensure that it's working as intended. And even if it's for like finance or accounting and still making sure it's working as intended and you need those, that expertise in house. 
 

[00:25:29] Edna Conway: The smarter users are figuring out how to utilize it to minimize what we would call mundane or repetitive, you know, mathematical or repetitive analysis and or workflows so that the humans can have higher productivity gains and enhancement. Does that end up with some people moving on to something new perhaps? Um, but you know, I, I always remind people that don't be afraid of innovation because I'm really glad I don't have to go down to the river behind my house with a bar of soap and a washboard. 
 

I love my washing machine. Thank you very much. It was a gift. Does it do everything perfectly? No, but I'd rather have that trade off.  
 

[00:26:12] Marco Ciappelli: That's the point. Does it do everything perfectly? No, but it does pretty good what it's supposed to be doing in a tunnel. Right? In a, in a silos, in a network segmentation, whatever you call it. 
 

I mean, I have that network segmentation vision in my mind because the AI that are working well now are the one that are specialized into something. Recognize that images, x rays in healthcare or pattern. If we go back to the beginning, we're expecting this magical entity. That just give us the answer when we need them in any single possible realm and that's it's not realistic It's if one day happen, I will be very scared to deal with that. 
 

I don't know if I want it  
 

[00:26:58] Andrea Little Limbago: No, yeah, I agreed But, and also kind of what we were talking about also, it brings to mind just the degree of validation also probably should vary depending on what domain you're talking about. So, we're talking about nuclear war, and you have a model popping out with some recommendations during a war game. 
 

That probably requires, you know, quite more human in the loop than, I've got these five ingredients, tell me what recipe I should be making.  
 

[00:27:22] Marco Ciappelli: Yeah, I bought a salad. It wasn't that good. Okay, whatever. Try another one.  
 

[00:27:27] Edna Conway: Yeah. So apropos to that, if you really want to go have some fun, um, you know, I may not be the date you ever want to take out with you because what I think is fun is a 67 page, um, analysis that was done by a combination of academics from Georgia Tech. 
 

I have to look at it cause I don't want to leave anybody out. Stanford, Northeastern and the Hoover, uh, Wargaming and Crisis Simulation Initiative. And what they looked at is how to deal with escalation risks from language models as you're thinking about both military, now think about this as a juxtaposition, and diplomatic decision making. 
 

Do I go in with guns? Or do I go in with the Department of State to go talk to another nation? And if you're using language models, right, because there are two different kinds of analysis you could be taking. One is the supply chain example I gave you. Maybe algorithmic. What about an example of what kind of maneuvers by a certain nation state around another nation state might give me pause to be thinking about we should be rethinking where our
 

defensive tactics and personnel ought to be and what kind of ammunition we should have in the area, and should we be resurrecting alliances or initiating new paths to alignment. I'm being evasive by name, without naming any nations, but just look at some of the things that are going on right now. 
 

And they're not unique. I mean, history, we've had these all the time, all the time, but this might give us a way to think about language models As an opportunity to think about guardrails that are not just math, which blows everybody's minds like Edna. Listen, these large language models and AI, it's math, right? 
 

It's all math all the time. It's all predictive. It's all, I'm like, I understand, but the way we analyze things is not just mathematical.  
 

[00:29:30] Sean Martin: I'm thinking the war games example. Um, all it takes is one, one switch of tactic by any other party. And the data is no longer relevant. Right? You have to rethink everything because that, that one, that one move or one change could, could affect how everything else looks. 
 

[00:29:51] Andrea Little Limbago: It is. And that's where the benefit of the innovation is going on is at scale. They can adjust. Yes. A long time ago, we had game theory was one person makes one move, someone else makes one move. And that was kind of the end of the game. And now we can do all these multi people games over several different, uh, moves. 
 

And that's where, you know, AI really is excelling, is enabling you to start seeing, well, how would everything start to, um, reconstitute after that single move? So, again, it's getting us closer to that prediction. It's not there yet, and it still needs the human in the loop component onto it. We can start at least getting a little bit smarter on what the range of different kinds of next steps might be, um, a lot farther down than we used to be able to be. 
 

[00:30:29] Marco Ciappelli: You're still in probability.  
 

[00:30:31] Andrea Little Limbago: Oh yeah, absolutely. Absolutely.  
 

[00:30:34] Edna Conway: It is predictive at some point, but you're hoping to improve it, right? And I think you have these camps, you know, Marco, you've seen them, where you have like the naysayers. And then you have the people who are all in and I think what we're trying to propose is how could you not get excited about this right technology always brings new innovation and different challenges, perhaps not more challenges, but different challenges. 
 

So it's a, it's a request to step back and say, I have a dream. My dream is predictive risk. To get us there, I think we need to use some of the things we now have for the first time, or that we have a capacity to utilize with wild abandon rather than those who were doing neural networks 25 years ago. And we need to think carefully about how we set them up, what we put into them, and how we evaluate the fidelity of that which comes out from them. 
 

And not just automatically assume everything is, to your point, Marco, perfect, because that's not reality.  
 

[00:31:40] Andrea Little Limbago: And risk is inherently probabilistic. And so that's, um, if we can get it closer and closer to provide more and more insight, that, that's, that's the hope.  
 

[00:31:48] Marco Ciappelli: And I thought this was going to be a boring conversation. 
 

Look at this. I'm having a great time. It's very psychological, it's very sociological, human, and, and cyber security. So I think your panel is going to be quite amazing. Sean, you want to go see it?  
 

[00:32:04] Sean Martin: Tuesday, I'm going to be there for sure. Tuesday, May 7th, 9:40am local time, Moscone West. Getting to true predictive risk. 
 

Will data accuracy thwart AI's potential? Andrea, Edna, Hiram, and Nadia will be on that panel. Uh, clearly, it's going to be a great conversation. I'm sure, with two additional perspectives smarter than me, it's going to make it even more interesting. So, uh, I encourage everybody to go to that session, of course, and, uh, other topics related to it, other sessions related to this topic, if it interests you, all at RSA Conference. Stay tuned for our coverage as well. 
 

Lots more coming before the conference and a lot on site, on location as well. So we appreciate everybody following us, Andrea, Edna, thank you so much for, uh, Twisting our minds today.  
 

[00:33:02] Andrea Little Limbago: Thank you.  
 

[00:33:04] Marco Ciappelli: That was predictable.  
 

[00:33:05] Andrea Little Limbago: It was. 
 

[00:33:09] Sean Martin: It's predictable. Mind twists easy. But, uh, nonetheless, I will see you in San Francisco. Safe journey and, uh, good luck with the session.  
 

[00:33:20] Andrea Little Limbago: Thank you. Yeah, thanks so much.