Redefining CyberSecurity

Navigating the Future of AI Governance with LogicGate | A Brand Story Conversation From RSA Conference 2024 | A LogicGate Story with Matt Kunkel and Nick Kathmann | On Location Coverage with Sean Martin and Marco Ciappelli

Episode Summary

In this episode of the LogicGate RSAC 2024 Brand Story, we delve into the world of AI governance and its impact on organizations.

Episode Notes

The RSA Conference in San Francisco is renowned for being a hub of cutting-edge discussions around everything related to cybersecurity, and this year, one of the spotlights was on AI governance. In this conversation featuring industry experts from LogicGate, the focus was on unraveling the challenges organizations face in adapting to the rapidly evolving landscape of AI implementation.

Unveiling the Experts

Moderated by Sean Martin, the discussion kicked off with a warm welcome to the LogicGate team, setting the stage for a deep dive into the complexity of AI governance. Matt Kunkel, the CEO of LogicGate, shared insights from his extensive consulting background in building GRC solutions for a diverse range of organizations. His vast experience culminated in the creation of the Risk Cloud Platform, a versatile tool that aids organizations in automating risk management processes tailored to their specific needs.

The CISO Perspective

Nick Kathmann, the Chief Information Security Officer at LogicGate, brought to the table over two decades of experience in cybersecurity. His journey through managing security compliance for major players like Virtustream and RSA highlighted the intricate web of challenges posed by evolving technologies like AI. Nick emphasized the critical importance of aligning internal governance with external regulations to ensure a robust security posture.

Demystifying AI Governance

As the conversation continued, Sean Martin steered the discussion toward demystifying AI governance and its impact on organizational frameworks. The panel shed light on the dual challenges organizations face: the risk of embracing AI too recklessly versus the risk of over-regulating and stifling innovation. The consensus was clear: a balanced approach that marries speed and security is imperative for a successful AI governance strategy.

The LogicGate Solution

Matt and Nick unraveled the intricacies of the AI governance solution developed by LogicGate, designed to provide organizations with a holistic framework for managing AI risks. By integrating AI governance with existing risk management protocols, LogicGate’s platform offers a transformative approach that streamlines processes, enhances visibility, and ensures compliance with emerging standards.

Looking Towards the Future

The conversation concluded with a forward-looking approach, underscoring the rapidly evolving nature of AI technologies and the indispensable need for agile governance frameworks. The consensus was that staying ahead of the curve demands continuous assessment, adaptation, and alignment of AI governance with overarching business objectives.

In Closing

This episode of On Location Coverage at the RSA Conference 2024 offered a glimpse into the complexities and opportunities that AI governance presents for organizations worldwide. With LogicGate leading the charge in innovative solutions, the future of AI governance looks promising, anchored in a foundation of collaboration, foresight, and strategic alignment.

As organizations navigate the uncharted waters of AI implementation, partnering with pioneers like LogicGate is poised to be the key to unlocking the full potential of this transformative technology. Stay tuned for more insights and developments on AI governance as we journey towards a future powered by innovation and resilience.

Learn more about LogicGate: https://itspm.ag/logicgate-92d6bc

Note: This story contains promotional content.

Guests: 

Matt Kunkel, CEO at LogicGate [@LogicGate]

On LinkedIn | https://www.linkedin.com/in/matt-kunkel-91056143/

Nick Kathmann, Chief Information Security Officer at LogicGate [@LogicGate]

On LinkedIn | https://www.linkedin.com/in/nicholaskathmann/

Resources

Learn more and catch more stories from LogicGate: https://www.itspmagazine.com/directory/logicgate

View all of our RSA Conference Coverage: https://www.itspmagazine.com/rsa-conference-usa-2024-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverage

Are you interested in telling your story?
https://www.itspmagazine.com/telling-your-story

Episode Transcription

Navigating the Future of AI Governance with LogicGate | A Brand Story Conversation From RSA Conference 2024 | A LogicGate Story with Matt Kunkel and Nick Kathmann | On Location Coverage with Sean Martin and Marco Ciappelli

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Sean Martin: Alright, here we are. We are in San Francisco for RSA conference where everything is about security, right? And, uh, and governance. All security, all governance. All the time. You have to know what's going on with the security. You have to govern that too. Um, so I'm thrilled to have a team from LogicGate here. 
 

We're going to have a nice discussion around AI governance. And some of the challenges organizations have with respect to how and where AI gets used. How does it impact the business? How does it impact security teams? And I presume you guys have had a lot of great conversations leading up to the conference and as well here. 
 

So, um, let's start off with each of you, introduce yourself, your role, what you're up to.  
 

[00:00:49] Matt Kunkel: Yeah. So, well, first of all, Sean, thanks for, thanks for hanging out with us and talking to us. My name is Matt Kunkel. I'm the CEO at LogicGate, one of the co-founders. And prior to LogicGate, I spent over a decade in consulting, building custom GRC solutions for customers.
 

Big banks down to small mom and pops to help them operationalize and automate different aspects of their security, regulatory risk, compliance, uh, programs. And took all those lessons learned and put it into a product, uh, that we now call the Risk Cloud Platform to help organizations operationalize and automate those different aspects. 
 

All the way from big corporations who want it very customized and flexible to where they are on their risk and compliance maturity level and journey, down to smaller SMB folks that are just starting out on their security and risk journey, and they want it out of the box, and they want time to value quickly, and they want it ready to go. So being able to support both sides of the spectrum on that with our platform.
 

[00:01:48] Sean Martin: I love it. I'm excited for this. I'm a geek for risk and broader GRC, so we're gonna have some fun here. Nick?
 

[00:01:56] Nick Kathmann: Yeah, and I'm Nick Kathmann. I'm the CISO here at LogicGate; I've been here for a year and four months, roughly. My background is I've been in cybersecurity for 20-plus years, and tech before that. Um, so prior to coming to LogicGate, I ran, um, security compliance and our MSSP function for a company called Virtustream.
 

We were the cloud division of Dell. Um, so the world's largest SAP and Epic hosting provider. Um, we had three of the world's largest SAP instances and I think all five of the world's largest Epic, um, uh, instances. So highly, highly regulated: over 72 attestations, multiple FedRAMP clouds, DoD clouds, you know, etc.
 

Um, so we ran, like, again, our security compliance of the cloud managed services ourselves. But then we also offered, like, um, security services out to those really large customers. Um, prior to that I ran security compliance for a really large, um, storage cloud. We were the back end for a lot of storage clouds that you would still recognize to this day as daily names.
 

But we ran the storage for them and they ran their compute kind of on us. Um, and prior to that, I was with RSA for a while, running security for products like, uh, Verified by Visa, Mastercard, JCB Secure, et cetera. And going back even further, critical national infrastructure in healthcare. So I've kind of had my hands in a whole bunch of different buckets, but what I'd say is really good is, having worked with so many customers across so many different verticals, I didn't get to see how just one company does cybersecurity.
 

I got to see how 200 companies do security, and work with their CISOs and their leadership teams on how to run that properly.
 

[00:03:25] Sean Martin: This is an important point, because I'm sure a lot of the people listening and watching will resonate with what I'm about to say, and it's this: there's regulation, right?
 

Many of them, depending on where you operate and how you operate and what you're operating for. That can be nuanced for each business, right? Depending on the tech stack you use, the data you collect. And of course the industries and sectors have some commonalities, but there are differences there.
 

And then, then there's this non-regulated side, right? There might be some frameworks you have to follow. There might be some, some other guidelines you need to follow, but then there's also the internal stuff, right? These are our beliefs. These are our values. This is how we want our, our teams to function. This is how we want our partners to interact with us.
 

No regulations for that. It's just, these are policies that we want to adhere to and, and that changes how we deploy our tech and how we deploy our security and how we govern.  
 

[00:04:25] Matt Kunkel: Yeah. And I would argue that that's actually more important. Don't get me wrong, the regulators are great and they're there for a reason, but I would actually argue that, that what you just talked about there last is the most important part. Because at the end of the day,
 

what you're doing, through these security programs, risk programs, controls programs, compliance programs, is building trust with your customers and with your partners. And trust is, I think, one of the biggest assets that you can have with a customer and a partner. And, you know, in this day and age where data is shared so seamlessly back and forth through different platforms and different vendors and different partners, if they don't trust you to be a good steward of their customer data, then they're not going to want to work with you.
 

They're not going to want to renew with you. They're not going to kind of continue the partnership with you. So, Nick jokes about it all the time; he said, Matt, good security pays for itself at the end of the day, which is true. You know, there's, there's so many things that he has done with his programs and his security teams that I can point to and he can point to, and he does when budget season comes up.
 

Very quickly, to say, listen, look at what we did with XYZ partner, with XYZ customer, and how easy it was to onboard them. How the time to value when you're going through a, um, a security assessment, and it's taking days and not weeks and months, and dramatically shortens sales cycles around M&A activities.
 

So there's just, there's a whole host of things that I think more than ever boards and folks like myself, CEOs of companies are starting to realize that security and compliance and risk is actually a strategic business driver and not a cost center within an organization.  
 

[00:06:08] Nick Kathmann: That's exactly it. Really, what you're outlining is the difference between compliance and governance.
 

Compliance being external regulatory or external frameworks, whereas governance is the standards we hold ourselves to. And we like to say, you know, we, we hold ourselves to higher standards, and it shows in the type of customers and the type of industries, the Fortune 100s, the Fortune 10s, that we can go after as a result of this.
 

Um, and really, what it's done is let us level up and run our security not just, you know, internally as we run it, not meeting a minimum bar of what, you know, some compliance framework says we have to do. Because remember, most compliance frameworks learn from the mistake. So somebody already had a major mistake, and then they said, we need to do something about this.
 

How do we fix that? A year and a half later, two years later, it makes its way into a framework, and then it gets pushed down on you that way. We like to stay ahead of the curve and really, you know, bat above our weight from a security governance perspective, so that when the really large banks, the really large financial institutions, whoever, come to us and say, you know, show us what you're doing with this.
 

In a lot of cases, they come away saying, wow, that's really good; we should be doing the same thing. Um, and it closes the deal cycle significantly. It increases the, the deal values significantly, and just makes the whole process so much easier. So yeah, good security pays for itself.
 

[00:07:16] Sean Martin: Absolutely. So the reason I set that stage is because there, there's some things where we don't know what the rules are yet.
 

[00:07:24] Matt Kunkel: Right.  
 

[00:07:24] Sean Martin: There's a big one that everybody's always talking about. So we're going to spend some time talking about a two-letter acronym now that's on the minds of many. They're trying to figure out how to handle it. They don't want to get left behind. They're going to embrace it, right, maybe in some places, and they're too afraid to do something in others, so they want to put some controls in place there.
 

And regardless, it's going to impact a lot of the organization, a lot of the functions, including security. And they haven't figured it out yet. We're talking about AI, right? So, what are some of the conversations that you're having with organizations that you're working with? We need help to understand how we need to look at this, right?
 

How does this map to how we operate in terms of ethics and goals and morals, as well as industry standards and regulations? How does that all apply to us? How can we then take, take some steps to act on what our understanding of it is for our business?
 

[00:08:23] Matt Kunkel: It's a great question, and I think it's one that is probably more top of mind with customers right now than really anything that I've seen over the last 10 years.
 

And I think folks are handling it in, in two ways, right? Which is, way one is the wild, wild west where everyone is implementing AI models. Everyone is building their own models. Everyone is just putting it into their, um, into their programs. Whether that's a product program or internal programs. And that's one. 
 

And there's certainly a lot of risk and challenges with that. And the other way is the absolute opposite of that, which is everyone's locking it down and there's, there's problems in that too, in the fact that they're not innovating fast enough. I think everyone would agree that over the next three, five, 10 years, everything is going to be laden with AI, right? 
 

And that's just going to be the new norm, right? Almost like the internet was back in the nineties. So, um, you need, as an organization, you need a way to, what we call, move with speed and efficiency, but do that in a secure manner, right? And being able to, first, understand, right? Where are all the different aspects of folks around our organization that are using AI?
 

And then, how can we link that? How can we have an approval chain, approval process, and understanding? Because if you can't, if you don't know where it is, then you can't make smart strategic decisions on it. And then when you say, okay, here's what we're doing around AI. Well, how does that link up to the standards? 
 

And the standards are coming, right? The EU, uh, the EU AI Act is out. NIST now has their AI standards with the AI RMF that are coming. And then how do we create, how do you then link that to different policies or controls? You know, even in my organization, a lot of people were excited about ChatGPT and wanted to use it.
 

Nick uses it all over. But six, nine months ago, we didn't have any policies internally. We didn't know what was acceptable and not. We didn't know what our risk tolerance was for AI. So you have to define those and, um, have the company understand where those are, and then be able to link those standards to what you're doing with that.
 

So it's really becoming, I think, a big spider web of the approved, and then, and then third parties too. What are the third parties that we work with? What are their AI acceptable use and acceptable standards? And being able to bring that in. So there's a lot that goes into our new AI governance solution, based on feedback from our customers of: we have the wild west, and we have do nothing, and both of them are wrong, and we need to meet somewhere in the middle to move with speed and efficiency on being able to bring AI, whether it's from a product perspective or whether it's internal operations, to bear with our company and our customers.
 

We also have to do it in a safe, secure manner.  
 

[00:11:02] Nick Kathmann: I think it's quickly becoming, I say it's quickly becoming, the next privacy. You know, we've got the EU AI Act, we've got the U.S. government has their own, the NSA has their own, but now each state is coming out with their own as well. So it's quickly becoming another privacy impact, where you're going to have all these different frameworks to crosswalk and map and make sure your controls work across all of them.
 

So it's going to be really, really hard. But then also, when you look at the impact of AI, it's not just how you use it in your products, or how your sales team can use it to write emails and things like that. It really expands out, you know, as you said, to this third-party risk management, and even beyond, to supply chain.
 

Like, say I go into, say I'm a large company and I'm risk-averse to AI, and I just ban it flat out and block it everywhere. But all of my vendors are taking my data and using it and sending it off, so my data still ends up there anyway. So you have to now start going out and reassessing all of your vendors.
 

And not only are they using it for their own internal use, but are they incorporating it into their products? So now it becomes a third-party risk management issue, a supply chain issue, a use case issue internally and how it benefits you, and just a compliance regulatory headache in chasing all of these emerging standards that, in a lot of cases, conflict with each other.
 

So it really requires a holistic approach to how you, how you manage AI.  
 

[00:12:13] Sean Martin: And so it's, it's easy to say, let's, let's support technology safety with technology. Um, but I think it requires a mindset first, right? Understanding what you're trying to achieve and having that initial assessment of what's where, how's it being used.
 

So do you find organizations are at that stage at the moment or?  
 

[00:12:40] Matt Kunkel: Yeah. I mean, I think, and that's, I think, where people, well, first of all, I think they're not even at that stage. I think a lot, I think most of them, and that's why I think we've gotten a lot of great feedback on our AI governance solution.
 

They don't have a way to manage it, right? They don't have a way to understand just where, where is all the AI being used across an organization. So when you're thinking about these large Fortune-type, multibillion-dollar organizations, there's pockets of AI being used, either within product or within business operations, all over them right now, and they don't understand. Or there's none.
 

And that's a bad thing too, right? 'Cause it's stifling innovation. So really being able to understand where it is, understand what the approval chain is, what the authority is, what are the acceptable use policies of this. To Nick's point, what are our third parties doing around this, and then how does all of that relate to the regulations that are inevitably coming on this.
 

And having, and Nick used the right word here, a holistic approach to your AI governance program. But it starts with understanding, right? You can't do anything if you don't understand, you know, what is the magnitude of the problem. How much are we using, or, or lack of that? How much are we not using AI where we should be using it within organizations?
 

So we, we are just on, you know, the very, very tip, I think, of a very big iceberg, um, of, of AI governance. And I think, you know, if you, if I project out over the next six to 12 to 18 to 24 months, many, many organizations are going to have to, whether it's with the Risk Cloud AI governance solution or something else, enact AI governance within their organizations.
 

[00:14:18] Nick Kathmann: I would say even worse than not using it at all is your employees are using it and you don't know. So now you've got the whole shadow AI aspect of it. So I was joking with some of the people that worked with me before, I now call it GRC-P-AI, because now we're going to have privacy and AI as well to add in.
 

So our acronym's just going to keep growing. Yeah. And, uh, so, you know, now you've got, essentially, even if you ban it, your employees are using it. Whether they're using their personal accounts on their phones and, you know, going back and forth, or whatever it is. And now you can OCR from your phone directly off your computer screen.
 

It's really, really easy. So, they are using it. So, it's understanding what they're using and how they're using it. And then just also reading the different, like, just read the terms of use for all the different AI use cases. So you can have six different vendors that have implemented ChatGPT from, from OpenAI, and, you know, that is very well defined on their website.
 

Their, their, their, um, terms of service. But then how each of those six vendors then implements it, some of them will actually tell you: every prompt you send to us is ours to keep and use how we want to in perpetuity. Now, if you're sending your code through that, that means they own your source code, and they own all of your IP.
 

So you have to be really careful about understanding what they're using it for, how it's being used, for what level of data inside the organization, what the impact of that is. It's really like taking your asset classification and all of that, and making sure that it applies very heavily to all those third parties and all those use cases that you have, and tracking them very closely, and having some type of review and approval process before it gets to that point.
 

[00:15:44] Sean Martin: So talk to me about the platform and solution that you've built. Give me a story of a customer going, wow, I had no idea, or, I expected this and something else surfaced. Give me a story or two.
 

[00:16:04] Matt Kunkel: Well, I'll go first and then I'll let you jump in. I think one of the largest grocery chains in the world is implementing it right now. 
 

Um, and it was something that, and again, we just launched this, right? We've only launched this less than a month ago, because of the amazing customer feedback that we got around, hey, we need help in this area. Like, there were many, many, you know, we categorize them as enterprise, multiple-billion-dollar software, uh, banking, insurance companies.
 

And the other thing is, this cuts across every industry. This isn't like, hey, AI is just for banking, or it's just for pharma. Every single industry is going to need a way to understand, manage, approve, and govern their use of AI within an organization. Right. And we were, I think, very uniquely qualified to very quickly
 

be able to spin up a cohort of applications. So obviously it's about collecting where, what we're doing with AI internally and the approvals around that and the documentation around that. But it's also linking that to the regulations that are coming out. It's linking that to what third, then what third parties are doing and being able to understand our third party posture with AI, linking that to the regulations and then what our own internal controls are and policy. 
 

And how are we making sure that our internal employees are reading and attesting to those, and then linking that in. So it's really a very nice spider web of information within these different applications, but it starts with just the core of understanding where the AI use is within the organization, understanding do we want to approve or not approve that, and really, even more than the technology, it starts at one level up, which is the culture. 
 

What is the culture of the organization around our AI use? Are we, are we at a spot as an organization where we think that there's just too much risk, because there's too many unknowns yet with it? And the culture here is, we're going to really shut it down, and only the highest priority things are getting approved?
 

Or are we at a spot culturally where we really believe in what AI will do for that company or that industry or the organization, the community? And we want to open everything up and we're going to be a little, our risk appetite is going to be a little more with that.  
 

[00:18:16] Nick Kathmann: I think the, the big thing here is really it's tying it all together and figuring it out.
 

So I'd say a lot of customers, as we were exploring this with them and talking to them about it, they were like, oh, in my head, all I thought of AI governance as was just, like, use case management. Essentially, like a, like a, submit this and do a security review of it, and that's it. They didn't think about, how do we tie this into cyber risk?
 

How do we tie this into enterprise risk? How do we tie this into TPRM? How do we tie this into all these other frameworks that you need as well? So I think the biggest thing that we've found is customers are really, really appreciating our thorough approach to it, to where what you identify from, say, a
 

third-party, like, AI question actually is a cyber risk against that particular use case. And if you're using it internally and you define a new risk in it, it goes into your enterprise risk register, or your AI risk, your cyber risk register. So really, it's just like everything else. It's just another aspect of security that we have to now maintain.
 

It just comes in under a different guise, and it comes in so quickly, and it comes in so rapidly. So, you know, what I'd say is, you know, if you're an expert on AI, any of the new AI, right now, in three months you'll be behind. You'll be way behind if you don't keep up with it. So not only do you have to get your, you know, get your generative AI governance in place, but the cadence has to be much faster.
 

You can't just do an annual review like you would do for normal use cases. So what we've done is, now that you can tie it into the risk register, you can put expirations on risks, so that it'll automatically send out and say, you need to review this, uh, on this cadence, to understand, like, what changed?
 

Did something change? Did they change, use a new model? Did they go from GPT-3.5 to 4? Did they change how it's all set up? Um,
 

[00:19:52] Sean Martin: Yeah. Yeah.
 

[00:19:53] Nick Kathmann: So it really, it really allows you to speed up that cadence and keep up with the emerging technologies, until it becomes stable enough that you start backing it off and throttle that, uh, throttle how fast you need to be, based on where the industry is at the time.
 

[00:20:06] Sean Martin: Nice. That's exciting. Yeah. I mean, we, we just scratched the surface. Yeah, there's so much to it. I'm sure we could talk for hours on this topic, and maybe we will. I'd love to do that. Um, Matt, Nick, it's been great chatting with you. Likewise, Sean. Congratulations on the launch. I hope to hear good, good stories of positive impacts.
 

Um, obviously visibility and control and governance on this stuff is key.  
 

[00:20:32] Matt Kunkel: Excited to do it again next year to tell you how it goes. Alright. Excellent. Thank you very much for having us.  
 

[00:20:37] Sean Martin: Everybody, please do, uh, connect with Matt and Nick and the LogicGate team. And, uh, check out the AI governance solution.
 

And everybody listening and watching, thanks for joining. We are from RSA Conference. More to come. See you soon.