Redefining CyberSecurity

Building and Securing Intelligent Workflows: Why Your AI Strategy Needs Agentic AI Threat Modeling and a Zero Trust Mindset | A Conversation with Ken Huang | Redefining CyberSecurity with Sean Martin

Episode Summary

Ken Huang, Co-Chair of the Cloud Security Alliance AI Working Group, explains how agentic AI is transforming enterprise workflows, redefining cybersecurity, and demanding a new approach to governance and risk. This episode breaks down what leaders across development, IT, and security need to know now to responsibly adopt and secure these powerful systems.

Episode Notes

⬥GUEST⬥

Ken Huang, Co-Chair, AI Safety Working Groups at Cloud Security Alliance | On LinkedIn: https://www.linkedin.com/in/kenhuang8/

⬥HOST⬥

Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

⬥EPISODE NOTES⬥

In this episode of Redefining CyberSecurity, host Sean Martin speaks with Ken Huang, Co-Chair of the Cloud Security Alliance (CSA) AI Working Group and author of several books including Generative AI Security and the upcoming Agentic AI: Theories and Practices. The conversation centers on what agentic AI is, how it is being implemented, and what security, development, and business leaders need to consider as adoption grows.

Agentic AI refers to systems that can autonomously plan, execute, and adapt tasks using large language models (LLMs) and integrated tools. Unlike traditional chatbots, agentic systems handle multi-step workflows, delegate tasks to specialized agents, and dynamically respond to inputs using tools like vector databases or APIs. This creates new possibilities for business automation but also introduces complex security and governance challenges.
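The plan-execute-adapt loop described above can be sketched in a few lines of Python. This is purely illustrative: the planner, tool names, and task structure are hypothetical stand-ins, not any specific framework's API; real systems like AutoGen or CrewAI layer LLM-driven planning, memory, and delegation on top of this basic shape.

```python
# Minimal sketch of an agentic loop: plan, pick a tool, execute, adapt.
# All names here are hypothetical; real agent frameworks add LLM-driven
# planning, memory, and multi-agent delegation on top of this shape.

def plan(goal: str) -> list[dict]:
    # A real planner would ask an LLM to decompose the goal into steps.
    # Here we hard-code a two-step plan for illustration.
    return [
        {"tool": "search", "input": goal},
        {"tool": "summarize", "input": None},  # filled from the prior step
    ]

TOOLS = {
    "search": lambda q: f"results for '{q}'",
    "summarize": lambda text: f"summary of: {text}",
}

def run_agent(goal: str) -> str:
    context = None
    for step in plan(goal):
        tool = TOOLS[step["tool"]]
        arg = step["input"] if step["input"] is not None else context
        context = tool(arg)  # execute and carry the result forward
    return context

print(run_agent("agentic AI security"))
```

The key property is that control flow is data-driven: each step's output becomes the next step's input, which is what distinguishes an agent from a single-shot chatbot call.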

Practical Applications and Emerging Use Cases

Ken outlines current use cases where agentic AI is being applied: startups using agentic models to support scientific research, enterprise tools like Salesforce’s AgentForce automating workflows, and internal chatbots acting as co-workers by tapping into proprietary data. As agentic AI matures, these systems may manage travel bookings, orchestrate ticketing operations, or even assist in robotic engineering—all with minimal human intervention.

Implications for Development and Security Teams

Development teams adopting agentic AI frameworks—such as AutoGen or CrewAI—must recognize that most do not come with out-of-the-box security controls. Ken emphasizes the need for SDKs that add authentication, monitoring, and access controls. For IT and security operations, agentic systems challenge traditional boundaries; agents often span across cloud environments, demanding a zero-trust mindset and dynamic policy enforcement.
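To make the gap concrete, here is one way a team might bolt authentication, access control, and audit logging onto an agent's tool calls, since the frameworks largely leave this to the developer. The decorator, policy table, and agent identities are invented for illustration; this is a sketch of the pattern, not any real SDK.

```python
# Hypothetical sketch of adding authentication, access control, and
# monitoring around an agent's tool calls. Policy shape and names are
# illustrative only -- not a real framework's API.

import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Per-agent allow-list: which tools each agent identity may invoke.
POLICY = {"research-agent": {"search"}, "ops-agent": {"search", "deploy"}}

def secured(tool_name):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(agent_id, *args, **kwargs):
            if tool_name not in POLICY.get(agent_id, set()):
                log.warning("denied %s -> %s", agent_id, tool_name)
                raise PermissionError(f"{agent_id} may not call {tool_name}")
            log.info("allowed %s -> %s", agent_id, tool_name)  # audit trail
            return fn(*args, **kwargs)
        return inner
    return wrap

@secured("deploy")
def deploy(service: str) -> str:
    return f"deployed {service}"
```

Here `deploy("ops-agent", "billing")` succeeds and is logged, while `deploy("research-agent", "billing")` raises `PermissionError`, which is the zero-trust posture in miniature: every call is checked against policy rather than trusted by network position.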

Security leaders are urged to rethink their programs. Agentic systems must be validated for accuracy, reliability, and risk—especially when multiple agents operate together. Threat modeling and continuous risk assessment are no longer optional. Enterprises are encouraged to start small: deploy a single-agent system, understand the workflow, validate security controls, and scale as needed.

The Call for Collaboration and Mindset Shift

Agentic AI isn’t just a technological shift—it requires a cultural one. Huang recommends cross-functional engagement and alignment with working groups at CSA, OWASP, and other communities to build resilient frameworks and avoid duplicated effort. Zero Trust becomes more than an architecture—it becomes a guiding principle for how agentic AI is developed, deployed, and defended.

⬥SPONSORS⬥

LevelBlue: https://itspm.ag/attcybersecurity-3jdk3

ThreatLocker: https://itspm.ag/threatlocker-r974

⬥RESOURCES⬥

BOOK | Generative AI Security: https://link.springer.com/book/10.1007/978-3-031-54252-7

BOOK | Agentic AI: Theories and Practices, to be published in August by Springer: https://link.springer.com/book/9783031900259

BOOK | The Handbook of CAIO (with a business focus): https://www.amazon.com/Handbook-Chief-AI-Officers-Revolution/dp/B0DFYNXGMR

More books at Amazon, including books published by Cambridge University Press and John Wiley, etc.: https://www.amazon.com/stores/Ken-Huang/author/B0D3J7L7GN

Video Course Mentioned During this Episode: "Generative AI for Cybersecurity" video course by EC-Council, rated an average of 5 stars by 255 reviewers: https://codered.eccouncil.org/course/generative-ai-for-cybersecurity-course?logged=false

Podcast: The 2025 OWASP Top 10 for LLMs: What’s Changed and Why It Matters | A Conversation with Sandy Dunn and Rock Lambros

⬥ADDITIONAL INFORMATION⬥

✨ More Redefining CyberSecurity Podcast: 

🎧 https://www.seanmartin.com/redefining-cybersecurity-podcast

Redefining CyberSecurity Podcast on YouTube:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

Interested in sponsoring this show with a podcast ad placement? Learn more:

👉 https://itspm.ag/podadplc

Episode Transcription

Building and Securing Intelligent Workflows: Why Your AI Strategy Needs Agentic AI Threat Modeling and a Zero Trust Mindset | A Conversation with Ken Huang | Redefining CyberSecurity with Sean Martin

[00:00:00] Sean Martin: And hello everybody. You're very welcome to a new episode of Redefining Cybersecurity. I'm Sean Martin, your host, where if you listen to the show, you know, I get to talk to cool people about cool topics related to cybersecurity, the impact it can have on business and the opportunity security teams have to not just protect the business, but actually help grow revenue and protect that revenue along the way. 
 

And one topic that's clearly been on everybody's mind for quite some time now, and doesn't seem to be slowing down, is the idea of LLMs and artificial intelligence and generative AI. And there's, I don't know if it's a new term, a new kid on the block, but agentic AI has been something that's come up quite a bit as well. 
 

And there's a lot of work across a few different working groups, kind of understanding what that is and how it relates to cybersecurity to protect the business, and cybersecurity to define programs. And I'm thrilled to have Ken Huang on to help me, and our listeners, understand a little bit more what this means to them, their teams, and their programs. 
 

Ken, thanks for joining me. 
 

[00:01:08] Ken Huang: Thank you, Sean. 
 

[00:01:10] Sean Martin: It's a pleasure to connect with you and have this chat. You've done so much work here. I could just list it, but I'd actually like you to describe the working groups you're part of with CSA and OWASP, and some of the books you've authored, to paint a picture of the work you've been doing here. 
 

I know you're part of the OpenAI Forum as well, contributing to that space. So give us a little background on what you're doing and some of the things you've been working on with AI broadly, and then agentic AI as well. 
 

[00:01:44] Ken Huang: Sure. Thank you, Sean. Yeah, I regard myself as an AI cybersecurity researcher and book author. I co-chair the Cloud Security Alliance AI Working Group. We produced the taxonomy and are currently working on the AI control framework. I lead the auditing guidelines work stream. And I'm also co-chair of the Organizational Responsibility Working Group within CSA, where we publish white papers. 
 

Last year, the first white paper was on core security. The second one is the GRC side, and the last one is application and supply chain responsibility. And now we are working on the agentic AI red teaming guide within this organization. And we kind of partner with the OWASP AI Exchange. So that's my involvement with the Cloud Security Alliance. 
 

I also see it's very important that we work across different cybersecurity communities, so we know what people are working on and we don't duplicate effort. So I actually contributed to SANS; recently they have a security control guide, and I provided lots of comments to the SANS guide. 
 

And I'm also a core member of the OWASP Top 10 for Large Language Models and contribute a lot to their work. Some of that work includes the Red Teaming Guide and also the threats and mitigations for agentic AI. Currently I'm leading a work stream called Multi-Agent Threat Modeling, and we will close the comments this week, actually. 

It has been going since last year; it's a lot of work. And yesterday I wrote a script in Google Docs to see how many comments we got. We got more than 900 comments just for that document across different revisions. So it's coming out very well. In terms of books, yeah, I wrote a book 
 

on generative AI security. It's been quite popular; on the Springer website alone it has something like 17,000 paid accesses. You might get 17,000 likes on LinkedIn, but I never get that kind of likes. Springer is more academic and professional, but if people actually paid for it, I think that's good validation for this work. 
 

Another book is agentic AI related. It's called Agentic AI: Theories and Practices. I started writing this book early last year and submitted the final draft to Springer as well. That book will be published in August this year, so it's in production. It could be another huge hit. I have another hit of a book. 
 

It's called Beyond AI, about ChatGPT and Web3, so it's like the intersection. That book has something like 28,000 paid accesses on Springer. Anyway, if I really need to cite another book, it's published by Cambridge University Press; it's called the self-sovereign internet and AI. So yeah, multiple books. 
 

But I think my real focus, the majority of my focus, is really on security, and I have DistributedApps.ai. So we do red teaming, risk management assessment, and also chief AI officer kind of roadmaps for some startups and some enterprises. It's a lot of fun. 
 

[00:06:06] Sean Martin: A lot of fun. You've been busy, Ken. And it's my pleasure to let everybody know I'll share the links to your books and the resources you mentioned here, all in the spirit of learning. So great job doing the work on the books, and hopefully people continue to read them and become better informed and make better decisions for themselves, their career, the business, and what have you. 

I want to start off with a fairly broad question, but then I'm going to focus it on my audience who listens to the show. I think folks have probably heard the term agentic AI and probably have some base level of understanding of what it means, if they've looked into it at all. 
 

What I want is your perspective on what the organization needs to know about what it is, so a definition of it, and maybe a brief word on what it means to developers and data teams, operations and IT operations and security, and any other roles you think really need to understand what's going on here. 
 

That'd be a great way to start, I think. 
 

[00:07:16] Ken Huang: Yeah, that's good. So since the ChatGPT moment, the initial applications were really chatbots, and people trying to use retrieval-augmented generation over databases to build AI applications; that eventually led to agentic AI. There's no standard industry definition yet, but I think there is consensus in terms of what components agentic AI is composed of. 
 

It usually needs planning capability, or reasoning. Like, if you give it a task, it will break the task down into small tasks, and then each task can be allotted to a different agent. And then each agent has its own task to do with tools. A tool could be, like, reading files, or searching the web, or maybe calling the vector database. 
 

So, different tools, and then taking action means actually using the tool to produce the result, right? And the main reason this is good, according to research, is that especially a multi-agent system can perform well in terms of its reasoning and grounding. Grounding means it's actually grounded in the latest information, and accuracy. 
 

Right. So from this side, it really unleashes enterprise applications. Previously, the application logic was hard-coded in program logic, using Java, Python, COBOL, right, to program your application or SaaS application. Now with agentic AI, all of this is dynamic. You don't have to code it; even if you want code, it will generate the code on the fly to execute your business logic. So we are in the first inning of agentic AI, right? It's still early, but we already see lots of good results from it. For example, OpenAI Deep Research. Are you still here? Okay, 
 

[00:09:41] Sean Martin: I think you froze there for a second, but we're still good. Keep going. 
 

[00:09:44] Ken Huang: Okay. So yeah, the internet sometimes is not cooperating. So anyway, yeah. 
 

There are examples like the OpenAI Operator, or Deep Research; that's agentic AI being used. Recently a Chinese team built Manus; many people really used it and found it can build applications and do some scientific research. So it already shows some signs of use. And even Salesforce has Agentforce. 
 

It's basically being used in different workflow applications. And the CEO even said they would not hire new engineers anymore, right? Maybe that's the marketing team talking, but certainly coding agents are another example; they really generate code. So it will replace some of the junior software engineers, or at least make software engineers more productive. 

So there's a lot. Regarding your question about the impact on developers: from this agentic AI perspective, the developer is an AI engineer, right? So AI engineers need to use different frameworks, like CrewAI, AutoGen, lots of open source. There are at least, like, 50 open-source agentic AI frameworks now. 

I also developed my own agentic AI framework on my GitHub. My framework is a multi-agent framework, but it can instantiate the agents based on a JSON file, just a configuration file, and all the interaction between the agents goes through the policy in the JSON file. 
 

So you don't have to code it, like to say, okay, I need a new agent or whatever, right? It's just distributed, so if you want to update it, you can. So there are so many. IBM also recently has the Bee AI agent framework, right? And there are many protocols. One thing I need to mention is the MCP protocol from Anthropic's Claude; it's gaining momentum. 
 

And actually, within OWASP, we're trying to threat model it. So if you want to use the MCP protocol to build your agentic AI applications, what could the potential threats be, right? So we actually list all these threats. This document will be published at the RSA Conference; I will be speaking at RSA Conference as well. 
 

So yeah, from the developer perspective: if they want to use CrewAI, AutoGen, or these frameworks, these frameworks do have a little bit of security features, in terms of how you authenticate, but that's it. There's no fine-grained access control, right? So one thing is, I built a zero-trust agent framework; I call it the zero-trust agent, and it's also open source, right? 
 

That's what I'm talking about at the RSA conference: people using AutoGen can just use my SDK to introduce authentication, fine-grained access, even monitoring into the code. So that's for developers. If they don't use the open source, they need to develop it themselves, but they have to make sure the framework is secure out of the box. 

It's not, right? The idea of AutoGen, CrewAI, and the like: they are really good, they enable developers to build multi-agent systems, and all the security is at the discretion of the developer, how they integrate with a security framework. So yes, this SDK, I think, is important. 
 

I will talk about this at RSA Conference. So that's developers. You also mentioned the data team. The data teams do play a role in this agentic AI, if we want to fine-tune the model using private enterprise data, or if we want to embed the data in the vector database, right? 
 

Because they will be leveraged by the agentic AI. So in this case, they want to make sure the controls are in place in terms of provenance of data and access control of data, whether the data needs to be encrypted in the vector database, and so on. I have to mention that in my book, Generative AI Security, there's a whole chapter specifically on data security, covering all the different aspects, right? 
 

That's applicable for the data team. You also mentioned the IT operations team. So, how do you deploy your agentic AI in the cloud, or using containers? If there's an API, how do you secure the API: use an API gateway, or maybe an LLM gateway for prompt injection, right? So there are different options. 
 

And the key thing really, again, goes back to zero trust, because there's no boundary. In the older days, we had the boundary, we had the VPN, right? With agents, that's no longer the case. You have one agent in AWS, another agent maybe in GCP or Azure. They communicate with each other, so they cross boundaries, right? 
 

So you cannot really put a box around it. This is the way agents are moving, right, especially multi-agent systems. So how do you make sure it works and is secure? I published a research paper, well, a blog post, I would say, with the Cloud Security Alliance about identity management for agents. 
 

Because currently, if you use SAML or OAuth for identity, authentication, and access control for your agent, that's static, and static is not good enough, because agents really need some autonomy, so you're kind of limiting them. How do you unleash the power but also keep it secure? Then you have to leverage dynamic policy, right? But still zero trust. So this is more from IT operations. 

I think you also mentioned security operations. So, in the book that will be published in August, I do mention that security operations will be done by agents, because it's very important. We can never catch up on the defense side because there are so many AI risks, so many, right? And we are short-staffed, and people don't have this knowledge. I have an EC-Council video course to train cybersecurity people and AI people on the AI security side. People who are interested can take a look. I think I get like five stars; more than 200 people reviewed it and gave five stars, which is good. 

But the key thing is, eventually we need agentic AI to be a defensive agent, to catch up with the offensive side, right? So there's one chapter on that. From the security operations side, we need to deploy this kind of agent, and I know there are a few companies working on that, on the detection side. 
 

It's very interesting. But how do you trust it, right? Again, right? And how do you integrate it with your traditional security infrastructure? You still need CrowdStrike, you still need Microsoft Defender, you still need Tanium, right? You still need the SAST tool, Snyk, right, or different tools like Burp Suite. You still need all of this, but I think agentic AI gives you an add-on, an enhancement, on the defense side. There's also a chapter I wrote from the offensive side, right? That may be too much to talk about. So that's the agentic AI side of your question about developers, IT ops, SecOps, and also the data team. 
 

Is there anything else I need to cover, or... 
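[The JSON-configured multi-agent framework Ken describes above might look roughly like this sketch. The schema, agent names, and policy keys are invented for illustration; his actual GitHub framework will differ.]

```python
# Illustrative sketch of driving a multi-agent system from a JSON
# configuration instead of code, as Ken describes. The schema here is
# invented for illustration, not taken from his framework.

import json

CONFIG = json.loads("""
{
  "agents": [
    {"name": "planner", "can_message": ["researcher"]},
    {"name": "researcher", "can_message": []}
  ]
}
""")

class Agent:
    def __init__(self, name: str, can_message: list):
        self.name, self.can_message = name, set(can_message)

    def send(self, other: "Agent", msg: str) -> str:
        # Inter-agent communication is gated by the JSON policy,
        # so adding or restricting agents is a config change, not a code change.
        if other.name not in self.can_message:
            raise PermissionError(f"{self.name} cannot message {other.name}")
        return f"{self.name} -> {other.name}: {msg}"

agents = {a["name"]: Agent(a["name"], a["can_message"]) for a in CONFIG["agents"]}
print(agents["planner"].send(agents["researcher"], "find recent papers"))
```

Here the planner may message the researcher, but the reverse raises `PermissionError`, because the policy in the configuration file, not the code, decides who can talk to whom.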
 

[00:19:30] Sean Martin: No, I think that's it. And I wanted to let you go and paint that picture. I'm glad we ended with detection and response, because I'm going to lead us back there. But I'm going to go back to the beginning, to talk about even before developers. So as organizations are defining this, you talk about Salesforce plugging this in to drive workflows. I'm picturing if-this-then-that; I'm picturing things like Zapier, where you can orchestrate things. But those are no-code, low-code systems, right? Then there are also off-the-shelf engineered applications, or custom-built bespoke solutions. 
 

And so I want to go to, maybe, yeah, if you have some examples of organizations using this. Because my picture of this is that the agents are purpose-built to do specific things, right? Go off, research, either use some data it has available or was given, or use information that's available publicly, do some magic, and come back with a response: either a yes or a no, or some equation, or some content or text, stories, whatever it is. And code, it sounds like, but I want to keep that one separate. I guess my point is, it's a way for organizations to create specific tasks, and the agents become smarter, perhaps, with more context, grounded, as you put it, with the latest information. 
 

Can you describe a scenario or two where that's happening, so people can visualize it better than I just described, in terms of what it means for the business? 
 

[00:21:20] Ken Huang: Yeah. So, if I understand your question, you're more talking about the business case, not the security side? 
 

[00:21:30] Sean Martin: Correct. Yeah. Broader. Yeah, the broader business. 
 

[00:21:32] Ken Huang: Yeah. So I would say currently the majority of use is still chatbots, but behind the chatbot you can have agentic AI as well, right? One example: there is a robotics startup, right? 
 

They're using a chatbot, but behind the scenes it's about what kind of robots they build internally. They have a vector database with the most recent research in robotic AI, and this is helping the team brainstorm with the chatbot; it's kind of like a coworker, right? 
 

So that's one of the use cases. With Agentforce, I think people are also building agents themselves to do some workflow processes. And ServiceNow is doing the same thing. It's more like, if you have a ticket, you can ask the agent; of course, you have to have the access controls in order, right? 
 

The agent may be able to get to your Entra ID, or maybe connect to your different databases to do the work for you, right? So that's coming. But I have to say 2025 is the genesis; the whole revolution will take at least 10 years. And if we really want a chatbot agent that can book the air ticket for you, or plan the hotel and the travel for you, that may be in two years, right? Because there are still some security issues; that's why we focus on the threat modeling side of it. Nowadays, even using Zapier is okay; you can use Zapier with agents as well. MCP, I think, is kind of an abstraction layer on top of Zapier, right? 
 

So the MCP server encapsulates the API, and you have an MCP server that a client can use, right? So this is gaining some momentum. Just to go back to the business side: enterprise leaders should investigate this, and they can talk to us. We offer a kind of agentic AI audit, to see what kind of workflow stream or process you can use, based on your particular requirements and also your particular IT infrastructure. 
 

Right? Some may not be ready yet, but some may be a quick win, right? And we provide a discount for the audit as well. Yep. 
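[The abstraction-layer idea Ken describes, a server encapsulating an API behind a uniform tool interface that any agent client can discover and call, can be sketched like this. This is not the real MCP wire protocol, just the shape of the abstraction; the server class and the wrapped ticketing API are hypothetical.]

```python
# A toy sketch of the abstraction Ken describes: a server wraps an
# underlying API as named "tools" that any agent client can discover and
# call uniformly. This is NOT the real MCP protocol, only its shape.

class ToolServer:
    def __init__(self):
        self._tools = {}

    def register(self, name, fn, description):
        self._tools[name] = (fn, description)

    def list_tools(self):
        # Clients discover capabilities instead of hard-coding API calls.
        return {name: desc for name, (_, desc) in self._tools.items()}

    def call(self, name, **kwargs):
        fn, _ = self._tools[name]
        return fn(**kwargs)

server = ToolServer()
# Hypothetical underlying ticketing API, exposed as a tool:
server.register("create_ticket",
                lambda title: {"id": 1, "title": title},
                "Open a support ticket")

print(server.list_tools())
print(server.call("create_ticket", title="VPN down"))
```

The value of the layer is that the agent only speaks one protocol (discover, then call), while the server hides each vendor API behind it, which is why a single client can drive Zapier-style automations, ticketing, or databases interchangeably.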
 

[00:24:46] Sean Martin: I, um, my head's exploding right now, because there are so many places to go. I'm going to stick with this flow of: we build something, we validate it, we deploy it, and it gets used. Having roots many years ago in quality assurance, and then what was DevSecOps before it was called that, and I've said this before on many, many episodes, the introduction of AI, LLMs and generative AI, and especially when we talk about agentic AI... 

To me, it seems the possibilities for quality assurance validation are endless, and equally for security. Now, perhaps with agentic AI, we can actually encapsulate some of this stuff. And maybe, I'm thinking here, to your point, using agentic AI to validate the quality, not just the response, but the quality of the response, inline, and then perhaps also the security of the thing, and the security of the inputs and the outputs, wrapping it all together. 
 

So I'm kind of just thinking out loud here. I guess my question to you is: do you see how this might look altogether, in terms of how we actually test for accuracy of whatever the agent is ingesting and then responding with? I mean, if we're starting to talk about multiple agents in complex workflows, how do we validate that, where the path may change because the agent says it needs to? How do we get a handle on this? 
 

[00:26:38] Ken Huang: Right, right. I think it's a very good question. I think it ties back to the title of your podcast, Redefining Cybersecurity. The whole of cybersecurity has been evolving over the past, like, 20, 30 years, very gradually. But now it's really at the tipping point, or inflection point, where we really need to redefine it because of agentic AI. 
 

And you mentioned using agentic AI to be a judge, to validate the output, the response, and also the process, the planning process. That's actually some work that researchers have been doing. Still, we are not at the point where we can fully trust the LLM as a judge, or agentic AI as a judge; we are not at that point yet, but at least it helps, right? It can help do that. And also, using generative AI to label data is actually proving to be very efficient, because if you want to do attribute-based access control, you do need to tag or label the data, just from the security classification perspective. 
 

Even the Department of Defense has research on this, right? They published a paper on arXiv. So that's being done. But the whole thing is, I think the majority of people are still not waking up yet. And I actually documented this in the Generative AI Security book: if you don't change your current security program based on generative AI and agentic AI, you will get hacked, right? You have to extend it; you have to revisit it. That includes the AI usage policy, and that's only part of it. What is your policy, how do you use it, how do you do all the validation and testing you mentioned, incident response, right? 
 

It all changes, and what are the procedures, right? I have very detailed steps there; people should take a look. 
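[The attribute-based access control idea Ken mentions, where data carries classification labels (possibly assigned by a generative-AI labeler) and access is decided by comparing attributes, reduces to a check like this. The labels and clearance levels are invented for illustration.]

```python
# Toy attribute-based access control (ABAC) check of the kind Ken
# describes: once data is tagged with a classification label, access is
# decided by comparing subject attributes against the label. The label
# taxonomy here is invented for illustration.

LEVELS = {"public": 0, "internal": 1, "confidential": 2}

def can_read(subject_clearance: str, data_label: str) -> bool:
    # Allow access only when clearance meets or exceeds the data's label.
    return LEVELS[subject_clearance] >= LEVELS[data_label]

assert can_read("confidential", "internal")
assert not can_read("public", "confidential")
```

This is why labeling matters: without a `data_label` on every record, there is nothing for the policy to compare against, so the (human or AI) tagging step is a prerequisite for ABAC, not an optimization.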
 

[00:29:24] Sean Martin: Is that in the guide? Because I was going to ask you about the guide. Is that the responsibility guide? 
 

[00:29:30] Ken Huang: Yeah, the responsibility guide does not touch the security program side. That's in my book, published by Springer; it's a different set of things. Chapter two or three basically talks about the landscape of generative AI security and the security program, how the enterprise needs to augment it. You don't have to erase your whole security program and delete it; you need... 
 

[00:30:10] Sean Martin: Well, I wanted to ask you about that, because I'm a CISO listening to this, or a security leader responsible for infrastructure security or incident response in the SOC, and I'm trying to think: how do I make sense of this? I have a program I have to not let fall apart. You say augment my current program, whatever piece I own. 
 

Do we not need to maybe build something in parallel? I'm just wondering: in order to get this right, do we need to not just augment and tack on more, but, to your point, actually redefine and, in parallel, start with something? And if that's the case, where do teams start? Start with the CISO? If that's the case, what does a CISO need to do to actually get started with something like that? 
 

[00:31:07] Ken Huang: Yeah, so some of the people who talk with me, some clients, some just in my network, they're from finance, the airline industry, right? I cannot disclose the names. They're actually building the generative AI security program in parallel, focused on data security, model security, MLSecOps, red teaming, all of this as a whole suite of things. 
 

One thing I can say that is publicly available is Microsoft. It has a Responsible AI council, right, under the executive VP; I think Brad Smith is the name. And within that there is a red teaming guide, right? The red teaming, and also: whenever you are building an AI application, you first have to go through the impact analysis, the Responsible AI impact analysis. 
 

Right. And of course, it's really based on a risk management framework. They also have their old risk management framework, which they kind of fine-tuned in collaboration with the Responsible AI work and the Microsoft Research team. So yeah, this is maybe the big-enterprise approach, but for a small enterprise, maybe for now you just augment what you have, right? 
 

You don't have the team, like a few hundred people, to do all of this, right? So yeah, it really depends. 
 

[00:33:07] Sean Martin: Well, maybe it starts sooner, because you mentioned risk management, and I'm a nerd for risk management. And I hate to even suggest that one way to look at it is to become the department of no, but I think there's a need for defining the scope of what this stuff can do. 
 

Because, to my earlier point, if it can do anything, it can produce anything. Even just from a quality perspective, you might be running around with your hair on fire; from a security perspective, you might be chasing stuff that you shouldn't have to chase. Is there a way to guide the business to kind of scope what they're building, so there are guardrails, so the QA teams aren't trying to test every potential case and security teams aren't chasing every potential vulnerability and attack? What are your views on risk management in this regard, for guardrails, let's say? 
 

[00:34:13] Ken Huang: Yes. So my view of risk management is that the business always has to justify the application, right? The functionality. If the business really needs it, then you do risk management to see what the risk is and what it would cost. Sometimes you just bite the bullet and say, okay, we take the risk, right?
 

One thing is, we certainly need to look at regulation as well, right? The EU AI Act has prohibited uses; if you want to do facial recognition on the public, that's prohibited. And of course, the previous AI executive order was rescinded, and the new executive order from Trump is more pro-innovation.


But still, I think for people who really want to develop their applications, using the EU AI Act as a reference is good. Look at the business justification for the functionality, then do the threat modeling for the application.


What are the touch points? Agents especially touch many different points. What could the threats be? You rate which threats are high, medium, or low, and which need to be addressed. That can then be used by the development team to build security controls against those threats, and it can also be used by the red teaming, security, and QA teams to test it.


So I think the key idea is that risk management should really leverage a framework like the NIST AI RMF. You first map your threats, then you measure them, then you manage and govern them. Govern is more than manage, right?
 

You manage, but eventually you have to govern. So there is this kind of framework, and I think it could be useful, especially in the AI space. We also publish on this: in the CSA's AI organizational responsibility working group, we do have a risk management section.
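The map, measure, and manage/govern flow Ken describes can be sketched as a minimal threat register. This is an illustrative sketch only, not the NIST AI RMF or the CSA guide itself; all names (the `Threat` type, the severity levels, the example agent touch points) are hypothetical.

```python
# Minimal sketch of the map -> measure -> manage flow described above.
# All touch points and names here are hypothetical illustrations.
from dataclasses import dataclass

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2}

@dataclass
class Threat:
    touch_point: str   # where the agent interacts with a system
    description: str
    severity: str      # "high", "medium", or "low"

def map_threats() -> list[Threat]:
    """Map: enumerate the touch points an agent workflow exposes."""
    return [
        Threat("vector database", "poisoned documents steer retrieval", "high"),
        Threat("payment API", "agent issues refunds without approval", "high"),
        Threat("chat interface", "prompt injection via user input", "medium"),
        Threat("logging sink", "sensitive data written to logs", "low"),
    ]

def measure(threats: list[Threat], floor: str = "medium") -> list[Threat]:
    """Measure: keep the threats at or above the agreed severity floor."""
    return [t for t in threats if SEVERITY_RANK[t.severity] >= SEVERITY_RANK[floor]]

def manage(threats: list[Threat]) -> list[str]:
    """Manage/govern: turn each retained threat into a control backlog item
    for developers and a test target for red teaming and QA."""
    return [f"Add control + red-team test for {t.touch_point}: {t.description}"
            for t in sorted(threats, key=lambda t: -SEVERITY_RANK[t.severity])]

backlog = manage(measure(map_threats()))
for item in backlog:
    print(item)
```

The point of the sketch is the hand-off Ken mentions: the same rated list drives both the development team's security controls and the red team's test plan.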
 

[00:36:59] Sean Martin: Perfect. Fantastic. Well, as we wrap here, Ken, you mentioned the EU AI Act and the different policies here in the US that come and go, and you also mentioned Zero Trust earlier, which I've come to realize isn't necessarily a program; it's a mindset,
 

[00:37:20] Ken Huang: Yes. 
 

[00:37:20] Sean Martin: wondering how you think of agentic AI and security programs, and leading the business with a mindset that maybe doesn't chase the latest regulation or policy, that doesn't chase just the new flashy bits that say "I'm gonna solve all your problems in agentic AI security," but that becomes more of a mindset.


What do you say to security leaders and security teams that need to have that change, perhaps, in the way they think about this?
 

[00:37:56] Ken Huang: So I think, more than ever before, zero trust is very important. I agree with you, Sean, this is a mindset. Your approach to the problem is always never trust, always verify. Especially with agentic AI, it really increases your attack surface. So if you don't have to use AI, do not use it. And if it's working, don't break it; keep it, right?


If you can, use just retrieval-augmented generation without agentic behavior. Agentic AI really has a certain level of autonomy, right? It can make its own decisions. So maybe you reduce its decision points: make it not agentic, just retrieval-augmented generation, and then you have a human in the loop to judge.


If that works for your business cases, that's great. Then, if you want to be more innovative, have a single agent first. For an enterprise new to this, just deploy one agent to test the water, to see how it works, to see if you get a return on investment. If it really increases efficiency and the security is addressed, then you go to multiple agents, right?


And they can leverage resources: actually, we have a threat modeling guide that will be published soon, so they can leverage it. It covers, for example, what the threats could be if you want to develop an RPA agent. And for people using Web3, like a crypto agent, which many people are trying to develop, what kind of threats could there be? If the agent can access your wallet, it can transfer the funds, right? So we have three real-world use cases for threat modeling. It's based on my threat modeling framework, which was originally also published at the Cloud Security Alliance.


But this is all an industry joint effort, right?
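Ken's advice to reduce the decision points, keeping generation non-agentic and putting a human in the loop before any side effect, can be sketched as a simple approval gate. Everything here (the `ProposedAction` type, the `approve` and `run_tool` callbacks, the tool names) is a hypothetical illustration, not any product's API.

```python
# Hedged sketch of "never trust, always verify" for model-proposed actions:
# the model only proposes; a human (or a human-set policy) decides.
# All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    tool: str        # e.g. "search_flights", "issue_discount"
    arguments: dict

def execute_with_human_in_the_loop(
    action: ProposedAction,
    approve: Callable[[ProposedAction], bool],
    run_tool: Callable[[ProposedAction], str],
) -> str:
    """Gate every side effect behind an explicit approval decision."""
    if not approve(action):
        return f"rejected: {action.tool}"
    return run_tool(action)

# Usage: a policy that holds anything touching money for human review,
# which is one way to avoid the hallucinated-discount scenario.
risky = ProposedAction("issue_discount", {"percent": 40})
result = execute_with_human_in_the_loop(
    risky,
    approve=lambda a: a.tool not in {"issue_discount", "transfer_funds"},
    run_tool=lambda a: f"executed: {a.tool}",
)
print(result)  # prints "rejected: issue_discount"
```

The same gate generalizes to Ken's crypto-agent example: a wallet transfer is just another tool name on the deny list until a human approves it.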
 

[00:40:17] Sean Martin: That's good. I'm glad you're involved in so many things, so you can share some commonalities between them and,
 

[00:40:24] Ken Huang: Yeah. I think it's important to go to different communities to see what they are talking about and what we can contribute. It reduces duplication of effort; there's still a lot of that, I would say. So that's my effort, along with some of my other coworkers and volunteers: we're trying to do the same thing. Yes.
 

[00:40:47] Sean Martin: Well, we need the depth, of course, and the specific knowledge for each area, but I think the cross-community view you're promoting is super important as well. I keep going back to the airline example, or maybe it was somebody who books travel, a travel agent,
 

[00:41:10] Ken Huang: Oh. 
 

[00:41:12] Sean Martin: but a super interesting space, just this whole thing where perhaps users are interacting with systems and
 

[00:41:19] Ken Huang: Yeah, 
 

[00:41:19] Sean Martin: the outcome is a whole itinerary with
 

[00:41:22] Ken Huang: yeah, 
 

[00:41:22] Sean Martin: hotels and airlines 
 

[00:41:24] Ken Huang: In this particular instance, the chatbot agreed to give a discount of a certain amount, but the chatbot was hallucinating, right? You did not actually get it. Yeah.
 

[00:41:36] Sean Martin: Yeah, you end up somewhere you don't want to go. There you go. Well, Ken, this has been fantastic. Certainly very eye-opening for me. Hopefully the folks watching and listening are thinking a little bit differently, embracing a new mindset for how to approach agentic AI and the security of that stuff.
 

And the quality. I mean, end users ultimately will determine whether they like it or not, it being each organization's specific thing that they build, whether it's for internal use, partner use, customer use, whatever it is. So I'm going to include a bunch of links to the resources that you mentioned.
 

I have a few of them; Ken, I'll ask you to share the rest with us so we can put them in the show notes. Read, engage, and talk. Join the communities. Join the working groups too,
 

[00:42:31] Ken Huang: Yeah, that's 
 

[00:42:32] Sean Martin: contribute and learn.
 

[00:42:34] Ken Huang: Yeah, the working groups are open to everyone, so
 

[00:42:36] Sean Martin: Absolutely. OWASP, CSA, you mentioned a few others, but those are the two that I know very well.
 

So Ken, thank you so much. I appreciate the conversation and everything you're doing. Everybody listening and watching, please do connect with Ken and read some of the stuff that he and others have produced. And please stay tuned for more Redefining CyberSecurity as we do just that: redefine cybersecurity. Thank you all.
 

[00:43:06] Ken Huang: Thank you, Sean. Take care.