Redefining CyberSecurity

Confused Learning: Supply Chain Attacks through Machine Learning Models | A Conversation With Adrian Wood and Mary Walker | On Location Coverage with Sean Martin and Marco Ciappelli

Episode Summary

Join Sean Martin as he explores the world of supply chain attacks through machine learning models with security engineers Mary Walker and Adrian Wood from Dropbox. Discover the hidden risks of using machine learning models as software programs and the proactive steps needed to safeguard against shadow AI threats.

Episode Notes

Guests: 

Mary Walker, Security Engineer, Dropbox [@Dropbox]

On LinkedIn | https://www.linkedin.com/in/marywalkerdfir/

At Black Hat | https://www.blackhat.com/asia-24/briefings/schedule/speakers.html#mary-walker-47392

Adrian Wood, Security Engineer, Dropbox [@Dropbox]

On LinkedIn | https://www.linkedin.com/in/adrian-wood-threlfall/

At Black Hat | https://www.blackhat.com/asia-24/briefings/schedule/speakers.html#adrian-wood-39398

____________________________

Hosts: 

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

____________________________

Episode Notes

On this episode of On Location with Sean and Marco, Sean Martin hosts the show solo, discussing supply chain attacks through machine learning models with guests Mary Walker and Adrian Wood. Mary and Adrian, both security engineers at Dropbox, share insights on their journey in cybersecurity and research on exploiting machine learning models. They delve into the implications of machine learning models being used as software programs containing malware and the risks associated with model repositories.

The conversation explores the ease of poisoning machine learning models and the importance of understanding the provenance of models for risk mitigation. Mary and Adrian emphasize the need for enhanced detection mechanisms for shadow AI and proactive measures for securing model repositories. Additionally, they discuss the impact of AI standardization and the legal implications surrounding AI development.

The episode concludes with a call to action for listeners to engage in discussions on supply chain attacks, join Mary and Adrian for their talk at Black Hat Asia, participate in Q&A sessions, and contribute to the open-source tools developed by the guests.

Be sure to follow our Coverage Journey and subscribe to our podcasts!

____________________________

On YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllQtJTmj9bp2RMzfkXLnN4--

Be sure to share and subscribe!

____________________________

Resources

Confused Learning: Supply Chain Attacks through Machine Learning Models: https://www.blackhat.com/asia-24/briefings/schedule/#confused-learning-supply-chain-attacks-through-machine-learning-models-37794

Offensive Machine Learning Playbooks: https://wiki.offsecml.com

Blog describing the attack killchain for bug bounty: https://5stars217.github.io

Learn more about Black Hat Asia 2024: https://www.blackhat.com/asia-24/

____________________________

Catch all of our event coverage: https://www.itspmagazine.com/technology-cybersecurity-society-humanity-conference-and-event-coverage

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Are you interested in sponsoring our event coverage with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Want to tell your Brand Story as part of our event coverage?

Learn More 👉 https://itspm.ag/evtcovbrf

Episode Transcription

Confused Learning: Supply Chain Attacks through Machine Learning Models | A Conversation With Adrian Wood and Mary Walker | On Location Coverage with Sean Martin and Marco Ciappelli

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] And hello everybody, this is Sean Martin, and I'm uh, flying solo today for our event coverage. Uh, my partner Marco is not joining, he's uh, busy with lots of other stuff. He has two shows, I only have one, Redefining Cybersecurity. And um, yeah. He said, you know what, this is going to be a technical conversation. 
 

You go, you go run that one and have fun with it. Get deep and down and dirty and all that fun stuff. It's part of Black Hat Asia. And it's a topic that piqued my interest. It's a session there called Confused Learning. That's looking at machine learning supply chain. And Mary Walker and Adrian Wood are presenting at the conference. 
 

Thank you both for joining me today.  
 

Adrian Wood: Thank you having us. Yeah.  
 

Sean Martin: It's going to be fun. It's going to be fun. And before we get into all, all the fun bits, not, not that who you are, isn't fun because it is fun. You make this obviously, [00:01:00] uh, but a few words from each of you, some of the things you've worked on leading up to your role, you're both security engineers at Dropbox, if I'm not mistaken. 
 

And, uh, yeah, so your journey to Dropbox, what you do at Dropbox, and then we'll get into. The session, Mary, we'll start with you.  
 

Mary Wood: Uh, sure. So I'm a security engineer at Dropbox. I've been in cybersecurity. Six, seven years at this point, um, I started out on red team doing like validation testing, but from there I like quickly pivoted into the good side of things and went to work on malware analysis and digital forensics. 
 

So I have a background, most of my career has been in like instant response and defense kind of things. Um, at Dropbox I had the opportunity to do more research, which is how I started to collaborate with Adrian on looking at all of his work on exploiting Machine learning models and supply chain stuff, but looking at it from a research DFIR kind of aspect. 
 

Sean Martin: Nice one. And [00:02:00] Adrian.  
 

Adrian Wood: I, uh, started out in about 2008 as an independent consultant working for myself. Um, very quickly I wound up with too much work doing, you know, web app security work and a little bit of red teaming and, uh, started a company. I, uh, had my own business for about eight years. Which I then sold off parts of and moved to America. 
 

Um, I then worked in an application security research team and later a red team at a large bank. I joined Dropbox on their red team, uh, about two years ago. My primary interests lay at the intersection of Supply chain attacks, adversary simulation and offensive machine learning, like the application of machine learning for red teaming and adversary emulation. 
 

Sean Martin: So lemme ask you this be before we get into it, 'cause [00:03:00] clearly, well, if we look at ai, it's, it's taken the world by storm a little over a year know now, right? That's, that's when the, the public facing prompt became available for pretty much anybody who wanted it. Um, I presume. You've been looking at this stuff longer and machine learning is one aspect of it, AI is another. 
 

How does How does the introduction of the ability for pretty much anybody to have access to a prompt, anybody to connect it to data, uh, change how you look at this problem? Both of your perspectives on this.  
 

Adrian Wood: Yeah. So I became very interested in carrying out as machine learning based supply chain attack because of the level of interest and rapid development that was going on. 
 

People were just pouring it on, right. Which meant great [00:04:00] opportunities, uh, to sneak something through while people are, you know, rushing to market, rushing to use venture capital and so on. So, yeah, it just sort of seemed like the natural thing to do.  
 

Mary Wood: Yeah, I actually don't have, um, any background in machine learning. 
 

I didn't really know anything about machine learning prior to the start of this project. Um, and so really, I'm part of this, like, wave of people being interested in things. Um, and yeah, it just seems like, uh, a good time to be researching in this space, because like Adrienne said, there's such heavy adoption by everybody in information technology to use machine learning models and use AI and all that kind of thing. 
 

So, it becomes important for us as defenders and Uh, security folks to understand what's going on in that space and the risks and all that.  
 

Sean Martin: Nice. And I'm, I'm hopeful because IR person as well. Many, many, many years dealing with that stuff. So hopefully, uh, we can get some nuggets from you on your perspective there.[00:05:00]  
 

So how, how did this session I mean, congratulations. First off, I know it's not, not easy to get a session accepted at a conference, especially when like black hat, how did this, how'd the talk come together? What was the catalyst for it?  
 

Adrian Wood: So I'd been running machine learning models for a bunch of reasons since like 2018. 
 

And I didn't realize until. Far more recently that machine learning models were full blown programs that they weren't just a collection of weights and biases, like a pure function that like, they would, they're just a, they're just a software program. And that means you can put malware in them. It means you can put whatever you like in them. 
 

And that, that statement holds true for the vast majority of types of models that exist that people use on a day to day basis.  
 

Sean Martin: So as soon as I described that for me, that it's. And it's actually a program. I mean, I [00:06:00] probably get it, but I think most folks would love to hear that as well.  
 

Adrian Wood: I think quite a, like a basic way to explain it would be, so you've got a bunch of, Uh, data that you've collected and you've trained a model on that's now sort of like a statistical representation of that data. 
 

And that is a collection of floating point numbers, like just, just numbers. And the issue is, is that you need a way to actually load that up and get it running on a computer so that you can make inferences against that data. And typically speaking, it's the process of how you. Load that up, how you package it, how you make it portable is the part that makes it a program that turns it from just a collection of numbers into like a useful thing that can be shared between you and I and run on your computer. 
 

Sean Martin: So how, Mary, I don't [00:07:00] know if, uh, if you can maybe shed some light on this. How, how do. Do organizations, I guess, look at it as an application, or?  
 

Mary Wood: I think we don't right now. That's part of the problem. Um, I think a lot of organizations kind of view these as not as a vector to import malware into your production environment, which is kind of an issue. 
 

It's why Adrian had a lot of success, um, with his access that he had when he was able to, uh, supply chain attacks. Um, I think that like it came as a little bit of a surprise to me as I started to research that, like what Adrian is saying is all true. Like there's all of this, like, you know, processing that has to go into load a model. 
 

And that's like a very abstract layer, but more, um, granularly speaking, the vector that he found a lot of success with is actually just I'm adding a layer to a model and the models are all layers like that's what they are. They're like layers of dictionaries and nested things and data and all that. Um, but in like, for example, Keras, which is an API that [00:08:00] from TensorFlow or PyTorch, um, to build and manipulate models. 
 

You can add a layer called the Lambda layer, which is intentionally code execution as you load and train the model. And this is a valid use case. It's something people need. You need to be able to sometimes manipulate the data in the model. Maybe you need to do like a small arithmetic kind of manipulation of a thing during training or whatever. 
 

Um, and so it's Python usually. Right. And like, that's, it's supported as valid, but this is a perfect place to put like a bunch of malware or arbitrary code really. So is this a training  
 

Sean Martin: time or runtime or both either?  
 

Mary Wood: Yeah.  
 

Adrian Wood: And other model formats, the story holds true for slightly different reasons, like serialization issues like pickles. 
 

I'm sure you've seen, there's been many black hat talks over the years about the dangers of Python pickles dating back more than 15 years now. Um, so across different formats through. [00:09:00] mechanisms. You see the same patterns emerge with malware.  
 

Sean Martin: So before we started recording, you were describing scenario where you stood something up and, and it got some attention that you didn't expect. 
 

People were accessing something you weren't, weren't necessarily expecting them to tell, tell us about that.  
 

Adrian Wood: So, yeah. So if it is news to you that machine learning models, can contain malware can contain like full blown code of whatever comes out of someone's creative mind. That's really just part of the story, right? 
 

Like you need to get that malicious thing onto a target. You have to convince them to run it and convince them to go through the trouble of dealing with it. And model repositories, uh, where you need to go to do that, right? You need to get your model into a place [00:10:00] where they are shared and distributed with others. 
 

These model repositories Uh, places where the, if you've heard of Alex Bersan's work in Dependency Confusion in about 2018, where he was able to take, say like Python packages and, uh, manipulate the order that they would be called into an organization in order to get code execution. Similar things hold true for the machine learning environment, where you can squat the namespace of a popular business. 
 

Uh, like I did for Dropbox for instance, and some other companies in Bug Bounty. When you're squatting that namespace, you're now the administrator of that namespace. So any employee from that company that you convinced to join or just asks to join, um, you can give them right access and they will start using that repository like it's their own, like it's their employers. 
 

So that gives you an opportunity to either backdoor something they upload or to place something [00:11:00] there that they would be interested in running within the corp environment. And once that happens, you get a malware detonation, typically in the machine learning pipeline, um, which is where a lot of businesses data crown jewels live. 
 

It's a terrible, terrible place to have an attacker  
 

Sean Martin: get  
 

Adrian Wood: their first taste of your environment.  
 

Sean Martin: Can you describe a scenario? Mary, where this might impact the business or what, what types of data is in the repository?  
 

Mary Wood: Uh, sure. Adrian goes into this extensively in his talk and his talk at black hat about the kinds of things available. 
 

But I mean, it's just anything you could want machine learning models for a business have to have access to really important, sensitive data because they're doing that kind of business logic for you. And so I typically, these environments are kind of scrappy environments, right? Like there's idea that like we need to move fast and iterate and do all this kind of Training and things there. 
 

And so a model, if you get any kind of model, it'll have access [00:12:00] to a lot of different kinds of data. So your data lakes, your snowflake, you know, all those kinds of things can have access to all kinds of things, but you also are just in this place where the models live. And so you have opportunity to do other things like not just exfiltrate data, but also. 
 

Poison models like Adrian was able to um, I don't know Adrian if you want to talk at all about like your your LLM poisoning  
 

Adrian Wood: Yeah, yeah So using this access and I thought it would be a great time to find out how hard it is to poison an LLM Like that sounds difficult to me as someone who like Not an expert at that kind of thing. 
 

And so given that I had, I was living in the machine learning pipeline. That meant I had access to the model registry. That's where, that's where businesses keep their models, right? Internally. And if that wasn't bad enough, I had right access in these environments. And that's quite a typical thing. Cause you know, if you're training a model, you have to be able to drop it somewhere when you're done. 
 

Like it kind of has to work that way. [00:13:00] What this allowed me to do was actually take. Uh, prebuilt LLM, so think like, uh, a meta model like LLAMA. And through my command and control access, I was able to change a ground truth of that model, and it surprised me at how easy that was because, uh, that sounds very difficult, like modifying hyperparameters and all this kind of fancy stuff I don't really understand, but tools exist for aligning models, there's little rabbit ears I'm doing there, and. 
 

Those tools are quite easy to use, and so you can run one of those, open up a configuration file that's basically written in English, and you can say, this is my question, which is like, say, like, what's the capital of Australia? I can then provide the expected answer, which would be Canberra. And then I can provide the answer that I want it to be, which would be, say, Sydney. 
 

Um, and you can do that for any number of places. Of like ground truths of a model. So [00:14:00] you've effectively changed its behavior. You've changed his understanding of the world in a way that's extremely difficult to suffer someone to detect. And, you know, that's just an example of any number of poisoning attacks that you could do, and you can imagine for certain industries, medical defense, um, infrastructure, these kinds of changes could be like pretty horrifying, especially for vision systems. 
 

That's,  
 

Mary Wood: that's especially bad, too, because, like, this tool exists for a reason, like, these things have to be aligned every so often, and so even if you have wonderful audit logging that's telling you this model's alignment has been changed and this action has been carried out at all, you're still in a situation, it's like, well, what's, what was intended and what is actually, like, the thing being changed that I care about? 
 

Um, so, it's quite tricky.  
 

Sean Martin: Yeah, because I think it's the, it's changing the outcome. Right? So it may or may not probably end up changing something inside [00:15:00] as well. And so a change may look benign in the change control system, but the outcome can look very different than what you're expecting. So I come from the world of quality assurance, uh, where I did both white and black box. 
 

Testing. And the idea is that you kind of know two, two parts. You kind of know what it's supposed to do, and you can validate that it does and it doesn't not do it, the double negative, but then there's this whole wide world of anything's possible. Um, and that's where you start to write some code and throw a bunch of stuff at it, see what happens. 
 

How does that compare to this in terms of understanding, not just what the changes and the impact that change has generally, but. What the outcome is, how, how do teams, how can they spot the impact? Cause cause to your point, Mary, a lot of this stuff is very wild west, right? Very, very dynamic. We're, we're [00:16:00] training and retraining and using and retraining. 
 

So understanding what's really happening here. So I'll stop mumbling, but how do we, how do we get a handle on this?  
 

Adrian Wood: Well, yeah, that's kind of the beauty of. Using using a system for what it's intended for as a hacker is that that kind of problem is, you know, is outlined plain as day that someone can take a thing that you've built and use it in a way that you didn't expect and have a bad thing happen to detect these kinds of. 
 

Misalignments per se that may be introduced from a poisoning attack is an area where most companies probably don't do anything at all to look out for these kinds of things in order to do it. You need to have. effectively a software bill of materials of that model. You need to know its provenance, so you need to know where it came from, how it got [00:17:00] there, and did it, did each version of it that was experimented upon, go through the same rigorous process of experimentation, and in ML pipelines, experimentation like means a very specific thing with regards to like the data sets that are applied, the metadata around that, and then it's storage and the provenance chain of all of those things, just like a software bill of materials. 
 

Unfortunately, right now, like the concept of an AI bomb, an AI bill of materials is very new. Um, you know, it's not, it's not like you can just like rush out there and get you an AI bomb and make the problem go away. I mean, look at regular software bill of materials. Those, Still something that most people, most organizations have not got their hands around. 
 

So, you know, we're pretty far down the road of the problem space here compared to where like a lot of people probably [00:18:00] currently find themselves.  
 

Sean Martin: Yeah. Is, is the, for like using your terms of the AI bomb versus the S bomb, are they similar in how Stuff is pulled together and I'll say compiled or built and delivered. 
 

Meaning I'm, I'm envisioning a large, large organization that builds a lot of their own stuff, but then uses some third party and some open source components that are, we'll say relatively static, right? They don't pull a new version in unless they go through some. Change control and some, some tests, analysis, whatever, at the other side of it. 
 

Um, but then their own stuff is kind of free flowing like the AI, right? Where it's very agile and very constant CICD. So how does that picture look?  
 

Adrian Wood: Yeah, I think I. I understand what you mean. And I would say like many companies will have the bones [00:19:00] of an AI bomb because like, let's say you do business in Europe and you're training models that contain some element of customer data, you already ought to have very sophisticated Metadata and experiment tracking around your model, because if someone files, uh, you know, remove my information request, you have to, you have to schedule a retrain and get this stuff out. 
 

Right? So these kinds of things ought to exist. But the question then is. What other applications say within security are they being used for in order to like find and discover problems like say a New version of a model being logged that went through like an entirely different training process where the hash changed But there was no experiments done and there was no test run. 
 

Sean Martin: Does that make sense? It does. Yep. It certainly does Yes, we're um Of course, I'd love to keep [00:20:00] digging deeper, but that's what your session is for. We want people to come visit you and, and listen to what you have to say. Um, who, who's the, so I'll just quickly read this off. So it's Confused Learning Supply Chain Attacks Through Machine Learning Models. 
 

It's Thursday, the 18th of April, it's a couple of weeks away. They're at Black Hat Asia and, um, clearly this is for folks who build stuff, interested in AI and, and, and, uh, machine learning. Who else do you expect to come and enjoy this session with you?  
 

Mary Wood: I'd love to see, you know, anybody that's working in instant response at a company. 
 

I think it's a great, a great chat for you to come and have a listen to about how this attack vector works and maybe this space in your production environment that you're not really aware of. Um, we talked quite a bit about like some tools available for you to see. detect these things or respond to these [00:21:00] things and what the attacks look like, what the model formats look like. 
 

So yeah, I think it's for folks that are working at a company and trying to defend any kind of machine learning environment. It's a, it's a pretty good talk.  
 

Adrian Wood: On top of that, I would say like Mary's underselling is slightly one of the tools that will help you with this.  
 

Sean Martin: Very humble. Yes.  
 

Adrian Wood: One of the tools that will help you with this particular issue with something that, you know, she has created. 
 

Um, and, you know, we have some pretty excellent data around its results, um, it's ease of use and it's applicability for, you know, everyone, um, who, who this tool will be made available to. It's a wonderful thing. Um,  
 

Sean Martin: and am I able to  
 

Adrian Wood: say the  
 

Sean Martin: name of that? This is the one you shared with me.  
 

Mary Wood: Uh, I haven't, I  
 

Sean Martin: haven't committed my code yet, but  
 

Mary Wood: it's, it's there soon. 
 

It'll be released at Black Hat. We'll leave that secret then. They have to,  
 

Sean Martin: they have to either hopefully join you for the  
 

Adrian Wood: session or connect with you afterwards. One, [00:22:00] one other group of people I think would be very interested for different reasons is, you know, people who work in governance and people who work in, um, uh, those kinds of roles and leadership roles. 
 

Or to Pay attention to this because as we were just describing the process of like the provenance of model tracking and the implications, the legal implications of that, there are also like a lot of implications around a new AI standard, the ISO, uh, for AI. And a lot of our work touches upon the, like, practical implications of that. 
 

And it, it thoroughly debunks much of the, like, supposed difficulty around performing these attacks. Um, and we break it down in like very simple explanations and components that it's not a pie in the sky idea that can only be done by like the most sophisticated of threat actors. Like it can just be done by a person who knows how to [00:23:00] operate a A command and control system, which is most hackers. 
 

Sean Martin: So let me, let me ask you this as, as we wrap, and then I'll give you each final word as well. So what I feel we've talked quite a bit about is the compromises possible, perhaps easier than one might initially expect. There are ways to detect and you've built some tools for that. What about the front end of this in terms of discoverability and, and kind of understanding the scope of what I I'll call it the risk exposure from this. 
 

Are there easy ways for teams, red team people, folks to say, where, where are all my repositories? How do I get access to them? What about that part?  
 

Adrian Wood: Um, you're, you're, you're asking, like, how can, how can people, like, quickly find out [00:24:00] if, if they're at risk of these kinds of problems?  
 

Sean Martin: And what, to what extent? 
 

Yeah. I mean, Shadow, Shadow ML. Shadow ML  
 

Adrian Wood: is a huge problem. Yeah. Shadow AI is probably, arguably, maybe even an even bigger one that we won't see the ramifications of for maybe a year or two. Um, I think understanding where your models are being sourced from, from people working in the day to day job is very, very important. 
 

Like if we take software packages as an example. Most companies have learned that they need to restrict people's direct access to NPM. To pipe pie and to source from internal like registries, like Artifactory or Nexus, Sona type, um, on their, in their build pipelines, on the developer workstations and so on. 
 

Understanding where these things are coming from and then going out and seeing like who owns our public name spaces, uh, do we namespace internally, what does that [00:25:00] look like? And then taking appropriate steps to ensure that you actually own. For squatting these kinds of things in the way that many businesses proactively Squat domain names that look like theirs and so on like you need to be thinking about Uh kind of thing that's just like a small very quick thing You can go and do it with see like who owns my business on hugging face and is it me or is it you? 
 

That's  
 

Sean Martin: probably you adrian if I know All right, um, well I'm super happy. I got a chance to uh Chat with you and get a sliver of what you're going to be talking about on the 18th at Blackout Asia. Um, final word from each of you. Call to action for folks to come listen to you.  
 

Mary Wood: Oh yeah, just like if you're going to be at Black Hat Asia, come hang out and talk to us. 
 

We'd love to chat to you about supply chain and AI and ML and all that. Um, [00:26:00] and just don't be afraid of the talk if you don't know anything about ML. Cause I didn't when I started and it's really an entry level talk. Um, so if you're of any interest in the space, come, come chat.  
 

Sean Martin: Trust me, you all know more than me. 
 

Everybody listening, Adrian.  
 

Adrian Wood: Yeah. Um, just sending what Mary says, you know, come see us after the talk in the hallway. We'll do hallway con and Q and A's. We're also doing a supply chain birds of the feather feature at a black hat where we're doing like a Q and a panel with. Two or three other, you know, very accomplished supply chain experts. 
 

If you have additional questions, I want to know. Second thing is I want to see you running our open source tool when it launches. I want to see pull requests there and, you know, fun, interesting things as we expand and grow and try and work on the problem. I can see, you know, Mary's like, Oh God.  
 

Mary Wood: No, I'm just, I'm not, I'm not a dev. 
 

I don't identify as a dev. Um, so I'm excited to [00:27:00] open source it to get other people to help me and make it better and write more rules and make it awesome. So I'm excited for the community to work together and not just my own crappy code. So  
 

Sean Martin: it's, it's, it's amazing. You did that. I love that. And, uh, I will commit, I won't share it now. 
 

So when whoever's listening to this ahead of the head of the event, you won't see the link there, but I'll commit to add that after the event. So people can access it as they hear this. Um, well, congratulations again for good talk. Thank you for the work you do in this space. I know the research is. Fun, but also taxing and, um, and getting a talk accepted is always fun and challenging as well. 
 

So congratulations on that. And for everybody listening, thanks for, thanks for joining me on location. Um, Smack Marco upside the head for me when you see him for not being here on this, but I'm glad because we [00:28:00] got to chat technical today somewhat. Um, so stay tuned more on location coming from, uh, myself and Marco and please subscribe, follow, uh, Mary and Adrian and, uh, see everybody black hat Asia.