Redefining CyberSecurity

Punch Cards, Steam Engines, 48 Volt Batteries, Platform Engineering, and the AI Revolution: The Ongoing Evolution of Language-Based Software Development | An OWASP AppSec Global Lisbon 2024 Conversation with Oleg Shanyuk | On Location Coverage

Episode Summary

Join Sean Martin and Oleg Shanyuk at the OWASP AppSec Global conference as they delve into the transformative impact of AI on application security, development processes, and platform engineering. This episode offers nuanced perspectives on the balance between technological innovation and practical efficiency, making it a must-listen for tech enthusiasts and professionals alike.

Episode Notes

Guest: Oleg Shanyuk, Platform Security, Delivery Hero [@deliveryherocom]


Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]


Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast




In this On Location episode, Sean Martin discusses the complexities of application security (AppSec) and the challenges surrounding the integration of artificial intelligence (AI) with Oleg Shanyuk at the OWASP AppSec Global conference in Lisbon. The conversation delves into various aspects of AppSec, DevSecOps, and the broader scope of securing both web and mobile applications, as well as the cloud and container environments that underpin them.

One of the core topics Martin and Shanyuk explore is the pervasive influence of AI across different sectors. AI's application in coding, for instance, can significantly expedite the development process. However, as Sean Martin highlights, AI-generated code may lack the human intuition and contextual understanding crucial for error mitigation. This necessitates deeper and more intricate code reviews by human developers, reinforcing the symbiotic relationship between human expertise and AI efficiency.

Shanyuk shares insightful anecdotes about the history and evolution of programming languages and how AI's rise is reminiscent of past technological shifts. He references the advancement from physical punch cards to assembly languages and human-readable code, drawing parallels to the current AI boom. Shanyuk stresses the importance of learning from past technological evolutions to better understand and leverage AI's full potential in modern development environments.

The conversation also explores the practical applications of AI in fields beyond straightforward coding. Shanyuk discusses the evolution of automotive batteries from 12 volts to 48 volts, paralleling this shift with how AI can optimize various processes in different industries. This evolution demonstrates the potential of technology to drive efficiencies and reduce costs, emphasizing the need for ongoing innovation and adaptation.
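The voltage arithmetic behind that automotive shift is simple: for a fixed electrical load, current equals power divided by voltage, and wiring is sized roughly in proportion to the current it must carry, so quadrupling the system voltage from 12 V to 48 V cuts the required copper cross-section by roughly a factor of four. A minimal sketch of that back-of-the-envelope calculation (the 600 W load and the 6 A/mm² sizing rule are illustrative assumptions, not figures from the episode):

```python
# Back-of-the-envelope: why 48 V wiring needs less copper than 12 V.
# For a fixed power load, I = P / V. Wires are sized roughly to the
# current they carry (ampacity ~ proportional to cross-sectional area),
# so cutting the current by 4x cuts the copper cross-section by ~4x.

def wire_area_mm2(power_w: float, voltage_v: float, amps_per_mm2: float = 6.0) -> float:
    """Approximate copper cross-section for a load, using a rough ampacity rule."""
    current_a = power_w / voltage_v
    return current_a / amps_per_mm2

load_w = 600.0  # illustrative accessory load
a12 = wire_area_mm2(load_w, 12.0)
a48 = wire_area_mm2(load_w, 48.0)

print(f"12 V: {load_w / 12:.1f} A -> ~{a12:.1f} mm^2 of copper")
print(f"48 V: {load_w / 48:.1f} A -> ~{a48:.1f} mm^2 of copper")
print(f"copper ratio: {a12 / a48:.0f}x")
```

Sizing against resistive losses instead of ampacity would make the savings even steeper, since loss scales with the square of the current.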

Martin further navigates the discussion towards platform engineering, contrasting its benefits of consistency and control with the precision and customization needed for specific tasks. The ongoing debate encapsulates the broader dialogue within the tech community about finding the right balance between standardization and flexibility. Shanyuk's perspective offers valuable insights into how industries can leverage AI and platform engineering principles to achieve both operational efficiency and specialized functionality.

The episode concludes with forward-looking reflections on the future of AI-driven models and their potential to transcend the limitations of human language and traditional coding paradigms. The thoughtful dialogue between Martin and Shanyuk leaves listeners with a deeper appreciation of the challenges and opportunities within the realm of AI and AppSec, encouraging continued exploration and discourse in these rapidly evolving fields.

Be sure to follow our Coverage Journey and subscribe to our podcasts!


Follow our OWASP AppSec Global Lisbon 2024 coverage:

On YouTube: 📺

Be sure to share and subscribe!



Bret Victor:

Learn more about OWASP AppSec Global Lisbon 2024:


Catch all of our event coverage:

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

To see and hear more Redefining Society stories on ITSPmagazine, visit:

Are you interested in sponsoring our event coverage with an ad placement in the podcast?

Learn More 👉

Want to tell your Brand Story as part of our event coverage?

Learn More 👉

Episode Transcription


Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.


Sean Martin: [00:00:00] And hello everybody. Very welcome to a new On Location with Sean Martin. I'm coming to you from Lisbon at OWASP AppSec Global. And, uh, we're here for a few days talking about all things AppSec, DevSecOps, and securing the applications that run our businesses and connect us consumers and citizens together. 

And, it's a big challenge. Uh, We're not just talking web apps, we're talking mobile apps. There's cloud, there's containers, all kinds of fun stuff. And, uh, of course there's another hot topic that runs through everything, which is AI, which of course touches on data, and, uh, brings with it a host of challenges, uh, around ethics and laws and all kinds of fun things. 

Um, I had the pleasure of meeting Oleg Shanyuk, and, uh, I'm thrilled to have him here. Welcome, Oleg.  

Oleg Shanyuk: Thanks. Uh, thanks. Uh, yeah. Sean, nice to meet you here. That's, uh, like, uh, [00:01:00] my first time to OWASP as well. Um, pretty new to the security world. And, uh, even it's, uh, it was my hobby for a long time. Uh, now it shifts to professional activities and also, uh, OWASP as a community. 

I'm learning it, uh, I'm super thankful for many documents, uh, on, on, which actually, um, I started to build a foundation of my own knowledge and understanding. So, that's awesome.  

Sean Martin: Yeah, it's an amazing community, and I know, uh, uh, we've crossed paths many times over the last couple of days, and we continue to meet, uh, meet people and introduce each other to folks. 

And as you pointed out, there's the conversations and then the, I know some folks referred you to some resources as well that you just mentioned [00:02:00] some documents and things which are super helpful. And that's why I love this community and these events. And, um, you and I, we had a chat the other night where we, I don't even know how many topics we discussed, but we were talking about power and, and AI and platform engineering and precision coding and all these things. 

And I was like, ah, man, I wish we'd recorded that conversation because there's so many far, far out there, uh, points that we were talking about. Um, so we'll see how much of that comes back up in our, in our chat today. Um, One of the things, and if people listen to the pre event chats on the road that I did with some of the keynote speakers, it was a lot about AI and different facets of it. 

There's building, there's using, there's running, there's protecting, there's using it to run and using it to build and use and protect. And of course, there's the [00:03:00] security aspects and the audit and the legal, as I was mentioning earlier. Um, there's a lot of talks on that and, um, just came out of a, a keynote from Rob van der Veer and he, he was, he made a couple points and we'll, we'll start with this and we'll see where we go. 

But he made a couple points on, on, uh, the, the role of AI writing code. Okay. And in comparison to human writing code. And the two points that he, that he made that I, that I hung on to were one, AI could write code faster than a human, but it's, it won't have the ability to have the sense of a human as it's writing the code. 

And he likened it to, he gave two examples. Um, one, a basketball player knows the feeling of a double dribble.  

Oleg Shanyuk: Okay.  

Sean Martin: And a soccer player knows to keep their arms out of the way so they don't hand touch the ball. Um, AI doesn't have that [00:04:00] sense necessarily when it's writing code. It's gonna, it's gonna produce something that seems accurate and it may be accurate, but it might be faulty. 

It goes through the bowl. Yeah. So that was one point. The other point was that, uh, because you are relying on a machine to write the code, Uh, and because of these challenges that, uh, that it might have in, in the code that it generates, you have to have a human there to validate it. And that human might have, might need deeper skills to code review the code the machine wrote versus the code that they would write themselves. 

If they write it themselves, they kind of know how they wrote it, why they wrote it that way. And can then code review a human generated piece of code based on that. And a machine might produce something that might be harder for a human to validate. Or the human might [00:05:00] need deeper understanding of the code to validate it. 

So the, the other point he made is you might write code faster, but the review skill set would be, need to be deeper. And the review process might take longer. So those two points were, were very interesting to me. So I don't know, any, any thoughts from you based on, on that and your experience thus far, uh, working with it? 

Oleg Shanyuk: Yeah, as we speak, uh, actually, like, it's a little bit off topic. Uh, uh, then I would like to come back to the, these two questions. Okay. So as we speak, uh, and, uh, recalling, um, uh, the AI able to generate so much code, et cetera, et cetera, and how a human can handle that. Um, um, I'm recalling, um, a talk from 2011, maybe, uh, so, uh, [00:06:00] it's, uh, sort of a parody. 

Let me please quickly find the name. So, um, so basically maybe, maybe you also know him, Worrydream, Bret Victor. So, uh, a guy, um, he basically there's some talks of him, but I think the most recent, uh, maybe still from the previous decade. And now he's like, uh, stealth. He's working on a product which use a lot of AI and, uh, basically he says, uh, maybe the computer is not what's in your pocket, but it's more like the environment you go into to work on your problems. 

So he's like about the augmentation and a lot of computation happening like almost magically. So you, you draw something on the paper, the system contextualizes that immediately and so on. And one of his talks was like, he [00:07:00] dressed up like an engineer from the seventies. Uh, he put on his, uh, like, glasses, the t-shirt, uh, and with the pencil here, like, and he used this beamer with the film slides to go over his presentation. 

About like, ah, it's like 1971. And let's, uh, let's think how it could be like in 30 or 40 years from now. Uh, uh, what can computers do and things, uh, uh, we can imagine. And he had like a lot of, uh, uh, funny, funny jokes about, uh, the API's and models and, uh, ways we process information, how computers as well. 

And I think one of the, uh, takes from that, uh, uh, was how people perceive the programming. And he [00:08:00] told, uh, okay, uh, we started with the punch cards. And the punch cards, uh, were very physical, very real. And you have to control every signal inside the machine using the punch card. And then there were these guys who came and told, okay, let's, um, let's do little instructions behind it. Let's, uh, uh, um, maybe a little bit virtualize this punch card world and, uh, do, uh, something like machine code programs.

And the punch card guys were like, no, punch cards is programming and the machine code is not programming. Like, uh, then the assembly guys came and they say, okay, we can like level up the machine code. 

We can [00:09:00] create the loops and things, uh, a little bit more sophisticated instructions. And again, they were not perceived. But with the time we moved to the Assembly. And then other crazy ideas pop up. There was a guy who brought this language Fortran. And it's like, let's use human readable language to create a program. 

And the Assembly guy is like, oh my God, what is he doing? Yeah. And, uh, yeah, I can only recommend to follow up and watch a few of his talks. So that's a little bit off, off topic. Uh, and, uh, yeah, uh, he, he has some prominent ideas and principles of the, how, how he creates the software. Um,  

so coming  


Sean Martin: do you think we missed some of that today? 

Do you think we're so in the weeds of doing stuff that we kind of missed the big [00:10:00] picture?  

Oleg Shanyuk: It's a good question. In a way, it's hard to think so because, uh, I believe there is no shortcut, right? So we have to understand thousands of ways it doesn't work before we know it works. Right, right. He has a, he, with that, uh, a little bit satiric talk, he had a nice detour and, uh, he pointed out a few things which could change like the course of, uh, evolution of how we operate with machines and let's see maybe what he, uh, was imagining will like partially be applied. 

For instance, he was reviving some, uh. Models of, uh, uh, working with the parallel programs like concur, concurrency and what since I have the, uh, iOS background and [00:11:00] following a lot swift development topic, I can tell what he was like, uh, calling out. More than decade ago, uh, is being actually implemented right now in the swift six as, as a big milestone. 

And the, the, this difference between version five and six is the concurrency and, uh, they basically use the same concept. It doesn't belong to Bret Victor. But what he did, he thought, hey people, uh, recently, when he was playing about seventies, right? So we got this idea about how we can run software on multiple processors in a parallel manner. 

So, and now it's coming to modern languages. So, uh, and the AI, which, uh, we are hyping today, we added a little bit from the seventies, [00:12:00] from the eighties, right? Uh, but mostly we optimized and sped up some algorithms. We found, uh, uh, maybe shortcuts or, uh, augmentation, which helps to build better models. But mostly it's because of the amount of computation and memory we got. 

Right. We couldn't afford that much. Um, when, uh, the theory was born and it came from mathematics, uh,  

Sean Martin: can we, can we touch on that? 'cause that was one of the things I enjoyed talking about the other night, which was, I think we, we used the, um, I started with the concertina example and the processing time, and we talked about the automotive. 

industry and the advancements there, um, you use the battery example, which I'd like to talk about because I think there's, there's an opportunity for machines to help us fine tune and, and bring [00:13:00] efficiencies to the way. Things are built and run. So the example I gave you there, then I was, I met a gentleman who's building a MIDI version of a concertina, which is the little, it's a miniature accordion. 

Yeah. And he was using AI to write the code to reduce the, to optimize the, uh, the processing of the, the note being played and the note being recorded by the sensors and then passed through the system. And. That was very interesting. And you gave an example of the automotive industry and the battery power. 

So I don't know if you want to describe that again for folks. I think that was pretty cool.  

Oleg Shanyuk: Uh, yeah. Okay. So, uh, it took a big change for automotive in the last decade. And this change is, uh, switching from 12 volt batteries to 48 volt batteries and the, [00:14:00] uh, wiring and computer sensors. So the machines used to run on 12 volt. 

And now if you bought a car after 2020, it's like most of the chances you'll get a 48 volt machine. Even, it doesn't matter if it's like a Tesla or an internal combustion engine, um, like, I don't know, Tesla, Dodge corporation, whatever. So, uh, the thing is we used 12 volt for so long. Uh, but we need a good material, good, uh, um, wiring with a high quality copper to run 12 volt through the machinery, through the older systems. 

Otherwise, the signal just doesn't go through. And, uh, if we, uh, what we could do a long, long time ago is [00:15:00] just bump the voltage up, because that reduces the requirement for the, uh, uh, conductor material. And, uh, it saves, like, basically by quadrupling the voltage, you cut down the amount of copper you need to build the same car. 

Uh, we optimized the thing and eventually we did it not because of the environment. It's because the price went up, I think. And the funny, funny, funny thing about the 12 volt battery is, uh, uh, the technology is more than a hundred years old. It started like with the first cars, with the first batteries, when there would be like few wires inside and nobody actually. 

We don't really care much about this part of the car because, uh, there would be like almost no [00:16:00] price added to the car, right? Right. And, uh, we built a huge supply chain for every manufacturer. So if you want to, if you want to change a thing like this, if you want to flip a voltage, you're going to flip the industry. 

Huh? Yeah, not an easy decision to make. Yeah. Yeah. It's a, it is, like, an expensive decision, but, uh, in the end of the day, uh, the biggest manufacturers, uh, did it. They, they started I think in 2014, and it's ongoing today. I think most of them already converted. So, uh, it's kind of pressing the question of, uh, then, like, the real needs of the world. And, uh, like, ideas can be different, but reality, like, uh, brings a lot [00:17:00] of dependencies. 

And when we talk about, uh, AI, for instance, and we create something today, how much energy we burn? So when we generate the code, does it, uh, um, provide us like the efficient efficiency we're looking for?  

Sean Martin: Yeah, I don't, I don't know if anybody's doing the analysis on the return on investment for it.  

Oleg Shanyuk: I believe, yes. 

There must be at least some AI model to do so.  

Sean Martin: There has to be, there has to be. The other thing I want to touch on, and I know we're kind of bouncing all over the place, but, um, cause that, that scenario is basically swapping out the, the electrical platform on the vehicles, right? [00:18:00] And, um, to support the, the capabilities needed and reduce the cost. 

The, we talked briefly about the idea of platform engineering, where companies can build a platform, maybe even providers of software and services could build a platform that everybody uses. Now, we see it in the form of AWS and Kubernetes and those kinds of platforms, but I'm talking about all the way into the, into the microservices and apps, and so we have a common ground from which we all. 

And I've been a fan of platform engineering because I think it brings consistency, it brings control, it brings, to me, a lot of positive things. But then you brought a counterpoint that I thought was very interesting [00:19:00] around the ability, and it kind of goes back to the concertina example I gave, the ability to have precision things for specific tasks and specific environments, specific context, whatever. And, um, yeah, so your thoughts on, and then I guess in reality as an engineer, right? Deprecation is another thing you pointed out in that conversation.  

Oleg Shanyuk: So, yeah, yeah. I think it's also related to the car batteries, like, uh, what it costs to deprecate 12 volts. 

Yeah. And interesting reasons which push the thing forward. And also a question: is 48 volt optimal? Or is it a part of the deal, which is a good trade off between the problem we have and the next step to the solution? And, uh, yeah, the AI definitely, um, provides us an additional tool. [00:20:00] It can be used as a hammer, right? 

And if you, um, if you don't know where to start, probably you start with the hammer. Then you master your hammer, you probably want to do a little bit of precision. And, uh, yeah, it's, uh, it's okay to, uh, explore the things. So, again, When we say, uh, so much code can be generated, is it a bad thing? Is it a good thing? 

Uh, that would be like a friend of mine, he thought, okay, there will be probably a profession like code sommelier or something like this. A person who would like, um, taste the system in a way, uh, and, uh, validate it from the human perspective. And say, okay, is it functional or not? And, uh, yeah, if one machine writes the [00:21:00] code, why we cannot add another machine which reviews it and then machine which manages these machines and gives them assignments and tracks their progress. 

Uh, so basically all, all the things we do now in, uh, so many software companies, in theory, yes, this can be replaced. The question is the, uh, uh, what we get, and also, by solving this problem, won't we create any more problems? Like, uh, uh, won't we create a thing which, uh, provides us more tools for the world automation? Because everything we, we now interact with, it has a little bit of processing. I believe this cord to the microphone, it has already chips inside. Somebody has to take care of this part of the [00:22:00] system. The lighting in the house. Your microwave. Yeah, when I was a student, I've been talking to students from Seoul. 

They wanted to work for Samsung back then already. And they were exploring the world of smart devices. And they were telling, okay, chips will be everywhere. In your cup, in your, like, uh, in your shoes, in your clothes. Chips will be everywhere. And we kind of advanced into that stage. We have something wearable. 

We don't even touch now the medical topic, right? Because some people will benefit a lot from augmentation in this area. Um, and, uh, everything has to work, everything has to communicate, [00:23:00] everything has to, uh, um, somehow reach the coherence. And, uh, also in a way be, uh, understandable enough for a human to rely on it. 

And, uh, I don't know how much more code we need. So on one hand, we already got the hammer. And so, like, uh, maybe with the hammer we can build a table; with the table we can get a workplace to write a poem, a novel. Uh, like, that's, uh, the very beginning.  

Sean Martin: Right. Yeah, so I was watching the Euros the other day, and I noticed they were checking to see if there was a handball. 

Oleg Shanyuk: Hmm.  

Sean Martin: And they pulled up [00:24:00] a screen to watch this wave, a signal wave, Uh, of the ball. Okay. In, in comparison to the video image, and they would show the person moving and their hand near the ball, and then the signal from the ball, which clearly had a chip in it, would show that the ball had some pressure. 

Oleg Shanyuk: Okay.  

Sean Martin: Which signaled that the ball was touched by the hand. Okay.  

Oleg Shanyuk: Yeah. Actually, yeah, now even the ball has a chip.  

Sean Martin: Yeah. So different use cases. I want to, um, I want to, because you made another interesting point. Do we have enough code or not? And before we were recording, um, nice. You talked about the languages, LLMs and [00:25:00] AI, stick with the AI theme, because Dinis Cruz has a talk, talk about deterministic gen AI outputs with provenance, and it kind of, I'm going to connect these points together, which is, we have data, we have code, we're building more models to help analyze this stuff. 

Um, It's going to come to a point where, where's the data from? What model generated the output? Is the output accurate? How do we verify that? Um, is that human based? Can we use other machines and LLMs to do that? And then, so there's that whole picture there of what is it? Who created it? What created it? 

What's the output? Is it accurate? Can we rely on it? What do we do with it? And then your point earlier before we started recording was, that's language that we, we speak, we type, um, we have thoughts, we [00:26:00] visualize things, we see things, um, where do you see things headed in, in that regard? Which, obviously different languages, just from different countries as well, but, uh, yeah. 

We're talking about language that could be sound and visual and voice and writing and thinking and coding and blah, blah, blah, blah. It's um, it's an interesting, interesting future ahead of us. What do you see happening there?  

Oleg Shanyuk: Yeah, we were talking a little bit about how we, um, got the machines, the LLM. 

Yeah. Machines now. So basically we build them by feeding all the text there. And in return we get the model which can reliably predict the next word and phrase and [00:27:00] generate a very likely real and truthful output. And, um, the thing is built completely on the language, which we also use. And, uh, the language is a powerful tool, it's probably helped the human brain to evolve, uh, to the state where we are now, and probably helping us to move forward. 

And there is now a theory which says, maybe, yes, um, maybe, uh, to move forward, we should also free ourselves from the limits of the language. Because it provides the power, but it also limits, by its own construct, the imagination. So, at one point, we contributed to build the language model out of our [00:28:00] language. 

On the other point, the output we get would be always limited by this language. It's not as horrible as it sounds. That's very interesting.  

Sean Martin: I mean, we can, we can bring it back to the battery. Yeah. Bring it back. I think you, in earlier, you mentioned the combustible, uh, I'm sorry, the steam engine turning to combust. 

Oleg Shanyuk: Yeah. Yeah. It was like a. The period of history is very short when we had the steam engine and it had its own evolution. So the first steam engine was definitely not as advanced as the modern steam engines. Even today we have working models and for some situations the steam has a lot of power and today we kind of mastered it to a certain extent. 

We cannot compare it to the 18th century. There was a time when, when [00:29:00] we like peaked with the, um, steam engine power and we had to move on. We started to use, uh, uh, combustion engines. We started to use electric engines and, uh, Steam went back, went down. So if we would run on steam, we would need so much, uh, energy now. 

We would need to burn so much stuff and so on. Uh, but it helped us. It helped us to advance a little bit to burn, uh, oil. Yeah, and gas. And, uh, the language itself is also a tool. Right? It just has a bigger historical span. And we probably use tens of thousands of years, maybe a hundred thousand years, we use the language. 

Uh, it evolves all the time as we evolving and, uh, or we [00:30:00] evolving as we evolve the language and at certain 

Sean Martin: The chicken or the egg.  

Oleg Shanyuk: Yeah, yeah. And at a certain point, there will be, uh, like, the language might become a limit. We have to change something to move on. So, we good. We have not only language models, but we have similar technology for signal recognition, video processing, etc. 

So Yeah code generation and things with the code is language So 

Sean Martin: We'll be limited by what we know already.  

Oleg Shanyuk: Yeah. Yeah. That's a, that's a classical thing. It's not, not too bad.  

Sean Martin: Yeah. And the other, the other point as we wrap here, um, is that I think kind of to the combustion engine and [00:31:00] that reached the limit of power. We, it takes too much energy to produce the power that we need. 

So we had to switch and I know we're using electricity a lot for AI and Bitcoin, you name it, right? Um, and, and as everything becomes connected, IOT devices and sensors and cars and footballs and shoes and whatever else. That, that's all gonna require power or energy. And I, I don't know, it's just a feeling I have that we're gonna reach a point where we don't have enough power to run all this stuff. 

And, um, that's, that may be a shift as well where we need to change how we, how we, uh, create a, create an environment in which we live. I don't know. 

Oleg Shanyuk: Yeah, uh, [00:32:00] I was even trying to compute, at some point, if that would make economical sense to actually launch the solar panels in the space and, uh, launch the computers. Probably, I don't know, maybe we will have cooling issues there, but it feels like, uh, maybe easier to radiate heat, uh, from the computers. And the more advanced computers we have, like in our phones already, not that heat extensive, uh, intensive. So basically we can mine bitcoins or mine computation using solar energy, and, uh, maybe that can help us to offset. So we can, we can, there's a lot of questions: how we [00:33:00] deliver this amount of solar panels into the space, how much it would cost, and how much we have to burn to get it there, right? 

We still have to burn.  

Sean Martin: You need the desk built by the hammer first. 

Oleg Shanyuk: Yeah. Yeah. Yeah. And, uh, um, Yeah, there was a book, uh, from Suarez, I think, uh, about the asteroid miners, um, Delta V,  

yeah, uh, it's, uh, like, in short, it's, uh, he, uh, he provides a little bit of perspective, what it makes to get, uh, material into the space, like how to, how to get it there. Uh, he tried to think of what it makes to a human to reach, uh, the asteroids, or something which is within our technical capacity to get to, to get, uh, all the materials, uh, the resources, [00:34:00] and bring it like closer to the planet. 

How much it would cost, like if it'd be orbiting our planet, because, uh, now we build stuff and then we launch. Of course, we cannot afford ourselves to launch, uh, raw material and build in space, right? And, uh, if we want to expand, we probably, yeah, have to dig on the moon. It's something happening already, right? 

Like a few programs to build some factories on the moon.  

Sean Martin: I know there's a view for that. I was reading last night that there's a, that the plan is to decommission the International Space Station and open that level of space for space exploration. Yeah. For the private sector. Open hotels and stuff. Yeah, yeah. 

I guess if you want a holiday there.[00:35:00]  

Oleg Shanyuk: Still be cheaper than the hotel in New York.  

Sean Martin: That's right. Probably true. Probably true. Ah, Oleg. Well, we touched on many things here. I don't know. Hopefully people follow along. I think we, uh, certainly. Hopefully, uh, made people think of it. Uh, a lot of fun topics here. Uh, an absolute pleasure meeting you and, uh, a joy talking to you today. 

And, uh, yeah, hopefully we'll be back on the show again. I think some of this philosophical stuff, Marco would love to chat about as well. So maybe we'll have you on with Marco too, um, my co founder. And, uh, we'll explore some fun things. But, uh, hopefully we didn't talk too much security. I think the main thing is you want all this stuff to stay up and running and, and, uh, and have integrity. 

Yeah. So availability, integrity for sure. Um, [00:36:00] confidentiality and privacy obviously comes into play in certain, in certain circumstances, but, uh,  

Oleg Shanyuk: Thanks, uh, thanks, um, to, to add a few words about the security. I think for the first day, uh, I have to appreciate Martin's words. Like, if you want to have the security, um, you want actually to have quality testing. 

You have to have reliability. You have to have engineering at a certain level. So of course we talk about the software security and system security in software space. And, uh, you can't just go there. It's like, yeah, it's not magic. Skipping, skipping this steam engine is not possible. No. There's no shortcut.  

Sean Martin: No, no shortcut. 


That's, that's a good point. No shortcut. Yeah. Yeah. It, it's not magic. You don't just arrive and it's done. Yeah. It takes some work. Yeah. It takes, uh, it takes this community, and I'm, I'm thankful OWASP brings [00:37:00] folks together that, uh, are thinkers like you and, uh, doers. And, uh, bring us all together to find, find that path beyond the steam engine. 

Right. Yeah. A safe one. We all survive happily ever after. All right. Well, Oleg, thanks for, uh, thanks for the chats. Thanks everybody for listening. Um, I'll try to dig up a couple of things that we, we referenced, uh, during our chat and put, put links in the notes for that and, uh, appreciate everybody listening and see you on the next episode. 

Lots more from, uh, OWASP AppSec Global in Lisbon coming your way. Ciao. Ciao.