
Decoding DeepSeek

In this episode of the DevOps Sauna, Darren and Pinja are joined by Henri Terho to discuss DeepSeek, the newest AI model released from China that's been making headlines on social media.

[Henri] (0:02 - 0:22)

The innovation is really that they can now do submodels of it, with much smaller resolutions that still work quite well.

Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.

[Pinja] (0:22 - 0:32)

Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.

[Darren] (0:37 - 0:55)

Welcome back to the DevOps Sauna. Today, we're going to be diving into a topic that's been making news over the last week. We're going to talk about the DeepSeek AI models coming out of China.

Here with me today, we have one of Eficode’s AI experts, Henri Terho. Hello. And of course with me is also Pinja.

[Pinja] (0:55 - 0:57)

Hi there, and welcome, Henri.

[Henri] (0:57 - 1:03)

Thank you. Glad to be here and talk a little bit about the new AI models and all the stuff happening in the AI field as there's a lot going on.

[Darren] (1:04 - 1:21)

Yeah, it's been kind of explosive over the last few days. We've seen this new DeepSeek model drop and I think the kind of talking point for it has been how much cheaper and how much simpler it has been to generate this. Now, can you elaborate on that at all, or do you think it's just marketing hype?

[Henri] (1:21 - 2:26)

Yeah, that's an interesting point, and we've seen a lot of discussion. We saw the biggest single-day loss in value for one company, with NVIDIA dropping around 17 to 20 percent, roughly a 600 billion dollar drop in value, because of this news. But then again, yes, I think it's a great innovation.

They've really done something so that models can now be retrained much more cheaply. But I also think that's going to fuel NVIDIA in the long run. For now, we see a bit of a dip, and less need to build these huge labs to get to the performance level we're currently at.

But as we know, technology just keeps on going forward. We want more performance, we want more usage out of it. And I think exactly this is what's going to happen, that this is going to drive the usage of AI a lot.

And we've been talking a lot about training AI, and training will become easier. So more people will train AI, and it will also be more accessible, so it's going to run on more devices. So in the long run, I think this is going to be a really big thing for NVIDIA as well, even though they took a dip now.

And we also have to remember that DeepSeek is originally a hedge fund company. So I think they know what they're doing with stock prices as well.

[Pinja] (2:26 - 3:04)

Yes, this has been on everybody's minds and everybody's lips. If you go to LinkedIn right now, or to any technology discussion or any news site, you see DeepSeek, you see this one model being discussed. There is so much speculation going on, and people are very keen to jump to conclusions.

Henri, we're now saying that perhaps this will actually be good news for NVIDIA, but some people are calling this the Sputnik moment of AI, while others say, "No, it's not the Sputnik moment, it's the Google moment of AI." So maybe with this discussion it's worth shedding some light and bringing it into context.

[Henri] (3:05 - 4:31)

Yeah. I think I read the same article about the Sputnik versus Google moment that you're referring to. And I think the Google comparison puts it quite nicely: with Sputnik, you have the whole engine of the state doing something really great, getting something into orbit and showing that our state can do this.

The Google moment is more like: we had two guys in a university classroom figuring something out and writing a paper about it. And I think this moment is more about the latter. I think this also says a lot about how the whole AI scene and open source scene has always progressed and will keep progressing.

And I think this is another rung on the ladder. I want to take another example here about training models cheaply to get the same performance. About a year and a half ago, Silo AI also trained a foundation model that basically ran on the older AMD hardware we have in the LUMI supercomputer in Finland.

And many people in the technical community went, oh, you can actually train with something other than NVIDIA as well. And that was also trained quite cheaply. So there are a lot of these kinds of movements, and yes, it's a technical innovation that you can do it cheaply, but it's also a superbly good marketing move that they've actually released the model as open source with nicer licensing.

Compare that with Llama: Llama is open source, but some of the licensing terms place limitations on its users. This is like the first big MIT-licensed model, but they really didn't release everything as open source.

You don't know about the data, and you don't know about that side of it. Also, one thing to discuss: we've talked a little about the model, but there's also the app used to actually access the model, and there's a lot of different terminology being thrown around as well. So what's going on there?

[Darren] (4:31 - 5:30)

Yeah. And I think what we need to do here is clear up the difference between the app and the model, because I think the app is one of the things driving the marketing hype.

It became one of the most downloaded apps on the app store, but from a security and compliance standpoint, use in Europe becomes complicated because it relies on servers and data storage in China, which breaks GDPR regulations. So I think we do have to talk about the app for a moment first, because the models are the more interesting and more complex discussion, but for a moment, let's just go through the app.

The app, I believe, came up suspiciously quickly in my opinion; it kind of came from nowhere overnight. And as you say, DeepSeek is a hedge fund; they know how to play the markets, and having an app suddenly explode and become the most downloaded app is certainly something that's going to drive a lot of positive attention their way.

[Henri] (5:30 - 6:44)

Yeah. And I agree with you that there's been an explosive marketing drive to get it in front of everybody: hey, here's something that we did, which is a lot better than NVIDIA, and pushing that message.

And not even NVIDIA; I think the target is OpenAI, Anthropic, and all of these other companies building their own models: we did this better. And, "Hey, here's an app that everybody suddenly wants to install and test out." But as you said, there's a really thick envelope of red tape around it. Where does the data go?

What does it actually extract from your cell phone? And all of that. We had a lot of the same kind of thing happening originally in Europe when OpenAI and others were training; they also took a lot of data. But now we have frameworks like GDPR, as you said, to give consumers rights around that, so companies cannot just extract all the data. So exactly: don't use apps whose data is tied to a lot of different places, at least not in a company context, because as you said, GDPR is pretty much broken by that.

And there are a lot of problems there in the LLM infrastructure and the runnables around the model. The model in itself, as you said, is the more interesting discussion of what's happening, but exactly: the data storage, the processing of the data, where does it happen? How is it done?

What data do they collect, and how will they use it in the future? Those are the mysteries around this.

[Pinja] (6:44 - 6:59)

But I guess this is nothing new if we think of big companies, even in the US, for example Meta, Google, and so on. So am I now just facing a decision about who actually gets to use and store my data, right?

[Henri] (6:59 - 7:31)

Yeah, pretty much. Who do you choose to give all of your data to? There was also the big debacle back when Slack added the opt-out clause to their terms, for example, so that you had to opt out or otherwise they would use all of your Slack data for AI training.

And a lot of these companies have been pushing for opt-out clauses now because data is king. Even though we can now train models more cheaply, the amount of data these companies had for training their models is huge. And that's what they need.

So that's still going to be the number one resource that everybody's trying to get their hands on.

[Pinja] (7:31 - 7:43)

And there was one claim as well, from what I read, that DeepSeek used GPT-4 as a source. Was that something either one of you encountered in the news in the past couple of days?

[Darren] (7:43 - 8:35)

I think there was a news story that broke in the last couple of hours suggesting that OpenAI's models were used to train this model. And I think that comes down to the model itself; if we start talking about the model and its actual function, we can start moving towards that. So, assuming we're done with the app, it's basically just a case of being careful about using it, but that goes for everything, because it depends on who you trust not to use your data, even when you opt out.

Your data is usually going somewhere, whether the US or China, and that's not ideal. Just keep that in consideration. Then we can talk about the model, because the model is far more interesting, but maybe we should start with the basics.

Can you maybe, Henri, talk about how it is outperforming these larger models with smaller budgets?

[Henri] (8:35 - 11:54)

Yep. So let's talk a little bit about the model first, and then we can talk about all the stuff around it. Typically when we talk about models, there are two parameters that determine how they scale and what they actually are.

The one that keeps floating around a lot in the media is how many billions of parameters the model has. That's typically been the dominant figure in many of the articles: they have this many parameters, the others have that many parameters, and so on. And that is basically telling you how many buckets the AI can put knowledge into, how many different concepts of knowledge the AI model has.

Is this concept a cat or a mouse, or something like that? Of course, these labels I'm using are generalizations, but that's basically what AI does: it takes knowledge, puts it into different buckets, and that's how it understands what is related to what. And how this is done is that every single one of these parameters expresses a relationship, really either a positive relationship or a negative relationship.

So really like positive relationship or a negative relationship. An example of this would be the Swedish King and the King of the UK, for example, could have a positive relationship of 0.8, for example, and a cat and a king might have a negative relationship of minus two or something like that. Basically, similar concepts have similar relationships, basically.

And this is what we're talking about when we're talking about parameters in models. And the second thing that's been now talked even more, this has always happened in the background, but it hasn't been talked about because it hasn't been relevant in the same way, is what's called how many bit model it is. How big of a resolution that plus-one minus-one relationship has.

And what I think has happened with these DeepSeek models is that they've been able to push this down into smaller pieces and work with smaller models on ordinary hardware, because they've made the resolution of the model smaller. So you don't have to have 32 bits of storage for each of these plus or minus relationships; you don't need a thousand different possible values for it.

It's enough to have something like four or five bits in that case. This decreases the resolution of the model. Then you pretty much multiply these two numbers together, parameter count and bit resolution, and that tells you how much memory the model requires.

And that's still the case. If you look at the DeepSeek R1 model, which is their mainline model, even though they're saying you can run it on any hardware, you still need something like 300 gigabytes of memory to run it. So that's not the innovation; the innovation is really that they can now do sub-models of it, with much smaller resolutions, that still work quite well.

And that has been outlined in many of the papers: how you can, through what's called distillation, basically take other models and then distill them to be smaller, taking subsets of the data so they still perform quite well. That's what's driving the whole innovation craze now, and why Meta is setting up war rooms to figure this out. And that's actually pretty much all in the paper.

So they released all of that openly, but they didn't release the pipelines for how to actually do it. So now everybody's rushing to reverse engineer it and do it again. And that pretty much brings us to the question: are the claims accurate?

Well, technically, yes. They are accurate in the sense that at the big model scale it performs well, and at the smallest scale it's very efficient.

So you're not getting the absolute best performance; you're not comparing a model you can run on a Raspberry Pi to a model you run in a data center. But the Raspberry Pi model is a lot better than an OpenAI model would be, run on a Raspberry Pi.
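
A minimal sketch of the back-of-the-envelope math described above: the memory needed for the weights is roughly the parameter count times the bits per parameter, divided by eight to get bytes. The parameter counts and bit widths below are illustrative assumptions, not official DeepSeek figures.

```python
# Back-of-the-envelope memory estimate: parameter count times bits per
# parameter, divided by 8 to get bytes. The figures used here are
# illustrative assumptions, not official specs.

def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory needed just to hold the weights, in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

# A model on the order of 670 billion parameters at different resolutions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(670e9, bits):.0f} GB")

# A small distilled model, e.g. around 7 billion parameters at 4 bits:
print(f"7B at 4-bit: ~{weight_memory_gb(7e9, 4):.1f} GB")
```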

[Darren] (11:55 - 12:09)

One thing I'd like to open up a little bit: there's a term that's been popping up that I don't think a lot of people are familiar with, and that's the idea of a distillation model.

I'm not sure that a lot of people have heard that. Can you open up a little bit more about what a distillation model means?

[Henri] (12:09 - 13:52)

Yeah. A lot of the terminology around AI is still old data science terminology that's being bent to fit these models. What we are actually talking about is what's called model distillation, or knowledge distillation.

What you can do is basically train one model with the inputs and outputs of another model. This ties into what's called reinforcement learning: you give the model outputs from, for example, OpenAI. This is what they are claiming has been done: that the people training it have been running queries against OpenAI's models, asking questions, seeing what comes out, and then feeding this data to their own model.

This is basically model distillation. It's, of course, a lot easier if you have your own models, because then you don't have to go through APIs and jump through hoops to get at that data.

If you have your own model, you can do it at a much lower level, but you can do it with basically any resource. In a sense, I'm even doing model distillation myself by just talking to OpenAI's models and giving my data to them. But this is the same transfer of knowledge on a much grander scale.

And this is what they also released a lot of: Qwen running on top of DeepSeek, Llama models running on top of the DeepSeek approach and the reinforcement learning recipe they use. That's how they can provide a lot of different models on top of it.

One problem, for example, is that you still get some Chinese output from these base models, since they are of course trained with Chinese as the main language, and there's a lot of that data.

But when you do transfer learning or this distillation (transfer learning is a similar, overlapping term), you take a Llama model from Facebook, transfer a lot of that knowledge over, and give it more weight, so it matters more than what you have at the base level. Then you get this distilled model.

So it's pretty much transferring the knowledge of other models on top of your own model.
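
A minimal sketch of the distillation recipe described above: collect a teacher model's answers to a set of prompts and use them as fine-tuning data for a smaller student model. The `query_teacher` helper and the dataset file name are hypothetical stand-ins, not any specific vendor's API.

```python
# Sketch of the data-collection side of knowledge distillation: ask a larger
# "teacher" model questions, record its answers, and write them out as a
# fine-tuning dataset for a smaller "student" model.
import json

def query_teacher(prompt: str) -> str:
    # Placeholder: in practice this would call the larger teacher model.
    return f"teacher answer to: {prompt}"

def build_distillation_dataset(prompts, path="distill.jsonl"):
    """Write (prompt, teacher answer) pairs as a JSONL fine-tuning dataset."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "completion": query_teacher(prompt)}
            f.write(json.dumps(record) + "\n")
    return path

prompts = ["Explain GDPR in one sentence.", "What is model quantization?"]
dataset_path = build_distillation_dataset(prompts)
print(f"Wrote distillation data to {dataset_path}")
# A student model would then be fine-tuned on this dataset so that its
# answers imitate the teacher's.
```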

[Pinja] (13:53 - 14:04)

And if we think about the distillation and everything that goes into this, the cost of the team, the hardware, is that baked into the figure? How does it affect the cost of working with DeepSeek?

[Henri] (14:04 - 15:01)

Yep. Well, for example, say I take a Llama model now at Eficode and build on it. I can't make a fully proprietary model on top of it, but I could make an Eficode model and distill knowledge on top of an open source Llama model. Can I then say, hey, I did this for just half a million, I did my own training?

And this is, I think, also the question being batted around. As we said, DeepSeek comes out of a hedge fund company. They've had a data team for a long time.

They even bought a lot of NVIDIA GPUs before the embargoes happened. So where do you draw the line on the price of this project compared to all the big learnings, hiring the team, building up the knowledge base that you have? I think even OpenAI could train their new models quite cheaply now that they have all the infrastructure in place.

How do you split the costs? I think the price here has been downplayed, exactly for the nice media impact.

[Pinja] (15:02 - 15:25)

And related to this, we already briefly touched on OpenAI models being used in training, and perhaps also being used as a source. So everything we have right now is feeding into one another. And as you said, OpenAI is now taking learnings from this.

Meta's team, with Llama, going into their war room and taking learnings from this. So it's interesting to see the next steps and what everybody's going to be doing.

[Henri] (15:25 - 17:02)

And as I said, DeepSeek also has their own huge models in the background; they don't have to steal stuff from OpenAI. But also, this is a community.

The AI community is still not all that large. There are papers going around, people know each other, people talk. So the knowledge also spreads that way.

And like I said, what does this mean for open source and AI in general? I think the same thing is happening with these models that has happened with all tech. If you think about it, basically all server infrastructure in the world runs on Linux.

It all runs on stuff you can fix yourself, where you know what's actually happening under the hood. Initially we got the big pushes from Microsoft and others bringing operating systems into the limelight, showing what you can do with a computer, and at some point those became black boxes that you cannot fix. And I think this is now the case in the AI world.

Now that we've gotten the first taste of what we can do with all these language models, the demands are also going up: hey, I cannot build my business, my business logic, or what I do in my company on top of a black-box AI. How can I trust what's going on in there? Or: I don't want to give you all our business processes and what we are doing.

What do I have left at that point? If you already have my data and then you have my processes, what is my company after that? So what I see coming on the model side is exactly what has happened in the open source community.

The model is just going to be a model; stuff is going to happen in the background. It's more about how you run that stuff and what's happening around it.

And I think this comes back to the app discussion we had: yes, DeepSeek AI as a business runs the model in their app, and then there's the model itself that they have released.

[Darren] (17:02 - 17:38)

Let's actually talk about that, because surrounding this discussion of the AI model itself there's, I feel, a lot of mistrust of anything that's fronted by a Chinese hedge fund. So there's been a lot of discussion about the security of even using these models in a local installation. Maybe we can dive into that, because I think not a lot of people know how models are deployed and used locally. Could you open up a little bit about the architecture of how these models are run?

[Henri] (17:38 - 19:36)

Yeah, it's going to get a little bit technical. We're going to build on what we said about the parameters and the resolution of those parameters, because that's basically exactly what a model is.

It's just a huge table of matrices and vectors. In essence, a model at its deepest level is just a lookup table, a huge lookup table of data for what letter comes next.

Viewed in a very basic sense, that's, I think, the most interesting thing about this whole evolution: mathematically the whole thing is, I wouldn't say simple, but kind of beautiful, in that basically a glorified version of the text prediction you have on your cell phone can produce output at the level we're seeing. I think that's great.

And exactly because of this, there's a lot of AI demystification we have to do here: the model is just a lookup table. That's why, if you look at GGUF and safetensors, which are the file types used to transfer and run these models, they are pretty much just packaged matrix tables.

There isn't really any runnable code in them; there's some boilerplate around them, yes.

And it has been shown that you can do buffer overflow attacks and things like that using those files, but in the same way you can do that with PDFs and many other file types. It's not a property of the model itself.

It's possible that people try that, but, for example, the DeepSeek R1 model and the other models are already available in Google's cloud, listed in their model catalog, and in Azure. They have taken those safetensors models and run them in many places.

So I think that's also some kind of proof that the model isn't a security threat in itself. It's more about the platform you use to run the model, and whether you trust that platform. And then we again, of course, end up with: do we trust US platforms?

Do we trust Chinese platforms? Do we trust European platforms? And then it's more about your personal preference in there.

But that doesn't have anything to do with the base model itself. And it's the same way we've deployed models for a long time.
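
A minimal sketch of the "it's just a matrix table" point: a safetensors weights file starts with an eight-byte little-endian length, followed by a JSON header that only lists tensor names, dtypes, shapes, and byte offsets, with no executable code. This assumes the standard safetensors layout; the file path in the usage comment is hypothetical.

```python
# Inspect the header of a .safetensors file to see that it only declares
# named tensors (weight matrices), not code. Assumes the standard layout:
# 8-byte little-endian header length, then a JSON header, then raw tensor data.
import json
import struct

def list_tensors(path: str):
    """Print the tensors declared in a .safetensors file without loading them."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # size of the JSON header
        header = json.loads(f.read(header_len))
    for name, meta in header.items():
        if name == "__metadata__":  # optional free-form string metadata
            continue
        print(f"{name}: dtype={meta['dtype']} shape={meta['shape']}")

# Example usage (hypothetical local file):
# list_tensors("deepseek-r1-distill-qwen-7b/model-00001-of-00002.safetensors")
```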

[Darren] (19:36 - 20:27)

There's also a little bit to say about the publication of the model, because what we know from security is that zero-day vulnerabilities are extremely valuable, extremely rare things, and the longer you can keep them under wraps, the more data you can get from them and the more you can use them. So publishing an open source model that contains a novel vulnerability, exploiting something like the tensor files through the minimal amount of executable code there, would only be interesting for a harder-to-obtain model, because otherwise it's just going to be ripped apart and discovered.

And I think people are already doing traffic analysis, already looking for anything calling out somewhere they don't expect it to call, and I don't think anything has been found yet.

So, tentatively, I think it looks okay.

[Henri] (20:28 - 22:08)

Yeah, exactly. Every single security expert now, in every big company and in many countries, is looking at this. If you wanted to put a zero-day vulnerability in this package, you wouldn't do it now; you would do it later, at some other point, because everybody is looking at the DeepSeek packages right now.

And as I said, it's open source. You can go look, go ahead and check what is there. It's kind of like an open invitation.

Of course, as I said, and we come back to this, it's very difficult to unwrap a model. It's a huge matrix table.

It's a matrix table with something like 700 billion parameters. You cannot go in there and figure out, okay, what does it do with this concept. It's just a bunch of weights.

But exactly, that's also the reason it's super difficult to program anything into it; everything is a statistical process of producing output, so it's not an easy vehicle for executable code either.

Another kind of security threat that has been considered is more of a side-channel concern: what information has been taught to the model, because that's very difficult to confirm. And that's what we see; the most typical form of this is censorship and the like, which happens in all of these models.

For example, it happens in different companies' base models: what information do they choose to omit from their models in training? We always have model bias. And this is also why there's the Open-R1 project now, where people are trying to unwrap it, because they didn't release the actual source code of the pipeline they used to train the model.

They just open sourced the drawings, the architectural diagrams, all of that, and they didn't open source the data. But there's a huge project now to also open source the whole pipeline to see what's up. And I think that's going quite well.

And there's been a lot of innovation even in just replicating it. I think one of the universities in the US has also replicated the pipeline. So it is possible.

[Pinja] (22:09 - 22:21)

And let's take that as a segue. What does open source mean for AI in general? If we think of Llama, for example, is it open source or isn't it? What is the situation there?

[Henri] (22:21 - 23:06)

Llama is open source as long as Facebook lets you use it. You can do stuff with it, you can use it, but there's also a kill switch on the usage side. So as I said, this is the first very large model published under an MIT license, where you can just do whatever you want with it.

There's also an interesting point about liability: if you just release a model under an MIT license, I'm pretty sure you are not liable for any of the comments or anything the model does. The whole legal field around AI is a minefield that's yet to be tested in any court. But I think this is one of the big steps towards actually democratizing AI and getting the open source community up to speed.

And the moat around these big companies that take in billions in resources is constantly dwindling, because, well, a lot is happening around us.

[Pinja] (23:06 - 23:56)

I see multiple people on LinkedIn, of course, having their opinions on this. But people are also sharing how they're now working on their own RAG setups and taking these models into use. And as you say, this is a step towards democratization.

It will be interesting to see whether, for example, legislation can keep up with this; legislative processes, especially in the EU, are extremely slow as we know them. And it was not a joke when somebody said a couple of years ago that we will see more development in these five years than we saw in the past 40.

So I think this is just going to speed up more and more. Do we even know where this is going? And there are also the ethical elements, I guess, around the use of these models and the training of these models that need to be taken into consideration.

[Henri] (23:56 - 25:56)

Yeah. And I guess that's exactly it: we don't know with any of these companies where the data comes from. Open sourcing data is very difficult, because data is the key for many of them.

As I said, even in our company, the knowledge in our experts' heads is the number one thing we're selling, and everybody is selling that. And if, on top of that, we sell our processes and give everything out to AI agents, what do we have left? I think that's the big discussion that's going to change a lot of things for companies.

And I think that's what we're now seeing: initially everybody was going to the cloud, just letting everybody test AI in the cloud and giving it all the data. Now many people are realizing that, hey, we might actually have to transition back to on-site or on-prem or something like that, because we don't want to give all of our stuff away. And it's not just the data privacy discussion but also ethics: what kind of models, where are they used, who uses them, what actually goes into the model and what's being left out, because we don't have that control.

With more and more open source happening, companies get more control, and even individuals get more control if they want to build their own models. And one interesting tidbit about why I think this is coming out of China: even though many people don't think of China as a super capitalist country, and I do some business in China, I think it's even more hyper-capitalistic in the sense that there are no strong data privacy laws.

There's essentially no copyright, no laws around that. So everything is very, very focused on execution, with everything being fair game.

And this really accelerates some of the development, not always in a good way. For example, there have been cases of rival companies making bomb threats against other companies just to slow down their development or get their developers out, that kind of thing. So there's a lot going on there, but because of the lack of copyright law, as I said, execution is king.

And that's why we get a lot of this. It's more of a philosophical discussion then, about what we want. I, at least, like the European privacy laws.

I like living in Europe as well, but there are a lot of different philosophical aspects and ethical considerations to this, depending on your background and what you want.

[Darren] (25:56 - 26:28)

There's also a consideration to be taken on the US side of things. I mean, we're talking about how this DeepSeek model has been open sourced, and the question arises: can it really be open source if we don't know the data that's gone into it and don't know the pipeline? But on the other side, we have the US companies pushing out these completely opaque models where we don't know anything about what goes into them.

We don't know the data, we don't know the pipeline, and we don't know the model. All we get is the output. So we kind of have to weigh both sides of this.

Exactly.

[Henri] (26:28 - 27:14)

And when you think about Meta and the other companies who have these models, they have a lot of data on you already. The question is, have they ever asked you if they can use that data, because it was collected even before the advent of AI and a lot of this. So there's a lot of data floating around that we don't really have any control over.

I think pretty much every single person has a Meta account in some form, or has had in the past. And has anybody actually read through all of those terms? I'm just taking Meta as an example, not to pick on them specifically, but there are a lot of these accounts that many of us have.

We all have a Microsoft account, a Google account, a lot of these things. And we just have to trust the legal side of it. We have to trust GDPR; we have to trust this.

It's all about the chain of trust in a lot of these places, and that's what we have.

[Darren] (27:15 - 27:22)

Okay, thank you for joining us today, Henri. This was insightful. We will be back next week with another episode of the DevOps Sauna.

[Pinja] (27:23 - 27:25)

Thank you everybody for joining. Thank you, Henri. Thank you.

[Henri] (27:25 - 27:27)

Thank you for having me and talking about AI.

[Henri] (27:27 - 27:28)

It's been fun.

[Pinja] (27:32 - 27:35)

We'll now give our guest a chance to introduce himself.

[Darren] (27:36 - 27:38)

And tell you a little bit about who we are.

[Henri] (27:38 - 27:57)

Hello everybody, my name is Henri, and I'm actually a biologist by background. My background is in computational systems biology, so I've been in the AI and data field for 15 years. I'm now working as a Senior AI Consultant, really enjoying the progress that's happening in the LLM space and beyond, and I'm a thorough open source fanatic as well.

[Pinja] (27:57 - 28:01)

I'm Pinja Kujala. I specialize in Agile and portfolio management topics at Eficode.

[Darren] (28:02 - 28:05)

I'm Darren Richardson, Security Consultant at Eficode.

[Pinja] (28:05 - 28:07)

Thanks for tuning in. We'll catch you next time.

[Darren] (28:07 - 28:13)

And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.
