
January DevOps News

In this episode of the DevOps Sauna, Darren Richardson and Pinja Kujala round up the latest industry news for January 2025, including current stories on DevOps, technology, and AI.

[Pinja] (0:03 - 0:09)

I think we can agree on the fact that this is a step towards a democratization of AI.

[Darren] (0:14 - 0:22)

Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.

[Pinja] (0:22 - 0:32)

Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.

[Darren] (0:38 - 0:42)

Welcome back to the Sauna. I'm here once again with Pinja.

[Pinja] (0:42 - 0:43)

Hey Darren, how are you doing?

[Darren] (0:43 - 0:54)

Pretty good. I think it's about time we talked about some news. It's been an eventful January, I think.

Quite a lot of things have been popping up from time to time.

[Pinja] (0:54 - 1:19)

It has. It has been a very eventful month. A lot of things happening in the field of DevOps, technology, and AI.

A lot of things happening in AI, to be honest, in the past couple of days. But let's talk about other things as well, because we have seen a report telling us that DevSecOps adoption is on the rise, but at the same time, developer security training is going down.

[Darren] (1:19 - 2:04)

Yep. Black Duck Software was actually the author of this analysis. They found, I believe it was a 67% increase in organizations performing software composition analysis and a 22% rise in the number of organizations creating software bills of materials (SBOMs).

So it's quite interesting, because Black Duck, as we know, is an industry leader in software code security. And the fact that we're seeing more of an increase on the technical side is quite impressive, but it's also a bit worrying to me. Only 51.2% of organizations provide basic security training to their application development teams, the lowest rate observed to date. It's quite surprising that that's being cut.

[Pinja] (2:05 - 2:27)

And there is this contrast. To me, what is striking here is that organizations are now employing more and more of what are considered good DevSecOps practices. So where is that coming from?

How is that actually working in organizations if training is not at a high level, yet we're running SCA and creating SBOMs more than before?
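For listeners less familiar with SBOMs: a software bill of materials is typically a machine-readable JSON document, for example in the CycloneDX format, listing every component that went into a build. As a minimal sketch, assuming a CycloneDX-format file named sbom.json (the file name is an assumption for illustration, not something from the report), listing its components could look like this:

```python
import json

# Minimal sketch: print the components recorded in a CycloneDX-format SBOM.
# Assumes an SCA/SBOM tool has already produced "sbom.json" (illustrative name).
with open("sbom.json") as f:
    sbom = json.load(f)

for component in sbom.get("components", []):
    name = component.get("name", "unknown")
    version = component.get("version", "unknown")
    print(f"{name}=={version}")
```

Feeding a component list like this into vulnerability databases is essentially what SCA tooling automates at scale.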

[Darren] (2:28 - 2:49)

Yeah, it's kind of curious. I mean, I would have expected these two numbers to increase together as standard, but maybe it's that the more we use these automated tools, the less people think we need the human side of things, which, as we've seen from many incident reports, is just not the case.

[Pinja] (2:49 - 3:05)

That is true. So let's see how the numbers develop in the future, because we might see some impacts of this, for example, in terms of vulnerabilities, or organizations perhaps taking a more active stance on training their people.

[Darren] (3:06 - 3:24)

Yeah. It also makes me wonder about certifications, because a lot of these trainings are actually required by a number of certifications. I wonder how much impact things like NIS2 and the Cyber Resilience Act are going to have on this, but maybe that will be in the next document Black Duck puts out in a year's time.

[Pinja] (3:24 - 4:04)

That is an interesting development that we shall most likely see, given the need to implement those requirements. There's another interesting piece of news, more on the AI side, and this one is actually from Microsoft.

They released a blog post a couple of weeks ago: they're increasing their efforts on AI platforms with a new organizational structure. They're bringing together multiple areas of their organization, because they have this mission to build an end-to-end Copilot and AI stack, for both their first-party and third-party customers, so that all AI apps and agents would be built and run together.

[Darren] (4:04 - 4:43)

Yeah. We're getting into an AI section, I think. We have a number of AI stories, and it can sometimes be quite difficult to cut through the marketing jargon.

If we look at the actual announcement on Microsoft's blog, it says this is leading to a new AI-first app stack, one with new UI and UX patterns. There's a lot of marketing hype, but I don't know. I've been quite impressed by the seamless integration of Copilot.

So I'm interested to see how this will push forward and how CoreAI is going to influence the world of development.

[Pinja] (4:43 - 5:18)

Of course, this was a Microsoft announcement mainly about organizational design, but it is a sign of their new push in the area of seamless AI and AI agents. They are saying that, of course, organizational design should not dictate how they, or any of us, offer services. But we know, as Conway's Law states, that the organizational design we have is what we ship to customers.

So putting effort and focus into an organizational unit dedicated to this AI platform will be interesting to see.

[Darren] (5:19 - 5:39)

It was also interesting because the message was actually positioned as having been sent internally at Microsoft and then posted to a blog. So this is something they're pushing from the inside, which is kind of a unique method of communication.

I don't think we get exposed to internal company communication that often.

[Pinja] (5:39 - 6:00)

No, that is correct. So from the marketing point of view, they're communicating to customers as well, even though this was something meant to be seen internally. For us as Microsoft service users, or bystanders, the message is: hey, this is what we're working on at the moment.

See what we can do with this way of working in the future.

[Darren] (6:00 - 6:44)

And they're not actually the only ones with interesting news in the AI sphere. A little closer to DevOps, we have the launch of Aiden 2.0. OpsVerse launched Aiden some time ago, and they've now released a new version, which is basically a copilot designed to work across all stages of DevOps. And it's kind of interesting, this approach of unified tool integration.

So it's basically reaching in and connecting with everything. And I think AI has been quite useful for building the individual DevOps bricks to put together; it will be interesting to see how a tool that covers the entire chain works.

[Pinja] (6:45 - 7:06)

And I'm thinking of this from the Developer Experience point of view, as one of their claims here is that with a consolidated AI working across the whole chain, developers could actually focus more and more on what they're supposed to be doing, instead of on the tools and how those are working.

[Darren] (7:06 - 7:23)

So, this is one of the selling points they have for Aiden 2.0. And again, it comes back to something you and I have discussed a couple of times now: the best thing for developers is to abstract everything away from them as much as possible and get out of their way so they can develop.

[Pinja] (7:23 - 8:12)

Yeah. And we have seen studies on how much of their day developers spend struggling with tools, and how the disruptions to their day so often come down to the tools. So I'm hoping that we see more and more of this.

And this is one of the predictions we've made: that DevOps will get more AI-agentic work in the future. Actually, this is a good segue to the next piece of news we had in mind: a study by Dan on how much time is actually spent on coding, and an article referencing how much impact AI is having on that work at the moment.

The study shows that developers spend only 15-20% of their working time on coding.

[Darren] (8:12 - 8:49)

Yep. This was a study covered on DevOps.com. So it's basically a reality check on the ideas around AI and how much impact it will actually have.

It talks about that 15-20% of developers' work and how the real value lies elsewhere: in understanding, knowledge work, and creativity that can't yet be abstracted out to AI. I mean, I don't know if we can say it won't ever be abstracted to AI, but right now, AI can produce reasonable code, while the knowledge work is still very human.

[Pinja] (8:49 - 9:08)

Exactly. And this is maybe the state-of-affairs news for this month: we talk a lot about what Copilot tools can do for the productivity of developers and developing organizations, but we must not forget the human element here.

[Darren] (9:08 - 9:41)

Yep. Another thing I thought was quite interesting in this article was the spotlight being shone on the advantage given to the larger players: the Microsofts, Googles, and Amazons.

When you think about AI, the huge amount of data and the huge amount of processing power required naturally lend an advantage to those big players. So I do recommend that anyone who's interested in the current state of AI check this article out on DevOps.com.

[Pinja] (9:41 - 10:01)

I think it was a very nice refresher, though perhaps not the biggest revelation of our time. But then again, when we talk so much about AI and it is on everybody's minds, let's come back to what actually matters at the moment: where we are with adoption and what the realities of using it are.

[Darren] (10:01 - 10:33)

Yep. And on that subject, we actually have more news from the US regarding AI: the Stargate project. I think everyone's heard about the idea to spend $500 billion over the next four years to build up the infrastructure required for the AI to come.

And I feel like at $500 billion, that would make it the most expensive project ever, I think.

[Pinja] (10:33 - 10:59)

It should. And we're talking about big players here. This is $500 billion to build AI data centers and infrastructure in the US.

And we've got Oracle, NVIDIA, and OpenAI investing in this and being part of the initiative. So it's not only big in the scale of dollars being spent, but also in the players involved, who are big names in the field of AI at the moment.

[Darren] (11:00 - 11:26)

And I expect we're going to see more people joining that initiative, given the large-scale needs of AI. I mean, we're seeing power plants being opened and dedicated purely to powering AI. We know the power requirements of AI.

So, as this pushes forward, I do expect to see many more large players step up to this stage. It's going to be a big deal, I think.

[Pinja] (11:26 - 11:48)

It will. And I'm thinking about how the discussions around the sustainability of AI data centers have also been growing at the same time as we get new models and AI plays a bigger part in our lives. So I'm expecting this topic to be raised more and more in the upcoming months and years.

Yeah.

[Darren] (11:49 - 13:08)

And it's one of those curious things where, as I understand it, not being an AI expert, the thing that takes the most resources is not the application of the model, it's the training of the model. So this is actually one of those curious situations where corporate interests can't shove the responsibility down to regular people as if the corporations are doing nothing wrong, because it's typically the corporation training the model and the person merely using it. So it's going to be interesting to see whether environmental impact and sustainability are considered in that project, and how.

And I think that's a discussion we're frankly not having enough of anyway. The use of GPUs and massive parallel computing has been accelerating since 2011 and the dawn of Bitcoin. We got to a tipping point, I think, in 2016, where it was no longer profitable to mine Bitcoin based on the power usage alone.

I may be wrong with the dates there, but it's become very clear that power is a resource we need to take a critical look at when it comes to computing, when it comes to AI, and when it comes to the footprint of this whole Stargate project and AI in general.

[Pinja] (13:08 - 13:33)

Previously, the big discussions have been about, for example, which is the most environmentally friendly development language. We have simple languages and more complex languages, so what are the GPU requirements for running software written in those languages?

But now we're getting into the sustainability and environmental impact of AI, and I think it's high time to have those conversations.

[Darren] (13:33 - 13:49)

Yeah, I think you're absolutely right. But we've talked quite a bit about AI now, and if everyone else is like me and starting to feel a bit of AI exhaustion from it being a constant topic of discussion, shall we talk about something a bit lighter, like Meta?

[Pinja] (13:49 - 14:18)

Let's talk about something lighter. Meta, indeed, and social media. Meta is ditching their fact-checkers.

This was big in the news a couple of weeks ago: they're replacing the fact-checkers with community-driven measures. And it's a strange development, because it's a clear change from the strategy they've had for many years, but it is not unprecedented, as we've already seen this happen on X, haven't we?

[Darren] (14:19 - 15:34)

Yeah, and the idea of having community-driven measures can work. We've talked about this previously when it came to Bluesky as well. Bluesky is very community-driven in its fact-checking and in the general curation of the platform.

So having a strong community of fact-checkers can work. And Mark Zuckerberg's justification for this was that the fact-checkers were politically biased and didn't really help trust, which I think could be a fair discussion. But I think the timing of it is suspicious, coming right after the changeover in US politics, without going too much into that side.

So it does serve to remind us that these are businesses. I think it's fair to say they don't have our best interests at heart a lot of the time, and they will simply go in the direction that works best for their business. And if community checking works, that's great.

I think the laws that were there to require fact-checking were good, but it's possible that they were leaning in a direction that wasn't comfortable for everyone. So it's going to be interesting to see how Meta changes going forward.

[Pinja] (15:35 - 16:16)

It is. X has already implemented this, and Bluesky has implemented this, but it's not new even from their time; it predates those systems as well.

X's version, I believe, was originally called Birdwatch. And if we take Wikipedia, it has been written, edited, and fact-checked by volunteers all along. So this is not exactly a new thing. It will be interesting, for myself at least, to see what stand the European Union takes on this.

This is not just Meta operating in the US: can they bring this overseas as well? And what is the take of European Union legislation on this?

[Darren] (16:16 - 17:03)

It's actually curious, the different architectures. We all know that Twitter and Bluesky are very similar, and Wikipedia actually operates on the same principle, in that the information is available to everyone. Whereas Meta's platforms, Instagram and Facebook, are actually built around these kinds of closed communities of people instead of an open, full discussion.

So it will be curious to see whether the community-driven model works there, as I feel it might drive communities to be more ostracizing of certain members, which wasn't really possible on Bluesky and Twitter. So again, it will be curious to see how the implementation goes.

[Pinja] (17:03 - 17:27)

I fully agree. From social media to security: let's talk about security for a moment. There is a study by Legit Security saying that we are now at more risk.

Their study says that 100% of companies have high or critical security risks. How does this sound to you, Darren, as a security expert?

[Darren] (17:27 - 18:33)

It was a curious read. 100% is a high number, and I wonder how much confirmation bias might be involved there. But if we look at Legit Security as a company, they are a security consulting company.

They are the kind of company that gets called by people who are having trouble. So I feel like there's good potential for confirmation bias, but I also feel that even if there is, they may not be that far wrong. I think a lot of organizations have a lot of invisible vulnerabilities they don't realize, and when they do realize them, they find ways to excuse them, accept them, or just quietly brush them aside and hope someone else will deal with it.

So, while I don't think 100% of organizations as a whole have them, I suspect it's a higher number than anyone is comfortable to admit.

[Pinja] (18:33 - 19:20)

I agree, and the study from Legit Security said that secrets exposure, for example, is a pervasive issue, with 36% of secrets found outside source code, in tickets, logs, and artifacts. 36% is over one-third, so, as you said, Darren, a higher number than anyone would be comfortable to admit.

And when it came to pipeline misconfigurations, their study said these affect 89% of organizations, and 85% show least-privilege violations at the same time, which could actually enable attackers to gain broader system access.

That's quite a high number, if it is indeed correct.

[Darren] (19:20 - 20:14)

So starting with the exposed secrets: yeah, we have tools in place to check for things like credentials in source code, but very few people actually think about them in logs. And obviously tickets and artifacts too; if you've built a Docker image and it has a credential in it, that's problematic, and the tooling to pick those up is less sophisticated.

But I've actually been playing with NVIDIA's Morpheus security tooling recently, and that might provide an interesting way to do this kind of log analytics at the scale we now need. So maybe we'll revisit that at a later date, but I can definitely see 89% of organizations having pipeline misconfigurations. We see that a lot here at Eficode: when we're brought in to help, there are some things that are not as they should be.
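To make the exposed-secrets point concrete, here is a minimal sketch of what scanning logs for leaked credentials can look like. The patterns and the directory name are illustrative assumptions; real secret scanners ship far larger rule sets:

```python
import re
from pathlib import Path

# Minimal sketch: flag log lines that look like they contain secrets.
# The patterns below are a tiny illustrative subset, not a complete rule set.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic credential assignment": re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),
}

def scan_logs(log_dir: str) -> None:
    # The "./logs" directory and the *.log glob are assumptions for the example.
    for path in Path(log_dir).rglob("*.log"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}")

if __name__ == "__main__":
    scan_logs("./logs")
```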

[Pinja] (20:15 - 20:37)

Related to that, I have seen quite a number of organizations where permissions actually present a risk. And this was also a finding of the Legit Security study: there may be inactive accounts that still have active permissions, or externals who have access to pipelines when they should not. These should be very basic things, but we see them happening quite often.
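As a hedged illustration of how simple that kind of audit can start out, here is a minimal sketch that flags accounts which have been inactive but still hold permissions. The record format and the 90-day threshold are assumptions made up for the example:

```python
from datetime import datetime, timedelta, timezone

# Minimal sketch: flag accounts with no recent login that still hold permissions.
# The account records and the 90-day threshold are illustrative assumptions.
INACTIVITY_THRESHOLD = timedelta(days=90)

accounts = [
    {"user": "alice", "last_login": "2025-01-15", "permissions": ["pipeline:write"]},
    {"user": "old-contractor", "last_login": "2024-06-01", "permissions": ["pipeline:admin"]},
]

now = datetime.now(timezone.utc)
for account in accounts:
    last_login = datetime.fromisoformat(account["last_login"]).replace(tzinfo=timezone.utc)
    if account["permissions"] and now - last_login > INACTIVITY_THRESHOLD:
        print(f"{account['user']}: inactive since {account['last_login']}, "
              f"still holds {account['permissions']}")
```

In a real environment, the account list would come from your identity provider or CI platform rather than a hard-coded list.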

[Darren] (20:38 - 21:12)

Yeah, and I feel like there's room for more discussion and information gathering here. If, on the off chance, anyone from Legit Security is listening, feel free to contact us and talk about how your data was gathered, because these results, if accurate, are very interesting. But speaking of that, let's talk about something a little closer to home.

Over the last few months, we've seen a number of incidents in the Baltic Sea involving data cables being cut between various Nordic and Baltic countries.

[Pinja] (21:12 - 21:39)

One of the incidents was big in the news in Finland: just after Christmas, a vessel cut the cable between Finland and Estonia, and the investigation is still ongoing. And then, just yesterday or the day before, we got the news from Sweden that this time the cable between Sweden and Latvia was cut.

[Darren] (21:39 - 22:13)

This is not anything new. This has been happening for a while now. I think we had one in November of 2024, and it happened in 2023 as well.

And it's kind of interesting to see how the area is being consistently disrupted. It's maybe verging a bit too close to politics to discuss the reasons why or how, but it does seem like the Baltic Sea is unfortunately plagued by some kind of cable issue.

[Pinja] (22:13 - 22:36)

Yeah, the fiber-optic cables are on the sea floor, and NATO has now taken an interest in ensuring that no further cable breaks happen in the upcoming months. Because, as we said, this is now the second one within a year, and in the past couple of years we've had multiple incidents like this.

[Darren] (22:36 - 23:02)

Yeah, it's an evolving situation. I guess we'll all see how it goes. But I think we have one more story to talk about.

It's a story that's on everyone's lips, seemingly out of nowhere: the DeepSeek AI model, which has reportedly outperformed Llama and GPT-4o. And we actually had some more interesting news about it over the past couple of days.

[Pinja] (23:03 - 23:21)

And the news around DeepSeek keeps evolving all the time. Darren and I prepared for this episode only a day ago, and since then we've had multiple new articles on what's been happening. It has now also impacted the global stock market.

[Darren] (23:21 - 23:52)

Yep. NVIDIA is down, what, 20%, which is a huge loss. It honestly seems a bit panicked.

I actually wonder if we should talk about DeepSeek and AI models in general in more depth on another episode. But the general history of it is that it's an AI model built with a fraction of the price and processing power of Llama or GPT that is now supposedly outperforming them.

[Pinja] (23:52 - 24:21)

So the claim is that only $6 million was used to build this model, and they were using reduced-capability chips from NVIDIA, as the US had imposed restrictions on the export of the more powerful chips. So that is also noteworthy.

Then again, we do not have that much information yet about how DeepSeek works, and it would be nice to see the results replicated. How does it actually do when we put these models head to head?

[Darren] (24:21 - 25:06)

Yep. I feel like there's a lot of hype around it, especially as it became the most downloaded free app in the US, which is staggering to me, because word of mouth doesn't travel this quickly. I actually have a suspicion that emulated Android devices may have been used to inflate the numbers, to be honest, because this kind of explosive growth is unprecedented, and your average person will still be just fine with ChatGPT.

So the idea of massive DeepSeek downloads suggests that either everyone became a data scientist overnight or something not entirely by the book is occurring. Maybe I'm wrong about that. We'll find out.

[Pinja] (25:06 - 25:53)

But this is one of the topics we need to keep a close eye on, as it has gained a lot of traction around the globe and around the industry. Many people on LinkedIn are jumping on the bandwagon, and many people are very skeptical about it.

People are divided: how does it actually work? Is it as good as they say? But I think we can agree on the fact that this is a step towards a democratization of AI.

Countries are now building their own versions. And there was a discussion I saw earlier on LinkedIn where people were asking whether the future of AI, of gen AI, is free and open source, or whether we are going to start monetizing its use more.

So there are a couple of different directions we can look at at the moment, but nothing is certain at this point in time.

[Darren] (25:53 - 26:41)

Yeah, the idea is kind of interesting, because as we were saying earlier, it's always been about the big players having the huge advantage in AI, because they have the processing power. Having similarly performing models trained on lower-end hardware, even if they don't fully outperform, is going to be a game changer. It could even be a showstopper for the Stargate project, because who needs to invest $500 billion if you can do more with less?

And it actually kind of mirrors something that happened in the car industry, where they stopped producing the large V10 and V12 engines, and even the V8s, I think, and started basically doing more with less. So I think we may be entering that era of AI.

[Pinja] (26:42 - 27:15)

Some people already call this the Sputnik moment of the AI era, but I think we shall see. It's been said for the past couple of years already that more development will happen in the next five years than in the last 40 combined, and I think this is a very good example of that. Just look at how many articles we have seen on DeepSeek in the last 24 hours.

People are very happy to start investigating it and building RAG on top of it, so we will get more information about it in the upcoming weeks, I would say.

[Darren] (27:15 - 27:28)

And I think that wraps it up for our news. Feel free to join us next month, when we'll have a similar roundup; otherwise, there will be another episode next week. Thank you for joining us again in the sauna, and we hope to see you next time.

[Pinja] (27:28 - 27:29)

Thank you all.

[Darren] (27:33 - 27:36)

We'll now tell you a little bit about who we are.

[Pinja] (27:36 - 27:40)

I'm Pinja Kujala. I specialize in Agile and portfolio management topics at Eficode.

[Darren] (27:41 - 27:43)

I'm Darren Richardson, Security Consultant at Eficode.

[Pinja] (27:44 - 27:46)

Thanks for tuning in. We'll catch you next time.

[Darren] (27:46 - 27:52)

And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.
