In this episode, Marc and Darren chat with Charity Majors, co-founder and CTO of Honeycomb. They explore Observability 2.0, how Charity balances a public role in tech, her thoughts on leadership, and the importance of building diverse, inclusive teams. Join in the conversation at The DEVOPS Conference in Copenhagen and Stockholm and experience a fantastic group of speakers.

[Charity]

(0:05) The principle that teams internalize is this: metrics are the bridge to our past, and logs are the bridge to the future. Wide. Structured. Logs.

[Marc] (0:13 - 0:28)

Welcome to the DevOps Sauna, the podcast where we dive deep into the world where software, security and platform engineering converge with the principles of DevOps.

[Darren] (0:29 - 0:32)

Enjoy some steam and let's forge some new ideas together.

[Marc] (0:42 - 0:56)

We're back in the sauna and I am just absolutely thrilled and humbled. And it's been a long time coming to get a very special guest in today, Charity Majors is with us. Hello, Charity.

[Charity] (0:56 - 0:57)

Hi!

[Marc] (0:57 - 1:13)

It's so cool to have you. We've been trying to get you for years. You're going to be at the DevOps Conference in Copenhagen on November the 5th, and in Stockholm on November the 7th.

And along with us will be my dear colleague, Darren. Hello, Darren. Nice to see you today.

[Darren] (1:13 - 1:14)

Good evening, Marc.

[Marc] (1:14 - 1:21)

All right. And you got it right. It is evening at our time of recording and very early morning in California, I believe, Charity.

[Charity] (1:21 - 1:22)

Yep, it is.

[Marc] (1:23 - 1:37)

All right. You're giving a talk. You're known for many, many things, but you're going to be talking about versioning observability at the conference and 1.0, 2.0, where are we and what does this kind of mean to our average listener right now?

[Charity] (1:37 - 4:03)

Yeah, it's a great question. The definition of observability has been ping ponging all over the place for the past eight years or so. And just in the past year, I've really started leaning on the principle of semantic versioning to try and make sense of it, which is anytime you do a major version jump, it's because you have a backwards incompatible breaking change.

And I feel like the change that is captured by 1.0 versus 2.0 is the number of sources of truth, the number of data formats. So probably the most famous thing is "observability has three pillars": metrics, logs, and traces. And in reality, most people are paying for more than that, right?

You've got your APM, you've got your RUM tools, you've got your dashboards, you've got your unstructured logs, you've got your tracing tool, you've got your profiling tool, you've got all these different tools and each one of them is storing a locally optimized version of every request that enters your system, right? It's optimized for a different team or a different perspective or a different efficiency or whatever. And observability 2.0 says you've got one source of truth. Those are arbitrarily wide structured log events, which have unique request IDs. They also have trace IDs, they have span IDs. So you can visualize them over time as a trace.

You can also slice and dice in real time. You can zoom in, you can zoom out, you can derive metrics from them. You can derive your SLOs from them.

But all roads lead back to Rome, right? It all leads back to this one source of truth. So you're paying to store the data once for every time that it enters your system.

But then because you're storing it in these really rich contextual blobs, you can explore, you can treat your data like data instead of having to remember whether this is stored as a counter or a gauge, a structured or unstructured log, or a trace. That way madness lies. That generation of tools is very mature, it's very well-featured.

They've built some sort of bridging things between this tool and the next, but ultimately I think you're on an unsustainable path there. It's just too expensive and too sprawling. And the only thing that really connects them is you, the engineer, sitting in the middle of all of them, kind of trying to visually correlate this graph with that graph, or copy paste an ID from this source to that source.

It's madness.
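The "arbitrarily wide structured log event" Charity describes can be sketched as a single rich blob per request. This is a hypothetical illustration; the field names and helper function are invented for the example, not a Honeycomb or OpenTelemetry schema:

```python
import json
import time
import uuid

def build_wide_event(service: str, endpoint: str) -> dict:
    """One arbitrarily wide structured log event for a single request.
    All field names here are illustrative, not a standard schema."""
    return {
        # Correlation IDs: the same record can be read as a log line
        # or visualized over time as a trace span.
        "request_id": str(uuid.uuid4()),
        "trace_id": uuid.uuid4().hex,
        "span_id": uuid.uuid4().hex[:16],
        "timestamp": time.time(),
        # App context
        "service": service,
        "endpoint": endpoint,
        "duration_ms": 42.7,
        "status_code": 200,
        # Business / user context smooshed into the same event
        "user_id": "user-123",
        "plan": "enterprise",
        "feature_flags": ["new-checkout"],
        # Infra context
        "region": "eu-north-1",
        "build_sha": "abc1234",
    }

event = build_wide_event("checkout", "POST /cart")
print(json.dumps(event, indent=2))
```

The point is that app, business, and infrastructure context all land in one event, so there is one source of truth to pay for and to query.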

[Darren] (4:04 - 4:33)

It reminds me of something. I've been working with data scientists quite a bit recently, and we've basically all come to the common understanding that the extraction and transformation process is the most heavyweight part of any data science or machine learning pipeline. And it seems like the idea of observability 2.0 is to kind of, well, maybe not get rid of that, but it seems like it would at least go a long way to mitigating it.

[Charity] (4:33 - 5:38)

Yes, exactly. Instead of having to do the reformatting and post-processing and pre-processing, and using humans as the people who are bridging all the gaps, and honestly using guesses and sort of our gut intuition a lot. Instead of that, you capture it in this format where, instead of having to make all of these decisions at write time, at ingest time, you can postpone those decisions and make them at read time, at query time.

And I think that's a great point. Like they, this is not new computer science. They've had nice things on the business side for decades, right?

You can't run your business if you have data where you have to decide in advance, what are my cohorts going to be comprised of, right? You understand that you're not going to know that until you can slice and dice and, like, explore and be creative with it. And it's very "the cobbler's children have no shoes" to me that as software engineers, we're still using tools that were basically designed 30 years ago to understand our telemetry.
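The "postpone decisions to read time" idea can be illustrated with a toy query over stored wide events: summaries like error rate or median latency are derived from the raw events when you ask the question, rather than pre-aggregated at ingest. Everything here (the sample events, field names, and helper functions) is invented for illustration:

```python
import statistics

# Hypothetical stored wide events; in a real system these would be
# queried from the single source of truth, not hard-coded.
events = [
    {"endpoint": "/checkout", "duration_ms": 120.0, "status_code": 200},
    {"endpoint": "/checkout", "duration_ms": 95.0,  "status_code": 200},
    {"endpoint": "/checkout", "duration_ms": 480.0, "status_code": 500},
    {"endpoint": "/cart",     "duration_ms": 30.0,  "status_code": 200},
]

def error_rate(events, endpoint):
    """Derive an error-rate 'metric' at query time instead of ingest time."""
    hits = [e for e in events if e["endpoint"] == endpoint]
    errors = [e for e in hits if e["status_code"] >= 500]
    return len(errors) / len(hits)

def latency_median(events, endpoint):
    """Summary statistics are computed from raw events, not pre-aggregated."""
    return statistics.median(
        e["duration_ms"] for e in events if e["endpoint"] == endpoint
    )

print(error_rate(events, "/checkout"))     # 1 error out of 3 requests
print(latency_median(events, "/checkout"))
```

Because the raw events are kept, any new question (a different cohort, a different percentile, a different grouping) is just a new query, with no advance schema decision required.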

[Marc] (5:38 - 6:05)

It's kind of like the people were building all of these data lakes and there was data warehouses when I was coming up and that made a lot of sense. It was structured upfront like you described, but you could still go back and you could kind of explore things because you had all of those changes in there. And then people started building these data lakes and they're like, we don't know what we're going to need this for, but we're going to dump it into the lake.

And someday, oh, AI came along and now all of a sudden we can look all these different ways at the things.

[Charity] (6:05 - 8:14)

Yeah. Very much, very much like that. One of the other real superpowers of observability 2.0 is that I feel like one of my rules of thumb in life is that context is what makes data valuable. The more context you have, the more valuable that data can be. And you don't always know what is going to someday correlate or jump out at you or turn out to be the magic key, but the more context that you can pack around those structured log events as you're collecting them, the more powerful it will make you someday. And one of the magical things about observability 2.0 is I think we're no longer siloing away. Here's your app data for the app developers. Here's your system data for your DevOps folks. Here's your business data for your product managers, right?

It turns out as an engineer, you need all of those and you need them smooshed together. You know, I don't know how many software engineers I know who are relying on their data warehouse. The problem is that stuff's behind by a day or two, but that's the only way they can actually understand and ask these really complex, interesting questions that they need.

Another, and I know we've got a lot of topics to get to, so just one more thing on the 1.0 versus 2.0 trip, which is more sociotechnical in nature, which is I feel like observability 1.0 is very much about operating your code. It's about bugs, downtime, and outages, and crashes, and errors, and all of these things around when things are not working well. And observability 2.0 is much, I mean, yes, it includes those things too, but it's much more about not just how you operate your code, it's how you develop, it's how you understand, is this thing that I'm building, is it doing what I expect it to do?

Are users responding to it? How are they using it? Are they using it the way I expect it?

Are they using it in ways that surprise me and that I want to lean into? It's like observability 2.0 is a substrate that supports these really fast feedback loops that are what allow teams to move swiftly and with confidence. Not only when something breaks, but also you need to know what good looks like.

You should be looking at your code every day so that you know not just what things are like when it's breaking, but what things are like when it's interesting.

[Marc] (8:14 - 8:37)

One of the things that we were talking about in the office before when I was asking people what would they like to talk with you about, and it was this 1.0, 2.0 thing. It's like so many customers with their legacies and the situation that they're in being a victim of their own success, we feel like haven't even gotten to 1.0 yet. Do they need to go through this?

[Charity] (8:38 - 10:27)

No. No. This is something that I feel like the way people are doing it now is the hard way.

It is so hard. You've got to educate engineers on so many different things: here are the different types of data, here are the different ways you emit them, and you can't do that because it costs too much. I know of entire observability engineering teams that are spending an outright majority of their time doing almost nothing but managing the cardinality balance: the balance between enough detail to understand their code and not so much that they go bankrupt.

It's why for most companies, the second biggest bill after their cloud provider bill is their observability bill. It's mostly because the 1.0 generation is not designed for or capable of handling high cardinality data, which just means data with lots of detail. The more detail your data has, the better it is, the more useful it is, the more context it has.

I wish that people would understand that you can leapfrog that stuff. Obviously, any time that you're trying to do something different, it takes a certain amount of cognitive overhead. If you put observability 1.0 tools in front of a new grad versus 2.0 tools in front of a new grad, the new grad can almost immediately figure out the 2.0 stuff, because it speaks to you in the language of your software. It speaks to you in the language of variables and functions and endpoints and APIs, versus the layers of expertise and translation that you have to work through to get a grasp on your 1.0 tooling, which run deep. I've been doing this for almost nine years, and I'm still learning stuff every day, well, every month. I did a white paper write-up on costs and metrics, and I learned stuff that was blowing my mind.

I'm just like, how is this? This stuff is so hard. It is so hard, and it is so expensive, and it is so dense, and it is so challenging, and it just really doesn't have to be that hard.

[Darren] (10:27 - 10:47)

It's kind of an interesting idea. It sounds like it's, I mean, it's obviously better to implement observability 2.0. The question is, is it getting bypassed because 1.0 is easier? Is it cheaper?

Or is it something that satisfies a minimum viable product for these people?

[Charity] (10:47 - 11:23)

It's definitely not easier. It's definitely not cheaper. I think for folks, I think for a lot of folks, it may be satisfying as a minimum viable product, but I think the bigger reason is just that it's familiar.

It's what everybody grew up using. It's what you can Google: type in any three technical terms and you get a list from half a dozen different providers, like, okay, copy-paste this, and you immediately get graphs. They aren't necessarily useful graphs.

They don't really give you a lot, but they're graphs, and you feel like you've done something. It's just like, boom, boom, boom. You really don't have to think.

You can just like, the road is well paved. That's how I'd put it.

[Marc] (11:23 - 11:33)

So this is kind of what you're talking about: the road is well paved. Are there some steps, any kind of more practical things, for selling this into an organization?

[Charity] (11:33 - 14:02)

Yeah. You know, so like, hilariously, I have a white paper that I've been writing that I'm going to edit tomorrow and hopefully get up on the Honeycomb site soon. I feel like the principle that teams internalize is this: metrics are the bridge to our past, and logs are the bridge to the future.

Wide, structured logs. A trace is just a log. A span is just a log with a particular ID, right?

And so the more we can start shifting the calories over from, you know, investing in a metric, which, like, metrics are not terrible. Metrics are the best. They are the tool that you should use for summarizing vast quantities of data, right?

They are the tool that you should use for a lot of infrastructure stuff. Infrastructure is what I define as code that you have to run in order to get to the code that you want to run, right? But right now, the overwhelming majority of the tools that we use are based on metrics.

RUM is based on metrics. APM is based on metrics. You know, most of your dashboards are based on metrics.

And you can't have context when it comes to metrics because, you know, it's just a number with some tags. And so, like, if we just, you know, stop investing in the metrics and start investing in these wide, structured logs. Like, read up on canonical logs.

Like, what you want to do with logs is emit fewer of them and wider, right? So, like, the canonical log principle is basically you emit one wide, structured log for every unit of work, whether that's a request and a hop or, you know, you've got a long running process or whatever, and it's very wide, right? With Honeycomb, we actually even have infinite free custom metrics because we don't charge for how wide your events are.

You can put in hundreds, thousands, right, to emit fewer, wider logs. And just like, you know, if you start down that path, especially if you start adopting OpenTelemetry, the beauty of OpenTelemetry is that it liberates you from being tied to any one vendor. If you get your shit into OpenTelemetry, then your vendor should have to compete for your business based on being awesome instead of you being stuck there.

So those are my tips, basically. Make sure you structure your logs, fewer, wider logs, invest in OpenTelemetry. And like, we're in a weird spot right now.

It's very chicken and egg, where OTel is able to do awesome things if you've already got your data into this format, but, like, people aren't building the awesome things unless there's data in this format. And so, like, I think what you're going to see over the next few years is the road to observability 2.0 being paved rapidly. It's going to get easier and easier.

But these steps you take to get your data more into this wide structured log format will start paying off immediately.
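The canonical log principle Charity mentions (one wide, structured log per unit of work, emitted at the end) can be sketched like this. The `CanonicalLog` class is a hypothetical helper invented for the example, not a real library API:

```python
import json
import time

class CanonicalLog:
    """Sketch of the canonical-log pattern: accumulate context during one
    unit of work, then emit a single wide structured line at the end.
    Class and field names are illustrative, not a library API."""

    def __init__(self, **initial):
        self.fields = dict(initial)
        self.start = time.monotonic()

    def add(self, **fields):
        # Widen the event as the request progresses.
        self.fields.update(fields)

    def emit(self):
        # One line per unit of work, however much happened inside it.
        self.fields["duration_ms"] = (time.monotonic() - self.start) * 1000
        line = json.dumps(self.fields)
        print(line)
        return line

# Usage: one request handler, one emitted event.
log = CanonicalLog(endpoint="GET /search", request_id="req-1")
log.add(user_id="user-42", cache_hit=False)
log.add(results=17, status_code=200)
log.emit()
```

Instead of sprinkling dozens of narrow log lines through a handler, each step just widens the one event, which is what makes "fewer, wider logs" practical.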

[Darren] (14:02 - 14:48)

And it feels like there's kind of a secondary advantage here that I'm pretty sure Marc is just waiting for me to talk about, because as he knows, I'm a security person. So the idea of having these kind of wide, useful, data-rich logs just kind of sounds like a dream come true for me for this kind of process tracing. And especially with, you know, the EU's NIS2 directive coming in, and with the US, I forget what their cybersecurity directive was called, but these things of not having to dig through, like, 56 nonsense log entries when I could just have one enriched-metadata idea of this is what happened.

That's going to save me a lot of time. So I think that's another area we could lead into.

[Charity] (14:49 - 14:59)

100%. The uses for this are just, like, innumerable. Its potential to accelerate teams' ability to spend time creating value is immense.

[Darren] (14:59 - 15:26)

I do think it does come back to the whole cobbler's children thing, too, because we are already seeing this kind of thing in business. I mean, Facebook, Meta's entire business model is built around collecting data and metadata and building it together, and it's like, this is how people are making money. So it's like, the tech's there.

It just doesn't seem to have got into the places where it would have the most impact.

[Charity] (15:27 - 16:10)

We're so good at doing things the hard way as engineers. You know? And also, I feel like, you know, I mean, if you aren't an observability company, this is not your number one job, right?

Which is completely understandable, which means that you don't want to be investing more time into this than you absolutely have to, which I get, which is why it's really on folks like Honeycomb to try and make that road wide and paved and make it easy for you, which is why I'm so encouraged by the advent of OpenTelemetry. I don't know if y'all have noticed, but it is now the largest project of the CNCF, larger than Kubernetes. It's actually taking off.

And this is becoming the standard. This is huge. This is huge for all of us.

[Marc] (16:11 - 16:44)

I'd like to go, I don't know how Darren's going to feel about this, but I'd like to go a little bit into the cultural and people side of things. You do a lot of public work and a lot of talks, and they really resonate. And I admire so much people who are practitioners, who are passionate, who can go out and actually talk about it and share it and, you know, do the conference scene, do white papers, and all the kinds of work that you do.

So is there a motivation for you? Is there, you know, a way of working or anything? Would you like to share how you do this and how you got into having such a public role?

[Charity] (16:44 - 18:10)

Uh, goodness, that's a great question. I don't know if I've ever really thought about it. I mean, when Honeycomb got started, when Christine and I started doing this, like we had no idea what we were doing and we were very much working out in public, just sort of showing our work as we worked on the concepts of observability and as we tried to figure out what we were building and why we were building it and who we were building it for.

And so I was doing a lot of that work on Twitter, just kind of talking about it and seeing who reacted to it and what they reacted to and what was interesting. And it was very valuable. I mean, also we didn't have any marketing budget, and our only marketing plan for the first three years or so was: anybody that invites Charity to go speak, say yes.

I will say it's been a, it's been a journey for me. Like I was diagnosed with ADHD four years ago in 2020 and now I have medication, which is amazing. And I've always, I've never been one of those people who can be like at 9am tomorrow, I will sit down and I will write about it.

Like I have to be fueled by something, right? I'll be fueled by, like, anger or excitement or something, you know, and then it comes out in a burst, which is not sustainable and not predictable. And it can be really hard to make that work with business rhythms and the things that, you know, people need from me.

So it's really a work in progress. I am very much a work in progress. I would not say I've gotten that nailed at all, but it's better with meds.

[Darren] (18:11 - 18:20)

It does sound like you benefit from the fail fast and frequently mindset, like putting everything out there and seeing what sticks.

[Charity] (18:20 - 18:28)

Yes. I am not embarrassed by being wrong. I am happy to correct myself when I'm wrong.

I would rather, I would rather figure it out fast, right?

[Marc] (18:28 - 18:51)

I think this is so amazing. And like I'm similar where I'm very eager to get out and test things and talk to people and try to raise the level of ideas, which means there has to be an idea and it doesn't have to be right at the beginning. But how does your imposter syndrome work through that process?

Is it something that you, you face on a daily basis or even back then while you're trying to figure it out? Or is it different for you?

[Charity] (18:51 - 19:12)

I don't know. I don't know. I don't really think I have imposter syndrome.

I feel like nobody's very good at what they do and we figure it out anyway. So, like, I don't feel like I'm any different than anybody else. I don't know.

Like I said, I'm very comfortable being wrong and I guess it just doesn't bother me. I mean, I know that I'm not equipped to be a CTO or to start a company, but like who is, you know, so.

[Marc] (19:13 - 19:14)

That's an interesting trigger.

[Charity] (19:14 - 19:55)

I have an interesting relationship with authority and hierarchy in general. I do not see people who seem to be better than me. And in fact, it's kind of the opposite.

If somebody's above me in the org chart, I take that as license to beat up on them, perhaps too much, perhaps a little bit unfairly. Which has been, you know, I mean, people come in all shapes and sizes, right? And this is just a trait that, as someone who's now towards the top of the org chart, I've had to really learn and internalize is not how most people function. I need to be sensitive to that.

And I have to think about how my words are coming across and how they're impacting people so that I don't squash them. But that really was a learning, kind of a painful learning process, for me to internalize.

[Marc] (19:56 - 20:02)

You talk a lot about career paths, like the engineer-manager pendulum. Are you familiar with the Peter Principle?

[Charity] (20:02 - 20:02)

Yes.

[Marc] (20:03 - 20:14)

Yeah, I'm living proof of the Peter Principle. <laughs> So I've been in this, I've been in the C-suite and all of that. And I had to come down a couple of notches to be where I really belong.

You know, what's your, what's your views there?

[Charity] (20:15 - 21:06)

I admire people who intentionally go down. You know, I feel like, honestly, I feel like one of the best ways to get good at any job is to do the job above it for a little bit. And then you really figure out what is required of the position that you were just in.

I feel like anybody who learns how to get through school and college or whatever has learned to be motivated by rising in the ranks. And I feel like people who learn to be satisfied and get joy and fulfilment out of other paths are just more interesting to me across the board. You know, everyone's like, yeah, you got promoted or yeah, you became a manager.

You know, your mom's like, yeah. But people who are like, you know what, I tried it. That's not for me.

Or I'm really interested in this. Like there's a certain amount of ego death that goes into that or a certain amount of just like grappling with what really motivates you. And I find that more admirable.

[Marc] (21:07 - 21:28)

Cool. I think this motivation thing is something like it's finding things that give us energy, isn't it? It's like, it's, Ikigai is always about, you know, what you love and what people will pay you for and all of that.

But there's a lot of things I love that don't give me energy. And, you know, but it's really about that. I can see it from you when you talk about the thing so excitedly about what you're into.

[Charity] (21:29 - 21:33)

Yeah. Can't fake it. It clicks hard, doesn't it?

[Marc] (21:34 - 21:41)

It sure does. So what is this, like, engineer manager pendulum though? They're going in back and forth.

[Charity] (21:41 - 23:43)

That's, you know, I wrote a blog post back in 2017. And to this day, it remains my most popular, highest-ranked, whatever, blog post. And at the time, I think it was a bit of a different world.

But at the time I wrote it actually kind of as a love letter to a friend of mine who was a director of engineering at Slack at the time. And he was not happy. And he was agonizing.

He really wanted to go back to being an IC [individual contributor]. And he was like, I've spent years getting my career in this place and like earning the respect of my colleagues. I love having a seat at the table.

I love having a say in decisions. But I don't love being a manager. And it was just so clear to me that if he went back to being an IC, he would become more powerful than ever before, right?

Because he would take all these things that he would learn and he would be loving his life and doing what he loved. And he did. And that turned out to be true.

It's interesting, though, for many years, I feel like my writing really had this very anti-managerial bent. Because I felt like the brightest minds of my generation were all kind of reluctantly going into management because it was the only way that they knew how to rise, to grow, to have a career path, or just to have a seat at the table, to have a say in the decisions that were getting made. And I do feel like as an industry, we've come a ways since then.

I feel like the rise of the staff plus engineer movement has been huge. Formalizing that there's a parallel leadership track for ICs has been huge. I still do think, though, that there is unique value.

I actually think that the best engineering managers I've worked with have never been more than five-ish years away from being hands-on keyboard. And I think that there are particular strengths to technologists who repeatedly go back and forth between building and managing and building and managing. So that's a career path that I just like to advocate for as a goal unto itself.

I think that if you become an engineering manager and stay there for 20 years, that's not a path towards strength.

[Marc] (23:44 - 24:09)

So let's come around now to AI and the impact on engineers, impact on juniors, mediors. I love that word. Seniors, principals, fellows, and whatnot.

So to me, the way that we've been looking at this is that you kind of have to know something for AI to be useful for you. But I know you advocate AI towards junior engineers.

[Charity] (24:10 - 26:26)

Well, I think it's a terrific learning tool. We've got junior engineers at Honeycomb, and they spend all day in conversation with AI, just asking it questions. It's not that they don't ask questions of senior engineers, but it actually helps them figure out how to formulate a really good question to ask a senior engineer.

It gets them through the really basic stuff so that it's very high quality time that they spend pairing with more senior engineers. I feel like, is there a bright future for no code? Yes.

Is there a bright future for low code? Yes. Is there a bright future for medium code?

Yes. Is there a bright future for Excel spreadsheets? Yes.

Code in all of its various forms is taking over the world. And the stuff that's traditionally been very software engineering intensive, is there a bright future for people who need to know infrastructure and kernels and databases? Yes.

All of it. But where I really feel a lot of passion is just that I feel like hiring managers have historically tried to always hire the most senior people that they can get for the money. And increasingly, jobs for juniors have been drying up more than ever.

I think this is fantastically short-sighted. I think that's really stupid. I think that strong teams are not made up of just like a bunch of staff plus engineers.

That's a bunch of folks who are... Because you really want a range of levels because you want every person on the team to be doing things that are new to them and interesting and pushing their boundaries and helping them grow, bringing their whole curiosity and emotional engagement to the table. And if you've got a bunch of folks who are like, implement a web form for the 200th time, okay.

That's not curiosity. That's not excitement. That's not engagement.

Not to mention the fact that this is an apprenticeship industry, right? And if we're not constantly bringing new blood in, we're cannibalizing our own future. And I feel like bringing in junior engineers from day one, it changes the culture and it makes it one where asking questions is normalized, where curiosity, not having the full...

We're interrogating ourselves and making sure that we're building things as simply as we should and not over-engineering. All of these things that get really hard when you're very top-heavy become a lot more natural.

[Darren] (26:27 - 27:10)

Yeah. I always find it kind of, I say disappointing, but not surprising when we stumble across a company who's basically propped up by two entrenched senior engineers and a moat and castle walls around them to stop juniors learning anything. Oh, God.

We see it in security more than ever. The average age of a security professional is 40. And it's like, where is all the young talent that we're somehow missing out on?

They're not getting hired because people think everyone in security has to be an uber-genius level person. And it's like, there's no room for learning, no room for training, no room for growth. We just expect these magical engineers to appear out of nowhere.

[Charity] (27:10 - 27:21)

It is such a fragile culture, right? It really centers people's egos to an extent that I find really disorienting and toxic.

[Marc] (27:21 - 27:34)

I was hoping for something positive on the other side of this, which is how do you see diversity and inclusion going these days? Because we see a lot of good movement as well. Is this something you see improving?

[Charity] (27:34 - 28:36)

Absolutely. I think this is under-celebrated. And I understand why people get a little anxious about saying good things about what's happening.

But it's demotivating to feel like you never make any progress. And the progress I've seen in my lifetime has been nothing short of phenomenal, right? Now, the future is here, but it's uneven, right?

There are still plenty of places that are inaccessible, that are all-dude, they're all straight white dudes, they're all dudes who look alike or whatever. But there are so many companies out there that are building diverse teams. And I think that you're seeing that they're becoming more successful.

I think it is a strength when you learn to internalize a certain amount of humility, a certain amount of respect for each other, a commitment to making it a place where people can be human together and care a lot and work together out of motivation and curiosity and creativity, not fear. And just sort of like you were talking about with the castle and the moats and the walls: you're either inside the fortress or you're outside the fortress, right? A fortress is not where I want to go to work.

That's not fun.

[Marc] (28:37 - 29:16)

Oh my gosh. We have gone around and around. And I've had so much fun working with you.

I would just love to come and pledge as a junior developer at Honeycomb. So thanks so much, Charity, for coming with us today. And we can't wait to see you in the DevOps Conferences.

I'll see you in there. I know. It's literally, it's a month and four days from the time of recording.

So the DevOps Conference in Copenhagen on the 5th of November and the DevOps Conference in Stockholm on the 7th of November. I think tickets are still on sale. At least one of them is not sold out yet.

Thanks so much for joining us, Charity. Can't wait to meet you eye to eye.

[Charity] (29:16 - 29:17)

Thanks for having me.

[Marc] (29:17 - 29:34)

See you in a month. And thank you once again, Darren. It's always a pleasure.

Hey, we'll see you next time in the sauna. Goodbye. We'll now give our guest an opportunity to introduce herself and tell you a little bit about who we are.

[Charity] (29:34 - 29:45)

Hi, my name is Charity Majors. I'm a co-founder and CTO of Honeycomb.io and the co-author of Database Reliability Engineering and Observability Engineering from O'Reilly.

[Marc] (29:45 - 29:53)

Hi, I'm Marc Dillon, lead consultant at Eficode in the advisory and coaching team, and I specialize in enterprise transformations.

[Darren] (29:53 - 30:00)

Hey, I'm Darren Richardson, security architect at Eficode, and I work to ensure the security of our managed services offerings.

[Marc] (30:00 - 30:07)

If you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.