Marc and Darren are joined by Dan Khurram to discuss a white paper (born from a workshop involving various experts) about the impacts of AI on businesses. They cover the paper’s creation, AI scenarios, and the "AI for all" scenario, including ethical and governance concerns. Join in the conversation at The DEVOPS Conference in Copenhagen and Stockholm and experience a fantastic group of speakers.

[Dan] (0:05 - 0:15)

For this scenario, is this impact that you've come up with positive? What effort do you think you would need in your organization to make it happen?

[Marc] (0:19 - 0:29)

Welcome to the DevOps Sauna, the podcast where we dive deep into the world where software security and platform engineering converge with the principles of DevOps.

[Darren] (0:29 - 0:32)

Enjoy some steam and let's forge some new ideas together.

[Marc] (0:41 - 1:05)

We are back in the sauna. Good afternoon, Darren. Afternoon, Marc.

It certainly is, on a beautiful post-summer day in Finland. I'm really happy to have one of my teammates from the advisory and coaching team at Eficode, Dan Khurram, in the sauna. Would you like some löyly (steam) today, Dan?

[Dan] (1:06 - 1:07)

I'd love some.

[Dan]

It's pretty exclusive around here, so thanks for inviting me in.

[Marc] (1:07 - 1:39)

That's what I hear. We invited Dan in today to talk about a workshop we held a little while ago, the output of that workshop, and what an exciting time it was.

We hosted a workshop with a bunch of really smart technology people looking at how product development organizations will survive or thrive in the new world of AI. What was your role in the workshop, Dan?

[Dan] (1:40 - 1:55)

My role was part creator, part facilitator, and part outcome generator: making sure we actually took the learnings and spread them as far and wide as we could.

[Marc] (1:55 - 2:00)

All right. So how many people joined and what kind of people were there?

[Dan] (2:01 - 2:34)

Yeah, so if you exclude people like you and me, Marc, and our team, who were just there to keep the lights on and make the show run, we had a bunch of really smart people from around the Finnish tech scene, let's say, with titles like head of product, product director, CTO, CPO. There were roughly 13 people for the whole day in the workshop, and essentially I can only classify them as the smartest people we could find to talk about this topic. So we got as many as we could in and let them loose.

[Darren] (2:34 - 2:47)

What kind of coverage are we talking about? So are we talking about product organizations, but what sorts of products are we discussing? What kind of industries?

You say they're smart people, but what are their areas of focus?

[Dan] (2:47 - 3:44)

Yeah, so it was a really wide variety of industries, products, and product types. We had people who were, let's say, at the cutting edge of actual AI development. Their products are AI solutions.

But then we had people who were CTOs in, let's say, more traditional industries: industrial machines, security, cybersecurity. This is Finland, this is Helsinki. There were a few SaaS companies thrown into the mix, scientific instruments, desktop software, marketing software.

So a super wide variety, which was great, because it gave us a really well-rounded discussion of what this looks like from each angle. And if someone was from a less technical background, they could come up with some real organizational gems about how it would actually work in a transforming organization. So, yeah, we had a great bunch of people.

[Marc] (3:45 - 4:09)

I love hosting these workshops, and I think that AI is such a wonderful topic at the moment because you can bring people together from lots of different places and have kind of this common thing to talk about. Everybody's still really excited. And I think one of the things that I notice is not everybody's getting exactly what they want out of AI today, and things are changing really fast.

So how did you structure this, and what was the theme of it?

[Dan] (4:09 - 4:47)

Yeah, and I really want to back that statement. There are not many topics, let's say in my working life, where you could send an invitation out to some really in-demand people and they would say, oh, yeah, I actually want to talk about that with others, and I want to know how other people are doing it.

I cannot remember another theme or subject where most people we invite would be like, yes, absolutely, I'll be there. Let me know. So this was not only a great topic, it's also super exciting to hear different viewpoints about it.

And now I got so excited, I forgot what your question was, Marc. I think it was about how we...

[Marc] (4:49 - 4:56)

I know, I know, it's really cool stuff, man. So how did you structure the workshop, and what was kind of the flow of the thing?

[Dan] (4:56 - 6:23)

Yeah, so how did we structure it? Well, we tried to think: what is a way to talk to people about a possible future event? It might happen, it might not happen.

No one can really decide how things are going to play out. So what is a good way to structure a conversation around a bunch of unknown unknowns? What we went for was scenarios.

Okay, why don't we come up with a set of scenarios based on a pair of axes? You might have a quadrant of A, B, C, and D, and each direction on the axes, north, east, south, west, represents something slightly different happening. And by having scenarios to talk about, like, okay, this might potentially happen: how good is this for your organization?

Do you see any risks in there? What are the impacts? How could you encourage this scenario to happen?

How can you act against it? Find ways to stop this scenario from happening if it's risky for your organization. That gave us a pathway and a structure to talk about a future that we don't just sit back and wait for.

We can have some part to play in designing what happens with this technology. So yeah, I think it was a good structure. I was part of creating it, so I'm a little bit biased, but it led to really good, fun conversations, and conversations that came out with real-world actions and applications that almost anybody can start to take.

[Darren] (6:23 - 6:29)

Can we then dive into these scenarios? You mentioned them, so what kind of scenarios did you come up with?

[Dan] (6:29 - 6:40)

Yeah, it's difficult to talk about a visual thing through audio. So I'm waving my hands around trying to make axes, and obviously no one can see these, but I'll try my best.

[Marc] (6:40 - 6:50)

I'll mention here that a paper was created and will be available for download, so you will be able to follow along even if you're not currently using a whiteboard.

[Dan] (6:50 - 9:23)

Yeah, yeah. And thank you for mentioning that, Marc; that really saved me a lot of arm waving and gesturing to nobody, well, just my neighbors. After the scenarios, we had a discussion, and we didn't want to lose all the great thoughts that everyone had.

So we did put everything down: the workshop structure, how we structured and thought about it, and what the participants came up with as answers. We put all that together, converted it from post-it notes to an actual white paper, and that is available for people to read. So if this scenario explanation doesn't quite make sense, you can read it for yourselves.

So yeah, the scenarios. Let's talk about the axes first. Going one way, we asked the question: do we think these AI tools will be used by everyone? That's AI for all, and by everyone we mean not just everyone in a product organization but everyone in the business, from the developers to the product managers to the marketing people to the salespeople, to whoever.

Will everyone have access to this, and not just have access, but be able to use it well enough that it has an impact on their jobs? The other end of that axis is AI for few: it's only going to be available to a small subset of experts who know how to use it very well, and whether by design or just by how things play out, people leave them to their own devices and have to go and ask them.

Just like I have to ask an expert for help today when something is in their area of expertise, there's a small silo in the organization; that's maybe how AI could turn out. So there's the AI for few versus AI for all axis. And that already gives us two sides of a scenario.

So if that was north-south, then going east-west we can look at the purpose of AI; maybe that's a better way to put it. Will AI be used only for developing products?

That is, not just to write code, but to run things through the pipeline faster and faster and increase the efficiency of how we churn out features. Or will AI be used for making the product better? Then it doesn't matter how fast you make the product; the question is whether the product is any better for the user, the customer, the application side of things.

Can we really understand the customers better, the buyers better? Will we be able to come up with innovative new features that no one has ever thought of before, where it's only by putting all this into a smart machine that we come out with new solutions to problems we hadn't thought about? So that was the other axis: AI for development, or development practices, versus AI for product and better products.
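[Editor's note: a rough sketch of the quadrant Dan is describing, reconstructed from the conversation. The labels are paraphrased here, not copied from the white paper itself.]

                        AI for all
                            |
        better products     |     faster development
    AI for product ---------+--------- AI for development
                            |
                        AI for few

Each quadrant is one scenario: who gets the tools (all versus few) crossed with what the tools are used for (the product versus the development practice).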

[Marc] (9:24 - 10:33)

Keeping in mind that in our business, most people in IT have seen this kind of four-square layout, and the goal is usually to get to the upper-right corner, which is where you put the thing you want management to put the most emphasis on: the highest-risk or highest-value item. But we wanted to paint four scenarios to understand more clearly what happens if it goes to only an elite few, where AI is cordoned off so that not everybody in the organization even has access (it's like this a lot today, actually), or whether it's more widely available.

And on the other axis, is it used more by product management, for helping to understand how to hyper-localize or hyper-personalize things, or used more for development, where we just get better and better at AI for DevOps, for example, being able to write more code faster and get more value? Cool.

So when these were presented to the people, what was the operating mode? How did we proceed through this in order to get to an outcome?

[Dan] (10:33 - 12:44)

Yeah. So maybe before the operating mode, and reflecting on what you were saying: when you draw a set of quadrants in front of people, everyone immediately looks to one side and says, well, that must be the right solution. What was really important when we started the workshop was to say, none of these is the wrong answer.

None of these is the right answer. These are potential scenarios, and all could happen at the same time, because businesses and organizations are so different. Depending on your product, it might actually be better for you to have a small subset of experts really working on the innovation side of things and coming up with really cool things, which you just can't get with scale. You need that knowledge.

And with other products, it might actually be good for you to be churning out new features, A/B testing them, seeing how you can win market share, B2C apps, that kind of stuff. When we explored these, we really had to make it clear: this isn't about picking a winner. It's about understanding that all of them could happen.

How do we then take this frame into mind? If all could happen, which ones, for you as the product leaders of the organizations in the workshop, seem most likely or least likely to happen in your organization, in your industry? And maybe more pertinent going forward: what's ideal for you?

What's risky for you? And what does that mean? What changes if this scenario happens?

What makes it ideal? How big is the impact on a scale of like, yeah, it's more business as usual, we'll just adopt it and get on with it, or there'll be a huge effort needed for upskilling, training, rolling this out, getting people on the onboarding curves and so forth. So that's what we did.

And funnily enough, for an AI workshop talking about such a futuristic, cool new thing, we went old school. We had flip charts and post-it notes. And this is the consultant in me coming out saying that flip charts and post-it notes, I don't care what technology you invent, you don't get away from the satisfaction of writing on a post-it note and peeling it off and sticking it on a board.

So that's how we went forward with the workshop, with that frame in mind.

[Marc] (12:44 - 13:11)

All right. It sounds great. Now, I like the approach of going to paper and pen and moving things around and visualizing them like that.

So then I suppose there were presentations, and people talked about the scenario that's best for them and the one that's worst for them. Can you talk a little bit about people's approaches in those presentations and how they saw the work they had just been doing?

[Dan] (13:12 - 14:59)

Yeah. And we have to keep in mind that when you have 13 super smart, creative people in the room, you're going to get a lot of ideas coming through. And we had, I'm going to exaggerate a bit, hundreds of post-it notes of different things that could happen, and we were trying to keep it all contained.

How do we summarize all of these things to present back to the other groups? Everyone's been in a workshop where, once it goes over three or four people, there's some human handling to do: bringing people together and making sure there's cross-pollination of ideas when people are split into groups, so that they come back, give their ideas back to the room, and let others contribute, allowing for even greater discussion.

And, I keep giggling at this, it was such an analog, manual way of working: writing things down and making people stand up and present their idea, reading off a post-it note and a flip chart, about how they think technology will move forward with these AI models. It seems a bit incongruous, but it was really the best way to share ideas and collaborate in this kind of workspace, and I think that probably won't change going forward. You might have these virtual rooms and so forth, but getting someone to stand up and speak confidently about here's what I really think will happen...

What do you think? I think it allowed us to have an open conversation, and by open I mean this topic allows you to be really smart but also really vulnerable, by saying, well, I'm not really sure this is going to happen, but what do you, the rest of the 12 really smart people, think about this scenario? Agree, disagree, what would you do about it? So it was a really nice atmosphere of collaboration and idea generation.

Again, I think I answered your question. I might not have, Marc. Let's see.

[Marc] (15:00 - 15:16)

I think, fair enough. So there was an outcome and kind of an aftermath here. Would you like to describe, Dan, what happened next?

And then we can talk as well about how it happened. What happened? And how did it happen?

Yeah.

[Dan] (15:17 - 17:56)

What happened next? Yeah. Let's keep in mind, when we have so many ideas, and maybe this is why people like myself facilitate, as you remarked earlier, the job is shaping the outputs in a way that we can actually get actions out of them.

Because the good thing about the topic is that everyone can have an opinion, make a forecast, and pass judgment on these things, really good judgments and opinions, but the ideas multiply and compound off each other, and people get excited. We got excited.

So how do we bring this back into something people can actually act on? That's what we tried to do. We said, okay, for each of these scenarios on these axes, whether something's more likely to be AI for all or AI for few, AI for product or AI for dev?

What are the real tangible things that everyone agrees on? And what are the real tangible things that actually everyone disagrees on and we can't make a prediction for and what can we do about it? So we asked the question, okay, for this scenario, is this impact that you've come up with positive?

What effort do you think you would need in your organization to make it happen? This is a positive impact. How can we encourage this?

Because we want to bring that positivity forward and that benefit forwards. How do we encourage this? How do we make the most of it and get value from it?

On the flip side: okay, this is a risk in this scenario for you, a negative impact. What can you start doing to counter that risk early, so it doesn't get embedded and ingrained? We worked through it that way, and what emerged was that some things hold across the board for all scenarios: things you can do generally, no matter what happens, to get the benefits of some of the impacts.

So for example, if your organization starts investing now in training and upskilling people, then across all the scenarios there should be a positive impact and a risk reduction, and that was great. And there are a few more of those; we can go into them.

We also found that some scenarios gave us a few unexpected findings, things that going into the session I at least wasn't considering potential risks. So we captured those too, so we can share them with the wider community and say, okay, actually look out for this, because it could be a problem when you're trying to apply some of these things. And you asked about what happened afterwards.

What we did was capture all of these, take all the post-it notes and everything, and work them into a white paper, a guide: okay, here's what these really smart people think, and here's how you can now apply these findings in your organization.

[Darren] (17:57 - 18:30)

On the subject of the white paper, there's a couple of questions I'd like to ask. I don't want to go too much into it, because I feel like people should download it and read it. But take, for example, the AI-for-all scenario: every time impact and effort were discussed, on both the positive and the negative side, no negative results or impacts were actually agreed upon for AI for all.

Is that something you can elaborate on? Was there really no thought toward any kind of negative impact of AI for everyone?

[Dan] (18:31 - 19:34)

Looking back on it, I think this is also a question of how we structure a workshop and how we give room for discussion of everything. As humans, as people, we're just excited about it. And when you have this maybe unconscious bias that a scenario sounds really good, like a net positive, picking holes in it just didn't happen in the session.

There definitely were challenges in the technology, yes, there were challenges to it. But in contrast, the other scenarios produced some very obvious negative impacts. So maybe that's why, in the white paper and in our findings, you'll see that if it's AI for all and it's democratized, generally you're going to get the positives, and you can manage the negatives in your day-to-day, business-as-usual life.

You'd manage them just like any risks in your business, whereas the other scenarios, which had negative impacts and risks called out, those were big enough to require a specific response, I'd say.

[Darren] (19:34 - 20:10)

Okay. Maybe I'm just coming from the wrong side of this as the resident doomsayer, but to me it seems like the AI for all... Also known as the security nerd.

Yeah, the security nerd. I think my title should be resident doomsayer; that has a certain ring to it. But yeah, to me it seems like there would be a number of obvious negative drawbacks there.

Going through the white paper, that's the first question that pops out at me: why is this one considered the ideal? I understand the bias, and not wanting to bring down the energy of the group on something they're excited about.

[Dan] (20:10 - 20:10)

Yeah.

[Darren] (20:10 - 20:14)

But you know, I'm a security nerd, so bringing down the energy is my speciality.

[Marc] (20:14 - 21:18)

One of the structural elements was looking at the impact and the likelihood of items. And the nice thing was that when we look at a question like that, the answer is what the product development experts we had in the room thought of the situation. So it all traces back to that.

Because in a nutshell, Dan, at the end of the day, you took a picture of the post-its, you uploaded it into ChatGPT, and out came a white paper. We did edit it a fair amount, and we didn't just do one prompt. There was a fair amount of prompt engineering there as well.

Look at it like this. Look at it like that. Not only examine the scenarios, but what type of findings relate directly to the scenarios, what type of findings relate directly to the people in the room, things like that.

But at the end of the day, the outcome was strong enough that people signed it, didn't they?

[Dan] (21:18 - 24:11)

Yeah, absolutely. And the people in the room signed it and distributed it and were really proud to have worked on it. And that shows a degree of confidence.

And maybe we should have had a resident doomsayer in the workshop. At the next workshop we do, we'll definitely save a seat for someone with that unique set of skills. But the people who came to the workshop were willing to put their names on something that would be shared with their peers.

And I'd categorize sharing an idea, putting yourself out there in front of your peers, your industry, and other people who might see it and who can affect how you work and network in the future, as a brave thing to do. And people were happy with it.

People were positive about it, generally. And that shows if you read the white paper and some of the ways we wrote it.

As Marc mentioned, we edited it to make it accessible and not overly technical, in either technology jargon or business jargon, and yes, we had to write things in a way that can actually be taken both as positives and as negatives. So let's go back to the AI for all scenario, where we said there were only positive impacts. I'll pick a good one.

Automation of routine tasks, freeing up time for creative and strategic work. That was one of the impacts. Now, generally that's positive.

And the effort required is to upskill and train people to a good enough level that they can actually do that. But that phrasing removes any nuance about people feeling negative about their work changing. Okay.

That part is negative: they're going to be stressed out, and going through an upskilling process, training, development discussions, that might not be totally positive.

And whenever I see freeing up time for creative and strategic work, the inner capitalist in me says, ooh, hey, profit margins. You can take that both positively and negatively, but the way we tried to write about it is: let's look to the future, let's treat people as adults, and they will know the risks behind this.

And let's call out the negatives when they're actually really negative. I'll take an example: having just a few people with expertise in AI and not doing a wide-scale training program. Both ethically and in the wider scheme of things in the industry, okay, that's going to be pretty negative.

And then on, let's say, the security side of things, security and ethics become even more important, and it flips from being a positive to a negative. So that's where we would call those things out.

I don't know if I just went on a bit of a rant, or maybe went into defensive mode about our baby white paper, but I'm just trying to explain how it came about and the reasoning behind it.

[Darren] (24:12 - 24:54)

You do raise one issue that I thought was actually really great about this white paper. And it might be because of the news around Meta and Llama, and them maybe not bringing Llama 3 to the EU because of GDPR and not being allowed to train on European data. But in your common themes across all four scenarios, you've got all these people from different business sectors in one room, and they all agreed that ethical guidelines and governance were among the critical things in every single scenario.

And I guess I've just been exposed to too much Meta and too much Elon Musk, so the fact that business leaders are coming together and saying that was just extremely refreshing.

[Dan] (24:55 - 25:22)

Yeah. It's not just good for us as individuals; I think it's good for business to take care of your customers, and to take care of your employees. Maybe that's my business focus on it, but everyone did agree.

And we have to remember these are people too. They are employees; even if they're in leadership positions in these organizations, they have to live and work with this potential scenario. They also want to be safe in this future world of work.

[Marc] (25:23 - 25:27)

Well put. Thank you. Let's do a fast summary, Dan.

What did we do?

[Dan] (25:28 - 27:02)

What did we do? We organized a workshop, a collaboration of leading experts in product organizations and technology in Finland. We got them together.

Oh, you're really pressuring me. We got them together to talk about potential scenarios of what could happen if and when they roll out AI tooling in their organizations. We looked at the scenarios of, is this going to be AI tools for everyone, or is it just for a few people?

Is this going to be focused on development and efficiency, churning out more and more features and products all the time, or is it going to be about making the products generally better for the customers and slowing things down to make sure we build the right thing? No single scenario was promoted as the best one, and no single scenario was picked as the worst one.

It really depended on your products and your industry. We got people to have a discussion, an analog, post-it-notes-on-a-whiteboard discussion, about the ideas, the concerns, what's most likely to happen, what the impact or risk is going to be, and what effort it will take to push the positive impact or reduce the negative impact. We got them to present and share with each other.

And of course, we captured all their findings in a nice white paper, with the help of our friend ChatGPT, to share widely what these smart people were thinking. Did I miss something? And where do we find it?

Where do we find the outcomes? It's available, of course, on the Eficode website. I won't read out the URL, but if you Google Eficode AI Workshop, you'll find the white paper.

Or if you go on the Eficode website and just type AI-driven into the search bar, you'll find the white paper.

[Marc] (27:03 - 27:11)

Or just ask us and we'll send it to you as well, if all else fails. Awesome. Thank you so much, Dan, for coming on the podcast today.

[Dan] (27:11 - 27:13)

It was great to be here. Thank you so much.

[Marc] (27:13 - 27:23)

Super happy to have you. And look, Dan, he summarized the whole thing with only two questions. All right.

Thanks again, Darren. Thanks, Marc. Did I mean to say "look, Darren"?

[Darren] (27:24 - 27:28)

Yeah, I think you meant to say "look, Darren", but I understand the confusion.

[Marc] (27:31 - 27:49)

Yeah, there's only a hair's breadth between you. So hey, thank you again for listening and we'll see you next time in the sauna. We'll now give our guest an opportunity to introduce himself and tell you a little bit about who we are.

Hi, I'm Dan Khurram.

[Dan] (27:49 - 28:02)

I'm Marc's colleague in the advisory and coaching team at Eficode. I help customers with product management, product strategy, and portfolio management, and generally with finding the right thing to do with their products at the right time.

[Marc] (28:02 - 28:10)

Hi, I'm Marc Dillon, lead consultant at Eficode in the advisory and coaching team, and I specialize in enterprise transformations.

[Darren] (28:10 - 28:17)

Hey, I'm Darren Richardson, security architect at Eficode, and I work to ensure the security of our managed services offerings.