#235 - From AI Chaos to Clarity: Building Situational Awareness with Wardley Mapping - Simon Wardley
“Everything an AI does is a hallucination. 100% of the time it hallucinates. It’s just that a lot of the time, the hallucination is right.”
Can you navigate AI disruption without understanding your landscape? Discover how to gain true situational awareness.
The rise of AI has exposed a fundamental problem in how organizations make decisions. Most leaders operate using stories and graphs, not actual maps of their landscape. This leaves them vulnerable to disruption and unable to make informed choices about where to apply new technologies. The result is chaos, waste, and strategic mistakes that could have been avoided.
In this episode, Simon Wardley, creator of Wardley Mapping, explains how to build true situational awareness in your organization. He shares why most business “maps” aren’t really maps at all, how to understand the landscape before making decisions, and what leaders need to know about AI adoption beyond the current hype.
Key topics discussed:
- Why leading with stories instead of maps creates fake CEOs
- The critical difference between graphs and maps in business strategy
- What Wardley mapping is and the three pattern types leaders must understand
- How to identify where human decision-making adds value in your AI adoption
- Why vibe coding is powerful but dangerous without proper code reviews
- Why software development is still a craft, not engineering
- How Jevons Paradox means AI won’t eliminate jobs but expand codebases
- The hidden dangers of AI hallucinations and the need for critical thinking
Timestamps:
- (00:02:59) Career Turning Points
- (00:06:45) Importance of Understanding Landscape for Leaders
- (00:10:42) The Problem of Leading with Stories
- (00:12:49) Wardley Maps vs Other Types of Business Maps/Analysis
- (00:17:32) Wardley Map Overview
- (00:23:54) Why Mapping is Not a Common Industry Practice
- (00:26:23) Climatic Patterns, Doctrines, and Gameplay
- (00:30:51) Understanding Disruption by Using a Map
- (00:33:17) Navigating the Recent AI Disruption
- (00:39:37) A Leader’s Guide to Adopting AI
- (00:42:49) Turning Coding From a Craft Into Engineering
- (00:48:05) Simon’s AI & Vibe Coding Experiments
- (00:55:28) The Importance of Critical Thinking for Software Engineers
- (01:03:49) Navigating Career Anxiety Due to AI Fear
- (01:08:56) Tech Lead Wisdom
_____
Simon Wardley’s Bio
Simon Wardley is a researcher, former CEO, and the creator of Wardley Mapping, a powerful method for visualizing and developing business strategy. His journey began accidentally after a bookseller recommended Sun Tzu’s The Art of War, which sparked a fascination with understanding the competitive “landscape.”
As the former CEO of an online photo service acquired by Canon, he felt like a “fake CEO,” leading with stories while lacking true situational awareness. This led him to discover that almost all business “maps” were merely graphs, prompting him to develop his own mapping technique. Today, his work is used by organizations like NASA and taught at multiple MBA programs, helping leaders to “look before they leap” and navigate complex technological and market shifts, including the current disruption caused by AI.
Follow Simon:
- LinkedIn – linkedin.com/in/simonwardley
- Twitter – x.com/swardley
- Website – www.swardleymaps.com
Mentions & Links:
- 🎧 #213 - Moldable Development: Explain Systems & Make Better Software Decisions - Tudor Girba – https://techleadjournal.dev/episodes/213/
- 📝 Rewilding Software Engineering – https://medium.com/feenk/rewilding-software-engineering-900ca95ebc8c
- 📖 Cybernation: The Silent Conquest – https://www.amazon.com/Cybernation-Silent-Conquest-Donald-Michael/dp/1013573110
- 📖 Catechism of Electricity – https://upload.wikimedia.org/wikipedia/commons/5/59/New_catechism_of_electricity%3B_a_practical_treatise_%28IA_newcatechismofel00hawk%29.pdf
- 📖 The Machine Stops – https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/Machine_stops.pdf
- 📚 The Art of War – https://en.wikipedia.org/wiki/The_Art_of_War
- Wardley Maps – https://www.wardleymaps.com/
- Learn Wardley Mapping – https://learnwardleymapping.com/
- Moldable development – https://moldabledevelopment.com/
- Technomic Empire – https://technomic-empire.lovable.app/
- Disruption theory – https://hbr.org/2015/12/what-is-disruptive-innovation
- Jevons paradox – https://en.wikipedia.org/wiki/Jevons_paradox
- Wardley maps – https://en.wikipedia.org/wiki/Wardley_map
- Climate – https://learnwardleymapping.com/climate/
- Doctrines – https://www.wardleymaps.com/glossary/doctrine
- Gameplan – https://www.wardleymaps.com/glossary/gameplay
- Eliot Sivowitch Law of Firsts – https://invention.si.edu/invention-stories/sivowitch-law-firsts
- Neuropsychology – https://en.wikipedia.org/wiki/Neuropsychology
- Manus – https://en.wikipedia.org/wiki/Manus_(AI_agent)
- Jill Lepore – https://en.wikipedia.org/wiki/Jill_Lepore
- Aleksandar Simovic – https://x.com/simalexan
- E.M. Forster – https://en.wikipedia.org/wiki/E._M._Forster
- Sun Tzu – https://en.wikipedia.org/wiki/Sun_Tzu
- Lao Tzu – https://en.wikipedia.org/wiki/Laozi
- Clayton Christensen – https://en.wikipedia.org/wiki/Clayton_Christensen
- Creative Commons – https://en.wikipedia.org/wiki/Creative_Commons
- Blockbuster – https://en.wikipedia.org/wiki/Blockbuster_(retailer)
- Nokia – https://en.wikipedia.org/wiki/Nokia
- 🎞️ 300 – https://en.wikipedia.org/wiki/300_(film)
Tech Lead Journal now offers you some swags that you can purchase online. These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available.
Check out all the cool swags available by visiting techleadjournal.dev/shop. And don't forget to brag yourself once you receive any of those swags.
Career Turning Points
-
What happened is, I started to realize that I used to come up with these wonderful visions and strategy statements and they were just made up words I pinched from other company. I had no idea what I was really doing. And I was the CEO of the company.
-
Sun Tzu talked about five factors that mattered in competition: have a purpose and moral imperative; understand your landscape; understand how the heavens, the climate, the weather, how that is changing climactic patterns; then understand principles, doctrine, how you organize yourself; and then you get into leadership and gameplay.
-
And what I was fascinated by was this concept of landscape. How do I understand the landscape around me? And that’s what kicked me off into the whole journey into mapping.
Importance of Understanding Landscape for Leaders
-
It was the discovery for me that landscape was important.
-
When we think about competition, there are three basic forms of competition. Competition is the act of groups of people seeking something. And we can do it through conflict, fighting others, collaboration, laboring with others, or cooperation, helping others. So they’re all forms of competition. And when we think about military history and where you’ve got groups of people competing, and that doesn’t have to be through conflict. It could be collaboration, cooperation as well. And maps are really important as a means of communication between them.
-
Before I had maps, I was like everybody else. I used to run my organization on something called stories. In fact, I used to read all these articles about how great leaders were great storytellers, and I thought, get the story right and everyone will follow you. But it’s a bit like generals saying, we’ll give you a great story of what to do, but we’ll have no understanding of the landscape. No maps. No situational awareness.
-
And so I very much, before I had mapping, felt very much like the fake CEO. And the reason why I felt like the fake CEO is I had no idea what I was really doing. I’d never done an MBA, so I assumed there was some sort of secret thing you learned on MBA, which you learned how to do the right thing. It turns out, they didn’t teach mapping. I was like a typical CEO, all story led. Very worried that people might rumble that I really didn’t know what I was actually doing, had this terrible internal fear that people would discover.
-
I didn’t understand what was wrong until I read that book. It was the connection to landscape. That was the bit that was missing. Once I could see the landscape, I could see what was wrong with my stories and strategy and we had ways of communicating.
-
I assumed this is what you learn at MBAs. It took me another six or seven years to realize, no, it wasn’t, they don’t teach you this, which was another shock. Cause I, assumed if I knew it, then everybody else knows it.
-
At the point, I could see the landscape is like looking at the chess board for the first time. It makes sense.
The Problem of Leading with Stories
-
One of the problems with stories is that because we tell people that great leaders are great storytellers, if I’m leading an organization, I give you a story. If you challenge my story, you’re actually challenging my leadership ability. The whole thing is built around these stories and there isn’t really a way of people communicating. So what you have is often people on the frontline, you can see the obvious fault with what we’re trying to do but have no effective way of communication because the story is almost sacrosanct.
-
And that changes with a map because one of the beauties about a map is if you put everything down on a map, somebody can say, I think the map is wrong, not that you are wrong. So we now communicate through the map. It doesn’t matter whether you come from finance, engineering, operations, whatever, we can actually all communicate with one language, which is the map. And we challenge the map, not the person. So that’s a big, big fundamental change as well.
Wardley Maps vs Other Types of Business Maps/Analysis
-
I put this SWOT and this map together and asked how would I communicate what we’re gonna do in battle? And it was obvious I’d use a map. But everything I was using was a SWOT. So I thought, okay, I’ve gotta find all the maps in my organization. We’ve gotta start communicating with maps.
-
So I asked people to send me maps, and they sent me loads. Mind maps, business process maps, systems maps, customer, just loads of maps. And they were great. And I took one of them and I think it was a mind map. I was looking at it and I took one component and I moved it and I asked, how has the map changed? And it hadn’t because the map had components with links and if you just move it left right up or down a little bit, it doesn’t make any change. I looked at geographical map. If I move UK and put it next to Australia, that really has changed that map. So why hasn’t my map changed?
-
It took me a bit of time to realize that all the things that I had in business, which were called maps, had one thing in common. None of them were maps. They were all graphs. They were all node and connection diagrams. And the distinction between a graph and a map is, in a map, the space itself has meaning. So you can’t just move a piece without changing the fundamental meaning of what you’re looking at. And so all the things I had, business process maps, mind maps customer journey map, all of them were actually graphs and we really should call them graphs. And so I couldn’t find any maps.
-
It took a long, long time of doing my mapping, and I just ended up calling it Wardley mapping.
-
So I started off with SWOT diagrams and all those sorts of frameworks and analysis. Realized I didn’t understand my landscape. Asked people to send me all the maps they had. Found out that everything we had a map wasn’t a map, it was a graph. And eventually ended up having to create a map which I made Creative Commons and shared it with others in the hope that others would find it useful.
Wardley Map Overview
-
The first challenge, I knew that space had to have meaning in a map, but where does that meaning come from? It comes from a number of places. One, you normally have an anchor, such as the compass, north. You have position of pieces. This is north, southeast, or west of this. And you have consistency of movement. So if I’m going north, I’m going north. I’m going south, I’m going south. So I had to recreate that not in a territorial space, but in technological, economic and also in these days, political, social, and even legal spaces.
-
So how do you recreate those characteristics? The first thing I needed was an anchor. And so I looked at my own business and I had various systems diagrams, and I thought, well, what are we gonna anchor around? And I thought, well, we have the business, we have the consumers, we probably have government and legal requirements. So we’ll put those as the anchors we focus. ‘Cause you can have more than one anchor on a map.
-
If you take the business or the consumers, you would start by first of all thinking what do they need? So what you can do is create a chain of needs. So now I’ve got an anchor at the top and a chain of needs. You can think of this as a supply chain. Now when within an organization, the boundary, we normally call that a value chain. When we cross multiple organizations, we call it a supply chain. There’s no difference, we start off with an anchor and we’ve got a chain of components that make that thing possible, whatever need we’re meeting.
-
So that gives me anchor and position, but it doesn’t give me movement.
-
Movement’s really easy in territorial spaces but technology does rapidly change in our lifespan. So in order to describe movement, I ended up having to describe it in terms of change itself. And it turns out there’s a common pattern by which things evolve. We’ve got genesis, custom built, product, commodity. So what I can do is I can take my graph, which is my anchor and the chain of needs, and simply ask the question, how evolved are those components?
-
And so literally by putting things where they are, then we enable others to look at the map. They can add that challenge to the map as well.
-
You start with the, by finding out who are the users, the anchors, say business consumers. What are their needs? What are the components involved in meeting those needs? And then you simply ask how evolved those components are. And that is a map.
Why Mapping is Not a Common Industry Practice
-
Vikings used to travel by stories, by narrative. They used to tell these fantastic stories, which were basically navigational documents of how to get from A to B. And then at some point, somebody came along and came up with the idea of sunstones and started creating perhaps. And they turned out to be quite useful. But it took a long, long, long, long time before we had the sort of modern day maps that we see.
-
So why haven’t we done maps before? Why have we just used graphs and stories? Because that’s how we used to do it. That’s all I can say. And that’s how I used to do it. I thought, this mapping was what you learned at various business schools.
-
I created a way of mapping. And others have found it quite useful.
-
One of my favorite quotes from Eliot Sivowitch is whenever you discover who was first, the more you look, you find someone else who was more first. And the harder you look, you find that the first person was actually third.
-
I’m sure there were people doing mapping of technological, economic, social, and political spaces, maps, not graphs. I just haven’t come across them yet.
Climatic Patterns, Doctrines, and Gameplay
-
Once you start mapping out a landscape, you start learning that things move on that landscape. Technological, economic, social, political spaces, they move quite rapidly because of competition. So the first lesson you learn, is everything evolves. If there is supply and demand competition, things are moving.
-
And then you start learning patterns, like we have inertia to change, ‘cause of pre-existing capital. So when we shift certain product to commodity, we often have resistance because of pre-existing business models.
-
Classic example is Netflix x Blockbuster. Blockbuster out innovated everybody, they were a highly innovative company. Netflix was a DVD mail order company, blockbuster, highly innovative. But the problem is their business model depended upon physical stores and late fees. And so that created an inertia to change. You learn this pattern that we have inertia.
-
What I often do is with the maps, I use maps before we do stuff. So we map it out and we use it to challenge what we’re gonna do. Then we go and do stuff and then afterwards we use maps to learn patterns. And so we start building up more and more of these patterns. And then you notice patterns about the patterns.
-
So the first one you notice is that some of the patterns are gonna happen regardless of what you do. So they’re driven by competition itself. So those I call climatic patterns or rules of the game. So these are gonna happen on the map, regardless of what you do, things are gonna evolve. You are gonna have inertia as they evolve, they will create new practices.
-
And then you have a whole bunch of patterns you’ve got choice over. The first set of them are ones which I call doctrine principles, which are universally useful as far as I can tell. So these are things like focusing on the user needs. That’s a good principle. Focusing on the users, you need to do that before you’ve done the user needs. Understanding the supply chain, turns out that’s a good thing to do as well. Understanding how evolved the components are, how we treat a custom-built kettle and a kettle are two different things.
-
There’s about 30 of the climatic patterns and there’s about 40 of the doctrine principles.
-
And then you get onto a big set of patterns, which are all context specific. So they change the map, if you use them in the right context. And these I call gameplay. And there’s about well over 100 of these. So things like open source, fantastic for accelerating the industrialization of something. Fear, uncertainty, and doubt, great for exploiting other people’s inertia.
-
And so once you have a map, you can, you apply the climatic patterns to see how things are changing. You put your gameplay into it. So you choose those patterns as well. And then you organize and structure yourself around it.
-
So you learn that of the patterns there are three basic types: climatic, principles and doctrine, or doctrine, and gameplay. And climatic will happen regardless of what you do. The doctrine principles are universally useful. You don’t have to do them, you don’t have to focus on user needs, you don’t have to understand the supply chain, but it’s quite good idea to do so. And then the last lot are contextual gameplay. So they are powerful when applied in the right places, like open source. Great if implied in the right places, not so good if applied in the wrong places.
Understanding Disruption by Using a Map
-
One of the things you learn from mapping is there are at least two different forms of disruption. Christensen who came up with disruptive innovation, used to get into this whole argument with Lepore. Lepore would say that Disruption Theory is not predictive and gave all these examples of why it was not predictive. And Christensen say it was predictive and here’s my examples.
-
The problem is, there are actually two different forms of disruption. So when you map it out, what you learn are product to product substitution, can be highly disruptive, but it’s highly unpredictable. Whereas product to utility or product commodity disruption is highly predictable. You can say a lot about what’s gonna happen, when it’s gonna happen, what practice is gonna change, etc. So when we talk about Disruption Theory, there are at least two different forms.
-
But if you can’t map it, you can’t see those two different forms. And so you end up with these big figures just having this argument over, oh, it’s just predictable, it’s not predictable, etc. And the answer is, they’re both right, both wrong. But if you can’t see the map, you totally miss that.
Navigating the Recent AI Disruption
-
That’s such a big question. There’s the trivial stuff. And the trivial stuff is that large language models that represent industrialization of machine cognition, that creates a whole new set of practices which appear. So you get things like vibe coding, prompt engineering. It’s gonna allow for new activities.
-
You can take all the basic economic patterns, the climatic patterns, AI is a big field, but within it, many different components. Some of those are industrializing. As a result, you get co-evolution of practice, you’re gonna have inertia. You’ll get impacts like Jevons Paradox.
-
If we look back at cloud in 2007, 2008, loads of CIOs running around saying, oh, we can get rid of our sys admins because of cloud. If you map it out, what’s happening is, yes, compute’s industrializing, but we’re gonna see a new set of practices, dunno what it’s gonna be called yet. Your IT estates are gonna expand massively because you’re gonna be able to do new activities, which actually will industrialize. You’re not gonna save any money, ‘cause you’re in competition with others. And so all those people you want to find, you really need just to retrain. Otherwise you’re gonna have to hire them back as really expensive something else.
-
And then you get these CEOs, CIOs go, oh, get rid of our sys admins. And a few years later, they’re desperately running around trying to rehire DevOps engineers at inflated rates and all the rest of it, which were just their sys admins retrained.
-
You’re getting the same rubbish. I listen to these CEOs, CTOs, we could get rid of all our engineers. Jevons Paradox is gonna come and hit you so hard in the same way. Your IT estate went from 2,000 servers to 200,000 virtual servers. Your lines of code are gonna go from a hundred million lines of code to 30 billion, 50 billion lines of code. And a lot of this stuff is gonna be AI generated and you’re gonna get VPs vibe coding, but you’re gonna have to review this stuff. And so you’re gonna have to have people really skilled in this space. All your software engineers, you’re gonna need them retrained in this space.
-
I find it fairly delusional but it’s normal. We had the same nonsense in cloud and you can go back to 1968, Donald Michael’s Cybernation: The Silent Conquest - how computing was gonna make everybody unemployed - all the way to a 1896, new Catechism of Electricity, Nehemiah Hawkins. Electricity would make everybody unemployed. It’s desperate stuff. It’s great for vendors, as in, VCs and AI vendors for people to believe this. But it is not. People are gonna make a lot of mistakes.
-
One of the biggest questions that you have to answer today is where is human decision making valuable in our landscape, because some aspects you aren’t gonna vibe code. By vibe code, what we mean by that is what we talked about back in 2018, which is conversational programming. So you are gonna have a conversation with the system and it’s gonna build you something, and you are never gonna look at the code. That’s what we mean. Lots of discussions about this seven years ago. The latest term is vibe coding. You are also gonna have software engineering plus AI. This is where we get the AI to build stuff and people review, and then you’re gonna have a whole bunch of stuff which you’re gonna outsource to other parties. So how do you decide where that’s going to be?
-
One of the beauties about having a map, it’s a bit like practices and methodologies. Once we have a map of the space, it becomes much easier to see where I should use things like Six Sigma, where I should use lean, where I should use agile, extreme programming. It becomes much easier to look at the map and say, this is the area I should outsource. It’s all commodity. Stuff in the middle, I’m probably gonna use software engineering plus AI. Stuff on the left hand side, I’m gonna use vibe coding. So of course, if you don’t map your landscape, much, much more difficult to actually have those conversations.
-
You’re gonna see lots of errors and mistakes and people do things like get rid of their software engineering departments and then hiring them back later as, AI engineers or some other word. It’d be that sort of mess.
-
It’s not a question of if, it’s a question of when. You are gonna get a change of practices, you’re gonna have inertia, explosion of new activities, Jevons Paradox. And the interesting stuff is the stuff about architecture and where does human decision making actually make.
-
Our biggest problem in things like architecture is that the architecture is actually in the code. We draw these wonderful diagrams and think that is architecture that’s more like a prompt. The architecture is in the code. That is the code is to actually make the real decision. And of course, when you hand that over to an AI, it doesn’t matter what your diagram is the real decision’s gonna be made by the AI and we might be okay with that in certain prototypes and things. In other areas, it’s gonna take us quite a bit of time before we trust that stuff. And that’s where you are more software engineering than AI.
A Leader’s Guide to Adopting AI
-
You need to adopt AI, because the market expects you to adopt AI. There’s very questionable results in terms of performance improvements. And it’s not a question of if, it is a question of when.
-
So number one thing, you will declare to the market that you are going full AI, because that’s what the market wants to hear.
-
Number two, start getting your people to learn how to use AI, focus more on the prototype stages. If you don’t have maps, start mapping out your landscape and working out where you need human decision making in the loop, don’t get sucked into that, oh, we can get rid of our software engineers. Now, VP can just vibe code something through a few prompts and we’ll put it into production and magically it will all work.
-
Unless you’ve got very, very good lawyers. ‘Cause you’re heading towards a world of pain, I would take the position of we’re taking our people, we’re gonna give them some time to learn about this stuff, we’re gonna try and understand our environment. And when we understand our environment, we can see more easily where it needs to be applied.
-
Of course, we’re gonna tell the market, yeah, we’re a hundred percent AI. But internally, start to understand your landscape, see where it can be applied, get your people, so they’re using the tools and getting familiar with the environment. That’s what I would be immediately doing.
Turning Coding From a Craft Into Engineering
-
There’s another side to this as well, which is the engineering question. I do some work with Tudor on Rewilding Software Engineering. Part of the problem with software engineering is there’s two sides. There’s development and testing, testing is an engineering subject. Development is a craft. And there’s a whole reason, a little bit too complex to explain on this, and I can use maps to explain why it is, but it all boils down to tool sets.
-
There’s only one engineering subject which sort of believes it can use the same tool everywhere. And that’s software. So it doesn’t matter whether I’m building an electronic healthcare record system or online gambling site, I can use the same tools. We should be using highly contextual tools in the same way that every other engineering subject and there are reasons why this is.
-
Currently, we’ve got all these vendors who’ve been trying to flog us the same tools and now flogging us the same tools with added AI. It’s a bit like somebody going and saying kitchen blender. Oh, you wanna build a deep mine shaft? Here’s a kitchen blender with a robot. No, actually I really want deep mining tools. Not kitchen blender with a robot. We are being done a disservice at the moment in the field of software by the tool vendors.
-
If you think about testing, whenever we have a problem, what we do is we start off by building a small test which the problem fails, so we build some code. And then it works. And so this is a test driven development. And what happens is through that process, we explore the space by building lots of small tests, and we build up a test suite. And we might have 50 to 100,000 tests, which are all for that contextual problem, they’re all small tools. Inputs, outputs, traffic lights. They’re just tiny tools. But we build up a highly contextual test suite for that space.
-
No one turns up to us and says, here’s the ACME hundred thousand test suite. Just run it against your application. You’d look at them like, what are you talking about? My application is nothing like your application. Of course, my tests have to be contextual.
-
It’s exactly the same with any other toolset. But we don’t do this. We instead have like the ACME test suite, but it’s the ACME tool as we have a bunch of standard tool vendors trying to flog us the idea that tool building is hard and the best way you can do it is just using our standard tools. It’s nonsense. Tool building is easy.
-
The interesting thing, if you take that approach, there is a bit where AI truly then starts to shine, because by building tools for the problem, we’ve seen massive improvements. Many, many orders of magnitude improvements in the speed of development, on one case, you’ll have 600x, which is like a ridiculous figure. But when you start doing that, the AI really shines because it can be very, very useful in helping you build new micro tools, and also coming up with new hypothesis to test as well. That sort of combination is really, really interesting to me.
-
Whereas I think the combination of, oh, we just write a prompt and magically, it will write some code. It’s better than people just writing code as a craft. But I think what we really need to do is turn development into an engineering subject and then apply AI to it. And that’s where the real power is. And we’re not there yet. Testing as an engineering subject. Development is unfortunately still a craft. And unfortunately, we’ve got tool vendors kind of flog to you kitchen blenders with robots which isn’t helpful. But there we are.
Simon’s AI & Vibe Coding Experiments
-
I love vibe coding. My actual website swardleymaps.com, it’s entirely vibe coded. As in I haven’t looked at the code and all the rest of it as well. But it’s all stuck within a browser. And, there’s various restrictions on it. And I love vibe coding. It’s great fun. I do all these experiments and I come up against so many horrors.
-
I did one where I was building a particular system. I would get the AI to build me a testing engine within the system. Which it did. And so I then said, right, so every new functionality, build a test and add it. And it was doing this and it was great. So it was building new functionality, building more tests. I could run the testing engine, it would say everything passed. Every now and then things would fail. Copy the logs. Put them in. And the AI would fix it. It was marvelous.
-
And then after a bit of time, I started to think something was wrong. And I resisted but eventually I went and looked at the code. Now what the AI had done is it hadn’t built me a testing engine. It built me a simulation of a testing engine. So there was zero testing. It was entirely simulation.
-
You get loads of examples of this. Particularly you start to realize that these system are stochastic parrots. They don’t understand the thing that they’re doing.
-
I love getting these AIs to write me scientific papers. AIs, because they’re trained on data and large amounts of data, they’re very good at open-ended questions, i.e. ones you can’t check. And they’re good at closed ended questions when there’s a lot of data. But of course, if there’s no data, then they try to be helpful and they try to sound authoritative, but so they write something. It’s just total garbage. It’s just making things up.
-
I love them as tools. They’re great fun to use. You have to be very, very, very careful with them.
-
It’s extremely dangerous when people think, particularly in software, I can just vibe code something without looking at the code and then put that into production, and it’ll be great. And you get carried away with agents as well, because you have other agents testing and all the rest. And you end up then having this idea that I can give an architectural diagram, which is just a prompt. It’s going to build the thing which matches that. Probably isn’t. And then I’m gonna have something else which tests that it’s built the things that match. That’s hallucinating as well. You get into all sorts of problems.
-
Just remember they are stochastic parrots. And they don’t understand. There is zero understanding of what they are doing.
-
And when it comes to hallucinations, just remember. Everything an AI does is a hallucination. A hundred percent of the time it hallucinates. It’s just that a lot of the time, the hallucination is right, but they are all hallucinations. It is just a lot of the time it’s right. And so just be very careful.
-
And there’s no understanding. We have this idea that they think in terms of, they understand deeply. Now the question you have to ask is, do not humans operate in the same way? We have little to no understanding of how humans operate. We have very, very poor understandings of how the human mind works. We make terrible assumptions about AIs and trying to associate with the humans.
-
By hallucinations, typically the industry use it in a term of when it’s got it wrong. I think it’s far better to think of it’s hallucinating all the time. But, a lot of the time those hallucinations are right. ‘Cause it’s not thinking. It’s not understanding in the way that we think understanding is, even though we don’t actually understand what understanding is.
The Importance of Critical Thinking for Software Engineers
-
I like mapping out sectors, like mapping out defense, healthcare, and all the rest of it. And so one of the areas I mapped out, back in 2022, was education from multiple perspective. The reason why you do multiple perspectives is if you imagine no one had mapped Paris and you sent one group to map Paris and they came back and you said, what was the most important thing in Paris? They might say Pierre’s Pizza Parlor. Cause they mapped it from a perspective of eating nice pizzas. So what you want to do is map from multiple perspectives and then you aggregate across them all. And then you discover things like the Eiffel Tower matters.
-
This is why I deal with all these professors of education and everything else. Got them to map out education. Then we mapped it out from all these different perspectives and then we aggregate it to find what mattered. Now the question is, your final focus is, are you focusing on market benefit or are you focusing on social benefit?
-
If you map out education from market benefit, then it’s things like use of AI in classrooms, digital access, which are things the market sells and it loves doing. But if you map it out from a perspective of social benefit, where you should be investing is like lifelong learning and critical thinking.
-
Critical thinking isn’t even a course in most educational establishment, because most education despite the best efforts of teachers, most education systems seem to be set up to produce useful economic units. To produce people for the workplace. So critical thinking may not be high up having AI skills maybe higher up, but being critical thinking may not be high up on that list. We don’t teach it in schools, despite the best effort of teachers to sneak it in there. We’re pretty poor. Certainly, in the Western sphere.
-
Is it important to think about what we’re doing to review what we’re doing to ask questions of what we’re doing? Yes. Do I think this needs to be a specific subject that is taught? I think absolutely.
-
We are increasingly living in a world of misinformation where we were increasingly potentially relying on systems that we don’t understand to do things for us there was a case example about or two weeks ago that somebody on ChatGPT had asked questions about, healthcare. And it come up with a diet for them to exclude sodium. And they ended up with a whole bunch of conditions, they replaced it with something else, they’ve ended up with quite severe conditions. Why? Because they believe that the stuff is supposed to sound as though it comes with authority understanding. But there is none and they can be quite dangerous, which is why we have guardrails most of the time. Not that it fixes the underlying system. We just hide better by saying I can’t give you an answer on this because that could be dangerous. Doesn’t mean a thing underneath it. It’s still that mess that it is.
-
So yes, critical thinking is important. And there’s a whole bunch of issues actually around values being embedded in these large language models. The problem is with the AI space, we’re getting the tools’ changing. The language’s changing. So we’re moving away from a much more declarative to much more conversational languages, say prompts. The medium is changing, so moving away from text to sort of much more seeding with images, with diagrams, etc. The language medium and tools are how you reason about a world.
-
So if you imagine an equivalent example would be the tool would be the printed press. Language would be the written word. Medium would be paper. If that was controlled by one group of people that have immense power over your lives. And unfortunately, this is what we’re starting to see. And the way to challenge against that is through critical thinking, having people able to challenge and by having openness. And by openness, I mean open all the way down. Not just open models. All the way down to the training data. Everything has to be open. Interesting work in China and France in that space. Cool. Not so much in the West. But those are the two ways of defend openness and critical thinking against the formation of new theocracies and new power structures.
Navigating Career Anxiety Due to AI Fear
-
There’ve been some interesting papers which talks about the decline in people’s capabilities through exposure to AIs. The Machine Stops by E.M. Forster. It’s a wonderful book from 1909. I would totally recommend everybody. And just read it all the way through. Cause I think it’s incredibly relevant for today.
-
Large language models in these AI systems, again, it’s not a question of if, it’s when. So they’re going to spread, we’re gonna learn new practices, we’re gonna learn better ways of using agentic systems. But it’s also gonna create new opportunities, new jobs.
-
First of all, the amount of code. You’re gonna have this job of trying to review or keep control off the code base. ‘Cause the code bases are gonna explode from, I dunno what it is for a company these days, 30 million lines of code, they’re gonna be hundreds of billions of lines of code. And the problem is with the AI systems, you will get to the point that it all, something’s gone wrong. At some point, somebody has to go and have a look. And so we’re gonna need lots of skills in those areas. There’s gonna be new jobs, completely new jobs.
-
I did a talk about this back in 2014. So we’re gonna have things like machine psychologists. Some of these systems are gonna create just their own strange behavior. And we’re gonna have to learn how to cope with these networks of machines thing. And then we’re gonna create entirely new roles that we’ve never thought of.
-
If you go back to Cyber Nation: The Silent Conquest, 1986. Computing is gonna get rid of everybody’s job. Or 1896 New Catechisms of Electricity. Electricity’s gonna get rid of everybody’s job. In the electricity case, you could have said, well, don’t worry, you can be a radio personality to which people would go, what’s radio? Well, it hasn’t been invented yet, but as electricity develops, we’ll have radio and we cannot see the jobs coming.
-
Or Cyber Nation, you’re going up to somebody say, oh, you, you’ll lose your, actuarial jobs, or accounting jobs or whatever it happened to be. You could say you could be a social media consultant and people go, what’s social media? That hasn’t appeared yet. So we’re gonna get a whole range of new activities, new jobs, new things we’ve never thought of, things like machine psychologists and onwards. We’re gonna find an explosion of activities anyway, an explosion of code, and we’ve gotta review and understand that as well.
-
It’s really difficult. We love making these doom lead and predictions.
-
One of the ones we’ve got to be super, super careful about is use of large language models in policy areas. Mostly because they have extreme bias towards market benefit, not social benefit. And at some point, nation state societies have got to really make that decision of what matters more market benefit, social benefit.
-
Take healthcare. If you are focused on social benefit, you are all about patient reported outcome measures, improving health outcomes based upon those. But we don’t do that. If you are focused on market benefit, you’re all about preventive healthcare and wellbeing. ’ Cause the market sells loads of that stuff, even though we don’t have a good idea of what a healthy person is, because we don’t do the patient reported outcome measures. We do what’s called ClinROs. So most of our healthcare systems are actually sick care systems. We’re good at treating symptoms, not making people healthy. So at some point, you’ve gotta focus on, when we talk about growth, do we mean growth of the market, preventive healthcare, wellbeing, great market opportunity? Or do we actually mean growth of society as in making people healthcare healthier? So like patient reported outcome measures. We have to make those decisions in different places as well across all different industries.
-
So I don’t think it’s an AI specific problem. It’s been a long running problem. Maybe AI will force us to make these changes, but the danger you’ve got with the AI in policy is, ‘cause they’re trained on market data, they have a massive bias towards market benefit, not so much social benefits. You have to be really, really careful just in that one space.
Tech Lead Wisdom
- Look before you leap. I’m a great believer in looking at the space before seeing what choices you have and making a decision. It doesn’t matter whether I’m talking about social policy or economic policy or technology within a space, and this is why maps is so important.
[00:01:28] Introduction
Henry Suryawirawan: Hello, everyone. Welcome back to another new episode of the Tech Lead Journal podcast. Today, I’m really, really excited to have Simon Wardley in the episode today. So, uh, maybe some of you would have heard about Simon, right? And heard about Wardley maps as well. So today we are gonna talk about Wardley maps and a little bit about his adventure, you know, doing all AI stuff. And we’ll learn a lot of things about, uh, you know, how his journey with the AI and also Wardley maps for sure. So Simon, thank you so much for this opportunity. So looking forward for the conversation.
Simon Wardley: Well, thank you very much for inviting me. It’s a delight to be here. So, uh, very, very much appreciated. And by the way, so weird, uh, having people say that you’ve probably heard of Wardley maps before. Because this all started as accident as ages and ages ago.
And, uh, so, you know, I’m sitting at a conference a couple of months ago. A couple of people came up to me and they said, oh, you are Simon Wardley. And I was like, wow, bit of a shock. Okay, yes, yes I am. And they said, we use your stuff all over the place. I said, that’s fantastic! Great! Wonderful! Uh, and I said, I asked them, which organization are you from? And then they went NASA. At which point you just go, wow, okay, this is weird.
[00:02:59] Career Turning Points
Henry Suryawirawan: Right. So I think Wardley maps have become really, really popular in the last few years, especially when people started talking and incorporating some of the ideas. But before maybe we go into the gist of Wardley maps, maybe for people to learn more about you and your story about Wardley maps, maybe if you can share a little bit some career turning points that you have that we all can learn from you. I think that will be great.
Simon Wardley: Oh, career defining points. Gosh, what, what, what, uh. My career is completely accidental. I, you know, it’d be nice to think I had a sort of career path and a plan or anything to begin with, but that is absolutely not true. I mean, I just wandered around, uh, traveled the world, helped people out, and I ended up, um, running a company in London. It was a small online photo service, which was acquired by Canon. And it was really interesting time ‘cause this is back in 2003, 2004, 2005. I mean, we were doing, uh, online photos. It was rapidly growing, but very profitable. But what happened is, I started to realize that, you know, I used to come up with these wonderful visions and strategy statements and they were just made up words I pinched from other company. I had no idea what I was really doing. And I was the CEO of the company.
And so I suppose the career defining moment for me was, when I wandered into a bookshop in Charing Cross. And I was talking to, uh, the bookseller and explaining this problem, how I’d read every book I could find on strategy. I had no idea what I was doing. She asked me if I’d ever read Sun Tzu’s The Art of War, which I hadn’t. And so she persuaded me ‘cause she was a great bookseller to buy two different versions of the book ‘cause they’re all translations. And I’m so grateful for that because it was in reading the second translation, I noticed a particular pattern. So Sun Tzu talked about five factors that mattered in competition: have a purpose and moral imperative; understand your landscape; understand how the heavens, the climate, the weather, how that is changing climactic patterns; then understand principles, doctrine, how you organize yourself; and then you get into leadership and gameplay. And what I was fascinated by was this concept of landscape. How do I understand the landscape around me? And that’s what kicked me off into the whole journey into mapping.
So, um, you know, I’d like to think that, you know, I had a vision and a plan of what…, it was just total random walk. And then this, this moment wasn’t even caused by me, it was caused by somebody else. It was caused by a bookseller who just asked me if I’d ever read this book. And I hadn’t. And it’s that book changed me. So there was, um, my career defining moment.
And personal defining moment, was actually when I was much, much younger. It was the poems of Lao Tzu had a fundamental impact on my life. So I suppose those are my two moments.
Henry Suryawirawan: Wow! So I think, uh, it is very interesting that you learn from the two so-called, I’m not sure, like it’s quite famous Chinese, um, war generals and philosophers, right? So Sun Tzu is like, uh, well known for like a war general and Lao Tzu is more like a philosopher.
Simon Wardley: Yeah.
Henry Suryawirawan: I think it’s really interesting that you kind of like, I dunno, like mix the insights from them and become, you know, like one philosophy of yourself as well, which is called the Wardley Map.
Simon Wardley: Well…
Henry Suryawirawan: Thanks for sharing that story. Mm-hmm. Thanks for sharing that story. I think it’s really interesting.
[00:06:45] Importance of Understanding Landscape for Leaders
Henry Suryawirawan: So maybe when we go to Wardley maps, right? I know that, you know, when I researched about you, uh, you were actually coming up with that kind of like learning about the landscape and all that, because of your struggle, right, as a leader, right? So I think many leaders these days are also probably experiencing the same struggles. Maybe in the first few minutes, if you can relate to your struggles back then and try to explain to those leaders who are also experiencing this now, what will be your message about Wardley map?
Simon Wardley: Gosh. So, so, it was the discovery for me that, um, landscape was important. I mean, if you think about, um, when we think about competition, there are three basic forms of competition. Competition is the act of groups of people seeking something. And we often, we can do it through conflict, fighting others, collaboration, you know, laboring with others, or cooperation, helping others. So they’re all forms of competition. And when we think about, um, military history and where you’ve got groups of people competing, and that doesn’t have to be through conflict. It could be collaboration, cooperation as well. And maps are really important as a means of communication between them.
So before I had maps, I was like everybody else, I think. I used to run my organization on something called stories. In fact, you know, I used to read all these articles about how great leaders were great storytellers, and I thought, you know, get the story right and everyone will follow you. But it’s a bit like generals saying, you know, doing the same thing. Well, we’ll give you a great story of what to do, but we’ll have no understanding of the landscape. No maps. No situational awareness. We’ll just say something like, I dunno, go bomb some hills. Yes, go bomb some hills. Wasn’t that good, right? Go bomb some trees. What hills, what trees where, I mean, where do we cooperate? Where are our borders, etc?
And so I very much, before I had mapping, felt very much like the fake CEO. And the reason why I felt like the fake CEO is I had no idea what I was really doing. I couldn’t find, I’d never done an MBA, so I assumed there was some sort of secret thing you learned on MBA, which you learned how to do the right thing. I now teach on multiple MBAs, and I teach mapping at places like LSC and things like that because it turns out, you know, they didn’t teach mapping. I was like a typical CEO, I suppose, all story led. Very worried that people might rumble that I really didn’t know what I was actually doing, had this terrible internal fear that people would discover. I didn’t understand what was wrong until I read that book. It was the, it took a bit of time, but it was the, um, that’s the book, the Art of War. It was the connection to landscape. That was the bit that was missing. Once I could see the landscape, I could see what was wrong with my stories and strategy and we had ways of communicating.
And then for me, it was like a revelation. And I assumed this is what you learn at MBAs. My God, this was the secret source that they teach you. It took me another six or seven years to realize, no, it wasn’t, they don’t teach you this, which was another shock. Cause I, you know, assumed if I knew it, then everybody else knows it. I suppose that was the, the beginning moment is very much, I felt like the fake CEO. I didn’t understand really what was going on. I was leading by stories. I didn’t understand what the problem was until I read that book or the second version of the book. And at the point, I could see the landscape is like looking at the chess board for the first time. It makes sense.
[00:10:42] The Problem of Leading with Stories
Henry Suryawirawan: Yeah, I suppose many leaders these days. I can relate when you said that many people tell that leaders should be able to tell stories, right? Be able to influence people through great storytelling, charisma, and, you know, narrate things, uh, you know, in a proper way, right? But I think most importantly is like we know the landscape, like what you mentioned. Maybe we’ll dive deeper later.
Simon Wardley: You just hit on one point there. The, one of the problems with stories is that because we tell people that great leaders are great storytellers, if I’m leading an organization, I give you a story. If you challenge my story, you’re actually challenging my leadership ability. So we love to say words like we like challenge and all the rest of it, but the answer is what we mean by that is I challenge you, not you challenge me. No, my story is, you know. And so the whole thing is built around these stories and there isn’t really a way of people communicating. So what you have is often people on the frontline, you can see the obvious fault with what we’re trying to do but have no effective way of communication because the story is almost sacrosant.
And that changes with a map because one of the beauties about a map is if you put everything down on a map, somebody can say, I think the map is wrong, not that you are wrong. So we now communicate through the map. It doesn’t matter whether you come from finance, engineering, operations, whatever, we can actually all communicate with one language, which is the map. And we challenge the map, not the person. So that’s a big, big fundamental change as well. And I didn’t learn that straight away. Took a little bit of time before that dawned on me.
Henry Suryawirawan: Yeah. And I think some people even infuse the so-called vulnerability, you know, the personal stories inside, you know, the way they communicate, you know, for leadership, right? And I think like what you mentioned just now, if we kind of like want to challenge the idea, not necessarily the person, right? I think it’s because of the personal stories and you know, the stories that the leaders tell, I think it can be deemed as an attack to, you know, the person as well. So I think that’s really, really, uh, insightful, right?
[00:12:49] Wardley Maps vs Other Types of Business Maps/Analysis
Henry Suryawirawan: So understanding the landscape. I think this could be translated into various forms, right? Like traditionally, I think you mentioned about MBA, people maybe have heard about SWOT analysis. There’s so many analysis that people are used to doing, I guess, right? So what’s the so different, you know, Wardley maps versus all the other analysis, you know, all these strategies that people also learn. So maybe if you can tell a little bit of the difference.
Simon Wardley: So I’ll start with the SWOT analysis ‘cause I used to live off those and I used to love them. For me, the challenging point was I had a of the battle of Thermopylae. So the battle of Thermopylae, Themistocles, ancient politician Greek general had a problem. Lots of Persians were invading. And so they had choices. And what they decided to do was to block off the Straits of Artemisium, force the Persians along a narrow pass known as Thermopylae. That literally means hot gates. It’s a narrow pass where a small number of troops could defend against a larger force. And there were about 4,000 Greeks in the army that faced these 140-170,000 Persians. And there was 300 Spartans. And this is where we get the story of the 300 from. I love this idea of using see the map, as a way of communicating where resources should be, what are we gonna do, what are our choices?
Um, but I use SWOT, so I created a SWOT for that battle. So, you know, Strengths, a high level of training with the, um, the Spartan Army. The Weaknesses, uh, the E force might stop the Spartans turning up, a truckload of Persians are turning up. You know, opportunities. Get rid of the Persians. Get rid of the Spartans for Athenian, we don’t actually like the Spartans. Uh, opportunities and threats, uh, that Persians get rid of us. And I made a joke about how the Oracle said a dodgy film would be produced a few thousand years later. I put this SWOT and this map together and asked how would I communicate what we’re gonna do in battle? And it was obvious I’d use a map. But everything I was using was a SWOT.
So I thought, right, okay, I’ve gotta find all the maps in my organization. We’ve gotta start communicating with maps. So I asked people to send me maps, and they sent me loads. Mind maps, business process maps, systems maps, customer, just loads of maps. And they were great. And I took one of them and I think it was a mind map. And I was looking at this mind map, or it might have been a systems map. I was looking at it and I took one component and I moved it and I asked, how has the map changed? And it hadn’t because the map had components with links and if you just move it left right up or down a little bit, it doesn’t make any change. But I’ve, I looked at geographical map. If I move UK and I don’t know, put it next to Australia, that really has changed that map. So why hasn’t my map changed?
It took me a bit of time to realize that all the things that I had in business, which were called maps, had one thing in common. None of them were maps. They were all graphs. They were all node and connection diagrams. And the distinction between a graph and a map is, in a map, the space itself has meaning. So you can’t just move a piece without changing the fundamental meaning of what you’re looking at. And so all the things I had, business process maps, mind maps, uh, customer journey map, all of them were actually graphs and we really should call them graphs. And so I couldn’t find any maps. And so I thought this must be the secret source they teach you at MBA schools. Because I hadn’t done MBA, I spent nine months, whatever, maybe longer, creating my super cheap and cheerful way.
It took a long, long time of doing my mapping, and which I called because I couldn’t work out a name for it, and I just ended up calling it Wardley mapping. ‘Cause people just used to say, what’s it called? I don’t know. A map? It must be something. Oh right, a Wardley map. There we are. Go away. So I made it all Creative Commons. And um, I thought, well, you know, this is for all the people who haven’t done an MBA, who haven’t done this super secret way of doing it properly. And it turned out that other people found it useful.
So I started off with SWOT diagrams and all those sorts of frameworks and analysis. Realized I didn’t understand my landscape. Asked people to send me all the maps they had. Found out that everything we had a map wasn’t a map, it was a graph. And eventually ended up having to create a map which I made Creative Commons and shared it with others in the hope that others would find it useful. And that was it.
[00:17:32] Wardley Map Overview
Henry Suryawirawan: Yeah. Thank you so much for sharing that as a Creative Commons. I think that’s the first thing, right? So I’ve started seeing, you know, people referencing Wardley maps in books that they wrote, right? Also you have the website as well. So one thing that is probably like for people who are new to Wardley Maps, right? The first few times that they read, probably is a little bit challenging to really understand fully what Wardley maps is. So maybe let’s go into like the next few minutes for you to try, I know it’s a bit hard to actually explain in full details, but since you maybe have…
Simon Wardley: Verbally without visual reference.
Henry Suryawirawan: Yeah. But maybe since you have done it many, many times. So if you can just give like a high level overview, what is Wardley maps and how can people use it effectively?
Simon Wardley: First of all, um, my website is swardleymaps.com. I mean there’s lots of other websites. There’s like wardleymaps.com, which I think is Chris Daniel. There’s Learn Wardley Mapping which is Ben. I made this all Creative Commons. Other people have built lots of sites and other things. I mean, you’ll just find my stuff on my Medium page and I have my vibe coded swardleymaps.com, ‘cause I’d like to do lots of experiments with vibe coding. But there’s lots of community stuff out there because it is Creative Commons.
The first challenge, I knew that space had to have meaning in a map, but where does that meaning come from? Uh, it comes from a number of places. One, you normally have an anchor, such as the compass, north. You have position of pieces. This is north, southeast, or west of this. And you have consistency of movement. So if I’m going north, I’m going north. I’m going south, I’m going south. So I had to recreate that not in a territorial space, but in technological, economic and also in these days, political, social, and even legal spaces.
So how do you recreate those characteristics? Well, the first thing I needed was an anchor. And so I looked at my own business and I had various systems diagrams, and I thought, well, what are we gonna anchor around? And I thought, well, we have the business, we have the consumers, we probably have government and legal requirements. So we’ll put those as the anchors we focus. ‘Cause you can have more than one anchor on a map. And I said, well, okay, what’s important? Well, if you take the business or the consumers, you would start by first of all thinking what do they need? So if we do a simple example, we’ll do a tea shop and we’ll ignore legal and the government. So we’ve got a business who wants to make money, which needs selling of tea. And we’ve got consumers who hopefully like to drink tea. So we’ve got a connection here. And business needs revenue needs tea, cups of tea. Consumers need cups of tea. Okay, a cup of tea has needs, it needs a cup. It needs tea. It needs hot water. Hot water has needs, it needs cold water, it needs a kettle. Kettle needs power.
So what you can do is create a chain of needs. So now I’ve got an anchor at the top and a chain of needs. You can think of this as a supply chain. Now when within an organization, the boundary, we normally call that a value chain. When we cross multiple organizations, we call it a supply chain. There’s no difference, okay? So we start off with an anchor and we’ve got a chain of components that make that thing possible, whatever need we’re meeting. So that gives me anchor and position, but it doesn’t give me movement.
Now, movement’s really easy in territorial spaces because if you move from A to B, like you go from, I don’t know, star themes, it’s highly unlikely that themes is going to move itself during your journey. If you took 200 million years or 300 million years, it may well have done okay, but we don’t normally see it in our lifespan. But technology does rapidly change in our lifespan. So in order to describe movement, I ended up having to describe it in terms of change itself. And it turns out there’s a common pattern by which things evolve.
So if we’re talking about physical activities, we normally talk about the genesis of new activities, uh, the first time we ever had radio, custom built examples, Crystal Radio sets, the products. You know, my radio is both in your radio, whatever. And then it becomes more commodity utility like. And that occurs not only against activities, but practices, data, ethical values, they all evolve. We just give slightly different labels to those different characteristics. But we’ll stick with the simple ones. So we’ve got genesis, custom built, product, commodity. So what I can do is I can take my graph, which is my anchor and the chain of needs, and simply ask the question, how evolved are those components? Like hot water, commodity. Cold water, commodity. Kettles, hopefully a commodity. If you are in a business which is custom building kettles for your tea shop, you might be thinking, does that actually make sense, okay?
And so literally by putting things where they are, then we enable others to look at the map. I say, this is my map of my tea shop. And people will go, you’re missing staff, you’re missing payment systems. You are missing whatever it happens to be, okay? And so they can add that challenge to the map as well. But that’s, that’s it in a nutshell. You start with the, by finding out who are the users, the anchors, say business consumers. What are their needs? What are the components involved in meeting those needs? And then you simply ask how evolved those components are. And that is a map.
Henry Suryawirawan: Wow, I think this is my first time hearing about the anchor thing which is quite important, right? So I know that you put the so-called the customers, the users at the top, right, which will then chain into, you know, how the business actually serve those needs, right? So there are many components along the way from the top to the bottom, right? And then for all those components, you kind of like put them into different categories like the genesis, the product, commodities, utilities, and all that, right? So I think that there then you can see the map of how you actually satisfies the value for the customers, right?
[00:23:54] Why Mapping is Not a Common Industry Practice
Henry Suryawirawan: So I think very importantly like in many companies, like this thing doesn’t really exist in such form, right? So maybe what they have is something called product roadmap or something called, I don’t know, like microservice architecture or something else, right?
Simon Wardley: All graphs. All graphs.
Henry Suryawirawan: Yeah. All graphs. Why do you think that this is something that is missing? Because you explain it, the Wardley maps actually allows people to have situational awareness, you know. Awareness not just internally, but also externally, right, towards competitors or change of technologies out there. So why do you think we miss this, uh, links in our day-to-day industry practice?
Simon Wardley: Vikings used to travel by stories, by narrative. They used to tell these fantastic stories, uh, which were basically navigational documents of how to get from A to B. They’re great yarns, great tales. And then at some point, somebody came along and came up with the idea of sunstones and started creating perhaps. And they turned out to be quite useful.
And, you know, maps actually have a much longer history in that. But it took a long, long, long, long time before we had the sort of modern day maps that we see. So why haven’t we done maps before? Why have we just used graphs and stories? Because that’s how we used to do it. That’s all I can say. Um, and that’s how I used to do it. And, I, you know, I thought, you know, this mapping was what you learned at various business schools. I created a way of mapping. And others have found it quite useful. So, yeah, I mean it has to start somewhere, I suppose and a bunch of accidents ended up with me doing something.
I’m sure somebody else… Eliot Sivowitch, one of my favorite quotes from Eliot Sivowitch is whenever you discover who was first, the more you look, you find someone else who was more first. And the harder you look, you find that the first person was actually third. So I love Eliot Sivowitch, the quote, so I’m sure there were people doing mapping of technological, economic, social, and political spaces, maps, not graphs. I just haven’t come across them yet. It has to start somewhere. I’m sure it started before me. It’s just I haven’t met them yet. Uh, but I will eventually.
[00:26:23] Climatic Patterns, Doctrines, and Gameplay
Henry Suryawirawan: Right. So I think another aspect of your Wardley mapping, apart from these maps so-called, right? So there are other three things like the very, very important as part of the maps, right, which is what you call the climatic pattern, uh, the doctrines, the principles, and also the gameplay. So how do you explain these three important things and why do you think they are very important, yeah?
Simon Wardley: So what happens is once you start mapping out a landscape, you start learning that things move on that landscape. Technological, economic, social, political spaces, they move quite rapidly because of competition. So the first thing, lesson you learn, the pattern you learn is everything evolves. If there is supply and demand competition, things are moving. And then you start learning patterns, like we have inertia to change, ‘cause of pre-existing capital. So when we shift certain product to commodity, we often have resistance because of pre-existing business models.
Classic example is Netflix x Blockbuster. So who was first with uh, a website? Blockbuster. First with video ordering online? Blockbuster. First with video streaming experiments? Blockbuster. First again, bankrupt? Blockbuster. So Blockbuster out innovated everybody, okay. They were a highly innovative company. Netflix was a DVD mail order company, alright? So sending things in the post. Blockbuster, highly innovative. But the problem is their business model depended upon physical stores and late fees. And so that created an inertia to change. So there’s that. You learn this pattern that we have inertia.
And then you start building up more and more patterns and then you start realizing. And so what we, what I often do is with the maps, I use maps before we do stuff. So we map it out and we use it to challenge what we’re gonna do. Then we go and do stuff and then afterwards we use maps to learn patterns. And so we start building up more and more of these patterns. And then you notice patterns about the patterns. So the first one you notice is that some of the patterns are gonna happen regardless of what you do. So they’re driven by competition itself. So those I call climatic patterns or rules of the game. So these are gonna happen on the map, regardless of what you do, things are gonna evolve. You are gonna have inertia as they evolve, they will create new practices. Fine.
And then you have a whole bunch of patterns you’ve got choice over. Okay, the first set of them are ones which we call, I call doctrine principles, which are universally useful as far as I can tell. So these are things like focusing on the user needs. That’s a good principle. You need to do it for mapping. It turns out it’s useful everywhere. Focusing on the users, you need to do that before you’ve done the user needs. Understanding the supply chain, turns out that’s a good thing to do as well. Understanding how evolved the components are, how we treat a custom-built kettle and a kettle are two different things. So you end up with a whole bunch of patterns there.
Now there’s about 30 of the climatic patterns and there’s about 40 of the doctrine principles. And then you get onto a big set of patterns, which are all context specific. So they change the map, okay, if you use them in the right context. And these we call gameplay, or I call gameplay. And there’s about well over 100 of these. So things like open source, fantastic for accelerating the industrialization of something. Fear, uncertainty, and doubt, great for exploiting other people’s inertia. And so once you have a map, you can, you apply the climatic patterns to see how things are changing. You put your gameplay into it. So you choose those patterns as well. And then you organize and structure yourself around it.
So you learn that of the patterns there are three basic types: climatic, principles and doctrine, or doctrine, and gameplay. And climatic will happen regardless of what you do. The doctrine principles are universally useful. You don’t have to do them, you don’t have to focus on user needs, you don’t have to understand the supply chain, but it’s quite good idea to do so. And then the last lot are contextual gameplay. So they are powerful when applied in the right places, like open source. Great if implied in the right places, not so good if applied in the wrong places.
[00:30:51] Understanding Disruption by Using a Map
Henry Suryawirawan: Right. I think I find these, uh, so-called categories are really, really, important, right? Especially if the first thing is something that you cannot control, right? So if you don’t know what those are, chances are you’ll be like Blockbuster, right? Or maybe like Nokia. We know all those stories where they stay true to what they are good, they were good at, but something changes outside externally. Maybe new technology, maybe new disruptions and all that. And because they were late in the game, right? The climatic pattern actually kind of like, uh, make them go bust.
Simon Wardley: So disruption’s really interesting, ‘cause one of the things you learn from mapping is there are at least two different forms of disruption. So Christensen who came up with disruptive innovation, used to get into this whole argument with Lepore. Lepore would say that Disruption Theory is not predictive and gave all these examples of why it was not predictive. And Christensen say it was predictive and here’s my examples. And then Christensen would say that Nokia, you picked Nokia would beat Apple. And then of course, what happened, Apple did. So the problem is, is there are actually two different forms of disruption.
So when you map it out, what you learn are product to product substitution, for example, can be highly disruptive, but it’s highly unpredictable. Whereas product to utility or product commodity disruption is highly predictable. You can say a lot about what’s gonna happen, when it’s gonna happen, what practice is gonna change, etc. So when we talk about Disruption Theory, there are at least two different forms. But if you can’t map it, you can’t see those two different forms. And so you end up with these big figures just having this argument over, oh, it’s just predictable, it’s not predictable, etc. And the answer is, they’re both right, both wrong. I mean, it’s both. There are at least two different forms, but if you can’t see the map, you totally miss that.
Henry Suryawirawan: Right. Thanks for highlighting that, right? So I’m sure like when people hear about this, they’ll be intrigued, right? How can they actually apply Wardley maps maybe in their day-to-day job or even organizational strategy, right? So I think I would definitely recommend people to go to, you know, Wardley maps, Simon’s, uh, resources, right? And I’m sure when you go to some references in some books, you will also be able to learn a few things, how they could be applied to technological, you know, architecture mostly, and organizational principles, right?
[00:33:17] Navigating the Recent AI Disruption
Henry Suryawirawan: So one thing when when we talk about disruptions these days, definitely, one big true disruption is the AI, right? And I know that you are also very active. I can see from your LinkedIn posts and all that, you are actually experimenting a lot with AI. So tell us first of all, how can we apply Wardley maps to the AI thing that is happening in the industry these days?
Simon Wardley: Oh my gosh, gosh, gosh, that’s such a big question. I mean, uh, there’s the trivial stuff. And the trivial stuff is that large language models that represent industrialization of machine cognition, that creates a whole new set of practices which appear. So you get things like, uh, vibe coding, prompt engineering. It’s gonna allow for new activities. So what you can take all the basic economic patterns, the climatic patterns, and go, you know, AI is a big field, but within it, many different components. Some of those are industrializing. As a result, you get co-evolution of practice, you’re gonna have inertia. You’ll get impacts like Jevons Paradox.
If we look back at cloud in 2007, 2008, loads of CIOs running around saying, oh, we can get rid of our sys admins because of cloud. There’s us going, well, if you map it out, uh, what’s happening is, yes, computes’ industrializing, but we’re gonna see a new set of practices, dunno what it’s gonna be called yet. Uh, your IT estates are gonna expand massively because you’re gonna be able to do new activities, which actually will industrialize. You’re not gonna save any money, ‘cause you’re in competition with others. And so all those people you want to find, you really need just to retrain. Otherwise you’re gonna have to hire them back as really expensive something else.
And then you get these CEOs, CIOs go, oh, get rid of our sys admins. And a few years later, they’re desperately running around trying to rehire DevOps engineers at inflated rates and all the rest of it, which were just their sys admins retrained. Well, you’re getting the same rubbish. I listen to these CEOs, CTOs, we could get rid of all our engineers. You’re having just no hope. I mean, Jevons Paradox is gonna come and hit you so hard in the same way. You know, your IT estate went from 2,000 servers to 200,000 virtual servers. Your lines of code are gonna go from a hundred million lines of code to 30 billion, 50 billion lines of code. And a lot of this stuff is gonna be AI generated and you’re gonna get VPs vibe coding, but you’re gonna have to review this stuff. And so you’re gonna have to have people really skilled in this space. All your software engineers, you’re gonna need them retrained in this space.
And I find it fairly delusional, um, but it’s normal. We had had the same nonsense in cloud and the same nonsense. But you can go back to 19, oh, what was it? 1968, Donald Michael’s, uh, Cybernation: The Silent Conquest - how computing was gonna make everybody unemployed - all the way to a 1896, new Catechism of Electricity, Nehemiah Hawkins. Sorry, I’m just getting off the top of my head. You know, electricity would make everybody unemployed. Uh, I mean, it’s desperate stuff. It’s great for vendors, as in, you know, VCs and AI vendors for people to believe this. But it, it is not. I mean, people are gonna make a lot of mistakes.
So one of the biggest questions that you have to answer today is where is human decision making valuable in our landscape, okay? Because some aspects you aren’t gonna vibe code. And by vibe code, what we mean by that is what we talked about back in 2018, which is conversational programming. So you are gonna have a conversation with the system and it’s gonna build you something, okay? And you are never gonna look at the code. That’s what we mean. Uh, you know, lots of discussions about this seven years ago. Wonderful examples built by Aleksandar Simovic demonstrated at AWS Re:Invent. The latest term is vibe coding, whatever. We love changing words. Um, so you’re gonna have that. You are also gonna have software engineering plus AI. This is where we get the AI to build stuff and people review, and then you’re gonna have a whole bunch of stuff which you’re gonna outsource to other parties. So how do you decide where that’s going to be? Well, one of the beauties about having a map, it’s a bit like practices and methodologies. Once we have a map of the space, it becomes much easier to see where I should use things like Six Sigma, where I should use lean, where I should use agile, extreme programming. It becomes much easier to look at the map and say, you know, this is the area I should outsource. It’s all commodity. And, uh, you know, it’s uh, industrialized utility. Stuff in the middle, I’m probably gonna use software engineering plus AI. Stuff on the left hand side, I’m gonna use vibe coding. So of course, if you don’t map your landscape, much, much more difficult to actually have those conversations.
Um, you’re gonna see lots of errors and mistakes and people do things like get rid of their software engineering departments and then hiring them back later as, I dunno, AI engineers or some other word. It’d be, uh, you know, that sort of mess. So there are some basic patterns. You have no choice. It’s not a question of, uh, if, it’s a question of when. You are gonna get a change of practices, you’re gonna have inertia, explosion of new activities, Jevons Paradox.
So there’s a whole list you can just go through. And the interesting stuff is the stuff about architecture and where does human decision making actually make. Um, our biggest problem in things like architecture is that, um, the architecture is actually in the code. We, we draw these wonderful diagrams and think that is architecture, um, that, that, that’s more like a prompt. Uh, the architecture is in the code. That is the code is to actually make the real decision. And of course, when you hand that over to an AI, it doesn’t matter what your, your diagram is, um, the real decision’s gonna be made by the AI and we might be okay with that in, in certain prototypes and things. In other areas, it’s gonna take us quite a bit of time before we trust that stuff. And that’s where you are more software engineering than AI. Does that answer your question?
[00:39:37] A Leader’s Guide to Adopting AI
Henry Suryawirawan: Yeah, sort of. I think it’s very interesting, right? Like because when people talk about AI, there are some who are like very positive, you know, about the efficiency, the amount of capabilities that it can introduce, right? And there are people that are more pessimistic. They think, you know, jobs’ gonna vanish, you know, software engineers won’t be needed anymore.
And, you know, you kind of like also explain it from the Jevons Paradox. And it has been, you know, we have seen it in the history before throughout, you know, like new technologies, right? And obviously all these, uh, come into play when organizations need to make decision, right? And I think what’s so important for you as well, you explain that because many organizations think hard of how to apply and adopt AI in their, you know, organizations. But many actually don’t have real strategy of how to do that apart from maybe buying from vendors, apply it and see how people, you know, use the AI to actually improve their productivity. But actually, if you use something like Wardley maps, you can kind of like see, okay, this part here, maybe this component or maybe these practices inside the organization is something that you can vibe code, and something like that, right?
What would you advise leaders to do now? Because it’s quite real. You know, AI trend is just gonna be advancing more and more, right? There are foundational models which are getting more capable, lots more tokens, a lot of more compute, you know? Yeah, what would be your advice for leaders now if, let’s say practically they need to adopt AI and they wanna start with Wardley mapping?
Simon Wardley: Well, you need to adopt AI, because the market expects you to adopt AI. Not that it’s gonna create… There’s very questionable results in terms of performance improvements. And it’s not a question, as I say, it’s not a question of if, it is a question of when. So number one thing, you will declare to the market that you are going full AI, because that’s what the market wants to hear. So that’s number one thing you do, declare, we are going full AI, okay? Number two, start getting your people to learn how to use AI, okay? Focus more on the prototype stages. There may be… If you don’t have maps, start mapping out your landscape and working out where you need human decision making in the loop, okay? Don’t get sucked into that, oh, we can get rid of our software engineers. Now, VP can just vide cobe something through a few prompts and we’ll put it into production and magically it will all work. Unless you’ve got very, very good lawyers. I mean, very, very good lawyers. ‘Cause you’re heading towards a world of pain, okay? I would take the position of we’re taking our people, we’re gonna give them some time to learn about this stuff, we’re gonna try and understand our environment. And when we understand our environment, we can see more easily where it needs to be applied.
Of course, we’re gonna tell the market, yeah, we’re a hundred percent AI. And oh yeah, it’s gonna be totally AI, and everything, and AI is gonna do everything, and just given that. But internally, start to understand your landscape, see where it can be applied, get your people, so they’re using the tools and getting familiar with the environment. That’s what I would be immediately doing, okay?
[00:42:49] Turning Coding From a Craft Into Engineering
Simon Wardley: There’s another side to this as well, which is, uh, the engineering question. You had Tudor on this. Uh, I’m sure there’s a wonderful session with Tudor. Uh, I do some work with Tudor on Rewilding Software Engineering. Part of the problem with software engineering is there’s two sides. There’s development and testing, okay? Testing is an engineering subject. Development is a craft. And there’s a whole reason, a little bit too complex to explain on this, and I can use maps to explain why it is, but it all boils down to tool sets. So, I love making soups, so I love cooking, and when I make a soup, uh, I like a smooth soup. I use a kitchen blender. That’s a tool, okay? Now, if I’m building a deep mine shaft, I’m not gonna use a kitchen blender to do that. I am gonna use another set of tools, mining tools for building a deep mine shaft.
There’s only one engineering subject which sort of believes it can use the same tool everywhere. And that’s software. So it doesn’t matter whether, you know, I’m building an electronic healthcare record system or online gambling site, I can use the same tools. We should be using highly contextual tools in the same way that every other engineering subject and there are reasons why this is. Now, currently, we’ve got all these vendors who’ve been trying to flog us the same tools and now flogging us the same tools with added AI. It’s a bit like somebody going and saying kitchen blender. Oh, you wanna build a deep mine shaft? Here’s a kitchen blender with a robot. I’m still going, no, actually I really want deep mining tools. Not a, not kitchen blender with a robot. So, um, we are being done a disservice at the moment in the field of software by the tool vendors. And we really need to get back to…
And you see it with the positive way in testing. So if you think about testing, whenever we have a problem, what we do is we start off by building a small test which the problem fails, you know, sort of system fails. So we build some code. And then it works. And so this is a test driven development. And what happens is through that process, we explore the space by building lots of small tests, and we build up a test suite. And we might have 50 to 100,000 tests, which are all for that contextual problem, okay? They’re all small tools. Inputs, outputs, you know, traffic lights. They’re just tiny tools. But we build up a highly contextual test suite for that space. No one turns up to us and says, here’s the ACME hundred thousand test suite. Just run it against your application. You’d look at them like, what are you talking about? My application is nothing like your application. Of course, my tests have to be contextual. Well, it’s exactly the same with any other toolset. But we don’t do this. We instead have like the ACME test suite, but it’s the ACME tool as we have a bunch of standard tool vendors trying to flog us the idea that tool building is hard and the best way you can do it is just using our standard tools. It’s nonsense. Tool building is easy.
And the interesting thing, if you take that approach, there is a bit where AI truly then starts to shine, because by building tools for the problem, we’ve seen massive improvements. Many, many orders of magnitude improvements in the speed of development, you know. On one case, you’ll have 600x, which is like a ridiculous figure. But when you start doing that, the AI really shines because it can be very, very useful in helping you build new micro tools, and also coming up with new hypothesis to test as well. That sort of combination is really, really interesting to me. Whereas I think the combination of, oh, we just write a prompt and magically, you know, it will write some code. It’s better than people just writing code as a craft. But I think what we really need to do is turn development into an engineering subject and then apply AI to it. And I think that’s where the real power is. And we’re not there yet. I mean, testing as an engineering subject. Development is unfortunately still a craft. And unfortunately, we’ve got tool vendors kind of flog to you kitchen blenders with robots which isn’t helpful. But there we are.
Henry Suryawirawan: Yeah. So I think the thing that you mentioned just now referencing the work that you did with Tudor, right, is called Moldable Development as well, right? So…
Simon Wardley: Well, he did Moldable Development. I am helping write Rewilding Software Development, which is a book which we’ve done five chapters of. We will, once… I’m traveling at the moment. Once we get back where do chapter six and continue from then on.
Henry Suryawirawan: Yeah, so when I read that Rewilding Software Engineering, uh, you know, the Medium post, right? I think it’s very, very intriguing and insightful at the same time, right? So I imagine that if you could build contextual tools that actually can analyze code. And especially these days with AI, the amount of code that gets produced will be a lot. And sometimes you won’t be able to rationalize what is happening inside your code base.
Simon Wardley: There’s billions of lines of code for every organization.
[00:48:05] Simon’s AI & Vibe Coding Experiments
Henry Suryawirawan: Right. And plus people love to build distributed architecture, right? A lot of services. So I think I’m sure it, it’s just mind blowing how you can actually understand how the systems actually work. So if we have these kind of tools, contextual tools, I think it will be helpful. And I think these days when you mention vendors start to kind of like sell you the same tools, either IDE or maybe these days, it’s the AI chat, you know, agentic AI interface, right? And you have a lot of experiments lately of doing vibe coding. So maybe tell us, share like what are some interesting things that you experience.
Simon Wardley: I love, love vibe coding. So my actual website, uh, because I said the, this community has built all sorts of. My one is swardleymaps.com, okay? So that’s where… and it’s entirely vibe coded. And I’ve got a games on there as well. One called Technomic Empire, which is like a… And I totally vibe coded, as in I haven’t looked at the code and all the rest of it as well. But it’s all stuck within a browser. And, you know, there’s various restrictions on it. And I love vibe coding. It’s great fun. But I do all these experiments and I come up against so many horrors.
I did one where I was building a particular system. And I actually thought, you know, this was ages ago. I would get the AI to build me a testing engine within the system. Which it did. And so I then said, right, so every new functionality, build a test and add it. And it was doing this and it was great. So it was building new functionality, building more tests. I could run the testing engine, it would say everything passed. Every now and then things would fail. Copy the logs. Put them in. And the AI would fix it. It was marvelous. And I was like, motoring. And then after a bit of time, I started to think something was wrong. And I resisted but eventually I went and looked at the code.
Now what the, um, AI had done is it hadn’t built me a testing engine. It built me a simulation of a testing engine. So every test was, actually, every time it wrote a test for a function. What that function, that test was was, if this function failed, what’s a likely error message for it? What’s the probability of that occurring? So there was zero testing. Zero. It was entirely simulation. And so there I was running this thing and it would come up with these logs and everything, ah, copy that put them in there, thinking it was doing something. Or in order to fix the problem, it just had to update the version number. Because then it recalculated all the probabilities. And it was just like, oh, you hoodwinked me. I mean, there was… You get loads of examples of this. Particularly you start to realize that these system are stochastic parrots. They don’t understand the thing that they’re doing.
So I do lots of, um, different areas. Uh, I love getting these AIs to write me scientific papers. So AIs are, because they’re trained on data and large amounts of data, they’re very good at open-ended questions, i.e. ones you can’t check. And they’re good at closed ended questions when there’s a lot of data. But of course, if there’s no data, then they try to be helpful and they try to sound authoritative, but so they write something. So I get them to do things like mutagenic benefits of belladonna. Belladonna’s a poison, okay? And again, they, it writes a paper. And then what I will do is hand this over to AIs and other AI and get it to write more papers based upon this. And I use this to eventually create human recipes, dietary recipes for impossible things.
And so I, so I love Manus. Manus is my favorite. It comes up with this, you know, a 100% backed by science dietary recipe to grow dragon wings just by eating nuts and fruits. It’s just total, you know, it’s 167 pages, it is 15 papers or whatever. It’s just total garbage. And the reason why I like Manus is when you point the glaring holes at it, it eventually admits that it, you know, it’s just making things up. It doesn’t know. It doesn’t. And that it is, it’s prompted. It’s supposed to be sound authoritive, uh, you know, authoritative.
So I love them. I love them as tools. They’re great fun to use. You have to be very, very, very careful with them. I think it’s extremely dangerous when people think, particularly in software, I can just vibe code something without looking at the code and then put that into production, and it’ll be great. And you get carried away with agents as well, because you have other agents testing and all the rest. And you end up then having this idea that I can give an architectural diagram, which is just a prompt. It’s going to build the thing which matches that. Probably isn’t. And then I’m gonna have something else which tests that it’s built the things that match. Well, that’s hallucinating as well. You get into all sorts of problems.
So great fun. Just remember they are stochastic parrots. That’s the word that, uh, was coined in the industry. And they don’t understand. There is zero understanding of what they are doing. And when it comes to hallucinations, just remember. Everything an AI does is a hallucination. A hundred percent of the time it hallucinates. It’s just that a lot of the time, the hallucination is right, okay? But they are all hallucinations. It is just a lot of the time it’s right. And so just be very careful. Great fun though.
Henry Suryawirawan: I like, I like that, right? So they hallucinated most of the time now is, right?
Simon Wardley: All the time! 100%. It’s all hallucinations. Uh, and there’s no understanding. But what happens is those hallucinations a lot of the time are right.
So we have this idea that they think in terms of, you know, they understand deeply. Now the question you have to ask is then gets onto a sort of question of humans are, do not humans operate in the same way? We have little to no understanding of how humans operate. I mean I did some work in neuropsychology about 30 years ago and friends of mine are still in that industry. And back then, we thought we understood about how 5% of the brain worked and I caught up with them, you know, about six months ago, and they’re still at universities and blah, uh, professors of this and I, we joked over dinner about that. And I said, where do you think we are now? And they said, well, after the last 30 years, I think we’ve got to 3%. So we now learn less than we did. They’ve learned how much more there is to know.
And so we have very, very poor understandings, uh, of how the human mind works. We make terrible assumptions about AIs and trying to associate with the humans. And hallucinations. One of the things by hallucinations, I mean, typically the industry use it in a term of when it’s got it wrong. I think it’s far better to think of it’s hallucinating all the time. But, you know, some of the time, a lot of the time those hallucinations are right. ‘Cause it’s not thinking. It’s not understanding in the way that we think understanding is, even though we don’t actually understand what understanding is, okay, so that’s the tricky place we’re in.
[00:55:28] The Importance of Critical Thinking for Software Engineers
Henry Suryawirawan: Yeah. So I, at the end of the day, LLM works by probabilistic thing, right? So definitely, you know, if you train based on a good data set, definitely, the hallucination most likely can be true, right? Can be right. But if they’re not well trained for some kind of patterns, right? Like what you said, building test cases. So if we don’t see the code, definitely, it’s gonna be tricky, right? If you don’t know what they’re doing internally, right? It could be just hard coding certain values, uh, that will just pass every time we run it, right?
And you explained it, uh, like there are three stages when you do, you know, lots of AI usage in your software development. It could be vibe coding and all that. The first time, obviously, it’s great, right? It can produce so many code just by prompting, right? And you call this, you know, the euphoria of instant code, like you can suddenly produce massive amount of code. But then there will be a point in time where you feel, okay, something is not right. So what you mentioned is about crash of integration when you start seeing patterns that breaking or even some people got, uh, bugs or security issues. And the last one is definitely the desperation of production that you mentioned. Like people maybe got their database deleted. I think recently we see some posts of that, right? Or maybe they did some, you know, very catastrophic mistakes. So I think it’s definitely very dangerous if you just purely rely on AI to produce something without reviewing. And hence it’s very critical for human to be in the loop, right?
And people these days mention about critical thinking. So maybe from your point of view, what is critical thinking that software engineers need to think about? You know, like what is critical thinking in practice?
Simon Wardley: So, oh gosh. Gosh. Oh, that’s such a loaded question. That’s such a hard question as well. So, one of the things I, I like mapping out sectors, like mapping out defense, healthcare, and all the rest of it. And so one of the areas I mapped out, back in 2022, was healthcare. So I took all these healthcare professionals and we mapped out healthcare from multiple perspectives, uh, sorry, education from multiple perspective. The reason why you do multiple perspectives is if you imagine no one had mapped Paris and you sent one group to map Paris and they came back and you said, what was the most important thing in Paris? They might say Pierre’s Pizza Parlor. Cause they mapped it from a perspective of eating nice pizzas. So what you want to do is map from multiple perspectives and then you aggregate across them all. And then you discover things like the Eiffel Tower matters.
So this is why I deal with all these professors of education and everything else. Got them to map out education. Then we mapped it out from all these different perspectives and then we aggregate it to find what mattered. And now the question is, your final focus is, are you focusing on market benefit or are you focusing on social benefit? So if you map out education from market benefit, then it’s things like use of AI in classrooms, digital access, which are things the market sells and it loves doing. But if you map it out from a perspective of social benefit, where you should be investing is like lifelong learning and critical thinking.
Now critical thinking isn’t even a course in most educational establishment, because most education despite the best efforts of teachers, most education systems seem to be set up to produce useful economic units. To produce people for the workplace. So critical thinking may not be high up, yeah. Having AI skills maybe higher up, but being critical thinking may not be high up on that list. We don’t teach it in schools, despite the best effort of teachers to sneak it in there. Uh, it’s not a specific course. We’re pretty poor. Certainly, in the Western sphere. And there’s some really interesting stuff about critical thinking and obviously use of AI coming out of China, which is just like amazing. Lots of interesting stuff in that space.
So is it important to think about what we’re doing to review what we’re doing to ask questions of what we’re doing? Yes. Do I think this needs to be a, a specific subject that is taught? I think absolutely. I mean, we are increasingly living in a world of misinformation where we were increasingly rely, potentially relying on systems that we don’t understand to do things for us as well without that process of. I mean, there was a case example about or two weeks ago that somebody, you know, I talk, talk about my diets to grow dragon wings ‘cause I get these. Somebody on ChatGPT had asked questions about, you know, with healthcare. I think it was ChatGPT. And it come up with a diet for them to exclude sodium. And they ended up with a whole bunch of conditions, ‘cause they’d replace it with, uh, sodium chloride. They replaced it with something else, which was, you know, they’ve ended up with quite severe conditions. Why? Because they believe that the stuff is supposed to sound as though it comes with authority understanding. But there is none and they can be quite dangerous, which is why we have guardrails most of the time. Not that it fixes the underlying system. We just hide better by saying I can’t give you an answer on this because that could be dangerous. Doesn’t mean a thing underneath it. It’s still not that mess that it is.
So yes, critical thinking is important. And there’s a whole bunch of issues actually around values being embedded in these large language models. I mean, um, the problem is with the AI space, we’re getting the tools’ changing. The language’s changing. So we’re moving away from a much more declarative to much more conversational languages, say prompts. Uh, the medium is changing, so moving away from text to sort of much more seeding with images, with diagrams, etc. The problem is when… The language medium and tools are how you reason about a world. So if you imagine an equivalent example would be the tool would be the printed press. Language would be the written word. Medium would be paper. If that was controlled by one group of people that have immense power over your lives. And unfortunately, this is what we’re starting to see. And the way to challenge against that is through critical thinking, having people able to challenge and by having openness.
And by openness, I mean open all the way down. Not just open models. All the way down to the training data. Everything has to be open. Again, interesting work in China and France in that space. Cool. Not so much in the West. But those are the two ways of defend, uh, openness and critical thinking against the formation of new theocracies and new power structures. So yes, it’s important. What’s the best way of teaching critical thinking? You’d have to ask an educational specialist.
Henry Suryawirawan: Right. Yeah. I find especially people if they start to rely a lot on AI, right? Definitely, it’s kind of like narrowed down their possibilities of thinking, right? Because AI will just give you authoritative answers. So if you never enrich yourself with more knowledge, it could be literature or it could be books. It could be teachers, uh, it could be mentors and all that, right? So definitely, you can be kind of like biased to think that whatever AI gives you is actually the right answer, right? So, and hence your critical thinking probably is gone by then.
Simon Wardley: So there’ve been some interesting papers in that, which is, uh, talks about the decline in people’s capabilities through exposure to AIs. But I’m just finding it now. The Machine Stops by E.M. Forster. It’s a wonderful book from 1909. I would totally recommend everybody. Cause it’s available freely online as well. To go get a copy, it’s not very long, of E.M. Forster’s The Machine Stops. And just read it all the way through. Cause I think it’s incredibly relevant for today.
[01:03:49] Navigating Career Anxiety Due to AI Fear
Henry Suryawirawan: Right. So for people who are feeling scared, right, because definitely they feel some existential crisis that could happen for personal, right? Do you think people should do something differently, especially for, you know, people who have been in the industry for long and they think, you know, all those of software engineers practice and knowledge that they had in the past may not be relevant anymore. Do you think that we can apply something like personal Wardley map as well on how we can actually start seeing the landscape? Yeah.
Simon Wardley: People do and have done for some, some time. I mean, so large language models in these AI systems, again, as we said, it’s not a question of if, it’s when. So they’re going to spread, we’re gonna learn new practices, we’re gonna learn better ways of using agentic systems. But it’s also gonna create new opportunities, new jobs.
So first of all, the amount of code. You’re gonna have this job of trying to review or keep control off the code base. ‘Cause the code bases are gonna explode from, I dunno what it is for a company these days, 30 million lines of code, they’re gonna be hundreds of billions of lines of code. And the problem is with the AI systems, you will get to the point that it all, something’s gone wrong. At some point, somebody has to go and have a look. And so we’re gonna need lots of skills in those areas. There’s gonna be new jobs, completely new jobs. I did a talk about this back in 2014. So we’re gonna have things like machine psychologists. So, you know, um, some of these systems are gonna create just their own strange behavior. And we’re gonna have to learn how to cope with these networks of machines thing. And then we’re gonna create entirely new roles that we’ve never thought of.
So if you go back to Cyber Nation: The Silent Conquest, 1986. You know, you’d go up to, oh, computing is gonna get rid of everybody’s job. And, or 1896 New Catechisms of Electricity. Electricity’s gonna get rid of everybody’s job. Well, in the electricity case, you could have said, well, don’t worry, you can be a radio personality to which people would go, what’s radio? Um, well, it hasn’t been invented yet, but as electricity develops, we’ll have radio and we cannot see the jobs coming. Or Cyber Nation, you know, you’re going up to somebody say, oh, you, you’ll lose your, you know, your, uh, actuarial jobs, which disappeared, or accounting jobs or whatever it happened to be. You could say you could be a social media consultant and people go, what’s social media? Well, that hasn’t appeared yet. So we’re gonna get a whole range of new activities, new jobs, new things we’ve never thought of, things like machine psychologists and onwards. We’re gonna find an explosion of activities anyway, an explosion of code, and we’ve gotta review and understand that as well. It’s really difficult. We love making these doom lead and predictions.
I think one of the ones we’ve got to be super, super careful about is use of large language models in policy areas. Mostly because they have extreme bias towards market benefit, not social benefit. And at some point, you know, nation state societies have got to really make that decision of what matters more, um, market benefit, social benefit. Take healthcare. You know, if you are focused on social benefit, you are all about patient reported outcome measures, um, improving health outcomes based upon those. But we don’t do that. If you are focused on market benefit, you’re all about preventive healthcare and wellbeing. ’ Cause the market sells loads of that stuff, even though we don’t have a good idea of what a healthy person is, because we don’t do the patient reported outcome measures. We do what’s called ClinROs. So most of our healthcare systems are actually sick care systems. We’re good at treating symptoms, not making people healthy. So at some point, you’ve gotta focus on, when we talk about growth, do we mean growth of the market, preventive healthcare, wellbeing, great market opportunity? Or do we actually mean growth of society as in making people healthcare healthier? So like patient reported outcome measures. You know, monitoring, gets that sort of data. Um, we have to make those decisions in different places as well across all different industries.
So I don’t think it’s an AI specific problem. It’s been a long running problem. Maybe AI will force us to make these changes, but the danger you’ve got with the AI in policy is, ‘cause they’re trained on market data, they have a massive bias towards market benefit, not so much social benefits. You have to be really, really careful just in that one space.
Henry Suryawirawan: Yeah, so I think that’s a very good reminder, right? I listened to a podcast recently, right? So the person is saying that there might be a dystopian thing that could happen in the short term. But turns out, you know, like it will lead to some kind of like, you know, bright future for humans because people will then realize, oh, maybe this thing won’t work, you know, for humanity, right? The social benefits, like you mentioned, right? And hence we probably find a new ways of using AI to actually improve humanity rather than chasing all those market benefits and all that, right?
Simon Wardley: Agreed.
[01:08:56] Tech Lead Wisdom
Henry Suryawirawan: So Simon, yeah, it’s been a pleasure, you know, hearing insights from you, right? So I only have one last question that I’d like to ask you. I call this the three technical leadership wisdom. It’s like a tradition in my podcast to ask my guests to share. So maybe if you can share, you know, some kind of advice that you want to give to us, what would that be?
Simon Wardley: So my one piece of advice is, look before you leap. I’m a great believer in looking at the space before seeing what choices you have and making a decision. And I, it doesn’t matter whether I’m talking about social policy or, uh, economic policy or technology within a space, and this is why maps is so important. So my one piece of advice is, look before you leap.
Henry Suryawirawan: Right. I really love that. So thank you so much again for this conversation today. So I think I’m really looking forward to seeing how AI, you know, play out in the next, I dunno, few months, few years, right? And hopefully still, like critical thinking will be a big aspect of our humanity. And maybe more Wardley maps being used to actually apply those critical thinking, uh, inside our job and inside our day-to-day life. So thanks again, Simon for today’s conversation.
Simon Wardley: Absolute pleasure! And thank you for having me.
– End –
