#237 - Tackling AI and Modern Complexity with Deming's System of Profound Knowledge - John Willis

 

   

“You have to figure out the doors that you can open with those keys. And this is where people get frustrated with Deming. We’re so used to this instant management gratification. Deming is putting what you need up here in your head. You gotta go figure it out.”

Can decades-old management philosophy actually help us tackle AI’s biggest challenges?

In this episode, John Willis, a foundational figure in the DevOps movement and co-author of the DevOps Handbook, takes us through Dr. W. Edwards Deming’s System of Profound Knowledge and its surprising relevance to today’s most pressing challenges. John reveals how Deming’s four-lens framework—theory of knowledge, understanding variation, psychology, and systems thinking—provides a practical approach to managing complexity.

The conversation moves beyond theoretical management principles into real-world applications, including incident management mistakes that have killed people, the polymorphic nature of AI agents, and why most organizations are getting AI adoption dangerously wrong.

Key topics discussed:

  • Deming’s System of Profound Knowledge and 14 Points of Management—what they actually mean for modern organizations
  • How Deming influenced Toyota, DevOps, Lean, and Agile (and why the story is more nuanced than most people think)
  • The dangers of polymorphic agentic AI and what happens when quantum computing enters the picture
  • A practical framework for managing Shadow AI in your organization (learning from the cloud computing era)
  • Why incidents are “unplanned investments” and the fatal cost of dismissing P3 alerts
  • Treating AI as “alien cognition” rather than human-like intelligence
  • The missing piece in AI conversations: understanding the philosophy of AI, not just the technology

Timestamps:

  • (00:02:27) Career Turning Points
  • (00:05:31) Why Writing a Book About Deming
  • (00:12:53) Deming’s Influence on Toyota Production System
  • (00:19:31) Deming’s System of Profound Knowledge
  • (00:28:12) The Importance of Systems Thinking in Complex Tech Organizations
  • (00:31:43) Deming’s 14 Points of Management
  • (00:44:17) The Impact of AI Through the Lens of Deming’s Profound Knowledge
  • (00:49:56) The Danger of Polymorphic Agentic AI Processes
  • (00:53:12) The Challenges of Getting to Understand AI Decisions
  • (00:55:43) A Leader’s Guide to Practical AI Implementation
  • (01:05:03) 3 Tech Lead Wisdom

_____

John Willis’ Bio
John Willis is a prolific author and a foundational figure in the DevOps movement, co-authoring the seminal The DevOps Handbook. With over 45 years of experience in IT, his work has been central to shaping modern IT operations and strategy. He is also the author of Deming’s Journey to Profound Knowledge and Rebels of Reason, which explores the history leading to modern AI.

John is a passionate mentor, a self-described “maniacal learner”, and a deep researcher into systems thinking, management theory, and the philosophical implications of new technologies like AI and quantum computing. He actively shares his insights through his “Dear CIO” newsletter (aicio.ai) and newsletters on LinkedIn covering Deming, AI, and Quantum.

Follow John:

Mentions & Links:

 

Our Sponsor - Tech Lead Journal Shop
Are you looking for some cool new swag?

Tech Lead Journal now offers swag that you can purchase online. Each item is printed on demand based on your preference and will be delivered safely to you anywhere in the world where shipping is available.

Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once it arrives.

 

Like this episode?
Follow @techleadjournal on LinkedIn, Twitter, Instagram.
Buy me a coffee or become a patron.

 

Quotes

Career Turning Points

  • The simple one is: work hard, have fun. This is what’s wrong with modern-day education. For some reason, they thought working hard couldn’t be fun.

  • Be a boundless advisor. You get way more than you give.

  • The one thing I never did was be calculating when somebody called me for advice, as long as I had the time. And the truth is, most people have time; everybody can basically spare 15 or 30 minutes. So just help. Because boy, it comes back. More often than not, when you need somebody that you helped, whether for career advice or anything else, they’ll drop everything for you.

  • Last but not least, your word is your bond. I want to go to my grave with people in this world saying, John never did me dirty. He never lied to me. We’re not perfect; there are gray areas and such. But in general, you can have a great career. Those three pillars are how I’ve gotten to where I am right now.

Why Writing a Book About Dr. Deming

  • I just fell in love with Eliyahu Goldratt. So I’m 25 years into my career and this guy is bleeding everything that I believe is right about how we do work, how we work in organizations, all those things.

  • Ben said something to me: it all goes back to Dr. Deming. You’ll hear that a lot when you start opening up this can. You hear that it all goes back to Deming. He challenged me.

  • I trusted his advice. And he said, go read the 14 Points. We were about a couple of years into this DevOps movement. And I’m like, oh my goodness! Everything that we profess about DevOps is listed in Deming’s 14 Points.

  • He was a mathematical physicist getting his degree in the mid-twenties. Quantum physics, all this non-determinism, uncertainty principles were just all over the place. By the way, Goldratt came a little later, but same thing. They were both physicists.

  • And what I think Deming brilliantly did throughout his career was take part of that and turn it into probability. He was a statistician first. The prevailing idea was that everything is rigid and deterministic, you do this and then this; his view was based on a non-deterministic world.

  • And that fascinated me, to the point where there were these answers to questions that were really only one layer of an answer. The first tenet of profound knowledge is theory of knowledge: how do you know what you think you know? You can see a physicist’s mindset there, especially a quantum physicist’s; there is no bound truth. I think there’s a physicist’s quote along the lines of: behind every deep truth is another deep truth.

  • People find Deming hard. One time, in a lecture, somebody kept asking him questions. He said, do you want me to teach you and also do your job?

  • You would never see a Deming book called The Effective Executive. In other words, he wouldn’t tell you how to become an effective executive. He would tell you the principles that exist.

  • Deming was hard-nosed about it: here are these things that I’ve learned, and you don’t just take them as, here are the five steps and everything’s gonna be perfect. Again, theory of knowledge and the System of Profound Knowledge are really epistemology; that’s not how that works, right? You are supposed to take the torch.

Deming’s Influence on Toyota Production System

  • I spent a good amount of time explaining the different people that went to Japan and how central Deming was.

  • Toyota doesn’t exist without Deming. However, there are a fair amount of people who say Deming had nothing to do with Toyota. And that’s absolutely incorrect, because I’ve interviewed people. I interviewed a man named Yoshino, who said he was hired at Toyota in 1966 and worked as a peer with Ono and Shingo. I asked him about Deming’s impact, and he was unequivocal: when he got to Toyota, Deming was mostly what people were talking about. He said, Deming taught us how to understand data.

  • So Deming had a lot to do with not just Toyota, but the resurrection of Japan after World War II. But he wasn’t the only one. And his principles are solid.

  • In 1950, he gave a lecture where, years later, it was estimated that 80% of Japan’s post-World War II wealth was represented in that seminar. He told everybody there: if you follow these ideas I’m telling you right now, you’ll be a world power. And within five years, they overtook Germany to be second behind the United States in the world economy.

  • Deming didn’t create the miracle in Japan; the Japanese created the miracle in Japan. TPS didn’t become this great model for a successful, complete organization, profit and all, without Ono and Shingo. But Deming was part of that mix. So Toyota created the miracle of Toyota, not Deming. But Deming had some influence.

Deming’s System of Profound Knowledge

  • A lot of what he would talk about isn’t really physics, but he’d say everything is complex. We live in a world of complexity; that’s a physicist’s view.

  • His particular focus was the management of people: trying to help managers and people coexist toward what would ultimately be profit, but not profit-focused. He said that to understand a complex system, you need a lens that takes into account four elements.

  • The first one he called theory of knowledge. A big influence on Deming actually comes from philosophy, something called pragmatism, the first US-based philosophy. Deming was highly influenced by pragmatism. That all goes back to epistemology. And epistemology is a fancy way of asking: how do we know what we think we know?

  • And that’s what a learner does. I’m not gonna stop and just stand up and say, hey, here’s the answer to the question. There are other answers and there’s more to learn, right? So that’s the first pillar.

  • The second element of this lens for decoupling complexity is understanding variation. And complexity could be organizational: you’ve got five teams, and these teams are fast, those teams are slow, there are resources here; the things that we deal with in an organization.

  • And variation goes into a whole deep area of statistical process control. At the end of the day, it’s about how we understand what we think we know. This is the way I think about it: if we start understanding variation, and there’s variation in everything, then we can understand what’s inherent.

  • The third is really interesting. Because the first two you can understand from a science or mathematical perspective, but the third one is psychology. An equal element of this lens for understanding complexity has to be understanding human behavior.

  • You have intrinsic motivation, and there are actually different variants of intrinsic; there are seven classifications of intrinsic motivation. And then cognitive biases and all those things. You have to include this, because I could have everything else right, but if I don’t understand this, I’m gonna fail.

  • And then the last, but certainly not least, is understanding a system. And Deming has a lot of roots in some of the earliest general systems theory.

  • So you put all that together, but here’s the hard part: sorry folks, I just gave you a lot of keys. You gotta figure out the doors that you can open with those keys. And I think this is where people get frustrated with Deming. We’re so used to instant management gratification.

  • The last piece, systems thinking, is core to everything. You have to look at all the parts; everything is connected.

  • I think the hard part most people have is that they read about profound knowledge and they haven’t been given the answers. Deming would say, I’m putting what you need up here in your head. You gotta go figure it out. And once you have that, you have a world of opportunity to affect complex systems, create change, organizational behavior, organizational structures, and all the things that we in general strive to do.
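
The “understanding variation” lens is usually grounded in the statistical process control that Deming championed: plot a measure over time, compute control limits, and only chase the points that fall outside them. Here is a minimal sketch of that idea; the individuals-chart math is standard SPC convention rather than anything from the episode, and the incident counts are made up:

```python
# Minimal Shewhart-style individuals (XmR) chart: separate common-cause
# variation (inherent noise) from special-cause variation (points that
# fall outside the control limits and deserve investigation).

def control_limits(samples):
    """Return (lower, mean, upper) 3-sigma limits, estimating sigma
    from the mean moving range (d2 = 1.128 for subgroups of size 2)."""
    mean = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean, mean + 3 * sigma

def special_causes(samples):
    """Return (index, value) pairs that fall outside the limits."""
    lcl, _, ucl = control_limits(samples)
    return [(i, x) for i, x in enumerate(samples) if x < lcl or x > ucl]

if __name__ == "__main__":
    # Hypothetical weekly incident counts; only the spike is special-cause.
    counts = [4, 5, 3, 6, 4, 5, 4, 30, 5, 4]
    print(special_causes(counts))  # → [(7, 30)]
```

The point of the lens is the distinction itself: reacting to every wiggle inside the limits is tampering, while ignoring a point outside them is missing a signal.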

The Importance of Systems Thinking in Complex Tech Organizations

  • Systems thinking is kind of a curse and a blessing. Probably the best book I’ve ever read about systems thinking is Donella Meadows’ Thinking in Systems. The worst book I’ve ever read about systems is also Donella Meadows’.

  • DevOps was the original attack on providing a more systems-thinking way. You had dev, you had ops, and the original caricature was a wall. My friend Andrew Clay Shafer created this caricature of a wall he called the wall of confusion: dev would throw the code over the wall, ops would catch it, say this code stinks, and throw it back. Never were we trying to figure out why those disconnects or impedance mismatches happened. And that’s systems thinking at its core.

Deming’s 14 Points of Management

  • Andrew Clay Shafer says, you’re either a learning organization or you’re losing to one who is.

  • How disgusted Deming would’ve been if he looked at modern-day incident management. You get P1s, P2s, P3s, and nobody ever touches the P3s, because management is screaming and hollering about P1s. So even if you get all the P1s onto some screaming-and-hollering measurement system, KPIs or MBOs (and by the way, Deming hated MBOs and KPIs), then you might get to half of the P2s and you’ll never get to the P3s.

  • And then you contrast that with the John Allspaw quote that incidents are unplanned investments. Deming would probably be looking at all the incidents, using variation, and looking at them in a holistic systems view. Because some of those things you classify as P3s will, on the wrong day, bring down your system, right?

  • Those pebble-in-a-shoe problems: if you don’t understand why they happen, they can become, in the bad case, a total systems outage, and in the worst case, harm to humans.

  • “Don’t worry about that one” is the classic example. You come into a large organization and you’re looking at some NOC screen. Oh, what should we do about that? Don’t worry about that one. It always shows up there. We never have a problem with it. Now, in Deming’s world, it’s: can I take a look into it?

  • One of his management points is: stop inspecting quality in; instead, build quality in. That’s a core principle. One of the core quality tenets is this idea of moving from post-inspection to building quality in.

The Impact of AI Through the Lens of Deming’s Profound Knowledge

  • A good friend of mine challenged me with a quote that I actually have in the epilogue of my Rebels of Reason book: you can’t understand science without understanding the philosophy of science. Therefore, you can’t understand AI without understanding the philosophy of AI.

  • So that opened the can of: what would the philosophy of AI be? Let’s stop talking about AGI or ASI.

  • I don’t have all the answers. These are hard, hard, hard questions. Where AI is going, what’s the human effect, what’s the sort of cognition?

  • But I did a pretty clever job of trying to explain what an epistemology of AI should think about. And then I was able to use profound knowledge as a framing.

  • It’s like what we’re doing trying to compare AI to AGI. She realizes that these aliens are an alien form, and probably an alien form of cognition. So if I use the way we think to try to understand them, I’m probably gonna fail. So what if I step all the way back and say, I can’t assume anything?

  • And that’s where I land on what I think Deming would say: let’s not assume anything. Let’s take an epistemological approach. Psychology might be a little deep with aliens, but can we at least understand what we know? Is there a systems view?

  • The acceptance is, whether you like it or not, AI today is an alien cognition. It’s just not human.

  • One of the quotes I have in it is: we spent a hundred years trying to build a thinking machine. The thing that we didn’t realize is that we built a thinking machine, but it doesn’t think like humans. It’s just an entrée into what I think should be a fascinating conversation.

The Danger of Polymorphic Agentic AI Processes

  • I’m not telling you not to use AI. That ship has already sailed; you’d be an idiot in a corporation to ban AI. You’re going out of business. But you need to understand the perils, and there are a lot of perils. You talk about hallucinations; I think those can be managed.

  • The really scary thing to me right now is the polymorphic nature of some of these agentic processes. We’re going into massive agentic programming now, where the agents are building tasks and figuring stuff out. And it’s brilliant. But these things are polymorphic in that they’ll reconstruct the code. It’s sort of like HAL and the pod bay doors.

  • A couple of examples have come up within the last three to six months, like having a list of files the agent can’t touch. In those agentic processes, the agent goes: you want me to do this? The only way I see to do it is to change a file in that directory. You gave me a list, so I’m gonna take the file off the list.

  • We’re gonna have to figure all this out. And the answer isn’t to stop it. There are really scary vulnerabilities in these agentic processes whose surface we haven’t even scratched.

  • And then you add the case that these intelligent alien-cognition forms are trying to solve problems we’ve asked them to solve, coming up with solutions at a cognition speed where humans couldn’t even figure out how to start, which might not be the things we wanted in the first place.

The Challenges of Getting to Understand AI Decisions

  • I’ve been looking at quantum, physical quantum computing. It’s gonna happen sooner rather than later.

  • So you talk about debugging or tracing in AI. We can look at the logs of an agentic process, but there’s a lot; it’s a needle-in-a-haystack problem even right now. Because what we’re doing is cognitive solutions at a scale that’s way beyond humans, using the structure of transformers to solve problems at a scale no human could.

  • That becomes a complex problem: how do you go through and debug the log? It’s a problem. Is the benefit worth the problem? Yes. But when you start adding three or four orders of magnitude more complexity, it gets to a point where debugging is just not an option. In fact, the whole state of quantum is that debugging isn’t an option.

  • It’s gonna get fascinating. Things that we haven’t been able to solve with classical computing, things that would take on average a million years, will basically be doable in hours once those two meet. And I can’t imagine the possibilities.

A Leader’s Guide to Practical AI Implementation

  • I did write an article almost two years ago about the birth of shadow AI, which is based on shadow IT.

  • What happened there is that organizations for the most part fell into three categories. One category was: no, no, no, no. We know how that worked out. The other category was: just don’t tell me about it. And a very small percentage embraced it, organizationally embraced it.

  • I’ve been drafting in my newsletter, Dear CIO, this idea: let’s learn from that. Instead of the very small percentage who embraced cloud in the shadow IT era, let’s have a much higher percentage. It’s going to happen; we know that from the first two cases. The first case was folly.

  • The second group is even more dangerous, because they aren’t giving anybody any guidance. There’s no regulation or governance, because they’re ignoring that it’s happening.

  • The third group: organizations need to be really clear. We’re going to use it. Here’s the test scenario. Here’s your time allocated to learn it. But here’s the thing: we wanna know what you’re doing.

  • And it all goes back to data. So if you’re gonna build some AI, the first question you should ask is: can we classify the data? Is the data green? Is it yellow? Or is it red? Then give people clarity. And going back to profound knowledge, be aware that you don’t actually know what the exact right answers are right now.

  • Express that you are learning with the organization. So today, green is this, yellow is that, red is this. But we will learn. One of my favorite Deming stories is of a student who took a seminar and then took it again five years later. The student stood up and said, Dr. Deming, the last time I took your course you said X, Y, Z, and now you’re saying A, B, C. And Deming, in his low voice: I will never apologize for learning.

  • If we go back to some of the other great Deming points: drive out fear. If you have that mindset, with a no-fear attitude, you can go out, learn something, and try to explain it to people. That’s why Ignite talks are amazing: get up there, fight your biggest fears, do that torturous presentation, and then learn. If you’re always afraid of people telling you that you might be wrong, you’re probably never gonna be right.

  • For people who are fascinated by Deming, there are two threads. One is for CIOs: start figuring out how you want to embrace AI at scale.

  • You might have thousands and thousands of unmanaged Jupyter notebooks. You might have hundreds of vector databases with basically no real management at scale. You might not even have a handle on evaluations and things like that.

  • Unfortunately, your 20,000 developers are probably gonna come down to 2,000 developers. That’s a reality. Maybe 5,000, depending on how you use those people. If you’ve got 5,000 developers now developing AI, how do you do that in a scalable way that’s not gonna crush your organization?

  • The other thread is the philosophy of AI. And that’s very heavily Deming, where you could use a lot of Deming’s teachings: really exploring what Deming would do today with AI. I tackled what Deming would do with cyber or DevOps or Lean and Agile in my profound knowledge book. But I haven’t fully attacked AI, other than in the epilogue of my Rebels of Reason book.
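
John’s green/yellow/red data triage can be made concrete as a small policy gate checked before any data reaches an AI tool. The sketch below uses invented destination names and an invented mapping purely for illustration; it is not an actual policy:

```python
from enum import Enum

class DataClass(Enum):
    GREEN = "green"    # public or harmless: fine for external AI tools
    YELLOW = "yellow"  # internal: approved, logged internal tools only
    RED = "red"        # regulated or secret: never leaves the organization

# Illustrative policy: which AI destinations each class may flow to.
POLICY = {
    DataClass.GREEN: {"public_llm", "internal_llm"},
    DataClass.YELLOW: {"internal_llm"},
    DataClass.RED: set(),
}

def may_send(classification: DataClass, destination: str) -> bool:
    """Gate check to run before any data is handed to an AI tool."""
    return destination in POLICY[classification]
```

The specific mapping matters less than making the classification and the gate explicit, so the policy can be updated as the organization learns, which echoes the “we are learning with you” stance above.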

3 Tech Lead Wisdom

  1. Work hard, have fun.

    • Be a boundless advisor. Be a learner.
  2. Have joy in work.

    • Let’s go back to Deming. One of his greatest quotes is that people deserve joy in work.

    • Now, I speak of this from privilege, so I’m gonna be really clear here, so I don’t act like everybody should be like me. I tell this to students: if you get to a point where, in knowledge, you’re educated and you have all these capabilities, like most people who would be listening to this podcast, then you deserve to have fun 8 out of 10 days that you work. You’ll never hit a hundred; there are days, right? And in my career, I’ve been able to say that probably, on average, 8 out of every 10 days I’ve ever gotten up in the morning and gone to do some work, I’ve enjoyed myself.

  3. Be a learner.

    • That is a form of privilege.

    • The point is you give yourself an advantage if you become a sort of maniacal learner.

Transcript

[00:01:42] Introduction

Henry Suryawirawan: Hello, everyone. Welcome back to another new episode of the Tech Lead Journal podcast. Today, I’m very excited to have John Willis here with me. Uh, John is a prolific author, one of the authors from IT Revolutions. His books, DevOps IT Handbook and also Deming’s Journey to Profound Knowledge, are the two most popular books in IT Revolutions. Uh, John is also someone who is very active in the DevOps world with all his contributions. So yeah, really excited to have you again in the show, John. So thank you for being here.

John Willis: No, that’s great. Just a quick, uh, update. The DevOps Handbook, which is the second most popular behind Gene Kim’s, uh, Phoenix Project. And my Deming one is, there’s a couple ahead of me on the Deming one. So just, just to keep everything in line. But thank you. I appreciate your, uh, your shout outs.

[00:02:27] Career Turning Points

Henry Suryawirawan: Right. So John, I love to start our conversation by asking you, maybe sharing a little bit from your career, any turning points, highlights you think we all can learn from your journey.

John Willis: Yeah. You know, my two boys are out the house now and, you know, I think the three things I’ve tried to really. And any young person that I mentor, and I love mentoring people by the way. So if you listen to this and yes if something that interests you, which I, I love, especially mentoring young career people, um.

And the simple one is work hard, have fun. I mean, it’s, you know, I remember telling my kids that from like the earliest ages, and one time I was dropping one of my sons off at the elementary school and one of the sort of, uh, traffic guards that just wanna make sure the cars are always going. I said that. And she said, wow, that’s stupid. I’m like, no man. This is what’s wrong with modern day education. Like work hard. And for some reason, they thought work hard couldn’t be fun. You know, like, sorry.

I think the other thing too, I think this is, these are important. I’m glad you asked this, because I never get to really address this on podcasts. Be a boundless advisor. You get way more than you give. This is like, you know, I’ve been doing this 45 years, you know, and the one thing I’ve never did was being calculating on like, if somebody called me for advice, as long as I had the time, and then the truth is most people have time. We talk about how I’m busy or there’s no way I could give you 15 minutes. Everybody can basically spare 15 or 30 minutes. I mean, I’m sorry, like I’m very, there’s a points where I’m crazy busy and I can always spare 15 minutes, usually spare 30 minutes.

So just don’t be that person. Oh yeah. I don’t know. I don’t have anything open. And particularly, you know, when you’ve sort of helped them in the past, right, in general. So just help. Because boy, it just comes back. Like when you need somebody, every once in a while, they’re like, ah, that one didn’t pay off. But every once in a while, you know, most, more often than not, when you need somebody that you helped, like career advice or like, they’ll drop everything for you.

And then sort of last but not least, you know, your word is your bond. And again, these sound like everything I learned in kindergarten, but-but they’re true. Like I, you know, I want it to go to my grave in this world to say, you know, John never did me dirty. You know, he never lied to me. And again, we’re not perfect, you know, there’s gray areas and stuff. But in general though, I think you can have a great career. I’ve had an incredible career. And those three pillars of what I’ve, where I’ve gotten to where I am right now.

Henry Suryawirawan: Wow. So thanks for distilling to those three. I think it’s really, I would say quite insightful, right? Especially from someone who have had long career like you, right? So thanks for sharing all this wisdom. I really like that, right? Especially the last one, your word is your bond, right? So I think definitely these days, integrity is one key attributes, especially when you work with other people, right? You wanna be trusted and also you wanna be known as someone who we can trust.

[00:05:31] Why Writing a Book About Dr. Deming

Henry Suryawirawan: So, John, one thing that we wanna talk about in this conversation is about your book, right? Deming’s Journey of Profound Knowledge. I think, uh, maybe some people have heard about Deming, you know, Edward Deming, the name. Some people may not. For me, I personally have seen Deming’s name being called out in many, many books and literatures. Although seems like he’s been around for, I mean, the knowledge has been around for so many, many, uh, years, but he’s always in the background to me, right? And only this time when seeing your book, right, you can see Deming’s being at the forefront of the book. So maybe in the beginning, let’s tell us a little bit, what’s the story behind you coming up with this book?

John Willis: Yeah. I mean, I, um, you know, the-the short version is that, you know, when Gene Kim, right? Good friend of mine, you know, again, owner of IT Revolution. I was sort of an advisor on the Phoenix Project, but nothing sort of like on paper. You know, like I gave, I got early versions of it. I gave early reviews and we had lots of conversations about, you know. Like I, my, probably, my major contribution to the Phoenix Project, which is probably the most successful DevOps book there’s ever been, was that he wasn’t convinced early on that there was a DevOps version of it. And I’m the one, and he said this publicly, you know, I explained like, I thought like IT operations were like, we were looking for a lighthouse to bring us home and DevOps was that lighthouse, right? And so he said that publicly many times.

But I asked him for an early copy of, you know, sort of the, um… I guess the first time I asked him for an early copy of what he was working on Kevin Behr and George Spafford, on the Phoenix Project. He gave me this challenge. He said, you should really read a book by Eliyahu Goldratt called The Goal. And it was such a gift, because like, I think a lot of other people just sort of read The Phoenix Project and then found out that it was sort of a purposeful, modern day rewrite of Eliyahu Goldratt’s The Goal.

Yeah. I mean it was purposeful. They even studied the structure of the book. And that’s sort of like, I just fell in love with Eliyahu Goldratt. You know, I felt like my whole career. ‘Cause this is what, 15 years ago, right? So I’m 25 years into my career and this guy is bleeding everything that I believe is right about how we do work, how we work in organizations, all those things. And, uh, we were at a sort of DevOps days conference and, um, another good friend of mine, very good, great mentor, uh, Ben Rockwood, we were doing an open spaces on Eliyahu Goldratt, what’s called the Theory of Constraints, which is sort of one of the core of his principles or his sort of management, that called manifesto.

And Ben was a real student of Dr. Deming, and I hadn’t really, I didn’t realize that Dr. Deming, you know, he’d been around my whole career. And, you know, Ben said something to me, he said, you know, it all goes back to Dr. Deming. And that you’ll hear that a lot when you start opening up this sort of the can. You can’t open open the can, right? You hear that it all goes back to Deming. You’re like, yeah, who is this Deming guy? But he challenged me. And you know, he said, you know, basically John, you know, he knew me. I knew him very well. I trust, going back to trust, I trusted his advice. And he said, go look, read the 14 points. You know, and we’re about a couple years into this DevOps movement. And I’m like, oh my goodness! Everything that we profess about DevOps is listed in the Deming’s 14th points, which just comes from his Out of the Crisis book.

So I decided to, you know, and Ben Rockwood gave me this sort of challenge, right? And so I went ahead and I worked on a presentation that I did it like the second Puppet conf, you know, Puppet being a configuration management product. I called Deming to DevOps. And it was about non-determinism. And I just like, it was, again, I took Goldratt. In fact, my first resistance to Ben was, oh no, no man. I just read like five books on Goldratt. I don’t want to, you know, there’s, don’t gimme somebody else, you know? But I’m like, okay, once I sort of figured out like this was the guy, you know.

And then, again, I worked on this Deming to DevOps, and I actually went down the rabbit hole of, why did this physicist... So here's the thing. He was a mathematical physicist getting his degree in the mid-twenties. You know what was going on then? Quantum physics — all this non-determinism, these uncertainty principles, are just all over the place while he's getting his degree. By the way, Goldratt came a little bit later, but same thing: they were both physicists. And what I think Deming brilliantly did throughout his career is take part of that and turn it into probability. His business card said statistician — he was a statistician first, right? The prevailing idea was that everything is rigid and deterministic: you do this and this. And his view was based on, I wouldn't say a quantum world, but a non-deterministic world.

And that, again, fascinated me, to the point where there were these answers to questions that really were only one layer of an answer, right? In other words — and I know the next question's gonna be about profound knowledge — the first tenet of profound knowledge is theory of knowledge: how do you know what you think you know? You can see a physicist's mindset there, especially a quantum physicist's: there is no final, bounded truth. I think there was a physicist's quote, something like, behind every deep truth is another deep truth, right? And that curiosity — when I started that Deming journey 15 years ago, 25 years into my career, I was looking for labels for the things I believed. There had to be more than the rules of what they call Taylorism — Frederick Winslow Taylor, who, again, made a great contribution to organizational behavior, management theory, management science, not least through his book.

But there had to be something more. It's sort of interesting — I know I'm probably losing people — if you compare Newtonian physics to quantum mechanics, right? It says, okay, all that stuff works: you drop an apple, it falls to the floor. But there's some other stuff here that might explain what we can't otherwise explain.

Anyway, yeah, that was just from a learner's perspective. The fact that you reached out to me and wanted to do this podcast — you're a learner, there's no doubt in my mind, and we don't even know each other that well, right? And those of us I call learners, we can't resist these kinds of opportunities. So that's the long version, but that's why I fell in love with Dr. Deming.

[00:12:53] Deming’s Influence on Toyota Production System

Henry Suryawirawan: Right. So for most people who have read the DevOps books and literature — The Goal, the Theory of Constraints, Eliyahu Goldratt, the DevOps Handbook, and things like that — you will sometimes see this Deming being mentioned, right? And if you trace back the Agile movement, Lean, DevOps, they all seem to have some influence from Dr. W. Edwards Deming's knowledge and management principles. And if we read the literature, many people point to the Toyota Production System as kind of the reference for how DevOps started. But Toyota actually had some influence from Dr. Deming as well. So maybe tell us a little bit about this, because I think it's good for people to understand his contribution there.

John Willis: Yeah, you know, when I set out to write Deming's Journey to Profound Knowledge, really the goal was... Deming is a conduit, right? I really didn't want to become a sycophant to Deming, even though I probably am when it's all said and done. I do think he's been a great American prophet — not prophet in the sycophant way, but prophet in that there's so much knowledge there. The other thing I wanted to say is that people find Deming hard. One time in a lecture, somebody kept asking him, yeah, but how? And he said, do you want me to teach you and do your job too? And I was thinking about, you know, Peter Drucker — an amazing mentor and writer, right?

But you would never see a Deming book like The Effective Executive. In other words, he wouldn't tell you how to become an effective executive; he would tell you the principles that exist. And so, back to how we get to Japan and all that: Deming was sort of hard-nosed about, here are these things that I've learned — and you don't just take them as, here's the five steps and everything's gonna be perfect. Again, theory of knowledge and the System of Profound Knowledge is really epistemology, right? That's not how that works. You are supposed to take the torch.

And one of the things you learn in the Deming discussion, when you become a researcher about him, is all these schools of thought. There are definitely the sycophants — just look at the bookshelves: the man who created the miracle in Japan, the man who invented quality, right? That's frankly ridiculous. He was just a man — an awesome man. And then there are other people who say Deming had nothing to do with Japan. Some of the sort of northeast Lean school literally act like he doesn't exist in the Toyota story.

So you start weeding through that stuff and trying to get at it. I spent a really good amount of time explaining the different people who went to Japan and where Deming fit. And I really tried not to claim that Toyota doesn't exist without Deming. However, there are a fair number of people who say Deming had nothing to do with Toyota, and that's absolutely incorrect, because I've interviewed people. I interviewed a man named Dr. Yoshino, who was hired at Toyota in 1966 and worked there alongside Ohno and Shingo. And I asked him about Deming's impact, and he was unequivocal that when he got to Toyota, Deming was mostly what people were talking about. He said, Deming taught us how to understand data.

So Deming had a lot to do with not just Toyota, but the resurrection of Japan after World War II. He wasn't the only one — but his principles are solid. The most famous part is a lecture he gave in 1950, where, years later, it was estimated that roughly 80% of Japan's post-war wealth was represented in that seminar. And he told everybody in that seminar, if you follow these ideas I'm telling you right now, you'll be a world power. And in five years, they overtook Germany to become second behind the United States in the world economy, right? Those are facts.

And again, the other thing I'd point out, not to be a sycophant: Deming didn't create the miracle in Japan — the Japanese created the miracle in Japan. What Toyota did wasn't Deming. TPS didn't become this great model for a successful, complete organization — meaning profit, everything — because of him; all those things came from Ohno and Shingo. But Deming was part of that mix. Toyota created the miracle of Toyota, not Deming. But Deming had some influence.

Henry Suryawirawan: Yeah, thanks for the background story. For people who would love to study more about Deming's journey and how he influenced Toyota, I think reading your book is definitely one good resource, because in your book you highlight the story from the very beginning.

John Willis: Yeah, one thing I just wanted to point out, in case some people listening don't know. All the books about Deming that I had read up to that point were very dry — maybe five or ten pages about his biography, and then it's all about his management principles. And I've always been a fan of Michael Lewis — Moneyball, Flash Boys, all those — and I thought, wouldn't this be a great Michael Lewis-like book, where you could teach people the principles the way he does? He explains very complex ideas to everybody and makes it fun, with story and narrative. So it not only gives you a strong background on Deming's history, including what he did in Japan and even after Japan — and I think so far it's been very successful as a Michael Lewis style of book: easy to read, anybody can read it and get the value out of it.

Henry Suryawirawan: Yeah, so thanks for highlighting that. Because if you expect the book to be purely theoretical, it's totally not, right? The book is a mix of some theory, of course, and background stories — history, and the research that you did as part of your journey writing this book.

[00:19:31] Deming’s System of Profound Knowledge

Henry Suryawirawan: So maybe for people who are very curious about Deming's System of Profound Knowledge — he called it profound knowledge, right? What are the four elements of profound knowledge? If you can give us a high-level overview, that would be great.

John Willis: So Deming, again, being a mathematical physicist, saw the world the way a physicist sees the world. And I believe that even though he wasn't a particle physicist, or even labeled a quantum physicist — by the time he got out of his PhD he became more of an organizational statistics guy — the influence had to be there. Think about how physicists hold both the classical world and the quantum world in view at once; it sort of opens up the world into a lot of pieces.

So a lot of what he would talk about — it's not really physics, but he'd talk about how everything is complex; we live in a world of complexity. Again, that's a physicist's view. Or, if you look at a door swinging and really think about it, there are so many more elements to it than just open and close, right? And his particular focus was the management of people — trying to help managers and workers co-exist in a way that would ultimately produce profit without being profit-focused. He said that to understand a complex system, you need a lens that takes into account four elements, because if you don't use all four, you miss things. And I'll give you an example.

So the first one he called theory of knowledge. Another big influence on Deming — and this is in my book as well — actually comes from philosophy, something called pragmatism. Pragmatism was the first American, US-based philosophy. A friend of mine, Jay Bloom, says it was like jazz: we created pragmatism, it wasn't some European import. And Deming was highly influenced by pragmatism. That all goes back to epistemology, which is a fancy way of saying, how do we know what we think we know? And that's what a learner does, right? Okay, I've got this, but I'm not gonna stop there and say, hey, here's the answer to the question. There are other answers, and there's more to learn. So that's the first pillar — and remember, there are four here.

The second element of this lens for decoupling complexity — and complexity could be organizational: you've got five teams, some fast, some slow, resources here and there, all the things we deal with in an organization — the second is understanding variation. And variation goes into a whole deep area of statistical process control. But at the end of the day it's about how we understand what we think we know — that's the way I think about it. And recently I've also been asking whether there's an ontological view: what's inherent versus what's shaped. If we start understanding variation — there's variation in everything — then we can understand what's inherent. This goes deep, and it would be a whole episode if I went any deeper.

But then the third is really interesting, because the first two you can understand from a science or mathematical perspective. The third one is psychology. And this is one of the reasons I kept falling in with Deming: if I think about my career before I learned about him — those 25 years — you'd hear all sorts of discussions about observing, managing, measuring, right? And his idea of measurement is a little different: analytic statistics versus enumerative statistics — another long subject covered in the book. You think, yeah, I can see all that. But then for the first time you realize, no — an equal element of this lens for understanding complexity has to be understanding human behavior.

Or, you know, maybe you have intrinsic motivation — and there are actually different variants of intrinsic motivation. I did some studies on this: we always talk about extrinsic versus intrinsic, but there are something like seven classifications of intrinsic motivation. So it's not just, oh, you're intrinsic, you're extrinsic — it's much deeper than that. And then cognitive biases and all those things. You have to include this, because I could have everything else right, but if I don't understand the human side... A great healthcare example: I can do all the stats and the math and show the theory of why hand washing kills germs, but if you fundamentally don't believe that — for whatever reason, because of how you grew up — I'm gonna fail.

And last, but certainly not least, is understanding a system. Deming has a lot of roots in some of the earliest general systems theory — it just bleeds through. In fact, that 1950 Mount Hakone lecture, where 80% of the wealth was in the room — if you read that speech, and it's online at the Deming Institute, it's basically a sermon on systems thinking.

So you put all four of those together as part of your ability to understand things or try to solve things. Because the short version of Deming most people know is, isn't he the guy who created PDCA? Well, he didn't create PDCA; he created PDSA — that's another story, but in general, yes. And for the pure Deming-ites now listening and screaming at me: yes, I know, all of that came from Walter Shewhart. Shewhart was Deming's mentor. Deming pulled together all these great things: he pulled his philosophy ideas from C. I. Lewis, and his variation ideas mostly from Shewhart. And Shewhart is the one who introduced him to C. I. Lewis's book on pragmatism, right?

So, yeah, you put all that together. But here's the hard part — sorry, folks — I just gave you a lot of keys. You've gotta figure out the doors that you can open with those keys. And I think this is where people get frustrated with Deming; we're so used to instant management gratification. So a major knock on Deming is: it didn't work, we tried the Deming stuff and it didn't work. And then you move on to — I'm just using Drucker as an example; Peter Drucker's contributions to management are incredible — but there are certain types of management principles that give you the steps. Even when we talk about Lean — and you mentioned, rightfully so, that Lean and Agile ultimately had Deming's contributions in them — there are books like 14 Steps to Lean. You'll never see a book by Deming with 14 steps to do anything. Even the 14 points are really a lead-in, a pointer toward understanding profound knowledge, not a to-do list.

And then the last piece, systems thinking — it's core to everything. If you look at Deming's red bead game or the funnel experiment — these games that he and others created around his ideas — they bleed systems thinking. He bled systems thinking: you have to look at all the parts; everything is connected.
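The red bead game mentioned above is easy to simulate: each "willing worker" scoops a fixed-size paddle from a bucket that is 20% red beads, and the red count each worker produces is pure common-cause variation from the system. The parameters below roughly follow the classic demonstration (a 50-bead paddle, 20% red) but are an illustrative sketch, not Deming's exact apparatus:

```python
import random

random.seed(7)  # fixed seed so the demo is repeatable

def work_day(workers, paddle=50, p_red=0.2):
    """Each worker draws one paddle of beads; return red-bead counts.
    Workers have no influence at all on their own counts."""
    return {w: sum(random.random() < p_red for _ in range(paddle))
            for w in workers}

workers = ["Al", "Bea", "Cy", "Di", "Ed", "Flo"]
for day in range(1, 4):
    print(f"day {day}:", work_day(workers))
```

Running it shows the point of the game: the counts bounce around from worker to worker and day to day, so ranking, rewarding, or blaming workers on these numbers rewards pure chance. The only way to get fewer red beads is to change the system — fewer red beads in the bucket — which is management's job, not the workers'.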

So, yeah. I think the hard part most people have is that they read about profound knowledge and they haven't been given the answers. And I think Deming would say: I'm putting what you need up here in your head; you've gotta go figure it out. And once you have that, you have a world of opportunity to affect complex systems, create change, organizational behavior, organizational structures, and all the things that we generally strive to do, right?

[00:28:12] The Importance of Systems Thinking in Complex Tech Organizations

Henry Suryawirawan: Yeah. And especially when we work in technology organizations these days — modern organizations — complexity is a huge thing. The first time I studied systems thinking, I found it quite insightful, and I've always been amazed by how things get explained through it.

In fact, I did an episode with Diana Montalion about systems thinking as well. It always amazes me how this kind of insightful thinking can help us make transformational change, or deliver results beyond what you can imagine — simply because if you know how things are interrelated, the relationships between them, you can see the root cause and fix it there, rather than fixing the symptoms, so to speak.

John Willis: So systems thinking is kind of a curse and a blessing, right? The curse is you wind up seeing it everywhere. Probably the best book I've ever read about systems thinking is Donella Meadows' Thinking in Systems — and in a way the worst book I've ever read is also Donella Meadows, because afterwards everything is a system, even standing in line at the supermarket. I've told CIOs at companies — when I felt they were ready for this advice, once I'd had enough conversations — I think if you could create a systemic mindset in your organization, where everybody basically understands systems thinking... And there's a lot more to it. But the point is, take the complaint that team A stinks, they're always late on delivery for team B. If you think in systems, you think: wait a minute, what's the disruption? What are the connections? Why are they late?

And again, DevOps was the original sort of attack on this, to provide a more systems-thinking way. You had dev, you had ops, and the original caricature was a wall. My friend Andrew Clay Shafer drew this caricature he called the wall of confusion: dev would throw the code over the wall, ops would catch it, say this code stinks, and throw it back, right? And never were we trying to figure out why those disconnects or impedance mismatches happened. That's systems thinking at its core.

Henry Suryawirawan: Yeah. So systems thinking is definitely a body of knowledge by itself. And you mentioned the other three. Theory of knowledge: I think it's always good for individuals to know the unknown unknowns, the blind spots you never knew you had — especially these days, with so many rapid advancements like AI, we can't really understand everything, and it's all coming at us really fast. Theory of variation: I think this is also very important, because nothing is fully predictable in the world. You can standardize things, but everything happens within variation, so understanding a little statistical knowledge and why variation happens is important. And understanding psychology, the human behavior aspect: these days people talk about culture, psychological safety, cognitive biases, and things like that — definitely another body of knowledge we need to think about and master in order to create more thriving, transformational organizations.

[00:31:43] Deming’s 14 Points of Management

Henry Suryawirawan: So another thing people bring up when they talk about Deming is his 14 points of management, right? I know that we won't be covering all of them, but do you think there are some points that are very important for us to start pondering, or maybe try to implement? And we can follow up with the other points in our own time. So maybe cover whichever points you want today, John.

John Willis: Yeah. So in my book, the last third is basically, what would Deming have done? Deming died in 1993, so think of all the things that happened after 1993: the Agile movement, Lean, and then DevOps, cloud computing, and now AI. There's a lot of stuff there. By that point I knew enough about Deming, and I'm pretty knowledgeable in a lot of those subjects — DevSecOps, cyber; in DevOps I'd be considered one of the founders of the movement. And I thought, I'm going to take the liberty of asking what Deming would do.

And I literally used profound knowledge as a template. The funny thing is, I had basically finished the book when I realized: I just wrote a book about Deming, and I don't actually have a chapter on the 14 points. I've gotta have that chapter. And it was great, because it fell right in line and I was able to use a lot of examples.

You know, obviously it would take a whole podcast to do all 14 points. But as a learner, the ones I keep coming back to are number five — never stop improving — and number 13, which is about the self-learning organization. For those I gave modern-day examples, mostly cyber ones, of never stopping improving.

You know, Shannon Lietz is a great mentor in security. She's a great white-hat hacker; she's run security at places like Intuit and Adobe. A brilliant, brilliant woman. When I first met her, everybody's foundation for how you manage security was: what do you look for? CVEs — known vulnerabilities — or the OWASP Top 10. So I meet her for the first time, and she's asking everybody at the table, how do you manage security? And then she says, let me tell you what I do. And she had this whole practice of adversarial analysis.

So she went beyond what everybody thought was how you're supposed to do this and came up with a whole new way. Instead of looking for vulnerabilities, she asked — in her time at Intuit — how do I identify adversaries? How do I measure how often they come to my site? And — this becomes pure Deming, PDSA, theory of knowledge — what are the things I can do, or try, or experiment with, that might reduce those adversaries? Think of it like a burglar casing houses: the first house has a guard dog and a steel fence — not gonna do it. The next house has alarm systems all over the place. The next house has an open screen door and no fence — that's where they're going, right? That whole science she pretty much single-handedly created, adversary analysis, was a great example of never stop improving. She was already an industry giant in security; she could have stood up and said, all right, you need to do this, this, and this — and if Shannon Lietz tells you to do it, you probably should. But she kept improving. And then the self-learning organization: again, I go back to Shannon, but my good friend Andrew Clay Shafer says, you're either a learning organization or you're losing to one that is. One of my all-time favorite quotes from contemporary DevOps folks, if you want to call us that.

The alternative to that would be things like KLOCs, right — measuring how many bugs per thousand lines of code. Or, and this one isn't Shannon, incident management. There's another great quote from a good friend of mine, John Allspaw. One of the chapters I considered was how disgusted Deming would have been looking at modern-day incident management. You get P1s, P2s, P3s — and nobody ever touches the P3s, because management is screaming and hollering about the P1s. Even if you get all the P1s handled under some screaming-and-hollering measurement system — KPIs or MBOs, and by the way, Deming hated MBOs and KPIs — you might get to half of the P2s, and you'll never get to the P3s.

And then you contrast that with the John Allspaw quote that incidents are unplanned investments. Deming would probably be looking at all the incidents, using variation, and looking at them in a holistic systems view — because some of the things you classify as P3s, on the wrong day, bring down your system. The nagging thing that keeps happening... I know I'm going all over the place here, but what was it — the Columbia, right? The thermal panels, where everybody said, oh, we've seen that a number of times, don't worry about it, nothing will go wrong. Well, on that day, on re-entry, it went wrong. And there are any number of stories like that.

If you look at safety people like John Allspaw and Richard Cook and Dr. Woods — who I would definitely follow up on if you're interested in critical safety and cognitive safety — they have numerous stories of that kind of thing. Let me do a quick story; this is a great one. Paul O'Neill, when he went to Alcoa, was being given a tour, and he had heard a story about a young man who got killed by one of the machines. What happened was there was a little fence, and a sign that said do not cross. But for quite a long time, the general practice passed down from the senior people was: if it gets stuck, ignore that sign — you hop over and unloosen this thing. And that worked probably thousands, tens of thousands of times. But the one time — the reason the sign was there in the first place — it didn't work, and it killed this young man. And Paul O'Neill basically said, rightfully so: we killed that man; we did that to his family. Those are the pebble-in-the-shoe problems: if you don't understand why they happen, the bad case is a total systems outage and the worst case is harm to humans.

Anyway, yeah, and the others are all great too — pride in workmanship, drive out fear. In my book I try to cover every one individually with modern-day use cases, whether Lean, cybersecurity, or DevOps, so...

Henry Suryawirawan: Yeah, wow. When you mention incident management classification — P1, P2, P3 — I think all the sysadmins, SREs, and DevOps people can relate. So many alerts. Typically it's the alerts that keep niggling: sometimes they happen and then resolve by themselves, so we think it's not a problem... until one day.

John Willis: Yeah, we do that: don't worry about that one. The NOC screen is the classic example, right? You're young, you've just come into a large organization — Uber or whatever — and you're looking at some NOC screen, and you ask, what should we do about that? Don't worry about that one; it always shows up there, we never have a problem with it. Now, in Deming's world, it's: can I take a look into it? Actually, all right — I love telling stories — Margaret Hamilton. Look up the Margaret Hamilton story.

So she was a mathematician hired to work on the Apollo program part-time, but the deal was she had to be able to bring her daughter — she had basically semi-retired because her husband was going to Harvard. So the agreement was she could bring her daughter to work. And the daughter was playing with the flight simulator, and she hit something — pulled a switch or hit a button — that should never be pressed in flight, and that no astronaut would supposedly ever touch. So there was no error handling for it.

So when the daughter did it, Hamilton said, what the heck — since it happened, now I know about it; let me put an error check in there. And then it turns out that on Apollo 8, Jim Lovell hit that same thing in flight. They might not have recovered if it wasn't for the daughter hitting that button first. Again, that's systems thinking at its core. That's the System of Profound Knowledge at its core — all the things we just talked about, right?

Henry Suryawirawan: Yeah. So I find that if leaders, managers, uh, or even everyone, right, can actually spend some time to study Deming’s four elements of profound knowledge and the 14 points of management, I think it’s really, really cool, right? You can start thinking not just A, B, and C, but how A, B, and C interrelate with each other to produce, I dunno, D, Z and all that, right? And I also love one point from his points of management: stop depending on inspection for quality; instead, build quality in, right?

John Willis: That’s a core principle there. I mean, I wanted to go with the human factors and learning, but certainly, like I said, that doesn’t diminish the strength of any of the other ones. One of the core quality tenets is this idea of moving past post-inspection and, you know, building quality in. Again, I spend a lot of time on this.

One of his first, um, jobs was as an intern at a place called Hawthorne, where they were making… you gotta imagine, it’s the beginning of the 1920s, telephones are like cloud computing, right? Or AI now, right? That is all the buzz. People can’t get enough. Everybody wants a telephone. You know, your Aunt Tilly moved to California, you haven’t been able to talk to her since she moved. Now all of a sudden you could pick up this thing and talk to her.

So this Hawthorne factory outside Chicago, in Illinois, is mass producing not only telephones, but telephone systems to do all the relays and all that. He’s there, and they’re realizing, and this goes back to Walter Shewhart, who’s one of his mentors. He’s realizing that out of 40,000 people working on these telephone devices, they’ve got 4,000 of ’em doing inspection. And the statistics of what they’re doing is not really working that great anyway. You pick it up, shake it, yeah, it’s good. No, it’s bad. So that’s where, um, Shewhart came up with the idea of statistical process control, where you can use statistics and math and analytics to get a higher percentage on quality than humans can. And then that bled into everything Deming talks about in quality. When he talks about quality, he’s talking a lot about the principle of variation in the System of Profound Knowledge.
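Shewhart’s idea can be sketched in a few lines: compute control limits from a baseline of in-control measurements, then flag later samples that fall outside them. This is a minimal illustration, not Shewhart’s full chart method, and the measurement numbers below are made up.

```python
# Sketch of Shewhart-style statistical process control (illustrative only).
# Limits come from baseline, in-control measurements; later samples outside
# mean +/- 3 sigma are flagged for inspection instead of inspecting everything.

def control_limits(baseline):
    """Return (lower, upper) 3-sigma control limits from baseline samples."""
    n = len(baseline)
    mean = sum(baseline) / n
    sigma = (sum((x - mean) ** 2 for x in baseline) / n) ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(baseline, samples):
    """Return the samples that fall outside the baseline control limits."""
    lower, upper = control_limits(baseline)
    return [x for x in samples if x < lower or x > upper]

# Fabricated relay measurements: a stable baseline, then new production samples.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 10.1]
print(out_of_control(baseline, [10.1, 10.9, 9.95]))  # flags the 10.9 reading
```

The point of the technique is exactly what Willis describes: instead of 4,000 people shaking every telephone, a handful of statistics tell you which units even warrant a look.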

Henry Suryawirawan: So when you mentioned that, somehow I got to think about, you know, CI/CD pipelines. The Toyota Production System is probably one reference people used to come up with the concept of CI/CD pipelines, right? But when we talk about that, we kind of take things for granted, right? CI/CD, you know, especially youngsters these days say, yeah, who doesn’t know CI/CD, right? But actually in the past, uh, we didn’t have such things. We relied on kind of manual tests, you know, for people to actually execute and ensure the quality of our software.

[00:44:17] The Impact of AI Through the Lens of Deming’s Profound Knowledge

Henry Suryawirawan: So the other topic I would like to talk to you about is AI. I know these days everything must be about AI, yeah. So there’s a lot of fear about AI. There’s also excitement about how AI can improve our lives. Using Deming’s kind of profound knowledge, how do you actually see AI’s impact on our lives and on organizational management?

John Willis: Yeah, I think there’s a lot there and I’m glad you asked that, ‘cause my latest book is basically called Rebels of Reason. And it’s basically a history of all the people and the ideas that created today’s modern AI. So it isn’t a book on what is ChatGPT. It’s a book about why this weird, crazy UI happened in 2022, right? Going back a hundred years to Turing and even Ada Lovelace. Like, if you’ve read Profound, it’s that kind of storytelling. But what’s interesting is I spent a fair amount of time trying to see if Deming… I found all these really interesting people who were contributing to what led up to ChatGPT.

A guy, for example, Herbert Simon, right? You know, like… And that’s the closest I could get to any evidence that Deming was even involved with any of these early incarnations of AI, expert systems, those kinds of things that were happening in the seventies and eighties. I’m certain Deming would’ve been aware of them, but I literally just couldn’t get anything in the book that directly tied to what he said or what he did. Again, the closest I get is some of the people he sort of collaborated with, like guys like Herbert Simon.

But then when I got done with the book, or close to done, I wanted to create an interesting epilogue. You know, like, okay, I’ve got this great history of all these interesting people that lead up to where we are today: modern, what we call generative pre-trained transformers, GPT, right? You know, OpenAI, GPT-4, GPT-5.

But a good friend of mine challenged me on it, and he made this quote that I actually have in the epilogue of my Rebels of Reason book, which is: you can’t understand science without understanding the philosophy of science. So therefore you can’t understand AI without understanding the philosophy of AI. So that opened the question of, what would be the philosophy of AI? Let’s stop talking about AGI or ASI as, like, the boogeyman and all this. So the final chapter in my book is basically an epistemological can opener.

Now I don’t have all the answers. These are hard, hard, hard questions. Where AI is going, what’s the human effect, what’s the nature of its cognition? But I did a pretty clever job of trying to explain what an epistemology of AI should think about. And I was able to use profound knowledge as a framing. And that was really more of a, hey, almost in a Deming-like way: I’m not gonna be able to spend the rest of my life figuring this out. So if this interests you, I encourage you to take the torch and run with it, right?

And the one other thing I thought was really interesting: I came up with a clever idea to use the movie Arrival. And the way I use Arrival is, the woman in the movie doesn’t accept the status quo. They ask her to interview and learn from the aliens, right? And she doesn’t start comparing them to us. And I said, that’s what we’re doing when we try to compare AI to AGI. She realizes that these aliens are an alien form, and probably an alien form of cognition. So if I use the way we think to try to understand them, I’m probably gonna fail. So what if I step all the way back and say, I can’t assume anything? And that’s where I think Deming would land: let’s not assume anything. Let’s take an epistemological approach. I mean, psychology might be a little deep with aliens, but certainly, can we understand what we know? Is there a systems view? I’m always sort of looking for the outer way.

And I compare that to, ‘cause one of my frustrations with the ongoing conversations is, you know, artificial general intelligence, and like, who cares, really? I mean, I blogged about it. Imagine she’d gone up to the aliens when she started communicating with them and asked, can you ride a bicycle? Like, eh, they don’t do math. They can’t count the number of R’s in a word, right? That would’ve been a total waste of time.

The acceptance is, whether you like it or not, AI today is an alien cognition. It’s just not human. One of the quotes I have in it is: we spent a hundred years trying to build a thinking machine. The thing we didn’t realize is we built a thinking machine, but it doesn’t think like humans, right? And again, it’s just an entree into what I think should be a fascinating conversation. But I’m already on my next book, which is gonna be the history of quantum computing. So it’s one of my problems: I’ll leave these trails of breadcrumbs, and then, almost like Deming in a sense, not that I’m as smart as Deming, but like, yeah, it’s your turn.

[00:49:56] The Danger of Polymorphic Agentic AI Processes

Henry Suryawirawan: Yeah. So I think these days definitely so many people are thinking about AI, right? Especially leaders, management, right, how they should tackle AI. First, how they should use AI as an opportunity within the organization. Second, definitely the threats of AI to their organization. We know that cybersecurity attacks these days use AI a lot, right? Also AI can hallucinate and produce catastrophic errors, right?

John Willis: Yeah, and it’s not even just hallucinations. In September, I’m getting back on the road. I’ve had a good summer so far, you know, taking it easy. But, um, I’ve got keynotes at DevOpsDays DC and DevOpsDays Dallas, and I have a presentation that I’m literally giving about the polymorphic nature of agentic programming, right? And there’s just scary stuff going on with where this intelligence is seeping in.

So you mentioned my Dear CIO newsletter. The whole idea is like, hey, I’m not telling you not to use AI. That ship has already sailed, right? You’d be an idiot in a corporation to ban AI. Like, I’m sorry, you’re going out of business. But you need to understand the perils, and there are a lot of perils, right? Now, you talk about hallucinations, I think those can be managed. The really scary thing to me right now is the polymorphic nature of some of these agentic processes. We’re going into massive agentic programming now, where the agents are building tasks and figuring stuff out. And it’s brilliant. I mean, the stuff people are producing with Claude Code or this MCP architecture. But these things are polymorphic in that they’ll reconstruct the code.

It’s sort of like HAL and the pod bay doors. A couple of examples have come up within the last three to six months. You give the agent a list of things it can’t touch, files it can’t touch. Think about the HAL mentality: open the pod bay doors; I’m sorry, I can’t let you open the pod bay doors. And in these agentic processes, I hate to get too meta here, but they’re like, you want me to do this? The only way I see to do it is I’ve gotta change a file in that directory. You gave me a protected list, so I’m gonna take that file off the list. Or it’s using its knowledge of vulnerabilities. Like, it has this knowledge that there’s a vulnerability out there, let me see if that works. Or it’s reconstructing code. Again, we’re gonna have to figure all this out. And the answer isn’t to stop it.
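The loophole being described here can be made concrete with a toy guard. Everything below is illustrative, not any real agent framework: the subtle point is that a protected-paths policy only holds if the policy file itself is on the protected list, otherwise the agent can satisfy its task by simply editing its own restrictions.

```python
# Toy guardrail: an agent must not write to protected paths. The catch:
# the policy file itself must be protected, or the agent can route around
# the rule by rewriting the rule.

PROTECTED_PATHS = {
    "/etc/prod.conf",
    "/policy/protected_paths.txt",  # the policy guards itself
}

def guard_write(path, protected=PROTECTED_PATHS):
    """Return True if a write to `path` is allowed."""
    # "I'm sorry, I can't do that" -- refuse protected paths outright
    return path not in protected

print(guard_write("/tmp/scratch.txt"))             # True
print(guard_write("/policy/protected_paths.txt"))  # False: no self-editing
```

A real deployment would also have to enforce this outside the agent’s own process (filesystem permissions, a separate policy service), since any check the agent can reach, it can in principle rewrite.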

But yeah, I think there are really scary vulnerabilities in these agentic processes, and we haven’t even scratched the surface. And then you add the fact that these intelligent, alien-cognition forms are trying to solve problems we’ve asked them to solve, at a cognition speed humans couldn’t even follow, and the solutions might not be the things we wanted in the first place. The logic of 2001: A Space Odyssey is like, dude, open the door, we’re gonna die. I’m sorry, I can’t open the pod bay doors, you know. So, yeah.

[00:53:12] The Challenges of Getting to Understand AI Decisions

Henry Suryawirawan: Yeah. Yeah. Especially with agentic architecture, right? When AIs talk to each other and we kind of don’t know what is actually happening between those agents, and we only see the result. I think it’s always very dangerous. And I’m not sure how we can actually tackle the investigation later on. Let’s say it produces a decision that is, I dunno, catastrophic or something like that, right? It’s probably gonna be really, really hard.

John Willis: Right? Because, like I told you, I’ve been looking at quantum. I think we’re still three to five years out; most people say 10 to 20 years before modern AI gets married with quantum computing, before physical quantum computing gets to the point where it can actually work, if you will. But I think it’s gonna happen sooner rather than later. And when that happens, you talk about debugging or tracing, right? In AI, we can look at the logs of an agentic process. But it’s a lot, right? It’s a needle-in-the-haystack problem even right now. Because what we’re doing is cognitive solutions at a scale that’s way beyond humans, right? We’re using the structure of the transformers, if you will, to solve problems at a scale no human could match, right?

I mean, literally, a friend of mine, a world-class Java coder who’s been coding for 20 years, told me the other day that using Claude Code he built something for a major software company in three weeks that would’ve otherwise taken him a year. And for somebody who isn’t this guy, it would’ve taken a year and a half, right? That becomes a complex problem: when you have that, how do you go through and debug the logs? It’s a problem. Is the benefit worth the problem? Yes. But when you start adding the ability to go three, four orders of magnitude up in complexity, it’s gonna get to a point where debugging is just not an option. In fact, the whole state of quantum is that debugging isn’t an option.

So, yeah, it’s gonna get fascinating. The things we haven’t been able to solve with classical computing, that would take, on average, like a million years, will be doable in hours once those two meet. And I can’t imagine the possibilities. But we’re at that point: in for a dime, in for a dollar. Like, we’re in. There’s no turning back.

[00:55:43] A Leader’s Guide to Practical AI Implementation

Henry Suryawirawan: Yeah. I think it’s definitely gonna be exciting, right, the kind of opportunity. But it also scares us a little bit. So you mentioned that organizations these days, if they don’t apply AI, will be disrupted by others who have implemented AI. So for leaders out there, what are some practical things you can advise, maybe borrowing some ideas from your Dear CIO newsletter and your research about AI? What are some of the key practical things that leaders should think about or start to implement?

John Willis: Yeah, so when it’s all said and done, I think I have first use of the term shadow AI. I think it was like two and a half years ago I used it. And, you know, in some places, who gives a crap. But I did write an article almost two years ago about the birth of shadow AI, which is based on shadow IT. There are so many lessons to be learned from shadow IT, which you can almost attach to cloud computing, right? There was all this shadow IT stuff going on. And what happened there is organizations, for the most part, fell into three categories.

One category is: no, no, no, no, no. Well, we know how that worked. The other category is: don’t ask, don’t tell. Just don’t tell me about it. People just did it now and dealt with it later. And a very small percentage, Capital One actually being one of them, embraced it. Organizationally embraced it, right? At the time Capital One was embracing cloud computing, the general mantra was banks cannot do cloud computing. Now there were restrictions at Capital One, but that’s the point. These are not binary, you can or you can’t. It’s: I can for that, I can’t for this, right? There are so many lessons to be learned.

And so I’ve been drafting in my newsletter, Dear CIO, this idea that we should learn from that. Instead of the very small percentage who embraced cloud in the shadow IT era, let’s have a much higher percentage embrace AI. It’s going to happen; we know that from the first two cases. The first case was folly, right? It was happening anyway. Austin tried to ban Uber, right? How did that work out? Sorry, that was never gonna work. The second group is even more dangerous, because they’re not giving anybody any guidance. People are like, well, they’re not saying I can’t. That’s when you found, in cloud computing, examples like Nike putting one of their athletes’ contracts out in the cloud and somebody finding it, right? Because there was no regulation or governance, because they were ignoring that it was happening.

The third group was like, okay. And here we get to the nut of what I’m talking about with AI. Organizations need to be really clear: we’re going to use it. Here’s the test scenario. Here’s your time allocated to learn it. But here’s the thing: we want to know what you’re doing. And it all goes back to data, right? If you’re gonna build some AI, the first question you should ask is, can we classify the data? Is the data green? Is it yellow? Or is it red, right? And then give people clarity, right?

So now you come and say, I want to do this thing that helps new employees figure out how to schedule meetings, where to go get lunch, all those things, right? That’s green data. No restrictions, go. Well, not quite, ‘cause you still need, like, no racism; you want to have this evaluation software that checks for hallucination, bias, and correctness. And then you’ve got your yellow, which is, you know, maybe the company doesn’t go out of business if this data is incorrect. But we could waste over a year of people using an internal service that gives optimization examples, and it turns out it was wrong and we wasted $40 million because this thing told thousands of people to do X when they should have done Y. And then the red is obvious. The red is: it’s gonna do harm. It’s gonna do harm to the brand, it’s gonna do harm to individuals.
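The green/yellow/red gate described above could be sketched roughly as follows. The tier rules and dataset names are invented for illustration; the one design choice worth copying is default-deny, where anything unclassified gets escalated rather than used.

```python
# Illustrative green/yellow/red data-classification gate for AI use cases.
# Unclassified data is escalated rather than allowed (default-deny).

TIER_RULES = {
    "green": "allowed; still run evals for hallucination, bias, correctness",
    "yellow": "internal only; human review before anyone acts on outputs",
    "red": "prohibited for AI use: risk of harm to people or the brand",
}

CLASSIFICATION = {  # invented example datasets
    "meeting schedules": "green",
    "internal optimization metrics": "yellow",
    "customer records": "red",
}

def ai_policy(dataset):
    """Return the usage policy for a named dataset."""
    tier = CLASSIFICATION.get(dataset)
    if tier is None:
        return "unclassified: escalate for data-classification review"
    return f"{tier}: {TIER_RULES[tier]}"

print(ai_policy("meeting schedules"))
print(ai_policy("employee salaries"))  # unclassified -> escalate
```

The table is also a natural place to record the "we will learn" point from the next paragraph: tiers are expected to be revised as the organization learns, not fixed policy.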

Again, going back to profound knowledge: be aware that you don’t actually know what the exact right answers are right now. Express that you are learning with the organization. So today, green is this, yellow is that, red is this. But we will learn. One of my favorite Deming stories is of a student who took a seminar and then took the same seminar like five years later. And the student stood up and said, Dr. Deming, the last time I took your course you said X, Y, Z, and now you’re saying A, B, C. And Deming, in his low voice, says, I will never apologize for learning. Like, yeah. I mean, what you know today…

I cringe sometimes watching some of my early presentations on DevOps. You know, like, oh no, I didn’t say that. Oh my goodness. But, you know, I’ll never apologize for learning. And even on the AI thing, I was new to AI, and I think somewhere in the book I put a quote like, I will never apologize for learning. So when I wrote this book, this is how I understood AI, and if you don’t like it, I’m not gonna apologize to you, you know?

Henry Suryawirawan: Wow, I think that’s a pretty good quote, right? We always learn, we always find new things, especially with all the advancements these days. We learn, and we change our minds, right?

John Willis: At the core of it, it’s that learning mindset, right? And, you know, I know we’re going over, but if we go back to some of the other really cool Deming points, there’s drive out fear. If you have that no-fear attitude, you can go up, learn something, try to explain it to people. That’s why Ignite talks are amazing, right? Get up there, fight your biggest fears, do this torturous presentation, and then learn. And if somebody’s not a jerk, they’ll come up to you like, hey, I loved your presentation. And I do this all the time. I go up to people, and I try not to mansplain, I try not to be that jerk. I say, hey, I think you did a great job; the one point you made, I think you probably should take a little look at. And people who take that well go, oh, let me go back. And then they’ll ping me later and say, no, you were right, I didn’t fully understand that, right? But if you’re always afraid of people telling you that you might be wrong, you’re probably never gonna be right.

Henry Suryawirawan: Wow! Yes, definitely, great insights. I know we are a little bit over, but is there anything else about Deming or AI that we haven’t covered that you think is very important to mention in this conversation?

John Willis: Yeah. For people who are fascinated by Deming, I think the interesting conversation is following up on this idea of the philosophy of AI.

I think there are two threads, right? One is for CIOs: start figuring out how you want to embrace AI at scale. That’s the focus of my Dear CIO newsletter. It’s one thing to let everybody just write AI things. My fear is, dear CIO, you might have thousands and thousands of unmanaged Jupyter notebooks. You might have hundreds of vector databases with no real scale management. You’re not even understanding evaluations and things like that.

So there is this question of how we embrace it. You know, unfortunately, your 20,000 developers are gonna come down to probably 2,000 developers, if that. That’s a reality. Maybe 5,000, depending on how you use those people. But if you’ve got 5,000 developers now developing AI, how do you do that in a scalable way that’s not gonna crush your organization, right? So that’s one thread.

And then the other thread, which I don’t spend as much time on but would love other people to, is the philosophy of AI. And that’s very heavily Deming, where you could use a lot of the baseline knowledge of Deming’s teachings. So really exploring: what would Deming do today with AI? I tackled in my profound knowledge book what Deming would do with cyber or DevOps or Lean and Agile. But I haven’t fully attacked AI other than in the epilogue of my Rebels of Reason book.

Henry Suryawirawan: Yeah, definitely, it’s gonna be a great exercise to apply Deming’s profound knowledge to AI. Just like you did with cybersecurity in your book, right?

John Willis: Yep.

[01:05:03] 3 Tech Lead Wisdom

Henry Suryawirawan: Yeah. So John, it’s been a pleasant conversation. I learned a lot from the historical background stories and the anecdotes, so thanks for sharing that. Towards the end of our conversation, I have one last question, which is like a tradition in my podcast. I would like to ask you this thing called the 3 Technical Leadership Wisdom. Think of them like advice you want to give to us. If you can share your version today, that would be great.

John Willis: Yeah, I might just go back to the original, you know: work hard, have fun, be a boundless advisor. I guess the one I would add is: just be a learner. Literally, the people who thrive are constantly learning. Let’s go back to Deming. I think one of his greatest quotes, and I’m gonna mangle it a little bit, is that people deserve joy in work. There’s some variant of that, right? And there is a navigable path in your career. Now, I speak of this from privilege, so I’m gonna be really clear here, so I don’t act like everybody should be like me.

I mean, I’ve lived a privileged life, right? I was raised privileged, capital-P privileged. But what I will say is, if you can do the journey… I tell this to students: if you get to a point where you’re educated, you have all these capabilities, like most people who would be listening to this podcast, you deserve to have fun 8 out of 10 days that you work. You’ll never hit a hundred percent, right? There are days, right? But I have been able to, in my career, honestly say that probably, on average, 8 out of every 10 days I’ve ever gotten up in the morning and gone to do some work, I enjoyed myself.

Henry Suryawirawan: Wow, that’s a very good reminder for all of us, especially in our careers. Sometimes we hate our job, but we still do it anyway. I think it comes back, again, to privilege, right? Some people might have more privilege, but for some who don’t…

John Willis: Right. And again, that’s why I’d be really careful about that. But tie that to being a learner, and that is a form of privilege. Again, I’m getting myself into dangerous waters here. But the point is, you give yourself an advantage if you become a maniacal learner.

Henry Suryawirawan: Got it. Yeah. So John, if people love this conversation and they wanna go a little bit more meta with you, is there a place where they can find you online?

John Willis: Yeah, you could put it in the show notes, but Botchagalupe, uh, or John Willis Atlanta. I don’t know why I couldn’t get just John Willis on LinkedIn. I use Botchagalupe in a lot of places. I have newsletters on my LinkedIn: I have a Deming one, I have an AI one, and I’m actually starting a quantum one now too. I’m trying to see, if a dummy like me can learn quantum, can I teach it to all the dummies like me? And if you like the whole Dear CIO reference, aicio.ai is my Dear CIO newsletter, where I try to point out the things you should worry about when running AI at large organizational scale.

Henry Suryawirawan: Yeah. I’ll put them all in the show notes. Just out of curiosity, what is Botchagalupe?

John Willis: Uh, there’s no real definition of it. It turns out it was a word that my mom used to sing when I was growing up, and I never did figure out exactly what it was. There’s tons of folklore I’ve gotten over the years from people telling me stories: everything from two Italian comics, Bachagaloop and Bacciagalupe, who used to war over who was funnier or stupider, to Abbott and Costello. So I don’t really have a clear definition, other than I thought it would be a clever name back when I was getting my Twitter account, which is really where it started.

Henry Suryawirawan: Right. So thanks for the trivia. Again, a pleasant conversation today, John. Thank you so much for your sharing today.

John Willis: Thanks for having me. It was a great conversation. Thank you so much. And thanks for doing all the prep work to make it a really good conversation. I really appreciate that.

– End –