#254 - Why Incumbents Will Fall: How to Build a Hyperadaptive AI-Native Organization - Melissa Reeve
“I believe that giant brands will fall because they won’t be able to pivot fast enough.”
Why do 80-95% of AI initiatives fail — and why is your organization’s structure to blame? Most companies are treating AI like a software upgrade, when it actually demands a complete rewiring of how work gets done.
In this episode, Melissa Reeve, author of Hyperadaptive and organizational change expert, shares a practical model for transforming legacy enterprises into AI-native organizations built to thrive — not just survive — in the age of AI. Drawing on her experience with the Toyota Production System, Scaled Agile, and deep research into leading AI adopters, Melissa argues that the real barriers to AI adoption are structural: Taylorist hierarchies, functional silos, and decision bottlenecks that organizations have never been forced to dismantle — until now. She introduces the Hyperadaptive model, a five-stage maturity path that gradually rewires how organizations operate, from establishing AI governance and identifying champions, to deploying agentic AI and organizing around customer value streams. Unlike past transformations, AI will compress both the strategy-to-execution and concept-to-delivery dimensions simultaneously — and the organizations that fail to adapt will be displaced by AI-native competitors rising far faster than Uber or Airbnb ever did.
Key topics discussed:
- Why AI-native startups will disrupt incumbents faster than Uber did
- The three structural barriers blocking real AI adoption
- How Taylorism is still sabotaging modern organizations
- Five capabilities every hyperadaptive organization must build
- Shifting from functional roles to AI-augmented value streams
- Why skipping governance foundations causes AI projects to fail
- Human as evaluator: how jobs evolve as agents take over tasks
- Using the triple bottom line to unlock AI’s full potential
Timestamps:
- (00:02:50) How Did Melissa’s Background in Lean and Agile Lead to the Hyperadaptive Model?
- (00:05:57) How Is the AI Revolution Different From Past Digital Transformations?
- (00:07:39) Will AI-Native Companies Disrupt Incumbents the Way Airbnb and Uber Did?
- (00:09:08) How Did the DevOps Model Inspire the Concept of Automated Execution Pipelines?
- (00:12:41) What Is a Hyperadaptive Organization?
- (00:14:10) Why Has AI Adoption Failed to Deliver Results in Most Organizations?
- (00:17:05) What Are the Three Structural Barriers to AI Adoption?
- (00:19:39) Why Is Taylorism Considered a Major Barrier to Becoming Hyperadaptive?
- (00:22:48) What Are the Five Capabilities Required to Become Hyperadaptive?
- (00:26:45) Why Does AI Make Age-Old Principles Like Lean and Agile More Relevant Than Ever?
- (00:28:49) How Will the Human-in-the-Loop Role Evolve as Agentic AI Takes Over?
- (00:32:52) How Should Organizations Start Transitioning from Functional Silos to Value Streams?
- (00:35:07) How Is AI Enabling Adjacent Competencies and Expanding Professional Roles?
- (00:38:43) Will AI Replace Workers or Unlock More of What Organizations Can Achieve?
- (00:41:52) What Are the Five Stages of Maturity for Becoming Hyperadaptive?
- (00:48:21) Why Do Most AI Implementations Fail When Organizations Skip the Foundation?
- (00:50:55) What Does Dynamic AI Governance Look Like in Practice?
- (00:55:20) How Does Kahneman’s Thinking Fast and Slow Explain the Human-AI Partnership?
- (00:58:07) How Can AI Help Organizations Optimize for People, Profit, and Planet?
- (01:00:24) 3 Tech Lead Wisdom
_____
Melissa Reeve’s Bio
Melissa Reeve is the creator of the Hyperadaptive Model and author of Hyperadaptive: Re-wiring the Enterprise to Become AI-Native. Hyperadaptive brings together process excellence, systems thinking, and the human side of AI integration to help leaders reimagine how their organizations learn and adapt.
Prior to leaning into AI, Melissa spent 25 years as an executive and Agile thought leader, which led to pioneering work in Agile marketing and her role as the first VP of Marketing at Scaled Agile and co-founding the Agile Marketing Alliance. She lives in Boulder, CO, with her husband, dogs, and chickens, where she enjoys hiking and gardening.
Follow Melissa:
- LinkedIn – linkedin.com/in/melissamreeve
- Website – hyperadaptive.solutions
- Substack – https://intel.hyperadaptive.solutions/
- 📖 Hyperadaptive – https://hyperadaptive.solutions/book
Mentions & Links:
- 📖 DevOps Handbook - https://itrevolution.com/product/the-devops-handbook-second-edition/
- 📖 Thinking, Fast and Slow - https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
- Toyota Production System - https://en.wikipedia.org/wiki/Toyota_Production_System
- Agile marketing - https://en.wikipedia.org/wiki/Agile_marketing
- Scaled Agile Framework - https://en.wikipedia.org/wiki/Scaled_agile_framework
- Peter Senge’s learning organization - https://infed.org/dir/welcome/peter-senge-and-the-learning-organization/
- John Kotter’s Change management - https://www.kotterinc.com/methodology/8-steps/
- DevOps - https://en.wikipedia.org/wiki/DevOps
- Lean methodology - https://www.atlassian.com/agile/project-management/lean-methodology
- Taylorism - https://en.wikipedia.org/wiki/Scientific_management
- Value streams - https://en.wikipedia.org/wiki/Value_stream
- Kanban - https://en.wikipedia.org/wiki/Kanban
- Retrospectives - https://www.atlassian.com/agile/scrum/retrospectives
- Retrieval-augmented generation (RAG) - https://en.wikipedia.org/wiki/Retrieval-augmented_generation
- Peter Senge - https://en.wikipedia.org/wiki/Peter_Senge
- John Kotter - https://en.wikipedia.org/wiki/John_Kotter
- Frederick Taylor - https://en.wikipedia.org/wiki/Frederick_Winslow_Taylor
- Gene Kim - https://itrevolution.com/author/gene-kim/
- Clayton Christensen - https://en.wikipedia.org/wiki/Clayton_Christensen
- Daniel Kahneman - https://en.wikipedia.org/wiki/Daniel_Kahneman
- ChatGPT - https://en.wikipedia.org/wiki/ChatGPT
- Copilot - https://en.wikipedia.org/wiki/Microsoft_Copilot
- Gemini - https://en.wikipedia.org/wiki/Google_Gemini
- NotebookLM - https://en.wikipedia.org/wiki/NotebookLM
- IT Revolution - https://itrevolution.com/
- Agile Marketing Alliance - https://agilemarketingmanifesto.org/alliance/
- Airbnb - https://en.wikipedia.org/wiki/Airbnb
- Uber - https://en.wikipedia.org/wiki/Uber
- Facebook Ad - https://www.facebook.com/business/tools/ads-manager
- Google Ads - https://business.google.com/aunz/google-ads/
- Unilever - https://en.wikipedia.org/wiki/Unilever
- Ping An Insurance - https://en.wikipedia.org/wiki/Ping_An_Insurance
- Anthropic - https://en.wikipedia.org/wiki/Anthropic
- Enterprise Technology Leadership Summit - https://itrevolution.com/events/
Tech Lead Journal now offers some swag that you can purchase online. The swag is printed on demand based on your preference, and will be delivered safely to you anywhere in the world where shipping is available.
Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once your order arrives.
[00:02:02] Introduction
Henry Suryawirawan: Hello, everyone. Welcome back to another new episode of the Tech Lead Journal podcast. Today, I have with me another IT Revolution author, Melissa Reeve. She’s the author of the book titled Hyperadaptive. So this is kind of like rewiring the enterprise, you know, the working model and structure, to get us ready to work in this AI era. So I think we all know about the pace of change that AI has brought to the world as of this day. So I think today we’ll talk a lot about how we can cope with that and how organizations can actually thrive in this AI era. So Melissa, thank you so much for your time. Looking forward to this conversation, because I’m sure all of us here are still kind of a bit clueless.
Melissa Reeve: Thanks for having me on the show and it’s a pleasure to be here with you today.
[00:02:50] How Did Melissa’s Background in Lean and Agile Lead to the Hyperadaptive Model?
Henry Suryawirawan: Right. Melissa, in the beginning, I have taken a look at your background, right? So I think you have dealt a lot with, you know, changing culture, transformations, and things like that, even back when you studied the Toyota Production System. Maybe tell us a little bit more about how those experiences actually shaped who you are now, and how you relate those experiences to the current AI era.
Melissa Reeve: Yeah, I’d like to say that the impetus of the book, or the start of the book, happened on the Toyota factory floor in Tokyo. And it was there where I saw firsthand the Toyota Production System in action. And I saw how one little change to the system could have an outsized effect. And even though I continued my career as an executive with a specialization in marketing, I always had this systems view. And I didn’t necessarily have the words around growth mindset or systems thinking or agility for a long time, but that’s exactly what was happening in terms of the way that I led. And in 2011, I did come across something called Agile marketing, and I thought, oh, this feels much more natural, a much more natural way of working. Being able to take things in small batch sizes and experiment with them, that felt very, very natural.
So then, when I landed at Scaled Agile, which is the provider of the Scaled Agile Framework, all of a sudden I was exposed to the world’s experts in not only things like Agile and scaling Agile, but the fellows who worked there had a really rich understanding of everything from Peter Senge’s learning organization to John Kotter and change management and leading organizations through change. So I had a front row seat to not only successful implementations, but also what was happening in failed implementations. All of that came to fruition when I was running the Agile Marketing Alliance and continuing to spread the good word of Agile to marketing audiences, when ChatGPT came along. And it just sucked all of the oxygen out of the room. And it really was all anybody could talk about.
So I pivoted to really think about AI, and I had this notion in my head of what I was calling automated execution pipelines, and that’s just the end-to-end execution of a workflow. The light bulb that went off for me was: this feels very much like DevOps. And I’m sure you’re gonna ask more about the book, but that’s my history up until that moment, and everything that led me to this a-ha moment.
[00:05:57] How Is the AI Revolution Different From Past Digital Transformations?
Henry Suryawirawan: Very interesting, you know, insight into how you came up with this book in the first place, right? So I think one question that I would like to ask you first. You have seen a lot, you know, in the Toyota Production System, which back then was probably more like the Lean methodology, and you have dealt with, you know, Agile marketing, Agile transformations, and things like that. Some people also might have heard about the digital transformations that happened. What made those transformations actually different from the AI transformation that we are currently having now?
Melissa Reeve: Yeah. I think speed is a primary differentiator. I think about digital transformation really taking hold around 2008, when we saw the advent of things like Airbnb and Uber really threatening the existence of other giants, whether that was Marriott with Airbnb or taxis with Uber. But it still played out over many years. And I feel like AI is much more intense in terms of its speed. And these AI-native companies will rise much more quickly, I think, than the digital-native companies did. I think the other big piece is that digital transformation was really contained, for the most part, to technology and to IT. And AI, I believe, is going to hit every part of the business. And because of that, it will finally be the catalyst for changing the way we actually organize the business and the operating model.
[00:07:39] Will AI-Native Companies Disrupt Incumbents the Way Airbnb and Uber Did?
Henry Suryawirawan: Wow! So yeah, indeed, the speed of change is quite intense. Like maybe every few weeks we hear about new inventions, new advancements of, you know, maybe the state-of-the-art models or whatever that is. So yeah, it feels more intense now, especially for leaders who are kind of new to all this AI craze. I think the level of stress to keep up with the pace of change is definitely very frightening.
So you mentioned digital transformation back then. There were disruptors like Airbnb, Uber, and all that, and we all know what has happened since then, right? They have become giants, big enterprises. Do you think AI-native companies will be doing the same kind of disruption? Like would…
Melissa Reeve: Absolutely.
Henry Suryawirawan: … that be, in fact, the biggest danger for incumbents, meaning they have to act now?
Melissa Reeve: That’s right. Yeah. And that was the impetus for the hyperadaptive model: existing organizations are what I call linear organizations. And we’ll explore this more. But that’s different than a hyperadaptive organization, which is one that’s more or less AI-native. Their ability to sense and respond in near real time is much greater because they have fewer handoffs, they have fewer layers of hierarchy, they’re able to execute more quickly. So I do believe that giants will fall, giant brands will fall, because they won’t be able to pivot fast enough.
[00:09:08] How Did the DevOps Model Inspire the Concept of Automated Execution Pipelines?
Henry Suryawirawan: Wow! So quite frightening, I guess, for some of us here who are used to this linear organization model. So maybe, before we go into the book, I’m a bit curious about the first idea when you came up with this concept, right? You call it the automated execution pipeline. Were you referring, in the very beginning, more towards a marketing pipeline whose process you could automate? And how has it since evolved into this hyperadaptive thing?
Melissa Reeve: Thanks for asking. And the original automated execution pipeline was, in my mind, an AI agent executing a marketing campaign end-to-end: from ideation to the copywriting to developing an ad, all the way to deploying it on a platform, especially a digital platform like Facebook Ads or Google Ads. But the spark that I had was: we’ve seen this before, and we saw it with DevOps and the automation of the software delivery pipeline. So we went from having an individual who did the coding, to an individual who did the testing, to an individual who put it into production, to all of a sudden being able to deploy hundreds if not thousands of times a day for the most mature DevOps implementations. And so I went back to the DevOps Handbook, and that was my connection with Gene Kim and IT Revolution. I had read it before when I was with Scaled Agile, when we started to explore the DevOps space, and I had been to the DevOps Summit many times before it transitioned into the Enterprise Technology Leadership Summit. But I went back, and I said, well, what lessons can we learn from DevOps that we can now apply to this moment?
And the biggest shift that I saw was that we went from having people do a lot of those individual tasks, like I’m gonna manually do this test or I’m gonna manually push this into production, to building, monitoring, and maintaining automations that did those parts of the software delivery pipeline. And I thought, a-ha, that’s it. That’s the pattern that we’ll most likely see as we introduce these agents. And yes, they might be able to build themselves, but we probably want some level of telemetry and monitoring. And because AI capabilities don’t stand still, we’ll have to maintain them. We’ll have to continually update them as the capabilities continue to expand. And so I took that concept, and then I also used AI deep research to surface the patterns of leading companies around the world who were already integrating AI into their organizations.
And then the third bucket was research on organizational change patterns — Peter Senge, John Kotter, Clayton Christensen — and how we could take the lessons learned from digital transformation, including failed digital transformations, and weave those three things together: the DevOps, the research, and the success patterns of leading organizations, into a model that people could follow. And that’s really the underpinning of the book and the hyperadaptive model.
[00:12:41] What Is a Hyperadaptive Organization?
Henry Suryawirawan: Thank you for sharing, you know, the kind of origin story of how the book all started, right? So maybe some people are already kind of curious when they hear about this hyperadaptive. Maybe you can start by defining what you mean by hyperadaptive organizations?
Melissa Reeve: Yeah. So I started to describe it a little bit. And I wanna dive into that notion of a linear organization. So you have strategy to execution, with a lot of layers of hierarchy, and then you have concept to delivery, with handoffs and delays as people work on their specialized area. And AI compresses both of those dimensions. And AI-native organizations, because of this compressed structure, are more quickly able to sense and respond and react in near real time. And that’s the state of hyperadaptivity that I think we’re headed toward, as the learning loops compress, as execution compresses. And it will be a competitive differentiator for organizations to have those capabilities. And things like AI-assisted decision-making. And even organizing around value: instead of being organized functionally, organizing around value streams. I think those types of things will be very much competitive differentiators going forward.
[00:14:10] Why Has AI Adoption Failed to Deliver Results in Most Organizations?
Henry Suryawirawan: You know, taking action in real time, I think we all aspire to doing that, especially in big organizations where they have lots of people and, you know, lots of hierarchy. Like everything seems to take a lot of time, if you imagine working in those companies, right? Plus the handoffs and maybe the misalignments and miscommunications that could happen. So it’s definitely very interesting how AI could compress all those, you know, from strategy to execution and delivery. But we have seen, maybe in the past one year or so, despite a lot of these AI advancements, what do you think the adoption of AI really is in the industry so far, especially for those linear organizations?
Melissa Reeve: Yeah, so we’ve seen some pretty consistent patterns. And it started with confusion. AI is so big and so amorphous, so hard to pin down, that a lot of executives were frozen. And what we saw is the board of directors putting pressure on the C-suite to go do something with AI. And then the C-suite would tell people under them to go do something with AI. And then people defaulted to treating AI like a software installation. And what that means is they bought the licenses for Copilot or ChatGPT or Gemini. And then they invested in a little bit of training, like we’ve always done training: video libraries or a two-hour session. And then they told people in the organization to go play with AI. Unfortunately, it hasn’t worked. So what organizations are waking up to now is that AI isn’t just a software installation, that it requires deeper exploration. AI learning is what I call social learning. We learn from each other, so the learning patterns look different. And it requires a different mindset than just updating a piece of software. And I feel like that’s the path that I’ve seen many organizations take. And now they understand that they need to invest in infrastructure to support the rollout of AI.
Henry Suryawirawan: Yeah, definitely. I mean, I can also borrow from my experience, right? I’m a bit confused as well, like how can we actually implement that in a big part of the organization. You know, talking about software development, it’s probably more straightforward. We can use a lot of these coding assistant tools and they help a lot, right? We can see the benefits. But making a transformational change in organizations is probably, like, very confusing, and some people are probably paralyzed by the sheer amount of change, not to mention the cultural change that might happen as well because of this.
[00:17:05] What Are the Three Structural Barriers to AI Adoption?
Henry Suryawirawan: So I think you also mentioned in your book, which I kind of like, that there are three kinds of barriers that maybe hinder some of these organizations from adopting or integrating AI. Maybe you can elaborate a little bit on what these barriers are, so that people can at least conceptualize them?
Melissa Reeve: Sure. So information friction is definitely a barrier. And think about even something as simple as AI governance or an AI council. You have AI that’s moving so, so quickly, and that means the rules are changing. When AI was first rolled out, the data that was input into AI became owned by the AI company. And that scared a lot of enterprises because they didn’t want their data in these large language models. Well, that quickly changed, but the perception that the data was not secure really remained. And that’s an example of information friction within the organization. Like there might have been a pocket of people who understood that the large language models had moved on and now protected enterprise data if you had an enterprise license. But disseminating that information throughout the organization is very, very difficult. And you need structures in place to create information flow in a more ongoing way. So that’s one example of a barrier to AI integration.
Decision bottlenecks are another barrier to AI integration. Who determines who gets to use AI, and what the use cases are? If I don’t feel safe using AI, I’m not gonna do it. I’m just gonna continue with the safe way of doing things.
And then functional boundaries. Maybe somebody over in accounting has created a great use case for AI, but getting that into other parts of the business becomes very, very difficult. And so those are some of the structural barriers that people see when they’re thinking about AI and trying to integrate it into their day-to-day work.
Henry Suryawirawan: Yeah. I can see all these barriers actually happening in many organizations, especially, again, big, large corporations where they have a lot of functional silos and different hierarchies, right? Information flow, decision making, and functional boundaries definitely become big barriers for them.
[00:19:39] Why Is Taylorism Considered a Major Barrier to Becoming Hyperadaptive?
Henry Suryawirawan: Another thing that I also find interesting about linear organizations is the concept of Taylorism, you know, Frederick Taylor, the management system that he brought up more than a century ago. So maybe tell us why this Taylorism is also one of the big barriers for organizations to become hyperadaptive.
Melissa Reeve: Sure. So Taylorism: Frederick Winslow Taylor, in 1911, wrote the first book on management theory. And he said that there are two classes: a management class and a laboring class. And it’s management’s job to find the one best way of doing something and impart that knowledge onto the laboring class. And you can imagine this. The world is assembly lines, and you have the managers figuring out how to best assemble materials. And then they share that knowledge with the people on the assembly line. And that’s what made the Toyota Production System so unique: it empowered the people who were closest to the work to actually give their input on what was best. But when you think about that hierarchical construct, it still exists in many organizations. The manager has the best answer, and it’s your job to take that and just execute with it. So that’s one part of Taylorism that’s blocking organizations.
The second part is the functional silos that emerged right after World War II. And this was the idea that in order to have a big global organization, you needed to put all the salespeople together and all the marketing people together, and that you gained efficiencies by doing that. And that might have been true in a world that operated much more slowly than it does now. But in today’s world, we’ve known that this has been a problem for a while, right? Everybody feels it in their work. You know how slowly things move across functions or up and down the hierarchy. But we haven’t really had a forcing function to change it. And I think AI is now the forcing function that will cause these embedded structures to reinvent themselves. And the people who don’t reinvent themselves will be struggling.
Henry Suryawirawan: Yeah, I mean, from my point of view, almost all organizations I can see are structured this way. You know, like you have senior executives, you have managers, and you have the people who are doing the work. And in fact, you know, talking about functional silos, I think predominantly every organization is still structured this way, right? I personally haven’t worked in, you know, a company that is structured differently. But actually, very interestingly, after I read your book, and also looking back at how other companies like, you know, Zappos, or some of these companies which seem to operate kind of differently, and the success stories that they have, I’m sure some of these can definitely help us adopt AI much more effectively.
[00:22:48] What Are the Five Capabilities Required to Become Hyperadaptive?
Henry Suryawirawan: So for this, you actually mentioned five different capabilities, right, for organizations to be hyperadaptive. I think you have named some of them. But maybe you can walk us through what these big five capabilities are that companies need to invest in, so that they can actually, you know, become much more hyperadaptive?
Melissa Reeve: Yeah, and I think what I love about these capabilities is that they supersede, or go above, the actual capabilities of the models, and so they’re much more durable in terms of things you can invest in. For example, augmented decision-making: using AI to make better decisions. That’s something that I think we’ll be doing today, tomorrow, three years from now, five years from now. And maybe AI will get better at making some decisions, but I think we’ll still want humans in the loop. And I think about all the ways that we shortchange decision-making today. So for example, if you’re doing financial scenario modeling, you might have a best case scenario, a worst case scenario, and a mid-range scenario. I think what AI will unlock is our ability to create 40 scenarios or 100 scenarios, and then, as we see reality unfold, we get a better sense of how the scenario might play out. And that’s what I mean when I talk about AI-augmented decision-making.
Value orientation is another one. Why are we doing AI? What value is it bringing to the organization? When we orient toward customer value, we’re using this powerful new tool in a way that shows and demonstrates ROI. And I actually believe that value orientation will extend into our new organizational structure. And that’s one where we actually organize around value streams. And value streams: just think of them as your end-to-end delivery of value. So if you’re a bank and you serve people who are new grads from college, that’s an example of a value stream. You’re delivering products and services specifically to that market, and in a value stream, you would have everybody needed to deliver that value. So it’s another durable capability that you can incrementally rewire yourself toward. Things like AI-powered sensing and responding, we touched on that. Continuous adaptation: building into your AI systems the ability to adapt over time.
And then, the last capability is integrated learning loops. And I think that this one has been around for a long time. When you think about, again, that Toyota Production System, the Kanban systems — that’s part of it — Agile, retrospectives, it’s all about capturing those learnings so we can continuously improve. I think the breakthrough with AI is that we’ll finally be able to close the loop. A lot of times, what I saw happen in Agile is that people would run their two-week sprints, they would get to the end, they would hold their retrospective. They would surface some learnings, and then those learnings would just sit there. They would never get actioned. But the opportunity we have now is for AI to not only capture some of those learnings, but then improve itself as it runs that process the next time. And I think it allows us to get much more structured in not only capturing the learnings, but also integrating them in a way that enables continuous improvement.
[00:26:45] Why Does AI Make Age-Old Principles Like Lean and Agile More Relevant Than Ever?
Henry Suryawirawan: I like it when you mentioned closing the loop, right? From, like, my very primitive view, some of these principles actually already exist, you know, things like feedback loops, right? You know, learning organizations, adaptation. They were mentioned a lot of times during those transformations, Agile transformations. Also, organizing around value streams is kind of like Lean principles as well. Like, I feel all these principles existed so many years ago, and probably now they’re kind of more amplified. Like, the need to make them so much more important is amplified because of AI. Is that the correct interpretation from, you know, my primitive sense of understanding?
Melissa Reeve: Oh, it is. And I believe that AI will unlock many of these things that we’ve been trying to do on a small scale, and really help us refine them. And we need to remember these roots. So like you said, the Lean and the value streams, absolutely, I’m not inventing anything new here. I’m saying we’ve known for a long time that that’s a more effective and efficient way to organize. And yet organizations as a whole have remained structured in functional silos. So the question becomes: how do we gradually shift the organization in that direction so it doesn’t feel so disruptive? Because what we know for sure is that we can’t shut one way of operating down and then just spin a new one up. And so we have to do this gradually over time.
Henry Suryawirawan: Yeah. I would assume many people would think about the effectiveness and efficiency of creating, you know, different value stream-based organizations or teams, right? Because there will definitely be duplicate roles, right? Especially when many people in the world now are talking more about, you know, downsizing, efficiency and all that. So it will definitely be interesting how, you know, organizations can transform gradually in that way.
[00:28:49] How Will the Human-in-the-Loop Role Evolve as Agentic AI Takes Over?
Henry Suryawirawan: I wanna touch a little bit on augmented decision-making. I think some people might already embed AI in their day-to-day workflow. You know, even for me, when asking questions now, I don’t go to Google much anymore; I go to Gemini more, to get much richer interactions and to be able to iteratively, you know, understand a problem. But I think many organizations and many people are not yet used to this model. And in fact, now with agentic AI, it’s not just a chat interaction anymore. The AI agents can do much more beyond that, you know, maybe doing some sensing and responding based on those outputs, right? And also asking you back, as a human in the loop. So tell us, how would this human-in-the-loop process evolve over time, and what would be your advice for people who are still not used to this kind of model?
Melissa Reeve: Yeah, I mean I think you have to think about the risk profile for your decisions, and also become more aware of the places AI already exists. So for example, I come from this marketing background, and things like dynamic bidding for ads have been around for a long time. That’s an AI algorithm making decisions around pricing and around purchasing ad space that doesn’t have a human in the loop. I mean, it does in that the human might set the ceiling and the floor, but in general the algorithm is operating between those two guardrails. When the stakes get higher, you never want AI to be the one making the decision to fire somebody. You’re never gonna want AI to deliver that communication. You never want AI to make a decision that would cost the organization millions of dollars. You want humans to stay in the loop.
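The guardrail pattern Melissa describes, where a human sets the floor and the ceiling once and the algorithm acts freely only between them, can be sketched in a few lines. This is a minimal illustration; the numbers and names are hypothetical and not taken from any real ad platform:

```python
def clamp_bid(model_bid: float, floor: float, ceiling: float) -> float:
    """Keep an algorithm's proposed bid inside human-set guardrails."""
    return max(floor, min(model_bid, ceiling))

# The human sets the guardrails once; the algorithm operates between them.
FLOOR, CEILING = 0.50, 3.00  # dollars per click (hypothetical values)

for proposed in (0.10, 1.75, 9.99):
    # Bids below the floor or above the ceiling are pulled back inside.
    print(clamp_bid(proposed, FLOOR, CEILING))
```

The point of the sketch is the division of labor: the human-in-the-loop contribution happens once, at the boundary-setting step, not on every individual decision.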
Now AI can do some analysis. It can make a recommendation. But you have to think about the stakes and really start to tease your decisions apart. I want to invite your listeners to really think about how their decisions are made today organizationally. Because what we often do today to reduce risk is have decisions go through layers in the hierarchy; that’s the way we reduce the risk. So I’ll propose a decision, I’ll have my boss check it, I’ll have that person’s boss check it, and we assume that if four people’s eyes have been on it then it must be an okay decision. As you’re thinking about AI-augmented decisions, I want you to think about how decisions are made right now and how you can start shifting that risk profile with the help of AI. If the only reason we’re having four human beings look at it is to reduce the risk and cover ourselves so we don’t lose our jobs, maybe there’s a different way to be thinking about that decision-making process.
Henry Suryawirawan: Wow, I really like those practical tips. I can see some people already understand how they can benefit from AI. AI has compressed so many capabilities and so much knowledge that is available out there. Arguably, it can be even better at understanding a concept, or at pulling in expert resources to vet your decision making and your ideas. And some people can already build different skills and personas, asking AI in different ways to shape the kind of thinking or output it responds with. So those are definitely practical tips for those of you who are still not used to this working model.
[00:32:52] How Should Organizations Start Transitioning from Functional Silos to Value Streams?
Henry Suryawirawan: Another thing that I wanna dive a little deeper into is the value orientation you mentioned. Many organizations are now structured in a functional way, like you said: sales, marketing, technology, and so on. How would organizations start to reshape themselves around value streams?
Melissa Reeve: Yeah, so in these large organizations in particular, there are usually areas of the business that are leaning in more. They’re more progressive than other areas of the business, and they’re probably already more advanced in their use of AI. These are the same areas of the business that you might target and say, hey, let’s see if we can start doing a small value stream here. And I’d really encourage organizations to experiment with this idea of a value stream on a smaller pilot scale, maybe a couple of pilots. What that allows you to do is see the impact of organizing this way. You’ll understand how roles start to change when you orient around value. You start to understand how some of the upskilling needs to change. What impact did it have on productivity? What impact did it have on your ability to deliver value faster?
And then as you start to learn these things, you spin up something I call the AI impact hub that’s specifically tasked with measuring the shift: what happens when you move from the functional hierarchy into value streams? You start to integrate those learnings before you attempt to do this on any sort of broad scale. And it’s only when we get to stage four, and we’ll dive into the stages, where we start to dabble with orchestrating those value streams, that we’re doing it at any sense of scale at all. So you can see how you can gradually grow into these new ways of operating.
[00:35:07] How Is AI Enabling Adjacent Competencies and Expanding Professional Roles?
Henry Suryawirawan: Yeah. I think the interesting part is definitely that you mentioned the roles change. Since I’m in technology, I can see the dynamism of roles has changed quite dramatically with the introduction of AI. Thinking about software development, product managers, and even designers, these roles seem to evolve a lot. For example, as an engineer, you can maybe do a little bit of product management now, and also a little bit of design. Product managers can do a bit of vibe coding, as they say, being able to create prototypes and use them to validate ideas and get more feedback. So outside of technology, have you ever seen such roles being blended simply because of AI?
Melissa Reeve: So I love what you’re touching on, which is something I call adjacent competencies. These are things we would do if we had the skills. When you think about Agile, one of its challenges was that it wanted fully staffed cross-functional teams. But if you only needed a designer 20% of the time, or you only needed the UX person 50% of the time, it was very hard to dedicate full-time employees to an Agile team. I think what AI does is start to fill in some of those gaps. So maybe you still have a full-time UX designer, but their job is to monitor custom GPTs and make sure things like custom GPTs or agents have the right context and are operating the way they’re supposed to, so that somebody like you could go and interact with that agent or that GPT and get reliable results. And that’s where I start to feel we extend ourselves in different directions.
In terms of what we’re already seeing, there was a great article just produced by Harvard that talked about exactly this phenomenon. They said that people who are using AI are starting to explore these adjacent competencies and doing more than they were before, when it was just “I have my job as a software developer, so that’s all I’m gonna do; I’m not gonna go into UX or prototyping.” They say people’s days are becoming more intense, because they’re actually doing more, because they’re able to do more. And I think it’s really interesting to think about how our roles change. Maybe it’s not about the function of being a software developer anymore. It’s about what outcomes we’re producing.
Henry Suryawirawan: Yeah. So I really like that, exploring the adjacent. Especially if many of your daily tasks can be automated, obviously you would want to explore other things, especially things you didn’t have the skills for before. I’m thinking about how, in the past, building a financial model would require a data scientist or people who could query the data. Now you can actually chat. Even the natural language chat interface itself is a door opener for people to start asking, be curious, and explore all these adjacent capabilities.
[00:38:43] Will AI Replace Workers or Unlock More of What Organizations Can Achieve?
Henry Suryawirawan: So I think this is definitely something people must try, in order to blend yourself into different areas of the organization. But the counterargument for many organizations is that AI will actually reduce the number of people they need. What is your view on this? Because I think many organizations are still thinking about downsizing: I don’t need a lot of developers, I don’t need so many manual roles anymore. What is your view on this?
Melissa Reeve: Yeah, my view is… I have this great graph and it has three circles on it. The left-hand circle is everything that an organization wants to do, and it’s very large. Think of it like the sun. In the middle is what the organization actually does. Think of that like the Earth; it’s a fraction of the size. We can only get so much done because of budget constraints and time constraints and people constraints. Then what AI starts to do is unlock value, so what we can actually get done becomes one of the bigger planets, like Jupiter. And I think the forward-looking organizations will see that opportunity. They’ll say, we have good people; let’s unlock more value with those good people. Unilever is a great example of a company that thinks this way. They have a program that used to be called Flex and is now called U-Work. It’s understanding that they’ve hired good people, those good people have a purpose they want to fulfill, and they have a set of skills. They use AI to match those skills with different opportunities within the organization. In this way, they start to envision their workforce not as one person doing a specific task for their lifetime but as a whole group of individuals who can be deployed in multiple ways to accomplish business goals. And I think that’s a much healthier way of looking at it than the productivity-shrinking mindset that seems to be predominant in our headlines.
Henry Suryawirawan: Yeah, I would also assume the people who think that way are still thinking within their functional silos, and in layers of decision making. Because, as you mentioned about compressing all this, organizing around value streams definitely makes sense. We may reduce certain roles, but at the same time, these people can be transformed into much more than just doing specialized work. So thanks for highlighting that. For people who still think AI will just reduce the number of people we need, I guess you have to think again, because all organizations want so many different things. And with AI, maybe they even want more; they are more ambitious.
[00:41:52] What Are the Five Stages of Maturity for Becoming Hyperadaptive?
Henry Suryawirawan: So let’s go into the stages. You outlined in your book the five different stages, how company in terms of maturity adopt this, you know, AI and becomes hyperadaptive. So maybe walk us through what are these stages so that people can identify where they’re at the moment.
Melissa Reeve: Sure. So we start with stage one and I don’t think there’s any magic in these stages, but I think it is helpful to have a shared language. There’s some magic in the other parts of the model, but the stages really are just reference points. So we think about laying the foundation. We’ve got to put some governance in place, we’ve gotta make sure that governance is dynamic. We start to identify our AI leads, because we know from John Kotter that we have to identify champions, and those champions will help change the culture within the organization.
In stage two, we’re all about process optimization. This is task augmentation with AI. And it’s so important for the organization to really take a look at its processes and figure out where AI can be plugged into those processes. The discipline of it is incredible, but we’re also teaching the organization how to analyze processes. And we need to do this because we know that processes will continue to reinvent themselves over and over as AI capabilities continue to grow. So that’s all about stage two. We also fire up what I call AI activation hubs. This is a network of hubs of people who start to own AI integration in their area of the business. One of the mistakes I see organizations making is expecting everybody to keep track of AI advancements. It’s just not realistic; we all have day jobs. So let’s task groups of people with keeping track of these AI advancements and do what I call atomizing the learning: break the learning down into little bite-size pieces. Let’s say Claude Opus 4.6 just got released. It would be the job of this AI activation hub to understand how that impacts your software developers, and then send that information to the AI leads, who then send it on to the frontline people who are either building agents or coding. That’s the hard work we’re doing in stage two. With stage three, we start to really fire up agentic AI, and we’re automating entire parts of workflows. And I think it’s important to articulate that I believe jobs are made out of tasks, processes, decisions, and human interactions. So when we say jobs are going away, it kind of bothers me, because we treat jobs as monoliths when we do that. But the reality is that AI might automate parts of jobs, parts of processes, and parts of decisions, but probably not the human interactions.
And so the work in stage three is taking those puzzle pieces, breaking them apart, and seeing where they fall back together with agentic AI. And like we talked earlier, I believe that we’re going to go from people doing a lot of the tasks to building, monitoring, and maintaining the agents that do the tasks.
So we take a pause in stage three where we fire up those AI impact hubs that I mentioned, to really look at the impact of agentic AI on the organization. Stage three is also where we start to fire up our value stream pilots, and we do that learning before we move into stage four, which is really starting to scale agentic AI across the organization. We’re really rewiring the roles in stage four. We’re expanding our value stream experiments.
And then in stage five, we’re starting to experiment with an entirely new way of operating: orchestrated value streams. We fire up our telemetry network so that these AI agents are all learning from each other. We start to operate with a new talent model, one that views a career not as a ladder but as a portfolio of experiments or experiences. And we even start to implement different funding models, where we might be funding value streams, funding experiments in what I call innovation circles, and then funding a very stable layer of infrastructure, things that are pretty predictable. And you can see how radically the organization starts to change and shift as we move through these stages.
Henry Suryawirawan: Wow, very inspiring indeed, if we can aspire to reach stage five. Have you identified companies that are operating at something like stage four or stage five?
Melissa Reeve: I do call stage four and five the emerging frontier, so these are early signals that we’re getting from the market. The company that I profiled in stage five was Ping An Insurance out of China. They started their AI journey in 2008. That’s when they first got their data infrastructure in place, which now empowers them to be the AI powerhouse that they are. They are insurance, healthcare, and finance all interwoven together, and they’re using the power of that ecosystem to create unique customer experiences.
Henry Suryawirawan: Wow! So for people who aspire to build this kind of hyperadaptive capability, definitely check out and refer to the Ping An Insurance case study. I think Ping An has also been mentioned a couple of times in previous digital transformation books. It’s very exciting to hear a lot more about how they implement AI within their organization.
[00:48:21] Why Do Most AI Implementations Fail When Organizations Skip the Foundation?
Henry Suryawirawan: So I wanna touch on a shortcut that some leaders might be taking. You mentioned at the very beginning that people still think AI is a tool, a technology thing that they just embed and integrate into the organization. And hence they jump straight into stage three, applying agentic AI, task augmentation, or automation. So tell us, what is the danger if they don’t actually set up stages one and two? You mentioned a few things like appointing AI leads, the AI activation hubs, and so on. What are some of the dangers if leaders shortcut themselves to stage three?
Melissa Reeve: Yeah, we see it right now, right? We see it in the failure rates. I’ve heard 80% failure rates from the RAND Corporation, 95% failure rates from MIT. Sometimes it’s the simple disconnect that we see a lot between IT and the business: IT spins up an agent and the business isn’t ready to use it, or it doesn’t meet the business requirements. But I think what we’re doing is trying to push ahead to orchestration and to agents when we really haven’t built those foundations. We don’t really have good governance in place. We don’t have ways to update the organization. We haven’t done the hard work of creating psychological safety through what I call the AI North Star: what is the reason you are implementing AI? Is it AI just for AI’s sake, or are we trying to do something meaningful with it? Those are some of the mistakes I see leaders making with AI. And I get it, because there’s so much pressure out there to do something with AI. The Sam Altmans of the world, the Dario Amodeis of the world from Anthropic, they are out there every day making it seem like this is happening and you are missing out. But here’s the thing: AI is moving fast; labor markets move much slower.
Henry Suryawirawan: Yeah, they definitely make some bombastic claims out there: no more software engineers, everyone will just use AI. In fact, it hasn’t really happened at full scale. So for leaders who are thinking of shortcutting, that is probably a bit of a danger, right?
[00:50:55] What Does Dynamic AI Governance Look Like in Practice?
Henry Suryawirawan: And speaking about governance, you’ve mentioned governance a few times now. I’m sure executives all hear about the risks of adopting AI: hallucination, data leaks. Those things are sometimes in the news at a big scale, and that’s why people are probably more scared about it. So what do you think are practical governance things we can do in organizations such that we are not paralyzed into not adopting AI, but at the same time we don’t bear too much risk?
Melissa Reeve: Yeah, I like to talk about dynamic governance, and I like to talk about it in four different layers. When I think about traditional governance, I think about a committee that’s gotten together, decided the rules, and maybe meets quarterly. They do two things: they put the rules out on the intranet, and they might create some online learning that everybody in the organization has to take. Most people watch that learning at 3x speed, try to pass the test, and check the box: okay, I understand what the rules are. With AI, I think we have to create a much more dynamic system. So yes, you probably need a group at the top who’s establishing high-level guardrails. Again, Ping An Insurance has very robust ethical guidelines and guardrails around their AI usage, and we need that in place to show directionally where the organization is going. Those people need to be meeting on a very regular basis: every two weeks, every month. Not quarterly; that’s too slow.
And then you probably need a layer underneath them, maybe at the functional level, to interpret those guardrails for your area of the business, and really hash through any exceptions or any more granular detail you need to guide your area. Then your AI leads become your frontline lieutenants. They work with those functional groups to really understand what the limitations of AI are and what they translate to in the real world on the front lines. So if you’re working together in a team or a group, you could ask your AI lead, “Hey, I’m thinking about doing this with AI. What do you think?” and that person could get you guidance.
But I think even more powerful than that is a custom GPT or a custom engine that contains all of your guardrails, that anybody in the organization could query and say, “I’m thinking about doing these things with AI.” And maybe it’s a traffic signal, where the GPT responds and says, “That’s a red light; absolutely do not do that,” or “That’s a yellow light; you can do it, but consider tweaking it in this way,” or “It’s a green light; you’re free to go.” And even though I use the words custom GPT, it would have to be connected to something much more predictable, like a RAG setup or a contained database. But the advantage of doing governance that way is, one, it feels much more natural; people can ask about their specific situation. And two, it can be continually updated so that people always have the most current version, rather than going searching and maybe finding an outdated document from a quarter ago.
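The traffic-signal idea Melissa describes can be pictured as a small rules check: a proposed AI use is matched against the organization’s guardrails and answered with red, yellow, or green. In a real deployment this would sit behind a RAG pipeline over the actual policy documents; the guardrails, triggers, and wording below are invented purely for illustration:

```python
# Hypothetical guardrails. A real system would retrieve these from the
# organization's policy documents (e.g. via a RAG pipeline), not hard-code them,
# and would use semantic matching rather than keyword lookup.
GUARDRAILS = [
    ("customer pii", "red",
     "Never feed customer personal data into external AI tools."),
    ("hiring decision", "red",
     "AI may analyze, but humans make hiring and firing decisions."),
    ("external publishing", "yellow",
     "AI drafts are fine, but a human must review before publishing."),
]

def check_ai_use(proposal: str) -> tuple[str, str]:
    """Return a (signal, guidance) pair for a proposed AI use."""
    text = proposal.lower()
    for trigger, signal, guidance in GUARDRAILS:
        if trigger in text:
            return signal, guidance
    # No guardrail matched: default to green.
    return "green", "No guardrail matched; you're free to go."

signal, guidance = check_ai_use("Summarize customer PII with a public chatbot")
print(signal, "-", guidance)  # red - Never feed customer personal data ...
```

The design point is the one Melissa makes: the answer is specific to the asker’s situation, and updating the guardrail list updates everyone’s answers at once, with no stale documents floating around.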
Henry Suryawirawan: Wow! Again, good practical tips for when you want to scale up your implementation of AI. There are definitely a lot of questions, and I think even the leaders themselves are probably still catching up, because there are so many advancements out there, and keeping up with all of them is very difficult. I like the custom GPT idea you mentioned. I’m also thinking of, for example, the way you interact with NotebookLM: you can ground it with your guardrails and policies so that people can query in their own way, and maybe also meet them where they are. Because some people understand AI quite naturally, but some may think it’s like alien technology.
[00:55:20] How Does Kahneman’s Thinking Fast and Slow Explain the Human-AI Partnership?
Henry Suryawirawan: Another thing I liked reading in your book is how organizations would behave once they are more hyperadaptive, and you brought in the concept from Daniel Kahneman’s Thinking, Fast and Slow. I like that analogy. Maybe you can explain it to us.
Melissa Reeve: Sure. In Daniel Kahneman’s Thinking, Fast and Slow, there’s the fast brain, which is your instinct: you react to something, you process it. And then there’s the slow brain, which is more deliberate, mulling things over, thinking things through. And I feel like AI is really like the fast brain; it can process things very quickly. We still need the human to be the slow brain, to think about AI’s output. What I see jobs shifting to is from doing a lot of the tasks to evaluating the output of AI. And that requires a real level of critical thinking, because you’re trying to identify what AI is missing, what the holes in its logic or its reasoning are. We need to be very deliberate about this new emerging role of the human as evaluator of output. And I think where it gets scary for some people is that a lot of our identity is tied to the doing of the thing. It’s almost like we’re furniture craftsmen who have gotten so used to building beautiful furniture, and all of a sudden IKEA comes in. There’s a new type of furniture being built, and everybody still wants the craftsman furniture, but there’s a place for IKEA furniture too.
Henry Suryawirawan: Yeah, definitely. Software engineers are the same, right? We spent years understanding languages, syntax, and the nitty-gritty of how to write programs. But now suddenly you can just prompt and it will write the program for you. There’s definitely a little bit of an identity crisis happening here. Nevertheless, I think people still need to exercise their critical thinking, which I don’t think will go away, especially if there’s still a human in the loop in the process. Even though some people might think they can just outsource everything to AI agents, there will definitely be a risk there. Because we all know that always thinking in fast mode is not the best thing, especially for things that are high stakes and higher risk, where you would still want a human in the picture.
[00:58:07] How Can AI Help Organizations Optimize for People, Profit, and Planet?
Henry Suryawirawan: Another aspect of your book that I find really interesting is the triple bottom line, the 3Ps. Normally I don’t see that elsewhere; people talk about process, technology, and people. But you mention people, profit, and planet. So what do you mean by this triple bottom line? Do we as organizations need to aspire to do that more?
Melissa Reeve: So the triple bottom line has been around for a long time. It’s this notion that we as enterprises, as corporate citizens (certainly there are governments, but I feel like enterprises are a global influence, and in some ways they lead the world), have a responsibility to think not only about profit but also about our impact on people and our impact on the planet. But when you think about human cognition, it is very difficult for a human to hold all three of those at once. It’s hard enough to optimize for the profit part, much less think about second- and third-degree impacts on the planet or what we’re doing to humans. And so my hope is that AI can help unlock some of that additional thinking for enterprises and corporate citizens. This is, again, the circle getting bigger: oh, we now have the capacity to focus on some of these other things in a more meaningful way. That’s where I land the book, and that’s really my hope for humanity.
Henry Suryawirawan: Wow, that’s definitely a lovely message, lovely advice for all of us, right? So I think if we can aspire to do something more ambitious, much more bigger, right? Hopefully we can, you know, invent much more thing and, you know, solve a lot of human problems that are currently probably still, you know, unsolvable.
So Melissa, we have covered a lot of things. Is there anything else you think I’m missing that we should talk about before we wrap up with the last question?
Melissa Reeve: I, we’ve, it’s been a wide ranging conversation. Really appreciate being here. I think you’ve got one last question and I’d love to close on that.
[01:00:24] 3 Tech Lead Wisdom
Henry Suryawirawan: Yeah. So my last question is a tradition in my podcast; I call it the three technical leadership wisdoms. You can think of it as advice you wanna give to the listeners. If you can share yours, that would be great.
Melissa Reeve: Yeah. I think the first thing is really thinking about installing AI as installing a new system. So, systems thinking. Instead of optimizing for an agent, really think through how you create an interconnected system that looks different from what we have today.
I think the second thing, which we’ve touched on, is thinking through what’s possible with AI, not just the minimum. Not how you can keep doing what you do today, but what new capabilities AI unlocks for you and your business. There are always more things we want to do.
And the third thing is really embracing a growth mindset. We have these concerns around our identity, and I actually think the concerns get bigger the farther up the hierarchy you go, because there’s more at stake. You’ve worked your entire career to get to this position, you have great financial benefits, and now you’re potentially being asked to change. I think we’re underinvesting in the AI literacy of our C-suite. Of everybody in the organization, those are the people who need to invest in themselves at this moment to really understand AI’s capabilities and its limitations, separating hype from reality. Because it’s your decisions that will reverberate throughout the organization. And it’ll be your decisions that determine if you’re going to be a giant that falls or one of the winners in the AI revolution.
Henry Suryawirawan: Again, lovely wisdom and a lovely message for all of us. Thank you so much for sharing those. So Melissa, if people love this conversation and want to check out more resources or ask you questions online, is there a place where they can find you?
Melissa Reeve: Thanks so much for the ask. So my website is hyperadaptive.solutions. And feel free to connect with me on LinkedIn, melissa.mreeve. And I’d love to connect with anybody in your audience.
Henry Suryawirawan: Right. Thank you so much for your time today. I think this book is one of those rare things people can check out, especially if you are into doing transformations with AI and building new types of culture. I’m sure people are still scrambling to figure out how to adopt and implement AI effectively in their organization, and not just talking about downsizing and optimization. So definitely check out Melissa’s Hyperadaptive book from IT Revolution. Thanks so much for being here today, and I’m looking forward to our next conversation.
Melissa Reeve: Thanks so much for having me. Take care.
– End –
