#219 - Why Learning Systems Thinking is Essential in Tech - Diana Montalion
“Relationships produce effect. Systems thinking is understanding the effect and being able to architect for the kinds of effects we want in a system.”
Tired of feeling like your team is stuck in a cycle of frustration and miscommunication? What if the biggest blocker in your tech career isn’t your code, but your thinking?
That’s the core premise of Systems Thinking, and in this episode, Diana Montalion (author of “Learning Systems Thinking”) shares the practical insights and mental models to help you make that essential shift.
Key topics discussed:
- What systems thinking is and its core principles
- The difference between linear thinking (which we need) and systems thinking (which we’re missing)
- Why building a metaphorical “car boat” is a failure of “conceptual integrity” and how to avoid it
- How to break free from a “change-my-mind” culture and improve our collaboration
- The critical skill of metacognition: why you must understand your own thinking before you can influence others
- Practical ways to foster collective systems thinking and bridge the gap between Product and Tech
- Using modeling and visual tools to create alignment and solve the right problems
- How AI’s inability to handle true inference makes human systems thinking more valuable than ever
Whether you’re a software engineer, architect, team leader, or anyone tackling complex problems, learn why your technical skills alone are not enough and how a shift in your thinking can revolutionize your work and career.
Timestamps:
- (00:02:23) Career Turning Points
- (00:04:35) Writing Learning Systems Thinking
- (00:08:53) Definition of Systems Thinking
- (00:13:39) Systems Thinking vs Linear Thinking
- (00:19:31) Definition of System
- (00:24:13) Conceptual Integrity
- (00:30:02) Practices to Improve Our Systems Thinking
- (00:36:21) Metacognition and Self-Awareness
- (00:44:42) Practices to Improve Our Collective Systems Thinking
- (00:53:04) Collaboration with Consent
- (00:55:29) The Importance of Modeling
- (01:02:20) AI Usage and System Thinking
- (01:11:04) 3 Tech Lead Wisdom
_____
Diana Montalion’s Bio
Diana Montalion is a systems architect, learning facilitator, and founder of Mentrix Group, with over 20 years of experience delivering transformative software initiatives for organizations like Stanford, The Gates Foundation, The Economist, and The Wikimedia Foundation. As the author of Learning Systems Thinking: Essential Nonlinear Skills & Practices for Software Professionals (O’Reilly), she empowers tech professionals to navigate complex systems through practices like systemic reasoning, metacognition, and collaborative modeling.
Follow Diana:
- LinkedIn – linkedin.com/in/dianamontalion
- Website – montalion.com
- Twitter – @dianamontalion
- Mastodon - @diana@hachyderm.io
- Bluesky - @mentrix.bsky.social
- Mentrix Group – https://mentrixgroup.com/
- SystemCrafters Collective – https://mentrix.systems/
- 📚 Learning Systems Thinking – oreilly.com/library/view/learning-systems-thinking/9781098151324/
Mentions & Links:
- 📚 Thinking in Systems – https://www.amazon.com/Thinking-Systems-Donella-H-Meadows/dp/1603580557
- 📚 The Mythical Man-Month – https://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959
- 📚 Animal, Vegetable, Junk – https://www.amazon.com/Animal-Vegetable-Junk-Sustainable-Suicidal/dp/1328974626
- 📚 Team Topologies – https://teamtopologies.com/key-concepts
- Systems thinking – https://en.wikipedia.org/wiki/Systems_thinking
- Conway’s Law – https://en.wikipedia.org/wiki/Conway%27s_law
- Architectural decision record (ADR) - https://adr.github.io/
- Event storming - https://en.wikipedia.org/wiki/Event_storming
- North Star model - https://amplitude.com/blog/product-north-star-metric
- Kubernetes – https://en.wikipedia.org/wiki/Kubernetes
- Kafka Streams – https://kafka.apache.org/documentation/streams/
- Mark Bittman – https://en.wikipedia.org/wiki/Mark_Bittman
- Donella Meadows – https://en.wikipedia.org/wiki/Donella_Meadows
- Fred Brooks – https://en.wikipedia.org/wiki/Fred_Brooks
- Robert Pirsig - https://en.wikipedia.org/wiki/Robert_M._Pirsig
- Cat Morris – https://www.linkedin.com/in/catmo/
- Mel Conway – https://en.wikipedia.org/wiki/Melvin_Conway
- Matthew Skelton – https://uk.linkedin.com/in/matthewskelton
- FedEx – https://en.wikipedia.org/wiki/FedEx
- QCon – https://qconferences.com/
- UML - https://en.wikipedia.org/wiki/Unified_Modeling_Language
- Miro - https://en.wikipedia.org/wiki/Miro_(collaboration_platform)
- Jira - https://en.wikipedia.org/wiki/Jira_(software)
- Fediverse – https://en.wikipedia.org/wiki/Fediverse
Check out FREE coding software options and special offers on jetbrains.com/store/#discounts.
Make it happen. With code.
Get a 45% discount for Tech Lead Journal listeners by using the code techlead24 for all products in all formats.
Tech Lead Journal now offers swag that you can purchase online. Each item is printed on demand based on your preference and delivered safely to you anywhere in the world where shipping is available.
Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once you receive yours.
Career Turning Points
-
When I started, I pushed code to a monolith, worked on multiple teams, and we were pushing code together. We had merge conflicts regularly. That was very frustrating and it was complex.
-
Then the world around us started to change, and digital information became ubiquitous. Now software had to talk to other software, and infrastructure became code. Everything became about relationships—designing relationships. Yet, as an industry, we don’t have great relationship skills, either with tech or with people. And I was still being asked to deliver features—feature-driven engineering.
-
The reason I became interested in this subject is that we were still designing software in a world where we worked in systems of software. It was painful. The outcomes were not great, and the work got harder. So I really wanted to figure out what helps.
Writing Learning Systems Thinking
-
I was often mad and frustrated, getting caught up in the noisy back channeling—just complaining about how product is bad to us, leadership is bad to us, everybody’s bad to us, nobody understands us, and isn’t this terrible. Then I had this radical idea: what if I tried to be part of the solution and provide more signal than noise?
-
This really sent me down a rabbit hole of exploring the system science in general, and then specifically how it applies to the challenges we have.
-
What amazed me is how much we have in common with, say, agriculture. Agriculture, especially in the US, became monolithic. Then you have all these independent farmers trying to rebuild ecosystems, growing different things in harmony, and solving problems in a systemic way.
-
When O’Reilly asked me about writing the book, the challenge was: can we take Donella Meadows’ “Thinking in Systems” and the other things we know about systems, natural and mechanical, and apply them to our challenges in ways that we can use? That’s really difficult, because we don’t know what we don’t know. So I got a lot of pushback initially for talking about thinking. It’s too abstract.
-
Aren’t we knowledge workers? Don’t we think for a living? And also, code is abstraction. We’re not planting trees—we’re literally writing in a language we made up, on machines we made up, doing things in hardware we built. Everything we do is abstraction. So adding a little more helpful abstraction doesn’t seem like the worst sin ever committed. But it’s a challenge. It’s a challenge to find the language, because we don’t really have a language to talk about this stuff.
Definition of Systems Thinking
-
My straightforward answer is relationships produce effect. And systems thinking is understanding the effect and being able to architect for the kinds of effects we want in a system.
-
Donella Meadows says that we think that because we understand one, we must understand two—because one and one make two. But we forget that we also have to understand ‘and’. ‘And’, to me, is the art and science of systems architecture. When you have two microservices, and you design the interaction between them, you get a third thing—whatever it is they do together, they can’t do alone. And yet we are very linear when we design these relationships.
-
Fred Brooks says that most software systems are many good but uncoordinated ideas. This is true of every model I have ever made of a software system. These might be good, but they’re duct-taped together. Systems thinking is about understanding how all these relationships deliver an outcome.
-
The challenge, though, is that systems thinking is defined differently by different people. Many do not like that there isn’t one answer. It really depends on what kind of system you’re looking at. How that system needs systems thinking will govern what you prioritize about systems thinking.
-
Pattern thinking, which is sort of adjacent to systems thinking, is probably even more important for us than systems thinking itself. Critical thinking—the ability to create sound recommendations using reasoning—is part of systems thinking. It’s called systemic reasoning. But if you read about systems thinking, you don’t often read about systemic reasoning.
-
That’s the challenge: in any given situation, there could be a hundred systems thinking practices that you could apply, but you’re only going to apply four or five of them. If I were to define systems thinking, I’d have to talk about all one hundred. But in fact, you don’t need all one hundred. It’s the ability to discern which of those practices or tools will be the most helpful in your situation.
-
That’s not a thing people love either. They’re like, where are the templates and checklists? I want my templates and checklists.
Systems Thinking vs Linear Thinking
-
We think in binary, meaning linear thinking is good and systems thinking is bad. Or systems thinking is good and linear thinking is bad. But in fact, we need both.
-
Linear thinking is predictable, procedural, and top-down. The way we make decisions—where strategic people hand decisions down to implementers—is linear thinking, concerned with control. We want our software to do what we designed it to do all the time, under every circumstance. So we are concerned with control. Test coverage gives us control. These things are essential.
-
The challenge is that for many of us, this is what we mean by thinking. This is everything, and it’s the most important thing, and everything else can just go away, because it doesn’t matter. The problem is the assumption that we can reduce all complexity to its parts. That’s reductionism. Object-oriented programming encourages us to break a complex piece of software into its parts.
-
The challenge is that it doesn’t work the other way, because relationships produce effect. Nowadays, when we experience a bug in production, I joke that it’s a great day when the bug is in the code. But usually, it’s in something affecting eventual consistency—some asynchronous timing that isn’t working.
-
And so when we want to design a system that supports fast package delivery, we can’t just focus on the part that places an order, the part that manages the movement of the package, or the software that handles dispatching the delivery trucks in the different regions. We also have to think about how they work together to provide that capability.
-
The challenge is that we don’t have a practice or language for this. We work in organizations that are only concerned with control and top-down thinking, and that don’t create environments for knowledge workers to share knowledge, learn together, and innovate together. We still apply an industrialized mindset to the development of what is functionally a knowledge system.
-
The reason I’ve gotten into systems thinking is that friction, that tension. It’s not that what we’re doing doesn’t work. It’s that as relational complexity increases in a system, what we’re doing isn’t sufficient. What other skills do we need to be effective in our role, make an impact, have influence, and do hard things together? That’s kind of the whole point for me.
Definition of System
-
When people hear about ‘system’, they may have different interpretations. A system could be a process, a workflow, or something else. But actually, ‘system’ here refers to many things—including relationships.
-
One of the challenges with this entire subject is that a single word can mean different things. Take the word ‘architect’: it has a whole bunch of different contextual meanings, and there’s no consistency in what we are asking from someone with that label. Without a real definition, it varies. ‘System’ is the same kind of word, and that’s a challenge, because we’ve used ‘system’ to mean infrastructure.
-
But from a more purist point of view, components (software parts, for example) and people are just elements when they merely exist in the same space. They become a system when they’re in relationships with each other and when those relationships begin to generate patterns, outcomes, and things that the parts don’t do alone.
-
But for our purposes, anytime two or more people, software parts, or a combination of teams, people, and software parts have to integrate, form relationships, and communicate back and forth to do what they’re doing—then you have a system. That’s when you can apply everything we’re talking about.
-
It’s not just about code, infrastructure, or architecture. It’s the people and their relationship with the system. And actually, time is also a factor. Even if you have the people and the system, as time goes by, the system might change.
Conceptual Integrity
-
Fred Brooks, in The Mythical Man-Month, said that conceptual integrity is the most important consideration in systems design.
-
Brooks says, basically, that when a system has conceptual integrity, you see similar patterns and structures when you look at it; it looks like it was designed by one mind. But he also argued that there should be one architect designing the system so that it has conceptual integrity. I don’t agree with that, partly because I wouldn’t like to work that way, but also because there’s too much complexity for one person to do that. And then the whole system is held back by that one person and what that one person knows and thinks.
-
So I challenged myself to try to understand how we generate conceptual integrity. By this, I just mean that the system isn’t just good but uncoordinated parts. A software system with conceptual integrity has some reliable patterns of communication. It might use different tools or different programming languages, but if you moved from one part of the system, from one team building software here to another team building software there, the basic mindset, the way of working, the way we think about fast package delivery, the way we make decisions, the way we collaborate cross-functionally—those things are relatively familiar and really useful.
-
We understand how to work together and how to form relationships between system parts. But we can also be self-organizing. Everything doesn’t have to be the same, because we’ve created good boundaries.
-
Team Topologies is one way of trying to create conceptual integrity in the system. Our concepts are the building blocks of the knowledge work in our software. When our concepts are similar enough, what’s in production has some cohesiveness, some sanity, some elegant simplicity. There’s enough integration, but not so much that we’re straitjacketed and can’t do anything new or use a language that’s right for our team, while another team can do something different without disrupting that integrity.
-
There are a lot of words to describe it, because it isn’t a measurable thing. I can’t write a test and say, “Oh, Diana, we scored eight out of 10 on the conceptual integrity list.” Because what can be the same and what can be different will vary in every system.
-
Sometimes it would be disintegrating if everybody just went off and started writing microservices in any language, using any event system. That would be bad. In other systems, that can work. So it depends.
-
The way I think of it: whenever you try to build a system, there is a purpose you try to achieve—be it business outcomes, architecture, alignment, whatever that is. As long as you serve that purpose, there’s a conceptual integrity that is well-defined in the system. So if you can serve that purpose, that’s what matters.
Practices to Improve Our Systems Thinking
-
The first thing I’m going to say doesn’t sound like it has anything to do with systems thinking, but it is the most important thing we can do. It’s very simple, but much harder than it sounds.
-
The challenge with systems thinking is that it’s about how relationships produce effect—about looking at a problem from multiple points of view so you can really understand it, not just from your own perspective but from other people’s experiences.
-
Our “no” culture—our “change-my-mind” stance—is antithetical to systems thinking. You stay in the silo of your own head and force people to come to your ideas. They have to lay siege to you like a castle and climb your walls to get in. As long as that’s what’s happening, systems thinking cannot happen. It can’t happen.
-
So the first thing is “yes and” or even “no and”. Improv teams learn this. Improvisational comedy teams get on stage with no script, so they have to make things up and figure things out using their skillset.
-
That’s what we do. We get together and make things up and figure things out using our skillset. They practice “yes and” before they go out—it’s a warmup. If you’ve ever seen an improv team where somebody says no or “that’s a bad idea” or “that looks like a graph, and graphs don’t scale,” the whole scene falls apart. Nothing good comes of it. The audience feels it when it happens because it stops the flow of knowledge. It stops the relationship—the informational relationship.
-
So the practice is trying to acknowledge what you’re hearing, acknowledge other people’s ideas. You don’t have to agree—you’re just acknowledging, “Okay, this is what’s happening,” and offering something that helps improve the idea or the thinking, that helps steer it in a different direction.
-
Someone might say something I disagree with, and I can first repeat, “So what I understand you to be saying is this,” because oftentimes I think they’re wrong because I didn’t understand them. That happens at least 50% of the time. If I open my mouth and start saying, “That’s the stupidest idea I’ve ever heard,” 50% of the time I didn’t understand them. Either they didn’t express it well, or I just didn’t bother to try, or I have my own biases.
-
And once you have understood them, can you help improve it? Maybe you have an experience that is counter to their experience and will help them understand the problem more holistically. Maybe you have a fact you can share, or a question that will help them think more about it. If we just started there—if we just decided this was going to be a year of communicating that way—we’d be 30% down the road.
-
The challenge is people will say that has nothing to do with tech or system science. “Where are my templates?” And all I can say is try it and see if I’m wrong. If you try it and think, “That was ridiculous, that had nothing to do with it,” cool. But it’s often the people who push back against that idea the most who are the biggest blockers to working well together in designing a system. So, yeah, that’s my hard pitch for the first practice.
-
There are so many things, especially in tech, that nobody actually knows everything. And these days we all get exposed to different things: the internet, books, resources, culture. So different people will have different thoughts.
-
I like the way you mentioned that, first, we should have an open mind and accept others’ opinions. Second, be curious about why they’re coming from that perspective. The third aspect is psychological safety—because you want to acknowledge others, accept their ideas, and improve on each other’s thinking rather than work against each other.
Metacognition and Self-Awareness
-
Conway’s Law says that organizations that design systems will produce designs that mirror their communication structure. And this just makes sense—what’s in production is what we thought and talked about, right?
-
Pirsig says that the real system is the construction of rationality itself—that we create concepts and then act on those concepts. If we want something different in production, we need to think and communicate differently. So different things will end up there. And then I hijacked Conway’s Law—because I’m not very creative—and said, “Diana’s Law” is that the way you think and communicate is what you’ll push to production. That’s our own minds.
-
Systemic reasoning comes in when you’re making a recommendation for a change, like wanting a new tool. It is not just giving your opinion. How did you reach that conclusion? What are the reasons that convinced you, and why does this matter right now to whatever our version of fast package delivery is? So every time we make a recommendation, the next step, after we learn “yes and”, is not just to share our opinion, but to make the map—how did you reach this conclusion?
-
Because often that’s where the parts we can work together on are. Instead of “yes it is, no it isn’t,” we can figure out how we’ve come to different conclusions and then examine. Maybe my reason is it’s faster and your reason is it’s more reliable. Then we realize, oh, we have to figure out which is more important: reliable or fast. Or maybe there’s a third solution that gives us both. So then we’re talking about the right things, solving the same problem.
-
When we’re practicing systemic reasoning and giving the reasons that convinced us, we have to work with our own minds first. Before you say anything, sit down, write it out, and then write three to five reasons that convinced you—and include why it matters, why this is important to talk about right now. Not just why it matters to the tech, but why it matters to fast package delivery. What will this improve?
-
If you’re like most people, you’ll discover you suck at this. We don’t know how we’ve come to our conclusions. And if we have to show our work, we discover we’ve got biases, logical fallacies, so many bugs in our thinking. And we don’t know that, because we just go around sharing our opinions—because they convince us.
-
The metacognition, the self-awareness, is recognizing that we need to create conceptual integrity in our own minds before we can share it. And as smart as we are, we’re not generally great at that.
-
Also, we often are reacting. We start communicating our frustration, our aggravation—everybody hates us, product sucks. But it will almost never get us what we need. Pretty much all it does is add fuel to the fire. Notice your reaction.
-
So you recognize, hey, there’s something wrong here. This doesn’t have conceptual integrity. Then take a step back, put on some headphones, set a timer for 30 minutes, and try to write a recommendation. How would you improve this situation? What would you do differently than what you’re hearing?
-
And then you discover you love to complain way more than you like to come up with a recommendation. We all do. It’s not a bad thing—like linear thinking’s not a bad thing. But it doesn’t get us what we need.
-
The other thing is, when you do it, you realize the patterns happening in your brain are happening at scale around you. And it starts to teach you how you can help other people also come to better conclusions, because you’re doing the work in your own mind, and you begin to see things that help.
-
With architectural decision records (ADRs), a thing that often stands out to me is that people don’t describe the other options they considered. They’re just recording the decision. So just the question of, “Hey, what other options did you consider? Can you show me why you came to this? This seems solid, but when things change, I won’t actually know what else was considered and why it was not chosen, so can you add that?” Just that brings systems thinking to an ADR. Because there was no one right answer—they came to the best possible conclusion. That’s systemic reasoning. (A minimal sketch of what that can look like follows this list.)
-
Systemic reasoning is coming to a conclusion and taking action, even though there’s not one right thing to do. That’s systems thinking. And you can do that in one artifact. For many of us, that would be sufficient to improve our impact, influence, and career significantly—just those, because it’s unfortunately pretty rare.
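To make that concrete, here is a minimal, hypothetical sketch of an ADR in the common Markdown template, with an “Options considered” section that records the alternatives and the reasoning, not just the decision. The service names, options, and trade-offs below are invented for illustration; they are not from the episode or the book.

```markdown
<!-- Hypothetical example: service names, options, and trade-offs are invented for illustration. -->
# ADR-012: Use an asynchronous event queue between Orders and Delivery Dispatch

## Status
Accepted

## Context
Order placement and delivery dispatch must exchange package events reliably as
volume grows. This capability directly supports fast package delivery.

## Options considered
1. Synchronous REST calls: simplest, but couples dispatch uptime to the orders
   service and slows order placement under load.
2. Shared database table: easy to start, but creates hidden coupling and makes
   independent deployment harder.
3. Asynchronous event queue (chosen): decouples the services; we accept the
   eventual-consistency and monitoring overhead it introduces.

## Decision
Introduce an asynchronous event queue between Orders and Delivery Dispatch.

## Consequences
Dispatch may briefly lag order placement, so we need monitoring for queue depth.
The rejected options above stay documented, so when circumstances change we know
what else was considered and why it was not chosen.
```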
Practices to Improve Our Collective Systems Thinking
-
In order to figure out what to do, we have to understand other points of view. We have to understand: okay, we want to make this change, and we need money to do it. Now, I have to talk to people who won’t understand my tech language to get the money. But if we don’t get the money, it doesn’t matter what tech language I’m using, because we can’t build it.
-
This is something we haven’t really been thinking about as engineers and tech leaders—we don’t realize that we have to get the money. That’s just part of it.
-
But also, we are the worst group of people to predict what users will experience, because we are not the users of the software. Users have all these hacks and workflows that I would have built everything wrong for, because I thought I knew what they did—but I only know what the software wants them to do. That is not what they actually do.
-
So my point to all of this is that I am not very impactful alone. I need to partner with people who have expertise I don’t have. I think I speak great to business and can be very persuasive about why we need a million dollars—until I try it, and then they’re like, “Geek. What the hell are you saying?”
-
I have discovered that in order to build hard things and work on teams where we build hard things, I need more skills than I have. And so I need product.
-
When [Cat Morris and I] started, I said all of my stories are “product ruined everything.” And she said, all her stories are “the architect is the complete pain in my butt, I can never get anything done.” What we realized is that in our own tribes, we’re fighting about how to think about things, and then cross-functionally we are too. We realized if she and I could work together from the beginning—she could bring in the knowledge she has, and I could bring in the knowledge I have—we’d get an outcome so much better than this linear, “first I do a thing, then she does a thing, then we hate each other in between.”
-
And so, partnering within the ability of my current team: we understand what we’re trying to do and then figure out how we’re going to do it. Sometimes we experiment or prototype. Sometimes somebody recommends something, and we just go in that direction. So can you sort of self-organize to decide what to do? But then also, can you get the partners and the information you need to build something that really matters?
-
For me, that comes from recognizing that your skills are insufficient in the modern world. You can know everything about JavaScript there is to know. But if you don’t know how to make people’s lives better with JavaScript, how much value are you really bringing? So it’s both: being really good at JavaScript and being really good at making people’s lives better—and they don’t even have to know what JavaScript is to benefit from it.
Collaboration with Consent
-
When I say partnering or collaboration, I don’t mean it in a kumbaya summer camp way where we all live in peace. Oftentimes, the other team or person is trying to make trouble for you. And we often work with people who are mean, bullying, or awful. That is true. In that situation, you can’t do systems thinking. That’s a political problem—a behavioral problem.
-
I don’t want to suggest that if we just collaborate, yay, everything will work. We also have to fire the people who refuse to create social learning. And we have to move away from the idea that the 10x developer can be the worst person in the world, but as long as they’re delivering code, they’re productive. Because that ignores all the emotional labor everybody has to do every single time they have a meeting with that person.
-
I mentioned partnering, but I left out the fact that both parties have to be consenting to do that. I don’t mean trying to get people being hateful to work with you. No. The people being hateful need to stop being hateful. That’s not what systems thinking is. It’s not changing hearts and minds. Change your own mind—that’s your job, not mine. So this willingness at the heart, it really matters.
The Importance of Modeling
-
There’s a war over what modeling is. Is it C4 diagrams? Oh, it’s not UML anymore, I’ve learned. I don’t necessarily mean a specific model—boxes and lines—although I do a lot of that.
-
When you’re trying to solve a problem, try to model the problem. I also mean use things like event storming to understand systemic issues. If you’re trying to figure out what is our fast package delivery, do it in a model.
-
Because we get so entrenched in language and our communication styles, we often think we understand each other. We think we’re saying the same thing, but we’re not. If I go into a room of six people and say, “We are going to be agile,” there are six reactions to what I’ve said, and they are completely different universes. One person’s like, “Yay.” One person’s like, “I will murder you.” And then there’s everything in between. And then three people say, “You mean Jira?”
-
So we often think we’re solving the same problem, and when we’re going around and around and bikeshedding, usually it’s because we have completely different mental models of what we’re doing. We’re just not looking at the same thing. So having the conversation, but including the visual element—moving things around, making relationships—helps. If we’re only having a conversation, only discourse, we don’t really understand the relationship between the ideas we’re sharing.
-
But if you have three stickies and I have three stickies, we are seeing if there’s a relationship. The mind just automatically thinks about the relationship between those six stickies. So you’ve taken a step into systems thinking as soon as you have two stickies, and then you think, “Are they the same? Are they different?”
-
So modeling is really a conversation. Modeling isn’t reality, because you can only ever model one point of view. Modeling is defeasible, meaning I can only draw what I understand right now, but next week it will look different. I don’t mean a North Star model, like “I’m going to show the engineers what you’re building.”
-
That doesn’t work. But I do mean that instead of just writing bullet points and lists of requirements, when you’re making these kinds of decisions or understanding a systems problem, use visual language—use shapes, lines, relationships. When you’re trying to work together, this is how the product person and I discovered we have the same pain. But when we talk about it, we talk about how much I suffer because of her point of view, and she suffers because of my point of view. When we modeled it, we saw a completely different reality.
-
A model can be text too. I was having this conversation yesterday—how a ticket is actually a model, even if we don’t use shapes. Every artifact we create, anytime we share a concept, we are making a model.
-
What I discovered is, if there are six of us, and I write a ticket to describe what we need to build, I would write six completely different tickets depending on who was going to pick up the story. Like, Claire loves lots of details. But for another engineer, I don’t even need to say much, because he works best if he can then have the discussions, think about the problem, and write his own ticket.
-
A lot of us would do better if we could write our own ticket. If we could say, “Here’s the information I need to think well,” and then take that and have any follow-ups, ask any questions, and decide how to approach the work.
-
A model—yes, I mean shapes and such—but even the way we describe a piece of work is a model. Being flexible with how you have these discussions to fit the brains of people who are going to be making these micro-decisions benefits the outcome, because you get better outcomes when people get information the way their brains process that information.
AI Usage and System Thinking
-
With AI, our perception of what it can do and what it actually can do are far enough apart that I get frustrated with all the organizational adoption of AI that is not actually going to solve those organizations’ problems.
-
Systems thinking is about inference. If I have an idea and I tell you my three reasons for doing it, the thing that makes that idea strong is the relationship between the reasons.
-
So if we think of a graph database, we think in nodes, objects, and data objects. But decisions, tech, software thinking—none of that is really about the nodes. It’s about the relationships between them and what those relationships signify. And AI can’t do that unless it’s told to do that, or unless we’ve done the juxtaposing ourselves.
-
I’m rightly not a fan of AI helping us as much in that way—because of what it doesn’t do.
-
That said, oh my goodness, AI is so much better at summarizing me than I am at summarizing me. And AI is so fast at connecting complex ideas. For example, if I were to say, “Who out there is thinking about systems and teaching about systems, and what’s their point of view and how does it relate to these four chapters in my book?” It would take me months to do that research. Now, some of it is wrong, and it’s missing people that should be there. So I still need knowledge to know that AI is wrong and limited and biased, but that’s true whenever we talk to other people—we have our own biases.
-
So AI is like a very smart person that has more knowledge than most. But is it knowledge or is it information? I think the really big thing is that we say intelligence, and we say knowledge, and I’m not sure it’s knowledge or intelligence. I think it’s just very well-crafted information.
-
I’ve asked code questions, and it’s broken my code as often as it’s fixed it. But also, it’s very quick to point out things I don’t know. It’s very quick to show me how things are related. It knows best practices for structuring language and things like that.
-
A lot of us struggle with using spoken language or informal language to describe our ideas. But I think AI can really help us with that—not wholesale, not cut and paste. I think it can help us communicate our ideas better and can help us bring in other perspectives we don’t know exist, and then we can go explore them. So in that way, I think it’s a good partner. And it’s still fancy search, is my argument.
3 Tech Lead Wisdom
-
Be a little more patient and kind with each other, because we’re all doing hard things—especially with everything going on in the world. Make a little more space for each other.
-
Partnering with skills you don’t have gives you more impact and influence. It’s a good thing for you—it makes you better and more trustworthy. So it’s definitely worth doing.
-
The value of deep work—of knowledge work. How many of us are fighting for three or four hours a day where we can just put on our headphones and focus to do hard things, challenging things, creative things? Not just fulfill tickets, but really try to solve problems and use our minds to generate new thinking.
-
Be a little kinder. Do deep work and improve your skills by partnering and leveraging other people’s skills, because that’s the magic that helps you.
-
The number one question I get asked when I teach is, “What do I do about the fact that nobody listens to me?” This is our number one pain. And I feel that pain—oh my gosh, so much. And so those three things together are at least part of the answer. It will help people listen to you. And we really do want to be able to share our ideas and have a positive impact in the world.
[00:01:27] Introduction
Henry Suryawirawan: Hello, everyone. Welcome back to another new episode of the Tech Lead Journal podcast. Today, I’m very, very excited. So we are gonna talk about a topic that always intrigues me every time I learn about it. So today we have Diana Montalion, the author of Learning Systems Thinking. So if you have heard about systems thinking, or if you haven’t heard about systems thinking, hopefully today, we’ll give you a lot of insights into what systems thinking is all about. And hopefully you can use it in your career or in your work so that you can produce a better outcome for your work. So Diana, thank you so much for this opportunity. Really excited to have you.
Diana Montalion: Thank you, Henry. I’m really glad that, um, you’ve set this time for us to explore this.
[00:02:23] Career Turning Points
Henry Suryawirawan: Right. Diana, before we go into systems thinking, so I’d like to invite you probably to share a little bit more about yourself. Maybe sharing some turning points in your career that we all can learn from.
Diana Montalion: I think the biggest turning point in my career is probably very similar for anyone who’s been doing this for a long time. And that is that when I started, I pushed code to a monolith, worked on multiple teams, and we were pushing code together. We had merge conflicts regularly. That was very frustrating. And it was really complex. I felt like it was complex. And then the world around us started to change, and digital information became ubiquitous. And now software had to talk to other software, and infrastructure is code. Everything was a relationship, was designing relationships. And we are not an industry with great relationship skills in tech and in people. And um, and yet I was being asked to deliver features still. Like feature driven engineering. But where to put it, how to design the system? So the reason I ended up really being interested in this subject is that we were still designing software in a world in which we worked in systems of software. And it was painful. The outcomes were not great. The work got harder, and so I really wanted to figure out what helps. Like what will help us.
Henry Suryawirawan: Yeah, something you said I think is really intriguing, right? So we are not good in, you know, working with relationship, be it with other people, first of all, yeah. And multiple systems. Multiple distributed systems, especially, right? So these days is the era of, you know, microservices, a lot of SaaS applications that we have to talk to. I think this is really, really important, yeah.
[00:04:35] Writing Learning Systems Thinking
Henry Suryawirawan: So let’s go to the main topic today, right? So you wrote this book, Learning Systems Thinking. From me when I see it, right? There’s not many literature written on this, even though I think the subject is really, really important, right? So tell us what is your background, you know, writing this book, what kind of problems or gaps that you see, maybe in the industry, in the tech professionals, right? What kind of gaps that you actually try to solve by writing the book?
Diana Montalion: Yeah, I love that question, because the short answer is I was mad at everything often and frustrated and getting caught up in this noisy back channeling of just complaining about how product is bad to us, and the leadership is bad to us, and everybody’s bad to us, and nobody understands us, and isn’t this terrible. And then I had this sort of radical idea of what if I tried to be part of the solution to provide more signal than noise? Like what helps? And this really sent me down a rabbit hole of exploring not just Kubernetes, like how we do it in a Kafka stream, right? But also the system science in general, and then specifically how it applies to the challenges we have.
And the thing that was amazing to me is how much we have in common with, say, agriculture, right? Agriculture especially in the US became monolithic. And then you have all, all these independent farmers who are trying to rebuild these ecosystems where you grow different things in harmony with each other and you solve problems in a systemic way. And I read a book, Mark Bittman wrote a book ‘Animal, Vegetable, Junk’, and I kept highlighting things in the book, because they were the same pain that I had in tech.
And so the challenge when O’Reilly asked me about writing the book, the challenge was can we take like Donella Meadows’ ‘Thinking in Systems’ and the other things that we know about systems and natural systems and mechanical systems, and apply them to our challenges in ways that we can use? And that’s really difficult, because we don’t know what we don’t know. And so I got a lot of pushback initially for talking about thinking. It’s too abstract, Diana.
What’s in production except what we thought? Like aren’t we knowledge workers? Don’t we think for a living? And also code is abstraction. We’re not planting trees. We literally are writing in a language we made up on machines we made up. Doing things in hardware we built. Like everything we do is abstraction. So adding a little bit more helpful abstraction to me is not the worst sin anyone’s ever committed. But it’s a challenge. It’s a challenge to find the language. ‘cause we don’t really have a language to talk about this stuff.
Henry Suryawirawan: Yeah. Especially also in my experience, you know, working in tech, right? I’ve been in the industry for maybe about 20 years. It’s always the same thing that you mentioned in the beginning, right? We think everyone else is against us, right? Be it the product, be it the CEO, the founders, the stakeholders, or whoever that is, right? They just don’t understand us.
And especially, if you have become a leader, right? Sometimes, you know, working in the tech teams, you can actually see a lot of things becoming a problem simply because the system is not right. And to me, when I found out about systems thinking, that gave me a lot of revelations as well. Even though every time I read, right, it’s always a new thing, new insights that I get from reading the books, including your book as well. So I think I can see the really important things that, uh, we could learn as tech professionals by understanding systems thinking.
[00:08:53] Definition of Systems Thinking
Henry Suryawirawan: So maybe the big question first is that what is actually systems thinking? You know, there’s the advanced way of defining it, and hopefully you can also define it in the simpler term.
Diana Montalion: Yeah, and it’s, uh, I frustrate people. I frustrate people with this, because we want a straightforward thing, and this, my straightforward answer is relationships produce effect. And systems thinking is understanding the effect and being able to architect for the kinds of effects we want in a system. That’s the straightforward answer. But Donella Meadows says that we think because we understand one, we must, that we understand two. Cause one and one make two, but we forget that we have to understand ‘and’. ‘And’, to me, is the art and science of systems architecture, right? That when you have two microservices and you design interaction between them, then you get a third thing. Whatever it is that they do together, they can’t do alone, right? And yet we are very linear when we design these relationships.
So Fred Brooks says that most, most software systems are many good but uncoordinated ideas. And this is every model I have ever made of a software system. Like these might be good but they’re duct taped together, right? We just build these rickety bridges. And so for us, systems thinking is about understanding how all of these relationships deliver an outcome. So for example, FedEx. FedEx fast package delivery. That’s what FedEx does. And anything we can do technologically to create fast package delivery is a priority. And anything that we do that is extraneous to that, less of a priority. But we aren’t very well trained to think about what we’re doing in the context of fast package delivery. Instead, we think about a fast response from an API, which is important, right? But does that fast response actually have an impact on the system?
The challenge though is that systems thinking is defined differently. Academia, for example, would define it differently. If you went to a workshop for marketing people, or you went to a workshop for academics, or you went to a workshop on more biological systems, they’re gonna say things differently than I am. And a lot of people do not like that. They don’t like the fact that there’s not one answer. But it’s systems thinking. Of course, there’s not one answer. It really depends on what kind of system you’re looking at. How that system needs systems thinking is gonna govern what you prioritize about systems thinking. So for me, pattern thinking, which is sort of systems thinking adjacent, is probably even more important for us than systems thinking. Critical thinking, the ability to create sound recommendations using reasoning, those are all part of systems thinking. It’s called systemic reasoning. But if you read about systems thinking, you don’t often also read about systemic reasoning, right?
So that’s the challenge is that, in any given situation, there could be a hundred systems thinking practices that you could apply, but you’re only gonna apply four or five of them. If I were to define systems thinking, I’d have to talk about all a hundred. But in fact, you’re not. You don’t need all a hundred. So it is, I would add, it’s the ability to discern which of those practices or tools will be the most helpful in your situation. That’s not a thing people love either. They’re like, where are the templates and checklists? I want my templates and checklists. I mean, I have those, but will they help you in your situation? I don’t know. Like that’s something you’re discerning. Yeah.
Henry Suryawirawan: Yeah.
Diana Montalion: So anyway, see, it’s a long answer and that drives people crazy a little sometimes, yeah.
[00:13:39] Systems Thinking vs Linear Thinking
Henry Suryawirawan: Yeah, I think one of the main challenges, especially maybe for us engineers, right? We try to think logically and also trying to kind of like build the abstraction, you know, like in a way that it is, uh, easier to understand. But obviously this is like maybe against the systems thinking way of thinking, right? Because we try to think linearly, trying to reduce, you know, an abstraction into something that we can, I don’t know, understand in terms of relationship, right? So maybe tell us this bias that we have as a, maybe as a human in fact, right? Trying to think in linear terms. Like what you said in the beginning, one plus one equals two. We always like facts, logical thinking, right? So why this bias can become a challenge when understanding systems thinking?
Diana Montalion: Yeah. And that, it’s my favorite subject because we also think in binary, meaning linear thinking is good and systems thinking is bad. Or systems thinking is good and linear thinking is bad, right? But in fact, we need both, right? I can’t write software. And by linear thinking, I mean akin to following a recipe, right? That either you are following a recipe or you could write a recipe to describe what you’ve done, right? Even when we are debugging, we’re breaking down complexity to get closer to understanding exactly where something is happening.
So linear thinking is predictable, procedural, top down. So the way that we decision make where strategic people hand decisions down to implementers, that is linear thinking concerned with control. So we want our software to do what we designed it to do all the time under every circumstance. And so we are concerned with control. Test coverage gives us control. So these things are, they’re essential.
The challenge is that for many of us, this is what we mean by thinking. This is everything and it’s the most important thing, and everything else can just go away, because it doesn’t matter. And that the problem is we can reduce complexity. So that’s reductionism. Object oriented programming, for example, encourages us to break a complex piece of software into its parts. My first professor used a car as an example. You don’t just have a car code base, right? You have a brakes code base, and a steering code base, and that these work together, right?
The challenge is it doesn’t work the other way because relationships produce effect. Nowadays, when we are experiencing a bug, for example, in production, I joke, I make this joke all the time, so I’m gonna have to find a better one, but that it’s a great day when the bug is in the code. It’s okay. It’s right here on line 492. But it’s usually in something that’s impacting eventual consistency. Some timing, asynchronous timing of something is not working. And so when we want to design a system that supports fast package delivery, we can’t just focus on the placing an order part, the managing the movement of the package part, the software that handles deploying the delivery truck every in, in the different regions. We also have to think about how they work together to provide that capability.
And so the challenge is that we don’t have a practice, we don’t have language. We work in organizations that are only concerned with control, that are only concerned with top-down thinking, that don’t create environments for knowledge workers to share knowledge and to learn together, and to innovate together. We still apply an industrialized mindset to the development of what is functionally a knowledge system. And so it’s more systems thinking for me, where why I’ve gotten into this is because of that friction, that tension. It’s not that what we’re doing doesn’t work. It’s that as relational complexity increases in a system, what we’re doing isn’t sufficient. So what other skills do we need in order to be effective in our role, be effective, make an impact, have influence, do hard things together. That’s kind of the whole point for me, right? And, um, I don’t work on an assembly line. I create something that doesn’t exist. And I need broader skills to do that, yeah.
Henry Suryawirawan: Yeah, so I think that even though maybe some people may have heard about this, you know, knowledge worker, the term knowledge worker, right? But I think still many leaders or many organizations still think, you know, for us these days, right? Even though there are plenty of knowledge workers, it’s not just a coder, right? So almost everyone now is a knowledge worker. They think, you know, they can just create a predictability, so top down control, right? Those kind of stuff. So I think it’s… First, I think we have to be aware that actually with knowledge worker, these industrial practices may not work as it used to be, right? Predictable, consistent result, and things like that.
[00:19:31] Definition of System
Henry Suryawirawan: And I think one thing that I’d like to clarify with you as well, in terms of definition of systems, right? Because when people hear about system, maybe they have different interpretation. System could be process. System could be workflow. System could be something else. But actually system here refers to many things. You mentioned about relationship. But relationship between what, you know. So maybe, clarify a little bit, what do you mean by system? And what are the parts in the system?
Diana Montalion: One of the challenges with this entire subject is that a single word can mean different things. For example, go to a conference and bring up the word architect and watch everybody lose their mind about what that means and whether it’s a good thing or a bad thing. But what we discover is that that word has a whole bunch of different contextual meanings, that what we are asking from someone with that label, there’s no consistency there. So if there’s not really a definition, it varies. So that’s a challenge with system because we’ve used system to mean infrastructure.
There was, I was giving a talk in 2019 and a young man sat down next to me and saw my badge and said, oh, you are an architect. I wanna be an architect too. But I don’t know enough about Kubernetes yet. And I was like, oh, you know, oh, I’m sorry that you’re not gonna be happy that you sat down at my table, because I have so many thoughts on this. And it’s not that an architect can’t be somebody who’s really good at infrastructure implementation and container orchestration. It’s just not what I mean by architect. Or at least it’s not what I’m usually exclusively doing.
So I say all those words to say system is kind of the same kind of word that if in context, when people say system, they mean the infrastructure or they mean something different than I mean. Cool, because as long as we understand it, right? But from a more purist point of view, components, parts like software parts for example, people, they are elements when they sort of exist in the same space. Those are just elements. It becomes a system when they’re in relationship to each other and when that relationship begins to generate patterns and outcomes and things that the parts don’t do alone. So if I have two microservices and there’s an API between them, is that a system? I’d say so. But people could argue with that to say, well, there are two components and there’s a one way flow of information between them. I mean, it’s a very simple system. There’s no real patterns….
So I think reasonable people disagree. But for our purposes, I think anytime two or more people or software parts or a combination of teams and people and software parts have to integrate, have to form relationships, have to communicate back and forth in order to do the thing they’re doing, then you have a system. That’s when you can apply everything that we’re saying. Or you can stay up till three in the morning drinking beer and endlessly arguing about whether or not this is a system. And whether that word means what you think it means, because it is imprecise. It is imprecise, unfortunately, as much as we love imprecision. We love nuance and ambiguity and imprecision and, like, it depends. Everyone loves when they say it depends. Makes people so happy.
Henry Suryawirawan: Yeah, it makes you clever as well. When, you say it depends, right? So I think, um, I think that.
Diana Montalion: Actually it does. It does. I can’t fix that. It does depend. I’m sorry, but it does.
Henry Suryawirawan: Yeah. So I think it totally makes sense, right? And especially so many different new knowledge being like discerned these days about, you know, in the software team is a socio-technical problem, right? So you mentioned the elements, like the people, the systems, right? It’s not just about code, it’s not just about infrastructure, architecture. It’s the people, the relationship with the system. And actually time is also a factor, which I found in your book, right? You mentioned time is also a factor in the system, right? Even though you have the people and the system. But as the time goes by, system might change, right? So I think this is also a, a very good insight.
[00:24:13] Conceptual Integrity
Henry Suryawirawan: This whole thing, I think there’s this term called conceptual integrity that you mentioned in the book, right? Which I find is quite important to understand so that you can actually understand about systems thinking. Maybe try to explain to us what is actually conceptual integrity.
Diana Montalion: Hey, I’m chuckling, because this is the third question that has a non, the word concrete. I’ve come to dislike that word very much. I have slides with the chemical makeup of concrete to show that concrete’s not concrete. Concrete also is a system of interrelated things. But anyway. So the challenge is, so Fred Brooks in Mythical Man Months said that conceptual integrity is the most important consideration in systems design, but doesn’t define conceptual integrity. And so we’re, we, we’re kind of… I am very challenged by giving a very specific definition, because it’s sort of like art. It’s the you know it when you see it kind of thing.
But one way that I can describe it, at least my experience, it resonates with my experience is that, so you have two parts of an organization that have budget. One of them wants to spend their budget, we need a car. And the other one wants to spend their budget, we need a boat. And so car, boat, they’re having these requirements discussions. And it ends up that the engineers are asked to build a car boat. And everybody hates it, cause nobody wanted a car boat, right? That didn’t resolve the capabilities that the people were trying to design, right?
And so conceptual integrity, Fred Brooks says, is basically that when you look at a system, you see similar patterns and structures. It looks like it was designed by one mind. But he also argued that there should be one architect who designs the system, so that it has conceptual integrity. And I don't agree with that. I don't agree with it because I wouldn't like to work that way, but also because there's too much complexity for one person to do that. And then the whole system is held back by that one person and what that one person knows and thinks.
So I've challenged myself to try and understand how we generate conceptual integrity. By this I just mean that it isn't good-but-uncoordinated parts. A software system with conceptual integrity has some reliable patterns of communication, even though it might use different tools and different programming languages. If you moved from one part of the system, from one team building software over here, to another team building software over there, the basic mindset, the way of working, the way we think about fast package delivery, the way we make decisions, the way we collaborate cross-functionally, those things are relatively familiar and really useful. We understand how to work together, how to form relationships between the system parts. But we can also be self-organizing. Everything doesn't have to be the same, because we've created good boundaries.
So Team Topologies, for example: we could say that Team Topologies is one way of trying to create conceptual integrity in the system. Meaning we think in concepts, right? Our concepts are the building blocks of our knowledge work, of our software. Our concepts are similar enough that what's in production has some cohesiveness, some sanity, some elegant simplicity. There's enough integration, and yet not so much that we're straitjacketed and can't do anything new, or write in a language that's right for our team while the other team does something different, without disrupting that integrity.
So that's the thing. It takes a lot of words to describe it, because it isn't a measurable thing. I can't write a test and give it to you so you can run it against production and it says, oh, Diana, we scored eight out of ten on the conceptual integrity list. Because what can be the same and what can be different will be different in every system. In some systems it would be disintegrating if everybody just went off and started writing microservices in any language, using any event system, some using queues and some using Kafka and some… that would be bad. And other systems can do that. So it depends.
Henry Suryawirawan: Yeah, one thing for sure, the illustration that you gave, the car boat thing, gives a very good insight into conceptual integrity. The way I think of it, whenever you try to build a system, there is a purpose you're trying to achieve, be it business outcomes, architecture alignment, whatever that is. As long as the system serves that purpose, there's a conceptual integrity that is, maybe, well defined in it.
[00:30:02] Practices to Improve Our Systems Thinking
Henry Suryawirawan: So systems thinking, I think, is really hard. You can't master it just by reading the book or listening to this episode. In your book, you mention some practices we can do to improve our systems thinking. Maybe elaborate on some for us. If we want to improve our systems thinking, what should we do?
Diana Montalion: So the first thing I'm gonna say doesn't sound like it has anything to do with systems thinking, but it is the most important thing we can do. It's very simple. It's much harder, though, than it's gonna sound. But I'm gonna say what it is and then describe why it's a really important practice, right?
So we started this conversation with: we're all mad because everyone's against us, right? And partly because of what we would've done anyway, or as a reaction to the way we work, or both, we're a very 'no' culture. We're a very change-my-mind culture. I can't tell you how often I share an idea, I give a talk, I write a book, I model something, and the feedback I get is: that's wrong, that's wrong, that's wrong, that's wrong. And that's it. That's the whole thing. Now, often it is wrong in those spots, and that's really helpful, because I want to improve my own thinking.
The challenge is that systems thinking, because it's about how relationships produce effect, because it's about looking at a problem from multiple points of view so you can really understand it, not just from your own experience but from other people's, means our 'no' culture, our change-my-mind culture, is antithetical to it. You stay in the silo of your own head and you force people to lay siege to you with their ideas. They have to besiege you like you're a castle and climb your walls to get in and make you change. As long as that's what's happening, systems thinking cannot happen. It can't happen. So the first practice is 'yes, and', or even 'no, and'. Improv teams learn this. Improvisational comedy teams get out on stage with no script, so they have to make things up and figure things out using their skillset.
That's what we do. We get together, we make things up, and we figure things out using our skillset. Improv teams practice 'yes, and' before they go out. That's a warmup. And if you've ever seen an improv scene where somebody said no, or somebody said that's a bad idea, or, you know, 'oh, that looks like a graph, and graphs don't scale,' the whole scene falls apart. Nothing good comes of it. And the audience feels it when it happens, because it stops the flow of knowledge. It stops the relationship, the informational relationship. So the one practice is trying to acknowledge what you're hearing, acknowledge other people's ideas, acknowledge that you don't have to agree. You're just acknowledging: okay, this is what's happening. And then offer something that helps improve the idea, that helps improve the thinking, that helps steer it in a different direction.
Again, someone might say something I disagree with, and I can first repeat it back: 'So what I understand you to be saying is this,' because oftentimes I think they're wrong because I didn't understand them. That happens at least 50% of the time. If I open my mouth and start saying, 'that's the stupidest idea I've ever heard,' 50% of the time I didn't understand them. Either they didn't express it well, or I just didn't bother to try, or I have my own biases. And once you have understood them, then can you help improve it? Maybe you have an experience that is counter to theirs, which will help them understand the problem more holistically. Maybe you have a fact you can share, maybe you have a question that will help them think more about it. If we just started there, if we just decided this was going to be a year of communicating that way, we'd be 30% down the road.
The challenge is people will say that has nothing to do with tech, that has nothing to do with systems science. Where are my templates? All I can say is: try it and see if I'm wrong. If you try it and you're like, that was ridiculous, that had nothing to do with anything, I'm going back to Kubernetes, cool. But it's often the people who push against that idea the most who are the biggest block to being able to work well together in designing a system. So, yeah, that's my hard pitch for the first practice.
Henry Suryawirawan: Right. I love that you mentioned that, because there are so many things, especially in the tech world, that nobody can actually know everything about, right? It's just so difficult. Plus, these days we all get exposed to different things: the internet, books, resources, culture, whatever that is. And different people will have different thoughts, right?
So I like the way you put it: maybe the first thing is to have an open mind, right? Accept what others offer in terms of opinions. Be curious about why they're coming from that perspective. And maybe the third aspect is psychological safety, because you want to acknowledge others, accept their ideas, and improve on each other's thinking rather than working against each other. So I love the fact that doing all of that actually improves systems thinking, right?
[00:36:21] Metacognition and Self-Awareness
Henry Suryawirawan: And one challenge, again, like you mentioned: sometimes we are not aware that we're thinking inside this kind of closed box, or that we have a lot of biases. In your book, actually, the first thing you say can improve systems thinking is being self-aware, improving yourself.
Diana Montalion: Yeah.
Henry Suryawirawan: Tell us the importance of this. How can we improve ourselves? Because most of the time we don’t know what we don’t know, right?
Diana Montalion: Exactly. And metacognition. So Conway's Law says that organizations that design systems will produce designs that are mirrors of their communication structure. And this just makes sense: what is in production except what we thought and talked about, right? The way we think together structures what we build. Pirsig says that the real system is the construction of rationality itself: we create concepts and then we act on those concepts. And if we want something different in production, we need to think and communicate differently, so that different things end up there. And then I hijacked Conway's Law, because I'm not very creative, and said: Diana's Law is that what you think and communicate is what you'll push to production. That's about our own minds, right?
And so, systemic reasoning: when you're making a recommendation for a change, when you want a new tool, for example, systemic reasoning is not just giving your opinion. Not just, oh, React is terrible and Vue is good, we should do Vue. How did you reach that conclusion? What are the reasons that convinced you, and why does this matter right now to whatever our version of fast package delivery is? So every time we make a recommendation, the next step after we learn 'yes, and' is not just to share our opinion, but to make the map: how did you reach this conclusion?
Because often that's where the parts we can work on together are, instead of 'yes it is, no it isn't, yes it is, no it isn't.' We can figure out how we've come to different conclusions and then examine them. Maybe my reason is that it's faster and your reason is that it's more reliable. And then we realize, oh, we have to figure out which is more important: reliable or fast. Or maybe there's a third solution that gives us both. So then we're talking about the right things, then we're solving the same problem.
So when we're practicing systemic reasoning, when we're giving the reasons that convinced us, we have to work with our own minds first. I challenge anyone listening to this to do it three times. Take an idea. Maybe there's an action you want people to take. Maybe you have a theory about why something isn't working and you wanna share that theory. Maybe you have a solution that hasn't been considered before. Before you say anything, sit down, write it out, and then write the three to five reasons that convinced you, and include why it matters, why this is important to talk about right now. And not just why it matters to the tech, but why it matters to fast package delivery, right? What will this improve?
If you're like most people, you'll discover you suck at this. We really don't know how we've come to our conclusions. And when we have to show our work, we discover that we've got biases, we have logical fallacies, we have so many bugs in our thinking. And we don't know that, because we kind of just go around sharing our opinions, because they convince us. So the metacognition, the self-awareness, is recognizing that we need to create conceptual integrity in our own minds before we can share it, and that, as smart as we are, we're not generally great at that.
Also, we're often reacting. I go into a meeting, someone says something, and I wanna bite this person. I just wanna bite this person so much, because what I'm hearing is awful. And then we respond from that place. We start reacting, we start communicating our frustration, our aggravation, everybody hates us, product sucks. We start doing that. And okay, but it will almost never get us what we need, right?
Pretty much all it does is add fuel to the fire. So instead, notice your reaction. You recognize, hey, there's something wrong here, this doesn't have conceptual integrity. Then take a step back, put on some headphones, set a timer for 30 minutes, and try to write a recommendation. How would you improve this situation? What would you do differently than what you're hearing? And then you discover you love to complain way more than you like to come up with a recommendation. We all do. It's fun, it's addictive, it's awesome, right? And it's not a bad thing, just like linear thinking's not a bad thing. But it doesn't get us what we need. So those are two practices of metacognition.
The other thing is, when you do it, you realize the patterns happening in your brain are happening at scale around you. And it starts to teach you how you can help other people come to better conclusions, because you are doing the work in your own mind and you begin to see what helps, what question to ask. For example, for architectural decision records, for ADRs, a thing that stands out to me often is that people don't describe the other options they considered. They're just recording the decision. So just the question of, hey, what other options did you consider, and can you show me how you came to this? This seems solid, but when things change, I won't actually know what else was considered and why it was not chosen, so can you add that? Just that brings systems thinking to an architectural decision record. Because there was no one right answer; they came to the best possible conclusion. That's systemic reasoning.
Systemic reasoning is coming to a conclusion and taking an action even though there's not one right thing to do. That's it. That's systems thinking. And you can do that in one artifact. For many of us, that alone would be enough to improve our impact, our influence, and our career significantly, because it's unfortunately pretty rare.
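For listeners who want something concrete to try, here is a minimal sketch of what that might look like in a lightweight markdown ADR. The format, service names, and options below are illustrative assumptions, not a template from Diana or her book; the point is simply that the "Options considered" and "why it matters now" parts capture the reasoning, not just the decision.

```markdown
# ADR-007: Use a message queue between ordering and fulfillment

## Status
Accepted

## Context and why it matters now
Synchronous calls between the ordering and fulfillment services time out
during order spikes, which directly threatens our fast-package-delivery goal.

## Decision
Introduce a message queue between the two services.

## Options considered
1. Keep the synchronous API and add retries. Rejected: retries amplify load
   during spikes.
2. Batch orders on a fixed schedule. Rejected: adds latency to every order.
3. Message queue (chosen): decouples the services and absorbs spikes.

## Consequences
Ordering and fulfillment become eventually consistent; we need queue-depth
monitoring and a reprocessing path for failed messages.
```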
Henry Suryawirawan: Wow. You mentioned so many different things. I'll try to summarize as best as I can. The first thing is trying to explain your thinking to other people, maybe by elaborating three reasons, three bullet points. We can even use writing as a thinking tool, right? It reminds me of what I think Richard Feynman said: if you can't explain something to others, you don't actually understand it. So that's a very good reminder. And I like what you mentioned about taking a pause instead of reacting, like creating a gap, the way you would in a mindfulness practice. Instead of reacting straight away, create a gap and then respond. I like that practice as well, because as engineers we're passionate about things, and we always try to argue with each other. So that's a very good practice too.
[00:44:42] Practices to Improve Our Collective Systems Thinking
Henry Suryawirawan: So I know that improving our self-awareness alone isn't enough to improve systems thinking. It's a collective thing; the relationships with others need to improve as well. In your book, you also cover this other aspect: that we have to improve systems thinking collectively, whether it's a team, an organization, whatever that is. So tell us some practices we can do to improve our collective systems thinking.
Diana Montalion: Yeah. So for the first one, we come right back to the war between product and tech, and everybody hates us. But here's what I discovered. My leadership responsibilities increased, but my primary responsibility is always to the code, right? If we don't have quality in the software, then nothing else matters. Well, that's not entirely true, because sometimes really crappy software actually does the job just fine. But generally speaking, my responsibility is to the system, which means to good tech, whatever good tech means in that situation.
So I think that I speak business well. Say I'm in a situation where we need millions of dollars, a bunch of money, to do the thing that we need to do. The software is becoming obsolete. The organization has whatever its fast package delivery goals are, but the legacy system wasn't designed for the modern world where there's so much relationship between information. They thought they could just put everything in a database, just stuff it in Oracle and it'll all work, and now that doesn't work for them anymore. So what do we do? How do we figure out what to do?
In order to figure out what to do, we have to understand other points of view. We have to understand: okay, we wanna make this change and we need money to do it. Now, I have to talk to people who won't understand my tech language in order to get the money. But if we don't get the money, it doesn't matter what tech language I'm using, because we can't build it. I have a mortgage; I do not work for free. I mean, I wouldn't mind working for free, but I have a mortgage, so I can't. This is something we haven't really been thinking about as engineers and tech leaders: we don't realize that we have to get the money. That's just part of it.
But also, we are the worst group of people to predict what users will experience, because we are not the users of the software. And I know this because I've been dragged kicking and screaming to watch user testing. Once I was architecting a systems change, and the head of graphics made me go watch 12 people use the legacy system for what it was designed for. And I'm rolling my eyes: I know what the system does, I built a bunch of this software. And, oh my God, they used it 12 different ways. They had all these hacks and workflows, and I would've built everything wrong for them. Because I thought I knew what they did, but I only knew what the software wants them to do. That is not what they actually do, right?
So my point in all of this is that I am not very impactful alone. I need to partner with people who have expertise I don't have. I think I speak great business and can be very persuasive about why we need a million dollars, until I try it and they're like, geek, geek, what the hell are you saying? The point is three pages down. An accountant once said to me, you sent me 26 pages, Diana, I can't read 26 pages. And I'm like, but this is a complex thing we're doing. And she said no. So I have discovered that in order to build hard things and to work on teams where we build hard things, I need more skills than I have. And so I need product.
I'm giving a talk with Kat Morris at QCon next week. Kat is on the product side, always has been, and I've been on the tech and architecture side. We started planning the talk by modeling our stories and discovering we had the same pain. When we started, I said, all of my stories are 'product ruined everything.' And she said, all my stories are 'the architect is a complete pain in my butt, I can never get anything done.' What we realized is that within our own tribes we're fighting about how to think about things, and then cross-functionally we're fighting too. And we realized that if she and I could work together from the beginning, and she could bring in the knowledge she has, and I could bring in the knowledge I have, we'd get an outcome so much better than this linear 'first I do a thing, then she does a thing, then we hate each other in between.'

So, partnering within the team: my current team, five of us, can get in a room, understand what we're trying to do, and then figure out how we're gonna do it. Sometimes we experiment or prototype. Sometimes somebody recommends something and we just go in that direction. So can you self-organize to decide what to do? But also, can you get the partners and the information you need in order to build something that really matters? For me, that comes from recognizing that your skills alone are insufficient in the modern world. You can know everything there is to know about JavaScript, but if you don't know how to make people's lives better with JavaScript, how much value are you really bringing? So it's both: being really good at JavaScript and being really good at making people's lives better, and they don't even have to know what JavaScript is to benefit from it.
Henry Suryawirawan: Well, I really love that, because it reminds me of the past, when I used to work in a bigger organization. It was always full of blame, or maybe not just blame, but misunderstanding. We think other people are trying to make trouble for us, and I believe the other team also thinks we are making trouble for them, right? So partnering, leveraging each other's skills and perspectives, bringing them to the table, and coming up with a better solution is a really good way of improving the collective systems thinking in the team.
And when you talked about preparing that presentation, you mentioned modeling. I think this is also a very useful practice. In fact, you mention that it's arguably the most important activity or skillset for improving systems thinking. So maybe tell us why modeling is so important. And should we practice more modeling in our day-to-day work in the tech industry?
Diana Montalion: Okay, so I wanna add one more follow-up to the last question, then I'll answer this one. But at the moment, I'm distracted, because I'm just starting my next book, and I'm thinking: Henry, will you please be a reviewer on the book and give feedback? Your questions are structured exactly to help people understand how to build this skillset. I love these questions. When I watch this back, I'm gonna write them down and make a talk that answers them in this order, because this is the best order of questions. So thank you for that.
[00:53:04] Collaboration with Consent
Diana Montalion: The one follow-up I want to make, because this is really important: I say partnering, I say collaboration, I say these things. I don't mean them in a kumbaya, summer-camp, we-all-live-in-peace way. Oftentimes, the other team, the other person, they are trying to make trouble for you. They are. And we often work with people who are being mean and bullying and awful. That is true. In that situation, you can't do systems thinking. That's a political problem, a behavioral problem. And especially being an outlier in tech, there are very few people who do what I do who look like me. What that means is that people presume I want to be the glue role, the everyone-gets-along role. I do not want to be that, and I'm not particularly good at it.
But also, I suspect I get more of the bad behavior than other people might, and I suspect I sometimes have to work harder to convince people to pay attention to me. That's not great. My point being, though, I don't want to suggest that if we just collaborate, yay, everything will work. We also have to fire the people who refuse to create social learning. We have to move away from the idea that the 10x developer can be the worst person in the world, but as long as they're delivering code, they're productive. Because that ignores all the emotional labor everybody else has to do every single time they have a meeting with that person, right?
So I mentioned partnering, but I left out the fact that both parties have to consent to it. I don't mean try to get people who are being hateful to work with you. No. The people being hateful need to stop being hateful. We talk about cat-herding roles; nobody gets to be a cat. That's not what systems thinking is. It's not changing hearts and minds. Change your own mind. That's your job, not my job. So this willingness at the heart of it really matters.
[00:55:29] The Importance of Modeling
Diana Montalion: And so I said that, and now I forgot. Oh, the modeling question. Okay. So again, there's a war over what modeling is, right? Is it C4 diagrams? Oh, and it's not UML anymore, I've learned. I'm supposed to hate UML. I actually like UML. It's useful, not for everything, but for some things. So I don't necessarily mean a specific kind of model, boxes and lines, although I do a lot of that. I mean: open a Miro board, or even better, if you're in person, an actual whiteboard.
And when you're trying to solve a problem, try to model the problem. I also mean using things like event storming to understand systemic issues, which is absolutely worth Googling. If you're trying to figure out what our fast package delivery is, do it in a model. Because we get so entrenched in language and our communication styles, and often we think we understand each other. We think we're saying the same thing, but we're not. If I go into a room of six people and say, we are gonna be agile, there are six reactions to what I've said, and they are completely different universes. One person's like, yay. One person's like, I will murder you. And then there's everything in between. And then three people say, you mean Jira? Like, no, I don't mean Jira, right?
So we often think we're solving the same problem, and when we're going around and around and bikeshedding, usually it's because we have completely different mental models of what we're doing. We're just not looking at the same thing. So have the conversation, but include the visual element: moving things around, making relationships. If we're just having a conversation, having discourse, we don't really see the relationships between the ideas we're sharing.
But if you have three stickies and I have three stickies, we can see whether there's a relationship. The mind just automatically thinks about the relationships between those six stickies. So you've taken a step into systems thinking as soon as you have two stickies and you think, well, are they the same? Are they different? Are they…
And so modeling is really a conversation. A model isn't reality, because you can only ever model one point of view. A model is defeasible, meaning I can only draw what I understand right now, and next week it will look different. I don't mean a North Star model, like 'I'm gonna show the engineers what to build, here's a model, go build that,' because that's dumb. That doesn't work. But I do mean that instead of just writing bullet points and lists of requirements, when you're making these kinds of decisions or trying to understand a systems problem, use visual language: shapes, lines, relationships. When you're trying to work together, this matters: it's how the product person and I discovered we have the same pain. When we talk about it, we talk about how much I suffer because of her point of view and how much she suffers because of mine. But when we modeled it, we saw a completely different reality.
Henry Suryawirawan: Yeah. It's like that funny cartoon where people draw the requirements as different shapes, right? Some draw triangles, some draw circles, some draw squares. Or the other analogy, where people touching different sides of an elephant describe the elephant in different ways. So modeling is an exercise that actually aligns us on the same perspective, the same understanding. It's always great to do this exercise. In fact, every time I have a requirements issue, it's always about different perspectives: you think one thing and I think another. And if we can come up with a model that we agree on and align on together, that will improve our understanding of the problem. So…
Diana Montalion: And a model can be text too. I was just thinking, because I was having this conversation yesterday, that a Jira ticket, a ticket, however we do work, is actually a model, even if we don't use shapes. Every artifact we create, anytime we're sharing a concept, we're making a model. And one of the things I really learned: my most recent team, I got to build it, so I brought in people I knew I could do something really hard with, something new and complex, because it was a big challenge.
What I discovered is that, if there are six of us and I write a ticket describing what we need to build, I would write six completely different tickets depending on who was gonna pick up the story. Claire loves lots of detail, right? Loves lots of detail. But for another engineer, I don't even need to say much, because he works best if he can have the discussions, think about the problem, and write his own ticket. And that's true for a lot of us, I think. A lot of us would do better if we could write our own ticket, if we could say: here's the information I need to think well, and then take that, have any follow-up discussions, ask any questions, and decide how to approach the work.
So I just wanted to interject that by a model I mean shapes and such, but even the way we describe a piece of work is a model. And being flexible with how you have these discussions, to fit the brains of the people who are going to be making these micro-decisions, benefits the outcome, because you get better outcomes when people get information in the way their brains process it. So that, sorry, was my add-on.
Henry Suryawirawan: Yeah, a very good addition, in fact. I would say it's a good insight, because people interpret things differently. Some people like details, some like things more abstract or more visual, right? Thanks for adding that to your explanation.
[01:02:20] AI Usage and System Thinking
Henry Suryawirawan: So I was about to ask about one thing that can probably be a big disruption to systems thinking these days: the introduction of AI, with all the systemic problems that could come with it. In your view as a systems thinker, what do you think will happen as people start to use more AI, or as AI becomes more entrenched in many of the things we're doing these days?
Diana Montalion: Yeah, so that's a big question, a common question. I will say, speaking of metacognition and knowing one's own mind, I am a relatively late adopter. People would say AI, and I'd say, you mean fancy search? They'd say, no, AI can do this, and I'd say, do you mean fancy search? Because it's not intelligence, right? Also, I've been in tech long enough to see how trendy we are. We're so trendy. It's 'this is the best thing ever,' and then 'this is the worst thing ever.' My joke now is that apparently Agile is causing climate change, based on how much people hate it. But I remember when it was the silver bullet that everybody wanted, right?
So with AI, the perception of what it can do and what it actually can do are far enough apart that I get frustrated with all the organizational adoption of AI that is not actually going to solve those organizations' problems. So just to caveat: I am in it now and I'm developing my skill, but there are other people out there who have more expertise and could answer this question with more knowledge than I do.
What I can say, though, is that systems thinking is about inference. If I have an idea and I tell you my three reasons for it, the thing that makes that idea strong is the relationship between the reasons: how you can say, if this and this and this, then, of course, that. That's inference, that 'and', that relationship. If we think of a graph database, we tend to think in nodes, in objects and data objects. But decisions, tech, software thinking, all of that is not really about the nodes. It's about the relationships between them and what those relationships signify. And AI can't do that unless it's told to do it, or unless we've juxtaposed things: we've juxtaposed Taylor Swift and singing in all the information, so AI knows that Taylor Swift is a singer. But that's not really inference so much as those things being next to each other all the time.
So I have a colleague, Abraham, who gives a wonderful talk on AI for architecture. I'll send it to you; maybe we can include it in the comments. He does a great job of showing how AI can really help us and also what its limitations are for architecture and problem solving, and a lot of it comes down to inference. In the talk, he builds an AI think tank with Mel Conway and Matthew Skelton, someone else, and me. So I'm sitting in the talk the first time I saw it, watching what the AI thinks I'll say when it answers the question 'what would Diana recommend?' And I'm like, am I that lame? Am I that lame in life? It made me so lame. And it made me lame because it doesn't understand 'it depends,' right? It doesn't understand that what I would say would be based on exploring a particular situation, not just saying 'sociotechnical, it's sociotechnical.' That's the thing, right?
So in terms of what we are talking about, I think I am rightly not a fan of AI helping us much in that way, because inference is exactly what it doesn't do. That said, oh my goodness, AI is so much better at summarizing me than I am at summarizing me. And AI is so fast at connecting complex ideas. For example, if I were to ask, who out there is thinking about systems and teaching about systems, what's their point of view, and how does it relate to these four chapters in my book, it would take me months to do that research. Now, some of what it returns is wrong, and it's missing people who should be there. So I still need knowledge to know where AI is wrong and limited and biased, but that's true whenever we talk to other people, too; we all have our own biases.
So AI is like a very smart person who has more knowledge than most. But is it knowledge, or is it information? And that, I think, is the really big thing: we say intelligence and we say knowledge, and I'm not sure it's either. I think it's just very well-crafted information. I've asked it code questions, and it's broken my code as often as it's fixed it. But it's also very quick to point out things I don't know, very quick to show me how things are related. I had to write a bio for myself, and I hate writing a bio for myself. I asked it to give me four bios of me, and I'm like, I'm really cool, that's really good. Because it knows best practices for language, for structuring language, and things like that.
I was a writer before I went into tech, and I quit writing. I was tired of being poor, and I also wanted to use my university learning. So I moved to Austin, Texas, where all the cool tech was happening, so I could really focus on my career. And then, ha ha, as a systems architect I write a lot, because language is how we communicate, right? And then I wrote a book, so I didn't quit anything. But the way we structure code and the way we structure language are very similar. We're doing informal logic, where there's not one right way; we figure out sort of the best way.
And so, where a lot of us struggle to use spoken or informal language to describe our ideas, I'm sad to say, because I wanted to hate it, but I think AI can really help us with that. Not wholesale, not cut and paste. And sometimes the opposite: I have a friend who writes a talk abstract and then asks AI for titles, and then makes sure she doesn't use those titles, because they'll be marketing speak. Because it comes from the internet, they'll be awful. So you can even use it as an example of what too many emojis and exclamation points in your writing look like. But I think it can help us communicate our ideas better, and it can help us bring in other perspectives we didn't know existed so we can go explore them. In that way, I think it's a good partner. And it's still fancy search; that's my argument.
Henry Suryawirawan: Right. Thank you for giving such a balanced view about AI. I really think AI is going to be one of the elements we need to understand in order to really understand how our systems work, because so many people leverage AI these days. Sometimes they don't fact-check, they just copy and paste, and that creates downstream impact: the result that AI produces gets used by other things, other systems, or other people. And the systemic impact is going to be difficult to reason about, because you don't know where it comes from. The inference is missing, like you said. So thanks for giving that perspective.
[01:11:04] 3 Tech Lead Wisdom
Henry Suryawirawan: So Diana, unfortunately, due to time, we have to finish our conversation. But before I let you go, I have one last question that I'd like to ask you, which I call the three tech lead wisdom. Think of it as advice that you want to give to us. Maybe you can share your version with us.
Diana Montalion: Yeah. So, repeating the theme: you get more bees with honey, that kind of thing. Maybe we can just be a little more patient and kind with each other, because we're all doing hard things, and there's a lot going on in the world. I'm in the US; there's a lot going on here. Maybe we just make a little more space for each other. That's one.
Partnering with people who have skills you don't have gives you more impact and influence. It's a good thing for you. It makes you better and it makes you more trustworthy. So it's definitely worth doing.
And the value of deep work, of knowledge work. I go back to the perennial question: how many of us are fighting for three or four hours a day where we just put on our headphones, focus, and do hard things, challenging things, creative things? Not just fulfilling tickets, but really trying to solve problems and really using our minds to generate new thinking. I wouldn't go all the way, though, because I've also had people who have that all day, and then they have a 15-minute standup and complain because they don't wanna be there for 15 minutes. That's because they don't wanna socialize at all. So not to that extent.
But yeah: be a little kinder, do deep work, and improve your skills by partnering and leveraging other people's skills, because that's the magic that helps you.
Last thought: the number one question I get asked when I teach is, what do I do about the fact that nobody listens to me? This is our number one pain. And I feel that pain, oh my gosh, so much. Those three things together are at least a step towards the answer. They will help people listen to you. And we really do want to be able to share our ideas and have a positive impact on the world.
Henry Suryawirawan: Wow, lovely pieces of wisdom. Thank you for sharing that; it's a lovely message. So Diana, if people want to connect with you, ask you more questions, or find resources about systems thinking, where can they find you online?
Diana Montalion: Yeah. I am Diana Montalion, under my name, on LinkedIn and the Fediverse, so Mastodon and Bluesky. There's a site, Learning Systems Thinking, that tells you about the book. My company is mentrixgroup.com, and I also have a blog there. My favorite way, though: I do a fair amount of speaking and workshops, and I have a community, SystemCrafters Collective, that we're building. It's for all of us out there who say no one listens to us, to get together and talk about how to do these things. So join SystemCrafters Collective, follow and ask questions, especially on LinkedIn. And ideally, in person: if you're somewhere I am and you've listened to this, I'd love for you to come up and introduce yourself. I love hallway chats; we can talk more about some of these ideas. Reading the book is a good thing too. I always forget to mention that, or people in the comments will say, oh, she wrote a book, she's just pitching the book. No, I wrote a book because I'm in pain and so are you. So let's figure out together how to solve that.
Henry Suryawirawan: Yeah, and I can highly recommend the book as well. It's rare to find a systems thinking book; Donella Meadows' book is the one that always gets referenced, but for something new with a tech perspective, Learning Systems Thinking is one book I can recommend. So thank you for your time today, Diana. It's been a pleasant talk. Thanks for all the insights.
Diana Montalion: Thank you, Henry. Thank you so much. Thank you for the questions. They were terrific.
– End –