#201 - Building Evolutionary Architectures: Automated Software Governance - Rebecca Parsons
“Evolutionary architecture became a necessity, not because anybody wanted it to be, but because you didn’t have a choice. You have to be able to change your systems to keep up with changing business and consumer expectations, let alone regulatory frameworks.”
In this episode, I have an insightful conversation with Rebecca Parsons, coauthor of Building Evolutionary Architectures and ex-CTO of ThoughtWorks, on the topic of evolutionary architecture. Rebecca shares the definition and principles of evolutionary architecture, as well as some important practices that software engineering teams can adopt to support it. Rebecca also offers her perspective on the impact of AI in software development and evolutionary architecture.
Key takeaways:
- Evolutionary architecture supports guided, incremental change across multiple dimensions.
- Fitness functions are a key tool for implementing evolutionary architecture.
- Some of the important engineering practices for evolutionary architecture include continuous delivery, evolutionary database design, contract testing, and choreography over orchestration.
- AI coding assistants can help analyze and understand complex legacy systems, aiding in refactoring and modernization efforts.
- Over-reliance on AI coding assistants may hinder the development of proper abstraction and critical thinking skills, especially in junior developers.
Listen out for:
- (00:02:35) Career Turning Points
- (00:08:38) Why Adopt Evolutionary Architecture
- (00:11:06) Evolutionary vs Rewrite
- (00:13:41) Architecture Definition
- (00:16:45) Evolutionary Architecture Adoption
- (00:20:56) Evolutionary Architecture Definition
- (00:22:32) Fitness Function
- (00:26:07) Commonly Adopted Fitness Functions
- (00:29:33) Principles of Evolutionary Architecture
- (00:35:24) Conway’s Law & Postel’s Law
- (00:39:40) Practices of Evolutionary Architecture
- (00:45:41) The Impact of AI on Evolutionary Architecture
- (00:48:44) The AI Worries
- (00:52:32) 3 Tech Lead Wisdom
_____
Rebecca Parsons’ Bio
Dr. Rebecca Parsons is currently independent, having been Thoughtworks CTO and CTO Emerita for over 15 years. She has more years of experience than she’d like to admit in technology and large-scale software development. She recently co-authored the book Building Evolutionary Architectures with Neal Ford and Pat Kua.
Before ThoughtWorks, she worked as an assistant professor of computer science at the University of Central Florida, after completing a Director’s Postdoctoral Fellowship at the Los Alamos National Laboratory. Her interests include parallel and distributed computation, programming languages, domain-specific languages, evolutionary architecture, genetic algorithms, and computational science. Rebecca received a BS in Computer Science and Economics from Bradley University, and both an MS and a Ph.D. in Computer Science from Rice University.
Follow Rebecca:
- LinkedIn – linkedin.com/in/dr-rebecca-parsons
- X – x.com/rebeccaparsons
- 📚 Building Evolutionary Architectures – https://www.oreilly.com/library/view/building-evolutionary-architectures/9781492097532/
Mentions & Links:
- 🎧 #120 - Software Architecture: From Fundamentals to the Hard Parts - Neal Ford – https://techleadjournal.dev/episodes/120/
- 🎧 #131 - Data Essentials in Software Architecture - Pramod Sadalage – https://techleadjournal.dev/episodes/131/
- 📚 Refactoring Databases – https://martinfowler.com/books/refactoringDatabases.html
- Virtual machines – https://en.wikipedia.org/wiki/Virtual_machine
- Containerization – https://en.wikipedia.org/wiki/Containerization_(computing)
- Quality attributes (ilities) – https://en.wikipedia.org/wiki/List_of_system_quality_attributes
- Docker – https://www.docker.com/
- Simian Army – https://netflixtechblog.com/the-netflix-simian-army-16e57fbab116
- Neal Ford – https://nealford.com/
- Pramod Sadalage – https://www.thoughtworks.com/profiles/p/pramod-sadalage
Check out FREE coding software options and special offers on jetbrains.com/store/#discounts.
Make it happen. With code.
Get a 45% discount for Tech Lead Journal listeners by using the code techlead24 for all products in all formats.
Tech Lead Journal now offers swag that you can purchase online. The swag is printed on demand based on your preference and will be delivered safely to you anywhere in the world where shipping is available.
Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once it arrives.
Career Turning Points
-
The first one, what I learned from that is, sure, go ahead and do the analysis, but that process of, okay, I’m going to sit with myself for half a day as if I’ve taken the computer science job, and I’m going to sit with myself for half a day as if I’ve taken the economics job. And your gut is going to tell you what’s the right thing to do. And it’s good to listen to your gut.
-
And then the second one, I didn’t listen to my gut. What I learned is that to have a success criterion for yourself that isn’t aligned with the success criteria of your organization is a recipe to be very, very unhappy. Because either they think you’re successful and you’re personally miserable, or you feel like you’re successful and they think you’re a failure. And neither one of those is a good place to be.
-
The third came after I’d been CTO for a little while. I’m actually an introvert. I don’t really like getting up on stages. But it was important for me to be up on those stages, to be talking as a technologist. Not just talking about diversity, but talking about agile and enterprise architecture and evolutionary architecture and domain-specific languages and all of these different technical topics. Because women needed to see someone who looks like me up on a stage talking about things like that. Prior to that, I’d been on a couple of stages. But it wasn’t an important part of what I did. And it became a very important part of the job I did for ThoughtWorks.
Why Adopt Evolutionary Architecture
-
When I started, back decades ago, you probably didn’t need an evolutionary architecture because technology was not moving that quickly, expectations were not moving that quickly. But what we see now is that business model lifetimes are shorter. Customer expectations for businesses, consumer expectations for businesses, are being driven not by what’s happening in the financial services industry, but by what’s happening on TikTok. And you don’t have nearly as much control over what it is you must build to remain competitive. And when you have that level of change, having a system that you can’t change to reflect what it is that your customers are demanding you give them is not going to allow you to succeed as an organization.
-
So, with all the change that is happening around, to say, ‘but, oh, the architecture will never change.’ Well, you know, that’s ridiculous. And we can’t really predict where that change is going to come from.
-
How people think about their technology estate was profoundly impacted by Docker and its ilk, in a way that virtual machines never were. And so even if you weren’t going to use Docker right away, you have to be thinking about it from an architectural perspective: how is this something that I can take advantage of?
-
And so it became a necessity, really, not because anybody wanted it to be, but because you didn’t have a choice. You have to be able to change your systems to keep up with changing business expectations and consumer expectations, let alone regulatory frameworks and things of that nature.
Evolutionary vs. Rewrite
-
One of the goals of having an evolutionary architecture is simplifying whatever change it is that you ultimately have to make. And if you’ve got something that’s small enough and self-contained enough that you can just throw it away and rewrite it, that’s probably easiest. And you don’t have to worry about being evolutionary if it’s only going to run once.
-
You don’t worry about evolving something that will only run once. You get it to the point that you need, let it run once, and then you throw it away.
-
The problem is when you look at most of the enterprises out there that have been around for any length of time, they don’t have pieces of their architecture that they can just throw away. They probably have five or six generations of technology and languages and frameworks and all of that kind of stuff.
-
Sure, if you’re a startup, if you’re a mom-and-pop, if you’ve been running on an Access database or an Excel spreadsheet, or whatever, maybe you can do that. Maybe you haven’t customized yourself into a corner or something like that. But for a lot of the kinds of clients that we dealt with at ThoughtWorks, that just wasn’t the reality. They didn’t have pieces of their architecture they could just throw away.
Architecture Definition
-
I work very hard never to precisely define it. Part of the problem is I’m an academic, and I can’t call it a definition unless it very clearly includes things and very clearly excludes things.
-
But one aspect of architecture that I think is important is it depends on what is important to that organization. You want your architecture to support the things that matter. And this is where Neal starts to get into the trade-off discussion because you might have two things that are important to you, but you have to decide which one is going to take precedence. There are also things that are not important.
-
And so the characteristics that lead to the success or failure of your system, those are the architectural characteristics. It might be network, security, data, performance, operability, resilience. There are all kinds of different characteristics. And for each one of those ilities, there’s a system that it matters for. But not every system needs to worry about all of them. And in fact, you can’t because many of them are mutually inconsistent.
-
And so what I talk about, in terms of architecture, is what are the things that are of importance in your industry for this particular system and even for your particular organization because organizations have particular challenges. Retailer A might not have the same perspective on everything that Retailer B does. So they’re in the same industry, they might be in the same geography, but that doesn’t mean that they worry about the same things.
Evolutionary Architecture Adoption
-
I would say that it is not widely adopted in the way, say, microservices are. Because even if you don’t have a microservice, people are talking about microservices pretty broadly.
-
But if you step one level down, some of the ideas that we talk about, particularly around fitness functions and how you can use that to assess the state of your architecture, those ideas are getting more traction.
-
My favorite example for that is maintainable. What does that mean? You and I could disagree on how maintainable a particular piece of code is. We couldn’t disagree on what the cyclomatic complexity was, whether or not it followed a particular coding standard or a naming standard. Did it respect architectural layering rules? Those things we can measure, but maintainable is in the eye of the beholder.
-
I do think we’re making progress in getting people to think more specifically about what they’re trying to achieve, and the mechanism of fitness functions gives them the ability to say: here is what I’m trying to achieve, this is my target architecture (which is the theoretical composition of all of your fitness functions), and this is how I’m doing. Maybe something in your system load has changed, or maybe a new technology is starting to be used, and all of a sudden you will need to reevaluate some of the architectural choices that you’ve made, because the situation is now different.
-
But I would agree with you. It isn’t widely adopted. And I think that also goes back to, it’s one thing to start from a greenfield and build a completely evolutionary architecture. It’s another thing to take a big ball of mud and turn it into something that’s evolvable. You’ve got a long journey ahead of you if you’ve got lots of balls of mud.
-
It really gets down to the fact that there is nothing inherently wrong with a monolith. Yes, there are some things that you can do with a microservice that you can’t do with a monolith. But then there are problems that you can’t have in a monolith that you have to deal with in a microservices architecture.
-
With too much focus on ‘let’s be as nimble as possible,’ you can end up with a much more complex system than you need. And that’s why when we talk about this, we always say you need to start by deciding which of these things are most important to you. And if evolvability isn’t one of your important ilities, don’t worry about it.
Evolutionary Architecture Definition
-
The three things. Evolutionary architecture supports guided, incremental change across multiple dimensions. And fitness functions come in with the guided because they are your guide. They are your assessment of how close have you gotten to what it is that you are trying to achieve.
-
Incremental. We want to be able to make these changes incrementally as a risk reduction strategy. And then across multiple dimensions, as we need to think about all of those different ilities, and we need to think about all the different characteristics and how they interact with each other in determining whether or not we’re being successful.
Fitness Function
-
Some fitness functions act like unit tests, but some of them are much more system-wide.
-
We come up with basically two different axes to define fitness functions. Static versus dynamic. Static, obviously, it’s something that is some kind of analysis. Maybe you run it in your build, something like cyclomatic complexity, as an example.
-
Dynamic is something that happens at runtime. So maybe you have some kind of monitor for CPU utilization. You don’t want your CPU utilization to get above X. Or maybe it’s some kind of tracer, a transaction tracer through a microservices architecture. Those are dynamic fitness functions.
-
And then there are fitness functions that test just one specific thing, and then there are more holistic fitness functions. And if you go to the extreme, the Simian Army is a dynamic holistic fitness function. They often look at system-wide characteristics, and they’re looking at multiple aspects of the architecture and how the running system actually behaves under particular kinds of stresses.
-
Fitness functions should be thought of as a unifying term for a lot of the different kinds of tests that we’ve been putting systems through for a long time. And the advantage of having that unifying terminology is that you can now talk about security requirements and operability requirements and performance requirements as the same thing. How far are we away from what we said our objective was? What is it going to take to get there? As opposed to all security requirements are created equal and must all be implemented before you can go live. Generally, that’s not the case.
-
The thing about fitness functions is you want them to be automated if possible, but some of them you don’t actually want to automate. But the single most important characteristic is that you and I will never disagree on whether one passes or not. It has to be defined in that way. And the process that you have to go through to get from ‘maintainable’ to a suite of defined functions that represent your definition of maintainable, that’s a difficult exercise, but it’s a quite valuable exercise.
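To make the static/dynamic split above a little more concrete, here are two minimal sketches (in Python) of what such fitness functions can look like. Everything specific in them is an assumption for illustration: the src/ directory, the complexity threshold, the CPU ceiling, and the sampling window are made up, and real teams usually lean on dedicated tooling rather than hand-rolled scripts.

```python
# Sketch of a static fitness function: fail the build if any function exceeds
# an (approximate) cyclomatic-complexity threshold. The src/ directory and the
# threshold of 10 are illustrative assumptions, not recommendations.
import ast
import pathlib
import sys

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)
THRESHOLD = 10

def complexity(func: ast.AST) -> int:
    # Simplified McCabe-style count: one path plus one per decision point.
    return 1 + sum(isinstance(n, DECISION_NODES) for n in ast.walk(func))

def violations(root: str = "src") -> list[str]:
    found = []
    for path in pathlib.Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                score = complexity(node)
                if score > THRESHOLD:
                    found.append(f"{path}:{node.name} complexity={score}")
    return found

if __name__ == "__main__":
    problems = violations()
    print("\n".join(problems))
    sys.exit(1 if problems else 0)  # a non-zero exit code is what fails the build
```

And a dynamic counterpart, a runtime check on CPU utilization; it assumes the third-party psutil package, and in practice this kind of check normally lives in the monitoring and alerting stack rather than in a script:

```python
# Sketch of a dynamic fitness function: flag sustained CPU utilization above a
# ceiling at runtime. Requires the third-party psutil package; the ceiling and
# sample count are illustrative assumptions.
import psutil

CPU_CEILING = 80.0  # percent
SAMPLES = 5         # consecutive one-second readings

def cpu_within_budget() -> bool:
    readings = [psutil.cpu_percent(interval=1) for _ in range(SAMPLES)]
    ok = any(r <= CPU_CEILING for r in readings)  # fail only on a sustained breach
    if not ok:
        # A real deployment would raise an alert here rather than print.
        print(f"CPU fitness function violated: readings={readings}")
    return ok

if __name__ == "__main__":
    cpu_within_budget()
```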
Commonly Adopted Fitness Functions
-
Assuming you decide that you want your system to be evolvable, to me, one of the most universal requirements is you have to be able to understand what the code is doing and what the system is doing. Because you can’t change something you don’t understand.
-
And so the closest I’ve got to a universal, with the caveat you’ve decided evolvability is important, you want to have guardrails in there for code quality. Do you have a good separation? Do you have low cyclomatic complexity? That’s probably the closest to a universal that you’re going to get.
-
As people get more used to using fitness functions, we’ll start to see more ideas on, well, here’s some fitness functions around X. Here’s some fitness functions around Y. In much the same way that many people use variants of the Simian Army, we’re going to have similar kinds of things happening across architectural patterns.
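As an illustration of the kind of code-quality guardrail described above, and of the "you and I can never disagree on whether it passes" property, here is a minimal sketch of a layering fitness function. The src/domain layout and the forbidden infrastructure package are hypothetical; tools such as ArchUnit on the JVM or import-linter in Python enforce rules like this far more thoroughly.

```python
# Hypothetical layering rule: modules under src/domain/ must not import anything
# from an (equally hypothetical) top-level "infrastructure" package. Written as a
# plain test so that a violation fails the build.
import ast
import pathlib

FORBIDDEN_TOP_LEVEL = "infrastructure"

def imported_modules(path: pathlib.Path) -> set[str]:
    """Collect every module name imported by the file at `path`."""
    tree = ast.parse(path.read_text(encoding="utf-8"))
    names: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module)
    return names

def test_domain_layer_does_not_import_infrastructure():
    for path in pathlib.Path("src/domain").rglob("*.py"):
        offenders = {m for m in imported_modules(path)
                     if m.split(".")[0] == FORBIDDEN_TOP_LEVEL}
        assert not offenders, f"{path} imports {sorted(offenders)}"
```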
Principles of Evolutionary Architecture
-
So often governance is this dirty word because you’ve got these architects sitting on high, looking down upon the minions doing all the work, and they’re going to say, ’no, you can’t.’ When you get to any kind of scale, you have no choice. You have to have some level of governance.
-
The value of fitness functions for governance is enormous. For anything that’s covered by a fitness function, particularly an automated fitness function, you never have to do any kind of architectural review, because you’ve got a test in there that will fail the build.
-
And so all of those concerns go away from a governance perspective, and you can focus your governance discussions on those places where you’ve got two things, you’ve got to make a trade-off, and you don’t really know how to make that work. That way you can put the brainpower of the humans in the places where you need that creativity, and you can leave the rote stuff to the automated fitness functions.
-
The shift of the governance that we’re talking about from the perspective of evolutionary architecture really comes down to focusing on the outcomes, not the implementations. What are the outcomes we are trying to achieve? What are the behaviors we want the system to exhibit? Not how you’re going to get there. And that allows the delivery teams to work within the sandbox that the governance organization has put into place, but then be creative about how they might actually implement something to achieve that behavior.
-
You can also then have a basis of a conversation that says, ‘I know our standard tool for this is X. But we’re trying to achieve the outcome. And in our situation, because of PQ and R, Y works better to achieve the outcome.’
-
And so the discussions become less about ‘no, you can’t use that because I told you that you had to use something else,’ and more about ‘this is how we’re going to go about achieving the outcome, and this is why we think this is a better way to achieve the outcome.’ And so one of those underlying philosophies is: let’s be focused on outcomes, not implementations.
-
Another aspect that is important here has to do with how you architect your system. And domain-driven design has really helped us here because it’s given us this language and this idea of a bounded context that makes sense within the business domain. Because if you think about a system in terms of its implementation, you’re going to talk about SAP, or you’re going to talk about Salesforce, or you’re going to talk about the customer ordering system. The people who are redesigning business processes and creating new business processes, they don’t care if something is stored in Salesforce, a CRM, or a shipping system. They think about the customer, they think about the product, and they think about the logistics flow.
-
The more our systems have their boundaries drawn around aspects of functionality that correspond to the chunks that the people creating the business process think in, the better. They’re going to design the business process by rearranging their chunks, and we can much more readily implement that process if our chunks have the same ability to move around.
-
And I actually think that’s where microservices have been successful, where SOA version one failed so miserably: the boundaries, so often in those early implementations of SOA, were all drawn around systems.
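To ground the governance point above: the kind of automated, build-failing fitness function that removes a whole class of architectural review could be a cyclic-dependency check, an example Rebecca gives in the full conversation below. A minimal sketch, assuming a Python codebase under a hypothetical src/ tree:

```python
# Governance-as-a-test sketch: fail the build if any cyclic dependency exists
# between modules under a hypothetical src/ directory.
import ast
import pathlib

def module_name(path: pathlib.Path, root: pathlib.Path) -> str:
    return ".".join(path.relative_to(root).with_suffix("").parts)

def import_graph(root: str = "src") -> dict[str, set[str]]:
    root_path = pathlib.Path(root)
    modules = {module_name(p, root_path): p for p in root_path.rglob("*.py")}
    graph: dict[str, set[str]] = {m: set() for m in modules}
    for mod, path in modules.items():
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            targets: list[str] = []
            if isinstance(node, ast.Import):
                targets = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                targets = [node.module]
            # Only edges between modules inside the project matter here.
            graph[mod].update(t for t in targets if t in modules)
    return graph

def find_cycle(graph: dict[str, set[str]]) -> list[str]:
    VISITING, DONE = 0, 1
    state: dict[str, int] = {}

    def visit(node: str, path: list[str]) -> list[str]:
        if state.get(node) == VISITING:          # back-edge: we found a cycle
            return path[path.index(node):] + [node]
        if state.get(node) == DONE:
            return []
        state[node] = VISITING
        for dep in graph[node]:
            cycle = visit(dep, path + [node])
            if cycle:
                return cycle
        state[node] = DONE
        return []

    for start in graph:
        cycle = visit(start, [])
        if cycle:
            return cycle
    return []

def test_no_cyclic_dependencies():
    cycle = find_cycle(import_graph())
    assert not cycle, "cyclic dependency: " + " -> ".join(cycle)
```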
Conway’s Law & Postel’s Law
-
Postel’s Law says, “Be generous in what you receive and stingy in what you produce.” The standard example I use, if you are receiving address information, and all you need is the zip code, postcode, some kind of geolocator, don’t validate the whole address. You don’t need to. And that way, if somebody decides, ‘oh, I need to add in that address line 2 to this thing,’ you won’t break.
-
The point is, focus on the information that you really need. Because that way, your system will only require change if it actually has to change. There’s no way that we can prevent any breaking change from ever happening, but we want to limit it to where it really has to break because we are trying to do something fundamentally different. But you want to be very stingy in what you expose, because, again, you have no idea who’s actually using what you put out there.
-
Even when you don’t intend for somebody to use it, people still are going to use it if they can. And you’ve made a contract, even though you’re in a contract that you don’t know you’re in.
-
People try to fight Conway’s Law. And you just can’t do it. Conway’s Law, a system will reflect the communication dysfunction of the organization that builds it. If the people don’t talk effectively to each other, the systems that they’re responsible for are not going to talk to each other.
-
You can use Conway’s Law to your advantage: look at what you really want your architecture to reflect, then reorganize your teams accordingly, and they’re going to produce it. It’s just going to happen. We call it the inverse Conway maneuver.
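A minimal sketch of the tolerant-reader idea behind Rebecca’s Postel’s Law example above: the consumer validates only the postcode it actually needs and ignores everything else, so an upstream change such as adding an address line 2 cannot break it. The payload shape and field names are hypothetical.

```python
# Tolerant reader sketch (Postel's Law): take only what you need from an incoming
# payload and ignore the rest. The payload shape and field names are hypothetical.
from dataclasses import dataclass
from typing import Any, Mapping

@dataclass(frozen=True)
class DeliveryZone:
    postcode: str

def read_delivery_zone(address_payload: Mapping[str, Any]) -> DeliveryZone:
    # Validate only the field we actually depend on. Unknown or newly added
    # fields (say, "address_line_2") are ignored rather than rejected.
    postcode = str(address_payload.get("postcode", "")).strip()
    if not postcode:
        raise ValueError("payload is missing the postcode this consumer relies on")
    return DeliveryZone(postcode=postcode)

# The producer later adds address_line_2; this consumer keeps working unchanged.
payload = {"name": "A. Customer", "postcode": "94107", "address_line_2": "Apt 4"}
print(read_delivery_zone(payload))  # DeliveryZone(postcode='94107')
```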
Practices of Evolutionary Architecture
-
First off, an underlying prerequisite is the discipline, the infrastructural discipline, and the deployment discipline that comes from continuous delivery. You don’t have to go all the way to continuous deployment. But you at least need to know that your deployments are going to run smoothly.
-
And so the risk mitigation aspects of continuous delivery are important. When you’re talking about these kinds of dramatic changes, you need to know what you’re deploying into, so that you can more readily debug anything that’s happening.
-
The second is this whole idea of evolutionary database design and database refactoring. I’ve been in many conversations over the years with people who would say, ‘okay, well, agile and incremental, that’s fine for developers.’ The team that I think has always had the strongest argument for ‘no, it can’t be incremental’ was the DBAs, because data migration is hard. It sounds so simple: copy it from here to here. But it’s hard. And so there’s an entire book called Refactoring Databases.
-
I also like to talk about contract testing, because one of the things you’re trying to do with an evolutionary architecture is to make it as easy to change things as possible. And so if I understand the assumptions that you’re making of my system, and you understand the assumptions that I am making of yours, then we both know what’s happening.
-
And then we can make whatever changes that we want, paying absolutely no attention to each other until one of those tests breaks. It maximizes the amount of independent work that can take place, and it helps us understand what those boundaries are and why. And that is a critical piece to being able to evolve an architecture. Because if I don’t know what you’re expecting of me, I can inadvertently break you, and we don’t want that.
-
And then you have to have the right kind of test and safety net. One of the things that we found is if you think properly about testing, you’re actually going to end up with a cleaner architecture because you have to have good boundaries to be able to properly test things.
-
Then we often talk about choreography over orchestration. And this is where you really start to get into these trade-off discussions, much like should I go with a well-structured monolith or should I go to microservices? You have much more flexibility with microservices than you do with a well-structured monolith. Emphasis on the well-structured. This is not spaghetti monoliths.
-
If you don’t need that level of flexibility, it’s not worth paying for the complexity. But sometimes you do. And it’s the same with choreography versus orchestration. If you’ve got an orchestrator, that orchestrator is going to solve some of those problems that you have with these independent actors, but you’re introducing a coupling that is not strictly necessary. On the other hand, in a choreographed system there are all kinds of errors that you have to take care of yourself. And so, if you need that flexibility, take it. But if you don’t need the flexibility, then go with something that’s simpler.
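For the contract-testing practice described a few points above, here is a minimal sketch of what a consumer-driven contract check could look like: the consumer writes down the handful of fields it actually relies on, and the provider runs the test so it finds out immediately when a change would break that consumer. The endpoint, field names, and types are hypothetical, and teams commonly use a dedicated tool such as Pact rather than rolling their own.

```python
# Consumer-driven contract sketch: the consumer records the fields it actually
# relies on; the provider runs this test so it learns immediately when a change
# would break that consumer. Endpoint, fields, and types are hypothetical.
import json
from urllib.request import urlopen

ORDER_CONTRACT = {          # only the fields this particular consumer reads
    "order_id": str,
    "status": str,
    "total_cents": int,
}

def fetch_order(order_id: str) -> dict:
    with urlopen(f"https://provider.example.com/orders/{order_id}") as response:
        return json.load(response)

def test_order_endpoint_honours_consumer_contract():
    order = fetch_order("demo-order-1")
    for field, expected_type in ORDER_CONTRACT.items():
        assert field in order, f"missing field the consumer relies on: {field}"
        assert isinstance(order[field], expected_type), (
            f"{field} changed type: expected {expected_type.__name__}, "
            f"got {type(order[field]).__name__}"
        )
    # Extra fields the provider adds are deliberately ignored (Postel's Law again).
```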
The Impact of AI on Evolutionary Architecture
-
We can use fitness functions, particularly the suite of code quality fitness functions, to assess the generated code.
-
There’s still anecdotal evidence, I wouldn’t call it solid evidence yet, that these code generators tend to copy, paste, and modify as opposed to trying to abstract. And so, running a simple copy-paste detector can help you see if your codebase is starting to get out of control in that way.
-
There’s certainly a lot of hype going on right now. But these models are qualitatively more powerful than any models that we’ve had in the past. And so I do think that we have the potential to use these LLM-based systems, particularly the more coding focused ones, to help us in development.
-
One of the things that ThoughtWorks has been experimenting with, as an example, is using these LLMs on a legacy code base to help understand how the information actually flows through that legacy code base. And to help use that information to start to refactor and ultimately replace a legacy code base. It’s still early days, but what we’re seeing is really a fundamental increase in the ability of a human to understand a code base. And it’s because the human is relying on information, and the LLM in the background is doing a lot of hard work.
-
As I said earlier, you cannot evolve a system that you can’t understand. And that’s one of the problems with many of these old legacy systems: people just don’t understand how they work anymore. And so the more we can build tools to help understand these legacy systems, the better the position we are in to actually be able to modify those systems.
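A minimal sketch of the simple copy-paste detector mentioned above: hash normalized windows of lines across the codebase and flag any window that shows up in more than one place. The window size and directory are illustrative, and dedicated clone detectors (PMD's CPD, jscpd, and similar) are far more robust.

```python
# Copy-paste detector sketch: hash normalized windows of lines and flag any window
# that appears in more than one place. Window size and directory are illustrative.
import collections
import hashlib
import pathlib

WINDOW = 8  # consecutive non-blank lines that must match to count as a clone

def normalized_lines(path: pathlib.Path) -> list[str]:
    return [line.strip() for line in path.read_text(encoding="utf-8").splitlines()
            if line.strip()]

def find_clones(root: str = "src") -> dict[str, list[str]]:
    seen: dict[str, list[str]] = collections.defaultdict(list)
    for path in pathlib.Path(root).rglob("*.py"):
        lines = normalized_lines(path)
        for i in range(len(lines) - WINDOW + 1):
            chunk = "\n".join(lines[i:i + WINDOW])
            digest = hashlib.sha256(chunk.encode()).hexdigest()
            # The position is an index into the blank-stripped lines; close enough
            # for a sketch whose job is only to point a human at the right file.
            seen[digest].append(f"{path} (around block {i})")
    return {h: locations for h, locations in seen.items() if len(locations) > 1}

if __name__ == "__main__":
    for locations in find_clones().values():
        print("possible copy-paste:", ", ".join(locations))
```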
The AI Worries
-
That is one of the things I worry about: that we have basically increased the productive capacity of our industry to create that code. And that doesn’t help anybody. As I said, I do think we can use fitness functions to at least monitor what’s happening with the code base.
-
One of the things that I worry about, though, is that in many ways, our industry is a kind of apprentice model, where you have junior developers who are learning from more experienced developers, and it goes on. Unless these coding assistants get much better pretty quickly, I would worry about where, in 20 years’ time, our star developers are going to have come from. The notion that somebody is going to learn how to code from a coding assistant, we’re not there yet. They’re too likely to put out things that are wrong.
-
There was one study done by the CodeScene people, where the coding assistants they were working with, across a suite of models, recommended refactorings; in the best case, they were right 37% of the time. So in over 60% of cases, the refactoring that they suggested did not maintain the correct behavior of the code.
-
If you as a developer get things wrong two-thirds of the time, you’re not going to keep your job for very long. As a professor, if two-thirds of the stuff that I said was wrong, I am not doing a service to my students. They’re not going to be able to learn if they have to figure out which two-thirds of the stuff that I’ve said is nonsense. So that’s what worries me: how are we going to train the next generation if we’re relying so much on coding assistants?
3 Tech Lead Wisdom
-
We need to understand how our organization makes money, what they are doing, what the pressures are on that organization. And that’s our responsibility.
-
I firmly believe as technologists, it’s our responsibility to communicate to the rest of the organization, in their language, the potential consequences of the decisions that they are making. We’re the ones that know the tech, but we have to do it in their language so that they can understand the business risks or the business opportunities, for that matter.
-
-
As the technology landscape has become so broad, questions of generalist versus specialist have taken on a different meaning.
-
It used to be, when I started, one person could understand the entire stack. You can’t do that to any level of specificity anymore. JavaScript frameworks and other front-end frameworks and non-relational databases and different kinds of network architecture, and on and on; it just keeps going.
-
And so a crucial decision that an individual needs to make is what kind of technologist do they want to be? Do they want to be a somewhat generalist? Do they want to think more big picture from a technology perspective or do they want to become a true specialist in something? And that’s something to decide relatively early in your career.
-
-
With how rapidly our industry is changing, you have to think of learning as fun.
-
We have to embrace that because new languages are coming out, new frameworks are coming out, new architectural approaches are coming out. And we need to be able to keep learning new things, and to enjoy it. Because you don’t want to be that person who is hanging on at the tail end of their career because they’re the only person left on the planet who understands this programming language.
-
You don’t want to be that person. You want to be someone who has continued to evolve your career. And to do that, thinking of learning as fun and not a chore is crucial.
-
[00:01:46] Introduction
Henry Suryawirawan: Hello, everyone. Welcome back to another new episode of the Tech Lead Journal podcast. Today, I’m very excited here to have an honorary guest, Dr. Rebecca Parsons. She’s an ex-ThoughtWorks CTO for a long time, around 15 years, I guess, until very recently she left the position. So Dr. Rebecca today is here to talk about evolutionary architecture. I think this topic has been around for quite some time, although I think the adoption has not been really there in the industry. So Dr. Rebecca, really looking forward to have this conversation with you today. I hope to learn a lot about evolutionary architecture.
Rebecca Parsons: Happy to be here, Henry.
[00:02:35] Career Turning Points
Henry Suryawirawan: Right. Dr. Rebecca, I always love to start my conversation by asking my guests to maybe tell us a little bit more about you. Maybe turning points that you think we all can learn from you.
Rebecca Parsons: Well, I guess the first one would be, as I was just getting out of university and taking my first real job. And I had offers at the same company, but from two very different departments. Because I have actually both a degree in computer science as well as a degree in economics. And I went through the classic, okay, pros and cons on the spreadsheet kind of process. And had decided I was going to take the job in economics. I mean, I thought, you know, what could be better? Somebody is going to pay me to read the Wall Street Journal. You know, why wouldn’t that be what I did? And when I got on the phone talking to the recruiter, I said I’m going to take the computer science job. And it’s like, wait a minute. But what I realized is in the back of my head, I’d been playing with, okay, yes, I’m going to take the economics job. I’m going to take the computer science job. And I realized that even though all of the clinical analysis said I should do the economics job, what I really wanted to do was the computer science job.
And what I learned from that is, sure, go ahead and do the analysis, but that process of, okay, I’m going to sit with myself for half a day, as if I’ve taken the computer science job, and I’m going to sit with myself for half a day as if I’ve taken the economics job. And your gut is going to tell you what’s the right thing to do. And it’s good to listen to your gut. And, you know, I’ve often wondered what would have happened if I had taken the economics job. I probably would have ended up in law school as a lawyer or something like that, as opposed to a computer scientist. But, um, I’d say that was the first one.
And then the second one, I would say, I didn’t listen to my gut. And this was, I completed my postdoc at Los Alamos and I was choosing between staying at Los Alamos as a researcher or going to the university as a professor. And when I first started my PhD full time, I told myself I’m never going to be an assistant professor of computer science and I’m never going to live in the state of Florida. And I ended up at the University of Central Florida as an assistant professor of computer science.
And part of the reason I did that, academia, at least in the U.S. at that time, if you didn’t pretty quickly go into academia, they just didn’t take you seriously. And it was one of those things, okay, if I think I might want to do this, I need to do it now. But I was right. Maybe it’s because the amount of time I spent in industry. I don’t know. But what I learned is that to have a success criteria for yourself that isn’t aligned with the success criteria of your organization is a recipe to be very, very unhappy. Because either they think you’re successful and you’re personally miserable, or you feel like you’re successful and they think you’re a failure. And neither one of those is a good place to be.
And then I would say that the third came after I’d been CTO for a little while, and I was having a conversation with our president and CEO. And I’d had an experience. I had been on a CTO panel for the Grace Hopper Celebration of Women in Computing. And I was the only woman CTO on the panel, which I thought was kind of ironic for the Grace Hopper Celebration of Women in Computing, but I understood the rationale for it. But what I learned later is we had lunch and they had students at the tables with the different CTOs. And all of the other CTOs ended up just talking about jobs at their company.
I was talking with the women and students at my table about their careers and their aspirations and such. And one of them said to me, I think I need to leave graduate school or at least change my advisor, because my advisor told me that I was taking the spot of a man and I needed to go home and make babies like I was supposed to. And I thought, you know, this is the 21st century. The fact that anywhere on the planet, some professor would think it was alright to say something like that just appalled me.
And what I decided during that conversation with Trevor, the CEO, was that even though I’m actually an introvert and I don’t really like getting up on stages, it was important for me to be up on those stages, to be talking as a technologist. Not just talking about diversity, but talking about agile and enterprise architecture and evolutionary architecture and domain specific languages and all of these different technical topics. Because women needed to see someone who looks like me up on a stage talking about things like that. And that was a real turning point for me. Prior to that, I’d been on a couple of stages. But it wasn’t an important part of what I did. And it became a very important part of the job I did for ThoughtWorks.
Henry Suryawirawan: Wow. Thanks for sharing. First of all, I think the stories really are, you know, strong and beautiful, right? So the first is about listening to your gut, right? So I could imagine back then you were kind of like torn between the two or, you know, two majors, computer science and economics. And then after that is a lesson about not listening to your guts. I think sometimes we all did that as well, especially in our career, right? We wanted to leave, but we couldn’t for whatever reasons, right, and ended up being miserable in the job. And the third one is about making a stand, right? Being there as an inspiration for some women out there. So thanks for sharing the story.
[00:08:38] Why Adopt Evolutionary Architecture
Henry Suryawirawan: So Dr. Rebecca, you are well known about evolutionary architecture. In fact, you have written this book Building Evolutionary Architecture, which is in the second edition now. So maybe first of all, right, tell us a little bit more, why do we need evolutionary architecture? Because I think when we talk about architecture, typically people talk about something that is difficult to change. And why do we need an evolution for our architecture?
Rebecca Parsons: Well, when I started, back decades ago, you probably didn’t need an evolutionary architecture, because technology was not moving that quickly, expectations were not moving that quickly. But what we see now is that business model lifetimes are shorter. Customer expectations for businesses, consumer expectations for businesses are being driven not by what’s happening in the financial services industry, but what’s happening in TikTok. And you don’t have nearly as much control over what it is you must build to remain competitive. And when you have that level of change, having a system that you can’t change to reflect what it is that your customers are demanding you give them is not going to allow you to succeed as an organization. And so, with all of the change that is happening around, to say, but, oh, the architecture will never change. Well, you know, that’s ridiculous. And we can’t really predict where that change is going to come from.
You can, in hindsight, look at, for example, virtual machines and containerization and Docker and, you know, and see that as a natural progression. But the impact on how people think about their technology estate was incredibly impacted by Docker and its ilk, in a way really that virtual machines didn’t have that same level of impact. And so even if you weren’t going to use Docker right away, you have to be thinking about it from an architectural perspective of how is this something that I can take advantage of.
And so it became a necessity, really, not because anybody wanted it to be, but because you didn’t have a choice. You have to be able to change your systems to keep up with changing business expectations and consumer expectations. Let alone regulatory frameworks and things of that nature.
[00:11:06] Evolutionary vs Rewrite
Henry Suryawirawan: Yeah, and also during the pandemic back then, right, so situational impact, right? Suddenly everyone has to scramble and find a solution. So I think the other aspect about architecture that I have seen, typically in the startups, is that instead of doing evolutionary changes, they make a revolutionary changes. You know, things like rewrites, breaking monolith into microservice. How about this kind of case? Do you see it as also something that is doable or more advisable to do for, you know, industry or companies who are changing very, very rapidly.
Rebecca Parsons: Well, one of the goals of having an evolutionary architecture is simplifying whatever change it is that you ultimately have to make. And if you’ve got something that’s small enough and self contained enough that you can just throw it away and rewrite it, that’s probably easiest. And you don’t have to worry about being evolutionary if it’s only going to run once.
You know, I saw a talk by a scientist from the Jet Propulsion Laboratory in the U.S. And the purpose of the talk was to talk about how they took advantage of cloud to do this particular analysis. But the point I took away from it is the fact that this data download was only ever going to happen once, and you would never have to run it again. No, you don’t worry about evolving something that will only run once. You know, you get it to the point that you need it to let it run once and then you throw it away.
But the problem is when you look at most of the enterprises out there that have been around for any length of time, they don’t have pieces of their architecture that they can just throw away. They probably have five or six generations of technology and languages and frameworks and all of that kind of stuff. And so, you have to start by getting yourself in a position where maybe you do have things that you can throw away. But for a lot of enterprises, that isn’t the case. Sure, if you’re a startup, if you’re a mom and pop, if you’ve been running on a, on an Access database or an Excel spreadsheet, or, you know, whatever, maybe you can do that. Maybe you haven’t customized yourself into a corner or something like that. But for a lot of the kinds of clients that we dealt with at ThoughtWorks, that just wasn’t the reality. They didn’t have pieces of their architecture they could just throw away.
[00:13:41] Architecture Definition
Henry Suryawirawan: So maybe in your definition, what do you define as an architecture? Because when I talk to, you know, when I learn about architecture in so many different resources and books, right? The first thing that we often hear about is about, you know, architecture is stuff that is hard to change or you make until the last responsible moment, right? Or the other thing, like Neal Ford always say, architecture is about trade offs, right? It’s there’s always something that you trade off. So maybe in your definition, before we actually go into the evolutionary aspect, so what is architecture in your definition?
Rebecca Parsons: I work very hard never to precisely define it. Part of the problem is I’m an academic, and I can’t call it a definition unless it very clearly includes things and very clearly excludes things. But one aspect of architecture that I think is important is it depends on what is important to that organization. You want your architecture to support the things that matter. And this is where Neal starts to get into the trade off discussion, because you might have two things that are important to you, but you have to decide which one is going to take precedence. Well, there are also things that are not important, like that Jet Propulsion Laboratory program. It had absolutely no need to be maintainable or recoverable because they had one data set they were going to run once. And then they’re…
And so the characteristics that lead to the success or failure of your system, those are the architectural characteristics. It might be network, security, data, performance, operability, resilience. I mean, there are all kinds of different characteristics. When Neal and I give this talk, we’ve got a screenshot we took, actually, when the first edition came out. And so it’s quite old; it doesn’t, for example, have observability on it. But there are, you know, dozens of different ilities. And for each one of those ilities, there’s a system that it matters for. But not every system needs to worry about all of them. And in fact, you can’t because many of them are mutually inconsistent.
And so what we, what I talk about in terms of architecture is what are the things that are of importance in your industry for this particular system and even for your particular organization, because organizations have particular challenges. You know, Retailer A might not have this, the same perspective on everything that Retailer B does. So they’re in the same industry, they might be in the same geography, but that doesn’t mean that they worry about the same things.
Henry Suryawirawan: Right, definitely makes sense, right? So something that is most important to you, right? You define that as like the so called attributes of your architecture, and from there, we kind of like evolve as and when there’s a change, as and when there’s a need, right?
[00:16:45] Evolutionary Architecture Adoption
Henry Suryawirawan: So one thing in particular that about evolutionary architecture, even though this resource, the book, you know, and the theory has been around for quite some time, I actually rarely listen people talking about evolutionary architecture in the industry for whatever reasons. Maybe it’s because I’m not exposed in those kind of conversations. But in your view, what is the state of adoption in the industry, actually?
Rebecca Parsons: I would say that it is not widely adopted in the way, say, microservices are. Because even if you don’t have a microservice, people are talking about microservices, pretty broadly. But I think if you step one level down, some of the ideas that we talk about, particularly around fitness functions and how you can use that to assess the state of your architecture, I think those ideas are getting more traction. People are looking at what am I really trying to achieve here and how can I make this concrete enough that I can actually test for it?
My favorite example again for that is maintainable. What does that mean, you know? You and I could disagree on how maintainable a particular piece of code is. We couldn’t disagree on what the cyclomatic complexity was, whether or not it followed a particular coding standard or a naming standard. Did it respect architectural layering rules? Those things we can measure, but maintainable is in the eye of the beholder.
So I do think we’re making progress in getting people to think more specifically about what they’re trying to achieve and that the mechanism of fitness functions gives them the ability to say, A, here is what I’m trying to achieve, this is my target architecture, which is, you know, the theoretical composition of all of your fitness functions. And this is how I’m doing. Maybe something in your system load has changed or maybe a new technology is starting to be used and all of a sudden you will need to re-evaluate some of the architectural choices that you’ve made, because the situation is now different.
And so I think at that level, we are starting to make more progress. You do hear more people talking about fitness functions. But I would agree with you. It’s not, it isn’t widely adopted. And I think that that also goes back to, you know, it’s one thing to start from a greenfield and build a completely evolutionary architecture. It’s another thing to take a big ball of mud and turn it into something that’s evolvable. You’ve got a long journey ahead of you, if you’ve got lots of balls of mud.
I saw this great graphic on LinkedIn yesterday. There’s nothing wrong with building lasagna, a nice layered monolith. Well structured. You don’t want to build a pile of spaghetti. But you don’t necessarily have to start with raviolis either, which I guess is the metaphor for microservices and that.
But I thought it was a great visual, because, you know, it really gets down to the fact that there is nothing inherently wrong with a monolith. Yes, there are some things that you can do with a microservice that you can’t do with a monolith. But then there are problems that you can’t have in a monolith that you have to deal with in a microservices architecture. And so, I think too much of a focus on let’s be as nimble as possible, you can end up with a much more complex system than you need. And that’s why when we talk about this, we always say you need to start by deciding which of these things are most important to you. And if evolvability isn’t one of your important ilities, don’t worry about it.
Henry Suryawirawan: Yeah, makes sense. It comes back to what we discussed earlier, right? It’s about what’s important to you and pick the right architecture based on your context, right? My suspicion is also regarding the tools, right? Because I don’t see many tools focused on solving these kind of things. But maybe in the future, we might see, start seeing these kinds of tools.
[00:20:56] Evolutionary Architecture Definition
Henry Suryawirawan: So maybe let’s go to the definition first. Like I liked your definition, evolutionary architecture in the book, which kind of like defines the three biggest things from that definition. Fitness function is one. So maybe if you can help us define first so that the listeners here can also understand the big picture of evolutionary architecture.
Rebecca Parsons: Okay, so the three things. Evolutionary architecture supports guided, incremental change across multiple dimensions. And fitness functions come in with the guided, because they are your guide. They are your assessment of how close have you gotten to what it is that you are trying to achieve. Incremental, you know, we want to be able to make these changes incrementally as a risk reduction strategy. And then across multiple dimensions as we need to think about all of those different ilities and we need to think about all of the different characteristics and how they interact with each other in determining whether or not we’re being successful.
Henry Suryawirawan: So just to recap, like, there’s a guided thing that happens in the evolutionary architecture. So this is like the fitness function that kind of like guides you towards a certain fitness, I suppose. And then incremental change, if you do, if you don’t do incremental change, I think there’s very little reason for you to evolve your architecture. Simply just like what you mentioned, the example, right, the jet propulsion thing. And the last thing is architecture involves multiple dimensions, right? So pick the most important dimensions for you, and use the fitness function and the incremental change that you do to actually kind of like evolve your architecture.
[00:22:32] Fitness Function
Henry Suryawirawan: So maybe let’s go to the fitness function first. I think this is taken from the theory of evolutionary computing. And the analogy also is something like a unit test in, you know, automated test world, right? So tell us how can we implement this fitness function in, you know, day to day project or kind of like service that we build?
Rebecca Parsons: Well, first, some fitness functions act like unit tests, but some of them are much more system wide. We come up with basically two different axes to define fitness functions. Static versus dynamic. Static, obviously, it’s something that is some kind of analysis. Maybe you run it in your build, something like cyclomatic complexity, as an example. Dynamic is something that happens at runtime. So maybe you have some kind of monitor for CPU utilization. Well, that’s a dynamic fitness function. You don’t want your CPU utilization to get above X. Or maybe it’s some kind of tracer, transaction tracer through a microservices architecture. Those are dynamic fitness functions.
And then there are fitness functions that test just one specific thing, and then there are more holistic fitness functions. And if you go to the extreme, the Simian Army is a dynamic holistic fitness function, or a collection of them, actually. And those happen at runtime. They often look at system wide characteristics and they’re looking at multiple aspects of the architecture and how the running system actually behaves under particular kinds of stresses.
And so fitness functions should be thought of as a unifying term for a lot of the different kinds of tests that we’ve been putting systems through for a long time. And the advantage of having that unifying terminology is that you can now talk about security requirements and operability requirements and performance requirements as the same thing. How far are we away from what we said our objective was? What is it going to take to get there? As opposed to all security requirements are created equal and must all be implemented before you can go live. Generally, that’s not the case.
And so the thing about fitness functions is you want to have it be automated if possible, but some of them you don’t actually want to automate. But the single most important characteristic is that you and I will never disagree on whether it passes or not. It has to be defined in that way. And that process that you have to go through to go from maintainable to a suite of defined functions that represent your definition of maintainable. That’s a difficult exercise, but it’s a quite valuable exercise.
Henry Suryawirawan: Yeah, so speaking about quality attributes or the ilities, right? Definitely, it’s kind of like vague whenever we discuss about it. I think you brought a point about maintainability. What do you mean by maintainability, right? Everyone has their own definition. And sometimes people focus on certain aspects like code, maybe maintainability of the service itself or maintainability of infrastructure, right? There are so many different maintainability. So I think the first exercise is to come up with a kind of like a baseline, right? Everyone needs to understand the same thing. And as much as possible, we should be able to quantify maybe some kind of metrics. Doesn’t have to be automated. But at least, people have the same understanding, right?
[00:26:07] Commonly Adopted Fitness Functions
Henry Suryawirawan: Maybe in your experience, having done this for quite some time, do you think there are some fitness functions that we, all software engineering team, right, must have within our system? I know that it’s hard because we mentioned that your importance of certain characteristics is different, right? But maybe there are some basic ones that you think are the most important for everyone to adopt.
Rebecca Parsons: Well, assuming you decide that you want your system to be evolvable, to me one of the most universal requirements is you have to be able to understand what the code is doing and what the system is doing. Because you can’t change something you don’t understand. And so the closest I’ve got to a universal, with the caveat, yes, I know I’m a, you know, I’m one of those, you know, scientists. I need to be precise, with the caveat you’ve decided evolvability is important. You want to have guardrails in there for code quality. Do you have good separation? Do you have low cyclomatic complexity? That’s probably the closest to a universal that you’re going to get. But we’ve actually seen quite a bit of creativity where people are trying to solve for a particular problem.
One of my favorite fitness functions that we heard about is we had a client and their legal department had come to the delivery team, and said, now, we’re using all these open source frameworks and they’re going to notify us if they change their license, right? And, you know, the team just laughed, as you are right now. Because, of course, they’re not going to notify everybody. They’d have no idea who’s using this. And the lawyer got all worried about that because, oh, well, we could inadvertently be using something that has a license that I haven’t approved. And so they started to, you know, come up with all of these natural language processing ideas for, you know. And then somebody came up with this very simple, elegant solution. Hash all of the license files. Put in a unit test that hashes the current license file and compares the hash to what’s in the test. And if it’s different, you send an email to the lawyer with a link to the new file. And then the lawyer can check. You know, and it’s, it was, it’s brilliant in its simplicity. And then as soon as the lawyer signs off, they do the rehash, they put it back into the build and now they go merrily along. And so the system is notifying the lawyer when the license file changes. And it was a brilliant solution, you didn’t need to do anything fancy to say, okay, did the semantics of this, well, how’s the team going to maintain that? All they need to know is that it’s changed.
So I think as people get more used to using fitness functions, we’ll start to see more ideas on, well, here’s some fitness functions around X. Here’s some fitness functions around Y. In much the same way that many people use variants of the Simian Army, we’re going to have similar kinds of things, I think, happening across architectural patterns, things of that nature.
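A minimal sketch of the license-hash fitness function Rebecca describes above; the paths and hash values are placeholders, and as she notes, the real version emails the lawyer a link to the changed file when a hash no longer matches.

```python
# License-change fitness function sketch: hash each dependency's license file and
# compare against the hashes already approved. Paths and hash values below are
# hypothetical placeholders.
import hashlib
import pathlib

APPROVED_LICENSE_HASHES = {
    "vendor/libfoo/LICENSE": "9f8e...placeholder...",
    "vendor/libbar/LICENSE": "3c1d...placeholder...",
}

def sha256_of(path: str) -> str:
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def test_licenses_unchanged():
    changed = [p for p, approved in APPROVED_LICENSE_HASHES.items()
               if sha256_of(p) != approved]
    # On failure, the version described above emails the lawyer a link to the
    # changed file; once approved, the stored hash is updated and the build is green.
    assert not changed, f"license files changed and need legal review: {changed}"
```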
Henry Suryawirawan: Yeah, I hope to see a lot more patterns or recipe book, you know, out there that people use in terms of fitness functions. So as far as to get inspiration or maybe the tools also that we can just, you know, use in an open source fashion, right?
[00:29:33] Principles of Evolutionary Architecture
Henry Suryawirawan: So I think another thing that is very fundamental in the evolutionary architecture implementation, right, there are two aspects, which is kind of like the governance aspect. And the other one is the engineering practice aspect. Maybe if we can cover each, right, because I find those two are really important, because they are kind of like a collection of multiple different principles, paradigms, philosophies that I think we all need to get reminded of. So maybe let’s start with the governance aspect.
Rebecca Parsons: Well, so often governance is this dirty word, because you’ve got these architects sitting on high, looking down upon the minions doing all of the work, ready to say, no, you can’t. But when you get to any kind of scale, you have no choice: you have to have some level of governance. Now you, as an engineering leader, can decide what level. I know of one CTO who was trying to break a pattern of excessive reuse. And so he said no two microservices can be written in the same technology stack and language, which makes it impossible to really reuse anything. I thought that was a bit extreme, but he was trying to make an organizational change, and sometimes the pendulum has to swing like that.
Henry Suryawirawan: So what are the principles of evolutionary architecture that you can share with us?
Rebecca Parsons: The value, though, of fitness functions for governance is enormous. Because for anything that’s covered by a fitness function, particularly an automated fitness function, you never have to do any kind of architectural review. You know you have no cyclic dependencies, because you’ve got a test in there that will fail the build if anybody inadvertently adds a cyclic dependency. And so all of those concerns go away from a governance perspective, and you can focus your governance discussions on those places where you’ve got two things in tension, you’ve got to make a trade-off, and you don’t really know how to make that work. You can put the brainpower of the humans in the places where you need that creativity, and leave the rote stuff to the automation.
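For a concrete sense of what such a build-failing check can look like, here is a sketch of a "no cyclic dependencies" fitness function for a Python package. The package name and layout are assumptions, and it only follows absolute imports; in the Java ecosystem, tools such as ArchUnit or JDepend offer this kind of cycle check out of the box.

```python
# A "no cyclic dependencies" fitness function: build a module-level import
# graph with ast and fail the build if any cycle exists.
# The package name and layout are illustrative assumptions.
import ast
from pathlib import Path

PACKAGE_DIR = Path("src/myapp")   # hypothetical package root
PACKAGE_NAME = "myapp"

def module_name(path: Path) -> str:
    rel = path.relative_to(PACKAGE_DIR.parent).with_suffix("")
    return ".".join(rel.parts)

def build_import_graph() -> dict[str, set[str]]:
    graph: dict[str, set[str]] = {}
    for path in PACKAGE_DIR.rglob("*.py"):
        mod = module_name(path)
        graph.setdefault(mod, set())
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                targets = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                targets = [node.module]      # absolute imports only
            else:
                continue
            for target in targets:
                if target.startswith(PACKAGE_NAME):
                    graph[mod].add(target)
    return graph

def find_cycle(graph: dict[str, set[str]]) -> list[str]:
    state: dict[str, int] = {}   # 0 = on current path, 1 = fully explored
    def visit(node: str, stack: list[str]) -> list[str]:
        if state.get(node) == 1:
            return []
        if state.get(node) == 0:                     # back edge: cycle found
            return stack[stack.index(node):] + [node]
        state[node] = 0
        for dep in graph.get(node, ()):
            cycle = visit(dep, stack + [node])
            if cycle:
                return cycle
        state[node] = 1
        return []
    for node in graph:
        cycle = visit(node, [])
        if cycle:
            return cycle
    return []

def test_no_cyclic_dependencies():
    cycle = find_cycle(build_import_graph())
    assert not cycle, f"Cyclic dependency detected: {' -> '.join(cycle)}"
```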
One of them, I think it’s the Doctor Monkey in the Simian Army, checks to make sure all RESTful endpoints are properly configured. So you never have to worry about it, because it’ll get tossed out if it’s not properly configured.
So the shift in governance that we’re talking about, from the perspective of evolutionary architecture, really comes down to focusing on the outcomes, not the implementations. What are the outcomes we are trying to achieve? What are the behaviors we want the system to exhibit? Not how you’re going to get there. And that allows the delivery teams to work within the sandbox that the governance organization has put into place, but then be creative about how they might actually implement something to achieve that behavior.
And you can also then have the basis for a conversation that says: I know our standard tool for this is X, but we’re trying to achieve the outcome, and in our situation, because of P, Q, and R, Y works better to achieve the outcome. And you can have those discussions. The discussions become less about "no, you can’t use that because I told you to use something else" and more about how we’re going to achieve the outcome and why we think this is a better way to achieve it. So one of those underlying philosophies is: let’s be focused on outcomes, not implementations.
Another important aspect has to do with how you architect your system. And domain-driven design has really helped us here, because it’s given us this language and this idea of a bounded context that makes sense within the business domain. Because if you think about a system in terms of its implementation, you’re going to talk about SAP, or Salesforce, or the customer ordering system. The people who are redesigning business processes and creating new business processes don’t care whether something is stored in Salesforce, a CRM, or a shipping system. They think about the customer, they think about the product, and they think about the logistics flow.
The more our systems have their boundaries drawn around aspects of functionality that correspond to the chunks the people creating the business process think in, the better. They’re going to design the business process by rearranging their chunks, and we can much more readily implement that process if our chunks have the same ability to move around. I actually think that’s where microservices have been successful, and where SOA version one failed so miserably: in that early implementation of SOA, the boundaries were so often drawn around systems. Okay, we are going to create exposed services for SAP. Why? And so, of all of the principles, and there are several others, I think those are the most important ones underlying evolutionary architecture.
[00:35:24] Conway’s Law & Postel’s Law
Henry Suryawirawan: Right. I think that’s really insightful. First, focus on the outcome, not the actual how or the implementation. And I think DDD is kind of the foundation for coming up with software that is aligned with the business. The other thing that is commonly mentioned when we talk about bounded contexts, microservices, and all that is Conway’s Law.
Rebecca Parsons: Oh, yes.
Henry Suryawirawan: You have Conway’s Law and Postel’s Law as part of this evolutionary architecture as well. Maybe explain to us why these two laws are important in evolutionary architecture?
Rebecca Parsons: Well, we’ll start with Postel’s Law. Simply put, Postel’s Law says be generous in what you receive and stingy in what you produce. The standard example I use: if you are receiving address information and all you need is the zip code, postcode, some kind of geolocator, don’t validate the whole address. You don’t need to. And that way, if somebody decides, oh, I need to add an address line 2 to this thing, you won’t break.
Now, of course, there’s the great big asterisk: don’t open up a security hole. But the point is, focus on the information that you really need. Because that way, your system will only require change if it actually has to change. There’s no way that we can prevent any breaking change from ever happening, but we want to limit it to where it really has to break, because we are trying to do something fundamentally different. And you want to be very stingy in what you expose, because, again, you have no idea who’s actually using what you put out there.
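In code, the receiving half of Postel’s Law often shows up as a "tolerant reader". Here is a minimal sketch under assumed field names, pulling out only the postcode and ignoring everything else in the payload:

```python
# A tolerant reader in the spirit of Postel's Law: extract only the field
# we actually need (the postcode) and ignore everything else, so upstream
# additions like a new address line don't break us. The payload shape and
# field names are illustrative assumptions.
import json
import re

POSTCODE_PATTERN = re.compile(r"^[A-Za-z0-9][A-Za-z0-9\s-]{2,9}$")

def extract_postcode(raw_payload: str) -> str:
    payload = json.loads(raw_payload)
    address = payload.get("address", {})
    postcode = str(address.get("postcode", "")).strip()
    # Validate only what we consume; don't reject the message because an
    # unrelated field (address_line_2, say) appeared or changed shape.
    if not POSTCODE_PATTERN.match(postcode):
        raise ValueError("payload missing a usable postcode")
    return postcode

# An upstream change that adds fields does not require a change here:
message = '{"address": {"postcode": "90210", "address_line_2": "Suite 7"}}'
assert extract_postcode(message) == "90210"
```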
One of the sadder examples I saw of this: we had one client who did everything right in their package selection. They said, we’re going to change all of our business processes to do what the package wants, so that we don’t have all of these customizations and we can upgrade whenever we get a new version. Wonderful idea. But what they didn’t do was keep track of who was actually connecting directly to the database. And they had 87 reports that were completely tied to the database schema.
And so they had to rewrite all of those reports before they could upgrade, and they hadn’t realized that. Because people just said, oh, well, I’ll just hit the database for that report. So even when you don’t intend for somebody to use it, people are still going to use it if they can. And you’ve made a contract, even though it’s a contract you don’t know you’re in. And so that’s Postel’s Law.
Conway’s Law. People try to fight Conway’s Law, and you just can’t do it. My version of Conway’s Law is: a system will reflect the communication dysfunction of the organization that builds it. If the people don’t talk effectively to each other, the systems that they’re responsible for are not going to talk to each other.
And I would sometimes look very clever, because I would come in and have lunch with the architects, and then go in and talk to the VP and say, okay, the integration between these two systems is broken. How in the world do you know that? You haven’t looked at a piece of code. Well, I saw the tech leads in the lunchroom and they walked right past each other.
And so you can use Conway’s Law to your advantage: look at what you really want your architecture to reflect, then reorganize your teams accordingly, and they’re going to produce it. It’s just going to happen. We call it the inverse Conway maneuver.
Henry Suryawirawan: Yeah. Conway’s Law gets brought up in so many discussions. For listeners who are not yet familiar with these two laws, make sure you research them, because they are fundamental, even though they have been around for many, many years. People try to beat them, but eventually they can’t.
[00:39:40] Practices of Evolutionary Architecture
Henry Suryawirawan: So those are kind of like the governance and principles aspect. What are some of the engineering practices that you think software engineering teams have to adopt and practice?
Rebecca Parsons: Well, first off, I think an underlying prerequisite is the discipline, the infrastructural discipline and the deployment discipline, that comes from continuous delivery. You don’t have to go all the way to continuous deployment, although there’s a new book out that makes a very strong case for why you should try to get there. But you at least need to know that your deployments are going to run smoothly. And so the risk mitigation aspects of continuous delivery are important. When you’re talking about these kinds of dramatic changes, you need to know what you’re deploying into, so that you can more readily debug anything that happens. So I think that’s the first.
The second is this whole idea of evolutionary database design and database refactoring. I’ve been in many conversations over the years with people who would say: okay, agile and incremental, that’s fine for developers, but I need a holistic vision of my complete user experience; or, no, I can’t test the system until it’s done, because you’re going to be changing it and then I’m going to have to retest it, and all of those things. The team that I think has always had the strongest argument for "no, it can’t be incremental" were the DBAs, because data migration is hard. It sounds so simple, copy it from here to here, but it’s hard. And so there’s an entire book called Refactoring Databases. One of our co-authors on the second edition, Pramod Sadalage, is one of the authors of that. So that is another critical engineering practice.
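One common shape that evolutionary database design takes is the expand/contract (parallel change) migration. A minimal sketch, using sqlite3 with hypothetical table and column names, might look like this:

```python
# An expand/contract database migration sketch in the spirit of evolutionary
# database design: move from customer.zip to customer.postcode without a
# big-bang cutover. Uses sqlite3 for illustration; table and column names
# are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, zip TEXT)")
conn.execute("INSERT INTO customer (zip) VALUES ('90210'), ('10001')")

# Expand: add the new column alongside the old one and backfill it.
conn.execute("ALTER TABLE customer ADD COLUMN postcode TEXT")
conn.execute("UPDATE customer SET postcode = zip WHERE postcode IS NULL")

# Transition: application code writes to both columns (or a trigger keeps
# them in sync) until every reader has moved over to 'postcode'.

# Contract: once no reader or report touches 'zip', drop it in a later
# migration (SQLite 3.35+ supports DROP COLUMN).
# conn.execute("ALTER TABLE customer DROP COLUMN zip")

print(conn.execute("SELECT id, postcode FROM customer").fetchall())
```

The point of splitting the change into small, reversible steps is that each deployment leaves the database usable by both old and new code, which is what makes incremental change possible for data as well as for application code.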
I also like to talk about contract testing, because, again, one of the things that you’re trying to do with an evolutionary architecture is to make it as easy as possible to change things. And so if I understand the assumptions that you’re making of my system, and you understand the assumptions that I am making of yours, then we both know what’s happening. And I’ve got the same kind of contract with Neal. Then we can make whatever changes we want, paying absolutely no attention to each other, until one of those tests breaks.
And let’s say my test with Neal breaks. So I have a conversation with Neal, because I’m trying to implement something that violates something that he’s expecting of me, and so we negotiate what change has to happen. We’re continuing to ignore you. Because none of your tests are broken, and as long as your tests don’t break, we can continue to ignore you. And then we get all the tests working again, and then we go back to ignoring everybody. It maximizes the amount of independent work that can take place, and it helps us understand what those boundaries are and why. And that is a critical piece to being able to evolve an architecture. Because if I don’t know what you’re expecting of me, I can inadvertently break you, and we don’t want that. And so, that’s another important technique.
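Consumer-driven contract tests are one common way to capture exactly those mutual assumptions; tools such as Pact formalize the idea. Here is a hand-rolled sketch in which the consumer name, the contract fields, and the provider response are all hypothetical:

```python
# A hand-rolled consumer-driven contract check: each consumer publishes the
# fields it relies on, and the provider's build fails if its response no
# longer satisfies them. The payload and names below are hypothetical.

# Contract published by the "billing" consumer: field name -> expected type.
BILLING_CONTRACT = {
    "order_id": str,
    "total_cents": int,
    "currency": str,
}

def provider_order_response() -> dict:
    """Stand-in for the provider's real order response."""
    return {
        "order_id": "ord-123",
        "total_cents": 4200,
        "currency": "USD",
        "internal_notes": "free to change; no consumer depends on this",
    }

def test_billing_contract_still_satisfied():
    response = provider_order_response()
    for field, expected_type in BILLING_CONTRACT.items():
        assert field in response, f"missing field required by billing: {field}"
        assert isinstance(response[field], expected_type), (
            f"{field} changed type; renegotiate with the billing team"
        )
    # Extra provider fields are fine: tolerant consumers simply ignore them.
```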
And then you start to get into things that aren’t necessarily as fundamental; I believe the practices I’ve just mentioned are fundamental. You have to have the right kind of tests and safety net. One of the things that we found is that if you think properly about testing, you’re actually going to end up with a cleaner architecture, because you have to have good boundaries to be able to properly test things. But then we often, for example, talk about choreography over orchestration. And this is where you really start to get into these trade-off discussions, much like: should I go with a well-structured monolith or should I go to microservices?
You have much more flexibility with microservices than you do with a well-structured monolith. Emphasis on the well structured. This is not spaghetti monoliths; this is nice, structured lasagna monoliths. And if you don’t need that level of flexibility, it’s not worth paying for the complexity. But sometimes you do. And it’s the same with choreography versus orchestration. If you’ve got an orchestrator, that orchestrator is going to solve some of the problems you have with these independent actors, but you’re introducing coupling that is not strictly necessary. On the other hand, there are all kinds of errors that you have to take care of yourself in a choreographed system. So, again, if you need that flexibility, take it. But if you don’t need the flexibility, then go with something that’s simpler.
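To make the contrast concrete, here is a toy sketch of the two styles; the services, events, and flow are purely illustrative:

```python
# Choreography vs orchestration, in miniature. In choreography, each service
# reacts to events with no central coordinator; in orchestration, one
# component owns the flow and calls the others. Names are hypothetical.
from collections import defaultdict

# --- Choreography: a minimal in-process event bus ------------------------
subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:
        handler(payload)

subscribe("order_placed", lambda order: publish("payment_taken", order))
subscribe("payment_taken", lambda order: print(f"shipping {order['id']}"))
publish("order_placed", {"id": "ord-123"})   # services coordinate via events

# --- Orchestration: one coordinator owns the flow -------------------------
def take_payment(order): return True
def ship(order): print(f"shipping {order['id']} (orchestrated)")

def place_order_orchestrator(order):
    if take_payment(order):   # the orchestrator knows every step and can
        ship(order)           # handle failures centrally, at the cost of coupling

place_order_orchestrator({"id": "ord-456"})
```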
Henry Suryawirawan: Right. Thanks for highlighting all these important practices. I think continuous delivery is a must, especially when you want to make incremental changes; without continuous delivery, there’s just no way to change incrementally. The other one is contract testing, especially in the microservices world, where you integrate with so many different third parties and services; knowing where and when you break things is very important as well. You talked about evolutionary database design. I don’t know whether this has become common, but I think many languages and frameworks cover this aspect, at least for RDBMS-style database migrations. And the last one is choreography, which touches a little on event-driven architecture, where you have choreography rather than orchestration. So thanks for sharing all this.
[00:45:41] The Impact of AI to Evolutionary Architecture
Henry Suryawirawan: One aspect that is definitely trendy these days is the introduction of AI, LLMs, generative AI, and all of that. What is your take on the arrival of AI and its relationship with evolutionary architecture? One good aspect of AI is that it can create tests these days, and people also accept a lot of suggestions from AI assistants to generate code. Is this something that evolutionary architecture needs to govern as well? What’s your take on AI?
Rebecca Parsons: Well, that’s one of the things that Neal and I have been talking about for a while. We can use fitness functions, particularly the suite of code-quality fitness functions we brought up, to assess the generated code. There’s still anecdotal evidence, I wouldn’t call it solid evidence yet, that these code generators will tend to copy, paste, and modify as opposed to trying to abstract. And so running a simple copy-paste detector can help you see if your code base is starting to get out of control in that way. I’ve been interested and involved in AI for most of my career, so I’ve seen the AI winters. There’s certainly a lot of hype going on right now. But these models are qualitatively more powerful than any models that we’ve had in the past. And so I do think we have the potential to use these LLM-based systems, particularly the more coding-focused ones, to help us in development.
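A copy-paste detector of the kind mentioned here can be as simple as hashing sliding windows of normalized source lines and flagging any window that appears more than once. A rough sketch, with a hypothetical window size, source directory, and duplication budget:

```python
# A crude copy/paste detector: hash sliding windows of normalized lines and
# flag windows that appear in more than one place. The window size, source
# directory, and zero-duplication budget are illustrative assumptions.
import hashlib
from collections import defaultdict
from pathlib import Path

SRC_DIR = Path("src")
WINDOW = 6  # number of consecutive matching lines that count as duplication

def normalized_lines(path: Path) -> list[str]:
    # Strip whitespace and blank lines so trivial formatting differences
    # don't hide a copy/paste.
    return [line.strip() for line in path.read_text().splitlines() if line.strip()]

def find_duplicates() -> dict[str, list[tuple[str, int]]]:
    seen = defaultdict(list)
    for path in SRC_DIR.rglob("*.py"):
        lines = normalized_lines(path)
        for i in range(len(lines) - WINDOW + 1):
            chunk = "\n".join(lines[i:i + WINDOW])
            digest = hashlib.sha256(chunk.encode()).hexdigest()
            seen[digest].append((str(path), i))
    return {h: locs for h, locs in seen.items() if len(locs) > 1}

def test_duplication_budget():
    duplicates = find_duplicates()
    assert not duplicates, f"{len(duplicates)} duplicated code blocks found"
```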
One of the things that ThoughtWorks has been experimenting with, as an example, is using these LLMs on a legacy codebase to help understand how the information actually flows through that codebase, and then using that information to start to refactor and ultimately replace the legacy code base. It’s still early days, but what we’re seeing is really a fundamental increase in the ability of a human to understand a code base, because the human is relying on that information while the LLM in the background is doing a lot of the hard work. So I think we’re going to see more of that. As I said earlier, you cannot evolve a system that you can’t understand. And that’s one of the problems with many of these old legacy systems: people just don’t understand how they work anymore. So the more we can build tools to help understand these legacy systems, the better position we’re in to actually modify them.
[00:48:44] The AI Worries
Henry Suryawirawan: Right. You mentioned something that I think is quite insightful. You mentioned that code generation is typically copy-paste-modify, evolving a little here and there, but it’s pretty rare to see AI suggest abstractions or domain-driven kinds of suggestions. Is this something that every developer has to be concerned with, a gotcha that we all need to be aware of? Because most of the time now, everyone integrates their Copilot, and suddenly you see a lot of code being suggested and people just accept, accept, accept. There are also studies saying that code churn is really high these days and a lot more code is being generated; in terms of lines of code, it grows really fast, simply because we just accept rather than think through the kind of solution the AI is suggesting. So what is your view on this, especially in terms of fitness functions or the architectural aspect? Maybe one day it’s going to be another big ball of mud generated by AI, which is even worse.
Rebecca Parsons: Yeah, that is one of the things I worry about: that we’ve basically increased the productive capacity of our industry to create that kind of code. And that doesn’t help anybody. As I said, I do think we can use fitness functions to at least monitor what’s happening with the code base. One of the things that I worry about, though, is that in many ways our industry runs on kind of an apprentice model, where you have junior developers who are learning from more experienced developers, and so on. Unless these coding assistants get much better pretty quickly, I would worry about where, in 20 years’ time, our star developers are going to have come from. The notion that somebody is going to learn how to code from a coding assistant, we’re not there yet. They’re too likely to put out things that are wrong.
There was one study, we actually did a podcast on this, done by the CodeScene people. When the coding assistants they were working with, and they went across a suite of models, recommended a refactoring, in the best case they were right 37% of the time. So in over 60% of cases, the refactoring that they suggested did not maintain the correct behavior of the code. If you as a developer got things wrong two thirds of the time, you’re not going to keep your job for very long. As a professor, if two thirds of the stuff that I said was wrong, I would not be doing a service to my students. They’re not going to be able to learn if they have to figure out which two thirds of the stuff that I’ve said is nonsense. So that’s what worries me: how are we going to train the next generation if we’re relying so much on coding assistants?
Henry Suryawirawan: So yeah, thanks for highlighting this apprenticeship aspect. I agree with you. In some of my experience using coding assistants, sometimes they give a wrong answer in a confident way: this is it, this is the solution. And when you give it a try, it’s actually wrong. You try again, it’s still wrong, until maybe at some point it finally gets it right. People talk about replacing juniors, or juniors being able to upskill really fast just by using an AI assistant, but I think there’s a worry there about the apprenticeship aspect that you mentioned. How can someone be trained in abstraction, domain-driven design, or even evolutionary architecture with its multiple dimensions? I think AI currently is not capable of doing that. So thanks again for highlighting that.
[00:52:32] 3 Tech Lead Wisdom
Henry Suryawirawan: So Dr. Rebecca, it’s been such a pleasure. I learned a lot about evolutionary architecture and all the fundamentals about it. So, unfortunately, we reached the end of our conversation. I have one last question that I’d like to ask you. I call this the three technical leadership wisdom. You can think of it just like an advice that you want to give to the listeners. Maybe you can share your version of wisdom for us to learn from.
Rebecca Parsons: Okay, well, the first one is: I firmly believe that as technologists, it’s our responsibility to communicate to the rest of the organization, in their language, the potential consequences of the decisions that they are making. We’re the ones who know the tech, but we have to do it in their language so that they can understand the business risks, and the business opportunities for that matter. And so the first thing is we need to understand how our organization makes money, what they are doing, and what the pressures on that organization are. That’s our responsibility.
The second, I would say, is that as the technology landscape has become so broad, questions of generalist versus specialist have taken on a different meaning. It used to be, as I said when I started, that one person could understand the entire stack. You can’t do that to any level of specificity anymore. JavaScript frameworks and other frontend frameworks, non-relational databases, different kinds of network architecture, and on and on; it just keeps going. And so a crucial decision that individuals need to make is what kind of technologist they want to be. Do they want to be somewhat of a generalist and think more big picture from a technology perspective, or do they want to become a true specialist in something? And that’s something to decide relatively early in your career.
And then the final thing is, with how rapidly our industry is changing, you have to think of learning as fun. I was one of those silly people who loved school. So summer school was perfect: we have summer and we have school at the same time. Isn’t this great? And everybody thought I was mad. But we have to embrace that, because new languages are coming out, new frameworks are coming out, new architectural approaches are coming out. And we need to be able to keep learning new things, and to enjoy it. Because you don’t want to be that person who is hanging on at the tail end of their career because they’re the only person left on the planet who understands this programming language. You don’t want to be that person. You want to be someone who has continued to evolve their career. And to do that, thinking of learning as fun and not a chore is crucial.
Henry Suryawirawan: Well, I wasn’t expecting the last one: treating learning as something fun. I’m sure many people will find that insightful as well. If we don’t treat learning as something we enjoy doing, it becomes a chore, and with all these rapid changes happening, it’s going to be difficult to keep up. So I think that’s really beautiful.
So Dr. Rebecca, if people want to talk to you or maybe reach out to ask you more questions, is there a place where they can find you online?
Rebecca Parsons: I’m on LinkedIn. And my readable handle is Dr. Rebecca Parsons.
Henry Suryawirawan: Right. I’ll put it in the show notes. So thank you so much for your time, Dr. Rebecca. I really enjoyed this conversation.
Rebecca Parsons: Thank you, Henry. I had fun.
– End –