#188 - Balancing Coupling in Software Design: Principles for Architecting Modular Software Systems - Vladik Khononov
“Coupling is an inherent part of system design, not something that is necessarily good or evil. How we design coupling can take our system either towards complexity or towards modularity.”
Vladik Khononov returns to the podcast to discuss his latest book “Balancing Coupling in Software Design”. In this episode, Vlad revisits the essence of coupling, a term often not fully understood, and explores its implications for software complexity and modularity.
Vlad introduces the concept of shared lifecycle and shared knowledge, revealing the hidden dependencies that can undermine even the most well-intentioned designs. He also explains complexity through the lens of the Cynefin framework and delves into the differences between essential and accidental complexity.
One of the episode’s highlights is Vlad’s unique framework for evaluating coupling. He introduces the three dimensions of integration strength, distance, and volatility, providing a practical model for assessing and balancing coupling in software design. He also challenges traditional definitions of modularity, emphasizing the importance of knowledge boundaries.
Whether you’re a seasoned tech lead or an aspiring software engineer, this episode offers invaluable insights into building maintainable and modular software systems. It will leave you with a deeper appreciation for the delicate balance between coupling and complexity.
Listen out for:
- Writing about Coupling - [00:03:28]
- Coupling - [00:06:09]
- Shared Lifecycle & Knowledge - [00:08:17]
- Cynefin - [00:12:28]
- Essential vs Accidental Complexity - [00:19:00]
- Modularity - [00:22:45]
- Abstraction & Knowledge Boundary - [00:29:04]
- 3 Dimensions of Coupling - [00:36:25]
- Balancing Coupling - [00:58:11]
- 3 Tech Lead Wisdom - [01:02:30]
_____
Vladik Khononov’s Bio
Vlad Khononov is a software engineer with extensive industry experience, working for companies large and small in roles ranging from webmaster to chief architect. His core areas of expertise are distributed systems and software design. Vlad consults with companies to make sense of their business domains, untangle monoliths, and tackle complex architectural challenges. Vlad maintains an active media career as a public speaker and author. Prior to Balancing Coupling in Software Design, he authored the best-selling O’Reilly book Learning Domain-Driven Design. He is a sought-after keynote speaker, presenting on topics such as domain-driven design, microservices, and software architecture in general.
Follow Vladik:
- LinkedIn – linkedin.com/in/vladikk
- Twitter / X – @vladikk
- 📚 Balancing Coupling in Software Design – https://www.amazon.com/Balancing-Coupling-Software-Design-Addison-Wesley-ebook/dp/B09RV3Z3TP
Mentions & Links:
- 🎧 #76 - Learning Domain-Driven Design - Vladik Khononov – https://techleadjournal.dev/episodes/76/
- 📚 Learning Domain-Driven Design – https://www.oreilly.com/library/view/learning-domain-driven-design/9781098100124/
- Domain-driven design – https://en.wikipedia.org/wiki/Domain-driven_design
- Cynefin framework – https://en.wikipedia.org/wiki/Cynefin_framework
- Dave Snowden – https://en.wikipedia.org/wiki/Dave_Snowden
- Connascence – https://en.wikipedia.org/wiki/Connascence
- Sonya Natanzon – https://www.linkedin.com/in/sonya-natanzon
Check out FREE coding software options and special offers on jetbrains.com/store/#discounts.
Make it happen. With code.
Get a 45% discount for Tech Lead Journal listeners by using the code techlead24 for all products in all formats.
Tech Lead Journal now offers you some swags that you can purchase online. These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available.
Check out all the cool swags available by visiting techleadjournal.dev/shop. And don't forget to show them off once you receive any of those swags.
Writing about Coupling
-
It happens quite often that we’re using certain terms without fully understanding them. Like we have that gut feeling about something, so we are using it in our day-to-day language. However, we are not able to clearly describe what that something means. And for me, that something was coupling and cohesion for many, many years.
-
You’ll see that although it’s called Balancing Coupling in Software Design, its main topic is complexity and modularity. I want to show what complexity is exactly in software design, because that’s another thing that we understand at the gut-feeling level, but it’s challenging to describe, and it’s even worse for modularity.
-
Today, we all know that’s the goal we should strive towards. But what is modularity exactly? Like how can you put a number on a design? How can you evaluate whether it’s modular or not? So, that was my goal, to show how these two things are actually rather two sides of the same coin, and that coin is coupling. Coupling can take your system either towards complexity or towards modularity.
Coupling
-
Let’s say we’re working on a system. A system is a number of components that are working together to achieve a goal. That goal is the reason why that system exists. Now, in order for those components to work together, they need to be connected, because they cannot be fully independent of each other. That would be just a collection of components. To make the value of that system higher than the sum of its parts, we need to connect them. And those connections are basically coupling.
-
If you go to the dictionary and look up the word coupling, you’ll see that what it originally means is not a monolith, not a big ball of mud, but a connection. If two things are coupled, they’re connected.
-
Coupling is an inherent part of system design. Not something that is necessarily good or evil, but something that is needed. We need coupling. It’s like glue that holds a system together.
-
However, how we are designing those interactions, how we are designing those connections between the components, that’s where it gets interesting. That’s where that coupling can take us either towards complexity or towards modularity.
Shared Lifecycle & Knowledge
-
I’m purposefully using these abstract terms “components in a system”. I’m not telling what these components are. They can be services, they can be objects, classes, even methods within a class. And the same is true regarding the system. It can be a distributed system, it can be a module, or an object in an object-oriented code base; it doesn’t matter.
-
Let’s say we have two coupled components. Now when we say that coupling is bad, what we usually imply is that we have to change those two components simultaneously. For example, you want to change one of them, but then you’re forced to apply a corresponding change to the second component. And that connects their lifecycles. They have to be evolved simultaneously.
-
Let’s assume that coupling introduces that dependency in the lifecycles of connected components. Now, propagating changes is, in almost all cases, not something that we want. We want to be able to modify each component individually. Once we introduce those connections between components, in order to work together, they have to share some knowledge about each other. And the amount of knowledge they share actually dictates the amount of cascading changes that are going to follow because of that design.
-
That knowledge can be as simple as knowledge of integration interfaces. Let’s say I have a module, and I define its public interface. If you want to work with it, you have to know the details of its public interface.
-
Or you want to use some other means of integration. Let’s say you want to reach out to my database. In that case, maybe you will have to know the implementation details of my module. So that’s the other extreme of knowledge that can be shared. And of course, if I’m sharing only the integration interface, then the amount of cascading changes that are going to follow is going to be dramatically lower than in the second case where you are coupling yourself to my implementation details.
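As a rough sketch of these two extremes of shared knowledge (the module and names here are hypothetical, not taken from the book):

```python
# Hypothetical module: callers should only need its public interface.
class InventoryModule:
    def __init__(self):
        self._items = {}  # implementation detail: internal storage

    def add_item(self, sku, qty):  # public interface: the shared knowledge
        self._items[sku] = self._items.get(sku, 0) + qty

    def quantity_of(self, sku):
        return self._items.get(sku, 0)

inv = InventoryModule()
inv.add_item("A-1", 3)

# Interface coupling: survives internal refactoring.
assert inv.quantity_of("A-1") == 3

# Coupling to implementation details: works today, but breaks the moment
# the internal storage representation changes.
assert inv._items["A-1"] == 3
```

Swapping the dictionary for any other internal structure breaks only the second caller, which is exactly the cascading-change risk being described.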
-
Overall, these are two properties that are very interrelated. The first one is how connected the lifecycles of those connected components are. And the second one is the amount of knowledge that’s being shared across the components’ boundaries.
Cynefin
-
When people are talking about coupling, or rather decoupling, they mean: let’s take something big and break it apart into tiny parts. Let’s take a monolith and break it into microservices. However, if you are not following proper decomposition criteria for drawing the boundaries of those resultant smaller components, you will end up sharing lots of knowledge.
-
As a result, you will end up with lots of cascading changes. As a result, you are going to end up cursing that very day you started decomposing it. And as a result, your system is going to be full of complexity.
-
Complexity is something that you know when you see it. But we need something more precise, something that we can explain. That’s why I chose to use the Cynefin framework. It provides different scripts for making decisions in different situations. And it categorizes those contexts into five domains, each one requiring a unique approach to making decisions.
-
Let’s say we’re working on a code base. And I want to make a change. If I know exactly what the effect of that change is going to be, then that code base is not complex. It’s probably simple. That’s why this domain, in Cynefin terms, is called clear. It’s something that is simple, clear. If I am making a change, I know what’s going to happen.
-
If, on the other hand, I am looking at the code base and I have no clue what’s going on there, but I can consult somebody, some external expert, and that expert will tell me exactly what’s going to happen, then it’s not complexity. This time we have a complicated system.
-
And the third domain, which is a complex domain. Here, we have no idea what the effect of a change is going to be. Also, there is no expert we can consult. So, the only option we have is to conduct a safe experiment, which means to make a change and observe its results. And based on those results, to try to understand its impact. So that’s complexity. Now, in this domain, we assume that there is a connection between an action and its outcome. A cause and effect.
-
In the fourth domain, which is called chaos, there is no relationship between a cause and effect. The effects are random. Now, such things do happen in software, of course, especially in production environments after deploying on Fridays or on public holidays. However, this is something that is rather not related to our discussion of software design. It’s more related to the runtime of the system. But in software design, when we’re changing the system, it should be somewhat deterministic, one way or another.
-
And finally, the fifth domain of Cynefin is disorder. When you’re looking at a domain, you have no idea where you are. In this situation, you may assume that it’s clear, that it’s something simple, and when you get to the details, you discover that it’s actually a complex domain or a chaotic domain or whatever. So these are the five domains.
-
And I really like the way Dave Snowden, the author of this framework, defined complexity: when the only way of identifying the outcome of an action is conducting safe experiments. Unfortunately, that’s something that is pretty frequent in software design. When you’re looking at the code base and you want to refactor it, if you know what’s going to happen, great. Maybe it’s complicated, and that expert is the compiler. Maybe the compiler can tell you what’s going to happen, or a colleague that built that codebase in the first place. But in many cases, when we’re working with legacy systems, brownfield projects, or big balls of mud, conducting safe experiments is the only option we have.
Essential vs Accidental Complexity
-
Assuming that you are working in a non-trivial business domain, there will be some complexity in it. Complexity that is driven by its requirements. For example, let’s say it’s a financial system, or a medical system. If there’s something interesting that the system is doing, it usually involves some complexity. That’s the reason the system is being built: to help its users by encapsulating that complexity in the system and taking care of it.
-
So that’s the essential complexity of that business domain of the problem you’re solving. There is no way of avoiding it. If you avoid that essential complexity, then your system is not doing its job. We cannot avoid it, so we have to manage that complexity.
-
How can we manage it? We have a plethora of tools, from domain-driven design to design patterns. So that’s the essential complexity. Something that you manage.
-
The accidental complexity, on the other hand, is something that you introduce. Something that you introduce by not designing your system properly. And if we define complexity as that implicit relationship between an action and its outcome, then accidental complexity is designing a system in such a way that whoever is going to work on it next, or maybe even me tomorrow, won’t have any clue, when a line has been changed, what the effect of that change is going to be on other components within the same system. Or maybe on its interactions with other systems, which are even worse and even harder to predict than effects within my code base.
-
It’s all a matter of design of how you architect your system and how you design those interactions, which means those connections, which means coupling between its components. Whether it’s going to introduce that accidental complexity. Or on the other hand, it will help you to manage that essential complexity.
Modularity
-
First of all, we need to define modularity. You will find the recursive definition that a modular system is one that is built of modular components. You may also find definitions saying that a modular system is one that is built out of interchangeable components. A system that resembles Lego bricks that you can connect with each other to form something else.
-
Let’s say you’re working on a typical knowledge management system for a business. Does it resemble Lego bricks? I don’t think so. Do you need interchangeable components? Maybe you will have to change the database one time, because you will need to run your tests on an in-memory database, which is not going to be the real one, but that’s it. That’s probably the limit of our interchangeable components. Maybe you should encapsulate all the managed services of your cloud vendor so that you can migrate to another cloud vendor. Maybe you should, but trust me, those abstractions you are going to introduce are going to be leaky anyway, and that migration is not going to be as simple as people hope.
-
Does it mean that we don’t need modularity? No. We definitely need modularity. However, we also need to define what we mean by it.
-
I want to go back to the definition of a system that I started with early on: a system is a set of components working towards a goal. A modular system is a set of, let’s say, modular components, working not only towards its goal today, the goal of that system now, but also designed in such a way that it will be able to support goals that will be defined in the future.
-
Now to support goals that are going to be defined in the future doesn’t mean that out of the box it has to implement all possible use cases. No. Of course not. And that’s not possible. The design of that system should be able to withstand implementations of those future use cases, those future requirements.
-
When thinking about modularity, we have to take into account future goals, but they have to be reasonable. How do we identify those reasonable changes? That’s another thing that makes our job so hard. So we have to think about those future changes. Which changes are going to be reasonable and which are not?
-
Going back to the definition of modularity and modules and modular systems. The goal is to design a system in such a way so that it will support future requirements. Once it does, then we can say that those components it is composed of are actually modules.
-
That brings me to another problematic topic: modules versus components, because I defined the system as a set of components and the modular system as a set of modules. If you look up the difference between these two terms on the internet, you will see some problematic definitions. Some sources claim that modules are logical boundaries, whereas components are physical boundaries.
-
However, if you go back to the origins, when the term modularity was introduced to software engineering, you will see that back then a module had three properties. One of them was that it should encapsulate a behavior, meaning that we have some behavior that is encapsulated in one module instead of being spread across multiple modules. Second, it has to expose an interface through which other parts of the system can execute that functionality. And third, it should have the potential of being independently compiled. Only the potential. Compiling your module independently doesn’t turn it into a component. No, it is still a module.
-
So the difference between components and modules is not about the type of boundaries, whether they’re physical or logical. It’s about encapsulating functionality. And that’s something that is essential for managing the essential complexity and not introducing accidental complexity.
Abstraction & Knowledge Boundary
-
Introducing abstractions can be beneficial, but it can also be a very effective way of introducing accidental complexity. Whether you’re using it to manage the essential complexity or introducing a new type of complexity depends on your concrete context.
-
Let’s say I have two components and I’m thinking: are they going to interact with each other directly, or should I introduce an abstraction between them that will encapsulate something? We have to ask: will that additional abstraction help us to encapsulate knowledge, or in David Parnas’s terms, to hide some information? Will it help us to use the boundary of one of the modules as a boundary of knowledge?
-
We could have used a module to encapsulate that knowledge, to hide that encryption algorithm in one place. And in that case, the functionality of that module would be encryption and decryption. Its interface would probably be two methods for encrypting and decrypting data. And it could be independently compiled, or it could be compiled within our monolithic code base; it doesn’t matter, it’s still a module.
-
If we compare the details of an encryption algorithm versus having an interface with two methods, encrypt and decrypt data, the difference between the two amounts of knowledge is substantial. In that case, that abstraction is going to be super useful. It manages the essential complexity. That encryption algorithm is encapsulated. So it helps us to make our system simpler, because all those other components that will have to use it, they don’t have to be aware of the internal details of that logic.
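A minimal sketch of such a module (the class name is ours, and a toy XOR transform stands in for a real encryption algorithm):

```python
# Hypothetical module: its functionality is encryption/decryption, its
# interface is two methods, and the algorithm is an internal detail.
class EncryptionModule:
    def __init__(self, key: int):
        self._key = key  # hidden detail; here a toy XOR "algorithm"

    def encrypt(self, data: bytes) -> bytes:
        return bytes(b ^ self._key for b in data)

    def decrypt(self, data: bytes) -> bytes:
        # XOR is its own inverse, so decryption reuses the same transform.
        return bytes(b ^ self._key for b in data)

crypto = EncryptionModule(key=42)
secret = crypto.encrypt(b"hello")
assert crypto.decrypt(secret) == b"hello"
assert secret != b"hello"
```

Consumers only know `encrypt` and `decrypt`; swapping the toy algorithm for a real one would not change the knowledge shared across the boundary.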
-
On the other hand, let’s say we’re building a system, and there’s that pattern, data transfer objects, which gets lots of hate. People are saying: okay, so I have my business entities, and then I have to introduce DTOs that are going to be on the boundary of my application, and my APIs are going to return those DTOs.
-
Why do people love to hate DTOs? Because in their code bases, usually, what they are doing is translating one data structure to another data structure which looks exactly the same. So you’re just basically copying values from one data structure to another one. However, as long as the attributes of those data structures, and the types of those attributes, are exactly the same, we’re not actually encapsulating any knowledge by introducing that abstraction.
-
What we’re doing is we’re introducing another moving part. We’re introducing another object or another class that we’ll have to think about when we are going to change our model. We’ll probably have to apply the same change, let’s say, we want to change the name of a field in both places. Now, in this case that DTO is not an effective abstraction. It increases complexity. It introduces accidental complexity.
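A sketch of the DTO shape being criticized (the entity and field names are hypothetical): the DTO mirrors the entity field for field, so it encapsulates no knowledge, and renaming a field now requires the same change in two places.

```python
from dataclasses import dataclass

@dataclass
class Customer:      # internal business entity
    name: str
    email: str

@dataclass
class CustomerDto:   # "abstraction" with the exact same attributes
    name: str
    email: str

def to_dto(c: Customer) -> CustomerDto:
    # Pure value copying: another moving part, no information hiding.
    return CustomerDto(name=c.name, email=c.email)

dto = to_dto(Customer(name="Ada", email="ada@example.com"))
assert dto.name == "Ada" and dto.email == "ada@example.com"
```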
-
Am I saying that DTOs are bad? Of course not. I just gave you an example in which DTOs are not providing value, but, on the other hand, are introducing another something you have to think about. And the more things you have to think about, the higher the cognitive load. And our cognitive capacity is limited.
-
Going back to your question of abstractions, it depends. If abstraction helps you to encapsulate knowledge, it’s effective. If it doesn’t, then it’s just going to introduce additional moving parts and you’ll end up increasing the accidental complexity of the system.
3 Dimensions of Coupling
-
Evaluating coupling is something that we have been trying to do for ages now. The problem with that approach is that it was based on counting variables or counting methods. But is that number something that you can really trust?
-
And there is a metric called stability, which is about the relationship between incoming and outgoing connections: afferent coupling and efferent coupling. That metric says that if more modules depend on you than the number of modules you depend on, then your stability score is going to be higher.
-
However, the devil is in the details. What I’m showing is a component that uses reflection to read the value of a private field. So what is going to happen? Well, once I’m introducing that dependency on an implementation detail, any change to the implementation details of that external module has the potential of breaking my component. So what we’re getting is a component with a perfect stability score that ended up being a big ball of mud.
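As a sketch of the metric being discussed (the formula below is the commonly used instability ratio over afferent and efferent dependencies, written by us, not quoted from the episode):

```python
# With Ca afferent (incoming) and Ce efferent (outgoing) dependencies,
# instability is usually computed as I = Ce / (Ca + Ce); a low I means
# the module is considered "stable".
def instability(afferent: int, efferent: int) -> float:
    return efferent / (afferent + efferent)

# Ten modules depend on this one; it depends on nothing: a "perfect" score.
assert instability(afferent=10, efferent=0) == 0.0

# Yet the number says nothing about HOW those dependencies are made --
# a dependent reading private fields via reflection gets the same score.
```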
-
That’s why initially I tried to avoid putting numeric values on coupling. Because it’s all about details, details that are hard to quantify and hard to describe. In the book, I propose a different model of evaluating coupling that is based on three dimensions.
-
The first one is the dimension that measures knowledge. What is the amount of knowledge that is shared between two connected components? How do you measure knowledge? Can you put it on a scale and get a number? Of course not. We cannot put a number on it yet. However, we can look at that early work from the late 60s and early 70s, the structured design methodology. Back then, they introduced a model called module coupling. It had six levels that are kind of challenging to apply in our modern systems, because that model is based on languages such as COBOL and FORTRAN.
-
In the Balancing Coupling book, I propose a different model. I call it integration strength. It is based on that structured design model, adapting its six levels into terminology that’s going to be more convenient for us today. The four levels of integration strength are contract, model, functional, and intrusive coupling. They are not defining the amount of knowledge; they’re defining the type of knowledge.
-
Intrusive coupling. This one means that I’m using something other than public interfaces for integration. Let’s say I’m using reflection. Let’s say I’m using another microservice’s database directly, or something else, whatever, that wasn’t intended for integration. I’m introducing an intrusion into its boundaries. That’s why it’s called intrusive coupling.
-
Now once we’re introducing intrusive coupling, we have to assume that the author of that module might have no clue that we’re doing that thing. Which means almost any change that they’re applying has the potential of breaking the integration. So, lots of knowledge, the integration interface is fragile, it’s implicit, so expect cascading changes.
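Python's closest analogue of "reflection into a private field" makes the fragility concrete (a hedged sketch; the class and attribute are hypothetical):

```python
# Hypothetical upstream component with a private-by-convention field.
class PricingService:
    def __init__(self):
        self.__discount = 0.1  # name-mangled; never meant as an interface

svc = PricingService()

# Intrusive coupling: bypassing the public boundary via the mangled name.
# It works today...
assert svc._PricingService__discount == 0.1
# ...but any rename or restructuring by the author silently breaks us,
# because they cannot know we depend on this implementation detail.
```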
-
Another type of knowledge that can be shared is the knowledge of functional requirements. I call it functional coupling. This one is about the functionality that the component is implementing. Now, let’s say that we have two components and they implement closely related business functionalities. That means that probably they will have to change simultaneously, because of the same changes in business requirements. That means we have functional coupling.
-
An extreme example of functional coupling would be, let’s say, we have the same business rule implemented in two places, or the same business algorithm or the same business invariant. And from the business standpoint, if the requirements or the definition of that rule change, they have to change simultaneously. Because otherwise the system is going to be in an invalid state. In that case, we have a very strong case of functional coupling. By the way, they don’t have to be physically connected. I call it wireless coupling.
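A toy illustration of that "wireless" coupling (the rule, threshold, and function names are invented for the example): the same business rule lives in two components with no physical connection between them.

```python
# Copy #1 of the free-shipping rule, in one component.
def order_service_is_eligible(total: float) -> bool:
    return total >= 100.0

# Copy #2 of the very same rule, in another component.
def checkout_ui_is_eligible(total: float) -> bool:
    return total >= 100.0

# If the business changes the threshold, both copies must change
# simultaneously, or the system ends up in an invalid state.
assert order_service_is_eligible(150.0) == checkout_ui_is_eligible(150.0)
assert not order_service_is_eligible(99.0)
```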
-
Another case of functional coupling is, let’s say, we have multiple operations that require concurrency control, probably because they’re going to work on the same set of data. In that case, again, functional coupling. Or maybe we have operations or functionalities that have to be implemented in a specific order, one after another. That’s also a case of functional coupling.
-
That requirement of being executed in a specific order is probably there for a reason. Probably, it introduces some kind of business dependency there. So that’s the level of functional coupling. Here we are sharing the knowledge of our business requirements.
-
To implement those business requirements, usually we have to model a business domain. We have to understand what is the system we are implementing and define a model that represents that business domain. And then we are going to implement the functionality, the requirements in code using that model. If you have two components that are based on the same model, which means if the model changes, both of them have to change, then you have model coupling.
-
And finally, the lowest level is contract coupling. We can think about it as a model of a model. Remember the discussion about DTOs? I used an illustration of an ineffective DTO, and then you, Henry, elaborated and discussed an effective use of DTOs. In that case, those effective DTOs are contracts, integration contracts. A contract is a model of a model, crafted with the purpose of encapsulating the model that’s being used internally. Whenever that model changes, we can contain those changes behind the same integration contract. So we are minimizing the knowledge that we are sharing across our boundary.
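A sketch of a contract as a "model of a model" (all names here are ours): the internal model can change freely as long as the published contract stays stable.

```python
from dataclasses import dataclass

@dataclass
class Account:              # internal model: rich and volatile
    iban: str
    balance_cents: int
    risk_score: float       # internal detail, not for consumers

@dataclass
class AccountContract:      # integration contract: minimal and stable
    iban: str
    balance: float          # exposed in currency units, not cents

def to_contract(a: Account) -> AccountContract:
    # The translation contains internal changes behind a stable boundary:
    # risk_score is hidden, and the cents representation is converted.
    return AccountContract(iban=a.iban, balance=a.balance_cents / 100)

c = to_contract(Account(iban="DE89", balance_cents=12550, risk_score=0.3))
assert c.balance == 125.5
```

Unlike the mirrored DTO earlier, this translation actually hides knowledge, which is what makes it an effective abstraction.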
-
Overall, these are four types of knowledge. We can share the knowledge about our integration contracts: contract coupling. About how we see and how we think about the business domain, its model: model coupling. Then we can share knowledge about our business requirements: functional coupling. And finally, we can share knowledge about our implementation details, and that’s intrusive coupling.
-
Overall, these four levels are not going to put an exact number on the weight of the knowledge you’re sharing. However, these are four different types that signal different amounts of knowledge being shared.
-
Does it mean that, let’s say, functional coupling is necessarily bad? Or model coupling, is it worse than contract coupling? Well, it depends. If you can reduce it, of course, you should reduce it. If you can turn model coupling into contract coupling, probably you should do it. But not always. Sometimes you have to share a model. Or sometimes you have to share a business requirement. Does it mean that your design is bad? No. It depends on the next dimension, the dimension of distance between those connected components.
-
We can introduce coupling across different levels of abstraction. We can have coupling between methods, between objects, between modules, namespaces, services, even whole systems. Now, the higher we go on that abstraction scale, the higher the physical distance between the source code in which those components are implemented.
-
Let’s say you have two methods within the same class; probably they are going to be close to each other, probably in the same file, right? Different objects, probably different files. Different namespaces, different folders. Different services, maybe different repositories. Different systems, maybe different companies, etc. So the higher you are on that scale, the longer the distance.
-
Why is that important? It’s important because if you combine it with the knowledge, you get a sense of whether you’re going towards complexity or towards modularity. Let’s say that you have two components with functional coupling between them, which means they’re sharing a lot of knowledge, and you’re putting them in separate microservices, which means the distance between them is big as well. Now, that functional coupling implies that we’re sharing lots of knowledge. So if something is going to change with that knowledge, that change is going to propagate across the boundaries. Both of them are going to be changed simultaneously.
-
If the distance is big, is it going to be an easy change? Probably not. The bigger the distance, the harder it is going to be to apply the change. In other words, we can say that the bigger that distance, the more coordination effort will be needed to implement a change that affects both coupled components.
-
So if both integration strength and distance are high, we get complexity. We’re looking at a system in which we want to change a component, but in order to understand the effects of that change, we have to investigate components that are located far away from us, maybe even in different repositories. Is it easy? No. Will it require cognitive effort? Lots of it. So that will result in cognitive load and, as a result, in complexity.
-
What if we do the opposite of that? Let’s say we have two components that are not sharing knowledge. Let’s say we are on that contract coupling level. And we are putting them close to each other in the same module, the same namespace, the same package, whatever you call it. So both values are low. And if all we have are two components, then probably, yeah, who cares.
-
But usually in a real system it’s not going to be two; it’s going to be way more. And once you have way more unrelated things located close to each other, then when you have to make a change, you suddenly have to find that one thing you have to change. And the more options you have, the higher the cognitive load. And the higher the cognitive load, the higher the complexity.
-
At this point, we can identify complexity as a situation in which integration strength is equal to distance. If both are low, or both are high, we get complexity. Now what is modularity then? Well, modularity is the opposite of complexity. If you are working on a modular system, you should know exactly what the effect of a change is going to be.
-
If complexity arises when strength and distance are equal, then modularity is when they’re not equal. If you have high integration strength and there is no way for you to reduce it, because that’s the business domain, that’s your essential complexity: deal with it.
-
Then how can you manage it? You can put those closely related things close to each other. You can minimize the distance between them. Yes, they will have to change together, but once they’re close to each other, the cognitive load on you is going to be lower, because it’s almost like modifying the same thing. Or vice versa. Let’s say we have minimal knowledge shared across coupled components; we have contract coupling. Well, what should we do? The distance should be the opposite. Let’s spread them apart, across, let’s say, namespaces or services. That’s the relationship between distance and integration strength. We can use them for evaluating the complexity and modularity of a codebase.
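The strength/distance relationship described above can be sketched as a rough heuristic (the scoring function and labels are ours, not a formula from the book): trouble appears when shared knowledge and distance are both high, or both low.

```python
# Rough heuristic: functional/intrusive coupling counts as high strength;
# separate services or systems count as high distance.
def design_smell(strength: str, distance: str) -> str:
    high_strength = strength in ("functional", "intrusive")
    high_distance = distance in ("services", "systems")
    if high_strength and high_distance:
        return "global complexity"  # cascading changes across far boundaries
    if not high_strength and not high_distance:
        return "local complexity"   # unrelated things crowded together
    return "modularity"             # strength and distance balance out

assert design_smell("intrusive", "systems") == "global complexity"
assert design_smell("contract", "methods") == "local complexity"
assert design_smell("functional", "methods") == "modularity"
```

The point of the sketch is the shape of the function, not the labels: balance means strength and distance pull in opposite directions.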
-
Now, there is another dimension, and that’s the dimension of time, or the dimension of cutting corners. I call it being pragmatic. Let’s say we have two systems. Once we’re talking about systems, then the distance is big, and we’re introducing intrusive coupling between them. Is that design necessarily bad? That’s a tricky question, because from a complexity standpoint, we should say yes, right?
-
However, what if that upstream system is not going to change? Never. Let’s say it’s a legacy system and you have to integrate with it. And that system is dead. So, should you roll up your sleeves, get your hands dirty, and implement additional endpoints for proper integration through contract coupling? You probably could. However, given that it’s a legacy system and it’s not going to change, it’s fine to take its data from its database, for example. So yes, you are introducing intrusive coupling. However, since the volatility of that upstream system is low, you are not going to feel any pain in the future because of that intrusive coupling.
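As a rough sketch of this pragmatic trade-off, intrusive coupling to a frozen legacy system might look like reading its database table directly. The table name and columns here are invented for the example; the shortcut only stays painless while the upstream schema never changes.

```python
# Hypothetical example of "pragmatic" intrusive coupling: querying a dead
# legacy system's table directly instead of building a proper contract.

import sqlite3

def read_legacy_customers(conn: sqlite3.Connection) -> list:
    # Intrusive coupling: we depend on the legacy schema (implementation
    # details). If the legacy system ever changed, this would break, but
    # by assumption its volatility is low, so it never will.
    cur = conn.execute("SELECT id, name FROM legacy_customers")
    return cur.fetchall()

if __name__ == "__main__":
    # Stand-in for the legacy database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE legacy_customers (id TEXT, name TEXT)")
    conn.execute("INSERT INTO legacy_customers VALUES ('c-1', 'Ada')")
    print(read_legacy_customers(conn))
```

If the upstream system were volatile, the same code would be a liability, and the effort of building endpoints and contract coupling would be worth it.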
-
Overall, we have three dimensions. We have integration strength and distance to show us the way, whether we are headed towards complexity or towards modularity. And we have that dimension that can help us to make pragmatic decisions based on the volatility of our components.
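The three-dimension rule summarized here can be restated as a toy decision helper. This is only an illustration: the book’s model is deliberately qualitative and does not produce a score, and the coarse "low"/"high" inputs and the function name are assumptions of this sketch.

```python
# Toy restatement of the three dimensions (an illustration, not the
# book's formal model): strength vs distance points toward complexity
# or modularity; volatility decides whether the complexity will hurt.

def evaluate(strength: str, distance: str, volatility: str) -> str:
    # Strength equal to distance (both "low" or both "high") means we
    # are heading toward complexity.
    if strength == distance:
        if volatility == "low":
            return "complexity, but low volatility: pragmatic, little pain"
        return "complexity: rebalance strength or distance"
    # Strength and distance pulling in opposite directions means modularity.
    return "modularity"
```

For example, high integration strength bridged across a large distance with high volatility is the painful case, while the same design against a frozen legacy system is merely pragmatic.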
-
If you are into domain-driven design: supporting and generic subdomains. When you’re integrating generic subdomains, I say that it’s okay to cut corners. That’s something that usually is implemented, as Eric Evans says, with a rapid application development framework.
-
Is it going to be super modular? Probably not. Why is it okay? Because those subdomains are not going to change frequently. Core subdomains, on the other hand, that’s where you should expect your changes, and that’s not a place to cut corners. That’s a place where you want modularity.
-
If you follow domain-driven design, aggregates basically take the idea of functional coupling to the extreme: we have those transactional boundaries, so we are putting all the entities that share those transactional boundaries within the same aggregate. Bounded contexts are there to protect our models. So we can use the same model of the business domain within a bounded context, but not across bounded contexts. Across bounded contexts, we need integration contracts. In the DDD language, these are open host service, published language, or anti-corruption layer.
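One of those integration contracts, the anti-corruption layer, might look like the following minimal sketch. All the model and field names are invented for this example: the point is that the translation function is the only place that knows the upstream context’s model, so upstream knowledge doesn’t leak across the boundary.

```python
# Hypothetical anti-corruption layer between two bounded contexts.

from dataclasses import dataclass

# The upstream context's published model, in its own terms.
@dataclass
class UpstreamClientRecord:
    client_ref: str
    full_name: str
    status_code: int  # upstream encodes status as magic numbers

# Our bounded context's model.
@dataclass
class Customer:
    customer_id: str
    name: str
    is_active: bool

def translate(record: UpstreamClientRecord) -> Customer:
    # The ACL: the only code that knows the upstream quirks. If upstream
    # changes, the cascading change stops here.
    return Customer(
        customer_id=record.client_ref,
        name=record.full_name,
        is_active=(record.status_code == 1),
    )
```

Everything downstream of `translate` works purely with `Customer`, keeping the knowledge boundary of our context intact.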
-
And if you analyze design patterns, architectural patterns, or cases where people are saying that one pattern is evil and should be considered harmful versus people saying that pattern will save your life. Well, consider those extreme opinions from the perspective of those three dimensions. Probably, you’re going to find the explanation for those conflicting opinions in one of those dimensions.
Balancing Coupling
-
When you are making software design decisions at whatever level of abstraction, think about those three dimensions. What is the knowledge that is being shared? What is the distance across which the knowledge is being shared? And also, of course, what is the volatility of that knowledge going to be? How can you evaluate that volatility? That’s a place where you can use different models.
-
I prefer domain-driven design subdomains, but there are other methods. My model of balanced coupling, it’s not going to give you a score, like a number, like a grade. Unfortunately, not. It’s not something that is really trivial. But at the same time, I wanted to offer that model that you can keep in the back of your head. Something that is easy to remember. Something that doesn’t require memorizing tons of different patterns.
-
All you have to remember is that there are four types of knowledge and there are three dimensions. If you evaluate that knowledge and you compare it with a distance, you know whether you’re headed towards modularity or complexity. If you’re headed towards complexity, then you can look at the volatility and decide whether it’s something that is worth your effort, or maybe you should focus on something else.
-
Overall, I would say keep these simple terms or ideas in the back of your head when you’re making software design decisions and apply them. It’s not something that is going to be easy to incorporate in a continuous integration pipeline. But it’s something that is supposed to be easy to incorporate in your software design decision-making process.
3 Tech Lead Wisdom
-
If you stumble upon something that you cannot explain clearly, I strongly recommend getting into it, learning what’s going on there.
-
We kind of understand on the gut feeling level. But it really, really helps to get past that gut feeling level towards more explicit definitions.
-
Chances are you’re not alone. Usually, there will be more people struggling to define a concept. For me, it was, for example, coupling and modularity and cohesion.
-
Once you are there, once you were able to find such a term and to define it, then you should probably share your wisdom with the world, because people are going to be grateful to you for doing that.
-
-
Modeling is a very important part of what we’re doing.
-
As software engineers, I don’t think we spend enough time on training that modeling muscle. We spend more time doing workshops on Kubernetes and Lambda functions, for example, and things that are more technical.
-
Modeling is about our ability to understand the real world, those real-world systems that we have implemented in code. So I would say spend time modeling. It’s super important to train that muscle, to get better at it.
-
And it also helps to analyze other models. And models are everywhere. Even if you’re looking at a model of a toy car, it’s still a model. So think about, analyze it from that perspective. A model is not a copy of the real world. It’s a human-made construct that is supposed to solve a problem.
-
Ask yourself, what is the problem that this specific model solves? Does it do a good job at it? Do it with your software models. And then, of course, apply that knowledge for evaluating what I called earlier models of models, integration contracts: how effective are they at encapsulating knowledge? And that will help you become a much better software designer, again, at whatever level of abstraction you’re working on. It doesn’t matter. The underlying ideas are the same.
-
-
Getting better at design.
-
Design is another overloaded term that different people understand in different ways. We have graphical design. We have software design. We have product design, whatever. But if you ask yourself what the purpose of design is, it’s usually to solve a problem. The design is of a solution. So getting better at design, it’s like getting better at modeling, but at a different level of abstraction.
-
Evaluate designs. Is it good or bad? Does it solve the problem? Probably that says something about the design.
-
And once you get into that notion of design, underneath there are usually the same principles driving whether that design is good or bad. And usually there will be some representation of distance and knowledge.
-
In software design, we have integration strength and distance and balance. In graphical design, for example, you have sizes of components on the web page. The greater the distance, the bigger should be the size, right? The greater the distance that your mouse travels.
-
-
These are the topics that are sort of philosophical, but underneath they will definitely make you a much better software architect, software designer, or just a software engineer.
[00:01:25] Introduction
Henry Suryawirawan: Hello, guys. I’m very happy to bring back another repeat guest to the Tech Lead Journal podcast. Today, we have Vladik Khononov. So Vlad, really looking forward to this conversation. I still remember our first conversation, right? It was probably two years ago. Your episode is still within, you know, the top 10, and I’m really glad to speak to you about your new upcoming book, Balancing Coupling in Software Design. So welcome to the show.
Vladik Khononov: Hey Henry, thank you so much for having me again. And wow, time flies, two years. Yeah, and I remember two years ago we were talking and I said that the coupling book is going to be finished soon and here we are two years later.
Henry Suryawirawan: Yeah. So Vlad, I think it’s been quite a while, right? So maybe in the beginning you can share, apart from this book, anything else that you did, probably interesting for you to share with us.
Vladik Khononov: So I’m mainly known by my work in the domain of DDD, Domain-Driven Design, and that’s primarily the Learning Domain-Driven Design book and the Cardboard Domain-Driven Design; people who are on Twitter maybe will be able to find out what that means. Right now, in the last three years, I’ve been working very hard to finish the book on coupling. Hands down, that’s the hardest project I ever worked on, and I can’t believe I’m saying this, but it looks like it’s almost done.
Henry Suryawirawan: Right. Yeah, so in preparation for this conversation, I actually had a chance to read the book. I think I would say almost, you know, end-to-end, cover to cover. Although it hasn’t been completed, right, on O’Reilly. In the beginning, I was actually quite interested to figure out what exactly you can say about coupling, right? So it seems like almost everyone in uni learned about this coupling, right? But probably only a little bit, right? Like, okay, coupling, cohesion, object-oriented, encapsulation and things like that. But you actually covered it in one book. And after I read it, it’s actually very, very insightful.
[00:03:28] Writing Book About Coupling
Henry Suryawirawan: So let’s try to engage in a conversation about coupling today. So maybe as the first thing, right, we all know about coupling, maybe we learned it in the university. What else are you trying to explain by writing this book?
Vladik Khononov: Yeah, so coupling… I remember when the idea for this book came up and I was talking to my friends about it. And one of them said coupling like, okay, so coupling is bad. And do you need like, really 300 pages to say that? That it’s bad?
But as they say, the devil is in the details. So it happens quite often that we’re using certain terms without fully understanding them. Like we have that gut feeling about something, so we are using it in our day-to-day language. However, we are not able to clearly describe what that something means. And that something for me was coupling and cohesion for many, many years. I remember the first ever meetup I attended; it was about coupling and cohesion. And, yeah, I came out with this feeling that on the gut feeling level I understand what that means, but how can I explain what coupling is? Why is it bad? What is cohesion? Why is that good? So that’s why I wanted to write this book and share my findings, my research on these topics.
And if you are going to read the book, you’ll see that although it’s called Balancing Coupling in Software Design, its main topic is complexity and modularity. I want to show what complexity is exactly in software design, because again that’s another thing that we understand on the gut feeling level, but it’s challenging to describe, and it’s even worse for modularity. Like modularity, okay, they started talking about it in the 60s, 70s, 80s. And today, we all know that that’s the goal we should strive towards. But what is modularity exactly? Like how can you put a number on a design? How can you evaluate whether it’s modular or not? So, that was my goal, to show how these two things are actually rather two sides of the same coin, and that coin is coupling. Coupling can take your system either towards complexity or towards modularity. And yeah, for that, I had to write almost 300 pages.
[00:06:09] Coupling
Henry Suryawirawan: Right. So I think maybe let’s try, first of all, to define the terms what is coupling, right? Because I think people know about low coupling, high cohesion, right? It’s always like the terms that you always use, right? Good code means like low coupling, high cohesion. Maybe people associate coupling with just, you know, maybe functions or classes, design, and things like that. But in your book, it is actually beyond just code level, right? It can even go up to organization level or different kind of services, right? Maybe let’s start with the definition first. In your book, what is the definition of coupling?
Vladik Khononov: Yeah, so let’s say we’re working on a system. A system is a number of components that are working together to achieve a goal. That goal is the reason why that system exists. Now, in order for those components to work together, they need to be connected, right? Because they cannot be fully independent of each other. That will be just a collection of components. That’s it. But to make the value of that system higher than the sum of its parts, we need to connect them. And those connections are basically coupling. If you go to the dictionary and look up the word coupling, well, you’ll see that initially what it means is not monolith, not big ball of mud, but it means connection. If two things are coupled, they’re connected.
So I will ask you for the rest of our discussion, at least for the next 40, 50 minutes, an hour, I don’t know how long we’re going to talk, but for the rest of this discussion, let’s treat coupling as an inherent part of system design. Not something that is necessarily good or necessarily evil, but something that is needed. We need coupling. It’s like the glue that holds a system together. We need it. However, how we are designing those interactions, how we are designing those connections between the components, that’s where it gets interesting. That’s where that coupling can take us either towards complexity or towards modularity.
[00:08:17] Shared Lifecycle & Knowledge
Henry Suryawirawan: Right. So I think it’s interesting that you mentioned about connectedness, right? It doesn’t mean that when you have coupling, it’s actually bad, right? Because coupling is necessary whenever you build software design. Because that’s how components actually work in collaboration with each other, right? And you mentioned about connectedness in your book in terms of there are two things that probably are related when you talk about coupling. The first is about shared lifecycle, and the other one is about shared knowledge. I think having an understanding of these two is really critical before we move on to the next section. Maybe you can talk a little bit more about this shared lifecycle and shared knowledge.
Vladik Khononov: Yeah, of course. So, first of all, let’s assume we have two components in a system. Now, I’m purposefully using these abstract terms, components in a system. I’m not telling what these components are. They can be services, they can be objects, classes, even methods within a class. And same is true regarding this system. It can be a distributed system. It can be a module or an object, an object-oriented code base, doesn’t matter.
So let’s say we have two coupled components. Now when we are saying that coupling is bad, what we usually imply is that we have to change those two components simultaneously. For example, you want to change one of them, but then you’re forced to apply a corresponding change to the second component, right? And that connects their lifecycles. They have to be evolved simultaneously.
Now is that something we are interested in? It depends. We’ll talk about it a bit later, I assume. But right now, let’s assume that coupling introduces that dependency in the life cycles of connected components. Now propagating changes in almost all cases is not something that we want. We want to be able to modify each component individually. Once we introduce those connections between components, in order to work together they have to share some knowledge about each other. And the amount of knowledge they share, it actually dictates what is the amount of cascading changes that are going to follow because of that design.
That knowledge can be as simple as knowledge of integration interfaces. Let’s say I have a module, so I define its public interface. You want to work with it, you have to know the details of its public interface. Or maybe, let’s say, you want to ignore that public interface and use some other means of integration; let’s say you want to reach out to my database. In that case, you will have to know the implementation details of my module. So that’s the other extreme of knowledge that can be shared. And of course, if I’m sharing only the integration interface, then the amount of cascading changes that are going to follow is going to be dramatically lower than in the second case, where you are coupling yourself to my implementation details.
So overall, these are two properties that are very interrelated. The first one is how connected are the lifecycles of those connected components. And the second one is what is the amount of knowledge that’s being shared across the components boundaries.
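The two extremes of shared knowledge just described can be shown in a few lines. A hypothetical sketch (all names invented for the example): one consumer knows only the public interface, while the other couples itself to the module’s implementation details.

```python
# Sketch of the two extremes of knowledge shared across a boundary.

class InventoryModule:
    """A module whose public interface is the intended integration point."""

    def __init__(self) -> None:
        # Implementation detail: could become a database, a cache, anything.
        self._stock = {"widget": 5}

    # Public interface: the only knowledge a well-behaved consumer needs.
    def units_in_stock(self, sku: str) -> int:
        return self._stock.get(sku, 0)

def polite_consumer(inventory: InventoryModule) -> int:
    # Contract coupling: knows only the interface; survives internal
    # refactorings of InventoryModule.
    return inventory.units_in_stock("widget")

def intrusive_consumer(inventory: InventoryModule) -> int:
    # Intrusive coupling: reaches into _stock, an implementation detail.
    # Any internal change to InventoryModule cascades here.
    return inventory._stock["widget"]
```

Both consumers get the same answer today, but replacing the dictionary with a database breaks only the intrusive one, which is the cascading change Vlad warns about.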
Henry Suryawirawan: Yeah. Thank you for explaining this. I think it’s such an insightful concept when I read it in the first few chapters. Because maybe the lifecycle is kind of like straightforward, right? Whenever you make changes, if you have to propagate into multiple places, right? It could even be services or multiple teams. That is actually pretty bad. And we know that it’s actually like tightly coupled, right, when we do that. But actually the most important thing that I find really interesting is about the shared knowledge. Because there are explicit knowledge that you share, maybe about business requirements or maybe even the interface or contract APIs, right? When you interact with APIs. But also there’s implicit knowledge that probably is a bit tricky to identify. And that’s why you find it difficult to explain the coupling by just looking at the code, right? Because it’s such an implicit thing.
[00:12:28] Cynefin
Henry Suryawirawan: And let’s try to actually analyze in terms of coupling and complexity, right? Because people associate coupling a lot with a complex code. So you try to explain it with Cynefin framework, right? So maybe a little bit of background. Why is it such a relevant to talk about Cynefin when explaining this coupling and complexity?
Vladik Khononov: Yeah. So first of all, I want to go back to your first comment about the relationship between lifecycle coupling and knowledge, because I think it’s super important what you said here. Because when people are talking about coupling, or rather decoupling, they mean let’s take something big and break it apart into those tiny parts. Let’s take a monolith and break it into microservices. However, if you are not following proper decomposition criteria for drawing the boundaries of those resultant smaller components, you will end up sharing lots of knowledge. As a result, you will end up with lots of cascading changes. As a result, you are going to end up cursing the very day you started decomposing it. And as a result, your system is going to be full of complexity.
Now, complexity is something that you know it when you see it. But, again, we need something more precise, something that we can explain. So that’s why I chose to use the Cynefin framework. I really like this framework. Originally, it’s a decision support framework. It provides different… we are talking about software engineers, so let’s say different scripts for making decisions in different situations. And it categorizes those contexts into five domains, each one requiring a unique approach to making decisions.
If let’s say you are working on a system. And again, we are software engineers, let’s say, we’re working on a code base. And I want to make a change. If I know exactly what the effect of that change is going to be then that code base is not complex, right? It’s probably simple. That’s why this domain in Cynefin terms is called clear. It’s something that is simple, clear. If I am making a change, I know what’s going to happen.
If, on the other hand, I am looking at the code base and I have no clue what’s going on there. However, I can consult somebody, some external expert, and that expert will tell me exactly what’s going to happen, then, again, it’s not complexity. This time we have a complicated system. For example, let’s say I am looking at a code base written in Go or I don’t know, Scala. Even though I tried to learn these two languages, I would still prefer to call an external expert to help me out to understand what’s going on there. So again, that’s not a reason to call it complex. That’s something that is complicated.
And we arrive at the third domain, which is the complex domain. Here, we have no idea what the effect of a change is going to be. Also there is no expert we can consult. So, the only option we have is to conduct a safe experiment, which means to make a change and observe its results. And based on those results, to try and understand its impact. So that’s complexity. Now, in this domain, we assume that there is a connection between an action and its outcome. A cause and effect.
Now, in the fourth domain, which is called chaos, there is no relationship between a cause and effect. The effects are random. Now, such things do happen in software, of course, especially in production environments after deploying on Fridays or on public holidays. However, this is something that is rather not related to our discussion of software design. It’s more related towards the runtime of the system. But in software design, when we’re changing the system, it should be somewhat deterministic, one way or another.
And finally, the fifth domain of Cynefin is disorder: when you’re looking at a domain, you have no idea where you are. And in this situation, you may assume that it’s clear, it’s something simple. But, as they say, the devil is in the details, and when you get to the details, it’s only then discovered that it’s a complex domain or chaotic domain or whatever. So these are the five domains.
And I really like the way Dave Snowden, the author of this framework, defined complexity: when the only way of identifying the outcome of an action is conducting safe experiments. Unfortunately, that’s something that is pretty frequent in software design. When you’re looking at the code base and you want to refactor, well, if you know what’s going to happen, great. Maybe it’s complicated and that expert is the compiler; maybe the compiler can tell you what’s going to happen. Or a colleague that built that codebase in the first place. But in many cases, when we’re working with legacy systems, brownfield projects, or big balls of mud, we make a change and… I still remember this. I was working at… it was actually my first, like, real workplace. And a colleague of mine showed me: look, before doing something important, I have these stones. I place these stones on my keyboard for good luck. So once you need those stones for good luck, you’re in complexity, and yeah, you need to find a way out of it one way or another.
Henry Suryawirawan: Right. I think that’s a very relatable joke, right? And some people also like think of it like tests in production, right? You will only know once it hits production with the real traffic, real data, and real user behaviors. So I think, yeah, when you find that kind of situation, probably, it’s more about complexity, right? It’s not about complicated. It’s not about something like chaotic as well, right? So I think this Cynefin framework is really, really good to picture what kind of a domain you are dealing with. And thanks for pitching in about this splitting. Because people normally when they deal with coupling, they will say, okay, I will just split more, you know, like microservice, for example. Or create more classes, create more abstraction.
[00:19:00] Essential vs Accidental Complexity
Henry Suryawirawan: The other thing about dealing with coupling for some people, right, is they build more flexibility. When you say coupled, right, it means difficult to change, so people try to build more flexibility, degrees of freedom, right? In your book, you actually find that these two extremes are very dangerous, and they might lead to complexity in terms of code, right? And you classify one as accidental complexity; the other one is essential complexity, right? So tell us more about these two different complexities as well, because I think it’s really important for software engineers to understand essential and also accidental complexity.
Vladik Khononov: Yeah, of course. So let’s say that you’re working on a system for a non-trivial business domain. Because come on, nowadays, if the business domain is trivial, ChatGPT is going to do it. So assuming that you are working on a non-trivial business domain, there will be some complexity in it. Complexity that is driven by its requirements. For example, let’s say it’s a financial system. That domain is not simple. Or a medical system, even worse. If there’s something interesting that the system is doing, usually it involves some complexity. That’s the reason the system is being built: to help its users by encapsulating that complexity in the system, taking care of it.
So that’s the essential complexity of the business domain, of the problem you’re solving. There is no way of avoiding it. If you avoid that essential complexity, then your system is not doing its job. Why do I need it if it’s not going to help me take care of that complexity, right? We cannot avoid it, so we have to manage that complexity. How can we manage it? Well, we have a plethora of tools, from domain-driven design to design patterns, whatever. We could talk in detail about each one of them, but we’d need 24 hours, at least. So that’s the essential complexity. Again, something that you manage.
The accidental complexity, on the other hand, is something that you introduce. Something that you introduce by not designing your system properly. And again, if we define complexity as that implicit relationship between an action and its outcome, then accidental complexity is designing a system in such a way that whoever is going to work on it next, or maybe even me tomorrow, won’t have any clue, if a line has been changed, what the effect of that change is going to be on other components within the same system. Or maybe on its interactions with other systems, which are even worse and even harder to predict than effects within my code base.
Again, it’s all a matter of design of how you architect your system and how you design those interactions, which means those connections, which means coupling between its components. Whether it’s going to introduce that accidental complexity. Or on the other hand it will help you to manage that essential complexity.
Henry Suryawirawan: Right. So I think when we talk about complexity, please remember in your mind, right, are you talking about the essential complexity? Which is actually the problems that you’re trying to solve. Maybe it’s the business requirements. Maybe it’s like functionalities that you have to build an algorithm for, right? Or are you actually dealing with accidental complexity? Things like you weren’t aware you designed something and actually the impact is a little bit unpredictable, right? Maybe you expect one config change, but how come you actually deal with so many cascading changes in multiple places? So I think don’t introduce too much accidental complexity. And probably, coupling is actually one of the root causes of accidental complexity.
[00:22:45] Modularity
Henry Suryawirawan: So let’s move on maybe from complexity to the other one, modularity. As you mentioned, right, when you write this book, you want to cover these two areas, really important. So I think modularity also kind of like covered a lot, right, in basic software design or, you know, programming courses. But what is something different about modularity that you want to talk about in relation to coupling?
Vladik Khononov: Yeah. So first of all, to define modularity. What is it? If you look up that word in many places, you will find the recursive definition that a modular system is one that is built of modular components. Thank you for that; that really, really helps. On a serious note, you may also find definitions saying that a modular system is one that is built out of interchangeable components. A system that resembles Lego bricks that you can connect with each other and form something else.
Now, let’s say you’re working on a typical knowledge management system for a business. Does it resemble Lego bricks? I don’t think so. Do you need interchangeable components? Maybe you will have to change the database one time, because you will need to run your tests on an in-memory database, which is not going to be the real one, but that’s it. That’s probably the limit of our interchangeable components. Maybe you should encapsulate all the managed services of your cloud vendor so that you can migrate to another cloud vendor. Maybe you should, but trust me, those abstractions you are going to introduce are going to be leaky anyway, and that migration is not going to be as simple as people hope.
So still, does it mean that we don’t need modularity? No. We definitely need modularity. However, we also need to define what we mean by it. And here, I want to go back to the definition of a system that I started with early on, saying that a system is a set of components working towards a goal. A modular system is a set of, let’s say, modular components working not only towards its goal today, but it is also designed in such a way that we’ll be able to support goals that will be defined in the future. Now, to support goals that are going to be defined in the future doesn’t mean that out of the box it has to implement all possible use cases. No. Of course not. And that’s not possible. The design of that system should be able to withstand implementations of those future use cases, those future requirements.
Now, this begs the question. Okay, so let’s say I’m working on knowledge management system for a business in financial domain. Does it mean that my design has to support the option of that system turning into a driver for a printer in the future? Of course not. That’s not a goal that you have to think about. So when thinking about modularity, we have to take into account future goals, but they have to be reasonable. How do we identify those reasonable changes? Well, that’s another thing that makes our job so hard. So we have to think about those future changes. Which changes are going to be reasonable and which are not. Of course, the example I used of a printer driver is a simple one. But yeah, real life is a bit more complicated.
So again, going back to the definition of modularity and modules and modular systems. The goal is to design a system in such a way so that it will support future requirements. Once it does, then we can say that those components it is composed of are actually modules.
So that brings me into another problematic topic of modules versus components, because I defined the system as a set of components and the modular system as a set of modules. Now, if you look up on the internet the difference between these two terms, you will see some problematic definitions. Some sources claim that modules are, let’s say, logical boundaries, whereas components are physical boundaries. Now, this thinking model can be useful in some contexts.
However, if you go back to the origins, when the term modularity was introduced to software engineering, you will see that back then they had three properties of a module. One of them was that it should encapsulate a behavior, meaning that we have some behavior that is encapsulated in one module instead of being spread across multiple modules. Second, it has to expose an interface through which other parts of the system can execute that functionality. And third, it should have the potential of being independently compiled. Only the potential. It doesn’t mean that compiling your module independently turns it into a component. No, it is still a module. Again, that definition’s flexibility is, in my opinion, very useful.
Yeah, so the difference between components and modules, it’s not about the type of boundaries, whether they’re physical or logical. It’s about encapsulating functionality. And that’s something that is essential for managing the essential complexity and not introducing accidental complexity.
Henry Suryawirawan: Right. So I think that’s really insightful, right? This is the first time, I think, I kind of found a definition of modularity, you know, like the concept of knowledge boundaries, right? And you understand the difference between modules and maybe components and things like that, right? And I think it’s very important, right, when people talk about modularity, most of the time what they are doing is actually creating more abstractions. Things like in object-oriented programming, right, when you want to create a more modular system, you create more abstractions, right? You introduce maybe an interface, you know, abstract classes and things like that.
[00:29:04] Abstraction & Knowledge Boundary
Henry Suryawirawan: First of all, is this the right way? And the second thing, right, in your book, you mentioned that a module is actually a knowledge boundary. So I think this is also something very insightful, right? You are not talking about physical or logical boundaries anymore, but it’s all about the knowledge boundary. And maybe this is related to the shared knowledge that we discussed earlier. So tell us more about this abstraction and also about this knowledge boundary.
Vladik Khononov: Yeah, sure. So introducing abstractions can be beneficial, but it can also be a very effective way of introducing accidental complexity. Again, whether you’re using it to manage the essential complexity or introducing a new type of complexity depends on your concrete context. Now, one part of me wants to say “it depends” and then advance to the next question, but I’m not going to do it. So let’s talk about what it depends on.
So let’s say I have two components and I’m thinking, are they going to interact with each other directly? Or should I introduce an abstraction between them that will encapsulate something? Now, we have to ask, will that additional abstraction help us to encapsulate knowledge, or in the terms of David Parnas, to hide some information? Will it help us to use the boundary of one of the modules as a boundary of knowledge?
So let’s say I have a module that implements some functionality that is not trivial. Let’s say it’s going to be about encryption. Yeah, and let’s say I made that very smart decision of implementing my own encryption algorithm. So one way for us to work together would be, I tell you, Henry, yesterday, I had a few beers and I was thinking about that super secure encryption algorithm. So from now on, I’m going to communicate all the information encrypted with that algorithm. In order to decrypt it, here’s what you have to do. And then I brain dump all that after-beers thinking about that encryption algorithm on you, and you have to implement it in your code base in order to decrypt my data. What’s going to happen is we have the knowledge of that probably dumb encryption algorithm duplicated in two places. One place is my module, which encrypts data, and the second is your module, which has to decrypt that data.
Of course, since I made that smart decision of coming up with an encryption algorithm, it’s going to change. Probably, there is going to be a security patch, let’s say, after a few days. So now I will apply the change in my code base, and I’ll call and say, Henry, that’s urgent. You have to modify a few lines in that algorithm because otherwise you’re not going to be able to decrypt my data. That’s the effect of duplicating knowledge. Now of course, that’s a simplified example. In real life, we’re talking about business domain entities communicating with each other, but the idea is the same. We have that knowledge that is duplicated.
Now, we could have used a module to encapsulate that knowledge, to hide that encryption algorithm in one place. And in that case, the functionality of that module would be encryption and decryption. Its interface would be probably two methods for encrypting and decrypting data. And it could be independently compiled or it could be compiled within our monolithic code base, doesn’t matter, it’s still a module. Now, if we compare the details of an encryption algorithm versus having an interface with two methods, encrypt and decrypt data, the difference between the two amounts of knowledge is substantial, right? In that case, that abstraction is going to be super useful. It manages the essential complexity. That encryption algorithm is encapsulated. So it helps us to make our system simpler, because all those other components that will have to use it, they don’t have to be aware of the internal details of that logic.
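To make the encapsulation concrete, here is a minimal Python sketch of such a module. The class name, the key handling, and the toy XOR cipher are all hypothetical, purely for illustration: the point is that consumers see only the two-method interface, while the algorithm stays hidden and can be patched without a cascading change.

```python
import base64


class Encryptor:
    """Hypothetical module whose public interface hides the algorithm.

    Callers only know about encrypt() and decrypt(); the "clever"
    algorithm below (a toy XOR cipher, NOT real cryptography) is an
    implementation detail that can change without affecting consumers.
    """

    def __init__(self, key: bytes):
        self._key = key  # private: not part of the shared knowledge

    def _xor(self, data: bytes) -> bytes:
        # Implementation detail: a "security patch" can rewrite this
        # freely, as long as encrypt/decrypt keep their contract.
        return bytes(b ^ self._key[i % len(self._key)]
                     for i, b in enumerate(data))

    def encrypt(self, plaintext: bytes) -> bytes:
        return base64.b64encode(self._xor(plaintext))

    def decrypt(self, ciphertext: bytes) -> bytes:
        return self._xor(base64.b64decode(ciphertext))
```

The amount of knowledge a consumer needs drops from "the whole after-beers algorithm" to "call encrypt and decrypt", which is exactly the difference between duplicated knowledge and an encapsulating boundary.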
On the other hand, let’s say we’re building a system, and that pattern, data transfer objects, it gets lots of hate. Like people are saying, okay, so I have my business entities and then I have to introduce DTOs that are going to be on the boundary of my application. And my APIs are going to return those DTOs, right? Now, the reason people love to hate DTOs is that in their code bases, usually what they are doing is translating one data structure to another data structure which looks exactly the same. So you’re basically just copying values from one data structure to another one. However, as long as the attributes of those data structures, and the types of those attributes, are exactly the same, we’re not actually encapsulating any knowledge by introducing that abstraction. What we’re doing is introducing another moving part. We’re introducing another object or another class that we’ll have to think about when we are going to change our model. We’ll probably have to apply the same change, let’s say, renaming a field, in both places. Now, in this case that DTO is not an effective abstraction. It increases complexity. It introduces accidental complexity.
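A minimal sketch of the mirror-image DTO being described, with hypothetical names, shows why nothing is encapsulated:

```python
from dataclasses import dataclass


@dataclass
class Customer:          # internal domain entity
    id: int
    name: str
    email: str


@dataclass
class CustomerDTO:       # API-boundary object with the exact same shape
    id: int
    name: str
    email: str


def to_dto(c: Customer) -> CustomerDTO:
    # A pure field-by-field copy: no knowledge is hidden here.
    # Renaming Customer.email forces the same rename in CustomerDTO
    # and in this mapping -- an extra moving part (accidental
    # complexity), unless the two shapes are expected to diverge.
    return CustomerDTO(id=c.id, name=c.name, email=c.email)
```

Every change to the entity ripples through the DTO and the mapping, so the abstraction adds cognitive load without reducing the knowledge shared across the boundary.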
Now, am I saying that DTOs are bad? Of course not. I just gave you an example in which DTOs are useless. Sorry, maybe not useless, but they’re not providing value. On the other hand, they’re introducing another something you have to think about. And the more things you have to think about, the higher the cognitive load. And our cognitive load limits are not looking good. There were studies done in the 50s, then repeated in the 2000s, and the results were not that spectacular early on, and the number was reduced even further 50 years later. And yeah.
So going back to your question of abstractions, it depends. If abstraction helps you to encapsulate knowledge, it’s effective. If it doesn’t, then it’s just going to introduce additional moving parts and you’ll end up increasing the accidental complexity of the system.
Henry Suryawirawan: I’m so glad that you brought up the DTO discussion, right? Because I think, yeah, it’s like a love-hate kind of pattern for some people. They find it like more work in terms of creating the APIs. You know, you have to build this translation, which is probably the same when you build a system the first time, right? It’s like mapping the same kind of data structure. But I think if you think about it, as the system evolves, the API evolves, and your implementation details change over time, maybe the database, how you store the data, changes, right? The DTO is like a boundary, right, a contract that you probably pass on to the other teams or the other service. And that doesn’t change, right, as long as you create this proper abstraction. So thanks for bringing that up.
[00:36:25] 3 Dimensions of Coupling
Henry Suryawirawan: Let’s move to the next topic. I know that we can probably talk a lot more about complexity and modularity, but I want to bring up another fascinating thing from the book, right, which is actually how to measure coupling. So in the beginning, you mentioned that sometimes it’s very hard to actually put a number on how coupled our code is, right? It’s always like a, you know, rough idea, maybe an estimation. But actually in your book, you bring three different dimensions that we can use to kind of quantify complexity, coupling, and things like that. And you brought up the point about integration strength, space or distance, and time or volatility. So maybe, I know it’s pretty hard to explain all three at the same time, but if you can elaborate on what these three dimensions are, how can we use them to actually assess the complexity or the levels of coupling of our code base?
Vladik Khononov: Yeah, so evaluating coupling is something that we have been trying to do for ages now. I think since the 60s, when the structured design methodology was coined; they started working on it in the late 60s. So they introduced a model for evaluating coupling. Then there was connascence. Then there were some metrics, and those metrics tried to put a number on coupling. The problem with that approach is that it was based on counting variables or counting methods. But is that number something that you can really, really trust?
And there is a talk that I did at a few conferences with my friend, Sonya Natanzon, in which we’re talking about software design metrics, and basically saying that you shouldn’t trust them. And there is a metric called stability, which is about the relationship between the incoming connections and outgoing connections, afferent coupling and efferent coupling. As an ESL speaker, I really hate these terms, like passionately. That metric says that if there are more modules that depend on you than the number of modules you depend on, then your stability score is going to be higher. So in that talk, I’m showing an example which is supposed to be perfect in that sense. Like a module that, let’s say, 100 other modules depend on, and our module only depends on one other external component. That’s it. So the ratio is 1 to 100, so it’s supposed to be super stable. However, again, as I mentioned a couple of times, the devil is in the details. And on the next slide, I’m showing how that one dependency is implemented. And what I’m showing is that it uses reflection to read the value of a private field. So what is going to happen? Well, once I’m introducing that dependency on an implementation detail, any change to the implementation details of that external module has the potential of breaking my component. Now, once I have something that has to be changed in my component, then I have a hundred other dependents that are probably going to be affected by it. So what we’re getting is a perfect stability score that ended up being a big ball of mud, basically.
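For reference, the stability metric being critiqued is usually computed as instability, I = Ce / (Ca + Ce), where Ca counts afferent (incoming) dependencies and Ce counts efferent (outgoing) ones. A small sketch of the 1-to-100 example:

```python
def instability(afferent: int, efferent: int) -> float:
    """Instability metric: I = Ce / (Ca + Ce).

    afferent (Ca): number of modules that depend on this one.
    efferent (Ce): number of modules this one depends on.
    0.0 = maximally stable, 1.0 = maximally unstable.
    """
    if afferent + efferent == 0:
        return 0.0
    return efferent / (afferent + efferent)


# Vlad's example: 100 dependents, 1 dependency -> near-perfect score
score = instability(afferent=100, efferent=1)  # ~0.0099, "super stable"
# ...yet the metric says nothing about HOW that one dependency is
# implemented (e.g. reflection on a private field), which is exactly
# where the design breaks down.
```

This is the crux of the argument: the number is easy to compute and easy to trust, but it cannot see the fragile, implicit knowledge hiding inside a single edge of the dependency graph.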
So that’s why initially I tried to avoid putting numeric values on coupling. Because it’s all about details, details that are hard to quantify and hard to describe. So instead, in the book, I propose a different model of evaluating coupling that is based on three dimensions, as you mentioned. The first one is the dimension that measures knowledge. What is the amount of knowledge that is shared between two connected components? How do you measure knowledge? Can you put it on a scale and have a number next to it? Of course not. Maybe someday, I don’t know.
By the way, when I was doing talks at conferences on this subject, almost always somebody from the audience said, hey, you really have to think about a way of doing this automatically, of evaluating knowledge. In the book, I said, okay, I tried, I failed. That’s a topic for further research. If you want to do it, good luck. I’d be happy.
So, knowledge. So we cannot put a number on it yet. However, we can look at that early work back from the late 60s, early 70s, the structured design methodology. And back then, they introduced a model called module coupling. It had six levels that are kind of challenging to apply in our modern systems, because that model is based on languages such as COBOL and FORTRAN. It’s like fun, but the opposite of fun.
Instead of having to learn those languages, what those keywords mean, and how to translate them to our world, in the Balancing Coupling book, I propose a different model. I called it integration strength. It is based both on structured design’s model and another one, which I’ll talk about in a minute. Basically, it adapts those six levels into four levels of integration strength, with terminology that’s going to be more convenient for us today. The four terms are contract, model, functional, and intrusive coupling. They are not defining the amount of knowledge. They’re defining the type of knowledge.
So let’s start from the biggest one. Intrusive coupling. This one means that I’m using something other than public interfaces for integration. Let’s say I’m using reflection. Let’s say I’m using another microservice’s database directly, or something else, whatever, that wasn’t intended for integration. I’m introducing an intrusion into its boundaries. That’s why it’s called intrusive coupling. Now once we’re introducing intrusive coupling, we have to assume that the author of that module might have no clue that we’re doing that thing. Which means almost any change that they’re applying has the potential of breaking the integration. So, lots of knowledge, the integration interface is fragile, it’s implicit, so expect cascading changes.
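In Python, the closest analogue to the reflection example is reading a name-mangled "private" attribute. The class and field names below are hypothetical, but the mechanism is real:

```python
class PaymentService:
    """Hypothetical upstream module with a private implementation detail."""

    def __init__(self):
        self.__fee_rate = 0.03  # double underscore: intended as private

    def charge(self, amount: float) -> float:
        # The public interface: the only intended integration point.
        return amount * (1 + self.__fee_rate)


# Intrusive coupling: the consumer reaches past the public interface
# and reads the private field via its mangled name. The author of
# PaymentService has no idea this dependency exists, so renaming or
# removing __fee_rate silently breaks the consumer.
svc = PaymentService()
fee = getattr(svc, "_PaymentService__fee_rate")
```

Nothing in the upstream module's contract protects `fee`: any refactoring of the private field is a potential breaking change, which is what makes this type of knowledge sharing the strongest.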
Another type of knowledge that can be shared is the knowledge of functional requirements. I called it functional coupling. If the previous one was about how that upstream module or component is implemented, this one is about what functionality the component is implementing. Now, let’s say that we have two components and they implement closely related business functionalities. That means that probably they will have to change simultaneously, because of the same changes in business requirements. That means we have functional coupling.
An extreme example of functional coupling would be, let’s say, we have the same business rule implemented in two places, or the same business algorithm or the same business invariant. And from the business standpoint, if the requirements or the definition of that rule changes, they have to change simultaneously. Because otherwise the system is going to be in an invalid state. In that case, we have a very strong case of functional coupling. By the way, they don’t have to be physically connected. I call it wireless coupling.
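A hedged sketch of what such "wireless" functional coupling can look like, with hypothetical services and a made-up discount rule. Nothing physically connects the two functions, yet they must change together:

```python
# Two functions that, in real life, would live in separate codebases
# with no import or call between them ("wireless" coupling).

# orders service
def is_eligible_for_discount(order_total: float) -> bool:
    # Business rule: orders of $100 or more qualify.
    return order_total >= 100.0


# invoicing service (imagine: a different repository, a different team)
def apply_discount(invoice_total: float) -> float:
    if invoice_total >= 100.0:  # the SAME rule, duplicated
        return invoice_total * 0.9
    return invoice_total
```

If the business moves the threshold to $150, both places must be updated in the same release, or the system ends up in an invalid state where one service grants a discount the other refuses to apply.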
Another case of functional coupling is, let’s say we have multiple operations that require concurrency control, probably, because they’re going to work on the same set of data, right? In that case, again, functional coupling. Or maybe we have operations or functionalities that have to be implemented in a specific order, one after another. That’s also a case of functional coupling. That requirement of being executed in a specific order is probably there for a reason. Probably, it introduces some kind of business dependency there. So that’s the level of functional coupling. Here we are sharing the knowledge of our business requirements.
Now, to implement those business requirements, usually we have to model a business domain. We have to understand the system we are implementing and define a model that represents that business domain. And then we are going to implement the functionality, the requirements, in code using that model. If you have two components that are based on the same model, which means if the model changes, both of them have to change, then you have model coupling.
And finally, the lowest level is contract coupling. So contract coupling, we can think about it as a model of a model. Remember the discussion about DTOs? I used an illustration of an ineffective DTO, and then you, Henry, elaborated and discussed an effective use of DTOs. In that case, those effective DTOs are contracts, integration contracts. That’s a model of a model that was crafted with the purpose of encapsulating the model that’s being used internally. Whenever that internal model changes, we can contain those changes behind the same integration contract. So we are minimizing the knowledge that we are sharing across our boundary.
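A sketch of such a contract, a deliberately smaller "model of the model". The field names are hypothetical; the point is that the contract exposes less knowledge than the internal model holds:

```python
from dataclasses import dataclass


# Internal model: free to evolve (representation choices are private).
@dataclass
class Account:
    iban: str
    balance_cents: int   # stored as integer cents internally
    owner_first: str
    owner_last: str


# Integration contract: a smaller "model of the model".
@dataclass
class AccountSummary:
    iban: str
    owner: str           # the first/last-name split is hidden
    balance: float       # the cents representation is hidden


def to_contract(a: Account) -> AccountSummary:
    # The translation is where knowledge gets encapsulated: internal
    # changes (e.g. storing balances differently) stay behind it.
    return AccountSummary(
        iban=a.iban,
        owner=f"{a.owner_first} {a.owner_last}",
        balance=a.balance_cents / 100,
    )
```

Unlike the mirror-image DTO, this contract earns its keep: the internal model can rename fields or change representations, and only the translation function needs to change.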
So overall, these are four types of knowledge. We can share the knowledge about our integration contracts, that’s contract coupling. We can share how we see and how we think about the business domain, its model, that’s model coupling. Then we can share knowledge about our business requirements, functional coupling. And finally, we can share knowledge about our implementation details, and that’s intrusive coupling. So overall, these four levels are not going to put an exact number on the weight of knowledge you’re sharing. However, these are four different types that signal different amounts of knowledge being shared.
Now, if you’re going to read the book, then you will see that each one of these types also has degrees, except for intrusive coupling. Functional, model, and contract coupling have degrees. So that, first of all, you can compare two designs of the same integration strength, and also have more fine-grained control over the knowledge that’s being shared there. So, again, integration strength: overall four levels, and three of them have degrees.
Now, the four levels are based on structured design’s module coupling model from the late 60s, early 70s. And those degrees that I’m using for functional, model, and contract coupling are based on a model called connascence, which was introduced in the 90s. So integration strength kind of combines both. Let’s say that you evaluated that knowledge using integration strength. You have two components and you identified what knowledge is being exchanged between them.
Does it say that, let’s say, functional coupling is necessarily bad? Or model coupling, is it worse than contract coupling? Well, it depends. If you can reduce it, of course, you have to reduce it. If you can turn a model coupling into contract coupling, probably, you should do it. But not always. Sometimes you have to share a model. Or sometimes you have to share business requirements, right? Does it mean that your design is bad? No. It depends on the next dimension, the dimension of distance between those connected components.
Now, since the beginning of our discussion, I was using abstract terms to describe systems. I was saying we have sets of components. I purposefully didn’t mention whether those components are methods, objects, services, or whole systems or whatever. That’s because we can introduce coupling across different levels of abstraction. We can have coupling between methods, coupling between objects, coupling between modules, namespaces, services, whatever, whole systems. Now, the higher we go on that layer of abstraction scale, the higher the physical distance between the source code in which those components are implemented.
Like extreme case, let’s say, you have two methods within the same class, probably they are going to be close to each other. Probably, in the same file, right? Different objects, probably, different files. Different namespaces, different folders. Different services, maybe, different repositories. Different systems, maybe, different companies, etc. So the higher you are on that scale, the longer the distance.
Why is that important? It’s important because if you combine it with the knowledge, you get a sense of whether you’re going towards complexity or towards modularity. Because let’s say that you have two components with functional coupling between them which means they’re sharing a lot of knowledge. And you’re putting them in separate microservices, which means the distance between them is big as well. Now, that functional coupling kind of implies that we’re sharing lots of knowledge. So if something is going to change with that knowledge, that change is going to be propagated across the boundaries. So both of them are going to be changed simultaneously.
Now, if the distance is big, is it going to be an easy change? Probably not. The bigger the distance, the harder it is going to be to implement the change. In other words, we can say that the bigger that distance, the more coordination effort will be needed to implement a change that affects both coupled components. So if we have both integration strength and distance high, we get complexity. We’re looking at a system in which we want to change a component, but in order to understand the effects of that change, we have to investigate components that are located far away from us, maybe in different repositories even. Is it easy? No. Will it require cognitive effort? Lots of it, right? So that will result in cognitive load, and as a result, in complexity.
Now, what if we do the opposite of that? Let’s say we have two components that are not sharing knowledge. Let’s say we are on that contract coupling level. And we are putting them close to each other, in the same module, the same namespace, the same package, whatever you call it. So both values are low. And if all we have is two components, then probably, yeah, who cares. But usually in a real system it’s not going to be two, it’s going to be way more. And once you have way more unrelated things located close to each other, then when you have to make a change, you suddenly have to find the thing you have to change, right? And the more options you have, the higher the cognitive load. And the higher the cognitive load, as a result, the higher the complexity.
So, at this point, we can identify complexity as a situation in which integration strength is equal to distance: both are low, or both are high, and we get complexity. Now what is modularity then? Well, modularity is the opposite of complexity. If you are working on a modular system, you should know exactly what the effect of a change is going to be. So we can apply it here as well. If complexity is the case of both strength and distance being equal, then modularity is when they’re not equal. And again, extreme examples. If you have high integration strength and there is no way for you to reduce it, because that’s the business domain, that’s your essential complexity, deal with it. Then how can you manage it? Well, you can put those closely related things close to each other. You can minimize the distance between them. Yes, they will have to change together, but once they’re close to each other, the cognitive load on you is going to be lower, because it’s almost like modifying the same thing. Or vice versa. Let’s say we have minimal knowledge shared across coupled components. We have contract coupling. Well, what should we do? The distance should be the opposite. Let’s spread them apart, let’s say across namespaces or services, whatever. Instead, in those resultant services, we’ll put only things that are related to our modules. In other words, those that do have high integration strength.
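The strength-versus-distance rule of thumb from this discussion can be condensed into a tiny predicate. This is a deliberate simplification (it treats each dimension as just high or low, whereas the book's levels are finer-grained):

```python
def design_tendency(strength_high: bool, distance_high: bool) -> str:
    """Rule of thumb from the discussion: when integration strength
    and distance 'agree', the design drifts toward complexity; when
    they oppose each other, it drifts toward modularity.
    """
    if strength_high == distance_high:
        # high + high: cascading changes across far-apart components;
        # low + low: unrelated things crowded together.
        return "complexity"
    return "modularity"
```

For example, high strength with low distance (closely related things kept together) and low strength with high distance (contract-coupled things spread apart) both land on the modularity side.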
So that’s the relationship between distance and integration strength. We can use them for evaluating the complexity and modularity of the codebase. Now, there is another dimension, and that’s the dimension of time, or the dimension of cutting corners, as I call it. Being pragmatic. Let’s say we have two components with functional coupling between them. No, not functional, let’s say intrusive coupling between them, and a big distance between them. Let’s say we have two systems. Once we’re talking about systems, then the distance is big, and we’re introducing intrusive coupling between them. Is that design necessarily bad? Well, that’s a tricky question, because from a complexity standpoint, we should say yes, right? However, what if that upstream system is not going to change? Never. Let’s say it’s a legacy system and you have to integrate with it. And that system is dead, like, in your company nobody has the courage to touch it, but you still have to integrate with it. So should you roll up your sleeves, get your hands dirty, and implement additional endpoints for proper integration through contract coupling? You probably could. However, given that it’s a legacy system and it’s not going to change, it’s fine to take its data from its database, for example. So yeah, you are introducing intrusive coupling. However, since the volatility of that upstream system is low, you are not going to feel any pain in the future because of that intrusive coupling.
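The legacy-system exception described here can be folded in as a third input. This is an editorial simplification of the balancing idea, treating each dimension as a boolean rather than the graduated levels used in the book:

```python
def is_balanced(strength_high: bool, distance_high: bool,
                volatile: bool) -> bool:
    """Sketch of the balancing rule discussed here: a strength/distance
    mismatch is modular on its own; a match (both high or both low) is
    tolerable only when the coupled components rarely change.
    """
    return (strength_high != distance_high) or not volatile
```

Under this reading, intrusive coupling to a frozen legacy system (high strength, high distance, low volatility) comes out as an acceptable, pragmatic trade-off, while the same design against a frequently changing upstream does not.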
So overall, we have three dimensions. We have integration strength and distance showing us the way, whether we are headed towards complexity or towards modularity. And we have that dimension that can help us make pragmatic decisions based on the volatility of our components. Now, if you combine the three of them together: basically, if you are into domain-driven design, then for supporting and generic subdomains, when you’re integrating them, I say that it’s okay to cut corners. That’s something that usually is implemented, as Eric Evans says, with a rapid application development framework. Is it going to be super modular? Probably not. Why is it okay? Because those subdomains are not going to change frequently. Core subdomains, on the other hand, that’s where you should expect your changes, and that’s not a place to cut corners. That’s a place where you want modularity.
And if you follow domain-driven design, aggregates basically take the idea of functional coupling to the extreme: we have those transactional boundaries, so we are putting all the entities that share those transactional boundaries within the same aggregate. Bounded contexts are there to protect our models. So we can use the same model of the business domain within a bounded context, but not across bounded contexts. Across bounded contexts, we need integration contracts. In DDD language, these are open host service, published language, or anti-corruption layer.
Yeah, and if you analyze design patterns, architectural patterns, or cases where people are saying that one pattern is evil and should be considered harmful versus people saying that that pattern will save your life. Well, consider those extreme opinions from the perspective of those three dimensions. Probably, you’re going to find the explanation for those conflicting opinions in one of those dimensions.
Henry Suryawirawan: Wow! I feel like I am listening to an insightful lecture, right? So I think what you just explained, right, with different scenarios, different kinds of permutations, it kind of opens up our eyes a bit, right? Like, how do you analyze complexity? How do you analyze things that are probably a bit difficult to change? Probably also think about dealing with legacy, and you bring in your bread and butter DDD concept as well, right? As you probably write through the last chapters of the book, right? I think all of this is now starting to make more sense, right? You bring up the topic of coupling, but also the topics of architecture, design, and DDD itself, right?
I hope people, by listening to this episode, or maybe better, by reading your book, actually get more tools to discuss software design. Because yeah, these three different dimensions are really, really critical if you want to talk about the complexity and modularity of the system, right? And whether your system actually can take the trade-off, right? Because coupling, as you mentioned in the book, you cannot really eliminate it. Because the software has to work together, there will be a level of coupling. Whether high coupling is bad, again, it depends on the context, right? If it’s not so volatile, probably it’s not so bad. And you bring in the topic of DDD.
[00:58:11] Balancing Coupling
Henry Suryawirawan: So maybe, as we move on to the later part of our conversation. I hope you still have the time, right? So in the last part of the book, after we understand the definition of coupling, after we understand the three dimensions of coupling and how we assess our system design, the last part is about balancing the coupling. Now you understand, you have all the tools and the knowledge required. In your software design, you advise us to actually know how to balance this coupling when you make decisions in your software architecture. Probably a little bit more practical tips, right? Now that we know all this, how would you advise us software engineers to think about balancing coupling in our software design?
Vladik Khononov: Yeah, so when you are making software design decisions, at whatever level of abstraction, think about those three dimensions. What is the knowledge that is being shared? What is the distance across which the knowledge is being shared? And also, of course, what is the volatility of that knowledge going to be? How can you evaluate that volatility? Well, that’s a place where you can use different models. I prefer domain-driven design subdomains, but there are other methods. Now, my model of balanced coupling is not going to give you a score, like a number, a grade, so you can say, hey, I implemented a system with 99 percent balanced coupling. Unfortunately, not. Maybe somebody someday will join forces and implement it, but the devil is in the details. It’s not something that is really trivial.
But at the same time, I wanted to offer that model that you can keep in the back of your head. Something that is easy to remember. Something that doesn’t require memorizing tons of different patterns. All you have to remember is that there are four types of knowledge, and that’s basically it. And there are three dimensions. If you evaluate that knowledge and you compare it with a distance, you know whether you’re headed towards modularity or complexity. If you’re headed towards complexity, then you can look at the volatility and decide whether it’s something that is worth your effort, or maybe you should focus on something else.
So overall, I would say keep these... I hope that they are simple... these simple terms or ideas in the back of your head when you’re making software design decisions, and apply them. Again, it’s not something that is going to be easy to incorporate into a continuous integration pipeline. But it is something that is supposed to be easy to incorporate into your software design decision-making process.
Henry Suryawirawan: Yeah. So four different types of knowledge sharing, right? Just to recap a little bit: there’s intrusive coupling, there’s model coupling, functional coupling, and also contract coupling, right? So these four are very important. Maybe you should bring up these kinds of terms when you discuss your software design collaboratively. And it’s not just about this knowledge sharing, because again, with coupling, probably the most problematic one is the knowledge sharing, right? The shared knowledge, the thing that we talked about in the beginning. So after you analyze all this, you also bring in the three different dimensions. So integration strength is basically those four things, right? And then you bring up the topic of distance, and also volatility.
So all in all, if you bring all these variables into your discussion, maybe you’ll make better decisions, right? And also don’t forget, maybe in the beginning you design a well-balanced coupled system. But over time, because businesses change, software changes, which you also covered in your book. There are so many reasons why software will change, right? And let’s not forget, when something major changes, you also bring up this topic again to actually make sure that your change can rebalance your software design in terms of coupling. So that in the end it doesn’t eventually create a big ball of mud. Or, the worst case, a distributed big ball of mud, which is like the crazy microservices that people are, you know, struggling with.
So I think, all in all, these are very good topics, right? For people who would like to understand further, because I’m sure maybe just listening to this conversation is a little bit too short, right? Maybe you should check out Vlad’s book. I think it’s coming out soon. And hopefully people will get better tools, you know, in terms of coming up with better software design.
[01:02:30] 3 Tech Lead Wisdom
Henry Suryawirawan: So let's wrap up. But before I let you go, Vlad, I don't know whether you still remember that I asked you this question last time. I asked you about your three technical leadership wisdoms, I think two years apart, right, and with a new book as well. Maybe you can share a little bit of the wisdom you have for us, maybe with the theme of coupling. So yeah, is there anything that you want to share with us?
Vladik Khononov: Yeah, so three tech leadership wisdoms. That's a hard question. We started by discussing things that we understand on a gut-feeling level. But it really, really helps to get past that gut-feeling level towards more explicit definitions. So if you stumble upon something that you cannot explain clearly, I strongly recommend getting into it and learning what's going on there. Because chances are you're not alone; usually, there will be more people struggling to define that concept. For me, it was, for example, coupling and modularity and cohesion. And once you've been able to find such a term and define it, then you should probably share your wisdom with the world, because people are going to be grateful to you for doing that. So that was the first one.
The second one would be about modeling. Modeling is a very important part of what we're doing, and as software engineers, I don't think we spend enough time training that muscle, that modeling muscle. We spend more time doing workshops on Kubernetes and Lambda functions, for example, and things that are more technical. Modeling is about our ability to understand the real world, those real-world systems that we implement in code. So I would say spend time modeling. It's super important to train that muscle, to get better at it.
And it also helps to analyze other models, and models are everywhere. Even if you're looking at a model of a toy car, it's still a model, so analyze it from that perspective. A model is not a copy of the real world. It's a human-made construct that is supposed to solve a problem. So ask yourself: what is the problem that this specific model solves? Does it do a good job at it? If it's a toy car, then probably its goal is to mimic the possession of some cool car. How cool is it, etc.? Do the same with your software models. And then, of course, apply that knowledge when evaluating what I called earlier models of models, integration contracts: how effective are they at encapsulating knowledge? That will help you become a much better software designer, at whatever level of abstraction you're working on. It doesn't matter; the underlying ideas are the same.
That brings me to the third tech leadership wisdom, and that would be about design. Design is another overloaded term that different people understand in different ways. We have graphic design, software design, product design, whatever. But if you ask yourself what the purpose of design is, it's usually to solve a problem; a design is the design of a solution. So again, getting better at design is like getting better at modeling, but at a different level of abstraction. Evaluate designs. Let's say you're looking at the microphone I'm using right now. What about its design? Is it good or bad? And this mute button that is impossible to reach: is it good? Does it solve the problem, or should I keep my app open on the screen all the time? Probably that says something about the design.
And once you get into that notion of design, the design of whatever, from appliances to software, underneath there are usually the same principles driving whether that design is good or bad. And usually there will be some representation of distance and knowledge. In software design, as I said, we have integration strength, distance, and volatility. In graphical design, for example, you have the sizes of components on a web page: the greater the distance your mouse has to travel, the bigger the component should be, right?
So yeah, these are topics that are sort of philosophical, but underneath they will definitely make you a much better software architect, software designer, or just software engineer, which is what I call myself.
Henry Suryawirawan: Right. I really love this meta wisdom. You bring up topics that are quite high level. I like the term model of model, actually. It explains the contract really, really well: you have a model, and then you come up with another model to exchange the knowledge or information between, maybe, two components or two services, right?
So Vlad, it's been a pleasure to discuss this topic with you. I think coupling is something that all of us software engineers really need to understand well. And I find that for many software engineers, when we talk about software design, this is rarely included in the discussion, simply because we talk about architecture patterns, technologies, maybe cloud technologies, Kubernetes, and things like that. But coupling, modularity, complexity, knowledge boundaries, and so on come up very little in the discussion. So I hope people are better equipped now with the book. And for people who would love to ask you questions, continue the discussion, or just find out more about this topic, is there a place where they can find such resources online?
Vladik Khononov: Yeah. You can find me on Twitter and on LinkedIn; I assume the links are going to be in the show notes. I have a blog, which looks like a very sad place right now, because I didn't have time to update it. And every month some nice person emails me saying that a link on my blog is broken, and I promise to fix it, but here we are. So yeah, I would say Twitter and LinkedIn are the places to get news. The coupling book is supposed to be published in September 2024. At the moment, it's available on the O'Reilly Online Learning Platform. That draft is probably the same one that's going to be published, just improved with professional copy editing and professional illustrations. And there is also going to be a summary chapter, but that's about it.
Henry Suryawirawan: Right. So I really highly recommend the book; you can also check it out on O'Reilly. I read it as part of the preparation for this episode, and I would say it's pretty robust and complete in terms of content. So good luck with your publication. Hopefully, people will get to understand these concepts much better. Thanks again for your time, Vlad.
Vladik Khononov: Thank you so much, Henry. It’s been a pleasure.
– End –