#241 - Your Code as a Crime Scene: The Psychology Behind Software Quality - Adam Tornhill
“If you have a healthy codebase, then your development work is going to be very predictable. If you have unhealthy code, then a task can take you up to 10 times longer.”
Why do so many software projects still fail despite modern tools? The answer often lies in the psychology of the team, not the technology stack.
Software development is often viewed purely as a technical challenge, yet many projects fail due to human factors and cognitive bottlenecks. In this episode, Adam Tornhill, CTO and Founder of CodeScene, shares his unique journey combining software engineering with psychology to solve these persistent industry problems. He explains the concept of “Your Code as a Crime Scene,” a method for using behavioral analysis to identify high-risk areas in a codebase that static analysis tools often miss.
Adam covers the tangible business impact of code health, specifically how it drives predictability and development speed. He explains why 1-2% of our codebase accounts for up to 70% of our development work, and how focusing on these hotspots can make our team 2x faster and 10x more predictable. Adam also provides a critical reality check on the rise of AI in coding, exploring whether it will help reduce technical debt or accelerate it, and offers strategies for maintaining quality in an AI-assisted future.
Key topics discussed:
- Combining psychology and software engineering
- Why predictability matters more than speed
- Treating your codebase as a crime scene
- Behavioral analysis vs. static analysis
- The hidden danger of the “Bus Factor”
- Will AI help or hurt code quality?
- Why healthy code helps both humans and AI
- Essential guardrails for AI-generated code
Timestamps:
- (01:29) Career Turning Point: From Developer to Psychologist
- (02:36) Combining Psychology and Software Engineering
- (04:00) Why Engineering Leaders Need Psychology Knowledge
- (05:46) The Root Cause of Failing Software Projects
- (07:43) Why Code Abstractness Makes Quality Hard to Measure
- (09:29) Aligning Code Quality with Business Outcomes
- (11:37) Code Health: 2x Speed, 10x Predictability
- (12:58) Why Predictability is Undervalued in Software
- (14:15) TDD and Practices That Drive Code Quality
- (17:06) Benchmarking Code Health Across the Industry
- (19:53) Introducing “Your Code as a Crime Scene”
- (21:57) Behavioral Code Analysis: Hotspot Analysis vs Static Code Analysis
- (24:06) Behavioral Code Analysis: Understanding Change Coupling
- (26:30) Dealing with God Classes
- (29:40) Behavioral Code Analysis: The Social Side of Code
- (31:33) Why Developers Aren’t Interchangeable
- (33:14) Introduction to CodeScene
- (36:48) Will AI Help or Hurt Code Quality?
- (39:14) Essential Guardrails for AI-Generated Code
- (42:06) Using CodeScene to Maintain Quality in the AI Era
- (43:06) How AI Accelerates Technical Debt at Scale
- (45:54) Why AI-Friendly Code is Human-Friendly Code
- (48:32) Documentation: Capturing the “Why” for Humans and AI
- (50:42) The Reality Check: Future of Software Development with AI
- (52:41) 3 Tech Lead Wisdom
_____
Adam Tornhill’s Bio
Adam Tornhill is a programmer who combines degrees in engineering and psychology. He's the founder and CTO of CodeScene, a next-generation code analysis tool that helps companies succeed with software development. Adam is also the author of multiple technical books, including the best-selling Your Code as a Crime Scene, as well as an international keynote speaker and software researcher.
As the founder of CodeScene, Adam aims to revolutionize software development, leveraging AI-driven methodologies to optimize code quality. Adam’s expertise and research have made him a sought-after speaker, inspiring audiences worldwide with his insights into software engineering, the business impact of code quality, and AI innovation. With acclaimed books and patents to his name, Adam continues to shape the future of software development, driving excellence in the industry.
In his spare time, Adam enjoys other interests such as modern history, music, retro computing, and martial arts.
Follow Adam:
- LinkedIn – linkedin.com/in/adam-tornhill-71759b48
- CodeScene – codescene.com
- 📖 Your Code as a Crime Scene – pragprog.com/titles/atcrime2/your-code-as-a-crime-scene-second-edition
Mentions & Links:
- 📝 6X improvement over SonarQube - Raising the Maintainability bar - https://codescene.com/blog/6x-improvement-over-sonarqube
- 📖 Zen and Art of Motorcycle Maintenance - https://www.amazon.com/Zen-Art-Motorcycle-Maintenance-Inquiry/dp/0061673730
- 📝 Code Red: The Business Impact of Code Quality - https://codescene.com/hubfs/web_docs/Business-impact-of-code-quality.pdf
- Unit testing - https://en.wikipedia.org/wiki/Unit_testing
- Test-driven development - https://en.wikipedia.org/wiki/Test-driven_development
- God class - https://en.wikipedia.org/wiki/God_object
- Spec-driven development - https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html
- Git - https://git-scm.com/
- Robert Pirsig - https://en.wikipedia.org/wiki/Robert_M._Pirsig
- Sonar - https://www.sonarsource.com/
- Thoughtworks - https://www.thoughtworks.com/
Tech Lead Journal now offers you some swags that you can purchase online. These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available.
Check out all the cool swags available by visiting techleadjournal.dev/shop. And don't forget to brag about yourself once you receive any of those swags.
Career Turning Point: From Developer to Psychologist
-
Back then I had been working as a professional developer for six or seven years, and I made the same observation that many others have made too: most software projects tend to fail. They tend to fail miserably: way over budget, not living up to customer expectations, and painful. I wanted to understand why that was happening. So I decided to really get to the root of the problem, and I decided to look outside of technology. That's how I got involved in psychology, and it has influenced my career a lot ever since.
-
The main advice is that there's so much to learn from other disciplines too. Even if we work within tech, there's a lot to learn from the human sciences and from behavioral psychology. Being able to combine multiple disciplines gives us an edge as developers.
Combining Psychology and Software Engineering
-
Learning psychology completely changed how I approach software. First of all, I got explanations for many phenomena. We all know about the challenges of scaling development teams and getting a large organization to pull in the same direction. What's so interesting is that social psychologists have studied this for decades. There is a lot of knowledge there, and a lot of the problems we try to solve within the software industry had already been solved in psychology.
-
The second one, which surprised me a little bit more, was that once I started to learn about cognitive psychology, which is the study of how people think, reason, and solve problems, it quickly occurred to me that it's a surprise we're capable of writing code at all. It just shouldn't be possible, because the human brain is way too limited.
-
In cognitive psychology you learn about all these cognitive bottlenecks that we have. But surprisingly, the human brain is very good at workarounds. So I thought there were a lot of lessons to pull out of cognitive psychology, because if we are aware of what the cognitive bottlenecks are, then we can start to design our software around those bottlenecks, in tandem with them rather than fighting them all the time, which is what maybe 90% of all code out there does.
Why Engineering Leaders Need Psychology Knowledge
-
It gets very important in order to build an efficient team. There are so many wasteful ceremonies and procedures that we insist on carrying on because we've gotten used to them. To give you a quick example of something I see almost on a daily basis: brainstorming. It's been around for a long time in its current format; it's an idea from the 1950s. And when we look at the research done on it, it's very clear that brainstorming just doesn't work. The reason it doesn't work is that the whole social situation is an open invitation to lots and lots of social biases.
-
Being aware of that as a technical leader immediately makes it possible to avoid that type of waste and to get the most out of your team. I highly recommend any technical leader or manager to dip their toes into psychology.
The Root Cause of Failing Software Projects
-
It’s actually quite depressing looking back. But there are certain things we know today that we didn’t know 10 or 20 years ago, and we can use that knowledge to our advantage.
-
There are many, many reasons why so many projects fail, but a very common root cause is poor quality code. And if we have poor quality code, the main problem is that managers won't see the root cause; they will just see the symptoms. They will see a project that's unpredictable. You think things are going to take a day and they end up taking two months. There are all these unknown unknowns. You might also see that it becomes really hard to estimate when something will be done, which puts a lot of pressure not only on the leadership team but also on developers. And then, once something is done, you have a ton of rework that you cannot anticipate upfront due to poor quality. So you have all these symptoms that we have all seen, but the root cause, poor quality, remains largely a black box.
-
The big tragedy of software is that it's so hard to have a conversation about something as technical as source code with a non-technical stakeholder. Even we as developers struggle to understand code we didn't write ourselves, so how should we expect someone who doesn't write code themselves to understand it? Code is very abstract, and that's what I've been working on for the past 10 years: trying to bring visibility to source code, to make it accessible not only to technical people but also to non-technical non-coders.
Why Code Abstractness Makes Quality Hard to Measure
-
The key problem is that we don’t have any physics for source code. We cannot weigh it. We cannot take a software system, pull it out and turn it around and inspect it for technical debt. It’s just not doable. And that is what makes it so hard.
-
The second problem is that for a long, long time, we haven't really had any way of measuring quality reliably. And that means that if you sit down and talk to 20 developers, you get 30 different answers on what good source code is.
Aligning Code Quality with Business Outcomes
- Technical debt as a term, in its original usage back in the 1990s as Ward Cunningham coined it, is useful. But the term has been so diluted over the years that we now use it for anything we don't like or disagree with. And that's not useful; that's not helpful. The proper solution is that we need to align whatever we call software quality with a business outcome. Whatever we mean by high quality has to be something that benefits the business. Otherwise, it's really just a vanity metric, and that's going to hurt trust rather than empower us as developers to do the right thing.
Code Health: 2x Speed, 10x Predictability
-
Let's start with the hard problem: how do we define good quality? What we thought was that instead of trying to define what good quality is, let's try to get developers to agree on what bad quality is, because that is a much easier problem. There are always these things in software like excess copy-paste, deeply nested control logic, and excessively long functions. That's the kind of stuff we can agree is bad.
-
So we defined a new metric called code health; this goes back almost 10 years, to when we started work on the code health metric. We identified about 25 different factors that we can agree are bad practices in source code, and we started to measure them and aggregate them. That made it possible to classify code as healthy or unhealthy. And then, to actually connect it to business impact, we started to collect data. We went for real enterprises, real production code, closed source development. We didn't have access to their source code, but working closely with those companies, they gave us access to their code health scores as well as their JIRA data. That made it possible to calculate how long they had spent working on a piece of code and correlate that with its code health category. And by doing that, what we could show is that if you have healthy code, your development work is going to be not only more than twice as quick as someone working in unhealthy code, it's also going to be 10x more predictable.
-
And what that means in practice is that if you have a healthy codebase, then your development work is going to be very predictable. You know roughly how long something is going to take to wrap up, because there are no nasty surprises down the road. If you have unhealthy code, then a task can take you up to 10 times longer. And that is what causes stress, confusion, and overtime.
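As an illustration of the aggregation idea behind a code health style metric, here is a deliberately simplified Python sketch. CodeScene's real metric combines roughly 25 validated factors; the two checks, thresholds, and penalty weights below are invented purely for illustration.

```python
# Toy "code health" score: start from a perfect 10 and deduct points
# for agreed-upon bad practices. The factors and weights here are
# made up; a real metric would use many more, validated, factors.
def code_health(functions):
    """Score a module from per-function stats (toy model)."""
    score = 10.0
    for fn in functions:
        if fn["lines"] > 70:    # excessively long function
            score -= 1.5
        if fn["nesting"] > 3:   # deeply nested control logic
            score -= 1.0
    return max(score, 1.0)      # clamp to the 1..10 range

module = [
    {"name": "parse", "lines": 30, "nesting": 2},
    {"name": "process_all", "lines": 120, "nesting": 5},
]
# One long, deeply nested function drags the module's score down.
print(code_health(module))
```

The point is not the specific thresholds but the mechanism: measuring agreed-upon negatives and aggregating them yields a score that can be classified as healthy or unhealthy.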
Why Predictability is Undervalued in Software
-
Organizations value different things. Most organizations value development speed: the quicker you can get a feature out, the shorter your time to market, the better. And then there are, of course, organizations that understand the cost of rework, and of hurting product maturity and customer relationships by putting a lot of bugs into production. They understand that, and many organizations even have a cost associated with defects.
-
But predictability is not something that people measure. My experience is that predictability is important to everyone, even if we're not aware of it, because I have yet to meet a manager who likes uncertainty. As a manager, you absolutely hate it. You want to know when things are going to get done. And as a developer, I also strongly dislike uncertainty, because it causes so much stress and overtime. Predictability is very undervalued in software.
-
What I mean by predictability is that we have an idea of what we want to achieve, we can express that in source code, and it works as we expressed it. Unpredictability is rather that we have this idea, we want to express it in the source code, and we end up in this unhealthy code with absolutely no idea how we should express our idea, because we cannot make sense of the code to start with.
TDD and Practices That Drive Code Quality
-
There are many practices that definitely correlate with code quality and that are prerequisites for high quality code. One of them is unit testing. More specifically, I've been a big fan of test-driven development for almost 25 years now. It's how I write code. I don't think it's about software testing at all. Rather, I think it's a great design methodology, because you start with the outcomes, with what you want to achieve, and that helps drive the code. It also adds to this predictability in the sense that it takes a potentially large task and gives you a method for breaking it down into smaller steps so that you can stay on track. So that is important.
-
Then there are a lot of practices that teams are driving towards today that I think are valuable, like very short development cycles and frequent releases, because there's no feedback more useful than working software. That's what we try to do internally at CodeScene as well: get things into production as soon as possible. By using feature toggles, we can start to use and validate whatever we're building, and dogfood work in progress. That's super useful. So highly iterative development, for sure.
Benchmarking Code Health Across the Industry
-
There are definitely vast differences between different companies, and even between different teams inside the same company. Just earlier this year we started to publish our benchmarking data. I have a blog post where I've written about that, and there's also a research paper behind it. What we show is that the top 5% of performers across the industry have healthy codebases, but the vast majority are a little bit further down, in the slightly unhealthy space. So that seems to be the norm for software: we struggle with maintaining healthy codebases.
-
I'm really blessed in that I get to meet a lot of different organizations and software teams, so I get to see a lot of code. My experience is that smaller teams with small codebases, unsurprisingly, tend to be in a healthier place than larger projects. That said, I have personally seen products that have been heavily developed for a decade and are still healthy. And that is really a prerequisite for remaining innovative and for keeping the fun in software development.
Introducing “Your Code as a Crime Scene”
-
The first idea in Your Code as a Crime Scene is that not all code is equally important. Some code is simply worked on much more frequently than other pieces of code. If you plot the change frequency of every single file in your codebase, just by looking at how many commits touch each part of the code, you will see an extremely steep power law curve. That means that at the head of that curve, you have maybe 1-2% of your codebase that accounts for the majority of your development work. Very often it's about 25% of development work in 1% of the codebase, and occasionally it can be up to 60-70% in just a small part of the code. So the obvious implication is that if you want to improve code quality, or if you want to remediate technical debt, you should really start with the most frequently worked-on files.
-
And these are the files I call hotspots in Your Code as a Crime Scene. They are development hotspots, because that's where the return on investment is; that's where we're really going to make a difference. But it's also a positive message, because it means the majority of your code is code that's rarely, if ever, touched. That's where you can actually live with some technical debt. Even if you have code that's unhealthy, a complete mess that no one really understands, if it's code that you never have to touch, you need to be aware of the problem because it's a potential future risk, but it's probably not an urgent priority. And spending time refactoring it would have a very unclear return on investment. So the bulk of Your Code as a Crime Scene is a set of techniques for prioritizing your time, your effort, and your precious attention to where it's likely to be needed the most.
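The change-frequency analysis described above can be sketched in a few lines of Python. In a real setup, the commit data would come from parsing something like `git log --name-only`; here it is a hand-written toy history so the example is self-contained.

```python
from collections import Counter

def hotspots(commits):
    """Rank files by how often they are touched across commits.

    `commits` is a list of lists of file paths, one inner list per
    commit. In practice you would build it by parsing git log output.
    """
    freq = Counter()
    for files in commits:
        freq.update(files)
    return freq.most_common()  # most frequently changed files first

# Toy history: core.py is touched in almost every commit,
# so it sits at the head of the power law curve.
history = [
    ["core.py", "util.py"],
    ["core.py"],
    ["core.py", "api.py"],
    ["readme.md"],
]
print(hotspots(history))
```

Combined with a complexity or code health measure per file, the top of this list is where refactoring effort has the clearest return on investment.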
Behavioral Code Analysis: Hotspot Analysis vs Static Code Analysis
-
The obvious challenge is that static analysis is a great way of catching stylistic issues; you can even catch some bugs. I'm a big fan of static analysis, and I recommend all teams do it. But static analysis was never intended to help you prioritize technical debt. It just cannot do that. Maybe it could be used to assess the amount of debt or quality issues you have, but it cannot possibly give you any priority among them, because it doesn't know anything about the interest on that debt.
-
The big problem I see in practice is that you go to an organization that is using one of these static analysis tools, a sort of linting aggregator like Sonar, which we discussed, and you see that they have 5,000 issues. That simply means that important stuff will fly under the radar; it will drown in that amount of information.
-
So what teams typically do is say: everything that's just informational or a warning, throw it away; let's focus on the major stuff. And that can actually lead you to waste time fixing things that are neither urgent nor important. If you go into that long-tail code that you never have to touch and make changes to it, you're very likely to introduce a new bug. At the same time, you might have smaller issues in the hotspots, and these are the ones that keep driving costs every single day, but they get deprioritized because they don't have a critical label on them. So I think that's the big danger with static analysis: it makes it impossible to prioritize fixing the right technical debt, and it makes it very easy to waste time doing things that aren't important.
Behavioral Code Analysis: Understanding Change Coupling
-
Change coupling is important because it shows you the change patterns in your codebase, and it has many different use cases. The most obvious one is being able to reason about the cost of change. Code complexity can come in two different shapes. You can have super complicated source code, written in a bad and unstructured way that's hard to understand. But it could also be that the code itself is fairly easy to follow, yet you have no idea how the various modules fit together to make a system. So whenever you want to make a change, maybe implement a new feature, you find yourself playing shotgun surgery, traversing the whole system searching for places that need to be modified.
-
And change coupling is really powerful, because what it does is look into the history of your code via the Git history and figure out that other developers who worked on this part of the code also had to modify those other files. It gives you a map where you can see where the changes are going to be. You can use that not only to onboard yourself and figure out more quickly what you need to change in order to complete a task; it's also super useful when you think about architectural refactorings, because you can use that information to figure out which modules belong together, which modules you should split, and so on. It's also important to clarify that change coupling in itself is neither good nor bad. It just shows you the way the system is.
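The core of change coupling can be sketched as counting how often pairs of files appear in the same commit. The commit representation and the `min_shared` threshold below are assumptions for illustration; a production tool would also normalize by each file's total change count.

```python
from collections import Counter
from itertools import combinations

def change_coupling(commits, min_shared=2):
    """Find file pairs that repeatedly change in the same commit.

    `commits` is a list of lists of file paths, one per commit,
    e.g. parsed from `git log --name-only`. Returns pairs that
    co-changed at least `min_shared` times, most coupled first.
    """
    pairs = Counter()
    for files in commits:
        # Sort so each unordered pair has a single canonical key.
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return [(p, n) for p, n in pairs.most_common() if n >= min_shared]

# order.py and invoice.py co-change in three commits: a hidden
# dependency that no static import graph would reveal.
history = [
    ["order.py", "invoice.py"],
    ["order.py", "invoice.py", "mail.py"],
    ["order.py", "invoice.py"],
    ["mail.py"],
]
print(change_coupling(history))
```

A high co-change count between files in different modules is the kind of signal that suggests either merging responsibilities or cutting the implicit dependency.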
Dealing with God Classes
-
God classes. They are amazing. It's probably the worst code smell you can ever come across. You typically see that a god class has implicit dependencies, revealed via change coupling, to forty different places in the code. It's extremely expensive. What I typically recommend is, first of all, use this information to make everyone in the organization aware of the bottleneck and the cost, so that we have a shared understanding of the situation. And then we need to do some serious refactoring.
-
Very often, classes become god classes because they accumulate so many different business responsibilities. So the first challenge is always to identify what these different responsibilities are. Then you need to start to split up the god class and modularize it, so that you can put each responsibility in its own module, and that will help with the change coupling. But it's going to be painful. That's where hotspots down at the function level can really help. It's a technique I call hotspots X-ray, and it basically gives you a prioritized list: of the hundreds of functions you might find in the god class, maybe only 20% are actively worked on. Those are obviously the responsibilities you want to start modularizing and extracting first.
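A rough approximation of function-level hotspots can be built from git patch output, since git's hunk headers usually include the enclosing function signature as context. This is a simplification for illustration, not CodeScene's X-ray implementation: it trusts the hunk-header context line, which git derives heuristically.

```python
import re
from collections import Counter

# Git hunk headers often carry the enclosing function, e.g.
# "@@ -10,6 +10,8 @@ def process_payment(order):".
HUNK = re.compile(r"^@@ .* @@\s*(.+)$", re.MULTILINE)

def function_hotspots(patches):
    """Count which functions are modified most often.

    `patches` is a list of patch texts, one per commit, e.g. the
    output of `git log -p -- path/to/god_class.py` split by commit.
    """
    freq = Counter()
    for patch in patches:
        for context in HUNK.findall(patch):
            freq[context.strip()] += 1
    return freq.most_common()

# Toy patch data: charge() is edited in both commits, refund() once.
patches = [
    "@@ -1,4 +1,5 @@ def charge(order):\n+    log(order)",
    "@@ -9,2 +9,3 @@ def charge(order):\n+    retry()\n"
    "@@ -20,1 +20,2 @@ def refund(order):\n+    pass",
]
print(function_hotspots(patches))
```

The most frequently edited functions are the responsibilities worth extracting from the god class first, since that is where the ongoing change cost concentrates.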
Behavioral Code Analysis: The Social Side of Code
-
One classic thing is unfamiliar code, code that we didn't write ourselves. Quite often you find developers complaining that a piece of code is hard to understand. Then you measure it objectively using something like the code health metric, and you find out that the code is actually healthy. You have a conversation with the development team, and you quickly figure out that the reason they thought it was complicated was that they had never worked on that code before. They had some onboarding to do; they needed to become familiar with the structure of the code as well as the whole domain.
-
So it's very easy to mistake a lack of familiarity for complexity. And when you do that, it's really dangerous, because now you run the risk of prioritizing refactorings for something that doesn't need to be refactored. What you actually need is proper onboarding time and time for learning. So it might make you do the wrong thing.
-
However, the flip side is that you can also run into risks like the truck factor, or the bus factor, which is a fun way of pointing out the risk associated with key-person dependencies in software. And the bus factor is kind of fascinating. We did a study on it last year, and we found that even in larger teams and departments with 50-60 developers, the bus factor is usually just two, or at maximum three, people. What that means in practice is that if the two wrong developers were to leave your team, you would lose control of 50% of the codebase, because the people who know the details of that code are gone.
-
This is where behavioral code analysis can really add a different dimension to how we look at code. Using Git data, it's possible to figure out which developer has written which code, and whether they are still around. It's not anything I recommend using for micromanagement; there are so many pitfalls associated with that. But it's super useful for building a knowledge map of your codebase, so that you know: if I'm working on this part of the code, this is the developer I should ask. It also makes it very easy to discover risks like a low bus factor. And if you're aware of those potential problems, then as a technical leader, a tech lead, or a coach, you can help the organization remediate them.
-
In particular, what I always recommend is to combine this with technical measures. If you find a piece of code with a low bus factor and that code is also unhealthy, then there's an extreme off-boarding risk associated with it. So you probably want to be proactive and refactor that code while the developer who wrote it is still around. Pair them with someone else on the team; you're going to distribute knowledge in the process as well as remediate a massive future risk.
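The knowledge map and bus factor ideas above can be sketched from per-commit author data, for instance parsed from `git log --pretty=format:%an --name-only`. The ownership heuristic here (whoever touched a file most is its "main author") is a deliberate simplification; real tools weight by lines contributed and recency.

```python
from collections import Counter, defaultdict

def knowledge_map(commits):
    """Map each file to the developer who touched it most.

    `commits` is a list of (author, [files]) tuples, one per commit.
    """
    per_file = defaultdict(Counter)
    for author, files in commits:
        for f in files:
            per_file[f][author] += 1
    # Most frequent committer is treated as the file's main author.
    return {f: c.most_common(1)[0][0] for f, c in per_file.items()}

def bus_factor_risk(commits, leaving):
    """Fraction of files whose main author is in the `leaving` set."""
    owners = knowledge_map(commits)
    at_risk = [f for f, a in owners.items() if a in leaving]
    return len(at_risk) / len(owners)

# Toy history: alice owns two of three files, so losing her puts
# two thirds of the codebase at risk.
history = [
    ("alice", ["core.py", "db.py"]),
    ("alice", ["core.py"]),
    ("bob", ["ui.py"]),
]
print(bus_factor_risk(history, {"alice"}))
```

Cross-referencing this risk fraction with a code health score per file highlights exactly the unhealthy, single-owner code worth refactoring proactively.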
Why Developers Aren’t Interchangeable
-
It's very dangerous, and it explains a lot of the problems in the software industry, that we view developers as interchangeable cogs in a large machine. What I try to do is visualize that problem. We create maps, visualizations that pretty much show what your software system looks like. On these maps, you see every piece of code, but in a more accessible form: not as source code, but as circles, where the size of a circle simply shows the amount of code. If you present the bus factor using that visualization, you can immediately spot the problem, because you will see that if these two people leave, you lose control of all these pieces of code.
-
I typically use color to visualize that, so the problem really pops out, and once you have seen it, you can never unsee it again. It's also good because it gives you a way of getting continuous feedback: if you start to act upon this risk, you will see the bus factor decrease over time. That said, I don't think a bus factor is always a bad thing. I would sometimes be worried if we didn't have one, because individual productivity does vary a lot. But what it's about is avoiding unnecessary risk and ensuring that everyone on the team gets a chance to contribute.
Introduction to CodeScene
-
I founded CodeScene 10 years ago, after writing Your Code as a Crime Scene. In the book, I basically collected a bunch of techniques I had been using myself throughout my career. But I also realized that the book wouldn't be enough. I really wanted professional tools that could automate the analyses from Your Code as a Crime Scene. So that was my main motivation.
-
And the way CodeScene works, you basically point it at your codebase, press a button, and CodeScene does all the heavy lifting. You end up not only with KPIs and trends on code quality and code health, but also with complete visualizations, so you can visually see where the bottlenecks and the complicated hotspots in your code are. Those maps are intended to serve as a communication tool, not only within the engineering team, but also so you can sit down with technical leaders and managers and have a conversation around things like technical debt and code quality.
-
The thing that sets CodeScene apart is that it's the first and only behavioral code analysis tool, so the only tool that really considers the intersection of people and code. The second thing we are very proud of is that our code health metric is the only proven and validated code quality metric with a connection to business outcomes, like we talked about before: measurable defect reduction, measurable speedup in development time.
Will AI Help or Hurt Code Quality?
- I think it's very much up to us. For AI itself, I have some pretty solid proof that it can help us write better software. But there are also a couple of trends that worry me, which indicate that if we misapply AI or use it for the wrong purpose, then it's more likely to serve as a technical debt generator than as support. We are all at a fork in the road right now, and it's pretty much up to us as a community and as companies where we take AI, because we can end up with big problems in the future, where technical debt grows exponentially at machine speed.
Essential Guardrails for AI-Generated Code
-
The first guardrail has to be code quality. This is because of how we are using AI today, and that's what I mean by the big risks. I see that we're using AI today to automate coding. But writing code is a very small part of a developer's work week, roughly 5% of it. That's the time we spend typing on the keyboard. So development is not about typing faster. The big bottleneck is instead in understanding existing code, where we spend roughly 70% of our work week. If we're not aware of this, and if we're not guarding against it, then we will have an AI that optimizes 5% of our work at the expense of the other 70%.
-
Writing code was never the bottleneck, but we pretend it was. Now we get a lot of code, and we have to spend a ton of time reading through it and trying to understand code that someone else, in this case an AI, wrote, which is arguably a harder problem. So this is something I'm worried about. The least we can do is ensure that whatever code AI generates is healthy. What we do internally is have a code health metric in the IDE, so whenever we write some code, or use AI to generate some code, it has to pass a certain code quality bar, a certain code health bar. Otherwise, we discard it. It's not useful.
-
The other important guardrail is that we need to value this shift in emphasis from writing code to becoming really good at reading code. Traditional software engineering practices like TDD, code reviews, and pair programming will be more important than ever in the AI era, because it's a really hard problem to assess a lot of code that we didn't write ourselves.
Using CodeScene to Maintain Quality in the AI Era
-
There are multiple tools here. One thing: it’s super important to have a security scan happen as soon as possible. Because a lot of the early research on AI showed that it was very prone to shipping vulnerable code. So that’s a very low bar for quality. Make sure it’s secure.
-
The second thing, and this is where I think CodeScene has an important role to play, is to integrate the code health check into your IDE, so that whenever you get some code, human-written or AI-written, you make sure it’s healthy. Otherwise, discard it. Or use an MCP that feeds it back to the AI so that it can make a new attempt at shipping code that meets the right standard. Acceleration just isn’t useful if it drives our projects into this brick wall of technical debt. So having healthy, secure code, that’s the very basics of this.
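The accept-or-retry loop described here could be sketched as follows. This is a minimal illustration, not CodeScene’s actual API: `toy_scorer` is a stand-in for a real code health metric, the threshold is an arbitrary assumption, and `regenerate` represents whatever feedback channel (e.g. an MCP tool) asks the AI for another attempt:

```python
HEALTH_THRESHOLD = 8.0  # hypothetical bar on a 1-10 code health scale

def gate(snippet, scorer, regenerate=None, max_attempts=3):
    """Keep generated code only if it clears the health bar; otherwise
    feed it back for another attempt, or discard it entirely."""
    for _ in range(max_attempts):
        if scorer(snippet) >= HEALTH_THRESHOLD:
            return snippet              # healthy enough: accept it
        if regenerate is None:
            break                       # no feedback loop: discard outright
        snippet = regenerate(snippet)   # e.g. ask the AI for a new attempt
    return None                         # rejected: below the quality bar

# Toy scorer: penalizes long snippets (a stand-in for real health checks).
toy_scorer = lambda code: 10.0 - 0.5 * code.count("\n")
accepted = gate("def f():\n    return 1\n", toy_scorer)
```

The key design point is that rejection is the default: code that never clears the bar is discarded rather than merged, which matches the “otherwise, discard it” rule above.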
-
And third, I also think we need to be really, really good at questioning whether we need that code in the first place. Because an AI makes it more or less free to generate a lot of code. But that code is still expensive; it’s still waste if that code isn’t needed. So maybe there’s a library, maybe there’s already some in-house functionality for doing this. Or maybe we should go back and question whether the requirement really has to be implemented. Because every single line of code we add is going to cost us a lot of time over the years, just to maintain it and to understand it.
How AI Accelerates Technical Debt at Scale
-
The challenge is that AI works really, really well for individuals doing toy tasks. There an AI can drive most of the work, in particular if it’s a task that’s been done many, many times, and that’s perhaps code that shouldn’t even be written. You face challenges when you start to scale to the team level. I remember, like six months ago, I worked with a company that had made a massive, massive rollout of an AI coding tool. And they had done everything by the book. They did an evaluation at a small scale with a small team, and that was, of course, successful. And then they decided, hey, let’s scale this up to a couple of thousand developers. And that’s where the problems started, because pretty soon they noticed an impact on production: they were generating poor code, leading to bugs. And this is where it gets so dangerous, because the code this AI generated isn’t healthy.
-
You need to get the human into the loop to figure out, okay, what’s causing this bug? Because an AI is not particularly good at that. The state of AI is simply not there, where it can automatically repair systems in production reliably. So the code needs to have a fail-safe where humans can go in. And now you have this massive, massive onboarding, because you have thousands of lines of code that you didn’t write, your teammates didn’t write, but now you need to understand them, and you need to understand them under time pressure. So that’s why code quality is so fundamental for succeeding with AI-assisted development.
Why AI-Friendly Code is Human-Friendly Code
-
This was an observation originally made by Thoughtworks on their tech radar earlier this year, where they started to talk about this concept of AI-friendly code. And that really resonated with me, because it’s exactly what I see in our own internal data lake where we study how AI impacts quality. And the idea here is that with healthy code, an AI will benefit from the additional structure. It will benefit from the richer context provided by proper function names and cohesive functions; it simply makes it so much easier for the AI to do a good job.
-
So it’s kind of fascinating that we have this potential double win here: if we write healthy code, not only is life going to be easier for us as humans, but we’re also going to be in a place where we’re able to benefit from AI. The obvious implication is that if your code isn’t good enough, then maybe you’re in a situation where you cannot apply AI safely. Maybe you need to refactor first to even get to a level where you can benefit from AI. So this is an area where we have initiated a bunch of research initiatives. I hope to be able to share some actual numbers, to put some actual data on what the problem is and where the cutoff points are when it comes to AI performance and source code.
Documentation: Capturing the “Why” for Humans and AI
-
Good documentation has always been valuable. I know that there are a lot of conflicting opinions about that in the software industry. One thing I always miss in source code is the whys. Why have we chosen to do it this way? What are the trade-offs? Because that’s something I can never read out of source code. I can use the source code to understand what the code does. But I don’t know why we chose this particular approach.
-
That level of documentation is super important for a human maintainer. And I would be surprised if an AI didn’t benefit from it, because if the AI has the context of why, then we can probably avoid a lot of pitfalls. The classic example is that we have done some really, really hard performance optimizations. As a human, you want to know about that. As an AI, you probably also want to understand that, because it’s going to influence what path you take.
The Reality Check: Future of Software Development with AI
-
In two to three years’ time, I think what’s going to happen is that we all get a reality check. The reason I say this is that, looking historically, I haven’t seen a single technological revolution that led to demand for less work. It’s always been more work. Because what happens when we get access to new technologies, like a machine that can code, is that we raise the bar. We start to take on larger and larger problems, and that will only increase the demand for software developers. Given the current state of AI, I have a very hard time seeing that an AI will replace humans entirely. I don’t think that will ever happen. I might of course be wrong, but there’s no evidence for that so far.
-
So what I think is going to happen over the next few years is that we’re going to have this hybrid model where we as humans have to understand code that’s written by machines, and we have to work in tandem with them. And that’s where I think it’s so important that we reemphasize traditional engineering principles in software. But it also means that the barrier to entry is going to be really, really challenging, because these skills take time to build. At least for me, it took a decade before I even knew what I was doing in software. And we need to grow the next generation of developers too.
-
This is what I’m mostly worried about, because as a junior, you need to have a chance to start, and you need to learn from first principles. And having an AI available as a junior, I’m not sure that’s the right way to go, because it makes it so easy to complete tasks without building a proper understanding. To me, true learning has to be effortful. We really have to struggle in order to learn, and we need a chance to do that.
-
Yeah, I might be worried for our profession 10 or even 15 years from now, because we need to grow the next generation of developers. A lot of them are going to be needed out there. That said, I do hope that if we rethink it, AI can become a best friend, doing all the work that we dislike doing. You mentioned documentation. Other good examples might be fixing technical debt or automating security patches: the type of task that is repetitive, might be boring to a human, and often gets deprioritized. That’s the type of task that I think could be super beneficial for a machine.
3 Tech Lead Wisdom
-
Learn to learn.
- We have seen that the only true constant in software development is change. We just discussed AI, and that’s one example. When these changes in technology, in programming languages, in whatever tools we’re using, come along, we need to be able to understand them quickly and follow along. And that to me is a lifelong journey, where we have to practice and become more efficient at learning new stuff and be able to pick it up fast.
-
Become a domain expert at whatever you do.
- This is something I wish I had understood much, much earlier in my career. We all understand the importance of being good at programming and development. But understanding the actual domain where we build products is super important, because if we are domain experts, if we truly understand our customers, how they use the product, and what they aim to achieve with it, then we can do so many optimizations. That’s where the real big benefits are. We might realize that we don’t even have to build this feature, we can take away that feature, or this one can be much simpler. That’s where we make the real architectural wins.
-
Always lead by example.
- This is something I’ve been trying to do myself, because I’ve found myself in a manager position after founding CodeScene. I always try to lead by example. What that means in practice is that I would never require anything from my teammates that I wouldn’t be prepared to do myself. That is important.
[00:01:29] Introduction
Henry Suryawirawan: Hello, everyone. Welcome back to another new episode of the Tech Lead Journal podcast. Today, I’m very excited to have Adam Tornhill with me. He’s the CTO and founder of CodeScene, a software quality and metrics tool, and the author of the famous book he wrote many years ago, titled Your Code as a Crime Scene. I think it’s a very interesting title that hopefully we get a chance to talk about: investigating your codebase, trying to figure out where the defects are, where the problem is. Adam is also quite experienced, I would say an expert, in software quality, and I’m looking forward to discussing software quality and AI later on. So Adam, looking forward to this conversation. Welcome to the show.
Adam Tornhill: Thank you very much. I’m really happy that I could join. I’m looking forward to this.
[00:02:36] Career Turning Point: From Developer to Psychologist
Henry Suryawirawan: Right. Adam, first of all, looking back at your career until now, are there certain turning points that you think are interesting that we could learn from?
Adam Tornhill: So I think I had multiple turning points, but the most impactful was roughly 20 years ago. Back then I had been working as a professional developer for six, seven years, and I made this observation that many others have made too: most software projects tend to fail. And they tend to fail miserably, way over budget, not living up to customer expectations, and painfully. I wanted to understand why that was happening. So I decided to really get to the root of the problem, and I decided to look outside of technology.
So that’s how I got involved in psychology. I signed up at the university for an introductory course in psychology, went on for roughly six years, and took a second degree in psychology. And that has influenced my career a lot ever since.
So I think the main advice is that there’s so much to learn from other disciplines too. Even if we work within tech, there’s a lot to learn from the human sciences, from behavioral psychology. And I think that being able to combine multiple disciplines gives us an edge as developers.
[00:05:46] Combining Psychology and Software Engineering
Henry Suryawirawan: Yeah, thank you for sharing your unique journey. Studying psychology and software engineering at the same time, it’s not often that I hear of people doing so. But I think that gives you a quite unique perspective. And I know that you have been applying these two disciplines quite rigorously, in your research, in your current work, and also in your product. So tell us, what interesting fusion have you found between engineering and psychology?
Adam Tornhill: There’s so much to it. Learning psychology completely changed how I approach software. First of all, I got explanations for many phenomena. We all know about the challenges of scaling development teams and getting a large organization to pull in the same direction. And what’s so interesting is that social psychologists have studied this for decades. There is a lot of knowledge, and a lot of the problems we try to solve within the software industry had already been solved in psychology. So those are the obvious learnings.
The second one, which surprised me a little bit more, was that once I started to learn about cognitive psychology, which is very much about how people think, reason, and solve problems, it quickly occurred to me that it’s a surprise we’re capable of writing code at all. It just shouldn’t be possible, because the human brain is way too limited. In cognitive psychology you learn about all these cognitive bottlenecks that we have. But surprisingly, the human brain is very good at workarounds. So I thought that there are a lot of lessons to pull out of cognitive psychology, because if we are aware of what the cognitive bottlenecks are, then we can start to design our software around those bottlenecks, in tandem with the bottlenecks, rather than trying to fight them all the time, which is what 90% of all code out there does. So that’s one area that I applied directly from psychology to software.
[00:07:43] Why Engineering Leaders Need Psychology Knowledge
Henry Suryawirawan: Very interesting. I hope we get to dive deeper into the cognitive aspect. But interestingly, when you mention this: many software engineering teams, especially now when everyone is building technology within their companies, and many leaders, I think, still do not understand the psychological aspect of software development or software development teams. So would you say that every engineering leader these days must understand a bit of this psychology?
Adam Tornhill: It’s very important in order to build an efficient team. There are so many wasteful ceremonies and procedures that we insist on carrying on because we’ve gotten used to them. To give you a quick example of something I see almost on a daily basis: brainstorming. It’s something that’s been around for a long, long time; in its current format, it’s an idea from the 1950s. And if we look at the research done on it, it’s very clear that brainstorming just doesn’t work. And the reason it doesn’t work is that the whole social situation is like an open invitation to lots and lots of social biases. Being aware of that as a technical leader immediately makes it possible to avoid that type of waste and also to get the most out of your team. So I think it’s very useful. I highly recommend any technical leader or manager to dip their toes into psychology.
Henry Suryawirawan: Right. Yeah. Especially since software engineering is knowledge work, where you use more of your brain power, cognitive load and the psychological aspect definitely matter. And we also work in teams rather than mostly solo. So I think it’s also important to understand psychology from the human behavior side.
[00:09:29] The Root Cause of Failing Software Projects
Henry Suryawirawan: So you mentioned in the very beginning that you reached your turning point because you found so many software projects failing. And that was way back, many, many years ago. I still think that, statistically, many software projects still fail. It seems like we haven’t really learned our lessons. And it goes back mostly to software quality or code quality. So tell us why this is still the case, why we have advanced so much technologically over so many years, but it seems like this problem cannot be solved.
Adam Tornhill: Oh, I’m afraid that you are correct. It’s actually quite depressing looking back. But there are certain things we know today that we didn’t know 10 or 20 years ago, and we can use that knowledge to our advantage. So why do I think so many projects fail? There are many, many reasons, but a very common root cause is poor quality code. And if we have poor quality code, then the main problem is that managers won’t see the root cause; they will just see the symptoms. They will see a project that’s unpredictable. You think things are going to take a day, and they end up taking two months. There are all these unknown unknowns. It becomes really hard to guess when something will be done, which puts a lot of pressure not only on the leadership team but also on developers. And then, of course, once something is done, you have a ton of rework that you cannot anticipate upfront due to poor quality.
So you have all these symptoms that we have all seen. But the root cause, poor quality, remains largely a black box. And that, I think, is the big tragedy of software: it’s so hard to have a conversation about something as technical as source code with a non-technical stakeholder. Because even we as developers struggle with understanding code we didn’t write ourselves, right? So how should we expect someone who doesn’t write code themselves to understand it? Code is very abstract, and that’s what I’ve been working on for the past 10 years: trying to bring visibility to source code, to make it accessible not only to technical people but also to non-technical people, non-coders.
[00:11:37] Why Code Abstractness Makes Quality Hard to Measure
Henry Suryawirawan: Yeah, you mentioned the keyword there: abstract. If we look at many other engineering disciplines, it seems like they are able to do a much better job in terms of chunking the work and making estimates. Is it because code is abstract that it’s so difficult to maintain quality, or even to have a bar of quality that we can all aspire to hit? For example, other engineering disciplines might have standards, and they just need to work within the standard. But for code, I guess it’s very difficult to say what kind of quality every software team needs to aspire to. Is that part of the problem? Why is this such a difficult problem?
Adam Tornhill: It’s part of the problem, but I think the key problem is that we don’t have any physics for source code, right? We cannot weigh it. We cannot take a software system, pull it out, turn it around, and inspect it for technical debt. It’s just not doable. And that is what makes it so hard. A second problem, I think, is that for a long, long time, we haven’t really had any way of measuring quality reliably. And that means that if you sit down and talk to 20 developers, you get 30 different answers on what good source code is. So I think there are many challenges like that, which we have been trying to tackle with our research.
[00:12:58] Aligning Code Quality with Business Outcomes
Henry Suryawirawan: Yeah. I guess one part of it, whenever software engineering teams discuss code quality, is the so-called gap in understanding code quality within the team itself. Then the next thing is actually explaining it to the stakeholders, the non-technical people. We always use the term technical debt. First of all, is it the right term for us to use when communicating with non-technical people? And if so, what would be the best way to convey the importance of code quality?
Adam Tornhill: That’s an interesting point. I think technical debt as a term, in its original usage back in the 1990s as Ward Cunningham coined it, is useful. But as you indicate, the term has been so diluted over the years that we now use it for anything that we don’t like or disagree with. And that’s not useful. That’s not helpful. So I think the proper solution is that we need to align whatever we call software quality with a business outcome. Whatever we mean by high quality has to be something that benefits the business. Otherwise, it’s really just a vanity metric, and that’s not useful. That’s going to hurt trust rather than empower us as developers to do the right thing.
[00:14:15] Code Health: 2x Speed, 10x Predictability
Henry Suryawirawan: Yeah. And you wrote this paper a long time back on the business impact of code quality. So tell us, what kind of findings did you come to when writing that paper about the benefits of good software quality? And how did you define good software quality back then?
Adam Tornhill: Yeah. Let’s start with the hard problem: how do we define good quality? One of my favorite books of all time is Zen and the Art of Motorcycle Maintenance by Robert Pirsig. It’s such a brilliant book, and it influenced a lot of my views on quality. But if I do a sloppy summary of that book, it’s basically about a really, really intelligent guy who ends up in a mental hospital because he tries to define quality and fails. So it’s a really, really hard problem. What we thought was that instead of trying to define what good quality is, let’s try to get developers to agree on what bad quality is, because that is a much easier problem. There are always these things in software like excess copy-paste, deeply nested control logic, excessively long functions. It’s the kind of stuff that we can agree is bad.
So we defined a new metric called code health; this goes back almost 10 years, when we started to work on that code health metric. We identified 25 different factors that we can agree are bad practices in source code, and then we started to measure and aggregate them. That made it possible to classify code as healthy or unhealthy. Then, to actually connect it to business impact, we started to collect data. We went for real enterprises, real production code, closed-source development. So we didn’t have access to the source code, but working closely with those companies, they gave us access to their code health scores as well as their JIRA data. And that made it possible to calculate how long you have spent working on a piece of code and correlate that with its code health category. By doing that, what we could show is that if you have healthy code, then your development work is going to be not only more than twice as quick as someone working in unhealthy code, it’s also going to be 10x more predictable.
And what that means in practice is that if you have a healthy codebase, then your development work is going to be very predictable. You know roughly how long something is going to take to wrap up, because there are no nasty surprises down the road. If you have unhealthy code, then a task can take you up to 10 times longer. And that is what causes stress, confusion, and overtime. So those were some of the contributions we made in the “Code Red: The Business Impact of Code Quality” paper.
[00:17:06] Why Predictability is Undervalued in Software
Henry Suryawirawan: And yeah, one other thing you found is 15 times fewer bugs for software that has higher quality. I think those findings are remarkable. When you mention predictability, I think many stakeholders still don’t treat it as equal to the number of bugs and the speed of development. Predictability, I think, is very, very important, especially when you build software products that evolve over time. So why do you think predictability takes a backseat rather than being a focus? People always think, okay, I need to deliver fast, I need to deliver without bugs, and hence maybe I add more people, I add more tools, and things like that. But predictability is something that takes a backseat. Maybe some thoughts about this.
Adam Tornhill: Yeah, that’s an interesting observation, and I do agree. I think that’s correct. What I’ve seen is that organizations value different things. Most organizations value development speed: the quicker you can get the feature out, the shorter your time to market, the better. And then, of course, there are organizations that understand the cost of rework, and how putting a lot of bugs into production hurts product maturity and the customer relationship. They understand that, and many organizations even have a cost associated with defects.
But predictability, you might be right, is not something that people measure. My experience, though, is that predictability is important to everyone, even if we’re not aware of it. Because I’m yet to meet a manager who likes uncertainty. As a manager, you absolutely hate it. You want to know when things are going to get done. So I think it’s important. And as a developer, I also strongly dislike uncertainty, because, again, it causes so much stress and overtime. So I think predictability is very undervalued in software.
What I mean by predictability is that we have an idea of what we want to achieve, we can express that in source code, and it works as we expressed it. Unpredictability is rather that we have this idea, we want to express it in the source code, and we end up in unhealthy code where we have absolutely no idea how we should express our idea, because we cannot make sense of the code to start with.
Henry Suryawirawan: Yeah. So in my head, unpredictability could be, first, that we can’t even reason about how to make the change, because the code is so messy. The other one is that we make an estimate, but we deliver way, way beyond that estimate. And maybe we deliver something, but the quality, when it’s tested, is somehow not up to the mark for some reason. So I think there are many possibilities for how things become unpredictable.
[00:19:53] TDD and Practices That Drive Code Quality
Henry Suryawirawan: Funny enough, when talking about code quality, people always talk about clean code. So is clean code also highly associated with code quality, or is there any other good practice that you think is highly associated with good code quality?
Adam Tornhill: I think there are many practices that definitely correlate with code quality and that I think are prerequisites for high quality code. One of them is, of course, unit testing. More specifically, I’ve been a big, big fan of test-driven development for almost 25 years now. It’s how I write code, and I don’t think it’s about software testing at all. Rather, I think it’s a great design methodology, because you start with the outcomes, what you want to achieve, and that helps drive the code. It also adds to predictability in the sense that it takes a potentially large task and gives you a method for breaking it down into smaller steps so that you can stay on track. So that, I think, is important.
Then there are a lot of practices that teams are driving towards today that I think are valuable, like very, very short development cycles and frequent releases. Because there’s no feedback more useful than working software. That’s what we try to do internally at CodeScene as well: get things into production as soon as possible. By using feature toggles, we can start to use it, start to validate whatever we’re building, and dogfood work in progress. That’s super useful. So highly iterative development, for sure.
Henry Suryawirawan: Yeah, thanks for mentioning TDD again. In so many conversations I’ve had with so-called software thought leaders, TDD is definitely one of the most mentioned practices. One aspect of TDD is definitely testing, but the other important aspect is actually driving the design through tests: thinking, at the very first step of writing the code, about what behavior you expect it to have. So I think that’s a very good practice.
[00:21:57] Benchmarking Code Health Across the Industry
Henry Suryawirawan: Funny enough, when we talk about software code quality, we have so many resources available now: books, YouTube, podcasts, whatever it is. But I rarely see people within the software industry say, my codebase is the best quality. They’ll always say, yeah, there are parts that are good, and there are parts that are really bad or have technical debt. Is it always the case with every customer you see that this is the normal thing? Or do some people have a much, much better codebase compared to other software development teams?
Adam Tornhill: There are definitely vast differences between different companies, and even between different teams inside the same company. We are actually, and perhaps finally, starting to shine some light on that. What we did at CodeScene, I think just earlier this year, was to start publishing our benchmarking data. I have a blog post where I’ve written about that, and there’s also a research paper behind it. But basically, what we show is that the top performers across the industry have healthy codebases, while the vast majority are a little bit further down, in the slightly unhealthy space. So that seems to be the norm: we struggle with maintaining healthy codebases.
My experience from a more subjective point of view, because I’m really blessed in that I get to meet a lot of different organizations and a lot of different software teams, so I get to see a lot of code, is that smaller teams with small codebases, unsurprisingly, tend to have healthier code than larger projects. That said, I have personally seen products that have been developed heavily for a decade that are still healthy. And that, I think, is really a prerequisite for remaining innovative and being able to keep the fun in software development.
Henry Suryawirawan: Sounds like interesting research. We’ll definitely put it in the show notes for people to refer to further.
[00:24:06] Introducing “Your Code as a Crime Scene”
Henry Suryawirawan: So let’s say you have a customer now. Whenever they want to implement CodeScene, or maybe call you for consulting or whatever it is, they think that their codebase is pretty bad. So this is where your concept of your codebase as a crime scene becomes quite interesting. Tell us, what exactly is the first thing that you would do? Because you have this opinion that the codebase is not just a technical thing; there’s the behavioral analysis that you do on top of the codebase to sense why the code quality is like that. So tell us about this very first step.
Adam Tornhill: Sure. So the first idea in Your Code as a Crime Scene is that not all code is equally important. Some code is simply worked on much, much more frequently than other pieces of code. And if you plot the change frequency of every single file in your codebase, just looking at how many commits have touched that part of the code, you will see an extremely steep power law curve. That means that at the head of that curve, you have maybe 1-2% of your codebase that accounts for the majority of your development work. Very often it's about 25% of development work in 1% of the codebase, and occasionally it can be up to 60-70% in just a small part of the code. So the obvious implication is that if you want to improve code quality, or if you want to remediate technical debt, then you should really start with the most frequently worked on files.
And these are the ones I call hotspots in Your Code as a Crime Scene. They are development hotspots. Because that's where the return on investment is. That's where we're really going to make a difference. But it's also a positive message, because what it means is that the majority of your code is code that's rarely, if ever, touched. So that's where you can actually live with some technical debt. Even if you have code that's unhealthy, a complete mess, that no one really understands, if it's code you never have to touch, you need to be aware of the problem because it's a potential future risk, but it's probably not an urgent priority. And it would have a very unclear return on investment if you spent time refactoring it. So the bulk of Your Code as a Crime Scene is a set of techniques for prioritizing your time, your effort, and your precious attention to where it's likely to be needed the most.
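The change-frequency idea Adam describes can be sketched in a few lines of Python over `git log --name-only --pretty=format:` output. This is an illustrative sketch only; the function names are mine, not CodeScene's API, and a real hotspot analysis would also weigh in code size and health:

```python
from collections import Counter

def change_frequencies(git_log: str) -> Counter:
    """Count how many commits touched each file.

    Expects the output of:  git log --name-only --pretty=format:
    i.e. blank-line-separated blocks of file paths, one per commit.
    """
    freq = Counter()
    for line in git_log.splitlines():
        path = line.strip()
        if path:  # skip the blank separators between commits
            freq[path] += 1
    return freq

def hotspots(git_log: str, top: int = 10):
    """The most frequently changed files -- candidates to cross-check
    against code health before deciding where to refactor."""
    return change_frequencies(git_log).most_common(top)

# Tiny fake log: two commits touched core/engine.py, one each the others.
log = "core/engine.py\nutil/io.py\n\ncore/engine.py\n\nREADME.md\n"
print(hotspots(log))  # core/engine.py ranks first with 2 changes
```

Plotting the resulting frequencies in descending order is what produces the steep power law curve Adam mentions, with the hotspots at its head.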
[00:26:30] Behavioral Code Analysis: Hotspot Analysis vs Static Code Analysis
Henry Suryawirawan: Yeah, so hotspots, I think, are very interesting. I don't think many people in the industry associate software quality with hotspots, at least in my experience so far. They rely on static code analysis tools, think SonarQube or whatever linter, which throw out a bunch of issues that you find in a report. Then you just have to classify them as critical, high, medium, or low, go through the list, and close them. So tell us, what is the pitfall of this approach? Because I'm sure many software development teams in the industry still practice this rather than doing hotspot analysis.
Adam Tornhill: Yep, that's correct. The obvious challenge is that static analysis is a great way of catching stylistic issues. You can even catch some bugs. I'm a big fan of static analysis; I recommend all teams do it. But static analysis was never intended to help you prioritize technical debt, right? It just cannot do that. Maybe it could be used to assess the amount of debt or quality issues you have, but it cannot possibly give you any priority on them, because it doesn't know anything about the interest on that debt, right?
So the big problem I see in practice is that you go to an organization that is using one of these static analysis tools, some linting aggregator like Sonar that we discussed, and you see that they have 5,000 issues. That simply means that important stuff will fly under the radar. It kind of drowns in that amount of information. So what teams typically do is say: everything that's just information or a warning, throw that away, let's focus on the major stuff. And that can actually lead you to waste time fixing things that are neither urgent nor important. You go into that long-tail code that you never have to touch, and when you make changes to it, you're very likely to introduce a new bug. At the same time, you might have smaller issues in the hotspots, and these are the ones that keep driving costs every single day. But, again, they get deprioritized because they don't have this critical label on them. So I think that's the big danger with static analysis: it makes it impossible to prioritize fixing the right technical debt, and it makes it very easy to waste time on things that aren't important.
Henry Suryawirawan: Yeah, especially these days, there are so many different types of static code analysis. One could be the software quality type, another is about security. Once you integrate all of these, no wonder many software teams have hundreds of issues. When you mention prioritization, many would just focus on the critical or high categories and spend some time closing them. But yeah, sometimes we find issues that are categorized as high but live in code that is rarely touched. So I think your point makes sense: if the code doesn't get touched often, why would you want to change it? It probably doesn't give you a high ROI.
[00:29:40] Behavioral Code Analysis: Understanding Change Coupling
Henry Suryawirawan: So hotspots are actually one of the pillars within your behavioral code analysis. The other one is the so-called change coupling. For example, if you make one change to a file, most likely you will also make a change to another file, right? This is where the coupling is. So tell us why this is also an important analysis within behavioral code analysis.
Adam Tornhill: Yep. Change coupling is important because it shows you the change patterns in your codebase. And it has so many different use cases. The most obvious one is being able to reason about the cost of change. What do I mean by that? Well, simply that code complexity can come in two different shapes. You can have super complicated source code, right? The code is written in a bad, unstructured way, and now it's hard to understand. But it could also be that the code itself is fairly easy to follow, yet you have no idea how the various modules fit together to make a system. So whenever you want to make a change, maybe implement a new feature, you find yourself doing shotgun surgery: traversing the whole system, searching for places that need to be modified.
And change coupling is really powerful, because what it does is look into the history of your code via the Git history and figure out that, you know, other developers who worked on this part of the code also had to modify that and that and that file. So it gives you a map where you can see where the changes are going to be. And that is something you can use not only to onboard yourself and figure out more quickly what you need to change in order to complete a task. It's also super useful if you think about architectural refactorings, because you can use that information to figure out which modules belong together, which modules you should split, and so on. It's also important to clarify that change coupling in itself is neither good nor bad. It just shows you that this is the way the system is.
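The co-change pattern Adam describes can be approximated from the same Git history: count how often pairs of files appear in the same commit, relative to how often each changes. A minimal sketch, with names and the coupling formula chosen for illustration (not CodeScene's actual algorithm):

```python
from collections import Counter
from itertools import combinations

def change_coupling(commits: list[set[str]], min_shared: int = 2) -> dict:
    """Degree of change coupling between file pairs.

    commits: one set of file paths per commit (e.g. parsed from
    `git log --name-only`). Coupling is the share of changes to the
    less frequently changed file that also touched the other file.
    """
    changes = Counter()     # per-file change counts
    co_changes = Counter()  # per-pair co-change counts
    for files in commits:
        changes.update(files)
        for pair in combinations(sorted(files), 2):
            co_changes[pair] += 1
    return {
        pair: n / min(changes[pair[0]], changes[pair[1]])
        for pair, n in co_changes.items()
        if n >= min_shared  # ignore one-off coincidences
    }

commits = [
    {"billing.py", "invoice.py"},
    {"billing.py", "invoice.py", "ui.py"},
    {"billing.py"},
]
print(change_coupling(commits))
# billing.py and invoice.py co-changed in both of invoice.py's commits -> 1.0
```

A coupling near 1.0 between files in different modules is the kind of implicit dependency worth investigating; as Adam notes, the number itself is neither good nor bad without context.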
[00:31:33] Dealing with God Classes
Henry Suryawirawan: What about if I have something like a god class, right? So every change actually goes through that one class. How do you classify this?
Adam Tornhill: Yeah, god classes. They are amazing. It's probably the worst code smell you can ever come across. When I've analyzed systems with god classes, and I might even have a blog post on that too with some examples, I typically see that the god class has these implicit dependencies, which you reveal via change coupling, to like forty different places in the code. It's extremely expensive. And what I typically recommend is, first of all, use this information to make everyone in the organization aware of the bottleneck and the cost, so that we have a shared understanding of the situation. And then we need to do some serious refactoring.
So very often, what you find with god classes is that they become god classes because they accumulate so many different business responsibilities. So the first challenge is always to identify what these different responsibilities are. And then you need to start to split up the god class. You need to start to modularize it, so that you can put each responsibility in its own module, and that will help with the change coupling. But again, it's going to be painful. So that's, again, where hotspots down at the function level can really help. It's a technique I call hotspots X-Ray, and it basically gives you a prioritized list: given the hundreds of functions you might find in the god class, maybe only 20% of them are actively worked on. So these are obviously the responsibilities you want to start modularizing and extracting first. So I hope that helps.
[00:33:14] Behavioral Code Analysis: The Social Side of Code
Henry Suryawirawan: Yeah, definitely very interesting. The way you describe this kind of analysis, it's like investigating a crime scene, a crime scene created by all the members of the software engineering team. And speaking of team members, the other pillar of your behavioral code analysis is the social aspect. So tell us, what are the social aspects that you analyze, why are they also important as part of this analysis, and how do you capture them within the CodeScene tool?
Adam Tornhill: Yeah, sure. One classic thing is unfamiliar code. That is code we didn't write ourselves. Quite often you find developers complaining that a piece of code is hard to understand. And then you start to measure it objectively, using something like the Code Health metric, and you find out that, no, this code is actually healthy. You have a conversation with the developer and the development team, and you quickly figure out that the reason they thought it was complicated was that they had never worked on that code before. They had some onboarding to do, right? They needed to become familiar with the structure of the code as well as the whole domain. So it's very easy to mistake a lack of familiarity for complexity. And when you do that, it's really dangerous, because now you run the risk of prioritizing refactorings for something that doesn't need to be refactored. What you need instead is proper onboarding time and time for learning. So it might make you do the wrong thing.
However, the flip side is that you can also run into risks like the truck factor or the bus factor, which is a fun way of pointing out the risk associated with key-person dependencies in software. And the bus factor is kind of fascinating. We did a study on it last year, and we found that even in larger teams, larger departments with like 50, 60 developers, the bus factor is usually just two or at maximum three people. What that means in practice is that if the wrong two developers were to leave your team, you would lose control of 50% of the codebase, right? Because the people who know the details of that code are gone. And this is where I think behavioral code analysis can really add a different dimension to how we look at code. Because using Git data, it's possible to figure out which developer has written which code, and whether they are still around.
And it's of course not anything I recommend using for micromanagement. I think there are so many pitfalls associated with that. But it's super useful for building a knowledge map of your codebase, so that you know: if I'm working on this part of the code, this is the developer I should ask. It also makes it very easy to discover risks like a low bus factor. And if you're aware of those potential problems, then as a technical leader, a tech lead, or a coach, you can help the organization remediate them.
And in particular, what I always recommend is to combine this with the technical measures. So if you find that you have a piece of code with a low bus factor, and that code is also unhealthy, then there's an extreme off-boarding risk associated with it. You probably want to be proactive here and refactor that code while the developer who wrote it is still around. So pair them together with someone else on the team. You're going to distribute knowledge in the process as well as remediating a massive future risk. So that's one example at the individual level. And then, of course, there's a team analysis aspect to it too.
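The bus factor calculation Adam alludes to can be sketched from authorship data such as aggregated `git blame` output. This is a deliberately simplified model (knowledge measured as lines contributed); CodeScene's actual knowledge model is richer:

```python
def bus_factor(ownership: dict[str, dict[str, int]],
               threshold: float = 0.5) -> int:
    """Smallest number of authors who together 'own' at least
    `threshold` of the codebase, measured by lines contributed.

    ownership: file -> {author: lines}, e.g. aggregated from git blame.
    A result of 2 means losing the wrong two people orphans half the code.
    """
    per_author: dict[str, int] = {}
    total = 0
    for authors in ownership.values():
        for author, lines in authors.items():
            per_author[author] = per_author.get(author, 0) + lines
            total += lines
    covered, factor = 0, 0
    # Greedily remove the biggest knowledge holders first.
    for lines in sorted(per_author.values(), reverse=True):
        covered += lines
        factor += 1
        if covered / total >= threshold:
            return factor
    return factor

team = {"core.py": {"alice": 800, "bob": 100}, "ui.py": {"carol": 100}}
print(bus_factor(team))  # 1: losing alice alone orphans half the code
```

As Adam stresses, this belongs in risk conversations and knowledge maps, not in per-developer performance metrics.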
[00:36:48] Why Developers Aren’t Interchangeable
Henry Suryawirawan: Yeah, speaking about unfamiliarity and the bus factor, I think almost every leader understands this criticality: some team members seem to be experts in certain areas of the code. But having said that, when we talk about developers with executives, they tend to think software developers are easily interchangeable. You know, one person quits, we can just replace them with another person; maybe we can even hire someone more senior, they think, and that will solve it. As a software engineer, I know that is sometimes not possible. But from your point of view, how do you explain the danger of this mindset? Because obviously there are pitfalls to this approach.
Adam Tornhill: It's very dangerous, and I think it explains a lot of the problems in the software industry: we cannot view developers as interchangeable cogs in a large machine. What I try to do is visualize that problem. We create maps, visualizations that pretty much show what your software system looks like. On these maps, you see every piece of code visualized, but in a more accessible form. They're not visualized as source code; they're visualized as circles, and the size of a circle simply shows the amount of code. If you present the bus factor using that visualization, you can immediately spot the problem, because you will see that if these two people leave, you lose control of all these pieces of code. I typically use color to visualize that, so the problem really pops out. And once you have seen it, you can never unsee it again. It's also good because it gives you continuous feedback: if you start to act upon this risk, you will see how the bus factor improves over time. That said, I don't think the bus factor is always a bad thing. I mean, I would sometimes be worried if we didn't have a bus factor, because individual productivity does vary a lot. What it's about is avoiding unnecessary risk and ensuring that everyone on the team gets a chance to contribute.
Henry Suryawirawan: Yeah, so definitely we can use some socio-technical practices as well. Things like pair programming, mob programming, ensemble programming. Or even just doing, I dunno, things like lunch-and-learns, explaining your own modules to other team members. I think that can also help.
[00:39:14] Introduction to CodeScene
Henry Suryawirawan: I think this is also a nice plug for you to introduce CodeScene, for those of us who haven't heard of or played around with CodeScene before. Because, yeah, there are some alternatives out there to measure software quality. But essentially, how does CodeScene work, and why do you think people should try CodeScene to measure software quality?
Adam Tornhill: Yeah, sure, I'd be happy to. So I founded CodeScene 10 years ago, after writing Your Code as a Crime Scene. In Your Code as a Crime Scene, I basically collected a bunch of techniques I had been using myself throughout my career. But I also realized that the book wouldn't be enough. I really wanted professional tools that could automate the analyses from Your Code as a Crime Scene. So that was my main motivation.
And the way CodeScene works, because we have come a long way in 10 years, is that you basically point it at your codebase, you press a button, and CodeScene does all the heavy lifting. You end up with not only KPIs and trends on code quality and code health and that stuff, you also get complete visualizations. So you can visually see where the bottlenecks in your code are, where the complicated hotspots are. And those maps are intended to serve as a communication tool, not only within the engineering team, but also so you can sit down with technical leaders and managers and have a conversation around things like technical debt and code quality.
And I think the thing that sets CodeScene apart is, obviously, that it's the first and only behavioral code analysis tool. So it's the only tool that really considers the intersection of people and code. And second, one thing we are very proud of is that our Code Health metric is the only proven and validated code quality metric with a connection to business outcomes, like we talked about before: measurable defect reduction, measurable speed-up in development time. So that's the gist of CodeScene. And then, of course, we have all the bells and whistles: automated code reviews, IDE integrations, and so on. I could go on for a long time about this.
Henry Suryawirawan: Yeah, so I think the first unique part of CodeScene is the behavioral analysis, right? I haven't really played around with CodeScene much; I've just seen it on the website. But I think it's really interesting to see that kind of visualization, especially when you already have the knowledge and the interest in this.
Because again, some people still associate code quality with the number of issues found by static code analysis tools. So this one is slightly different: it takes the behavioral analysis of your code. And I like the bus factor as well, because sometimes it's very important for us to see, especially when you have a few team members and you don't do hands-on coding yourself. You want to see the kind of bus factor associated with some of the developers.
[00:42:06] Will AI Help or Hurt Code Quality?
Henry Suryawirawan: So I think one big topic these days when talking about software quality is AI-produced code. I know this is probably one of the hot topics. First of all, I would like to clarify with you: do you think AI will help us with software quality? Or do you think AI will not help us, or even make our code quality worse?
Adam Tornhill: So I think it's very much up to us. I have some pretty solid proof that AI itself can help us write better software. But there are also a couple of trends that worry me, which indicate that if we misapply AI or use it for the wrong purpose, then it's more likely to serve as a technical debt generator rather than support. We are all at a fork in the road right now. And I think it's pretty much up to us, as a community and as companies, where we take AI, because we can end up with big problems in the future, where technical debt grows exponentially at machine speed.
[00:43:06] Essential Guardrails for AI-Generated Code
Henry Suryawirawan: In my experience, I actually worry more about the latter. I think the tech debt, the amount of poor-quality code that gets produced, can get worse, simply because it's now very easy to produce many lines of code, procedural code, where you just continue in one class over and over. And there's also the question of coherence in terms of architecture and design. So I worry that one day, if all developers mostly outsource writing code to AI, especially with vibe coding and all that, it gets even worse, right? So what do you think are some of the guardrails we should have to help us avoid code quality getting even worse? I don't think anyone aspires to this, but we're all kind of seduced by the speed, the amount of work that can get done simply by using AI. So what do you think are some of the guardrails?
Adam Tornhill: The first guardrail has to be code quality. And the reason is that, yes, we are using AI today, and that's what I mean by the big risk: we're using AI today to automate coding. But writing code is a very small part of a developer's work week. It's roughly 5% of our work week; that's the time we spend typing on the keyboard. So development is not about typing faster. The big bottleneck is instead in understanding existing code, where we spend roughly 70% of our work week. If we're not aware of this, and if we're not guarding against it, we will have an AI that optimizes 5% of our work at the expense of the other 70%. Writing code was never the bottleneck, but we pretend it was. Now we get a lot of code, and we have to spend a ton of time reading through that code and trying to understand code that someone else, in this case an AI, wrote, which is arguably a harder problem. So this is something I'm worried about. The least we can do is ensure that whatever code AI generates is healthy. What we do internally is have a code health metric in the IDE. So whenever we write some code, or use AI to generate some code, it has to pass a certain code quality bar, a certain code health bar. Otherwise, we discard it. It's not useful. That, I think, is the first thing.
The other thing I think is important as a guardrail is that we need to value this shift in emphasis from writing code to becoming really, really good at reading code. I think that traditional software engineering practices, like the TDD we talked about before, like code reviews, like pair programming, will be more important than ever in the AI era. Because it's a really hard problem to assess a lot of code that we didn't write ourselves.
[00:45:54] Using CodeScene to Maintain Quality in the AI Era
Henry Suryawirawan: Yeah, so I think one challenging aspect is that it's now so cheap to produce code. With these practices, you know, even generating tests, there could be much more code to review. And everyone seems to be expected to work on more things. I worry that someday our brain capacity just won't keep up, and we'll simply ignore whatever code quality problems we have and continue a cycle that spirals down over time.
So when you mention code quality, what kind of things does CodeScene support that can help us reduce this tendency? Because I think it's very easy to just produce code, commit it, and let other people see it. And if they don't see it, it just gets deployed. So what are the tools that can help us avoid this?
Adam Tornhill: I think there are multiple tools here. But one thing I think is important, and I know that some AI platforms are already considering building this in and some might already have it, is to have a security scan happen as soon as possible. Because a lot of the early research on AI showed that it was very prone to shipping vulnerable code. So that's the lowest bar for quality: make sure it's secure.
The second thing, and this is where I think CodeScene has an important role to play, is to integrate the code health check in your IDE, so that whenever you get some code, human-written or AI-written, you make sure it's healthy. Otherwise, discard it. Or, you know, use an MCP that feeds it back to the AI, so it can make a new attempt at shipping code that meets the right standard. Because, as I like to say, acceleration just isn't useful if it drives our projects into a brick wall of technical debt. So having healthy, secure code, that's the very basics of this.
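A quality gate of the kind Adam describes could be sketched as follows. This is an illustrative toy, not CodeScene's Code Health metric: the two checks (function length, parameter count) and their thresholds are assumptions chosen for demonstration, and a real gate would cover many more smells:

```python
import ast

MAX_FUNCTION_LINES = 40  # illustrative thresholds, not CodeScene's
MAX_ARGS = 5

def health_issues(source: str) -> list[str]:
    """Flag a couple of simple code smells in Python source."""
    issues = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                issues.append(f"{node.name}: {length} lines (too long)")
            if len(node.args.args) > MAX_ARGS:
                issues.append(f"{node.name}: too many parameters")
    return issues

def gate(source: str) -> bool:
    """Accept generated code only if it passes the health bar;
    on failure, the issues could be fed back to the AI for a retry."""
    return not health_issues(source)

snippet = "def f(a, b, c, d, e, f, g):\n    return a\n"
print(gate(snippet))  # False: too many parameters
```

The point of the sketch is the workflow, not the checks: generated code either clears the bar or gets rejected (or sent back to the model with the issue list), before any human spends review time on it.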
And third, I also think we need to be really good at questioning whether we need that code in the first place. Because, like you say, AI makes it more or less free to generate a lot of code. But that code is still expensive; it's still waste if it isn't needed. Maybe there's a library, maybe there's already some in-house functionality for doing this. Or maybe we should go back and question whether the requirement really has to be implemented at all. Because every single line of code we add is going to cost us a lot of time over the years, just to maintain it and to understand it.
Henry Suryawirawan: Yes, speaking about that, I always remind myself that code is a liability, right? More code means more liability; it's not necessarily a good thing.
[00:48:32] How AI Accelerates Technical Debt at Scale
Henry Suryawirawan: So you mentioned the acceleration of technical debt. From your experience working with customers implementing AI, what do you think are some of the top kinds of tech debt that get produced more, simply because people are using AI?
Adam Tornhill: Yeah, I think the challenge is that AI works really, really well for individuals doing toy tasks. There, an AI can drive most of the work, in particular if it's a task that's been done many, many times before, and that's perhaps code that shouldn't even be written. You face challenges when you start to scale to the team level.
In my experience, I mean, I've seen companies... I remember, like six months ago, I worked with a company that had made a massive rollout of an AI coding tool. And they had done everything by the book. They started with an evaluation at a small scale with a small team, and that was, of course, successful. Then they decided, hey, let's scale this up to a couple of thousand developers. And that's where the problems started, because pretty soon they noticed an impact on production: they were generating poor code, leading to bugs. And this is where it's so dangerous, because if the code the AI generated isn't healthy, you need to get a human into the loop to figure out what's causing the bug. An AI is not particularly good at that; the state of AI is simply not at the point where it can automatically repair systems in production reliably. So the code needs a fail-safe where humans can go in. And now you have this massive onboarding problem, because you have thousands of lines of code that you didn't write, that your teammates didn't write, but now you need to understand them, and you need to understand them under time pressure. So that's why code quality is so fundamental for succeeding with AI-assisted development.
Henry Suryawirawan: Wow, so yeah, this particular scenario you mentioned also worries me: when a lot of the code is produced by AI and there's a production issue, we still have to find the root cause, the particular code where it fails. If we don't have a good understanding of the codebase, that will be quite dangerous, and under time pressure as well.
[00:50:42] Why AI-Friendly Code is Human-Friendly Code
Henry Suryawirawan: You also have this line that I think is quite interesting to discuss. You mentioned that code that is good and readable for humans is actually also beneficial for an AI or LLM to work with. So tell us about these findings, and why you see a correlation between the two.
Adam Tornhill: Yeah, that's a topic I'm super interested in at the moment. This was an observation originally made by Thoughtworks on their Tech Radar earlier this year, where they started to talk about this concept of AI-friendly code. And that really resonated with me, because it's exactly what I see in our own internal data lake, where we study how AI impacts quality. The idea is that with healthy code, an AI will benefit from the additional structure. It will benefit from the richer context provided by proper function names and cohesive functions; it simply makes it so much easier for the AI to do a good job.
So it's fascinating that we have this potential double win: if we write healthy code, not only is life going to be easier for us as humans, but we're also going to be in a place where we're able to benefit from AI. The obvious implication is that if your code isn't good enough, then maybe you're in a situation where you cannot apply AI safely. Maybe you need to refactor first to even get to a level where you can benefit from AI.
So this is an area where we have initiated a bunch of research initiatives. I hope to be able to share some actual numbers, put some actual data on what the problem is, and identify where the cutoff points are for AI performance versus source code quality, hopefully in a month's time.
Henry Suryawirawan: I hope the research can give us more insights into how good code practices and good code quality can actually help humans and AI collaborate much better. I think that's pretty exciting.
[00:52:41] Documentation: Capturing the “Why” for Humans and AI
Henry Suryawirawan: One thing related to that: what about software documentation, or what people these days call spec-driven development? Do you think this is also a good practice to try out, simply because the AI can then have much better structure and context? Is this an area every software developer has to try out?
Adam Tornhill: I think that good documentation has always been valuable. I know there are a lot of conflicting opinions about that in the software industry. One thing I always miss in source code is the whys. Why have we chosen to do it this way? And what are the trade-offs? Because that's something I can never read out of source code. I can use the source code to understand what the code does, but I don't know why we chose this particular approach.
So that level of documentation, I think, is super important for a human maintainer. And I would be surprised if an AI didn't benefit from it, because if the AI has the context of why, then we can probably avoid a lot of pitfalls. The classic example is that we have done some really hard performance optimizations. As a human, you want to know about that. As an AI, you probably also want to understand that, because it's going to influence which path you take. So, yes, I think that's a very interesting angle on documentation.
Henry Suryawirawan: Yeah. And not to mention, AI can actually help us produce that documentation itself, and we can iterate together to produce much better documentation. I think almost all developers dislike writing documentation for whatever reason; I personally also sometimes find it a drudge. But now we can kickstart the writing by using AI to help us, and hopefully iterate so the documentation improves over time as well.
[00:54:31] The Reality Check: Future of Software Development with AI
Henry Suryawirawan: So speaking about software developers and AI, one question that's always in the news is about the future of software development, right? Some people say more code will be written by AI, and some big tech companies actually claim that is already happening within their companies. People also say we won't need more juniors in the future. Some people get laid off simply because companies think AI can produce the same amount of code. So what is your view on this trend, and what kind of future for software development or software engineers do you think we'll see in the next, I don't know, one or two years? Let's not go for five or ten years, yeah.
Adam Tornhill: So in two or three years' time, I think what's going to happen is that we all get a reality check. The reason I say this is that, looking historically, I haven't seen a single technological revolution that led to demand for less work. It's always been more work. Because what happens is that when we get access to new technologies, like for example a machine that can code, we raise the bar. We start to take on larger and larger problems, and that will only increase the demand for software developers. Given the current state of AI, I have a very hard time seeing an AI replace humans entirely. I don't think that will ever happen. I might of course be wrong, but there's no evidence for it so far.
So what I think is going to happen over the next few years is that we're going to have this hybrid model where we as humans have to understand code that's written by machines, and we have to work in tandem with them. And that's where I think it's so important that we reemphasize traditional engineering principles in software. But it also means that the barrier to entry is going to be really, really challenging, because these skills take time to build. At least for me, it took a decade before I even knew what I was doing in software. And we need to grow the next generation of developers too.
This is what I'm mostly worried about, because as a junior, you need to have a chance to start, and you need to learn from first principles. And having an AI available as a junior, I'm not sure that's the right way to go, because it makes it so easy to complete tasks without building a proper understanding. To me, true learning has to be effortful, right? We really have to struggle in order to learn, and we need a chance to do that.
Yeah, I might be worried for our profession 10 or even 15 years from now, because we need to grow the next generation of developers. There are going to be a lot of them needed out there. That said, I do hope that if we rethink it, AI can become a best friend, doing all the work that we dislike. You mentioned documentation. Other good examples might be fixing technical debt or automating security patches, the type of work that is repetitive and might be boring to a human, and that often gets deprioritized. That's the type of task that I think could be super beneficial for a machine.
Henry Suryawirawan: Yeah, it's definitely a reality check that we are all waiting for, right? I think you mentioned a couple of key things for listeners to focus on: first principles, the fundamentals. And also, more importantly, do try out these tools, because unless you use them, you probably won't see the so-called negative side of using AI, such as the amount of tech debt that gets introduced, or security issues, where AI still doesn't do well. So if you understand how software gets delivered and introduce better guardrails in your software development, that will be much more important.
So Adam, we have talked a lot about AI and code quality. Is there anything else that you think must be covered before we move on to the last question?
Adam Tornhill: No, I think we covered so much ground. So I'm quite happy with that.
[00:58:27] 3 Tech Lead Wisdom
Henry Suryawirawan: Yeah. With that, I have one last question that I would like to ask you. I call this the three technical leadership wisdoms. Think of it as advice you want to give us. Maybe you can share your wisdom with us today.
Adam Tornhill: Yeah, sure. So I think my top three recommendations would be, number one, learn to learn. We have seen that the only true constant in software development is change. We just discussed AI, and that's one example, right? When these changes come along, changes in technology, in programming languages, in whatever tools we're using, we need to be able to understand them quickly and follow along. To me, that's a lifelong journey where we have to practice and become more efficient at learning new things, and be able to pick them up fast.
The second recommendation I would give, and it's something I wish I had understood much, much earlier in my career, is to become a domain expert at whatever you do. We all understand the importance of being good at programming and development. But understanding the actual domain where we build products is super important, because if we are domain experts, if we truly understand how our customers use the product and what they aim to achieve with it, then we can do so many optimizations. That's where the real big benefits are, right? We might realize that we don't even have to build this feature, or we can take away that feature, or this one can be much simpler. That's where we make the real architectural wins.
And finally, number three, I would recommend always leading by example. This is something I've been trying to do myself, now that I've found myself in a manager position after founding CodeScene. What that means in practice is that I would never require anything from my teammates that I wouldn't be prepared to do myself. That, I think, is important.
Henry Suryawirawan: Wow, beautifully said: learn to learn, become a domain expert, and lead by example. So Adam, if people would love to connect with you or ask you more questions beyond this conversation, is there a place where they can find you online?
Adam Tornhill: Yeah, I spend most of my online time on LinkedIn. That's my preferred channel, and I'll be super happy to connect and continue the conversation there.
Henry Suryawirawan: Yeah, and I hope more people will try out using CodeScene to do behavioral analysis of their codebase and see the kind of code health they have within their software development teams.
So again, thank you so much for your time today, Adam. I learned a lot about code quality and also the risks AI poses to code quality. So thank you for your time.
Adam Tornhill: Yeah. Thanks, Henry. Thank you very much for hosting me. Thanks. – End –
