#139 - A Developer's Guide to Effective Software Testing - Mauricio Aniche

 

   

“An effective developer is an effective software tester. As a developer, it’s your responsibility to make sure what you do works. And automated testing is such an easy and cheap way of doing it.”

Mauricio Aniche is the author of “Effective Software Testing”. In this episode, Mauricio explained how to become a more effective software developer by using effective and systematic software testing approaches. We discussed several systematic testing techniques, such as the testing pyramid, specification-based testing, boundary testing, structural testing, mutation testing, and property testing. Mauricio also shared his interesting view on test-driven development (TDD) and suggested the one area we can focus on to improve our test maintainability.

Listen out for:

  • Career Journey - [00:03:43]
  • Winning Teacher of the Year - [00:06:07]
  • An Effective Developer is an Effective Tester - [00:09:33]
  • Reasons for Writing Automated Tests - [00:10:43]
  • Systematic Tester - [00:13:45]
  • Testing Pyramid - [00:17:50]
  • Unit vs Integration Test - [00:20:25]
  • Specification-Based Testing - [00:22:55]
  • Behavior-Driven Design - [00:25:34]
  • Boundary Testing - [00:27:01]
  • Structural Testing & Code Coverage - [00:30:16]
  • Mutation Testing - [00:35:31]
  • Property Testing - [00:38:45]
  • Test-Driven Development - [00:42:00]
  • Test Maintainability - [00:46:03]
  • Growing Object-Oriented Software, Guided by Tests - [00:48:07]
  • 3 Tech Lead Wisdom - [00:49:24]

_____

Mauricio Aniche’s Bio
Dr. Maurício Aniche’s life mission is to help software engineers become better and more productive. Maurício is a Tech Lead at Adyen, where he heads the Tech Academy team and leads different engineering enablement initiatives. Maurício is also an assistant professor of software engineering at Delft University of Technology in the Netherlands. His teaching efforts in software testing earned him the Computer Science Teacher of the Year 2021 award and the TU Delft Education Fellowship, a prestigious fellowship given to innovative lecturers. He is the author of “Effective Software Testing: A Developer’s Guide”, published by Manning in 2022. He’s currently working on a new book entitled “Simple Object-Oriented Design”, which should be on the market soon.

Follow Mauricio:

Mentions & Links:

 

Our Sponsor - Tech Lead Journal Shop
Are you looking for some cool new swag?

Tech Lead Journal now offers swag that you can purchase online. Each item is printed on demand based on your preference, and will be delivered safely to you anywhere in the world where shipping is available.

Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once you receive any of it.

 

Like this episode?
Follow @techleadjournal on LinkedIn, Twitter, Instagram.
Buy me a coffee or become a patron.

 

Quotes

Winning Teacher of the Year

  • I decided to open my lecture notes. Published them on a website. And that’s how we did the course. People can read my lecture notes. And to my surprise, suddenly, I started to get emails from people from other universities. “Hey, I’m reading your lecture notes. This is very cool. Do you have exercises that I can use? Do you have slides?” And at some point I said, well, maybe I need to turn this into a book.

An Effective Developer is an Effective Tester

  • To be an effective developer, you must become an effective software tester.

  • Because as a developer, it’s your responsibility to make sure that what you do works. And the only way to do this is by writing tests that prove to you and to your team that your code works.

  • I even worked in environments like this, where I would just focus on coding and then I would send my software to another company. This company would do the testing for me. They would send me a report, I would fix the bugs, and so on and so forth. And that’s just not productive. The back and forth between these two teams is too big.

  • It’s your responsibility to make sure that things work. And automated testing is just such an easy and cheap way of doing it. And that’s why I say in the book that an effective developer is someone who effectively tests what they do.

Reasons for Writing Automated Tests

  • You mentioned about developers being confident, right? And what I always say is you never know. You don’t know how many times a day you make a mistake. Because if you don’t have tests, everything that you code looks perfect, right? You just believe it works. And then you quickly realize how much you need them, because we break stuff all the time.

  • How to convince developers to write tests? I think there are many angles to this.

    • One is to show them and to have them practice, so that they don’t see writing tests as something that is a burden. They just get used to it. They get proficient at writing tests, not only with the testing frameworks, but also at writing code that is easy to test. There’s this aspect of just practising and getting better at testing itself.

    • The second one, if you’re in a very large and complex software system, things are too complex. And then for you to have this pleasure back in writing tests, you need to make sure it’s easy to write tests. And this means as a company or as a team, you have to invest in a small infrastructure that facilitates the act of writing tests.

  • Those are the two perspectives I see. One is, again, getting good at testing itself, and the second one is having the proper platform so that you can write the tests.

Systematic Tester

  • We wrote a paper a couple of years ago, and in this paper we asked developers to write tests. The most important finding for this conversation is that the developers were never systematic. They would just follow their hearts and follow their experience. And feelings are very important when it comes to testing, but it also opens up space for mistakes. Maybe you’re not having a good day and then you forget something.

  • Second, if you’re always using all your brainpower to come up with test cases, even the basic ones, you’re not saving energy to think of complex cases.

  • When you’re more systematic in terms of testing, that means you just follow some sort of cake recipe that helps you to come up with a bunch of test cases that are quite easy to see; you can put them on a checklist, basically. And then it leaves your brainpower free to focus on test cases that really require a smart human thinking about them.

  • The fun part is there are lots of techniques like this. If you look at academic books on testing, even the books from the 80s, their goal back then was to come up with a recipe that teaches you how to write perfect tests.

  • There are a few basic things that you don’t even have to think about, and you can write those tests. And they are usually good enough to get you to a very high coverage already, leaving space for you to then focus on boundary testing, corner cases, and this type of stuff.

  • And being systematic is something that you see in other engineering fields. There’s a story there, backed up by research, that shows that medical doctors who use a checklist before a surgery make fewer mistakes than medical doctors who don’t. And you don’t have to be ashamed of following a checklist.

  • Of course, we don’t have to be systematic all the time. It’s just too expensive, maybe too complex to be systematic all the time. But identify, like, I’m testing a complex method here, let me be a little bit more systematic.

  • In your book, you mentioned that if you are systematic, you can assign any developer to the same problem, and they will most likely come up with the same test suite. Also, becoming effective means that you write the right tests. Coming up with the right amount of tests is very important.
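
As a rough illustration of the "cake recipe" idea above, here is a minimal JUnit 5 sketch of checklist-style tests for a method that receives a list (null, empty, single element, many elements); the sumOf method is invented purely for the example.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.util.List;
import org.junit.jupiter.api.Test;

class SumOfTest {

    // Hypothetical method under test, inlined so the sketch is self-contained.
    static int sumOf(List<Integer> numbers) {
        if (numbers == null) throw new IllegalArgumentException("numbers must not be null");
        return numbers.stream().mapToInt(Integer::intValue).sum();
    }

    // Checklist cases that apply to almost any method receiving a list.
    @Test
    void nullListIsRejected() {
        assertThrows(IllegalArgumentException.class, () -> sumOf(null));
    }

    @Test
    void emptyListSumsToZero() {
        assertEquals(0, sumOf(List.of()));
    }

    @Test
    void singleElementListReturnsThatElement() {
        assertEquals(7, sumOf(List.of(7)));
    }

    @Test
    void manyElementsAreAllAdded() {
        assertEquals(10, sumOf(List.of(1, 2, 3, 4)));
    }
}
```

These cases come straight off the checklist, leaving your brainpower for the cases that genuinely need thought.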

Testing Pyramid

  • The testing pyramid helps you to be pragmatic when writing those tests. Because one thing is to come up with the test cases. The other one is writing this in code and making sure that this works nicely in your development process, in your CI, etc.

  • For example, if you just write end-to-end tests, maybe your test suite will cost you too much to execute, and then at some point this becomes a bottleneck. I think the testing pyramid gives this pragmatic point of view.

  • I like the idea of the testing pyramid very much. And the idea is that, at the bottom, you have unit tests. And why is it at the bottom? Because they are usually cheaper to write. They are cheaper to run. They tend to be more robust. They tend not to really fail. They’re not flaky, in general. And then you go up the pyramid, you have integration and end-to-end tests, and you still have to do them. But you do them a little bit less. Maybe you focus a little bit more on unit tests. Maybe it’s okay to have some duplicated tests here, if you’re unsure.

  • I think it’s okay to make this sort of mistakes in the unit test, while with the integration and end-to-end test, you wanna prioritize a little bit more.

  • [Software Engineering at Google] brings a new perspective that I also appreciate very much, that is, forget about unit testing and integration testing. Just separate your test suite between: is this one fast enough that I can actually run it during pre-merge, together with your merge request or pull request, or is it so slow that I will have to run it on a separate machine, etc.

  • If you remember back in the 2000s, our big discussions were, what is a unit test? We know that this doesn’t matter at all, as long as it runs fast and it gives you proper feedback, that’s good.

  • No one can argue that a slow test is better than a fast test. You can argue that maybe you like integration tests more than unit tests, but no one can say a slow test is better than a fast test.
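
One lightweight way to act on that fast/slow split, sketched here with JUnit 5 tags (the class, test names, and tag names are invented for the example):

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Classify tests by how fast they run, not by whether they are "unit" or "integration".
class InvoiceTotalTest {

    @Tag("fast")   // cheap in-memory test: run on every merge request
    @Test
    void totalIncludesVat() {
        // ... plain in-memory assertion on the calculation ...
    }

    @Tag("slow")   // talks to a real external system: run in a separate pipeline
    @Test
    void totalMatchesTheRealTaxService() {
        // ... real call to the external service ...
    }
}
```

The build can then run only the fast suite pre-merge, for example with Gradle's useJUnitPlatform { includeTags("fast") } or an equivalent Maven Surefire groups filter, and schedule the slow suite separately.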

Unit vs Integration Test

  • Mocking can be dangerous for sure. Because if you mock too much, then at the end you’re not testing anything. So you wanna test as much as possible real behavior or behavior that will look like in production. But at the same time, you don’t wanna be slowed down by things that you don’t control so much.

  • For example, let’s say you’re writing a piece of code that makes a call to a web service that is developed by another team. And suddenly for you to really run this test, you need that web service available to you. And this becomes very quickly a pain. So in this case, the mocking makes a lot of sense. And the web service, you don’t control so much.

  • Let’s give an example of something you control. A database. Should I mock my database or not? To be honest, I believe, my opinion is that you should not mock your database, because today you can make tests with database so fast that you can actually run them during your pre-merge time and you get feedback very fast. So why would you mock something that is not really preventing you from writing the test in an easy way and to run it fast?

  • I think it’s not about writing integration test or unit test, but it’s about writing lots of tests that are fast in the end, and that you can have full control over. And those are the things you should mock, right? Things that you don’t have control over.

  • Then, of course, if you go back to the testing pyramid. Let’s say you’re mocking this web service. So you can write lots of unit tests, super fast tests that just mock this web service, but you can still have one or two or three integration tests that are a bit more expensive that make real calls to this web service. So you see that things work when you put all the components together.

  • Something that I always say is, you should not write the integration test that could have been a unit test. You don’t need an integration test to exercise an if-statement in your code. Just write a unit test for it. Leave integration test for what it really pays off, that is to find integration bugs.
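
A small sketch of that trade-off in Mockito: mock the web service you don't control, keep the class under test real. The CurrencyRateClient interface and PriceConverter class are invented for the example; a couple of slower integration tests against the real service would sit on top of this, as described above.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class PriceConverterTest {

    // A web service owned by another team: we don't control it, so we mock it.
    interface CurrencyRateClient {
        BigDecimal rateFor(String fromCurrency, String toCurrency);
    }

    // Hypothetical class under test, inlined to keep the sketch self-contained.
    record PriceConverter(CurrencyRateClient rates) {
        BigDecimal convert(BigDecimal amount, String from, String to) {
            return amount.multiply(rates.rateFor(from, to));
        }
    }

    @Test
    void convertsUsingTheRateFromTheExternalService() {
        CurrencyRateClient rates = mock(CurrencyRateClient.class);
        when(rates.rateFor("EUR", "USD")).thenReturn(new BigDecimal("1.10"));

        PriceConverter converter = new PriceConverter(rates);

        assertEquals(new BigDecimal("11.00"), converter.convert(BigDecimal.TEN, "EUR", "USD"));
    }
}
```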

Specification-Based Testing

  • Testing is about trying to find bugs. And how do you do this? Because you compare what you expect the program to do with what the program does. And for you to do this, you need to know what the program should do. Where is this information? In the requirements.

  • At some point, there’s this notion of what the program should do. And specification-based techniques are the ones that help you look at the requirements and identify interesting test cases.

  • If you look at the requirements, it’s quite easy to see those are the inputs of the program. This is sort of what the program needs to do in the output. Those are the different paths that the program may take, so on and so forth. And you can look at all of this and then get inspiration to write your tests.

  • One technique I show in my book is to look at the inputs of your program. What are the inputs? Separate them one by one. Look at them separately and explore their domain. You do this per input. And why do you do this separately? Because it’s just much easier for our brains to process small things.

  • You do each one of them separately, then you try to look at all of them together. You look for other possible corner cases that might be explicit in the documentation or the requirements. And then, and only then, you come up with test cases. And that is sort of the ideal of specification-based testing: that you start your tests from what the program should do and not from the implementation.

  • I actually like this very much, because as a developer, the person writing the code that I’m about to test, it gives me the opportunity to disconnect from my implementation and really focus: hey, this is the input that I’m gonna pass to the program. This is the expected output.

  • If you really wanna be a little bit more systematic when it comes to testing, the first step is to start writing tests, or creating test cases, from the specs, from the requirements.
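
To make the "explore each input's domain, then combine" step concrete, here is a hedged sketch; the voucher rule, names, and numbers are invented purely to show test cases being derived from a spec rather than from the implementation.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DiscountSpecificationTest {

    // Hypothetical method under test, inlined for the sketch.
    // Spec: a non-blank voucher code on a cart total of 50 or more gives a 10% discount;
    // otherwise the total is unchanged.
    static double finalPrice(String voucherCode, double cartTotal) {
        boolean hasVoucher = voucherCode != null && !voucherCode.isBlank();
        return (hasVoucher && cartTotal >= 50) ? cartTotal * 0.9 : cartTotal;
    }

    // Cases derived per input (null/empty/valid code; total below, at, and above 50)
    // and then combined, straight from the spec rather than from the code.
    @ParameterizedTest
    @CsvSource({
            ",       60,  60",  // no voucher code: full price
            "'',     60,  60",  // empty voucher code: full price
            "SAVE10, 49,  49",  // valid code but total below the threshold
            "SAVE10, 50,  45",  // valid code, total exactly at the threshold
            "SAVE10, 100, 90",  // valid code, total above the threshold
    })
    void casesDerivedFromTheSpecification(String code, double total, double expected) {
        assertEquals(expected, finalPrice(code, total), 0.0001);
    }
}
```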

Behavior-Driven Design

  • Usually, when people talk about specification-based testing, they are focusing on a very coarse-grained feature. From the point of view of the user, what should I test? In my book, I show that you can do this at method level, for example, or at class level, because as a developer, those are sort of the units you’re always dealing with.

  • To me, BDD makes more sense when you’re looking at the big picture and looking at the whole functionality from really the point of view of the final user. And the specification-based techniques, you can apply in both.

  • Should you write tests in a BDD style? I think that’s really a matter of taste. Tooling is a matter of personal taste in the end, right? As long as you’re looking at the behavior of the program, what you expect from it, I think this is a great step towards good testing.

Boundary Testing

  • Empirical research actually shows that bugs love boundaries. Because we are very good at implementing happy paths. The bugs then, they start to cluster on things that we’re not so good at. That is, to handle corner cases.

  • And coming up with corner cases is very hard, because we develop complex software systems. But one way to get started is to observe your program. You look at the inputs and how these inputs change the outputs. And boundary testing is a perfect technique for this.

  • What does a boundary mean? It’s where a small change in the input changes the output, and this is precisely where you wanna write a test, because we love to put bugs there. And as a developer, that makes sense, because it’s very easy to confuse a greater than with a greater than or equals to. So the idea of boundary testing is: look at your program, look at the inputs and how the inputs affect the outputs, and look for those moments where a small change in the input changes the outputs.

  • There’s a paper from 1994 that is cited in my book, that explains this in a more mathematical way. That shows that if you write tests like this, you are more likely to reveal a bug. And I think this is something very easy to change in the behavior of the developer.

  • And it’s like a very quick win that will give you better tests. Because as a developer, you look at an if-statement, you write a test for the true branch and for the false branch. But then we usually pick random numbers that exercise the true branch and then exercise the false branch. Instead of picking random numbers, pick the numbers that are close to the boundary. These two tests will be way stronger than the other two tests that sort of exercise the program in the same way, but very far from the boundary.
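
A minimal sketch of that "pick values at the boundary" advice; the free-shipping rule is invented for the example.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class FreeShippingBoundaryTest {

    // Hypothetical rule under test: orders of 100 or more ship for free.
    static boolean qualifiesForFreeShipping(int orderTotal) {
        return orderTotal >= 100;
    }

    // Instead of "random" values such as 20 and 500, exercise both branches
    // right at the boundary.
    @Test
    void justBelowTheBoundaryDoesNotQualify() {
        assertFalse(qualifiesForFreeShipping(99));
    }

    @Test
    void exactlyOnTheBoundaryQualifies() {
        assertTrue(qualifiesForFreeShipping(100));
    }
}
```

If someone later confuses the greater-than-or-equals with a plain greater-than, the test at exactly 100 fails immediately, while tests far from the boundary would keep passing.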

Structural Testing & Code Coverage

  • Structural testing is sort of the academic name for using code coverage in practice. So the idea of structural testing is: let’s use the structure of the program as a source of inspiration to write tests. Before, we were looking at the requirements with specification-based testing, maybe in text. Now we’re looking at the implementation and trying to come up with ideas.

  • There’s this hate in industry about code coverage. There are many reasons for this. One, you can cheat the coverage number. It’s very easy to write a test that covers a lot of stuff, but the test is not so good. If you’re using coverage as a target number, maybe that forces you to write useless tests. Also, if you get to 100% coverage, that also doesn’t mean your code is perfect. And it’s quite expensive to get to 100% coverage. This is why people usually hate it. But I think people usually hate code coverage, because they are focusing more on the number rather than on the output, the information that coverage brings to you.

  • In my book, I show that coverage should complement specification testing, because, you know, you write the test based on the specification, and then maybe you do boundary testing. And then the question is, am I done? Well, you can triangulate this and look at coverage. And then you see, did I cover everything? And maybe you forgot to cover something, and then the question is, why did I forget? Was it because I forgot? Was it because there was a mismatch between the requirements and the implementation?

  • You reflect about it. You either write a test or you decide not to test it, and that’s also fine. But it helps you to reflect about: am I done with writing tests? And if you use coverage with this purpose, notice that we are not talking about numbers anymore. We’re talking more like: the tests that I wrote, are they good or not? And coverage is giving me insights about it. And I think that’s how we should use coverage.

  • But then, the one million dollar question then is, is there a correlation between high coverage numbers and the effectiveness of the test suite? So a test suite that has a very high coverage. Is this one more likely to find bugs if I have a bug in my code?

  • And then you see lots of papers, and a lot of those papers show correlation between coverage and effectiveness of the test suite. And that makes sense, right? Because the more code your test suite covers, the more likely it will find a bug if you introduce a bug. Of course, different papers show different levels of correlation. Some of them show a strong correlation, others a weaker correlation, but the correlation does exist.

  • I think the lesson that I get from these papers is: 100% coverage may not mean a lot, because once you’re there, your tests may be very good, but that doesn’t mean they’re perfect. And when you’re at 100% coverage, coverage doesn’t give you more useful information, because it’s all covered.

  • But if you’re on the other side, that is you have very low coverage, let’s say 10%, then that means a lot, right? That means your test suite is maybe poor. Maybe there’s something that you can improve on there.

  • To summarize it, coverage can be used as a way to complement your tests. To help you see if your test suite is good enough or not. And coverage helps you to identify poorly tested areas of the code base. Hey, this is not covered, so maybe we should write a bunch of tests for it. I don’t have to go to 100% coverage, but I have to write some tests for it.

  • All code should be covered until proven otherwise.
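
A small sketch of that "coverage as a complement" workflow; the fee rule and numbers are invented, and JaCoCo is mentioned only as one common coverage tool.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class FeeCalculationTest {

    // Hypothetical method under test.
    static double fee(double amount, boolean premiumCustomer) {
        if (premiumCustomer) {
            return 0.0;          // branch the first test never reaches
        }
        return amount * 0.02;
    }

    // The spec-derived test written first: it exercises only the non-premium branch.
    @Test
    void regularCustomersPayTwoPercentFee() {
        assertEquals(2.0, fee(100.0, false), 0.0001);
    }

    // A coverage report (e.g. JaCoCo) would flag the premium branch as never executed.
    // After reflecting on why it was missed, this test is added to close the gap.
    @Test
    void premiumCustomersPayNoFee() {
        assertEquals(0.0, fee(100.0, true), 0.0001);
    }
}
```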

Mutation Testing

  • How do you know if your test suite is good or not? You can look at coverage for sure. But maybe, you can get to a lot of coverage, but maybe your assertions are poor. And then there’s a bug, but your assertions are not catching that bug, but you’re covering the code, so your coverage is very high. Mutation testing helps you to identify this gap. And what is the idea? The idea is I’m gonna create mutants of my code.

  • Imagine I just get your production code, and I, on purpose, insert a bug, so I change a greater than to a less than. If I run your tests, you must have a test that is failing, because I just introduced a bug. If that happens, that means, okay, your tests can actually kill that mutant. So your tests are doing some good stuff. But if I can mutate your code, so I introduce a bug on purpose and your tests are still green: hey, I just found a case, a possible bug that someone can introduce, that you’ll never know about with your test suite. So that is the idea of mutation testing.

  • And what mutation testing tools do for you is to automate this process. So they change your code, they run your tests, and they see if the tests catch the bug and they repeat this in a structured way. And they give a beautiful report in the end. And I love the idea of mutation testing. It just makes a lot of sense to understand if your tests are good or not.

  • In practice, a lot of companies are not there yet to benefit from mutation testing. You start to benefit from mutation testing when you have a lot of tests and you have very good test suites. If your test suite is very poor, it doesn’t kill any mutants, so why would you run mutants in the first place? Coverage is good enough for you at that moment. So I feel like a lot of companies are not there yet in terms of maturity to use mutants. But if you are, so if you have a beautiful test suite, then put mutants in your pipeline.

  • Tools are evolving. So PITest is a very nice tool. There are ways for you to reduce the time it takes, because it’s an expensive process. You have to mutate the code and rerun the test suite, and you do this a million times, or the tool does this a million times. So those tools are getting better and better, so that you just run the tests that are really relevant to the mutant.
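
A hand-rolled illustration of what a tool like PITest automates; the isAdult rule is invented, and the "mutant" is written out explicitly only to make the idea visible (a real tool generates and discards mutants for you).

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class IsAdultMutationTest {

    // Original production code (hypothetical).
    static boolean isAdult(int age) {
        return age >= 18;
    }

    // A mutant a mutation testing tool could generate: ">=" changed to ">".
    static boolean isAdultMutant(int age) {
        return age > 18;
    }

    // A weak test: it passes against the original AND the mutant,
    // so the mutant survives and reveals a gap in the suite.
    @Test
    void thirtyYearOldIsAnAdult() {
        assertTrue(isAdult(30));
        assertTrue(isAdultMutant(30));   // the mutant slips through this assertion
    }

    // A boundary test kills the mutant: the mutant returns false for 18.
    @Test
    void eighteenYearOldIsExactlyAnAdult() {
        assertTrue(isAdult(18));
        assertFalse(isAdultMutant(18));  // shown only to make the kill visible
    }
}
```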

Property Testing

  • Property-based testing is a way of writing a test that is different from the way we write normal tests. Usually when we write a normal test, which I call an example-based test in the book, you know in your mind what sort of branch you wanna exercise in the code or test case you wanna create based on the requirements. And what you do is you think of a concrete input that will run the program, will exercise the program in the way you want.

  • In property-based testing, what you do is you try to describe a property of the program and you let the tool come up with the inputs for you. So let’s say in this program, for any number that is greater than five, you know that the output of the program should always be a positive number. So for inputs X greater than five, the program always prints positive numbers. You can write a property, and then you say, create any number that is greater than five and just assert that the number that comes out of it is positive. And then the tool creates a lot of inputs for you.

  • The idea is very cool. It is of course much harder to write a test, because you have to stop thinking of specific functionality you wanna test, but more like what are the properties of my program that I wanna exercise? But once you can do this, then it’s very powerful, because you’re just testing a lot.

  • In industry, building information systems, which is what a lot of us do, I think you have fewer opportunities to write these property-based tests. So I still feel like example-based testing is the go-to approach. But consider property-based testing, especially for situations where you just feel unsure that this one example is enough and you need more.
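
A sketch of the "greater than five, always positive" example from above, written with jqwik, one property-based testing library for Java; the compute function is invented for the illustration.

```java
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;
import net.jqwik.api.constraints.IntRange;

class PositiveOutputProperties {

    // Hypothetical program under test: positive for any x greater than five.
    static long compute(int x) {
        return (long) x * x - 25;
    }

    // The property: for any generated input greater than five, the output is positive.
    // The tool, not the developer, comes up with the concrete inputs.
    @Property
    boolean outputIsPositiveForAnyInputGreaterThanFive(
            @ForAll @IntRange(min = 6, max = 1_000_000) int x) {
        return compute(x) > 0;
    }
}
```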

Test-Driven Development

  • I’m a big fan of TDD. I even wrote a book about TDD in 2012. It’s in Brazilian Portuguese. That was because I was doing a lot of TDD myself. And I really felt I was just being a better developer doing TDD. And this was in 2012.

  • Now in 2023, I think I do TDD way less. And I feel that’s just because I found a way to get the benefits of TDD without having to do TDD.

  • What are these benefits? I feel like the big benefit for me about TDD is that we work in very small steps. We make very small, steady progress towards the bigger feature.

  • Before knowing TDD, someone would give me a feature to implement and I would be all over the place trying to write the full algorithm in one go. And that would just complicate my life. I would code for an entire day with a lot of frustration, because, you know, you bump into barriers; you break stuff that was already working, so on and so forth. And then, after a few days of work, when it comes to writing tests: I just wanna get this feature done. I don’t wanna do this anymore.

  • With TDD, I was like, oh, I cannot do TDD if I work on such a big chunk of code. So it forced me to see programming as an act of writing small things and combining these small things one by one to give bigger behavior. I think that’s sort of the big difference between TDD and non-TDD.

  • And if you can incorporate this into your development practice, I think it’s okay if you don’t start with the tests. So today, for example, in a lot of cases, I actually start with the production code, but my coding sessions are very small.

  • I think if you find your way to work on small and steady steps, I think you get the benefit from TDD.

  • And this is actually what more recent research shows about test-driven development. Qualitatively, if they ask developers that are doing TDD, do you like TDD? The answer will be yes. Quantitatively, if you compare the quality of the test suites in comparison to people not doing TDD, the differences are negligible. Very small.

  • TDD by itself doesn’t do magic. So more recent research actually shows that the benefits are on those small steps and not because you’re writing the test before.

  • That being said, do I recommend you to do TDD? Definitely, yes. Especially if you’ve never done it. Because if you’ve never done it, odds are you’re not used to working on small things. You’re just used to working on big things. So TDD is the best teacher you can have when it comes to learning how to program in small bits. So do TDD, do a lot of it. Once you internalize the ideas, then it’s okay. Then you don’t have to do it anymore.

Test Maintainability

  • If you really write lots of tests, you’re gonna have lots of source code and you’re gonna have to maintain this as well, not only your production code. All the love that you put in your production code, you also have to put in your test code.

  • To me, in the test code, if you have to focus on one thing, you have to make sure that the part of your test where you create the data, the input that you’re gonna pass to the method or to the class you wanna test, is crystal clear.

  • Because if you’re working on a very complex information system, odds are that the complexity of the test will be in that part. So that part needs to be very easy to read. It has to be very easy to evolve, because your entities in your domain are evolving all the time. So it has to be easy to make an evolution in the production code without breaking the test code. And it has to be easy for you to come up with complex data inputs without having to spend 50, 100 lines of code.

  • So you have to have lots of utility methods, or whatever you wanna call them, that help you build data. Test data builders is the famous name for patterns like this. You need to invest in them. Input data has to be crystal clear.
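
A minimal sketch of a test data builder; the Invoice entity and its defaults are invented to show the shape of the pattern.

```java
import java.math.BigDecimal;

// Hypothetical domain entity.
record Invoice(String customer, BigDecimal amount, String country) {}

// Test data builder: sensible defaults, so each test only states what matters to it.
class InvoiceBuilder {
    private String customer = "Any Customer";
    private BigDecimal amount = new BigDecimal("100.00");
    private String country = "NL";

    InvoiceBuilder withCustomer(String customer) {
        this.customer = customer;
        return this;
    }

    InvoiceBuilder withAmount(String amount) {
        this.amount = new BigDecimal(amount);
        return this;
    }

    InvoiceBuilder withCountry(String country) {
        this.country = country;
        return this;
    }

    Invoice build() {
        return new Invoice(customer, amount, country);
    }
}
```

A test that only cares about the country then reads new InvoiceBuilder().withCountry("BR").build() instead of dozens of lines of setup, and when the entity evolves, only the builder's defaults need to change.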

3 Tech Lead Wisdom

  1. You should master testing.

    • Mastering testing means not only mastering the tools like JUnit and Mockito and whatever tool you use, but also mastering creating good test cases. And once you really become proficient with it, it just becomes way easier and way cheaper. You start having fewer reasons not to do it, because it’s just more natural.

    • How do you get there? By practicing. It’s gonna hurt at the beginning, but the more you do it, the easier it’ll get. So practice testing.

  2. You should never stop learning.

    • A lot of us go through education and then we just join a company and we start working as a developer, and then suddenly we forget that we need to keep updating ourselves.

    • You have to find time. During your work hours, to be honest. Your employer needs to be on board with this, so you can upskill yourself. So read books. Just keep studying, because there’s so much new stuff going on and so many best practices.

    • And something that I feel among developers is that a lot of them, when they go and have these big discussions to decide what architecture to take, should we use practice A versus B, whatever. People have intuition, but they have very little evidence to explain to people. And if you’re reading books, you have clear ways to explain why you wanna go for this or not for that. So there are so many benefits in studying and putting studying as part of your daily job. So work on this. Make sure you study a lot.

  3. Because you’re studying, now it’s time to share your knowledge.

    • Make sure you also write about what you learn or give talks at conferences.

    • This is very good for you, because it exposes you to other people. It forces you to formalize what’s in your head, and you have to put it in words. And I find that putting it in words is the best way for you to really understand how much you understand about something.

    • It’s good for you. Because you’re just getting better as a person, as a developer. And it’s also good for others because I’m pretty sure you have cool stuff to share and that there will be others that are willing to listen to you. So please share knowledge. It’s as important as learning new knowledge.

    • Sharing is like a form of testing itself, to check that you actually learned from what you studied. Maybe the best test case that you can write for that learning is to share it.

Transcript

[00:00:57] Episode Introduction

Henry Suryawirawan: Hi, everyone. Welcome to the Tech Lead Journal podcast, the podcast where you can learn about technical leadership and excellence from my conversations with great thought leaders in the tech industry. If you haven’t, please follow the show on your podcast app and social media on LinkedIn, Twitter, and Instagram. And for video contents, Tech Lead Journal is also available on YouTube and TikTok. And if you are willing to support my work, please buy me a coffee at techleadjournal.dev/tip or subscribe as a patron at techleadjournal.dev/patron.

My guest for today’s episode is Mauricio Aniche. Mauricio is the author of “Effective Software Testing”. In this episode, he explained how to become a more effective software developer by using effective and systematic software testing approaches. We discussed several systematic testing techniques, such as the testing pyramid, specification-based testing, boundary testing, structural testing or code coverage, mutation testing, and property testing. Mauricio also shared his interesting view about test-driven development (also known as TDD), which you may find quite surprising. And towards the end, he suggested the one area we can focus on to improve our test maintainability.

I hope you enjoy listening to this episode and learning a lot of things about effective software testing and several different systematic testing techniques. It would be really great if you share this with your colleagues, your friends, and communities, and leave a five-star rating and review on Apple Podcasts and Spotify. Your small help will help me a lot in getting more people discover and listen to the podcast. So let’s go to my conversation with Mauricio after a few words from our sponsor.

[00:03:04] Introduction

Henry Suryawirawan: Hey, everyone. Welcome to another new show of the Tech Lead Journal podcast. Today, I have with me Mauricio Aniche. He’s the author of a book titled “Effective Software Testing”. As the title says, we’ll be talking a lot about software testing today, how to do it effectively and how to do it properly.

So Aniche is the tech lead at Adyen, a company in the Netherlands, I believe. And I think what is really cool is that Aniche is also a lecturer and he won the Computer Science Teacher of the Year award in 2021. I think that sounds really exciting. Maybe you can share a little bit more about that.

So thank you so much for this opportunity. I’m really looking forward to discuss about software testing today.

Mauricio Aniche: Thanks, Henry, for the invite.

[00:03:43] Career Journey

Henry Suryawirawan: So Mauricio, I always like to ask my guests to share maybe a little bit more about yourself. Telling us your highlights or any turning points in your career that are interesting for us to learn from.

Mauricio Aniche: Yeah. Cool. Yeah. So my name is Mauricio. I've worked as a software developer for almost 20 years. I had a little break when I went fully into academia, so I spent five or six years as an assistant professor in software engineering, doing research in software engineering. So I wasn't coding professionally for that period. And my career was always about, or my passion in my career was always about, software design and software testing. So how to model your code in a way that's easy to maintain and how to write tests for it.

And at the very beginning of my book, I tell this story that in one of my first projects in my career as a team lead, we coded some software that was supposed to run on hardware. And then we spent six months coding version number one. We flew to another country. We installed it. It didn't work for 24 hours. There was a super big bug. And I was super sad that my first software didn't work for a day. And that was sort of the click for me to start focusing on testing. So since then, and that was in 2006, so it's been a while ago, I've been trying to write tests for everything that I do. And I think the book was just a consequence of me putting those ideas on paper.

Henry Suryawirawan: Wow! It always amazes me how your first experience in your career can actually leave a lot of impact on your career, right. So you mentioned that your first project maybe didn't work out as much as you wanted as a team. And that actually led you to have more passion in testing. And I think it led you to also writing this book.

So maybe tell us a little bit more about, if you can reflect back on that experience, what were probably some of the root causes of why the project failed and didn't even last for a day.

Mauricio Aniche: Yeah, lack of testing was the main reason. We were doing a lot of testing for sure, but manually. We had beautiful Excel spreadsheets full of things that we had to test. We even created a simulator because in that software we would talk to an external party via serial port. So we even created a simulator so that we could test a little bit more like in the real world. But those tests were never automated. And what happens is, as a developer, you change something, you test the happy path of your change, and that’s it. And yeah, then we had a bug, and a bug that was totally preventable by very basic automated testing. So that was the click for me, as I said.

[00:06:07] Winning Teacher of the Year

Henry Suryawirawan: Yeah. And also you said you went back to academia, right? So you spent five, six years, and I think that also led you to win this award. Maybe tell us a little bit more about it. What made you leave a lasting impression on the students, I guess?

Mauricio Aniche: Yeah. So it’s very hard for me to decide if I like working in industry, where I can write software and deliver value right away to people. Or if I like academia where I can just sit and reflect about how to make software engineers better at what they do. So my life was always a little bit like this. And I was working as a developer. I did my Masters and then I said, oh, that looks cool. I did a PhD. I finished my PhD. I said, well, maybe I wanna try this a little bit more. I did a postdoc, so I took a postdoc position. And then I’m like, yeah, it’s amazing to do research, right? And then let me try being a professor full-time.

And what usually assistant professors do is they do research and they also do teaching. And I started to teach software testing here at the Delft University of Technology, a very important technical university in the Netherlands. And first time I gave this course was 2017. And I’ve been doing this course until today. And I think teaching testing has been also an amazing experience for me, because it made me just understand way more about everything that I was doing. I had to formalize my thoughts so that I could pass it to people.

And I think throughout the years I’ve been doing lots of changes. Maybe a big one was during Corona. Cause when Corona came and then we suddenly switched to online. I didn’t want my students to be in front of the camera as a compulsory activity, you know, because people had their own challenges. So I decided to open my lecture notes. And I remember I just got a teaching assistant here. And then I said, those are my lecture notes. Just make it a little bit more beautiful. Publish it in a website. And that’s how we are gonna do the course. People can read my lecture notes.

And to my surprise, suddenly, I started to get emails from people from other universities. Hey, I’m reading your lecture notes. This is very cool. Do you have exercises that I can use? Do you have slides? And then same thing in year number two. And then this has started to grow. And at some point I said, well, maybe I need to turn this into a book. And I contacted Manning. I submitted a proposal. They accepted it. My book went through their very thorough review. So the book just got way better.

And I think in the end, this teaching award that I got, so Delft has an award every year that they give to teachers that are doing something impactful. And this is based on the surveys from students and etc. I think a lot of it was because of this first switch from instead of me lecturing, I just gave them a book that is easy to read, that is very practical. And creating the book itself. I think students liked very much the content of the course. So my course is far from those theoretical courses you usually see about software testing that you never see source code. It’s more practical. So all of this combined gives me these awards that I’m actually super proud of.

And even today, so I’m giving this course right now, actually, in this quarter. Students are reading my book. And again, the feedback has been super nice. One of the students came to me this year and said, you know what? This is the first book in my bachelor so far that I read cover to cover. Or back to cover. So yeah, that’s the story.

Henry Suryawirawan: Wow! Thank you for sharing such a beautiful story. And I think, yeah, it always gives you a very proud moment to hear about such impact, right? Especially when someone reads your book end-to-end. Especially a technical book. I must admit myself, I rarely finish technical books end-to-end. Because there are hundreds of pages, sometimes it's dry. But looking at your book, I can see there are practical things. There is sample code. There are things that guide people to actually understand your thought process.

[00:09:33] An Effective Developer is an Effective Tester

Henry Suryawirawan: Which is a good segue for us to start talking about your book “Effective Software Testing”. I think one key sentence that I picked up when I read your book is that you mentioned that to be an effective developer, you must become an effective software tester. In the practical world, developer and tester are sometimes two different roles. So tell us more about this. Why do you say that? To become an effective software developer, you need to become an effective software tester.

Mauricio Aniche: Thanks for the question. I think it's because as a developer, it's your responsibility to make sure that what you do works. And the only way to do this is by writing tests that prove to you and to your team that your code works. So indeed, I even worked in environments like this, where I would just focus on coding and then I would send my software to another company, this company would do the testing for me, they would send me a report, I would fix the bugs, so on and so forth. And that's just not productive. The back and forth between these two teams is too big. And I think, again, it's your responsibility to make sure that things work. And automated testing is just such an easy and cheap way of doing it. And that's why I say in the book that an effective developer is someone who effectively tests what they do.

[00:10:43] Reasons for Writing Automated Tests

Henry Suryawirawan: Right. And I think, I mean, I don't know. I was a developer previously. Sometimes developers can be a very confident type of person, right? So we write code. We test a little bit during local development. We think it works. We always think it works. Many developers probably do not enjoy writing tests for some reason. Maybe apart from unit tests sometimes, because they're very close to their workflow.

So how would you actually invite more software developers to actually have more passion or energy to start writing tests? If they have the perception that, yeah, I’m super confident of my code. I don’t wanna spend more time to write automated tests and things like that.

Mauricio Aniche: Yeah, that’s a very interesting question. So the first part was, you mentioned about developers being confident, right? And what I always say is you never know. You don’t know how many times a day you make a mistake. So you program something wrong, and you only know this once you have tests. Because if you don’t have tests, everything that you code looks perfect, right? You just believe it works. As soon as you have tests, then you see, you know, oh my God, I’m breaking my test 50 times a day. And then you quickly realize how much you need them, because we break stuff all the time.

Now, how to convince developers to write tests? That's a very deep question. I think there are many angles to this. One is to show them and to have them practice, so that they don't see writing tests as something that is a burden. They just get used to it. They get proficient with writing tests, not only with the testing frameworks, but also at writing code that is easy to test. You look at a program and the test cases emerge in your head, and all you need to do is to code them in your programming language. So there's this aspect of just practising and getting better at testing itself.

The second one, if you're in a very large and complex software system, things are too complex, right? And then for you to have this pleasure back in writing tests, you need to make sure it's easy to write tests. And this means as a company or as a team, you have to invest in a small infrastructure that facilitates the act of writing tests. I was discussing this yesterday, actually, not here at the company, but at another university where I was yesterday. And I was saying, if you're working on a very complex software system, to test something, maybe you need to instantiate 10 entities or have data in 30 different tables in your database. And that's just because your business is complex. And you have to have something in there that makes this easy for you, so that in a couple of lines of code, you can set up the scenario you wanna test and then you can do the test.

So those are the two perspectives I see. One is, again, getting good at testing itself, and the second one is having the proper platform so that you can write the tests.

Henry Suryawirawan: Totally makes sense to me. So from my point of view, also, sometimes we also need to, maybe, bring back what you mentioned at the beginning about pride, right? So how can you tell that your code works now and also in the future? There's no other way to do it, other than having some automated tests that can actually prove that your work always passes the test cases, right? And I think many developers also think writing more code means like doubling their effort, right? So I think this perspective must be changed as well, so that you don't finish writing the code if you don't have the test. Maybe just some perspective from me.

[00:13:45] Systematic Tester

Henry Suryawirawan: Let's move to the thing that you mentioned in the book, right? So you mentioned effective, and there's another keyword that you mention in the book, which is systematic. So I find these two things very interesting, the way you explain them in the book. Maybe you can explain here as well. What would be your advice for people to be more effective software testers and also to be systematic in coming up with those tests?

Mauricio Aniche: Yeah, indeed. So I make a very strong point that maybe we can be a little bit more systematic when we approach testing. We wrote a paper a couple of years ago, and in this paper we asked developers to write tests. So we gave them a program, they wrote tests, and we observed them, and they were thinking aloud so that we could see how they were coming up with test cases. And we observed a couple of things. Maybe the most important one for this conversation is that the developers were never systematic. They were always, you know, just following their hearts. They would look at the code and think, hmm, this feels like the next thing I wanna test. They would go and write the test, and then they would repeat: hmm, what is the next thing I wanna test? So they would just follow their hearts and follow their experience. And feelings are very important when it comes to testing, but it also opens up space for mistakes. Maybe you're not having a good day and then you forget something.

Second, if you're always using all your brainpower to come up with test cases, even the basic ones, you're not saving energy to think of complex cases. So when you're more systematic in terms of testing, that means, you know, you just follow some sort of cake recipe that helps you to come up with a bunch of test cases that are quite easy to see; you can put them on a checklist, basically. And then it leaves your brainpower free to focus on test cases that really require smart humans thinking about them.

And the fun part is there are lots of techniques like this. If you look at academic books on testing, even the books from the 80s, their goal back then was to come up with a recipe that teaches you how to write perfect tests. And there are lots of good ideas there. And you can think of, I’m not gonna give a tutorial here, of course, but you can think of basic things like for example, if your method receives a list as an input, there are a bunch of interesting cases that always makes sense. So for example, what happens if the list is empty? What happens if the list is null, if there’s null in your programming language? What happens if the list has just one element, right? Because sometimes your algorithm changes if there are multiple elements or just one. So there are a couple of basic stuff that you don’t even have to think, and you can write those tests.

And they are usually good enough to get you to a very high coverage already. And then leaving space for you to then focus on boundary testing, you know, corner cases, and this type of stuff. And being systematic is something that you see in other engineering fields, right? And I think I said in my book, there's this very famous book called The Checklist Manifesto. And there's a story there, backed up by research, that shows that medical doctors who use a checklist before a surgery make fewer mistakes than medical doctors who don't. And you don't have to be ashamed of following a checklist. It's good.

And I think that’s the point of my book when I say we can be a little bit more systematic. Of course, we don’t have to be systematic all the time. It’s just too expensive, maybe too complex to be systematic all the time. But identify, you know, like, I’m testing a complex method here, let me be a little bit more systematic. So that’s the whole idea of being systematic when it comes to testing.

Henry Suryawirawan: Yeah, I think your point is right. Sometimes we have typical use cases, just like lists, or maybe you have an API, with failure cases, success cases, so all of this can be made into a checklist. I've heard about The Checklist Manifesto as well. I think the author's name is Atul Gawande.

And I think becoming systematic in solving this kind of testing problem is really crucial. Because in your book, you mentioned that if you are systematic, you can assign any developer to the same problem, and they will most likely come up with the same test suite, right? How cool is that? Because sometimes we feel that we only need certain testers to come up with a comprehensive number of tests. But I think the systematic way is actually to have any developer working on the same problem, and they can come up with the same test suite. That, I think, is a really, really powerful concept.

And I think also becoming effective means that you write the right tests. Because I think it never ends, right? You can come up with any number of tests as long as you can produce the test inputs, right? And I think coming up with the right amount of tests is very important.

[00:17:50] Testing Pyramid

Henry Suryawirawan: Which brings us to some of the techniques later on. One of the techniques that is commonly, I don’t know, discussed in the software world is about test pyramid. Maybe let’s start from there first. Do you think this is a systematic way to think about producing tests? And what is your view about this testing pyramid?

Mauricio Aniche: I think the testing pyramid helps you to be pragmatic when writing those tests. Because one thing is to come up with the test cases, right? So those are the inputs I wanna give to my program, and this is the expected output. The other one is, writing this in code and making sure that this works nicely in your, let’s say, development process, in your CI, and etc.

So for example, if you just write end-to-end tests, maybe your test suite will cost you too much to execute, and then at some point this becomes a bottleneck, right? So I think the testing pyramid gives this pragmatic point of view on, hey, we’re gonna have to automate this test, and we’re gonna have to maintain those tests, and we’re gonna run them the whole day.

And I like the idea of the testing pyramid very much. And the idea is that, at the bottom, you have unit tests, right? And why is it at the bottom? Because they are usually cheaper to write. They are cheaper to run. They tend to be more robust. They tend not to really fail. They’re not flaky, in general. And then you go up the pyramid, you have integration and end-to-end tests, and you still have to do them, right? You have to. But you do them a little bit less. Maybe you focus a little bit more in a unit test. Maybe it’s okay to maybe have some duplicated tests here, if you’re unsure. Are you covering the same thing or not? I think it’s okay to make this sort of mistakes in the unit test, while with the integration and end-to-end test, you wanna prioritize a little bit more. And I like this idea of the pyramids.

Although, you know, in the “Software Engineering at Google” book, I think from 2018, I think they bring a new perspective that I also appreciate very much that is, forget about unit testing and integration testing, right? Just separate your test suite between, is this one fast enough that I can actually run it during a pre-merge together with your merge request or pull request, or is it super slow that I will have to run in a separate machine, and etc. So separating the test suite in fast and slow makes a lot of sense to me. Because if you remember back in the 2000s, our big discussions were, what is a unit test? If I’m testing two classes together, is this two unit tests or not, right? And I think in 2023, we know that this doesn’t matter at all, as long as it runs fast and it gives you proper feedback, that’s good.

So I like this new way of seeing things. And it brings way less space for these type of discussions, right? Because I feel no one can argue that a slow test is better than a fast test. You can argue that maybe you like integration tests more than unit tests, but no one can say a slow test is better than a fast test, right? So I like this new way of phrasing this.

[00:20:25] Unit vs Integration Test

Henry Suryawirawan: Yeah, let's go to the point you mentioned just now, because there are some discussions and debates about people preferring integration tests rather than unit tests. Some think that unit tests, because you probably mock most of the things, are less valuable. So what is your view about this? I know it's probably a bit of a hot topic to discuss, but what is your view on this?

Mauricio Aniche: I think I have a pretty clear opinion on this, right? And I think, mocking can be dangerous for sure. Because if you mock too much, then at the end you’re not testing anything. So you wanna test as much as possible real behavior or behavior that will look like in production. But at the same time, you don’t wanna be slowed down by things that you don’t control so much. For example, let’s say you’re writing a piece of code that makes a call to web service that is developed by another team. And suddenly for you to really run this test, you need that web service available to you, right? And this becomes very quickly a pain. So in this case, the mocking makes a lot of sense. And the web service, you don’t control so much.

Let’s give an example of something you control. A database. Should I mock my database or not? To be honest, I believe, my opinion is that you should not mock your database, because today you can make tests with database so fast that you can actually run them during your pre-merge time and you get feedback very fast. So why would you mock something that is not really preventing you from writing the test in an easy way and to run it fast? So I think that’s the trade off that needs to happen.

I think it’s not about writing integration test or unit test, but it’s about writing lots of tests that are fast in the end, and that you can have full control over. And those are the things you should mock, right? Things that you don’t have control over. And then, of course, if you go back to the testing pyramid. Let’s say you’re mocking this web service. So you can write lots of unit tests, you know, super fast tests that just mock this web service, but you can still have one or two or three integration tests that are a bit more expensive that make real calls to this web service. So you see that things work when you put all the components together.

But you know, something that I always say is, you should not write the integration test that could have been a unit test, right? You don’t need an integration test to exercise an if statement in your code. Just write a unit test for it. Leave integration test for what it really pays off, that is to find integration bugs.

Henry Suryawirawan: I really love your pragmatic approach to answering this question. So I fully agree as well in terms of control, right? So, first, think about do you control that? And especially if not just the external service or infrastructure related. Time is also probably one thing that you wanna consider, right? Time is something you cannot control by itself, but you can actually create like a fake system or mock it, sometimes. So I really love the way you explain about this.

[00:22:55] Specification-Based Testing

Henry Suryawirawan: So let's move on to, maybe, some of the other techniques, right? We have talked about unit tests, integration tests, end-to-end tests. In your book you actually mention a couple of testing techniques. So let's start with the first one, which is called specification-based testing. And it's actually very interesting the way you start by explaining this. I would assume some people would start with unit tests, but you start with specification-based testing. So maybe tell us more, what is specification-based testing? Why do you prioritize it as the first?

Mauricio Aniche: Cool. Yeah. So what is testing, right? Testing is about trying to find bugs. And how do you do this? You compare what you expect the program to do with what the program actually does. And for you to do this, you need to know what the program should do. Where is this information? In the requirements. And I'm not saying a Word document that contains the requirements, or UML; it doesn't matter. The requirements can be in your mind. At some point, there's this notion of what the program should do. And specification-based techniques are the ones that help you look at the requirements and identify interesting test cases.

This is not new. Again, this dates from the 80s. And we've become very good at coming up with techniques. Basically, what I want to show in my book is, if you look at the requirements, it's quite easy to see: these are the inputs of the program, this is roughly what the program needs to produce as output, these are the different paths that the program may take, and so on and so forth. And you can look at all of this and then get inspiration to write your tests.

And the one technique I show in my book is actually a very basic one. It basically says: look at the inputs of your program, so the method you're testing or the component, whatever. What are the inputs? Separate them one by one. Let's say your function receives three inputs: an integer, a string that represents whatever, and a list. Look at them separately and explore their domain. Let's say this one is a string. What are the possible strings that can come here? Are there any special strings that would make the program do something different? You do this per input. And why do you do this separately? Because it's just much easier for our brains to process small things, right? So you do each one of them separately, then you try to look at all of them together. You look for other possible corner cases that might be explicit in the documentation or the requirements. And then, and only then, you come up with test cases. That is sort of the ideal of specification-based testing: you start your tests from what the program should do and not from the implementation.

And I actually like this very much, because as a developer, the person writing the code that I'm about to test, it gives me the opportunity to disconnect from my implementation and really focus: hey, this is the input that I'm gonna pass to the program, this is the expected output. So this is my suggestion. If you really wanna be a little bit more systematic when it comes to testing, the first step is to start creating test cases from the specs, from the requirements.
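As an illustration of deriving tests from the spec rather than from the code, here is a minimal sketch in JUnit 5. The shippingCost method and its requirement ("shipping is free from 100.00 upwards, otherwise it costs 4.99, and a negative amount is invalid") are hypothetical; each test corresponds to one partition of the input's domain.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class ShippingCostTest {

    static BigDecimal shippingCost(BigDecimal orderTotal) {
        if (orderTotal.signum() < 0) throw new IllegalArgumentException("negative total");
        return orderTotal.compareTo(new BigDecimal("100.00")) >= 0
                ? BigDecimal.ZERO
                : new BigDecimal("4.99");
    }

    // Domain of orderTotal: negative (invalid), below the free-shipping threshold,
    // exactly at the threshold, above the threshold.
    @Test void negativeTotalIsRejected() {
        assertThrows(IllegalArgumentException.class,
                () -> shippingCost(new BigDecimal("-1.00")));
    }

    @Test void smallOrderPaysShipping() {
        assertEquals(new BigDecimal("4.99"), shippingCost(new BigDecimal("20.00")));
    }

    @Test void orderAtThresholdShipsForFree() {
        assertEquals(BigDecimal.ZERO, shippingCost(new BigDecimal("100.00")));
    }

    @Test void largeOrderShipsForFree() {
        assertEquals(BigDecimal.ZERO, shippingCost(new BigDecimal("250.00")));
    }
}
```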

[00:25:34] Behavior-Driven Design

Henry Suryawirawan: So when we talk about specification-based testing, I think people often associate it with BDD, right? Behavior-Driven Design. And sometimes it's like a Gherkin type of language and things like that. What's your view on this? Do you always have to come up with a BDD type of specification test, or are there other types of tests that we can also use?

Mauricio Aniche: Yeah, very good question. And maybe that's a difference between my book and other books. Usually when people talk about specification-based testing, they are focusing on a very coarse-grained feature: from the point of view of the user, what should I test? In my book, I show that you can do this at the method level, for example, or at the class level, because as a developer, those are sort of the units you're always working with, right? And I think, to me, BDD makes more sense when you're looking at the big picture and looking at the whole functionality really from the point of view of the final user. And the specification-based techniques, you can apply to both.

Now, should you write tests in a BDD style? I think that's really a matter of taste. I am personally not a BDD fan, so I don't write tests using the BDD style. I don't use Cucumber. That's just not my thing. But I see where this comes from, right? And I don't think, in practice, this is too different. It's really telling you to focus on the specs, on the behavior of the program, and to come up with the test cases from there. Tooling is a matter of personal taste in the end, right? As long as you're looking at the behavior of the program, what you expect from it, I think this is a great step towards good testing.

[00:27:01] Boundary Testing

Henry Suryawirawan: Right. You also mentioned just now in your explanation about corner cases, right? Sometimes when you have requirements, most often it's just the normal case: I expect the program to work like this. So, first of all, who will come up with these corner cases? And how would you actually invite people to come up with more creative corner cases? I know you mentioned boundary testing. Is that also one way to actually generate corner cases? So tell us more about this.

Mauricio Aniche: Yes, indeed. So corner cases are the cool tests, right? And empirical research actually shows that bugs love boundaries. Because we are very good at implementing happy paths, the bugs start to cluster on the things that we're not so good at, that is, handling corner cases. And coming up with corner cases is very hard, because we develop complex software systems. But one way to get started is to observe your program. You look at the inputs and how these inputs change the outputs. And boundary testing is a perfect technique for this.

What does boundary mean? Imagine you have a very simple program that has an if: if x is greater than five, do something; otherwise, do something else. So this if divides the execution of the program. If the input is four, four is not greater than five, so the program will do something. Then the next one, five. Five is not greater than five, so the program will do the same thing. Now, six. Six is greater than five, and now the program does something else. There was a small change in the input from five to six, right? A very small change in the integer value, but the program responded with something completely different. So this is a boundary.

And this is precisely where you should write a test, because we love to put bugs there, right? And as a developer, that makes sense, because it's very easy to confuse a greater-than with a greater-than-or-equals-to. The idea of boundary testing is to look at your program, look at the inputs and how the inputs affect the outputs, and look for those moments where a small change in the input changes the output.

There's a paper from 1994 that is cited in my book, that explains this in a more mathematical way. It was really written by a computer scientist. And it shows that if you write tests like this, you are more likely to reveal a bug. And I think this is something very easy to change in the behavior of the developer, right? It's a very quick win that will give you better tests. Because as a developer, you look at an if, you write a test for the true branch and for the false branch. But then we usually pick random numbers that exercise the true branch and the false branch. Instead of picking random numbers, pick the numbers that are close to the boundary. These two tests will be way stronger than two tests that exercise the program in the same way, but very far from the boundary. That's a very easy change.
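A minimal sketch of that advice in JUnit 5, for the "x greater than five" example; the classify method is made up. The two tests sit on either side of the boundary, which is exactly where a greater-than versus greater-than-or-equals mistake would show up.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class BoundaryTest {

    static String classify(int x) {
        return x > 5 ? "large" : "small";
    }

    @Test void fiveIsStillSmall() {   // on the boundary: catches a > vs >= mix-up
        assertEquals("small", classify(5));
    }

    @Test void sixIsAlreadyLarge() {  // just past the boundary
        assertEquals("large", classify(6));
    }
}
```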

And funny enough, I get emails from people reading my book, and this part of the book is the one that people like the most. People say, oh my God, this is so simple, and it really made my tests so much better, because it's also easy. So think of boundary testing, for sure. And of course, in this example, it's very easy to find the boundaries, but in more complex programs you are going to have to spend some time identifying the boundaries. But that's just part of the exercise that we have to do.

Henry Suryawirawan: Wow! Thanks for sharing this technique, this tip. So I really love the way you phrase it, right? Bugs love the boundaries. So for people here, maybe that’s also a lesson for all of us, right? So when you think of test cases, now start to think in terms of boundary first. Then you can pick maybe some random inputs later on, after you have covered the boundaries.

[00:30:16] Structural Testing & Code Coverage

Henry Suryawirawan: So let’s move on to the next technique, which you categorize as structural testing. So maybe tell us more. What is structural testing? How can we use it to become an effective software tester?

Mauricio Aniche: So structural testing is sort of the academic name for using code coverage in practice. The idea of structural testing is: let's use the structure of the program as a source of inspiration to write tests. Before, with specification-based testing, we were looking at the requirements, maybe a text document. Now we're looking at the implementation and trying to come up with ideas.

And how do you do this? Well, you look at an if statement. And then you think, well, I wanna write a test that actually exercises that if statement. And maybe you can do this line by line, right? And then once you cover all the lines, you're done with your structural testing, because you tested the structure the way you wanted. Your coverage tools give you this information; you can see line coverage, for example. Or you can be a little bit more thorough and say, well, I wanna cover all the branches: whenever there's an if, I wanna make sure that there's a test for the true branch and for the false branch. Or you can go deeper. An if statement can have multiple conditions, and I wanna exercise every condition as true and as false. So you can keep going crazy about this, but this is sort of structural testing.
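To make branch versus condition coverage concrete, here is a hedged sketch; the discount method and its rule are hypothetical. Two tests are enough to cover both branches of the if, but exercising each condition individually requires one more.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountTest {

    static int discount(boolean goldCustomer, int items) {
        if (goldCustomer && items > 10) {   // two conditions inside one branch
            return 20;
        }
        return 0;
    }

    // Branch coverage: one test where the whole if is true, one where it is false.
    @Test void goldCustomerWithManyItemsGetsDiscount() {
        assertEquals(20, discount(true, 11));
    }

    @Test void regularCustomerGetsNoDiscount() {
        assertEquals(0, discount(false, 11));
    }

    // Going deeper (condition coverage): also make the second condition false on its own.
    @Test void goldCustomerWithFewItemsGetsNoDiscount() {
        assertEquals(0, discount(true, 10));
    }
}
```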

And I think the main point in my book is that there's this hate in industry about code coverage, right? There are many reasons for this. One, you can cheat the coverage number: it's very easy to write a test that covers a lot of stuff, but the test is not so good. If you're using coverage as a target number, so let's say your company says 90%, otherwise we don't accept your merge request, maybe that forces you to write useless tests just so you get to 90%. Also, if you get to 100% coverage, that doesn't mean your code is perfect, right? And it's quite expensive to get to 100% coverage. This is why people usually hate it.

But I think people usually hate code coverage because they are focusing more on the number rather than on the output, the information that coverage brings to you. And in my book, I show that coverage should complement specification-based testing. You write the tests based on the specification, and then you do maybe boundary testing. And then the question is, am I done? Well, you can triangulate this and look at coverage, right? And then you see: did I cover everything? Maybe you forgot to cover something, and then the question is, why? Was it because I simply forgot? Was it because there was a mismatch between the requirements and the implementation?

You reflect on it. You either write a test, or you decide not to test that part, and that's also fine, right? But coverage helped you reflect on whether you're done writing tests. And if you use coverage with this purpose, notice that we are not talking about numbers anymore. We're talking more about whether the tests I wrote are good or not, and coverage is giving me insights about it. I think that's how we should use coverage.

But then, the million-dollar question is: okay, coverage is cool, and structural testing will maybe help me increase my coverage, because I'm gonna look at an instruction that I didn't test and test it now. But is there a correlation between high coverage numbers and the effectiveness of the test suite? A test suite that has very high coverage, is it more likely to find bugs if I have a bug in my code? And this is a perfect question for academics to answer, right?

And then you see lots of papers, cited in my book, from the 1990s up to 2010 or so. And a lot of those papers show a correlation between coverage and the effectiveness of the test suite. And that makes sense, right? Because the more code your test suite covers, the more likely it is to find a bug if you introduce one. Of course, different papers show different levels of correlation. Some show a strong correlation, others a weaker one, but the correlation does exist.

And I think the lesson I get from these papers is: 100% coverage may not mean a lot, because once you're there, your tests are very good, but that doesn't mean they're perfect. And when you're at 100% coverage, coverage doesn't give you more useful information, because it's all covered. So if you're at that extreme, coverage is not so useful anymore and doesn't tell you much. But if you're on the other side, that is, you have very low coverage, let's say 10%, then that means a lot, right? That means your test suite is maybe poor. Maybe there's something you can improve there.

So to summarize, coverage can be used as a way to complement your tests, to help you see if your test suite is good enough or not. And coverage helps you identify poorly tested areas of the code base: hey, this is not covered, so maybe we should write a bunch of tests for it. I don't have to go to 100% coverage, but I have to write some tests for it.

Henry Suryawirawan: Wow! I really love another pragmatic piece of advice on this, right? Because people always debate about code coverage. I know some companies that put code coverage as the so-called build gate: if you don't pass a certain threshold, we'll fail your build. So I think you remind us that code coverage should be a complementary thing. It should not be the main thing, right? And I really love the way you mentioned that 100% doesn't mean you will not find any bugs, although there's a correlation, of course. But a low coverage actually means that there is room for improvement.

And I also like one particular statement that you made about how we should use coverage criteria. You say in your book that all code should be covered until proven otherwise, right? So I think that's also another good guideline. Sometimes we can leave some coverage behind, but you should have a good reason for that. So I think that's a very good thing.

[00:35:31] Mutation Testing

Henry Suryawirawan: Another technique that is sometimes used for this is actually called mutation testing. This is also in the structural testing kind of territory. So tell us more, for people who may not have heard the term mutation testing: what is it, and how does it help to provide effective software testing as well?

Mauricio Aniche: Sure. So, how do you know if your test suite is good or not? You can look at coverage, for sure. But maybe you get to a lot of coverage, yet your assertions are poor, right? And then there's a bug, but your assertions are not catching that bug, even though you're covering the code, so your coverage is very high. Mutation testing helps you identify this gap. And what is the idea? The idea is: I'm gonna create mutants of my code.

So imagine I just get your production code and, on purpose, insert a bug: I change a greater-than to a smaller-than, right? Let's say I do this. If I run your tests, you must have a test that is failing, because I just introduced a bug. If that happens, that means, okay, your tests can actually kill that mutant. So your tests are doing some good stuff. But if I can mutate your code, so I introduce a bug on purpose, and your tests are still green: hey, I just found a case, a possible bug that someone can introduce, that you'd never know about with your test suite. So that is the idea of mutation testing.

And what mutation testing tools do for you is automate this process. They change your code, they run your tests, they see if the tests catch the bug, and they repeat this in a structured way. And they give you a beautiful report in the end. If you're in the Java world, for example, like I am, you have PITest, which is an open source tool that does that for you. And I love the idea of mutation testing. It just makes a lot of sense to understand if your tests are good or not.
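Here is a hedged illustration of the concept only (not of PITest's internals); the isAdult method is invented. The mutation is shown as a comment: if a tool flipped >= to >, the test that uses exactly 18 would start failing and thereby kill that mutant, while a test with, say, 30 would not.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class EligibilityTest {

    static boolean isAdult(int age) {
        return age >= 18;            // a mutant would change this to: age > 18
    }

    @Test void eighteenIsAdult() {   // fails on that mutant, so the mutant is killed
        assertTrue(isAdult(18));
    }

    @Test void seventeenIsNotAdult() {
        assertFalse(isAdult(17));
    }
}
```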

I think, in practice, a lot of companies are not there yet to benefit from mutation testing. You start to benefit from mutation testing when you have a lot of tests and very good test suites. If your test suite is very poor, well, it doesn't kill any mutants, so why would you run mutants in the first place, right? Coverage is good enough for you at that moment. So I feel like a lot of companies are not there yet in terms of maturity to use mutants. But if you are, so if you have a beautiful test suite, then put mutants in your pipeline.

Tools are evolving. PITest is a very nice tool, and there are ways for you to reduce the time it takes, because it's an expensive process, right? You have to mutate the code and run the test suite again, and the tool does this a million times. So those tools are getting better and better, for example by running only the tests that are really relevant for each mutant, and so on.

There's a beautiful paper that I cite in my book called, I think, "Mutation Testing at Google". It was published in an academic conference, but in the industry track. And that paper describes the experience of Google in trying to apply mutation testing at scale. Mutation testing, by the way, also dates from the 70s, right? So the idea is super old, but now that we have very good hardware to make it work, and the tools are getting better and better, I think industry is adopting it more, which is really cool.

Henry Suryawirawan: I loved your analogy using mutants to explain how mutation testing works. I think it's true that if you have very low coverage, or kind of a poor test suite, you will not be able to kill the mutant, so to speak, because your tests will always pass, because you don't cover enough.

[00:38:45] Property Testing

Henry Suryawirawan: Another technique that you mentioned in your book is called property testing. How does this differ and what is property testing?

Mauricio Aniche: Yeah. So property-based testing is a way of writing a test that is different from the way we write normal tests. Usually when we write a normal test, which I call an example-based test in the book, you know in your mind what sort of branch you wanna exercise in the code or what test case you wanna create based on the requirements. And what you do is think of a concrete input that will exercise the program in the way you want. So in the example that I gave, x greater than five, it can be any number greater than five: 6, 7, 8, right? And you pick just one, because you're pragmatic and because you cannot do much more than that. Maybe you can do two or three or four, but you cannot do 50 by yourself by hand; it's just too expensive. So this is example-based testing, because you're testing by example. You pick one example.

In property-based testing, what you do is try to describe a property of the program and let the tool come up with the inputs for you. So let's say in this program, for any number that is greater than five, you know that the output should always be a positive number. So for inputs x greater than five, the program always outputs positive numbers. You can write a property, and then you say: create any number that is greater than five and just assert that the number that comes out is positive. And then the tool creates a lot of inputs for you.

So for example, in the Java world, you have jqwik. When you run the test, jqwik will come up with, say, 1000 inputs for your function and run it with all of them. So in a way, you're exploring the domain of that input much more, right? Because you're not trying one example, you're trying many. The idea is very cool. It is, of course, much harder to write such a test, because you have to stop thinking of the specific functionality you wanna test and think more about what the properties of the program are that you wanna exercise. But once you can do this, it's very powerful, because you're just testing a lot.
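A minimal sketch of such a property with jqwik (assuming the net.jqwik:jqwik dependency on the classpath); the process function is made up just to have something to test. The property encodes "for any input greater than five, the output is positive" and lets the tool generate the inputs.

```java
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;
import net.jqwik.api.constraints.IntRange;

class ProcessProperties {

    // Made-up implementation of the function under test.
    static int process(int x) {
        return x > 5 ? x * 2 : -1;
    }

    @Property
    boolean anyInputAboveFiveGivesAPositiveResult(
            @ForAll @IntRange(min = 6, max = 1_000_000) int x) {
        // jqwik generates many values of x in the range; the property must hold for all.
        return process(x) > 0;
    }
}
```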

So maybe another example so that you can see the usefulness of it. Imagine you're implementing some data structure yourself, let's say a set. You cannot have repeated elements in a set. And you can write a property-based test that just tries to insert random elements into the set, making sure some of them are repeated. And it doesn't matter in which order you insert the elements; in the end, the property is that the set must only have unique elements. You write a test like this, and suddenly you're testing dozens, hundreds of combinations of inserts: repeated stuff, new stuff, repeated stuff again. Something that you would never be able to do by hand. So that's how powerful those things can be.
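And a hedged sketch of that set example, again with jqwik; MySimpleSet is a deliberately hand-rolled, hypothetical set implementation. The property says that no matter what we insert, and in what order, the set never contains duplicates.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

import net.jqwik.api.ForAll;
import net.jqwik.api.Property;
import net.jqwik.api.constraints.Size;

class MySimpleSetProperties {

    // Hypothetical hand-rolled set under test.
    static class MySimpleSet {
        private final List<Integer> elements = new ArrayList<>();
        void add(int value) {
            if (!elements.contains(value)) elements.add(value);
        }
        List<Integer> elements() { return elements; }
    }

    @Property
    boolean neverContainsDuplicates(@ForAll @Size(max = 50) List<Integer> inputs) {
        MySimpleSet set = new MySimpleSet();
        inputs.forEach(set::add);
        // If all elements are unique, copying them into a HashSet loses nothing.
        return new HashSet<>(set.elements()).size() == set.elements().size();
    }
}
```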

Now, in industry, building information systems, which is what a lot of us do, I think you have fewer opportunities to use property-based testing. So I still feel like example-based testing is the go-to approach. But consider property-based testing, especially for situations like this, where sometimes you just feel unsure whether this one example is enough, you need more. And then property-based testing can help you.

Henry Suryawirawan: Yeah, thanks for the thorough explanation about property-based testing. I personally haven't done mutation testing or property testing myself, so I think it'll be cool to introduce them sometime in my test suites. For people who also haven't experienced them, you can check some of the papers that Mauricio has mentioned, or more materials from his book. Make sure to check those out as well.

[00:42:00] Test-Driven Development

Henry Suryawirawan: Talking about testing, I think it would be a miss if we didn't discuss test-driven development, or TDD. What's your view about this workflow? Are you a proponent, or do you always do TDD? So maybe tell us more about that.

Mauricio Aniche: That's a tough question, Henry, because usually, you know, you have very passionate people about test-driven development, right? And sometimes I joke with my friends that in my Twitter bubble, I just post something that says that TDD is not that perfect, and then, yeah, a lot of backlash, right?

And I'm a big fan of TDD. I even wrote a book about TDD in 2012. It's in Brazilian Portuguese, so maybe not accessible for everyone listening here. And that was because I was doing a lot of TDD myself, and I really felt I was just being a better developer doing TDD. By 2012, when I wrote the book, I had been doing TDD for a couple of years. Now, in 2023, I think I do TDD way less. And I feel that's just because I found a way to get the benefits of TDD without having to do TDD.

And what are these benefits, right? What am I talking about? I feel like the big benefit of TDD for me is that we work in very small steps, and we make small, steady progress towards the bigger feature. Before knowing TDD, someone would give me a feature to implement and I would be all over the place trying to write the full algorithm in one go. And that would just complicate my life. I would code for an entire day, with a lot of frustration, because you bump into barriers, you break stuff that was already working, and so on and so forth. And then, after a few days of work, when it came to writing tests, I just wanted to get the feature done; I didn't wanna do this anymore. And with TDD, I was like, oh, I cannot do TDD if I work on such a big chunk of code. So it forced me to see programming as an act of writing small things and combining these small things one by one to build up bigger behavior. I think that's sort of the big difference between TDD and non-TDD.

And if you can incorporate this into your development practice, I think it's okay if you don't start with the tests. So today, for example, in a lot of cases, I actually start with the production code, but my coding sessions are very small. I write a little bit of production code and then a little bit of testing. I experiment. Sometimes I delete the code and start again, in small steps. I think if you find your way to work in small and steady steps, you get the benefit of TDD.

And this is actually what more recent research shows about test-driven development. If you look at the whole body of knowledge on TDD, you see papers. And what are the results of these papers, in a nutshell? Qualitatively, if you ask developers who are doing TDD, do you like TDD? The answer will be yes. I am actually one of the authors of these papers; I wrote a paper that did qualitative research on this, and that was sort of the feedback: we like TDD. Quantitatively, if you compare the quality of the test suites with people not doing TDD, the differences are negligible. Very small. So TDD by itself doesn't do magic. More recent research actually shows that the benefits come from those small steps and not from writing the test before.

That being said, do I recommend you to do TDD? Definitely, yes. Especially if you've never done it. Because if you've never done it, odds are you're not used to working on small things; you're just used to working on big things. TDD is the best teacher you can have when it comes to learning how to program in small bits. So do TDD, do a lot of it. Once you internalize the ideas, then it's okay. Then you don't have to do it anymore.

Henry Suryawirawan: Wow! It's like training wheels, so to speak, right? You start using them, and once you kind of master it, like riding your bicycle, you can take them off and choose when to use them and when not to. And I like that you mentioned this empirical research. When I read it in your book, I was intrigued. The research doesn't actually show that TDD produces much better tests than not doing TDD. So I think that's really cool to know, and I'll make sure to put it in the show notes for people who are interested and passionate about TDD. It's not saying that TDD is not good, right? But I think it's the mindset of working with small things that is very, very important.

[00:46:03] Test Maintainability

Henry Suryawirawan: So we have a few minutes left before we go to the technical leadership wisdom. I have one question, maybe asking for tips from you, for people to become better testers in terms of quality and maintainability. Are there a few tips that you can give us here so that we can improve?

Mauricio Aniche: That's a very good question, because if you really write lots of tests, you're gonna have lots of test code, and you're gonna have to maintain that as well, not only your production code, right? So I think all the love that you put into your production code, you also have to put into your test code.

And to me, in the test code, if you have to focus on one thing... In my book I mention lots of ideas, good practices that you can use to make your test code beautiful and so on. But if I can just talk about one, it is this: the part of your test where you create the data, the input that you're gonna pass to the method or the class you wanna test, that part has to be crystal clear.

Because if you're working on a very complex information system, odds are that the complexity of the test will be in that part. So that part needs to be very easy to read. It has to be very easy to evolve, because the entities in your domain are evolving all the time, so it has to be easy to make a change in the production code without breaking the test code. And it has to be easy for you to come up with complex data inputs without having to spend 50 or 100 lines of code.

So you have to have lots of utility methods, or whatever you wanna call them, that help you build data. Test data builders is the famous name for patterns like this. You need to invest in them. So that is my tip number one, and that's what you need to focus on: input data has to be crystal clear.
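A minimal sketch of the test data builder idea; Order, Customer, and the default values are hypothetical. The point is that a test only states the fields it cares about, while sensible defaults hide the rest, so the setup stays readable as the domain evolves.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

class OrderBuilder {

    // Hypothetical domain types, kept here so the sketch is self-contained.
    record Customer(String name, String country) { }
    record Order(Customer customer, List<String> items, BigDecimal total) { }

    // Sensible defaults: a test only overrides what matters for its scenario.
    private Customer customer = new Customer("any customer", "NL");
    private List<String> items = new ArrayList<>(List.of("any item"));
    private BigDecimal total = new BigDecimal("10.00");

    static OrderBuilder anOrder() { return new OrderBuilder(); }

    OrderBuilder forCustomerIn(String country) {
        this.customer = new Customer(customer.name(), country);
        return this;
    }

    OrderBuilder withTotal(String total) {
        this.total = new BigDecimal(total);
        return this;
    }

    Order build() { return new Order(customer, items, total); }
}

// In a test, the setup then reads as one line that states only what matters:
//   Order order = OrderBuilder.anOrder().forCustomerIn("BR").withTotal("250.00").build();
```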

Henry Suryawirawan: Yeah, I also remember reading about this in, I think, "Growing Object-Oriented Software, Guided by Tests", right? I think this is also one key technique that is mentioned in the book. I really love it, because in my experience I have seen lines of test code which are probably too complicated to understand, especially the part where you produce this test data. Initially you will probably see a lot of gibberish, like, why is this logic here? But it's actually there to produce test data. And I think it's really, really important for us to improve here.

[00:48:07] Growing Object-Oriented Software, Guided by Tests

Mauricio Aniche: And if I can tell a personal story, Henry. You mentioned "Growing Object-Oriented Software, Guided by Tests", right? Authored by Steve Freeman and Nat Pryce. And I have a very funny story about it, because in 2010 there was this workshop in Paris about test-driven development, the one and only in that series, and I had a paper there. And Steve Freeman, the author of the book, was giving a keynote. So he gave the keynote, and I was then giving a presentation of my paper. And in my paper, I was citing him a lot, so it was like, Freeman and Pryce say this in their book. At some point, Steve looked at me, because it was the tenth time I said his name in my presentation, and I said, I'm sorry, I just like your book very much. And we became friends, right?

And then when I moved to Europe, we were closer to each other. And Steve started to teach one guest lecture every year in my course in Delft. So he comes to Delft every year for my course. That book influenced me a lot. And he's in fact one of the foreword authors of my book; one of the forewords there was written by Steve. So he influenced me a lot. And this book is an amazing book. If you have the chance, read it as well.

Henry Suryawirawan: Wow, thanks for sharing this personal story. I find it really, really interesting. I think that's also one good example of how you can get connected to famous guests or famous authors. So thanks for sharing that.

[00:49:24] 3 Tech Lead Wisdom

Henry Suryawirawan: So, as I've mentioned, I have one last question that I normally ask all my guests, which is called the three technical leadership wisdom. You can think of it just like advice that you wanna give to the listeners here, maybe from your experience, from your expertise. What would be some of your wisdom that you can share here, Mauricio?

Mauricio Aniche: Three. Let's do it. So number one, focusing on this episode: you should master testing. And mastering testing means not only mastering the tools, like JUnit and Mockito and whatever tool you use, but also mastering creating good test cases. And once you really become proficient with it, it just becomes way easier and way cheaper. So you start having fewer reasons not to do it, because it's just more natural. How do you get there? By practicing, right? It's gonna hurt at the beginning, but the more you do it, the easier it'll get. So practice testing.

Advice number two, for software engineers in general, is you should never stop learning, right? I think it's very easy for us: a lot of us go through education, then we join a company and start working as developers, and suddenly we forget that we need to keep updating ourselves. I once recommended a book to a developer who had been working for two years in a row, and he said, I started to read this book and it feels so nice to learn something new. I don't know why I wasn't reading books before. I guess I was just too busy working. And that's life. That's life. We're all adults. We have responsibilities. It's very hard.

So I think you have to find time, during your work hours, to be honest. Your employer needs to be on board with you upskilling yourself. So read books. Are you interested in, I don't know, microservices? Read a book about it. Just keep studying, because there's so much new stuff going on and so many best practices.

And something that I notice among developers is that a lot of them, when they're at a big company and have these big discussions to decide which architecture to take, whether to use practice A versus B, whatever: people have intuition, but they have very little evidence to back up their explanations. If you're reading books, you have clear ways to explain why you wanna go for this and not for that. So there are so many benefits in studying and making studying part of your daily job. So work on this. Make sure you study a lot.

And advice number three: we just learned a lot from advice number two, because you're studying, so now it's time to share your knowledge. So make sure you also write about what you learn, or give talks at conferences, right? And this is very good for you, because it exposes you to other people. It forces you to formalize what's in your head; you have to put it in words. And I find that putting it in words is the best way for you to really understand how much you understand about something. Maybe that's because I have this academic background, so writing really helps me see how much I understand about something. It's good for you, because you're just getting better as a person, as a developer. And it's also good for others, because I'm pretty sure you have cool stuff to share and there will be others willing to listen to you. So please share knowledge. It's as important as learning new knowledge.

So there you go. The three pieces of advice, Henry.

Henry Suryawirawan: Wow. Really beautiful. Thanks for sharing that. After hearing our discussion about testing, I now think sharing is like a form of testing itself, right? You actually test what you learn from reading books and things like that. Maybe the best test case you can write for that learning is actually to share it, either by writing a blog, making a YouTube video, and things like that. So I'm having this testing mindset these days after hearing what you explained earlier.

So thank you so much for covering testing. I think it's really part of the skillset that developers should master more, like what you mentioned in your first wisdom. Developers need to practice more and be good at testing, because I think that really is the key to producing quality software.

So for people who love to connect with you, maybe to talk more about testing, is there a place where they can reach you online?

Mauricio Aniche: I am always on Twitter, so it's @mauricioaniche, my name and my surname. I also have a free newsletter related to my book. If you go to the website, effective-software-testing.com, you can subscribe to the newsletter. Those are the best ways to get in contact with me, for sure. You can also write to me on LinkedIn, Mauricio Aniche, you'll find me. Feel free to drop me a message there as well.

Henry Suryawirawan: Cool. So I'll make sure to put the newsletter link as well, so that people can learn, maybe every week, how to improve their testing. So thank you so much for your time, Mauricio. I hope people enjoy learning about testing, and I hope your book will also teach people a lot more about how to produce the best quality code.

Mauricio Aniche: Thank you so much Henry for the invite and I hope you like it, everyone. Bye-bye.

– End –