#172 - The Quality Mindset with Holistic and Risk-Based Testing Strategies - Mark Winteringham

 

   

“The quality is connected to the risk, and the risk is connected to the testing. If we don’t keep an eye on quality, our testing and development will drift, because we are no longer building the thing that people care about anymore.”

Mark Winteringham is a quality engineer and the author of “Testing Web APIs”. In this episode, discover how holistic, risk-based testing strategies can transform your software quality. Mark explains how to prioritize testing by understanding what users truly value and translating that into different risk-based testing strategies, such as testing API design, exploratory testing, automated testing, and acceptance test-driven design (ATDD). Mark also reveals the testing Venn diagram as a strategic roadmap for our testing. Finally, get a glimpse of Mark’s upcoming book “AI-Assisted Testing” and learn how AI will evolve the roles of testers and developers.

Listen out for:

  • Career Journey - [00:01:24]
  • Writing “Testing Web APIs” - [00:05:17]
  • Holistic Testing Strategy - [00:07:48]
  • Start With Understanding the Problem - [00:11:02]
  • Testing Venn Diagram Model - [00:14:11]
  • Risk-Based Testing - [00:18:22]
  • Defining Quality & Quality Attributes - [00:22:29]
  • Testing API Design - [00:26:41]
  • Exploratory Testing - [00:32:08]
  • Automated Testing - [00:36:18]
  • Acceptance Test-Driven Design (ATDD) - [00:41:54]
  • “AI-Assisted Testing” Book - [00:45:51]
  • Evolution of Developer and Tester Roles - [00:48:46]
  • 3 Tech Lead Wisdom - [00:53:51]

_____

Mark Winteringham’s Bio
Mark Winteringham is a quality engineer, course director, and author of “AI-Assisted Testing” and “Testing Web APIs”, with over 10 years of experience providing testing expertise on award-winning projects across a wide range of technology sectors. He is an advocate for modern risk-based testing practices, holistic automation strategies, Behaviour Driven Development, and exploratory testing techniques.


Our Sponsor - Manning
Manning Publications is a premier publisher of technical books on computer and software development topics, for experienced developers and new learners alike. Manning prides itself on being independently owned and operated, and on paving the way for innovative initiatives, such as early access book content and protection-free PDF formats that are now industry standard.

Get a 45% discount for Tech Lead Journal listeners by using the code techlead45 on all products in all formats.
Our Sponsor - Tech Lead Journal Shop
Are you looking for some cool new swag?

Tech Lead Journal now offers swag that you can purchase online. Each item is printed on demand based on your preference and will be delivered safely to you anywhere in the world where shipping is available.

Check out all the cool swag available by visiting techleadjournal.dev/shop. And don’t forget to show it off once you receive yours.

 

Like this episode?
Follow @techleadjournal on LinkedIn, Twitter, Instagram.
Buy me a coffee or become a patron.

 

Quotes

Writing “Testing Web APIs”

  • I’m a huge advocate for exploratory testing. So I love doing automation, but I think exploratory testing is like a massively important skill for all testers or for anyone who’s involved in the quality space.

  • I ended up watching some online gaming streamer who was also an author. And someone asked him, “How do you write a book?” And he was like, one page, one page a day. Or a page at a time.

Holistic Testing Strategy

  • When we’re talking about holistic, we’re talking about basically different activities – in the context of a testing strategy – different activities that are being executed to address different types of risks. And that’s why I think holistic strategies are necessary. Because we have to handle lots of different types of risks that can impact our product.

  • Automation is useful, but it is very much targeted at a specific set of risks which tend to be the sort of the functional, the correctness of a product. They are change-detectors. So they are there to help us determine whether or not the system has changed intentionally or unintentionally. But that doesn’t help you with things like performance. That doesn’t help you with issues around implementing the wrong thing in the first place or how those fringe or edge cases occur in our APIs. All those different types of risks, security, how the end user’s going to use it, how it interacts with other things, all these are different types of distinct risks that we may or may not care about.

  • So quality comes into this. What does quality mean to our end users and what risks could impact that? That determines what type of activities we do.

  • If we go in with the normal mindset towards testing, which is running test scripts and having them executed manually or as automation, you get this sort of monoculture. We’re only focused on a very specific type of risk and we’re ignoring these other aspects, to our potential detriment.

  • The other side as well is that every context is different. Different contexts have different needs, different aspects of quality.

  • Being holistic means you’re being kind of responsive to what’s going on around you. What is it you’re dealing with? You’re tuning into that rather than saying one size fits all.

Start With Understanding the Problem

  • If you’re starting a new project, I’m always a big advocate of just generally asking questions and exploring. But not necessarily exploring in a way to make judgments. You’ll do that at a later point. It’s more about exploring the product and exploring the people who work on the product. That’s why I quite like the 10 Ps of Testability, because it breaks down a context into these distinct areas.

  • Understanding all of that information helps us to better appreciate what challenges we face as a team. It helps us understand who our end users are and what are we trying to achieve for them. And it’s by putting all of this sort of information together that we start to identify those opportunities.

  • Testing is always about supporting a team. Making sure that they’re informed and they’re making the right decisions. They’re making the most valuable decisions at the best of times. Having that sort of context in place, it becomes easier to identify those opportunities to sort of elevate the team so that they can build a higher quality product.

  • I could do all the testing in the world, but we could still end up with a very low-quality product. It may work, but people might hate it. Or it may work in a certain way, but as soon as somebody presses the shiny red button on page three, the whole thing falls over.

  • Gathering all of that kind of information helps us identify those opportunities. And then from there, we can start being strategic about which opportunities are we going to follow? How are we going to measure that those opportunities and those ways in which we address those opportunities are valuable and we’re not going off track? It all comes from that sort of information gathering process at the start.

Testing Venn Diagram Model

  • It’s a Venn diagram. We have one circle which is the imagination. And another circle which is implementation.

  • The imagination side, this is where we are testing to learn about what it is that we want in our product. What do we want to build? And inside that, we will have explicit information, like requirements, acceptance criteria, test cases, documentation. But then we also have implicit and tacit information there as well. So why are we building this product? When someone says relevant results, what do you mean by relevant? Relevant to who? That’s where the sort of the misunderstandings come from. So we want to ask questions there to dispel those incorrect assumptions and misunderstandings across the team.

  • The implementation side is the product, the thing that exists. If we are only testing based on explicit information, like acceptance criteria, test scripts, requirements, that sort of stuff, we’re only testing a small portion of actually how the product behaves. So things like exploratory testing and monitoring can be really useful, because those sorts of activities help us learn more about how the product actually really behaves.

  • By learning more about how the product actually behaves, and by learning more about how we want the product to work in the first place, the more we can overlap these two areas so that we can make better informed decisions.

  • If we go for just a monoculture approach, if we’re just running test scripts, then you get a bit of an overlap in that Venn diagram, because you are using your explicit understanding of what you think the product is to test the product. You learn some information, but you are missing out on so much more.

  • That’s why I like using that model because it communicates for me the goal of testing. Which is to find out as much about both of these items and get that overlap as much as possible. It’ll never be 100%, but you’re always striving towards it. But then also, it goes back to the risk aspect of, some risks live more in the implementation side, some live more in the imagination side. And then some exist in that sort of overlap because, as our product gets more complex, things like regression become an important factor.

Risk-Based Testing

  • It’s as much about mindset as it is about specific techniques and approaches. We’re always under so much pressure to deliver. And I think because of that, we tend to focus on the output at the end, the artifact. So when we talk about testing, we’re talking about the tests that were done.

  • I’ve been seeing this quite a lot lately, and it does annoy me. It’s talking about testing types. People build strategies or approaches to testing around testing types. So you have to do integration testing, you have to do functional testing. It’s that mindset of trying not to think of it as types or boxes of testing. Think about the product, think about the end goal of what risks impact that.

  • For a tester, or anyone who’s interested in quality, the number one tool is questions: asking what will happen if X, Y, and Z happen, or what would be the result of this? What do you mean by that? So use the 5Ws and an H: what, why, where, who, when, and how. Those sorts of primers to ask questions are a great place to start.

  • Elisabeth Hendrickson came up with the idea of the newspaper game. Imagine a newspaper headline and use it as a trigger: what would cause the headline “Company X leaks customer data”? What happened to cause that headline? And you follow the story from there.

  • RiskStorming is really good because it helps you identify quality. Then it helps you identify risks, and then it helps you identify what testing you want to do for those risks. That’s a much more structured workshop based thing.

  • Whereas the headline game, and things like Oblique Testing by Mike Talks, are much more informal. And like the 5Ws and an H, they’re just primers, triggers to get you to ask those questions.

Defining Quality & Quality Attributes

  • What does quality mean? We are all individuals. We’re all contextual in our own right, and the same thing can be applied to products.

  • When we think about quality, we have to think of it not necessarily as a singular thing, but it’s like this multidimensional thing. So there are different types of characteristics.

  • There is a great list of quality characteristics from The Test Eye, and it must have something like 70+ quality characteristics in it. Some are technical: does it work? Is it operable across different devices? Does it integrate with environments? But some of them are much more emotive: does it feel good to use? Does it look good? Does it make me excited? So there are lots of different characteristics. Then you have the time factor: different things will matter at different times.

  • Different quality characteristics will matter to different people. So our end users might care that it looks good and is easy to use, but if we’re in a regulated environment, our auditor wants to make sure that it’s got quality characteristics of we can understand how it’s working, it’s got good auditing processes, that sort of thing.

  • So different people have different perspectives in that way as well. And those things change over time. If you’re a startup, what quality means to your end users and to your stakeholders is going to mean something very different as you grow into a small or medium company and then into an enterprise.

  • We have to keep asking ourselves regularly: what does quality mean to our end users, to the people that matter? And how does that change over time? Otherwise, again, the quality is connected to the risk, and the risk is connected to the testing. If we don’t keep an eye on quality, our testing and our development will drift, because we are no longer building the thing that the people who matter care about anymore. That’s why we want to think about quality characteristics on a regular basis.

Testing API Design

  • It’s interesting that you say that you don’t see that very often, because I think it happens all the time, but it’s not done in an explicit, structured or clear format.

  • I’ve worked with developers and they’re doing this sort of stuff implicitly. They’re asking those questions as well. And that is a form of testing. So when we’re talking about like testing API designs, this is where this whole sort of shift left mindset comes in. It’s that testing the ideas, testing the assumptions.

  • Perhaps we’re doing something like a collaborative session with different people. So that’s sort of kind of three amigos style things, or something informal, but we’re together as a team and we’re looking to implement something. So what we’re thinking here is, what is it we want to build? What is the solution that we’re proposing? And then it’s asking questions about aspects of it.

  • So you’re asking questions around maybe the actual technical implementation. Maybe you’re asking questions around, oh, you’ve set these business rules: do you mean the boundary is here, or is it there? What do you mean by this domain phrase? (A sketch of turning one such question into a check follows after this list.)

  • I love it when someone says, I just want to ask a stupid question, because those are the best questions. There’s nothing more satisfying than asking that and having three people answer with three different answers, because it surfaces that shared misunderstanding. So it’s about understanding the implementation, raising risks, raising test ideas.

  • Testing API designs is very much about just making that process that everybody does, making it more explicit, making it more collaborative. So getting people to share ideas so that we come out and we have a clear idea of what it is that we want to build. And also that we are all on the same page, like we all agree.

  • There was research that showed that most bugs come not from errors in code, but from misunderstandings of requirements or of users’ needs. That’s why this sort of aspect is essential. And it’s useful for a testing perspective as well, because you can ask the questions then of how are we going to test this? And what things do we need to do to make this thing testable?

  • And then you can factor in different types of tools. Things like Swagger documentation are really useful, because they document the API in a way that makes it readable.
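
To make the boundary question above concrete, here is a minimal sketch, in Python with requests and runnable under pytest, of turning one agreed design decision (inputs over a field limit should return a client error, not a 500) into an API-level check. The endpoint, field name, limit, and local base URL are hypothetical placeholders, not anything from the episode or Mark’s book.

```python
# Minimal sketch: capturing a design-review decision as an API check.
# The endpoint, field name, and 50-character limit are hypothetical;
# substitute your own API's details.
import requests

BASE_URL = "http://localhost:8080"  # assumed local instance of the API under test


def test_room_name_over_limit_returns_client_error():
    # Design question from the review: "What happens if I go over the limit?"
    # Agreed answer: a 4xx user-input error, not a 500 server error.
    payload = {"name": "A" * 51}  # one character over the agreed 50-character limit
    response = requests.post(f"{BASE_URL}/rooms", json=payload)
    assert 400 <= response.status_code < 500, (
        f"Expected a client error for over-limit input, got {response.status_code}"
    )
```

If the team later agrees on a different limit or status code, the test is the place where that decision stays recorded.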

Exploratory Testing

  • Exploratory testing is a semi-structured approach to testing. It relies on my creativity as an individual, but still has enough structure and enough boundaries in place that we stay focused on what we want to do.

  • I very much enjoy a charter-based approach to exploratory testing. The idea is, again, connected to risk. I’ve identified a risk. I set out an exploratory testing charter to learn about that risk, and then I will do testing. And it’s not scripted. It is very much me following my instincts, using my own internal heuristics, maybe using some external heuristics as well, taking notes, reviewing those, and looking at paths and avenues towards how I will do my testing. So it’s not great for repeatability, but it’s really good for expansive, broad testing that’s going to reach into areas where scripted testing is not really going to work.

  • One of the main reasons why I love exploratory testing is that if you’re a good exploratory tester, it starts to blur the edges of what automation means. Some of the most successful things I’ve had in the automation space have been within the context of exploratory testing. It’s great building automated test cases and test scripts, but you can also build a tool that scrapes your monitoring system or your log files to let you know when errors occur, a little script that injects all the fields, or tools that set up data for you really quickly (a small sketch of one such helper follows after this list). I find that quite satisfying as an activity, and it allows me to accelerate my exploratory testing, because I’m less focused on the setup and more on observation and analysis.

  • You can go off charter as well. So if you find something like some astonishingly bad part of the system elsewhere, there’s nothing wrong with going off charter as well. The important factor is that you know that you’ve done that. And then you can be like I’ve discovered all this other interesting stuff. Now I’m going to go back to the thing that I wanted to focus on.

  • Risk is your guide, but it doesn’t set strict boundaries on what you can and can’t do. You do have flexibility and fluidity in there as well.
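
As a rough illustration of the kind of throwaway helper described above, here is a small Python sketch that follows a log file and flags error lines during an exploratory session, so attention can stay on observation rather than on watching logs. The log path and the error markers are assumptions to adapt to your own system.

```python
# Tiny exploratory-testing helper: tail a log file and flag error lines,
# so attention stays on the session rather than on the logs.
# The log path and the "ERROR"/"Exception" markers are assumptions.
import time

LOG_PATH = "/var/log/app/api.log"  # hypothetical log location


def follow(path):
    """Yield new lines appended to the file, like `tail -f`."""
    with open(path, "r") as handle:
        handle.seek(0, 2)  # jump to the end of the file
        while True:
            line = handle.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line


if __name__ == "__main__":
    for line in follow(LOG_PATH):
        if "ERROR" in line or "Exception" in line:
            print(f"!! {line.strip()}")  # surface it immediately during the session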

Automated Testing

  • One of the big things with automation is focusing on risks. I have these acronyms or phrases that I use regularly to help me understand what I’m automating: TuTTu and TaTTa. From the user interface, am I testing the UI? Or am I testing through the UI?

  • A lot of people who work in the automation space, especially testers, will do things full stack. They’ve got some automated tests running, but the automated test is actually focused on a risk in the backend, like some business calculation. You don’t need the user interface for that. You could do it on the API layer, which is why we have TaTTa: am I testing the API, or am I testing through the API? And if it’s some sort of business calculation, some sort of rule, then couldn’t I build a unit test for it? (See the sketch after this list.)

  • Success with API automation, or with any automation, is almost the inverse: it’s saying, I’m not going to automate it on this layer. I’ll look at all the different parts of the system that I’m touching with my end-to-end test and try to be more targeted. What that means is that the risks we do care about at the API layer are much more focused on how it’s presented, how it’s structured. Does it respond in the way that we want it to respond?

  • Contract testing sits under the banner of automated testing. That’s very much focused on contract drift between integrations as well. Performance testing is technically automated testing as well.

  • It’s all about risk. If I have tested every individual component within my API with unit tests, then my API tests are much more about: do these things integrate, am I receiving the right requests and sending the right responses? So what you end up with is probably fewer API tests.

  • Context matters here. I’ve worked on projects where I can’t build unit tests, because everything comes to us already compiled. The whole company gets given an application that’s already been built, and we’re adding to it. So we have to do everything on the API layer, but we’re making that informed decision based on what the risks are and what the context is.

  • A big thing with automation is that you’re not trying to be exhaustive on the API layer. You’re trying to be selective in terms of the types of risks that you care about, and you assume that other testing activities are going on, maybe at a more atomic level.

  • What is my goal for an end-to-end test? Does everything work end-to-end? Does this bit, the front end, talk to the back end? Does API A talk to APIs B, C, and D? But I don’t really care about the business logic in them, because I know that’s been checked or tested in a previous way.

  • Coverage of risk is a type of coverage. What we’re trying to do here is not necessarily cover every instance of data, every code path. Like is some tool reporting back that you’ve only got 88% coverage? How has it decided that number? There’s nothing wrong with the word coverage. It’s more about what are you covering. What we should be covering is the risks that matter to us, not necessarily the amount of paths we have or what some sort of tool is telling us is acceptable or not.
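
A minimal sketch of the TuTTu/TaTTa split, assuming a hypothetical discount rule and booking-style API: the business calculation is checked directly with a unit test, while the API-level test only cares about how the API responds and what shape the response takes. The rule, endpoint, and response fields are invented for illustration.

```python
# Sketch of "testing the thing vs testing through the thing".
# The discount rule, endpoint, and response shape are invented for illustration.
import requests


# --- The business rule itself: cover it at the unit level, not through the API.
def bulk_discount(quantity: int) -> float:
    """Return the discount rate for an order quantity (hypothetical rule)."""
    return 0.1 if quantity >= 10 else 0.0


def test_bulk_discount_boundary():
    # Risk: getting the calculation wrong. No HTTP needed to check it.
    assert bulk_discount(9) == 0.0
    assert bulk_discount(10) == 0.1


# --- The API test: focus on how the API presents and structures its response.
def test_order_response_structure():
    # Risk: the API responds with the wrong status or shape, not the maths.
    response = requests.post(
        "http://localhost:8080/orders",  # assumed local API under test
        json={"item": "widget", "quantity": 10},
    )
    assert response.status_code == 201
    body = response.json()
    assert {"id", "total", "discount"}.issubset(body)  # agreed response fields
```

The point is the split, not the specifics: the calculation risk is covered at the cheapest layer, so the API test can stay small and targeted.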

Acceptance Test-Driven Design (ATDD)

  • One of the core tenets of BDD is shifting left: testing ideas, questioning designs, and doing that work together collaboratively, so we all have that shared understanding. From there, we want to capture that into some sort of documentation, some concrete examples and scenarios that describe how we expect things to work. And it’s from there that we can use it for acceptance test-driven design.

  • It’s really interesting with ATDD. The risks that it is mitigating aren’t functional business risks. It’s more team risks.

  • If you follow that red-green-refactor approach with Acceptance Test-Driven Design, it puts the boundaries in place to deliver the right thing. It gives the developer a cue that they haven’t built what was asked of them until they make the test pass. And because the test is described at a business level, from how a user will interact with the system, it still gives the developer scope to implement it in the way that they want to implement it. (A minimal sketch follows after this list.)

  • It’s not so prescribed that all kinds of creativity in the development space is gone. I like to think of it as like putting the barriers on when you’re bowling. It stops you from bowling a gutter ball, but you can still hit one pin or 10 as a developer.

  • ATDD is really useful for those risks of making sure that we deliver the right things. That doesn’t mean that it’s going to cover all the other risks, like data risks and structural risks. And that’s why we might have other automation or other testing activities in place.
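
BDD scenarios are often captured in Gherkin; as a stand-in, here is a minimal pytest-style acceptance test sketch of the red-green idea: a business-level scenario written before the implementation exists, which stays red until the team has built what was agreed. The booking endpoints, fields, and local base URL are hypothetical.

```python
# Minimal ATDD sketch: a business-level scenario written before the code exists.
# It starts red and only passes once the team has built what was agreed.
# The booking API, endpoints, and fields are hypothetical.
import requests

BASE_URL = "http://localhost:8080"  # assumed local instance of the system under test


def test_guest_can_book_an_available_room():
    # Given an available room (test data set up via the API for simplicity)
    room = requests.post(f"{BASE_URL}/rooms", json={"name": "101", "beds": 2}).json()

    # When a guest books that room for two nights
    booking = requests.post(
        f"{BASE_URL}/bookings",
        json={"roomId": room["id"], "checkIn": "2024-06-01", "checkOut": "2024-06-03"},
    )

    # Then the booking is accepted and confirmed back to the guest
    assert booking.status_code == 201
    assert booking.json()["status"] == "confirmed"
```

The scenario reads at the level the team discussed it, which is what leaves the bowling-lane-sized creative space open for how it gets implemented underneath.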

“AI-Assisted Testing” Book

  • The general crux of the new book is about how we can use AI ultimately to assist us, to support our testing.

  • Large Language Models, GenAI, they’re fantastic. They do these amazing things. They’ve sort of kicked off this whole conversation, not just for testers but for developers as well, of is AI going to replace us? But I don’t think that’s the effective way of using them. I think the effective way is to reflect on what you do in your role and look at the aspects of it that can benefit from the use of these types of tools.

  • For example, things like generating large sets of data. In the automation space, can I do things like page objects and boilerplate code? It’s exploring how these tools could potentially be trained on our context, or tuned towards our context, so that when we ask them a question we don’t just get a response, we get a response that is informed by what’s going on around us.

  • The book’s very much about how we use these tools. It explores things like prompt engineering, fine-tuning, and retrieval-augmented generation. But it’s also as much about us as individuals. How do we identify the places where they can be useful? How can we have a healthy level of skepticism, but not be so skeptical that we become cynical about these tools?

  • Generally, just sort of work out ways in which we can use GenAI tools to help us enhance our testing. Test faster, test deeper. But certainly not replace us, that’s for sure. Well, at least not now.

Evolution of Developer and Tester Roles

  • I get asked this question: will AI take our testing jobs? And I always say, if they take our testing jobs, then we have bigger problems to deal with than our jobs right now. Because, distinctly, testing and quality are a human-driven, heuristic-driven matter.

  • It has the potential to evolve our roles, probably more on the developer side than on the testing side as it stands. As we, as developers, start to rely on tools like Copilot to build our frameworks, there’s an argument that it’s garbage in, garbage out. If these things are trained on bad patterns, they’re going to output bad patterns, which means boom time for testers, because we’re going to have more bugs to find and more testing to do.

  • There is that sort of challenge of how do you test the product that has been developed by not just an individual or collection of individuals, but also by a highly probabilistic machine as well.

  • The developers who are having success with these tools are the people who are using it for very specific tasks. There’s almost like a Pareto’s law thing going on where the tool gets them 80 percent of the way. And then that individual sort of factor comes in there.

  • As these tools become more and more prevalent, there’s going to be more interest in people who can write prompts and people who can engineer these types of tools. Not necessarily data scientists and AI scientists. But can you tune a model? Can you build the right setup? And conversely, for testers, can you test these systems? How do you deal with a deterministic system versus a non-deterministic one? We’ve all been taught how to build test cases. You can’t do that in this context. So there are new skills to learn there.

  • On the not so great side is, what does this mean for people coming into the industry? How does someone who’s just coming into testing, how does someone who’s just coming into development learn those sorts of important skills?

  • For a lot of people, it’s going to be an evolution of how you build a relationship with these tools in a way that utilizes both your skills and the skills of the tools. Because if you’re not using them, you are not going to be as performant as the person who is. And if we rely too much on these types of tools, they’re just going to end up producing stuff that’s not very valuable to us, because they’re not designed or tuned to our context.

3 Tech Lead Wisdom

  1. Context is key. There is no one-size-fits-all solution to everything.

  2. Being a servant leader is massively important. Responding to what’s going on around you and helping people with problems, it’s going to be much more productive than applying your own will or your own perspective on things.

  3. Reflect on yourself, reflect on your abilities. I’m a big advocate for continuous learning. I’m always trying to read new stuff and expose myself to new things, not just in the software development space, but in the arts as well. I learned loads, in terms of teaching and presenting, from standup comedy. There’s a lot we can learn from other people, and we can draw those skill sets in.

Transcript

[00:01:01] Introduction

Henry Suryawirawan: Hello, guys. Welcome back to another new episode of Tech Lead Journal podcast. Today, I have Mark Winteringham here. So we’re going to cover a lot about testing web APIs and also testing, in general. And later on, we’ll sneak in the AI topics part as well, because I think it’s kind of like the recent trending topics. AI and testing. So Mark, thank you for coming. And looking forward to this conversation.

Mark Winteringham: Thank you. Thank you for having me on.

[00:01:24] Career Journey

Henry Suryawirawan: Right. Mark, I always love to start my conversation by asking you to share your career highlights first or any turning points that you think we all can learn from you.

Mark Winteringham: Yeah, sure. So I always start with the terrible joke of I wasn’t supposed to be a software tester, I was supposed to be a rockstar. But things kind of went a bit different trajectory. So yeah, I actually studied music at university, music and technology. I was always interested in computers even as a kid. But I was also very interested in music as well. But my first job was actually testing music notation. So, I managed to carve myself a little bit of a niche of having the ability to work with technology, but also being able to read music to an advanced degree that meant that I could sort of get into the testing role. And it was more of a, I wanted to get myself a foot in the door into tech, because I wanted to write music for video games. And I’d heard the way through into the games industry was through testing. But I couldn’t get a testing job and sort of that sort of classic thing of we’re looking for a junior, must have three years experience, that sort of thing.

But yeah, so I got this job as a tester and actually found that I really enjoyed the process of testing. And then I was quite lucky early on that I had a mentor who saw my interest in technology, and he coached me into getting into test automation quite early on. So like most of my career has been around the test automation space. Working across a lot of different companies or startups, big enterprise places. I moved into contracting about sort of five, six years into my career and that gave me an opportunity to move around.

And it was sort of about sort of six years into that that I started teaching as well, started teaching testing. So talking about testing web APIs. My first sort of workshop that I built was around teaching testers how to use APIs, how to test them. I had to apply sort of exploratory testing and heuristics and stuff, but as well as automation. Found that I got really real good taste of that. And, you know, that rock style mentality of touring the world and going to conferences and teaching and talking about not just web APIs, but also testing and automation in general as well.

I then teamed up with a chap called Richard Bradshaw, who’s a friendly tester, and we started running a course called Automation in Testing. So whilst I was doing testing, I was also teaching and learning a lot through sort of various conferences and things like that. And yeah, sort of, about sort of 10 years in, that’s when I started writing my first book, Testing Web APIs. And then, yeah, sort of bring up to now, I’m a quality engineer for John Lewis Partnership. I’m sort of still working in the test automation space, but I’m sort of thinking about quality, in general, these days. Not just about the quality of the product, but the quality of the work, the quality of the testing, and supporting my team so that they can be the best that they can be.

Henry Suryawirawan: Thanks for sharing your story. I think it’s pretty interesting. Testing music notation. I haven’t heard it before, so I think that’s how you got into this whole testing journey. I think that’s pretty interesting to hear. Thanks for sharing that.

[00:05:17] Writing “Testing Web APIs”

Henry Suryawirawan: So, you mentioned you have been doing a lot of automation, API automation, web API automation, maybe a little bit of background, how did you come up with the idea to write this book, what kind of problems that you’re trying to solve?

Mark Winteringham: Well, as I said, it started with the workshop initially. I was keen to just sort of put something out there and do a bit of public speaking. So I kind of set myself the challenge of building this workshop. And then I went out with a guy who used to run London Tester Gathering. His name’s Tony Bruce and he used to run these collection of workshops and we were having a drink, and I was like, I’ve got this idea of an API testing workshop. And he was like, right, you’re in, you’re doing it. And I was like, oh, okay. So it kind of just sort of kind of kicked off like that. But like something that’s always kind of threaded in a lot of my teaching is that I like to teach practical skills. But also have like a foundation of sort of like mindset and philosophy towards how we’re sort of doing our testing.

So I’m a huge advocate for exploratory testing. So I love doing automation, but I think exploratory testing is like a massively important skill for all testers or for anyone who’s involved in kind of quality space. But teaching that is quite tricky whereas if I can teach someone to test a web API in an exploratory testing context, then they kind of implicitly learn that sort of information. So that was kind of what I was doing with the book, with Testing Web APIs. I developed this body of work, this understanding of how I approach testing web APIs. And I had a couple of attempts at trying to write it. People kept sort of kind of encouraging me, but I was sort of like, there’s not enough material here. Turns out I was wrong.

But I ended up watching some online gaming streamer who was also an author. And someone asked him like, how do you write a book? And he was like, one page, one page a day. Or a page at a time, sort of thing. So I was like, oh, okay, I’ll give that a try. So I literally wrote a page a day for a month. And before I knew it, I had my first chapter. And I thought, actually, I think I could do this. I’ve been blogging a lot previously and writing articles and stuff for companies and teaching materials. So I’d sort of built up that skillset. So it was kind of, yeah, it was sort of this sort of the training that I was doing on the side, the growing interest in writing. And then like you said, this sort of small gradual process of building something up over time to end up with a body of work.

[00:07:48] Holistic Testing Strategy

Henry Suryawirawan: Right. And when I read your book, actually, it’s pretty interesting. I was expecting actually to read a lot of materials about automation, how you deal with, I don’t know, Postman, REST APIs, and things like that. But actually, the way you explain stuff in your book, it’s a mix of like mindset, philosophy, and also different strategies, right, which I think in your book, you mentioned it like the holistic approach. So tell us why it is very important for testing strategy or testing approach or methodology has to be more holistic. And what do you mean by holistic here?

Mark Winteringham: So when we take a step back, when we’re talking about holistic, we’re talking about basically different activities – in the context of a testing strategy – different activities that are being executed to address different types of risks. And that’s why I think holistic strategies are necessary. Because we have to handle lots of different types of risks that can impact our product. So for example, you know, things like automation is useful, but it is very much targeted at a specific set of risks which tend to be the sort of kind of the functional, the correctness of a product. They are change detectors. So they are there to help us determine whether or not the system has changed intentionally or unintentionally. But that doesn’t help you with things like performance. That doesn’t help you with issues around implementing the wrong thing in the first place or how those fringe or edge cases occur in our APIs. All those different types of risks, security, how the end user’s going to use it, how it interacts with other things, all these are different types of distinct risks that we may or may not care about. So quality comes into this like what does quality mean to our end users and what risks could impact that? That determines what type of activities we do.

The challenge is that if we go in with what’s the normal mindset towards testing, which is running test scripts, having them executed manually or automated, is you get this sort of kind of idea of a monoculture. We’re only focused on a very specific type of risk and we’re ignoring these other aspects to our potential detriment or to a negative impact. So there’s that aspect. The other side as well is that, say every context is different. So if I’m testing APIs for… so I worked for HMRC for a while and I was on their tax platform. Performance is important and accuracy of calculations is important. Whereas, perhaps, a classic one that sort of people were talking about back in the day was like Pokemon Go. People probably care much more about performance, and they may care more about security, like personal data being shared across or people, you know, stealing all of your inventory from your account or something like that. So different contexts have different needs, different aspects of quality.

So again, holistic, being holistic means you’re being kind of responsive to what’s going on around you, what is it you’re dealing with? You’re kind of tuning into that rather than saying one size fits all and trying to sort of kind of square peg in a round hole, that sort of idea.

[00:11:02] Start With Understanding the Problem

Henry Suryawirawan: Yeah. So thanks for mentioning this understanding about context and risk, right? So I think that’s the main theme all over your book, right? So you kind of cover why specifically we need to care about certain stuff and what kind of risks are we trying to solve here. And you kind of like also emphasize that we, as testers or developers, we have to understand the problem first before we actually approach testing overall, right, rather than starting to just write test scripts and things like that. So maybe can you explain, how can we actually start by understanding the problem better, right? And why is it important?

Mark Winteringham: So I’m always a big advocate of, if you’re starting a new project, is just generally asking questions and exploring. But not necessarily exploring in a way to make judgment. You’ll do that at a later point. But it’s more of a sort of exploring the product, exploring the people who work on the product. That’s why I quite like the 10 Ps of Testability by Rob Meaney and Ash Winter, because it breaks down a context into these distinct areas. So looking at the people, looking at the process. How is the product built? What technologies are we using? How do we get it from our computers in front of other people and sort of the pipeline process. So understanding all of that information helps us to, again, better appreciate what challenges we face as a team. It helps us understand who our end users are. And, you know, what are we trying to achieve for them?

And it’s by putting all of this sort of information together that we start to identify those opportunities. So for me, like testing is always about supporting a team. Making sure that they’re informed and they’re making the right decisions. They’re making the most valuable decisions at the best of times. So, yeah, having that sort of context in place, it becomes easier to identify those opportunities to sort of elevate the team so that they can build a higher quality product. So that’s a big thing as well as like testing doesn’t necessarily, like I could do all the testing in the world, but we could still end up with a very low quality product. It may work, but people might hate it. Or it may work in a certain way, but as soon as somebody presses the shiny red button on page three, the whole thing falls over.

So yeah, so gathering all of that kind of information helps us identify those opportunities. And then from there, we can start being strategic about which opportunities are we going to follow? How are we going to measure that those opportunities and those ways in which we address those opportunities are valuable and we’re not going off track? Yeah, but it all comes from that sort of kind of information gathering process at the start.

Henry Suryawirawan: Yeah. So I think another thing you mentioned in the book after you ask questions and things like that, right? It’s very important to build shared understanding among different team members, maybe in the teams, right? Because I think one common practice I see about testing, a lot of teams actually kind of like throw it over the wall, you know. Like they have a team of testers, you know, you pass them the requirements, pass them the binaries, right, and let them test. Maybe it’s more like a black box approach. But at the same time, probably the shared understanding is not there, right?

[00:14:11] Testing Venn Diagram Model

Henry Suryawirawan: And I think one thing that I really like in the book is about this Venn diagram. And you kind of like bring that all over the different chapters, right, when you cover different strategies. Maybe elaborate a little bit more about this Venn diagram. How can we actually use it in our testing strategy?

Mark Winteringham: Yeah. So it’s based off a model, a visual model that James Lyndsay created to kind of explain the value of exploratory testing. But I remember when he showed it to me and I really liked it, but I actually thought that it could be applied to testing as a whole. So the idea is, like you say, it’s a Venn diagram. We have one circle which is the imagination. And another circle which is implementation.

So on the imagination side this is where we are testing to learn about what it is that we want in our product. What do we want to build? And inside that, we will have explicit information. So like you say, like requirements, acceptance criteria, test cases, documentation. But then we also have implicit and tacit information there as well. So why are we building this product? When someone says relevant results, what do you mean by relevant? Relevant to who? That’s where the sort of the misunderstandings come from. So we want to ask questions there to dispel those incorrect assumptions, misunderstandings across the team.

And the idea is that the more we learn about that side, we apply the same thing on the implementation side. So the implementation side is the product, the thing that exists. Again, if we are only testing based on explicit information, so again, acceptance criteria, test scripts, requirements, that sort of stuff, we’re only testing a small portion of actually how the product behaves. So things like exploratory testing and monitoring can be really useful, because those sort of activities help us learn more about how the product actually really behaves.

So by learning more about how the product actually behaves, and by learning more about how we want the product to work in the first place, the more we can overlap these two areas so that we can make better informed decisions. Like I said, if we go for just a monoculture approach, if we’re just running test scripts, then you get a little bit of an overlap in that Venn diagram, because you are kind of using your explicit understanding of what you think the product is used to test the product. And you learn some information, but you are missing out on so much more.

So that’s why I like using that model because it communicates for me the goal of testing. Which is to find out as much about both of these items and get that overlap as much as possible. It’ll never be 100%, but you’re always striving towards it. But then also as well, again, it goes back to this risk aspect of some risks live more in the implementation side. Some live more in the imagination side. And then some exist in that sort of overlap because as our product gets more complex, things like regression become an important factor. But they’re not the be all and end all, it’s again, it’s that spread of all of them. So yes, that’s why I’m sort of a big fan of that model. And it’s, as I say, it’s a great teaching aid in the book because I can move across the model in different places and talk about how these different activities work in different areas.

Henry Suryawirawan: Yeah. And I think not to mention also for different areas, like the imagination will have some testing strategies that kind of like more focused towards that. Things like, for example, contract testing, right. And also test API designs. And also on the implementation side, maybe there are other aspects as well. So you kind of like bring the whole holistic approach to testing. And it’s not just, you know, like an automation in terms of API step by step, right? So I think it’s very interesting, definitely. And in the overlap, right? So if we can make it larger and larger, we kind of like align both the imagination and implementation. And I think that’s where the testing strategy, where you build a lot more automation to cover both areas, I think will be more powerful. So I think throughout the book you will see a lot of these Venn diagrams. So for people who are interested you can check out the book as well. I think it’s really powerful framework to frame along all these testing strategy.

[00:18:22] Risk-Based Testing

Henry Suryawirawan: So after we understand about the importance of this testing approach and understanding context and risk, right, we need to pick the testing strategies. And I think you emphasized a lot in this conversation already as well about the risk. I think sometimes this is not intuitive for many teams, I think. Because when we talk about testing, right, they always come up with, okay, what are the requirements, right, what users need to do. But they actually don’t talk it in the perspective of risk. So maybe if you can explain a little bit more how can we actually start building our test strategy using this risk perspective?

Mark Winteringham: Yeah, I think it’s as much, like you say, it’s about mindset as it is in terms of specific techniques and approaches. We’re always under so much pressure to deliver. And I think because of that, we tend to sort of focus on the output at the end, the artifact. So when we talk about the context of testing, we’re talking about the tests that there were done. So it’s, I think, it’s like when we’re trying to get people to do sort of risk based testing, the first thing is really to get this, it’s like, there you go.

It’s funny, I’ve been seeing this quite a lot lately again and it does annoy me. It’s talking about testing types. Types of testing. So people build strategies or approaches to testing around testing types. So you have to do integration testing, you have to do functional testing. It’s like, yes, those are types of testing, but that’s, you know, it’s… I’m trying to think of a good analogy. It’s like talking about types of tools. Like, oh, you know, we’re going to do some DIY. Well, I have to have a screwdriver. I have to have a hammer. So it’s fine, but you’re painting a wall. So those things aren’t going to be very useful for you. So I think, yeah, it’s that mindset of trying not to think of it as types or boxes of testing. Think about the product, think about the end goal of what risks impact that.

Then in terms of like the next sort of tricky part is getting people to be sort of kind of risk based and risk analysis. So this is where like a tester or anyone who’s interested in qualities, like number one tool is questions, is asking questions and going, you know, what will happen if X, Y, and Z happens, or what would be the result of this? What do you mean by that? So using like 5Ws and an H: what, why, where, who, when, and how. Yet those sort of primers to ask questions is a great place to start.

Then you’ve got frameworks like RiskStorming by Beren Van Daele who has built this like brilliant online asynchronous tool that you can use. I think it’s literally RiskStorming Online. You can just Google that and I can’t remember the URL. Sorry, Beren. But using sort of kind of more sort of heuristic based techniques. So Elisabeth Hendrickson, I think, came up with the idea of the newspaper game. So imagine a newspaper article, like a headline on the paper and use that as a trigger to, what would cause company X leaks customer data, right? Well, what happened to cause that headline? And you sort of follow the story there.

There are frameworks that we can use that we can follow. So like RiskStorming is really good because it helps you identify quality. Then it helps you identify risks, and then it helps you identify what testing you want to do for those risks. Whereas, you know, that’s much more structured workshop based thing. Whereas, yeah, like, the headline games, things like test oblique by Mike Talks as well. Those are sort of much more informal things. And like 5Ws and H, it’s just a primer, a trigger to get you to ask those questions.

Henry Suryawirawan: Yeah, so I think if people adopt this kind of a risk -based approach, I think it will be much more valuable in terms of the output of your testing activities, right? Not just producing, you know, hundreds of test scripts, just UI automation, for example, or just user based approach. But you kind of like miss the whole aspects of different risks, right? Because we cannot test everything, definitely. There will be too many. But yeah, how should we prioritize? What kind of things that we should test? I think this risk-based approach is really important.

[00:22:29] Defining Quality & Quality Attributes

Henry Suryawirawan: And before we actually come up with those prioritized risks, right? You also mentioned that we have to know about the prioritized quality attributes. But let’s say when we talk about quality first, right, because I think the whole purpose of testing normally is to ascertain standard or quality of the product. So what do you mean by quality and how do we actually think about the quality attributes?

Mark Winteringham: Yeah. So I think that’s the crux of it, of all of it is that, yeah, like what does quality mean? So whenever I’m sort of kind of like people ask me about, oh, you know, what is testing about and why do you care? I always go with the classic argument, question, sorry, of, you know, what, what flavor of crisps or chips do you like? And they go, oh, I like ready salted or paprika or something like that. You’re wrong. It’s salt and vinegar because they’re my favorite. And I know quality and you don’t know quality. And they’re like, no, no, you’re wrong, like this flavor is the right. You know, we are all individuals. And we’re all contextual in our own rights and the same thing can kind of be applied again to the products.

So when we think about quality, we have to think of it not necessarily as a singular thing, but it’s like this multidimensional thing. So there are different types of characteristics. So I mentioned some earlier when I was talking about accuracy results and responsiveness. There is a great list of quality characteristics from the Test Eye, which you can Google and it must have something like 50, 60, maybe 70 plus quality characteristics out there and stuff. And some of them are, you know, technical based. So does it work? Is it operable across different devices? Does it integrate with environments and stuff? But then some of them are much more kind of emotive. So does it feel good to use? Does it look good? Does it make me excited? So lots of different characteristics. Then you have kind of the time factor of this is that different things will matter at different times.

Another factor, to take a step back, is different quality characteristics will matter to different people. So our end users might care that it looks good and is easy to use, but if we’re in a regulated environment, our auditor wants to make sure that it’s got quality characteristics of we can understand how it’s working, it’s got good auditing processes, that sort of thing. So different people have different perspectives in that way as well. And those things change over time. So if you’re a startup, what quality means to your end users and to your stakeholders is going to mean something very different when you’re moving into like a mid, small, medium company to enterprise.

So we constantly have to keep asking ourselves regularly, like what does quality mean to our end users, to the people that matter. And what does it matter like over time? Otherwise, again, the quality is connected to the risk, and the risk is connected to the testing. If we don’t keep an eye on quality, we, again, our testing and our development will drift because we are no longer building the thing that the people that matter care about anymore. So yeah, that’s kind of why we want to think about quality characteristics on a regular basis.

Henry Suryawirawan: Yeah, some people actually coin the terms quality attributes here, like non-functional requirements or performance attributes or, you know, there are multiple terms about it, but essentially, you’re looking into like the -ilities, right? So some people also call it -ilities. Availability, scalability, and things like that. So I think this quality attribute sometimes in some teams, they tend to under prioritize that and only focus on the functional aspect, you know, like given a certain input, what’s the output that should come out, right? And they just create more test cases around that.

But actually when you have holistic approach, right? So you will have aspects of quality attributes that you want to test as well. And again, depending on the product, depending on the kind of quality that the team or the user cares about, right? You build some kind of testing strategies around those things as well. So I think it’s really important for those of you who would love to ascertain your quality of the product, first of all, know about your quality attributes, right? Quality characteristics. And then build some kind of testing strategies around those.

[00:26:41] Testing API Design

Henry Suryawirawan: So let’s maybe go into several testing strategies that you cover in the book. Maybe some from the implementation, some from the imagination. And later on we’ll also talk about automation testing, definitely. So the first strategy that I want to pick is actually testing API design. Personally, I find this a bit rare being covered in the development space or at least in my area. So what do you mean by testing API design? How do we do that and why is it important?

Mark Winteringham: So I think it’s interesting that you say that you don’t see that very often, because I think it happens all the time, but it’s not done in a explicit, structured or clear format. So you know, I’ve worked with developers and they’re doing this sort of stuff implicitly. They’re asking those questions as well. And that is a form of testing. So when we’re talking about like testing API designs, this is where this whole sort of shift left mindset comes in. Again, it’s that testing the ideas, testing the assumptions.

So we’re in a situation, perhaps we’re doing something like a collaborative session with different people. So that’s sort of kind of three amigos style things, or just again, something informal, but we’re together as a team and we’re looking to implement something. So what we’re thinking here is, what is it that we want to build? What is the solution that we’re proposing? And then it’s asking questions about aspects of it. So you’re asking questions around maybe the actual sort of technical implementation. Maybe you’re asking questions around, oh, you’ve set these business rules. Do you mean like is the boundary here or is it there? What do you mean by this sort of domain phrase?

I love it when someone says, I just want to ask a stupid question, because they’re the best questions. Because they’re usually because someone’s like, I don’t understand an aspect of this so I need clarification. There’s nothing more satisfying when you ask that and three people answer with different answers, because everyone’s like got that sort of misunderstanding. So it’s about understanding the implementation, raising risks, raising test ideas and stuff. So you know, oh, I see that you want to put a limit on this field. What happens if I go over that limit? And they go, oh, well, you know, should return a 500 status code. You’re like, oh, shouldn’t it be like a 400, because it’s a user input error and stuff? And it’s conversations lead to that.

So yeah, like testing API designs is very much about just making that process that everybody does, making it more explicit, making it more collaborative. So getting people to share ideas so that we come out and we have a clear idea of what it is that we want to build. And also that, we are all on the same page, like we all agree. I remember seeing some research once, I wish I could find it, but there was research that showed that most bugs come not from errors in code, but from misunderstandings of requirements or of users needs. So I think that’s why this sort of aspect is essential. And it’s useful for a testing perspective as well, because you can ask the questions then of how are we going to test this? And what things do we need to do to make this thing testable?

And then, yeah, you can factor in different types of tools. So things like Swagger documentation is really useful, because it’s a way of documenting the API in a way that makes it readable. You can trigger off ideas, questions and conversations from it, without implementing the whole code base. And then it’s got its own sort of ecosystem of tools to make it easier for people to translate that into code at a later date. So you get that kind of double win there of everybody’s on the same page. We’ve got this explicit documentation, and at the press of a button, we have the potential to get halfway down the line in terms of implementing our new endpoint or new API.
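
To make that 400-versus-500 exchange a bit more concrete, here is a minimal sketch of the kind of check such a design conversation might turn into, written with REST Assured and JUnit in Java. The /bookings endpoint, the 100-character limit and the error message are invented for illustration, not taken from the book.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.containsString;

import org.junit.jupiter.api.Test;

public class BookingNameLimitTest {

    // Hypothetical rule agreed in the design session: "name" is limited to
    // 100 characters, and exceeding it is a client error (400), not a 500.
    @Test
    void nameOverTheAgreedLimitReturnsBadRequest() {
        String tooLongName = "a".repeat(101);

        given()
            .baseUri("http://localhost:8080")   // assumed local instance of the API
            .contentType("application/json")
            .body("{\"name\": \"" + tooLongName + "\"}")
        .when()
            .post("/bookings")                  // hypothetical endpoint
        .then()
            .statusCode(400)                    // user input error, not a server error
            .body("error", containsString("name"));
    }
}
```

The point is less the assertion itself and more that the boundary and the expected status code were agreed out loud before anyone wrote the endpoint.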

Henry Suryawirawan: Right. So I think this approach, when you mentioned collaborative activities, right. I think, again, it’s rare for me to see the three amigos in practice, right? Like maybe the business analyst or the product owner, right, with the tester and developer always collaborating actively on almost all of the stories or, you know, the requirements that they’re implementing.

So I think the key again here, just to explicitly mention it, is collaborative software development. I think it’s really, really crucial, right? Because of shared understanding: you don’t want to misunderstand the software requirements or the expected behavior of the product that you’re building, right?

So do a lot more of this testing of API design, right? And maybe it’s not just API design; sometimes it’s driven by the UI, right? For people who are building UI-centric journeys.

Mark Winteringham: And I sort of talk about it from the API design perspective, but this is a mindset and approach that we can apply to our team in general. And it’s something that’s been championed by Lisa Crispin and Janet Gregory in their Agile testing books, and in all of their work on continuous testing as well; they’re big advocates for holistic testing too. They’ve really opened up that mindset of getting teams to talk to each other about the things that they’re building and the quality that they want to imbue into the things that they build. So yeah, you can apply it to any sort of context that you’re working in.

Henry Suryawirawan: Yeah. And not to mention also, after you get that shared understanding, please write it down somewhere, maybe as documentation or maybe as some kind of test specification, whatever that is, right? So that the shared understanding is not lost when people change or maybe the business strategy changes and things like that. So thanks for sharing about this testing API design.

[00:32:08] Exploratory Testing

Henry Suryawirawan: So the second testing strategy that I’d like you to maybe talk a little bit more, which is what you mentioned in the beginning, something that you have passion about, exploratory testing. So what is exploratory testing and how can we approach this kind of testing?

Mark Winteringham: So exploratory testing for me is a semi-structured approach to testing. It basically relies on my creativity as an individual, but still has enough structure and enough boundary in place that we are focused on what we want to do. So I very much enjoy a charter-based approach to exploratory testing, which is mentioned a lot in Explore It! by Elisabeth Hendrickson. And the idea is, again, sort of connected to risk. I’ve identified a risk. I set out an exploratory testing charter to learn about that risk, and then I will do testing. And it’s not scripted, it is very much me following my instincts, using my own internal heuristics. Maybe using some external heuristics as well, taking notes, reviewing those and looking at paths and avenues towards how I will do my testing. So it’s not great for repeatability, but it’s really good for expansive, broad testing that’s going to reach into areas that scripted testing is not really going to work on.

One of the main reasons why I love exploratory testing as well is that if you’re a good exploratory tester, it starts to blur the edges of what automation means. Some of the most successful things I’ve had in the automation space have been within the context of exploratory testing. So yeah, it’s great building automated test cases and test scripts and stuff, but you can also build a tool that scrapes your monitoring system or your log files to let you know when errors occur. Or if you’ve got a massive, like, 20-input form to fill in, but you only need to test one field or something like that, you can build a little script that injects data into all of the other fields. Tools that can set up data for you really quickly. I find that quite satisfying as an activity and it allows me to accelerate my exploratory testing. Because I’m less focused on the setup and I’m more into the observation and analysis and reaction to those sort of things as well.
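
As a flavour of the kind of throwaway helper Mark describes, here is a rough sketch in Java using Selenium WebDriver. The registration page, field ids and default values are all hypothetical; the idea is simply to pre-fill the boring parts of a long form so an exploratory session can start from a prepared state.

```java
import java.util.Map;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// A small exploratory-testing helper: inject known-good data into every field
// we are NOT exploring right now, then hand the browser over to the human.
public class FormFiller {

    private static final Map<String, String> DEFAULTS = Map.of(
            "firstName", "Test",
            "lastName", "User",
            "email", "test.user@example.com",
            "phone", "01234 567890",
            "postcode", "AB1 2CD"
    );

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("http://localhost:8080/register"); // assumed app under test

        DEFAULTS.forEach((id, value) ->
                driver.findElement(By.id(id)).sendKeys(value));

        // Deliberately no driver.quit(): the whole point is to leave the
        // pre-filled form open so the tester can explore the remaining field.
        System.out.println("Form pre-filled; over to the explorer.");
    }
}
```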

Henry Suryawirawan: I think the charter is very important when you do this exploratory testing, right? Exploratory doesn’t mean complete freedom, right, where you just do whatever you like and see whether you find any bugs or not. I think the more effective way is actually to set a charter, right? Again, it’s driven by the risk that you identify earlier regarding your quality attributes and things like that. And then you set the charter, you explore based on that theme, and see how it goes, right? And some people and some teams actually do bug bashing, you know, group activities where you set maybe a theme, then they will just explore the application, try to break it, and report issues that they find along the way. So for people who would love to do exploratory testing, one very, very important tip here is to set it based on the quality characteristics, and also set a charter before you start.

Mark Winteringham: Yeah. And I wanted to add one last thing to that as well, which is that you can go off charter. So if you find something like some astonishingly bad part of the system elsewhere, there’s nothing wrong with going off charter as well. The important factor is that you know that you’ve done that. And then you can be like, right, well, I’ve discovered all this other interesting stuff, now I’m going to go back to the thing that I wanted to focus on. Whereas I think sometimes if we’re a bit more unstructured, we’ll go look at the other thing and then go, my work here is done, and forget about the initial thing that we were working on. So again, risk is your guide, but it doesn’t necessarily set strict boundaries on what you can and can’t do. You do have flexibility and fluidity in there as well.

Henry Suryawirawan: Right, thanks for adding that. So I think, yeah, you can go off charter, but don’t forget to, again, document your findings, right? Don’t just treat it like an ad-hoc activity and that’s it, right?

[00:36:18] Automated Testing

Henry Suryawirawan: So speaking about testing, you know, obviously people associate that a lot with automated API testing, right? Sometimes it’s called end-to-end testing or acceptance testing. Maybe tell us more about how we can do better automated testing, because I’m sure many people understand automated testing. But how to do it more effectively or better is something that maybe you can give some advice on.

Mark Winteringham: Yeah. So it’s interesting, drawing the parallels between end-to-end testing and automation testing, because I think that automation is literally just using tools to do something that we were doing. So one of the big things, again, with automation, I think, is focusing on risks. So I have these acronyms, these sort of phrases that I use regularly to help me understand what I’m automating. I have this TuTTu and TaTTa. So from the user interface, am I testing the UI or am I testing through the UI? A lot of people that work in the automation space, especially testers, will do things full stack. And they’ve got some automated tests running, but the automated test is actually focused on a risk in the backend, like some business calculation. You don’t need the user interface for that. You could do it on the API layer, which is why we have TaTTa. Am I testing the API or am I testing through the API? Again, if this is some sort of business calculation, some sort of rule, then couldn’t I build a unit test for this?

So I think sometimes the success with API automation, or the success with any automation, almost comes from the inverse: saying I’m not going to automate it on this layer. I’m not going to automate that thing. Talking about an end-to-end test, I’ll look at all the different parts of the system that I’m touching with my end-to-end test, and I’ll try and be more targeted. However, what that means is that the risks that we do care about in the API are much more focused around how it’s presented, how it’s structured. Does it respond in the correct way that we want it to respond? Contract testing sits under the banner of automated testing, because it’s using tools again. That’s very much focused on contract drift between integrations as well. Performance testing is technically automated testing as well. The oracle is slightly different in terms of how we determine what’s good or bad, but we still need to use tools to put an API or APIs under load as well.

So, you know, starting to sound like a stuck record, but again, it’s all about risk. Like going back to what I was saying earlier about being more specific and targeted. If I have tested every individual component within my API with unit tests, then my API tests are much more about do these things integrate, am I receiving the right requests and sending the right responses? So what you end up with is probably fewer API tests. Again, context matters here. I’ve worked on projects where I can’t build unit tests, because everything already comes to us compiled. And I don’t just mean me as a tester. I mean, literally, the whole company gets given an application that’s already been built and then we’re adding onto it. So we have to do everything on the API layer, but we’re making that informed decision based on what the risks are, what the context is as well.

So I think, yeah, the big thing with automation is that you’re not trying to be exhaustive on the API layer. You’re trying to be selective in terms of, again, the types of risks that we care about. And assume that other testing activities are going on, maybe, on the more atomic level. So bringing it back to end-to-end testing, what’s my goal for an end-to-end test? Does everything work end-to-end? Right, that’s literally all my test does. Does this bit, the front end, talk to the back end? Does API A talk to API B, C and D? But I don’t really care about the business logic in them, because I know that’s been checked or tested in a previous way.
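
Here is a rough sketch of that “push it down a layer” idea in Java with JUnit and REST Assured. The discount rule, the /quotes endpoint and the response shape are all hypothetical: the point is that the business rule gets exercised at the unit level, so the API-level test only has to confirm that the endpoint wires things together.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class DiscountTests {

    // Hypothetical business rule (in a real codebase this would live in
    // production code): orders of 10 or more items get a 10% discount.
    static double applyDiscount(double total, int items) {
        return items >= 10 ? total * 0.9 : total;
    }

    // Testing *through* the API is unnecessary for this risk; a unit test
    // covers the boundary directly and runs in milliseconds.
    @Test
    void tenItemsOrMoreGetTenPercentOff() {
        assertEquals(90.0, applyDiscount(100.0, 10), 0.001);
        assertEquals(100.0, applyDiscount(100.0, 9), 0.001);
    }

    // The API-level test is deliberately thin: does the endpoint exist,
    // accept the request, and return a well-formed response?
    @Test
    void quoteEndpointRespondsWithATotal() {
        given()
            .baseUri("http://localhost:8080")               // assumed local API
            .contentType("application/json")
            .body("{\"items\": 10, \"unitPrice\": 10.0}")
        .when()
            .post("/quotes")                                 // hypothetical endpoint
        .then()
            .statusCode(200)
            .body("total", notNullValue());
    }
}
```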

Henry Suryawirawan: I like one heuristic that you mentioned in the book, right? So if you have the capability to push your checks or your testing lower, you should actually try to do that more often, right? Because if you implement it in the upper layer, most likely it’s going to be slower, less maintainable, and also more complicated to set up, right? So I think this is very important for people not just focus a lot on the end-to-end testing or API testing. The TuTTu TaTTa thing, I think that’s also very important. Are you testing the API itself or are you testing through the API, which is kind of like the business logic or the validation and things like that, right?

So I think that’s kind of like a good heuristic as well that people can use so that we can have a more holistic approach to testing. And I think I also like another thing that you mentioned in the book about automation testing, because many teams actually drive their API testing by coverage or just user requirements.

So again, I think you mentioned it several times already, use a risk-based approach. Don’t just focus on the coverage, right? You know, like aiming for higher code coverage or test coverage, right? But also look at the different types of risks that you cover. I think that’s very important.

Mark Winteringham: I was going to say, coverage of risk is a type of coverage. What we’re trying to do here is not necessarily cover every instance of data, every code path, or, you know, the other different types of coverage. Like, you know, is some tool reporting back that you’ve only got 88% coverage? How has it decided that number? There’s nothing wrong with the word coverage. It’s more about what are you covering? And that’s why, yeah, I think what we should be covering is the risks that matter to us, not necessarily the number of paths we have or what some sort of tool is telling us is acceptable or not.

[00:41:54] Acceptance Test-Driven Design (ATDD)

Henry Suryawirawan: Right. So maybe speaking a little bit on the requirements aspect, right? Because you mentioned that a lot of bugs are introduced by misunderstandings about the requirements. So how can we get less misunderstanding? What about this acceptance test-driven approach that some people advocate? Do you have some advice on this area as well?

Mark Winteringham: Yeah. So it’s an interesting space, like acceptance test-driven design and like behavior-driven design as well. It took me a long time to sort of kind of get my head around it really. And you have to kind of break it down into smaller chunks. So we talked about like behavior-driven design or behavior-driven development. We’ve already kind of talked about one of the core tenets of BDD already, which is shift left, testing ideas, questioning designs, doing that work together collaboratively. So we all have that understanding. From there, as you were mentioning earlier as well, then we want to capture that into some sort of documentation or some sort of kind of concrete examples and scenarios that describe how we expect things to work. And then it’s from there that we can use that for acceptance test-driven design.

I always think it’s really interesting with ATDD, because the risks that I’d say it’s mitigating aren’t functional business risks. They’re more team risks. So yeah, I was running a workshop on automation. And this tester who was there turned around and sort of thought they’d ask a cheeky question. They said, oh, you know, it’s all well and good me automating all this testing, but can I automate my developers so they actually build the right thing? And I was like, well, it’s funny you should say that, because that is the risk that I think ATDD is mitigating. If you follow that sort of red-green-refactor approach with acceptance test-driven design, what it’s doing is putting the boundaries in place to deliver the right thing. So it’s giving some sort of cue to the developer of: you haven’t built what has been asked of you until you make it pass. But because it’s described at a business level, from how a user will interact with the system, it still gives the developer scope to implement it in the way that they want to implement it.

So it’s not so prescribed that all kind of creativity in the development space is gone. I like to think of it as putting the barriers on when you’re bowling. You know, it stops you from bowling a gutter ball, but you can still hit one pin or ten as a developer. So again, there’s this holistic thing going on here: ATDD is really useful for those risks of making sure that we deliver the right things. That doesn’t mean that it’s going to cover all of the other risks, like data risks and structural risks. And that’s why we might have other automation or other testing activities in place.

So, yeah, I’m a big fan of ATDD, but sometimes it gets a bit lost in the testing soup as a whole. And, you know, you start seeing people using these frameworks to try and exhaustively test everything, or exhaustively set out all these tests that a developer has to follow. And I think that’s where you get fatigue and people stop using these things, because you just end up with too many tests. You’re not focused on the risks of delivery, you just end up overwhelming the team, and they lose interest and stop doing that approach.
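
For readers who haven’t seen ATDD in action, here is a minimal sketch of a business-level acceptance test using Cucumber’s Java bindings. The booking scenario, the step wording and the little BookingApi stub are invented for illustration; in a real setup the steps would drive the actual system, the scenario would start red, and the developer would be free to implement the booking however they like as long as it goes green.

```java
// booking.feature, written together with the whole team:
//
//   Scenario: A guest books an available room
//     Given the room "101" is available
//     When a guest books room "101" for 2 nights
//     Then the booking is confirmed

import static org.junit.jupiter.api.Assertions.assertTrue;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class BookingSteps {

    private final BookingApi api = new BookingApi();

    @Given("the room {string} is available")
    public void theRoomIsAvailable(String room) {
        api.makeRoomAvailable(room);
    }

    @When("a guest books room {string} for {int} nights")
    public void aGuestBooksRoom(String room, int nights) {
        api.book(room, nights);
    }

    @Then("the booking is confirmed")
    public void theBookingIsConfirmed() {
        assertTrue(api.lastBookingConfirmed());
    }

    // Stand-in for a real client of the system under test, so the sketch
    // compiles on its own. Swap it for calls to your actual API.
    static class BookingApi {
        private final java.util.Set<String> available = new java.util.HashSet<>();
        private boolean confirmed;

        void makeRoomAvailable(String room) { available.add(room); }
        void book(String room, int nights) { confirmed = available.contains(room) && nights > 0; }
        boolean lastBookingConfirmed() { return confirmed; }
    }
}
```

Notice the steps say nothing about HTTP calls, databases or UI widgets; that is the “barriers on the bowling lane” effect Mark describes.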

Henry Suryawirawan: Yeah, ATDD definitely has a lot of things to cover. So you mentioned some important aspects, like it has to be collaborative. That’s the first thing, right? Use examples to specify the behaviors, right? Also the living documentation aspect, right? So I think it’s also powerful in that aspect. You write down your requirements, but also evolve them along the way, right? It’s not just a test script, it’s much more beyond that. So for people who love ATDD, do check it out. I also covered this in a previous episode with John Ferguson and Jan Molak.

[00:45:51] “AI-Assisted Testing” Book

Henry Suryawirawan: So speaking about the thing that one of your audience cheekily asked you just now, right, how to automate developers. I know that you are in the process of writing a new book, AI-Assisted Testing.

Mark Winteringham: Yes.

Henry Suryawirawan: Maybe we can use AI a little bit. So tell us a little snippet: why are you writing this book and what should we expect from AI to help us in the testing aspect?

Mark Winteringham: Yeah. So AI-Assisted Testing, it’s almost complete. It’s kind of a bit of a logical next step from some of the things that I’ve been talking about. So, you know, I was talking about like exploratory testing and how I use tools within my exploratory testing to aid my testing. The general crux of the new book is around how we can use AI ultimately to assist us, to support our testing. These things like, you know, Large Language Models, GenAI, they’re fantastic. They do these amazing things. And they’ve sort of kicked off this whole conversation, not just for testers but for developers as well, of is AI going to replace us? But I don’t think that’s the effective way of using them. I think the effective way is to reflect on what you do in your role and look at the aspects of it that can benefit from the use of these types of tools.

So for example, things like generating large sets of data using Large Language Models. In the automation space, can I do things like page objects and boilerplate code? Can I feed it some HTML and then just get my page objects out of it straight away? It’s exploring the ideas of how these tools could potentially be trained on our context or tuned towards our context as well, so that not only when we ask them a question do we get a response, but we get a response that is informed by what’s going on around us. Because a lot of the tools that exist at the moment are general generative AI. We want something that’s a bit more specific.

So yeah, the book’s very much about how we use these tools. It’s exploring things like prompt engineering, fine tuning, retrieval augmented generation. But it’s also as much about us as individuals. How do we identify the places where they can be useful? How can we have a healthy level of skepticism, but not be so skeptical that we become cynical about these tools as well? And yeah, generally, just work out ways in which we can use GenAI tools to help us enhance our testing. You know, test faster, test deeper, that sort of idea. But certainly not replace us, that’s for sure. Well, at least not now.
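
As a flavour of the “feed it some HTML, get a page object back” idea, here is a rough sketch in Java that sends a prompt to an OpenAI-style chat completions endpoint with the built-in HTTP client. The endpoint, model name, HTML snippet and prompt wording are all assumptions for illustration, not examples from the book, and the generated class would still need human review.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PageObjectGenerator {

    public static void main(String[] args) throws Exception {
        String html = """
                <form id="login">
                  <input id="username" type="text"/>
                  <input id="password" type="password"/>
                  <button id="submit">Log in</button>
                </form>""";

        // Prompt engineering in miniature: be explicit about the output you
        // want, and supply your own context (the HTML under test).
        String prompt = "Generate a Java Selenium page object class for this HTML. "
                + "Expose one method per user action and use By.id locators.\n\n" + html;

        // Assumes an OpenAI-style chat completions API and an API key in the
        // OPENAI_API_KEY environment variable; adjust for your own provider.
        String body = """
                {"model": "gpt-4o-mini",
                 "messages": [{"role": "user", "content": %s}]}"""
                .formatted(jsonString(prompt));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Treat the draft page object as a starting point to refactor,
        // not as finished code.
        System.out.println(response.body());
    }

    // Minimal JSON string escaping for the prompt text.
    private static String jsonString(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\"";
    }
}
```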

Henry Suryawirawan: Right. I think a lot of people are scarily thinking that these kinds of tools will effectively eliminate their jobs or their roles, right? But I can also see some kind of evolution, right? Maybe developers will be able to write more of these test cases, right? Or maybe testers can also use that to improve the code itself, not just report bugs and let developers fix those bugs, but also help to fix them along the way.

[00:48:46] Evolution of Developer and Tester Roles

Henry Suryawirawan: So after you have played around with this AI, what kinds of things would probably change because of the introduction of AI? Not just talking about the elimination of roles, but, I don’t know, maybe an evolution of skill sets or a different approach to collaboration altogether.

Mark Winteringham: Yeah, the GenAI stuff has only been around fairly recently, but there’s been a lot of AI work, machine learning work, done in the past that we’ve seen, especially in the testing space, around things like visual comparison tools, for example. So, you know, I get asked this question, like, will AI take our testing jobs? And I always say, if they take our testing jobs, then we have bigger problems to deal with than our jobs right now. Because, distinctly, testing and quality is a human-driven, heuristic-driven sort of matter.

But I agree with you. I think that it has the potential to evolve our roles. I think probably more on the developer side than on the testing side, as it stands. There’s a lot of talk about how, as we start to rely on tools like Copilot as developers to build our frameworks, there’s an argument that it’s garbage in, garbage out. So if this thing’s trained on bad patterns, it’s going to output bad patterns, which, you know, means boom time for testers, because we’re going to have more bugs to find and more testing to do, if you look at it from that sort of cynical angle. So there is that challenge of how do you test a product that has been developed by not just an individual or a collection of individuals, but also by a highly probabilistic machine as well. So there’s that sort of factor.

I think that, yeah, the developers who are having success with these tools, and I think, again, it’s going to apply in the testing space as well, are the people who are using them for very specific tasks, as I say. So I’m on a Slack group with some developers who are working with things like Copilot. And what they’ve said, which is really interesting, is that they can quickly knock up some code, knock up some tests, and then it gives them more time to play around and refactor the code. So to give it that extra polish. So there’s almost like a Pareto’s law thing going on where the tool gets them 80 percent of the way. And then that individual sort of factor comes in there.

I think as well, as these tools become more and more prevalent, there’s going to be more interest in people who can write prompts, and people who can engineer these types of tools as well. So not necessarily data scientists and AI scientists. But can you tune a model? Can you build the right setup? And conversely, you know, for testers, can you test these systems? And how do you deal with a deterministic system versus a non-deterministic system? We’ve all been taught how to build test cases. You can’t do that in this context. So there are new skills to learn there.

I think the not-so-great side, and I think that this applies to quite a lot of industries as well, not just software development, is what does this mean for people coming into the industry? You know, it’s all well and good me talking from a perspective of privilege, of, you know, nearly 20 years of experience working in the quality space. But how does someone who’s just coming into testing, someone who’s just coming into development, learn those important skills? And where do those jobs come from? Because, you know, if you’ve got one Large Language Model that’s doing the work of 20 junior developers, that probably means they’re not going to be hiring 19 of those in the future. So I think there are questions around that aspect of moving into this industry and how it will be impacted. I think, yeah, for a lot of us, it’s going to be this evolution of how do you build a relationship with these tools in a way that utilizes both your skills and the skills of the tools? Because if you’re not using them, you are not going to be as performant as the person who is. And if we rely too much on these types of tools, they’re just going to end up producing stuff that’s not very valuable to us, because they’re not designed, they’re not tuned, to our context.

Henry Suryawirawan: Yeah, you brought up a very interesting point here for many of us to reflect on, right? So, this LLM tool, the GenAI tool, is currently still non-deterministic, right? So it’s highly likely that when you ask a question, it will come up with one answer, and then you ask again and it will come up with another answer, a slight variant of it. So I think it’s very important as a tester, your job is, again, to ascertain quality. So how can you ascertain quality if the system is non-deterministic, so to speak, right? So I think the role of testers most likely will not be eliminated. I may be wrong, so maybe one day we will get a much improved AI. But I think here the key is to use AI as an assistant, right? It’s not to replace anyone. And use that to boost our productivity, or to cover all this boilerplate, like generating data or starting the code itself. So I think be more optimistic, and we are looking forward to the book. I hope to read that book, and maybe we can also cover it in another episode.

[00:53:51] 3 Tech Lead Wisdom

Henry Suryawirawan: So thanks again, Mark, for this interesting conversation about testing. Unfortunately, we have to wrap up. But before I let you go, I have one last question I always ask my guests, which I call the three technical leadership wisdom. So if you can think of it just like advice that you want to give to us, maybe you can share your version for us to learn from you.

Mark Winteringham: Three things to think about. So I think, obviously, the first one is going to be context is key. Given that I’ve banged on about that for nearly 600 pages across two books. I think the truly important aspect is that there is no one-size-fits-all solution to everything.

I think as well, you know, from the perspective of being a tester, of being a quality engineer, and I’m still learning this myself, being sort of a servant leader is massively important as well. So responding to what’s going on around you and helping people with problems is going to be much more productive than applying your own will or your own perspective on things.

And as well, like we talked about with the AI stuff, it’s about reflecting on yourself, reflecting on your abilities. I’m a big advocate for continuous learning. I’m always trying to read new stuff, expose myself to new things as well. Not just stuff in the software development space, but, you know, in the arts as well. Like I learned loads, in terms of teaching and presenting, from standup comedy. I’ve never gone up on the stage and done a set or anything like that, but there’s a lot we can learn from other people as well and draw those sort of skillsets in. So yeah, that would be my three.

Henry Suryawirawan: Wow, very interesting. So first, it’s about context, right? Again, no one thing fits every kind of problem, right? And I also like the last part, where you learn from various industries or different kinds of philosophies or comedy and things like that, and you infuse that into your role. I think that’s really powerful as well.

So Mark, if people would love to connect with you, ask you more questions, maybe is there a place where they can reach you online?

Mark Winteringham: So I’m usually hovering around on LinkedIn. You can find me as Mark Winteringham there. I am also still on Twitter / X, with what’s left of it. That’s 2bittester, with the number two. And I also have my website, mwtestconsultancy.co.uk, which I hope to be adding some more stuff to once I’ve finished the second book.

Henry Suryawirawan: Right. I highly recommend your “Testing Web APIs” book. It’s not just about testing web APIs, guys; it’s also a holistic approach, right? Starting from identifying quality characteristics, the risk-based approach, and things like that. I think it’s quite comprehensive. And I’m really looking forward to the second book, AI-Assisted Testing. I think everyone is excited about the potential of AI, and we are looking forward to some tips from you on how to use it properly. So good luck with the process of writing the second book.

Mark Winteringham: Thank you very much.

– End –