#92 - Agile and Holistic Testing - Janet Gregory & Lisa Crispin


“Testing is an activity that happens throughout. It is not a phase that happens at the end. Start thinking about the risks at the very beginning, and how we are going to mitigate those with testing.”

Janet Gregory and Lisa Crispin are the co-authors of several books on Agile Testing and the co-founders of the Agile Testing Fellowship. In this episode, Janet and Lisa shared the agile testing concept and mindset, with an emphasis on the whole team approach. They then explained the holistic testing concept with a complete walkthrough of how we can use the approach in our product development cycle, including how Continuous Delivery fits into holistic testing. Janet and Lisa also described some important concepts in agile testing, such as the agile testing quadrants (to help classify our tests) and the power of three (aka the Three Amigos). Towards the end, they also shared their perspectives on exploratory testing and testing in production.

Listen out for:

  • Career Journey - [00:06:35]
  • Agile Testing - [00:13:56]
  • Whole Team - [00:15:17]
  • Agile Testing Mindset - [00:19:19]
  • Holistic Testing - [00:24:42]
  • Continuous Delivery - [00:34:53]
  • Agile Testing Quadrants - [00:39:03]
  • The Power of Three - [00:42:50]
  • Exploratory Testing - [00:47:08]
  • Testing in Production - [00:50:49]
  • 3 Tech Lead Wisdom - [00:54:10]

_____

Janet Gregory’s Bio
Janet Gregory is a testing and process consultant with DragonFire Inc. She is the co-author, with Lisa Crispin, of Agile Testing Condensed: A Brief Introduction (LeanPub, 2019), More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley, 2014), and Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009), as well as the LiveLessons Agile Testing Essentials video course and the course “Holistic Testing: Strategies for agile teams”. Together with Lisa Crispin, she founded the Agile Testing Fellowship to grow a community of practitioners who care about quality.

Lisa Crispin’s Bio
Lisa Crispin is the co-author, with Janet Gregory, of three books: Agile Testing Condensed: A Brief Introduction; More Agile Testing: Learning Journeys for the Whole Team; and Agile Testing: A Practical Guide for Testers and Agile Teams, as well as the LiveLessons Agile Testing Essentials video course. She and Janet co-founded the Agile Testing Fellowship, which offers the “Holistic Testing: Strategies for agile teams” live training course both remotely and in person. Lisa was voted by her peers as the Most Influential Agile Testing Professional Person at Agile Testing Days in 2012.


Our Sponsor - Skills Matter
Today’s episode is proudly sponsored by Skills Matter, the global community and events platform for software professionals.
Skills Matter is an easier way for technologists to grow their careers by connecting you and your peers with best-in-class tech industry experts and communities. You get on-demand access to their latest content and thought leadership insights, as well as an exciting schedule of tech events running across all time zones.
Head on over to skillsmatter.com to become part of the tech community that matters most to you - it’s free to join and easy to keep up with the latest tech trends.
Our Sponsor - Tech Lead Journal Shop
Are you looking for cool new swag?

Tech Lead Journal now offers swag that you can purchase online. Each item is printed on demand based on your preference and will be delivered safely to you anywhere in the world where shipping is available.

Check out all the cool swag available by visiting techleadjournal.dev/shop. And don’t forget to show it off once it arrives.


Like this episode?
Follow @techleadjournal on LinkedIn, Twitter, Instagram.
Buy me a coffee or become a patron.


Quotes

Career Journey

  • I was very fortunate that my boss every day would come in and sit with me and say, “So, Janet. What are you working on today?” And then he’d say, “So how are you going to test that?” And then at the end of the day, he would come in and he’d say, “So Janet, what did you do today? How did you test that?” So I learned to think about testing right from the very beginning of my first job. And I didn’t know any difference.

Agile Testing

  • Our official definition: collaborative testing practices that occur continuously from inception to delivery and beyond, supporting frequent delivery for our customers. Testing activities focus on building quality into the product, using fast feedback loops to validate our understanding. And the practices strengthen and support the idea of whole team responsibility for quality.

Whole Team

  • In our experience, being successful at delivering valuable software products to customers is an effort that requires everybody on the team to be committed to it and work together. We’ve seen over the years that testing at the end, only by testers, doesn’t work. Even when I worked on waterfall projects, everybody still did testing the whole time.

  • And having that mindset that we’ve all talked about, where everybody on the team, regardless of our specialty, says, “Here’s the level of quality we want for our software.”

  • The State of DevOps survey has really supported that with hard data. When developers own the testing, when they own the automated tests, creating and maintaining them along with the testers, and the testers help them with all these other activities, like exploratory testing, that’s what correlates with high-performing teams.

  • That question, “How are we going to test this?”, if we start asking it at the very beginning, drives the testing all the way through.

  • I’m going to take this from Elisabeth Hendrickson. It’s one of my favorite quotes. “Testing is an activity that happens throughout. It is not a phase that happens at the end.”

  • If we truly believe that testing is an activity that happens from the very beginning, then when we first see that feature, that very first feature, we start to think about the risks. Start thinking about the risks at the very beginning, and start thinking about how we are going to mitigate them. A lot of that mitigation involves testing. And so it means moving those testing activities up, thinking about them early. I think that’s how we’re going to change it.

  • Don’t bring in testers when you’ve got code to test. Bring them in early to start thinking about those risks and really talking about the level of quality, because that’s how we started.

  • I don’t use “software tester”. I don’t use that term for myself because I think it’s more than software. We test ideas. We test assumptions. We’re testing many things. And so it’s not only the software. I think we test the product, and the product is the whole.

Agile Testing Mindset

  • I really think it’s an ongoing process. I think it needs training. I think it needs daily coaching from somebody who’s done it before, who knows what they’re doing to help the team.

  • If you haven’t already been on a high-performing Agile or whole-team-approach team, you can’t understand what the unicorn magic of that is. You really have to experience it. If people can, get somebody, hire somebody at least temporarily, who does know it, who does understand it, and who can help the team get over that hump.

  • Mindset switches are hard. It’s a cultural change.

  • As Janet says, we’re going to focus now on preventing bugs, not catching bugs at the end. We’re not going to be the quality police, because it’s not our job to determine what quality is. It’s the customer’s. So now we’ve got to find out what the customer wants. That’s a big part of our job as testers: to help everybody get a shared understanding of what the customer really needs.

  • From a testing perspective, the hardest thing is to understand that we are testing this small slice. We are making sure that this small slice works right now. It doesn’t mean that we’re going to give it to the customer right now. If that’s what they need, if they can use it, yes. But from a testing perspective, that is the hardest thing: to understand that the testing is narrow for this particular story. Once testers get used to that, they realize how much easier it is to keep testing and adding complexity.

  • I’ve taught our course hundreds of times, and I think that’s the biggest takeaway most of the time for the whole team, because we teach it to the whole team: that they can do that one small piece and add complexity as they go and test it.

Holistic Testing

  • There’s a lot of talk about “shift left, shift right”. And when I hear “shift left, shift right”, I think of a lateral line, a horizontal line. I’m thinking that software development isn’t like that.

  • I kind of had this little mini light bulb moment and took the loop as it is now, and wrote a blog post on it, showing that testing really does start from the discovery.

  • So when we start thinking about discovery, that’s the very beginning: thinking about the risks and planning for it, going through the building, and those are all stages that we do, whether we go really fast through them because we’re doing a small story. We discover, we understand, then we deploy and test. We might put it into production. But one way or the other, we have an internal release even if we don’t put it into production. And then thinking about what we learned from it. So it really is looking at that whole cycle. Thinking about testing holistically.

  • Part of the problem was that Dan Ashby used “continuous testing”, which made a lot of sense to us, and then somehow people took the term continuous testing and co-opted it to only mean the automated regression tests that run in continuous integration. And so when you said continuous testing, that’s what people thought of. And I was like, oh, that’s a teeny, tiny little part of testing. It’s an important part. What describes it better?

  • Janet came up with the word “holistic”, and yeah, because of the whole team approach, going through the whole cycle, the testing activities are in the whole cycle, and it really sums it up.

  • I would say we have a story. And so we would take it and think about what risks there are. We might share some acceptance tests, high-level acceptance tests. We might do example mapping if needed. So that’s part of the understanding. So you’re planning at a high level. You understand that story, and every story will be different depending on how well known it is. What are the risks?

  • Then you have some high-level acceptance tests. You’ve worked through examples, and the whole team has an understanding of what they are going to build, and then they build it. And, of course, the building will include automating and it’ll include exploratory testing on that story. It includes TDD for the programmers. It includes whatever it takes to make that code into a deployable piece.

  • So you have that code. You deploy it. Your automated tests are already running. You can run them on your system. You can do any other human-centric kind of testing. Perhaps you have to do some performance testing or something like that on a deployed system. So all of these little testing activities happen. And then if you release it, whether it’s internally or externally to a customer, you’re going to make sure that happens, and so, testing in production.

  • And then you can have your observability and your monitoring happening. But that’s all about learning. How is your customer using it if you’ve actually put it out to production, if they’re using it? And then you learn and you go into your next story.

  • That cycle can be very fast if you’re doing something very simple, like logging in with a valid user. We’re not doing any extra stuff. We’re just doing that happy path. The next story might be validating all the usernames, adding complexity. Maybe the third story is something else. But you’re adding complexity all the way along. And I think that is recognizing that the loop is just a continuous cycle, constantly adding on.

  • One thing I particularly like about the loop is having this part where we’re going to learn, observe your production. That’s one of the things the past few years, my mission has been to make sure that testers get involved with that because our products are so complex these days.

  • We need to start, as we’re thinking about what code we’re going to write for that little story that Janet is talking about, what telemetry do we need? How should we instrument that code? What events do we need to capture so that we can create our monitoring dashboard, so that we can create our alerts if something goes wrong in production? So we have all the data we need for the things we didn’t anticipate and didn’t create dashboards or alerts for, and we can learn very quickly what went wrong and fix it.

  • We need the whole picture. We can’t just do what people say, “shift left” or “shift right”. We can’t do one or the other. We’ve got to be holistic. We’ve got to go through the whole cycle and be involved all the time, using our testing skills to identify risks, to spot anomalies, to spot patterns, and to help the teams see those.

  • “Can you understand what’s happening in your system and why without having to push additional code? Slicing and dicing data from the telemetry signals.” (Liz Fong-Jones)
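
To make the instrumentation question above concrete, here is a minimal sketch in Python of emitting a structured telemetry event from a login story. It is illustrative only: the event name, the fields, and the check_credentials helper are invented for this example, and a real team would likely route events through a telemetry library such as OpenTelemetry rather than plain logging.

```python
import json
import logging
import time

logger = logging.getLogger("auth")
logging.basicConfig(level=logging.INFO)

def check_credentials(username: str, password: str) -> bool:
    # Stand-in for the real credential check (hypothetical helper).
    return bool(username) and bool(password)

def emit_event(name: str, **fields) -> None:
    # One JSON line per event; a log shipper or exporter would forward
    # these to whatever backend drives the team's dashboards and alerts.
    logger.info(json.dumps({"event": name, "timestamp": time.time(), **fields}))

def login(username: str, password: str) -> bool:
    start = time.monotonic()
    ok = check_credentials(username, password)
    emit_event(
        "user.login",
        username=username,
        success=ok,
        duration_ms=round((time.monotonic() - start) * 1000, 1),
    )
    return ok

if __name__ == "__main__":
    login("lisa", "secret")  # emits a "user.login" event with success=true
```

Deciding on events like this while the story is still being understood is what makes the “learn” part of the loop possible once the code reaches production.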

Continuous Delivery

  • To me, Continuous Delivery is something to think about and to strive for, even if you never do it. A lot of teams are on three-month release cycles, delivering to the customer every three months. And it might be because their customer chooses not to take it more often.

  • If you have that goal and you’re thinking about it, then that means that your stories are going to be testable. Your stories are going to be small enough that you can test them, and you will get those good habits, even if you choose only to give it to the customer every once in a while.

  • I encourage teams to practice Continuous Delivery. Go for that even if it’s on an internal server and they’re practicing it in their staging environment. They can still get there even if they don’t push it to the customer every time.

  • How do you get there? That’s a series of small steps. Understanding that every team has a different context. There are still teams out there that don’t even have Continuous Integration.

  • One thing I tell people who don’t have anything, who don’t even have CI: you have a deployment pipeline. Changes that you make in your software product get to production somehow. There are steps that you take to get there, even if they’re manual. And it’s really important that the whole team understands all those steps.

  • We tell people: sit down together, use your virtual whiteboard and virtual sticky notes, and draw your pipeline, even if it’s all manual, or part automation and part manual. Understand what you do, and then try to pick a place that’s maybe your biggest bottleneck, where it would help you to automate it. Get that faster feedback. Make your process more robust so that you have something solid you can get to production. But you have to build it step by step.

  • The first step in solving a problem is to make it visible. So if your problem is you struggle to get anything to production because you just don’t have good processes, start by visualizing what it is now, and then decide how you want to improve it.
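
One lightweight way to run the pipeline-mapping exercise described above is to capture the steps as plain data and let the data point at the biggest manual bottleneck. This is a minimal sketch; the steps and timings are invented for illustration.

```python
# A team's current path to production, captured as data so it can be
# discussed and improved one step at a time.
PIPELINE = [
    {"step": "build", "automated": True, "minutes": 5},
    {"step": "unit tests", "automated": True, "minutes": 10},
    {"step": "deploy to staging", "automated": False, "minutes": 45},
    {"step": "regression tests", "automated": False, "minutes": 240},
    {"step": "deploy to production", "automated": False, "minutes": 60},
]

manual_steps = [s for s in PIPELINE if not s["automated"]]
bottleneck = max(manual_steps, key=lambda s: s["minutes"])
print(f"Biggest manual bottleneck: {bottleneck['step']} ({bottleneck['minutes']} min)")
```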

Agile Testing Quadrants

  • The quadrants are a way of classifying the tests. Four quadrants:

    • The left side is about guiding development. Tests that you write before you start coding.

    • The right-hand side is about critiquing the product. So tests that you do after coding is complete.

    • The top half is business-facing tests. Those are tests that your business would be able to look at, would be able to understand, would be able to give feedback on.

    • And the bottom half is technology-facing, which just means that the language those tests are written in is something the business wouldn’t look at. For example, performance testing: they care about the results, but they would never go look at the actual tests.

  • Technology-facing tests that guide development are quadrant one, and those are kind of the unit tests. So the programmers are mostly responsible. They’re guiding their development with the unit tests.

  • Quadrant two is business-facing tests that guide development. So this is where we do acceptance test-driven development or behavior-driven development, thinking about how we are going to use those examples to guide development. What tests can we write beforehand? Things like simulations, or those sorts of things.

  • Quadrant three is business-facing tests that critique the product. So generally we put exploratory testing in there, or user acceptance testing. Business-facing tests. We need to do them because there are unknowns. What didn’t we think about?

  • Quadrant four gives us a way to talk about all those other tests. Performance, reliability, and all the quality attributes that we know we have to work with.

  • The quadrants are a way of visualizing your testing. Again, we encourage teams to think about the tests they do, and then put them in the quadrants to think about why they are doing them. Are they doing it to support the team? Because the left-hand side, supporting or guiding, is about preventing defects in code. Can we test our assumptions? Can we test our knowledge before we ever write a line of code? The right-hand side is about finding defects. What didn’t we think about?
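
As a rough illustration of the left-hand quadrants, the sketch below pairs a technology-facing quadrant-one unit test with a business-facing quadrant-two test, written in Python for pytest. The Cart domain object and its shipping rule are invented for this example; the point is that the second test reads in the customer’s vocabulary while the first reads in the programmer’s.

```python
class Cart:
    """Invented example domain object, not from the episode."""
    FREE_SHIPPING_THRESHOLD_CENTS = 5000
    FLAT_SHIPPING_CENTS = 700

    def __init__(self) -> None:
        self._items: list[tuple[int, int]] = []

    def add(self, price_cents: int, quantity: int) -> None:
        self._items.append((price_cents, quantity))

    def total_cents(self) -> int:
        return sum(price * qty for price, qty in self._items)

    def shipping_cents(self) -> int:
        if self.total_cents() >= self.FREE_SHIPPING_THRESHOLD_CENTS:
            return 0
        return self.FLAT_SHIPPING_CENTS

# Quadrant 1: technology-facing test that guides development
# (written first, TDD-style, in the programmer's vocabulary).
def test_total_sums_line_items():
    cart = Cart()
    cart.add(price_cents=500, quantity=2)
    assert cart.total_cents() == 1000

# Quadrant 2: business-facing test that guides development
# (an example the product owner could read and confirm up front).
def test_orders_over_fifty_dollars_ship_free():
    cart = Cart()
    cart.add(price_cents=5500, quantity=1)
    assert cart.shipping_cents() == 0
```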

The Power of Three

  • How should we write all the tests in all the quadrants? Together.

  • We often will start in that quadrant two, the business-facing tests that guide development, with prototypes and wireframes, and what is our UI going to look like?

  • When we say “the power of three” these days, I think you’re going to need four or five, because you often need the designers or data experts or operations experts. It could be anybody.

  • Once we’ve written and sliced up these testable stories and started working on them, now in that technology-facing quadrant one that guides development, the developers or the programmers are coding test-first. They’re doing test-driven development at the unit level. That’s going to be something that the programmers own, because that’s not a testing activity as much as it is a code design activity. But it does help us design testable code and operable code.

  • Then we’re going to maybe go back into that quadrant two and do our story testing. And again, it’s great to have a programmer and tester pair up on that, or a tester and product owner, or even better, from my point of view, an ensemble with several people in different roles, testing it together.

  • When we really think the story is ready to go, and we get enough stories in a feature, now we can start into that quadrant three, critiquing the product but still business-facing: exploratory testing, usability testing, things we really need for code that has been written and deployed. Again, more collaboration, involving more people.

  • And then the technology-facing tests that critique the product. Maybe we spike the code. We just write some throwaway code with that architecture, and now we can do performance testing or load testing on it. Let’s see if it scales. If it does, we throw out that code and start writing the real code. If it doesn’t, we spike another architecture. Those technology-facing tests may be done at the very beginning, if that’s the most important quality attribute to our business.
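
For the quadrant-four spike described above, a throwaway load check can be as small as the sketch below. The endpoint stub and the numbers are placeholders; a real spike would point the workers at the prototype architecture and compare the latencies against the business’s scalability target.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_spiked_endpoint(_: int) -> float:
    # Stand-in for one request to the spiked architecture.
    start = time.monotonic()
    time.sleep(0.01)  # simulated work; a real spike would make a network call
    return time.monotonic() - start

# Throwaway load check: 200 calls across 20 concurrent workers.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = sorted(pool.map(call_spiked_endpoint, range(200)))

print(f"p95 latency: {latencies[int(0.95 * len(latencies))] * 1000:.1f} ms")
```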

Exploratory Testing

  • I would recommend everybody read Elisabeth Hendrickson’s book, “Explore It!”. I like the fact that she’s actually got a chapter in there for programmers: how to explore when they’re testing themselves. One of the things that I really like and use to guide me is the format she uses: explore <target>, with <resources> (she uses the word resources, I think, but what is it that I’m exploring with), and then to discover <information>.

  • When I put an exploratory test charter together, I use that format because it helps guide me. It’s a mission statement. It’s not necessarily easy to do because you don’t want it too small. You don’t want it too big. What is it that I’m going to explore? And it might be a specific risk.

  • I also like to think about it this way: if I took that charter and tested, I would have my own notes, I would have my own findings. If I gave it to Lisa, she would explore differently. She would use different examples. She would have her own findings. So there’s enough leeway that we would possibly test differently, but we’re still exploring the same thing. But there’s a mission. Not just randomly hitting keys. I want to look for this specific thing.

  • There can certainly be value in ad-hoc testing. A lot of teams do bug bashes, where they just get a bunch of people and they start hammering on whatever they want to release. That’s fine. They’ll find bugs that way too. But exploratory testing is designed more to try to find those unknown unknowns before they bite you in production, and to be more organized.

  • When writing the charters, I like to start thinking of what resources I’m going to test with first, and that helps you think of that mission. And I would also recommend pairing with somebody on it.

  • I would also put all those charters as stories in your project tracking tool, in your backlog, along with your feature stories. They have to be visible. Everybody needs to know that those have to be done along with all the coding and all the other testing.
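
As a concrete illustration of the charter format Janet describes (this example is invented, not from the episode): explore the login page with expired and locked-out accounts, to discover how error messages and retry limits behave.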

Testing in Production

  • “In my context”, because that is so important. So, testing in production: if I’m doing Google Maps, yeah, let the customers find the bugs and report them. But if I’m doing code for a heart monitor, for example, I’m going to have a way different risk profile. We’re going to do all the testing we possibly can before we put it into production as such.

  • If you have a safe way to do it, with things like feature flags, the customer actually doesn’t see it. We’re not touching customer data where we’re testing, and we have it turned off.

  • It’s so important, because as good a job as we do of testing, we know our test environments don’t look like production. We’re not going to test everything. We’re going to miss things. And so we need to be able to put things in a production environment and use some kind of release strategy that lets us do that safely. It takes building infrastructure. And again, that’s an effort that your whole team needs to get involved in, and that’s when you start needing to have operations specialists, Site Reliability Engineers, and platform engineers on your team, or working with your team to help you with that.

  • It became really important to be able to use our production data and observability. What are customers doing? How are things performing? And also the analytics tools. Because if you can’t test it in production, you need some way to understand what’s happening there.
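
Here is a minimal sketch of the feature-flag guard idea, assuming a simple in-process flag table. Everything in it is invented for illustration; teams typically use a feature-flag service or config store, but the shape is the same: the new code path is deployed dark and turned on only for designated test users.

```python
# Flags would normally live in a flag service or config store;
# a plain dict keeps the sketch self-contained.
FLAGS = {
    "new-checkout-flow": {"enabled": False, "allow_users": {"internal-tester"}},
}

def is_enabled(flag: str, user_id: str) -> bool:
    # On for everyone, or only for named test users while testing in production.
    cfg = FLAGS.get(flag)
    return bool(cfg) and (cfg["enabled"] or user_id in cfg["allow_users"])

def current_checkout(user_id: str) -> str:
    return f"current flow for {user_id}"

def new_checkout(user_id: str) -> str:
    return f"new flow for {user_id}"

def checkout(user_id: str) -> str:
    if is_enabled("new-checkout-flow", user_id):
        return new_checkout(user_id)  # deployed, but dark for customers
    return current_checkout(user_id)

print(checkout("some-customer"))    # current flow for some-customer
print(checkout("internal-tester"))  # new flow for internal-tester
```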

3 Tech Lead Wisdom

  1. The prerequisite for successful software development is psychological safety.

    • One of the things that we’ve learned in the last, probably, 10 years now, but that people don’t pay enough attention to, is that the prerequisite for successful software development is psychological safety.

    • We have to feel safe to experiment and learn. We have to feel safe to ask questions. When you don’t feel safe, you’re going to burn out.

    • My advice is if you’re in a toxic environment where you don’t feel safe - where people are not safe - try to be an agent for change. You can maybe influence people. And if it doesn’t work, if you can, get out.

    • Now, I realize that’s a privilege. But also realize it isn’t your problem. No amount of yoga and meditation is going to fix it for you. Don’t blame yourself. It’s a problem with your organization. So pay attention to that before you burn out and really suffer, and your team’s going to be suffering too.

  2. Small efforts can bring about big changes.

    • Don’t underestimate your scope of influence. Don’t be afraid to try new things. Keep learning.

    • Sometimes, don’t be afraid. Take that jump, take that opportunity that is in front of you. Don’t be afraid to take it. But that’s only good if you have psychological safety.

  3. Build relationships.

    • Build relationships within your team, but also outside of your team.

    • Do start with people who seem friendly and seem like they’ll be your allies. You’ve got to pick them out. That just forms a foundation for so much.

    • It enables the kind of learning that Janet is talking about, and helps you make change, because maybe there’s something you want to do and you can’t interest anybody on your own team. Maybe somebody on another team wants to help you.

Transcript

[00:01:28] Episode Introduction

Henry Suryawirawan: Hello to you, my friends and my listeners out there. Welcome to the Tech Lead Journal podcast, the show where you can learn about technical leadership and excellence from my conversations with great thought leaders out there. And welcome to the episode number 92. Thanks for tuning in and listening to this episode. If you’re new to Tech Lead Journal, don’t forget to subscribe and follow the show on your podcast app and social media on LinkedIn, Twitter, and Instagram. And if anyone wants to contribute to the creation of this podcast, support me by subscribing as a patron at techleadjournal.dev/patron.

Testing is a critical part of any software and product development. However, there are still many teams that consider testing an afterthought, or even make quality and testing the responsibility of siloed teams that are quite detached from the product development life cycle. I’ve been wanting to invite guests who can share more about how we can do better testing as a whole, and I’m so excited to share this conversation with you all.

My guests for today’s episode are the lovely duo, Janet Gregory and Lisa Crispin. Janet and Lisa are the co-authors of several books on Agile Testing and the co-founders of Agile Testing Fellowship, and they are very well-known and influential champions for better software testing approaches.

In this episode, Janet and Lisa shared the agile testing concept and mindset with an emphasis on the whole team approach. Our discussion was then quickly followed by an explanation of the holistic testing concept, with a complete walkthrough of how we can use the holistic testing approach in our product development cycle, including how Continuous Delivery fits into holistic testing. Janet and Lisa also described some important concepts in agile testing, such as the agile testing quadrants, which we can use to help classify and think about variations of test cases, and the power of three, also widely known as the Three Amigos. Towards the end, Janet and Lisa also shared their insightful perspectives on conducting better exploratory testing and demystifying testing in production.

I really enjoyed my conversation with Janet and Lisa, learning about the concept of agile and holistic testing, and how those approaches can help us to create better software and product. I especially like their dynamics in our conversation, which made the discussion lively. If you also enjoy this episode, please share it with your friends and colleagues who can also benefit from listening to this episode. Leave a rating and review on your podcast app, and share your comments or feedback about this episode on social media. It is my ultimate mission to make this podcast available to more people, and I need your help to support me towards fulfilling my mission. Before we continue to the conversation, let’s hear some words from our sponsor.

[00:05:35] Introduction

Henry Suryawirawan: Hello, everyone. Welcome back to another new episode of the Tech Lead Journal podcast. Today I’m very excited to have the testing duo, very well-known in the software industry. We have Janet Gregory and Lisa Crispin in this episode. For those of you who don’t know both of them, they are actually the authors of the “Agile Testing” book, which was published in 2009. It was kind of revolutionary at that time, when Agile was all the buzz, and it also tried to improve testing practices. And they wrote another book titled “More Agile Testing”. This was published in 2014. Maybe there was something that they wanted to improve from the first book. And recently they also published a book, which is a kind of condensed version of both books, called “Agile Testing Condensed”. So as you can tell, today we are going to talk a lot about Agile testing and some more testing-related stuff, for sure. So Janet and Lisa, really pleased to have you on the show. Welcome to the Tech Lead Journal podcast.

Janet Gregory: Thank you.

Lisa Crispin: Thanks. It’s really nice to be here. Thanks for inviting us.

[00:06:35] Career Journey

Henry Suryawirawan: I always like to start by asking my guests to introduce themselves. So maybe you can take turns and introduce yourselves, telling us more about your highlights and the turning points that are worth sharing.

Janet Gregory: So I had a little bit of a different start. Because I worked right out of high school, and then I got married, had kids, did all of those things, did some traveling. Lived overseas for a couple of years, in Singapore and Jakarta. And then when we came back, I decided I needed to do something with my life. So I went to university, graduated in computer science, and became a programmer. But this was all around the age of 40. And then I programmed for about, I guess, six years. Then, I like to tell people, I saw the light, and I went into testing.

I was a QA manager for a few years. But my first job there was not very satisfying because it was a very chaotic kind of environment. Being a QA manager is all about process, and nobody was listening to me, and I was so frustrated. And when I quit that job, I was quite lucky, because I ended up on an Agile team as a tester. I just kind of washed my hands and started all over again. That really, truly did change my life. Because that’s when I met Lisa. Because I had this moment of panic, thinking: what does testing mean in Agile? I don’t know. So I met her because she was writing her very first book, “Testing XP”. So I had the opportunity to help review that book and learned a lot. And then, of course, Lisa and I have had our own kind of separate careers, but very much bound together all the way through. I was a tester/coach, because I was one of the few people that actually understood how to test on an Agile team. Until our first book was written, I should say. And then I started more consulting. And then I’ve been consulting ever since, really helping teams to understand what it means. So that’s my career in a nutshell.

Lisa Crispin: When I went to college, you couldn’t even get a computer science degree. It wasn’t a thing yet. You had to get electrical engineering. But my original degree is in animal science with a focus in beef cattle production, and then I got a Master’s in Business Administration, and once had a research job. But at some point, I needed a job and wandered into the University of Texas at Austin employment office and saw a sign that said, “Programmer trainees needed. No experience necessary.” And I said, “Well, that’s me.” And I’m fortunate that a) they hired me and b) they hired me for my domain knowledge. They wanted people with a business background. They had a great idea. They trained programmers themselves, so that we all could have collective code ownership because we all coded the same way. We worked together. We collaborated. That was back in the day when programming was a low pay, low status job. So there were plenty of women. It was a lot of fun.

Eventually, I went to work for a software vendor and kind of accidentally, as most people do, got into testing. “Oh, our customers are really upset that all these bugs go out with our releases. What if we tested it?” And so that was in the early nineties when I switched from programming to testing. I’ve just been fortunate. I’ve been on so many great teams most of the time in my career. I’ve been on some bad ones, but mostly great ones. I was able to join my first extreme programming team in 2000. It was difficult for testers, because all the publications at the time about extreme programming, I mean, they were great. It was all about quality. It was all about testing. It was all about people. But there was nothing about testers and what they might do on an extreme programming team.

So, you know, my team and I were trying to figure it out. Through the extreme programming mailing list, I met Janet. So we were all trying to figure this out together. I co-wrote my first book with Tip House because we wanted to share what we had learned so far, because we knew there were other challenges. So it’s been all about sharing experiences, and I’ve been very fortunate to collaborate with Janet since then. Not only on the books, but on conference sessions, articles, and training courses. It’s just been great to reach out to our software community, the testing community around the world, get everybody to share experiences, and we kind of channel that and share it with everybody as much as we can.

Henry Suryawirawan: Thank you for sharing both of your stories. If I can pick out some of the interesting things: I didn’t know you had stayed in Singapore and Indonesia. I’m Indonesian, by the way. So that’s kind of a similarity between us.

So both of you studied programming, but then went into testing. Tell us a little bit more: why testing, apart from so many other areas in software engineering?

Janet Gregory: Okay. In my very first job in programming, I worked for the Vancouver stock exchange, and so things had to work. I was very fortunate that my boss every day would come in and sit with me and say, “So, Janet. What are you working on today?” There was no pairing. We all sat in cubicles and things. So I would tell him what I was going to be working on. And then he’d say, “So how are you going to test that?” And then at the end of the day, he would come in and he’d say, “So Janet, what did you do today? How did you test that?” So I learned to think about testing right from the very beginning of my first job. And I didn’t know any difference.

So when I went to my next job, which didn’t do testing on their things, and which was working on an application just as critical as the stock exchange, it bothered me. I coded for a couple of years there, and then my boss came to me and said, “So Janet, we think we need to have a test team. You are the only one who is constantly complaining about our process. Would you like to be our QA manager?” I knew that I didn’t want to program forever. I knew that from the very beginning, and so I jumped at the chance. And then I took in everything I possibly could learn about testing and started there. It ended up not working out for me there, but that’s how I got into it. I’ve never looked back.

Henry Suryawirawan: That’s a very interesting question: “How do you test that?” Because I think many software engineers don’t even think about that. They just build the features, finish, and maybe pass it to QA. So how about you, Lisa? Do you have any interesting insights into why you went into the testing world as well?

Lisa Crispin: Yeah, it’s a pretty good story. I was working in tech support for a software vendor, a pretty big software vendor. This was back in the days when Microsoft and Apple were still small. I think we were bigger. We were frustrated because that was back when you sent everything out on tape to the customers. And these were big customers. It was mostly database software. We would do a release. The tapes would be sent out. We had nothing to do with it. This was back when people just called on the phone. No faxes. No email. You just had an angry customer on the phone who said, “I cannot believe you didn’t see this giant bug.” We hadn’t even seen it. We didn’t have it yet either.

So it occurred to us to ask the developers. We were in Colorado; they were back in Germany. And we said, “Could it be possible to send us maybe a preliminary version before you send it to the customers? And we can just start trying it out.” So they said, “Yeah, we could send it like a week ahead.” So we could install it. We could start testing it. We could find bugs and start working on patches, so that when the angry customer called, we said, “Don’t worry. We’re working on a patch, and we’ll have that out to you next week.” And we did it on our own, and then our managers were like, “Testing. What an interesting idea. What if we had a department devoted to people who do testing, to coordinate the releases, and cut the release tapes and deal with the customers? Maybe we should combine all these things.” And so I put my hand up. I’m like, “That sounds fun.” And like Janet, I’ve never looked back.

[00:13:56] Agile Testing

Henry Suryawirawan: Wow. So both of you took the chance, jumped into the opportunity, and went ahead until now. Which brought you into the concept of Agile testing. The first book was written in 2009, if I’m not wrong. So what is Agile testing? Maybe you can give the definition for all of us here who may not have heard about it yet.

Janet Gregory: Our official definition: collaborative testing practices that occur continuously from inception to delivery and beyond, supporting frequent delivery for our customers. Testing activities focus on building quality into the product, using fast feedback loops to validate our understanding. And the practices strengthen and support the idea of whole team responsibility for quality.

We spent a lot of time trying to put that together. But really what it means is: play nice in the sandbox. Work together. Think about testing from the very beginning. When I think about this, and we might talk about it a little bit later, that definition could be applied to the term we’re using these days, which is holistic testing.

Lisa Crispin: And by the way, that was a community effort. We got a lot of input from people in the community. We wrote a couple of blog posts as we were developing that. And so you can get more details on our website, AgileTester.ca/blog, and search around through that. But yes, Janet, that’s a really good point that the definition still works, even if we use a different label.

[00:15:17] Whole Team

Henry Suryawirawan: So if I read the definition, definitely, first, there are so many things mentioned there. But there are a few key things that I observe. The first thing is you mentioned this concept called the “whole team”. What do you mean by whole team? What is its relation to testing?

Lisa Crispin: In our experience, being successful at delivering valuable software products to customers is an effort that requires everybody on the team to be committed to it and work together. We’ve seen over the years that testing at the end, only by testers, doesn’t work. Even when I worked in waterfall projects, everybody still did testing the whole time. Waterfall doesn’t mean you have to do a bad job. And having that mindset that we’ve all talked about, where everybody on the team, regardless of our specialty, says: here’s the level of quality we want for our software. This is what we want to get to. This is our goal. And we’re committed to getting there because we know it’ll be hard. We know we’ll run into a lot of obstacles. We’ll have a lot of hard problems to solve. But together with all our different skills and experiences, we’ll be able to do it. Because we’re committed to it, we’re not going to just throw our hands up and give up.

That’s what it takes. If it’s just part of the team, even if some of the developers are like, “Oh yeah, I really care about quality. I’ll write some unit tests.” But if that’s all they do and they don’t involve the testers, or the testers go off and try to do all the testing on their own, we just see that doesn’t work. The State of DevOps survey has really supported that with hard data. That was Janet’s and my experience over all the years. But then we saw data from that survey support it. When developers own the testing, when they own the automated tests, creating and maintaining them along with the testers, and the testers help them with all these other activities, like exploratory testing, that’s what correlates with high-performing teams. I know as humans we don’t necessarily pay attention to hard data, but we have the hard data to back up our views.

Janet Gregory: And it starts from the very beginning. That question, how are we going to test this? If we start that at the very beginning, that drives the testing all the way through.

Henry Suryawirawan: I agree it’s a very important distinction, having the whole team really take responsibility for how to test the application or the system that they’re building. Because as you can tell in the industry, and I’m sure throughout your consulting experience as well, many teams still have siloed responsibility. Developers just build the software, while another team, called the testing team, QA team, quality engineering, whatever you call it, does the testing part.

So what, in your opinion, should change in order to make it more like a whole team concept? All this is still kind of a common trend in the industry; I can tell, at least from my part of the world. What should change in order to move towards this whole team concept?

Janet Gregory: Well, I’m going to take this from Elisabeth Hendrickson. It’s one of my favorite quotes. “Testing is an activity that happens throughout. It is not a phase that happens at the end.” So, if we truly believe that testing is an activity that happens from the very beginning, when we first see that feature, that very first feature, we start thinking about what the risks are. To be able to do that means that whoever is doing that thinking, testers if you have testers on the team, someone else if you don’t, starts thinking about the risks at the very beginning, and starts thinking about how we are going to mitigate them. And a lot of that mitigation involves testing. It starts there. And so, moving those testing activities up, thinking about them early. I think that’s how we’re going to change it.

Don’t bring in testers when you’ve got code to test. Bring them in early to start thinking about those risks and really talking about the level of quality, because that’s how we started. And I think that has to change in the mindsets of some teams and individuals. Because it’s how we think about testing. So I don’t use “software tester”. I don’t use that term for myself because I think it’s more than software. We test ideas. We test assumptions. We’re testing many things. And so it’s not only the software. I think we test the product, and the product is the whole.

[00:19:19] Agile Testing Mindset

Henry Suryawirawan: Yeah, which, as you mentioned, is like a mindset. In the book you mention the Agile testing mindset. I can also share an observation from at least my part of the world. Testing is always deemed, like, maybe a lower part of engineering, right? People assume testers just do manual testing, the repetitive jobs. I know that some testers are good programmers as well. They write great automation and all that. But still, unfortunately, these misconceptions persist. Also, there are these terms quality engineering and quality assurance, so people think those are the divisions responsible for quality.

So I think all this needs your advice: how do we change this mindset, so that we can move towards this Agile testing mindset, where everyone is involved in the testing and really cares about quality from the beginning?

Lisa Crispin: Yeah. Janet and I were just chatting about that this morning, because it’s not easy. And I’m just trying to think back to how my mindset changed. I remember the first two-week iteration on my first extreme programming team. I was working for a startup consulting company, and for most of the first two weeks I was helping another client on site. So I came back just the day before we were going to demo to the customers for our very first iteration. One of the things I did was, okay, I’m going to start up the server, and we got this little application going, and I’m going to log in as two different users. And the server crashed. And I was like, “What? We can’t show this to the customers. It’s terrible. You can’t even have two people on it.” Fortunately, in extreme programming, you have a coach as part of your team. Our coach said, “Now, Lisa, we don’t have a story in this iteration for supporting more than one user. The reason for that is our customer is developing this to show people the potential features of their product because they need funding. They’re not going to use it. They’re just going to demonstrate it. They don’t need to have two people logged in.” And I was like, “Whoa, that’s mind-blowing.” To me, quality was: the server stays up. It’s reliable. Everything works. But now the quality is what the customer needs, and we really had to focus on that.

So I really think it’s an ongoing process. I think it needs training. I think it needs daily coaching from somebody who’s done it before, who knows what they’re doing, to help the team. Janet and I like to say, if you haven’t already been on a high-performing Agile or whole-team-approach team, you can’t understand what the unicorn magic of that is. You really have to experience it. And so, you know, if people can, get somebody, hire somebody at least temporarily, who does know it, who does understand it, and who can help the team get over that hump. Because mindset switches are hard. It’s a cultural change. As Janet says, you know, we’re going to focus now on preventing bugs, not catching bugs at the end. We’re not going to be the quality police, because it’s not our job to determine what quality is. It’s the customer’s. So now we’ve got to find out: what does the customer want? That’s a big part of our job as testers. It’s to help everybody get a shared understanding of what the customer really needs.

Janet Gregory: That’s a really good story, Lisa. I wish I had heard that 20 years ago. Because that’s exactly it. From a testing perspective, the hardest thing is to understand that we are testing this small slice. We are making sure that this small slice works right now. It doesn’t mean that we’re going to give it to the customer right now. If that’s what they need, if they can use it, yes. But from a testing perspective, that is the hardest thing: to understand that the testing is narrow for this particular story. Once testers get used to that, they realize how much easier it is to keep testing and adding complexity. You have that first story, and that works. You add complexity, and you can test that, and that works. You just keep wrapping that complexity around, and you get a solid feature. And I think that makes a world of difference. But it is a hard mindset, to think that we have to be that narrow. It’s funny. I have to laugh here. I’m just going to add a little story, because I can’t talk without my hands. So, this is a podcast. You have to visualize that I’m holding my hands up and showing a narrow focus.

Lisa Crispin: That’s true. That was the thing in my early days, or even for the first few years that I was on Agile teams. My teammates had to keep reminding me. “Lisa, I know you think you just found a bug. Could you just make sure the happy path works first, so that we can be confident that we’ve at least started in a good way?” Cause you know, I was always going off into the weeds. I bet there’s a bug over here. “No, Lisa. We think it works, but we want you to help us know that it works.” Driving development with those tests, I was creating huge numbers of them. With the product owner, we were creating these huge matrices of test cases and expected outputs and giving that to the developers. “Okay. Here’s what you should do.” And they were like, “Oh my gosh, I can’t see the forest for the trees. Can you just give us a happy path? Let’s all make sure that works. Then let’s add on, like, one unusual scenario, and just do it a step at a time.” And that was something I had to learn, because even though I was always trying to collaborate with developers and testers, I had been more used to doing a big bang thing. So I was getting a whole bunch of stuff to test at once. And that’s quite a different experience.

Janet Gregory: Yeah. I’ve taught our course hundreds of times, and I think that’s the biggest takeaway most of the time for the whole team, because we teach it to the whole team: that they can do that one small piece and add complexity as they go and test it. And I think that’s a real showstopper for a lot of people, just going, “Wow,” as you’ve said.

[00:24:42] Holistic Testing

Henry Suryawirawan: I was laughing when you shared that, because I wasn’t even thinking about an application that should just work for one person. We always assume that it should work for multiple people. So you brought up a point: sometimes testing is hard because you don’t know what kind of slice you should be focusing on. And I think you’ve mentioned the key point, where we should test within a smaller scope and add complexity afterwards. And it seems like this aligns with your concept of holistic testing, where you have this short cycle of build, test, deploy, and all that. So maybe this is a good time to actually introduce what you mean by holistic testing, which you have been talking about lately.

Janet Gregory: I’ll take this one, because there’s a lot of talk about “shift left, shift right”. And when I hear “shift left, shift right”, I think of a lateral line, a horizontal line. I’m thinking that software development isn’t like that. So the DevOps loop started, and I think that’s great. The first time I had seen that was in “Discover to Deliver,” Ellen Gottesdiener and Mary Gorman’s book. They had this nice little loop, the DevOps loop. And then Dan Ashby, in one of his blog posts, talked about continuous testing. He said, this DevOps loop is great, but where is testing? There had always been this chunk, this phase for testing, and it didn’t work. So he wrote “We test here” all around the loop. We test here and here and here and everywhere. And I thought, perfect. And I used that for a long time. But still, something bothered me about the DevOps loop.

So, Lisa came up with one iteration, and then we just kept playing with it. And finally, I kind of had this little mini light-bulb moment, took the loop as it is now, and wrote a blog post on it, showing that testing really does start from discovery. So when we start thinking about discovery, that’s the very beginning: thinking about the risks and planning for it, going through the building, and those are all stages that we do, whether we go really fast through them because we’re doing a small story. We discover, we understand, then we deploy and test. We might put it into production. But one way or the other, we have an internal release even if we don’t put it into production. And then thinking about what we learned from it. So it really is looking at that whole cycle. Thinking about testing holistically. When I reread that definition of Agile testing, which I hadn’t really thought about in a while, it’s very applicable to holistic testing. But I’m going to let Lisa talk about why we call it holistic testing versus Agile testing these days.

Lisa Crispin: I mean, obviously, it was Janet who came up with this idea. But part of the problem was that Dan Ashby used “continuous testing”, which made a lot of sense to us, and then somehow people took the term continuous testing and co-opted it to only mean the automated regression tests that run in continuous integration. And so when you said continuous testing, that’s what people thought of. And I was like, oh, that’s a teeny, tiny little part of testing. It’s an important part. What describes it better? So Janet came up with the word “holistic”, and yeah, because of the whole team approach, going through the whole cycle, the testing activities are in the whole cycle, and it really sums it up.

What I found interesting is, until just recently, I was still a hands-on tester on a feature team, working in a company with several different teams. I shared Janet’s blog post on our Slack, and one of the really experienced testers on another team was like, “Ah,” and he had a light-bulb moment too. He said, “This model is great. I can finally explain to people what it is we do.” Because it really summed up what happens when well-performing teams are doing a good job of the holistic approach, with a tester on the team kind of acting as a testing consultant, guiding them, leading that effort. Now he can explain to other teams: this is what we do. I have a way to show you now. Other people have also come back with that kind of feedback. One of the ways it resonated was that people who are already doing it recognize it. So we hope it’ll help other people be able to achieve that holistic approach and grow it in their own teams.

Henry Suryawirawan: So maybe you can walk us through it in a practical sense. You have this cycle, and there are multiple stages in it. I’ll just pick a starting point: discovery, as you mentioned. Then you have planning, understanding, building, deploying, releasing, observability, and learning. Throughout these stages, there should be some level of testing involved. So maybe you can walk us through, in practical terms, day to day, maybe in a sprint: how do you run this? Do we create a story for each of the stages? Or do we have one story that goes through every stage?

Janet Gregory: So if we think about, say, the discovery: I would say we have a story. Let’s take the simplest story that we possibly can. One person can log in, using Lisa’s example. So that story is the happy path. Lisa can log into the system. What does that take? And so we would take it and think about what risks there are. Maybe the risk is the server isn’t up, or she can’t get validated, whatever it is. We think about the risks. We might share some acceptance tests, high-level acceptance tests. We might do example mapping if needed. So that’s part of the understanding. So you’re planning at a high level. What does it take? You understand that story, and every story will be different depending on how well known it is. What are the risks?

So then you have some high-level acceptance tests. You’ve worked through examples, and the whole team has an understanding of what they are going to build, and then they build it. And, of course, the building will include automating, and it’ll include exploratory testing on that story. It includes TDD for the programmers. It includes whatever it takes to make that code into a deployable piece. So you have that code. You deploy it. Your automated tests are already running. You can run them on your system. You can do any other human-centric kind of testing; perhaps you have to do some performance testing or something like that on a deployed system. So all of these little testing activities happen. And then if you release it, whether internally or externally to a customer, you’re going to make sure that happens, and so, testing in production. Every time I say testing in production, I zap back 20 years. “No, no, that’s not what we mean anymore.” But you can do that, and then you can have your observability and your monitoring happening. But that’s all about learning. How is your customer using it, if you’ve actually put it out to production, if they’re using it? And then you learn, and you go into your next story.

So that cycle can be very fast if you’re doing something very simple, like logging in with a valid user. We’re not doing any extra stuff. We’re just doing that happy path. The next story might be validating all the usernames, adding complexity. Maybe the third story is something else. But you’re adding complexity all the way along. And I think that is recognizing that the loop is just a continuous cycle, constantly adding on.

Lisa Crispin: One thing I particularly like about the loop is having this part where we’re going to learn, observe your production. That’s one of the things the past few years, my mission has been to make sure that testers get involved with that because our products are so complex these days. We’re using the cloud. We’re using all these services, microservices. We need to start, as we’re thinking about what code we’re going to write for that little story that Janet is talking about, what telemetry do we need? How should we instrument that code? What events do we need to capture so that we can create our monitoring dashboard, so that we can create our alerts if something goes wrong in production? So we have all the data we need for the things we didn’t anticipate and didn’t create dashboards or alerts for, and we can learn very quickly what went wrong and fix it.

And also, we're so lucky these days. The analytics tools that are out there, which were built for marketing and product so they know what customers are doing, are so valuable to us as testers. We can go out and look and see how our end users are using our product. Some of these tools will show you a screencast of what somebody did, which is kind of creepy, but it's very educational. When you see somebody clicking on something that's not clickable, or just moving their mouse around because they don't know what they're doing. That's so helpful to improve our product, to know where to focus our testing, to know how to help users. We need the whole picture. People say shift left, shift right; we can't do just one or the other. We've got to be holistic. We've got to go through the whole cycle and be involved all the time, using our testing skills to identify risks, to spot anomalies, to spot patterns, to help the teams see those.

Janet Gregory: I was actually just listening to your last podcast with Liz Fong-Jones, and she was talking about observability. One of the things that she said, and I went, "This is the way to think about it," because the way she said it hit home, was, "Can you understand what's happening in your system and why, without having to push additional code? Slicing and dicing data from the telemetry signals." And I thought what a wonderful statement that is about observability.

Lisa Crispin: And sometimes as well, because I remember so many painful years of, "Oh God, we cannot figure out why this 500 error is happening with our web server." Okay, let's add this telemetry and let's redeploy. Well, that didn't do it. Let's add this and redeploy. You know that Liz is one of my sheroes. The people in the observability and Site Reliability Engineering space understand testing, and they understand the importance of having this data. We all need to be aware of that and not think it's some separate area of expertise that we don't need to know about.

Janet Gregory: As Lisa said, that's one of the testing activities that happens in that understanding phase: what do we have to put into the code?

Henry Suryawirawan: Having heard what you both just described in detail, in depth, including an example of how we can practice it, I think it totally makes sense. I can see why in the industry we sometimes all work in silos. Most of the time, a developer just reads, okay, this is the story requirement, that's it. I don't think about telemetry. I don't think about how we can measure user success and all that. I just build based on the specifications. Testers probably follow suit. So this whole concept of running the cycle for every story, whether bigger scope or small scope, is really good, because you get an in-depth understanding from the start, from the discovery all the way to the users themselves: how we can monitor and make sure that the story itself brings value to the user. Thank you so much for explaining this concept.

[00:34:53] Continuous Delivery

Henry Suryawirawan: One part that I think is worth mentioning: you assume that the story can be deployed to production, where you have the deploy, release, and observability stages. But so many people still haven't practiced this, which is continuous delivery. How important is continuous delivery in this holistic testing?

Janet Gregory: Well, I see so many different contexts. To me, Continuous Delivery is something to think about and to strive for, even if you never do it. A lot of teams are on three-month release cycles, delivering to the customer every three months. And it might be because their customer chooses not to take releases more often; a lot of customers require that. They don't want all those little fixes, for many reasons. But if you have that goal and you're thinking about it, then that means your stories are going to be testable. Your stories are going to be small enough that you can test, and you will build those good habits, even if you choose only to give it to the customer every once in a while. I encourage teams to practice Continuous Delivery, to go for that even if it's on an internal server and they're practicing it on their staging environment. They can still get there even if they don't push it to the customer every time. It's a great way to get all of those practices in place. It's not going to happen overnight, because a lot of teams don't even have automation yet.

How do you get there? That's a series of small steps, understanding that every team has a different context. We talk about it as if everybody does it, but they don't. There are still teams out there that don't even have Continuous Integration. I remember when I told Lisa that once, probably 10 or 15 years ago, and she went, "What?" Because every team she had worked on had it, and she couldn't see working any other way. But there are still teams without Continuous Integration, which, you know, is kind of sad. But I know they exist.

Lisa Crispin: It's puzzling to me, because like Janet says, every team I've been on had it, because it only took us a day or two to get it going. So what is the problem, people? It's not hard. And like Janet said, this was before we even had the words Continuous Delivery, which I first learned from Jez Humble and Dave Farley's excellent book of the same name, which is a book about testing. It took my team a few years, because at first we couldn't even get a passing build with an artifact we could deploy to production within two weeks. Then we set a goal: okay, we're going to have a passing build every day. And for the days we didn't have that, we had a calendar where we put colors to remind us, so everybody in the company could see, including the executives, and start to understand the importance of having a passing build every day. Just building it step by step. It took us years to get there, but then we always had a deployable artifact we could deploy to production.

In our business domain, customers didn't want new changes every day, so we only released every two weeks. But we had the ability, and that's the goal. One thing I tell people who don't have anything, who don't even have CI: you still have a deployment pipeline. Changes that you make in your software product get to production somehow. There are steps that you take to get there, even if they're manual. And it's really important that the whole team understands all those steps. So we tell people: sit down together, use your virtual whiteboard and virtual sticky notes, and draw your pipeline, even if it's all manual, or some automation and some manual. Understand what you do, and then try to pick a place that's maybe your biggest bottleneck, where it would help you to automate. Get that faster feedback. Make your process more robust so that you do have something solid you can get to production. But you have to build it step by step. I learned from Janet years ago: the first step in solving a problem is to make it visible. So if your problem is that you struggle to get anything to production because you just don't have good processes, start by visualizing what it is now, and then decide how you want to improve it.
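
A minimal sketch of Lisa's "draw your pipeline" exercise, in Python. The steps and timings are invented for illustration: list every step, manual or automated, then look for the biggest manual bottleneck:

```python
# Make the pipeline visible: every step, even the manual ones,
# with rough timings. Steps and numbers are invented examples.
pipeline = [
    {"step": "commit + unit tests (CI)", "automated": True,  "minutes": 10},
    {"step": "build deployable artifact", "automated": True,  "minutes": 15},
    {"step": "deploy to staging",         "automated": False, "minutes": 60},
    {"step": "regression test pass",      "automated": False, "minutes": 480},
    {"step": "deploy to production",      "automated": False, "minutes": 120},
]

for s in pipeline:
    kind = "auto  " if s["automated"] else "MANUAL"
    print(f'{kind} {s["minutes"]:>4} min  {s["step"]}')

# The biggest manual step is the first candidate to automate.
bottleneck = max(
    (s for s in pipeline if not s["automated"]),
    key=lambda s: s["minutes"],
)
print("Automate next:", bottleneck["step"])
```

Whether on a whiteboard or in a script like this, the point is the same: you can't improve a pipeline you can't see.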

Henry Suryawirawan: I would be surprised if many people still do not practice Continuous Integration. But a few years back, I could even see some teams that did not have version control, or maybe they had version control, but only locally. That is a bad practice we should all avoid. By now, I hope all of you listeners have practiced at least version control and Continuous Integration.

[00:39:03] Agile Testing Quadrants

Henry Suryawirawan: So, in Continuous Delivery, we know that you have many stages in your deployment pipeline. Obviously, many of those stages will be automated tests. I know that you have this concept, the Agile testing quadrants, where you categorize different types of tests. Can you give us an overview? What are the Agile testing quadrants, and what does each quadrant represent?

Janet Gregory: It actually was Bret Pettichord, myself, and Brian Marick. It was one of those late-night conference chats. We were talking about this, and Brian was explaining these quadrants. He had a napkin, and he was drawing on it; it's one of those stories, right? And then Bret and I bugged him for months till he finally wrote it up in a blog post, so then we could use it. And we did ask his permission to use it. But the quadrants are a way of classifying the tests.

Four quadrants. The left side is about guiding development: tests that you write before you start coding. The right-hand side is about critiquing the product: tests that you do after coding is complete. The top half is business-facing tests. Those are tests that your business would be able to look at, understand, and give feedback on. And the bottom half is technology-facing, which just means that the language those tests are written in is something the business wouldn't look at. For example, performance testing: they care about the results, but they would never go look at the actual tests.

So the four quadrants are numbered for ease of reference only. Technology-facing tests that guide development are quadrant one, and those are things like unit tests. The programmers are mostly responsible; they're guiding their development with the unit tests.

Quadrant two is business-facing tests that guide development. This is where we do acceptance test-driven development or behavior-driven development, thinking about how we are going to use those examples to guide development. What tests can we write before coding? Things like simulations, and those sorts of things. Those are all things that help us.

Quadrant three is business-facing tests that critique the product. Generally we put exploratory testing in there, or user acceptance testing. Business-facing tests. We need to do them because there are unknowns. What didn't we think about?

And then quadrant four, which was the quadrant that made me really like this. Because at that time, and that was in 2003 at that conference, every time I would talk about testing on Agile projects, every tester I met would say it won't work. They talked about customer tests and programmer tests, but what about all those other tests? Quadrant four gives us a way to talk about all those other tests: performance, reliability, and all the quality attributes that we know we have to work with.

So the quadrants are a way of visualizing your testing. Again, we encourage teams to think about the tests they do, and then put them in the quadrants to think about why they are doing them. Are they doing it to support the team? Because the left-hand side, supporting or guiding, is about preventing defects in code. Can we test our assumptions? Can we test our knowledge before we ever write a line of code? The right-hand side is about finding defects. What didn't we think about? So it's a way of classifying and visualizing the kinds of tests, and where we actually want to do them.
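
To make the left-hand-side distinction concrete, here is a minimal sketch in Python. All names, examples, and rules are invented for illustration: a quadrant-one, technology-facing unit test that guides development, next to quadrant-two, business-facing examples the business could read and critique:

```python
# Quadrant 1: technology-facing, guides development (unit-level TDD).
def add_to_cart(cart, item):
    return cart + [item]


def test_add_to_cart_appends_item():
    assert add_to_cart([], "book") == ["book"]


# Quadrant 2: business-facing, guides development. The examples are
# written in terms the business can read and critique (ATDD/BDD style).
LOGIN_EXAMPLES = [
    # username, password, expected outcome
    ("lisa", "s3cret", "logged in"),
    ("lisa", "wrong",  "error: invalid credentials"),
]


def login_outcome(username, password):
    valid = {"lisa": "s3cret"}
    if valid.get(username) == password:
        return "logged in"
    return "error: invalid credentials"


def test_login_examples_guide_development():
    for username, password, expected in LOGIN_EXAMPLES:
        assert login_outcome(username, password) == expected
```

Quadrant three and four activities (exploratory sessions, performance tests) then critique what this code-guided work produced.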

Henry Suryawirawan: Thanks for walking us through the Agile testing quadrants. For those of you who cannot visualize what Janet was just describing, I'll put what the diagram looks like in the show notes. There are so many categories of tests, depending on left or right, top or bottom. Have a look.

[00:42:50] The Power of Three

Henry Suryawirawan: But importantly, there are so many types of tests. For one particular story, should we build so many of those tests? What is the ratio here? How should we build these test cases? Because I also noticed that you have this good practice, which is called the three amigos, or the power of three. So maybe walk us through a little bit how we should write all those tests.

Lisa Crispin: How should we write all the tests in all the quadrants? Together. With each feature that your team is going to work on, and with each story that you work on, it depends. For a web application, maybe you're doing something with a user interface, and then it has a backend server and a database and all those pieces. We often will start in that quadrant two, the business-facing tests that guide development, with prototypes and wireframes: what is our UI going to look like? We can test those. Maybe get the designers involved.

When we say "the power of three", these days I think you're going to need four or five, because you often need the designers or data experts or operations experts. It could be anybody. So we make sure we're testing those feature ideas that Janet mentioned earlier. Once we've written and sliced up these testable stories and start working on them, now, in that technology-facing quadrant one that guides development, the developers or the programmers are coding test-first. They're doing test-driven development at the unit level. That's going to be something that the programmers own, because it's not a testing activity as much as it is a code design activity. But it does help us design testable code and operable code. All those good things.

Then we're going to maybe go back into that quadrant two and do our story testing. And again, it's great to have a programmer and tester pair up on that, or a tester and product owner, or even better, from my point of view, an ensemble with several people in different roles, testing it together. Well, I guess you wouldn't do that so much at the story level; that would be more for exploring later on. But definitely you could do acceptance testing at the story level. In pairs, I've done that a lot. Doing the "Show me" that Janet talks about. The programmers think they're done with the story, and they ask a tester, "Hey, can I just walk you through what I've done in just a few minutes?" You may have that aha moment of, "Oh, I didn't understand that quite right." Or, "Oops, I forgot a piece." You've prevented a bug, and you haven't even checked the code into source control yet. So having that close collaboration, it's like this integration: testing, coding, testing, coding.

When we really think the story is ready to go, and we get enough stories in a feature, now we can start into that quadrant three: critiquing the product, but still business-facing. Exploratory testing, usability testing, things we really need for code that has been written and deployed. Again, more collaboration, involving more people. In one of my recent jobs, when somebody thought a feature was pretty ready to go, we would just open it up to the whole engineering organization and say, "Hey, we're going to do an ensemble testing session. Exploratory testing on this feature for 30 minutes at this time. Please join us." Then we'd have people from other teams join. Fresh eyes. They've never seen that feature before. That's so important. And in such a short time, we would flush out anything: any bugs that were there, any hidden assumptions that we hadn't thought about. Getting that collaboration, even from people on other teams.

And then the technology-facing stuff, you might do at the beginning. Maybe we've got some new piece of a product that we want. It needs to support a large number of users at the same time and have really good performance. Well, we have to think about the architecture, and how do we know that architecture will scale? So maybe we spike the code. We don't do the careful thing; we just write some throwaway code with that architecture, and now we can do performance testing or load testing on it. Let's see if it scales. If it does, we throw out that code and we start writing the real code. If it doesn't, we spike another architecture. Those technology-facing tests may be done at the very beginning, if that's the most important quality attribute to our business.

Janet Gregory: And those become guiding development.
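
A throwaway load-test script for the spike Lisa describes could look something like this rough sketch. The handler, user counts, and timings are invented for illustration; a real spike would exercise the prototype service built on the candidate architecture:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request():
    # Stand-in for the spiked code path; in a real spike this would
    # call the throwaway service under test.
    time.sleep(0.01)
    return 200


def load_test(concurrent_users=50, requests_per_user=20):
    total_requests = concurrent_users * requests_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(lambda _: handle_request(),
                                range(total_requests)))
    elapsed = time.perf_counter() - start
    print(f"{len(results)} requests in {elapsed:.2f}s "
          f"({len(results) / elapsed:.0f} req/s)")


load_test()
```

Like the spiked code itself, a script like this is disposable: its only job is to answer the scaling question before the real implementation begins.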

Henry Suryawirawan: So the key thing I observed when you explained it, Lisa, is that it's still collaborative, right? It's not just one person or one team who is responsible for coming up with all these tests. It could be the product manager, the tester, the software developer, the designer, whatever the role is in the team. And again, there's the feedback: the tests sometimes drive the development, and vice versa. I like this concept of holistic testing already. Thanks for sharing it.

[00:47:08] Exploratory Testing

Henry Suryawirawan: You mentioned exploratory testing a couple of times. This term, I think, is a little bit vague for some people. Some people think it means we just do random tests and see whether the system crashes. Maybe a little bit of guiding light here. What do you mean by exploratory testing?

Janet Gregory: Well, first of all, I would recommend everybody read Elisabeth Hendrickson's book, "Explore It!". I like the fact that she's actually got a chapter in there for programmers: how to explore when they're testing themselves. She has so many good ideas. One of the things that I really like, and that I use to guide me, is the format she uses: explore <target>, with <resources> (she uses the word resources, I think, or variations, but what is it that I'm exploring with?), to discover <information>.

So when I put an exploratory test charter together, I use that format, because it helps guide me. It's a mission statement. It's not necessarily easy to do, because you don't want it too small and you don't want it too big. What is it that I'm going to explore? It might be a specific risk: we think we have some security issues in this area, so maybe I'm going to be exploring certain kinds of exploits. Or maybe I want to check something. Say we're adding a form that takes people's names. On the Agile Testing Fellowship website, for example, one of the things we did was make sure that we could handle any name. Your name doesn't have any unusual characters, but somebody from Thailand might have different characters, and people from Sweden or Norway have those different kinds of characters too. Will it support every name? So we put together an exploratory test charter to do exactly that. I also like to think about it this way: if I took that charter and tested, I would have my own notes and my own findings; if I gave it to Lisa, she would explore differently. She would use different examples and have her own findings. There's enough leeway that we would possibly test differently, but we're still exploring the same thing. It just takes practice. But there's a mission. It's not just randomly hitting keys; I want to look for this specific thing.
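
A tiny sketch of the kind of resource list Janet's name charter might use. The sample names and the validation stub are invented for illustration:

```python
# Charter: explore the name field, with names containing varied
# character sets, to discover whether every name is supported.
SAMPLE_NAMES = [
    "Åsa Lindqvist",    # Swedish
    "Bjørn Håkonsen",   # Norwegian
    "สมชาย ใจดี",         # Thai
    "José Muñoz",       # Spanish
    "O'Brien-Smith",    # apostrophe and hyphen
]


def accepts_name(name):
    # Stand-in for the real form validation under exploration.
    return bool(name.strip())


for name in SAMPLE_NAMES:
    status = "accepted" if accepts_name(name) else "REJECTED"
    print(status, "-", name)
```

Two testers running this same charter would likely pick different names and notice different things, which is exactly the leeway Janet describes.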

Lisa Crispin: Yeah. There can certainly be value in ad-hoc testing. A lot of teams do bug bashes, where they just get a bunch of people and they start hammering on whatever they want to release. That's fine; they'll find bugs that way too. But exploratory testing is designed more to try to find those unknown unknowns before they bite you in production, and to be more organized. I actually got a great tip from a programmer I worked with a few years back. When writing the charters, I like to start by thinking of what resources I'm going to test with, and that helps you think of the mission, as Janet says. What have I got at my disposal that I can use to test this? Maybe I've got some API endpoints I could use, or some production-like test data, or some personas that the marketing department came up with. So thinking about the resources first is a big help, because it's hard to learn to write those things. And I would also recommend pairing with somebody on it.

I would also put all those charters as stories in your project tracking tool, in your backlog, along with your feature stories. They have to be visible. Everybody needs to know that those have to be done along with all the coding and all the other testing. We need to do that exploring as well.

Henry Suryawirawan: Thanks for clarifying this concept, because many people still assume exploratory testing means, okay, you just go ahead and try to break the system or do random stuff. There should be a particular charter or mission; you kept mentioning that. I think that's a very good explanation.

[00:50:49] Testing in Production

Henry Suryawirawan: One more misconception, which Janet mentioned, is about testing in production. Again, it's a common misconception; people think, oh, maybe we should not write tests, we just deploy and let people figure it out. Maybe a little bit of guiding light here as well.

Janet Gregory: Oh. I remember listening to a keynote years ago at a conference, and they said testing is dead, let the customers test. I wanted to challenge that person in the worst way, and I kick myself now for not putting my hand up and saying something. I was hoping that person would say "in my context", because that is so important. So, testing in production: if I'm building Google Maps, yeah, let the customers find the bugs and report them. But if I'm writing code for a heart monitor, for example, I'm going to have a way different risk profile. We're going to do all the testing we possibly can before we put it into production.

But testing in production, and Lisa can talk about it much better than I can most of the time. Really, you need a safe way to do it: things like feature flags, right? The customer doesn't actually see it. We're not touching customer data while we're testing, and we have it turned off. There are different ways to do it. I think Katrina Clokie's book "A Practical Guide to Testing in DevOps" is still the best one I've come across for testing with feature flags and working within that construct.

Lisa Crispin: Yeah, "A Practical Guide to Testing in DevOps". That's available on Leanpub, and you can get it free. She doesn't require you to pay, although it would be nice to pay, because it's one of my go-to books, for sure. It's so important, because as good a job as we do of testing, we know our test environments don't look like production. We're not going to test everything. We're going to miss things. So we need to be able to put things in a production environment and use some kind of release strategy that lets us do that safely. It takes building infrastructure. And again, that's an effort your whole team needs to get involved in, and that's when you start needing operations specialists, Site Reliability Engineers, and platform engineers on your team, or working with your team, to help you with that.

Janet Gregory: I think you just said something that really stuck with me there, Lisa. You said in the production environment, not necessarily with production data. And I think that's the key: the production environment, not the customer's data.
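
A minimal sketch of the flag-gated approach Janet and Lisa describe, in Python. The flag names, user IDs, and checkout functions are invented for illustration; real teams would typically use a feature-flag service rather than an in-process dictionary:

```python
# Feature flag registry; in practice this would live in a flag service.
FLAGS = {"new_checkout": {"enabled_for": {"internal-tester-1"}}}


def is_enabled(flag, user_id):
    # The flag is off for everyone except listed internal testers.
    return user_id in FLAGS.get(flag, {}).get("enabled_for", set())


def new_checkout(user_id):
    return f"new checkout for {user_id}"     # path under test


def legacy_checkout(user_id):
    return f"legacy checkout for {user_id}"  # what customers see


def checkout(user_id):
    # Customers never see the new path while it is tested in the
    # production environment, and the test path avoids customer data.
    if is_enabled("new_checkout", user_id):
        return new_checkout(user_id)
    return legacy_checkout(user_id)


print(checkout("internal-tester-1"))  # exercises the new path
print(checkout("customer-42"))        # stays on the legacy path
```

The flag makes Janet's distinction concrete: the new code runs in the real production environment, but customers and their data are untouched until it's turned on.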

Lisa Crispin: At my last full-time job, we could not test in production in any way. We just could not. It was a financial services application. There was no way to do it. If we had tried, we would have gotten in trouble. So then it became really important to be able to use our production data and observability. What are customers doing? How is the thing performing? And also the analytics tools: what are individual users doing? We could actually see what they were doing. Like, oh, people are complaining about this and we can't reproduce it in our test environment; let's go see what they're doing in production. Thank goodness we have those tools now. Because if you can't test in production, you need some way to understand what's happening there.

Janet Gregory: Context. Yeah. So important.

Henry Suryawirawan: Thanks for emphasizing that again: context. So for people who are learning about testing in production, it doesn't mean you just deploy your code without any kind of safety mechanism, like feature flags, or without robust monitoring and observability. Without knowing what your system does, it's risky, especially if you're dealing with finance or health and the like.

[00:54:10] 3 Tech Lead Wisdom

Henry Suryawirawan: Unfortunately, due to time, we have to wrap up. It's been a really great conversation about testing overall. But before I let both of you go, I normally ask this question about three technical leadership wisdoms. I will leave it to you whether you want to combine efforts or go individually. What will be your three technical leadership wisdoms to share with us?

Lisa Crispin: Yeah. I thought that was an interesting question, and I think my answer is different now than it would have been a few years ago. One of the things that we've learned in the last probably 10 years, but people don't pay enough attention to, is that a prerequisite for successful software development is psychological safety. We have to feel safe to experiment and learn. We have to feel safe to ask questions. When you don't feel safe, you're going to burn out. My advice is, if you're in a toxic environment where you don't feel safe, where people are not safe, try to be an agent for change. I've found really good help in the books by Linda Rising and Mary Lynn Manns, "Fearless Change", and the fearless change patterns for influencing people. You can maybe influence people. And if it doesn't work, if you can, get out. Now, I realize that's a privilege. Some people need to pay their bills and don't have any other options. But also realize it isn't your fault. No amount of yoga and meditation is going to fix it for you. Don't blame yourself. It's a problem with your organization. So pay attention to that before you burn out and really suffer, and your team suffers too. That's one of them. I'll let you do one, Janet.

Janet Gregory: Two of mine go right along with that. One of them is: small efforts can bring about big changes. Don't underestimate your scope of influence. The other one, which ties in with what you were saying about psychological safety: don't be afraid to try new things. Keep learning. Many years ago, in my second job out of high school, I was an administrative assistant, and I had a really lazy boss. Really lazy. People kept saying, why do you work for him? But what ended up happening was I learned, because he had me do most of the budget reconciliation and such, since he didn't like to do it. So I did it for him. Then one day he was sick, and there was a very important budgetary meeting of all the directors. I walked in with all the papers, and they said, "Where's Brian?" And I said, "He's sick." And they said, "Should we cancel the meeting?" And I said, "No, let me explain." Because I had taken on that extra thing and learned it, it just opened up a whole new world for me. I was viewed very differently than I had been the day before. It paid off. So sometimes don't be afraid. Take that jump, take that opportunity that is in front of you. Don't be afraid to take it. But that's only good if you have psychological safety, as Lisa said before.

Lisa Crispin: My second big piece of advice is something I have followed myself. For a change, I take my own advice. In the last few jobs I've had, it's been to build relationships. One of the things I got from Katrina Clokie's book is to build relationships within your team, but also outside of your team. I've gotten so much benefit from asking people on the platform team, "Could I have a one-on-one with you for 30 minutes every couple of weeks?" So helpful. I learned so much from them. I got so much support from them that helped my team. I just felt like it was a secret weapon. Do start with people who seem friendly and seem like they'll be your allies; you've got to pick them out. That forms a foundation for so much: the kind of learning that Janet is talking about, and helping you drive change, because maybe there's something you want to do and you can't interest anybody on your own team, but maybe somebody on another team wants to help you.

Henry Suryawirawan: Really beautiful. Everything aligns together. I can see the synergy between both of you, as you can tell from the books and all that. For people who want to follow up with you, or maybe continue this conversation outside of this episode, is there a place where they can find you online?

Janet Gregory: Lots of places. We’re both on Twitter. My Twitter ID is JanetGregoryCA. CA for Canada because I’m from Canada. People get that confused. Lisa, your Twitter ID?

Lisa Crispin: It’s just LisaCrispin, and I’m Lisa Crispin on all social media platforms that I’m on.

Janet Gregory: We're on LinkedIn. All of our websites have a contact-us page. My website is JanetGregory.ca, or AgileTester.ca, or the AgileTestingFellow site.

Lisa Crispin: And I’m LisaCrispin.com. We’re all connected together.

Janet Gregory: If you Google, it’s pretty easy to find.

Henry Suryawirawan: And Lisa and Janet also have a Leanpub book, Agile Testing Condensed. If you ever want this condensed concept in one go, it's also a good place to get all these resources.

One last question, one fun fact. When I read your books, I always see these dragon and donkey images. I have my hypothesis, but I'll let you explain what the dragon and donkey are all about.

Lisa Crispin: Well, the donkeys are kind of my mascot, because I have donkeys. We have four donkeys here on our little farm in Vermont. They've taught me a lot over the years about trust and agility. I hitch them up to carts and wagons, drive them and take them places, and they do work around the farm. They're sort of therapy donkeys; I take them to senior centers and schools and things like that. So yeah, it's just my passion.

Janet Gregory: Three of those donkeys are miniature donkeys, and then she's got one big one who protects the three little ones. I just thought that was kind of cool. And the dragon is because I'm a fantasy buff. I love fantasy stories. My favorite series is Dragonriders of Pern by Anne McCaffrey. So I like friendly dragons. Not the Game of Thrones dragons, the friendly ones. It's just something that appeals to me.

Henry Suryawirawan: Thanks for explaining that. A really fun fact for those of you who have noticed the dragons and donkeys in the books; now you know what they mean. Again, it was a real pleasure to have this conversation. Thank you so much for spending your time, Janet and Lisa. Goodbye for now.

Lisa Crispin: Thank you so much for having us. It was a fun conversation.

Janet Gregory: It was.

– End –