#210 - Creator of WireMock: Building a Successful Open Source Project and The Art of API Mocking - Tom Akehurst


“Over-the-wire mocking brings into your inner loop of development the ability to learn and get feedback about real world integration by working with something that closely resembles a real API.”

Brought to you by Swimm.io
Start modernizing your mainframe faster with Swimm.
Understand the what, why, and how of your mainframe code.
Use AI to uncover critical code insights for seamless migration, refactoring, or system replacement.

Tired of API dependencies slowing down your development and testing?

Dive into my conversation with Tom Akehurst, creator of WireMock, and discover the art of using API mocking to build successful software in complex distributed environments.

Key topics discussed:

  • The origin story of WireMock, born from integration challenges at Disney
  • How WireMock became a leading API mocking tool with millions of monthly downloads
  • Insights on building and maintaining successful open-source projects
  • The key benefits of API mocking for developer productivity and experience
  • The shift from the traditional testing pyramid to a “testing trophy” approach
  • Leveraging API mocking for API-first design and rapid prototyping
  • The distinction between API mocking and contract testing
  • The future of API testing and development in the age of microservices and AI

Whether you’re a seasoned developer or just starting out in API development, this episode provides valuable insights into the power of API mocking and the journey of building a successful open-source project.

Timestamps:

  • (02:11) Career Turning Points
  • (08:08) WireMock OSS Success Story
  • (15:15) Welcoming & Aligning with Contributors
  • (18:05) Benefits of WireMock & API Mocking Tools
  • (19:59) API Mocking & Testing Pyramid
  • (22:05) API Mocking vs Contract Testing
  • (25:25) The Economics of API Mocking
  • (27:27) API First Design
  • (32:32) Impact to the Developer Experience & Productivity
  • (35:32) Working More Effectively with Distributed Systems
  • (38:15) API Virtualization/Simulation
  • (41:13) AI Advancement in API Development
  • (44:25) Building API for AI Agents
  • (47:25) 3 Tech Lead Wisdom

_____

Tom Akehurst’s Bio
Tom Akehurst is the creator of WireMock, the open source API mocking tool, which he’s now been working on for well over a decade. He’s also CTO and co-founder of WireMock, Inc., where he helps complex engineering organisations effectively adopt API simulation techniques in order to build better software faster.

Tom has been developing software for over 20 years. He’s built large-scale web systems for media, travel, hospitality, retail and government, applying lean, eXtreme Programming, Continuous Delivery and DevOps principles along the way.


Our Sponsor - JetBrains
Enjoy an exceptional developer experience with JetBrains. Whatever programming language and technology you use, JetBrains IDEs provide the tools you need to go beyond simple code editing and excel as a developer.

Check out FREE coding software options and special offers on jetbrains.com/store/#discounts.
Make it happen. With code.
Our Sponsor - Manning
Manning Publications is a premier publisher of technical books on computer and software development topics for both experienced developers and new learners alike. Manning prides itself on being independently owned and operated, and for paving the way for innovative initiatives, such as early access book content and protection-free PDF formats that are now industry standard.

Get a 45% discount for Tech Lead Journal listeners by using the code techlead24 for all products in all formats.
Our Sponsor - Tech Lead Journal Shop
Are you looking for some cool new swag?

Tech Lead Journal now offers swag that you can purchase online. Each item is printed on demand based on your preference and will be delivered safely to you anywhere in the world where shipping is available.

Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once it arrives.


Like this episode?
Follow @techleadjournal on LinkedIn, Twitter, Instagram.
Buy me a coffee or become a patron.


Quotes

Career Turning Points

  • We kept finding ourselves in situations where we were supposed to integrate with an API, and it wasn’t ready, or the team delivering it said it was ready but it didn’t behave as specified. A lot of the time, environments would be unstable. Things would be broken. These problems are really common now in microservices, SOA, or anything networked, and you need to address them to be productive.

WireMock OSS Success Story

  • There are things that make open source projects successful. You need to be solving a problem that’s important and top of mind for people in a given moment. There is an element of timing around that. I started building WireMock when REST started becoming popular. And a few years after that, microservices became what everybody was talking about.

  • There are a number of design choices that made it a popular and useful tool. There’s another element as well, which is about showing up and doing the boring things repeatedly: writing documentation, fixing bugs, being available on a support channel so people can interact with you, and demonstrating that you’re there for the long term to support the project.

  • In addition to the transition away from SOAP as the predominant API style, there was a transition in how people were building software, and the DevOps movement was getting into full flow around that time. There was the desire and impetus to automate things and make things more developer-centric. The tools of the SoapUI generation, which tended to be customized IDEs, weren’t always a great fit when you wanted a code-based or API-based primary interface rather than a UI.

  • One of the ways WireMock thrived is that from the outset, there was this design principle that everything should be representable as data. There should be an externalized data format. It should be well documented and something you can check into source control and something legible by human beings, rather than just something that gets dumped out wholesale by a tool as a way of persisting its state into a file system.

  • The idea was that this data model is at the core, but then you would have a DSL over the top as a nice way of expressing that data programmatically and getting all the benefits of a full programming language. That combination yielded many benefits. Having everything expressed as an externalized data format allowed a degree of interoperability in ways I wasn’t predicting. (A minimal sketch of the data format and DSL follows at the end of this list.)

  • People went off and wrote bindings in other programming languages. Take the eight or nine third-party programming languages supported as clients to WireMock: I had nothing to do with writing them.

  • I’ve noticed more recently that by describing things as data rather than code, you can make inferences about what that data means that you can’t easily make with code.

  • Putting lots of extension points in there has definitely been a good choice. There are times when, as an open source maintainer, you get asked to put things into the core of your product, and you have to say no, because it feels like a niche use case: complexity you’d have to maintain that isn’t pulling its weight in terms of usage.

  • But if you can provide good extension points for people and encourage them to take advantage of those, then you’re not just saying no. You’re saying, I’ll give you some help towards this, but then you can do the work to build this particular feature.

  • A general unifying principle: give people the opportunity to use the tool in ways you didn’t necessarily expect.

  • The other thing worth mentioning is focusing on the first-run experience: making sure that when people first encounter your tool, they get value from it quickly. It’s the same heuristic you apply when building a startup, where you get 10 seconds of someone’s attention when they first become aware of what you’re building or promoting. If you can capture their attention in those 10 seconds, they might give you two minutes or five minutes. And if you can give them something of value in that time, they might give you an hour to try to set the thing up.

  • You have to make sure that at each of these touch points, you’re allowing people to make meaningful progress or have a moment of enlightenment where they realize the value you’re delivering.

  • I don’t think I was thinking about it in these terms when I first built it, but I had this instinctive sense that you should give people little snippets of code they can paste into their project that just work. When I first started building WireMock, programmatically setting up an HTTP server in Java was quite an onerous process. (Both the data format and that original snippet are sketched below.)
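To make the “data at the core, DSL over the top” idea concrete, here is a minimal sketch (not from the episode; the endpoint and fields are invented for illustration). The same stub can live as a WireMock JSON stub mapping, checked into source control:

```json
{
  "request": {
    "method": "GET",
    "url": "/users/123"
  },
  "response": {
    "status": 200,
    "jsonBody": { "id": 123, "name": "Jane Doe" },
    "headers": { "Content-Type": "application/json" }
  }
}
```

or be expressed programmatically via the Java DSL, which produces the same underlying data:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class UserStubExample {
    // Registers the equivalent stub against a WireMock server
    // (assumes one is already running on the default localhost:8080).
    public static void registerUserStub() {
        stubFor(get(urlEqualTo("/users/123"))
            .willReturn(okJson("{ \"id\": 123, \"name\": \"Jane Doe\" }")));
    }
}
```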
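And this is roughly what that early first-run snippet looked like, a sketch based on the classic JUnit 4 integration (the port number is arbitrary):

```java
import com.github.tomakehurst.wiremock.junit.WireMockRule;
import org.junit.Rule;

public class PaymentGatewayTest {
    // Starts a WireMock server on port 8089 before each test and stops it
    // afterwards; no manual web server setup or teardown required.
    @Rule
    public WireMockRule wireMockRule = new WireMockRule(8089);
}
```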

Welcoming & Aligning with Contributors

  • I probably didn’t do enough for a long time in the WireMock project to promote contributor accessibility.

  • In WireMock, the company, we had a head of community working for us for a while. He moved things forward enormously in that regard. He did all the obvious things: improving documentation, signposting people to contribution guidelines, putting things where you’d expect to find them in GitHub, and helping clean up things that made it difficult to get started working with the source code.

  • There’s a load of project hygiene factors. It’s the first run experience for contributors rather than users. While I’d focused a lot on the user’s first run experience, I’d neglected the contributors’ one for a long time.

  • You want to be able to check the project out, build it, run the tests, open it in an IDE and poke around to see how things work. If somebody has to install tools that can’t be easily installed automatically, and jump through hoops before they can start, you set the bar very high for them to make a contribution. And they probably won’t.

  • Set clear guidelines around how you accept contributions: what you’ll accept and what you won’t. I’ve always tried to say to people, please come and speak to us before writing code, because it’s awful when someone’s put loads of effort into building something new, and you have to say, “Sorry, this probably doesn’t belong in the core, we’re not going to merge this.” Being upfront about that, so people feel their time is valued while contributing, is definitely good.

Benefits of WireMock & API Mocking Tools

  • WireMock is an API mocking tool. What it allows you to do is simulate the behavior of networked APIs over the wire. It’s conceptually similar to object mocking or in-code mocking. But instead of substituting implementations of interfaces or functions within your code, you’re going outside the process and substituting something on the other side of a network connection.

  • So if you’re testing an app or service that needs to call out to an API, it can still call out over a network interface. When testing with API mocks, you’re still exercising all the production code paths around networking and serialization and all those things that have to happen to integrate with an external service.

  • There are a number of benefits to doing this versus object mocking. With microservices, and systems being increasingly composed of networked components, a lot of the complexity has moved onto the wires or into the code which governs what happens on the wires. While you can create abstractions within your codebase for the things that happen on the wire and do all your testing with respect to those, you’re essentially abstracting away real-world complexity, which will bite you in production if it isn’t correct.

  • The benefit of over-the-wire mocking is that you’re bringing into your inner loop of development the ability to learn and get feedback about real-world integration. You’re working with something that closely resembles a real API rather than your own abstraction with your own assumptions about it. (A minimal test sketch follows this list.)
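A minimal sketch of what this looks like in a test (the endpoint and names are invented; assumes WireMock and JUnit 4 on the classpath, and Java 11+ for the HTTP client). The code under test makes a real HTTP call over the network, just to a local mock instead of the real service:

```java
import com.github.tomakehurst.wiremock.junit.WireMockRule;
import org.junit.Rule;
import org.junit.Test;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static org.junit.Assert.assertEquals;

public class OverTheWireExampleTest {

    @Rule
    public WireMockRule wireMock = new WireMockRule(8089);

    @Test
    public void exercisesRealNetworkingAndSerialization() throws Exception {
        // Substitute the service on the other side of the network connection.
        wireMock.stubFor(get(urlEqualTo("/orders/42"))
            .willReturn(okJson("{ \"status\": \"SHIPPED\" }")));

        // Production-style code path: a real request over a real socket.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
            URI.create("http://localhost:8089/orders/42")).build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}
```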

API Mocking & Testing Pyramid

  • What I’ve observed in practice over the years of doing projects and using API mocking is that the really useful band of testing migrates to the middle. So you get the testing trophy shape more often than the testing pyramid shape.

  • You shouldn’t be aiming for a particular shape at all. The shape is just a heuristic; there are better ways of figuring out how to balance your testing strategy.

  • Back when I first started test-driven development, anything above the unit test layer, anything integrated, tended to be very expensive and difficult. Tests were costly to build, the surrounding infrastructure was costly, and running them was difficult, slow, and error-prone. The reason for the testing pyramid being that shape was that if you could push as much as possible down to the unit layer, things would be fast and easy.

  • With frameworks and servers shrinking to be small and fast to boot up, combined with tools like WireMock, you can now do far more of your testing in what I think of as the Goldilocks zone in the middle. You’re isolating individual apps and services by mocking their dependencies on the network, but still getting the benefit of tests exercising real production code paths. Combined with improvements in observability and debugging tools, you can get maximum bang for your buck by writing tests this way.

API Mocking vs Contract Testing

  • They’re adjacent concepts, and mocking is often used as part of contract testing. WireMock is used as part of Spring Cloud Contract, for instance.

  • The way consumer-driven contract testing tools work is that you define a mock sufficient for your client code, then you run tests with the tool, and it derives a set of tests it can apply to the server based on what you mocked. So mocking is integral to the process.

  • There are benefits around environment stability. There’s also a benefit similar to mock-driven, test-driven development, where you’re saying: this is my interface definition, this is what we’re building to. There are going to be two implementations right from the start: the mock implementation, and eventually the real implementation. There is a contract that both must adhere to. Contract testing is valuable in that context.

  • One point of resistance around mocking you hear a lot is this idea that it can’t be trusted, it’s not realistic. You see some organizations who do the vast majority of their testing in a completely integrated arrangement. They exercise the system as a user would, then see if anything goes wrong. They don’t attempt to categorize tests by risk or manage anything like that. They just do a load of testing and maybe things break, or maybe they don’t. This tends to be a very costly, inefficient and error-prone way of testing.

  • An alternative is making effective use of mocks by explicitly segmenting the risks you’re managing with different kinds of testing. One type of testing assumes your external contracts are correct: does the thing you’re testing work as expected? Is it functionally correct? Then outside of that, you have a smaller set of tests asking: are my assumptions correct?

  • There’s still a place for integrated testing to verify if contracts are being adhered to by the real system. The mock contract and real contract are the same thing. By separating these two risks, you can save yourself energy, runtime and labor. Versus saying, I don’t trust this, I’m going to do everything fully integrated.

The Economics of API Mocking

  • Another challenge within testing is the economics of testing different scenarios. Often, there are things that are relatively straightforward to test: happy paths or near happy paths that don’t require much data. But as you step away from those cases, things get progressively more difficult. If you have cases where you need to load systems with specific data or large amounts of data, or cases where you want to simulate failures or errors that those systems can’t produce on demand, such tests become difficult or impossible.

  • The other downside of integrated testing is that you tend to gravitate towards tests you can practically implement rather than the ones most valuable in managing risk. The advantage of bringing mocks into the equation is that they flatten the economics out: the difficulty of testing some weird edge case is about the same as testing your number one happy path. (A fault-injection sketch follows this list.)
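As a sketch of how this flattening plays out in practice (the endpoints are invented; uses WireMock's built-in fault and delay features), stubbing a connection reset or a slow response takes about the same few lines as stubbing a happy path:

```java
import com.github.tomakehurst.wiremock.http.Fault;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class EdgeCaseStubs {

    // Assumes a WireMock server running on the default localhost:8080.
    public static void registerEdgeCases() {
        // Happy path: a normal response.
        stubFor(get(urlEqualTo("/inventory/item-1"))
            .willReturn(okJson("{ \"stock\": 12 }")));

        // A dependency that drops the connection mid-response.
        stubFor(get(urlEqualTo("/inventory/item-2"))
            .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));

        // A dependency that responds only after a five-second delay.
        stubFor(get(urlEqualTo("/inventory/item-3"))
            .willReturn(okJson("{ \"stock\": 0 }").withFixedDelay(5000)));

        // A server-side error that's hard to produce on demand for real.
        stubFor(get(urlEqualTo("/inventory/item-4"))
            .willReturn(aResponse().withStatus(503)));
    }
}
```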

API First Design

  • In environments where an API needs to be built, there is an opportunity for collaboration between consumer and producer.

  • Very typical in microservice environments, you have this problem where a new product feature needs to be built, but the API feature it depends on doesn’t exist yet. The mobile team needs it, the web team needs it, the data team needs it. So there has to be a collaborative design process.

  • It can be valuable to design things up front and think about these things rather than writing code first and generating the design from it. Sitting and thinking about things, mulling over the consequences of design choices generally produces better results than just bashing the keys and going for it.

  • This is another facet of organizations that rely heavily on integrated testing. There are places where internal APIs are like opaque pipes that two things talk through, and nobody cares what’s going on as long as the system functions. That’s maybe okay when there’s one producer and one consumer.

  • But if this is to be genuinely an API, if at some point there are going to be 10 other teams consuming it, and you continue that approach where two endpoints just need to communicate, you’ll probably make erratic design choices inconsistent with the organization’s API style. You’ll introduce breaking changes. You’ll do things that make it harder for more people to adopt it.

  • Treating it as a proper API that is co-designed and evolved in a way that avoids breaking changes and surprises is really important.

  • Where mocking fits into this is that mocks can be prototypes of APIs. This is something our cloud product is particularly oriented towards. When designing an API, you want to get it into people’s hands so they can try it and validate it as quickly as possible.

  • A lot of API-first tools stop short of giving you something practical to work with. You have a design document and maybe governance rules you can run against it. It’s validated by inspection: people look at it and say, yeah, this looks about right. What you should really do is give it to developers and say, go and code something. Try to build some version of what you want to use this API feature for.

  • You can guarantee that nearly always there’ll be that “oh” moment: despite spending three hours in a design session talking about what this API should look like, the second you try to write code that uses it, you realize some obvious thing you’ve missed, like fields you absolutely need in the data, without which you can’t proceed with your workflow.

  • The sooner you flush that out, the better, particularly in organizations where APIs are being built as facades over legacy stacks, or where the cost of implementing an API feature is very high and involves lots of coordination.

  • The value of shifting left that feedback point where you discover whether the API design is right is huge. You see these banking environments where an API is a facade over decades of legacy tech. It can literally take months to surface one new piece of data.

  • I’m a very strong believer in using mocking as prototyping to surface those problems early so you can deal with them as cheaply as possible. (A sketch of a prototype stub follows this list.)
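One lightweight way to do this, sketched under the assumption of the standalone distribution (the endpoint and payload are invented): drop a JSON stub mapping into WireMock standalone's `mappings` directory, and consumers get a running prototype of the not-yet-built endpoint:

```json
{
  "request": {
    "method": "GET",
    "urlPath": "/accounts/1001/balance"
  },
  "response": {
    "status": 200,
    "jsonBody": {
      "accountId": "1001",
      "balance": 250.75,
      "currency": "GBP"
    },
    "headers": { "Content-Type": "application/json" }
  }
}
```

Start it with something like `java -jar wiremock-standalone.jar --port 8080` (the jar name varies by version), and consumer teams can write real code against the design immediately; missing fields surface the moment someone tries to build a workflow on it.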

Impact to the Developer Experience & Productivity

  • The biggest aspect is that in environments where you have lots of APIs of varying levels of developer experience and accessibility, coming from different sources and vendors, your development environment is made up significantly of other people’s APIs. The stability and developer experience those APIs expose have a huge impact on your ability to be productive.

  • Concretely, if you’re integrating with an old third-party API run by a vendor who didn’t prioritize developer experience, it has a sandbox that’s slow, flaky, maybe not running the same code as production, hard to get data into, and impossible to performance test against. All these things impact you directly as a developer by destabilizing your environment.

  • If you’re working on a highly integrated piece of software, every external API that isn’t presenting an excellent developer experience is degrading the quality of yours bit by bit.

  • It’s like availability: in a highly networked environment, every dependency with less-than-perfect availability reduces yours proportionately. It’s similar with developer experience. If you’re wrestling with dozens of third-party sandboxes that all have different problems, you can spend a lot of time not actually doing your job. (A worked illustration follows this list.)

  • It’s not just third parties. It’s APIs built on top of legacy systems. Sometimes it’s commercial off-the-shelf software installed on premises, where you’ve only got one non-production license, so everyone’s sharing the same environment, and it’s running on ancient server infrastructure that nobody wants to buy any more of. You can’t run more of it, and all these problems follow.

  • Mocking lets you build an insulating wall around your own environment. You can say, I don’t need all of that while I’m doing my development. I’m going to build an environment I can fully control and get that determinism and performance I need for my developers to remain in flow.
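To put rough numbers on that availability analogy (an illustration, not from the episode): a service that depends on ten APIs, each 99.9% available, can itself be at best 0.999^10 ≈ 99.0% available; at 99% per dependency, that compounds to roughly 90%. Developer experience erodes in the same multiplicative way across many flaky sandboxes.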

Working More Effectively with Distributed Systems

  • Treat the APIs and the messages you pass around between systems as first-class artifacts. Make them visible and legible. Treat them as independent artifacts of design. Apply governance rules to them. Try to make them consistent. All these things will make it easier.

  • Adopt testing strategies that take advantage of this. Have this notion that I’ll do most of my testing out to the edge of my boundary for my service or app. I’ll assume my assumptions about contracts are correct, but then have other supporting testing strategies to validate whether that’s true.

  • It’s a vintage time for tooling in the API space. There’s lots of progress happening around standards: the OpenAPI Initiative has added Arazzo and Overlays, and the richness with which you can describe APIs, and increasingly facets of APIs, in ways useful for verifying, observing, and documenting them keeps growing. Both the standards and the tooling around those standards are improving at pace.

  • There’s this area referred to as API observability, where consolidation around those standards, and OpenAPI in particular, allows you to look at API traffic and draw rich conclusions from it. You can figure out what’s going on within your system, when and how things are changing, and what the direction of travel is.

API Virtualization/Simulation

  • There’s definitely a language problem in API mocking or simulation or virtualization.

  • To give a brief history lesson: there was SoapUI, and even before that a generation of tools that called themselves service virtualization. The product that popularized the category was LISA, originally from ITKO and then Computer Associates. They used the term service virtualization before VMware and that type of virtualization took off, so it was less of a confusing term back then.

  • Mocking came out of the Agile movement, from the London eXtreme Programming way of doing things. I was very into that when I first started building WireMock. Mocking as a language and set of idioms made sense. One of the reasons WireMock appealed was that it spoke the language of that growing cohort of Agile developers. The mocking term came from that period and set of practices.

  • API virtualization is a reheating of service virtualization. And simulation is a term that some open source and commercial vendors have started using as a further break with the past that’s more descriptive than virtualization or mocking.

  • When I talk about mocking, I have a broad view of what that could mean. It can mean simple canned responses, which is what most people associate mocking with, all the way up to complex, rich, dynamic behavior.

  • We did a survey within the company recently about this and discovered that most people see those two as very different parts of the spectrum. Mocking is the simple canned-example thing you do when writing a unit test or narrow integration test. Simulation and virtualization are where you’re doing things that are data-driven, templated, dynamic, or stateful, introducing those sources of complexity.

  • The convention we’re going for is mocking if you’re doing it in code in your inner loop, and simulation if you’re doing more complex stuff in your outer loop, as a simple rule of thumb.

AI Advancement in API Development

  • One thing becoming clear is that LLMs need to interact with APIs to get things done.

  • There’s discussion happening about whether APIs will go away and agents or LLMs will just become web scrapers, and we won’t need separate APIs anymore. I’m not so sure about that because when looking at consumer facing web systems, there’s an argument that they generally have much better human UIs than APIs.

  • There’s a huge swathe of software out there that’s only accessible via its API. So to make those things available to AI applications, it’s necessary to provide APIs that AIs can use.

  • It seems that AI and LLMs have a different taste in APIs than developers do. The very normalized, DRY style we tend to go for in developer API design isn’t very AI friendly. AIs prefer things where all the context is present and made explicit in one place rather than being achieved through references and shorthands. I think there’s going to be a move to start building APIs intended for agents that adopt this very denormalized style relative to previous generation APIs.

  • Another thing happening is that AI coding assistants, and the agents starting to emerge for assisting with coding, are producing a lot of demand and revealing bottlenecks in enterprise software delivery systems.

  • The Google DORA report, a big quantitative study of developer productivity and its various influences, looked at the use of AI coding assistants. It found that, on average, there was a net negative productivity impact where these were present.

  • My hypothesis on this comes from the theory of constraints: if you make something faster that isn’t what constrains your system’s throughput overall, you’ll just build up work in progress around the actual bottleneck. A lot of organizations don’t have the downstream software delivery throughput to cope with more untested code being produced.

  • Add to that, we probably trust code produced by LLMs a bit less than that produced by humans. There’s a big problem to solve in how organizations do end-to-end software delivery in a way that can take advantage of the productivity benefits of coding assistants.

Building API for AI Agents

  • WireMock has an externalized data format. One big advantage is that because it’s been around for a long time, the internet is full of examples of how to create this data. LLMs are quite good at producing WireMock mocks. You can ask for something in WireMock JSON format, and in my experience, it will produce something pretty good and useful.

  • One thing we’re looking at is this idea that we should ask our AI coding assistants to generate the APIs they want. If you’re using an AI coding assistant to build an agent for something, we should ask the LLM for an API design and express it in WireMock. Then we can immediately test it in our agentic workflow.

  • This is an idea we’re exploring, whether it will take us anywhere or prove viable, time will tell. But being able to use mocking to remove non-determinism when testing these flows is a valuable use case for it.

  • Where you have workflow steps, some involving calling an LLM, some calling out to fetch data or perform operations in external systems or APIs, you have a difficult problem with non-determinism from the LLMs. If you’re also integrating with sandboxes where data isn’t completely stable, you’re multiplying the problem of non-determinism and how you can repeatedly test.

3 Tech Lead Wisdom

  1. Hire great engineers and teach them the scrappy mindset.

    • An observation about hiring engineers for a team, particularly in a startup: my advice is to challenge the conventional wisdom that if you’re building a startup, you should look for scrappy, very customer-focused, very business-focused engineers who will move fast and break things.

    • While there are developers who can be both scrappy and great engineers, my experience of hiring is that there tends to be a spectrum. You have people who are great engineers, very rigorous, but who may not be business and customer oriented the way people at the other end are. The people at the other end have more of those focuses but maybe don’t produce such high-quality code, or tend to introduce technical debt at a high rate when they build things.

    • The point I want to make is that you can hire rigorous engineers and then teach them to expand their comfort zone and do the things needed to make a startup work: adopting more of a commercial, user, and customer focus, and being flexible around quality, using technical debt strategically.

    • I would argue it’s easier to hire people who are great engineers and then show them how to do that stuff rather than hire people who are scrappy and terrible engineers and then try to make them good engineers. It’s harder to move in that direction.

  2. Making an open source project successful is about making the end users successful.

    • What makes an open source project successful is almost a product management cliché: it’s about making your end users successful and making them look good amongst their peers.

    • A large part of that is if they’re going to stake their reputation on introducing the tool you’ve built into their organization, you need to show that you’re behind it, and you’re serious about it. This isn’t some developer’s whim. It’s a serious project that you’re going to document, support, and stand by. Do all the boring work to make it successful in the long term.

  3. Being productive in organizations that have multiple teams and many services is about a couple of key things.

    • One is decoupling teams, and to some extent individuals, from the broader technology context enough that their own working set, their context, stays at a manageable size.

    • Secondly, look for opportunities to shift feedback left. Get meaningful feedback about the quality and fitness for purpose of your code as early as possible, even if that means doing more design and planning around things like APIs and interfaces.

Transcript

[00:01:33] Introduction

Henry Suryawirawan: Hello, guys. Welcome back to another new episode of the Tech Lead Journal podcast. Today, I have with me Tom Akehurst. He’s the creator of WireMock. If you don’t know, it’s an open source tool for API testing, API mocking, and things like that, which was created a long time ago, back in 2011. I think we’ll learn from Tom today how he made that open source project successful. And we’ll talk a lot more about API testing and API development in general as well. So welcome, Tom, to the show.

Tom Akehurst: Thank you for inviting me on.

[00:02:11] Career Turning Points

Henry Suryawirawan: Right. Tom, I always love to first invite my guests to share a little bit more about themselves. Maybe tell us about your career turning points that you think we all can learn from.

Tom Akehurst: Sure. So I’m sort of, I guess, a career software developer. I’ve been doing it for my whole adult life. I’ve usually worked in enterprises that have complex and often distributed network systems and integration problems to solve. That kind of thing. And to give a little bit of the genesis story of WireMock, which is probably my most significant career turning point.

I was doing a consulting project for Walt Disney Parks and Resorts working on what is now the MagicBand system. So their system of kind of RFID wristbands that you get as a guest that grant you access to rides and your hotel room and various other things. When that system was first being conceived of and built, I ended up working on that program. And Disney needed to do a sort of huge digital transformation in order to enable this. So they had a load of systems that sort of operated different bits of the park and hospitality experience, but they were kind of quite siloed. They did some integration. They had some sort of existing service oriented architecture, you know, I guess, but the degree of integration needed to be far greater in order to support this initiative to build the MagicBand system. So I got involved in that program.

And it was sort of before microservices had been coined as a term, and it was when REST was really just catching on as the sort of hot new architectural style. One of our senior engineers from my company actually persuaded the Disney senior engineers, the architects, that they should be using RESTful design principles rather than continuing to build things on SOAP. And that was great and that was forward looking. But the downside of this was that nobody really knew what they were doing at this point. And it was a huge program. Lots of developers kind of parachuted in from various integrators and consulting companies very quickly onto this program.

And the result was, to put it politely I suppose, friction being caused in various places. And things not being quite as productive as they needed to be. And the team that I was working on was building a set of apps for cast members, for Disney staff members to help customers with their experience. And these systems needed to talk to lots of different APIs that had been provided by various different vendors. We kept finding ourselves in this situation where we were supposed to integrate with an API and it wasn’t ready or the team delivering it said it was ready, but it didn’t behave the way that the specification said it would. A lot of the time environments would be unstable. Things would be broken. This whole sort of litany of problems that are really common in microservices now or SOA, or anything networked really that you kind of need to address in order to be productive.

And I actually went on paternity leave and I kind of thought that this is an itch I’ve been wanting to scratch for a while. We need a tool that’s going to help decouple us from this slightly chaotic environment in which we’re trying to build software. And so I spent quite a lot of my paternity leave hacking away on what became the first version of WireMock. And really the first requirement for it was: I just want to be able to copy and paste the examples on the wiki that formed the specification of these APIs into a tool. And then I want to be able to test my product against these simulated responses rather than having to rely on all of these other teams to provide working services, to provide test data, to provide me documentation telling me how to operate these things, all of that kind of stuff. So that was kind of how WireMock was born.

Henry Suryawirawan: Right. So very exciting to hear how you started with this WireMock tool. I can still remember back then as well; early in my career, we used to work with SoapUI and things like that, when SOAP was still the main thing in application development. And SoapUI back then was kind of like the go-to tool for API mocking, testing, or just doing the things WireMock is doing, right? So when we switched to REST, there were definitely a lot of gaps, for example in the maturity of the tools. So it was really great to see a solution like WireMock exist. Because as we adopted more RESTful APIs, we needed such tools in place.

[00:08:08] WireMock OSS Success Story

Henry Suryawirawan: So WireMock itself was created, I think, in 2011, right? It’s been like 14 years now. I think it’s such a successful project. It has like 6 million monthly downloads, lots of contributors, right? And a lot of companies, and maybe other tools, integrate WireMock as part of their systems as well. So maybe you can share a little bit of your insights: how do you actually make an open source project successful?

Tom Akehurst: That’s a really interesting question. I think there are a number of things that make open source projects successful. I think there is a degree to which, obviously, you need to be solving a problem that is really important and top of mind for people in a given moment. There is an element of timing around that. I started building WireMock at the point that REST started becoming popular. And then a few years after that, microservices became this thing that everybody was talking about. And particularly with respect to microservices, I think WireMock had reached maturity at the point that people really got into doing that, so it was the tool for the moment. And it became very useful in that regard.

I think there’s a number of design choices that I made that turned out to be good ones, I guess, that made it a popular and useful tool. I’ll go into that a little bit more in a moment. But I think there’s another element as well, which is to some extent about kind of showing up and doing the boring things and doing them repeatedly. So writing documentation, fixing bugs, being available on a support channel of some kind so that people can interact with you over it and demonstrating that you’re there for the long term to support the project.

I think in terms of the API design, it’s interesting you mentioned SoapUI. Because in addition to the transition away from SOAP as the predominant API style, there was also, around that time, a transition in the way people were building software, and the DevOps movement was getting into full flow. The desire and the impetus to automate things and to make things a lot more developer centric was also a trend at the time. And I think the tools of the SoapUI generation, which tended to be customized IDEs, essentially, were not always a great fit in that context, where you wanted a code-based or an API-based primary interface onto the tools that you used rather than a UI.

And one of the ways in which WireMock thrived is that, right from the outset, there was this design principle I wanted to hold on to, which was everything should be representable as data. So there should be an externalized data format. It should be well documented and something you can check into source control and something which is legible by human beings, rather than just this thing that gets dumped out wholesale by a tool as a way of persisting its state into a file system.

And the idea was that this data model is at the core, but then you would have a DSL over the top of that as a nice way of expressing that data programmatically and getting all the benefits you get from a full programming language. And I think that combination yielded a lot of benefits. Some of them I was maybe predicting when I made those choices and some of them I wasn’t. Having everything expressed as an externalized data format allowed a degree of interoperability in ways that I wasn’t predicting.

In some of the more obvious ways, a lot of people went off and wrote bindings in other programming languages. Take the eight or nine third party programming languages supported as clients to WireMock; I had nothing to do with writing those. There were some other benefits as well. One I’ve noticed much more recently, actually: by describing things as data rather than as code, you can make inferences about what that data means that you can’t very easily with code.

So a good example is, in our commercial product, we have this two-way generation between OpenAPI and mocks. A stub definition describes, in a declarative way, how a stub should respond to an input. It’s far easier to take that and turn it into a useful OpenAPI element. Whereas the alternative that a lot of previous generation tools took was to say, if you want to do anything complex, you have to break into scripting, you have to write a Groovy script or a bit of JavaScript in order to express this rule. You can’t really then go and parse that and use it in this other context. So there are some design elements like that.

I think putting lots of extension points in there has definitely been a good choice. I think there are times when, as an open source maintainer, you get asked to put things into the core of your product and you have to say no, because it feels like this is too much of a niche use case: a lot of complexity you are going to have to take on and maintain that is then maybe not pulling its weight in terms of the amount that it gets used. But if you can provide good extension points for people and encourage them and help them to take advantage of those, then you’re not just flat out saying no. You’re saying, I’ll give you some help towards this, but then you can do the work to build this particular feature. So I think that’s been quite important as well.

Those are probably the main things. I think there’s maybe a few other things I could point to about the design. But those are the key things, really, I think, giving people the opportunity to use the tool in ways that you didn’t necessarily expect as a sort of general unifying principle.

I suppose the other thing worth mentioning as well is focusing on the first run experience. Making sure that when people first encounter your tool, they get some value from it very quickly. I think the same heuristics that you apply when you’re first building a startup also apply here, where you get 10 seconds of someone’s attention when they first become aware of the thing that you’re building or promoting. And then if you can capture their attention in that 10 seconds, they might give you two minutes or five minutes. And if you can give them something of value in that time, they might give you an hour in order to try and set the thing up. And you have to make sure that at each of these touch points, you’re allowing people to make meaningful progress or to have a moment of enlightenment where they realize what the value is that you’re delivering.

I don’t think I was concretely thinking about it in these terms when I first built it, but I think I just had this instinctive sense that you should give people little snippets of code that they can paste into their project and the thing will just work. Because when I first started building WireMock, programmatically standing up an HTTP server in Java was quite an onerous process. You had to understand the slightly opaque APIs, I suppose you could say, of the web servers that were available at the time.

So part of the real value of WireMock was what you could paste: in the original version, it was a JUnit 4 rule. You could paste in these two lines, an annotation and a class being new’ed up. And in the background, that would mean that the web server would start up, bind to a port, configure itself in a way that meant you could start using it, and then shut itself down at the end of your test. And you didn’t have to worry about any of that, any of, you know, Jetty’s internals or anything like that. So I think that was quite a powerful tool to gain adoption.

Henry Suryawirawan: Well, thanks for sharing all these learnings and insights, right? Maybe for those people who, you know, just started their career in the last five to 10 years, right? Maybe you would think this is a natural thing to do, like code as configuration, code as data, and all this fast bootstrap, right? You know, like starting a web server in one line or just annotation. I think these days, these things are quite ubiquitous, but back then I think it was maybe more revolutionary. So I think kudos to you for thinking in that way.

[00:15:15] Welcoming & Aligning with Contributors

Henry Suryawirawan: And I think looking at the success of WireMock, it’s been around for quite a number of years. I don’t often get a chance to talk to open source founders who have this kind of project, right? And the last I checked, you have about 246 contributors on GitHub. That is quite a lot of developers; not many companies even have that number of developers working as a team. But you have this as part of the open source project. So maybe tell us a little bit more, for those people who want to build a big open source project: how do you ensure that people are aligned, and that you have a good number of people willing to contribute and expand the project?

Tom Akehurst: That’s an interesting one. I don’t profess to be an expert at this, I have to say. So I probably didn’t do anywhere near enough for a long time in the WireMock project to promote contributor accessibility. But in WireMock, the company, we had a head of community working for us for a little while. And he moved things forward enormously in that regard. So he did all the obvious things around improving documentation, signposting people to contribution guidelines and so on, put things in places where you would expect to find them in GitHub, helped clean up a load of things that just made it difficult to get started working with the source code, all of that kind of thing.

So there’s a load of project hygiene factors, you could call them. I suppose it’s the first run experience for contributors rather than users. And while I’d focused a lot on the user’s first run experience, I think I’d neglected the contributors’ one for a long time. Really, you want to be able to check the project out, build it, run the tests, open it in an IDE and be able to poke around and see how things work. And if you make it so that somebody has got to install a bunch of tools that can’t be easily installed automatically, and they have to jump through a load of hoops before they can even start doing that, then you set the bar very high for them to show up and make a contribution. And they probably won’t.

I think setting clear guidelines around how you accept contributions, and what you’ll accept and what you won’t, matters. I mean, I’ve always tried quite hard to say to people, please come and speak to us before writing a load of code, because it’s always awful when you can see that someone’s put loads of effort into building something new and you have to say, look, sorry, this probably doesn’t belong in the core, so we’re not going to merge this. So I think trying to be as upfront as possible about that kind of thing, so that people feel like their time’s being valued while they’re contributing, is definitely a good thing.

Henry Suryawirawan: Yeah. So I think you emphasize again the developer experience, right, before this term got hijacked lately by developer productivity and all that. But it used to be its own term: developer experience, you know, the first time you check out the code, how you get up to speed very fast, the API, the CLI experience, and things like that. Thanks for emphasizing that again, because I can see that so many successful open source projects simply have this investment in developer experience. And also not to mention the contribution guidelines and being a safe community for people to contribute to, right? I think that’s also another key.

[00:18:05] Benefits of WireMock & API Mocking Tools

Henry Suryawirawan: Let’s go to WireMock itself, right? So maybe tell us: what is WireMock, and why should people consider using tools like WireMock?

Tom Akehurst: So in a nutshell, WireMock is an API mocking tool. What it allows you to do is to simulate the behavior of networked APIs over the wire. It’s conceptually similar to object mocking or in-code mocking, for those that are familiar with those kinds of techniques. But instead of substituting implementations of interfaces within your code, or something along those lines, substituting function implementations, you’re instead going outside of the process and you’re substituting something on the other side of a network connection. So if you’re testing an app or a service that needs to call out to an API, it can still call out over a network interface. When you’re testing with API mocks, like the sort WireMock provides, you’re still exercising all the production code paths around networking and serialization and all of those things that have to happen in order to integrate with an external service.

And there are a number of benefits to doing this versus doing object mocking. I would argue that, particularly with microservices and systems being increasingly composed from large numbers of networked components, a lot of the complexity of our systems has moved onto the wires, or has moved into the code which governs what happens on the wires. While you can create abstractions within your code base for the things that happen on the wire and then do all of your testing with respect to those, you’re essentially abstracting away a lot of the real world complexity, which is going to bite you in production if it isn’t correct.

The benefit of doing the over-the-wire mocking is that you’re bringing into your inner loop of development, I suppose, the ability to learn and get feedback about that real world integration. You’re working with something that closely resembles a real API, rather than your own abstraction with your own set of assumptions about it, as the thing you’re integrating with.

[00:19:59] API Mocking & Testing Pyramid

Henry Suryawirawan: Right. So I think it’s really interesting, right, for people who may not experience using this kind of API mocking tool. If you refer to the testing pyramid where you have the unit test, integration test, end-to-end test, maybe API test somewhere as well. Where do you think WireMock sits in that pyramid? Can it be in multiple layers as well? Like how do you think we should apply this kind of API mocking tool in the testing pyramid paradigm?

Tom Akehurst: It’s funny you should mention this actually. I wrote a blog post on this a few weeks ago, which garnered a lot of vigorous conversation, you could say. What I’ve observed in practice over the years of doing lots of projects and using API mocking as part of them is that the really useful band of testing migrates to the middle. So you get the testing trophy shape more often than you do the testing pyramid shape. I have this sense that you shouldn’t be aiming for a particular shape at all. The shape is just a heuristic; I think there are better ways of figuring out how you should balance your testing strategy.

Whereas back when I first started developing, or when I first started doing test driven development, anything above the unit test layer, anything integrated at all, tended to be very expensive and difficult. Tests were costly to build, the infrastructure around them was costly to build, and running them tended to be difficult and slow and error prone. So the reason for the testing pyramid being the shape it was is that if you could push as much as possible down to the unit layer, then things would be fast and easy.

But with the combination of, as you mentioned, frameworks and servers and all that kind of thing shrinking to be small and fast to boot up, combined with the availability of tools like WireMock, you can now do far more of your testing in what I think of as the Goldilocks zone in the middle, where you’re isolating individual apps and services by mocking outside of them on the network. But you’re still getting the benefit that most of your tests are exercising real production code paths. Combine that with improvements in things like observability and debugging tools, and you can get the maximum bang for your buck, I think, a lot of the time by writing tests that way, if that makes sense.

[00:22:05] API Mocking vs Contract Testing

Henry Suryawirawan: Right. And I think the emphasis here is also not just the so-called functional accuracy of the input and output, right? It’s also to simulate, like what you mentioned, HTTP behaviors, the networks. So it’s more like an integration kind of thing, right? And I think these days people are also familiar with this thing called contract testing. How do you actually differentiate between API mocking and contract testing? Are they the same or are they different? If different, then in which part?

Tom Akehurst: Yeah, it’s an interesting one. They’re kind of adjacent concepts and mocking is often used as part of contract testing. So actually WireMock is used as part of the Spring Cloud contract testing module. The way sort of those kind of consumer driven contract testing tools tend to work is that you define a mock, which is kind of just sufficient for your client code, the thing that you’re building, and then you run some tests with the tool, and the tool then sort of derives a set of tests that it can apply to the server based on what you mocked. So mocking is kind of integral to the process in that regard.

More abstractly, the notion of contract testing in the context of mocking is really important. I think there are a whole load of benefits to building things with mocks; there are benefits around things like environment stability and so on as well. But I think there’s also a similar benefit that you get with mock driven, test driven development, where you are saying, this is my interface definition. This is the thing that we’re building to. And there are going to be two implementations of it right from the off. There’s going to be the mock implementation, and then eventually there’s going to be the real implementation. And there is a contract that both must adhere to. And contract testing is valuable in that context as well.

A conversation that happens a lot, one of the biggest sources of skepticism about mocking, I suppose, and I’m sorry, I’m going off on a slight tangent here, but hopefully it will make sense. One of the points of resistance around mocking that you hear a lot is this idea that it can’t really be trusted. It’s not really realistic. And you see some organizations who will do the vast majority of their testing in a completely integrated arrangement. And it’s the kind of testing with, I guess, the slightly diffuse goal of: we’re going to exercise the system in the way that a user would, and then we’re going to see if anything goes wrong with it. And we’re not going to attempt to categorize tests by risk or manage anything like that. We’re just going to do a whole load of testing and maybe things will break or maybe they won’t. This tends to be a very costly and inefficient and, in some ways, very error prone way of testing.

And an alternative to that, making effective use of mocks, I would say, is to have an explicit segmentation of what risks you’re trying to manage with different kinds of testing. One type of testing you’re doing is where you’re saying: given that my assumptions about the external contracts I’m integrating with are correct, does the thing that I’m testing actually work the way I expect it to? Is it functionally correct? And then outside of that, you can have a usually much smaller set of tests saying: are my assumptions correct? There’s still a place for integrated testing, both contract testing and fully integrated testing, where we are asking: are these contracts being adhered to by the real system? The mock contract and the real contract are the same thing. But by separating those two risks out, you can save yourself a whole load of energy and runtime and labor and all sorts of things. Versus saying, I just don’t trust this. I’m going to do everything fully integrated.

[00:25:25] The Economics of API Mocking

Henry Suryawirawan: Yeah, when you mentioned a risk-based approach, I had a previous episode covering this as well. Tests should be driven by the kind of risk they’re meant to cover, because you can’t test every permutation out there. Not to mention, when you integrate with third-party APIs that you don’t control, or don’t have a line of communication with, it’s very hard to trigger the corner cases and edge-case behaviors that sometimes happen. They’re random rather than something you can drive from a certain input. So sometimes all of this can be simulated through mock behavior, using WireMock or other tools available out there. So thanks for mentioning this risk coverage.

Tom Akehurst: There’s just one more observation to make about that, actually. I’m glad you mentioned that aspect, because another challenge within testing is the economics of testing different scenarios. Quite often there are things that are relatively straightforward to test, happy paths or near-happy paths that don’t require a lot of data to implement. As you step away from those cases, things often get progressively more difficult: cases where you need to load the systems you depend on with very specific data or very large amounts of data, and then, as you alluded to, the cases where you want to simulate failures or errors that you can’t really make these systems produce on demand. Those become difficult or impossible.

The other downside of integrated testing is that you tend to gravitate towards the tests you can practically implement rather than the ones that are actually most valuable in managing risk. The advantage of bringing mocks into the equation is that they flatten the economics out: the difficulty involved in testing some weird edge case is about the same as the difficulty involved in testing your number one happy path.
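
As a rough illustration of that flattening, here’s a minimal sketch using WireMock’s Java DSL, where an injected fault or delay costs about the same one stub as the happy path; the account endpoints are invented for illustration:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.http.Fault;

public class EdgeCaseEconomicsSketch {
    public static void main(String[] args) {
        WireMockServer wm = new WireMockServer(8080);
        wm.start();

        // Happy path: one stub.
        wm.stubFor(get(urlEqualTo("/accounts/42"))
                .willReturn(okJson("{\"id\":\"42\",\"balance\":100}")));

        // A slow dependency: still just one stub.
        wm.stubFor(get(urlEqualTo("/accounts/slow"))
                .willReturn(okJson("{\"id\":\"slow\"}").withFixedDelay(5000)));

        // A failure the real system can't produce on demand: also one stub.
        wm.stubFor(get(urlEqualTo("/accounts/broken"))
                .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));
    }
}
```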

Henry Suryawirawan: Right. And it’s very important, especially in integration scenarios. There will always be certain cases you never thought of before, and when one happens, you want to cover it as well so that it won’t happen again in the future. This is quite a common, typical use case whenever you integrate with third-party APIs.

[00:27:27] API First Design

Henry Suryawirawan: In the explanation you just gave, you mentioned this mock-driven development kind of thing. I think it’s quite similar to the concept of API design first, or something around that. So, as the creator of WireMock, what kind of development workflow should people follow when they want to integrate with third-party APIs or work in a microservices environment? Should we always come with the API first?

Tom Akehurst: I think so, certainly in environments where there is an API that needs to be built and there is some opportunity for collaboration between consumer and producer. Very typically in microservice environments you have this problem: a new product feature needs to be built, but the API doesn’t have this feature yet. The mobile team needs it, the web team needs it, the data team needs it. So there has to be a collaborative design process. And I think it can be very valuable to design things up front and actually think about them, rather than writing code first and then generating the design from it. For all of the usual reasons: sitting and thinking about things, designing them, and mulling over the consequences of designing things a particular way generally produces better results than just bashing the keys and going for it.

I think also, specifically in the context of APIs, and this is another facet of organizations that rely heavily on integrated testing, there are places where internal APIs are treated like opaque pipes that two things talk through, and nobody really cares what’s going on as long as the system functions overall. That’s maybe okay when there’s one producer and one consumer. But if the thing is to be genuinely an API, if at some point there are going to be 10 other teams consuming it, then not driving the design forward thoughtfully becomes a problem. If you keep taking the approach that as long as these two endpoints can communicate then it’s all fine, you’ll probably make erratic design choices that aren’t consistent with the organization’s API style, you’ll probably introduce breaking changes, and you’ll do things which make it harder and harder for more and more people to adopt it. So treating it as a proper API, something to be co-designed and evolved in a way that avoids breaking changes and surprises and all that kind of thing, is really important.

Where I think mocking fits into this is that mocks can be prototypes of APIs. This is something our cloud product is particularly oriented towards: the idea that when you’re designing an API, you want to get it into people’s hands so they can try it and validate it as quickly as possible.

I think a lot of API-first type tools stop short of actually giving you something practical you can work with. You have a design document, and maybe you have a bunch of governance rules you can run against it, which is great. But the way I think of it, those designs are validated by inspection: you have a lot of people looking at them and going, yeah, okay, this looks about right. Whereas really what you should be doing is giving it to developers and saying: okay, go and code something. Try and build some version of the thing you want to use this API feature for.

You can guarantee that, nearly always, there’ll be that ‘oh’ moment where, despite the fact you spent three hours in a design session talking about what this API should look like, the second you try and write some code that uses it, you realize some really obvious thing: you’ve missed some fields that you’re absolutely going to need in the data, and without them you can’t proceed with your workflow. The sooner you get that kind of thing out of the woodwork, the better, particularly in organizations where APIs are being built as facades over legacy stacks, or where the cost of implementing an API feature is very, very high and itself involves lots of coordination.

In those cases, the value of shifting left the feedback point where you discover whether or not the API’s design is right is huge. You see these banking environments, and places like that, where an API is a facade over decades and decades of legacy tech, and it can literally take months to surface one new piece of data. A friend of mine who works for a big bank had to project manage a three-month piece of rework because there was one field missing from an API, and it involved a stack of five teams going all the way down to some really old tech to expose that field.

So yeah, personally, I’m a very strong believer in using mocking for prototyping, as a way of surfacing those problems early so that you can deal with them as cheaply as possible.
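
As a sketch of what that prototyping can look like in practice, here’s a hypothetical not-yet-implemented endpoint served as a WireMock stub so consuming teams can code against it from day one; the quotes API is entirely invented:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;

public class QuotesApiPrototype {
    public static void main(String[] args) {
        // A throwaway prototype of a not-yet-built endpoint, so consuming
        // teams can start writing real client code against it immediately.
        WireMockServer prototype = new WireMockServer(8080);
        prototype.start();

        prototype.stubFor(post(urlEqualTo("/quotes"))
                .withRequestBody(matchingJsonPath("$.customerId"))
                .willReturn(aResponse()
                        .withStatus(201)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"quoteId\":\"q-1\",\"amount\":99.50}")));
        // The 'oh' moment usually arrives here: the first client written
        // against this stub reveals the fields the design session missed.
    }
}
```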

Henry Suryawirawan: Yeah, I used to work in a bank as well, and I can really relate to that experience. Making changes is hard, especially when it involves a lot of coordination between different teams, even more so with distributed teams. So it’s always a good idea to nail down the interface, the contract, the API spec, and things like that before you start implementing, rather than figuring out the issues later on. If you can shift left the API design, that will definitely be useful. And besides, if everything is really built to the spec, if you nailed it down and implemented what the spec expects, then when you first integrate, it typically works quite beautifully. Everyone is happy, simply because you verified everything against simulated behavior, and when it comes to the real production system, it actually works really well. So thanks for mentioning this.

[00:32:32] Impact to the Developer Experience & Productivity

Henry Suryawirawan: Another aspect of using tools like WireMock that you mentioned is developer experience and developer productivity. We’re not talking about the open source developer experience here, but the developer experience and productivity within an engineering team. So tell us, what are some aspects where you think tools like WireMock help improve developer experience and productivity?

Tom Akehurst: Sure. I think we’ve touched on some of these already, but the biggest aspect is simply this: in a lot of environments you have lots of APIs, of lots of different types, with varying levels of developer experience and accessibility themselves, coming from different sources, different vendors, all that kind of thing. Your development environment, the one that you as the developer are working within, is made up significantly of other people’s APIs. The stability, and, as I say, the developer experience that those APIs themselves expose, has a huge impact on your ability to be productive.

So, concretely, say you’re integrating with a third-party API which is old and maybe run by a vendor who didn’t put a huge premium on developer experience. As a result it has a sandbox which is slow, flaky, maybe not always running quite the same code as the production system, hard to get large amounts of data into, and without the capacity for any performance testing. All of these things will impact you directly as a developer by destabilizing your environment.

And so if you’re working on a highly integrated piece of software, every one of these external APIs that isn’t presenting you an excellent developer experience degrades the quality of yours by a little bit. It’s a bit like availability: in a highly networked environment, every imperfect availability number for each individual dependency reduces yours by a proportionate amount. It’s similar with developer experience. If you’re wrestling with dozens of different third-party sandboxes, each with its own set of problems, you can spend a lot of time not actually doing your job.

And it’s not just third parties. It’s APIs built on top of legacy systems, or commercial off-the-shelf software that’s installed on premise where maybe you’ve only got one non-production license, so everyone’s sharing the same environment, and it’s running on some ancient server infrastructure that no one wants to buy any more of. Mocking lets you build a sort of insulating wall around your own environment. You can say: I don’t need all of that while I’m doing my development. I’m going to build an environment that I can fully control, and get the determinism and performance and all of those things I need for my developers to remain in flow.
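
One way to build that insulating wall, sketched here with WireMock’s record-and-playback API; the sandbox URL is a placeholder, and the flow is a simplified assumption of how a team might use it:

```java
import com.github.tomakehurst.wiremock.WireMockServer;

public class SandboxInsulationSketch {
    public static void main(String[] args) {
        WireMockServer wm = new WireMockServer(8080);
        wm.start();

        // Proxy through to the unreliable third-party sandbox once,
        // capturing its real responses as stub mappings.
        wm.startRecording("https://sandbox.example-vendor.com");

        // ... exercise your application against http://localhost:8080 ...

        // Snapshot the recorded stubs; from here on, development runs
        // against a fast, deterministic local copy of the dependency.
        wm.stopRecording();
        wm.stop();
    }
}
```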

Henry Suryawirawan: Yeah, and maybe slightly related to that: if those third-party APIs or systems are hardware-driven, you know, where you have to install something in a certain location, that’s probably also hard to simulate, right? So if you’re able to mock that, that would be perfect as well.

[00:35:32] Working More Effectively with Distributed Systems

Henry Suryawirawan: And I think you mentioned something quite interesting. These days, many people work with SaaS APIs and microservices within their environment, and as we can see, the trend is going more and more towards distributed systems. There are definitely a lot of challenges in working with distributed services. So, in your view, what are some insights we can think about for improving our experience of working with this type of distributed system?

Tom Akehurst: Yeah, I guess a lot of the things I’ve already mentioned. Treat the APIs, the messages you pass around between these systems, as first-class artifacts, however you do that. Make them visible and legible, treat them as independent artifacts of design, apply governance rules to them, try and make them consistent. All of these kinds of things will make it easier.

And then, like I say, adopt testing strategies that take advantage of this. Have this notion of: I will do most of my testing out to the edge of the boundary of my service or my app or whatever I’m interested in, and I will assume that my assumptions about my contracts are correct, but then I’ll have other supporting testing strategies that validate whether that’s actually true.

It’s a sort of vintage time at the moment, I suppose, for tooling generally in the API space. There’s lots of progress happening around standards: OpenAPI has added Arazzo and Overlays and these kinds of things. So the richness with which you can describe APIs is growing, and you can increasingly describe facets of APIs in ways that are useful for verifying them, observing them, documenting them, all that kind of thing. Both the standards and the tooling around those standards are improving apace at the moment.

There’s also this somewhat diffuse area referred to as API observability, which I think looks really interesting at the moment, where the consolidation around those standards, OpenAPI in particular, allows you to look at API traffic and draw lots of rich conclusions from it. I think it’s going to be increasingly necessary to take advantage of that kind of tooling where you have these vast API landscapes, so you can figure out what’s going on within them: when and how things are changing, and what the direction of travel is.

Henry Suryawirawan: Yeah, you mentioned standards. I think OpenAPI has become the go-to standard these days. But I remember back when REST adoption was just starting to pick up, there were no such tools. And people who worked with SoapUI know the SOAP land from before, right? One good thing about SOAP is that it’s pretty standardized: the interface is well defined, and you can use any kind of tool as long as it conforms to the spec. So thank God OpenAPI exists, right? It used to be called Swagger, by the way. These tools and this standardization are definitely a good thing for us developers, so that we can easily integrate with each other.

[00:38:15] API Virtualization/Simulation

Henry Suryawirawan: There’s another trend I see in the API world called API virtualization, or maybe simulation, that kind of thing. Maybe you can tell us what this actually is. Is it something about virtual machines or something like that?

Tom Akehurst: Yeah, I’m glad you brought this up, actually, because there’s definitely a language problem in API mocking, or simulation, or virtualization, depending on how you want to look at it. To give a very brief history lesson that might explain this a little: there was SoapUI, and even before that, I would argue, a generation of tools that called themselves service virtualization tools. The product that really popularized it was called LISA, originally from a company called ITKO, which was later acquired by Computer Associates. They used the category term service virtualization, and I think this was before VMware and that kind of virtualization really took off, so it was a lot less confusing a term back in those days.

Mocking obviously came out of the Agile movement, out of the London extreme programming way of doing things, I guess. And I was very into that around the time I first started building WireMock. So mocking, as a language and a set of idioms, made the most sense, and one of the reasons for WireMock’s appeal was that it spoke the language of that growing cohort of Agile developers. So the mocking term came from that period and that set of practices.

API virtualization, I guess, is a kind of reheating of service virtualization. I don’t feel like there’s really a meaningful difference there. And then simulation is a term that some open source and commercial vendors have started using as a further break with the past, one that’s maybe a little more descriptive than virtualization or mocking.

I think one thing is worth clearing up: when I talk about mocking, personally, and I’m probably in something of a minority here, I have quite a broad view of what that can mean. I think it can mean very simple canned responses, the thing that most people associate mocking with, all the way up to quite complex, rich, dynamic behavior.

We actually did a survey within the company recently about this, and we discovered that most people seem to see those two as very different parts of the spectrum, if you like. Mocking is the simple canned-example type thing that you do when you’re writing a unit test or a narrow integration test. And then simulation and virtualization are where you’re doing things that are data-driven, or templated and dynamic, or stateful, or introducing any of those kinds of sources of complexity. So I hope that clears it up a little. Maybe the convention we’re going for is mocking if you’re doing it in code in your inner loop, and simulation if you’re doing more complex stuff in your outer loop, as a simple rule of thumb.
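
To make the two ends of that spectrum concrete, here’s a hedged sketch using WireMock’s scenarios feature for the stateful case; the order-lifecycle endpoints are invented for illustration:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.stubbing.Scenario;

public class OrderLifecycleSimulation {
    public static void main(String[] args) {
        WireMockServer wm = new WireMockServer(8080);
        wm.start();

        // The simple, canned end of the spectrum.
        wm.stubFor(get(urlEqualTo("/ping")).willReturn(ok("pong")));

        // The stateful, "simulation" end: the same GET answers differently
        // depending on scenario state, which the POST below advances.
        wm.stubFor(get(urlEqualTo("/orders/1"))
                .inScenario("Order lifecycle")
                .whenScenarioStateIs(Scenario.STARTED)
                .willReturn(okJson("{\"status\":\"PENDING\"}")));

        wm.stubFor(post(urlEqualTo("/orders/1/dispatch"))
                .inScenario("Order lifecycle")
                .whenScenarioStateIs(Scenario.STARTED)
                .willSetStateTo("Dispatched")
                .willReturn(ok()));

        wm.stubFor(get(urlEqualTo("/orders/1"))
                .inScenario("Order lifecycle")
                .whenScenarioStateIs("Dispatched")
                .willReturn(okJson("{\"status\":\"SHIPPED\"}")));
    }
}
```

Run against http://localhost:8080, GET /orders/1 returns PENDING until POST /orders/1/dispatch advances the scenario, after which the same GET returns SHIPPED.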

Henry Suryawirawan: Right. And sometimes I see people also introduce fault injection as part of their simulation, so that you can simulate the failure behaviors that your API sometimes isn’t designed for, I guess, right? So thanks for clarifying that, and for the bit of history on how all this naming actually came about.

[00:41:13] AI Advancement in API Development

Henry Suryawirawan: Talking about technology these days, we can’t run away from AI. So where do you see AI’s involvement in API mocking and API development?

Tom Akehurst: I don’t like making predictions about AI, because it feels like everything changes within five minutes of you making any kind of declaration. But from what I’m seeing at the moment, one thing that seems to be becoming clear is that LLMs need to interact with APIs in order to get things done. There’s some discussion happening about whether APIs will go away, and agents or LLMs will just become web scrapers so we won’t need to build separate APIs anymore. I’m not so sure about that, firstly because, when you look at consumer-facing web-based systems, there’s an argument that they generally have much better human UIs than they do APIs.

But there’s a whole huge swathe of software out there that is only accessible via its API. So in order to make those kinds of things available to AI applications, it’s going to be necessary to provide APIs that AIs can use. And it seems that LLMs have a different taste in APIs than developers do. The very normalized, DRY style that we tend to go for in developer API design is not very AI-friendly. The AIs tend to prefer things where all the context is present and made explicit in one place, rather than achieved through references and shorthands all over the place. So I think there’s going to be a move to start building APIs that are intended for agents and adopt this very denormalized style relative to previous-generation APIs. That’s one thing I would say.

Another thing that’s happening generally is AI coding assistants, and the agents that are just starting to come to the fore now for assisting with coding. I think they’re producing a lot of demand and revealing bottlenecks in enterprise software delivery systems. It was interesting, the Google DORA report that came out, I think at the end of last year, which is quite a big quantitative study of developer productivity and the various influences on it. One of the things it looked at was the use of AI coding assistants, and it found that, on average, there was actually a net negative productivity impact where these were present.

My hypothesis on this is a theory-of-constraints one: if you make something faster that is not the thing constraining your system’s throughput overall, then you will just build up work in progress around wherever the actual bottleneck is. A lot of organizations don’t have the downstream software delivery throughput to cope with more untested code being produced. Add to that that, at the moment, we probably trust the code produced by LLMs a bit less than that produced by most human beings. So I think there’s a big problem to solve generally in terms of how organizations do end-to-end software delivery in a way that can take advantage of the productivity benefits of coding assistants.

Henry Suryawirawan: Right. I’m sure many people have tried coding assistants, including for generating tests. And I’m sure people will be able to generate WireMock-compliant tests through a coding assistant, right?

[00:44:25] Building API for AI Agents

Henry Suryawirawan: I’m actually interested in the thing you mentioned about testing APIs that are going to be used by LLMs or AI agents. These days, people talk about agentic AI: agents that can collaborate with each other and solve a particular task. And there are also so many new tools for doing deep research; Gemini has one, and OpenAI’s ChatGPT also produced one. I don’t know whether you have experience with this or not, but how do you typically see the kind of API mocking we need to do in order to simulate an LLM agent actually using our API? Is there such a thing? Or is this something we still have to figure out along the way?

Tom Akehurst: I’m not aware of anything built, mature, and in the public domain that’s doing this yet. It is something that we’re looking at as a company and trying to figure out, but I feel like I’m not yet in a position to make any strong statements about what this is or how it works. You’re right, though, it is possible. Back to my earlier comment about WireMock having an externalized data format: one of the big advantages of that, because it’s been around for a long time and the internet is full of examples of how to create this data, is that LLMs are actually quite good at producing WireMock mocks. You can ask for something in WireMock JSON format, and in my experience it will produce something pretty good, pretty useful.
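
For readers who haven’t seen it, this is roughly the shape of that externalized JSON format, the kind of stub mapping you might ask an LLM to emit; the customer endpoint and fields are invented for illustration:

```json
{
  "request": {
    "method": "GET",
    "urlPath": "/customers/123"
  },
  "response": {
    "status": 200,
    "headers": {
      "Content-Type": "application/json"
    },
    "jsonBody": {
      "id": "123",
      "name": "Jane Doe",
      "tier": "gold"
    }
  }
}
```

Dropped into a standalone WireMock’s mappings directory, a file like this serves GET /customers/123 with the canned JSON body.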

So one of the things we’re looking at is the idea that we should be asking our AI coding assistants to generate the APIs that they want. If you’re using an AI coding assistant to build an agent for something, then we should be asking the LLM for an API design and then expressing it in WireMock, so we can immediately test it in our agentic workflow. This is an idea we’re exploring; whether it will take us anywhere or prove to be viable, I guess time will tell. But being able to use mocking to remove non-determinism when you’re testing these flows is a valuable use case as well.

Say you have a whole load of workflow steps, and some of them involve calling an LLM, and some of them involve calling out to fetch data or perform operations in external systems or APIs. You have a difficult enough problem with non-determinism with the LLMs anyway, and if you’re also integrating with a bunch of sandboxes whose data maybe isn’t completely stable, then you’re multiplying the problem of non-determinism and of how you can test repeatably. So I think there’s a more mundane use case there, just in terms of limiting the scope of the things that can change between one test and the next.

Henry Suryawirawan: Yeah, I think what you mentioned is a cool use case: using an LLM to produce these mocks, basically creating a lot of API mocks that you can test and simulate against from the very beginning. It also aids the API-first design that we talked about earlier. So for people who want to build a more robust API development experience within their team, please also try using LLMs for this. LLMs have become my go-to thing as well; sometimes I feel addicted these days. And I’m sure, once you generate a lot of things using AI, one day it becomes more natural in terms of the workflow, not something that you have to think about.

[00:47:25] 3 Tech Lead Wisdom

Henry Suryawirawan: So Tom, I have one last question for you before we wrap up our conversation. Normally, I ask my guests to share this thing called the three technical leadership wisdom. You can think of it as advice that you want to give to us. Maybe you can share your version with us.

Tom Akehurst: Sure, okay. I did a bit of thinking about this before, and I hope these don’t turn out to be completely trite. The first one isn’t really directly related to the things we’ve talked about already, but it’s an observation about hiring engineers for a team, particularly in a startup. The bit of wisdom I wanted to share challenges a bit of conventional wisdom, which says that if you’re building a startup, you should look for these scrappy, very customer-focused, very business-focused engineers who will move fast and break things, if that’s not too tainted a way of putting it.

While there are developers out there who can be both scrappy and great engineers, my experience of hiring is that there tends to be a spectrum: at one end you have people who are great engineers, very rigorous and so on, but maybe not business- and customer-oriented in the way that the people at the other end are. And the people at the other end have more of those focuses, but maybe don’t produce such great quality code, or tend to introduce technical debt at a high rate when they build things.

The point I want to make is that you can hire the rigorous engineers and then help them expand their comfort zone to do the kinds of things that are needed to make a startup work: adopting more of a commercial focus and more of a user and customer focus, but also flexibility around things like quality and using technical debt strategically, and all of that kind of stuff. I would argue it’s easier to hire people who are great engineers and then show them how to do that stuff than to hire people who are scrappy but not great engineers and then try to make them good engineers. It’s harder to move in that direction. So that’s number one.

The second one I think we’ve probably mostly covered already, but making an open source project successful, and this is almost a product management cliche, really, is about making your end users successful and making them look good amongst their peers. A large part of that is that if they’re going to stake their reputation on introducing the tool you’ve built into their organization, then you need to show that you’re behind it and that you’re serious about it, that this isn’t some developer’s whim. It’s a serious project that you’re going to document and support and stand by, and, like I said earlier, do all the boring work to make a success of in the long term. So that’s number two.

And the third one relates to some of the things we talked about in terms of working in larger engineering orgs. Being productive in organizations that have multiple teams and many services is, in my opinion, about a couple of key things. One is decoupling teams, and to some extent individuals, from the broader context of the technology enough that their own working set, their context, is at a manageable size. And secondly, as I alluded to earlier, look for as many opportunities as you can to shift feedback left: to get meaningful feedback about the quality and fitness for purpose of your code as early as possible, even if that means doing a bit more design and planning around things like APIs and interfaces.

Henry Suryawirawan: Wow, thanks for sharing. The last one is pretty unique, I would say, and especially the hiring one. This is actually my first time hearing about hiring the more rigorous, high-quality engineers and then training them to be a bit more scrappy, a bit faster in producing, maybe not quite up to the quality standard they aspire to, but at least producing business outcomes and generating value faster.

So, Tom, for people who want to connect with you or learn more things related to WireMock and API mocking in general, is there a place where they can find you online?

Tom Akehurst: Yeah, LinkedIn is my main watering hole these days, so you can find me there. And if you want to drop me an email at any point, I’m tom@wiremock.org.

Henry Suryawirawan: Thanks for that. I’ll put it in the show notes. Thank you so much, Tom, for this conversation today. I think we all learned a lot about API mocking and best practices for doing it.

Tom Akehurst: Thank you very much for having me as well. It’s been great fun.

– End –