#217 - Impact Intelligence: Deliver Real Business Impact from Your Initiatives - Sriram Narayan
“Impact intelligence is an ongoing process of collecting, analyzing, and using data to maintain constant awareness of the business impact of initiatives.”
Why do so many well-intentioned initiatives fail to move the needle?
In this episode, Sriram Narayan, author of ‘Impact Intelligence,’ reveals how to ensure your efforts translate into real, measurable business impact. Stop shooting in the dark and start delivering tangible results that matter.
Key topics discussed:
- What “Impact Intelligence” means and why it is crucial for any business
- The common pitfalls: Why many tech and digital initiatives fail to achieve their intended business impact
- The common misconceptions about “outcomes” in tech and product teams, and why delivery or adoption metrics are not enough
- Surprising insights from the non-profit sector on rigorous impact measurement practices
- Understanding the difference between immediate (proximate) results and long-term (downstream) impact
- How to visualize and map your initiatives to core business goals using an “Impact Network”
- The critical challenge of “Impact Attribution” – how to know if your project actually moved the needle
- Addressing “Measurement Debt” — if you can’t measure it, should you build it?
- The iRex framework: A modular approach to building your organization’s Impact Intelligence
- Balancing speed vs impact: Not just shipping features, but delivering measurable business results
Whether you’re a tech leader, product manager, or executive, this episode will equip you with actionable frameworks and real-world examples to focus on what really matters: delivering measurable, meaningful business impact.
Tune in and start building your organization’s Impact Intelligence muscle today!
Timestamps:
- (02:22) Career Turning Points
- (10:52) Impact Intelligence
- (11:40) The Importance of Impact Intelligence
- (15:09) Understanding Business Impact
- (19:11) Learning & Adopting from the NGO Space
- (22:35) Impact Feedback Loops
- (26:25) Proximate vs Downstream Impact
- (28:20) Building an Impact Network
- (36:47) Differences with OKR
- (38:12) Impact Attribution
- (44:51) The Importance of Measurement & Measurement Debt
- (48:31) iRex Framework
- (54:26) Balancing Between Speed of Delivery and Business Impact
- (57:32) 1 Tech Lead Wisdom
_____
Sriram Narayan’s Bio
Sriram Narayan is an independent consultant in the area of impact intelligence. He also helps clients improve digital, product and tech performance.
Pearson published his first book, Agile IT Org Design, in 2015. It won endorsements from the then CIO of The Vanguard Group and the then MD of Consumer Digital at Lloyds Bank.
Sriram has served in product, technology, innovation, and transformation leadership roles since 2006. Along the way, he created Cleararchy, a formulation for organizing hierarchy in the digital age and an alternative to formulations such as Holacracy and Teal organizations.
He has also helped some of his clients move to a product operating model. His 2018 write-up on the topic has since become a de facto industry reference. His other writings and talks are available at agileorgdesign.com.
Follow Sriram:
- LinkedIn – linkedin.com/in/mrsriramnarayan
- Bluesky – @srny.bsky.social
- Twitter / X – @sriramnarayan
- 📚 Impact Intelligence Website – impactintel.net
- 📚 Agile Org Design Website – agileorgdesign.com
- Email – sriram@agileorgdesign.com
Mentions & Links:
- 📚 Agile IT Organization Design – https://www.thoughtworks.com/insights/books/agile-it-organization-design
- 📚 Continuous Delivery – https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912
- 📚 The Lean Startup – https://www.amazon.com/Lean-Startup-Entrepreneurs-Continuous-Innovation/dp/0307887898
- ✍🏼 Product Operating Model – https://martinfowler.com/articles/products-over-projects.html
- 🎥 Keynote talk at Agile India (March 2025) – https://www.youtube.com/watch?v=-qlKOAEPggo
- Lagging and leading indicators – https://amplitude.com/blog/leading-lagging-indicators
- Objectives and key results (OKR) – https://en.wikipedia.org/wiki/Objectives_and_key_results
- Attribution analysis – https://medium.com/data-science-at-microsoft/attribution-analysis-how-to-measure-impact-part-1-of-2-324d43fbbba0
- DORA metrics – https://dora.dev/guides/dora-metrics-four-keys/
- Continuous delivery – https://en.wikipedia.org/wiki/Continuous_delivery
- CI/CD – https://en.wikipedia.org/wiki/CI/CD
- Agile – https://www.agilealliance.org/agile101/
- Test-driven development (TDD) – https://en.wikipedia.org/wiki/Test-driven_development
- Customer Satisfaction Score (CSAT) – https://www.qualtrics.com/experience-management/customer/what-is-csat/
- Profit & Loss (P&L) – https://www.investopedia.com/terms/p/plstatement.asp
- Interactive voice response (IVR) – https://aws.amazon.com/what-is/interactive-voice-response/
- Jez Humble – https://www.ischool.berkeley.edu/people/jez-humble
- Dave Farley – https://www.davefarley.net/
- ThoughtWorks – https://www.thoughtworks.com/
- GoCD – https://en.wikipedia.org/wiki/Go_continuous_delivery
Check out FREE coding software options and special offers on jetbrains.com/store/#discounts.
Make it happen. With code.
Get a 45% discount for Tech Lead Journal listeners by using the code techlead24 for all products in all formats.
Impact Intelligence
- In the book, I’ve defined impact intelligence as an ongoing process of collecting, analyzing, and using data to maintain a constant awareness of the business impact of initiatives: tech initiatives, business initiatives, any new initiative the business decides to invest in. An initiative is different from the regular operations of the business; it’s meant to change something, maybe to grow the business, save cost, or reduce risk.
The Importance of Impact Intelligence
- When people make the case for an initiative, whether it comes from the business side, a technology leader, a data leader, or whoever it is, they usually say: we think we should do this, and it’ll help us in this way. When approved, there is an allocation of business resources, whether it’s team bandwidth, hardware and software resources, or running a marketing campaign.
- There is certainly some spend involved, in the form of resources or direct money. Therefore, it’s important to make sure it actually translates into some kind of tangible business impact.
- This has become increasingly important because funding is not as easy as it used to be. Funding has dried up both for startups and big enterprises. Investors are asking whether they’ll get the benefits from these initiatives. For big established companies, the situation is even more dire, because investors are saying they’re not sure the return on investment will be better than letting the money stay in a bank, or getting it back through dividends, buybacks, or other ways of returning money to investors.
- Investors are saying they would rather have their money back today than see it deployed into new things supposedly meant to grow the business, because based on track record, they’re not seeing real business growth.
- CEOs, COOs, and CFOs have to deal with these questions in earnings calls. Sometimes they have good answers, but often their answers could use more data and be more powerful. That won’t happen just by preparing for earnings calls. It only happens if there’s a culture of impact intelligence within the company, with the muscle already in place so that at all levels, people are constantly aware of how their work translates into business impact.
- The more you start doing it, the more you’ll find at least some opportunities where the linkages become clear.
Understanding Business Impact
- A lot of people think they are outcome-oriented, that they’re working in an outcome-oriented fashion. But if you ask about the outcomes, they talk about outcomes that are not really business outcomes, but lower-level outcomes.
- In the book, I describe a hierarchy of outcomes. If you say we delivered something or launched a feature, that’s a delivery outcome. If you say we migrated to the cloud, that’s a technology outcome.
- Those are good achievements, but when you’re talking about business impact, they still need to translate into some kind of business impact.
- At the lowest level, you have technology outcomes and delivery outcomes. At the next level, you might have people in your product organization saying this feature got great adoption.
- Feature adoption is also a kind of outcome. It’s one step closer to a business outcome, beyond delivery: you delivered and the feature got adopted. But it’s still some way from a business outcome. Or somebody might say we ran a marketing campaign and got a million app installations. That’s a marketing outcome. But what does that mean for the business?
- The closest to business outcomes are things like more subscriptions, higher CSAT, increased retention, or revenue. If you’re an e-commerce business, higher average order value or lower returns. These are much closer to what I call business impact.
- They ultimately translate into financial metrics like P&L, but even if we don’t go all the way to P&L, one level short of that is revenue, cost, CSAT, and similar metrics.
- At a technology level, tech leaders say what they do won’t directly translate, because they’re listening to what the business and product want and building things accordingly; they don’t directly own the business outcomes. Fair enough, but there’s still an opportunity for all these people to work together to ensure their joint efforts move the needle on business outcomes.
Learning & Adopting from the NGO Space
- As part of understanding the NGO and nonprofit space, I realized they have to obtain grants from various grant-providing organizations.
- I found that the social impact sector has monitoring and evaluation agencies in place. These independent agencies monitor project impact and evaluate whether, say, imparting skills really translated into more employability. Did employment in that region increase as a result, and would it have happened anyway? That’s the counterfactual: can we say with reasonable certainty that your efforts led to the increased employment, and not just an economic uptick?
- I found that way of thinking quite rigorous. It is well established in the social sector. In the business sector, we often don’t think about things like this; we just build it, launch it, move on, and build the next thing.
- That’s how I came across the term they were using: impact intelligence.
Impact Feedback Loops
- Agile engineering practices have feedback loops, but they’re mostly delivery feedback loops. Scrum retrospectives are another example, where you reflect on the last sprint and look for improvement opportunities. That’s a mechanism for continuous improvement, but still a delivery feedback loop.
- The phrase build-measure-learn came out more than 10 years ago; that’s also a feedback loop, and it’s closer to impact feedback. However, it’s what I call a proximate impact feedback loop, because the “measure” step measures proximate impact.
- When I talk about impact feedback loops, I’m going one step beyond the typical build-measure-learn loop to concentrate on downstream impact.
Proximate vs Downstream Impact
- A low-level metric contributes to the next-level metric, which then contributes to another, higher-level metric, and so on. There’s no limit to the number of levels you can have in an impact network. It depends on your domain’s complexity and how you see things in your business.
- I find any kind of two-level classification inadequate. If somebody just says input and output, you’re talking about one link in the chain, but there are many linkages in that chain. So depending on where you want to focus, relative to a particular context, one level is proximate and the next level is downstream.
- Going back to the call savings example: the number of successful chatbot sessions is the proximate impact and call savings is the downstream impact. But that’s in this context. You can go one level higher and ask: call savings is nice, but does it really help control OPEX?
- Relative to OPEX, call savings is the proximate impact, and the expected downstream impact is a reduction in OPEX.
Building an Impact Network
- An impact network is a graph where every node represents a metric, or at least something that can be quantified. It’s not a subjective description, and a node is not what you’re going to do to improve the metric. The nodes are the forces at play: the metrics that interplay with each other. (A minimal code sketch of this structure follows this section.)
- If you build out this tree, you have call volume, then self-service versus human service, and under self-service, all the channels. You’ve built up four levels in this impact network. We haven’t yet discussed what we’ll do to improve these metrics; that’s separate. You can say you’ve thought of an initiative that will contribute at a particular node; that’s an overlay on this network.
- How do you build this? It’s usually an iterative process. I work with clients over several sessions to build it out. It’s important to involve the relevant key stakeholders so there is consensus on what you’re building, because there are many ways to imagine and represent this. You’re arriving at one representation that’s acceptable to all key stakeholders.
- You can do it a bit top-down, a bit bottom-up, starting with the metrics.
- What metrics do you grapple with daily or monthly? Put those down and see if there are interrelationships. Similarly, what do your execs care about? Put those metrics at the top and see if linkages emerge. What’s your current book of work? What initiatives are in play, why are you doing them, and what benefits do you expect? Have you captured those metrics in the picture, or do you need to break down existing metrics to capture the current work?
- Even when you arrive at it, that’s not the final version; it’s version 1.0. As you live with it, extend it, and use it, it evolves. There needs to be an owner for the impact network who maintains it regularly.
- The map partly exists in the founder’s head, the cofounder’s head, and a few other people’s heads. But if you put them in different rooms and give them an hour to draw that map, they’ll all end up with slightly different maps. Some people say they don’t have time for whiteboarding: we need to run a business, let’s build and ship.
- But when I get people together to do this, at the end they often feel it was worthwhile. They now have a shared way of looking at things.
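To make the graph idea concrete, here is a minimal sketch in Python of the structure described above: nodes are metrics, edges mean “contributes to”, and initiatives are an overlay pointing at the metric they are meant to move. The node names, the `add_link` helper, and the traversal are illustrative assumptions drawn from the chatbot example, not a data structure prescribed by the book.

```python
# Sketch of an impact network: a graph whose nodes are metrics and whose
# edges mean "contributes to". Names and structure are illustrative only.
from collections import defaultdict

contributes_to = defaultdict(list)  # child metric -> parent metrics

def add_link(child: str, parent: str) -> None:
    """Record that moving `child` is expected to move `parent`."""
    contributes_to[child].append(parent)

# Four levels from the episode: self-service channels -> call volume
# -> cost of call center operations -> OPEX.
add_link("cost of call center operations", "OPEX")
add_link("call volume", "cost of call center operations")
for channel in ("IVR completions", "web self-service sessions",
                "app self-service sessions", "successful chatbot sessions"):
    add_link(channel, "call volume")

# Initiatives are an overlay, not nodes: each points at the metric
# it is meant to move.
initiatives = {"AI chatbot": "successful chatbot sessions",
               "IVR menu redesign": "IVR completions"}

def downstream_of(metric: str) -> list[str]:
    """Walk upward to list every higher-level metric a node feeds into."""
    seen: list[str] = []
    stack = [metric]
    while stack:
        for parent in contributes_to[stack.pop()]:
            if parent not in seen:
                seen.append(parent)
                stack.append(parent)
    return seen

print(downstream_of(initiatives["AI chatbot"]))
# ['call volume', 'cost of call center operations', 'OPEX']
```

Keeping initiatives out of the node set mirrors the point above: the network captures the forces at play, and what you do about them is a separate layer on top.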
Differences with OKR
- If you take the AI chatbot example, the key result is framed in terms of delivery: let’s make sure we have this up and running for this product by this quarter.
- For product, it’s often framed in terms of satisfactory chatbot sessions: if you have a survey at the end of the chat session, we should have at least a 30% satisfaction score. That’s how the KR is framed.
- I haven’t seen KRs framed in terms of downstream impact, and it would be unfair to do so, because people will say their scope is limited. The key difference is that OKRs are directed at individuals, while downstream impact is directed at the initiative. The initiative we’ve sponsored had better have this downstream impact, and you as a group ensure that it does.
Impact Attribution
- Attribution is a well-defined area.
- Marketing is one domain where attribution is a well-known term. Marketers want to understand how to distribute spending across channels: social media, paid listings versus content, and so on.
- Going back to the AI chatbot example with four different initiative leads: typically, each reports their success individually to their managers and claims success on that basis. Effectively, all four people have claimed the 10% they saw, so if all the claims were true, it should have resulted in 40% savings in call volume. But that didn’t happen, and there’s no mechanism within the organization to reconcile this.
- Attribution is that mechanism. Instead of asking everyone to report success individually each quarter, we say: last quarter we observed 10% savings in call volume; congratulations, everyone. But let’s get together and decide how much was due to external factors. How much is because the business is losing customers, with fewer people calling us, and how much is from each of your efforts? Let’s make a reasonable attempt to make it all add up, for a consistent understanding of our efforts’ impact. Then we can decide where to spend our next investments.
- If you start thinking like this, the other pieces fall into place. Some of the call volume change is seasonality, some is business growth or degrowth, and you can develop heuristics and formulas to attribute components to the various channels. It’s not very scientific. (A worked sketch of this reconciliation follows this section.)
- The best way is to run a controlled experiment. For the AI chatbot, you expose it to some portion of the population but not others, designing the experiment so everything else is well controlled. If you can run this repeatedly and show that on average, when the AI chatbot is exposed, call center volumes drop, that’s good-quality evidence. However, we often can’t run these controlled experiments.
- If you can’t run controlled experiments, you can do heuristics-based attribution. Controlled experiments need a good understanding of statistics, or you need to trust your data scientists.
- Typically, the business isn’t well versed in this. As long as the data scientists say what matches business intuition, everything’s fine; otherwise, people don’t accept those observations. Heuristic attribution is a simpler, more approximate process that helps build the muscle in this area and think more rigorously. Then at some point, you might be ready to listen to what the data scientists say.
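As a rough illustration of the heuristic reconciliation described above, here is a sketch with entirely invented numbers: remove the estimated external factors from the observed change, then scale the individual claims so they add up to what is left. The proportional split is an assumed heuristic for illustration, not a formula from the book.

```python
# Heuristic attribution sketch: reconcile individual initiative claims
# against the observed change in call volume. All numbers are invented.

observed_savings = 0.10           # call volume actually dropped 10%
external_factors = {              # estimated separately, e.g. from history
    "seasonality": 0.02,
    "shrinking customer base": 0.01,
}
claims = {                        # each lead claimed 10% on their own
    "AI chatbot": 0.10,
    "IVR redesign": 0.10,
    "mobile app": 0.10,
    "website": 0.10,
}

# Whatever remains after external factors is what initiatives can share.
attributable = observed_savings - sum(external_factors.values())

# Scale every claim by the same factor so the attributed amounts
# add up exactly to the attributable savings.
scale = attributable / sum(claims.values())
attributed = {name: round(claim * scale, 4) for name, claim in claims.items()}

print(attributed)
# {'AI chatbot': 0.0175, 'IVR redesign': 0.0175, 'mobile app': 0.0175, 'website': 0.0175}
```

A controlled experiment, where the chatbot is shown to only part of the population, would replace this proportional heuristic with measured differences; but as noted above, such experiments are often impractical.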
The Importance of Measurement & Measurement Debt
- It’s increasingly important to build only what you can measure. If you can’t measure it, maybe you shouldn’t build it; you’re just shooting in the dark. When pitching an AI chatbot, I’m only asking for money to build the chatbot, not the surrounding measurement infrastructure, like new reports from the call center to prove changes in call volume. But that’s customer operations, a different department.
- My team can’t build that report; we don’t have the knowledge or the access to those systems. It’s difficult to build all the necessary measurements for my initiative. That’s where I recommend that, before the initiative is green-lighted, the people responsible ask: do we have measurements in place? If not, can we wait and ask the appropriate teams to develop those measurements before we start building?
- No business leader will ask for a measurement improvement program. Technology leaders are already asking for tech modernization. But the people signing the checks need to know their return on investment, so they’re best placed to sponsor this program.
- With this program and a team in place to build out measurements wherever there are gaps, when new proposals come in, you can ask: what benefit are you promising? Are these benefits verifiable? Through what measurements? Do we have those capabilities today? If not, can we build them first? Even if we decide in principle to do this, we might want to build the measurements first, ensure things are verifiable, and then proceed.
- If the gap is small, maybe the team can handle it. But for basic gaps, you need a measurement improvement program to pay off measurement debt, just as we pay off technical debt. We might rush to release something, but before starting the next release, we pay off the technical debt. Similarly, before starting a new initiative with unverifiable benefits, it’s time to pay off the measurement debt you’ve accumulated in that area.
iRex Framework
- The iRex framework ties all these recommendations together into different modules. It’s hard to adopt everything at once, so I’ve split it into eight modules, with three introductory ones you can start with. One is called impact visualization, through which you build out the initial version of the impact network.
- Then there’s the demand management module, about asking for better justifications for what you’ll build. If someone says “let’s build this because it’ll improve CSAT”, that’s not a great justification. How would you know CSAT improved because of what you did? CSAT is a faraway metric. You’ve articulated the downstream impact, but you also need the proximate impact. Can you articulate what you’ll do in terms of both? Can you specify how much CSAT will improve, in what timeframe, under what assumptions and dependencies? This module is about providing better justifications.
- Before getting into impact attribution, about who contributed how much, if there’s no such culture in the organization, start with impact demonstration. Just like delivery showcases, do a post-release showcase to demonstrate the impact. Whether the impact is less or more than expected, demonstrate it and the learnings.
- Then comes impact attribution, through the contribution analysis module, where you analyze contributions from different sources, internal and external factors, and ensure they add up. To do this effectively, you need measurements in place, which is why you have the measurement improvement program.
- In the advanced stages, once you’ve done all this, you can do deviation analysis: what’s the deviation between expected and actual impact, and what’s the reason for those deviations? It’s meant to be a learning exercise.
- It’s not for beating anybody up, but for learning, just like how we work on getting better estimates for our developer stories.
- From this process, you can report metrics like the benefits realization ratio: the ratio of actual benefit to expected benefit. This gives you a way to compare across initiatives. (A tiny worked example follows this section.)
- It’s like a Say:Do ratio, but sometimes fulfilling 60% of your promise on a CSAT metric might be more valuable than fulfilling 100% on another metric. Several metrics work together to help you understand what’s happening and make better investment decisions.
- All of this put together is what I call the iRex framework, and you can adopt it very gradually.
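For concreteness, here is a tiny sketch of the benefits realization ratio as defined above: actual benefit divided by expected benefit. The initiative names and figures are invented for illustration.

```python
# Benefits realization ratio = actual benefit / expected benefit.
# Initiative names and figures are invented.

initiatives = [
    # (name, expected benefit, actual benefit), same unit per row
    ("AI chatbot: call savings (%)", 10.0, 6.0),
    ("IVR redesign: call savings (%)", 5.0, 5.5),
]

for name, expected, actual in initiatives:
    print(f"{name}: benefits realization ratio = {actual / expected:.2f}")

# AI chatbot: call savings (%): benefits realization ratio = 0.60
# IVR redesign: call savings (%): benefits realization ratio = 1.10
```

As the episode cautions, the ratio supports comparison only with judgment: fulfilling 60% of a promise on a high-value metric can still be worth more than 110% on a lesser one.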
Balancing Between Speed of Delivery and Business Impact
- It’s about speed versus accuracy. If you’re shooting at a target, it doesn’t matter whether you’re firing 100 or 10 bullets per minute if you have no idea what’s hitting the target. Impact intelligence is about gaining that insight: is it hitting the bullseye, the second ring, the third ring, or flying wide? Today, in terms of downstream impact, we have very little idea.
- A few companies like Amazon are very good at this. But most classic enterprises, the established brick-and-mortar businesses, lack it.
- I’m not sure there’s much of a balance to strike. Between speed and accuracy, you’ve got to get accuracy right first. Then, as you practice, speed will improve. If we focus on speed first without accuracy, it’s busy work: you’re doing lots of things. That’s what a feature factory is, churning out features.
- The more you churn out, the more you have to maintain and run, so costs keep rising year after year. With the same budget, if more goes to run, less is available for new things. Or you keep asking for budget increases, which brings questions like “What did I get for what I spent?”
- People shouldn’t have trouble trading some speed for measurement. It’s like the WIP concept in lean: start less and finish more. But I have a newer definition of finish: it’s not just delivery, it’s impact. Start less and finish more in terms of impact, extending the definition of WIP to the impact level, not just the delivery level.
1 Tech Lead Wisdom
- When I was a developer, I used to chase the shiniest tech stack, doing all the cool technical things. But impact matters more than how hot the tech stack is.
- Sometimes developers focus on tech stacks that look good on their resume, and at a junior level business impact may not matter much. As you progress in your career, this changes, unless you’re unaware of the dynamic and keep chasing the coolest tech stack.
- As you progress, you need to shift toward impact: talking more about it, articulating it, and thinking about technical decisions in that light. The more we gain these skills, the better for our tech career progression.
[00:01:16] Introduction
Henry Suryawirawan: Hello, guys. Welcome back to another new episode of the Tech Lead Journal podcast. Today, I have the pleasure to meet Sriram Narayan. He’s now an ex-ThoughtWorker, but he was a long-time ThoughtWorker. And I’ve kind of admired the books that he wrote and the things that he created, like the product operating model and Agile IT organization design. Sriram recently published a new book titled Impact Intelligence. When I saw the title, it kind of piqued my curiosity. So today we have the pleasure to learn what impact intelligence is. Sriram, thank you for your time. Welcome to the show.
Sriram Narayan: Hey, thanks for having me. A pleasure to do this with you. I see that you have an illustrious lineup of previous speakers, so I’m honored to be on the list.
[00:02:22] Career Turning Points
Henry Suryawirawan: Thank you for the kind words. So Sriram, before we start our conversation about impact intelligence, I want to invite you to probably tell us some career turning points that you think we all can learn from you.
Sriram Narayan: For me, joining ThoughtWorks in 2004 was a big turning point personally. Prior to that, straight out of college, I did my engineering at a college in Mumbai, and through campus interviews, I got into IT services. That was the trend at that time. I didn’t know any better, so I went with the flow. For the first six years of my career, I wasn’t too happy anywhere I went. Maybe I was naive, I don’t know, but I ended up switching five jobs in those first six years or so. ThoughtWorks was, in a way, my sixth job in 2004. My well-wishers had told me that I’d already screwed up my resume by changing jobs so frequently, so I’d better stick to this one.
Fortunately, I finally felt like, okay, here’s a place that I can relate to, and I liked the ways of working. ThoughtWorks was big into extreme programming at the time, so I was thrown into the deep end of TDD, continuous integration, mock objects, all those kinds of things, and I loved it very much. And I did well in that environment. It was a turning point because I was taking a bit of a risk with my career by making yet another switch. Also, Agile was very new at that time; people had barely begun to hear about it. People were telling me, are you sure you want to leave this big brand-name company to go to a company nobody has heard of, that has an office in Bangalore and says it’s going to do this Agile way of working? But something within me told me that yes, this is the next thing for me. I’m glad it turned out the way it did. So that was the first one.
The second one: ThoughtWorks is mainly a professional services company, but around 2008 or so, they decided to venture into software products as well. Around 2012, I had the opportunity to move into the products division. There were three products; one of them was called Go, which eventually became GoCD. I got the opportunity to join that team, and I’m glad I took it. I initially went in as a techie, more of a developer. But because it was a technical product, a build and deployment product, we almost needed one roadmap on the build side of things and another on the deployment side of things.
So soon an opportunity opened up to own and manage the roadmap on the deployment side of things. Through that, I ended up doing a fair amount of product management: talking to new customers, prospective customers, and existing customers, following industry trends, and based on all that, building up a roadmap, defending it with senior management, and then working with UX designers, engineers, and others to prioritize the features and get them delivered. Through that, I also saw the state of the industry with respect to adopting a tool like this.
Because continuous delivery was fairly new at that time. I think Jez Humble and Dave Farley wrote the book in 2010, if I’m not mistaken. And in 2012 we were building a tool that was aligned with the book and trying to take it to the market. But we saw that the market was not at all ready to think of build and deployment as a single responsibility, as a single team that does both build and deployment. Those are some of the experiences that led to the thinking that it’s not just about technical techniques and skills; the way teams are organized also matters to how well you can do these things. That led me into the organization design track a little bit, and eventually to writing the book on agile organization design, which came out in 2015. So I think the move into ThoughtWorks Studios, the products division, was another major turning point.
And finally, the third one was when I decided it was time to move on from ThoughtWorks and get into independent consulting. It was partly because, by then, I’d been doing a fair amount of digital transformation consulting: advising people about team structures, moving from projects to products, and that sort of thing. And because of my book and some other articles around that time, I started getting inbound expressions of interest from potential clients who wanted to hire me for my advice on these matters. However, from my employer’s point of view, that wasn’t always an attractive business proposition, because the client just wanted me and my advice for a few months, whereas the employer was in the business of selling software delivery, with maybe a little digital transformation advice as the front end of an engagement. So unless it was at least a couple of million dollars’ worth of a deal, they wouldn’t always entertain the request.
So I decided, okay, maybe I have moved away from the company’s sweet spot, and maybe I should try being an independent consultant. I was a bit nervous when I took the plunge because I did not have a ready lineup of clients or anything like that. But I’m glad I did it, because over the next six months to a year, things worked out. I’m still continuing my independent consulting.
Henry Suryawirawan: Yeah, thank you so much for sharing your story. I think it’s really interesting that in the first few years you were kind of job-hopping. For me, the learning is that if you’re not really clicking with your job, don’t be afraid to try other jobs, especially these days. I’m sure there are plenty of options until you find the right one. And then you stayed at ThoughtWorks for so long, and now you’ve become an independent consultant. So there are so many learnings there.
[00:10:52] Impact Intelligence
Henry Suryawirawan: So Sriram, let’s go to the main topic for today’s discussion, which is impact intelligence, your new book. Maybe let’s start with: what is your definition of impact intelligence, and why did you write this book?
Sriram Narayan: In the book, I’ve defined impact intelligence as an ongoing process of collecting, analyzing, and using data to maintain a constant awareness of the business impact of initiatives: tech initiatives, business initiatives, any new initiative into which the business decides to put some money. An initiative is different from the regular operations of the business; it’s meant to change something, maybe to grow the business or to save cost or to reduce risk, anything like that.
[00:11:40] The Importance of Impact Intelligence
Henry Suryawirawan: Yeah, I think it’s very interesting, because there are so many tech and digital initiatives these days, especially during the so-called digital transformation era. And statistics say that maybe more than 50% of such initiatives are considered failures. I think one of the biggest reasons is that they cannot reach the intended impact. So tell us why this kind of problem continues to exist, and how impact intelligence can help projects succeed.
Sriram Narayan: Yeah, business impact. When people make the case for an initiative, whether it’s somebody on the business side or a technology leader or a data leader or whoever it is, usually they say, okay, we think we should do this, and by doing this, it’ll help us in this way. Essentially, when it is approved, there is an allocation of business resources, whether it’s team bandwidth, other kinds of hardware and software resources, or having to run a marketing campaign, whatever it is. There is certainly some spend involved, in the form of resources or direct money. Therefore, it’s important to make sure that it actually translates into some kind of tangible business impact.
And increasingly that has become important, because funding for all these things is not as easy as it used to be. Funding has dried up both for startups as well as for big enterprises. In both cases, investors are asking, are we sure we’ll get the benefits by doing this? And in the case of big established companies, the situation is even more dire, if I may say so, because there are many such companies where the investors are saying, we are not sure that the return on investment is going to be better than just letting the money stay in a bank. Or giving us back the money in some way, by paying dividends or declaring a buyback; there are various ways to return money to investors.
And so investors are saying, we would rather have our money back today than have you deploy it into all these new things which are supposedly meant to grow the business, because based on your track record, we are seeing that it’s not really growing the business. So CEOs, COOs, CFOs, when they are in earnings calls, are having to deal with these kinds of questions. And sometimes they have good answers, but many times I feel their answers could do with more data; they could be more powerful. And that’s not just going to happen by preparing for an earnings call. It only happens if there is a culture of impact intelligence within the company, and the muscle is already in place, so that at all levels of the organization, people are constantly aware of how what they are doing actually translates into business impact. It’s not trivial or easy to do that. But I think the more you start doing it, you’ll at least find some opportunities where the linkages become clear.
Henry Suryawirawan: Yeah, what you mentioned is really interesting, because these days startups run into these issues when raising new funds, layoffs are happening in the name of efficiency, and even the traditional big corporates question whether the transformation initiatives they run actually bring back a return, like what you said about the investors.
[00:15:09] Understanding Business Impact
Henry Suryawirawan: So I think the key theme here is business impact: when you deploy new digital and tech initiatives, how can you actually get the business impact? But the definition of business impact itself can be quite abstract. Many people might have different ways of measuring it. In your book, you clearly define how to measure business impact. So maybe let’s start from there. What actually is business impact? And what are some of the misconceptions people have about it?
Sriram Narayan: Yeah, that’s a great question as well. Thank you. A lot of people think that they are outcome-oriented, that they’re working in an outcome-oriented fashion or their teams are outcome-oriented. But if you really ask about the outcomes, they talk about outcomes which are not really business outcomes; they are lower-level outcomes. And so in the book, like you pointed out, I describe a hierarchy of outcomes, where I say that if you say we delivered something or we launched a feature, that’s a delivery outcome. Or if you say we migrated to the cloud, okay, that’s a technology outcome. I’m not trying to underplay those outcomes; they are good achievements. But when you’re talking about business impact, they still need to translate into some kind of business impact.
So at the lowest level, you have, say, technology outcomes and delivery outcomes. Then at the next level, you might have the people in your product organization saying, hey, this feature got great adoption; we have lots of users using this feature. So feature adoption is also a kind of outcome. It’s one step closer to a business outcome; it’s beyond delivery. You delivered and the feature got adopted. But it’s still a bit away from business outcomes. Or somebody might say, we ran a marketing campaign and as a result, we have a million installations of the app. Okay, great, that’s a marketing outcome. Your product is now embedded in a million smartphones. But so what? What does that mean for the business?
So I think the closest to business outcomes are things like, we have more subscriptions, or CSAT has gone up, or retention has increased, or we have greater revenue. Or if you’re an e-commerce business, the average order value has gone up, or the returns have come down. Those kinds of things are, I think, much closer to what I call business impact. And of course, they ultimately translate into financial metrics, like your P&L and so on. But even if we don’t go all the way to P&L, one level short of that is revenue, cost, CSAT, all those kinds of metrics.
Obviously, at a technology level, when tech leaders do their work, they say, okay, what we do is not going to directly translate, because we are listening to what the business wants, what the product wants, and we are building things according to that. So we don’t directly own the business outcomes. Fair enough. But I think there is still an opportunity for all these people to work together to make sure that all their efforts jointly move the needle on the business outcome.
Henry Suryawirawan: Yeah. I think when you explain that, many of us, whether tech leaders or even product leaders, are guilty of explaining the outcome that we produce in terms of just delivery, just technology, or just product features and things like that. But we forget to translate it ultimately into the so-called business outcome or business impact. The way I see it, one way to define a business outcome is something that the investors or the board of directors are interested in. When you explain that we just migrated to the cloud, those people might not be interested at all. But if we talk about profit, or maybe cost savings and things like that, they are interested. So maybe that’s one way to bridge misinterpretations about business outcomes.
[00:19:11] Learning & Adopting from the NGO Space
Henry Suryawirawan: One thing about impact intelligence that you mention in the book, which I find really interesting, is that you learned about this concept while studying how nonprofit and sustainability organizations do it, and then you translated that into impact intelligence. How did you come up with this idea? Maybe tell us the story behind it so that we can get the insights as well.
Sriram Narayan: Sure, yeah. Soon after I started my independent consulting, a nonprofit organization in Bangalore reached out and asked me what Agile means in their context. They wanted my help with that sort of thing, so I got involved and started helping them out. Part of my job is also to understand their space, what they do. And as part of understanding the whole NGO or nonprofit space, I realized that, first of all, they have to obtain grants from various grant-providing organizations.
So they have to make a grant proposal, or they have to say, we qualify to get this grant from you because of our past success with various projects or because of our expertise in various areas, and so on. For example, a project might have to do with improving the employability of people in a particular region: vocational skills and so on. And there will be a field force that helps with that, conducting various kinds of training, setting up the labs, workshops, all of that. But before that, you have to get the grant, and then you have to execute the project.
And then I found that the social impact sector already has this in place: what are called monitoring and evaluation agencies. These are independent agencies whose job is to monitor the impact of the project and to evaluate whether, okay, you imparted all these skills, but did it really translate into more employability? Did employment in that region go up as a result? And would it have happened anyway? That is the counterfactual: were there other factors? Can we say with a reasonable amount of certainty that it is your efforts that led to the increase in employment, and not just an uptick in the economy, for example?
So I found that quite rigorous, that way of thinking about things. And I saw that this is already well established in the social sector, whereas in the business sector, we often don’t think about things like this. We just build it, launch it, move on, and build the next thing. So that led me to research a little more, and that’s how I came across this term that they were using: impact intelligence. I can’t remember whether I first thought of the term and looked it up, or whether it was the other way around. But nevertheless, that is how it happened.
Henry Suryawirawan: Yeah, I find it really interesting, the way we can learn from the NGOs: how they monitor and evaluate the impact of their nonprofit activities, and then see how that translates into actual impact, for example sustainability impact, which is sometimes really, really hard to quantify. At the same time, if we look at tech and digital projects, it’s also often really hard to quantify whether it was really our project that moved the needle, because there could be so many other things that also affect the outcome.
[00:22:35] Impact Feedback Loops
Henry Suryawirawan: So I think this all comes back to what you call the impact feedback loop. In the way we run a particular product or project, we know we have to have feedback loops, but mostly they’re technical feedback loops: the CI/CD, the TDD, and all that. Sometimes there are also product feedback loops: user adoption, features being used, and so on. But you have more feedback loops built into the hierarchy you mentioned. So tell us, what is this impact feedback loop? How can we use it? And is there something we should do differently in our projects?
Sriram Narayan: Yeah, like you mentioned, if you talk about things like the Agile engineering practices, they do have feedback loops, but they are mostly delivery feedback loops. And even the retrospectives, the scrum retrospectives and so on, are an example of another feedback loop, where you reflect on the last sprint and think about whether there is an opportunity for improvement in the next sprint. In one way, it’s a mechanism for continuous improvement; it is also a kind of feedback loop. But these are delivery feedback loops.
Now, yes, like you said, The Lean Startup book that came out more than 10 years ago popularized the phrase build-measure-learn. That is also a kind of feedback loop, and product people are often quite familiar with it. I think that loop is closer to impact feedback. However, it is what I call a proximate impact feedback loop, because when they say build-measure, they’re measuring the proximate impact.
To give you an example, one I used in the book: let’s say there is a big, established company with a big call center or customer contact center, where there are lots of agents handling calls. And somebody comes up with the idea that if we build this AI chatbot, then maybe more people will be served by the chatbot itself, and we’ll end up not having so many calls come into the contact center. That way we can potentially grow the business without having to grow headcount in the contact center. That might be the thinking.
Now, if you just use build-measure-learn, then from a chatbot point of view, you might simply measure the number of successful chatbot interactions, and you might iterate based on that. That’s good; it’s much better than simply delivering the chatbot and declaring success. However, in the context of the impact feedback loops I describe in the book, we have to go beyond proximate impact to downstream impact. Okay, it seems like the chatbot has good uptake; a lot of people are using it. We have this survey at the end of the chat session, and it looks like 30% of the people are saying, yes, I got what I wanted out of this chat session. But did that really translate into call savings? It may or may not have.
For example, what might be happening is that the users could have done the same thing using the mobile app, by navigating the menus, or the website. But instead of navigating the menus, they chose to interact with the chatbot. So what has happened is you’ve just switched from one self-service channel to another. The total amount of self-service may not have gone up, and therefore the call savings might not really have been realized. But unless we get into this, we wouldn’t know. So when I talk about impact feedback loops, I’m going one step beyond the typical build-measure-learn loop to concentrate on downstream impact.
[00:26:25] Proximate vs Downstream Impact
Henry Suryawirawan: Yeah, so proximate and downstream impact. I think these are crucial key terms. Are they similar to what some people call output and outcome, or lagging and leading indicators, those kinds of things?
Sriram Narayan: It’s related, but I also use this in conjunction with the visual I call the impact network. The impact network is basically a map of linkages: there is this low-level metric that contributes to the next-level metric, which then contributes to another, higher-level metric, and so on. And there is really no limit to the number of levels you can have in an impact network. It depends on the complexity of your domain, and it depends on how you see things in your business. Because there can be several levels, I find any kind of two-level classification inadequate. If somebody just says input and output, you’re just talking about one link in the chain, but there are many linkages in that chain. So depending on where you want to focus, relative to a particular context, one level is proximate, the next level is downstream.
But then if you go higher up, if you go back to this call savings example: the number of successful chatbot sessions is proximate impact and call savings is downstream impact. But that is in this context. You can go one level higher and say, okay, call savings, but so what? Does that really help me control my OPEX, my operational expenditure? In that case, relative to OPEX, call savings is the proximate impact, and the expected downstream impact of call savings is a reduction in OPEX. So that’s why I think they are related: you can take any link within the impact network and apply the terms you mentioned to the endpoints of that link.
[00:28:20] Building an Impact Network
Henry Suryawirawan: Yeah, I think the key thing you mentioned just now is the impact network. This is one way for companies to improve their so-called impact intelligence, because if you previously just tracked product metrics, or maybe just project success, you may not have this kind of impact network, which I find can be really powerful. In all my experience in the organizations I’ve joined, I rarely see this. The focus is mostly on financial statements or project roadmaps and things like that. But actually seeing, in an organization, how all things relate to each other and the kind of impact they bring all the way up to, for example, profit and revenue, that’s kind of missing.
So tell us how to build this impact network, for people who want to give it a try. Maybe we can take the example of the chatbot, the customer support domain. How can people visualize this impact network?
Sriram Narayan: Yeah, this is going to be a bit challenging to do without a visual, but I’ll, I’ll attempt it. So first of all, you know, the thing to realize is the impact network is in a way, like since this is a Tech Lead Journal and I’m, you know, the audience is going to be mostly techies, I think I can use this terminology and say that an impact network is basically a graph where every node in the graph represents a metric or at least something that can be quantified. It’s not a subjective description. So every node in the graph represents a metric, right? So the node is not like what you’re going to do to improve the metric. No, that is not a node in the impact network. It’s just, these are the forces at play. These are the metrics that interplay with each other, right?
So to go back to the AI chatbot example. So at the very top, I might say, okay, I have, the COO wants to cut operational expenditure and he’s asked for various ideas to cut operational expenditure. So you have operational expenditure at the top of this network. And then there are many ways to do it, right? And one of the ideas that came out was, hey, you know, maybe we can address, uh, call volume at the contact center. And maybe there is something we can do to save call volume and improve customer satisfaction at the same time, right? So that is the next level down, right? Saying, okay, if I save call volume, I’m making a potential contribution to OPEX.
So that is one link. Or actually, even before that, right, you can think of, okay, OPEX, customer operations is one heading of OPEX, right? So the whole cost of customer operations or customer support operations, right. And then call center operations. Then the next level down is okay, because call center operations has fixed costs and variable costs, right? The fixed cost is all the infrastructure and other things. One of the variable cost is the call volume. Because as the call volume increases, you might have to hire more people to attend to that volume, right? So that is the variable cost, but there are fixed costs. So those are like, if you go down the impact network, you have OPEX at the top and the next level you have one of the contributors to OPEX is cost of call center operations, right? Then the next level down. Okay, that can be split into fixed costs and variable costs, right? Now variable costs, one of the examples of variable cost is call volume, right?
So now the way you try to address call volume, right, is by introducing self service. You try to introduce various self service channels. But each self service channel can be an initiative in itself, right? So for example, when people call the number, they’re not immediately directed to a human. There’s usually some kind of a, you know, press one for this, press two for this and so on, right? And that is called the IVR or the interactive voice response menu. And somebody designed that menu with a view to maximizing self service, right? So that itself is an initiative that is also a self service initiative, right? So the one of the headings under uh, call volume is this IVR. Then there are other self service channels like your digital channels, right? But, uh, digital, again, you have the website, you have the mobile app, right? And then you could choose to go into them and do whatever you want, navigate the menus and get it done. Or you could choose to look at that chat bot icon at the bottom right or whatever and click on that. And then self service yourself through that, right? So that’s a channel within a channel, but, you know, that’s a separate initiative.
So if you build out this tree, so you have like call volume, then there is self service versus human service, right? And under self service, you have all these channels. So you already built up like four levels in this tree or this impact network, right. At no point, we have not yet discussed what are we going to do to improve this metric, right? That is separate, right? That’s like you can map, you can say, okay, I’ve thought of an initiative and I think this initiative will contribute over here, right? That’s an overlay on this network.
So now coming to how to build this, right? It’s usually an iterative process, right? I usually work with clients over several sessions to build it out. And it’s important to get at least the relevant key stakeholders involved so that there is some consensus on what you’re building, because there are many ways to imagine this and many ways to represent this, right? And we are arriving at one representation that is more or less acceptable to, you know, all the key stakeholders.
And the way you do it is a little bit top down, a little bit bottom up. I’ll usually ask: what metrics do you grapple with anyway, on a daily or monthly basis? Let’s put those metrics down and see if there are already some interrelationships between them. Similarly, I’ll ask: what are the things your execs care about? Let’s put those metrics at the top and see if linkages come up. Then I’ll ask: what is your current book of work? What are the things currently in play? And for those initiatives, why are you doing them? What benefits do you expect? Have we already captured those metrics in this picture, or do we need to break down an existing metric into something else to capture the existing book of work?
So I use a bunch of mechanisms like this, some inside out, some outside in, bottom up, top down, and iterate a few times to arrive at it. And even then, that is not the final version. That is like a version 1.0. As you live with it, extend it, and use it, it’s going to evolve. So there needs to be an owner for this impact network who can maintain it on a regular basis.
Henry Suryawirawan: Wow, thank you for the attempt to visualize that in narration form! If people can follow it, I’m sure it’s quite powerful, the way you think in terms of the metrics and how they interlink with each other from top to bottom. And like I said, I rarely see this in any organization, especially in startups, where people just scramble and build initiatives on top of initiatives and deliverables, right?
Sriram Narayan: Yeah, that’s a very good point. And I think the map partly exists in the founder’s head, the cofounder’s head, and a few other people’s heads. But unfortunately, if you put them all into different rooms, give them an hour, and ask each of them to draw that map, you will find that they all end up with slightly different maps. It won’t be exactly the same. And that is where the opportunity is. Some people think, oh, I don’t have time for all this whiteboarding stuff; we’ve got to run a business, let’s build it and ship it already.
They often don’t appreciate it up front, but I’ve seen that when I actually manage to get people together to do this, then more often than not, at the end they feel, oh, this was worthwhile. We now have a shared way of looking at all of these things. So yeah, I would highly encourage people to set aside time for this. People do so many leadership offsites, right? I think this could be a great exercise for an offsite: spend half a day doing this and iterate on it a little bit.
Henry Suryawirawan: Yeah, I was laughing when you mentioned that. Of course, this exists in some people’s heads, especially the board of directors or maybe the investors. But typically, it doesn’t translate down to the layers below…
Sriram Narayan: Yes.
Henry Suryawirawan: Like what you said, the shared understanding is not there. And because there is no shared understanding, people could interpret the things they are doing in different ways. And at the end of the day, it doesn’t actually align with making the business impact that some of those people would have expected.
[00:36:47] Differences with OKR
Henry Suryawirawan: Is this something similar to the concept of OKR? I know some companies, typically startups, follow the Google way, or the way of those big tech giants, right? Is this concept actually similar to OKR?
Sriram Narayan: See, if you again take the AI chatbot example: the AI chatbot team’s OKR, the key result, is typically framed in one of two ways. One, it’s framed in terms of delivery: let’s make sure we have this up and running for this product by this quarter. That is one kind of key result. Or, for product, it’s often framed in terms of the number of satisfactory chatbot sessions. They’ll say: if you run the survey at the end of the chat session, we should have at least a 30% satisfaction score. That is how the KR is framed.
I have not seen the KR framed in terms of downstream impact. And it’s also unfair to frame the KR in terms of downstream impact, because people will say, hey, my scope is limited; you can’t give me a KR in terms of the downstream impact. So the key difference is that OKRs are directed at individuals, whereas this is not. The downstream impact is directed at the initiative. The initiative that we have sponsored had better have this downstream impact, and you as a group ensure that it does.
[00:38:12] Impact Attribution
Henry Suryawirawan: So I’m quite curious, because there are so many projects out there, so many things happening that could drive business impact. Sometimes we come up with a hypothesis: by doing this, we are sure we can move the needle in this area. But eventually, it’s really hard to attribute the improvement to the things that we actually did.
So maybe from your experience, how can people start attributing their initiatives to the actual impact? Because there are so many variables at play. Or, like what you mentioned at the very beginning with the non-profit: it could just be the economic situation that improved, and suddenly your metrics come up as well. So how can we attribute things better?
Sriram Narayan: Yeah, well, attribution is a fairly well-defined area. There are many fields where people study attribution. Marketing, for example, is one domain where marketing attribution is a very well known term. Marketers want to understand how to distribute their spending across various social media channels: paid listings versus a focus on content, one channel versus another, and so on. And in marketing, they say you first create awareness of whatever you’re offering. Then the user enters a period of consideration where they consider your offering. And finally it leads to conversion, right? It might lead to conversion, or they become a…, yeah.
So marketing attribution is about figuring out whether it was the Facebook campaign that helped generate leads, or the Google campaign, or something else. And what sort of campaign, and so on. So one approach is to take some inspiration from how they do things. Although that area is now in a bit of a crisis, because Apple restricted the iPhone’s advertising identifier, so marketers are having difficulty tracking things and have moved on to digital fingerprinting and those kinds of techniques. So that is one.
The second is, I have given a couple of examples in the book about how to think about attribution. First of all, if you start thinking about it the right way, I think people are smart enough to figure out the rest of the steps. So what typically happens, if you go back to the AI chatbot example: let’s talk about four different initiative leads. There is an initiative lead for the AI chatbot. There is a lead for the website. There is a lead for the mobile app. And there is somebody in customer operations who designs the IVR, the interactive voice response menu. Now, all four of these are potentially candidates for self service. And typically, each of them reports their success individually to their respective managers. They all look at call volume, they might get some data and say, oh, I can see that call volume dropped by 10% last quarter, and it must be because of what I’ve done. And more or less, they all claim success on that basis to their respective managers.
So effectively what has happened is that all four people have claimed that 10% they saw. If all of that were true, it should have resulted in a 40% saving in call volume. But that didn’t happen. And there is no mechanism within the organization to reconcile this. Attribution is that mechanism. Instead of asking everyone to report their success individually, once a quarter I’m going to say: hey, last quarter we observed a 10% saving in call volume. Great, congratulations everybody. But now let’s get together and decide how much of this was due to external factors. How much of it is because maybe the business is losing customers, so there are fewer people calling us? And then how much of it is because of each of your respective efforts? Let’s make a reasonable attempt to make sure it all adds up, so that we have a consistent understanding of the impact of our own efforts, and so that we can decide where to spend our next round of investments.
If you start thinking like this, then the other pieces begin to fall into place. You can say, okay, some of the call volume change is seasonality. Some of it is business growth or degrowth. And then you can come up with some heuristics and formulas to start attributing components to the various channels. It’s not very scientific.
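As a toy illustration of that reconciliation discipline, here is a sketch in which a 10% observed drop is first reduced by estimated external factors and the remainder is split across channels. Every number and heuristic here is made up; the point is only that the pieces are forced to add up to the observed change, instead of four separate 10% claims implying an impossible 40%:

```python
# Observed quarter-on-quarter change in call volume: a 10% drop.
observed_drop = 0.10

# Estimated external factors (hypothetical heuristics agreed in the review):
seasonality = 0.03          # e.g. this quarter is always quieter
customer_shrinkage = 0.02   # fewer customers means fewer calls

# Whatever remains is attributable to the self-service initiatives.
attributable = observed_drop - seasonality - customer_shrinkage  # 0.05

# Split the remainder across channels, here in proportion to the
# self-service sessions each channel handled (made-up numbers).
sessions = {"AI chatbot": 40_000, "Website": 30_000,
            "Mobile app": 20_000, "IVR": 10_000}
total_sessions = sum(sessions.values())

for channel, count in sessions.items():
    share = attributable * count / total_sessions
    print(f"{channel}: {share:.1%} of the call volume drop")
```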
Now, the best way to do this is to run a controlled experiment. In the case of the AI chatbot, you say: I’m going to expose the chatbot to some portion of the population and not to the rest. And I design the experiment in such a way that everything else is well controlled for. If I can repeatedly run this experiment and show that, on average, when the AI chatbot is exposed, the corresponding volumes at the call centers drop a little, that will be very good quality evidence. However, very often we are not in a position to run these kinds of controlled experiments. There are various reasons why, and I go into all of them in the book.
Short of being able to run controlled experiments, what can you do next? You can do some heuristics-based attribution. The other challenge with controlled experiments is that you need a good understanding of statistics, or you need to trust your data scientists enough to go by what they say. If they say this experiment was underpowered and therefore we can’t rely on these results, and you don’t know what underpowered means, you’ve still got to trust what your data scientist is saying. Or if they say the effect size is not good enough, or some other terminology like that.
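For readers who have not met the term: an experiment is underpowered when the sample is too small to reliably detect the effect you care about. As a rough sketch of the kind of check a data scientist might run, here is a sample size calculation using statsmodels; the baseline and target rates are invented for illustration:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical: 40% of support contacts currently arrive as calls;
# we hope exposure to the chatbot brings that down to 36%.
effect_size = proportion_effectsize(0.40, 0.36)

# Sample size per group for 80% power at a 5% significance level.
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8
)
print(f"Roughly {n_per_group:.0f} customers needed per group")
```

If the experiment ran with far fewer customers per group than this, a null result would not be good evidence that the chatbot had no effect; that is what “underpowered” means in practice.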
And typically the business is not well versed in this. So as long as the data scientists say something that matches the business’s intuition, everything is well and good; otherwise, people don’t really accept those observations. Whereas attribution, I think, is a simpler process. It’s a more approximate process, but it’s simpler, and it helps build the muscle in this area: the muscle to think this way and to get more rigorous. And maybe at a certain point, you’d then be ready to listen to what the data scientists have to say.
Henry Suryawirawan: Yeah, I think that’s a really good example, especially when you mentioned the four initiatives all claiming the same impact at the business level. Simply because people don’t measure all these variables, when everyone claims success you can’t really attribute it.
[00:44:51] The Importance of Measurement & Measurement Debt
Henry Suryawirawan: So I think this all comes back to having good measurement, good analytics, and good thinking in the first place, when you actually plan the initiatives: knowing what things to measure, and what levers might move because of certain actions. And I know that in many organizations this capability might not be there. Is this something that a digital product team, or the organization, should deliberately build?
Sriram Narayan: Yes, absolutely. I think it’s increasingly important to say that you should only build it if you can measure it. If you can’t measure it, maybe you should not build it, because you’re just shooting in the dark, so to say. But what usually happens is that when I’m pitching, say, to build the AI chatbot, I’m only asking for money to build the AI chatbot. I’m not asking for money to build all the surrounding measurement infrastructure, even though I might need a new report from the call center to prove that there is a change in call volumes. But that is customer operations, a different department. Maybe it’s been handed off to a vendor.
So my team cannot build out that report. We don’t have the knowledge, we don’t have access to those systems, and so on. So it’s difficult to build all the necessary measurements within the context of my initiative. And that’s where I recommend that, before the initiative is greenlit, the people responsible for saying “let’s go ahead with this” should ask: do we have the measurements in place? If not, can we wait a bit and ask the appropriate teams to develop those measurements first, and then go ahead and start building?
There are multiple ways to do this. One of the ways I’ve suggested in the book is for the execs, the COO and the CFO. In a way, I’ve targeted this book at a COO and CFO audience, because I feel they are the people with the authority to do these things, like creating a measurement improvement program. No business leader is going to ask for it. And the technology leader is already asking for various things for tech modernization, so they are not going to ask for a measurement improvement program either. But the people who are signing the checks need to know what return they got for their spending, so they are in the best place to sponsor this program.
And if you have this program in place, and a team in place to build out measurements wherever there are gaps, then when any new proposal comes in, you can ask: what is the benefit you’re promising? Are these benefits verifiable? Through what measurements? Do we have those capabilities in place today? If not, can we build them first? We may decide that, yes, in principle we want to do this, but we don’t have to start tomorrow; we first build the measurements, make sure the benefits are verifiable, and then go ahead.
If the gap is very small, maybe the team itself can close it. If it’s just about building a dashboard on top of some existing base metrics, sure, the team can do it. But if there are more basic gaps, then you need something like the measurement improvement program. I say: pay off the measurement debt, just like we pay off technical debt. We might be rushing towards a release, so we just build something and get it out; but before we start work on the next release, we say, let’s pay off the technical debt. Similarly, before you start a new initiative whose benefits are not verifiable, it’s time to think about paying off the measurement debt you have accumulated in that area, and then go about your business.
Henry Suryawirawan: Yeah, thanks for mentioning measurement debt. When I read that part, I liked it, because we all know about technical debt. But measurement debt is equally important, if not more so, because it can be quantified in terms of business impact.
[00:48:31] iRex Framework
Henry Suryawirawan: So, the measurement improvement program: you mentioned this term, and it’s actually part of this thing called the iRex framework, right? Organizations that want to improve their impact intelligence can adopt this iRex framework. So tell us, at a high level, what is the iRex framework? How can people start using it to build their impact intelligence?
Sriram Narayan: Yeah. So the iRex framework is just a way of tying together all these recommendations into a framework with different modules, because it’s hard to adopt all of this in one go. So I’ve split it into eight different modules, and three of those are introductory modules; you can start with any of them. One of them, for example, is called impact visualization. That is the module through which you build out the initial version of the impact network, which is what we just discussed.
Then there is a module called the demand management module, which is really about asking for better justifications for what you’re going to build. If somebody says, let’s build this because by doing it we are going to improve CSAT, that is not a great justification, because how do you know that CSAT improved because of what you did? CSAT is a faraway metric. Okay, you articulated the downstream impact, but you also need the proximate impact. So can you articulate what you’re going to do in terms of both its proximate impact and its downstream impact? Can you say not just that you’re going to improve CSAT, but by how much, in what time frame, under what assumptions and dependencies? You provide better justifications for what you’re doing. That is part of the demand management module.
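One way to picture what such a justification might capture is a structured record that names both the proximate and the downstream impact, along with time frame, assumptions, and dependencies. The field names and values below are illustrative, not a schema from the book:

```python
from dataclasses import dataclass, field

@dataclass
class InitiativeJustification:
    """An illustrative record of the justification for an initiative."""
    initiative: str
    proximate_impact: str    # what the initiative directly moves
    downstream_impact: str   # the metric further up the impact network
    time_frame: str
    assumptions: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)

justification = InitiativeJustification(
    initiative="AI chatbot",
    proximate_impact="30% of chatbot sessions rated satisfactory",
    downstream_impact="2% reduction in contact center call volume",
    time_frame="within two quarters of launch",
    assumptions=["chatbot is discoverable on both web and mobile"],
    dependencies=["call center reporting can break out volume by intent"],
)
print(justification)
```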
And then, even before we get into impact attribution, saying who contributed how much: if there is no such culture in the organization, the initial step is just impact demonstration. Just like we do showcases in our delivery work, can you do a post-release showcase and demonstrate the impact? It’s a different matter whether the impact is less than or more than what was expected; we’ll come to that later. But just demonstrate the impact and the learnings and so on. That is the impact demonstration module.
And then when you come to impact attribution, that’s what I call the contribution analysis module, where you analyze the contribution coming from different sources, internal factors as well as external factors, and try to make sure they add up. But to do that effectively, you need the measurements in place, which is why you have the measurement improvement program.
And then there are the advanced stages. Once you’ve done all this, you can go to, for example, deviation analysis: what is the deviation between the expected impact and the actual impact, and what is the reason for those deviations? It’s meant to be a learning exercise. Because sometimes an initiative is okayed only because it promised, say, a 15% uplift. Had it promised a 10% uplift at the beginning, maybe it would not have been prioritized, because 10% is not good enough. Now, when you actually check at the end, what if it turns out the impact was only 9%? If we had known this at the beginning, we would not even have prioritized this.
So it’s not about beating anybody up; it’s about learning. Just like we talk about getting to better estimates for our developer stories (you estimated it as a three-pointer, but it took a month to deliver the feature), similarly, if you estimated something at a 15% uplift and it only ended up being a 9% uplift, what can we learn from that, and how can we do better?
And then, out of all this, you can report a couple of metrics. One of them I call the benefits realization ratio, which is simply the ratio of actual benefit to expected benefit. So if the actual uplift was 9% and the expected uplift was 15%, you have 9 over 15, which is 3 over 5, so 60%. 60% is your benefits realization ratio. This gives you a rudimentary way of comparing across initiatives: this initiative fulfilled 60% of its promise, whereas this other initiative fulfilled 80% of its promise. It’s a kind of Say:Do ratio, if you’re familiar with that term. But that doesn’t necessarily settle it: sometimes fulfilling 60% of your promise on a CSAT metric might be more valuable than fulfilling 100% of your promise on some other metric. So there are a few metrics that go together to help us understand what’s really going on and make better investment decisions in the next round.
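The arithmetic of the benefits realization ratio is simple enough to state as a one-function sketch:

```python
def benefits_realization_ratio(actual: float, expected: float) -> float:
    """Ratio of the benefit actually delivered to the benefit promised."""
    return actual / expected

# A 9% actual uplift against a 15% promised uplift:
print(f"{benefits_realization_ratio(0.09, 0.15):.0%}")  # prints 60%
```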
So all of this put together is what I call the iRex framework, and you can adopt it very gradually. In fact, I’ve been fortunate to have clients who were willing to try it in different ways. I’ve had one client start with impact visualization, another start with the demand management module, and a third start with the impact demonstration module. That’s how I came to suggest all three of them as potential starting points, depending on where you are and what your priorities are.
Henry Suryawirawan: Right. For those listeners interested in diving deep into this iRex framework and all the modules inside it, do check out the Impact Intelligence book. I find the concepts really interesting, and even just starting from the three beginner modules, I think we can all reap benefits. Obviously, I can’t fully conceptualize the advanced modules yet, because I have yet to be involved in those kinds of initiatives, but I think they can be really powerful. Especially, like you mentioned, for the COOs and CFOs out there who sometimes question: you spend so much money and so many resources, is it really bringing the true benefit? So do check it out.
[00:54:26] Balancing Between Speed of Delivery and Business Impact
Henry Suryawirawan: And one last question about all this impact intelligence. I know these days organizations try to prioritize speed of delivery; they come up with so many initiatives and try to deliver as fast as possible. At the same time, impact intelligence advocates measuring business impact more accurately. So how can we balance this speed of delivery with knowing the outcome and impact of the initiatives we are building?
Sriram Narayan: It’s kind of speed versus accuracy, right? Like if you’re shooting at a target: it doesn’t matter whether you’re shooting 100 bullets per minute or 10 bullets per minute if you have no idea what’s hitting the target. Impact intelligence is about gaining that idea: what’s hitting the target, and is it hitting the bullseye, the second ring, the third ring, or just flying out? Today, unfortunately, at least in terms of downstream impact, we have very little idea. There are a few companies, like Amazon, that are very good at this; they have teams of econometricians who develop models for it. And I’m sure there are many other tech companies that are good at this. But the vast majority of what I call the classic enterprise, the established, brick-and-mortar businesses that have been around a long time, lack this.
And therefore I would say I’m not sure there is much of a balance to be struck at all between speed and accuracy. I think you’ve got to get accuracy right first, and then, as you practice, speed will improve. Whereas if we focus on speed first, without any idea of accuracy, it’s busywork. You’re doing lots of things, churning stuff out. That’s what a feature factory is, in some ways: churning out features without knowing. And it compounds: the more you churn out, the more you have to maintain and keep running, so your run costs keep going up year on year. Given the same budget, if more goes to run, less is available for building new things. Or you have to keep asking for an increase in the budget, which is going to come with more questions, like: what did we get for what we spent last year?
So in the speed versus measurement conversation, people should not have too much trouble trading off some speed for measurement. It’s a bit like the WIP concept we talk about in Lean: work in process, or work in progress. Start less and finish more. But I have a newer definition of finish: finish is not just delivery, finish is impact. Start less and finish more in the sense of showing more impact. So maybe extend the definition of WIP to the impact level, not just the delivery level.
Henry Suryawirawan: Right. I like the way you use the shooting analogy: if you don’t shoot accurately, it’s kind of a waste. That reminds me of the speed and reliability aspects of the DORA metrics. It may not be a trade-off: if you improve both aspects, you can actually become a much higher performing organization altogether.
[00:57:32] 1 Tech Lead Wisdom
Henry Suryawirawan: So, Sriram, thank you so much for this conversation. I think we can all learn a thing or two about using impact intelligence to improve our delivery. But as we reach the end of our conversation, I would like to ask you the one question I always ask my guests at the end: what I call the three technical leadership wisdoms. Think of it just like advice; maybe you have a version of wisdom you can share with us today.
Sriram Narayan: This is a tough one for me. I’ve been thinking about it, and, well, in the context of this topic, maybe it’s a bit unimaginative, but the obvious takeaway from our discussion is this. When I was a developer, I used to run after the shiniest tech stack, the hottest new tech stack, doing all the cool technical things. But impact matters more than how hot the tech stack you’re working on is. Sometimes developers might feel, from their own personal interest point of view, that a tech stack is going to look good on their resume. And at a junior developer level, the business impact may not matter so much. But as you progress in your career, that changes. Unless you’re aware of this dynamic, you might get into the trap of always chasing the coolest tech stack. At some point in your career, you’ve got to shift your balance towards the impact side of things: start talking more and more about impact, being able to articulate it, and being able to think about your technical decisions in that light. The more we gain those skills, the better it will be for our tech career progression. So I’m afraid I don’t have three, but I’ll leave you with this one.
Henry Suryawirawan: Yeah, definitely very reflective. All the techies out there, I’m sure we all like to chase cool, trendy technologies; AI might be the buzzword these days. But don’t forget, there’s always the impact that you have to prove or demonstrate before you can consider the technology adoption a successful one.
So Sriram, if people love this conversation and want to talk to you more, ask you about impact intelligence or anything else, is there a place where they can reach you online?
Sriram Narayan: Sure, yes. You can connect with me on LinkedIn or Bluesky. To get my handles, you can go to my book website, www.impactintel.net. That’s intel with a single l, impactintel dot net. Or you can email me at sriram@agileorgdesign.com; that happens to be the website for my earlier book, Agile IT Org Design. I look forward to further interactions with anyone who has listened to this and wants a follow-up chat. Thank you very much for setting this up and for having me here. I enjoyed this conversation very much.
Henry Suryawirawan: Right, it is my pleasure to have you on the show. Thank you so much for spending your time explaining impact intelligence. Thank you, Sriram.
Sriram Narayan: You’re welcome. My pleasure.
– End –