#129 - GIST Framework for Building High-Value, High-Impact Products - Itamar Gilad
“The difference of why some companies are so much more successful at producing high-value, high-impact products than others comes down to 4 areas of GIST (Goals, Ideas, Steps, Tasks).”
Itamar Gilad is a coach and author with over 20 years of experience in product management, strategy, and growth, and was previously a product manager at Google and the head of Gmail’s growth team. In this episode, we discussed all things product management and how to build high-value products. Itamar first shared his journey at Google growing Gmail to 1 billion MAUs and some of his lessons learnt on managing large-scale product changes, getting user feedback, and dogfooding. Itamar then explained in depth his GIST framework as an alternative to the product roadmap: a collection of methods and best practices for producing high-value and impactful products. He shared some challenges of working with product roadmaps and how teams can create better alignment instead. He also shared how we can do product prioritization better by using the ICE technique and his Confidence Meter. Towards the end, Itamar shared the different ways companies can conduct product experimentation and how to use the GIST board to improve the way we execute product development.
Listen out for:
- Career Journey - [00:04:17]
- Growing Gmail - [00:06:06]
- Managing Large Scale Product Changes - [00:07:26]
- Getting Feedback from a Major Product Change - [00:10:48]
- Dogfooding - [00:15:21]
- GIST - [00:19:10]
- Problem with Product Roadmap - [00:27:17]
- Creating Alignment - [00:34:22]
- Prioritization and ICE - [00:38:02]
- Doing Product Experimentation - [00:43:59]
- Project & Task Management - [00:48:43]
- 3 Tech Lead Wisdom - [00:54:39]
_____
Itamar Gilad’s Bio
Itamar is a coach, author and speaker specializing in product management, strategy, and growth. For over two decades, he held senior product management and engineering roles at Google, Microsoft and a number of startups. At Google, Itamar led parts of Gmail and was the head of Gmail’s growth team (resulting in 1Bn MAUs).
Itamar publishes a popular product management newsletter and is the creator of a number of product management methodologies including GIST Framework and The Confidence Meter. Itamar is based in Barcelona, Spain.
Follow Itamar:
- LinkedIn – linkedin.com/in/itamargilad/
- Twitter – @ItamarGilad
- Website – itamargilad.com
- PM resources – itamargilad.com/resources
- Newsletter – itamargilad.com/newsletter
Mentions & Links:
- GMail – https://www.google.com/gmail/about/
- Inbox zero – https://www.techtarget.com/whatis/definition/inbox-zero
- Assumptions mapping – https://designsprintkit.withgoogle.com/methodology/phase2-define/assumptions-mapping
- Marty Cagan – https://www.linkedin.com/in/cagan
Tech Lead Journal now offers you some swags that you can purchase online. These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available.
Check out all the cool swags available by visiting techleadjournal.dev/shop. And don't forget to show them off once you receive any of those swags.
Growing Gmail
- That’s one of the things I really liked about Gmail. The customer focus, the fact that we were constantly striving to add more value and to make the product still relevant. And that’s one of the key takeaways that I always teach. Start with the customer, put them at the focus of everything.
Managing Large Scale Product Changes
-
When you work on this scale, there’s a number of complications that are not typical for any other kind of PM role that I experienced, at least. One thing is, as you mentioned, with any minor change, say you change the color of the send button, someone will hate it, someone will love it. It might affect the productivity of some people. So you need to be more cautious. So we were a bit risk averse, I would say, in the sense that we tried, while creating value, not to ruffle the product too much.
-
This was a major change in the user experience. And in order to reduce the risk, because it was very risky, we took a long time to develop this thing. It took about 12 to 14 months. We’ve done numerous user studies, some of them including hundreds of customers or users. We’ve done a lot of data analysis. We’ve done a lot of usability tests. Tens of thousands of Googlers volunteered to actually use this on their personal email. I really was appreciative of that.
-
And when you do such things, you are reducing the risk, so when you launch, eventually, you’re not surprised. You are hardly ever surprised, because you actually already learned all the bad sides that might happen, all the negative aspects, and you fixed them along the way. And that really informed my thinking as well about the right way to develop a product.
Getting Feedback from a Major Product Change
-
For me, the word feedback usually means after you launch. Don’t get to this point and then just learn for the first time whether or not they like it. That’s a very common mistake and a very costly one, because there are statistics that show most ideas actually are not good.
-
If you look at A/B experiments, etc. that Netflix, Microsoft, and Google have run, in the best-case scenario, one in three ideas actually creates any sort of measurable improvement. So don’t wait until the point that they need to give you feedback after you launch it to tell you it was actually a terrible idea. Usually the feedback you’ll get is that they will just not use it, cause it just doesn’t do anything good in their lives. Much better is to start as early as possible by validating the idea.
-
And sometimes you can validate it just on paper. You just do a little back-of-the-envelope calculation. Say how many people actually have this problem? How many of those will actually see that change that we created? Cause not all of them will. How many will convert and start using it? And how many of those will actually see the benefit we expect and keep using it?
-
And sometimes a lot of ideas, just on paper, we realize are not as strong as we think. Either because they don’t solve a big enough problem or because the likelihood that they will solve the problem for these people is small. So just this analysis, which is very cheap to do, will help you a lot.
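The back-of-the-envelope calculation above is just funnel arithmetic. A minimal sketch, where every number is a made-up assumption for illustration, not real data:

```python
# Back-of-the-envelope funnel estimate for a hypothetical feature idea.
# Every number below is an illustrative assumption, not real data.

users = 1_000_000        # total user base
have_problem = 0.20      # share who actually have this problem
will_see_change = 0.50   # share of those who will notice the change we made
will_convert = 0.30      # share of those who will try the new feature
will_benefit = 0.40      # share who see the expected benefit and keep using it

retained = users * have_problem * will_see_change * will_convert * will_benefit
print(f"Estimated users who keep using it: {retained:,.0f}")
```

Multiplying the funnel stages through often shrinks a "huge" idea to a small number of truly affected users, which is exactly the point of doing this on paper first.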
-
Then you have a lot of techniques to fake the product before you build it. You do a Wizard of Oz test. We’ve done this for the tab inbox.
-
Before we launched, we sent it for preview to a lot of article writers and newspapers, technical reviewers, ahead of time, and they had a week to review. And the reviews were very tepid. They were like, I don’t understand why this is needed. No one was excited. But by then we had tested so much with regular users that we knew they actually love it and it does exactly what they need. So we didn’t care. That just goes to show you that even the technical press and even the experts can’t always predict what works and what doesn’t.
Dogfooding
-
By the way, it’s not just Google. It’s a very common practice in Silicon Valley or in tech in general. Historically, Apple relies on it extensively. They don’t call it dogfooding. But Apple doesn’t like to do external user research. They use their employees extensively to test their products, for years sometimes.
-
It’s a really powerful way to get your colleagues to participate and to give you feedback, early feedback. They are much more tolerant of bugs than the customers would be. They’re much more supportive. And also many of them are technical enough to tell you exactly what’s going on, what’s wrong.
-
But they don’t always represent your ideal customer. The one that you want. I mean, most of the people you find in your company are very technical or tech savvy. Have a certain level of education, income, certain age groups, etc. So be aware that this is not necessarily representative.
-
What we did with the inbox, first off, we did fishfood, which was the team itself starting to test a very rough version of the product on our own inboxes, just to convince ourselves that there is value there. That’s yet another test I recommend trying, if you are in the target audience.
-
There’s also something called a bug bash. If you are not in the target audience, you’re not going to be able to dogfood this in your real life. But have the team come for a day or half a day and just try to complete some tasks with the software they’ve developed, and you’ll be surprised how much more enlightened they become by the end of this experiment. So bug bash is another interesting way to get into the shoes of the customer.
GIST
-
One of the questions at the back of my mind throughout my career was why some companies are so much more successful at producing high value, high impact products than others. And part of the reason is corporate culture, and some companies are a bit more behind and some are more advanced. But I don’t think that covers all of it. Sometimes companies with bad cultures manage to do a good product and vice versa. So I was looking for the kind of best practices or the principles that really help.
-
And the principles are pretty well understood. You need to be customer centric. You need to be evidence driven or evidence guided. You need to be able to adapt your plan, not just stick to roadmap. This is true on the roadmap level. It’s true on the sprint level. You need to be truly agile given your information.
-
And you need to empower people, because it doesn’t work if you centralize all the decisions in the hands of a few very smart people. You need to distribute the decision to create an organization that is intelligent enough to coalesce around the problems, try out things, etc. So those are the principles. Those are well understood for decades.
-
But some companies are better at implementing them than others. Google was an example. And I found that the difference comes down to four areas of change, or four areas that we need to tackle.
-
The first is how we set up goals, and what’s in these goals.
-
Everyone uses OKRs these days, that’s great. But OKRs can contain terrible goals, really bad goals, and they can contain the right type of goals. It can contain dozens of objectives and key results. It can contain just three but very focused ones.
-
So what is the right practice there? And we know some of the rules, outcomes over output. Less is more. Tied to actual behavioral metrics of customers, not just revenue.
-
But the actual guidelines of how to do this escape a lot of companies I work with. So that’s one area where I think a lot of companies can improve. Each of these areas is tied to the others, but each one is semi-independent.
-
You can implement just the goals changes to start with. You can set the North Star metric. You can set your metrics tree. You can start using OKRs correctly.
-
The second change is how do you choose which ideas to invest in? That’s the ideas layer.
-
An idea, just for clarification, is a hypothetical way to achieve the goal. It could be launching a new feature. It could be partnering. It could be starting to use a new API. It could be purchasing a company that does exactly what you want.
-
The statistics suggest that, best case, one in three ideas will work. And in reality, in most companies, in most cases, it’s one in ten, unfortunately. Which means maybe 90% of what’s currently in your product backlog, in your roadmap, is stuff you don’t really want to do. It’s a waste. It’s going to do nothing. Just look at existing products, how bloated they are with features that get almost no usage.
-
This is true for every product I worked on. These features are a liability. You have to keep maintaining them. You need to keep sustaining them. They create bugs for you in the future. They complicate your QA matrix. So it’s best not to launch these things. It’s best to invest in the things that really move the needle.
-
So prioritization is a key challenge. It always comes up as one of the top challenges when I speak to product teams. We don’t know how to choose the best ideas. How do you choose? Based on opinions, based on consensus, based on the opinion of the highest-paid person, you know the system. And these are very unreliable heuristics. So they send us again and again in the wrong direction.
-
What I usually teach is how to use ICE, which is very common: how to choose ideas and how to attribute confidence based on evidence. And for that, I created a tool called the Confidence Meter.
-
How to choose the ideas that are most likely to succeed. But there are big quotation marks around “most likely”, because this is, again, just a set of assumptions and guesswork and some work with evidence. Good prioritization doesn’t guarantee that you necessarily land on the right ideas. No one can actually predict.
-
You need to test these ideas. And that’s the step layer. So the step layer is about how to experiment, how to get ideas from concept to a launched feature by combining learning and building at the same time.
-
A Wizard of Oz test is an early stage test. Before that, you can do back-of-the-envelope calculation. Before that you can do ICE analysis. Before, you can do assumption mapping. There are a ton of things.
-
Later on, if you see the idea is still worth investing in, even if you pivoted and improved it along the way, then it becomes worthwhile to start doing the more expensive experiments. You know, the dogfoods or the fishfoods, the early adopter programs, alphas, betas, A/B experiments, etc. Based on that, you need to decide how many of these tests you need to build along the way.
-
Sometimes you can do just one test and launch the idea. Sometimes you can rely on expert opinion, cause it’s a very tiny change, and it doesn’t create much risk. It’s an art form. You need to learn to combine. For me, a step is like an experiment or an iteration that teaches something.
-
Then the big question is how to run this whole thing. How to do project management with an agile team. How to bring this to their lives and how to prevent the managers and the stakeholders from randomizing us constantly and coming in with new ideas and pushing us around. And so that’s the task layer. This is how to get the teams to work on the right things with a tremendous amount of context.
-
What I see is a lot of engineering teams or development teams caught up in protective agile layers. Protected from the customers, protected from the business. They don’t need to be bothered with these things. Just give us a prioritized product backlog, just break each item in the backlog into user stories, and after that, we will build it for you.
-
That’s a terrible mindset. This is not how good companies build products. You should not work this way at all.
-
What you should do is give a lot of context: the goals, the ideas, the steps. All these create a lot of context in the minds of the people who do the work. And invite them to, A, suggest ideas; B, invent some of the experiments; and C, become explorers in a sense, become discoverers. Not just people who deliver, but people who also discover. And that should be part of their job description. And this should be part of what they’re compensated on. We should move away from this output focus that a lot of teams live in.
Problem with Product Roadmap
-
The criticism of product roadmap is widespread.
-
Let’s start with a positive. Why do we need roadmaps? What function do roadmaps provide? Roadmaps provide some sort of sense of security, some sense that we are well-planned and we know what to expect. And then we can plan the rest of the company’s work. The marketing team can start preparing their marketing materials and training the sales team. And the sales team can prepare and maybe inform the customers. And depending on which company you work for, the CTO is happy cause he or she can level the resources and kind of know how many people they need to recruit each quarter.
-
Planning. It’s wonderful. Everyone loves a good plan. And this is a mindset that we inherited from the industry that preceded us, you know, the 20th century kind of manufacturing. And probably that worked when you’re producing cars or packaged consumer goods or all these other things.
-
The challenge is we face a lot more uncertainty than these industries of the past. There’s a lot of uncertainty in our markets, because the markets are very dynamic with software and the internet. Two guys or two girls in the garage can disrupt our market within a few years. There are far fewer barriers to entry. The customers have a lot of choice. The customers are very fickle. They can change their minds. So we can’t afford to just lock ourself into a plan that doesn’t react to new information. That will be very, very bad for a business and very bad for a product.
-
The second thing is these plans never come together as we expected. You just need to look at last year’s roadmap. How well did we deliver on it? Some things were delivered very late, some things were canceled. New ideas that we didn’t even agree on all of a sudden hopped to the top in the middle of the roadmap.
-
It just shows that this heavyweight process, this tremendous effort in actually prioritizing ideas and choosing and putting them on a roadmap and scaling the resources, it’s not worthwhile doing. We need to actually try to do something that is more agile. And I can go on about all the different negative side effects, cultural and other, that kind of being attached to roadmaps create.
-
A lot of organizations understand this, and they moved from yearly roadmaps to quarterly roadmaps. And beyond that, there’s a bit of a non-committal roadmap; I call it the now-next-later sort of concept. But even committing to a quarter each time means that during this quarter, you don’t really have the freedom to launch whatever you think you need to launch. It’s imperative to have this freedom. A quarter is a lot of time in most of our businesses.
-
I think a pragmatic approach to this is to realize that there are certain types of ideas where we gain high confidence. We already tested, and we went through our dogfood and the early whatever. We’re pretty sure this is a good idea. And by that point, we already built the product to a certain level in order to test it, so we also have pretty good confidence about how much effort it will take to finish. With these ideas, I’m pretty comfortable to put them on a timeline, because the risk is pretty low that by committing to this we’re actually doing something bad. It’s a good idea. We want to launch it. We know how. So for those, feel free to commit.
-
The other ones, the ones that are in the process of being evaluated, I would say don’t commit to them. Just color them and say: these ideas are medium confidence. We only tested them at this level. They might happen, they might not happen. We don’t commit. We don’t know for sure. And then there are ideas that are low confidence, etc.
-
At the goals level, I would say that most companies’ strategies stretch multiple years. Then there’s a yearly OKR for the company that says, this is where we want to be by the end of the year. But you could actually, on a timeline, say we want this particular key result or this particular objective to complete by the end of H1, right? So by this date. So you can plot your goals, your company-level goals and team-level goals, on a timeline. That’s sometimes called an outcome-based roadmap. So what do we want to achieve by when? That’s also a bit brittle, but it’s less likely to break than committing to a set of concrete ideas, I assure you.
-
Then there’s the question. Okay, but what do we tell the customers? How do we prepare? What about marketing and sales? What about resource leveling? I do think that if you want to be truly agile, these people need to learn agility as well. The CTO needs to learn to plan in a more agile way and to learn that sometimes he or she will need to recycle people from a project that failed to another project, and that’s just the way it is.
-
The marketing and salespeople I talk to are not necessarily in love with roadmaps either. They know that they’re late. They know that most roadmaps actually don’t come together as expected. They really want to create value for the customers too, because that makes their life so much easier, right? High value is what the customers want. That aligns all goals with their goals.
-
So the business teams really understand this. I saw a lot of business teams being willing to be much more agile and adaptive and lean than the executives give them credit. They don’t want to tie the engineering team necessarily to a set of deliverables if they know they can make their business goals. And that’s really where we need to all align. Not on a set of launches, but on a set of business goals that the teams commit to.
-
Big question of trust there. You need to build the trust in the engineering or the development team that they can actually deliver this. And this is a big hurdle to traverse. But if you’re there, the companies that are in this position, they’re much better aligned. They spend much less time on planning. They have much better outcomes at the end of the day, because everything they do is towards business and user goals. And they don’t need to invest as much resources. They actually use the resources in a much better way.
Creating Alignment
-
Alignment is a major challenge. And there’s more than one underlying problem or cause for misalignment. With GIST, I’m trying to help deal with some of the causes. I cannot guarantee that’s gonna help with everything.
-
One is that we try to align teams around projects, around ideas. Let’s say I have an idea. In my team, I want to do something, then I’m going to your team cause I’m dependent on you. There are dependencies. And I want you to commit to my idea. I want you to work on my project. From experience, as a very experienced product manager, I can tell you most of the time this fails. They have their own ideas. They don’t want to commit to your project. You’ll get a lot of pushback. Or sometimes they will say, no, I disagree that this is the best way.
-
I suggest aligning on goals. So come to that other team and say, listen, we want to achieve this outcome. Do you think this is a good goal? We depend on you. Are you willing to commit to this goal? This is a much easier discussion, because then they can say yes, or they can say, “Yes, but we have more important goals, so maybe talk to me next quarter.”
-
Both ways are good, because if they say no, you know you shouldn’t commit to this goal, cause you’re not going to achieve it. If they say yes, good, then you ask them to copy this goal, this OKR into their OKRs, which is a form of commitment. Doesn’t mean you’ll get it necessarily, but it does mean that they are serious about it. This is called a shared OKR.
-
What really helps teams align is, if from the top, there are clear goals. And you’d be surprised in how many organizations that’s not the case at all. There’s no clear North Star metric. There’s no clear set of metrics trees. There’s a million goals. Everyone’s idea becomes a goal.
-
Essentially, the OKRs are not about outcomes. They’re about a set of ideas that we’ve decided to implement. And that sends people in various directions, or it gets a lot of teams to work on one of these massive projects. So it kind of creates alignment, but really it just creates a massive project for us to collaborate on.
-
I suggest a process of starting with very few company level goals. If you’re a medium-sized company, don’t do middle level goals. Don’t do departmental goals or disciplinary goals. Don’t do UX goals versus engineering goals. Do team goals. And each team needs to ask themselves, how can we contribute to the company’s goals? And the answer often is, I need to collaborate with these three other teams in order to achieve that particular goal.
-
Middle managers are very good at helping the teams connect the dots and helping them, actually forcing them to collaborate. But you don’t need the middle managers to create goals in order to do this. In a larger company where you start having business units, etc, then yeah, it makes sense to have more middle level goals.
-
So fewer goals, fewer mid-level goals. Alignment on goals instead of ideas.
Prioritization and ICE
-
Everyone comes with their favorite idea. Everyone is convinced that this idea needs to be at the top of the list. Sometimes it turns into political pressure, escalations. And often as a product manager, I had the least power, least influence, and the least political skills. I didn’t know how to escalate. I didn’t know how to do all this stuff that these guys were experts at. So how do you defend against this?
-
First and foremost, you need good goals. If there’s no clear agreement on what your company and your team are trying to achieve, everything is a good idea. It’s really an open market. Just whoever pushes harder will get their idea. So you need to be very clear about what your goal is.
-
Without the goals, it’s really hard for you. You have nothing to fall back on, to rely on, to sometimes push back on ideas with. Then, when you have a bunch of ideas, you need to somehow stack rank them and create what I call a hint: which ideas you want to test first. And that’s ICE, essentially. How does it do it? By breaking the question into three parts.
-
One is, what is the impact? Impact on what? On the goals. Which particular goal? Usually, it’s either the North Star metric of your company or your business unit, or if that’s really hard to measure, then your North Star metric as a team. If that’s not the case, then maybe on a specific key result. Of course, it’s a guess, but this guess will improve. The more you test the idea, you start with a guess, and then it becomes a much more educated guess with some of these techniques I mentioned.
-
Second question is Ease. That’s the E at the end. It’s basically the opposite of person-weeks in most development teams. In a marketing team, it might be marketing dollars. It’s whatever is the scarcest resource. An easy idea is one that we can launch relatively quickly and with low cost. An expensive idea, a low-ease idea, is the opposite.
-
And then there’s confidence. And confidence is basically: how sure are we that the impact and the ease are what we think they are? So when they’re a guess and all they’re based on is my gut feeling, the confidence should be near zero. Because if there’s one unreliable sort of evidence, it’s self-conviction. Every terrible idea that was ever out there in the world, someone thought it was a good idea. And we can see now, in some very notable companies, how very experienced entrepreneurs push them to do really stupid things, because they’re sure they’re good ideas and they’re very convinced of their ability to predict the future. So don’t fall into this trap.
-
But then if you start doing back-of-the-envelope calculations and reviews with your peers and with experts and market research, competitive research, customer interviews, etc, you go up a scale of confidence. That is logarithmic, by the way. It goes up. The test becomes harder and harder. Most ideas will fall along the way. This is the confidence meter. And then you can give yourself higher and higher confidence scores along the way. And what happens usually is you also are able to adjust the I and the E.
-
The impact and the ease, because you know much better now how impactful this is and how easy it’s going to be. And usually the impact goes down and the effort goes up, cause you realize it’s not going to be as impactful and it will take more than you thought. There’s something called the planning fallacy, a known psychological mechanism: we tend to underestimate the time tasks take and overestimate the impact or the benefits they will give us.
-
So that’s how I recommend using ICE. Do it once, just to rank your ideas, and then keep doing it as you get more evidence. Don’t stop there. And definitely don’t rely on ICE as a one-time magic algorithm to tell you what to build.
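The ICE scoring described above can be sketched in a few lines. The 0-10 scales, the product formula, and the example ideas and scores are illustrative assumptions; teams vary in how they combine the three factors:

```python
# Minimal ICE ranking sketch. Scales (0-10) and the I*C*E product are one
# common formulation, shown here as an illustration, not a prescribed formula.
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: float      # guessed impact on the goal metric, 0-10
    ease: float        # inverse of the scarcest resource (e.g. person-weeks), 0-10
    confidence: float  # how evidence-backed the I and E estimates are, 0-10

    @property
    def ice(self) -> float:
        return self.impact * self.confidence * self.ease

# Hypothetical ideas with hypothetical scores.
ideas = [
    Idea("Redesign onboarding", impact=8, ease=3, confidence=2),
    Idea("Fix signup bug", impact=5, ease=9, confidence=8),
    Idea("New AI assistant", impact=9, ease=2, confidence=1),
]

# The ranking is a hint of which ideas to test first, not a final verdict;
# re-score as each test raises or lowers confidence.
for idea in sorted(ideas, key=lambda i: i.ice, reverse=True):
    print(f"{idea.name}: ICE {idea.ice:.0f}")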
Doing Product Experimentation
-
What I notice is that a lot of people, because the word experiment sounds very rigorous, think either of A/B experiments or betas or what they call an MVP, which is often a near-complete version of the product with basically all the features. Maybe it’s not super polished; it’s more like a beta.
-
That’s way too late to learn whether or not it’s a good idea. It’s basically waterfall. It’s just building the whole thing.
-
What I notice is that a lot of companies don’t realize what a large gamut of experimentation techniques, validation techniques, we have at our fingertips to validate ideas. And some of them don’t actually require experimenting at all. I like to break it into five buckets.
-
There’s assessment, which is just doing it on paper, just looking at whether or not the idea aligns with the goal. Doing assumption mapping to see how much risk is in the idea. These are all things you do without even collecting external data. And even those, a lot of the time, allow you to eliminate a large swath of your ideas.
-
Then there’s fact finding, which is looking at data that you already have. Usage data, behavioral data, other things. Conducting user interviews or relying on user interviews you’ve done in the past, cause it’s best not to do them just on demand, but on an ongoing basis. Look at competitive research; look deeply at your competitors. Do field research.
- There’s all sorts of ways to get data and then ask whether or not this data actually confirms our assumptions about the ideas. And that’s a key point, by the way. What we test is not the idea entirely, but some of the assumptions within the idea. So for a large idea, it’s sometimes good to break it down and ask: what are the assumptions?
-
And then the next phase is testing. You start to build the idea, but of course, in the first stage, you don’t need to build the whole thing. You can fake it. You can do fake door tests, landing pages, call-outs, buttons in your UI.
- Companies have validated whole business models this way. Human-operated tests: we talked about the Wizard of Oz; there’s also something called the concierge test. And, of course, fishfooding. So there are a lot of ways to test ideas at a very, very early stage, before they’re even close to being finished.
-
Then there’s another layer, which is kind of mid-level test. Alphas, early adopter programs. There’s all sorts of ways to test ideas that are not polished, not fully implemented, just the core scenarios. Not scalable, not built like we would build them to launch.
-
And then later on, there’s late stage tests, like betas, previews, labs, all the most rigorous tests, which are A/B experiments, which have a controlled element.
- Even the launch itself is another test. So we have tests and then we have experiments. I call experiments only the things that have a control element. Then there’s release. The release itself is an experiment too.
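The five buckets above can be summarized as a simple staged lookup, ordered from cheapest to most rigorous. The grouping and example techniques below are a summary of the discussion, not an exhaustive taxonomy:

```python
# Five validation buckets, cheapest first. Illustrative summary of the
# techniques mentioned in the discussion; names are not a formal taxonomy.
validation_buckets = {
    "assessment":   ["goal alignment check", "assumption mapping", "ICE analysis",
                     "back-of-the-envelope calculation"],
    "fact_finding": ["usage data analysis", "user interviews",
                     "competitive research", "field research"],
    "testing":      ["fake door / landing page", "Wizard of Oz",
                     "concierge test", "fishfooding"],
    "mid_stage":    ["dogfooding", "alpha", "early adopter program"],
    "late_stage":   ["beta", "preview", "labs", "A/B experiment", "launch itself"],
}

# An idea typically moves bucket by bucket, and most ideas are dropped early,
# so the expensive late-stage tests run only on the survivors.
for stage, techniques in validation_buckets.items():
    print(f"{stage}: {', '.join(techniques)}")
```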
-
So basically, it’s a matter of scheduling them: choosing which ideas you want to test, then asking yourself which are the key assumptions, and then, what’s the first step? Who should work on this step? What’s next, etc.? And that brings us to the next layer, which is how to project manage this, how to test multiple ideas in parallel, how to allocate resources, how to stage it, etc.
Project & Task Management
-
I would say that Agile in general is a very positive thing, especially compared to waterfall. But over the years, Scrum especially became more and more strict, more and more structured, more and more full of process. It became a very process-heavy thing. And then there’s a lot of process people involved, with dedicated Scrum Masters and dedicated agile coaches just to drive the process forward. So it’s becoming reminiscent of project management in the days of waterfall.
-
What I try to do with GIST is to interoperate with it in a way that doesn’t require the scheme you’re using today to change very much. So what I suggest teams do is use what I call the GIST board, which is basically a new process where you put three columns on a board. It could be a physical board or a digital board.
-
One is your goals, just the key results you committed to. And generally, for a product team of 10 engineers or less, I recommend no more than four key results per quarter. Even four is a lot, cause teams just cannot move that many key results. Those could be business, product, or user-facing key results. They could also be technical or design key results. I mean, if we need to cut down technical debt or close a bug backlog, those are very important goals as well. Don’t kick them out of sight, cause the teams will do them without you knowing.
-
You need to make a very conscious decision with your other leads, the designer and the engineering leads: what’s the ratio? And then you create a board around that. You pick the top ideas you want to start with, based on ICE or whatever method you like to use, and you put them next to the goals in the ideas column. Just the ones you’re starting to work on right away.
-
Then next to those, you put the steps. Not all steps, just the ones you are planning to do in the next couple of weeks, maybe. Cause that part will change a lot. The whole board has to be very dynamic, cause some ideas might turn out to be bad. And then you need to remove the entire idea with all its steps from the board and put another one instead. Some steps complete. Some steps need to be redone. So it’s a very dynamic thing.
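The three-column board Itamar describes could be sketched as a simple data structure. The four-key-result cap and the dynamic removal of a bad idea together with its steps come from his description; all class names, field names, and example values below are illustrative assumptions, not part of GIST itself.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str      # e.g. "fake-door test", "Wizard of Oz prototype"
    weeks_out: int        # only near-term steps live on the board

@dataclass
class Idea:
    name: str
    goal: str             # the key result this idea serves
    steps: list = field(default_factory=list)

class GistBoard:
    MAX_KEY_RESULTS = 4   # Itamar's suggested cap per quarter for a small team

    def __init__(self, key_results):
        assert len(key_results) <= self.MAX_KEY_RESULTS
        self.key_results = key_results
        self.ideas = []

    def add_idea(self, idea):
        # every idea on the board must serve a committed key result
        assert idea.goal in self.key_results
        self.ideas.append(idea)

    def drop_idea(self, name):
        # a bad idea is removed together with all its steps
        self.ideas = [i for i in self.ideas if i.name != name]

board = GistBoard(["Raise week-1 retention to 30%"])
board.add_idea(Idea("Onboarding checklist", "Raise week-1 retention to 30%",
                    [Step("fake-door test", 1), Step("prototype", 3)]))
board.drop_idea("Onboarding checklist")   # turned out to be a bad idea
print(len(board.ideas))  # → 0
```

The point of the sketch is the linkage: steps hang off ideas, ideas hang off goals, so dropping an idea cleans up everything underneath it, which is what keeps the board dynamic.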
-
It’s the job of the leads to keep managing the board, to keep it up to date. But I highly recommend meeting with the team at least every week or every other week to review the board and to talk about the next steps.
-
And that’s a perfect kind of segue into the sprint planning or whatever cycle planning you’re using. Cause that brings them a lot of context. It reminds them: what are the goals? What ideas are we pursuing? What are the steps?
-
We’re not just coding. We’re not just trying to push stuff into production, mark it as done, and move on to the next ticket. We’re actually trying to complete an experiment or a delivery, and that’s tied to trying to validate an idea, and that idea is tied to a goal. So a lot of engineers tell me this was really missing: before, they didn’t realize they were working in a vacuum.
-
Once they understand, a very nice side benefit that I experienced a lot in Gmail is that you don’t have to tell them that much. They understand what needs to be done. They come up with much better ideas than you.
-
Of everything I launched at Google, maybe 60% wasn’t my idea at all. People were just creative and came up with much better ideas than mine. And that’s exactly what you want. Cause then you have more free time to focus on researching the market, understanding the needs, etc., instead of trying to spoon-feed people exact requirements, which is enormously tiring. And it doesn’t get you what you want at the end of the day. Cause no matter how explicit your user stories are, they’ll still get it wrong if they don’t understand the context. So much better to give them the context.
-
And at the end of the day, once you have your GIST board and you have this column of steps, that turns into the backlog that feeds into the Scrum or Kanban process. You just need to prioritize it. And that’s exactly what they need.
-
The big change is that maybe mid-sprint there will be changes. Maybe mid-sprint we will realize these steps we thought were useful are not useful, because we learned new information. We need to scrap this plan. So for teams that are very, very strict about planning the entire scope and estimating it, and are not willing to change, this could be a problem. You can either do shorter sprints, or you need to challenge the assumption that this rigidity is actually a good thing and really helping. I would argue that being willing to change the scope of the sprint is really true to the nature of Agile.
3 Tech Lead Wisdom
-
Recognize what type of company you work for and adjust your expectations and your mode of operations accordingly.
-
Marty Cagan gave a really good dichotomy of company types. The first treats the product organization, which they often call IT, as a delivery organization. You’re working in a delivery team. You’re basically there to do whatever the business tells you to do. They don’t trust you at all to have your own opinions or to know what needs to be done. And incidentally, these organizations tend to be the strictest embracers of heavyweight Scrum and SAFe, etc. Maybe because delivery is the name of the game, and those, like Scrum today, are so targeted at delivery.
-
If you are working in such an organization, a lot of what I just described doesn’t apply to you. Don’t expect your organization to actually be able to mutate into this much more modern-thinking type of organization; you will be endlessly frustrated if you do. You should understand the rules of the game, and within these very limited confines, try to be more creative or more evidence-guided.
-
The second type of team that Marty Cagan describes is a feature team, which is basically a feature factory. There’s a lot more autonomy to decide how to implement the feature, what to do inside. They trust you on this, and maybe you invent the features too. But it’s basically still very output-focused: launch feature after feature after feature. You don’t challenge the feature much in the middle. You don’t do a lot of discovery. You basically just decide through some sort of prioritization scheme what to build, and then you build it. And there’s a strict focus on roadmaps, and a lot of time is spent on planning.
-
The third type is a product organization, where they actually understand the importance of outcomes over output. They understand the importance of discovery, and they do it to some level. No organization is actually 100% pure, doing everything correctly and without opinions, and maybe we shouldn’t strive to be in that situation. There you can still find a lot of places where there’s pain, where a lot of waste is being created; try to fix those and move the organization to a slightly better place. I think GIST applies to these last two groups.
-
So recognize where you are, what kind of organization it is, how much willingness they have to adopt these new things, and adjust your expectations and your work plan accordingly, if you want to be this change agent.
-
-
On your voyage to try to change things, use evidence a lot.
-
If you go to a battle of opinions with a person who is more influential than you or more senior, you will almost always lose, on principle, no matter how good your rationale or opinions are.
-
If you come to that same discussion with evidence, that’s a more interesting discussion. You might get more positive results out of that. At least they’re giving you the room to talk at their level. It’s not just instructing you to follow their idea. So evidence is such a powerful thing.
-
Don’t overestimate it. Don’t expect it to be magical; some people are still very opinionated. But learn to use it. That’s why the Confidence Meter is so popular.
-
-
Learn to let go.
-
I had a challenge as a product manager for a very long time. I felt tremendous responsibility for the success of the product, and I wanted to instruct everyone exactly what to do and how to do it. And that included designers and engineers. I wanted everything to be at a very high level, based on my own criteria. And that doesn’t work very well. They don’t like you, first off, for doing that. And second, it’s not the right way to work, cause they’re the experts in their areas of expertise.
-
Part of the reason I adopted GIST is that it enabled us to have a much more balanced discussion. And part of the problem is that a lot of the responsibility does lie on the shoulders of the PMs.
-
So change the dynamics and say: no, it’s our collective responsibility to meet the goals. If one of us fails, all of us fail. And then it’s much easier for you to delegate decisions to them, because they’re optimizing for the same thing as you. They’re not optimizing just for code quality or just for implementing Scrum properly. They are optimizing to achieve the company goals and the team goals.
-
- These are my three tips. Recognize the type of company you work for and adjust to it. Use evidence where you can. And learn to let go; let your team members guide some decisions.
[00:01:10] Episode Introduction
Henry Suryawirawan: Hello again, my friends and my listeners. Welcome to the Tech Lead Journal podcast, the podcast where you can learn about technical leadership and excellence from my conversations with great thought leaders in the tech industry. If this is your first time listening to Tech Lead Journal, subscribe and follow the show on your podcast app and social media on LinkedIn, Twitter, and Instagram. And for those of you longtime listeners who want to appreciate and support my work, subscribe as a patron at techleadjournal.dev/patron or buy me a coffee at techleadjournal.dev/tip.
My guest for today’s episode is Itamar Gilad. Itamar is a coach and author with deep experience in product management, strategy, and growth, and was previously the head of Gmail’s growth team. In this episode, we discussed all things about product management and how to build high value products. Itamar first shared his journey at Google growing Gmail to 1 billion monthly active users and some of his lessons learned on managing large scale product changes, getting users feedback, and dogfooding. Itamar then explained in-depth his GIST framework as an alternative to the product roadmap, a collection of methods and best practices for producing high value and impactful products. He shared some challenges working with product roadmap, and how teams can create better alignment instead. He also shared how we can do product prioritization better by using the ICE technique and his Confidence Meter. Towards the end, Itamar shared the different ways of how companies can conduct product experimentation and how to use the GIST board to improve the way we execute product development.
I really enjoyed my conversation with Itamar, hearing some of his lessons learned growing Gmail and techniques to improve product management and development. If you also find this episode useful, I would appreciate it if you can help share it with your colleagues and your community so that more people can learn from this episode. Also leave a 5-star rating and review on Apple Podcasts and Spotify. Let’s go to the conversation with Itamar after a few words from our sponsors.
[00:03:43] Introduction
Henry Suryawirawan: Hello, everyone. Welcome back to another new episode of the Tech Lead Journal podcast. Today, we have a topic about product management. I’m really happy to see Itamar Gilad today. Itamar is a very experienced product manager. He has worked in multiple big companies like Microsoft and Google, and in fact, he has helped grow the Gmail product to one billion monthly active users. I think there is a lot of expertise that we are going to learn from Itamar. So thank you for this opportunity, and I’m looking forward to our conversation.
Itamar Gilad: Thank you for inviting me, and yeah, I’m looking forward to it as well.
[00:04:17] Career Journey
Henry Suryawirawan: So, Itamar, I always like in the beginning to ask about your career journey. Maybe if you can mention any highlights or turning points that you think will be good for us to learn from.
Itamar Gilad: Yeah, for sure. I started out as an engineer and I worked in engineering for a while as a software developer. I really liked it. I started rising up the chain of command and found myself an engineering manager of sorts, and I felt that was not the right career path for me. So I kind of switched to the dark side a bit and became a product manager.
That was in the year 2001, so you can see how old I actually am. And that was probably a better career choice for me. I spent about 15 years in product management, mostly in Israel, where I’m from. But I also worked for some international companies, and I spent a few years in Switzerland as a product manager working for Google. During these 15 years, I had a chance to work for startups, for scale-ups, for Microsoft, and then for Google, initially for YouTube, but then I spent a few years in Gmail, as you mentioned.
And that was an overwhelming experience. We had, as you said, a billion active users, give or take, and a lot of impact. But at the end of this period, I felt it was time for yet another change. So I stopped being an active product manager and became a coach for product management teams and leaders. And I started writing a lot about product management. Some of it is my own ideas, some is an aggregation of things that I read from other people much smarter than me. And that’s where I am today. I’m coaching, I’m presenting, I’m writing, and I’m having a chance to speak to you.
Henry Suryawirawan: Thanks for sharing your story. So if you do not know yet, Itamar also writes newsletters and has self-published several books. They are really short and snappy, but very concise and dense. So make sure to check out his website. Later on, I’ll put it in the show notes as well.
[00:06:06] Growing Gmail
Henry Suryawirawan: So Itamar, let’s start from your Gmail experience because I think this might be a good opportunity for many of us to learn from. So maybe if you can identify when you started joining Gmail, how big was the product back then? And what was the evolution like, growing from that stage until 1 billion monthly active users?
Itamar Gilad: So I joined Gmail in 2011. It was pretty massive by then. It had, and I’m using the official number, 420 million users already, so I cannot really take any of the credit for the big success of Gmail. It was a wonderful product that took off and transformed email. People really loved the amount of value it gave. I think it stayed a very honest product and really tried to help the users and help the customers. And while it showed ads, we had a clear separation: the ad team was there to monetize the product, and the rest of us were there to create value for the customers. And this balance of give and take is very important in every product. But we weren’t there just to sell more ads, I can tell you. And that’s one of the things I really liked about Gmail: the customer focus, the fact that we were constantly striving to add more value and to keep the product relevant.
And that’s one of the key takeaways that I always teach. Start with the customer, put them at the focus of everything.
[00:07:26] Managing Large Scale Product Changes
Henry Suryawirawan: Nice! We always need to start from the customer, solving the real problem, right? Not imaginary problems. But maybe specifically on product management techniques: is there anything that you do differently at that large scale of users? Because any small change that you make, and I have also experienced this, whenever Gmail changes the UI/UX, some people will like it and some people will really hate it. So how do you strike this balance in product management, in order to cater for a billion users?
Itamar Gilad: Yeah. When you work at this scale, there are a number of complications that are not typical for any other kind of PM role, at least that I experienced. One is what you mentioned: with any minor change, if you change the color of the send button, someone will hate it, someone will love it. It might affect the productivity of some people. So you need to be more cautious. And especially with email, because it’s a mission-critical product for so many people, you need to have a gentle hand. So we were a bit risk averse, I would say, in the sense that we tried, while creating value, not to ruffle the product too much.
Just as I joined, there was a major launch of a redesign of Gmail that was very controversial. Some people really loved it. It modernized Gmail. Before, it was kind of ugly, but very functional, and people liked it. And then we launched this redesign that went across all of Google; by the way, at the same time, a new design language had just launched. And a lot of people really hated it, because it made the information less dense. They saw fewer lines or fewer threads inside the email. So that taught me a lot about how sensitive we should be about these changes.
Years later, I had an opportunity to launch a major innovation, if you like, or a major change when we introduced the tabs. So we realized that a lot of users are not really organizing their inbox in any sort of way. Everything that comes in stays in the inbox. And if it goes to the other page, it’s as good as archived for them. It’s really hard for them to sift through the piles of email. The advanced users, the power users had techniques. They knew how to use labels and filters and all the other tools of Gmail. But the vast majority of users didn’t.
So we had to develop a system that enables them to stay a bit more organized without having to do a lot of work. And that led, after research, to the development of the social tab, the promotions tab, and a few other optional tabs you can activate. This was a major change in the user experience. And in order to reduce the risk, because it was very risky, we took a long time to develop this thing. It took about 12 to 14 months. We did numerous user studies, some of them including hundreds of customers or users. We did a lot of data analysis. We did a lot of usability tests. Tens of thousands of Googlers volunteered to actually use this on their personal email. I was really appreciative of that. And when you do such things, you are reducing the risk, so when you launch, eventually, you’re not surprised. You are hardly ever surprised, because you have already learned all the bad sides that might happen, all the negative aspects, and you fixed them along the way.
And that really informed my thinking about the right way to develop products. You don’t need to be as risk averse as Gmail. Obviously, in your product you should maybe take more risk. But still, these techniques, I think, are very useful.
[00:10:48] Getting Feedback for a Major Product Change
Henry Suryawirawan: So, there are two things that interest me as well. When you roll out a change like this, for example introducing these new tabs, it’s like a new workflow for many people, and sometimes it could be disruptive. How do you actually gather critical feedback from the users, knowing that your product is being used in multiple countries, geographies, and cultures, and the language might be different as well?
So how do you do this research and how do you actually get the feedback that actually allow you to take the risk and make the change?
Itamar Gilad: So first off, for me the word feedback usually means after you launch: there’s some feedback tool, and they let you know. I assume that’s not what you meant. But don’t get to that point and only then learn for the first time whether or not they like it. That’s a very common mistake and a very costly one, because there are statistics showing that most ideas are actually not good.
If you look at the A/B experiments that Netflix, Microsoft, and Google have run, in the best-case scenarios, one in three ideas actually creates any sort of measurable improvement. So don’t wait until the point where they give you feedback after launch to tell you it was actually a terrible idea. Usually the feedback you’ll get is that they just won’t use it, cause it doesn’t do anything good in their lives. Much better to start as early as possible with validating the idea.
And sometimes you can validate it just on paper. You just do a little back of the envelope calculation. Say how many people actually have this problem? Let’s put a guess. How many of those will actually see that change that we created? Cause not all of them will. How many will convert and start using it? And how many of those will actually see the benefit we expect and retain, keep using it?
And sometimes, with a lot of ideas, just on paper we realize they are not as strong as we think. Either because they don’t solve a big enough problem, or because the likelihood that they will solve the problem for these people is small. So just this analysis, which is very cheap to do, will help you a lot.
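The back-of-the-envelope calculation Itamar describes is just a chain of multiplications over his four questions. A minimal sketch, where every input figure is a made-up guess, not real data:

```python
# Back-of-the-envelope estimate for an idea, following Itamar's questions.
# All input figures below are invented example guesses.
monthly_active_users = 1_000_000
have_the_problem     = 0.40   # how many people actually have this problem?
will_see_the_change  = 0.50   # how many of those will see the change we created?
will_convert         = 0.20   # how many will convert and start using it?
will_retain          = 0.30   # how many will see the benefit and keep using it?

reached = monthly_active_users
for rate in (have_the_problem, will_see_the_change, will_convert, will_retain):
    reached = int(reached * rate)

print(f"{reached:,} retained users (~{reached / monthly_active_users:.1%} of MAU)")
# → 12,000 retained users (~1.2% of MAU)
```

Even with generous guesses, the compounding funnel often shrinks an idea's reach to a small fraction of the user base, which is exactly why this cheap analysis kills weak ideas early.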
Then you have a lot of techniques to fake the product before you build it. You can do a Wizard of Oz test. We did this for the tabbed inbox. In the very first test, we invited people to a usability test. They saw Gmail already with the tabs, and inside the tabs, as if by magic, they saw their own messages, which actually came from their inbox. How did this magic happen?
First off, they signed an agreement that allowed us to process their inbox a little bit. Not actually going into the messages, just the subject and sender. While the interviewer was interviewing them, we scraped the top 15 messages from their inbox with a Chrome extension. And then, behind the scenes, we sorted them manually. We guessed: this looks like an update, let’s put it here. This looks like a promo. This looks like social. After a few minutes, we gave the researcher a green light and said, okay, show them the new inbox. But of course, it was a facade. This wasn’t really Gmail; it just showed them how it might be. And we got incredible feedback just from this simple experiment.
And we saw, A, there’s tremendous value in it. The first group was 12 customers; 10 of them absolutely loved it and wanted it right away. And then there were two that said no: “I already have a system to organize this, and this will conflict with my system.” Those were the power users, and this ratio, roughly 12 to 15% of people who don’t want the tabbed inbox and hate it when we show it to them, repeated throughout our research. So it was really interesting to see.
Incidentally, a lot of my colleagues were in this group, so it was a hard sell to convince them that people actually need this. They were like, we have solutions for this, why do we need another one? And also the technical press. Before we launched, we sent it for preview to a lot of technical reviewers and newspapers ahead of time, and they had a week to review. And the reviews were very tepid. They were like, I don’t understand why this is needed. No one was excited. But by then we had tested so much with regular users that we knew they actually loved it and it did exactly what they needed. So we didn’t care. That just goes to show that even the technical press and even the experts can’t always predict what works and what doesn’t.
Henry Suryawirawan: Wow! This is very valuable sharing, especially for people who like to run experiments on their products and introduce new ideas. Always validate with your customers, your users. Don’t just follow reviews from experts or the press, or even from internal employees. Sometimes you can’t trust them either. So always validate with your users.
[00:15:21] Dogfooding
Henry Suryawirawan: Relating to internal employees, there’s one thing that I also love about what Google is doing, which is called dogfooding. You mentioned it as well, right? Before you launch, you do very thorough testing, with thousands of Googlers using the product. Maybe tell us a little bit more about dogfooding. Why is it important, and how should we do it properly?
Itamar Gilad: Absolutely. By the way, it’s not just Google. It’s a very common practice in Silicon Valley, and in tech in general. Historically, Apple has relied on it extensively, though they don’t call it dogfooding. Apple doesn’t like to do external user research, but they use their employees extensively to test their products, sometimes for years.
Microsoft, the same. When I joined Microsoft in 2003, the first thing I noticed was that Outlook, the mail client, was completely buggy. I asked people what was going on, and they said, oh, that’s because we’re all dogfooding the next version of Outlook. So that was the norm.
So it’s a really powerful way to get your colleagues to participate and to give you feedback, early feedback. They are much more tolerant of bugs than the customers would be. They’re much more supportive. And also many of them are technical enough to tell you exactly what’s going on, what’s wrong.
But you mentioned a very important point. They don’t always represent your ideal customer, the one that you want. I mean, most of the people you find in your company are very technical or tech savvy. They have a certain level of education, income, etc., certain age groups. So be aware that this is not necessarily representative.
What we did with the tabbed inbox, for example: I invited dogfooders. But first, we did fishfooding, where the team itself started testing a very rough version of the product on our own inboxes, just to convince ourselves that there is value there. That’s yet another test I recommend trying, if you are in the target audience.
There’s also something called a bug bash. Even if you are developing some, I don’t know, B2B system for, I don’t know, doctors for information retrieval. You are not in the target audience. You’re not going to be able to dogfood this in your real life. But have the team come for a day or half a day and just try to complete some tasks with the software they’ve developed, and you’ll be surprised how much more enlightened they become by the end of this experiment. So bug bash is another interesting way to get into the shoes of the customer.
With the new tabbed inbox, we launched it to anyone who wanted it, but I also sent them a survey to understand what kind of inbox management they were already employing. So we had the big inboxers, the people who were not cleaning their inbox, and those were the ones we listened to most closely. We had the small inboxers, and the zero inboxers, which are like the ninja cult of inbox management. You know, inbox zero. Almost no one does it; it requires extreme discipline. But at Google, a lot of people thought that’s how people actually should use their inbox. Completely irrelevant to the regular user. So when a zero inboxer came to me and said, you know, here’s a problem, I would qualify it, knowing that person has a mindset that is not necessarily representative of the larger population.
So it’s a very powerful tool. But learn to qualify who’s saying what. Your managers are not representative of the audience either. External experts? They’re experts. It’s great to listen to them, great to talk to them. But they cannot necessarily predict how people will see the product. People react to products in a very complex way: partly emotional, partly intellectual, partly rational, partly not, plus the social elements. It’s a very hard thing to predict, really. So don’t trust logic alone, or the views of experts.
Henry Suryawirawan: Thanks for sharing that. I think it’s really valuable, all these lessons learned from your Gmail experience.
[00:19:10] GIST
Henry Suryawirawan: So let’s move on to what you’re doing now. You are now coaching people on product management, teaching them how to do it much better. And you have one framework called GIST. G-I-S-T, which stands for Goals, Ideas, Steps, and Tasks. Maybe you can share a little bit: what is this GIST about, and what kind of problem are you trying to solve with this framework?
Itamar Gilad: Sure. So one of the questions at the back of my mind throughout my career was why some companies are so much more successful at producing high value, high impact products than others. And part of the reason is corporate culture, and some companies are a bit more behind and some are more advanced. But I don’t think that covers all of it. Sometimes companies with bad cultures manage to do a good product and vice versa. So I was looking for the kind of the best practices or the principles that really help.
And the principles are pretty well understood. You need to be customer centric. You need to be evidence driven or evidence guided, which I think is very, very important. You need to be able to adapt your plan, not just stick to roadmap. This is true on the roadmap level. It’s true on the sprint level. You need to be truly agile given your information. And you need to empower people, because it doesn’t work if you centralize all the decisions in the hands of a few very smart people. You need to distribute the decision to create an organization that is intelligent enough to coalesce around the problems, try out things, etc. So those are the principles. Those are well understood for decades.
But some companies are better at implementing them than others. Google was an example. I saw at Google how some of these were very powerful. And you could even see some of this coming out in the examples I gave you. And I could compare this to my experience at Microsoft, which was in a completely different place, I would say, on these aspects. Maybe it has changed since then. And I found that the difference comes down to four areas of change, four areas that we need to tackle.
The first is how we set goals, and what’s in those goals. So everyone uses OKRs these days, and that’s great. But OKRs can contain terrible goals, really bad goals, or they can contain the right type of goals. They can contain dozens of objectives and key results, or just three very focused ones. So what is the right practice there? We know some of the rules: outcomes over output; less is more; goals tied to actual behavioral metrics of customers, not just revenue.
But the actual guidelines for how to do this escape a lot of the companies I work with. So that’s one area where I think a lot of companies can improve, and that’s the goals layer. Each of these layers is tied to the others, but each one is semi-independent. You can implement just the goals changes, for example, to start with. You can set the North Star metric. You can set your metrics tree. You can start using OKRs correctly. And those are some of the things I teach.
The second change is how you choose which ideas to invest in. That’s the ideas layer. An idea, just for clarification, is a hypothetical way to achieve the goal. It could be launching a new feature. It could be partnering. It could be starting to use a new API. It could be purchasing a company that does exactly what you want. These are all ideas: hypothetical ways.
As I mentioned earlier, the statistics suggest that, best case, one in three ideas will work. And in reality, in most companies, in most cases, it’s one in ten, unfortunately. Which means maybe 90% of what’s currently in your product backlog, in your roadmap, is stuff you don’t really want to do. It’s waste. It’s going to do nothing. And it’s not hard to see the result of this. Just look at existing products, how bloated they are with features that get almost no usage.
This is true for every product I worked on. These features are a liability. You have to keep maintaining them, keep sustaining them. They create bugs for you in the future. They complicate your QA metrics. So it’s best not to launch these things, and best to invest in the things that really move the needle.
So prioritization is a key challenge. It always comes up as one of the top challenges when I speak to product teams: we don’t know how to choose the best ideas. How do you choose? Based on opinions, based on consensus, based on the opinion of the most senior person; you know the system. And these are very unreliable heuristics, so they send us again and again in the wrong direction.
So what I usually teach is how to use ICE, which, again, is very common. It's very well known. But to go very deep on the I, on the E, and especially on the C, the Confidence. How to choose ideas and how to attribute confidence based on evidence. And for that, I created a tool called the Confidence Meter, which is now gaining popularity. Quite a few companies are using it. If you want, we can talk about it later.
So that’s idea layer. How to choose the ideas that are most likely to succeed. But there’s a big quotation around most likely, because this is again, just a set of assumptions and a set of guesswork and some work with evidence. It doesn’t guarantee that if you do good prioritization, you necessarily land on the right ideas. No one can actually predict. Definitely, not ICE.
So you need to test these ideas. And that’s the step layer. So the step layer is about how to experiment, how to get ideas from concept to a launched feature by combining learning and building at the same time.
And I gave some examples. So a Wizard of Oz test is an early-stage test. Before that, you can do a back-of-the-envelope calculation. Before that, you can do an ICE analysis. Before that, you can do assumption mapping. There's a ton of things. And later on, if you see the idea is still worth investing in (maybe you also pivoted, you improved it along the way), then it becomes worthwhile to start doing the more expensive experiments. You know, the dogfoods or the fishfoods, the early adopter programs, alphas, betas, etc., A/B experiments. It really depends on the type of idea, if it's a big idea or a small idea, if it's risky or not, and on your level of risk tolerance. I mentioned in Gmail we were very risk averse, but some companies should be less risk averse than us. Based on that, you need to decide how many of these tests you need to build along the way.
So sometimes you can do just one test and launch the idea. Sometimes you can rely on expert opinion, cause it’s a very tiny change and it doesn’t create much risk. It’s an art form. You need to learn to combine. So that’s what I teach in the steps layer. For me, a step is like an experiment or an iteration that teaches something.
Then the big question is how to run this whole thing. How to do project management with an agile team, especially since we all have Scrum teams or Kanban teams or whatever. How to bring this into their lives, and how to prevent the managers and the stakeholders from randomizing us constantly, coming in with new ideas and pushing us around. So that's the task layer. This is how to get the teams to work on the right things with a tremendous amount of context.
Because what I see is a lot of engineering teams or development teams caught up in protective agile layers. Protected from the customers, protected from the business. They don't need to be bothered with these things. Just give us a prioritized product backlog, then break each item in the backlog into user stories. I don't know if you've ever seen this process. And after that, we will build it for you. That's our job.
That’s a terrible mindset. I’m sorry. This is not how good companies build products. You should not work this way at all. What you should do is give a lot of context, the goals, the ideas, the steps, all this creates a lot of context in the minds of the people who do the work. And kind of impart them to A, suggest ideas. B, invent some of the experiments. And C, become explorers in a sense. Become discoverers. Not just people who deliver, but people also discover. And that should be part of their job description. And this should be part of what they’re compensated on. We should move away from this output focus that a lot of teams live in. So that’s what I teach in the task layer.
Henry Suryawirawan: It’s really jam-packed! And I think there are a lot of insights. Maybe we will cover one by one.
[00:27:17] Problem with Product Roadmap
Henry Suryawirawan: But before we start with all these goals, ideas, steps, tasks and all that, you touched on something that is pretty common in many startups or product companies, right? It's about the product roadmap. And I know that you specifically wrote something about problems with product roadmaps. Some companies do a yearly product roadmap or even quarterly. So tell us why this is a challenge and what GIST does differently from a product roadmap.
Itamar Gilad: Right. So the criticism of product roadmaps is widespread. It's not just me.
Let’s start with a positive. Why do we need roadmaps, what function roadmaps provide. So roadmaps provide some sort of sense of security or some sort of sense that we are well-planned and we know what to expect. This feature will land in this quarter, etc. And then we can plan the rest of the company’s work. The marketing team can start preparing their marketing materials and training the sales team. And the sales team can prepare and maybe inform the customers. And depending on which company you work for, the CTO is happy cause he or she can level the resources and kind of know how many they need to recruit each quarter.
Planning. It’s wonderful. Everyone loves a good plan. And this is a mindset that we inherited from the industry that preceded us, you know, the 20th century kind of manufacturing. And probably that worked when you’re producing cars or packaged consumer goods or all these other things.
The challenge is we face a lot more uncertainty than these industries of the past where classic business management was dominant. There's a lot of uncertainty in our markets, because the markets are very dynamic with software and the internet. Two guys or two girls in a garage can disrupt our market within a few years. There are far fewer barriers to entry. The customers have a lot of choice. The customers are very fickle. They can change their minds. So we can't afford to just lock ourselves into a plan that doesn't react to new information. That will be very, very bad for a business and very bad for a product.
The second thing is these plans never come together as we expected. You just need to look at last year's roadmap. How well did we deliver on it? Some things were delivered very late, some things were canceled. New ideas that we hadn't even agreed on all of a sudden hopped to the top in the middle of the roadmap. And that may be okay, cause that actually makes us a bit more agile.
It just shows that this heavyweight process, this tremendous effort in actually prioritizing ideas and choosing and putting them on a roadmap and scaling the resources, it's not worthwhile. We need to try to do something that is more agile. And I can go on about all the different negative side effects, cultural and otherwise, that being attached to roadmaps creates.
A lot of organizations understand this, and they moved from yearly roadmaps to quarterly roadmaps. And beyond that, there's a bit of a non-committal roadmap; I call it the next-and-later sort of concept. But even committing to a quarter each time means that during this quarter you don't really have the freedom to launch whatever you think you need to launch. It's imperative to have this freedom. A quarter is a lot of time in most of our businesses.
So I think a pragmatic approach is to realize that there are certain types of ideas where we've gained high confidence. We already tested them, we went through our dogfood and the early whatever, and we're pretty sure this is a good idea. And by that point, we've already built the product to a certain level in order to test it, so we also have pretty good confidence about how much effort it will take to finish. With these ideas, I'm pretty comfortable putting them on a timeline and saying we're going to launch this in Q1, even a specific date in Q1, because the risk is pretty low that by committing to this we're actually doing something bad. It's a good idea. We want to launch it, and we know how. So for those, feel free to commit.
The other ones, the ones that are in the process of being evaluated, I would say don't commit to them. Just color them and say, no, these ideas are medium confidence. We only tested them to this level. They might happen, they might not happen. We don't commit. We don't know for sure. And then there are ideas that are low confidence, etc. So that's at the ideas level.
At the goals level, I would say that most companies' strategies stretch multiple years. Then there's a yearly OKR for the company that says, this is where we want to be by the end of the year. But you could actually say, on a timeline, we want this particular key result or this particular objective to complete by the end of H1, right? By this date. So you can plot your goals, your company-level goals and team-level goals, on a timeline. That's sometimes called an outcome-based roadmap: what do we want to achieve by when? That's also a prediction, and it's also a bit brittle, but it's less likely to break than committing to a set of concrete ideas, I assure you. So it's worthwhile doing this exercise.
Then there’s the question. Okay, but what do we tell the customers? How do we prepare? What about marketing and sales? What about resource leveling? These are really important questions. I’m not discounting them. I do think that if you want to be truly agile, these people need to learn agility as well. The CTO needs to learn to plan in a more agile way and to learn that sometimes he or she will need to recycle people from a project that failed to another project, and that’s just the way it is.
The marketing and salespeople I talk to are not necessarily in love with roadmaps either. They know that they're late, they know that most roadmaps don't actually come together as expected. They know that a lot of these features that are promised are not the best ones. They really want to create value for the customers too, because that makes their life so much easier, right? High value is what the customers want. That aligns our goals with their goals.
So the business teams really understand this. I saw a lot of business teams being willing to be much more agile and adaptive and lean than the executives give them credit for. They don't necessarily want to tie the engineering team to a set of deliverables if they know they can make their business goals. And that's really where we all need to align: not on a set of launches, but on a set of business goals that the teams commit to. There's a big question of trust there. You need to build the trust that the engineering or development team can actually deliver this. And this is a big hurdle to traverse. But the companies that are in this position are much better aligned. They spend much less time on planning. They have much better outcomes at the end of the day, because everything they do is towards business and user goals. And they don't need to invest as many resources; they actually use their resources in a much better way.
So this, I think, is the ideal you need to aspire to.
Henry Suryawirawan: Thanks for explaining that. I think it's really important to understand why product roadmaps fail, and what are some of the things you propose here to make it better. Apart from trust, I think you also touched on in the beginning that the team needs to know a lot of context. Not just, okay, here are the features that we want to build, but also tie it back to why we want to do it, for which customer, what kind of problems, what business goals we want to achieve.
[00:34:22] Creating Alignment
Henry Suryawirawan: And I think another challenge that I normally see is about alignment. We have multiple teams in the company. We have one big goal, but we have to align between multiple teams. How does GIST do it differently to align these teams, different people, different managers, so that they can go towards the same goal? Do you have any tips here?
Itamar Gilad: Well, alignment is a major challenge. And there’s more than one underlying problem or cause for misalignment. With GIST, I’m trying to help deal with some of the causes. I cannot guarantee that’s gonna help with everything.
One is that we try to align teams around projects, around ideas. Let's start from the bottom. Let's say I have an idea in my team. I want to do something, and I'm going to your team cause I'm dependent on you. There are dependencies. And I want you to commit to my idea. I want you to work on my project. From experience, as a very experienced product manager, I tell you, most of the time this fails. They have their own ideas. They don't want to commit to your project. You'll get a lot of pushback. Or sometimes they will say, no, we disagree that this is the best way.
I suggest aligning on goals. So come to that other team and say, listen, we want to achieve this outcome. We want to reduce the number of, I don't know, hacker break-ins into accounts by 50% next quarter. Do you think this is a good goal? We depend on you. Are you willing to commit to this goal? This is a much easier discussion, because then they can say yes, or they can say, "Yes, but we have more important goals, so maybe talk to me next quarter."
Both ways are good, because if they say no, you know you shouldn't commit to this goal, cause you're not going to achieve it. If they say yes, good, then you ask them to copy this goal, this OKR, into their OKRs, which is a form of commitment. It doesn't mean you'll necessarily get it, but it does mean that they are serious about it. This is called a shared OKR. I used this a lot at Google. So that creates alignment across teams, for example.
What really helps teams align is clear goals from the top. And you'd be surprised in how many organizations that's not the case at all. There's no clear North Star metric. There's no clear metrics tree. There are a million goals. Everyone's idea becomes a goal. Essentially, the OKRs are not about outcomes; they're about a set of ideas that we've decided to implement. And that sends people in various directions. Or a lot of teams get pulled into working on one of these massive projects, which kind of creates alignment, but really it just creates a massive project for us to collaborate on.
So I suggest a process of starting with very few company-level goals. If you're a medium-sized company, don't do middle-level goals. Don't do departmental goals or disciplinary goals. Don't do UX goals versus engineering goals. We don't need any of this. Do team goals. And each team needs to ask themselves, how can we contribute to the company goals?
And the answer often is, we need to collaborate with these three other teams in order to achieve that particular goal. Middle managers are very good at helping the teams connect the dots, helping them, actually forcing them, to collaborate. But you don't need the middle managers to create goals in order to do this. In a larger company, where you start having business units, etc., then yeah, it makes sense to have more middle-level goals.
So: fewer goals, fewer mid-level goals, alignment on goals instead of ideas. Those are just some of the ideas I can offer.
Henry Suryawirawan: Thanks for sharing that. One thing that piques my interest is fewer goals. Some companies, of course, if not all, have many ideas and multiple stakeholders, right? So one stakeholder cares maybe more about revenue, another more about active users, and another about cost saving.
[00:38:02] Prioritization and ICE
Henry Suryawirawan: You touched earlier on prioritization. We have so many ideas within a company. Maybe they are all aligned to the same goal, or maybe they are all different. How do you do this prioritization? Can you tell us a little bit more about this ICE technique, and then the Confidence Meter that you also touched on earlier?
Itamar Gilad: Alright, cool. And by the way, you raise a good point. I talked about alignment inside the product organization, between teams, but there's an even bigger question of alignment with business stakeholders, etc. But prioritization kind of comes into that, cause that's really where we feel some of the pressure, right? Everyone comes with their favorite idea. Everyone is convinced that their idea needs to be at the top of the list. Sometimes it turns into political pressure, escalations. And often, as a product manager, I had the least power, the least influence, and the least political skills. I didn't know how to escalate. I didn't know how to do all this stuff that these guys were experts at. So how do you defend against this?
First and foremost, you need good goals. I'm sorry to go back to this. If there's no clear agreement on what your company and your team are trying to achieve, everything is a good idea. It's really an open market: whoever pushes harder will get their idea in. So you need to be very clear about what your goal is. If you're the onboarding team, your goal is to improve the percentage completion rate of onboarding, for example. That's your team's local North Star, and this should be reflected in your goals. And when someone comes to you with an idea and says, "You know what would be cool? We really want to do this cross-company initiative that puts emojis in everything," you say, that's wonderful, but unfortunately, that doesn't stack up very high in my list of priorities, because look, this is what I'm committed to, and this was reviewed by my managers and my managers' managers and the stakeholders, and everyone said yes. The onboarding team needs to focus on these things. So now you're coming with this other thing; I'm sorry, this quarter I cannot help you. And be willing to take the idea. I mean, write it down, put it in your idea bank, but don't necessarily act on it. Without the goals, it's really hard for you. You have nothing to fall back on, to rely on, to push back on ideas with.
So that’s number one. Then when you have a bunch of ideas, you need to somehow stack rank them and create what I call a hint, which ideas you want to test first. And that’s ICE essentially. Some people consider ICE like magic. It’s like it will tell me exactly what I should build and after that I just build it. I don’t need to test it anymore cause ICE said. That’s a completely bogus interpretation towards ICE. It doesn’t work this way. It will never work this way. There’s no magic. No one can tell you.
ICE just gives you a hint. How does it do it? By breaking the question into three parts. One is, what is the impact? Impact on what? On the goals. Which particular goal? Usually, it's either the North Star metric of your company or your business unit, or, if that's really hard to measure, then the North Star metric of your team. If that's not the case, then maybe a specific key result, right? We wanted to shorten the average onboarding time from 30 days to 2 days; what's the potential impact of each idea on this thing? Of course, it's a guess, but this guess will improve the more you test the idea. You start with a guess, and then it becomes a much more educated guess with some of the techniques I mentioned.
The second question is Ease. That's the E at the end. It's basically the inverse of person-weeks in most development teams. In a marketing team, it might be marketing dollars. It's whatever your scarcest resource is. An easy idea is one that we can launch relatively quickly and at low cost. An expensive idea, a low-ease idea, is the opposite.
And then there’s confidence. And confidence is basically how sure are we that the impact and the ease of what we think they are. So when they’re a guess and all they’re based on is my gut feeling, the confidence should be near zero. Because, if there’s an unreliable sort of evidence, that’s self conviction. Every terrible idea that was ever out there in the world, someone thought it’s a good idea. And we can see now in some very notable companies, how very experienced entrepreneurs push them to do really stupid things, because they’re sure they’re good ideas and they’re very convinced in their ability to predict the future. So don’t fall into this trap.
But then, if you start doing back-of-the-envelope calculations, reviews with your peers and with experts, market research, competitive research, customer interviews, etc., you go up a scale of confidence. That scale is logarithmic, by the way. As you go up, the tests become harder and harder. Most ideas will fall along the way. This is the Confidence Meter I mentioned; it kind of encompasses that. And then you can give yourself higher and higher confidence scores along the way. And what usually happens is you're also able to adjust the I and the E.
The Impact and the Ease, because you now know much better how impactful this is and how easy it's going to be. And usually the impact estimate goes down and the effort estimate goes up, cause you realize it's not going to be as impactful or as easy. There's something called the planning fallacy, a known psychological mechanism: we tend to underestimate the time a task will take and overestimate the impact or the benefits it will give us. So this is a good way to adjust for that.
So that’s how I recommend using ICE. Do it once, just rank your ideas and then keep doing it as you’ll get more evidence. Don’t stop there. And definitely don’t rely on ICE as a one time magic algorithm to tell you what to build.
Henry Suryawirawan: Right. I think I read one of your blog posts as well, where you illustrate how to use ICE for building, I don't know, an internal portal versus a chatbot or something like that. That is probably one good illustration of how we should use this ICE technique, right? Do it once, but also do it iteratively. And as you can see, the I, C, and E will change based on the additional information and data that you have. I think that's a very good exercise for people to look at.
[00:43:59] Doing Product Experimentation
Henry Suryawirawan: So now we have done all this prioritization. We have a good number of ideas. How do we break them down? How do we deliver what you call experiments or steps, which is the next layer? How do you actually scope them and deliver them? Because ideas are just ideas. The most important thing is how we roll them out to the customers, to the users who will use that idea.
Itamar Gilad: Right. So experimentation. Everyone wants to do it. It's a magic word. I hear a lot of companies are interested in introducing it. What I notice is that, because the word experiment sounds very rigorous, a lot of people think either of A/B experiments, or betas, or what they call an MVP, which is often a near-complete version of the product with basically all the features, all the bells and whistles. Maybe it's not super polished. When I was young, this was called a beta, or maybe an alpha, but it's more like a beta, really. That's way too late to learn whether or not it's a good idea. You invest; it's basically waterfall. It's just building the whole thing.
What I notice is that a lot of companies don't realize what a large gamut of experimentation and validation techniques we have at our fingertips to validate ideas. And some of them don't actually require experimenting at all. So I like to break it into five buckets.
There’s assessment, which is just doing it on paper, just looking whether or not the idea aligns with the goal. Doing assumption mapping to see how much risk is in the idea. These are all things you do without collecting external data even. And even those a lot of time allow you to eliminate a large swath of your ideas. Park them. Say for now. Don’t look the most promising.
Then there’s fact finding, which is looking at data that you already have. Usage data, behavioral data, other things, conducting user interviews or relying on user interviews you’ve done in the past. Cause it’s best not to do them just on demand, but to do them on a ongoing basis. Look at competitive result, like look deeply at your competitors. Doing field research. There’s all sorts of ways to get data and then ask whether or not this data actually confirms our assumptions over the ideas. And that’s a key point, by the way. What we test is not the idea entirely, but some of the assumptions within the idea. So it’s sometimes good for a large idea to break out and ask what are the assumptions? There’s a very good tool called assumption mapping by David J Bland. Very highly recommended.
And then the next bucket is testing. You start to build the idea, but of course, in the first stage, you don't need to build the whole thing. You can fake it. You can do fake door tests: landing pages, callouts, buttons in your UI. They do nothing, except that when you click on them, it shows the intent, that people actually are interested in this thing, and then you show them a little popup that says, you know, we're not quite ready, but do you want us to notify you when it's ready? Which is another test, of course. Companies have validated their whole business model this way. It's a really powerful technique. Then there are human-operated tests: we talked about Wizard of Oz, and there's something called a concierge test. There are a lot of techniques, and, of course, the fishfooding I mentioned. So there are a lot of ways to test ideas at a very, very early stage, before they're even close to being finished.
Then there’s another layer, which is kind of mid-level test. Alphas, early adopter programs. There’s all sorts of ways to test ideas that are not polished, not fully implemented, just the core scenarios. Not scalable, not built like we would build them to launch. And then later on, there’s late stage tests like betas, previews, labs, all the most rigorous tests, which are A/B experiments, which have a controlled element.
Even the launch itself is another test. So we have tests, and then we have experiments; I call experiments only the things that have a control element. Then there's release. The release itself is an experiment too. When we released the tabbed inbox, we did it very gradually. We launched to 5% and we monitored this 5%. If nothing negative started to happen with any of their metrics, then we launched to another 10%, and then 15%. We surveyed the people. And sometimes you do a holdback: you launch to 99.5% and hold 0.5% out of the thing for another few weeks, just to see if there are still differences.
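The gradual rollout with a holdback that Itamar describes is often implemented by hashing users into stable buckets. The sketch below is a generic, assumed scheme for illustration, not Gmail's actual system; the user IDs, percentages, and bucketing function are all my assumptions.

```python
# A minimal sketch of a gradual rollout with a holdback. Users are hashed
# into a stable bucket in [0, 100); the feature is on only for buckets below
# the current rollout percentage, so ramping 5% -> 15% only adds users.
import hashlib

def bucket(user_id: str) -> float:
    """Deterministic bucket in [0, 100) derived from the user id: the same
    user always lands in the same bucket across ramp stages."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 10_000 / 100.0

def feature_on(user_id: str, rollout_pct: float, holdback_pct: float = 0.0) -> bool:
    """True if this user sees the new feature at the given rollout percentage.
    The holdback keeps the top slice of buckets on the old experience even at
    'full' launch, so metrics can still be compared for a few more weeks."""
    b = bucket(user_id)
    if b >= 100.0 - holdback_pct:
        return False  # holdback group stays on the old experience
    return b < rollout_pct

# Ramp up while monitoring metrics at each stage, ending at 99.5% launched
# with a 0.5% holdback.
users = [f"user-{i}" for i in range(10_000)]
for pct in (5, 15, 50, 99.5):
    enabled = sum(feature_on(u, pct, holdback_pct=0.5) for u in users)
    print(f"{pct}% rollout -> {enabled / len(users):.1%} of users enabled")
```

Because the bucket is a pure function of the user ID, each ramp stage is a superset of the previous one, which is what makes monitoring the 5% cohort over time meaningful.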
You have all these techniques to use. So basically, it's a matter of choosing which ideas you want to test, asking yourself which are the key assumptions, and then: what's the first step? Who should work on this step? What's next, etc.?
And that kind of brings us to the next layer, which is how to project manage this, how to test multiple ideas in parallel, how to allocate resources, how to stage it, etc.
[00:48:43] Project & Task Management
Henry Suryawirawan: Are there any specifics about project or task management? You mentioned agile project management. I guess most companies these days practice agile, whether it's fake Agile or some other kind of Agile. Based on your consulting, are there any tips, anything different that we should be aware of, about project management for the tasks layer?
Itamar Gilad: I would say that Agile in general is a very positive thing, especially compared to waterfall as I practiced it when I was a young engineer. So it brought a lot of advances. But over the years, Scrum especially became more and more strict, more structured, more and more full of process. And I think some of the things teams practice today, honestly, a lot of the founders of Agile are not super keen on. I will not name names, but it became a very process-heavy thing. And there are a lot of process people involved, with dedicated Scrum Masters and dedicated agile coaches just to drive the process forward. So it's becoming reminiscent of project management in the days of waterfall. Cause waterfall was such a heavyweight thing that we needed dedicated project managers to push it through.
I’m not trying to break up all this. And if it works for you, keep using it. What I try to do with GIST is to inter-operate with it in a way that doesn’t require the scheme that you’re using today to change very much. It might require some changes, but not a lot. So what I suggest teams do is use what I call the GIST board, which is basically a new process where you put on a board, it could be a physical board or a digital board, three columns.
One column is your goals, just the key results you committed to. Generally, for a product team of 10 engineers or less, I recommend no more than four key results per quarter. Even four is a lot, cause teams just cannot move that many key results. Those could be business, product, or user-facing key results. They could also be technical or design key results. I mean, if we need to cut down technical debt or close a bug backlog, those are very important goals as well. Don't kick them out of sight, cause the teams will do them without you knowing. You need to make a very conscious decision with your other leads, the designer and the engineering leads, about what the ratio is. And then you create the board around that. You pick the top ideas you want to start with, based on ICE or whatever method you like to use, and you put them next to the goals in the ideas column. Just the ones you're starting to work on right away.
Then, next to those, you put the steps. Not all the steps, just the ones you are planning to do in the next couple of weeks, maybe, cause that part will change a lot. The whole board has to be very dynamic, cause some ideas might turn out to be bad, and then you need to remove the entire idea, with all its steps, from the board and put another one in its place. Some steps complete. Some steps need to be redone. So it's a very dynamic thing. It's the job of the leads to keep managing the board, to keep it up to date. But I highly recommend meeting with the team at least every week or every other week to review the board and talk about the next steps.
And that’s a perfect kind of segue into the planification or the sprint planning or whatever cycle planning you’re using. Cause that brings them a lot of context. It reminds them what are the goals? What ideas are we pursuing? What are the steps? We’re not just coding. We’re not just trying to push stuff into production, mark it as done, and move on to the next ticket. We’re actually trying to complete an experiment or a delivery, and that’s tied to trying to validate an idea and that idea is tied to a goal. So a lot of engineers tell me this really was missing. Before that I didn’t know I was working in this vacuum. I didn’t know. No one actually told me. I didn’t understand.
Once they understand, a very nice side benefit, which I experienced a lot in Gmail, is that you don't have to tell them that much. They understand what needs to be done. They come up with much better ideas than you. Of everything I launched at Google, maybe 60% wasn't my idea at all. People were creative and came up with much better ideas than mine. And that's exactly what you want, cause then you have more free time to focus on researching the market, understanding the needs, et cetera, instead of trying to spoon-feed people exact requirements, which is enormously tiring. And it doesn't get you what you want at the end of the day, cause no matter how explicit your user stories are, they'll still get it wrong if they don't understand the context. So it's much better to give them the context.
So that’s how it works. And at the end of the day, once you have your GIST board and you have this column of steps, that turns into the backlog that feeds into the Scrum or Kanban process. You just need to prioritize it. And that’s exactly what they need.
The big change is that maybe mid-sprint there will be changes. Maybe mid-sprint we will realize that steps we thought were useful are not useful, because we learned new information. We need to scrap this plan. So for teams that are very, very strict about planning the entire scope and estimating it and not being willing to change, this could be a problem. You can either do shorter sprints, or you need to challenge the assumption that this strictness is actually a good thing and really helping. I would argue that being willing to change the scope of the sprint is really true to the nature of Agile. But I'm not the developer here, so it's easy for me to make this statement.
Henry Suryawirawan: And you are also the product manager, because sometimes the product manager wants to change the agenda as well. Many people refuse to do this in the middle of a sprint, but I think true agile will cater for that: either dropping previous items, or looking at the capacity to see whether the team can still accommodate it, if there’s a good impact, a good business outcome. So thanks for reminding us of that. And hearing what you explained about the GIST board, it seems like a very powerful technique. Very interesting. Because I can see it too: many development teams don’t understand the Big Why, the goals behind the tasks they’re doing. Even when adding a particular button, sometimes we don’t know how it ties back to the company goals. So thanks for explaining that as well.
[00:54:39] 3 Tech Lead Wisdom
Henry Suryawirawan: Yeah Itamar, thank you for this elaborate explanation about GIST. I really learned a lot, and I’m sure many listeners here would also learn from your sharing about GIST. Unfortunately, we have to wrap up the conversation because of the time. But I have one last question that I always ask from all my guests, which is to share what I call three technical leadership wisdom. I guess for your case, you can also do a three product leadership wisdom. I’ll leave it up to you which one that you wanna do. But can you share maybe some wisdom for us to learn from you?
Itamar Gilad: Alright. So tip number one is to recognize what type of company you work for and adjust your expectations and mode of operation accordingly. Marty Cagan gave a really good dichotomy of company types. The first treats the product organization, which they often call IT, as a delivery organization. You’re working in a delivery team: you’re basically there to do whatever the business tells you to do. They don’t trust you at all to have your own opinions or to know what needs to be done. Maybe that works for some types of people. Incidentally, these organizations tend to be the strictest embracers of heavyweight Scrum and SAFe, and so on. I don’t know why. Maybe because delivery is the name of the game, and Scrum today is so targeted at delivery.
If you are working in such an organization, a lot of what I just described doesn’t apply to you. It would not fly. Don’t expect your organization to mutate into this much more modern type of organization; you will be endlessly frustrated if you do. I don’t want to be too critical. Maybe this model works for certain industries or certain types of products. So understand the rules of the game, and within these very limited confines, try to be more creative or more evidence-guided.
The second type of team that Marty Cagan describes is a feature team, which is basically a feature factory. There’s a lot more autonomy to decide how to implement a feature and what to do inside it. They trust you on this, and maybe you invent the features too. But it’s still very output-focused: launch feature after feature after feature. You don’t challenge the feature much in the middle. You don’t do a lot of discovery. You basically decide through some sort of prioritization scheme what to build, and then you build it. There’s a strict focus on roadmaps, and a lot of time is spent on planning. That’s one of the key signs. I think there’s huge potential for improvement in moving from feature team to product team. So you really need to understand whether your company is ripe and ready to make this transition. If you can find allies, you can start bringing in some of these techniques.
The third type is a product organization, where they actually understand the importance of outcomes over output. They understand the importance of discovery, and they do it to some level. No organization is actually 100% pure, doing everything correctly and never opinion-driven. It’s impossible, and maybe we shouldn’t even strive for that. But there you can find the places where there’s still a lot of pain, where a lot of waste is still being created, and try to fix those. Move the organization to a slightly better place. I think GIST applies to these last two groups. And GIST, of course, is part of a much larger ecosystem of ideas.
So recognize where you are, what kind of organization it is, how much willingness they have to adopt these new things, and adjust your expectations and your work plan accordingly. If you want to be this change agent, of course.
Recommendation number two: on your voyage to change things, use evidence a lot. Here’s a thing I discovered. If you go into a battle of opinions with a person who is more influential or more senior than you, you will almost always lose, on principle, no matter how good your rationale or opinions are. But you can come to that same discussion with evidence. That person will say, “You know what’s the best thing in the world? We need to build this NFT or this, I don’t know, machine learning thing.” And you will say, “No, we just did this longitudinal study, and it shows that people really want an analytics dashboard. Look at the data.” That’s a more interesting discussion. You might get more positive results out of it. They might say, “Okay, but I want you to test my idea.” Fine, you have all these techniques to test ideas. But at least they’re giving you room to talk at their level, not just instructing you to follow their idea. So evidence is such a powerful thing.
I’ve seen very authoritative CEOs who used to come to startup development teams and tell them, build this, build that. I just taught the teams a few techniques, and then a junior PM came and challenged the CEO: “Here is a business model canvas that shows your latest idea will not work.” And the CEO was very pleased. It saved the company a lot of money. They didn’t need to pursue another wild idea just to discover months or years later that it doesn’t work.
So evidence is such a powerful thing. Don’t overestimate it, and don’t expect it to be magical; some people are still very opinionated. But learn to use it. That’s why the Confidence Meter is so popular: a lot of people were looking for a tool like that to have in their back pocket, so that when people try to push their ideas, you can show them that those ideas are based on really weak evidence and we shouldn’t have as much confidence in them as they’re expressing.
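A rough sketch of how ICE scoring can combine with an evidence-based confidence scale, in the spirit of the Confidence Meter discussed here. The evidence levels and point values below are illustrative placeholders I’ve made up for the example, not Itamar’s published numbers; the mechanism to note is that confidence is earned from evidence, not asserted.

```python
# Illustrative mapping from strongest evidence available to a
# confidence score (0-10). These categories and values are
# hypothetical placeholders, not the published Confidence Meter.
EVIDENCE_CONFIDENCE = {
    "self_conviction": 0.1,   # "I just know it's a great idea"
    "others_opinions": 0.5,
    "market_data": 3.0,
    "user_research": 5.0,
    "test_results": 8.0,      # e.g. an A/B test or fake-door test
    "launch_data": 10.0,
}

def ice_score(impact: float, evidence: str, ease: float) -> float:
    """ICE = Impact x Confidence x Ease, with confidence derived
    from the strongest evidence actually in hand (0-10 scales)."""
    confidence = EVIDENCE_CONFIDENCE[evidence]
    return impact * confidence * ease

# A hyped idea backed only by conviction loses to a modest idea
# backed by test results:
hype = ice_score(impact=9, evidence="self_conviction", ease=5)   # 4.5
tested = ice_score(impact=5, evidence="test_results", ease=6)    # 240.0
print(hype, tested)
```

This makes the dynamic from the anecdote above explicit: when someone pushes an idea, the question shifts from “who is more senior?” to “what evidence backs the confidence term?”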
Tip number three is to learn to let go. I had this challenge as a product manager for a very long time. I felt tremendous responsibility for the success of the product, and I wanted to instruct everyone, including designers and engineers, exactly what to do and how to do it, to a very high standard based on my criteria. That doesn’t work very well. First, they don’t like you for doing it. Second, it’s not the right way to work, because they’re the experts in their areas. Part of the reason I adopted GIST is that it enabled us to have a much more balanced discussion. And part of the problem is that a lot of the responsibility does lie on the shoulders of the PMs. If you’re an engineer in most companies and the product idea fails, that’s not your fault: you delivered the code, who cares? The product manager made the mistake, right? So change the dynamics and say, no, it’s our collective responsibility to meet the goals. If one of us fails, all of us fail. Then it’s much easier to delegate decisions to them, because they’re optimizing for the same thing as you. They’re not optimizing just for code quality or for implementing Scrum properly; they’re optimizing to achieve the company goals and the team goals. It’s much easier to let go with this model. You also need to realize what your personality type is. I’m a bit of a control freak, but it still really helped me when things went this way.
So, these are my three tips. Recognize the type of company you work for and adjust to it. Use evidence where you can. And learn to let go, learn to let your team members guide some of the decisions.
Henry Suryawirawan: Right. I guess the letting go part applies not just to product managers but to every role: engineers, testers, whoever. We can all use some letting go, so that we don’t let ego push us to pursue mostly our own ideas, but also listen to and build on others’. So thanks for sharing that wisdom.
So, Itamar, if people love your GIST framework, they wanna learn more or they wanna connect with you, is there a place where they can find you online?
Itamar Gilad: Yeah, there are a couple of things I suggest. One is to go to itamargilad.com/resources, where you’ll find all of these things. Some of them have downloadable templates, some are talks, some are just links to articles. All of my stuff is online and freely shared, and you can start using it. The GIST board template, for example, is there. The Confidence Meter is there too, as a spreadsheet. The other thing is that I regularly share new tools, new things, and new insights with the people who follow my newsletter, so I suggest subscribing at itamargilad.com/newsletter. I assume you’ll be putting these two links in the description.
Henry Suryawirawan: Certainly. I will put that in the show notes as well. So thank you so much Itamar for this crash course about GIST and product management. Thank you for sharing.
Itamar Gilad: My pleasure. Thank you for inviting me and I hope this will be useful for the listeners.
– End –