
Product Management Webinar: Product Roadmaps vs GIST Framework

Product Roadmaps vs GIST framework with Itamar Gilad

Watch this webinar with special guest Itamar Gilad, Product Coach, Author, and Speaker, and our host, Janna Bastow, CEO of ProdPad. We explore the pros and cons of two planning methods with their inventors each making the case for their preferred framework. On one side is the new GIST framework, whose creator argues it could in fact replace the traditional product roadmap. On the other is the creator of the Now-Next-Later product roadmap, which is fast replacing timeline roadmaps across the globe.

About Itamar Gilad

Itamar is a coach, author, and speaker specializing in product management, strategy, and growth. For over two decades, he held senior product management and engineering roles at Google, Microsoft, and a number of startups. At Google, Itamar led parts of Gmail and was the head of Gmail’s growth team (resulting in 1Bn MAUs).

Itamar publishes a popular product management newsletter and is the creator of a number of product management methodologies including the GIST Framework and The Confidence Meter.

Key Takeaways:

  • Whether the GIST framework can replace product roadmaps
  • How ICE scoring can be used for regular product ideas
  • The main building blocks of the GIST framework
  • The best prioritization process 
  • Why some roadmapping methods fail modern-day PMs

[00:00:00] Janna Bastow: Hello everybody. Welcome, come on in. You're joining the ProdPad Product Expert Series, a series of webinars that we run here. It's usually in the format of either a presentation or a fireside, like we're doing today, a fireside being a conversation between myself and the expert. It's always focused on delivering valuable expertise, content, and learning, and on facilitating the sharing of ideas. We wanna learn from the experts we're bringing in here today.

Now, before we jump in and I introduce my guest, I just wanted to talk a little bit about this tool that we've built. This is a tool that my co-founder Simon and I built when we were product managers ourselves. We essentially needed something to help us keep track of experiments and feedback and all the ideas, all the stuff in our backlog, our roadmaps, and everything. It was basically something that gave us control, organization, and transparency in the product management system we were building, and helped us create a single source of truth for our product decisions. So we started sharing it with other product people around us, and it's now used by thousands of teams around the world.

So it's actually something that you can use and try out for free. We have a completely free trial, and you don't need a credit card to get started. We even have a sandbox mode which you can use completely for free. It has example data in there, including lean roadmaps and OKRs and idea backlogs prefilled with all sorts of stuff, so you can get a feel for how this all fits together. And our team is made up of product people. It was founded by product people, we've got lots of product people within the team, so we live and breathe by your feedback. We'd love for you to jump in, give it a try, and then let us know what you think.

And there's one thing I wanted to call your attention to: a new feature that's in beta. As a product person, you can imagine how much we love getting new stuff out there and sharing it with you. It's a new confidence score that you can assign to the ideas in your backlog. For those of you who know ProdPad, you know that you can add an effort and impact score and then map ideas out on the priority chart, and you can see how recently your ideas were updated. You can now also see the confidence that you have in your ideas. Is this something that you just made up, that you've heard your CEO ask for once, or a customer quickly ask for? Or is it something you've got lots of research behind, where you've done a bunch of prototypes and done the legwork to prove that this is in fact the right thing to go work on? That confidence builds, and it gives you a better idea of which things should maybe go on the roadmap and which things need some more work before you push them forwards.

So, that's one of the latest features that we have available in ProdPad. It's in beta, but you can turn it on yourself if you get in there and start a trial. We'd love to show that off to you today. But there's a reason why I wanted to flag that one up today: we're actually going to be talking about confidence meters, because our special guest today is Itamar Gilad, who is the creator of the Confidence Meter, among other things.

So I actually know Itamar through the Mind the Product community. Those of you who know me know that I've been involved with [inaudible 00:06:21] to that, and I've been learning from his articles and his talks on stage for years. He's a coach, an author, and a speaker, and he specializes in product management, strategy, and growth. He has over 20 years of experience in senior product management and engineering roles at places like Google, Microsoft, and startups.

At Google, he led parts of Gmail and was the head of Gmail's growth team, resulting in one billion monthly active users. He publishes a popular product management newsletter, and has developed several product management methodologies, including the GIST framework, which we're gonna be talking about today, and the Confidence Meter, which we're gonna be talking about as well. So, a huge welcome, Itamar, thank you so much for taking the time to chat to us today. Everyone, jump in and say welcome to Itamar.

[00:03:28] Itamar Gilad: Thank you very much for inviting me. It’s a pleasure, looking forward to it.

[00:03:32] Janna Bastow: Absolutely. Great to have you here, and thanks for taking the time to chat and to talk us through this. So, tell us a little bit about your background. What have I missed from that high-level bio? How did you get into product management?

[00:03:42] Itamar Gilad: I started as an engineer back in the '90s, I'm pretty old, and somewhere in the early 2000s I started switching to the dark side and became a product manager. I kept doing this for about 15 years, as you said, in Israel where I'm from, initially in startups and scale-ups, but then I also worked in a couple of big internationals, Microsoft and Google. At Google I worked initially on YouTube, but then mostly on Gmail, and that was a pretty interesting set of challenges, obviously. A lot of what I'm gonna share with you is partly based on my experiences at Google and other companies, trying to summarize some of the learnings I picked up along the way, and a ton of mistakes, obviously.

And for the past six years, I've been coaching and working with many product teams, and writing about some of the things I learned, and that actually taught me an awful lot more as well. So, happy to be here and share.

[00:04:39] Janna Bastow: Certainly, thanks for sharing that. And, yeah, I can absolutely empathize with that, making a ton of mistakes. My first product jobs, and actually, frankly, all my product jobs, were a series of making mistakes, doing it the wrong way, and then trying to learn from them and improve over time.

[00:04:51] Itamar Gilad: Yep. It's pretty strange hearing all this praise, your whole background, being read back to you.

[00:04:56] Janna Bastow: [laughs]

[00:04:56] Itamar Gilad: But actually, in hindsight, I wish I'd been more humble about our abilities in the face of uncertainty, et cetera. So I'm here because I was lucky to work with good people, and I was lucky to learn from a lot of smart people before me.

[00:05:11] Janna Bastow: Yeah. And I think that's absolutely fair. One of the things that we do as we learn is write down the things that really work for us. And you've done a great job of laying a path for other people to follow behind you, writing down and sharing the way around some of the gotchas that you've fallen into.

[00:05:25] Itamar Gilad: Yeah, I try to publish every month, I have a newsletter. But let's maybe jump in and let people decide whether or not this is of any value to them, actually.

[00:05:34] Janna Bastow: Yeah, absolutely. So, one of the things that you're probably best known for is this GIST framework, G-I-S-T. So tell us about that.

[00:05:43] Itamar Gilad: So, I had a chance to work at Google and at other companies that were less good, and I've had a chance to work with a lot of different product organizations. One of the things I noticed is that the difference between the well-functioning organization, the one able to consistently create outcomes and impact, and the more, let's say, middle-of-the-road organization, is their ability to deal with these [inaudible 00:10:16] areas of setting goals, prioritizing ideas, experimenting and adjusting, and turning all of that into action with their Agile teams, with their stakeholders, and with their managers. And that's often where things come apart.

So I started thinking along these lines while I was at Google. I tried to reverse-engineer the Google secret sauce. And then as I left and started working with my clients, I started giving it to them piecemeal, and eventually it turned into a framework which I call GIST: Goals, Ideas, Steps, and Tasks. To be perfectly honest, it's not completely original, it's based on many smarter people who came before me, but it tries to package a lot of the thinking, a lot of the methods and methodologies that I found most useful, into one actionable framework. So, shall I jump into it, or how would you like to-

[00:07:08] Janna Bastow: Yeah.

[00:07:08] Itamar Gilad: … go?

[00:07:09] Janna Bastow: Yeah, I would love to hear how it's different from some of the other frameworks, how it's similar to them, and where it took inspiration from.

[00:07:18] Itamar Gilad: I think it's both similar and different. Let me just project my screen, and I'll walk you through this.

[00:07:26] Janna Bastow: Yeah, I think I’ve got to stop projecting mine, and I see you’ve got something in there. Oh, where is that button? All right.

[00:07:32] Itamar Gilad: All right.

[00:07:33] Janna Bastow: Now you should be able to share yours, get in there.

[00:07:35] Itamar Gilad: All right, let's go. Hopefully you guys are seeing this. So this is GIST: goals, ideas, steps, and tasks. The core idea is you need to focus on a very small subset of goals at company level, at mid-level, at team level, that are about outcomes, obviously, and about the things that are most important to you. What I tried to package in the goals layer is how to basically distill your [inaudible 00:12:22] through a set of… oh sorry, I'm going out of order there. Through what I call metrics trees. So basically you have your top-level metrics, [inaudible 00:12:33] the North Star, and the business KPIs. And from those, you start breaking them down into sub-metrics, sometimes called input metrics, and that creates a model for you to visualize how your company is growing, how you can improve, and where you are underperforming.

And I've found that it's much better to start from this perspective and then project the goals onto the org structure, rather than vice-versa. What a lot of companies do is start by assigning each team and each division a set of goals, and sometimes that maps onto the most important things, but sometimes it's just a projection of the org structure onto the world. I'd rather we look at this and say: the three most important things are, we need to reduce churn from 5% to 3% per month; we need to focus on this opportunity we discovered in emerging markets and the educational sector; and we need to improve the quality of our products and our infrastructure. These are the three things we need to achieve this year, and now let's see how this translates into sub-metrics and goals, and give those to the different teams. And at the same time, those teams need to generate local goals as well, 'cause they are close to the market, close to the product, close to the technology, so they know things that the managers sometimes don't know.

So this whole process of top-down, bottom-up is something that Google, at the time I was there, did pretty well, and I learned a lot from that, so that's one of the things I teach: how to traverse the hierarchy and merge the ideas and the goals of managers and team members. So that's the goals layer, very briefly: metrics and OKRs.
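To make the metrics-tree idea above a bit more concrete, here is a minimal Python sketch of a North Star metric broken down into input metrics, with a helper that surfaces where the model is underperforming. The metric names, values, and targets are invented for illustration; this is not Itamar's published tooling, just one way to represent the structure he describes.

```python
# A minimal sketch of a metrics tree: a North Star metric decomposed into
# input (sub-)metrics so you can see where you are underperforming.
# Names and numbers below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    current: float
    target: float
    children: list["Metric"] = field(default_factory=list)

    def gaps(self, path: str = ""):
        """Yield (path, shortfall) for every metric below its target."""
        label = f"{path}/{self.name}" if path else self.name
        if self.current < self.target:
            yield label, self.target - self.current
        for child in self.children:
            yield from child.gaps(label)

north_star = Metric("weekly_active_teams", 12_000, 15_000, children=[
    Metric("signups_per_week", 900, 1_000),
    Metric("activation_rate", 0.34, 0.45),
    Metric("monthly_retention", 0.78, 0.80),
])

for path, shortfall in north_star.gaps():
    print(f"{path}: short by {shortfall:g}")
```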

[00:09:57] Janna Bastow: Yep. Who sets these goals? Is this set by top-level management or is the team involved with it? How does that work in the ideal world?

[00:10:05] Itamar Gilad: I don't know that there is an ideal world. It's far from ideal; companies should choose what's pragmatic and what works for them. I'll give you an example from Google. The management of Gmail, which is one of the most important products at Google, at least in terms of usage, more than a billion users, would start out late in the previous quarter and set out a skeleton OKR, and say, "Here are the main objectives we want to focus on. This key result we already know we want to achieve." Sometimes they know which key result but not the target. Sometimes they know both the key result and the target, and sometimes neither; they just say, "This is the objective, suggest some key results to us." It's called leadership by intent. They explain the intent, explain what we are trying to achieve and why, and then they solicit the help of people in the different layers in filling up the gaps.

And sometimes things will come down to me, to my level, to my area of responsibility, and I would suggest some stuff, and that might propagate up to the level of management. And sometimes [inaudible 00:15:42] things that I proposed ended up in the company-level OKRs. Not very often, I wasn't that senior, but it's very rewarding when something like this happens. So basically, the managers are soliciting help in filling out the goals. At the same time, they're informing the people reporting to them what the most important things are. So this churn goal, for example, needs to translate into something more tangible, 'cause reducing churn can break into many different things. Maybe we need to build a special task force for this, maybe we need three teams working together. Or maybe five different teams each take a piece of it, one from the search perspective, one from, I don't know, the onboarding perspective, et cetera. So there are different ways to translate those goals into action. But the key part is that we need to keep having this continuous discussion about what the most important goals are, and why.

The problem with the ideas part, and I see this a lot, and I'm sure you've seen it too, is that at this stage people start talking about ideas already. They forget about the goals, everyone starts debating ideas, it's really sexy. And then what you get in the OKRs is actually the decisions about which ideas we're already going to do, and the focus shifts to the output, to the ideas, rather than to the key results, to what we need to achieve. So I'm a bit of a purist in this sense: I suggest not putting initiatives or any of these things in the OKRs. Keep the OKRs just about what you want to achieve this quarter or this year, or shorter if you're a startup, and try to push the ideas to the next level, which is the ideas layer.

 So, in the ideas layer, we’re basically doing prioritization. We’re trying to decide, out of the many ideas that we might be pursuing, which ones are the best?

[00:13:05] Janna Bastow: And are these like high-level ideas, or are these more granular, feature level type things?

[00:13:11] Itamar Gilad: It really depends on where you're having the discussion. But I usually suggest for managers, unless it's a really big company, to try to stop at the goals level, and it requires a strong leadership mentality, and allow the teams to actually collect the ideas from the stakeholders, from the managers, from the market, from the users and the customers, and try to prioritize them. And use an objective and transparent process to say, "We think those are the ideas that present the biggest opportunity at the moment, knowing what we know now." And those ideas still need testing first. You don't necessarily have to build them. So that just means that you need a prioritization system. Even if you do the discovery work up front on the goal, and you map out your problem space, and you know all about the users, and you know what you think the key problems are, and then you go and start creating ideas, other ideas will still come in from the side. You always need prioritization. There's no clean process that avoids prioritization that I'm aware of.

So it's not a bad word. I suggest ICE, but it's obviously not the only method. ICE stands for impact, confidence, and ease. There's a popular derivative called RICE: reach, impact, confidence, and effort. It's almost the same thing; I prefer to stick with ICE. Essentially, the I is about how much does this idea stand to contribute to the goals? And obviously you need clear goals for this, and not many; I suggest a maximum of four key results per team. The ease is how easy or hard it's going to be, which is usually the inverse of person-weeks, although if you're working in a marketing team, or a team where the scarcest resource is marketing dollars, it might be something else. And then there's the confidence, which is how sure are we, actually, that the impact and ease are for real? And you gave a very good explanation of what confidence is. The exercise here is to collect ideas, do this kind of quick scoring on all of them, and then do what you suggested: think about whether we want to pick the ideas that are highest impact, or the ones that are less risky, highest confidence, or maybe the easiest ones, or a combination of those.
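As a rough illustration of the scoring just described, here is a short Python sketch of ICE on a 0-10 scale per factor. The idea names and numbers are made up, and, as the conversation stresses later, the combined score is only a hint for what to test first, not a launch decision.

```python
# A rough sketch of ICE scoring (impact, confidence, ease, each 0-10).
# Ideas and numbers are invented for illustration.
ideas = [
    {"name": "Onboarding checklist", "impact": 6, "confidence": 3, "ease": 7},
    {"name": "Dark mode",            "impact": 2, "confidence": 5, "ease": 4},
    {"name": "Bulk import",          "impact": 7, "confidence": 1, "ease": 2},
]

for idea in ideas:
    idea["ice"] = idea["impact"] * idea["confidence"] * idea["ease"]

# Sort by the combined score, or by a single factor (e.g. confidence)
# when the combined score is too noisy to trust.
by_ice = sorted(ideas, key=lambda i: i["ice"], reverse=True)
by_confidence = sorted(ideas, key=lambda i: i["confidence"], reverse=True)

for idea in by_ice:
    print(f'{idea["name"]}: ICE = {idea["ice"]}')
```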

[00:15:36] Janna Bastow: Yeah. And somebody actually asked a question, why do you not like to use reach?

[00:15:42] Itamar Gilad: Reach? Why don’t I like to use reach? Explain reach to me. What is it?

[00:15:48] Janna Bastow: That's a really good question. I've heard it explained, and I might be wrong because I don't actually use reach either, but I've heard impact being how deeply it impacts each individual customer, does it cause this much pain for them, versus reach, which is how many customers it reaches. Now, I may be interpreting that wrong, maybe other people have their own interpretation. I'm curious about yours as well.

[00:16:09] Itamar Gilad: I'm sorry, I blanked out for a second. I thought reach was a completely different [inaudible 00:21:19]-

[00:16:13] Janna Bastow: Oh.

[00:16:13] Itamar Gilad: … system. You mean the r- the R part in RICE.

[00:16:15] Janna Bastow: R in rice, yeah.

[00:16:17] Itamar Gilad: Yeah I, your explanation was bang-on, that’s exactly what it means, yeah.

[00:16:20] Janna Bastow: Cool, all right. Excellent. [laughs]

[00:16:21] Itamar Gilad: So think of a few examples. Maybe you have an idea that impacts a few power users, a really small subset, but they're very important for your business. In ecommerce, for example, a lot of the time a lot of the revenue comes from a very small set of buyers. In usage, there's usually a Pareto distribution. So you might come up with ideas that touch on those people, where the reach is pretty low, but with these people it's actually very important to be successful. So that's just one example where the R doesn't always make sense, where it doesn't make sense to start breaking things into reach.

I think people love RICE because it gives them a more systematic way to calculate impact. Impact is the hardest part to calculate in the ICE score. And in my workshops, we really work on trying to find other ways besides just banking on reach.

[00:17:13] Janna Bastow: Yeah.

[00:17:14] Itamar Gilad: But it is an important factor in some of these [inaudible 00:22:31] questions. Sometimes you do need to factor in reach, but it’s just part of the I. That’s my take on it.

[00:17:24] Janna Bastow: Yeah. No, I don't personally use reach. I've tried to group it in with impact as a more holistic number. But one of the things, and maybe this is controversial, is I actually don't use RICE or ICE to its fullest extent, and for a reason. Because I love the idea of tracking and gathering information on what the impact is, and gathering information on things like the confidence and the ease or effort. But I don't like multiplying them together. Maybe you could speak to this. Because what I find is, if you take an assumption, and you multiply it by an assumption, and multiply it by another assumption, you get a bigger assumption. Yeah.

[00:17:59] Itamar Gilad: Absolutely.

[00:18:00] Janna Bastow: And what happens is, if the seven is a guess, and the two is a guess, and the five is a guess, by the time you're done you've got a whole bunch of guesses, which might give you some information. But then what happens is people tend to use the final scores as the word of God, right? The computer has spoken, and now we do this one first, and they absolve themselves from the decision-making of saying, "Actually, we know we have to do this other thing first," because the law says so, or because we've got this thing with a drop-dead deadline, or because we know that this is actually the bigger problem, even though the score doesn't really take that into account.

[00:18:36] Itamar Gilad: Yeah, I'm totally with you, and I think that's a very good point. There's a lot of noise, a lot of variability in all of these numbers, especially when you're in the early stages of validating an idea. So in the multiplication of the three, there's a tremendous amount of noise. I would argue that as long as confidence, and I'll explain exactly what that means in a second, is low, you shouldn't even look at the ICE score too much. It's completely noisy. And even when you do have ICE scores, they're a hint; they're suggesting, "These are the first ones to test." But you should never just go ahead and launch something based on, "That's the highest-scoring idea." That's a recipe for disaster.

And as we mentioned, sometimes the better thing is to just sort by impact, or sort by ease, or sort by confidence, to get that working set of ideas you want to pursue first. So not necessarily based on the high score. The high score is sometimes just there to defend you, 'cause sometimes people might come in and push their idea over others. So you do the calculation with them and you show that the total score is pretty low, and that helps you balance opinions in a more objective way. But you need to leave a lot of room for judgment. If you see an idea that you do like, you should allow yourself to pick it, even if it scores very low.
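To illustrate the point about noise compounding when guesses are multiplied, here is a small, illustrative simulation in Python. The "true" values and the size of the noise are arbitrary assumptions; the only point is that the product of three noisy estimates varies far more than any single one.

```python
# Illustrative only: multiplying three noisy guesses yields a much noisier product.
import random

random.seed(1)
TRUE_I, TRUE_C, TRUE_E = 6.0, 4.0, 5.0
true_score = TRUE_I * TRUE_C * TRUE_E

def guess(true_value: float, spread: float = 2.0) -> float:
    """A scorer's estimate: the true value plus uniform noise, floored at 0."""
    return max(0.0, true_value + random.uniform(-spread, spread))

scores = [guess(TRUE_I) * guess(TRUE_C) * guess(TRUE_E) for _ in range(10_000)]
print(f"true score: {true_score:.0f}, simulated range: {min(scores):.0f}..{max(scores):.0f}")
```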

[00:19:58] Janna Bastow: [laughs] That's really good guidance. And actually, the guidance around: if the confidence is low, then don't use it as a multiplier, or ignore it as a multiplier. Because that's essentially, as I think you're gonna be talking about, the importance of confidence: do you actually know about this thing or not? 'Cause otherwise, often [inaudible 00:25:57], we're just making these numbers up, we've kinda gone, "Yeah, I think it's important," or, "Yeah, sure, this should be easy." And you don't know until you've really dug into it.

I think what grinds my gears is when people come up with these scores and then spend forever adjusting the scores, or adjusting the algorithm and creating weights for it, to make the score reflect what they wanted it to say in the first place. And it's like, "Hey, just build what you know is the right thing anyway, 'cause there's a reason we have humans doing this stuff," product managers making human decisions. That's your job, to make the decision; it's not the job of your algorithm to do that.

[00:20:49] Itamar Gilad: Yeah, totally. And there’s no algorithm, honestly. It’s human judgment.

[00:20:52] Janna Bastow: Yeah.

[00:20:53] Itamar Gilad: The challenge is we tend to fall in love with our ideas. If you don't do this sort of analysis, if you don't stop people and say, "Okay, what is the impact on the goals? How easy is it going to be, and where is your confidence in this idea coming from?", they tend to get carried away, fall in love with their ideas, and push for them. And some of these people you cannot say no to, or they're more powerful than you.

[00:21:16] Janna Bastow: Yeah.

[00:21:17] Itamar Gilad: So I see ICE as the great equalizer. It forces everyone to say, "Okay, these are the rules of the game, let's first do this analysis." Sometimes that cuts the discussion by an order of magnitude. I've noticed debates about which idea is best that would rage on for hours and days and weeks cut down to minutes or half an hour.

[00:21:39] Janna Bastow: Yeah.

[00:21:39] Itamar Gilad: And people come out smiling, happy, and feeling that they came to a better decision.

[00:21:44] Janna Bastow: Yep. And Oliver asked a pretty practical question here: what scale do you use to define these numbers? Is it one to five, is it a Fibonacci sequence, one to 10? That sort of thing.

[00:21:52] Itamar Gilad: I tend to use the classic that Sean Ellis came up with, which is zero to 10 for each. But if you prefer t-shirt sizes you can use zero to five, and just assign extra small, small, medium, et cetera. Just be consistent across your ideas, that's the trick. For impact especially, what's a 10? Think of the maximum you can imagine: you have a metric you want to move; what's the maximum you think one idea can possibly move it? Best case scenario, that's a 10. And then work your way backwards. What's a five, and what's… And don't forget about zero. 'Cause unfortunately, most of our ideas actually will not work, so there are a lot of ideas that actually belong at zero, and sometimes in the negative.

And it's a Pareto as well, as far as I know. Very few ideas actually ever reach eight, nine, and 10. If you find one of those, you're extremely lucky, which most people don't seem to realize, 'cause you see people very generously giving themselves eight, nine, and 10 for their ideas.

[00:22:54] Janna Bastow: Yeah.

[00:22:54] Itamar Gilad: Sounds high, but [inaudible 00:28:55], it’s a nine.

[00:22:57] Janna Bastow: Great idea. [laughs]

[00:22:58] Itamar Gilad: Yeah. So I think you should tone it down and accept that most of your ideas are middle of the road or just incremental, and really keep those high numbers for the big ones.

[00:23:11] Janna Bastow: That makes a lot of sense. Now, you mentioned ideas that are zero, which I get, I guess they can have no impact. But you also mentioned negative numbers for ideas. What does that mean?

[00:23:19] Itamar Gilad: So I don't really recommend doing this, and I suggest putting a zero if you realize it's going to have negative impact. But if you've worked in the industry for a while, you know this: sometimes you launch stuff and you realize it was actually a net negative, because it annoys users, it actually moves the metrics in the wrong direction.

This stuff happens. If you test the idea thoroughly, you're somewhat protected, but sometimes surprises happen. Think of the Galaxy Note 7 that was exploding in people's hands when they went on airplanes. That's a true negative.

[00:23:57] Janna Bastow: Negative, yep. That’s a good point.

[00:24:00] Itamar Gilad: Yeah.

[00:24:00] Janna Bastow: [laughs]

[00:24:00] Itamar Gilad: So, be aware that this is possible too. Even if you really love your idea, it might turn out to be a negative one.

[00:24:06] Janna Bastow: Yep. All right, that’s good advice, thank you.

[00:24:10] Itamar Gilad: So, shall I explain about confidence a little bit more?

[00:24:13] Janna Bastow: Yeah, I would love to dive into that, thanks.

[00:24:16] Itamar Gilad: All right, confidence is one of those terms where it's really easy to say, "Yeah, I have high confidence in this idea because I'm a smart person, I'm experienced, and here's a completely logical explanation that will tell you why this idea is great." In order to defend against this, I found that I needed to create weights. This is, again, something that came from my experiences on Gmail, 'cause people at Google can be quite opinionated, and these are very smart and very articulate people. So I created this tool. It works a little bit like a thermometer: it goes from very low confidence, in the area of zero, all the way to very high confidence, in the area of 10, and that fits the ICE score as well. But again, if you prefer zero to five, just change the weights.

And this dark blue area, this quadrant, is about opinions. It could be your own opinion or conviction: you think it's a great idea. Congratulations, every terrible idea out there, someone thought it was great. So you get 0.01 out of 10 for that. You created a shiny pitch that explains why it's the best idea? 0.03. You connected it with some thematic support, the strategy of the company perfectly aligns with it, or it's about, you know, web3 and the metaverse.

[00:25:32] Janna Bastow: We put it on the blockchain, it’s gonna be awesome.

[00:25:34] Itamar Gilad: Exactly. We added generative machine learning and AI, so that absolutely makes it a great idea. Not necessarily, because right now, as we speak, there are thousands of doomed projects built on these themes, and we just need to look back in history, not very far, to see how some of these trends didn't really pan out.

The lighter blue area is about trying to validate the idea while still staying in the building. So you can review it with other people and see what they say. This could be your colleagues, your managers, experts, stakeholders. They're giving you opinions, and often this is a harder test, because they might find flaws in the idea you didn't see. But on the flip side, psychologists have found a lot of group dynamics that actually make groups choose worse ideas than individuals. One of them is groupthink; another one is politics. Sometimes people kill good ideas or choose bad ideas due to politics. Not in your organizations, obviously, in other organizations, but it does happen sometimes. So that doesn't give you a lot of confidence either.

Estimates and plans is where you try, at least on paper, to see if the idea makes sense. You can do a business model canvas if it's a big idea, or some sort of back-of-the-envelope calculation, just trying to map the funnel: how many people will see it, how many will try it, how many will convert, et cetera. And you'd be surprised how much better you can get your impact assessment based on that. And you can do the same for the estimate: you can break it into its parts and see how it adds up.

And some ideas will die on paper. Just by doing those exercises, you realize they're not as strong as you think. It's perfectly fine to park them there. But even if it still looks great on paper, that's just a bunch of educated guesses. You still haven't found outside data, which is this quadrant here, in the pinkish area. This data can be anecdotal: it can come from user interviews, from your own data, from customer requests, though sporadic ones, or from competitive analysis. Maybe one competitor has it.

A typical case I see is an idea that the company likes, or at least influential people in the company like, and the leading competitor has it, and that's it. Validation done. This is a good idea, let's just go ahead and build it. And that almost never pans out. That competitor may have a different perspective, or they may have launched an idea that is actually a dud. You don't know; you cannot trust their judgment to replace yours. You need to test.

And then the next level is getting more data, larger datasets from the market. So this could be through surveys, through smoke tests, [inaudible 00:34:57] tests, very deep competitive analysis, especially trying to understand what problems or what user needs your competitors are trying to address. So that gives you slightly more confidence.

So for many ideas that are very low-risk and very low effort, you can stop there. If it still looks good and your judgment guides you that way, don't go crazy, you can launch it. But if there is more than minor risk, I suggest testing it. And the amount of testing depends, again, on the risk and the size of the idea. We have various types of tests, from usability tests, moving on to longitudinal user studies, and eventually even to A/B tests, whatever it is that you can do. Early adopters, this whole gamut of things; I have a slide later if you guys want to see the gamut. And these things give you medium and high confidence. Usually you can stop there with most ideas and switch into delivery.

But the nice thing is, as you validate, you're actually building, so you're not wasting time. You actually have some working code, and you have a much better understanding of what you need to build, so it's not a bad thing to strive for higher confidence. And that really helps a lot with these internal debates and opinions. It's not fighting opinions, it's supercharging opinions with a grain of evidence. And this is a tool that's actually used by quite a few companies today, exactly for this reason, I think.
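Here is a loose Python sketch of the Confidence Meter idea just described, mapping classes of evidence to a 0-10 score. Only the 0.01 and 0.03 weights are quoted in the conversation; every other number, and the rule of taking the strongest class of evidence gathered so far, is a placeholder assumption for illustration, not Itamar's published scale.

```python
# A loose sketch of a confidence ladder on a 0-10 scale.
# Only 0.01 and 0.03 come from the conversation; the rest are placeholders.
CONFIDENCE_WEIGHTS = {
    "self_conviction":      0.01,
    "pitch_deck":           0.03,
    "thematic_support":     0.1,   # placeholder value
    "reviews_and_opinions": 0.3,   # placeholder value
    "estimates_and_plans":  0.5,   # placeholder value
    "anecdotal_evidence":   1.0,   # placeholder value
    "market_data":          2.0,   # placeholder value
    "user_or_ab_tests":     6.0,   # placeholder value
    "launch_results":       10.0,  # placeholder value
}

def confidence(evidence: list[str]) -> float:
    """One possible reading: take the strongest class of evidence, capped at 10."""
    return min(10.0, max(CONFIDENCE_WEIGHTS.get(e, 0.0) for e in evidence))

print(confidence(["self_conviction", "pitch_deck"]))      # still close to zero
print(confidence(["anecdotal_evidence", "market_data"]))  # getting warmer
```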

[00:29:54] Janna Bastow: Yeah, absolutely, we recommend this too. A lot of ProdPad customers ask about how to use the confidence meter that we built in. And I love how, [laughs] frankly, vicious this thing is. Because you have to get halfway through before you're even at the point where you're saying, "Okay, we're talking low confidence here," and three quarters of the way through before you're even saying, "Okay, yeah, this might be a goer."

[00:30:14] Itamar Gilad: Yeah.

[00:30:15] Janna Bastow: And I think it's important to raise our standards of what we think is an acceptable idea or not, so we stop building so much junk, right? We've gotta stop spending our time building stuff that isn't the right stuff to do, because we can actually spend that time doing the interviews, doing the surveys, doing the smoke tests, doing some of these steps to prove or disprove the idea long before we actually write any code.

[00:30:38] Itamar Gilad: Yeah, absolutely. And it's important, 'cause some people who don't come from product need some visualization of the risk that they are taking by pushing the team to just launch an idea.

[00:30:51] Janna Bastow: Yeah. What this is actually really powerful for is that, as a product manager, you have to say no a lot. And sometimes you have to say no to yourself as well. This can be a really powerful tool if somebody comes up to you and says, "Hey, I've got this great idea," and they're in love with their idea; everyone loves their ideas. I love my ideas, and sometimes I love other people's ideas. But this is a great tool to take the onus off you from saying no. It's helping you say no, it's the system saying, "Hey, hold back, don't say yes to this, because it's only a good idea because it's a trend right now, right? Everyone's using ChatGPT," or, "Yeah, the back of the envelope kinda showed it might work, but we don't actually have any proof that this is the right thing to do yet, so let's keep going, and here are the steps to get to that stage."

[00:31:32] Itamar Gilad: Right. 

[00:31:32] Janna Bastow: And it just helps bring people along with that, so it doesn't look like you, the evil product manager, saying, "Nope, not that one either." [laughs] It helps bring them along on that journey, I think.

[00:31:42] Itamar Gilad: Yeah, avoid the black-box product management that so many stakeholders dislike, and justifiably so. "The product manager just decided your idea is bad" doesn't work well; those stakeholders are much better at escalating to managers than you are, so it usually doesn't pan out well for you either. I suggest generally not saying no. When someone comes to you with an idea, say, "Thank you. We're going to evaluate it." Put it in your idea bank, triage it later, score impact and confidence, then communicate back to that person and say, "Look, we think it's pretty low impact, and here's our explanation. There's not a lot of evidence, it's going to be that much work, and here are some other ideas we have that are trying to achieve the same thing." Help that person understand the reasoning. It has to be transparent and objective.

[00:32:30] Janna Bastow: Yeah. And that's exactly why I built ProdPad. People were coming to me and thinking that I was this black box where their ideas would go to die. I built it so that I had some transparency in the system, so they could see, "Yeah, that's a cool idea, but here are all the other ideas, and here are the ones that line up with this, and here are the ones that we've put some thought into." And yeah, it just helped me say no in more ways than I could manage as, frankly, quite a junior product manager who didn't have it all figured out back then.

[00:32:55] Itamar Gilad: Yeah, ProdPad is basically systemizing this whole approach, I would say.

[00:33:00] Janna Bastow: Yeah. So thanks to folks like you for providing insights like this that we can build into it, so really helpful.

[00:33:06] Itamar Gilad: Oh, you’re being humble, you are a thought leader yourself, and you actually contributed [inaudible 00:40:16]-

[00:33:11] Janna Bastow: No, you are. Oh, we gotta stop this.

[00:33:13] Itamar Gilad: [laughs]

[00:33:13] Janna Bastow: So you did say a moment ago that you had another slide that talked about the gamut of experiments and stuff. Do you wanna talk us through that? [laughs]

[00:33:19] Itamar Gilad: Yeah, I'll go through this one real quick, and then I'll go through the gamut. So we're moving on to the steps layer. We covered the goals layer, we talked about ideas, prioritization, ICE. Now we're going to build this thing, but there are two options. One: we like the idea, it looks great on our ICE score, we're just gonna build it, it's three months, and then we'll see what happened. And we can always launch and iterate, right?

So a couple of things happen. It's not three months, it's six months. That's just reality. And during those months, you're not actually learning anything new, 'cause you're very focused on delivery. And then you launch this thing and, surprise, it's actually one of the majority of ideas that aren't good, and no matter how much you iterate on it, it will never work, essentially. And I've made this mistake many times.

So instead, of course, what we want to do is start with that same idea and build it, but through a series of what we call learning milestones. Basically, those are experiments, if you like, where we build something, put it in front of users, measure the result, and then make a conscious decision about what to do next. Sometimes it means dumping the idea, and that's a very good result too. And sometimes it means pivoting it, which is why the arrow changes direction. And an idea that survives this whole process comes out better. First, you're moving on the scope front: you're building something more and more advanced with each of those milestones. And you're also learning, so what you launch will be much more profound compared to the idea you started with. So it's both a way to filter out the bad ideas and a way to improve on the good ones.

Now, the industry calls a lot of this experiments, but I feel that experiment is not quite the right word, so I opted for steps. Inventing your own terminology is usually a bad idea, but I just felt that experiments didn't quite cut it, plus Steps fitted GIST much better than if it was an E. But I'll show you now the kinds of validation steps you might take here, and not all of them are [inaudible 00:42:42] steps.

So people tend to think that experimentation means building a very heavyweight A/B experimentation platform, or carrying out a thousand user interviews, or doing really heavyweight stuff. And that's true, we need to do some of that. But there are many other ways to validate ideas; we touched on some of those in the confidence meter. In order to explain this, I created this visualization, and I like to break it into five buckets: assessment, fact-finding, tests, experiments, and release results. Catchy acronym: AFTER. So in assessment, there are things like: you have an idea in your hand, and you ask, "Which goal does this actually align to? Is it on any of our quarterly or yearly goals?" If the answer is no, either you park it for now, or you change the goals. That's a really easy and quick validation technique. Business modeling, we talked a little bit about this. ICE analysis. Assumption mapping is a very powerful technique I highly recommend everyone learn, from David J. Bland. And finally, what are the risks or the assumptions in your idea? And stakeholder reviews, but usually one on one; don't bring them all together into a stakeholder committee, 'cause they will gang up on you.

So those are really cheap and fairly fast ways to just see if the idea pans out. Fact-finding is about diving into your data, or running surveys, or doing competitive analysis, or user interviews, or field research. Some of these are not as cheap, and some of them you need to do on an ongoing basis, but all of them give you data, and that's the next area in the confidence meter.

If the idea still looks good, you're going to start building it and testing. But the first versions are completely fake; you fake it before you make it. So this could be a smoke test, or fake-door test, where you put a button in the UI, but when people click on it you say, "We're not quite ready yet, do you want us to notify you when it's ready?" That's a very classic way to do this. Wizard of Oz, where you put a human behind the scenes during the usability test to simulate the system. A [inaudible 00:45:05] test, where you actually build a functioning service, but it's completely human-operated. Usability tests, you guys know about those. These are much cheaper ways to validate the idea at a higher level without having to commit too much engineering resource.

Then you go into mid-level tests, where you are building early versions that are rough, not complete, not polished, not scalable, but good enough to start giving to people. Sometimes these are external people, sometimes it's your own team. This is what we called fishfood at Google. And then, if you still feel good about the idea and it's big enough to justify heavyweight testing, you might do a beta, launch it as a lab that people can turn on in settings, or, if it's for developers or an operating system, you can do a preview. Or you can do dogfood, where your entire company, or a large part of it, is actually testing it. Very popular in Silicon Valley. When I joined Microsoft, the first thing I noticed was that Outlook was very buggy, and I asked people, "What's going on?" They said, "Oh, that's because we're dogfooding the next version of Outlook that will come out a year and a half from now." That's the mentality: every opportunity to test is a good opportunity.

Then, what I consider experiments is what statisticians like to call experiments, which is a test with a control group. So A/B tests, A/B/n tests, multivariate tests, et cetera. And even in the launch phase, when you've built the whole thing and you're ready for delivery, launch it gradually. Stop at 30% and see what happens. Then stop at 99.5% and do a holdback experiment, and see for a few weeks how that 0.5% differs from the rest. And even after you launch, keep tracking it. Keep looking at whether the results you expect materialize and there are no negative side effects. Sometimes you'll need to roll it back. So use every opportunity to learn before, during, and after development. So that's the model.
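To keep the AFTER model above at hand, here is a compact Python summary of the five buckets with example techniques mentioned in the conversation. The grouping is a paraphrase of the talk for quick reference, not an official taxonomy.

```python
# The AFTER buckets described above, with example techniques from the talk.
AFTER = {
    "Assessment":      ["goal alignment check", "business modeling", "ICE analysis",
                        "assumption mapping", "one-on-one stakeholder reviews"],
    "Fact-finding":    ["data deep dives", "surveys", "competitive analysis",
                        "user interviews", "field research"],
    "Tests":           ["smoke / fake-door tests", "Wizard of Oz", "usability tests",
                        "fishfood with your own team"],
    "Experiments":     ["A/B and A/B/n tests", "multivariate tests",
                        "gradual rollouts with a holdback group"],
    "Release results": ["post-launch metric tracking", "rollback if negative"],
}

for bucket, techniques in AFTER.items():
    print(f"{bucket}: {', '.join(techniques)}")
```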

[00:39:44] Janna Bastow: Awesome. [laughs] Excellent. So with all these different types of validation, there’s almost no reason why we can’t pick a few and validate our ideas before they go live.

[00:39:56] Itamar Gilad: Absolutely, I’m, there’s no excuse, honestly.

[00:39:58] Janna Bastow: Excellent. And I love the difference between fishfooding and dogfooding; I've never really used that term, fishfooding. And you said the difference is that fishfooding is when your team uses it, and dogfooding is when the whole company uses it?

[00:40:10] Itamar Gilad: Yeah, at Google, in Gmail, every time we had a new feature come out, we had the team working on it actually use it for a while.

[00:40:17] Janna Bastow: Yeah.

[00:40:18] Itamar Gilad: And that teaches you an awful lot. This was true for a lot of very successful products, like the iPhone. The first users of the iPhone were the internal team. It was a big secret, no one else at Apple was allowed to see it, but they started using it themselves.

[00:40:31] Janna Bastow: Excellent.

[00:40:32] Itamar Gilad: Yeah.

[00:40:32] Janna Bastow: Very cool. And you talked about assumption mapping, and David Bland, David J. Bland, I've actually got him coming on this webinar in May. He's gonna be our guest coming up soon.

[00:40:42] Itamar Gilad: Yeah he, he’s a great guy. And very smart, so this should be fun.

[00:40:46] Janna Bastow: So I’ll definitely ask him about that and refer back to this.

[00:40:49] Itamar Gilad: Cool, awesome.

[00:40:51] Janna Bastow: Excellent. Somebody actually asked one of the questions here: are there any guidelines as to when to use fishfooding or dogfooding?

[00:40:57] Itamar Gilad: I don't know that there are hard and fast rules for any of those. Fishfooding needs to make sense for the people building it. If you're building something for, I don't know, medical professionals in the operating theater, there's no point in either fishfooding or dogfooding; you're not the target audience. But if the people working on the team are actually in the target audience, absolutely, have them use it first. Even if they're not in the target audience, you can do what's called a [inaudible 00:49:14]: you just ask them to complete a set of tasks, to actually use the software they developed to complete the set of tasks that the end user will need to do. And sometimes that really brings home how good or bad the experience is, and it really helps them identify issues. I didn't put it there, but the [inaudible 00:49:32] is another validation technique.

[00:41:45] Janna Bastow: Excellent, okay. So how does this fit within GIST? Like, would these be considered part of the steps? Would you be saying, "We're in the discovery part of it, and therefore the step we're gonna take is a usability test"?

[00:41:59] Itamar Gilad: Yeah. So managing the whole project is actually connected to the task layer. How do we put all of this into action?

[00:42:05] Janna Bastow: Yeah.

[00:42:05] Itamar Gilad: So I think this will answer your question.

All right, so the problem I see in a lot of organizations is that the world is split into two. There are the planners, the people who think about the strategy, the roadmaps, the projects, the business goals; they think in terms of quarters and business results. And then there are the people who are focused on execution: the designers, the engineers, the developers, the QA. They're thinking in the very short term: sprints, tickets. And there's this gap between these two worlds. They don't understand each other, and they often don't trust each other. The plans that come from the top, the execution people are very skeptical about them, justifiably so sometimes. And the people at the top don't understand what all this iteration thing is. "We're agile, but nothing is coming out at the end. When are you going to give us that feature we asked for?"

And the product manager is sandwiched in the middle. That person, sometimes called the product owner now, especially in Europe, is supposed to make all this work: supposed to deliver on the roadmap, supposed to feed the Agile machine, and supposed to create perfectly [inaudible 00:51:17] product backlogs, perfectly curated user stories and epics. And a lot of product managers are not happy, because they're perpetuating a dysfunctional system, to be honest. Have you seen this model working in any companies, or have you encountered this?

[00:43:27] Janna Bastow: Yeah, definitely seen this split of the two worlds, and I think a lot of companies are trying to find the way to make the two jive together.

[00:43:37] Itamar Gilad: Yeah. So this is really where companies like Google differ, at least Google at the time I was there.

[00:43:44] Janna Bastow: Yeah.

[00:43:44] Itamar Gilad: It wasn't split this way, and that's the mentality I'm trying to bring in. So what I suggest teams do is: keep your Agile rituals, everything, whether it's Kanban or Scrum, perfectly fine. But find some middle ground between the roadmap, or the high-level plan, and the sprints that are so here-and-now. And I suggest something like the GIST board, for example. It's basically the top three layers of GIST: here are the key results from our goals.

Maybe we need to bring down the average onboarding time; right now it's 5.5. Maybe activation rates need to go up, and also some technical goals, 'cause teams need to work on those things too, and design goals. So it's not just product goals. And then the ideas we're working on now. We might have more ideas in the idea bank, but this is the snapshot we chose, through whatever prioritization process, to work on. And for each one of these ideas, what are the next few steps? So it's a very dynamic map of the work, and we're using it to project-manage the work, sorry for using this antiquated term, but I think it's something that's really missing.

So the way it works: the leads of the team, the PM, the eng lead, the UX designer, create the board at the beginning of the quarter, review it with the team, and change it based on feedback. And then we meet around the board every week or two and review the changes. It's good to make a lot of the changes offline, rather than forcing everyone to sit through every one. But it's really important to have this discussion. First off, a reminder: these are the goals. We're not just coding here, we're trying to achieve something.
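As a loose illustration of the board described above, here is a minimal Python sketch with a handful of key results, the ideas currently being worked on, and the next few steps per idea. All of the content (names, numbers, steps) is invented for illustration; the shape is simply one way to represent the three layers.

```python
# A loose sketch of a GIST board: key results, current ideas, and next steps.
# All content below is invented for illustration.
gist_board = {
    "goals": [
        {"key_result": "Average onboarding time", "current": 5.5, "target": 2.0},
        {"key_result": "Activation rate",         "current": 0.31, "target": 0.40},
    ],
    "ideas": [
        {
            "name": "Guided setup wizard",
            "ice": {"impact": 6, "confidence": 2, "ease": 5},
            "steps": ["Assumption mapping", "Fake-door test", "Usability test"],
        },
        {
            "name": "Template gallery",
            "ice": {"impact": 4, "confidence": 4, "ease": 7},
            "steps": ["Survey existing users", "Pilot with 5 accounts"],
        },
    ],
}

# A weekly review might simply walk the board and surface the next step per idea.
for idea in gist_board["ideas"]:
    print(f'{idea["name"]}: next step -> {idea["steps"][0]}')
```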

[00:45:24] Janna Bastow: Yep.

[00:45:24] Itamar Gilad: Second is the ideas. Remember that idea that was on the board last time? It's gone. It turned out to be bad, we replaced it, here's a new one with some steps. And let's talk about the steps: the most critical steps, are they progressing? Can we shift resources? Really talk. And a lot of team members tell me that this really helps bring a lot of context to them. They work with a lot of understanding of what's actually required. And that context takes away a lot of the need to create these perfectly curated backlogs and user stories.

'Cause, and that's the experience I had at Google, when people know what they're supposed to achieve, you don't need to tell them what to do. They actually tell you. And it's really rewarding when they come and show you something and say, "Hey, I thought this would work better. What do you think?" You're like, respect. If you disagree you can push back, but in general 60% of what I ever launched at Google was not my idea at all, and it wasn't in any of my specs or anything like that. So that's the way you want to work. [inaudible 00:54:50]-

[00:46:26] Janna Bastow: That's actually really powerful product management, if most of the things aren't your idea. It's not your job to have all the ideas; it's your job to ask the best questions, to coordinate, to pull in the best insights from the people on your team, and to create a system by which the best ideas can be surfaced. So it sounds like you found a system by which that happens. This is really beautiful, by the way. I really like how it pulls together all those different elements, the goals, the ideas, the steps, the work, all into one place. Everyone can see what's going on.

[00:46:53] Itamar Gilad: Yeah, it's also a useful snapshot for the managers, the leaders, the stakeholders who are generally interested in what's happening with the team.

[00:47:01] Janna Bastow: Yeah.

[00:47:02] Itamar Gilad: The sprint status doesn't tell them much, but if you can share this in a biweekly email, say, and tell them, "This idea we tested, those were the results, we decided it's not good enough," they can push back as well. They can come in and say, "I disagree." It's good to have this discussion at that level. But they don't have to steer you. They understand that you're moving of your own accord in the direction of business results. And that builds trust in you and your team.

[00:47:28] Janna Bastow: Yeah. And I also love that it ties back to the overarching goals. I think one of the big problems with a lot of roadmaps is that they're basically just a list of things to do over time, and then at the end of the day you just have a bunch of stuff done, maybe.

[00:47:42] Itamar Gilad: Yeah.

[00:47:42] Janna Bastow: But this really ties it down to: here's the stuff, here's why we're doing it, and here's how we might go about doing it. Here are the experiments and the things that we're trying.

[00:47:50] Itamar Gilad: Yep.

[00:47:50] Janna Bastow: And it creates a lotta cohesion as opposed to this pile of chaos, which I think is where a lotta teams end up with the roadmap. 

[00:47:57] Itamar Gilad: Yeah, I think ProdPad does the same, right? You guys basically create a shared view that everyone can look at and understand.

[00:48:02] Janna Bastow: Actually, if you take one of these stripes here, like the average onboarding time, that stripe of stuff there, and tip it on its side, that's basically a roadmap card.

[00:48:10] Itamar Gilad: Oh, okay.

[00:48:10] Janna Bastow: So each one of these things where it says goals, that'd be the objectives at the top. And the ideas, I could imagine those being at almost the initiative level, and then the steps being the experiments, the ideas underneath that.

[00:48:21] Itamar Gilad: Yeah.

[00:48:22] Janna Bastow: It's a very similar sort of concept. It also maps very closely to Teresa Torres's opportunity solution tree, where it's linked to what the opportunities are, and therefore the solutions, and you're prioritizing at the opportunity level.

[00:48:34] Itamar Gilad: Yeah, totally. She calls them experiments and, 

[00:48:36] Janna Bastow: Solutions.

[00:48:37] Itamar Gilad: … but it’s the same thing, at the end of the day.

[00:48:39] Janna Bastow: Yeah. [laughs]

[00:48:39] Itamar Gilad: We're all kind of creating something that tries to connect the strategy on one hand to action at the end, by creating these layers in the middle.

[00:48:46] Janna Bastow: Yeah.

[00:48:47] Itamar Gilad: So that's why I don't claim that it's a completely original idea, although I think I arrived at it independently. But many other people arrived at it independently themselves, so, [inaudible 00:57:42]-

[00:48:58] Janna Bastow: Exactly that, exactly. And this is what we're discovering by having all these conversations and all these experiences within these companies: ultimately, to organize our teams, we need something that points us towards what it is we're doing, and then still be able to show the more granular work. Which I think is why so many people have landed on the same sort of thing, but with different terminology for it. And I love that everyone has different terminology for it; everyone who's joined us on this has come in with different ways of talking about it. And I think that's fine. Every company I talk to has different ways of talking about these different pieces of work, and I think it's important that everyone knows that's okay: you call it what you call it, as long as it's consistent within your team.

So call them ideas, call them experiments, call them initiatives, call them goals, call them KPIs, objectives. Frankly, no one cares. The point is just to make sure that your team knows what you're talking about when you're talking about them, and-

[00:49:51] Itamar Gilad: Yep. Basically all the processes here are just ways to create some sort of order in this chaos and drive meaningful discussions. By structuring things and saying, "It needs to look like an OKR, it needs to look like an idea with an ICE score," in this example, you just [inaudible 00:58:57] people to discuss these things without letting egos and opinions and biases take control of the discussion.

[00:50:15] Janna Bastow: Yeah. So, everyone, this is another feather to add to your cap. Absolutely keep this GIST board as an example; I think it's a really good example of how to align teams and bring a sense of cohesion to the work that you're doing. What does everyone think, is that gonna be useful?

[00:50:31] Itamar Gilad: Yay. [laughs]

[00:50:32] Janna Bastow: Now we've run out of time for questions. But I wanna finish by saying a huge thank you to you all, it's been really great to have you jumping in on the chat with all your questions. And a giant thank you to Itamar Gilad for sharing his GIST framework and his Confidence Meter, and for joining us all here today. Thank you so much.

[00:50:48] Itamar Gilad: Thank you guys.

[00:50:50] Janna Bastow: All right, wonderful, great having you here today, and we'll see you here next month. Bye for now.

Watch more of our Product Expert webinars