
[On Demand] Product Management Webinar: AI Automation

How to Automate your Product Processes with AI, with Chris Butler

Watch GitHub’s Director of Product Operations and learn how and where to leverage AI to automate your product processes and supercharge your operational efficiency. 

Find more time for creative ideation and strategic decision-making by unlocking major improvements in your flow with the help of AI. 

Learn how to achieve operational excellence with scalable AI tooling and transform the way your Product Management Team works and the results they drive.

About this webinar

With a veritable plethora of AI tools and possibilities, where do you start when it comes to leveraging this technology to help you drive operational efficiencies across your Product Team? With so many points in your process, how do you know where the right opportunities are for AI-powered automation? 

Watch ProdPad CEO and Co-Founder Janna Bastow as she asks GitHub’s Director of Product Operations Chris Butler to explain just how to identify the opportunities for AI automation and how to implement them. 

Whether you’re tasked with optimizing organization-wide Product Operations, or you’re just looking for ways to work smarter and do more with less, watch our webinar and come away with some actionable insights to help you leverage AI and transform what you’re able to achieve. 

Watch the webinar and find out:

  • The core principles of Product Operations
  • The numerous benefits of automating your processes with AI
  • Where to leverage AI automation across a typical Product Management process
  • How to implement AI automation to drive operational efficiency
  • What the risks of AI automation are and how to mitigate them
  • How to measure the success of your process automation 

About Chris Butler

Director of Product Operations at GitHub, Chris is a product leader who drives innovation at the intersection of technology, design, and strategy. With experience spanning GitHub, Google, Facebook, Microsoft, and startups, he’s led impactful initiatives in AI, productivity, and responsible tech. From crafting speculative design fiction to mentoring future product leaders, his work bridges creativity and execution.

As Co-Founder of The Uncertainty Project and in his role at GitHub, he focuses on aligning vision with action to create ethical, sustainable, and impactful solutions.


Janna Bastow: [00:00:00] So big welcome to everybody who’s joined for today’s product expert fireside series that we run here at ProdPad.

Today’s session is all about how to automate your product processes with AI. So very topical. And we’ve got Chris Butler joining us. I’m gonna introduce you to Chris in just a minute, but we’ll do a bit of housekeeping first. As many of you probably know who’ve been here before, this is a series of webinars that we’ve been running for years.

So if you go to prodpad.com/webinars, you can see the backlog, or the history, of all the ones that we’ve done in the past. And they’re always recorded. It’s always a mixture of either presentations or firesides, like today’s gonna be. And it’s really with a focus on the experts that we bring in to share their experiences and their insights.

It’s a real focus on the content and the learning and the sharing. So today is gonna be recorded. We are recording now. And you will have a chance to ask questions. Drop them into the Q&A section if you could, and that way other people can see your questions and give them the big thumbs up, so we know which ones are most popular.

But also use the chat today. Let us know what’s resonating. [00:01:00] Let us know what you’re thinking as we chat through this topic around automating processes. Before we jump into the meat, let’s talk about ProdPad itself. We’d love to hear from any existing users: who here is using ProdPad already?

Thank you so much for the support. For anybody else who isn’t using it, it’s a tool that was built by myself and my co-founder Simon Cast. We were both product people; you might know us as the people behind Mind the Product. So ProdPad grew up around this sort of best practice that we were seeing from the world around us as the product management craft grew.

It was basically built as a tool to help us do our own jobs when we were both leading product at a couple of different companies. And it was something to help us keep track of all the ideas and experiments and all the feedback we were getting, and to articulate the strategy into a roadmap and a series of objectives that we could share with the rest of the team and with our execs.

And ProdPad was built to give us a sense of control and organization, as well as transparency for the rest of the team, so people understand what’s going on in the [00:02:00] product world, and it creates a single source of truth for all your product decisions. So it’s the type of place where you’re able to look back after you’ve been building in it for a couple of years and say, ah, this is what worked and this is what didn’t work.

It’s completely free to try. We have a sandbox version of ProdPad, which is preloaded with example data, so you can see example now-next-later roadmaps and OKRs and all the experiments and how they interplay. And our team is made up of product people, so we’d love to see you in there and hear your feedback. If you’d asked me a couple years ago what we’ve built with ProdPad, I would’ve said it’s a collaboration tool around your vision and your objectives and the strategy and the things that are being built in your product and the things your customers have said. But we’ve actually now underpinned that with AI, our CoPilot, which allows you to ask it questions.

It can help guide you. It can help point out whether you’re putting an idea on your roadmap that doesn’t actually connect with your vision. You can give it a PDF of your old roadmap and it’ll help turn it [00:03:00] into a now-next-later format for you. It’ll answer questions like, did we do anything around this?

Whatever happened with this? And it’ll pull up the decision that you made two years ago and give you insights about why it came out that way and what you should do about it next. So we’ve really upped the game with CoPilot these days. And since we’re talking about automation today, we also have an automation module built into ProdPad that allows you to quickly automate different pieces, so that as ideas and other pieces flow through, you’re able to keep things linked and up to date.

Just a little bit about us. If you wanna start a demo, reach out to us: just hit up prodpad.com/demo and we’ll get that started. We’re always happy to show you how it actually works. But in the meantime, that’s enough about us. Let’s talk about Chris. This is Chris Butler.

Everybody say hello. Hi Chris, thanks for joining. He’s a seasoned product leader who’s spent his career working at the intersection of tech, design, and strategy. And he’s led impactful work across big [00:04:00] names like GitHub, where he is now, as well as Google, Facebook, and Microsoft.

And in the startup world too. His focus areas span AI, productivity, and responsible tech. And he’s not afraid to push the boundaries, whether that’s through speculative design and design fiction or mentoring the next generation of product leaders. He’s also the co-founder of The Uncertainty Project, and leads product operations for AI and productivity at GitHub.

So Chris, we’d love to hear more about that, and we’re looking forward to chatting today about how you’re going about automating your product processes with AI.

Chris Butler: Absolutely. I’m happy to share what we’ve been experimenting with internally at GitHub. And I think with everybody getting the memo from their CEO after the Shopify memo went out about using AI, everybody’s on high alert about these types of things. But I’m hoping I can share a couple stories about the way we’ve started to integrate this into our processes within GitHub.

Janna Bastow: Yeah, absolutely. And you talk about that memo that went out. For the uninitiated, what’s the importance of that?

Chris Butler: Yeah, so Shopify’s CEO is someone who, I think, is a little bit [00:05:00] controversial within the startup world, I would say. But he put out a memo basically telling all of his people to use AI by default.

Including, I think, that if you’re thinking about hiring someone, you need to try to do the job with AI first before you actually hire someone else. And so really this push for everybody to integrate different types of AI into their world, whatever their job role, is something that is, I think, top of mind for most executives.

Within GitHub, for example, we’ve spun up a bunch of different programs just helping people try out a bunch of things. ’Cause I think one of the biggest problems for a lot of teams is just how do they even start to understand what the capabilities within these systems are, right?

There’s a lot of magic. It’s unfortunate that the emoji for AI is now this sparkly icon, like we’re sprinkling magic dust on everything. But just getting used to when you can and cannot trust the output of something, what you should or shouldn’t try, what are these models built for?

Things like that I think are really important, and you don’t [00:06:00] really understand them, other than maybe in a theoretical academic sense, until you really start to try it out in some way.

Janna Bastow: Yeah, absolutely. And you’re right at the intersection of using AI in automation and operations.

We were talking before you jumped on about how product ops has evolved from the role of product management itself. How has your role evolved into that space, and how has it adopted AI?

Chris Butler: Yeah. In the roles that I’ve worked in, I think I first started really thinking a lot about the way we might integrate the latest wave of AI when I was back at a company called Philosophie, which was a boutique design consultancy, but we worked for really large companies like Google, PwC, Neiman Marcus, Prudential, E-Trade, et cetera.

And a lot of our work was really around just trying to figure out: should we try to integrate AI into these systems, maybe even a little bit before the capabilities were possible? That was really the starting point, probably in 2016 or ’17. I would say even before that, we called it business intelligence or big [00:07:00] data; there’s a bunch of things before this where it was about data and how to do prediction or how to do better business operations in a lot of different places. But that was really the first one where I started to think this through. Overall, though, I’ve found that we tend to think about these new technologies the wrong way.

We tend to over-fixate on the technologies themselves rather than the nuance of that technology. And I would take this back to even when I worked at Kayak, where I was in a weird product manager and business development hybrid role that was focused on mobile. And this was the time when Kayak was starting to see that mobile was about to eclipse desktop use.

My job title had mobile in it. It was a very clunky, long job title, and there were lots of mobile PMs at that time as well. But we don’t really have mobile PMs anymore. And the reason why is because we understood what made mobile interesting.

I think the weirdest acronym was SoLoMo: social, local, mobile. Whatever that was back then. We saw there were new capabilities, but those new capabilities [00:08:00] were really more about the way that a human would integrate them into their life. And so once we started to understand that, it was really much less about, is it WAP, if you wanna go that far back on mobile, or iOS apps versus Android or mobile web or whatever.

And so it was really focusing on: how does this change the way that people use the service? In the case of Kayak, we ended up seeing a lot more use when it came to mobile searches, but not more purchasing. And the reason was that people had much easier access to just checking on the prices of things.

It didn’t mean that they had more vacation. And so we had to make sure that we were aware of that. And with AI, I would say there’s lots of dangers and problems that come out of over-depending on AI, especially things like large language models, the ChatGPT type of stuff.

There are definitely hazards we should talk about when it comes to that. And I think that gets back to when you should and should not use these, when you should and should not trust the output of these things. But there’s a huge amount of capability that we need to start thinking about.

And I tend to think about this as: what are humans good at and what are machines good at? We [00:09:00] need to be aware of those two different domains and make sure that we’re getting the best of both worlds rather than the worst of both worlds. And I think that’s a pretty important thing.

Janna Bastow: Yeah, absolutely. It’s a really good point. And so when you talk about what humans are good at versus what AI is good at, what is AI not able to replace right now when it comes to product ops sort of stuff?

Chris Butler: I think there’s a lot of human judgment that goes into different parts of the product process, or the software development life cycle, the product development life cycle, whatever you want to call it.

There’s a lot of human decision making. And part of that is that there’s just so much context and intuition, especially for product managers or other roles that deal with a lot of uncertainty. There’s a lot of tacit knowledge that’s built up over time, and that intuition is really what helps people make these decisions.

So prioritization is a great example of this, right? We can potentially do a bunch of automatic scoring of a new work item. If you use something like RICE, you’ll know what I mean as far as scoring. But the point is that it’s really about turning this crank [00:10:00] and getting a draft stack-ranked list.

And then after that you need to apply human judgment to that list and say, actually, I disagree with these things. The anti-pattern I’ll see is that people then want to go back in and start to change the scoring system in some way so it’s in alignment. But there’s always gonna be context that humans have, just based on their experience, that will just not be captured by the system.

We’re constantly trying to understand the world, constantly pulling in many different sources of information that an AI system today just can’t pull in. It just doesn’t have access to it, or it’s not available, or something like that. And so I think that’s one of those things: this idea of judgment in decision making.

The last piece around this is really that decision making is an emotional thing. That’s why we have a gut feeling or an intuition. And so I think we should not be leaving a lot of these more important decisions to the machine itself, but we should use the machine to gather a bunch of information and to consider a bunch of different framings, to then even [00:11:00] do a better job of synthesizing and improving the messaging around something.

Yeah, that’s all something that these machines can do, because it’s based off of all of these examples of writing that they’ve seen before, or all of these other viewpoints that they’ve seen before. The final part, though, is really that the product manager or the product team or the product leader needs to make those final decisions, and we shouldn’t be automating those away.

So that’s maybe a really big thing, I would say, in process automation.
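
The "turn the crank, then apply human judgment" flow Chris describes can be sketched with RICE-style scoring. Everything here is invented for illustration: the items, the numbers, and the field conventions are not anyone's real backlog.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    reach: float       # people affected per quarter
    impact: float      # e.g. 0.25 (minimal) .. 3 (massive)
    confidence: float  # 0..1
    effort: float      # person-months

def rice_score(item: WorkItem) -> float:
    # RICE = (Reach * Impact * Confidence) / Effort
    return (item.reach * item.impact * item.confidence) / item.effort

def draft_stack_rank(items: list[WorkItem]) -> list[WorkItem]:
    # This output is a DRAFT only: humans review and may reorder it,
    # rather than tweaking the weights until the machine "agrees".
    return sorted(items, key=rice_score, reverse=True)

items = [
    WorkItem("SSO support", reach=800, impact=2.0, confidence=0.8, effort=4),
    WorkItem("Dark mode", reach=2000, impact=0.5, confidence=0.9, effort=2),
    WorkItem("Export API", reach=300, impact=3.0, confidence=0.5, effort=3),
]
for item in draft_stack_rank(items):
    print(f"{item.name}: {rice_score(item):.0f}")
```

The point of keeping `draft_stack_rank` dumb and transparent is exactly the one made above: you can see how the number was produced, and disagreement is resolved in the human discussion, not by re-tuning the formula.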

Janna Bastow: Yeah, that’s a really good point. And actually this is why I am still very strong in my opinion that product management is not gonna be replaced by AI, right? It’s another tool that we can use to help sift through the noise that we’re surrounded with.

But you actually touched on something really interesting there, which is the tendency for product managers to say, oh, I use this system to stack rank or give me this list. And the tendency, if it doesn’t match their intuition, is to go in and tweak that rating system or the stack ranking or whatever.

And it’s actually one of these traps I see people falling into. Instead of using their intuition and the information that they know, they’ll go and tweak the tech to make it so that it provides the right information. And [00:12:00] you end up with this, it’s product management theater, right?

You’re doing all these little changes to get the thing to say, yeah, this one is a score of 82, so therefore we do it first, over this other thing that has a score of 80. And the thing with a basic scoring system is you can see what it’s doing, you understand how it got that input, and you can edit it and you can change it.

But with AI, it will give you a stack rank or its opinion on something, and it will come out and be confidently wrong about things. Because it doesn’t have the whole picture. And it’s AI, it’s chaotic. And at times we know that,

Chris Butler: It’s random at times. I would say people are chaotic, but again,

Janna Bastow: it reflects us.

Chris Butler: It has a randomness. Exactly. Yeah. And I think the issue is that we’re not here to make this system perfect when it comes to prioritization. What we’re here to use this system for, in some of the things we’ve started to work on at GitHub, is to help ease the burden of triage and prioritization, and remove a lot of toil. The team that I operate within and lead right now is referred to as Synapse, [00:13:00] which is a kind of cool name for a small team. Really, we’re focused on alleviating toil for the product group.

And that is our mission, right? And there’s a lot of things that we can start to do to actually achieve that. So take the idea of deflecting questions in a Slack channel: questions that are asked all the time, where there’s a known answer somewhere that’s documented.

Just enabling something as simple as that helps save a lot of the product team’s time. But then once it does need to get into their issue system, and we use GitHub to build GitHub, there’s a lot of things that we can start to do to say: is this actually well formed?

And push it back to the person that’s requesting it if it’s not well formed enough or doesn’t have enough information. That is something that an LLM inside of this process automation can do pretty well. Or doing a first draft of how this would fit with our charter or our strategy.
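
A minimal sketch of the well-formedness gate described here, where an automation pushes a request back before a human ever triages it. The required section names, thresholds, and reply text are assumptions for illustration, not GitHub's actual checks (which Chris describes as LLM-based rather than keyword-based).

```python
# Hypothetical required sections for an incoming request.
REQUIRED_SECTIONS = ["problem", "impact", "acceptance criteria"]

def triage(issue_body: str) -> tuple[bool, list[str]]:
    """Return (well_formed, missing_sections) for an incoming request."""
    lower = issue_body.lower()
    missing = [s for s in REQUIRED_SECTIONS if s not in lower]
    return (not missing, missing)

def push_back_comment(missing: list[str]) -> str:
    # Draft reply the automation could post, deflecting the request
    # back to the requester before a human spends time on it.
    return ("Thanks for the request! Before the team can triage it, "
            "please add the following sections: " + ", ".join(missing))

ok, missing = triage("Problem: exports time out.\nImpact: 40 accounts.")
if not ok:
    print(push_back_comment(missing))
```

In the setup Chris describes, the keyword check would be replaced by an LLM judging whether each section carries enough information, but the control flow, check then push back, is the same.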

Yeah. Those are all things that I think we can start to do that actually make a lot of sense. But the final decision of should we accept this work, should we accept this dependency? That really belongs to the human team. And the other thing I would add [00:14:00] about the way that humans versus machines should work is that there is real value in having multiple different viewpoints inside of a cross-functional team.

And that’s why we talk about the balanced team: engineering, design, and product. But I would say it’s even beyond that, right? It also includes privacy and responsible AI and legal and the help team and customer support and the sales team, right?

All these people have different viewpoints. And one of the things that is, I think, helpful is trying to emulate or simulate those viewpoints early on in a process, because they can give you good feedback. But then what we should be doing is fostering these cross-functional teams to continue to work well within that new future with a bunch of, say, agents, if you wanna use that way overwrought term nowadays for something that’s adjunctive.

But I think there’s something really valuable about having all of these viewpoints and then enabling them to make better decisions. But getting back to that prioritization discussion, right? There’s a bunch of information that these systems can prompt for, can help gather, can actually maybe [00:15:00] even provoke, right?

Like, should we be taking this on, and why or why not, right? Yeah. There’s a lot of things that these systems can do, especially because of LLMs and the way that they work today. But the product team should still be having that discussion, with the product manager saying: is this actually in alignment with our product and business strategy?

Is this best for the customer? The designer saying: is this the right experience that they should be having? For the engineer, it’s: will we be building something that is maintainable over time? And so all of those viewpoints should have a tension between them. And we shouldn’t just say,

oh, like you said, this issue is at a score of 82, so we have to take it. There should be that discussion that happens between the human beings.

Janna Bastow: Yeah. Absolutely. And you mentioned, I think, a very noble objective there: reducing the toil for your product teams.

I love that. How are you measuring that? What sort of successes and wins have you had?

Chris Butler: Yeah, we’re still trying to figure out the engagement piece of this. The raw quantitative aspect is: how long does it take for, say, an initiative to get kicked off and then start to be worked on?

There’s a lot of stuff that goes into that beyond just toil. There’s actually [00:16:00] strategic conversations, resourcing, sequencing, a bunch of stuff like that. So we’re still figuring out how to do the raw quantitative measurement within the organization. But I would say that we do think an awful lot about the perceived toil that people go through.

And we do this a lot through qualitative interviewing. As a product manager at heart, even though I’m in product operations, I still do somewhere in the range of four to eight customer interviews per week, right? And those are discussions with people like product managers, engineers, engineering managers, et cetera.

And so that qualitative interviewing is really helpful to understand what’s going on there. But then we’re also starting to use surveying to understand the perceived toil that they’re dealing with in different types of work. Some of that work might be around how they do reporting and status.

It could be how they deal with intake of new requests and dependencies. It could be how they have to kick off an initiative and get it started. Continuous planning and resourcing. Those are all domains that we think about toil in. One example: we’re really focused right now on how we reduce the reporting toil [00:17:00] within GitHub.

Most of this actually targets TPMs rather than PMs and EMs, but we’re starting to expand, ’cause everybody has to do reporting, right? Almost everybody in the organization has to do that.

Janna Bastow: And sorry, what do you mean by TPMs versus PMs and EMs?

Chris Butler: Totally fair. Totally fair. And this is the problem I think I’ve seen with product titles: TPMs in our context are more like program or project managers. I know that there’s another version of technical product manager, which just means they work directly with engineers sometimes. Again, there’s a lot of terminology here, but what I really mean is the people that are helping reduce the risk on a project, helping make sure that nothing gets missed as we’re actually moving through the development process. So that’s what I mean by TPM in this case.

And they have to do an awful lot of reporting, right? Both product managers and engineering managers too; there are these kind of parallel hierarchies of reporting that happen, because we’re a matrixed organization as well. So the TPMs have to spend time pulling information from a bunch of different sources.[00:18:00]

They then have to parse it themselves in some way and write things out. And some of them are starting to use different types of models and LLMs to help synthesize the output report. But there is a real issue around hallucination. And also, if you don’t write the prompts carefully and iterate on them, you’ll sometimes end up getting weird content out of it.

And so one of the things that we’re building internally using GitHub, using Actions and Models and issues and project views and things like that, is to pull a bunch of content, use an LLM to help synthesize it into a better form using a template, and produce a draft for these TPMs to use.
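
A rough skeleton of that kind of pipeline: gather raw updates, have an LLM summarize each section, and fill a template to produce a draft. The template, the section names, the source format, and the injected `llm` callable are all assumptions for illustration, not GitHub's internal implementation.

```python
from typing import Callable

# Hypothetical report template; a real one would follow the org's format.
TEMPLATE = """## Weekly status: {initiative}
### Progress
{progress}
### Risks
{risks}
"""

def gather_updates(sources: list[dict]) -> str:
    # In practice this step would pull issues, PRs, and project views
    # via an API; here each source is just a dict with text.
    return "\n".join(f"- [{s['source']}] {s['text']}" for s in sources)

def draft_report(initiative: str, sources: list[dict],
                 llm: Callable[[str], str]) -> str:
    """Synthesize raw updates into a templated DRAFT a human then edits."""
    raw = gather_updates(sources)
    progress = llm(f"Summarize progress in two bullets:\n{raw}")
    risks = llm(f"List open risks in one bullet:\n{raw}")
    return TEMPLATE.format(initiative=initiative, progress=progress, risks=risks)
```

Passing `llm` in as a callable keeps the pipeline testable with a stub, and makes the design point explicit: the model only fills sections of a fixed template, and the output is a draft for the TPM, never the final report.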

And we’ve already been hearing that we’re saving people somewhere in the range of 30 to 45 minutes a week. And I think there’s even more, by the way, that we could be saving them; this is just the start. And again, all of these things are experiments, because

I think there’s a lot of things for us to still learn about how the environment within the organization changes if everything is a generated status [00:19:00] report going forward. ’Cause right now it’s all about pushing up a hierarchy, right? And you’re building these tools to synthesize something that then goes to the next person.

That person then does a roll-up that goes to the next person. So the question is: should we actually be turning this around and saying it’s more about a pull of information that I need today to be able to do my job? And how does that tie into something I’ve been experimenting with: how we create leadership personas for our C-level execs.

And how does that change what we send them, based on what they usually care about? What are their blind spots, even? And can we create an agent that maybe mimics these C-level execs to help us actually do a better job on our day-to-day as well? So there’s a lot of ethical and moral considerations there as well, about emulating people.

Janna Bastow: Yeah, I’d be really interested in that. An agent that mimics your execs! Are the execs on board with this?

Chris Butler: So, they are aware that we’ve been doing this. They don’t have any comments yet. But I think the hard part there will be that a lot of different teams will end up writing a README or a user manual for themselves.

The problem with that is it’s what they want to say about [00:20:00] themselves; it’s not the blind spots that they have that they maybe are not aware of, or that they don’t like and don’t want to talk about. And so there’s a lot of interesting questions about, if we were to create a profile like this, and I have created these profiles, should we allow these leaders to edit them or not?

Should it be based on their behavior or not? I think that’s an interesting thing. I know we’re getting into more speculative territory here, but moving from pushing status to everybody, to pulling what I need to do or respond to or know about, I think is the model that we’re trying to think about right now.

Yeah. But right now we’re just trying to solve that push problem. Now, this could also cause problems. There’s that comic where on one side it’s, hey, I have to write this long email, so I only put in three bullet points and the AI spits out a long email. And then on the other side it says, I get this long email and I’m able to use an AI to just pull out the three bullet points, right?

What we don’t want is to create more reporting slop within the organization. And I think this is one of those things: when you start to have this as mass behavior, you start to see different emergent effects coming out of the organization. And so one of the things we’re trying to keep a very close eye on is how this changes the way that [00:21:00] information is consumed.

And I think the last piece is really the engagement piece I was talking about. I’ve set up status reporting mechanisms more times than I can count at this point, and they’ve all failed. And the reason why they all fail is because usually the senior leader is not giving enough feedback, and the people don’t know what that information is being used for.

And so completing that loop is another thing that we’re now looking at: what type of engagement metrics should we be viewing? How do we compare what was passed up to a leader with what the leader then passed onward? How do we look at that diff to start to build a feedback loop about what we should actually use there?

But for right now, it is very much just about automating: pulling from a bunch of different data sources and synthesizing this draft report, essentially.
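
One simple way to "look at that diff" between a report passed up and what the leader then forwarded is a text-similarity ratio: a low ratio means little of the submitted detail survived the roll-up, which is a signal about what leaders actually use. This is an illustrative measure using Python's standard library, not the metric GitHub uses.

```python
import difflib

def retention_ratio(submitted: str, forwarded: str) -> float:
    """Rough signal of how much of a report survived the leader's roll-up.

    Returns 1.0 for identical texts, approaching 0.0 as less of the
    submitted content appears in what was forwarded.
    """
    return difflib.SequenceMatcher(None, submitted, forwarded).ratio()

submitted = "Shipped SSO beta. Risk: vendor SLA. Hiring two engineers."
forwarded = "Shipped SSO beta."
print(f"{retention_ratio(submitted, forwarded):.2f}")
```

Tracked over time per reporting chain, a consistently low ratio could feed the feedback loop described above: it suggests the template is asking for detail nobody upstream consumes.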

Janna Bastow: Yeah, absolutely. And so going back to the reporting piece. Is this your team paving the cow paths, trying to stay ahead of what the product managers are already doing?

Or is it that you’re introducing AI to them? I imagine that without these tools, people were just taking reports and information, shoving it into [00:22:00] ChatGPT or Claude or whatever, and coming up with their own sort of mix. Or is it that you’re having to teach people how to use these tools and use your internal versions?

Chris Butler: There’s definitely people internally that are braving the way; they’re early experimenters with this type of thing. When I was starting to do this, earlier last year, we had a bunch of really bad internal tools that were approved by our IT and security team.

And they were really bad versions of ChatGPT, basically, right? And so we were using those types of things, and this is where I was starting to see issues with hallucination, and the importance of prompting and shaping the information that we end up getting. There’s definitely a lot of very early adopters. But for anybody that deals with this type of toil, especially TPMs or project or program managers, they’re usually so focused on just getting the toil done.

It’s almost like my dad, who worked on a manufacturing line when he was younger. He said he would start work, and then he would wake up as he was walking out the door, because he was done, basically. And so we end up [00:23:00] losing a lot of understanding and context.

When we are so focused on the toil, that’s what happens. I was doing a daily report for a little while, and it ended up taking up so much of my time and my mind space that I couldn’t think about anything else. I couldn’t think about how to improve that process. And so a lot of the time we’re here to educate: not just, here’s a tool that you can use, but here’s something that is easy to install in your context and easy to configure.

We want to do that. These are not custom solutions; they're modular, composable things that we're starting to snap together. And from there, I think it's really about us trying to teach people about the tools that are available internally.

So one example is we've just started talking externally about something called Copilot Spaces. This is a way to basically create a context. And what I mean by context is that sometimes you want to pull in, say, files from a GitHub repo. You want to add a specialized system prompt. You want to include a bunch of text, say, even a meeting transcript or something like that.

You can include this and then have a conversation with that context. And so people have [00:24:00] started to create spaces. One of the first ones that I created was a way to help me write better reporting, right? And so, just even starting to tell people about this: we have a way now to share these spaces internally in our organization, creating a set of spaces that are best practices anybody can use for different parts of their process.

Sometimes it's just an awareness issue, right? There's lots and lots of great work that's been happening with regard to Spaces. And so just teaching people about this is sometimes the issue.
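To make the Spaces idea above concrete, here is a minimal sketch of what "creating a context" could look like in code. All the names here are illustrative, not the real Copilot Spaces API: a space bundles a system prompt, some repo files, and free text such as a meeting transcript into one message list you could hand to any chat-completion endpoint.

```python
# Hypothetical sketch: bundle files, notes, and a system prompt into a
# "space"-style context, then flatten it into chat messages.
from dataclasses import dataclass, field


@dataclass
class Space:
    system_prompt: str
    files: dict = field(default_factory=dict)   # path -> file contents
    notes: list = field(default_factory=list)   # transcripts, pasted docs

    def to_messages(self, user_question: str) -> list:
        """Flatten the space into chat messages: context first, question last."""
        context_parts = [
            f"### File: {path}\n{body}" for path, body in self.files.items()
        ] + [f"### Note\n{note}" for note in self.notes]
        return [
            {"role": "system", "content": self.system_prompt},
            {"role": "user", "content": "\n\n".join(context_parts)},
            {"role": "user", "content": user_question},
        ]


space = Space(
    system_prompt="You help write concise weekly status reports.",
    files={"docs/status-template.md": "## Wins\n## Risks\n## Asks"},
    notes=["Transcript: we shipped the audit tool; rollout slipped a week."],
)
messages = space.to_messages("Draft this week's status report.")
```

The point of the structure is the one Chris makes: the reusable, shareable artifact is the context itself, not any single answer.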

Janna Bastow: Yeah, absolutely. And I can understand that AI has a tendency to be random at times.

Are there any experiments or attempts that have ended in failure, or that you're still hitting your head against the wall on? Touching on that fear: you set up some automation, and then the AI goes wild and does a bunch of stuff that you have to roll back?

Chris Butler: Yeah. Let me tell you two different stories.

So one is that when I first started using these systems, I would definitely find hallucinations, but when I would then provide these things to senior leaders, they would find even more [00:25:00] hallucinations. And that was early on in the use of these tools for, say, status reporting, like I said, early last year.

And that definitely impacted their view of my work. And so what I would say is, it was much more important for us to actually say, here's how we're trying to use this tool, and to take an experiment-driven mindset for this. But also just being very aware that early on you will have to double-check some things. That is the most important thing.

And then I would say we have bots that already exist within the GitHub ecosystem that will do things like apply labels or do certain types of automations. And what I've started to see is that we have lots of people that work at GitHub.

We have lots of repos and we have lots of project boards. And so as a bot starts to change things internally, there are other effects, because it impacts another bot that then triggers another bot. So there are some interesting problems that we're starting to think through. Like I said, we don't want to create reporting slop everywhere.

We need to start thinking about [00:26:00] what actually happens when these things are mass adopted within the organization. And so in the case of reporting slop, what we start to do is things like auditing. Once we have more people using these tools that we're building internally, we want to start to look across all of the different

reports that are created and see how similar they are. That type of audit can tell us, are there extra reporting lanes that we just don't need? Because the more of these things that are created, the more stuff senior leaders have to read, and the less likely they are to actually engage with them.
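The auditing idea described above can be sketched very simply: compare every pair of generated reports and flag pairs so similar that one reporting lane may be redundant. This uses the standard library's `difflib`; the 0.8 threshold is a made-up starting point for illustration, not a value from GitHub's internal tooling.

```python
# Sketch of a "reporting slop" audit: flag near-duplicate report pairs.
from difflib import SequenceMatcher
from itertools import combinations


def redundant_report_pairs(reports: dict, threshold: float = 0.8) -> list:
    """Return (name_a, name_b, ratio) for report pairs above the threshold."""
    flagged = []
    for (name_a, text_a), (name_b, text_b) in combinations(reports.items(), 2):
        ratio = SequenceMatcher(None, text_a, text_b).ratio()
        if ratio >= threshold:
            flagged.append((name_a, name_b, round(ratio, 2)))
    return flagged


reports = {
    "team-a-weekly": "Shipped audit tool. Rollout slipped one week. No asks.",
    "team-a-exec":   "Shipped audit tool. Rollout slipped one week. No asks!",
    "team-b-weekly": "Hiring two engineers; migration blocked on security review.",
}
flagged = redundant_report_pairs(reports)
```

A real audit would likely use embeddings rather than character similarity, but the output is the same shape: a short list of candidate lanes to consolidate.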

And so I think that's another thing that I've just started to notice, basically.

Janna Bastow: Yeah, absolutely. Actually, a couple of questions are coming in, and an opportunity to get really down into the details about what tools you're actually using. Is this all in-house built stuff, using your own in-house devs, or are they external tools?

Chris Butler: That's right. So our team is a small handful of engineers, and a lot of this is focused around the use of GitHub Models, which is a kind of [00:27:00] playground for access to a bunch of different types of models, inside of GitHub Actions, which is usually used as a CI/CD automation pipeline, but we're actually using it to now interact with issues and discussions and other documents, basically.

And so we're using those very specifically. We're sometimes using template frameworks inside of there that are not something that GitHub makes; they're an open-source templating framework. We're writing a lot of our own prompts in this case, because some of the prompts have to be highly specific to the context area.

And so that is a configurable thing that we think about a lot. But like I said, we are trying to build pretty much everything ourselves now. That said, I've seen a lot of amazing things from groups like Airtable, Zapier, Relay; there are a bunch of others out there that do a lot of this kind of thing. I guess WYSIWYG is not the right word.

I feel like I'm an outdated person now when I say WYSIWYG, but there's a visual way for people to create these automations, with integrations to data, that can then use models to transform content and output it somewhere. I think you could use some of those. I think you're calling it

Janna Bastow: No code.

Chris Butler: No code. Okay, sorry, you're [00:28:00] right. It's no code, low code, whatever you want to call it. And yeah, with all those types of platforms, you can start to do this type of stuff. What you really need is, where is the context that is being built, right?

And in our case, it tends to be in project views focused around issues. And we have a kind of semi-standard within the organization of initiatives, epics, tasks, batches, a bunch of things like that. But I would say that's the first thing you want to get access to, right? Because the other thing is PRs.

And so sometimes just shipping something within GitHub means that a PR has been merged, right? So getting access to PRs is the next thing. And then after that, we start to look at discussion posts, because if there's an architecture decision that's made, usually there are these huge threads within GitHub that hold these discussion posts, right?

Or decisions. The next thing is, where are you storing product documentation? In our case, it tends to be a combination of issues, discussions, and Google Docs. But that's how that works. And [00:29:00] then there's, where's all the conversation about these things happening? For us it's Slack and Zoom transcripts, but it could be whatever that is for you.

And so it's about gathering that type of context. And by the way, we don't have access to all of this, because of security concerns within our organization. And I think that's one of the problems I see for Airtable or Zapier or any of these others: for a startup, it's completely fine. I'm just going to use Zapier, I'm going to turn on all of the integrations.

And it's going to be great. That's awesome. Inside of an enterprise that is very aware of security concerns, you cannot do that. So it's one by one: you get these integrations to have the right scoping of access and things like that. That's why I think there are a lot of these tools you can use, but in your enterprise context, you may not actually have the ability to join a bunch of this information.

And I would say that part of this is just bureaucratic work that you need to do. You need to go and figure out how to actually get access to these things in the right systems. So that's what I would say, just brass tacks: that will be your job if you want to do this type of process automation.

It's making sure that in a large enterprise you [00:30:00] are allowed to have that access, and these two systems talking to each other, basically. And you should just deal with that; you just have to. But it pays off a lot. So those are the specifics of what we build.

Now, we do use a lot of the scripting language within Actions, for example. There are some things that we build outside of that, but I would say, by and large, almost everything is inside of an Action. And we're now packaging them as GitHub Apps, by the way, so that people can easily install them.

Janna Bastow: Oh, I was going to say, are these things that you're building for internal use only? Or are they a test bed for stuff that will be, or is being, released to GitHub as a product for all the other product teams out there?

Chris Butler: I think the goal of Synapse is to mostly focus on internal toil, but I would argue that, in the way that external teams work, we want to model good behavior on our own teams so that other people can gain the benefit of it.

Because we see that one of GitHub's missions, or goals, is to get to a billion developers. And when we say a billion developers, we don't mean that everybody suddenly becomes technical and learns to code; it's that if you have an idea, you should be able to build [00:31:00] something.

And the fact that you can start to do that means that all different types of people are going to be using GitHub, ideally as the center point for this. And so inside of that, how these teams work together with things like coding agents, which we previewed at Build this year (and there are lots of other teams building coding agents), how those multiple people work together, I think, is really exciting and interesting.

And having GitHub at the center point. So, all this being said, yes, some of these things will turn into product. The way that Plan and Track, the team that owns Projects, for example, actually does this work, I think, is going to be one of those things eventually. There are some more experimental things that we're building right now.

Like we have an internal experiment that is really all about how you gather context around an initiative. So to kick off an initiative, usually people might start by writing a doc on their own, or they'll get a list of bullets from a senior leader that says, go implement this.

And so what we've been experimenting with, in our case, is that they can create [00:32:00] a discussion with whatever they have to start, and then automatically a bunch of different framings are listed as comments. Again, these are all Actions using models and prompts and a bunch of other things, the templating system, but they basically come from a bunch of different viewpoints: are there actually gaps in this initiative so far?

What are some things that you haven't thought of yet that you need to think about? Does this actually have success metrics, for example, and if not, here's how you might add them. Or is this in alignment with the strategy that we profess internally, right?

How in alignment is it? And then there's one commenter that people really love and think is really funny, the one we call Rude Q&A. It's basically the most sarcastic, mean commenter you ever imagined, but it says the things that everybody is thinking about your project but is just too worried to say.

And what's really exciting about this is that people don't actually take it as offensive. It's from a bot. It's a very low-risk way to get this type of feedback, right? And so all of this is trying to help them form a better understanding of their initiative by looking at it from a bunch of different viewpoints.
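The framing-commenter pattern described above is easy to sketch: run one initiative draft through several fixed viewpoints and produce one review prompt per framing. The framing texts below are invented for illustration; the real internal prompts (including Rude Q&A) are not public.

```python
# Illustrative framings: one review prompt per viewpoint for a draft initiative.
FRAMINGS = {
    "gaps": "List concrete gaps in this initiative that the author has not addressed.",
    "metrics": "Does this initiative define success metrics? If not, propose some.",
    "strategy": "How well does this align with our stated product strategy? Be specific.",
    "rude-qa": "Ask the blunt, skeptical questions reviewers are thinking but won't say.",
}


def framing_prompts(initiative_draft: str) -> dict:
    """Return a ready-to-send review prompt for each framing."""
    return {
        name: f"{instruction}\n\n--- INITIATIVE DRAFT ---\n{initiative_draft}"
        for name, instruction in FRAMINGS.items()
    }


prompts = framing_prompts("Build a self-serve onboarding flow for enterprise admins.")
```

Each prompt would then be sent to a model independently and posted back as a separate comment, which is what makes the viewpoints feel like distinct reviewers.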

[00:33:00] And then what it does is it actually generates a bunch of output documents, like: what does it look like on the public roadmap? What is the initiative brief that you're going to put in the issue? What does the announcement blog post look like? Almost like a PR FAQ type of thing, right?

And all of these are being kept up to date based on this context. And so as you respond to these things, edit the documents, and, longer term, have these discussions in your repo or in your weekly sync meeting, we start to build this picture of what the future of this product is. And that sits in parallel with the repo's code, which is really all about the truth of today, right?

And there's a bunch of really cool things you can start to do once you know this. So I'm really excited about that type of experimentation, where we're really thinking about what this thing is. Right now it mostly exists as discussions and issues and projects, but it's this kind of centralized, what I will affectionately refer to as, product manager fiction about the future world of this thing, right?

Every PRD or spec is actually just fiction, right? It's usually not reality yet. And so I'm really excited about some of those things, and [00:34:00] we've been starting to see, in this particular case, people saving a lot of time and getting their early initiative thinking into a lot of the formats they need.

And so that's 45 minutes to an hour per initiative, at least. And same thing, if we want to talk about results that we've started to see: even for things like this reporting toil reduction, we've had some of our early dogfooders saying they're saving 30 to 45 minutes per week on this reporting stuff.

So I think all of this is showing that there's a lot of toil in writing a bunch of different types of documentation, which requires your mind to make a big context switch to get into that mode. Yeah. But if we can give you a first draft that is good enough that you can then edit and use it, that saves people tons of time.

Janna Bastow: Yeah, absolutely. And so what do you recommend for product operations teams at smaller, startup-type companies that don't have the AI experience? How do they get started?

Chris Butler: Yeah, I think experimenting with these types of tools. ProdPad, I'm sure, has a lot of this within its AI features. But I'd also say Airtable with their recent launches around AI, Zapier, any of these [00:35:00] tool sets that are really trying to join

content, do some type of transformation or synthesis using LLMs and prompts, and then output it somewhere. I think that's the first thing. So to me it's a lot about automating certain types of reporting. That's super powerful, I think, for almost every team. Now, in a smaller team, you actually don't have to do as much reporting, which is good, right?

Because they just don't have the bureaucratic machine that needs it, and it's all about information flow, by the way. So I think there's something interesting there. But the second thing I would say is handling customer support tickets, the idea of how you turn those into tickets internally.

One of my opinions is that we should never take anything that is directly from the customer, just put it in an issue, and send that to an engineer to develop, right? There should be this firewall, where there's a bunch of information we're collecting about: what are customers seeing?

What are partners wanting? What do the teams that we're working with want from us? And then, as product managers, we synthesize that into a point of view that we then try to generalize [00:36:00] enough to build a product, and that turns into a work item. So I still think there's some barrier there.

But I think the idea is, how do you help make sure that you're collecting all that information in the best way? Like, one of the things we're starting to experiment with is: is this request that's happening right now related to other requests that we already have?

When we're creating a new initiative, is there already feedback that we know is actually related to it? And so it makes it much easier than a PM having to read through every ticket that's out there, at least giving a viewpoint of: here's user research that's already been done, here are customer tickets, here are sales calls that mention this type of thing.

Here are our key enterprise customers that have complained to us about this. All of that type of stuff, I think, becomes really powerful. And maybe, getting back to the getting-started question: product operations really is about making sure that the teams are as effective as possible.

I think sometimes we think more about efficiency, which I think is a mistake. It's not a Tayloristic type of activity to build software; it's not an assembly line. [00:37:00] What we're doing is designing the assembly line. And so from that perspective, it should really be about: what are the frictions we're seeing when we've made decisions in the past? Where have there been problems that stopped us from making the best decision based on the information we had at the time?

Yeah. And so that's why I think really focusing on those frictions is the thing you should do. And then from there you can ask: are there things here where, if I just had readier access to the information that was already collected, I could have done a better job? Then it's all about trying to bring that information to the place where you would actually need it.

And vice versa: it could be that we made a decision and we didn't consider these viewpoints. Is there a way we can automatically have those viewpoints considered or generated? Even if it's just a throwaway comment, it's still something that gets people to start to expand the way they're thinking.
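The related-feedback lookup Chris describes can be sketched with a deliberately simple retrieval heuristic: given a new request, surface existing feedback items that overlap with it, so a PM reviews a short related list instead of every ticket. Jaccard overlap on lowercased word sets is a stand-in for whatever retrieval is actually used internally; the IDs and threshold are invented.

```python
# Toy related-feedback finder: word-set overlap between a new request
# and existing feedback items.
def _words(text: str) -> set:
    return {w.strip(".,!?").lower() for w in text.split() if len(w) > 3}


def related_feedback(new_request: str, existing: dict, min_overlap: float = 0.2) -> list:
    """Return (item_id, score) pairs for feedback related to the new request."""
    target = _words(new_request)
    scored = []
    for item_id, text in existing.items():
        words = _words(text)
        union = target | words
        overlap = len(target & words) / len(union) if union else 0.0
        if overlap >= min_overlap:
            scored.append((item_id, round(overlap, 2)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


existing = {
    "FB-101": "Enterprise admins want bulk export of audit logs",
    "FB-102": "Dark mode requested for the mobile dashboard",
    "FB-103": "Bulk export audit logs to CSV for enterprise compliance",
}
matches = related_feedback("Customer asks for bulk audit log export", existing)
```

In production you would swap the word-overlap score for embedding similarity, but the PM-facing output is the same: a ranked shortlist of already-known, related feedback.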

Janna Bastow: Yeah, absolutely. And as you said, it's about designing the assembly line. What role is AI playing in providing feedback on where those efficiencies are being gained? Like, we can definitely see how, if you give it some information, it can chew it up and give you better, more [00:38:00] concise information

and smooth things out that way. But what about it feeding into and creating a sort of virtuous cycle of better product management processes?

Chris Butler: Yeah. I think data is always one of those things that we need to do a better job with in product operations: a lot of the time we don't really track or think about engagement within an organization.

We believe we know what is going on, but we really don't. Yeah. And so this idea of treating product operations as an internal team that is really trying to instrument and understand what is going on, I think that's really important. Now, think about the future of software development (and maybe I'm taking this in a different direction than you wanted): with the use of coding agents, and the fact that coding agents will continue to get better over time.

I think there's definitely a need for better orchestration, but I still see that as probably an engineering task. What I think is going to be very interesting, if people are familiar with mob programming: mob [00:39:00] programming is basically like pair programming, where the N is two for pair programming.

It's two people: one person is at the keyboard doing the programming, the other person is next to them, thinking and asking questions and things like that. The reality is a product team is a cross-functional team that is doing things together. And it may be that engineers go off and engineer and develop for a little bit.

Product managers go off and listen to customer stories. Designers build mockups in Figma or something like that. But there's a lot of this cross-functional stuff, and if we think about mob programming, it's about how we go from an N of two in pair programming to an N of five or six or seven, which is the cross-functional team.

And then I think the agents that are there in the future end up actually helping not only orchestrate what we're going to build, but also automatic prototyping and asking key questions within this mob. I created a design fiction for this internally, which is: what is the meeting transcript of a group of cross-functional humans with a couple of agents that are helping them?

And how might we use that kickoff meeting as a way to really do this type of stuff? So, looking to the future [00:40:00] a little bit, I think it will change that. We still want to think about the mission and vision around these products; we still want to have that out there. So I don't think a product manager is just going to specify what we should build right this moment, all the time.

I think there's still this longer-term question of how we sequence this towards actually solving the full problem for a customer. But I think there will be an awful lot of: how do agents actually help the work? How do humans orchestrate, and how do agents orchestrate, to be able to make the best product together as a multi-agent team, which is humans and agents together?

So I think that’s the future of that type of development.

Janna Bastow: Yeah. And this has been a big shift already. I'm sure we're going to see even bigger shifts down the line. How do you bring skeptical team members along for the ride, those who are worried about AI taking over the work?

Chris Butler: Yeah, I think it's just when they try it out. Whenever people start to try this out, they see the ways that it fails.

And understanding how much you should trust the technology is really important in leveraging it. There's a great paper I like to reference by Parasuraman, from, it's like '86, called Trust in Automation. And the core of this [00:41:00] paper is really that when you are building automations, you need to set the right level of trust based on the capability of the automation.

And so if people overtrust the automation, they will enable it at times that they shouldn't. And if they undertrust the automation, they will never engage it in the way that they should. Yeah. And there are a couple of other types of things there as well, but I think it's about finding that zone, in the case of reporting, right?

In that particular case, we do not want to build something right now where it pulls a bunch of information, does the transformation with a template and an LLM, and then automatically posts for a senior leader. We don't want that, because we know that there are problems with what it synthesizes in some ways.

We're starting to dial that in. Also, there is a human judgment and shaping capability that happens in status, right? So we may have something that is red internal to the team, but that leader knows the redness of that problem will be taken care of eventually. So it's maybe yellow to my senior leader, because I don't want to escalate to them.

I don't want them to get involved yet. And so I think that's where, again, understanding what machines and humans are best at, understanding what the right level of trust is for this [00:42:00] type of thing, and really deciding that, I think, helps those skeptics see that we're actually being more deliberate in the way we're implementing these kinds of automations.
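The trust-calibration point above can be made concrete as a routing policy: instead of auto-posting an AI-drafted status report to leadership, gate each draft by how much trust the automation has earned for that task. The trust levels and routing rules here are illustrative policy for this sketch, not a real GitHub system; note that even at the highest level, a human still posts.

```python
# Sketch: route AI-drafted reports through a human based on earned trust.
from enum import Enum


class Trust(Enum):
    LOW = "low"        # human rewrites, using the draft as raw material
    MEDIUM = "medium"  # human reviews and edits before posting
    HIGH = "high"      # human skims, then posts


def route_draft(draft: str, trust: Trust) -> dict:
    """Decide what happens to an AI-drafted report; never auto-post."""
    action = {
        Trust.LOW: "treat-as-raw-material",
        Trust.MEDIUM: "human-edit-then-post",
        Trust.HIGH: "human-skim-then-post",
    }[trust]
    return {"draft": draft, "action": action, "auto_posted": False}


decision = route_draft("Status: yellow. Rollout slipped a week.", Trust.MEDIUM)
```

The hard-coded `auto_posted: False` encodes the design decision Chris describes: the human judgment and status-shaping step is kept in the loop on purpose, regardless of how good the drafts get.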

Janna Bastow: Yeah, absolutely. And so how do you prevent making a system that just speeds up an already bad process? Like Susanna in the chat said, what do you do about data quality, garbage in, garbage out? She had an example around product manager customer calls going, do you like my feature?

Chris Butler: Yeah. I mean, I've done an awful lot around research and stuff like that, and I've done a bunch of talks about how to get product managers to do better research, because bad research is sometimes even worse than no research, I would argue.

And so I can go on and on about that, but what I would say is, in that particular case, there is real value, and there are a bunch of people building startups around this right now, around how you do a better job of user research.

And one of the first things they end up building, most of the time, is actually a critiquer of the interview guide that you're creating. And there are some basic things, right? You shouldn't be asking yes/no questions. You should [00:43:00] be asking about what they've done in the recent past rather than in the future. You should be asking for examples.

And so, to me, I could very much write you a prompt in about 10 or 20 minutes that would attack those questions in that very particular way. So what I would say is, there needs to be care in understanding what the end goal of this is. Yeah. It's not more reports; it's actually better understanding and better information flow within the organization. And it's about always looking a level up, from a systems thinking standpoint: really understanding that work is about teams of people rather than an individual getting a task done.
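A toy version of the interview-guide critiquer mentioned above can even be done without an LLM: simple heuristics that flag the two failure modes Chris names, yes/no questions and speculative future questions. The word lists are illustrative; the real version would be a prompt along exactly these lines.

```python
# Heuristic interview-guide linter: flags yes/no and speculative questions.
YES_NO_STARTS = ("do ", "did ", "would ", "will ", "is ", "are ", "can ", "could ")
FUTURE_MARKERS = ("would you", "will you", "in the future", "if we built")


def critique_question(question: str) -> list:
    """Return a list of issues with one interview question (empty = looks fine)."""
    q = question.strip().lower()
    issues = []
    if q.startswith(YES_NO_STARTS):
        issues.append("yes/no question: rephrase as open-ended")
    if any(marker in q for marker in FUTURE_MARKERS):
        issues.append("speculative: ask about the recent past instead")
    return issues


bad = critique_question("Would you use this feature in the future?")
good = critique_question("Tell me about the last time you exported a report.")
```

This is the "10 or 20 minutes" point in miniature: the rules of good research questions are well known and mechanical enough that encoding them, in code or in a prompt, is cheap.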

And service design talks a lot about this complexity, this kind of systems thinking. Product operations people need to think about this stuff. But I would say, again, there's this hazard that people dealing with so much toil just can't get their mind off completing the toilsome work

rather than trying to think. And so that's why I think product operations should be there: we are there to help point out that there's an inefficiency inside of this process. And I think it is actually bad if the product operations [00:44:00] person inside of your org is seen as just always adding process rather than removing process.

Yeah. If you can't remember why a process exists and no one likes it, even the leaders, you should definitely remove it as fast as possible. So I think those are more like basic team dynamics that we need to think about.

Janna Bastow: And yeah, that's actually really interesting. You mentioned being able to write a prompt in 20 minutes or so, right?

Yeah. And I'm a huge believer that there are good prompts and bad prompts. Yeah, there are. And that this is a skill that product people need. It sounds like you are building stuff internally, but ultimately, at the end of the day, there's a lot of prompting going on, and it's the same thing that's happening in other tools externally.

Yeah. Any tips on crafting prompts, or on how to learn to identify whether it's going to provide good information or not?

Chris Butler: I think the first thing is really experimentation. I think over time I've started to realize that unless I'm clear in certain ways, it's not going to actually give me the output I want.

I think also giving examples is really important, but there are plenty of guides. I think every major model producer has a, here's a prompt guidebook, basically. So I'd read through one of [00:45:00] those. I would watch a video, too, about someone prompting. But in the end, it really is just about: are you being clear about the objective?

Do you give it the steps that it needs to do? Do you have an example? Now, there's one problem with prompting that I would just say I've found, which is that it's very hard to tell a model not to do something. And that's because the idea of negatives doesn't make any sense to it.

Yeah. But there are techniques. For example, say you don't want it to synthesize or hallucinate. I would say hallucination and creativity are not quite analogs, but they're similar, in the sense that sometimes I want a hallucination because it actually helps me, personally, as a human, think of something more creatively.

But I'd say what you want to do is make sure that it's calling out if there is information that's not available, for example. Yeah. Please mark it as a question, or ask me that question. And so there are some things around instructions where you're trying to get it to mark where there's a lack of knowledge.

It can be tricky, but I think it's really valuable. And that's why I think we should always treat these things as drafts, not as copy-and-pasteable output. Yeah. And so maybe one really important thing about [00:46:00] prompting is to just try it out with real data.

Try to write a prompt and see what the output is. And there are lots of ways you can do that: we have a model playground in GitHub, and there are lots of other places, like ChatGPT and others, as well.

Janna Bastow: Yeah. And that's actually one of my favorite things: saying in the prompt, once you get this, stop and tell me what you've understood, and then ask me more questions to clarify.

That's right. So that you can do more with it. And it's actually quite interesting to see what it often comes back with, where it goes, oh, what about this? You're like, oh, that's a good point, I can talk about that for a couple of minutes and give you some more info.

Chris Butler: This is why I think different framings for prompts are also really interesting.

The most important thing for these systems is actually the kind of opinionated context that you're including. The prompt is part of that context, but then so is what data you're using, and that's one of the hardest problems for coding agents, actually.

Like, what out of all of the code base is most appropriate right now for us to include? Yeah. And what is the actual task that you're trying to give someone, which is the prompt? I think those things are really hard, actually. And experimentation is really important in [00:47:00] that case.

Janna Bastow: Yeah, absolutely. And so when it comes to deciding what sort of things to automate, are there any things that flash up red flags, don't touch this? Or things that you think are riper for disruption than others?

Chris Butler: Yeah. I do think that for anything where it should be a human making a judgment call, you should prepare as much data as possible for that.

So again, doing stack-rank prioritization: I think having a completely automated system for that is probably a bad idea. But having a system that helps enable the team to then make prioritization decisions and do triage, I think, is really valuable. Using a system to auto-generate an entire spec for your feature from one line you give it, that is a bad idea, because it will give you a really bad spec.

It will look like it’s a spec, right? It will give you all the stuff. And that’s very confidently

Janna Bastow: bad, isn’t it? Yeah,

Chris Butler: It's really bad. And it's not differentiated, it's not imaginative, and it doesn't really think through the more important things. But what you should use it for is to say: what am I missing here?

Or what are scenarios that I haven’t thought about? [00:48:00] Or what is part of this plan that is gonna go wrong? Like I, I’m a big fan of like red teaming and things like pre-mortems. And so I use those tools to actually help me expand my thinking. Those are some things I would think like right outta the box that you should be focusing on.

Is not those things, but like how does it assist you in being better at writing, for example, is really helpful.
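The “expand my thinking” uses Chris lists, what am I missing, unseen scenarios, pre-mortems, red teaming, can be captured as reusable prompt framings. A minimal sketch, with framing names and wording that are my own illustration rather than any tool’s actual prompts:

```python
# Sketch: reusable critique framings for a draft, following the idea of using
# an LLM to expand your thinking instead of generating a whole spec. The
# framing names and wording are illustrative, not any real tool's prompts.

FRAMINGS = {
    "gaps": "Here is a draft plan:\n{draft}\nWhat am I missing?",
    "scenarios": "Here is a draft plan:\n{draft}\nList scenarios I haven't considered.",
    "premortem": "Assume this plan failed a year from now:\n{draft}\nWhat went wrong?",
    "red_team": "Act as an adversarial reviewer of this plan:\n{draft}\nAttack its weakest assumptions.",
}

def build_prompt(framing: str, draft: str) -> str:
    """Wrap a draft in one of the critique framings."""
    return FRAMINGS[framing].format(draft=draft)

print(build_prompt("premortem", "Ship one-click export in Q3."))
```

Each framing keeps the human’s draft at the center and asks the model only to critique it, rather than generate the plan wholesale.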

Janna Bastow: I like that. We’ve actually baked that right into ProdPad. So when you say, hey, help me flesh out this idea in more detail, it’ll give you some suggestions and things you can do, but it always keeps the human in the loop.

It says, hey, here’s what I came up with. What do you think? Now edit it and we can work together on it.

Chris Butler: That’s right.

Janna Bastow: But it also generates its thoughts on risks and limitations, so any challenges that might come up. If you come up with an idea, it’ll call you out, being like, hey, have you thought about the privacy implications of this?

Or, this is probably illegal, or whatever.

Chris Butler: Yeah, that’s right. And we have an internal space that we’ve created for kicking off an initiative. Sometimes it starts as just an idea, but then there’s a bunch of questions that you want to work through over time.

So I totally agree, that’s a great way to do it, especially when you’re really early, before you have a more well-formed idea. You [00:49:00] need that kind of Q&A back and forth.

Janna Bastow: Yeah, absolutely. And this is such a fast-changing space. I’m really interested to see where this is gonna be.

If we were having this conversation three months from now, or six months from now, it’d probably be entirely different lingo and different ways of thinking about things. What are you hoping your product management processes are gonna look like in a year’s time?

Chris Butler: Yeah, I actually vibe coded this weird prototype of a dashboard for a product ops person a couple of years from now.

It’s just been an example internally, but some of the key things in there: really, we should be about how information is flowing through the organization. What that means is, are there bottlenecks? Are there dependencies that are having problems? And I think this is where project management ends up overlapping a little bit: we want to design better systems, and then project management is more about how do we help remove the risk in that particular case.

Yeah. So I think we’re enabling those systems for project managers a lot of the time. I would also say that, again, we’re there to help design the way the team works together, and to experiment with that. So I think there are gonna be new ways of thinking about this. Right now we would write down, in a document somewhere that turns into a discussion post, here’s the new process, right?

And in that case, people are the automation, usually. It’s like, we’ll do step one, step two, step three, and this person does that, this person does that. And maybe there’s some automation inside that. I think more and more of those cases where we can write down here’s the process are actually gonna go to that almost no-code type of situation, where it will be automated in some way.

But then finally, when we look at the communication flows, we should start to ask: is there duplicative work happening, right? Is there work that is diverging in some way because of these processes? So we almost become a meta-analysis of the way that processes work within the organization.

And so whatever we wanna call that in the future, I see it as a cross-functional team that cares about the way teams work. Maybe it’s still product ops in the future; for now we just had to figure out a term for this. But there are other ops roles that do this too, [00:51:00] right?

Like design ops, research ops, things like that. They think about how their team does its work and works with dependencies throughout the organization. I think it should be more of a cross-functional team that is thinking about the way that all people work together, rather than just one discipline, probably.

That’s really helpful. And I used to say that product operations was PMing the PM experience. But just based on the direction of the world and everything, I think I’m now wrong, unfortunately. It’s gonna be this cross-functional team that operates to help the cross-functional team figure out how to do their work.

Janna Bastow: That’s a really good point to finish on. And I think something that maybe we revisit in a year’s time and go, how wrong were you on all this, and what were you right on?

Chris Butler: I am wrong today, I just don’t know how yet. That’s my motto.

Janna Bastow: Yeah, exactly.

We’re all here learning and I love it. Thank you so much, Chris, for sharing what you’ve been learning and what’s been working and not working with your own internal ops. We’re running up against time here, but I just want to point out that we have some resources for people who are looking to unlock greater efficiency within their product operations.

We’ve got [00:52:00] some guides on it, and we’ve got a webinar that we did recently, which you can watch on demand. So check those out; we can send these links out afterwards. And also save the date in your calendars for the next one. Coming up on July 29th, we’re gonna have Christian Idiodi coming in to talk about how to influence up and get buy-in for an org-wide company transformation.

So he’ll be talking on some of the same things that we’ve touched on here today, but it’s definitely a chance to think about how you influence up and get the execs on board. And finally, I’m going to leave you with this link to jump in on a demo of ProdPad.

So we talked a lot about using AI to improve your product processes. We’ve built tools in ProdPad that are available to use. You can jump in and start using it to judge your product roadmap, give feedback on your ideas, and help connect up the dots that might be disconnected right now.

It’s a really powerful tool. We’d love your feedback on it, and we’d love to see you around to give it a [00:53:00] try. In the meantime, Chris, thank you so much for your time today. You’ve shared some brilliant insights, some very good hot takes, and some lovely thoughts about where we might be going in the future.

So on that note, thank you.

Chris Butler: Thanks for having me.

Janna Bastow: Yeah, big thanks, everybody, and thanks to Chris. As I said, you’ll get a copy of this recording, so share it round with your friends, and we’ll see you back here, same time, same place, for the next webinar. All right.

Thank you everybody. Bye for now.

Watch more of our Product Expert webinars