Skip to main content

[On Demand] Product Management Webinar: AI Products and Features

How to Build and Manage AI Products and Features

AI is transforming the way we shape products and offer value to our users, and Product Managers are at the forefront of this change. But knowing how and when to leverage AI in your product strategy can feel overwhelming.

Watch ProdPad Co-Founder and CTO, Simon Cast, for a deep dive into the practicalities of building AI products and features, whether you’re enhancing an existing product or launching something entirely new.

.


About this webinar

AI is no longer a future trend, it’s shaping the products of today. Understanding how to leverage AI in new or existing products is vital for any Product Manager wanting to future-proof their career. Watch Simon and learn everything you need to know to confidently enhance existing or build new products successfully with AI.

Drawing on real-world experience (he’s built the world’s best Product Management AI tool) and actionable insights, Simon will walk you through the key scenarios Product Managers are likely to face when working with AI products, features and functionality.

We explored and explained all the different options you have to build them and the key scenarios, considerations you will encounter and what pitfalls to be wary of.

This webinar covers:

  • How to integrate AI features into existing products
  • What it takes to build a brand-new AI-powered product from the ground up
  • The pros and cons of using existing AI models vs. building your own
  • How to evaluate the right approach for your product and your team
  • Strategies for pricing AI features effectively
  • Best practices for managing AI products post-launch
  • The unique challenges Product Managers face in the world of AI—and how to overcome them 

About Simon Cast

With a background in Mechanical and Space Engineering, Simon first started in Product by aiming for the stars and building automation software for satellites. Having honed his product skills, identifying problems, testing, and iterating to achieve useable solutions, Simon relocated to the UK to continue to develop his product skills through consultancy work, and further product manager and Head of Product roles. 

It was around this time Simon met Janna and began working together over weekends and in the evenings to bring their idea of a suite of product management tools to reality, while also contributing to Mind the Product. Eventually, like Janna, Simon was able to focus full-time on ProdPad to help it become the tool it is today.

Maneesha Silva: Hello everyone. Welcome to our webinar on how to build and manage AI products and features. Just a bit of housekeeping. As Megan mentioned a lot of you’re stuck in already, but there is a little chat section to the side.

So by all means chat away, share your thoughts. There is also a little q and a box in the menu, so if you do have any questions at any point, just pop them in there and we will get to them at the end of the webinar, a little q and a section at the end. So we’re here for an hour, but before we get into the nitty gritty for all of those that haven’t actually been here before or haven’t been with a ProdPad webinar before, let me introduce you to ProdPad.

So ProdPad is an end-to-end complete product management platform. It’s a solution, basically built for product managers and product teams to do everything I’m talking about, communicating your roadmap, your ideas, your managing, your [00:01:00] backlog, gathering ideas, analyzing, feedback, and using it basically to inform your product strategy.

If you’re someone that you know likes outcome focused, product management, outcome focus, road mapping, then this pretty much is the tool for you. You can actually try it yourself in our sandbox area. So that’s, it’s a little basically area that you can go and play around with the platform for yourself.

You can explore it at ProdPad.com/sandbox. And while you’re in there, you will also get to see CoPilot. But before we get into that, let me introduce you to our CMO, Megan, who you may have met before, and she will take it from there. 

Megan Saker: Yeah. Just to add to what Maneesha said about ProdPad. So ProdPad also comes complete with the world’s best product management AI.

And this is one of the reasons that we’re well placed to be talking to you today about how to build and manage AI products. So here at ProdPad, we’ve been using machine learning and AI since around 2018. We had a very early iteration of our AI assistant [00:02:00] called.bot who and the functionality still exists today will automatically dedupe your backlog.

So any ideas that are the same get flagged to you automatically link feedback to related ideas in your backlog and vice versa. And we’ve come all the way now to the present day with ProdPad CoPilot which we will talk about a little bit later, but. Enough about ProdPad. Let’s get into this.

So I wanna start by introducing you to Simon Cast here. So in a change to the schedule programming we have the other co-founder, ProdPad co-founder with us this time. So rather than Janna, Simon is here. Simon is Janna’s co-founder or not. You didn’t find Janna co-founded Janna with ProdPad on the other way.

But he’s also our CTO and resident AI expert. So Simon has very much been at the coalface when it comes to building all the great AI capabilities that you’ll find in ProdPad. [00:03:00] So Simon is, thanks for joining us. Simon worries. Simon is here to answer any of the questions either in the chat as we’re going or during the q and a section at the end.

So we’ve got Simon’s expertise with us today, right? Let’s set the stage for today’s discussion on AI in product management. So there are two key factors of AI that we need to be concerned about as product managers. One, how you as a product manager can use AI to make your day-to-day work more effective and more efficient.

And two, how you can integrate AI into your product to gain a competitive advantage and solve bigger and better problems. So just for, just to be clear, today we’re gonna be talking about the second aspect, or sorry, I’ve gone through. So we are gonna talk about the second aspect here, which is namely how to build and manage AI products.

However, if you want to get some advice [00:04:00] on that second part, how to use AI in your day-to-day, then we’ve got plenty of resources for you here. So there’s a blog on how to write AI prompts. We’ve got a whole ebook covering everything you need to know about using AI in your work to work more efficiently.

And finally, that last QR code, there is a webinar that Janna did on that very subject. We’ll also include all these links in the follow up email for you. So for today, we are focused on how to build and manage AI products. So when it comes to building AI products, there are, broadly speaking, four different scenarios that you’re likely to find yourself in or considering.

So one, you are either enhancing an existing product and using an existing sort of third party AI model. Oh, hello I dunno what’s happened there. I updated my Zoom and that’s the result. Two. Scared to put my fingers up, but two. [00:05:00] Enhancing an existing product but using and building your own AI model.

Three, you could be building a brand new standalone AI product. Using an existing third party AI model, or finally, everything’s new brand new standalone AI tool and you are building your own AI model to power it. So today we’ll cover off each of these situations. I’ll let you know the unique considerations and what you need to think about when it comes to each of these and how to choose the right route for you.

Outline the full agenda. So we are going to cover some general considerations and challenges when building AI products across all four of those scenarios. We’ll then drill into the unique considerations when you are building an AI product from scratch. We’ll then look at what you need to think about when you are adding AI to an existing product, and then we’ll get into the [00:06:00] model behind the AI features and products.

Are you going to use an existing third party model or are you gonna build your own? We’ll then look at how to monetize your AI. So we’ll look at AI pricing. And finally then what it takes to manage an AI product, AI features on an ongoing basis. So once they’re out in the world. So let’s start off with the general considerations, regardless of what situation you go down.

What are your motivations for using AI technology? Now, I wanna take a moment here and I want you all to think about this. You’ve all heard of shiny object syndrome, right? And I’m sure, look, I’m sure none of you are building AI features without good reason. But I just wanna say this, in case any of you have found yourself swept up in the excitement and enthusiasm, or you felt under pressure from leadership to do something with AI.

Remember, as with any product feature, you’ll only see success if you are [00:07:00] solving a problem for your users. So don’t forget, don’t lose sight of the usual core product management principles of user research, of hypothesis testing to decide if AI really is the right solution. So there needs to be a problem to solve, and AI needs to represent the best way to solve it next.

Have you set realistic expectations? So this goes for internal expectations with your stakeholders, the team leadership, sales but also external expectations. So what are you saying to the market? What are you promising your users? So for example, when it comes to internal stakeholders, everyone needs to be aware of the realities of working with AI.

And we’re gonna go cover these in some details, but the fact is generative AI is probabilistic and not deterministic. So it can make mistakes. You don’t have the same level of control over [00:08:00] how the feature behaves as you traditionally have. And your internal stakeholders need to understand that and therefore have realistic expectations about how perfect the feature can be.

You’ll also have to make decisions where you are balancing accuracy, speed, cost and experience. For example, typically if speed is the most important thing, then the trade off is a degree of accuracy. If you have to have the best possible accuracy, then it’s gonna come at the cost of speed.

You’re gonna need to allow a little bit longer for the response. And as you make these decisions, as you make these trade-offs, you need to make sure that there’s understanding internally and the expectations are set both based on those ’cause you will have to make those trade offs.

And then externally with your customers, be careful not to over promise and risk disappointing your U users. But also think about how you use your UI to [00:09:00] set expectations in the moment in terms of how long you’re going, they have to wait for an output to appear. So use smart UI design, loading indicators, progress bars and friendly messaging to keep users in the loop.

Next up being, are you ready to move fast? So AI evolves rapidly, goes without saying that models improve. APIs change and competitors iterate quickly. Even consumer expectations are gonna move fast. What was once a delight, a few ramps later, it’s a hygiene factor. So if you are using AI in your product more so than ever before, you need an approach.

product management approach allows for continuous improvement. So you need to make sure you’re not being forced to use a timeline roadmap where you are committed to rigid dates and when to exact features you need to use a Now-Next-Later approach where you declare the [00:10:00] problems to solve on your roadmap and then nest.

Different ideas, different experiments to solve each of those problems. This way you can commit to solving a problem, but have the flexibility to test, learn, pivot and change as AI capabilities change but ultimately end up solving that problem. If you are battling with stakeholders who insist on a sort of feature by feature timeline approach that doesn’t give you that flexibility and that ability to move fast, then we’ve got some resources.

Check out po pad.com. There’s a QR code there for what is actually a ready-made deck that you can present to your internal stakeholders, which shows them the business benefits, importantly of the Now-Next-Later. What you need to know about the risks. Do you know the risks? So before you put any AI features or product into the market, you need to make sure that you understand the very unique risks that come with [00:11:00] AI.

So first up, data security. So what customer data is your AI processing? Could sensitive data leak into the model? You need to be really clear on the security of your customer data, especially when you’re using third party AI models. ’cause you can bet particularly in B2B we experience this, but your customers will ask they might even have internal policies that mean that they have to ask or indeed that will prevent them from using your tool if you don’t have the right level of security.

Think about those Samsung headlines from however long ago it was. So this could literally make or break your product. So think about this ahead. Next risk the non-deterministic behavior. So as I mentioned earlier, gen AI is non-deterministic. You cannot know each and every, what each and every output will be.

So you need to know whether your product and specifically your product messaging and positioning can handle that level of [00:12:00] unpredictable ability. You need to be comfortable with uncertainty and ideally you need to plan for mistakes. Think about the worst case scenario and plan around that bias and fairness.

So bias lurks in every data set. Most models are trained using stuff from the internet. Good or bad. And let’s not forget, whoever builds your chosen model has control over the training data that goes into it. Inconvenient facts or knowledge can be left out. You only need to think about the deep sea, R two and T Square.

So you need to understand. That risk. And then think about what that means for you and your AI product and how you ensure ethical fairness. And then finally, compliance and regulations. So AI laws are still evolving, but G-D-P-R-C-C-P-A AI act like policies, they will all impact your offering.

So get ahead of the legal side, speech and legal team, but I will talk about that a little bit more in a [00:13:00] sec. Bashing on is the rest of your organization ready for AI? Introducing AI is an organizational shift. It is not just a product or an engineering challenge. If your company isn’t prepared to support the sales market, your AI launch could slip at the starting line.

So involve the key teams early on and ensure that they’re ready for the changes. So this is about training your customer-facing teams so they understand how to explain the AI, how to explain the features, how to explain that, and all the security considerations behind it. Also how to report performance feedback to you, how to communicate it to customers when they say, oh, something’s not quite working right.

Sales. So your sales team. We’ll need the right level of technical know-how, which you might find is more than it has been previously, to be able to answer any prospect questions to [00:14:00] be able to explain how it works. The other thing that they’ll need, which is often overlooked and you will get into a flap if you don’t think about this until the last minute, it, they need a reliable environment to be able to demo.

If you’ve got a sales motion in your company to demo the AI, right? And that’s tricky because it’s non-deterministic. You can’t predict the outcome. So spending some time to equip your sales team with an environment and a way of demoing it that you are confident will show it off in, in, in a fair light and that your sales team are confident to to run with and use.

I’ve gone the wrong way. Next up, do you understand the ethical and legal requirements of ai? So I brushed on this a bit with the risks but what are the legal must-dos? So first deal with the known. So find out all the AI regulations that you need to conform to work with your legal [00:15:00] team to ensure you are compliant and that you are communicating that compliance in the right way.

And then a couple of slightly grayer areas where there’s risk. So first is misuse. So there’s always the risk of bad actors misusing your AI. This could put the company obviously in legal hot waters. So you need safeguards in place like content filters abuse detection, certainly some ethical AI guidelines published.

Another gray area is accountability. What happens if your AI gives bad advice or makes a bad recommendation? Who’s responsible there? You are going to have to think about this in advance. Work with the legal team to figure out how you would navigate that. Our suggestiveness to build accountability into the system somehow.

So let users know where suggestions are coming from and importantly that they have the final say not the AI. [00:16:00] So now let’s get into the specific considerations relating to those different approaches. So let’s start with when you are building an AI product from scratch as opposed to adding AI to an existing product.

So there are a number of key challenges you’re gonna face in this situation. First off, a lack of data. This is one of the most significant hurdles, goes about saying, obviously AI relies on data to function effectively and platforms that have been in the market for however long have the advantage of years or whatever, of user data that you won’t have.

So without that data, you’ll need to come up with other ways to gather the context that your AI needs to provide valuable and useful insights. And you’ll realize what I’m talking about there as we move on. But even if you are using an existing third party AI model.[00:17:00] 

Where you are not obviously providing training data to the AI to get the most relevant results, you will want to be providing that AI with some context data against which it can draw each and every time it’s generating an output. So that’s the data really that you need and you’ll have to find elsewhere.

If it’s not existing, you have no inbuilt customer base. Second point, you’re starting with zero users. And yes, this is always a reality when launching a new product, but if you are weighing up, should we add AI features into our existing product or should we, should we break this off and have a completely new product?

Then just think about that, the disadvantage of an absolute standing start. Finally trust and credibility. So it can be necessary to overcome skepticism with new AI products. So unlike established products, which are obviously built on trust, with a new AI tool, you have to prove reliability, accuracy [00:18:00] and security.

So you’ve got to convince users to trust your AI with their data or decision making. And that’s not easy. But with a full knowledge of all those challenges when might it still be worth a punt? Sort of two situations, really. One, if you have a large and unique set of proprietary data, something that no one else can replicate, whether it’s industry specific insights, user generated content proprietary research, you’ve got a competitive edge with that.

So you stand a good chance of creating something that can really stand out in the market, offering something that those sort of broader, more generic tools can’t. The other factor is if you have identified a problem area or a problem that is underserved. Most people’s first experience with gen AI tools, certainly broad platforms like chat, GPT.

But if you found an industry or a niche that has very specific [00:19:00] workflows or needs that could be better served with a focused AI tool and give them better results than a general tool then potentially this is worth building your own model. An example of that might be legal professionals needing AI that understands like case law, citations, particular legal jargon, something potentially generic AI models may not handle that well.

And that could be a case for building your own model. But what if you are currently managing a longstanding or indeed a short standing product and you want to enhance your offering with ai? So are the considerations the same? Short answer, no. So let’s cover. There are a number of reasons why adding AI to your product might be worth the effort from gaining competitive advantage to better solving problems for your customers.

And AI can be great at [00:20:00] accelerating what you are already trying to solve. So take us here at ProdPad, for example. So when Janna and Simon founded ProdPad back in 2012. It was with the express purpose of saving product managers time, of freeing them up from the grunt work to do more of what sort of truly matters.

Discovery, strategic decision making, and that’s always been broad. Padd, raised on data. But over the last few years we have been able to skyrocket our journey to that reality for PMs, thanks to AI. So the whole premise of ProdPad as a product management hub built around an agile Now-Next-Later roadmap was to streamline how PMs work and make everything more efficient.

So don’t waste hours making static roadmap presentations each week. Use our dynamic system, don’t waste hours creating status communications for stakeholders. Automatically push notifications through Slack or teams. The list goes [00:21:00] on. But now with CoPilot, we are saving a crazy amount of time for our customers.

So the same problem, the same mission that we’ve always been on, but it is absolutely as I say, skyrocketed. So we’ve now got a feedback analysis AI tool called Signals that automatically analyzes your entire feedback or indeed feedback based on whatever filters you put on and surfaces the themes across everything.

We’ve given stakeholders a direct line to CoPilot through Slack so they can ask questions, any questions that they have, roadmap. Updates, feedback updates questions around the backlog so they don’t PMs anymore. CoPilots writing user stories, requirements, product visions, OKRs, et cetera, et cetera.

It’s even processing file uploads, importing data for our customers. So all of that is an example of how AI can support your core [00:22:00] problems to solve, just like CoPilot does for us. But it can also open up new problem areas. So either for your existing customer base and increasing the value that you are providing or opening up new audiences and new use cases.

So what are also, sorry. You can move fast. You can move a lot faster this way than building a standalone product because you already have a data set. Your existing data means that your AI features can immediately start doing cool stuff for people. So again, take CoPilot, prop already holds all the data for a company’s product roadmap.

Its entire backlog, all the feedback, OKRs, user stories, workflow, progress, blah, blah, blah. So we could very quickly offer our customers an AI assistant that could do things like generate new ideas, new initiatives because it already understands the complete sort of context of their product, what they were trying to do.

So it could help with [00:23:00] ideation and give genuinely relevant and useful ideas for the PM to explore based on everything you knew based on the context. But what are the considerations? It’s not all plain sailing. There are a few things that you need to think about in this scenario.

One, how is your data structured? So yes, you have data, but if it’s not structured in the right way for the AI, it ain’t gonna work. So it could even get in the AI’s way and make things worse if it’s not structured in the right way. So think about that in advance. Two, are your customers ready? So what is their reaction likely to be to the AI?

Do you need to be mindful of some sort of sensitivities and fear around things like being replaced by your AI tool? We mentioned earlier the possibility of company policies around AI. So look into that. Do you have everything ready to allay fears around security for example. And then how [00:24:00] is your AI gonna sit within your current pricing?

So this is slightly harder than if you were launching a fresh new product. You’ve got an existing pricing structure. How does AI fit into this? And we will come to this. We’ve got a whole section where we’ll talk about pricing. Right now that we’ve talked about building a standalone product, we’ve talked about adding AI to an existing product.

But what about the model that underpins either of those? Again, there are two possible routes. You are either using an existing third party AI model or you are building your own. Let’s start with building your own. Okay. First thing to say is that this is the less common approach by far. And you’ll see why when we delve into the challenges.

But when might it be the best approach? A couple of scenarios. There always seems to be a couple of scenarios, doesn’t there? One, you have a super specialized use case and you don’t think. The [00:25:00] outputs you need the outcomes from the AI would be achievable by just fine tuning an existing general model. An example might be a tool that identifies sort of plant species from photos.

In that case, you’d need a model that was trained on a very specific set of images. If you are very specialized, you might know for sure that a general third party AI model has absolutely no access to the very specific data that you need it to be trained on. Maybe a piece of poetry data. If it’s not enough for the AI to draw in the data as part of its output.

If you actually need it to remember and learn from the data, then you’ll need to build your own model and use that as the training data. There’s also the risk factor and potential impact. So if your product is designed to give recommendations or advice findings, maybe even diagnoses that would have significant impact.

If they were wrong, then you might [00:26:00] not be happy to trust a third party model over which you obviously have very limited control. So if you think you are gonna go down this road and build your own model, here are the things that you need to think about. So first of all, have you got enough training data?

You need loads to train an IO model like loads. So have you got that? And then once you’ve built your model and you’ve trained it up, you’re going to need to have some data left over with which to test the model. It’s also not enough to have a mound of data. It needs to be labeled effectively. So it’s not throwing a massive data set at a model and hoping for the best.

You need to guide the model and help it understand exactly what it’s looking at. So is your data, have you got enough? Is it in the right state, have you got the right team? So this ain’t easy, like this requires a lot of very specialist knowledge. On this slide here, I’ve put we’ve, we’ve put [00:27:00] some suggestions of the types of roles splitting it into probably essential versus some nice to haves.

So if you are going to build your own model, have you got the right team behind it? Next up, have you got the computational power? If you’re doing this, you need the umph to run it. The computational power required for both training and deployment of your own AI model is considerable. The demands will vary, obviously significantly depending on the complexity of the model, the size of the data set, the type of tasks you want your AI to perform.

But even the models on the smaller side of the spectrum will still need hefty hardware behind them. And you’ll need enough power. For the initial training and for the ongoing hosting and running of the model. So on the training side, you need immense computational power. Can’t stress that enough. This [00:28:00] process involves running thousands or even millions of iterations of data through the model to adjust parameters, improve accuracy, so you can’t do that on your laptop.

The hardware needed typically includes those things on the left of the slide and then on the hosting side, and to run the model, you’re going to need to think about the requirements there on the right. We’re obviously getting pretty technical here. You’ll find more details on these computational needs in the accompanying ebook that we’ve written on the subject of today’s webinar which will be included in the follow up email, and I’ve got a link to it later on in this slide deck.

Okay. Have you got the technical infrastructure to support building the model? So it’s not just processing power you need, there’s a whole technical infrastructure that encompasses this as well. So you will need large scale data storage. So your AI model will be relying on large volumes of high quality data, and that requires proper storage [00:29:00] and management systems.

So you need databases, data lakes, data pipelines, versioning tools. You are also gonna need efficient monitoring and maintenance. You’ll need the right infrastructure in place to help you log and track system health and spot performance degradation over time. And potentially have automated retraining pipelines that help you update.

The model training to improve performance dip. So there is a whole world of technical infrastructure you need to think about next. Are the costs feasible for the business? So given everything we’ve just said about computational power and data storage, cost is a big factor. Nothing, none of that comes cheap.

So you need to be certain that the business can sustain both the upfront costs involved in building and training the model and the ongoing costs of hosting and running your AI, AI in your application. And remember, if your AI product is successful. [00:30:00] Those hosting costs will scale upwards. So you need to anticipate the scalability of the costs, and you need to know whether that’s feasible for the business or will that make it cost effective.

When it comes to cost, you are going to need to balance what you buy in terms of power and storage with the acceptable performance level that you need to achieve. Because over provisioning hardware can be expensive, but also under-powering servers can lead to slow inference times and poor user experience.

So you’ll need to find the right balance where the costs are acceptable to the business while the performance is enough for the user. And if you can’t find that right balance then probably you need to reassess whether you’re building your own model, whether that’s the right option for you. If any of that has put the frighteners up you we wouldn’t blame you. Building your own AI model is a major technical undertaking. Then it’s time to think about the alternative, which is [00:31:00] using an existing model. So here goes the final scenario and option here. Here are the pros to this approach.

So simply put, it’s cheaper, it’s faster, it doesn’t require as much data, and it means you can spend all of your time developing your application rather than building the model beneath it. How does it work? So there are two ways of doing this. Of course, there seems to be two ways of doing everything in this webinar.

But yeah, two ways of using an existing third party model. You, one you can self-host an open source. LM or two, you can hook into an element at the LMM hosted elsewhere via APIs. So let’s take a look at that first option when you use an open source model and you host it yourself. So on the right, there are some options of open source [00:32:00] AI models.

With this route, you’ll need to consider computational power ’cause you are hosting the model yourself. It’s not as much as you need to build and train a model, but you’ll still need some power here. How you host the model is up to you. It can be on premise with GPUs cloud based through AWS Google Cloud, et cetera.

Or it can use edge deployment running smaller models on devices like Jets and Nano or stuff. The advantages of self-hosting are a greater level of control over the model. You’ve got the ability to customize certain elements. It’s also a better route if data privacy is gonna be an issue as you are keeping all the data on your server.

Next using an API based model. So some options there on the right of the slide. So here you are simply using the API of an existing model hosted elsewhere to make calls and provide responses. So here you have no [00:33:00] heavy infrastructure and you can typically get started to get up and running a lot faster.

However, you will be sacrificing control and customization of the model and you will need to think about the costs associated with each and every API call. So how do you choose? Here are a few factors that might influence your choice. So you could go self-hosted if, like I say, data privacy is paramount.

If you have the existing infrastructure and processing power or you can finance it or if you need the highest level of customization. Of a model, obviously without building your own go API based. If speed to market is paramount if you can make your AI product work through fine tuning rather than actual customization of a model and if you are confident that you can support a sort of scalable per request cost as your product usage increases, or secret third option adopt [00:34:00] a sort of hybrid approach.

So you could use an API based model to test your concept and then move to self posting once you’re fully confident. Once you feel like you’ve got, product market fit you could, or also if you’re using an API based model is switched between providers in different circumstances.

So if a call to one API. Falls down or is slow. You’ve set up an alternative. So if one model or if one model is better suited to a particular task then you can use different models. That’s an approach. So what do you need to think about if you are using an existing model? Either, in, in either one of those two tracks there?

So first of all, the costs. So you have to pay for this. So you need to think about how you handle those costs. How do you ensure that this is scalable for the business? This normally comes down to obviously how you price your AI products and features, which I’ll talk about in a moment. If you are [00:35:00] leaning more towards the self-hosted option then you need to make sure you’ve weighed up the costs of hosting against the cost of an API based option to make sure it’s worth it.

So Simon here crunched the numbers on this and believes you’re looking at around $13,000 of AI API calls per month before it becomes more economical to host the model yourself. Maybe have a think about those numbers. Next consider performance issues. So you are completely beholden to the performance and stability of a third party model as simple as that.

There will likely be outages and downtime and slow responses, and you’ll need to have considered that in advance and think about how you’ll handle that. You can, you can deal with the risk of sort of suboptimal performance through technical solutions or customer care. So technical solutions, you could put in place a process, as I just mentioned, that routes an API call to a different provider [00:36:00] if one fails.

On the test customer care side, like I talked about earlier, it’s about setting, it’s making sure your customer teams are setting the right expectations with your users training customer teams to be able to explain performance issues. Also, use in-app messaging and things as we mentioned before.

Next consideration, the dependence on the third party. So if you’re using someone else’s AI model, you’ve got a dependency on a third party, and that comes with risk to the business. Since the performance of that external AI model will have such a massive impact on the performance of your product, you are removing a certain significant amount of control there.

This can make business leaders and board members nervous. However, the important thing to stress here is the chances are this won’t be the first time you are reliant on a third party. So your business is likely to have a number of third party sort of risk points, from hosting providers, CRMs.

So the use of third party [00:37:00] providers isn’t going to be new. I just saw a guy’s comment. I did enjoy asking Chachi PT to create me an image where OpenAI was represented as a controlling overlord. So yeah, I enjoyed that. Yeah. Apologies for the really rubbish images I’ve used throughout this.

What, oh, talking rubbish images. Then, you need to remember that the model doesn’t remember. So this is an important point when you are using existing AI models. Don’t forget that these AI models are stateless. So they have no memory built into the model itself. If memory needs to be captured it has to happen within the application that you build.

So if you need memory, for example, if you need recall for a conversation bot, then you’ll need to build that into your product. I’m sure you are all aware of this, and indeed this is changing all the time. But most existing AI models will have a knowledge cutoff that represents a sort of end date on which the data of [00:38:00] is the end date for the data the model was trained on.

You won’t have the ability to patch a model with new or custom information. So if it’s important that your AI tool is making custom, taking custom information into account, then you’re going to need to layer on rag retrieval, augmented generation. So this involves capturing new knowledge. So maybe it’s from the prompt that the user has given or from some context data that they’ve included and placed it in a store.

Then you’ll need to give the AI a way of drawing the information drawing on the information in that store to inform their response. So remember, when you’re using the assisting model, you will never be able to put new information into the model. You can only fine tune it using system instructions, providing other context sources, context data for them to draw on each and every time they’re called upon to complete a [00:39:00] task.

Which brings me on to fine tuning. Do not underestimate the amount of time, the amount of work involved in fine tuning an existing AI model to get relevant and accurate outputs. Although you can’t train the model, you can overload system instructions within your application that feed the model with guidance on how the output should be presented.

You need to factor in considerable time to feed that in. So feed the model with prompts, ideal response examples to help hone and optimize the outputs so that your application of this AI model is relevant and valuable in the context of your product and your users. A great example of this being done really is with our very own ProdPad CoPilot.

I’ve put a quote here from Simon which I’m sure is horrifying to see. But CoPilot is an AI tool, as we said, specifically designed for product management and product teams. And as such, Simon and the team here invested like close to two [00:40:00] years, priming, fine tuning, prompting the AI model behind CoPilot, painstakingly feeding its system instructions based on real product team experiences.

So it takes a long time to get it really good. So don’t underestimate that. We are really cracking through this pay, so I appreciate that, but there is a lot to get through. I’m gonna move on now to how, talking about how to monetize your AI products and features. So how do you price them?

There are two routes, of course. There are always two routes. Direct monetization. Indirect direct. You are typically charging explicitly for your AI functionality. So there’s an actual, listed price. Indirectly, you are using AI to improve engagement and retention without charging for it separately.

So revenue goes up because AI helps with acquisition. It increases retention. So if we look at direct monetization first. [00:41:00] You’ve got some options. So you could charge for your AI features as an add-on. So users pay extra, obviously if you’ve got an existing product, they pay extra to access these.

You could charge for your AI completely separately. So even with an existing product, let’s say you’ve got a SAS product, it could be a new subscription and an additional subscription for the AI aspect of the tool. You could bundle AI in with everything else, with your existing pricing structure and then put out a price increase across the board to factor that in indirect monetization.

So again, your options are bundling, so you could bundle it in with your current pricing without a price increase. You are using the extra functionality to entice more users. Maybe you’re making a play for market share. You could give a basic version of your AI away for free and then follow a freemium model where users can upgrade for the full bells and [00:42:00] whistles.

Give them an upgrade path to get more, or you can give it all away for free, like the ultimate move for market share. So you get people in the door and hooked with your free AI, and then your upsell path potentially becomes your core product. All the AI is free, but they need to pay for the core product.

And AI is therefore then the acquisition motion that brings people to you. Once you know the theory, the approach behind how your customers will pay or not pay for your AI features, then you need to nail down a specific pricing strategy. And, this will, this, none of this will be new to you.

This is typically what you are looking at for digital product subscription based or outcome based. So the former is a sort of set flat recurring fee. The latter is a variable cost, based on use or results. If we look at subscriptions. Based pricing, like what does that look like when it comes to AI features?

So you could go seat based, obviously in [00:43:00] charge based on the number of users who want access to the AI features or the whole product, obviously if you’re building an AI tool from scratch. The other option is skill-based pricing. So where you vary the subscription cost based on the levels of sophistication or ability of the AI tools.

So here really you are applying a good, better, best tier approach to your AI. What do the options look like with outcome-based pricing? So variable pricing based on usage or results. Here are some options. So usage based pricing, so customers pay based on how much they use the AI features.

The number of AI calls API calls or queries data processes. Processing outcome. Output, sorry. Based pricing. So customers pay based on the volume of AI generated outputs. Reports or content or predictions or whatever it is that you are providing with your AI tool.

So that might be, for example, like a generative AI tool charging a [00:44:00] thousand words like charging per thousand words generated or indeed blocking access. A number of credits or finally, outcome-based pricing. So particularly useful for B2B a customer’s pay. AI delivers tangible business results, like revenue or cost saving.

So it could be like an AI powered hiring tool, just charging based on successful hires. So how do you choose the right AI pricing strategy? More decisions or so many decisions? Look, I’ve added some pointers there. I’m conscious that we are running outta time here. So I’ve added some things to the slide there to suggest when.

Each of these would potentially be the right option. What I would say is, it’s worth thinking about this in advance. Don’t leave your monetization strategy to the last minute. If you are aware of how this is gonna work upfront, then you understand the commercial goals, the commercial objectives that you are aiming [00:45:00] for with your AI initiatives, right?

Final point: what does it look like to manage an AI product on an ongoing basis once it’s out in the market, once it’s doing its thing? What are the unique considerations? And here they are. So make sure you understand the training data of your model. This will be one of your core responsibilities as a product manager, as an AI product manager.

You need to keep on top of that. You have control over the training data or not, you need to be aware of all the biases. Is that changing? How do you react to it? Be clear on the risks of your AI tool and how to mitigate them. So we’ve covered those quite a lot, but stay abreast of changing compliance requirements, for example.

And keep your strategies to mitigate any problems up to date. Keep your AI literacy up to date as we said, it changes fast with new [00:46:00] advancements. You need to stay informed about the changing capabilities so that your tool keeps up managing continuous learning and adaptation.

So AI models aren’t static, so they degrade over time. As the world changes. You’ll need to schedule periodic retraining. You need to monitor data drift. You need to ensure that your model keeps up with user needs. So you need to track performance on an ongoing basis and keep ’em really close and keep re constraining the outputs.

This is really important because AI is unpredictable if you leave it unchecked. It could go mad. So you need to constrain your AI tool to continuously refine those system instructions to improve accuracy. Use techniques like prompt engineering, reinforcement learning, validation rules, strict validation rules to try and prevent hallucinations.

This is ongoing, right? So that would always be a part of your day job now on monitor usage and specifically for the costs. [00:47:00] So you need to keep track of. The API calls if you’re using an external model you need to keep an eye on processing time on storage and how it’s being adopted so that you can anticipate any sort of run, runaway expenses, you can mitigate them.

And then finally, make sure your customer facing teams can answer the questions. We’ve talked about this a lot, but make sure that you’ll keep checking in with your customer teams, that they’re aware of the ongoing changes in your AI. Sorry, there’s a lot there. It’s dense. So what we have also got is everything we’ve covered and a bit more in an ebook form.

So do help yourselves to that. We will include it in the follow up email as I. One of the things I wanted to do is just leave you with the opportunity to go and try CoPilot. So not only because it’ll help you in your work and to manage these products. It’ll help [00:48:00] you be more efficient, but also as an example of an AI product that might give you ideas for your own.

So do go and start a free trial of PO ads and check out CoPilot, q and a. We have five minutes remaining. What have we got? So if you haven’t popped your questions into the, to, into the q and a box yet, do it now. Maneesha, what have we got? 

Maneesha Silva: Yes, so we’ve actually had an absolute ton of questions that Simon has been going through and answering Ah, oh wow.

29 so far, and still some more coming in. I’m just gonna go and while Simon is. Answering this one. If he is done, if he could answer some of the raining ones, I’ll say them out loud for him. And you can give us your thoughts. ’cause here’s the only way we’re gonna get these in. So question one. Oh no, there’s another one.

Question one. Do models have a way this by David Harrison? Do models have a way to provide system level info that’s [00:49:00] separate from user level prompts in teaching? For example I want to tell the AI that it should teach, not give the answers. If I’m able to provide that at a system level that prevents a student from saying, nah, forgot that, and just give me the answer on Simon’s thoughts.

Interesting. 

Simon Cast: Yes, there is. There certainly OpenAI has all of the models have a concept of what they call system instructions or develop a context. And those are where you can specify how you want the model to respond, how you want the style, the, the style of the response, humor with humor, with, professional, whatever you want.

But also that’s where you can specify things like don’t teach, don’t provide the answer or, use the Socratic method to get the person to answer themselves. That’s where the system, what they call system instructions is where you can put those in instructions. Now it’s [00:50:00] not a hundred percent bulletproof.

Every so often people come up with ways of jail breaking at, which then essentially gets the model to forget those system instructions. But it’s for a lot of use cases, it’s your best bet. And you can also. In terms of addressing jailbreaking, you can look at adding your own processing of the prompt to extract, to just check that there’s, somebody’s not doing something like saying ignore all previous instructions or something like that.

Maneesha Silva: Nice. Okay. Another one are these q and a responses included in the recording? They’re the most valuable part and another thing just how amazing Simon is. Both, yes. We will include the q and a actually as a part of the post comms for this, just because there are so many questions and so many fantastic answers from Simon.

So it would be silly not to share them all. Just one, another question I think we’ve got a bit more time is, oh, is there an AI chatbot available to consume in a mobile app, which can [00:51:00] use company data to answer support questions? Oh, 

Simon Cast: So where was I? Yes. So there are what they call small language models, which are essentially what GPT-3 was a year or so ago, two years ago.

It’s just a model with about eight to 12 billion parameters that are possible to run on mobile or edge devices. There are smaller models and it depends. And, but with those models, you do need to do a bit more fine tuning. And you’ll need, probably need a rag system to allow it to have access to your support data.

But it is possible. And I know that people are looking at running the models on mobile devices for that very reason. 

Maneesha Silva: Nice. There was another one, I think someone’s asked this a couple of times. Please answer the 2021 thing. I think that was just a mistake. A mistake where we put chat GBT 2021 when I think it should have been 2023.

I think Simon’s answered that one already. And then one last, oh, is that 

Megan Saker: The knowledge cutoff? 

Simon Cast: Yeah. [00:52:00] 2023. 

Megan Saker: Yeah, that was just a random date that wasn’t specifically referring to chat GPT. Okay. 

Maneesha Silva: And the last one is their ongoing investment to continue the fine tuning question mark. 

Megan Saker: Yeah, that’s interesting.

Yeah. What’s involved in, because we said there at the end, right? You need to keep fine tuning. You can’t just chill out once it’s out there. 

Simon Cast: It, it depends on whether,

so the fine tuning depends on what you are fine tuning against. So if you’ve got I. For example, additional support articles or things coming like that, then you probably want to look at either making those available through a rag process or if you are using a situation where you are adding the, using the model fine tuning approaches that are available, then you’d have to retrain the model on a regular basis to make sure that those, the new answers that [00:53:00] are part of your knowledge base are available to the model.

So there is always that ongoing stuff. Yeah. 

Maneesha Silva: Fair 

Simon Cast: And,

Maneesha Silva: yep. Yep. So just one last one just ’cause it is the last one. And then nobody more please cutoff date for anything. 

Simon Cast: Question mark. 

Maneesha Silva: There, 

Simon Cast: there is a, each model is built on a training set. And that training set is assembled largely from the internet, but other sources as well, depending on what they’ve agreed.

And so that will be the knowledge. The training set is not selected for a certain date. So in the case of GPT-4 four, oh, it’s October, 2023 and the more recent GPT-4 0.1, it’s Jan 2024. So that’s essentially what the cutoff date is. And so when they train new models, they tend to shift the cutoff [00:54:00] date to cover off more recent events.

A lot of it for the big models is about covering off more recent events, more recent data. For example, news articles, new books and things like that weren’t available. And the previous one. 

Megan Saker: Oh, there we are over time-wise. Okay. But I think, like you say, Maneesha, there’s been so many questions. Simon’s been working super hard answering them all. We’ll include it in the email we send out. So just give us a couple of days. We’ll include the recording and all sorts of questions and answers here.

Because we, yeah, we have covered a lot. So thank you so much for joining us. Thanks again to Simon for being with us. And thanks Maneesha for setting it all up and running the show. So hope you found it useful, and we’ll see you at [00:55:00] the next pro webinar. Bye everyone.

Thanks everyone. Bye then. Bye.

Watch more of our Product Expert webinars