Ethics in AI: The New Frontier for Product Managers
We’ve always said the job of a Product Manager is to sit at the intersection of what’s technically feasible, what’s desirable to users, and what’s viable for the business. That Venn diagram has been repeated in countless talks, books, and workshops. But here’s the thing: in 2025, that isn’t enough anymore.
The rise of AI has added a new, urgent dimension to our roles: AI ethics, or more specifically, ethical AI in product management.
Because just because we can build something doesn’t mean we should.
We’re living through a technological renaissance. Every week there’s a new AI-powered feature or tool claiming to revolutionize the way we work, shop, and live. But alongside the excitement, we’re already leaving a trail of ethical messes behind us. Misuse of personal data. Biased algorithms that disadvantage marginalized groups. Chatbots confidently hallucinating nonsense. And generative AI systems that impersonate real people without their consent.
If we don’t put ethics at the heart of our product practices, we risk doing lasting harm to our customers, our businesses, and society at large.
As product leaders, it’s our role to figure out what future we’re actually creating.
What is ethical AI in product management?
Ethical AI in product management means designing and deploying AI-powered products in ways that protect user privacy, minimize bias, prioritize transparency, and avoid harmful outcomes. It’s the fourth axis Product Managers must weigh alongside desirability, feasibility, and viability, ensuring that new technologies deliver value without causing unintended harm.
This isn’t about slowing innovation. It’s about de-risking it. Building something incredible that also stands the test of user trust, regulation, and time.
Why Product Managers need to step up on AI ethics
It’s tempting to see “AI ethics” as someone else’s job – maybe legal, or compliance, or the engineers training the models. But here’s the truth: as product managers, we own the outcomes of the products we put in the world.
- We decide which problems are worth solving.
- We prioritize which features ship.
- We sign off on what “done” looks like.
If an AI-powered feature misleads users, invades their privacy, or discriminates against a group, it doesn’t matter that “the algorithm did it.” It’s our responsibility as product leaders to anticipate risks and make ethical considerations part of the product decision-making process.
Ethical decision-making must be part of responsible AI in product management, embedded into every product conversation.
Ignoring this isn’t just irresponsible, it’s risky. It’ll get your product into the headlines for all the wrong reasons. And it’ll erode the trust that underpins everything we do.
The cautionary tales: AI ethics controversies
The headlines are already full of examples of companies getting it wrong. Let’s take a look at just a few from the past couple of years – and what product leaders can learn from them.
Deepfake voice ads – consent matters
In 2023, actress Scarlett Johansson took legal action against an AI app that ran a viral ad using a clone of her likeness without permission – and in 2024 she publicly challenged OpenAI when ChatGPT’s “Sky” voice sounded uncannily like her own, after she had declined to license it. In both cases, a “what’s technically possible” decision ignored the question of should we? The backlash was swift. Consent isn’t optional.
Lesson for PMs: Treat likeness and personal data as sacred. If your product involves user-generated content, ensure explicit consent and clear usage terms.
Misinformation at scale – context failures
Grok, the AI chatbot on X, misinterpreted a basketball metaphor (“shooting bricks”) as literal vandalism, generating a headline that falsely implicated NBA star Klay Thompson. The error spread widely before being corrected. Funny? Sure. Harmless? Certainly not if it happens in a medical or financial context.
Lesson for PMs: Don’t release AI features without robust guardrails, context testing, and plans for error recovery.
Google’s AI search overviews – trust at risk
Google rushed out AI-generated “Overviews” for search. Within days, users were posting absurd, misleading outputs that undermined trust in the world’s most used search engine.
Lesson for PMs: A half-baked AI feature can damage core trust in your product. Quality and truthfulness matter as much as speed.
Zoom’s AI training terms – privacy backlash
Zoom quietly updated its terms of service to allow user content (videos, audio, chats) to be used for AI training. Users revolted. Within days, Zoom backtracked.
Lesson for PMs: Be transparent about data use. Better yet, make AI training opt-in. Trust lost here is hard to regain.
Clearview AI – crossing the line
Clearview scraped 30 billion images from social media without consent to fuel its facial recognition tech. Multiple governments have since banned or fined the company.
Lesson for PMs: Data scraping without consent isn’t “clever growth hacking.” It’s unethical and increasingly illegal.
Each of these errors could have been avoided with better ethical AI in product management practices. They’re not just tech mistakes; they’re human and managerial oversights. They’re cautionary tales for every product team building with AI.
Frameworks for building ethical AI
Thankfully, we don’t have to invent ethical guidelines from scratch. Some of the biggest players and regulators have already published principles we can use.
Google’s AI principles
Google has seven guiding principles, including:
- Be socially beneficial
- Avoid creating or reinforcing unfair bias
- Build and test for safety
- Be accountable to people
- Incorporate privacy by design
PM takeaway: Use these as a checklist when green-lighting features. Ask: Could this reinforce bias? Do we have user feedback built in? Have we minimized data collection?
Microsoft’s Responsible AI Standard
Microsoft requires all teams to follow six core principles: fairness, reliability & safety, privacy & security, inclusiveness, transparency, and accountability.
PM takeaway: Write a short “transparency note” for your AI features, explaining how they work, what data they use, and what limitations exist. If Microsoft can do it at scale, you can too.
EU AI Act and NIST Risk Framework
In Europe, the AI Act is now in force, with obligations for “high-risk” AI systems phasing in over the coming years. In the US, NIST’s AI Risk Management Framework lays out standards for safety, explainability, and non-discrimination.
PM takeaway: Staying ahead of regulation isn’t just compliance. It’s a competitive advantage. Build explainability and consent features now, and you’ll be future-proofed.
How to stay ahead of AI ethics
Ethical AI isn’t a one-off checklist. It’s a moving target. The tech shifts every week, and so do the risks. Here’s how you can stay informed as a product leader:
- Curate your sources. Bookmark MIT Tech Review, Stanford HAI, Wired, and Partnership on AI. Scan once a week, dive deeper when relevant to your product area.
- Follow voices that matter. Teresa Torres reminds us ethics is a team responsibility. Marty Cagan emphasizes that PMs must anticipate consequences and viability risks. Add ethicists like Timnit Gebru (founder of the Distributed AI Research Institute, known for exposing bias in large language models and championing transparency in AI) and Tristan Harris (co-founder of the Center for Humane Technology, outspoken on the ethical design of tech and the societal risks of AI) to your feed for broader perspective.
- Engage in communities. Forums like ProductLed Alliance and Mind the Product are now talking openly about ethical AI. Join those conversations.
- Share knowledge internally. Set up a monthly “AI ethics brief” for your team. Include one new capability and one recent cautionary tale. Make it part of the product culture.
- Track regulations. Subscribe to updates from the EU, FTC, and other regulators. Know what’s coming, and shape your roadmap accordingly.
Making ethics the “fourth axis”
Traditionally, we weigh Desirability, Feasibility, Viability. From now on, we need to add Ethics as a fourth axis.
How do you actually do that in practice?
Make it a team responsibility
Annie Jean-Baptiste (Head of Product Inclusion at Google) stresses that inclusion and ethics can’t be bolted on at the end. They have to be baked into everyday team decisions, from PM to design to engineering. Don’t silo it. Build ethics into your definition of done.
Use checklists and pre-mortems
Add an “ethical audit” to your discovery process. Run consequence scanning workshops: “If this goes wrong, who gets hurt?” Build mitigations in up front.
Diversify your inputs
Diverse teams and diverse user testing help catch bias before it ships. If your AI is being used in hiring software, test with underrepresented candidates.
Track ethical metrics
Don’t just measure engagement. Track false positive/negative rates across demographics. Track how many AI decisions are explainable to users. Make it part of your OKRs.
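To make that concrete, here’s a minimal sketch of what tracking error rates across demographics can look like. This is an illustrative example, not a production fairness toolkit: the record format and function name are assumptions, and real pipelines should also handle small group sizes and statistical uncertainty.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute false positive/negative rates per demographic group.

    Each record is a dict with keys:
      'group'     - demographic segment label
      'predicted' - the model's decision (bool)
      'actual'    - the ground-truth outcome (bool)
    Returns {group: {'fpr': ..., 'fnr': ...}}; a rate is None when
    the group has no negatives (fpr) or no positives (fnr).
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in records:
        c = counts[r["group"]]
        if r["actual"]:
            c["pos"] += 1
            if not r["predicted"]:
                c["fn"] += 1  # missed a true positive
        else:
            c["neg"] += 1
            if r["predicted"]:
                c["fp"] += 1  # flagged a true negative
    return {
        group: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else None,
            "fnr": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for group, c in counts.items()
    }
```

Run a report like this per release and alert when the gap between groups widens – that gap, not the headline accuracy number, is the fairness signal worth putting in your OKRs.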
Leverage available tools
Use open-source bias detection libraries (like IBM’s AI Fairness 360). Use Google’s dataset visualization tools. These exist to help teams like yours operationalize ethics.
Accountability: who owns AI outcomes?
Here’s the uncomfortable truth: when AI goes wrong, you can’t shrug and say “the algorithm did it.” Users don’t care about your model architecture. They care that your product harmed them.
Product teams own AI ethics
John Cutler reminds us that true product accountability isn’t about outputs, it’s about outcomes. In his standout post, Why Don’t They Trust Us?, he explains that teams often lose credibility when results fall short of expectations:
“Most product development teams are not (fully) trusted to deliver a high‑level outcome or solve a problem.”
For AI features, this means teams must be proactive, not reactive. If a model hallucinates or misleads users, the fallout (loss of trust, reputational risk, and possible legal issues) lands squarely on the team that shipped it.
Ethical responsibility is shared
At Salesforce, Kathy Baxter (Principal Architect of Ethical AI Practice) has taken ethics from abstract principles to operational muscle. In her Medium series How to Build Ethics into AI, she writes:
“Ethics is a mindset, not a checklist… Developers must ask: ‘What is the business impact of a false positive or false negative in our algorithm?’”
This isn’t about adding an “ethics person” off to the side. It’s about making ethical ownership part of the team’s definition of “done,” from PM to designer to engineer.
Specialists can help, but they’re not the whole answer
Larger organizations often set up Responsible AI councils or ethics review boards. For example, Microsoft has its Office of Responsible AI and the AETHER Committee (AI, Ethics, and Effects in Engineering and Research), which review sensitive use cases and set internal policy. These groups play a critical role in establishing guardrails and aligning with regulation.
But as Margaret Mitchell (co-founder of Google’s Ethical AI team, now at Hugging Face) points out, councils can’t sit in every stand-up or review every PR. In her interviews and writing, she argues that ethical intent only works if it’s embedded into the day-to-day workflows of product teams.
“If ethical considerations are tacked on at the end of development, they will always lose out to delivery pressures.”
That means the product trio (PM, design, engineering) is still the front line. Specialists provide the frameworks, but accountability for applying them lies with the people building and shipping the feature.
Generative AI is unpredictable, so plan for it
Generative AI is unlike traditional software. Its outputs are probabilistic, not deterministic, which makes them inherently unpredictable. That unpredictability can lead to hallucinations, bias amplification, or unsafe recommendations, all under your brand.
As Emily Bender, co-author of the influential “Stochastic Parrots” paper, points out: large language models are fluent but not grounded in meaning. They generate text that sounds right, but with no guarantee it is right. That means hallucinations and misleading outputs aren’t edge cases, they’re built in.
For product managers, that demands new kinds of safeguards:
- Human-in-the-loop fail-safes. Don’t let generative AI outputs reach customers in high-risk contexts without human review.
- Clear disclaimers. Set expectations early. Products like ChatGPT warn that outputs may be inaccurate. Your features should do the same.
- Transparency tools. Give users ways to see how results were generated, audit outputs, and flag problems.
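The first two safeguards can be expressed as a simple routing rule at the point where generated output leaves your system. The sketch below is a hypothetical policy, not a real API: the context labels, threshold, and function names are all assumptions to illustrate the shape of the guard.

```python
from dataclasses import dataclass

# Illustrative policy values - tune these for your own product and risk appetite.
HIGH_RISK_CONTEXTS = {"medical", "financial", "legal"}
CONFIDENCE_FLOOR = 0.85

@dataclass
class Draft:
    text: str
    context: str       # e.g. "marketing", "medical"
    confidence: float  # model-reported confidence, 0.0-1.0

def route(draft: Draft) -> str:
    """Decide whether an AI-generated draft ships directly or goes to a human.

    High-risk contexts always get human review, regardless of confidence;
    low-confidence outputs elsewhere are held too. Anything that does ship
    carries an AI-generated disclaimer so user expectations are set up front.
    """
    if draft.context in HIGH_RISK_CONTEXTS:
        return "human_review"
    if draft.confidence < CONFIDENCE_FLOOR:
        return "human_review"
    return "auto_send_with_disclaimer"
```

The design choice worth noting: the high-risk check comes first and is unconditional, so no confidence score – however high – lets generated content bypass a human in the contexts where a hallucination does real damage.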
Skip these safeguards, and you’re effectively saying: “We’ll take full accountability for whatever the AI generates.” That’s not just a product risk… it’s a reputational and regulatory gamble.
The question we should all be asking in roadmap reviews isn’t just “Will this make money?” It’s: Who is responsible when this AI makes a mistake? And are we comfortable with that?
💡 Want to embed ethics into your product craft? Start by centralizing your product vision, OKRs, and feedback in one place. ProdPad was built to help teams balance feasibility, desirability, viability… as well as responsibility.
Why AI ethics is now a product differentiator
AI ethics isn’t just a “nice to have.” It’s becoming a strategic moat. In a world where users are skeptical of AI, regulators are circling, and competitors are racing half-baked features to market, ethics is the thing that will separate the trusted products from the ones that flame out.
The teams that bake ethics into their craft will be the ones that endure. They’ll:
- Build products customers actually trust, not just trial and discard.
- Avoid the scandal-rollback cycle that erodes brand equity.
- Win long-term loyalty from users who feel seen, protected, and respected.
But ethics isn’t something you tack on at the end, like an accessibility checklist you rush through at launch. It has to be part of our craft as product managers. The same way we weigh feasibility, desirability, and viability, we must add responsibility as a fourth axis. That means including ethical considerations in discovery, in roadmap reviews, in definition of done. Not as an afterthought, but as standard practice.
Think about it this way: every roadmap is a manifesto. Every decision about what to build (or not build) is also a decision about what kind of future you’re helping to create. A roadmap that only optimizes for revenue will ship one kind of world. A roadmap that balances outcomes with ethics will ship another. Which manifesto would you rather put your name to?
We’ve always said tools shape behavior. Delivery tools anchor teams in output. At ProdPad, we anchor teams in outcomes. But now the job goes deeper. Ethical AI forces us to anchor in values too.
Because at the end of the day, our products don’t just ship features. They shape the world we all have to live in.