Every few months, product people discover a new tool and behave as if the craft has been permanently solved. First it was templates. Then frameworks. Then no-code. Now it is AI tooling. The pattern is the same: the tool gets impressive, people get excited, and suddenly everyone forgets that building useful products is still annoyingly dependent on judgment.
I use AI tools heavily. I also mistrust them. That tension is the point.
## What Each Tool Is For
### ChatGPT, Claude, Gemini, DeepSeek
These are thinking partners when used well. I use them to pressure-test product narratives, reframe messy requirements, compare options, draft research prompts, summarise technical material, and expose weak assumptions. I do not use them as final authority. The model can be fluent and wrong at the same time. So can humans, to be fair.
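When I want a model to attack a narrative rather than flatter it, I make the critic role explicit in the prompt instead of hoping for pushback. A minimal sketch using the OpenAI Python SDK; the model name, helper names, and prompt wording are my own assumptions, not a prescribed workflow:

```python
# Sketch: use an LLM as a pressure-testing partner, not a final authority.
# Assumes the `openai` package is installed, OPENAI_API_KEY is set, and
# "gpt-4o-mini" is an available model -- swap in whichever model you use.

def build_critique_prompt(narrative: str) -> str:
    """Frame the request so the model critiques instead of agreeing."""
    return (
        "Act as a skeptical product reviewer.\n"
        "List the three weakest assumptions in this narrative, and for each, "
        "state what evidence would change your mind.\n\n"
        f"Narrative:\n{narrative}"
    )

def pressure_test(narrative: str) -> str:
    """Send the critique prompt and return the model's response text."""
    from openai import OpenAI  # imported here so the prompt helper stays dependency-free
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: pick your own model
        messages=[{"role": "user", "content": build_critique_prompt(narrative)}],
    )
    return resp.choices[0].message.content

# Usage (requires an API key):
#   pressure_test("Users will pay for weekly AI summaries of their inbox.")
```

The design choice that matters is the framing: asking "what evidence would change your mind" forces the output toward testable claims, which is the point of a thinking partner.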
### Codex, Claude Code, Cursor, Antigravity
These are useful when the PM needs to understand or manipulate a codebase, prototype a workflow, inspect implementation details, or make small product-facing changes without waiting for a full engineering cycle. The point is not to bypass engineers. The point is to become a more useful partner to them.
### v0, Lovable, Bolt, Replit, Figma Make
These tools are good for turning vague product conversations into something people can react to. A prototype changes the meeting. People stop debating imaginary screens and start arguing with the thing in front of them. That is progress.
### Perplexity
I use Perplexity for research-oriented work: market scans, source discovery, competitive context, and fast orientation. It is especially useful when the question is not "write this for me" but "help me understand the landscape before I decide."
AI should shorten the distance between question and evidence. It should not replace the discipline of asking the right question.
## Where PMs Get It Wrong
The first mistake is using AI to produce documents nobody has earned. A PRD generated from a vague prompt is not product work. It is theatre with bullet points. The second mistake is treating AI prototypes as proof of feasibility. A demo that works in a sandbox does not mean the business model, data flows, compliance burden, support workflow, or edge cases have been thought through.
The third mistake is speed addiction. AI makes it very easy to create more artifacts than the team can actually learn from. The question is not "Can we generate this?" The question is "Will this help us decide?"
## How I Teach PMs To Use The Stack
- Use AI to clarify the problem before generating solutions.
- Use prototypes to learn, not to impress.
- Use code tools to understand system constraints, not to pretend engineering no longer matters.
- Use research tools to find evidence, then verify important claims yourself.
- Use AI to improve judgment, not outsource it.
## The Stack Is A Multiplier
If your product thinking is weak, AI will help you produce weak thinking faster. If your questions are sharp, your context is real, and your standards are high, these tools become a serious advantage.
That is the AI PM stack I care about: not a list of logos, but a workflow that helps product managers move from ambiguity to evidence to decision.