Is there a right way to use AI in product research and strategy?
I write a lot about the wrong ways to use AI in product research and strategy. But is there a right way?
The reality is that many researchers are now expected to use LLM-based tools in their work, despite those tools’ shortcomings. And it can be hard to push back against mandates from above, or against competition from peers who don’t understand the risks.
So…are there ways to integrate LLMs into research while still acting as responsible professionals?
I think we can, and it comes down to two things:
👀 Disclosing exactly how we’re using AI, and
🔀 Differentiating between human and AI-aided analysis
Responsible AI usage starts with transparency.
In the same way we’d disclose a survey’s sample size, we need to disclose exactly which AI tools we’re using in our research and where we’re using them (for example: which model, and whether it’s doing transcription, summarization, or theme generation). Because these choices do impact research results.
We should be comfortable with our decisions about where we’re using LLM-based tools, and be prepared to discuss how the results differ from what a human researcher would produce.
🧩 If we can’t articulate these differences, why have human researchers at all?
(If you need help, check out my past posts!)
Second, we need to do a better job of distinguishing between how a human generates themes from data and how a machine finds patterns.
Both are legitimate ways of exploring data, but they are not equivalent.
Machines can find important patterns! (For a fantastic example, check out Ian Johnson's work on Latent Scope, presented at Rosenfeld Designing with AI 2025.)
But the statistical techniques underlying LLMs also make them prone to data flattening, stereotyping, fabrication, and other problems, especially at the edge cases: because these models favor the most statistically likely output, rare or outlying perspectives tend to get averaged away.
It’s dangerous to present human synthesis and LLM pattern-finding interchangeably, because then we can’t properly account for built-in biases and risks. That kind of naive presentation leads to expensive errors and devalues the very human perspectives UX is meant to advocate for.
🧩 Using AI responsibly requires transparency and a clear delineation between human and AI methods.
We’ll be continuing this discussion in my next AI for UX Researchers workshop for Rosenfeld Media (the last one of 2025!).
👉 Consider joining us, or contact me about custom training for your organization.