The reality is that many researchers are now expected to use LLM-based tools in their work, despite their shortcomings. And it can be hard to push back on mandates from above, or competition from peers who don’t understand the risks. Are there ways we can integrate LLMs into research while still being responsible professionals about it?
When we use AI tools in product design, strategy, and user research, we need to keep something important in mind: the AI is not “understanding” our requests, and even with safeguards, it may not catch when it’s making mistakes.
Error-checking AI output takes time, so we need to be wary of claims of “time saved” using AI for product research. A thought-provoking quote by Emily M. Bender from the March 31 episode of her Mystery AI Hype Theater 3000 podcast underscores this.
Since it launched, I’ve been testing how well gpt-oss-20b does at qualitative research synthesis. I’ve been able to run dozens of giant, 52K-token prompts on my standard-issue MacBook Pro in just a minute or two apiece, and the results have been surprisingly good overall, albeit with the same egregious citation errors we’ve come to expect from LLMs.
There’s a widely circulated piece of bad advice when it comes to AI: “If you don’t know how to use AI, just ask the AI!” The problem with this advice is that it assumes the AI “knows” about its own functioning. Spoiler alert: it doesn’t! (But it is good at making up plausible-sounding answers!)
Researchers should be in the driver’s seat when it comes to AI tools, and that means understanding their tradeoffs just as we do with traditional research methods. My AI for UX Researchers workshop is not about teaching you to use any one specific AI tool; it’s about giving you the skills to critically evaluate the performance of any AI model or service.
In my AI+UXR workshops, I recommend starting a fresh chat each time you ask the LLM to do a significant task. Why? Because UX research tools need to be reliable, and the more you talk to the LLM, the more that reliability takes a hit.
A highlight of last week’s AI+UXR workshop: seeing participants discover their own novel issues with AI synthesis for user research. This is the goal.
We talk a lot about whether AI can replace traditional user research functions. But where is AI letting us explore user data in ways that are entirely new? That’s the question asked by Ian Johnson and Patrick Boehler, in two separate case studies from the very different lenses of data visualization and journalism.
In my AI + UXR workshop, I teach attendees to check the subprocessors of AI tools to understand what models they’re using under the hood. So it was delightful validation to see Simon Willison mention in a May 11 post that he’s started doing the same thing, which underscores the importance of sharing our learnings in public.
I invited 31 researchers to test AI research synthesis by running the exact same prompt. They learned LLM analysis is overhyped, but evaluating it is something you can do yourself.
As more designers, PMs, founders, and researchers themselves begin to use generative AI tools for research synthesis, many researchers are wondering whether it’s good enough to replace their work. The short answer: it’s a lot like using AI for anything else. AI output is “adequate.”
Should user researchers prepare to move into the role of “orchestrating and validating the work of AI systems” like OpenAI’s Deep Research?
Last week I conducted in-person interviews with a high-risk participant pool. Normally I would have worn an N95 mask, but at least 1/3 of our participants also had moderate to severe hearing loss. So instead, I wore a transparent face mask to ensure we’d still be able to communicate. Here’s what worked and what didn’t.
Friction is the #1 killer of research repositories. I recently spoke with insurance technology company Guidewire about strategies for keeping friction low and ensuring that research repositories provide enduring value for researchers and research consumers alike.
Low-cost, “lean” prototypes are the most efficient way to test user satisfaction with innovative experiences. But many product teams either overengineer their prototypes or skip testing with users entirely until it’s too late. In this post, I share five thought starters for teams looking to think outside the box about simulating experiences for user testing.