Why researchers should understand how to evaluate AI tools

Researchers should be in the driver's seat when it comes to AI tools, and that means understanding their tradeoffs just like we do with traditional research methods.

I'm pleased to be offering my AI for UX Researchers workshop through Rosenfeld next week.

This workshop is not about teaching you to use any one specific AI tool. It's about giving you the skills to critically evaluate the performance of *any* AI model or service, the way engineers and scientists do.

There's a lot of risk in how many product teams are currently deploying AI. AI is designed to produce credible-looking output that's hard to evaluate, even for trained researchers.

But by knowing what to test and how, you can recognize these risks *before* they create problems for you and your team, and make better tradeoffs.

🧩 Being able to rigorously evaluate AI tools gives your work flexibility and value, even if the AI models or tools change.

Hope to see you next week!

(Note: We’re offering this workshop one more time in October before the end of the year. Sign up today!)