Research studies are easy to automate; research relationships, not so much

It's easy to fall into the trap of confusing "output" with "impact." But for a researcher working in a world of AI tools, that confusion is a quick way to devalue your own work.

In our panel at Advancing Research, Noah Bond, Robert Fabricant, Sean McKay, Kate Towsey, Jemma Ahmed, and I discussed the importance of research fluidity and adaptability.

We talked about how the most impactful research may not look like a traditional study.

The most effective researchers may not deliver many traditional reports, because they're partnering directly with stakeholders and proactively injecting insights exactly where they're needed.

Research studies are increasingly easy to automate. Research relationships, not so much.

This is one of the reasons I get concerned when researchers describe the value of AI research tools in terms of "studies completed," "participants run," or "hours of research time saved." The number of studies you've run has nothing to do with the impact you've had on your business and your team.

This output-focused framing of AI's value is an oversimplification that does real harm to professional researchers.

These are increasingly important discussions to have in the current research landscape, and I want to thank my fellow panelists for their candor and vulnerability in the face of some tough topics.

I'll be doing a workshop later this month to further explore the pros and cons of AI for UXR, and to help us challenge the output-focused lens. To hear more about that (and upcoming AI office hours), sign up for updates.