What do UX practitioners worry about in implementing AI?

With all the hype around AI, it’s hard to recognize which issues actually deserve attention. So last week’s Rosenfeld Media community workshop on AI was a valuable opportunity to see what questions are top-of-mind for the UX community.

The discussion was lively, but it clustered around a few themes:

  • Accuracy

  • Control

  • Authenticity

While these topics are nothing new in the world of AI, UX practitioners bring their own concerns to each area.

Accuracy

User researchers have long been advocates for understanding and reducing bias in data. AI exacerbates these concerns, and the community raised questions about how to recognize and account for bias. David Womack discussed retrieval-augmented generation (RAG) and other ways of incorporating more edge cases into AI-enabled analyses.

Control

There’s a general recognition that the biases outlined above can produce harmful effects in our products and services. So what can we do about this? Rachael Dietkus and Nishanshi Shukla discussed the frameworks, tools, and strategies they’ve been using to safeguard human users in their work with AI, and there was appetite for more examples of guardrails for harnessing AI in a human-centric way.

Authenticity

While the world at large wants to know whether we’re losing authenticity in our interpersonal interactions, the UX community in particular wants to understand the risks of synthetic data infiltrating our research. We didn’t have time to discuss this at length, but the question crops up repeatedly across UX forums.

Thanks again to our panelists, to Rosenfeld Media, and to curators Jemma Ahmed and Chris Geison.

Llewyn Paine