"We don't know nearly enough" to predict the patterns of AI mistakes
A concern I have when I hear researchers say they're validating AI output with occasional spot checks:
"Once we see a pattern of AI getting tasks right, we're inclined to trust it more and more, verifying less often and moving on to tasks that don't meet these standards.
"AI mistakes can be more erratic than human ones (and way less reliable than traditional computers), though, and we don't know nearly enough to predict their patterns."
(From Seth Godin on "Trusting AI")
I'll be talking more about how AI mistakes differ from human ones, and what that means in the context of user research, at Rosenfeld Media’s Advancing Research 2025.
And if you're looking for help figuring out what AI mistakes mean for your team's research and strategy practice, I also do coaching. Drop me a line if you're interested in learning more.