It's easy to fall into the trap of confusing "output" with "impact." But as a researcher in a world of AI tools, that's a good way to devalue your work.
As more designers, PMs, founders, and researchers themselves begin to use generative AI tools for research synthesis, many researchers are wondering whether those tools are good enough to replace their work. The short answer: it's a lot like using AI for anything else. AI output is "adequate."
AI is great at producing generic, average output. Average work may be perfectly fine to keep bringing home a paycheck, but if you don’t value it yourself, it will not be valued by others in the long term.
From Seth Godin: "Once we see a pattern of AI getting tasks right, we're inclined to trust it more and more, verifying less often and moving on to tasks that don't meet these standards."
Should user researchers prepare to move into the role of “orchestrating and validating the work of AI systems” like OpenAI’s Deep Research?
With renewed attention on smart glasses thanks to Meta’s Orion, spatial computing is once again on people’s radar. AI’s natural interfaces and multimodality are proving to be powerful levers for bringing information technology into the physical world.
At Austin Tech Week 2024, companies from startups to Meta shared their design processes for AI. These were the four principles they all agreed on.
Emerging technology creates perplexing problems for user-centered design, such as: how do you take a user-centered approach when it’s too early to have a defined user? Does this mean you should throw out user-centered design, or choose your target user based solely on TAM (total addressable market)?
With all the hype around AI, it’s hard to recognize which issues deserve attention. So last week’s Rosenfeld Media community workshop on AI was a valuable opportunity to see what questions are top-of-mind for the UX community.
AI is a technology with rippling, systems-level effects. Collaborating across diverse disciplines is the only way to begin to understand the full implications of our AI design decisions. This is the focus of an upcoming community workshop on artificial intelligence that I'll be moderating.
In Rosenfeld Media’s upcoming Advancing Research community workshop on artificial intelligence, I’ll be talking with Rachael Dietkus, Nishanshi Shukla, and David Womack about what researchers and tech professionals can do to mitigate issues in AI tool use, from psychological harm to users, to damaging our knowledge bases.
Analysts argue we’re entering the “Trough of Disillusionment” for AI. That may be bad news for investors, but for product teams, it offers new opportunities to build products that solve more meaningful problems for their users.
Startup Archetype AI is fusing physical sensor data with LLMs to create an AI model that will "encode the entire physical world." This approach means that natural language becomes a translation layer used to both interpret input (i.e., sensing) and to issue commands (e.g., controlling a robot arm). This work is exciting but also hard to access. What can a regular person do to start experimenting with and preparing for this type of tech?
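To make the "translation layer" idea concrete, here is a minimal sketch of the pattern using a general-purpose LLM API rather than Archetype AI's own model (which isn't publicly available): one call interprets a mock sensor reading in plain language (sensing), and a second call turns that interpretation into a structured command a device could act on (control). The model name, prompts, sensor reading, and command schema are all illustrative assumptions, not anyone's production setup.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment

client = OpenAI()

# Mock sensor reading -- a stand-in for the physical sensor data this kind of system ingests.
sensor_reading = {"sensor": "ambient_temperature", "value_c": 31.4, "location": "server closet"}

# 1. Sensing: ask the model to interpret raw sensor data in plain language.
interpretation = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Describe what this sensor reading means in one sentence."},
        {"role": "user", "content": json.dumps(sensor_reading)},
    ],
).choices[0].message.content

# 2. Control: ask the model to translate the situation into a structured command
#    that a downstream device (e.g., an HVAC controller) could act on.
command_text = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": 'Respond only with JSON like {"action": "...", "target": "..."}.'},
        {"role": "user", "content": f"Situation: {interpretation}. What should the cooling system do?"},
    ],
).choices[0].message.content

print(interpretation)
print(json.loads(command_text))  # in a real system the command would be validated before reaching hardware
```

This is a toy loop, but it captures the shape of the idea: natural language sits between the sensor and the actuator, so the same interface can describe the world and act on it.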
An article in this month’s issue of The Fabricator underscores that physical industry workers are not experiencing the same AI boom as information workers. Are we so accustomed to designing for information workers that we’re overlooking opportunities to serve new audiences?
Aman Ibrahim created TerifAI (as in “terrify”), a bot that can clone your voice after only a minute or so of conversation. It’s an incredible demonstration of the rapid development (and growing threat) of voice cloning AI, and an example of why the need for biometric voice redaction is becoming more urgent.
As part of a recent biometric data redaction demo, we used ElevenLabs to replace the voices of participants in recordings with AI-generated speech. Now ElevenLabs has announced new voice models based on classic actors. This raises new questions about trade-offs between participant privacy and stakeholder perceptions.
We’ve started to recognize the quality and legal issues AI brings into our own product ecosystems. But incorporating AI into our products also means we are integrating ourselves into much larger systems: systems with far-reaching and often hard-to-understand consequences. As human-centered professionals, how can we tackle this difficult problem?
Manuel Herrera did this fabulous sketchnote version of my AI doppelgangers demo at Designing With AI 2024. It's a concise summary of the limited biometric data redaction options available for recordings of humans today, and of some immediate issues with the AI tools that could take over this task in the future.