New ethical quandary unlocked: Would you want your research participants to sound like old Hollywood actors?
As part of our recent Designing with AI demo, we used ElevenLabs to replace participants’ voices in recordings with AI-generated speech. Giving participants new voices protected their biometric privacy while preserving their original intonation and emotional inflection.
We used ElevenLabs’ built-in voice models, searching for options that sounded similar to the original voice.
Now ElevenLabs has announced new voice models based on Judy Garland, James Dean, Burt Reynolds, and Sir Laurence Olivier. So your redacted research recordings of the future could, in theory, have a distinctly more Transatlantic vibe.
This development raises many of the same questions we discussed with regard to replacing a human with a synthetic avatar:
🗣️ What are the pros and cons of the fact that the replaced voice (or avatar) draws attention to itself as a “fake”?
🗣️ What impact will obvious replacements (of voices or bodies) have on stakeholders’ responses to user research recordings?
🗣️ How do we balance this with the need to protect our research participants from potential harm (due to biometric privacy concerns)?
🗣️ And what are the rights of the people whose voices or bodies were used to create these AI models (and does it make a difference if they are deceased)?
There are no easy answers, and these questions will only multiply as AI development accelerates. That’s why, if you’re developing with or for AI, it’s critical to have a process for engaging key subject matter experts and incorporating end-user perspectives to anticipate and address these concerns.
If you’re looking for help, let’s chat.