AI doppelgangers won second place in a global agent hackathon!

Last year Vitorio Miliano and I built a demo using AI to redact biometric data from user research videos. The idea was to replace human faces and voices with avatars, so that research teams could store these valuable recordings for longer without violating biometric data privacy regulations.

I presented it at Rosenfeld Media’s Designing with AI 2024, and we got some coverage elsewhere in UX circles, but it was only ever a demo. Our 2024 version was built on off-the-shelf cloud services with severe duration limits, and running in the cloud is less secure than you'd want for redacting sensitive data in the real world.

So this year, Vitorio built a new, expanded, agent-based version that 1) runs entirely locally and 2) “understands” entire scenes using a perceptive-language model (Perceptron's Isaac).

This means you can redact not only faces and bodies, but anything the model can recognize. ID badges? Check. Brand names on equipment in the background? Check. And you can tell the agent just what to redact in plain language.
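To give a sense of how an agent like this fits together, here's a minimal sketch of the redaction loop in Python. Everything model-related is my own assumption: detect_regions is a hypothetical stand-in for whatever locally hosted perceptive-language model you run (it is not Isaac's actual API), and the instruction string and file names are illustrations. Only the OpenCV frame handling is real.

```python
import cv2  # pip install opencv-python

def detect_regions(frame, instruction):
    """Hypothetical hook for a locally hosted vision model.

    Assumed to return a list of (x, y, w, h) bounding boxes for every
    region in the frame matching the plain-language instruction.
    Returns nothing until wired to an actual local model.
    """
    return []

def redact_video(in_path, out_path, instruction):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for x, y, w, h in detect_regions(frame, instruction):
            # Blur each matched region in place (kernel size must be odd).
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(
                frame[y:y + h, x:x + w], (51, 51), 0)
        out.write(frame)
    cap.release()
    out.release()

redact_video("session.mp4", "session_redacted.mp4",
             "redact all faces, ID badges, and brand names")
```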

I love how this new demo expands on our original work, and I was thrilled to see it take second prize in the Cua + Ollama global online agent hackathon.

Note: Vitorio is also running a workshop sharing what he's learned about using vision-language models as a design material. If you want a chance to explore cutting-edge VLMs with someone extremely knowledgeable, this is a great opportunity!