Archetype AI: Creating a model to “encode the entire physical world”
Startup Archetype AI is fusing physical sensor data with LLMs to create an AI model that will “encode the entire physical world.”
The recent surge of interest in LLMs has kicked off important (but less visible) conversations about AI that can make sense of real-world objects. Archetype AI is approaching this question by creating a “large behavior model” (LBM) that brings together sensors and LLMs.
I recently attended a talk by Leonardo Giusti, founder of Archetype AI, where he shared why they think this is the right approach.
Archetype AI’s founders have roots in IoT. They’re known for Project Soli, which used machine learning (ML) to enable gesture-based interactions on Google smartphones, but which ran into problems of scale.
According to Giusti, training an ML model to reliably identify gestures can require millions of samples from participants all around the world. And this training process has to be repeated for each new device or sensor you want to use.
Archetype AI aims to solve this problem by mapping sensor data to LLMs, capitalizing on the scalability those models already offer.
This approach has some interesting consequences. It means that natural language becomes a translation layer, used both to interpret input (i.e., sensing) and to issue commands (e.g., controlling a robot arm). The semantic “understanding” built into the LLM also makes certain tasks easier, such as prediction and anomaly detection.
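To make the idea concrete, here’s a minimal sketch of what “natural language as a translation layer” could look like. This is my own illustration, not Archetype AI’s code or API: raw sensor readings are rendered as a plain-language log, a model is asked to interpret them, and its answer doubles as a human-readable instruction. The `llm()` function is a placeholder standing in for whatever model API you’d actually call.

```python
# Hypothetical sketch of natural language as a translation layer between
# sensors and an LLM. Nothing here reflects Archetype AI's actual interface.

from dataclasses import dataclass
from typing import List


@dataclass
class Reading:
    sensor_id: str
    timestamp: str
    value: float
    unit: str


def readings_to_text(readings: List[Reading]) -> str:
    """Render raw sensor samples as a plain-language observation log."""
    return "\n".join(
        f"{r.timestamp}: sensor {r.sensor_id} reads {r.value} {r.unit}"
        for r in readings
    )


def llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call; returns a canned response here."""
    return "Anomaly: vibration on motor-3 rose 4x above baseline at 14:02."


def detect_anomalies(readings: List[Reading]) -> str:
    """Ask the model to flag unusual behavior described in natural language."""
    prompt = (
        "Here is a log of sensor observations:\n"
        f"{readings_to_text(readings)}\n\n"
        "Describe any anomalies and, if action is needed, "
        "suggest a command for the operator in one sentence."
    )
    return llm(prompt)


if __name__ == "__main__":
    sample = [
        Reading("motor-3", "14:01", 0.2, "g RMS vibration"),
        Reading("motor-3", "14:02", 0.8, "g RMS vibration"),
    ]
    print(detect_anomalies(sample))
```

Note what the text interface buys you here: because the model only ever sees a plain-language description, swapping in a different sensor changes the log format, not the model, which is the scaling property Giusti described.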
Personally, I’m quite excited about their work, but my one complaint is that it isn’t very accessible. This is often unavoidable with early-stage tech, but it leads me to ask: what can a regular person do to start experimenting with and preparing for this type of tech?
You can, of course, request early access at their website, and perhaps start playing around with it yourself if you’re approved (and are an engineer).
But even if you’re not, there are other steps you can take:
Learn about material exploration (a technique for exploring the capabilities and constraints of new-to-you materials and technologies)
Brush up on observational user research methods (which are critical for understanding humans’ physical behaviors and workflows)
Explore adjacent technologies (such as sensors/IoT, multimodal AI models, and LLMs, especially in applications that are trying to make output more contextual)
I believe we’re going to see a lot more physical AI models in the near future, and now is a good time to begin preparing. If you’re looking to figure out what this means for your users, we should talk.