visionOS enterprise APIs show that Apple sees the industrial potential for spatial computing

Despite its strong focus on consumers and information workers at the Vision Pro unveiling, Apple sees the industrial potential for spatial computing.

I often beat the drum for spatial computing as a tool for industry rather than consumers (at least in the near term). So it was interesting to see that, just a few months post-launch, Apple is releasing six new “enterprise APIs” for the Vision Pro at WWDC.

These APIs are fairly straightforward: enhanced sensor access (to the main camera, plus QR code and barcode scanning) and “platform control” (largely to enable machine learning capabilities such as object tracking and anomaly detection).

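As a concrete (if simplified) illustration of the sensor-access piece, here is a minimal sketch of streaming frames from the device's main camera with ARKit's CameraFrameProvider on visionOS 2. It assumes the enterprise entitlement (com.apple.developer.arkit.main-camera-access.allow) has been granted, and the exact type and method names reflect my reading of Apple's documentation rather than anything in the announcement itself:

```swift
import ARKit

/// Minimal sketch: stream frames from the Vision Pro's main camera.
/// Assumes the enterprise entitlement
/// com.apple.developer.arkit.main-camera-access.allow is present and
/// that the names below match the visionOS 2 ARKit API.
func streamMainCameraFrames() async {
    let session = ARKitSession()
    let cameraProvider = CameraFrameProvider()

    // Ask for camera authorization before running the provider.
    _ = await session.requestAuthorization(for: [.cameraAccess])

    do {
        try await session.run([cameraProvider])
    } catch {
        print("ARKit session failed to start: \(error)")
        return
    }

    // Pick a supported format for the main (left) camera and consume frames.
    guard let format = CameraVideoFormat
            .supportedVideoFormats(for: .main, cameraPositions: [.left])
            .first,
          let updates = cameraProvider.cameraFrameUpdates(for: format) else {
        return
    }

    for await frame in updates {
        guard let sample = frame.sample(for: .left) else { continue }
        // sample.pixelBuffer is a CVPixelBuffer that could feed a Core ML
        // model or a "see what I see" remote-assistance video pipeline.
        _ = sample.pixelBuffer
    }
}
```
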
These are necessary upgrades for industry. And the use cases Apple discussed are the same ones we focused on for HoloLens back in 2016:

  • Expert remote assistance for technicians with “see what I see” functionality

  • Hands-free guidance for workers with physical workflows

  • Visualization of complex 3D models as part of the design process

Apple has not revealed plans to build general-purpose enterprise apps, at least for now. Instead, the company emphasized that enterprise applications for the Vision Pro must be proprietary: either built in-house or custom-made.

This is probably for a number of reasons, not the least of which is that industry use cases for spatial computing are enormously complex and varied. (Which says all the more for my incredible colleagues at Microsoft, who were thinking about Dynamics 365 Remote Assist, Guides, and other general-purpose enterprise applications for HoloLens more than seven years ago.)

I continue to see evidence that the early adopters of augmented reality and spatial computing devices will be in industry, and it looks like Apple is seeing the same thing.

Llewyn Paine