Apple has outlined a strategy balancing the processing demands of AI with a commitment to user privacy. Can this offer a model for UX?
The tension between AI and data privacy was on display in the conversation at the inaugural Rosenfeld Designing with AI conference last week, so I was particularly interested in seeing Apple’s new approach to AI with Apple Intelligence at WWDC.
Apple’s model seems to be:
1. Process data locally whenever possible
2. When you can’t:
Let users opt in
Process only the minimum necessary data
Avoid storing data in the cloud
Provide accountable ways for users to verify how their data is used
Use cloud computing resources that are built for privacy
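To make that model concrete, here is a minimal sketch of what the routing logic might look like in code. Everything in it is hypothetical (the function names, the Consent type, the stub implementations); it is a skeleton for reasoning about the model, not Apple's actual implementation.

```python
# Hypothetical sketch of the routing logic above; none of these names
# are real Apple APIs, and the stubs stand in for actual models.
from dataclasses import dataclass

@dataclass
class Consent:
    cloud_opt_in: bool = False  # has the user explicitly opted in?

def minimize(record: dict) -> dict:
    # keep only the fields the task actually needs (assumed here: "text")
    return {"text": record["text"]}

def run_local(record: dict) -> str:
    # stand-in for an on-device model
    return f"local result ({len(record['text'])} chars)"

def run_private_cloud(record: dict) -> str:
    # stand-in for a privacy-built cloud endpoint that stores nothing
    return f"cloud result ({len(record['text'])} chars)"

def process(record: dict, consent: Consent, fits_on_device: bool) -> str:
    if fits_on_device:
        return run_local(record)  # 1. process locally whenever possible
    if not consent.cloud_opt_in:
        raise PermissionError("user has not opted in to cloud processing")
    # 2. opt-in confirmed: send only the minimum, store nothing
    return run_private_cloud(minimize(record))
```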
Tech companies, and the product teams inside them, are quickly being forced to choose between embracing new AI services (which often involve cloud processing and/or third-party services) and obsolescence. Apple’s WWDC announcements show that opting out entirely is not realistic for companies that wish to remain competitive.
What would it look like to use Apple’s model for processing UX and research data with AI? Here’s one idea:
1. Process data locally whenever possible
Conducting user data analysis efficiently without AI is going to become harder and harder as industry expectations change. In the past 12 months, a majority of my clients and colleagues have been asked by their leadership to explore whether AI can speed up their user research and analysis.
But that doesn’t mean it all needs to happen in the cloud. There are transformer models (such as LLMs and voice conversion models) and diffusion models (for image and video editing) that can run on sufficiently powerful desktop computers. Product teams can invest in beefier hardware, and in the engineering expertise to evaluate their needs and run models locally.
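As one illustration of local processing, here is a sketch using the open-source Hugging Face transformers library to summarize research notes entirely on your own machine. It assumes transformers and torch are installed and that your hardware can hold the chosen model; the file name is hypothetical, and you would swap in whatever model and task fit your workflow.

```python
# Summarize raw research notes locally; nothing in this flow leaves
# the machine. Assumes: pip install transformers torch
from transformers import pipeline

# facebook/bart-large-cnn is a public summarization model small enough
# for many desktop GPUs; substitute a model that fits your hardware.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

transcript = open("session_notes.txt").read()  # hypothetical notes file
summary = summarizer(transcript, max_length=130, min_length=30,
                     truncation=True)  # truncate long inputs to model limit
print(summary[0]["summary_text"])
```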
2. When you can’t… (and must use cloud processing)
When processing demands outstrip the capabilities of local computing, it’s even more important to recognize the risks this can pose to users and to have mitigation plans in place.
This might look like:
Keeping consent forms up to date with legal/technological developments
Processing only what’s needed, de-identified (including face and voice!), as sketched after this list
Not storing user data in the cloud
Figuring out ways users can audit your team’s research
Exploring alternative cloud processing options that prioritize data privacy and security
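As a starting point for the de-identification step, here is a minimal text-redaction pass. The regex patterns are illustrative and catch only the most obvious identifiers; a real pipeline would add NER-based name detection, and face and voice data need specialized tooling (blurring, voice conversion) that a text pass can’t provide.

```python
# Minimal de-identification pass for text data before any cloud call.
# These patterns are illustrative only; do not treat them as complete.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def deidentify(text: str) -> str:
    # replace each match with a labeled placeholder, e.g. [EMAIL]
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(deidentify("Reach P1 at jane@example.com or 555-867-5309."))
# -> "Reach P1 at [EMAIL] or [PHONE]."
```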
A lot of this we do already. But some of it is very hard.
What does a truly end-user-auditable research system really look like? (Your legal or compliance team may be able to audit it, but what would it mean to extend that ability to the users themselves? One possible ingredient is sketched below.)
How can product teams at smaller companies keep up with the rapid rate of change in law, technology, and data security?
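I don’t have an answer to the auditability question, but one speculative ingredient might be a tamper-evident log of every use of a participant’s data, which could eventually be surfaced to that participant. Here is a minimal hash-chained version; everything in it (the entry fields, the participant ID) is hypothetical.

```python
# Speculative sketch: an append-only, hash-chained log of data use.
# Each entry's hash covers the previous entry, so edits to history
# are detectable by anyone who holds a later hash.
import hashlib, json, time

def append_entry(log: list, participant_id: str, action: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "participant": participant_id,
        "action": action,            # e.g. "transcript summarized locally"
        "timestamp": time.time(),
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

audit_log: list = []
append_entry(audit_log, "P1", "transcript de-identified and summarized")
append_entry(audit_log, "P1", "summary shared with product team")
```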
Many of us work with user data, whether it's in product, marketing, operations, or somewhere else. I’d love to hear what questions this raises for others about the systems we use, and how we could improve them.