- cross-posted to:
- [email protected]
The Vision Pro uses 3D avatars on calls and for streaming. These researchers analyzed the eye movements of people's avatars to work out the passwords and PINs they typed.
Archived version: https://web.archive.org/web/20240912100207/https://www.wired.com/story/apple-vision-pro-persona-eye-tracking-spy-typing/
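For the curious, the core of the attack is classifying each gaze fixation to the nearest key on the known keyboard layout. Here's a toy sketch of that one step; the layout geometry, coordinates, and names are all invented for illustration, not the researchers' actual code:

```swift
// Toy sketch: classify an estimated 2D gaze fixation (normalized
// coordinates) to the nearest key on a QWERTY layout. All geometry,
// coordinates, and names here are invented for illustration.
struct Key {
    let label: Character
    let x: Double
    let y: Double
}

// Three staggered QWERTY rows laid out in a unit square.
func makeKeyboard() -> [Key] {
    var keys: [Key] = []
    for (r, row) in ["qwertyuiop", "asdfghjkl", "zxcvbnm"].enumerated() {
        for (c, ch) in row.enumerated() {
            keys.append(Key(label: ch,
                            x: (Double(c) + 0.5 + Double(r) * 0.25) / 10.0,
                            y: (Double(r) + 0.5) / 3.0))
        }
    }
    return keys
}

// Nearest-key classification of one fixation.
func guessKey(x: Double, y: Double, keyboard: [Key]) -> Character? {
    func d(_ k: Key) -> Double {
        let dx = k.x - x, dy = k.y - y
        return dx * dx + dy * dy
    }
    return keyboard.min(by: { d($0) < d($1) })?.label
}

// A stream of fixations (recovered from the avatar's rendered eyes
// between saccades) becomes a guessed keystroke sequence.
let keyboard = makeKeyboard()
let fixations = [(0.45, 0.17), (0.15, 0.50), (0.30, 0.83)]
let guess = String(fixations.compactMap { guessKey(x: $0.0, y: $0.1, keyboard: keyboard) })
print(guess) // "tsc" for these toy coordinates
```

The hard part in practice is the step before this: recovering reliable gaze coordinates from the avatar's rendered eyes in the first place.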
That should be an easy fix in a future software update: simply stop replicating eye movement whenever the user is looking at the keyboard.
The solution is constant googly eyes.
Let’s be honest: the solution is always googly eyes.
Sounds like that's what they already did: as soon as the virtual keyboard pops up, the eye movement isn't transmitted as part of the avatar.
Oh I see. According to the article:
The GAZEploit researchers reported their findings to Apple in April and subsequently sent the company their proof-of-concept code so the attack could be replicated. Apple fixed the flaw in a Vision Pro software update at the end of July, which stops the sharing of a Persona if someone is using the virtual keyboard.
An Apple spokesperson confirmed the company fixed the vulnerability, saying it was addressed in visionOS 1.3.
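In code terms, the fix presumably boils down to gating the Persona stream on keyboard visibility. A rough sketch of that idea (PersonaStream and its members are invented stand-ins, not actual visionOS API):

```swift
// Invented stand-in for whatever visionOS does internally; not real API.
final class PersonaStream {
    private(set) var isSharing = true

    // Called whenever the virtual keyboard appears or is dismissed.
    func virtualKeyboardVisibilityDidChange(isVisible: Bool) {
        // Per the article: stop sharing the Persona entirely while the
        // virtual keyboard is up, so observers never see typing gaze.
        isSharing = !isVisible
    }
}

let stream = PersonaStream()
stream.virtualKeyboardVisibilityDidChange(isVisible: true)
print(stream.isSharing) // false: Persona paused while typing
stream.virtualKeyboardVisibilityDidChange(isVisible: false)
print(stream.isSharing) // true: sharing resumes after dismissal
```

Note that pausing the whole Persona (rather than just freezing the eyes) also avoids leaking head movements or other signals correlated with typing.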
Seems like we’re going to be stuck in the uncanny valley of telepresence. The more fidelity we add, the more we’re able to pick up on microexpressions, subtle eye movements, and breathing, which helps trigger oxytocin and promote trust. But also, the more fidelity we add, the more attack surface we open up for malicious actors to exploit.
Easy fix.
Sounds like you could do this to a person on a normal Zoom call with no headset.
Most people don't look at the keyboard while typing, especially for things with muscle memory like passwords, when using a physical keyboard. And a Zoom call doesn't convey facial data in three dimensions. The unique nature of the virtual keyboard, plus the three-dimensional avatar, makes this new attack more feasible.
Bet the same works for video calls.