Digital telekinesis.
A few thoughts:

(1) How would you implement this? It’s easy if the devices are pre-configured to talk to each other, like a multiple-monitor setup. Then it’s just gesture recognition via cameras, layered on top of copy/paste and AirDrop.

(2) Moreover, the devices would have to be pre-configured to talk to each other in *some* sense: unless they were both logged into the same account, you could copy/paste random things onto someone else’s computer just by pointing at the screen.

(3) As long as both devices are logged into the same online account, every device connected to that account just needs to poll its surroundings via camera until it captures the copy gesture and then the paste gesture. You’d have to handle the edge case where three or more logged-in devices with cameras are pointed at the user, since the paste (or copy) gesture could be detected on more than one device.

(4) Gesture-based interaction with computers is really cool, but people get gorilla arm doing it, so it has remained a relatively niche thing, useful in video games and VR.

(5) However, with the rise of physical AI, it strikes me that you should be able to not just talk to your robot companions but gesture to them, and they’ll do things. For example: point, and your robot dog fetches a stick.
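The multi-device edge case in point (3) can be sketched as a small arbitration step: each logged-in device reports any gesture it detects to an account-level hub, which merges near-simultaneous duplicates and keeps only the highest-confidence detection. Everything here (`GestureEvent`, `AccountHub`, the 0.5-second window) is a hypothetical sketch, not a real API; an actual version would sit behind per-device camera gesture recognizers.

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    device_id: str
    gesture: str       # "copy" or "paste"
    confidence: float  # recognizer confidence, 0..1
    timestamp: float   # seconds

class AccountHub:
    """Arbitrates gesture detections across all devices on one account.

    Hypothetical sketch: real events would come from camera-based
    gesture recognizers; here they are injected directly.
    """

    def __init__(self, window=0.5):
        self.window = window   # seconds within which duplicates are merged
        self.clipboard = None  # the shared, account-level clipboard

    def resolve(self, events):
        """Several devices may report the same gesture. Keep only events
        within `window` seconds of the earliest one, then pick the
        highest-confidence detection as the single winner."""
        if not events:
            return None
        first = min(e.timestamp for e in events)
        candidates = [e for e in events if e.timestamp - first <= self.window]
        return max(candidates, key=lambda e: e.confidence)

    def handle(self, events, payload=None):
        """Apply the winning event: a copy stores whatever the winning
        device grabbed; a paste delivers the clipboard to the winner."""
        winner = self.resolve(events)
        if winner is None:
            return None
        if winner.gesture == "copy":
            self.clipboard = payload
            return winner.device_id
        return winner.device_id, self.clipboard

hub = AccountHub(window=0.5)
# Two devices both see the paste gesture within half a second;
# a third sees it a full second later and is treated as a separate event.
events = [
    GestureEvent("laptop", "paste", 0.70, 100.0),
    GestureEvent("tablet", "paste", 0.90, 100.2),
    GestureEvent("tv",     "paste", 0.80, 101.0),
]
winner = hub.resolve(events)  # tablet: in-window and most confident
```

Picking the highest-confidence in-window detection is just one plausible tie-breaker; you could instead prefer the device the user is facing, or the one the copy gesture originated from.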