May 23, 2025 – A blog post published on May 22 by tech media outlet Patently Apple reveals that Apple has secured a new patent aimed at delivering highly accurate gesture-based interactions across its range of devices, including Macs, iPhones, and iPads.
According to the blog, Apple’s journey into gesture control technology dates back to 2013, when it acquired the Israeli company PrimeSense, laying the groundwork for its subsequent work in this field. Since then, the tech giant has filed dozens of patents related to advanced gesture recognition, with applications spanning Macs, Apple TVs, and MacBooks equipped with sophisticated cameras capable of capturing intricate hand movements.
The latest patent focuses on a system designed to intelligently recognize hand postures. It determines whether a hand is in what it terms “peripheral use mode,” a state indicated by actions such as a palm lying flat on a table or fingers mimicking typing motions. In such cases, the system identifies that the hand is engaged with an external device like a keyboard and, consequently, disregards any gestures made by that hand.

Conversely, when a hand is not detected to be in peripheral use mode, the system switches to “gesture use mode,” enabling the processing of gesture commands. Additionally, if a peripheral device event is detected after a gesture has been initiated, the system cancels the corresponding gesture action while maintaining other visual feedback elements to ensure a seamless user experience.
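In rough terms, the behavior described in the patent resembles a small state machine. The Swift sketch below is purely illustrative, not Apple’s implementation: the type names, fields, and posture heuristic are assumptions made to clarify the idea of classifying a hand and cancelling a gesture when a peripheral event arrives.

```swift
import Foundation

// Hypothetical model of the patent's two hand states: a hand resting flat on a
// surface or posed for typing is treated as "peripheral use"; otherwise its
// gestures are processed. Names and heuristics here are illustrative only.
enum HandMode {
    case peripheralUse   // hand is operating a keyboard or similar; ignore gestures
    case gestureUse      // hand is free; process gesture commands
}

struct HandObservation {
    let palmFlatOnSurface: Bool
    let fingersInTypingPose: Bool
}

struct GestureSession {
    var activeGesture: String?        // e.g. a pinch-drag in progress (illustrative)
    var visualFeedbackVisible = true  // on-screen cursor or highlight
}

func mode(for hand: HandObservation) -> HandMode {
    // Stand-in for the patent's posture analysis.
    return (hand.palmFlatOnSurface || hand.fingersInTypingPose) ? .peripheralUse : .gestureUse
}

func handlePeripheralEvent(_ session: inout GestureSession) {
    // A keyboard or mouse event arriving mid-gesture cancels the gesture action
    // but leaves other visual feedback in place, per the patent's description.
    session.activeGesture = nil
    session.visualFeedbackVisible = true
}
```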
The patent also highlights the integration of eye-tracking technology. By capturing data such as the user’s gaze vector and pupil position, the system can accurately determine where the user is looking on the interface. This capability enhances the system’s ability to discern the relationship between gesture inputs and interactions with external devices.
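One common way to turn a gaze vector and pupil position into a point on the display is a ray–plane intersection. The patent does not spell out the exact math, so the following Swift sketch, including the assumption that the screen sits at z = 0 in device coordinates, is a hypothetical illustration only.

```swift
import simd

// Hypothetical gaze sample: the pupil acts as the ray origin and the gaze
// vector as its direction; the gaze point is where that ray meets the screen.
struct GazeSample {
    let pupilPosition: SIMD3<Float>  // pupil location in device coordinates
    let gazeVector: SIMD3<Float>     // unit direction of the gaze
}

func gazePointOnScreen(_ sample: GazeSample) -> SIMD2<Float>? {
    // Reject gazes parallel to, or pointing away from, the screen plane.
    guard abs(sample.gazeVector.z) > 1e-6 else { return nil }
    let t = -sample.pupilPosition.z / sample.gazeVector.z
    guard t > 0 else { return nil }
    let hit = sample.pupilPosition + t * sample.gazeVector
    return SIMD2(hit.x, hit.y)  // 2D point on the interface
}
```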
Moreover, the eye-tracking data can be utilized to train algorithms that predict whether a hand is in peripheral use mode. Apple illustrates this with diagrams (FIGS. 1A-B) depicting an interactive system where a user can freely perform gestures with one hand while operating a keyboard with the other. The system optimizes user interface (UI) interactions by integrating data from both eye-tracking and gesture recognition.
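As a hedged illustration of how gaze data might feed such a predictor, the sketch below scores the likelihood that a hand is in peripheral use mode from two gaze-derived features. The features, weights, and logistic-regression-style model are assumptions for the sake of example, not Apple’s disclosed method.

```swift
import Foundation

// Hypothetical gaze-derived features for predicting peripheral use mode.
struct GazeFeatures {
    let distanceToHand: Double    // gap between the gaze point and the hand's position
    let lookingAtTextField: Bool  // gaze rests on a text-entry target
}

func peripheralUseProbability(_ f: GazeFeatures,
                              weights: (bias: Double, distance: Double, text: Double)) -> Double {
    // Linear combination of features, squashed to a probability with a sigmoid.
    let z = weights.bias
          + weights.distance * f.distanceToHand
          + weights.text * (f.lookingAtTextField ? 1 : 0)
    return 1 / (1 + exp(-z))
}
```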