Gesture-based interfaces are now employed in almost every futuristic venture, from motion-aware game systems like the Nintendo Wii to Microsoft's Kinect.
From Apple to Samsung, every major company has built it into its devices to varying degrees, from simple gesture scrolling at the low end to the air-gesture and orientation recognition of phones like the Samsung Galaxy S4. So how does this system work?
Motion Gestures and Recognition
Motion gestures are distinct actions triggered on a device by particular human movements (you might remember the Shake Shake motion gesture from the Zenfone's early days).
In accelerometer-based recognition, a gesture is any deliberate movement the user makes while holding the device. Recognition is the mathematical interpretation of these human motion signals by a computing device.
The computer then uses this data as input to trigger actions in applications. The field is also referred to as gesture control; its camera-based branch is a subdiscipline of computer vision, often regarded as an early step towards artificial intelligence.
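To make the accelerometer case concrete, here is a minimal sketch of shake detection over a window of (x, y, z) samples. The threshold, window size and function names are illustrative assumptions, not any vendor's actual API:

```python
import math

def magnitude(sample):
    """Euclidean magnitude of an (x, y, z) accelerometer sample, in m/s^2."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def is_shake(samples, threshold=15.0, min_peaks=3):
    """Classify a window of accelerometer samples as a 'shake' gesture.

    A shake is counted when the acceleration magnitude exceeds the
    threshold at least `min_peaks` times within the window. Gravity
    contributes about 9.8 m/s^2, so the threshold sits well above rest.
    """
    peaks = sum(1 for s in samples if magnitude(s) > threshold)
    return peaks >= min_peaks

# A device at rest reads roughly (0, 0, 9.8); vigorous shaking
# produces much larger readings that alternate in sign.
resting = [(0.1, 0.2, 9.8)] * 10
shaking = [(18.0, -2.0, 9.8), (-17.0, 1.0, 9.8), (16.5, 0.0, 9.8),
           (0.0, 0.1, 9.8), (19.0, -1.0, 9.8)]
```

Real recognisers are more sophisticated (filtering, timing windows, machine-learned classifiers), but the principle is the same: turn a stream of sensor numbers into a yes/no decision about a gesture.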
Gesture-based recognition is desirable because it can be accurate, stable and time-saving. Its main application areas are the automotive, transit and gaming sectors; defence and home automation also use the technology for safety purposes.
Broadly, gestures are of two types: online and offline. Online gestures are direct manipulations interpreted while they happen, such as scaling or rotating an object; offline gestures, such as movements of the face and hands used to trigger a command, are processed after the movement completes.
Gesture-Based Technology and Interface
Gesture-based technology falls within the scope of natural user interfaces, which emphasise body-related input.
Current gesture-based systems also attempt to consider the user's emotions, though this capability is in its infancy. Identifying posture, gait, proxemics and human behaviour are the main challenges the technology faces.
It is a bridge between humans and machines by which devices can become faithful companions to humans. If developed to its full potential, it could greatly reduce our dependence on mice and keyboards.
The system's efficiency is determined by its capacity to track the user's actions. The Kinetic User Interface (KUI) is emerging as a type of user interface that allows richer, motion-driven interaction.
One primary KUI tool is the wired glove, which carries magnetic and inertial tracking devices. The best wired gloves can detect finger movement to within about 5 degrees and can also provide haptic feedback.
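As a toy illustration of that resolution figure, a raw reading from a hypothetical flexion sensor could be snapped to a 5-degree grid like this (the function name and values are invented for illustration):

```python
def quantize_angle(raw_degrees, resolution=5):
    """Snap a raw finger-flexion reading (in degrees) to the nearest
    multiple of the sensor's angular resolution."""
    return resolution * round(raw_degrees / resolution)
```

So a noisy reading of 47.3 degrees would be reported as 45, and 48.0 as 50, matching the "nearest 5 degrees" precision described above.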
Depth-aware cameras and stereo cameras are the workhorses of vision-based gesture recognition.
Gesture Control in Mobiles and Gaming
Elliptic Labs, a Norwegian technology company, developed a system of touchless gesture-based control for mobiles.
The technology allows users to interact with the device by making gestures in front of the screen and not physically touching it.
Games built on this technology are designed so players can interact naturally with the on-screen world. One popular example, Skyfall, lets the player control an on-screen paddle by moving their hands in front of the webcam.
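A real implementation would use a computer-vision library such as OpenCV, but the core idea can be sketched in pure Python: difference two consecutive greyscale webcam frames, find where the motion is, and mirror that position onto the paddle. Everything below (frame sizes, threshold, names) is an illustrative assumption:

```python
def motion_centroid_x(prev_frame, frame, threshold=30):
    """Return the mean column index of pixels whose brightness changed
    between two greyscale frames (lists of rows), or None if nothing
    moved.  This is crude frame differencing, not real hand tracking."""
    cols, count = 0, 0
    for prev_row, row in zip(prev_frame, frame):
        for x, (a, b) in enumerate(zip(prev_row, row)):
            if abs(a - b) > threshold:
                cols += x
                count += 1
    return cols / count if count else None

def paddle_position(centroid_x, frame_width, screen_width):
    """Map the motion centroid to a paddle x-coordinate, mirrored so the
    paddle follows the hand like a mirror image."""
    return int((1 - centroid_x / frame_width) * screen_width)

# Two tiny 4x8 greyscale frames: the moving 'hand' brightens two pixels.
prev = [[0] * 8 for _ in range(4)]
curr = [row[:] for row in prev]
curr[1][2] = 255
curr[2][4] = 255
```

Running `motion_centroid_x(prev, curr)` places the motion between columns 2 and 4, and `paddle_position` turns that into a screen coordinate each frame, which is all a paddle game needs.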
One can expect such games to be the spark that carries gesture technology into the next generation.
Touchless UI
A touchless user interface means commanding the computer through body motion and gestures without physically touching a keyboard, mouse or screen. Microsoft's Kinect, for instance, offers touchless game interfaces.
Products like the Wii, though not tethered to the computer by a cord, cannot be considered touchless, since the player still holds a controller. Touch screens are different again: they pair a display with a touch-sensitive transparent panel covering the screen.
Touchless interfaces, together with gesture controls, became especially popular during the pandemic as a way to avoid touching shared surfaces. They provide a safe and hygienic alternative to the traditional touchscreen systems used in phones and offices.
The main features of touchless UI are face recognition and even intention detection based on physiological observations. Such systems can also identify a person's line of sight, emotion, age and gender.
Hence gesture-based recognition is a leap that will let us handle multiple simultaneous input points, leading to a more natural form of communication with machines.
It removes the need for extra desktop devices such as a mouse and keyboard, reducing the user's burden. Gesture-based inputs are also versatile: a gesture is not confined to a fixed set of symbols, so the permutations and combinations available for a gesture password are effectively unlimited.
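As a toy sketch of the gesture-password idea, a traced gesture can be reduced to a sequence of coarse stroke directions and compared against an enrolled template. Real systems use far richer features and tolerances; the encoding below is purely illustrative (y grows downward, as in screen coordinates):

```python
def directions(points):
    """Convert a traced gesture (a list of (x, y) points) into a
    sequence of coarse stroke directions: 'U', 'D', 'L', 'R'."""
    seq = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            seq.append('R' if dx > 0 else 'L')
        else:
            seq.append('D' if dy > 0 else 'U')
    return seq

def matches(stored, attempt):
    """A gesture password matches when the direction sequences agree,
    so small positional wobbles in the attempt are tolerated."""
    return directions(stored) == directions(attempt)

# An enrolled 'L'-shaped unlock gesture: down, then right.
enrolled = [(0, 0), (0, 10), (10, 10)]
```

Because the user can trace strokes of any length and order, the space of such direction sequences grows exponentially with gesture length, which is what makes gesture passwords hard to enumerate.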