
Gestigon: Gesture & Skeletal Recognition – Automotive Leads the Way

Consumers are used to the automotive cabin resembling a “time capsule” of a previous era, with even 2015 model year cars still sporting CD players, memory-stick connections, and the ubiquitous “Aux-in” jack. But BMW’s announcement that its new 7-Series sedan accepts hand-gesture input signals that the automotive interior may soon feature more intuitive ways to communicate with your favorite (driving) machine – ways more innovative than anything you can find in the office or your living room.

Gesture control was introduced to consumer markets with great fanfare in 2012, yet adoption was tepid due to cost, processing overhead, and reliability issues. Who needs gesture input for a laptop if it costs 3x more than a mouse, consumes 30% of the CPU, and works only 70% of the time?

Moritz van Grotthuss, CEO | gestigon GmbH

The automotive cabin offers a better opportunity for this technology. Pressing buttons or tapping screens generally requires taking your eyes off the road – whereas a simple gesture does not. In addition, as more of the driving process becomes automated, OEMs have realized that their cars need to monitor not just what goes on outside the vehicle but inside it as well. While on “super-cruise”, the system needs to continually verify that the driver is in a position to take over control immediately.

The Next Innovation in Safety and Comfort – Occupant Monitoring

Occupant monitoring can be performed with a variety of sensors, including cameras and depth sensors. Cameras can be considered intrusive, and video processing requires significantly more computing power. Depth sensors collect only depth (“z-plane”) information, allowing companies like gestigon to write software that can efficiently discern human shapes and other objects – while still preserving privacy.
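To make the privacy argument concrete, here is a minimal sketch, in Python with NumPy and SciPy, of how depth-only data might be used to pick out an occupant-sized blob without any RGB imagery. This illustrates the general technique, not gestigon's actual pipeline; the range thresholds and size cutoff are assumed values.

```python
# A minimal sketch of privacy-preserving occupant detection from a depth
# frame alone (2-D NumPy array of distances in millimetres). Thresholds
# here are illustrative assumptions, not values from any real product.
import numpy as np
from scipy import ndimage

def find_occupant(depth_mm: np.ndarray,
                  near: float = 300.0,
                  far: float = 1500.0,
                  min_pixels: int = 2000):
    """Return a boolean mask of the largest plausible occupant blob, or None."""
    # Keep only pixels within the cabin's expected occupant range.
    in_range = (depth_mm > near) & (depth_mm < far)
    # Group connected in-range pixels into candidate blobs.
    labels, n = ndimage.label(in_range)
    if n == 0:
        return None
    # Pick the largest blob; discard it if it is too small to be a person.
    sizes = ndimage.sum(in_range, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    if sizes[largest - 1] < min_pixels:
        return None
    return labels == largest
```

No image of the occupant's face or surroundings is ever formed; the software reasons only about distances, which is why depth-first approaches sidestep much of the privacy concern attached to in-cabin cameras.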

Gesture control marks the beginning of an era in which machines gain more context about their human users. Cars will sense the size of their occupants and automatically adjust mirrors and seat controls for a perfect fit. Simple commands for navigation or interior control can be performed by pointing, perhaps combined with a voice command such as “I want to go there” or “Lower this light”. Cars will understand the occupant’s intention, so that as they reach for the glove box, it can be unlatched before their hands get there. Ordinary surfaces such as steering wheels and center consoles can be made “smart” as the driver’s hands interact with them. What I am describing is not science fiction: gestigon has active projects in this area with major OEMs and tier-one suppliers.
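As a hedged illustration of how a pointing gesture could be fused with a deictic voice command like “lower this”, the sketch below intersects an estimated pointing ray with known cabin surfaces. The function names and surface definitions are hypothetical, not from gestigon's API; only the ray-plane geometry is standard.

```python
# A hypothetical sketch of fusing a pointing gesture with a deictic voice
# command. The cabin surfaces and coordinates are invented for illustration.
import numpy as np

def pointing_target(origin: np.ndarray, direction: np.ndarray,
                    plane_point: np.ndarray, plane_normal: np.ndarray):
    """Intersect the pointing ray with a known cabin surface plane."""
    denom = direction @ plane_normal
    if abs(denom) < 1e-6:          # ray parallel to the surface: no target
        return None
    t = ((plane_point - origin) @ plane_normal) / denom
    return origin + t * direction if t > 0 else None

def fuse(voice_command: str, origin, direction, surfaces):
    """Resolve 'there'/'this'/'that' against whatever the finger is aimed at."""
    if any(w in voice_command for w in ("there", "this", "that")):
        for name, (point, normal) in surfaces.items():
            hit = pointing_target(origin, direction, point, normal)
            if hit is not None:
                return name, hit
    return None

# Example: driver points toward the left window while saying "lower this".
surfaces = {"left_window": (np.array([0.0, 0.8, 1.0]), np.array([0.0, 1.0, 0.0]))}
print(fuse("lower this", np.array([0.3, 0.0, 1.0]),
           np.array([-0.1, 1.0, 0.0]), surfaces))
```

The design point is that neither modality alone suffices: the voice channel supplies the verb, the gesture channel supplies the object, and fusing the two resolves an otherwise ambiguous command.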

Automobiles make a great environment for perfecting these innovations because the automotive interior is a closed environment. Engineers can predict the positions of objects and obstacles, limiting the scope and thus the variability of the environment. When designing a skeletal recognition solution for the vehicle, we can expect to see a seat within a certain range of positions, and we can tightly control the position of our sensor. This restricted variability makes the job of recognizing patterns easier. On the other hand, the automobile is also a demanding environment, requiring exacting performance across a range of temperatures, tolerance of vibration and shock, and a design life of ten years or more.
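A small sketch of how that restricted variability pays off in practice: because the sensor mount and the seat-travel range are fixed at design time, the recognizer can confine its search to a precomputed region of the frame. The specific pixel window and depth range below are assumed for illustration, not taken from any real cabin.

```python
# A minimal sketch (assumed geometry) of exploiting the cabin's fixed layout:
# the pixel window where a seated torso can appear is computed once, offline,
# so runtime recognition never searches the full frame.
import numpy as np

SEAT_ROI = (slice(40, 200), slice(80, 240))      # rows, cols in the depth frame
SEAT_DEPTH_RANGE = (400.0, 1200.0)               # plausible torso distance, mm

def candidate_pixels(depth_mm: np.ndarray) -> np.ndarray:
    """Mask of pixels that could belong to a seated occupant's torso."""
    roi = depth_mm[SEAT_ROI]
    near, far = SEAT_DEPTH_RANGE
    mask = np.zeros_like(depth_mm, dtype=bool)
    mask[SEAT_ROI] = (roi > near) & (roi < far)
    return mask
```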

Skeletal Recognition – First In the Car, Then In Your Home and Office

It is reasonable to assume that once these solutions are perfected in the vehicle interior, they can be readily deployed in other settings, such as the living room or the office. As depth sensor performance increases and cost and power consumption decrease, deployment will become ubiquitous across smartphones, tablets, and desktop Internet of Things devices. Depth sensors will disrupt casual photography apps, since an image’s depth of field can be changed after capture. Tablets will add measurement information to objects they are pointed at. After this initial wave of functionality, all devices will gain a greater sense of the “human context” around them. The next iteration of the Nest thermostat may sense an approaching user and let them control lights by pointing instead of tapping through successive on-screen menus. The next version of Amazon Echo may combine spoken questions or suggested music selections with the context of which family member is standing near the device. Already, Augmented Reality (AR) and Virtual Reality (VR) headsets are among the most enthusiastic early adopters of depth sensors – which makes sense, as these headsets need an input mechanism, and the most natural candidate is the pair of hands in front of the user.
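As a toy illustration of post-capture refocus, the sketch below blurs each pixel of a single-channel image in proportion to how far its depth lies from a chosen focal plane. Real computational-photography pipelines are far more sophisticated; this only shows why a per-pixel depth map makes the effect possible at all.

```python
# A toy sketch of changing depth of field after capture: blend a sharp and a
# blurred copy per pixel, weighted by distance from the chosen focal plane.
# Assumes a grayscale image and a depth map of the same shape.
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image: np.ndarray, depth: np.ndarray,
            focal_depth: float, max_sigma: float = 5.0) -> np.ndarray:
    """Return a copy of `image` defocused away from `focal_depth`."""
    blurred = gaussian_filter(image, sigma=max_sigma)
    # Weight is 0 at the focal plane, 1 at the largest depth error in frame.
    error = np.abs(depth - focal_depth)
    weight = error / (error.max() + 1e-9)
    return (1.0 - weight) * image + weight * blurred
```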

Context – In Three Dimensions

As the interface between human and machine expands from two to three dimensions, lessons learned in perfecting the automotive experience will be applied to wearables, mobile devices, and yet-to-be-dreamed-up Internet of Things appliances. gestigon is already hard at work on this transition, with an active project in the wearable headset space. It seems that sometimes time capsules can work in reverse – and contain objects from the future!


