Hand gesture recognition is one of the most fundamental interaction methods in HCI. In recent years, the rapid development of immersive technologies such as VR and AR has led to many consumer products, such as the HTC Vive and HoloLens, in which gesture recognition is the central interaction technique. However, existing gesture recognition technologies, both camera-based and glove-based, have significant drawbacks. For instance, the Leap Motion, a typical camera-based device, requires users to stretch their arms to keep their hands within the recognition area, leading to the "gorilla arm" problem of rapid arm fatigue. The 5DT data glove, a typical glove-based device, is priced far beyond the reach of ordinary consumers.
In this project, we proposed LeapWrist, a smart wristband that allows users to perform gestures anywhere in 3D space. We mounted two pairs of miniature low-power cameras and IR structured-light projectors on a wristband, one pair facing the palm and the other facing the back of the hand. The extracted hand depth maps were then fed into a CNN regression model to reconstruct the hand pose. This idea shares some similarity with earlier work from Microsoft; however, that system used only one camera-projector pair facing the palm, which limits detection coverage, and it did not apply deep learning. Finally, I led the team to win the Outstanding Prize in "Challenge Cup", the highest-level technology competition at Tsinghua (1 out of 800 entries).
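To make the depth-map-to-pose pipeline concrete, the sketch below shows the shape of such a CNN regression model in NumPy: the two wrist-mounted depth maps are stacked as input channels, passed through convolutional stages, pooled, and regressed to 3D joint coordinates. This is a minimal illustration with random weights, not the trained LeapWrist model; the 64x64 resolution, layer sizes, and the assumption of 21 hand joints are all hypothetical.

```python
import numpy as np

def conv2d_relu(x, kernels, stride=2):
    """Valid 2D convolution + ReLU: x is (H, W, C_in), kernels is (k, k, C_in, C_out)."""
    k, _, c_in, c_out = kernels.shape
    h = (x.shape[0] - k) // stride + 1
    w = (x.shape[1] - k) // stride + 1
    out = np.zeros((h, w, c_out))
    for i in range(h):
        for j in range(w):
            patch = x[i * stride:i * stride + k, j * stride:j * stride + k, :]
            out[i, j] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
NUM_JOINTS = 21  # assumed hand skeleton size (hypothetical)

# Two depth maps (palm-facing and back-facing camera pairs), stacked as channels.
depth = rng.random((64, 64, 2))

# Random, untrained weights: two conv stages, then a linear regression head.
w1 = rng.standard_normal((5, 5, 2, 8)) * 0.1
w2 = rng.standard_normal((3, 3, 8, 16)) * 0.1
w_head = rng.standard_normal((16, NUM_JOINTS * 3)) * 0.1

f = conv2d_relu(depth, w1)   # -> (30, 30, 8)
f = conv2d_relu(f, w2)       # -> (14, 14, 16)
feat = f.mean(axis=(0, 1))   # global average pooling -> (16,)
joints = (feat @ w_head).reshape(NUM_JOINTS, 3)  # 3D coordinates per joint
print(joints.shape)          # (21, 3)
```

In the real system, the regression head would be trained on labeled hand poses, and stacking both viewpoints as input channels is one simple way to let the network fuse palm-side and back-side depth information.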