Research

In the 21st century, we live in increasingly intelligent environments that evolve with the pace of technology, and we are surrounded by sensors. Integrating data from these sensors offers great potential to picture a user’s behavior (e.g., routine behavior), help machines better understand humans, and support new interaction paradigms between humans and smart environments.

Fascinated by people themselves, I am passionate about enabling computers to better describe and understand users’ behavior through computational methods. I am also interested in developing innovative interaction techniques that facilitate interaction and provide natural user experiences. My research typically involves human-computer interaction, ubiquitous computing, and machine learning, often in combination.

Research Interests

  • Human-Computer Interaction
  • Ubiquitous Computing
  • Machine Learning

Projects

  • Clench Interaction: Novel Biting Input Techniques (2019)

    We propose Clench Interaction, a novel input technique based on clenching the teeth.

    People eat every day, and biting is one of the most fundamental and natural actions they perform. Existing work has explored tooth click location and jaw movement as input techniques; however, clenching has the potential to add further control to this input channel. We propose clench interaction, which leverages clenching as an actively controlled physiological signal that can facilitate interactions. We conducted a user study to investigate users’ ability to control their clench force. We found that users can easily discriminate three force levels, and that they can quickly confirm actions by unclenching (quick release). We developed a design space for clench interaction based on the results and investigated the usability of the clench interface. Participants preferred clench over the baselines and indicated a willingness to use clench-based interactions. This novel technique can provide an additional input method when users’ eyes or hands are busy, augment immersive experiences such as virtual/augmented reality, and assist individuals with disabilities. [Click for More Details]
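
    The core of the interaction can be pictured with a short sketch: quantize the (normalized) clench-force signal into three levels and treat a quick unclench as confirmation. The thresholds, polling rate, and the `read_clench_force`/`on_confirm` callbacks below are illustrative assumptions, not the calibrated values or sensing pipeline from the study.

    ```python
    import time

    # Hypothetical thresholds on a normalized (0-1) clench-force signal;
    # in practice these would come from per-user calibration.
    CLENCH_ON = 0.15            # above this the jaw counts as clenched at all
    LEVEL_2, LEVEL_3 = 0.45, 0.75
    QUICK_RELEASE_S = 0.30      # releasing faster than this confirms the level

    def force_level(force):
        """Map a force sample to one of three levels (0 = not clenched)."""
        if force >= LEVEL_3:
            return 3
        if force >= LEVEL_2:
            return 2
        return 1 if force >= CLENCH_ON else 0

    def run(read_clench_force, on_confirm, hz=100):
        """Poll the sensor; confirm the held level on a quick release."""
        held, held_at = 0, 0.0
        while True:
            level = force_level(read_clench_force())   # caller-supplied stub
            now = time.time()
            if level > 0:
                if level >= held:                      # track the highest level held
                    held, held_at = level, now
            elif held:
                # Fully unclenched: confirm only if the drop was quick.
                if now - held_at <= QUICK_RELEASE_S:
                    on_confirm(held)
                held = 0
            time.sleep(1.0 / hz)
    ```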

  • Hand Range Interface: Information Always at Hand With A Body-centric Mid-air Input Surface (2018)

    We propose Hand Range Interface, an input surface that is always at our fingertips.

    Most interfaces of our interactive devices, such as phones and laptops, are flat and are built as external devices in our environment, disconnected from our bodies. We therefore need to carry them in a pocket or a bag and accommodate our bodies to their design by sitting at a desk or holding the device in our hand. We propose Hand Range Interface, an input surface that is always at our fingertips. This body-centric interface is a semi-sphere attached to a user’s wrist, with a radius equal to the distance from the wrist to the index finger. We prototyped the concept in virtual reality and conducted a user study with a pointing task. The input surface can be designed either to rotate with the wrist or to keep its orientation fixed. We evaluated and compared participants’ subjective physical comfort, pointing speed, and pointing accuracy on the interface, which was divided into 64 regions. We found that the interface with a fixed orientation performed much better, with a 41.2% higher average comfort score, 40.6% shorter average pointing time, and 34.5% lower average error. Our results revealed interesting insights into user performance and preference across different regions of the interface. We conclude with a set of guidelines for future designers and developers on how to build this type of body-centric input surface. [Click for More Details]
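
    As a sketch of how pointing on such a surface could be indexed, the snippet below maps a fingertip position (relative to the wrist) to one of 64 regions via azimuth and elevation. The 8×8 layout of the regions, the hemisphere convention, and the optional wrist-rotation matrix are assumptions for illustration, not the exact parameterization used in the study.

    ```python
    import numpy as np

    def region_index(fingertip, wrist, wrist_rotation=None, grid=(8, 8)):
        """Map a fingertip position (world coordinates) to a region index on the
        wrist-centered semi-sphere. If wrist_rotation (3x3) is given, the surface
        rotates with the wrist; otherwise its orientation stays fixed, the variant
        that performed best in the study."""
        v = np.asarray(fingertip, dtype=float) - np.asarray(wrist, dtype=float)
        if wrist_rotation is not None:
            v = np.asarray(wrist_rotation).T @ v         # express v in the wrist frame
        v /= np.linalg.norm(v)                           # only the direction matters here

        azimuth = np.arctan2(v[1], v[0])                 # -pi .. pi
        elevation = np.arcsin(np.clip(v[2], -1.0, 1.0))  # -pi/2 .. pi/2

        cols, rows = grid                                # assumed 8 x 8 = 64 regions
        col = min(int((azimuth + np.pi) / (2 * np.pi) * cols), cols - 1)
        row = min(int(max(elevation, 0.0) / (np.pi / 2) * rows), rows - 1)
        return row * cols + col
    ```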

  • BreathVR: Leveraging Breathing as a Directly Controlled Interface for VR Games (2018)

    We propose breathing as a directly controlled physiological signal that can facilitate unique and engaging play experiences through natural interaction in single- and multiplayer virtual reality games.

    With virtual reality head-mounted displays rapidly becoming accessible to mass audiences, there is growing interest in new forms of natural input that enhance immersion and engagement for players. Research has explored physiological input for enhancing immersion in single-player games through indirectly controlled signals such as heart rate or galvanic skin response. In this paper, we propose breathing as a directly controlled physiological signal that can facilitate unique and engaging play experiences through natural interaction in single- and multiplayer virtual reality games. Our study shows that participants report a higher sense of presence and find gameplay more fun and challenging when using our breathing gestures. From our study observations and analysis, we present six design strategies to aid virtual reality game designers interested in using directly controlled forms of physiological input. [Click for More Details]
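
    As one way to picture what a directly controlled breathing gesture could look like in code, the sketch below classifies a short window of a normalized chest-expansion signal as a sharp inhale or exhale. The gesture set, threshold, and signal assumptions are illustrative; the paper’s actual sensing and gesture pipeline is not reproduced here.

    ```python
    import numpy as np

    def detect_breath_gesture(window, fs, slope_thresh=0.8):
        """Classify a window of a normalized (0-1) chest-expansion signal as a
        sharp 'inhale', a sharp 'exhale', or None. Threshold and units are
        illustrative assumptions."""
        window = np.asarray(window, dtype=float)
        slope = np.gradient(window) * fs          # expansion rate per second
        peak = slope[np.argmax(np.abs(slope))]    # fastest change in the window
        if peak > slope_thresh:
            return "inhale"
        if peak < -slope_thresh:
            return "exhale"
        return None
    ```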

  • ForceBoard: Subtle Text Entry Leveraging Pressure (2018)

    ForceBoard is a pressure-based input technique that enables text entry by subtle motion.

    ForceBoard is a pressure-based input technique that enables text entry by subtle motion. To enter text, users apply pressure to control a multi-letter-wide sliding cursor on a one-dimensional keyboard in alphabetical order, and confirm the selection with a quick release. In particular, we examined the error model of pressure control for successive and error-tolerant input, which we then incorporated into a Bayesian algorithm to infer user input. We also employed tactile feedback to facilitate pressure control. The results showed that the text entry rate reached 4.2 WPM (words per minute) for character-level input and 11.0 WPM for word-level input after 10 minutes of training. Users reported that ForceBoard was easy to learn and interesting to use. These results demonstrate the feasibility of using pressure as the main channel for text entry and show that ForceBoard can be useful for subtle interaction or when interaction is constrained. [Click for More Details]
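
    The word-level decoding can be sketched as a standard noisy-channel inference: the observed cursor positions (driven by pressure) are assumed to be roughly Gaussian around each intended letter’s position on the 1-D alphabetical keyboard, and the word maximizing the prior times the likelihoods wins. The Gaussian error model, its sigma, and the toy lexicon interface below are illustrative assumptions rather than the fitted model from the paper.

    ```python
    import math

    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def letter_likelihood(cursor_pos, letter, sigma=1.5):
        """P(observed cursor position | intended letter), modeled here as a
        Gaussian (in letter units) around the letter's index on the 1-D keyboard.
        sigma is an illustrative value, not the one fitted in the study."""
        target = ALPHABET.index(letter)
        return math.exp(-((cursor_pos - target) ** 2) / (2 * sigma ** 2))

    def infer_word(cursor_positions, lexicon):
        """Pick the word maximizing P(word) * prod_i P(obs_i | letter_i).
        lexicon maps candidate words to prior probabilities."""
        best, best_score = None, float("-inf")
        for word, prior in lexicon.items():
            if len(word) != len(cursor_positions):
                continue
            score = math.log(prior)
            for obs, letter in zip(cursor_positions, word):
                score += math.log(letter_likelihood(obs, letter) + 1e-12)
            if score > best_score:
                best, best_score = word, score
        return best

    # Example: noisy selections near 'c', 'a', 't' decode to "cat".
    print(infer_word([2.6, 0.4, 18.8], {"cat": 0.6, "car": 0.3, "cab": 0.1}))
    ```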

  • vMotion: A Context-Sensitive Design Methodology for Real-Walking in VR (2018)

    We propose a design methodology that seamlessly integrates redirection into the virtual experience by taking advantage of the perceptual phenomenon of inattentional blindness.

    Physically walking in virtual reality can provide a satisfying sense of presence. However, natural locomotion in virtual worlds larger than the tracked space remains a practical challenge. Numerous redirected walking techniques have been proposed to overcome space limitations, but they often require rapid head rotation, sometimes induced by distractors, to keep the scene rotation imperceptible. We propose a design methodology that seamlessly integrates redirection into the virtual experience by taking advantage of the perceptual phenomenon of inattentional blindness. Drawing on how the human visual system works, we present four novel visibility control techniques that work with our design methodology to minimize the disruption commonly found in existing redirection techniques. A user study shows that our embedded techniques are imperceptible and that users report significantly less dizziness when using our methods. The illusion of unconstrained walking in a large area (16 × 8 m) is maintained even though users are limited to a smaller (3.5 × 3.5 m) physical space. [Click for More Details]
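
    The underlying idea can be sketched as a per-frame rule: inject scene rotation only while a visibility-control technique keeps the rotated content outside the user’s attention, and inject none otherwise. The rotation rate and the simple re-centering bookkeeping below are illustrative assumptions, not the calibrated parameters from the study.

    ```python
    def redirection_step(remaining_offset_deg, dt, masked, masked_rate_deg_s=8.0):
        """One frame of redirection toward re-centering the user in the tracked
        space. `remaining_offset_deg` is the scene rotation still needed; rotation
        is injected only while `masked` is True, i.e. while a visibility-control
        technique hides the change (inattentional blindness).

        Returns (yaw_to_apply_this_frame_deg, updated_remaining_offset_deg)."""
        if not masked or remaining_offset_deg == 0.0:
            return 0.0, remaining_offset_deg
        step = min(masked_rate_deg_s * dt, abs(remaining_offset_deg))
        step = step if remaining_offset_deg > 0 else -step
        return step, remaining_offset_deg - step
    ```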

  • GalVR: A Novel Collaboration Interface using GVS (2017)

    GalVR is a navigation interface that uses galvanic vestibular stimulation (GVS) during walking to cause users to turn from their planned trajectory.

    GalVR is a navigation interface that uses galvanic vestibular stimulation (GVS) during walking to cause users to turn from their planned trajectory. We explore GalVR for collaborative navigation in a two-player virtual reality (VR) game. The interface affords a novel game design that exploits the differences between first- and third-person perspectives, allowing VR and non-VR users to share a play experience. By introducing interdependence arising from dissimilar points of view, players can uniquely contribute to the shared experience based on their roles. We detail the design of our asymmetrical game, Dark Room, and present insights from a pilot study. Trust emerged as the defining factor for successful play. [Click for More Details]
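
    For a rough sense of how GVS could be used to steer a walking player, the sketch below maps the heading error toward a target onto a bounded bilateral stimulation current. The sign convention (positive means “steer right”), gain, and current limit are assumptions for illustration, not the project’s calibrated hardware settings.

    ```python
    def gvs_current_ma(current_heading_deg, target_heading_deg,
                       gain_ma_per_deg=0.02, max_current_ma=1.0):
        """Map the heading error to a bilateral GVS current in milliamps.
        Positive output is assumed to sway the walker to the right; the gain
        and 1 mA limit are illustrative, not calibrated values."""
        error = (target_heading_deg - current_heading_deg + 180.0) % 360.0 - 180.0
        current = gain_ma_per_deg * error
        return max(-max_current_ma, min(max_current_ma, current))
    ```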

  • LeapWrist: A Low-Cost Hand Gesture Recognition Smart Band (2017)

    We proposed LeapWrist, a smart wristband that lets users perform gestures anywhere in 3D space.

    In this project, we proposed LeapWrist, a smart wristband that lets users perform gestures anywhere in 3D space. We mounted two pairs of miniature low-power cameras and IR structured-light projectors on a wristband, one pair facing the palm and the other facing the back of the hand. The extracted hand depth maps were then fed into a CNN regression model to reconstruct the hand. The idea shares some similarity with work from Microsoft [1]; however, that work used only one camera-projector pair facing the palm, which limits detection capability, and it did not apply a deep learning approach. I led the team to win the outstanding prize in “Challenge Cup,” the highest-level technology competition at Tsinghua (1 out of 800). [Click for More Details]
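
    A minimal sketch of the regression stage is shown below (PyTorch): the two wrist-view depth maps are stacked as input channels and a small CNN regresses 3-D hand-joint positions. The layer sizes, the 64×64 input resolution, and the 21-joint output are assumptions for illustration, not the architecture actually trained in the project.

    ```python
    import torch
    import torch.nn as nn

    class LeapWristNet(nn.Module):
        """Illustrative CNN regressor: palm-side and back-side depth maps in,
        num_joints 3-D joint positions out."""
        def __init__(self, num_joints=21):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),  # 2 depth maps as channels
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.regressor = nn.Linear(64, num_joints * 3)

        def forward(self, depth_pair):                    # (B, 2, 64, 64)
            x = self.features(depth_pair).flatten(1)      # (B, 64)
            return self.regressor(x).view(depth_pair.size(0), -1, 3)

    # joints = LeapWristNet()(torch.randn(1, 2, 64, 64))  # -> shape (1, 21, 3)
    ```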

  • Listening Behavior Generation of SARA (2016)

    We designed and implemented a new real-time algorithm for generating SARA’s listening behavior, which leverages traditional multimodal features together with rapport scores and conversational strategies.

    We designed and implemented a new real-time algorithm for generating SARA’s listening behavior, which leverages traditional multimodal features together with rapport scores and conversational strategies. SARA was demonstrated at the World Economic Summer Forum 2016, SIGDIAL 2017, and the World Economic Forum 2017. [Click for More Details]
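
    As a rough illustration of real-time listener-behavior selection, the sketch below combines a few multimodal cues with a rapport estimate and the active conversational strategy to choose nonverbal and verbal listening behaviors. The cue names, thresholds, and behavior set are hypothetical, not the rules actually deployed in SARA.

    ```python
    def select_listening_behaviors(features, rapport, strategy):
        """Pick listening behaviors for the current moment.

        features: dict of multimodal cues, e.g. {"user_pause_s": 0.7, "user_smiling": True}
        rapport:  current rapport estimate on a 1-7 scale
        strategy: active conversational strategy, e.g. "self_disclosure"
        All names and thresholds here are illustrative assumptions."""
        behaviors = []
        if features.get("user_pause_s", 0.0) > 0.6:       # end-of-clause pause
            behaviors.append("head_nod")
            # Higher rapport licenses a warmer verbal backchannel.
            behaviors.append("backchannel:mm-hmm" if rapport >= 4 else "backchannel:okay")
        if features.get("user_smiling") and strategy == "self_disclosure":
            behaviors.append("smile")
        return behaviors
    ```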