A Comparative Study of Hand Gesture Recognition Devices in the Context of Game Design

Ahmed S. Khalaf, Sultan A. Alharthi, Igor Dolgov, Z O. Toups
Gesture recognition devices provide a new means for natural human-computer interaction. However, when selecting these devices for games, designers might find it challenging to decide which gesture recognition device will work best. In the present research, we compare three vision-based hand gesture recognition devices: Leap Motion, Microsoft’s Kinect, and Intel’s RealSense. We developed a simple hand-gesture-based game to evaluate the performance, cognitive demand, comfort, and player experience of using these gesture devices. We found that participants preferred and performed much better with Leap Motion and Kinect than with RealSense. Leap Motion also outperformed or was equivalent to Kinect. These findings suggest that not all gesture recognition devices are suitable for games and that designers need to make informed decisions when selecting gesture recognition devices and designing gesture-based games to ensure the usability, accuracy, and comfort of such games.

A Taxonomy for Selecting Wearable Input Devices for Mixed Reality

Ahmed S. Khalaf, Sultan A. Alharthi, Son Tran, Igor Dolgov, Z O. Toups
Composite wearable computers consist of multiple wearable devices connected together and working as a cohesive whole. These composite wearable computers are promising for augmenting our interaction with physical, virtual, and mixed play spaces (e.g., mixed reality games). Yet little research has directly addressed how mixed reality system designers can select wearable input devices and how these devices can be assembled into a cohesive wearable computer. We present an initial taxonomy of wearable input devices to aid designers in deciding which devices to select and assemble together to support different mixed reality systems. We undertook a grounded theory analysis of 84 different wearable input devices, resulting in a design taxonomy for composite wearable computers. The taxonomy consists of two axes: TYPE OF INTERACTIVITY and BODY LOCATION. These axes enable designers to identify which devices fill particular needs in the system development process and how these devices can be assembled into a cohesive wearable computer.
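
As a rough illustration of how such a two-axis taxonomy can support device selection, the following Python sketch encodes TYPE OF INTERACTIVITY and BODY LOCATION as lookup keys. The axis values and example devices are invented for illustration and are not taken from the paper's actual coding.

    # Hypothetical encoding of the two taxonomy axes as a device catalog.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class WearableInput:
        name: str
        interactivity: str   # TYPE OF INTERACTIVITY, e.g., "touch", "gesture", "voice"
        body_location: str   # BODY LOCATION, e.g., "finger", "wrist", "head"

    CATALOG = [
        WearableInput("ring controller", "touch", "finger"),
        WearableInput("wrist IMU band", "gesture", "wrist"),
        WearableInput("headset microphone", "voice", "head"),
    ]

    def candidates(interactivity: str, body_location: str) -> list:
        """Return devices that fill a given cell of the taxonomy."""
        return [d for d in CATALOG
                if d.interactivity == interactivity
                and d.body_location == body_location]

    print(candidates("gesture", "wrist"))  # -> [WearableInput('wrist IMU band', ...)]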

HandPoseMenu: Hand Posture-Based Virtual Menus for Changing Interaction Mode in 3D Space

Chanho Park, Hyunwoo Cho, Sangheon Park, Young-Suk Yoon, Sung-Uk Jung
A system control technique (changing the interaction mode) is needed to provide various spatial interaction techniques for virtual, augmented, and mixed reality (VR/AR/MR) applications. We propose HandPoseMenu, a hand posture-based virtual menu system in 3D space. HandPoseMenu recognizes the non-dominant hand's posture using a depth sensor attached to a wearable display and presents the virtual menu corresponding to that posture beside the non-dominant hand. When the user selects a desired function from the presented menu with the dominant hand, the interaction mode changes. The combination of the two system controls (graphical menu and hand posture) makes it possible to display the menu that the user wants to use in 3D space. This paper describes the implementation of HandPoseMenu and its application.
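
The control flow described above can be summarized in a minimal sketch: a posture recognized on the non-dominant hand opens its menu, and a dominant-hand pick switches the interaction mode. The posture labels and menu entries below are hypothetical; the actual system derives postures from an HMD-mounted depth sensor.

    # Hypothetical posture-to-menu mapping in the spirit of HandPoseMenu.
    MENUS = {
        "fist":      ["translate", "rotate", "scale"],
        "open_palm": ["paint", "erase"],
    }

    current_mode = None

    def on_posture_recognized(posture: str) -> list:
        """Show the virtual menu associated with the recognized posture."""
        return MENUS.get(posture, [])

    def on_menu_selection(menu: list, index: int) -> None:
        """A dominant-hand selection changes the active interaction mode."""
        global current_mode
        if 0 <= index < len(menu):
            current_mode = menu[index]

    menu = on_posture_recognized("fist")
    on_menu_selection(menu, 1)
    print(current_mode)  # -> "rotate"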

Haptic-Payment: Stimulating 'Pain' of Payment through Vibration Feedback in Mobile Devices

Muhanad S. Manshad, Daniel Brannon, Sultan A. Alharthi
The proliferation of mobile payment applications in recent years has further decoupled the physical act of paying from the consumption experience. Prior research suggests that this decreases the psychological `pain' that consumers feel when making a purchase, and leads them to spend more money than they otherwise would. The present research proposes a system that restores the physical sensation of paying to mobile payment apps. Drawing on a preliminary user study, as well as prior theories on prehensile grip types and haptic vibration feedback thresholds, we develop a device consisting of an array of vibration motors that can be attached to the back of a mobile phone and controlled through a custom application to provide vibration feedback of varying frequencies and durations at the time of mobile payment. Our preliminary findings suggest that a configuration of vibration motors located along the right and left edges of the phone, giving high-frequency, short-duration vibrations upon payment, is the most effective way of inducing the pain of payment.
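
The feedback policy suggested by these findings can be sketched as follows. The motor IDs and driver call are hypothetical stand-ins for the authors' custom hardware; only the policy itself (edge-mounted motors firing high-frequency, short-duration pulses at payment time) comes from the abstract.

    # Hedged sketch of the payment-time vibration policy.
    import time

    EDGE_MOTORS = [0, 1, 2, 3]   # hypothetical IDs for left/right edge motors
    PULSE_FREQ_HZ = 250          # "high frequency" (illustrative value)
    PULSE_DURATION_S = 0.08      # "short duration" (illustrative value)

    def drive_motor(motor_id: int, freq_hz: int, duration_s: float) -> None:
        # Placeholder for the custom driver call; prints instead of actuating.
        print(f"motor {motor_id}: {freq_hz} Hz for {duration_s * 1000:.0f} ms")

    def on_mobile_payment() -> None:
        """Trigger the 'pain of payment' cue when a payment completes."""
        for motor in EDGE_MOTORS:
            drive_motor(motor, PULSE_FREQ_HZ, PULSE_DURATION_S)
        time.sleep(PULSE_DURATION_S)  # let the pulse finish before returning

    on_mobile_payment()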

Interactive Exhibition from Wall Panels in a Museum

Jihoon Cho, Hyunsoo Kim, Joowon Lim, Taeho Kim, Jinah Park
We present an interactive exhibition concept built on existing physical wall panels in a museum. By combining unique exhibition content with various rendering and interface techniques, we create immersive content featuring two-way communication and multi-modal feedback to provide engaging experiences. We apply our interactive panel concept to a mummy exhibit in a natural history museum in Korea and demonstrate its effectiveness. The final goal of this project is to serve as a platform for virtual museums, providing an opportunity to expand offline museums into an online experience.

ISAR: An Authoring System for Interactive Tabletops

Zardosht Hodaie, Juan Haladjian, Bernd Brügge
Despite more than three decades of research in augmented reality (AR) and the demonstrated positive effects of AR in educational settings, this technology has yet to see widespread adoption in schools. Complex technology and limited educational content are among the reasons for this absence. Authoring systems can play a positive role in the introduction of AR into school settings. In this paper we introduce ISAR, a domain-independent authoring system for a camera-projector-based interactive tabletop. ISAR allows teachers to create learning content and define interactions for the tabletop themselves, without the need for programming, and hence lowers the barrier for educational practitioners to experiment with augmented reality and tangible interactions.
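
To make the authoring idea concrete, the sketch below shows what an authored lesson might amount to once expressed in code; ISAR lets teachers define such content through its interface without programming. All object names, triggers, and feedback actions here are invented for illustration and are not ISAR's actual data model.

    # Hypothetical data produced by an authoring session, plus a tiny dispatcher.
    lesson = {
        "name": "Counting with blocks",
        "interactions": [
            {"trigger": {"object": "red_block", "placed_on": "zone_1"},
             "feedback": {"project": "checkmark.png", "play_sound": "success.wav"}},
            {"trigger": {"object": "blue_block", "placed_on": "zone_1"},
             "feedback": {"project": "cross.png", "play_sound": "retry.wav"}},
        ],
    }

    def on_object_placed(obj: str, zone: str) -> None:
        """Dispatch authored feedback when the camera detects a placement."""
        for rule in lesson["interactions"]:
            trigger = rule["trigger"]
            if trigger["object"] == obj and trigger["placed_on"] == zone:
                print("feedback:", rule["feedback"])  # projector/speaker stand-in

    on_object_placed("red_block", "zone_1")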

Lessons Learned from an Auditory–Vibrotactile Sensory Experience in the Museum

Dong Hoon Jung, Jieun Kim, Hokyoung Ryu, Jeong Gu Lee, Hoi Jeong Yang
The present paper describes research results and lessons learned from a multisensory exhibit held in the Gwacheon National Science Museum. We introduced an all-in-one multisensory device that can produce visual, auditory, and vibrotactile stimuli to create a novel sensory experience. Three sensory conditions were considered, and the ways sensory feedback differed under these conditions were examined. In total, 113 people visited our exhibition. From among these visitors, we collected self-report questionnaires, interviews, and electrodermal activity (EDA) measurements from 60 participants. The results showed that the combined vibrotactile and auditory stimuli conditions were more pleasing and evoked a greater sense of presence than the auditory-only condition. When applying vibrotactile stimuli, developing situational vibrotactile stimuli that respond to the auditory stimulus appears more effective than deploying ambient vibrotactile stimuli. Additional experiments and data analysis are needed; nevertheless, we hope that this case study will inspire researchers and curators who are willing to introduce multisensory experiences into actual museum settings.

RARE Shop: Real & Augmented Retail Bookshop Experience using Projection Mapping

Yuhui You, Kelvin Cheng
Augmented reality (AR) technology has been used to integrate online product information into the physical shopping experience, relying on hardware such as head-mounted displays or mobile devices, which interferes with customers’ natural in-store shopping experience. Unlike traditional AR, spatial augmented reality (SAR) could potentially offer an unmediated augmented physical experience. In this paper, we present a proof-of-concept prototype that augments existing physical shops with online personalized product information without disrupting the customer’s natural shopping behavior. We envision a real and augmented bookshop experience, using SAR to provide immersive digital and online information overlaid onto physical products, breaking down the boundary between physical space and digital information at retail shops. Moreover, we design unmediated interactions that react to the relative spatial and temporal context of users, based on their natural interactions with the physical products, without the need to learn special interactions or gestures. We also discuss the design challenges and implications for this in-store shopping experience.

Tabletop AR with HMD and Tablet: A Comparative Study for 3D Selection

Carole Plasson, Dominique Cunin, Yann Laurillau, Laurence Nigay
We experimentally compare the performance and usability of tablet-based and see-through head-mounted display (HMD)-based interaction techniques for selecting 3D virtual objects projected on a table. This study is a first step toward a better understanding of the advantages and limitations of these interaction techniques, with the perspective of improving interaction with augmented maps. To this end, we evaluate the performance of 3 interaction techniques for selecting 3D virtual objects in sparse and dense environments: (1) direct touch interaction with an HMD; (2) ray-casting interaction with an HMD; and (3) touch interaction on a tablet. Our results show that the two HMD-based techniques are faster, less physically tiring, and preferred by the participants over the tablet. The HMD-based interaction techniques perform equally well, but direct touch seems to be less affected by small targets and occlusion.
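
For readers unfamiliar with technique (2), the core of ray-casting selection is a ray-target intersection test: cast a ray from the HMD along the pointing direction and select the nearest intersected target. The sketch below uses standard ray-sphere intersection with made-up target data; it is not the study's implementation.

    # Minimal ray-casting selection against spherical target proxies.
    import math

    def ray_hits_sphere(origin, direction, center, radius):
        """Standard ray-sphere test; `direction` must be a unit vector.
        Returns the distance to the nearest hit in front of the origin, or None."""
        oc = [o - c for o, c in zip(origin, center)]
        b = sum(d * v for d, v in zip(direction, oc))
        c = sum(v * v for v in oc) - radius * radius
        disc = b * b - c
        if disc < 0:
            return None                  # the ray misses the sphere
        t = -b - math.sqrt(disc)
        return t if t >= 0 else None

    targets = {"cube": ([0.0, 0.0, 2.0], 0.1), "cone": ([0.5, 0.0, 4.0], 0.1)}
    origin, direction = [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]

    hits = {name: t for name, (center, r) in targets.items()
            if (t := ray_hits_sphere(origin, direction, center, r)) is not None}
    print(min(hits, key=hits.get))       # -> "cube", the nearest hit target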

Though Miles Apart: Soft Tangible Interface that Evokes Nostalgic Memories

Annie Sungkajun, Jinsil Hwaryoung Seo
Though Miles Apart is an interactive installation that encourages people to join together, through a soft tangible interface, to reminisce about their personal experiences. Based on a traditional Chinese melody, Shuidiao Getou, the installation embodies the idea of separation but, though apart, ``we are still able to share the beauty of the moon together''. Initially, as people enter the area, they are invited to sit around the pond-like surface, near glowing points. When a viewer grasps a glowing, soft conductive unit in their hands, it triggers videos in response to their touch. Viewers listen to and watch stories from people of different cultures and backgrounds as they recollect their memories. The installation creates a meditative, immersive environment where the collective can sit and experience nostalgia.

Toward Long Distance Tabletop Hand-Document Telepresence

Chelhwon Kim, Patrick Chiu, Joseph Andrew de la Pena, Laurent Denoue, Jun Shingu, Yulius Tjahjadi
In a telepresence scenario with remote users discussing a document, it can be difficult to follow which parts are being discussed. One way to address this is by showing the user's hand position on the document, which also enables expressive gestural communication. An important practical problem is how to capture and transmit the hand movements efficiently along with high-resolution document images. We propose a tabletop system with two channels that integrates document capture with a 4K video camera and hand tracking with a webcam, in which the document image and hand skeleton data are transmitted at different rates and handled by a lightweight Web browser client at remote sites. To enhance the rendering, we employ velocity-based smoothing and ephemeral motion traces. We tested our prototype over long distances from the USA to Japan and to Italy, and report on latency and jitter performance. Our system achieves relatively low latency over a long distance in comparison with a tele-immersive system that transmits mesh data over much shorter distances.
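
Velocity-based smoothing of the kind mentioned above is commonly realized as an exponential filter whose smoothing factor relaxes as the tracked point speeds up (in the spirit of the 1-euro filter), trading jitter suppression at rest for low lag during fast gestures. The sketch below is one such filter with illustrative constants, not the authors' exact implementation.

    # Velocity-adaptive exponential smoothing for one coordinate of a hand joint.
    MIN_ALPHA, MAX_ALPHA = 0.2, 0.9
    VELOCITY_SCALE = 0.01   # gain saturates near 100 px/frame (illustrative)

    def smooth(prev: float, raw: float, velocity: float) -> float:
        """Blend the raw sample toward the previous estimate, less when moving fast."""
        gain = min(1.0, abs(velocity) * VELOCITY_SCALE)
        alpha = MIN_ALPHA + (MAX_ALPHA - MIN_ALPHA) * gain
        return alpha * raw + (1.0 - alpha) * prev

    # Usage: filter one coordinate of a tracked joint across webcam frames.
    estimate = prev_raw = 100.0
    for raw in [100.4, 99.8, 130.0, 160.0]:   # jitter at rest, then a fast move
        velocity = raw - prev_raw             # px per frame
        estimate = smooth(estimate, raw, velocity)
        prev_raw = raw
        print(round(estimate, 1))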