Applying Pen Pressure, Tilt and Touch Interactions to Data Visualizations

Diogo Cabral, Marcelo Ramos and Pedro F. Campos
Bimanual interactions using pen and touch are natural to humans and have been explored and proven effective in previous research. However, most previous work has been limited to using the Cartesian coordinates of the fingers and the pen tip. In this work, we go further by exploring additional pen data, such as pressure and tilt, combined with multi-touch input. We apply this combination to two data visualizations: a Bubble Chart and a Linear Regression combined with a Radar. We performed a preliminary user study comparing pen-and-touch interactions with mouse input. We found that pen-and-touch interactions can take less time when looking for specific values in the Bubble Chart, whereas the mouse can be faster when looking for a specific relation in the Linear Regression and Radar.
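
To make the extra pen channels concrete, here is a minimal sketch of how normalized pressure and tilt values might be mapped to chart parameters. The event fields and the particular mapping (pressure to a bubble-radius filter, tilt to panning) are hypothetical illustrations, not the mapping used in the paper.

```python
# Hypothetical mapping of pen pressure/tilt to visualization parameters.
def map_pen_to_chart(pressure, tilt_x, tilt_y,
                     r_min=2.0, r_max=40.0, tilt_range=60.0):
    """pressure in [0, 1]; tilt_x/tilt_y in degrees within +/-tilt_range."""
    # Pressure could select a bubble-radius threshold for filtering.
    radius_threshold = r_min + pressure * (r_max - r_min)
    # Tilt could pan the view: normalize degrees to [-1, 1] per axis.
    pan_x = max(-1.0, min(1.0, tilt_x / tilt_range))
    pan_y = max(-1.0, min(1.0, tilt_y / tilt_range))
    return radius_threshold, (pan_x, pan_y)

print(map_pen_to_chart(pressure=0.5, tilt_x=30.0, tilt_y=-15.0))
```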

Augmented Reality Application in Vocational Education: A Case of Welding Training

Richa Agrawal and Jayesh S. Pillai
There are 36 million unemployed youth in India. One of the reasons for this unemployment is the significant disconnect between school education and the opportunities, skills, and exposure necessary to achieve full potential and earn a livelihood. Various NGOs have taken initiatives to introduce vocational training as part of mainstream education, creating employment opportunities for students from low-income backgrounds upon completion of schooling. The secondary schools in which such training is imparted often face a lack of space and infrastructure, regulatory restrictions, and safety concerns. The objective of this project was to use Augmented Reality (AR) to overcome the spatial and safety barriers that affect the efficiency of these vocational training programs. An interactive learning module was designed using marker-based mobile AR to support both acquiring the knowledge and learning the skills required for welding. This learning module was evaluated with expert welding instructors and accordingly proposed for use in government schools providing vocational training in welding. This project opens up the possibility of designing safe and accessible ways to develop vocational skills.
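
As a sketch of the marker-detection step that underlies marker-based mobile AR, the snippet below uses OpenCV's ArUco module, one common choice; the abstract does not state which marker system the project actually used. It assumes opencv-contrib-python with the pre-4.7 ArUco API and a hypothetical input frame.

```python
# Sketch of marker detection for marker-based AR (OpenCV ArUco,
# pre-4.7 API from opencv-contrib-python; marker system is assumed).
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("camera_frame.png")  # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary,
                                          parameters=parameters)

if ids is not None:
    # Each detected id could anchor a virtual welding demonstration.
    print("Detected markers:", ids.flatten())
```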

Backhand Display: A Wearable Device for the Back of the Hand

Hiroyuki Manabe and Yuki Iwai
Smartwatches and smartphones have their own unique advantages, and their combination allows the user to quickly access information. Our proposal, Backhand Display, is a wearable device for the back of the hand: a single device that can replace both the smartwatch and the smartphone. Its fundamental performance is evaluated, including text input, vibration notification, and the response time between notification and reply. It is confirmed that Backhand Display offers faster text input than the smartwatch, greater notification capability than the smartphone, and quicker response to notifications than either of them. These results are consistent with those of a user questionnaire. Although several limitations and issues remain to be solved, these results show the potential of Backhand Display, a wearable device for the back of the hand.

Exploring Tangible VR as a Tool for Workplace Design

Lukas Desmond Elias Van Campenhout, Marieke Van Camp and Ward Vancoppenolle
In this paper, we present a demonstrator that combines elements of physical prototyping and Virtual Reality (VR). Our goal is to integrate VR into the design process of workplaces, not as a replacement for physical prototyping but as a complement to it. We call this approach Tangible Virtual Reality. We discuss our own background in embodied interaction and present the context for our research: an industrial work cell for human-robot interaction, proposed by Audi Brussels, Kuka Belgium and FRS Robotics. We explain and illustrate how we used 3D CAD geometry to create a VR model and a physical prototype, and how we mapped them onto one another. The result is a demonstrator that offers a VR experience enhanced with real tactile information and channeled by the natural limitations of the physical world. We suggest where the benefits of this physical/virtual design approach lie, and discuss how it could be operationalized in workplace design practice.
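
Mapping a physical prototype onto its VR counterpart amounts to estimating a rigid transform between corresponding anchor points. Below is a minimal sketch using the Kabsch algorithm; the point sets and the procedure are illustrative assumptions, as the paper does not detail its mapping step.

```python
# Sketch: aligning a physical prototype with its VR model by estimating
# the rigid transform between corresponding anchor points (Kabsch).
import numpy as np

def rigid_transform(physical_pts, virtual_pts):
    """Return R, t such that virtual ~= physical @ R.T + t (row vectors)."""
    p_c = physical_pts.mean(axis=0)
    v_c = virtual_pts.mean(axis=0)
    H = (physical_pts - p_c).T @ (virtual_pts - v_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = v_c - R @ p_c
    return R, t

# Hypothetical check: a 90-degree rotation about z plus a translation.
phys = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
R0 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
virt = phys @ R0.T + np.array([0.5, 0.0, 0.0])
R, t = rigid_transform(phys, virt)
print(np.round(R, 3), np.round(t, 3))
```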

IIMR: A Framework for Intangible Mid-Air Interactions in a Mixed Reality Environment

G S Rajshekar Reddy, Prithvi Raj and Lingaraju G M
Augmented Reality (AR) has been getting increased traction lately with a plethora of AR-enabled devices on the market ranging from smartphones to commercial Head-Mounted Displays (HMDs). With the rise in AR, a need exists for there to be a more natural and intuitive form of interaction with virtual content that does not interfere with the user experience. Moreover, advanced and sophisticated hardware such as the Leap Motion Controller makes this possible by employing sensors that can track hands accurately in real-time. This paper presents a framework, IIMR (Intangible Interactions in Mixed Reality), that operates over a network and allows for mid-air interactions using just the bare-hands. The aim is to provide researchers and developers with an inexpensive and accessible solution for quickly developing and prototyping hand interactions in a mixed reality environment. The system consists of three modules, namely a Smartphone, a Google cardboard-like HMD, and a Leap Motion Controller. Diverse applications are developed to evaluate the framework, and the results from qualitative usability studies are outlined.
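
Because the framework's defining trait is that it operates over a network, here is a minimal sketch of the transport leg of such a pipeline: hand poses serialized to JSON and streamed over UDP to the smartphone. The message schema, addresses, and the choice of UDP are assumptions; the actual IIMR protocol is not documented in the abstract, and the Leap SDK calls that produce the poses are omitted.

```python
# Hypothetical network leg of a Leap-Motion-to-smartphone pipeline.
import json
import socket

PHONE_ADDR = ("192.168.0.42", 9000)  # hypothetical smartphone endpoint
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_hand_pose(palm_position, fingertip_positions):
    # Serialize one tracked-hand frame and push it to the HMD client.
    msg = json.dumps({
        "palm": palm_position,             # e.g. [x, y, z] in mm
        "fingertips": fingertip_positions  # five [x, y, z] triples
    }).encode("utf-8")
    sock.sendto(msg, PHONE_ADDR)

send_hand_pose([0.0, 180.0, 30.0], [[10.0, 200.0, 5.0]] * 5)
```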

Interactions with Digital Mountains: Tangible, Immersive and Touch Interactive Virtual Reality

Oscar Ardaiz, Asier Marzo, Ruben Baztan and Iñigo Ezcurdia
The digitization of the Earth's mountains and terrains has made it possible to plan journeys, manage natural resources, and learn about the Earth from the comfort of our homes. We aim to develop new interactions with digital mountains through novel interfaces: a 3D-printed representation of a mountain; an immersive virtual reality visualization; and two different touch-interactive interfaces for that visualization, namely a 3D-printed mountain with touch sensors and a multitouch tablet. We show how we built these prototypes from digital data retrieved from a map provider, and which interactions are possible with each device. We also explain how we designed and conducted our evaluation.
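
A common step behind both the 3D-printed mountain and the VR visualization is turning an elevation grid (heightmap) into a triangle mesh. The sketch below illustrates that conversion with synthetic elevations; the paper retrieves real data from a map provider, and the grid spacing here is an assumption.

```python
# Sketch: elevation grid -> triangle mesh (synthetic data as stand-in).
import numpy as np

heights = np.random.rand(4, 4) * 100.0  # hypothetical elevations in m
rows, cols = heights.shape
cell = 30.0                             # assumed grid spacing in m

vertices = [(x * cell, y * cell, heights[y, x])
            for y in range(rows) for x in range(cols)]

triangles = []
for y in range(rows - 1):
    for x in range(cols - 1):
        i = y * cols + x
        triangles.append((i, i + 1, i + cols))              # upper-left
        triangles.append((i + 1, i + cols + 1, i + cols))   # lower-right

print(len(vertices), "vertices,", len(triangles), "triangles")
```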

MarioControl: An Intuitive Control Method for a Mobile Robot from a Third-Person Perspective

Ryota Shibusawa, Mutsuhiro Nakashige and Katsutoshi Oe
Playware using a mobile robot can be fun for users of all ages, help prevent dementia in the elderly, and educate children. When controlling the robot from a third-person perspective, if the pose of the robot changes, the direction of movement may differ from the direction of the user's controller operation, making it difficult for general users to control a mobile robot. In this study, we proposed and implemented a control method in which the direction of operation and the direction of movement are the same, using a robot that can read markers on the running surface and recognize its own pose. Our experimental results showed that the proposed method is easier to use for many users than the conventional method used in radio-controlled cars. Additionally, we found that people who take longer to control a mobile robot can control it better using the proposed method.
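
The core of pose-independent control reduces to a frame transform: the user's stick input, given in the fixed third-person (world) frame, is rotated into the robot's own frame using the heading recovered from the floor markers. The sketch below shows that 2D rotation; variable names and sign conventions are illustrative, not taken from the paper.

```python
# Sketch: rotate a world-frame command vector into the robot frame,
# so "push stick right" always moves the robot to the world's right.
import math

def world_to_robot(vx_world, vy_world, heading_rad):
    """Rotate a world-frame command into the robot frame (heading from markers)."""
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    vx_robot = c * vx_world + s * vy_world
    vy_robot = -s * vx_world + c * vy_world
    return vx_robot, vy_robot

# Robot facing 90 degrees left of the world x-axis: a world +x command
# becomes a move toward the robot's own right, i.e. (~0.0, ~-1.0).
print(world_to_robot(1.0, 0.0, math.pi / 2))
```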

MaskMe: Using Masks to Design Collaborative Games for Helping Children with Autism Make Eye Contact

Qin Wu
Eye contact disorder is one of the typical characteristics of children with Autism Spectrum Disorder (ASD), and it is directly related to deficiencies in social interaction. In this poster, we present MaskMe, a collaborative game to help children with ASD make eye contact. The game sets several rules that encourage children with ASD to make eye contact with each other; masks cover the children's faces to reduce their stress when making eye contact, and a device attached to each mask detects whether eye contact is occurring. We conducted a pilot user study of MaskMe with 18 children with autism; the results show that the game was effective in encouraging them to make more eye contact with each other.

Smart Elevator Hall: Prototype of In-building Guiding Using Interactive Floor Display

Takashi Matsumoto, Motoaki Yamazaki, Yuya Igarashi, Ryotaro Yoshida and Michihito Shiraishi
The authors are designing a guiding system using an interactive floor display. This paper introduces a prototype of a Smart Elevator Hall as an example of the interactive floor. User scenarios were developed to design the detailed functions of the elevator hall, and the prototype was implemented with a sensor and LED panels. The system was tested with scenario skits by designers to evaluate the concept's potential and identify issues for further development.

TapStr: A Tap and Stroke Reduced-Qwerty for Smartphones

Mohammad Sharif, Gulnar Rakhmetulla and Ahmed Sabbir Arif
TapStr is a single-row reduced-Qwerty keyboard that enables text entry on smartphones through taps and directional strokes. It is an unambiguous keyboard and thus does not rely on a statistical decoder to function. It supports the entry of uppercase and lowercase letters, numeric characters, symbols, and emojis. Its purpose is to provide an economical alternative to virtual Qwerty when saving touchscreen real estate is more desirable than entry speed, and a convenient alternative when users do not want to repeatedly switch between multiple layouts for mixed-case alphanumeric text and symbols (such as in a password). In a short-term study, TapStr yielded about 11 WPM on plain phrases and 8 WPM on phrases containing uppercase letters, numbers, and symbols.
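
Unambiguous decoding means each (key, gesture) pair maps to exactly one character, so no language model is needed. Below is a minimal sketch of such a decoder: a short movement counts as a tap, longer ones as one of four stroke directions. The threshold and the character layout are hypothetical illustrations, not TapStr's actual assignment.

```python
# Sketch of unambiguous tap/stroke decoding (layout is hypothetical).
import math

TAP_THRESHOLD = 20.0  # px; shorter movements count as taps (assumed value)

LAYOUT = {
    ("key1", "tap"): "a", ("key1", "up"): "b", ("key1", "down"): "c",
    ("key1", "left"): "d", ("key1", "right"): "e",
}

def classify_gesture(dx, dy):
    # Classify the touch displacement as a tap or a stroke direction.
    if math.hypot(dx, dy) < TAP_THRESHOLD:
        return "tap"
    if abs(dx) > abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"  # screen y grows downward

def decode(key, dx, dy):
    return LAYOUT.get((key, classify_gesture(dx, dy)))

print(decode("key1", 3.0, -2.0))   # short movement -> tap -> 'a'
print(decode("key1", 0.0, -50.0))  # upward stroke -> 'b'
```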

Towards Computational Identification of Visual Attention on Interactive Tabletops

Alberta Ansah, Caitlin Mills, Orit Shaer and Andrew L Kun
There is growing interest in the ability to detect where people are looking in real time to support learning, collaboration, and efficiency. Here we present an overview of computational methods for accurately classifying the area of visual attention on a horizontal surface that we use to represent an interactive display (i.e., a tabletop). We propose a new model that utilizes a neural network to estimate the area of visual attention, and provide a close examination of the factors that contribute to the accuracy of the model. Additionally, we discuss the use of this technique to model joint visual attention in collaboration. We achieved a mean classification accuracy of 75.75% (standard deviation 0.14) when the model was trained on data from four participants and tested on the fifth. We also achieved a mean classification accuracy of 98.8% (standard deviation 0.02) when different amounts of the overall data were used to test the model.
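
The cross-participant figure corresponds to a leave-one-participant-out protocol: train on four participants, test on the held-out fifth, and average over folds. The sketch below shows that protocol with scikit-learn; the features, labels, and network size are synthetic stand-ins, since the abstract does not specify the model's inputs.

```python
# Sketch of leave-one-participant-out evaluation (synthetic data).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # hypothetical gaze/head features
y = rng.integers(0, 4, size=500)       # hypothetical attention areas
groups = np.repeat(np.arange(5), 100)  # participant id per sample

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean accuracy: {np.mean(scores):.3f} (sd {np.std(scores):.3f})")
```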

Towards Visualizing and Exploring Multivariate Networks on Mobile Devices

Tom Horak, Ricardo Langner and Raimund Dachselt
Mobile devices are becoming more and more powerful; however, interfaces for exploring multivariate network data on the go remain rare and challenging to realize. In this work, we present considerations as well as concepts for enabling such exploration on mobile devices. Specifically, we argue that an interface combining a force-directed node-link representation with additional views for investigating multivariate aspects and filter functionality seems promising. Further, such an interface could be advanced by incorporating multiple mobile devices in parallel. As part of our ongoing investigations, we have realized an early prototype indicating the general feasibility of our concepts.
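
For reference, a force-directed node-link layout iterates between pairwise repulsion and attraction along edges. The sketch below shows one Fruchterman-Reingold-style iteration on a toy graph; the graph, constants, and step size are illustrative, not the prototype's implementation.

```python
# Minimal sketch of one force-directed layout step (toy graph).
import numpy as np

pos = np.random.rand(4, 2)               # node positions in the unit square
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
k = 0.3                                  # ideal edge length (assumed)

def step(pos, edges, k, lr=0.05):
    disp = np.zeros_like(pos)
    for i in range(len(pos)):            # repulsion between all node pairs
        for j in range(len(pos)):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = max(np.linalg.norm(d), 1e-6)
            disp[i] += (k * k / dist) * (d / dist)
    for i, j in edges:                   # attraction along edges
        d = pos[i] - pos[j]
        dist = max(np.linalg.norm(d), 1e-6)
        f = (dist * dist / k) * (d / dist)
        disp[i] -= f
        disp[j] += f
    return pos + lr * disp

for _ in range(100):
    pos = step(pos, edges, k)
print(np.round(pos, 3))
```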