A Study of Material Sonification in Touchscreen Devices

Rodrigo Martín, Michael Weinmann, Matthias Hullin
Even in the digital age, designers largely rely on physical material samples to illustrate their products, as existing visual representations fail to sufficiently reproduce the look and feel of real-world materials. Here, we investigate the use of interactive material sonification as an additional sensory modality for communicating well-established material qualities such as softness, pleasantness, or value. We developed a custom application for touchscreen devices that receives tactile input and translates it into material rubbing sounds using granular synthesis. We used this system to perform a psychophysical study in which the user's ability to rate subjective material qualities is evaluated, with the actual material samples serving as reference stimuli.
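The abstract does not include implementation details of the synthesis; as a rough, hypothetical sketch of how touch input could drive a granular synthesizer over a recorded rubbing texture (the parameter values and the mapping from touch speed are assumptions, not the authors' design):

```python
import numpy as np

def granular_rub(sample, sr, touch_speed, duration=0.1,
                 grain_ms=30, base_grains_per_s=40):
    """Render a short burst of granular rubbing sound.

    sample      -- 1-D numpy array with a recorded material texture (longer than one grain)
    sr          -- sample rate of `sample` in Hz
    touch_speed -- finger speed from the touchscreen, normalized to 0..1 (hypothetical)
    """
    n_out = int(duration * sr)
    out = np.zeros(n_out)
    grain_len = int(grain_ms / 1000 * sr)
    # Faster rubbing -> denser, slightly louder grains (assumed mapping).
    n_grains = max(1, int(base_grains_per_s * duration * (0.5 + touch_speed)))
    env = np.hanning(grain_len)                      # smooth grain envelope
    for _ in range(n_grains):
        src = np.random.randint(0, len(sample) - grain_len)   # random source position
        dst = np.random.randint(0, n_out - grain_len)         # random output position
        out[dst:dst + grain_len] += sample[src:src + grain_len] * env * (0.3 + 0.7 * touch_speed)
    return out / max(1.0, np.abs(out).max())         # normalize to avoid clipping
```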

A Tabletop System Using an Omnidirectional Projector-Camera

Kosuke Maeda, Mitski Piekenbrock, Toshiki Sato, Hideki Koike
We propose an omnidirectional projection system that embeds a projector with an ultra-wide-angle lens in the table. AR markers are attached to the target surfaces so that the system can track them with an omnidirectional camera. We developed a prototype of the system and present several applications to demonstrate the effectiveness of our method.
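As a rough illustration of this kind of marker tracking (not the authors' implementation), one could use the OpenCV ArUco module; the sketch below assumes the pre-4.7 `cv2.aruco` API, which differs in newer OpenCV releases:

```python
import cv2

# ArUco dictionary used for the markers attached to the target surfaces
# (dictionary choice is an assumption).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def track_markers(frame):
    """Return marker id -> center pixel for one omnidirectional camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    centers = {}
    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            # quad has shape (1, 4, 2); its mean is the marker center,
            # to be mapped into projector coordinates by the system.
            centers[int(marker_id)] = quad[0].mean(axis=0)
    return centers
```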

Aligned Functional Paste: Enhancing Handwriting with Retrieval Results on a Large Surface

Masashi Uyama, Kenji Nakajima, Katsuhiko Akiyama
This paper proposes a new interaction method called aligned functional paste. The method allows users to select handwritten keywords, apply a mapping function (such as retrieval) to those keywords, and align the results of the mapping function on the surface in a few steps.

An Encounter Type VR System Aimed at Exhibiting Wall Material Samples for Show House

Shun Yamaguchi, Hirotaka Shionoiri, Takuto Nakamura, Hiroyuki Kajimoto
In this research, we propose a system that can change the tactile material of the wall surface in a VR show house. To present multiple types of wall materials, an encounter-type tactile presentation unit, with several wall material samples mounted on a uniaxial robot, presents a specific wall material according to the movement of the user's hand. With this encounter-type approach, users can experience the tactile sensations of multiple kinds of realistic wall materials. We examined the specifications necessary for such presentation, constructed the system, and conducted a user study to examine its effect, comparing it with visual-only and visual + force-only conditions.
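The abstract gives no control details; the sketch below is a purely hypothetical illustration of encounter-type logic, where a tracked hand position selects which mounted sample the uniaxial stage should bring into place (the region table, sample spacing, and the `stage.move_to` interface are all assumptions):

```python
# Hypothetical encounter-type controller: bring the sample assigned to the
# wall region the hand is approaching into position under the hand.
SAMPLE_REGIONS = {           # wall x-range in meters -> index of mounted sample
    (0.0, 0.5): 0,
    (0.5, 1.0): 1,
    (1.0, 1.5): 2,
}
SAMPLE_SPACING = 0.2         # assumed spacing of samples on the uniaxial stage (m)

def update(hand_x, stage):
    """hand_x: tracked hand position along the wall; stage: linear actuator driver."""
    for (lo, hi), idx in SAMPLE_REGIONS.items():
        if lo <= hand_x < hi:
            # Position the stage so that sample `idx` meets the hand at hand_x.
            stage.move_to(hand_x - idx * SAMPLE_SPACING)
            return idx
    return None              # hand is outside all defined regions
```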

Body-Prop Interaction: Augmented Open Discs and Egocentric Body-Based Interaction for Exploring Immersive Visualizations

Rajiv Khadka, James Money, Amy Banic
Immersive visualizations of three-dimensional scientific data pose unique interaction challenges. One challenge is to interact with visual volumetric representations of data using direct manipulation in a way that facilitates exploration and discovery, yet maintains data relationships. In this paper, we present Body-Prop Interaction, a novel tangible multimodal interface for immersive visualizations. Our interaction technique combines multiple input and output modalities with 3D-printed open discs that are tracked and augmented with virtual information. To demonstrate our technique, we implemented interactive tasks for a volume visualization of graphite billet data. While demonstrated for this type of data visualization, our interaction technique has the potential to be applied to other types of data in augmented and virtual reality.

Cross-Ratio Based Gaze Estimation Using Polarization Camera System

Masato Sasaki, Takashi Nagamatsu, Kentaro Takemura
Eye-based interaction is one solution for achieving intuitive interfaces on surfaces such as large displays, and thus various eye-tracking methods have been studied. Cross-ratio based gaze estimation, which determines the point-of-gaze on a screen, has been studied actively as a novel eye-tracking method because it does not require a hardware calibration defining the relationship between the camera and the monitor. We expect the cross-ratio method to be a breakthrough for eye-based interaction in settings such as tabletop devices and digital whiteboards. In eye-tracking, near-infrared light is often emitted, and in the cross-ratio based method at least four LEDs are placed at the display corners to detect the screen plane. However, prolonged exposure to near-infrared light can fatigue the user. Therefore, in this study, we attempted to extract the screen area correctly without emitting near-infrared light. A polarizing filter is included in the display, and thus the visibility of the screen can be controlled by the polarization direction of an external polarizing filter. We propose gaze estimation based on the cross-ratio method using a developed polarization camera system, which can capture two polarized images of different angles simultaneously. Further, we confirmed that the point-of-gaze could be estimated using the screen reflection detected by computing the difference between the two images, without near-infrared emission.
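For background, a commonly used simplification of the cross-ratio method maps the four detected corners of the screen reflection to the screen corners with a projective transform and applies that transform to the pupil center. The sketch below illustrates this general idea with OpenCV and is not the authors' code:

```python
import cv2
import numpy as np

def point_of_gaze(reflection_corners, pupil_center, screen_w, screen_h):
    """Simplified cross-ratio style gaze estimate.

    reflection_corners -- 4x2 array: corners of the screen reflection on the
                          cornea (top-left, top-right, bottom-right,
                          bottom-left), in image pixels
    pupil_center       -- (x, y) pupil center in the same image
    """
    src = np.float32(reflection_corners)
    dst = np.float32([[0, 0], [screen_w, 0],
                      [screen_w, screen_h], [0, screen_h]])
    H = cv2.getPerspectiveTransform(src, dst)     # projective mapping to the screen
    p = np.float32([[pupil_center]])              # shape (1, 1, 2) for OpenCV
    return cv2.perspectiveTransform(p, H)[0, 0]   # estimated gaze point on screen
```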

CrosSI: A Novel Workspace with Connected Horizontal and Vertical Interactive Surfaces

Risa Otsuki, Kaori Fujinami
A typical desk workspace includes both horizontal and vertical areas, wherein users can freely move items between the horizontal and vertical surfaces and edit objects; for example, a user can place a paper on the desk, write a memo, and affix it to the wall. However, in workspaces designed to handle digital information, the usable area is limited to specific devices such as displays and tablets, and the user interface is discontinuous in terms of both software and hardware. Therefore, we propose CrosSI, a multi-surface information display system that combines a horizontal surface with multiple cubic structures that provide vertical surfaces on top of it. Using this system, we investigate the ability to visualize and move information across a continuous system of horizontal and vertical surfaces.

Define, Refine, and Identify Events in a Bendable Interface

Nurit Kirshenbaum, Scott Robertson
As we aspire to bring bendable interfaces closer to mainstream use, we need to resolve practical issues such as the event model for bend input. This work takes steps toward understanding the bend event space for a simplified 1-DOF (degree of freedom) device. We suggest that a rich set of informative bend events can elevate application development for bendable devices. In this work, we describe the current state of event models for bendable devices, our suggested refined model, and the steps we are taking toward implementing an event system.
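The abstract does not spell out the refined event model; as a purely hypothetical illustration of what a richer 1-DOF bend event set could look like at the API level (event names, thresholds, and dwell logic are assumptions, not the authors' model):

```python
from enum import Enum, auto

class BendEvent(Enum):
    BEND_START = auto()      # angle leaves the neutral zone
    BEND_UPDATE = auto()     # angle changes while bent
    BEND_HOLD = auto()       # angle held steady beyond a dwell time
    BEND_END = auto()        # angle returns to neutral

class BendEventDetector:
    """Turns a stream of 1-DOF bend angles (degrees) into discrete events."""
    def __init__(self, neutral=5.0, hold_frames=30):
        self.neutral = neutral          # angles within +/- neutral count as "flat"
        self.hold_frames = hold_frames  # frames of steady angle before BEND_HOLD
        self.bent = False
        self.steady = 0
        self.last = 0.0

    def feed(self, angle):
        events = []
        if not self.bent and abs(angle) > self.neutral:
            self.bent = True
            events = [BendEvent.BEND_START]
        elif self.bent and abs(angle) <= self.neutral:
            self.bent, self.steady = False, 0
            events = [BendEvent.BEND_END]
        elif self.bent:
            if abs(angle - self.last) < 1.0:        # roughly steady
                self.steady += 1
                if self.steady == self.hold_frames:
                    events = [BendEvent.BEND_HOLD]
            else:
                self.steady = 0
                events = [BendEvent.BEND_UPDATE]
        self.last = angle
        return events
```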

Designing for Ambient UX: Case Study of a Dynamic Lighting System for a Work Space

Milica Pavlovic, Sara Colombo, Yihyun Lim, Federico Casalegno
This research aims to propose and validate a framework for supporting design for user experience in interactive spaces (Ambient UX). It suggests that dynamic changes in interactive spaces should be designed with a focus on their effects on three levels of the user experience: physical wellbeing, meanings, and social relations. Validation occurred through a field study performed in a work environment, where a dynamic lighting system was designed and installed. Preliminary results validate the relevance of the three levels, thus laying the ground for further research and discussion.

Here's looking at you: A Spherical FTVR Display for Realistic Eye-Contact

Georg Hagemann, Qian Zhou, Oky Dicky Ardiansyah Prima, Ian Stavness, Sidney Fels
In this work we describe the design, implementation, and initial evaluation of a spherical Fish Tank Virtual Reality (FTVR) display for realistic eye-contact. We identify display shape, size, depth cues, and model fidelity as important considerations and challenges for setting up realistic eye-contact, and we package them into a reproducible framework. Based on this design, we implemented and evaluated the system to assess the effectiveness of the eye-contact. In our initial evaluation, participants were able to identify eye-contact with an accuracy of 89.6%. Moreover, eye-contact with a virtual character triggered changes in participants' social behavior that are in line with real-world eye-contact scenarios. Taken together, these results provide practical guidelines for building displays for realistic eye-contact and can be applied to applications such as teleconferencing and VR treatment in psychology.

Practice System for Controlling Cutting Pressure for Paper-cutting

Takafumi Higashi, Hideaki Kanai
We describe a paper-cutting system in which a knife blade is attached to the tip of a stylus on a drawing display. The purpose of this research is to help novices control cutting pressure. Novices tend to cut paper with an unstable pressure that is stronger than necessary, so some instructors have novices practice controlling the pressure. We have developed a device that supports cutting practice using a stylus and drawing display. We measured the difference between the pressures of novices and experts. In addition, we developed a system that encourages appropriate pressure based on expert pressure; it shows the pressure difference between the user and the experts through color and sound. We conducted an experiment to compare the effectiveness of the system. As a result, novices who practiced with the system improved their pressure range and variation more than with the existing practice method.
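The abstract does not specify the feedback mapping; a minimal hypothetical sketch of the general idea, mapping the deviation from an expert's reference pressure to a color and a tone (the tolerance, colors, and frequencies are assumptions):

```python
def pressure_feedback(user_p, expert_p, tolerance=0.1):
    """Map the deviation from the expert's reference cutting pressure to
    a feedback color and tone frequency (all values hypothetical)."""
    diff = user_p - expert_p
    if abs(diff) <= tolerance * expert_p:
        return {"color": "green", "tone_hz": 440}   # within the target range
    if diff > 0:
        return {"color": "red", "tone_hz": 660}     # pressing too hard
    return {"color": "blue", "tone_hz": 330}        # pressing too lightly
```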

Recognizing Gestures on Projected Button Widgets with an RGB-D Camera Using a CNN

Patrick Chiu, Chelhwon Kim, Hideto Oda
Projector-camera systems can turn any surface, such as a tabletop or wall, into an interactive display. A basic problem is to recognize the gesture actions on the projected UI widgets. Previous approaches using finger template matching or occlusion patterns have issues with environmental lighting conditions, artifacts and noise in the video images of a projection, and inaccuracies of depth cameras. In this work, we propose a new recognizer that employs a deep neural net with an RGB-D camera; specifically, we use a CNN (Convolutional Neural Network) with optical flow computed from the color and depth channels. We evaluated our method on a new dataset of RGB-D videos of 12 users interacting with buttons projected on a tabletop surface.
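As a rough sketch of this general pipeline (not the authors' architecture), one could compute Farnebäck optical flow on the color and depth streams and classify the stacked flow fields around a projected button with a small CNN; the example below assumes OpenCV and PyTorch, 8-bit single-channel input frames, and a hypothetical 64x64 crop size:

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

def flow(prev, curr):
    """Dense optical flow between two single-channel frames; returns H x W x 2."""
    return cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

class GestureCNN(nn.Module):
    """Tiny classifier over stacked color-flow and depth-flow (4 input channels)."""
    def __init__(self, n_classes=2, size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * (size // 4) ** 2, n_classes)

    def forward(self, x):                 # x: (B, 4, size, size)
        return self.head(self.features(x).flatten(1))

def classify(model, gray_prev, gray_curr, depth_prev, depth_curr):
    """Classify a 64x64 patch cropped around a projected button.

    Inputs are consecutive single-channel 8-bit frames from the color and
    depth streams (depth already converted to 8-bit)."""
    stacked = np.dstack([flow(gray_prev, gray_curr),
                         flow(depth_prev, depth_curr)])      # H x W x 4
    x = torch.from_numpy(stacked).permute(2, 0, 1)[None].float()
    return model(x).argmax(1).item()
```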

Simplicity is Not Always Best: Image Preferences in Patients with Mild Cognitive Impairment

Heng-Jin-Xiu Hu, Li-Hao Chen, Yi-Chien Liu
Mild cognitive impairment (MCI) is one of the most prevalent chronic diseases related to the decline of cognitive ability in elderly people. This decline in cognitive function can cause older adults to experience difficulty in interpreting the meanings of function icons on the interfaces of household appliances. The aim of this study was to explore how individuals with MCI recognize the meanings of different types of product function icons. Icons for four common functions were redesigned along two directions to serve as test samples: from complex to simplified form, and from flat to stereoscopic form. In this study, 31 elderly people with MCI and their caregivers were invited to participate. The tests revealed the following results: 1) participants reported that stereoscopic images were more recognizable than flat images; 2) images with more details were easier to recognize than those that were abstract. These results can be applied to the interface design of household appliances and provide further data for designing product interfaces for elderly users and users with mild cognitive disorders.

SmartSurveys: Does context influence whether we'll share healthcare experience data with our smartphone?

Tina Chan, Josephine McMurray, Alaaddin Sidahmed, James R Wallace
Consumer feedback is collected in many industries, including healthcare, where patient feedback contributes to a higher quality of care. Current collection methods include complaints, local surveys, and patient stories, but these methods yield low participation at high cost. Providers need affordable and effective ways to collect feedback, and smartphone applications are a promising solution. However, previous research shows that patients are hesitant to provide smartphone-based feedback in a care setting due to perceived risks and the apparent futility of expecting change as a result. We will conduct a study to observe consumer behaviour when using smartphones to provide service feedback in healthcare spaces versus non-healthcare spaces. We will identify addressable barriers that impact the adoption of smartphone technology to gather patient experience data in healthcare spaces.

Traditionally Crafted Digital Interfaces

Deepshikha, Yammiyavar Pradeep
Digital textiles merged with traditional designs could create culture-specific products for users. This paper elaborates the design of three LED-embedded scarves, Aster, Iris, and Nymphaea, which merge traditional embroidery and hand-painting techniques. The essence of traditional craft making and digital crafts has been enriched through design, texture, pattern, and material exploration. Further, the scarves have been applied to a scenario of non-verbal communication in the user's social space, since limited research is available on non-verbal communication with respect to textiles in India. An experiment conducted with 60 young female respondents reveals high perceived usefulness and perceived ease of use of digitally crafted textiles for non-verbal expression and communication in the social space. The research aims to bring together traditional craftsmen and designers to create digital crafts for a new era of crafting traditions with context-specific applications.