A Web-Based Remote Assistance System with Gravity-Aware 3D Hand Gesture Visualization

Chelhwon Kim, Patrick Chiu, Yulius Tjahjadi
We present a remote assistance system that enables a remotely located expert to provide guidance using hand gestures to a customer who performs a physical task in a different location. The system is built on top of a web-based real-time media communication framework that allows the customer to use a commodity smartphone to send a live video feed to the expert, who can see the customer's workspace and show his/her hand gestures over the video in real time. The expert's hand gestures are captured with a hand tracking device and visualized with a rigged 3D hand model on the live video feed. The system can be accessed via a web browser and does not require any app software to be installed on the customer's device. Our system supports various types of devices, including smartphones, tablets, desktop PCs, and smart glasses. To improve the collaboration experience, the system provides a novel gravity-aware hand visualization technique.

Autotator: Semi-Automatic Approach for Accelerating the Chart Image Annotation Process

Junhoe Kim, Jaemin Jo, Jinwook Seo
Annotating chart images for training machine learning models is tedious and repetitive, especially because chart images often contain a large number of visual elements to annotate. We present Autotator, a semi-automatic chart annotation system that automatically provides suggestions for three annotation tasks: labeling the chart type, annotating bounding boxes, and associating quantities. We also present a web-based interface that allows users to interact with the suggestions provided by the system. Finally, we demonstrate a use case of our system in which an annotator builds a training corpus of bar charts.

Compatible 2D Table Navigation System for Visually Impaired Users

Kiroong Choe, Jinwook Seo
Comprehending complex data is hard for visually impaired people due to the lack of viable supporting tools. We designed a web-based interactive navigation system that enables visually impaired people to effectively explore a simple data table on common touch devices. Due to ecological factors, many blind people are still not accustomed to complex structured datasets. We therefore made the user interactions consistent with those of major mobile screen readers to minimize the users' learning burden. Users can easily get an overview and query detailed information while keeping their cognitive workload low.
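A minimal sketch of how such screen-reader-consistent flick navigation over a table could work; the gesture names and cursor mapping below are illustrative assumptions, not the system's actual implementation:

```python
def navigate(table, pos, gesture):
    """Move a (row, col) cursor through a 2D table using
    screen-reader-style flick gestures, clamping at the edges.
    Gesture names and behavior are illustrative."""
    r, c = pos
    rows, cols = len(table), len(table[0])
    moves = {"flick_right": (0, 1), "flick_left": (0, -1),
             "flick_down": (1, 0), "flick_up": (-1, 0)}
    dr, dc = moves.get(gesture, (0, 0))
    r = min(max(r + dr, 0), rows - 1)
    c = min(max(c + dc, 0), cols - 1)
    # Return the new position and the cell content to be spoken.
    return (r, c), table[r][c]
```

Clamping at the table edges, rather than wrapping, matches the "stop at boundary" behavior common to mobile screen readers.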

Data-driven Texture Modeling and Rendering on Electrovibration Display

Seongwon Cho, Reza Haghighi Osgouei, Jin Ryong Kim, Seungmoon Choi
We propose a data-driven method for realistic texture rendering on an electrovibration display. To compensate for the display's nonlinear dynamics, we use nonlinear autoregressive with external input (NARX) neural networks as an inverse dynamics model of the display. The neural networks are trained with lateral forces resulting from actuating the display with a pseudo-random binary signal (PRBS). The lateral forces collected from the textured surface at various scanning velocities and normal forces are fed into the neural network to generate the actuation signal for the display. For an arbitrary scanning velocity and normal force, we apply a two-step interpolation scheme between the closest neighbors in the velocity-force grid.
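The two-step interpolation can be sketched as follows: first interpolate the recorded signals along the velocity axis at the two bracketing force levels, then interpolate those two results along the force axis. The function and grid names below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def two_step_interpolate(v, f, v_grid, f_grid, signals):
    """Interpolate an actuation signal for an arbitrary scanning
    velocity v and normal force f from signals recorded on a
    (velocity x force) grid; signals[i][j] is the signal recorded
    at (v_grid[i], f_grid[j])."""
    # Locate the bracketing grid cells (clamped to the grid edges).
    i = int(np.clip(np.searchsorted(v_grid, v) - 1, 0, len(v_grid) - 2))
    j = int(np.clip(np.searchsorted(f_grid, f) - 1, 0, len(f_grid) - 2))
    # Step 1: interpolate along the velocity axis at both force levels.
    tv = (v - v_grid[i]) / (v_grid[i + 1] - v_grid[i])
    s_lo = (1 - tv) * signals[i][j] + tv * signals[i + 1][j]
    s_hi = (1 - tv) * signals[i][j + 1] + tv * signals[i + 1][j + 1]
    # Step 2: interpolate the two results along the force axis.
    tf = (f - f_grid[j]) / (f_grid[j + 1] - f_grid[j])
    return (1 - tf) * s_lo + tf * s_hi
```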

EchoTube: Robust Touch Sensing along Flexible Tubes using Waveguided Ultrasound

Carlos E. Tejada, Jess McIntosh, Klaes Alexander Bergen, Sebastian Boring, Daniel Ashbrook, Asier Marzo
While press sensing can enable a wide variety of interesting applications, most press sensing techniques operate only at close distances and rely on fragile electronics. We present EchoTube, a robust, modular, simple, and inexpensive system for sensing low-resolution press events at a distance. EchoTube works by emitting ultrasonic pulses inside a flexible tube, which acts as a waveguide, and detecting the reflections caused by deformations in the tube. EchoTube is deployable in a wide variety of situations: the flexibility of the tubes allows them to be wrapped around and affixed to irregular objects. Because the electronic elements are located at one end of the tube, EchoTube is robust, able to withstand crushing, impacts, water, and other adverse conditions. In this paper, we detail the design, implementation, and theory behind EchoTube; characterize its performance under different configurations; and present a variety of exemplar applications that illustrate its potential.
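The underlying pulse-echo principle can be sketched as follows; the helper names, baseline-comparison scheme, and threshold are our own illustrative assumptions, not the paper's implementation:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_distance(round_trip_s):
    """Distance to a deformation from the round-trip time of an
    ultrasonic pulse: the pulse travels there and back, so halve it."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def detect_press(echo_profile, baseline, threshold=0.2):
    """Compare an echo amplitude profile against a no-touch baseline;
    the sample where they differ most marks a new reflection, i.e. a
    press at that time of flight. Returns the sample index or None."""
    diff = [abs(a - b) for a, b in zip(echo_profile, baseline)]
    peak = max(range(len(diff)), key=diff.__getitem__)
    return peak if diff[peak] > threshold else None
```

Converting the detected sample index to a time of flight and then through `echo_distance` gives the press location along the tube.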

EPES: Seizure Propagation Analysis in an Immersive Environment

Zahra Aminolroaya, Samuel Wiebe, Colin Bruce Josephson, Hannah Sloan, Christian Roatis, Patrick Abou Gharib, Frank Maurer, Kun Feng
Planning for epilepsy surgery requires precise localization of the seizure onset zone. This permits physicians to make accurate estimates about the postoperative chances of seizure freedom and the attendant risk. Patients with complex epilepsies may require intracranial electroencephalography (iEEG) to best lateralize and localize the seizure onset zone. Magnetic resonance imaging (MRI) data of the implanted intracranial electrodes in the brain is used to confirm correct placement of the electrodes and to accurately map seizure onset and spread through the brain. However, the relative lack of tools for co-registering iEEG with MRI data renders this a time-consuming investigation for epilepsy specialists, who have to manually map this information in 2-dimensional space. Our immersive analytics tool, EPES (Epilepsy Pre-Evaluation Space), provides an application to analyze MRI data fused with the corresponding intracranial electrodes' recordings of brain activity. EPES highlights where a seizure is occurring and how it propagates through the virtual brain generated from the patient's MRI data.

FIESTA: A Free Roaming Collaborative Immersive Analytics System

Benjamin Lee, Maxime Cordeil, Arnaud Prouzeau, Tim Dwyer
We present FIESTA, a prototype system for collaborative immersive analytics (CIA). In contrast to many existing CIA prototypes, FIESTA allows users to work together wherever and however they wish---untethered from mandatory physical display devices. Users can freely move around in a shared room-sized environment, author and generate immersive data visualisations, position them in the space around them, and share and communicate their insights to one another. Certain visualisation tasks are also supported to facilitate this process, such as details on demand and brushing and linking.

Hopping-Pong: Changing Trajectory of Moving Object Using Computational Ultrasound Force

Tao Morisaki, Ryoma Mori, Ryosuke Mori, Yasutoshi Makino, Yuta Itoh, Yuji Yamakawa, Hiroyuki Shinoda
Physically moving real objects via a computational force connects computers and the real world and has been applied to tangible interfaces and mid-air displays. However, most researchers have controlled only stationary objects with computational force; controlling a moving object can expand the real space that is controllable by the computer. In this paper, we explore the potential of computational force from the viewpoint of changing the trajectory of a moving object. Changing the trajectory is the primitive operation for controlling a moving object, and it is a technological challenge requiring high-speed measurement and non-contact force with high spatial resolution. As a proof of concept, we introduce Hopping-Pong, which changes the trajectory of a flying Ping-Pong ball (PPB) using ultrasound force. Our results show that Hopping-Pong shifts the trajectory of a PPB by 344 mm. We conclude that computational force is capable of controlling a moving object in the real world. This research contributes to expanding the computationally controlled space, with applications in augmented sports, HCI, and factory automation.
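Controlling a moving object requires predicting where it will be when the non-contact force arrives. A minimal ballistic-prediction sketch from two high-speed position samples, ignoring drag and spin; this is an illustration of the measurement-and-prediction problem, not the authors' actual pipeline:

```python
G = 9.81  # gravitational acceleration, m/s^2

def predict_position(p0, p1, dt, lead):
    """Predict a ball's (x, y) position `lead` seconds ahead from two
    camera samples p0 and p1 taken `dt` seconds apart, assuming
    ballistic flight (drag and spin ignored)."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    x = p1[0] + vx * lead
    y = p1[1] + vy * lead - 0.5 * G * lead ** 2
    return x, y
```

The shorter the sensing-to-actuation latency `lead`, the smaller the prediction error, which is why high-speed measurement matters here.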

In-Place Ink Text Editor: A Notepad-like Simple Text Editor with Direct Writing

Jiseong Gu, Geehyuk Lee
Indirect writing interfaces are currently widely used for handwriting text entry with a pen; however, they may cause additional eye and hand movement. Gu and Lee suggested a revival of the concept of direct writing and showed the potential of the direct writing interface for editing text. In this demo, we present an In-Place Ink text editor prototype, a notepad-like simple text editor with a direct writing feature.

JoyHolder: Tangible Back-of-Device Mobile Interactions

Amit Yadav, Alexander K. Eady, Sara Nabil, Audrey Girouard
One-handed mobile use, which is predominantly thumb-driven, presents interaction challenges such as screen occlusion, reachability of far and inside corners, and an increased chance of dropping the device. We adopt a Research through Design approach to single-handed mobile interaction by exploring a variety of back-of-device tangibles (including a touchpad, scroller, magnetic button, push button, slider, stretchable spiral, and a ring joystick). The latter 'joy'-stick was inspired by the recently popular but passive ring phone 'holders', which we combined into 'JoyHolder', a joystick-based interactive phone holder for tangible back-of-device input interactions. We demonstrate our low-fidelity and medium-fidelity prototypes (made using crafting and digital fabrication methods) and our interactive JoyHolder to encourage discussion on tangible back-of-device interactions. Preliminary insights from a pilot study we ran reflect the hesitation to adopt some of these tangibles, the potential of others, and the importance of physical feedback while using back-of-device input modalities.

MirrorPad: A Touchpad for Direct Pen Interaction on a Laptop

Sangyoon Lee, Geehyuk Lee
There is a need for pen interaction on laptops, and the market offers many pen-enabled laptop products. Many of them can be transformed into tablets when pen interaction is needed. In real situations, however, users use multiple applications simultaneously, and such a convertible feature may not be useful. We introduce MirrorPad, a novel interface device embedded in a laptop for direct pen interaction. It is both a normal touchpad and a viewport for pen interaction with a mirrored region of the screen. MirrorPad allows users to perform pen annotation work and keyboard work interspersed in the same workflow while the device remains in the laptop configuration.
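At its core, the mirrored-viewport idea amounts to mapping pen positions on the touchpad onto the mirrored screen region. A minimal sketch, with the function name and linear mapping assumed for illustration:

```python
def pad_to_screen(pad_x, pad_y, pad_w, pad_h, region):
    """Map a pen position on the touchpad to the mirrored region of
    the screen so that ink appears where the pen actually touches.
    `region` is (x, y, width, height) in screen pixels."""
    rx, ry, rw, rh = region
    return rx + pad_x / pad_w * rw, ry + pad_y / pad_h * rh
```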

ShearSheet: Low-Cost Shear Force Input with Elastic Feedback for Augmenting Touch Interaction

Mengting Huang, Kazuyuki Fujita, Kazuki Takashima, Taichi Tsuchida, Hiroyuki Manabe, Yoshifumi Kitamura
We propose ShearSheet, a low-cost, simple, and practical DIY method that enables tangential (shear) force input on a touchscreen using a rubber-mounted slim transparent sheet. The sheet has tiny pieces of conductive material attached at specific positions on its underside so that displacement of the sheet is recognized as touch input. This allows the system to distinguish between normal touch input and shear force input depending on whether the sheet moves, without requiring any external sensors or power. We first introduce ShearSheet's simple implementation for a smartphone and then discuss several promising interaction techniques that augment shear interaction with one- and two-finger operations. We demonstrate the effectiveness of ShearSheet's smooth transition between position- and rate-based control through a controlled user study using simple scrolling tasks. The results confirm that ShearSheet offers improved performance, such as fewer operations and higher subjective preference, suggesting its substantial benefits and potential applications.
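The touch-versus-shear distinction and the position-to-rate transition can be sketched as below; the thresholds and gain are illustrative assumptions, not the paper's calibrated values:

```python
DEADZONE = 3.0    # px: below this the sheet is treated as not moving
POS_RANGE = 15.0  # px: elastic range used for position-based control

def scroll_output(displacement_px, gain=0.8):
    """Turn the sheet's measured displacement into a scrolling command.
    No movement -> normal touch; small movement -> position control;
    large movement -> rate control (continuous scrolling)."""
    d = abs(displacement_px)
    sign = 1.0 if displacement_px >= 0 else -1.0
    if d < DEADZONE:
        return "touch", 0.0  # sheet did not move: plain touch input
    if d <= POS_RANGE:
        # Position-based: scroll offset proportional to displacement.
        return "position", sign * (d - DEADZONE) * gain
    # Rate-based: scroll speed set by how far past the elastic range.
    return "rate", sign * (d - POS_RANGE) * gain
```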

Towers of Saliency: A Reinforcement Learning Visualization Using Immersive Environments

Nathan Douglas, Frank Maurer, Dianna Yim, Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor
Deep reinforcement learning (DRL) has had many successes on complex tasks but is typically considered a black box. Opening this black box would enable better understanding of and trust in the model, which can help researchers and end users better interact with the learner. In this paper, we propose a new visualization to better analyze DRL agents and present a case study using the Pommerman benchmark domain. This visualization combines two previously proven methods for improving human understanding of systems: saliency mapping and immersive visualization.
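Saliency mapping asks how much each observation cell influences the agent's output. A minimal sketch using central finite differences; the score function and observation here are illustrative stand-ins for an agent's value head, not the paper's method:

```python
import numpy as np

def saliency_map(score_fn, state, eps=1e-3):
    """Approximate how sensitive score_fn (e.g. an agent's value
    estimate) is to each cell of `state` via central finite
    differences, normalized to [0, 1] for rendering as a heatmap."""
    sal = np.zeros_like(state, dtype=float)
    it = np.nditer(state, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        up, down = state.copy(), state.copy()
        up[idx] += eps
        down[idx] -= eps
        sal[idx] = abs(score_fn(up) - score_fn(down)) / (2.0 * eps)
    m = sal.max()
    return sal / m if m > 0 else sal
```

In the immersive setting, the normalized values could drive the height of per-cell "towers" over the game board.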

Visual Link Routing in Immersive Visualisations

Arnaud Prouzeau, Antoine Lhuillier, Barrett Ens, Daniel Weiskopf, Tim Dwyer
In immersive display environments, such as virtual or augmented reality, we can make explicit the connections between data points in visualisations and their context in the world, or in other visualisations. This paper considers the requirements and design space for drawing such links in order to minimise occlusion and clutter. A novel possibility in immersive environments is to optimise the link layout with respect to a particular point of view. In collaborative scenarios there is the need to do this for multiple points of view. We present an algorithm to achieve such link layouts and demonstrate its applicability in a variety of practical use cases.