Awareness, Usage and Discovery of Swipe-revealed Hidden Widgets in iOS

Nicole Ke Chen Pong, Sylvain Malacria
Revealing a hidden widget with a dedicated sliding gesture is a common interaction design in today's handheld devices. Such "Swhidgets" (for swipe-revealed hidden widgets) provide fast (and sometimes unique) access to some commands. Interestingly, swhidgets do not follow conventional design guidelines in that they have no explicit signifiers, and users have to discover their existence before being able to use them. In this paper, we discuss the benefits of this signifierless design and investigate how iOS users deal with this type of widget. We report on the results of a laboratory study and an online survey investigating iOS users' experience with swhidgets. Our results suggest that swhidgets are moderately but unevenly known by participants, yet the awareness and discovery issues of this design are worthy of further discussion.

BEXHI: A Mechanical Structure for Prototyping Bendable and EXpandable Handheld Interfaces

Michael Ortega, Alix Goguey
In this paper, we present BEXHI, a new mechanical structure for prototyping expandable and bendable handheld devices. Many research projects have pushed bendable surfaces from prototypes to commercially viable devices. In the meantime, expandable devices have become a topic of interest, suggesting that such devices are also on the horizon. With BEXHI, we provide a structure to explore the combined capabilities of these devices. The structure consists of multiple interweaved units that allow non-porous expandable surfaces to bend. Through an instantiation, we illustrate and discuss how the BEXHI structure allows for the exploration of the combined bend and expansion interaction spaces.

CARDS: a Mixed-Reality System for Collaborative Learning at School

Philippe Giraudeau, Alexis Olry de Rancourt, Joan Sol Roo, Stéphanie Fleck, David Bertolo, Robin Vivian, Martin Hachet
Traditional computer systems based on the WIMP paradigm (Window, Icon, Menu, Pointer) have shown potential benefits at school (e.g., for web browsing). On the other hand, they are not well suited when hands-on and collaborative activities are targeted. We present CARDS, a Mixed-Reality system that combines physical and digital objects in a seamless workspace to foster active and collaborative learning. We describe the design process, based on a participatory approach with researchers, teachers, and pupils. We then present and discuss the results of a user study that suggests CARDS has good educational potential for the targeted activities.

Comparing Lassoing Criteria and Modeling Straight-line and One-loop Lassoing Motions Considering Criteria

Hiroki Usuba, Shota Yamanaka, Homei Miyashita
In graphical user interfaces, users can select multiple objects simultaneously via lasso selection. This can be implemented, for example, by having users select objects by looping around their centers or their entire areas. Because the lassoing criteria differ, we presume that their performance also differs. In this study, we compare three lassoing criteria and model lassoing motions that take these criteria into account. We conducted two experiments in which participants steered through straight-line and one-loop paths under the three criteria. The participants handled the lassoing criteria correctly and performed lassoing at appropriate speeds for each path shape. Although the drawn trajectories varied depending on the lassoing criterion, the criteria did not differ significantly in performance or subjective evaluations. Additionally, from our results, we build a baseline model to predict the movement time while considering the lassoing criteria. We also discuss further experiments to predict movement time under more complex conditions.
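To make the criteria concrete, the following minimal sketch implements two of them: selection by object centers versus by entire object areas, the latter approximated by requiring all bounding-box corners to fall inside the lasso. The object representation and function names are illustrative assumptions, not the authors' implementation.

```python
from matplotlib.path import Path

# Illustrative sketch of two lassoing criteria: "center" selects an object
# when its center lies inside the lasso polygon; "area" requires the whole
# bounding box (approximated by its corners) to lie inside.
def lasso_select(lasso_points, objects, criterion="center"):
    """objects: dicts with 'center' (x, y) and 'corners' [(x, y), ...]."""
    lasso = Path(lasso_points)
    selected = []
    for obj in objects:
        if criterion == "center":
            hit = lasso.contains_point(obj["center"])
        else:  # "area"
            hit = all(lasso.contains_point(p) for p in obj["corners"])
        if hit:
            selected.append(obj)
    return selected
```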

Creating Accessible Interactive Audio Tactile Drawings using Spatial Augmented Reality

Lauren Thevin, Christophe Jouffrais, Nicolas Rodier, Nicolas Palard, Martin Hachet, Anke M. Brock
Interactive tactile graphics have shown true potential for people with visual impairments, for instance for acquiring spatial knowledge. To date, however, they have not been widely adopted in real-life settings (e.g., special education schools). One obstacle is the creation of these media, which requires specific skills, such as the use of vector-graphics software for drawing and inserting interactive zones, which is challenging for stakeholders (social workers, teachers, families of people with visual impairments, etc.). We explored how a Spatial Augmented Reality approach can enhance the creation of interactive tactile graphics by sighted users. We developed the system using a participatory design method. A user study showed that the augmented reality device allowed stakeholders (N=28) to create interactive tactile graphics more efficiently than with regular vector-drawing software (baseline), independently of their technical background.

DesignAR: Immersive 3D-Modeling Combining Augmented Reality with Interactive Displays

Patrick Reipschlager, Raimund Dachselt
We present DesignAR, an augmented design workstation for creating 3D models. Our approach seamlessly integrates an interactive surface displaying 2D views with head-mounted, stereoscopic Augmented Reality (AR). This creates a combined output space that expands the screen real estate and enables placing 3D objects beyond display borders. For the effective combination of 2D and 3D views, we define different levels of proximity and alignment. Regarding input, multi-touch and pen interaction mitigate the issues of precision and ergonomics commonly found in mid-air VR/AR interaction. For creating and refining 3D models, we propose a set of pen and touch techniques with immediate AR feedback, including sketching of rotational solids or tracing physical objects on the surface. To further support a designer's modeling process, we additionally propose orthographic model views and UI offloading in AR, as well as freely placeable model instances with real-world reference. Based on our DesignAR prototype, we report on challenges and insights regarding this novel type of display augmentation. The combination of high-resolution, high-precision interactive surfaces with carefully aligned AR views opens up exciting possibilities for future work and design environments, a vision we call Augmented Displays.

Digital Assistance for Quality Assurance: Augmenting Workspaces Using Deep Learning for Tracking Near-Symmetrical Objects

Joao Belo, Andreas Fender, Tiare Feuchtner, Kaj Grønbæk
We present a digital assistance approach for applied metrology on near-symmetrical objects. In manufacturing, systematically measuring products for quality assurance is often a manual task, where the primary challenge for the workers lies in accurately identifying positions to measure and correctly documenting these measurements. This paper focuses on a use case that involves metrology of small near-symmetrical objects, such as LEGO bricks. We aim to support this task through situated visual measurement guides. Aligning these guides poses a major challenge, since fine-grained details, such as embossed logos, serve as the only feature by which to retrieve an object's unique orientation. We present a two-step approach, which consists of (1) locating and orienting the object based on its shape, and then (2) disambiguating the object's rotational symmetry based on small visual features. We apply and compare different deep learning approaches and discuss our guidance system in the context of our use case.
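As an illustration of step (2), the sketch below disambiguates the rotational symmetry classically, by correlating a crop of the object's top face against a reference template at each symmetry-equivalent angle. The paper itself uses deep learning for this step, so this is only a conceptual stand-in with assumed inputs (same-shape grayscale arrays).

```python
import numpy as np
from scipy.ndimage import rotate

# Conceptual stand-in for step (2): score each symmetry-equivalent
# orientation of a near-symmetrical object by normalized cross-correlation
# between the observed crop and a rotated reference template.
def disambiguate_rotation(crop, template, coarse_angle, n_fold=2):
    candidates = [coarse_angle + k * 360.0 / n_fold for k in range(n_fold)]
    scores = []
    for angle in candidates:
        rotated = rotate(template, angle, reshape=False)
        a = (crop - crop.mean()) / (crop.std() + 1e-9)
        b = (rotated - rotated.mean()) / (rotated.std() + 1e-9)
        scores.append(float((a * b).mean()))  # similarity at this angle
    return candidates[int(np.argmax(scores))]
```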

EchoTube: Robust Touch Sensing along Flexible Tubes using Waveguided Ultrasound

Carlos E. Tejada, Jess McIntosh, Klaes Alexander Bergen, Sebastian Boring, Daniel Ashbrook, Asier Marzo
While pressing can enable a wide variety of interesting applications, most press-sensing techniques operate only at close distances and rely on fragile electronics. We present EchoTube, a robust, modular, simple, and inexpensive system for sensing low-resolution press events at a distance. EchoTube works by emitting ultrasonic pulses inside a flexible tube, which acts as a waveguide, and detecting reflections caused by deformations of the tube. EchoTube is deployable in a wide variety of situations: the flexibility of the tubes allows them to be wrapped around and affixed to irregular objects. Because the electronic elements are located at one end of the tube, EchoTube is robust, able to withstand crushing, impacts, water, and other adverse conditions. In this paper, we detail the design, implementation, and theory behind EchoTube; characterize its performance under different configurations; and present a variety of exemplar applications that illustrate its potential.
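The underlying pulse-echo arithmetic is simple; the sketch below shows it with illustrative numbers (the speed of sound inside the tube and the example delay are our assumptions, not measurements from the paper).

```python
# Pulse-echo ranging: a press deforms the tube and reflects part of the
# ultrasonic pulse; halving the round-trip delay gives the press position.
SPEED_OF_SOUND = 343.0  # m/s, assumed for air inside the tube

def press_position(echo_delay_s):
    """Distance (m) from the transducer to the press along the tube."""
    return SPEED_OF_SOUND * echo_delay_s / 2

# An echo arriving 2.9 ms after the pulse implies a press ~0.5 m away.
print(press_position(0.0029))  # ~0.497
```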

Eliciting Pen-Holding Postures for General Input with Suitability for EMG Armband Detection

Fabrice Matulic, Brian Vogel, Naoki Kimura, Daniel Vogel
We conduct a two-part study to better understand pen grip postures for general input like mode switching and command invocation. The first part of the study asks participants what variations of their normal pen grip posture they might use, without any specific consideration for sensing capabilities. The second part evaluates three of their suggested postures with an additional set of six postures designed for the sensing capabilities of a consumer EMG armband. Results show that grips considered normal and mature, such as the dynamic tripod and the dynamic quadrupod, are the best candidates for pen-grip-based interaction, followed by finger-on-pen postures and grips using pen tilt. A convolutional neural network trained on EMG data gathered during the study yields above 70% within-participant recognition accuracy for common sets of five postures and above 80% for three-posture subsets. Based on the results, we propose design guidelines for pen interaction using variations of grip postures.
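A minimal sketch of such a classifier is shown below, assuming 8 EMG channels and fixed-length sample windows; the abstract does not specify the network architecture, so this PyTorch model is purely illustrative.

```python
import torch
import torch.nn as nn

# Illustrative 1D-CNN posture classifier for windows of multi-channel EMG.
class EmgPostureNet(nn.Module):
    def __init__(self, n_channels=8, n_postures=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_postures)

    def forward(self, x):  # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

net = EmgPostureNet()
logits = net(torch.randn(4, 8, 200))  # four 200-sample EMG windows
```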

EyeDescribe: Combining Eye Gaze and Speech to Automatically Create Accessible Touch Screen Artwork

Kyle Reinholt, Darren Guinness, Shaun K. Kane
Many images on the Web, including photographs and artistic images, feature spatial relationships between objects that are inaccessible to someone who is blind or visually impaired, even when a text description is provided. While some tools exist to manually create accessible image descriptions, this work is time-consuming and requires specialized tools. We introduce an approach that automatically creates spatially registered image labels based on how a sighted person naturally interacts with the image. Our system collects behavioral data from sighted viewers of an image, specifically eye gaze data and spoken descriptions, and uses them to generate a spatially indexed accessible image that can then be explored using an audio-based touch screen application. We describe our approach to assigning text labels to locations in an image based on eye gaze. We then report on two formative studies with blind users testing EyeDescribe. Our approach resulted in correct labels for all objects in our image set. Participants were able to better recall the location of objects when given both object labels and spatial locations. This approach provides a new method for creating accessible images with minimal required effort.
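The core alignment idea, attaching each spoken label to the location the describer was fixating when uttering it, can be sketched as follows; the data formats and the fixation tolerance window are our assumptions, not the authors' implementation.

```python
# Illustrative alignment of spoken labels with gaze fixations.
def spatially_register(labels, fixations, max_gap_s=0.5):
    """labels: [(t, text)]; fixations: [(t_start, t_end, x, y)].
    Returns [(text, x, y)] for labels uttered during (or near) a fixation."""
    registered = []
    for t, text in labels:
        for t0, t1, x, y in fixations:
            if t0 - max_gap_s <= t <= t1 + max_gap_s:
                registered.append((text, x, y))
                break  # take the first matching fixation
    return registered
```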

Feel the Water: Expressing Physicality of District Heating Processes in Functional Overview Displays

Veronika Domova, Alvaro Aranda Munoz, Elsa Vaara, Petra Edoff
This paper describes an explorative design study conducted within a collaborative research project in the district heating domain. As part of the project, we arranged extensive field studies at two power plants to understand the workflows, problems, and needs of industrial operators. We relied on the gained knowledge to design and develop novel visual interfaces that communicate the overall status of the district heating system at a glance. We aimed to explore potential directions and alternatives beyond conventional industrial interfaces. One particular aspect of our research was how the physicality of the underlying industrial processes can be expressed by purely visual means. The paper introduces three high-fidelity prototypes demonstrating the novel visualizations we developed, and explains the design choices made, namely the relation of the selected visual encodings to the requirements of the industrial operators' tasks. A preliminary evaluation indicates the industrial operators' interest in the designed solutions. Future work will incorporate an extensive qualitative evaluation on site.

Finding the Sweet Spot: Analyzing Unrestricted Touchscreen Interaction In-the-Wild

Sven Mayer, Huy Viet Le, Markus Funk, Niels Henze
With smartphones being a prime example, touchscreens have become one of the most widely used interfaces for interacting with computing systems. Compared to other touchscreen devices, smartphones pose additional challenges, as the hand that interacts with the device is commonly also used to hold it. Consequently, determining how fingers of the hand holding the device can interact with the screen is a non-trivial challenge. A body of recent work has investigated the comfortable area in controlled lab studies. This poses limitations, as it is based on the assumption that the grips used in the studies are representative of normal smartphone use. In this paper, we extend previous work by providing insights from in-the-wild studies using two different apps that were deployed in the Android App Store. Comparing our results with previous work, we confirm that our data fits previously proposed models. Further analyzing the data, we highlight the sweet spot: the position that is touched when input can be performed anywhere on the screen.

Hopping-Pong: Changing Trajectory of Moving Object Using Computational Ultrasound Force

Tao Morisaki, Ryoma Mori, Ryosuke Mori, Yasutoshi Makino, Yuta Itoh, Yuji Yamakawa, Hiroyuki Shinoda
Physically moving real objects via a computational force connects computers and the real world and has been applied to tangible interfaces and mid-air displays. However, most prior work has controlled only stationary real objects with computational force. Controlling a moving object, in contrast, can expand the real space that is controllable by the computer. In this paper, we explore the potential of computational force from the viewpoint of changing the trajectory of a moving object. Changing the trajectory is the primitive model for controlling a moving object, and it is a technological challenge requiring high-speed measurement and non-contact force with high spatial resolution. As a proof of concept, we introduce Hopping-Pong, which changes the trajectory of a flying ping-pong ball (PPB) using ultrasound force. The results show that Hopping-Pong changes the trajectory of a PPB by 344 mm. We conclude that a computational force is capable of controlling a moving object in the real world. This research contributes to expanding the computationally controlled space, with applications in augmented sports, HCI, and factory automation.

Inking Your Insights: Investigating Digital Externalization Behaviors During Data Analysis

Yea-Seul Kim, Nathalie Henry Riche, Bongshin Lee, Matthew Brehmer, Michel Pahud, Ken Hinckley, Jessica Hullman
Externalizing one's thoughts can be helpful during data analysis, such as when marking interesting data, noting hypotheses, or drawing diagrams. In this paper, we present two exploratory studies conducted to investigate the types and use of externalizations during the analysis process. We first studied how people take notes during different stages of data analysis using VoyagerNote, a visualization recommendation system augmented to support text annotations and coupled with participants' favorite external note-taking tools (e.g., word processor, pen & paper). Externalizations manifested mostly as notes written on paper or in a word processor, with annotations atop views used almost exclusively in the initial phase of analysis. In the second study, we investigated two specific opportunities: (1) integrating digital pen input to facilitate the use of free-form externalizations, and (2) providing more explicit linking between visualizations and externalizations. We conducted the study with VoyagerInk, a visualization system that enables free-form externalization with a digital pen as well as touch interactions to link externalizations to data. Participants created more graphical externalizations with VoyagerInk and revisited over half of their externalizations via the linking mechanism. Reflecting on the findings from these two studies, we discuss implications for the design of data analysis tools.

Is it Real? Measuring the Effect of Resolution, Latency, Frame rate and Jitter on the Presence of Virtual Entities

Thibault Louis, Jocelyne Troccaz, Amelie Rochet-Capellan, Francois Berard
The feeling of presence of virtual entities is an important objective in virtual reality, teleconferencing, augmented reality, exposure therapy, and video games. Presence creates emotional involvement and supports intuitive and efficient interactions. As a feeling, presence is mostly measured via subjective questionnaires, but their validity is disputed. We introduce a new method to measure the contribution of several technical parameters toward presence. Its robustness stems from asking participants to rank contrasts rather than report absolute values, and from the statistical analysis of repeated answers. We implemented this method in a user study where virtual entities were created with a handheld perspective-corrected display. We evaluated the impact of four important parameters of digital visual stimuli (resolution, latency, frame rate, and jitter) on the presence of two virtual entities. Results suggest that jitter and frame rate are critical for presence, whereas latency is not, and that the effect of resolution depends on the explored entity.

Learning to Drag: The Effects of Social Interactions in Touch Gestures Learnability for Older Adults

Martin Mihajlov, Mark Springett, Effie Lai-Chong Law
Considering the potential physical limitations of older adults, the naturalness of touch-based gestures as an interaction method is questionable. If touch-based gestures are indeed natural, they should be highly learnable and amenable to enhancement through social interactions. To investigate whether social interactions can enhance the learnability of touch gestures for older adults with low digital literacy, we conducted a study with 42 technology-naïve participants aged 64 to 82. They were paired and encouraged to play two games on an interactive tabletop, with the expectation that they would use the drag gesture to complete the games socially. We then compared these results with a previous study of technology-naïve older adults playing the same games individually. The results of the comparisons show that dyadic interactions had some benefits for the participants in helping them become comfortable with the drag gesture through negotiation and imitation. Further qualitative analysis suggested that playing in pairs generally helped learners comfortably explore the digital environment using the newly acquired skill.

Modeling Pen Steering Performance in a Single Constant-width Curved Path

Shota Yamanaka, Homei Miyashita
For pen-steering tasks, the steering law can predict the movement time for both straight and circular paths. It has been shown, however, that those path shapes require different regression coefficients. Because it has also been shown that the path curvature (i.e., the inverse of the curvature radius) linearly decreases the movement speed, we refined the steering law to predict the movement time under various curvature radii; our pen-steering experiment included shapes ranging from linear to circular. The results showed that the proposed model fit the empirical data well (adjusted r² > 0.95), with the smallest Akaike information criterion (AIC) value among various candidate models. We also discuss how this refined model can contribute to other fields, such as predicting driving difficulties.
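For context, the baseline steering law (Accot and Zhai) and one illustrative way a curvature term could enter it are sketched below; a, b, and c are empirical constants, and the curvature-augmented form is our own hypothetical reading of the abstract, not necessarily the authors' model.

```latex
% Classic steering law: movement time MT through a path C of width W(s);
% for a constant-width path of length A it reduces to the familiar form.
\[
  MT = a + b \int_{C} \frac{\mathrm{d}s}{W(s)}
  \qquad\Longrightarrow\qquad
  MT = a + b \, \frac{A}{W}
\]
% Hypothetical curvature-aware refinement: if the inverse curvature
% radius 1/r linearly lowers speed, a third empirical term may capture it.
\[
  MT = a + b \, \frac{A}{W} + c \, \frac{A}{r}
\]
```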

On Gesture Combination: An Exploration of a Solution to Augment Gesture Interaction

William Delamare, Chaklam Silpasuwanchai, Sayan Sarcar, Toshiaki Shiraki, Xiangshi Ren
Current gesture interaction paradigms mainly involve a one-to-one gesture-command mapping. This leads to memorability issues regarding (1) the mapping, as each new command requires a new gesture, and (2) the specifics of the gestures (e.g., motion paths), which can become complex in order to keep several gestures distinguishable by a recognizer. We explore the concept of combining 3D gestures when interacting in smart environments. We first propose a design space characterizing the temporal and spatial aspects of combinations, as well as the gesture types used in them. We then report results from three user studies in the context of smart TV interaction. The first study reveals that end-users can create gesture sets with combinations fully optimized to reuse gestures. The second study shows that combining gestures can lead to improved memorability compared to single gestures. The third study reveals that preferences for gesture combinations appear when single gestures have an abstract gesture-command mapping.

ShearSheet: Low-Cost Shear Force Input with Elastic Feedback for Augmenting Touch Interaction

Mengting Huang, Kazuyuki Fujita, Kazuki Takashima, Taichi Tsuchida, Hiroyuki Manabe, Yoshifumi Kitamura
We propose ShearSheet, a low-cost, feasible, and simple DIY method that enables tangential (shear) force input on a touchscreen using a rubber-mounted slim transparent sheet. The sheet has tiny conductive material(s) attached to specific positions on the underside so that displacement of the sheet is recognized as touch input(s). This allows the system to distinguish between normal touch input and shear force input depending on whether the sheet moves, without requiring any external sensors or power. We first introduce ShearSheet's simple implementation for a smartphone and then discuss several promising interaction techniques that augment shear interaction with one- and two-finger operations. We demonstrate the effectiveness of ShearSheet's smooth transition between position- and rate-based control through a controlled user study using simple scrolling tasks. The results confirm that ShearSheet offers improved performance, such as fewer operations, and higher subjective preference, suggesting substantial benefits and potential applications.
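A transfer function of the kind evaluated here might look like the sketch below: small displacements scroll proportionally (position control), and displacements beyond the rubber mounts' comfortable range drive a scrolling velocity (rate control). The gains and threshold are invented for illustration, not the authors' mapping.

```python
import math

POSITION_GAIN = 2.0   # scroll px per px of sheet displacement (assumed)
RATE_GAIN = 40.0      # scroll px/s per px beyond the threshold (assumed)
THRESHOLD = 5.0       # displacement (px) where rate control takes over

def scroll_update(prev_displacement, displacement, dt_s):
    """Scroll increment for one frame of a ShearSheet-style hybrid control."""
    if abs(displacement) <= THRESHOLD:
        # Position control: scrolling follows the sheet's motion directly.
        return POSITION_GAIN * (displacement - prev_displacement)
    # Rate control: the overshoot past the threshold sets a velocity.
    overshoot = displacement - math.copysign(THRESHOLD, displacement)
    return RATE_GAIN * overshoot * dt_s
```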

Situated Sketching and Enactment for Pervasive Displays

Alix Ducros, Clemens N. Klokmose, Aurelien Tabard
Situated sketching and enactment aim at grounding designs in the spatial, social, and cultural practices of a particular place. This is particularly relevant when designing for public places in which human activities are open-ended, multi-faceted, and difficult to anticipate, such as libraries, train stations, or commercial areas. To investigate situated sketching and enactment, we developed Ébauche, which enables designers to collaboratively sketch interfaces, distribute them across multiple displays, and enact use cases. We present the lessons learned from six situated sketching and enactment workshops on public displays with Ébauche, as well as the results of a controlled study with 8 pairs of designers who used paper and Ébauche. We describe the various ways in which participants leveraged the place, and how paper or Ébauche influenced the integration of their designs into the place. Looking at the design outcomes, our results suggest that paper leads to broader exploration of ideas and deeper physical integration in the environment, whereas Ébauche leads to more refined sketches and more animated enactments.

SonicSpray: A Technique to Reconfigure Permeable Mid-Air Displays

Mohd Adili Norasikin, Diego Martinez-Plasencia, Gianluca Memoli, Sriram Subramanian
Permeable mid-air displays, such as those using fog or water mist, are limited by our ability to shape and control the aerosol and must deal with two major issues: (1) the size and complexity of the system, and (2) the creation of laminar flow to retain display quality. Here we present SonicSpray, a technique using ultrasonic Bessel beams to create reconfigurable mid-air displays. We built a prototype from low-cost, off-the-shelf parts and explored the potential and limitations of SonicSpray in creating and redirecting laminar flows of fog. We demonstrate a working prototype that precisely controls laminar aerosols using only a 6×6 array of ultrasound transducers. We describe the implementation steps to build the device, verify the control and projection algorithm for the display, and evaluate its performance. We finally report our exploration of several useful applications in learning, entertainment, and the arts.
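For intuition about how a small transducer array steers sound, the sketch below computes classic phased-array focusing delays for a 6×6 grid; SonicSpray itself generates Bessel beams, so this is a simplified stand-in, and the frequency and pitch constants are assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air
FREQ = 40_000.0         # Hz, typical airborne ultrasound (assumed)
PITCH = 0.0105          # m between transducer centers (assumed)

def focus_phases(target, n=6):
    """Per-transducer emission phases (rad) focusing an n x n array at
    `target` (x, y, z in meters), with the array in the z=0 plane."""
    wavelength = SPEED_OF_SOUND / FREQ
    coords = (np.arange(n) - (n - 1) / 2) * PITCH
    xs, ys = np.meshgrid(coords, coords)
    dists = np.sqrt((xs - target[0])**2 + (ys - target[1])**2 + target[2]**2)
    # Farther transducers emit earlier so all wavefronts arrive in phase.
    return (-2 * np.pi * dists / wavelength) % (2 * np.pi)

phases = focus_phases((0.0, 0.0, 0.15))  # focal point 15 cm above the array
```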

SpaceState: Ad-Hoc Definition and Recognition of Hierarchical Room States for Smart Environments

Andreas Fender, Jörg Müller
We present SpaceState, a system for designing spatial user interfaces that react to changes in the physical layout of a room. SpaceState uses depth cameras to measure the physical environment and allows designers to interactively define global and local states of the room. Once the states are defined, SpaceState can identify the current state of the physical environment in real time. This allows applications to adapt their content to room states and to react to transitions between states. Other scenarios include the analysis and optimization of workflows in physical environments. We demonstrate SpaceState by showcasing various example states and interactions. Lastly, we implemented an example application: a projection-mapping-based telepresence application that projects a remote user into the local physical space according to the current layout of the space.
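One simple way to recognize a defined state from a depth camera is to compare the live depth frame against a reference depth map stored per state; the sketch below illustrates this idea, though it is our own simplification rather than SpaceState's actual pipeline.

```python
import numpy as np

def closest_state(depth_frame, state_templates, tolerance_mm=50):
    """Pick the stored room state whose reference depth map best matches
    the live frame; state_templates maps name -> depth map (same shape)."""
    best_name, best_score = None, -1.0
    for name, template in state_templates.items():
        # Fraction of pixels whose depth agrees within the tolerance.
        score = (np.abs(depth_frame - template) < tolerance_mm).mean()
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```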

Spotlight on Off-Screen Points of Interest in Handheld Augmented Reality: Halo-based techniques

Patrick Perea, Denis Morand, Laurence Nigay
Navigating Augmented Reality (AR) environments with a handheld device often requires users to access digital content (i.e., points of interest, or POIs) associated with physical objects outside the field of view of the device's camera. Halo3D is a technique that displays the locations of off-screen POIs as halos (arcs) along the edges of the screen, reducing clutter by aggregating POIs, but it had not previously been evaluated. The results of a first experiment show that an enhanced version of Halo3D was 18% faster than the focus+context technique AroundPlot* for pointing at a POI, and was perceived as 34% less intrusive than the arrow-based technique Arrow2D. The results of a second experiment in more realistic settings reveal that two variants of Halo3D that show the spatial distribution of POIs in clusters (1) enable an effective understanding of the off-screen environment and (2) require less effort than AroundPlot* to find POIs in the environment.
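The basic halo geometry can be sketched as follows: the arc is a circle centered on the off-screen POI, with the radius chosen so that the circle just intrudes into the viewport. This follows the original 2D Halo idea; the coordinate setup and intrusion depth are illustrative assumptions, not Halo3D's exact algorithm.

```python
import math

def halo_arc(poi_x, poi_y, width, height, intrusion=30):
    """Circle (center, radius) whose on-screen arc signals an off-screen POI:
    centered on the POI, with the radius reaching `intrusion` px inside a
    width x height viewport."""
    cx, cy = width / 2, height / 2
    dx, dy = poi_x - cx, poi_y - cy
    # Scale the center-to-POI ray until it hits the viewport border.
    t = min(cx / abs(dx) if dx else math.inf,
            cy / abs(dy) if dy else math.inf)
    edge = (cx + dx * t, cy + dy * t)
    radius = math.dist((poi_x, poi_y), edge) + intrusion
    return (poi_x, poi_y), radius
```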

Tailored Controls: Creating Personalized Tangible User Interfaces from Paper

Vincent Becker, Sandro Kalbermatter, Simon Mayer, Gábor Sörös
User interfaces rarely adapt to specific user preferences or the task at hand. We present a method that allows users to quickly and inexpensively create personalized interfaces from plain paper. Users can cut out shapes and assign control functions to these paper snippets via a simple configuration interface. After configuration, control takes place entirely through the manipulation of the paper shapes, providing the experience of a tailored tangible user interface. The shapes and assignments can be dynamically changed during use. Our system is based on markerless tracking of the user's fingers and the paper shapes on a surface, using an RGBD camera mounted above the interaction space as the only hardware sensor required. Our approach and system are backed by two studies in which we determined what shapes and interaction abstractions users prefer, and verified that users can indeed employ our system to build real applications with paper-snippet interfaces.
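As a rough illustration of the markerless shape tracking, the sketch below segments bright paper cut-outs from the overhead camera's RGB stream and matches them to registered shapes via Hu-moment contour similarity; the thresholds, camera setup, and matching method are our assumptions, not the authors' implementation.

```python
import cv2

# Illustrative detection of paper cut-outs on a dark surface, matched
# against contours registered during the configuration phase.
def detect_paper_shapes(bgr_frame, registered, max_dissimilarity=0.15):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    # Paper is bright against the surface; tune the threshold per setup.
    _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hits = []
    for c in contours:
        if cv2.contourArea(c) < 500:          # ignore specks
            continue
        for name, ref in registered.items():  # ref: a stored contour
            d = cv2.matchShapes(c, ref, cv2.CONTOURS_MATCH_I1, 0.0)
            if d < max_dissimilarity:
                hits.append((name, c))
    return hits
```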

Towards More Practical Spacing for Smartphone Touch GUI Objects Accompanied by Distractors

Shota Yamanaka, Hiroaki Shimono, Homei Miyashita
To achieve better touch GUI designs, researchers have studied optimal target margins, but state-of-the-art results on this topic have been obtained only for mid-sized touchscreens. In this study, to better determine practical target arrangements for smartphone GUIs, we conducted four experiments to test the effects of gaps between targets on touch-pointing performance. Two lab-based and two crowd-based experiments showed that wider gaps tend to help users reduce the task completion time and the rate of accidental taps on unintended items. For 1D and 2D tasks, the results of the lab-based study with a one-handed thumb operation style showed that 4-mm gaps were the smallest size that removed the negative effects of surrounding items. The results of the crowd-based study, however, indicated no specific gap size that completely removed the negative effects.

Visual Link Routing in Immersive Visualisations

Arnaud Prouzeau, Antoine Lhuillier, Barrett Ens, Daniel Weiskopf, Tim Dwyer
In immersive display environments, such as virtual or augmented reality, we can make explicit the connections between data points in visualisations and their context in the world, or in other visualisations. This paper considers the requirements and design space for drawing such links in order to minimise occlusion and clutter. A novel possibility in immersive environments is to optimise the link layout with respect to a particular point of view; in collaborative scenarios, this needs to be done for multiple points of view. We present an algorithm to achieve such link layouts and demonstrate its applicability in a variety of practical use cases.