Designing embodied playfulness has been explored as a method for problem-solving. However, when deploying such an approach in public-space activities, we often face many constraints regarding safety and ambiance, especially with respect to bodily movements and behavior. To explore and address the challenges of deploying playfulness with restrained bodily movements in public spaces, we present a case study of an escalator augmented with auditory and visual feedback. An escalator in a public shopping mall imposes many constraints that require careful consideration in the design to maintain safety and avoid mistakes. We describe the challenges of our design strategy in completing a five-day installation on a public escalator. The results show that our approach significantly encouraged people to use the escalator and also improved the manner in which they used it. Our work presents a successful method of balancing social constraints and enjoyment in ways that can positively influence human behavior.
Run Zhao, Dong Wang, Qian Zhang, Xueyi Jin, Ke Liu
Handwritten signature verification techniques, which can facilitate user authentication and enable secure information exchange, remain important for protecting property. However, on-line automatic handwritten signature verification usually requires dynamic handwriting patterns captured by a special device, such as a sensor-instrumented pen, a tablet, or a smartwatch worn on the dominant hand. This paper presents SonarSign, an on-line handwritten signature verification system based on inaudible acoustic signals. The key insight is to use acoustic signals to capture the dynamic handwritten signature patterns for verification. In particular, SonarSign exploits the built-in speakers and microphones of smartphones to transmit a specially designed training sequence and record the corresponding echo for channel impulse response (CIR) estimation, respectively. Because the CIR is sensitive to tiny changes in the surrounding environment, including handwritten signature actions, SonarSign adopts an attentional multi-modal Siamese network for end-to-end signature verification. First, multi-modal CIR streams are fused to extract representative signature-pattern features along spatio-temporal dimensions. Then, an attentional Siamese network verifies whether two given signatures are from the same signatory. Extensive experiments in real-world scenarios show that SonarSign achieves accurate and robust signature verification, with an AUC (Area Under the Receiver Operating Characteristic (ROC) Curve) of 98.02% and an EER (Equal Error Rate) of 5.79% for unseen users.
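To illustrate the CIR-estimation step described above, the following minimal sketch cross-correlates the recorded microphone signal with the known transmitted training sequence to obtain a CIR frame; the function name, parameters, and normalization are illustrative assumptions, and SonarSign's actual pipeline (training-sequence design, synchronization, and multi-modal fusion) is considerably more involved.

```python
import numpy as np

def estimate_cir(recorded, training_seq, cir_len=64):
    """Sketch of channel impulse response (CIR) estimation: cross-correlate
    the recorded echo with the known training sequence; correlation peaks
    correspond to echo paths (delays) and their strengths."""
    corr = np.correlate(recorded, training_seq, mode="full")
    # Align to the strongest arrival and keep cir_len taps after it.
    start = int(np.argmax(np.abs(corr)))
    cir = corr[start:start + cir_len]
    # Normalize so CIR frames from different transmissions are comparable.
    return cir / (np.linalg.norm(cir) + 1e-9)

# A sequence of such CIR frames captured while the user signs forms the
# spatio-temporal input that a Siamese verification network could consume.
```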
Computation is increasingly seen as a design material, finding its way into once mundane objects, surfaces and spaces. As such, demonstrations of the convergence of architectural elements and interactive systems are becoming commonplace in both research and industry. While eXtended Reality (XR) and projection mapping technologies seem to be ubiquitous in this field, shape-changing interfaces are under-represented. In this case study we explore the domain of shape-changing thresholds in the form of an interactive door panel. Corresponding to conditions of space occupancy, the design communicates three ascribed social cues through dynamic affordances and affordance elimination: entry invitation, entry permission, and entry prohibition. In a user evaluation study, all participants were able to correctly interpret the entry invitation and entry prohibition cues. Based on our learnings and findings, we discuss aspects of our design and the implications of shape-changing doors.
Bridger Herman, Maxwell Omdal, Stephanie Zeller, Clara A Richter, Francesca Samsel, Greg Abram, Daniel F. Keefe
Data physicalizations (e.g., 3D printed terrain models, anatomical scans, or even abstract data) can naturally engage both the visual and haptic senses in ways that are difficult or impossible to achieve with traditional planar touch screens and even immersive digital displays. Yet, the rigid 3D physicalizations produced with today's most common 3D printers are fundamentally limited for data exploration and querying tasks that require dynamic input (i.e., touch sensing) and output (i.e., animation), functions that are easily handled with digital displays. We introduce a novel style of hybrid virtual + physical visualization designed specifically to support interactive data exploration tasks. Working toward a "best of both worlds" solution, our approach fuses immersive AR, physical 3D data printouts, and touch sensing through the physicalization. We demonstrate that this solution can support three of the most common spatial data querying interfaces used in scientific visualization (streamline seeding, dynamic cutting planes, and world-in-miniature visualization). Finally, we present quantitative performance data and describe a first application to exploratory visualization of an actively studied supercomputer climate simulation dataset, with feedback from domain scientists.
Marco Moran-Ledesma, Oliver Schneider, Mark Hancock
When interacting in virtual reality (VR) applications like CAD and open-world games, people may want to use gestures as a means of intuitive and natural input. However, people may prefer physical props over handheld controllers for inputting gestures in VR. We present an elicitation study in which 21 participants chose from 95 props to perform manipulative gestures for 20 CAD-like and open-world game-like referents. When analyzing this data, we found that existing methods for elicitation studies were insufficient to describe gestures with props, or to measure agreement with prop selection (i.e., agreement between sets of items). We proceeded by describing gestures as context-free grammars, capturing how different props were used in similar roles in a given gesture. We present gesture and prop agreement scores using a generalized agreement score that we developed to compare multiple selections rather than a single selection. We found that props were selected based on their resemblance to virtual objects and the actions they afforded; that gesture and prop agreement depended on the referent, with some referents leading to similar gesture choices while others led to similar prop choices; and that a small set of carefully chosen props can support multiple gestures.
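The paper's generalized agreement score is not reproduced here; as a loose illustration of measuring agreement over sets of selections (rather than single selections), the sketch below averages pairwise Jaccard similarity of the prop sets chosen by each pair of participants for one referent. The function and its formulation are assumptions for illustration only.

```python
from itertools import combinations

def set_agreement(selections):
    """Average pairwise Jaccard similarity across participants.

    selections: one set per participant, containing the props that
    participant chose for a single referent. Returns 1.0 when all
    participants chose identical sets and approaches 0.0 with no overlap.
    (Illustrative analogue only, not the score defined in the paper.)"""
    jaccard = lambda a, b: len(a & b) / len(a | b) if (a | b) else 1.0
    pairs = list(combinations(selections, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Example: three participants choosing props for a hypothetical "open door" referent.
print(set_agreement([{"handle", "card"}, {"handle"}, {"handle", "key"}]))
```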
Ville Paananen, Jonas Oppenlaender, Jorge Goncalves, Danula Hettiachchi, Simo Hosio
Spatial experience, or how humans experience a given space, has been a pivotal topic, especially in urban-scale environments. At the human scale, researchers have mostly investigated personal meanings or aesthetic and embodied experiences. In this paper, we investigate the human scale as an ensemble of individual granular spatial features. Through large-scale online questionnaires, we first collected a rich set of spatial features that people generally use to characterize their surroundings. Second, we conducted a set of field interviews to develop a more nuanced understanding of the feature identified as most important in the first study: perceived safety. Our combined quantitative and qualitative analysis contributes to spatial understanding and presents a timely investigation into the perceived safety of human-scale spaces. By connecting our results to the broader scientific literature, we contribute to the field of HCI, especially to digitally capturing and conveying spatial understanding.
Head-mounted displays (HMDs) increase immersion in virtual worlds. The problem is that this limits headset users' awareness of bystanders: headset users cannot attend to bystanders' presence and activities. We call this the HMD boundary. We explore how to make the HMD boundary permeable by comparing different ways of providing informal awareness cues to the headset user about bystanders. We adapted and implemented three visualization techniques (Avatar View, Radar and Presence++) that share bystanders' location and orientation with headset users. We conducted a hybrid user and simulation study with three types of VR content (high, medium, and low interactivity) with twenty participants to compare how these visualization techniques allow people to maintain an awareness of bystanders, and how they affect immersion (compared to a baseline condition). Our study reveals that a see-through avatar representation of bystanders was effective, but led to slightly reduced immersion in the VR content. Based on our findings, we discuss how future awareness visualization techniques can be designed to mitigate the reduction of immersion for the headset user.
Finn Welsford-Ackroyd, Andrew Chalmers, Rafael Kuffner, Daniel Medeiros, Hyejin Kim, Taehyun James Rhee
In this paper, we present a system that allows the user of a head-mounted display (HMD) to communicate and collaborate with spectators outside of the headset, and we evaluate its impact on task performance, immersion, and collaborative interaction. The solution targets scenarios like live presentations or multi-user collaborative systems, where it is not convenient or cost-effective to develop a VR multiplayer experience and supply each user with an HMD. The spectator views the virtual world on a large-scale tiled video wall and is given the ability to control the orientation of their own virtual camera. This allows the spectator to stay focused on the immersed user's point of view or freely look around the environment. To improve collaboration between users, we implemented a pointer system in which the spectator can point at objects on the screen, and these points map directly onto the objects in the virtual world. We investigate the influence of rotational camera decoupling and pointing gestures in the context of HMD-immersed and non-immersed users utilizing a large-scale display. This influence was evaluated through a user study, where we found that the system enables spectators to effectively communicate, collaborate, and feel semi-immersed without the need to wear an HMD.
Jim Smiley, Benjamin Lee, Siddhant Tandon, Maxime Cordeil, Lonni Besançon, Jarrod Knibbe, Bernhard Jenny, Tim Dwyer
Tangible controls---especially sliders and rotary knobs---have been explored in a wide range of interactive applications for desktop and immersive environments. Studies have shown that they support greater precision and provide proprioceptive benefits, such as support for eyes-free interaction. However, such controls tend to be expressly designed for specific applications. We draw inspiration from a bespoke controller for immersive data visualisation, but decompose this design into a simple, wireless, composable unit featuring two actuated sliders and a rotary encoder. Through these controller units, we explore the interaction opportunities around actuated sliders, supporting precise selection, infinite scrolling, adaptive data representations, and rich haptic feedback, all within a mode-less interaction space. We demonstrate the controllers' use for simple, ad hoc desktop interaction, before moving on to more complex, multi-dimensional interactions in VR and AR. We show that the flexibility and composability of these actuated controllers provides an emergent design space for interaction.
In touch interfaces, a target, such as an icon, has two widths: the visual width and the touchable width. The visual width is the width of the target as it is visually displayed, and the touchable width is the width of the area in which a touch on the target executes an action. In this study, we conduct two experiments to investigate the effects of the visual and touchable widths on touch pointing performance (movement time and success rate). Based on the results, we build candidate models for predicting the movement time and compare them by their adjusted R^2 and AIC values. In addition, we build a success rate model and test it through cross-validation. Existing models can be applied only to situations where the visual and touchable widths are equal; we show that our refined model achieves better model fitness even when these widths differ. We also discuss design implications for touch interfaces based on our models.
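For readers unfamiliar with the comparison criteria, the sketch below shows how adjusted R^2 and AIC can be computed for a candidate movement-time model fitted by least squares; the Fitts-style model form and variable names are placeholders, not the candidate models examined in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def fitts_mt(ID, a, b):
    # Placeholder Fitts-style candidate model: MT = a + b * ID.
    return a + b * ID

def adjusted_r2_and_aic(x, y, model, k):
    """Fit `model` to (x, y) by least squares and return (adjusted R^2, AIC).

    k is the number of free parameters; AIC uses the Gaussian least-squares
    form n*ln(RSS/n) + 2k, with constant terms dropped."""
    params, _ = curve_fit(model, x, y)
    residuals = y - model(x, *params)
    rss = float(np.sum(residuals ** 2))
    tss = float(np.sum((y - np.mean(y)) ** 2))
    n = len(y)
    r2 = 1.0 - rss / tss
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    aic = n * np.log(rss / n) + 2 * k
    return adj_r2, aic
```

Among candidate models, a higher adjusted R^2 together with a lower AIC indicates a better fit after penalizing additional free parameters.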
Yosra Rekik, Edward Lank, Adnane Guettaf, Laurent Grisoni
Alongside vision and sound, hardware systems can be readily designed to support various forms of tactile feedback; however, while a significant body of work has explored enriching visual and auditory communication with interactive systems, tactile information has not received the same level of attention. In this work, we explore increasing the expressivity of tactile feedback by allowing the user to dynamically select between several channels of tactile feedback using variations in finger speed. In a controlled experiment, we show that a user can learn the dynamics of eyes-free tactile channel selection among different channels, and can reliably discriminate between different tactile patterns during multi-channel selection with an accuracy of up to 90% when using two finger speed levels. We discuss the implications of this work for richer, more interactive tactile interfaces.
Aziz Niyazov, Nicolas Mellado, Loic Barthe, Marcos Serrano
Pervasive interfaces can present relevant information anywhere in our environment, and they are thus challenged by the non-rectilinearity of the display surface (e.g., a circular table) and by the presence of objects that can partially occlude the interface (e.g., a book or cup on the table). To tackle this problem, we propose a novel solution based on two core contributions: the decomposition of the interface into deformable graphical units, called Dynamic Decals, and the control of their position and behaviour by a constraint-based approach. Our approach dynamically deforms the interface when needed while minimizing the impact on its visibility and layout properties. To do so, we extend previous work on implicit deformations to propose and experimentally validate functions defining different decal shapes and new deformers modeling decal deformations when they collide. Then, we interactively optimize the decal placements according to the interface geometry and their interrelations. Relations are modeled as constraints, and the interface evolution results from a minimization problem that is easy and efficient to solve. Our approach is validated by a user study showing that, compared to two baselines, Dynamic Decals is an aesthetically pleasing interface that preserves visibility, layout, and aesthetic properties.
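As a loose illustration of treating placement relations as soft constraints in a minimization problem (this is not the paper's energy formulation, implicit-deformation machinery, or solver), the sketch below pulls decal centers toward their preferred layout positions while pushing them out of a circular occluder.

```python
import numpy as np
from scipy.optimize import minimize

def place_decals(preferred, occluder_center, occluder_radius, decal_radius=0.5):
    """Toy constraint-as-penalty placement: keep decals near their preferred
    positions while avoiding a circular occluder (e.g., a cup on the table).
    Illustrative only; not the Dynamic Decals formulation."""
    preferred = np.asarray(preferred, dtype=float)
    n = len(preferred)

    def energy(flat):
        pos = flat.reshape(n, 2)
        # Attraction toward the preferred layout positions.
        e = np.sum((pos - preferred) ** 2)
        # Soft constraint: penalize overlap with the occluder.
        d = np.linalg.norm(pos - occluder_center, axis=1)
        overlap = np.maximum(0.0, occluder_radius + decal_radius - d)
        return e + 50.0 * np.sum(overlap ** 2)

    result = minimize(energy, preferred.ravel())
    return result.x.reshape(n, 2)

# Example: two decals, one initially underneath an occluding object.
print(place_decals([[0.0, 0.0], [3.0, 0.0]],
                   occluder_center=np.array([0.2, 0.0]), occluder_radius=1.0))
```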
Picking numbers is arguably the most frequently performed input task on smartwatches. This paper presents three new methods for picking numbers on smartwatches by performing directional swipes, twisting the wrist, and varying contact force on the screen. Unlike the default number picker, the proposed methods enable users to actively switch between slow-and-steady and fast-and-continuous increments and decrements during the input process. We evaluated these methods in two user studies. The first compared the new methods with the default input stepper method in both stationary and mobile settings. The second compared them for individual numeric values and values embedded in text. In both studies, swipe yielded a significantly faster input rate. Participants also found the method faster, more accurate, and the least mentally and physically demanding compared to the other methods. Accuracy rates were comparable between the methods.
Nalin Chhibber, Hemant Bhaskar, Fabrice Matulic, Daniel Vogel
We propose a style of hand postures to trigger commands on a laptop. The key idea is to perform hand postures while keeping the hands on, beside, or below the keyboard, to align with natural laptop usage. 36 hand-posture variations are explored, considering three resting locations, left or right hand, open or closed hand, and three wrist rotation angles. A 30-participant formative study measures posture preferences and generates a dataset of nearly 350K images under different lighting conditions and backgrounds. A deep learning recognizer achieves over 97% accuracy when classifying all 36 postures plus 2 additional non-posture classes for typing and non-typing. A second experiment with 20 participants validates the recognizer under real-time usage and compares posture invocation time with keyboard shortcuts. The results show low error rates and fast formation times, indicating that the postures are close to current typing and pointing postures. Finally, practical use case demonstrations are presented, and further extensions are discussed.
Terence Dickson, Rina R. Wehbe, Fabrice Matulic, Daniel Vogel
We propose CursorTap, an extension of Forlines et al.'s mixed absolute-and-relative "HybridPointing" to large wall-sized multitouch displays. Our technique uses a relative pointing quasimode activated with one hand, while the other hand controls a distant cursor, similar to using a large touchpad. A controlled experiment compares the technique to standard absolute touch input as a baseline and to a whole-display "Drag" technique representing a common alternative approach. Results show CursorTap is fastest for the common usage scenario of reaching distant targets and then returning to nearby targets. Overall, median selection times across distances are similar with CursorTap, but increase linearly with the other techniques. As further validation, a second study explores how people use CursorTap in a two-person game. The results show that just over half of the participants chose to use CursorTap for half of the primary interactions in which "enemies" are eliminated using a tap, drag, or lasso "tool".
Nurit Kirshenbaum, Kylie Davidson, Jesse Harden, Chris North, Dylan Kobayashi, Ryan Theriot, Roderick S Tabalba Jr., Michael L. Rogers, Mahdi Belcaid, Andrew T Burks, Krishna N Bharadwaj, Luc Renambot, Andrew E Johnson, Lance Long, Jason Leigh
Technology has long been a partner of workplace meeting facilitation. The recent outbreak of COVID-19 and the cautionary measures to reduce its spread have made it more prevalent than ever before in the form of online meetings. In this paper, we recount our experiences during weekly meetings in three modalities: using SAGE2 - collaborative sharing software designed for large displays - for co-located meetings, using a standard projector for co-located meetings, and using the Zoom video-conferencing tool for distributed meetings. We view these meetings through the lens of effective meeting attributes and share ethnographic observations and an attitudinal survey conducted in our research lab. We discuss patterns of content sharing, whether sequential, parallel, or semi-parallel, and the potential advantages of creating complex canvases of content. We see how the SAGE2 tool affords parallel content sharing to create complex canvases, which represent queues of ideas and contributions (past, present, and future) using the space on a large display to suggest the progression of time through the meeting.
Eyes-free operation of mobile devices is critical in situations where the visual channel is either unavailable or attention is needed elsewhere. In such situations, vibrotactile tracing along paths or lines can help users to navigate and identify symbols and shapes without visual information. In this paper, we investigated the applicability of different measures that can determine the effectiveness of vibrotactile line tracing on touch screens. In two user studies, we compare trace Length error, Area error, and Fréchet distance as alternatives to the commonly used trace Time. Our results show that a lower Fréchet distance is indicative of better comprehension of a line trace. Furthermore, we show that distinct feedback methods perform differently with varying geometric features in lines, and we propose a segmented line design for tactile line tracing studies. We believe these results will inform future designs of eyes-free operation techniques and studies.
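The Fréchet distance between a reference line and a user's traced path can be computed, after sampling both as point sequences, with the standard discrete Fréchet distance; a minimal dynamic-programming sketch is given below (the sampling and exact computation used in the paper may differ).

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines P and Q, each given
    as an (n, 2) array of sampled points (standard DP formulation)."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    n, m = len(P), len(Q)
    dist = lambda i, j: float(np.linalg.norm(P[i] - Q[j]))

    ca = np.empty((n, m))
    ca[0, 0] = dist(0, 0)
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], dist(i, 0))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], dist(0, j))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           dist(i, j))
    return ca[n - 1, m - 1]

# Lower values indicate that the traced path stays closer, in order, to the
# reference line, which the studies link to better comprehension of the trace.
```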
Affinity diagramming is widely applied to analyze qualitative data such as interview transcripts. It involves multiple analytic processes and is often performed collaboratively. Drawing on interviews with three practitioners and on our own experience, we show how practitioners combine multiple analytic processes and adopt different artifacts to help them analyze their data. Current tools, however, fail to adequately support mixing analytic processes, devices, and collaboration styles. We present a vision and a prototype, ADQDA, a cross-device, collaborative affinity diagramming tool for qualitative data analysis, implemented using distributed web technologies. We show how this approach enables analysts to appropriate available pertinent digital devices as they fluidly migrate between analytic phases or adopt different methods and representations, all while preserving consistent analysis artifacts. We validate this approach through a set of application scenarios that explore how it enables new ways of analyzing qualitative data that better align with identified analytic practices.
Mark J. Berentsen, Marit Bentvelzen, Paweł W. Woźniak
Mountain Biking (MTB) is an increasingly popular outdoor activity which offers a unique connection to nature along with the health benefits of cardiovascular exercise. Yet, complex MTB technique is an entry barrier that often prevents novices from enjoying the sport. Interactive systems that support developing MTB proficiency can augment the outdoor experience and make the sport available to a larger group of users. To that end, we designed, implemented and evaluated MTBalance---a system which provides body posture feedback for beginner mountain bikers. Based on inertial tracking, MTBalance informs the user about how to correct their posture to improve MTB performance. We conducted a study in which we compared different feedback modalities for MTBalance. We observed that the system increased perceived balance awareness. Our work provides insights for designing body awareness systems for outdoor sports.
Over the past decade, many systems have been developed for humans to remotely connect to their pets at home. Yet little attention has been paid to how animals can control such systems and what the implications are of animals using internet systems. This paper explores the creation of a video call device to allow a dog to remotely call their human, giving the animal control and agency over technology in their home. After building and prototyping a novel interaction method over several weeks and iterations, we test our system with a dog and a human. Analysing our experience and data, we reflect on power relations, how to quantify an animal's user experience and what interactive internet systems look like with animal users. This paper builds upon Human-Computer Interaction methods for unconventional users, uncovering key questions that advance the creation of animal-to-human interfaces and animal internet devices.
Tao Morisaki, Ryoma Mori, Ryosuke Mori, Kohki Serizawa, Yasutoshi Makino, Yuta Itoh, Yuji Yamakawa, Hiroyuki Shinoda
The motion of a tool (e.g., a pen or ball) can be applied to physically support human activities. In this study, we propose a new concept of human augmentation: manipulating a tool that a person is using through the application of noncontact forces. We selected a moving ping-pong ball (PPB) as the target tool. As a proof-of-concept system, we developed an ultrasound-based curveball system with which table tennis players can shoot a curveball regardless of their physical ability. The system curves the trajectory of a moving PPB by presenting a noncontact ultrasound force. With the proposed system, players can determine the curve timing and the curve direction (left or right) using a racket-shaped controller. An actual table tennis match using the curveball system qualitatively showed that the player using the curveball system had the upper hand. Another user study using a ball dispenser quantitatively showed that the ultrasound-driven curveball increases the number of mistakes made by the opponent player by 2.95-fold. These results indicate that the proposed concept is feasible.
Lauren Thevin, Nicolas Rodier, Bernard Oriola, Martin Hachet, Christophe Jouffrais, Anke M. Brock
Board games allow us to share collective entertainment experiences. They entertain because of the interactions between players, the physical manipulation of tokens, and decision making. Unfortunately, most board games exclude people with visual impairments, as they were not initially designed for players with special needs. Through a user-centered design process with an accessible game library and visually impaired players, we observed challenges and solutions in making existing board games accessible through handcrafted solutions (tactile stickers, braille labels, etc.). In a second step, we used Spatial Augmented Reality (SAR) to make existing board games inclusive by adding interactivity (GameARt). In a case study with an existing board game considered not accessible (Jamaica), we designed an interactive SAR version with touch detection (JamaicAR). We evaluated this prototype in a user study with 5 groups of 3 players each, including sighted, low vision, and blind players. All players, independent of their visual status, were able to play the Augmented Reality game. Moreover, the game was rated positively by all players regarding attractiveness, play engrossment, enjoyment, and social connectivity. Our work shows that Spatial Augmented Reality has the potential to make board games accessible to people with visual impairments when handcrafted adaptations fall short.