Get access to the Proceedings provided by ACM OpenTOC.


Get the opening and closing remarks slides here:

Times are displayed in time zone: (UTC) Coordinated Universal Time

15:00 - 19:00

Doctoral Symposium

With coffee break from 16:30 - 17:00

12:00 - 12:30

Opening Remarks

12:30 - 13:30

Session 1  

Chair: Paweł W. Woźniak (Utrecht University, Netherlands)

Exploring User Defined Gestures for Ear-Based Interactions


Yu-Chun Chen, Chia-Ying Liao, Shuo-wen Hsu and Da-Yuan Huang
The human ear is highly sensitive and accessible, making it especially suitable as an interface for interacting with smart earpieces or augmented glasses. However, previous work on ear-based input has mainly addressed gesture sensing technology and researcher-designed gestures. This paper aims to bring more understanding to gesture design. We therefore conducted a user elicitation study with 28 participants, each of whom designed gestures for 31 smart device-related tasks, yielding a total of 868 gestures. On the basis of these gestures, we compiled a taxonomy and distilled the considerations underlying the participants' designs, which also offer insights into their design rationales and preferences. Based on these study results, we propose a set of user-defined gestures and share interesting findings. We hope this work sheds light not only on sensing technologies for ear-based input, but also on the interface design of future wearable interfaces.

Maps Around Me: 3D Multiview Layouts in Immersive Spaces


Kadek Ananta Satriadi, Barrett Ens, Maxime Cordeil, Tobias Czauderna and Bernhard Jenny
Visual exploration of maps often requires a contextual understanding at multiple scales and locations. Multiview map layouts, which present a hierarchy of multiple views to reveal detail at various scales and locations, have been shown to support better performance than traditional single-view exploration on desktop displays. This paper investigates extending such layouts of 2D maps into 3D immersive spaces, which are not limited by the real-estate barrier of physical screens and support sensemaking through spatial interaction. Based on our initial implementation of immersive multiview maps, we conduct an exploratory study with 16 participants aimed at understanding how people place and view such maps in immersive space. We observe the layouts produced by users performing map exploration tasks involving search, comparison, and route planning. Our qualitative analysis identified patterns in layout geometry (spherical, spherical cap, planar), overview-detail relationship (central window, occluding, coordinated) and interaction strategy. Based on these observations, along with qualitative feedback from a user walkthrough session, we identify implications and recommend features for immersive multiview map systems. Our main findings are that participants tend to prefer and arrange multiview maps in a spherical cap layout around them and that they often rearrange the views during tasks.

Towards Rapid Prototyping of Foldable Graphical User Interfaces with Flecto


Iyad Khaddam, Jean Vanderdonckt, Salah Dowaji and Donatien Grolaux
Whilst new patents and announcements advertise the technical availability of foldable displays, which can be folded to some extent, there is still a lack of fundamental and applied understanding of how to model, design, and prototype graphical user interfaces for these devices before actually implementing them. Without waiting for their off-the-shelf availability and without being tied to any physical foldable mechanism, Flecto defines a model, an associated notation, and supporting software for prototyping graphical user interfaces running on foldable displays, such as foldable smartphones or assemblies of foldable surfaces. For this purpose, the Yoshizawa-Randlett diagramming system, used to describe the folds of origami models, is extended to characterize a foldable display and define possible interactive actions based on its folding operations. A guiding method for rapidly prototyping foldable user interfaces is devised and supported by Flecto, a design environment in which foldable user interfaces are simulated in virtual reality instead of in physical reality. We report on a case study demonstrating Flecto in action and gather user feedback on Flecto using Microsoft Product Reaction Cards.

Evaluation of Skill Improvement by Combining Skill and Difficulty Levels During Paper-cutting Production


Takafumi Higashi
Novices find it difficult to choose paper-cutting tasks whose difficulty matches their skills. Hence, this paper describes changes in skill improvement when combining the skills of novices with the difficulty levels of paper-cutting production tasks. In psychology, the flow state characterizes a mental state arising from the combination of skill and difficulty level. We therefore evaluated changes in skill improvement based on the combination of "skill level for controlling cutting pressure" and "difficulty level due to width and distance." Further, we developed a questionnaire to evaluate the psychological state during paper-cutting production, based on an existing quantitative method for assessing flow. The results confirmed the difference between skill improvement and flow during production.

Investigating How Smartphone Movement is Affected by Lying Down Body Posture


Kaori Ikematsu, Haruna Oshima, Rachel Eardley and Itiro Siio
In this paper, we investigate how various "lying down" body postures affect smartphone use and user interface (UI) design, extending previous research that studied body postures, handgrips, and smartphone movement. We did this in three steps: (1) an online survey that examined which types of lying-down postures participants adopted when operating a smartphone; (2) a breakdown of these postures in terms of body angle (i.e., users facing down, facing up, and on their side) and body support; (3) an experiment examining the effects of these body angles and body supports on the participants' handgrips. We found that the smartphone moves the most (is the most unstable) in the "facing up (with support)" condition. Additionally, we discovered that the participants' preferred body postures were those that produced the least amount of smartphone motion (the most stability).

Break

14:00 - 15:00

Session 2  

Chair: Frederik Brudy (Autodesk Research Toronto, Canada)

Flex-ER: A Platform to Evaluate Interaction Techniques for Immersive Visualizations


María Jesús Lobo, Christophe Hurter and Pourang Irani
Extended Reality (XR) systems, which encompass AR, VR, and MR, form an emerging field that enables the development of novel visualization and interaction techniques. To develop and assess such techniques, researchers and designers must choose which development tools to adopt, with very little information about how those tools support some of the most basic tasks in information visualization, such as selecting data items, linking, and navigating. As a solution, we propose Flex-ER, a flexible web-based environment that enables users to prototype, debug, and share experimental conditions and results. Flex-ER enables users to quickly switch between hardware platforms and input modalities by using a JSON specification that supports defining both interaction techniques and tasks at a low cost. We demonstrate the flexibility of the environment through three task design examples: brushing, linking, and navigating. A qualitative user study suggests that Flex-ER can be helpful for prototyping and exploring different interaction techniques for immersive analytics.
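The abstract mentions a JSON specification for defining tasks and interaction techniques. As a purely illustrative sketch (the field names below are invented; the actual Flex-ER schema is not described here), such a specification might look like this:

```python
import json

# Hypothetical sketch of a Flex-ER-style condition specification.
# Field names are invented for illustration; they are not the real schema.
spec = {
    "task": "brushing",
    "platform": "hololens",           # hardware platform to target
    "input": "hand-ray",              # input modality used for selection
    "dataset": "scatterplot-3d.csv",  # data rendered in the immersive view
    "interaction": {
        "trigger": "pinch",           # gesture that starts a brush
        "effect": "highlight-linked"  # linked views update on selection
    },
    "logging": {"metrics": ["completion_time", "error_count"]},
}

print(json.dumps(spec, indent=2))  # a plain-JSON condition is easy to share and version
```

In a setup along these lines, switching hardware platforms or input modalities would amount to editing a couple of fields rather than rewriting the experiment.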

Collaborative Tabletops for Blind People: The Effect of Auditory Design on Workspace Awareness


Daniel Mendes, Sofia Reis, João Guerreiro and Hugo Nicolau
Interactive tabletops offer unique collaborative features, particularly their size, geometry, orientation and, more importantly, the ability to support multi-user interaction. Although previous efforts were made to make interactive tabletops accessible to blind people, the potential to use them in collaborative activities remains unexplored. In this paper, we present the design and implementation of a multi-user auditory display for interactive tabletops, supporting three feedback modes that vary on how much information about the partners' actions is conveyed. We conducted a user study with ten blind people to assess the effect of feedback modes on workspace awareness and task performance. Furthermore, we analyze the type of awareness information exchanged and the emergent collaboration strategies. Finally, we provide implications for the design of future tabletop collaborative tools for blind users.

Autogrip: Enabling Force Feedback Devices to Self-Attach to End-Users' Fingers


Gareth Barnaby and Anne Roudaut
Autogrip is a new thimble that enables force feedback devices to autonomously attach themselves to a finger. Although self-attachment is a simple concept, it has never been explored in the space of force feedback devices, where current thimble solutions require complex attachment procedures and often swapping between interchangeable parts. Self-attachment is advantageous in many applications, such as immersive spaces, multi-user and walk-up-and-use contexts, and especially multi-point force feedback systems, as it allows a lone user to quickly attach multiple devices to fingers on both hands, a difficult task with current thimbles. We present the design of our open-source contraption, Autogrip, a one-size-fits-all thimble that retrofits to existing force feedback devices, enabling them to automatically attach themselves to a fingertip. We demonstrate Autogrip by retrofitting it to a Phantom 1.5 and a 4-finger Mantis system. We report preliminary user-testing results indicating that Autogrip was three times faster to attach than a typical method. We also present further refinements based on user feedback.

Invited CHI: Rapid Iron-On User Interfaces: Hands-on Fabrication of Interactive Textile Prototypes


Konstantin Klamka, Raimund Dachselt and Jürgen Steimle
Rapid prototyping of interactive textiles is still challenging, since manual skills, several processing steps, and expert knowledge are involved. We present Rapid Iron-On User Interfaces, a novel fabrication approach for empowering designers and makers to enhance fabrics with interactive functionalities. It builds on heat-activated adhesive materials consisting of smart textiles and printed electronics, which can be flexibly ironed onto the fabric to create custom interface functionality. To support rapid fabrication in a sketching-like fashion, we developed a handheld dispenser tool for directly applying continuous functional tapes of desired length as well as discrete patches. We introduce versatile composition techniques that allow for creating complex circuits, utilizing commodity textile accessories, and sketching custom-shaped I/O modules. We further contribute a comprehensive library of components for input, output, wiring, and computing. Three example applications, results from technical experiments, and expert reviews demonstrate the functionality, versatility, and potential of this approach.

Invited CHI: Household Surface Interactions: Understanding User Input Preferences and Perceived Home Experiences


Garreth W. Tigwell and Michael Crabb
Households contain a variety of surfaces that are used in a number of activity contexts. As ambient technology becomes commonplace in our homes, it is only a matter of time before these surfaces become linked to computer systems for Household Surface Interaction (HSI). However, little is known about the user experience attached to HSI, and the potential acceptance of HSI within modern homes. To address this problem, we ran a mixed methods user study with 39 participants to examine HSI using nine household surfaces and five common gestures (tap, press, swipe, drag, and pinch). We found that under the right conditions, surfaces with some amount of texture can enhance HSI. Furthermore, perceived good and poor user experience varied among participants for surface type indicating individual preferences. We present findings and design considerations based on surface characteristics and the challenges that users perceive they may have with HSI within their homes.

Break

16:00 - 17:00

Opening Keynote by Hrvoje Benko

Break

17:30 - 18:00

Posters Fast-Forward

18:00 - 19:00

Posters Session 1 + Social Event (Opening Reception)

12:00 - 12:30

Posters Fast-Forward

12:30 - 13:30

Posters Session 2

Break

14:00 - 15:00

Session 3     

Chair: Jean Vanderdonckt (Université catholique de Louvain, Belgium)

Multitouch Interaction with Parallel Coordinates on Large Vertical Displays


Joshua Reibert, Patrick Riehmann and Bernd Froehlich
This paper presents a multitouch vocabulary for interacting with parallel coordinates plots on wall-sized displays. The gesture set relies on principles such as two-finger range definition, a functional distinction of background and foreground for applying the Hold-and-Move concept to wall-sized displays, and fling-based interaction for triggering and controlling long-range movements. Our implementation demonstrates that out-of-reach problems and limitations regarding multitouch technology and display size can be tackled by the coherent integration of our multitouch gestures. Expert reviews indicate that our gesture vocabulary helps to solve typical analysis tasks that require interaction beyond arm's reach, and they also show how often certain gestures were used.

Impact of Hand Used on One-Handed Back-of-Device Performance


ZHUZHI FAN and Céline Coutrix
One-handed Back-of-Device (BoD) interaction has proven to be desired, and sometimes unavoidable, with mobile touchscreen devices, for both the preferred and non-preferred hands. Although users' two hands are asymmetric, the impact of this asymmetry on the performance of mobile interaction has been little studied so far. Research on one-handed BoD interaction has mostly focused on the preferred hand, even though users cannot always avoid handling their phone with their non-preferred hand in real life. To better design one-handed BoD interaction tailored for each hand, identifying and measuring the impact of this asymmetry is critical. In this paper, we study the impact of the asymmetry between the preferred and non-preferred hands on performance when interacting with one hand on the back of a mobile touch device. Empirical data indicate that users' preferred hand performs better than their non-preferred hand in target acquisition tasks, for both speed (+10%) and accuracy (+20%). In contrast, for steering tasks, we found little difference in performance between users' preferred and non-preferred hands. These results help the HCI community design mobile interaction techniques tailored for each hand only when it is necessary. We present implications for research and design based directly on the findings of the study, in particular to reduce the impact of the asymmetry between hands and improve the performance of both hands for target acquisition.

VibHand: On-Hand Vibrotactile Interface Enhancing Non-Visual Exploration of Digital Graphics


Kaixing ZHAO, Marcos Serrano, Bernard Oriola and Christophe Jouffrais
Visual graphics are widespread in digital media and are useful in many contexts of daily life. However, access to this type of graphical information remains a challenging task for people with visual impairments. In this study, we designed and evaluated an on-hand vibrotactile interface that enables visually impaired users to explore digital graphics presented on tablets. We first conducted a set of exploratory tests with both blindfolded (BF) and visually impaired (VI) people to investigate several design factors. We then conducted a comparative experiment to verify that on-hand vibrotactile cues (indicating direction and progression) can enhance the non-visual exploration of digital graphics. The results, based on 12 VI and 12 BF participants, confirmed the usability of the technique and revealed that the visual status of the users does not affect graphics identification and comparison tasks.

Invited CHI: PolySense: Augmenting Textiles with Electrical Functionality using In-Situ Polymerization


Cedric Honnet, Hannah Perner-Wilson, Marc Teyssier, Bruno Fruchard, Jürgen Steimle, Ana C. Baptista and Paul Strohmeier
We present a method for enabling arbitrary textiles to sense pressure and deformation: in-situ polymerization supports integration of piezoresistive properties at the material level, preserving a textile's haptic and mechanical characteristics. We demonstrate how to enhance a wide set of fabrics and yarns using only readily available tools. To further support customisation by the designer, we present methods for patterning, as needed to create circuits and sensors, and demonstrate how to combine areas of different conductance in one material. Technical evaluation results demonstrate that the performance of sensors created using our method is comparable to that of off-the-shelf piezoresistive textiles. As application examples, we demonstrate rapid manufacturing of on-body interfaces, tie-dyed motion-capture clothing, and zippers that act as potentiometers.

Invited CHI: Sprayable User Interfaces: Prototyping Large-Scale Interactive Surfaces with Sensors and Displays


Michael Wessely, Ticha Sethapakdi, Carlos Castillo, Jackson C. Snowden, Ollie Hanton, Isabel P. S. Qamar, Mike Fraser, Anne Roudaut and Stefanie Mueller
We present Sprayable User Interfaces: room-sized interactive surfaces that contain sensor and display elements created by airbrushing functional inks. Since airbrushing is inherently mobile, designers can create large-scale user interfaces on complex 3D geometries where existing stationary fabrication methods fail. To enable Sprayable User Interfaces, we developed a novel design and fabrication pipeline that takes a desired user interface layout as input and automatically generates stencils for airbrushing the layout onto a physical surface. After fabricating stencils from cardboard or projecting stencils digitally, designers spray each layer with an airbrush, attach a microcontroller to the user interface, and the interface is ready to be used. Our technical evaluation shows that Sprayable User Interfaces work on various geometries and surface materials, such as porous stone and rough wood. We demonstrate our system with several application examples including interactive smart home applications on a wall and a soft leather sofa, an interactive smart city application, and interactive architecture in public office spaces.

Break

16:00 - 17:00

Session 4  

Chair: Jo Vermeulen (Autodesk Research Toronto, Canada)

Eliciting Sketched Expressions of Command Intentions in an IDE


Sigurdur Gauti Samuelsson and Matthias Book
Software engineers routinely use sketches (informal, ad-hoc drawings) to visualize and communicate complex ideas for colleagues or themselves. We hypothesize that sketching could also be used as a novel interaction modality in integrated software development environments (IDEs), allowing developers to express desired source code manipulations by sketching right on top of the IDE, rather than remembering keyboard shortcuts or using a mouse to navigate menus and dialogs. For an initial assessment of the viability of this idea, we conducted an elicitation study that prompted software developers to express a number of common IDE commands through sketches. For many of our task prompts, we observed considerable agreement in how developers would express the respective commands through sketches, suggesting that further research on a more formal sketch-based visual command language for IDEs would be worthwhile.

Understanding Gesture and Speech Multimodal Interactions for Manipulation Tasks in Augmented Reality Using Unconstrained Elicitation


Adam Sinclair Williams and Francisco Raul Ortega
This research establishes a better understanding of syntax choices in speech interactions and of how speech, gesture, and multimodal gesture-and-speech interactions are produced by users in unconstrained object manipulation environments using augmented reality. The work presents a multimodal elicitation study conducted with 24 participants. The canonical referents for translation, rotation, and scale were used along with some abstract referents (create, destroy, and select). In this study, time windows for gesture and speech multimodal interactions are developed using the start and stop times of gestures and speech as well as the stroke times for gestures. While gestures commonly precede speech by 81 ms, we find that the stroke of the gesture is commonly within 10 ms of the start of speech, indicating that the information content of a gesture and its co-occurring speech are well aligned. Lastly, the trends across the most common proposals for each modality are examined, showing that disagreement between proposals is often caused by variation in hand posture or syntax. This allows us to present aliasing recommendations to increase the percentage of users' natural interactions captured by future multimodal interactive systems.
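To make the timing analysis concrete, the sketch below (not the authors' analysis code; the timestamps are placeholders) shows how per-trial gesture-speech offsets such as the 81 ms onset lead and the stroke-to-speech alignment could be computed:

```python
# Illustrative sketch: given per-trial gesture and speech timestamps in
# milliseconds, compute how far gesture onset precedes speech onset and how
# close the gesture stroke is to the start of speech.
trials = [
    # gesture_start, stroke_start, gesture_end, speech_start, speech_end (fake data)
    {"gesture_start": 1000, "stroke_start": 1078, "gesture_end": 1600,
     "speech_start": 1081, "speech_end": 1900},
]

for t in trials:
    onset_lead = t["speech_start"] - t["gesture_start"]    # gesture precedes speech
    stroke_offset = t["stroke_start"] - t["speech_start"]  # stroke relative to speech onset
    print(f"gesture leads speech by {onset_lead} ms, "
          f"stroke is {stroke_offset:+d} ms from speech onset")
```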

Combating the Spread of Coronavirus by Modeling Fomites with Depth Cameras


Andrew D Wilson
Coronavirus is thought to spread through close contact from person to person. While the primary means of spread is believed to be inhaling respiratory droplets or aerosols, it may also be spread by touching inanimate objects such as doorknobs and handrails that have the virus on them ("fomites"). The Centers for Disease Control and Prevention (CDC) therefore recommends that individuals maintain a "social distance" of more than six feet from one another. It further notes that an individual may be infected by touching a fomite and then touching their own mouth, nose, or possibly their eyes. We propose the use of computer vision techniques to combat the spread of coronavirus by sounding an audible alarm when an individual touches their own face, or when multiple individuals come within six feet of one another or shake hands. We propose using depth cameras to track where people touch parts of their physical environment throughout the day, together with a simple model of disease transmission among potential fomites. Projection mapping techniques can be used to display likely fomites in real time, while head-worn augmented reality systems can be used by custodial staff to perform more effective cleaning of surfaces. Such techniques may find application in particularly vulnerable settings such as schools, long-term care facilities, and physician offices.
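As a rough illustration of what a "simple model of disease transmission among potential fomites" could look like, the sketch below propagates a contamination score between tracked hands and touched surfaces; the transfer and decay parameters and the update rule are assumptions for illustration, not the paper's actual model:

```python
# Toy fomite model: hands and surfaces exchange a fraction of their
# contamination on each touch event, and surface contamination decays.
TRANSFER = 0.3   # fraction exchanged per touch (assumed)
DECAY = 0.99     # per-event decay of surface contamination (assumed)

contamination = {"doorknob": 0.0, "handrail": 0.0, "hand:alice": 1.0}

def touch(hand: str, surface: str) -> None:
    """Exchange contamination between a tracked hand and a touched surface."""
    h, s = contamination[hand], contamination[surface]
    contamination[hand] = h + TRANSFER * (s - h)
    contamination[surface] = (s + TRANSFER * (h - s)) * DECAY

# Touch events as they might be produced by depth-camera tracking.
for hand, surface in [("hand:alice", "doorknob"), ("hand:bob", "doorknob")]:
    contamination.setdefault(hand, 0.0)
    touch(hand, surface)

print(contamination)  # likely-fomite scores could drive projection mapping
```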

Expanding Side Touch Input on Mobile Phones: Finger Reachability and Two-Dimensional Taps and Flicks using the Index and Thumb


Yen-Ting Yeh, Quentin Roy, Antony Albert Raj Irudayaraj and Daniel Vogel
We investigate the performance of one-handed touch input on the side of a mobile phone. A first experiment examines grip change and subjective preference when reaching for side targets using different fingers. Results show all locations can be reached with at least one finger, but the thumb and index are most preferred and require less grip change for positions along the sides. Two following experiments examine taps and flicks using the thumb and index finger in a new two-dimensional input space. A side-touch sensor is simulated with a combination of capacitive sensing and motion tracking to distinguish touches on the lower, middle, or upper edges. When tapping, index and thumb speeds are similar with thumb more accurate and comfortable, and the lower edge is most reliable with the middle edge most comfortable. When flicking with the thumb, the upper edge is fast and rated highly.

Invited CHI: Phasking on Paper: Accessing a Continuum of PHysically Assisted SKetchING


Soheil Kianzad, Yuxiang Huang, Robert Xiao and Karon E. MacLean
When sketching, we must choose between paper (expressive ease, ruler and eraser) and computational assistance (parametric support, a digital record). PHysically Assisted SKetching provides both, with a pen that displays force constraints with which the sketcher interacts as they draw on paper. Phasking provides passive, "bound" constraints (like a ruler), or actively "brings" the sketcher along a commanded path (e.g., a curve), which they can violate for creative variation. The sketcher modulates constraint strength (control sharing) by bearing down on the pen-tip. Phasking requires untethered, graded force-feedback, achieved by modifying a ballpoint drive that generates force through rolling surface contact. To understand phasking's viability, we implemented its interaction concepts, related them to sketching tasks and measured device performance. We assessed the experience of 10 sketchers, who could understand, use and delight in phasking, and who valued its control-sharing and digital twinning for productivity, creative control and learning to draw.

Break

17:30 - 19:00

Demos Session 1 + Social Event  

Paper demo: Can we Recognize 3D Trajectories as 2D Multi-stroke Gestures?


Mehdi Ousmer, Nathan Magrofuoco, Arthur Sluÿters, Paolo Roselli and Jean Vanderdonckt
While end users are today able to acquire genuine 3D gestures thanks to many input devices, they sometimes tend to capture only 3D trajectories, which are any 3D uni-path, uni-stroke gestures performed in thin air. Such trajectories, with their (x, y, z) coordinates, could be interpreted as three 2D multi-stroke gestures projected on three planes, i.e., XY, YZ, and ZX, thus making them admissible for established 2D stroke gesture recognizers. To address the question of whether they could be effectively and efficiently recognized, four 2D stroke gesture recognizers, i.e., $P, $P+, $Q, and Rubine, have been extended to consider the third dimension: $P3, $P+3, $Q3, and Rubine3D. The Rubine-Sheng extension is also included. Two new variations are also created to investigate some sampling flexibility: $F for flexible Dollar recognition and FreeHandUni. These seven recognizers are compared against three challenging datasets containing 3D trajectories, i.e., SHREC2019 and 3DTCGS in a user-independent scenario, and 3DMadLabSD in both user-dependent and user-independent scenarios. Individual recognition rates and execution times per dataset, and aggregated ones over all datasets, show performance comparable to state-of-the-art 2D multi-stroke recognizers, thus suggesting that the approach is viable. We suggest some usages of these recognizers depending on conditions. These results are not generalizable to other types of 3D mid-air gestures.

Paper demo: Towards Rapid Prototyping of Foldable Graphical User Interfaces with Flecto


Iyad Khaddam, Jean Vanderdonckt, Salah Dowaji and Donatien Grolaux
Whilst new patents and announcements advertise the technical availability of foldable displays, which can be folded to some extent, there is still a lack of fundamental and applied understanding of how to model, design, and prototype graphical user interfaces for these devices before actually implementing them. Without waiting for their off-the-shelf availability and without being tied to any physical foldable mechanism, Flecto defines a model, an associated notation, and supporting software for prototyping graphical user interfaces running on foldable displays, such as foldable smartphones or assemblies of foldable surfaces. For this purpose, the Yoshizawa-Randlett diagramming system, used to describe the folds of origami models, is extended to characterize a foldable display and define possible interactive actions based on its folding operations. A guiding method for rapidly prototyping foldable user interfaces is devised and supported by Flecto, a design environment in which foldable user interfaces are simulated in virtual reality instead of in physical reality. We report on a case study demonstrating Flecto in action and gather user feedback on Flecto using Microsoft Product Reaction Cards.

Demo: Interactions with a Hybrid Map for Navigation Information Visualization in Virtual Reality


Sabah Boustila, Mehmet Ozkan and Dominique Bechmann
This work presents an interactive navigation aid in Virtual Reality using a head-mounted display (HMD). It offers a hybrid map, which is a 2D map enriched with high-quality 3D models of landmarks. Hence, in addition to the information presented on the 2D map (street layout), we represent 3D models of buildings with distinctive shapes that could be used as reference points for map alignment or route following. We also implemented real-time interactions with the hybrid map to allow users to display additional information according to their preferences and needs. As digital maps on smartphones are commonly used in daily life, we offer interactions with the map through a smartphone interface. Interactions using Oculus hand controllers were also implemented. We assume that interactions could be more intuitive and natural with the smartphone. In the future, a study will be conducted to compare interaction performance with the map using each device and identify important information for maps.

Demo: SIERA: The Seismic Information Extended Reality Analytics tool


Bryson Lawton, Hannah Sloan, Patrick Abou Gharib, Frank Maurer, Marcelo Guarido De Andrade, Ali Fathalian and Daniel Trad
Three-dimensional seismic data is notoriously challenging to visualize effectively and analyze efficiently due to its huge volumetric format and highly complex nature. This process becomes even more complicated when attempting to analyze machine learning results produced from seismic datasets and see how such results relate back to the original data. SIERA presents a solution to this problem by visualizing seismic data in an extended reality environment as highly customizable 3D data visualizations and by allowing one, via physical movement, to easily navigate these large 3D information spaces. This allows for a more immersive and intuitive way to interact with seismic data and machine learning results, providing an improved experience over conventional analytics tools.

Demo: Virtual Reality for Understanding Multidimensional Spatiotemporal Phenomena in Neuroscience


Zahra Aminolroaya, Seher Dawar, Colin Bruce Josephson, Samuel Wiebe and Frank Maurer
Neuroscientists traditionally use information representations on 2D displays to analyze multivariate spatial and temporal datasets for an evaluation stage before neurosurgery. However, it is challenging to mentally integrate the information from these datasets. Our immersive tool aims to help neuroscientists to understand spatiotemporal phenomena inside a brain during the evaluation. We refined our tool through different phases by gathering feedback from the medical experts. Early feedback from neurologists suggests that using virtual reality for epilepsy presurgical evaluation can be useful in the future.

Paper demo: Combating the Spread of Coronavirus by Modeling Fomites with Depth Cameras


Andrew D Wilson
Coronavirus is thought to spread through close contact from person to person. While the primary means of spread is believed to be inhaling respiratory droplets or aerosols, it may also be spread by touching inanimate objects such as doorknobs and handrails that have the virus on them ("fomites"). The Centers for Disease Control and Prevention (CDC) therefore recommends that individuals maintain a "social distance" of more than six feet from one another. It further notes that an individual may be infected by touching a fomite and then touching their own mouth, nose, or possibly their eyes. We propose the use of computer vision techniques to combat the spread of coronavirus by sounding an audible alarm when an individual touches their own face, or when multiple individuals come within six feet of one another or shake hands. We propose using depth cameras to track where people touch parts of their physical environment throughout the day, together with a simple model of disease transmission among potential fomites. Projection mapping techniques can be used to display likely fomites in real time, while head-worn augmented reality systems can be used by custodial staff to perform more effective cleaning of surfaces. Such techniques may find application in particularly vulnerable settings such as schools, long-term care facilities, and physician offices.

11:00 - 12:00

Demos Session 2 + Social Event  

Demo: ShadowDancXR: Body Gesture Digitization for Low-cost Extended Reality (XR) Headsets


Difeng Yu, Weiwei Jiang, Chaofan Wang, Tilman Dingler, Eduardo Velloso and Jorge Goncalves
Low-cost, smartphone-based extended reality (XR) headsets, such as Google Cardboard, are more accessible but lack advanced features like body tracking, which limits the expressiveness of interaction. We introduce ShadowDancXR, a technique that relies on the front-facing camera of the smartphone to reconstruct users' body gestures from projected shadows, which allows for a wider range of interaction possibilities with low-cost XR. Our approach requires no embedded accessories and serves as an inexpensive, portable solution for body gesture digitization. We provide two interactive experiences to demonstrate the feasibility of our concept.

Demo: Project Esky: Enabling High Fidelity Augmented Reality on an Open Source Platform


Damien Rompapas, Daniel Flores Quiros, Charlton Rodda, Bryan Christopher Brown, Noah Benjamin Zerkin and Alvaro Cassinelli
This demonstration showcases a complete Open-Source Augmented Reality (AR) modular platform capable of high fidelity natural hand-interactions with virtual content, high field of view, and spatial mapping for environment interactions. We do this via several live desktop demonstrations. This includes a piano hooked up to a VST plugin, and a live E-Learning example with an animated car engine. Finally, included in this demonstration is a completed open source schematic, allowing anyone interested in utilizing our proposed platform to engage with high fidelity AR. It is our hope that the work described in this demo will be a stepping stone towards bringing high-fidelity AR content to researchers and commodity users alike.

Demo: Face HotPot: An Interactive Projection Device for Hot Pot


Rao Xu, Yanjun Chen and Qin Wu
Hot pot is a popular cuisine in China. It is also one of the first meals that young people choose to share together. However, during the preparation period, especially with new friends, there are often a few moments of awkward silence while waiting for the food to cook. To relieve this, we designed and developed Face HotPot, an interactive device for hot pots consisting of a personalized dipping-sauce matching system and an interactive game. While waiting for the meal, users can get a personalized hot pot dipping sauce based on their facial features and get food by participating in the interactive projection game. The whole process of Face HotPot continuously provides users with new chat topics and promotes mutual understanding of each other's personalities, thus improving their eating and social experience with hot pot. In this paper, we describe Face HotPot's design motivation and system implementation.

Demo: EDUCO: An Augmented Reality application to enrich learning experience of students for History and Civics subjects


P R Arjun and Anmol Srivastava
Educo focuses on improving the teaching of the class 10 History & Civics subject. The subject is often considered one of the most boring due to its theoretical approach and lack of interactivity. Through immersive technologies like AR and VR, engagement with the subject, as well as the number of students choosing to take it for further education, is expected to increase substantially. In this project, research was conducted in a preliminary phase followed by literature reviews, market analysis, and user surveys (N=26). The data was analysed using a card sorting technique. In the ideation phase, the user interface had two possible directions for the project, of which we chose the most affordable and versatile one. With the direction confirmed, UI explorations were done after information architecture and wireframing. The prototype was made in Adobe XD, followed by Unity for the AR and VR content. A heuristic analysis was conducted to evaluate and improve the experience as well as the visual design. Throughout the timeline of the project, new challenges were encountered in different phases, notably the COVID-19 outbreak, due to which interviews were conducted through online platforms rather than the contextual inquiry that had been carried out before the pandemic. User testing was also replaced by heuristic analysis and a retrospective think-aloud protocol.

Demo: PUMP FIT: An Augmented Reality Application Which Helps People with Their Workout More Efficiently


Ashok Binoy and Anmol Srivastava
Project Pump Fit focuses on ways to make gym exercises and workouts more effective and less straining. It uses Augmented Reality (AR), in which a virtual instructor shows how to perform an exercise and corrects the person if they are doing it wrong, giving closer monitoring to each individual and making the workout more efficient and useful. In this project, research was conducted in a preliminary phase followed by literature reviews, market analysis, and user surveys. The data was analysed using a card sorting technique. In the ideation phase, the UI had two possible directions for the project, of which I chose the most affordable and versatile one. With the direction confirmed, UI explorations were done after information architecture and wireframing. The prototype was made in Adobe XD, followed by Unity for the AR and VR content. A heuristic analysis was conducted to evaluate and improve the experience as well as the visual design. Throughout the timeline of the project, many challenges were faced, COVID-19 being the main one, due to which interviews were conducted through online platforms and user testing was replaced by heuristic analysis. There is still scope for improvement, which is noted in the conclusion.

Break

12:30 - 13:30

Session 5  

Chair: Arnaud Prouzeau (Monash University, Australia)

Observing the Effects of Predictive Features on Mobile Keyboards


Ohoud Alharbi, Wolfgang Stuerzlinger and Felix Putze
Mobile users rely on typing assistance mechanisms such as prediction and autocorrect. Previous studies on mobile keyboards showed decreased performance with heavy use of word prediction, which identifies a need for more research to better understand the effectiveness of predictive features for different users. Our work aims to better understand user interaction with autocorrections and the prediction panel while entering text, in particular when these approaches fail. We investigate this issue through a crowd-sourced study with 170 participants.

CAPath: 3D-Printed Interfaces with Conductive Points in Grid Layout to Extend Capacitive Touch Inputs


Kunihiro Kato, Kaori Ikematsu and Yoshihiro Kawahara
We propose a 3D-printed interface, CAPath, in which conductive contact points are arranged in a grid layout. This structure allows not only specific inputs (e.g., scrolling or pinching) but also general 2D inputs and gestures that fully leverage the "touch surface." We describe the requirements for fabricating the interface and implement a design system to generate 3D objects with the conductive grid structure. The CAPath interface can be utilized in uniquely shaped interfaces and opens up further application fields that cannot currently be accessed with existing passive touch extensions. Our contributions also include an evaluation of the recognition accuracy of touch operations with the implemented interfaces. The results show that our technique is a promising way to fabricate customizable touch-sensitive interactive objects.

Rethinking the Dual Gaussian Distribution Model for Predicting Touch Accuracy in On-screen-start Pointing Tasks


Shota Yamanaka and Hiroki Usuba
The dual Gaussian distribution hypothesis has been used to predict the success rate of target pointing on touchscreens. Bi and Zhai evaluated their success-rate prediction model in off-screen-start pointing tasks. However, we found that their prediction model could also be used for on-screen-start pointing tasks. We discuss the reasons why and empirically validate our hypothesis in a series of four experiments with various target sizes and distances. The prediction accuracy of Bi and Zhai's model was high in all of the experiments, with a 10-point absolute (or 14.9% relative) prediction error at worst. Also, we show that there is no clear benefit to integrating the target distance when predicting the endpoint variability and success rate.
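For readers unfamiliar with the dual Gaussian hypothesis, the sketch below illustrates the general idea only; the constants and the exact decomposition are assumptions for illustration, not Bi and Zhai's published parameters. Touch endpoints along one axis are modeled as normally distributed with a variance combining an absolute and a relative component, and the predicted success rate is the Gaussian mass falling inside the target.

```python
import math

# Illustrative dual-Gaussian-style success-rate sketch for a centered 1D target
# of width w (mm). sigma_a captures absolute precision; sigma_r is assumed to
# grow with target width. Both constants below are placeholders.
def hit_probability(w_mm: float, sigma_a_mm: float = 1.5, alpha: float = 0.17) -> float:
    sigma_r = alpha * w_mm                          # relative spread (assumed scaling)
    sigma = math.sqrt(sigma_a_mm ** 2 + sigma_r ** 2)
    # P(|X| < w/2) for X ~ N(0, sigma^2)
    return math.erf(w_mm / (2.0 * math.sqrt(2.0) * sigma))

for w in (2.0, 4.0, 8.0):                           # target widths in mm
    print(f"{w:4.1f} mm target -> predicted success {hit_probability(w):.2%}")
```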

Exploring the Design Space of Badge Based Input


Arpit Bhatia, Yajur Ahuja, Suyash Agarwal and Aman Parnami
In this paper, we explore input with wearables that can be attached to and detached from any of our regular clothes at will. These wearables do not cause any permanent change to our clothing and are suitable to be worn anywhere, making them very similar to the badges we wear. To explore this idea of non-permanent badge input, we studied various methods of fastening objects to our clothing and organised them into a design space. We leverage this synthesis, along with the literature and existing products, to present possible interaction gestures that these badge-based wearables can enable.

Towards domain-independent complex and fine-grained gesture recognition with RFID


Dian Cao, Dong Wang, Qian Zhang, Run Zhao and Yinggang Yu
Gesture recognition plays a fundamental role in emerging Human-Computer Interaction (HCI) paradigms. Recent advances in wireless sensing show promise for device-free and pervasive gesture recognition. Among them, RFID has gained much attention given its low cost, light weight, and pervasiveness, but pioneering studies on RFID sensing still suffer from two major problems when it comes to gesture recognition. The first is that they are evaluated only on simple whole-body activities, rather than complex and fine-grained hand gestures. The second is that they cannot work effectively without retraining in new domains, i.e., for new users or environments. To tackle these problems, in this paper we propose RFree-GR, a domain-independent RFID system for complex and fine-grained gesture recognition. First, we exploit signals from a multi-tag array to profile the sophisticated spatio-temporal changes of hand gestures. Then, we elaborate a Multimodal Convolutional Neural Network (MCNN) to aggregate information across signals and abstract complex spatio-temporal patterns. Furthermore, we introduce an adversarial model into our deep learning architecture to remove domain-specific information while retaining information relevant to gesture recognition. We extensively evaluate RFree-GR on 16 commonly used American Sign Language (ASL) words. The average accuracies for new users and new environments (new setup and new position) are 89.03%, 90.21% and 88.38%, respectively, significantly outperforming existing RFID-based solutions, which demonstrates the superior effectiveness and generalizability of RFree-GR.
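The adversarial component described above resembles domain-adversarial training with gradient reversal; the sketch below (PyTorch, with placeholder layer sizes and a stand-in feature extractor rather than the paper's actual MCNN) illustrates the mechanism:

```python
import torch
import torch.nn as nn

# Minimal DANN-style gradient-reversal sketch, for illustration only.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse gradients flowing back into the feature extractor so that
        # features become uninformative about the domain (user/environment).
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Flatten(), nn.Linear(12 * 128, 256), nn.ReLU())  # stand-in extractor
gesture_head = nn.Linear(256, 16)   # 16 ASL words
domain_head = nn.Linear(256, 4)     # e.g., 4 training users/environments (assumed)

x = torch.randn(8, 12, 128)         # batch of multi-tag RFID signal windows (fake data)
z = features(x)
gesture_logits = gesture_head(z)
domain_logits = domain_head(GradReverse.apply(z, 1.0))
# Total loss = gesture classification loss + domain classification loss;
# the reversed gradient pushes `features` to discard domain-specific cues.
```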

Invited CHI: KirigamiTable: Designing for Proxemic Transitions with a Shape-Changing Tabletop


Jens Emil Grønbæk, Majken Kirkegaard Rasmussen, Kim Halskov and Marianne Graves Petersen
A core challenge in tabletop research is to support transitions between individual activities and team work. Shape-changing tabletops open up new opportunities for addressing this challenge. However, interaction design for shape-changing furniture is in its early stages - so far, research has mainly focused on triggering shape-changes, and less on the actual interface transitions. We present KirigamiTable - a novel actuated shape-changing tabletop for supporting transitions in collaborative work. Our work builds on the concept of Proxemic Transitions, considering the dynamic interplay between social interactions, interactive technologies and furniture. With KirigamiTable, we demonstrate the potential of interactions for proxemic transitions that combine transformation of shape and digital contents. We highlight challenges for shape-changing tabletops: initiating shape and content transformations, cooperative control, and anticipating shape-change. To address these challenges, we propose a set of novel interaction techniques, including shape-first and content-first interaction, cooperative gestures, and physical and digital preview of shape-changes.

Break

14:00 - 15:00

Session 6     

Chair: Raimund Dachselt (University of Dresden, Germany)

Can we Recognize 3D Trajectories as 2D Multi-stroke Gestures?


Mehdi Ousmer, Nathan Magrofuoco, Arthur Sluÿters, Paolo Roselli and Jean Vanderdonckt
While end users are today able to acquire genuine 3D gestures thanks to many input devices, they sometimes tend to capture only 3D trajectories, which are any 3D uni-path, uni-stroke gestures performed in thin air. Such trajectories, with their (x, y, z) coordinates, could be interpreted as three 2D multi-stroke gestures projected on three planes, i.e., XY, YZ, and ZX, thus making them admissible for established 2D stroke gesture recognizers. To address the question of whether they could be effectively and efficiently recognized, four 2D stroke gesture recognizers, i.e., $P, $P+, $Q, and Rubine, have been extended to consider the third dimension: $P3, $P+3, $Q3, and Rubine3D. The Rubine-Sheng extension is also included. Two new variations are also created to investigate some sampling flexibility: $F for flexible Dollar recognition and FreeHandUni. These seven recognizers are compared against three challenging datasets containing 3D trajectories, i.e., SHREC2019 and 3DTCGS in a user-independent scenario, and 3DMadLabSD in both user-dependent and user-independent scenarios. Individual recognition rates and execution times per dataset, and aggregated ones over all datasets, show performance comparable to state-of-the-art 2D multi-stroke recognizers, thus suggesting that the approach is viable. We suggest some usages of these recognizers depending on conditions. These results are not generalizable to other types of 3D mid-air gestures.
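The core projection idea is simple enough to sketch: each 3D point contributes one 2D point to each of the XY, YZ, and ZX strokes, and the resulting three-stroke gesture can then be fed to an ordinary 2D multi-stroke recognizer such as $P (illustrative code, not the authors' implementation):

```python
# Project a 3D mid-air trajectory onto the XY, YZ, and ZX planes, producing a
# 2D multi-stroke gesture with exactly three strokes.
def project_to_planes(trajectory):
    """trajectory: list of (x, y, z) tuples for one uni-path mid-air gesture."""
    xy = [(x, y) for x, y, z in trajectory]
    yz = [(y, z) for x, y, z in trajectory]
    zx = [(z, x) for x, y, z in trajectory]
    return [xy, yz, zx]   # one multi-stroke gesture, three strokes

sample = [(0.0, 0.0, 0.0), (0.1, 0.2, 0.05), (0.2, 0.4, 0.1)]
for plane, stroke in zip(("XY", "YZ", "ZX"), project_to_planes(sample)):
    print(plane, stroke)
```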

Side-Crossing Menus: Enabling Large Sets of Gestures for Small Surfaces


Bruno Fruchard, Eric Lecolinet and Olivier Chapuis
Supporting many gestures on small surfaces allows users to interact remotely with complex environments such as smart homes, large remote displays, or virtual reality environments, as well as to switch between environments (e.g., an AR setup in a smart home). Providing eyes-free gestures in these contexts is important as this avoids disrupting the user's visual attention. However, very few techniques enable large sets of commands on small wearable devices that support the user's mobility, and even fewer provide eyes-free interaction. We present Side-Crossing Menus (SCM), a gestural technique enabling a large set of gestures on a smartwatch. Contrary to most gestural techniques, SCM relies on broad and shallow menus that favor small and rapid gestures. We demonstrate with a first experiment that users can efficiently perform these gestures eyes-free aided by tactile cues: 95% accuracy after 20 minutes of training on a representative set of 30 gestures among 172. We focus on the learning of SCM gestures in a second experiment and do not observe significant differences from conventional Multi-stroke Marking Menus in gesture accuracy and recall rate. As both techniques utilize contrasting menu structures, our results indicate that SCM is a compelling alternative for enhancing the input capabilities of small surfaces.

Effect of Physical Challenging Activity on Tactile Texture Recognition for Mobile Surface


Adnane GUETTAF, Yosra Rekik and Laurent Grisoni
Previous research demonstrated that users can accurately recognize tactile textures on mobile surfaces. However, the experiments were only run in lab settings, and the ability of users to recognize tactile textures in a real-world environment remains unclear. In this paper, we investigate the effects of physically challenging activities on tactile texture recognition. We consider five conditions: (1) seated in an office, (2) standing in an office, (3) seated in a tramway, (4) standing in a tramway, and (5) walking in the street. Our findings indicate that when walking, performance deteriorated compared to the remaining conditions. However, despite this deterioration, the recognition rate stayed above 82%, suggesting that tactile textures can be effectively recognized and used in different physically challenging activities, including walking.

Subtletee: Augmenting Posture Awareness for Beginner Golfers


Mikołaj P. Woźniak, Julia Dominiak, Michał Pieprzowski, Piotr Ładoński, Krzysztof Grudzień, Lars Lischke, Andrzej Romanowski and Paweł W. Woźniak
While the sport of golf offers unique opportunities for people of all ages to be active, it also requires mastering complex movements before it can be fully enjoyed. Extensive training is required to begin achieving any success, which may deter prospective players from the sport. To aid beginner golfers and enable them to practice without supervision, we designed Subtletee, a system that enhances the bodily awareness of golfers by providing feedback on the position of their feet and elbows. We first identified key problems golf beginners face through expert interviews. We then evaluated different feedback modalities for Subtletee (visual, tactile, and auditory) in an experiment with 20 beginner golfers. We found that providing proprioception feedback increased the users' performance on the driving range, with tactile feedback providing the most benefits. Our work provides insights on delivering feedback during physical activity that requires high-precision movements.

Interaction with Virtual Content using Augmented Reality: a User Study in Assembly Procedures


Bernardo Marques, João Alves, Miguel Neves, Inês Justo, André Santos, Raquel Rainho, Rafael Maio, Dany Costa, Carlos Ferreira, Paulo Dias and Beatriz Sousa Santos
Assembly procedures are a common task in several application domains. Augmented Reality (AR) has been considered to have great potential in assisting users while performing such tasks. However, poor interaction design and a lack of studies often result in complex and hard-to-use AR systems. This paper considers three different interaction methods for assembly procedures (touch gestures on a mobile device; mobile device movements; 3D controllers with a video see-through HMD). It also describes a controlled experiment aimed at comparing the acceptance and usability of these methods in an assembly task using Lego blocks. The main conclusions are that participants were faster using the 3D controllers and video see-through HMD. Participants also preferred the HMD condition, even though some reported light symptoms of nausea, sickness and/or disorientation, probably due to the limited resolution of the HMD cameras used in the video see-through setting and some latency issues. In addition, although some research claims that manipulating virtual objects with movements of the mobile device can be considered natural, this condition was the least preferred by the participants.

15:00 - 15:30

Townhall Meeting

Chair: Miguel Nacenta

Break

16:00 - 17:00

10 Years Impact + Best Paper

Break

17:30 - 18:30

Closing Keynote by Pourang Irani

18:30 - 19:00

Closing Remarks