We presented several distinguished contribution awards to highlight important work as well as promising future directions in the area of tabletops and interactive surfaces. In addition, the audience voted for the best poster and best demonstration. Congratulations to all winners!

  ACM Europe Council Best Paper Award

Collaborative Tabletops for Blind People: The Effect of Auditory Design on Workspace Awareness

Daniel Mendes, Sofia Reis, João Guerreiro and Hugo Nicolau
Interactive tabletops offer unique collaborative features, particularly their size, geometry, orientation and, more importantly, the ability to support multi-user interaction. Although previous efforts have been made to make interactive tabletops accessible to blind people, their potential for collaborative activities remains unexplored. In this paper, we present the design and implementation of a multi-user auditory display for interactive tabletops, supporting three feedback modes that vary in how much information about the partners' actions is conveyed. We conducted a user study with ten blind people to assess the effect of feedback modes on workspace awareness and task performance. Furthermore, we analyze the type of awareness information exchanged and the emergent collaboration strategies. Finally, we provide implications for the design of future tabletop collaborative tools for blind users.

 Best Paper Honorable Mention Awards

Exploring User Defined Gestures for Ear-Based Interactions

Yu-Chun Chen, Chia-Ying Liao, Shuo-wen Hsu and Da-Yuan Huang
The human ear is highly sensitive and accessible, making it especially suitable as an interface for interacting with smart earpieces or augmented glasses. However, previous work on ear-based input has mainly addressed gesture sensing technology and researcher-designed gestures. This paper aims to bring a deeper understanding of gesture design. To that end, we conducted a user elicitation study with 28 participants, each of whom designed gestures for 31 smart device-related tasks, resulting in a total of 868 gestures. On the basis of these gestures, we compiled a taxonomy and distilled the considerations underlying the participants' designs, which also offer insights into their design rationales and preferences. Based on these results, we propose a set of user-defined gestures and share interesting findings. We hope this work sheds light not only on sensing technologies for ear-based input, but also on the interface design of future wearable interfaces.

Towards Rapid Prototyping of Foldable Graphical User Interfaces with Flecto

Iyad Khaddam, Jean Vanderdonckt, Salah Dowaji and Donatien Grolaux
* Work demonstrated during the conference
Whilst new patents and announcements advertise the technical availability of foldable displays, which can be folded to some extent, there is still a lack of fundamental and applied understanding of how to model, design, and prototype graphical user interfaces for these devices before actually implementing them. Without waiting for their off-the-shelf availability and without being tied to any physical foldable mechanism, Flecto defines a model, an associated notation, and supporting software for prototyping graphical user interfaces running on foldable displays, such as foldable smartphones or assemblies of foldable surfaces. For this purpose, the Yoshizawa-Randlett diagramming system, used to describe the folds of origami models, is extended to characterize a foldable display and to define possible interactive actions based on its folding operations. A guiding method for rapidly prototyping foldable user interfaces is devised and supported by Flecto, a design environment where foldable user interfaces are simulated in virtual reality instead of physical reality. We report on a case study to demonstrate Flecto in action and gather user feedback on Flecto using Microsoft Product Reaction Cards.

Combating the Spread of Coronavirus by Modeling Fomites with Depth Cameras

Andrew D Wilson
* Work demonstrated during the conference
Coronavirus is thought to spread through close contact from person to person. While the primary means of spread is believed to be the inhalation of respiratory droplets or aerosols, it may also be spread by touching inanimate objects such as doorknobs and handrails that have the virus on them ("fomites"). The Centers for Disease Control and Prevention (CDC) therefore recommends that individuals maintain a "social distance" of more than six feet between one another. It further notes that an individual may be infected by touching a fomite and then touching their own mouth, nose, or possibly their eyes. We propose the use of computer vision techniques to combat the spread of coronavirus by sounding an audible alarm when an individual touches their own face, or when multiple individuals come within six feet of one another or shake hands. We propose using depth cameras to track where people touch parts of their physical environment throughout the day, together with a simple model of disease transmission among potential fomites. Projection mapping techniques can be used to display likely fomites in real time, while head-worn augmented reality systems can help custodial staff clean surfaces more effectively. Such techniques may find application in particularly vulnerable settings such as schools, long-term care facilities, and physician offices.

VibHand: On-Hand Vibrotactile Interface Enhancing Non-Visual Exploration of Digital Graphics

Kaixing Zhao, Marcos Serrano, Bernard Oriola and Christophe Jouffrais
Visual graphics are widespread in digital media and are useful in many contexts of daily life. However, access to this type of graphical information remains a challenging task for people with visual impairments. In this study, we designed and evaluated an on-hand vibrotactile interface that enables visually impaired users to explore digital graphics presented on tablets. We first conducted a set of exploratory tests with both blindfolded (BF) and visually impaired (VI) people to investigate several design factors. We then conducted a comparative experiment to verify that on-hand vibrotactile cues (indicating direction and progression) can enhance the non-visual exploration of digital graphics. The results, based on 12 VI and 12 BF participants, confirmed the usability of the technique and revealed that the visual status of the users does not affect graphics identification and comparison tasks.

  People's Choice Best Demo Award

Combating the Spread of Coronavirus by Modeling Fomites with Depth Cameras

Andrew D Wilson
* Demonstration accompanying an accepted paper
Coronavirus is thought to spread through close contact from person to person. While the primary means of spread is believed to be the inhalation of respiratory droplets or aerosols, it may also be spread by touching inanimate objects such as doorknobs and handrails that have the virus on them ("fomites"). The Centers for Disease Control and Prevention (CDC) therefore recommends that individuals maintain a "social distance" of more than six feet between one another. It further notes that an individual may be infected by touching a fomite and then touching their own mouth, nose, or possibly their eyes. We propose the use of computer vision techniques to combat the spread of coronavirus by sounding an audible alarm when an individual touches their own face, or when multiple individuals come within six feet of one another or shake hands. We propose using depth cameras to track where people touch parts of their physical environment throughout the day, together with a simple model of disease transmission among potential fomites. Projection mapping techniques can be used to display likely fomites in real time, while head-worn augmented reality systems can help custodial staff clean surfaces more effectively. Such techniques may find application in particularly vulnerable settings such as schools, long-term care facilities, and physician offices.

  Best Demo Award

Project Esky: Enabling High Fidelity Augmented Reality on an Open Source Platform

Damien Rompapas, Daniel Flores Quiros, Charlton Rodda, Bryan Christopher Brown, Noah Benjamin Zerkin and Alvaro Cassinelli
This demonstration showcases a complete open-source modular Augmented Reality (AR) platform capable of high-fidelity natural hand interactions with virtual content, a wide field of view, and spatial mapping for environment interactions. We do this via several live desktop demonstrations, including a piano hooked up to a VST plugin and a live e-learning example with an animated car engine. Finally, this demonstration includes a complete open-source schematic, allowing anyone interested in our proposed platform to engage with high-fidelity AR. It is our hope that the work described in this demo will be a stepping stone towards bringing high-fidelity AR content to researchers and everyday users alike.

 Best Poster Award

Smart Elevator Hall: Prototype of In-building Guiding Using Interactive Floor Display

Takashi Matsumoto, Motoaki Yamazaki, Yuya Igarashi, Ryotaro Yoshida and Michihito Shiraishi
The authors are designing a guidance system based on an interactive floor display. This paper introduces a prototype of a Smart Elevator Hall as an example of such an interactive floor. User scenarios were developed to design the detailed functions of the elevator hall, and the prototype was implemented with a sensor and LED panels. The system was tested with scenario skits performed by designers to evaluate the concept's potential and identify issues for further development.

  10 Years Impact Award

Code space: touch + air gesture hybrid interactions for supporting developer meetings

Andrew Bragdon, Rob DeLine, Ken Hinckley and Meredith Ringel Morris
We present Code Space, a system that contributes touch + air gesture hybrid interactions to support co-located, small-group developer meetings by democratizing access, control, and sharing of information across multiple personal devices and public displays. Our system uses a combination of a shared multi-touch screen, mobile touch devices, and Microsoft Kinect sensors. We describe cross-device interactions, which combine in-air pointing for social disclosure of commands, targeting, and mode setting with touch for command execution and precise gestures. In a formative study, professional developers were positive about the interaction design, and most felt that pointing with hands or devices and forming hand postures are socially acceptable. Users also felt that the techniques adequately disclosed who was interacting and that existing social protocols would help dictate most permissions, but also felt that our lightweight permission feature helped presenters manage incoming content.