Can We Recognize 3D Trajectories as 2D Multi-stroke Gestures?

Mehdi Ousmer, Nathan Magrofuoco, Arthur Sluÿters, Paolo Roselli and Jean Vanderdonckt
* Demonstration accompanying an accepted paper
While end users are today able to acquire genuine 3D gestures thanks to many input devices, they sometimes tend to capture only 3D trajectories, i.e., any 3D uni-path, uni-stroke gesture performed in thin air. Such trajectories, with their (x, y, z) coordinates, can be interpreted as three 2D multi-stroke gestures projected on three planes, i.e., XY, YZ, and ZX, thus making them admissible for established 2D stroke gesture recognizers. To address the question of whether they can be effectively and efficiently recognized, four 2D stroke gesture recognizers, i.e., $P, $P+, $Q, and Rubine, have been extended to consider the third dimension: $P3, $P+3, $Q3, and Rubine3D. The Rubine-Sheng extension is also included. Two new variations are also created to investigate some sampling flexibility: $F for flexible Dollar recognition and FreeHandUni. These seven recognizers are compared against three challenging datasets containing 3D trajectories, i.e., SHREC2019 and 3DTCGS in a user-independent scenario, and 3DMadLabSD in both user-dependent and user-independent scenarios. Individual recognition rates and execution times per dataset, as well as aggregated ones over all datasets, show performance comparable to state-of-the-art 2D multi-stroke recognizers, thus suggesting that the approach is viable. We suggest which recognizer to use depending on operating conditions. These results are not generalizable to other types of 3D mid-air gestures.
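To make the projection step concrete, here is a minimal Python sketch, not the authors' implementation: it splits one 3D uni-stroke trajectory into the three planar strokes that a 2D multi-stroke recognizer such as $P could consume. The function name and types are our assumptions.

```python
from typing import List, Tuple

Point3D = Tuple[float, float, float]
Point2D = Tuple[float, float]

def project_to_planes(trajectory: List[Point3D]) -> List[List[Point2D]]:
    """Project a 3D uni-stroke trajectory onto the XY, YZ, and ZX planes,
    yielding one 2D multi-stroke gesture made of three strokes."""
    xy = [(x, y) for (x, y, z) in trajectory]
    yz = [(y, z) for (x, y, z) in trajectory]
    zx = [(z, x) for (x, y, z) in trajectory]
    # feed these three strokes to a 2D multi-stroke recognizer such as $P
    return [xy, yz, zx]
```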

Combating the Spread of Coronavirus by Modeling Fomites with Depth Cameras

Andrew D Wilson
* Demonstration accompanying an accepted paper
Coronavirus is thought to spread through close contact from person to person. While it is believed that the primary means of spread is by inhaling respiratory droplets or aerosols, it may also be spread by touching inanimate objects, such as doorknobs and handrails, that have the virus on them ("fomites"). The Centers for Disease Control and Prevention (CDC) therefore recommends individuals maintain a "social distance" of more than six feet between one another. It further notes that an individual may be infected by touching a fomite and then touching their own mouth, nose, or possibly their eyes. We propose the use of computer vision techniques to combat the spread of coronavirus by sounding an audible alarm when an individual touches their own face, or when multiple individuals come within six feet of one another or shake hands. We propose using depth cameras to track where people touch parts of their physical environment throughout the day, together with a simple model of disease transmission among potential fomites. Projection mapping techniques can be used to display likely fomites in real time, while head-worn augmented reality systems can be used by custodial staff to perform more effective cleaning of surfaces. Such techniques may find application in particularly vulnerable settings such as schools, long-term care facilities and physician offices.
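The abstract does not specify the transmission model, so the following Python sketch is purely hypothetical, illustrating one simple way such a model could work: each touched surface patch carries a contamination level that decays exponentially over time and is partially exchanged with the toucher's hand on contact. All names and parameters (half_life_s, transfer) are our assumptions.

```python
import time

class FomiteModel:
    """Hypothetical sketch of a simple fomite transmission model: surface
    patches carry contamination that decays exponentially and is partially
    exchanged with a hand on each touch event reported by the depth camera."""

    def __init__(self, half_life_s=3600.0, transfer=0.5):
        self.half_life_s = half_life_s  # assumed decay half-life (seconds)
        self.transfer = transfer        # assumed fraction exchanged on touch
        self.patches = {}               # patch id -> (level, last update time)

    def _decayed(self, level, since, now):
        # exponential decay of contamination between updates
        return level * 0.5 ** ((now - since) / self.half_life_s)

    def touch(self, patch_id, hand_level, now=None):
        """Register a hand touching a patch; return the hand's new level."""
        now = time.time() if now is None else now
        level, since = self.patches.get(patch_id, (0.0, now))
        level = self._decayed(level, since, now)
        # bidirectional exchange between hand and surface on contact
        hand_after = (1 - self.transfer) * hand_level + self.transfer * level
        surface_after = (1 - self.transfer) * level + self.transfer * hand_level
        self.patches[patch_id] = (surface_after, now)
        return hand_after
```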

EDUCO: An Augmented Reality Application to Enrich the Learning Experience of Students in History and Civics

P R Arjun and Anmol Srivastava
Educo focuses on enriching the teaching of History & Civics in class 10. The subject is widely considered boring because of its theoretical approach and lack of interactivity. Through immersive technologies such as AR and VR, engagement with the subject, as well as the number of students choosing it for further education, is expected to increase substantially. In this project, research was conducted in the preliminary phase, followed by literature reviews, market analysis, and user surveys (N=26). The data was analysed using a card-sorting technique. In the ideation phase, the user interface had two possible design directions, of which we chose the most affordable and versatile one. With the direction confirmed, UI explorations were carried out after information architecture and wireframing. The prototype was built in Adobe XD, followed by Unity for the AR and VR content. Heuristic analysis was conducted to evaluate and improve both the experience and the visual design. Throughout the project, new challenges were encountered in different phases, notably the COVID-19 outbreak, because of which interviews were conducted through online platforms rather than the contextual inquiry carried out at the start, before the pandemic. User testing was likewise replaced by heuristic analysis and a retrospective think-aloud protocol.

Face HotPot: An Interactive Projection Device for Hot Pot

Rao Xu, Yanjun Chen and Qin Wu
Hot pot is a popular cuisine in China and one of the first meals that young people choose to share together. However, during the preparation period, especially among new friends, there are often moments of awkward silence while waiting for the food to cook. To relieve this, we designed and developed Face HotPot, an interactive device for hot pot consisting of a personalized dipping-sauce matching system and an interactive game. While waiting for the meal, users receive a personalized hot pot dipping sauce based on their facial features and obtain food by participating in the interactive projection game. The whole process of Face HotPot continuously provides users with new chat topics and promotes mutual understanding of each other's personalities, thus improving their eating and social experience with hot pot. In this paper, we describe Face HotPot's design motivation and system implementation.

Interactions with a Hybrid Map for Navigation Information Visualization in Virtual Reality

Sabah Boustila, Mehmet Ozkan and Dominique Bechmann
This work presents an interactive navigation aid in Virtual Reality using a head-mounted display (HMD). It offers a hybrid map, i.e., a 2D map enriched with high-quality 3D models of landmarks. Hence, in addition to the information presented on the 2D map (street layout), we represent 3D models of buildings with distinctive shapes that can serve as reference points for map alignment or route following. We also implemented real-time interactions with the hybrid map to let users display additional information according to their preferences and needs. As digital maps on smartphones are commonly used in daily life, we offer interactions with the map through a smartphone interface; interactions using Oculus Hand controllers were also implemented. We hypothesize that interactions are more intuitive and natural with the smartphone. In the future, a study will be conducted to compare interaction performance with the map using each device and to identify the information most important for such maps.

Project Esky: Enabling High Fidelity Augmented Reality on an Open Source Platform

Damien Rompapas, Daniel Flores Quiros, Charlton Rodda, Bryan Christopher Brown, Noah Benjamin Zerkin and Alvaro Cassinelli
This demonstration showcases a complete open-source Augmented Reality (AR) modular platform capable of high-fidelity natural hand interactions with virtual content, a high field of view, and spatial mapping for environment interactions. We show this via several live desktop demonstrations, including a piano hooked up to a VST plugin and a live e-learning example with an animated car engine. Finally, the demonstration includes a complete open-source schematic, allowing anyone interested in our platform to engage with high-fidelity AR. It is our hope that the work described in this demo will be a stepping stone towards bringing high-fidelity AR content to researchers and commodity users alike.

PUMP FIT: An Augmented Reality Application That Helps People Work Out More Efficiently

Ashok Binoy and Anmol Srivastava
Project Pump Fit focuses on ways to make gym exercises and workouts more effective and less strenuous. Using augmented reality (AR), a virtual instructor shows how to perform each exercise and corrects the user when an exercise is performed incorrectly, providing closer monitoring of each individual and making workouts more efficient and useful. In this project, research was conducted in the preliminary phase, followed by literature reviews, market analysis, and user surveys. The data was analysed using a card-sorting technique. In the ideation phase, the UI had two possible design directions, of which I chose the most affordable and versatile one. With the direction confirmed, UI explorations were carried out after information architecture and wireframing. The prototype was built in Adobe XD, followed by Unity for the AR and VR content. Heuristic analysis was conducted to evaluate and improve both the experience and the visual design. Throughout the project, many challenges were faced, COVID-19 chief among them, because of which interviews were conducted through online platforms and user testing was replaced by heuristic analysis. Remaining scope for improvement is noted in the conclusion.

ShadowDancXR: Body Gesture Digitization for Low-cost Extended Reality (XR) Headsets

Difeng Yu, Weiwei Jiang, Chaofan Wang, Tilman Dingler, Eduardo Velloso and Jorge Goncalves
Low-cost, smartphone-based extended reality (XR) headsets, such as Google Cardboard, are more accessible but lack advanced features like body tracking, which limits the expressiveness of interaction. We introduce ShadowDancXR, a technique that relies on the smartphone's front-facing camera to reconstruct users' body gestures from projected shadows, allowing a wider range of interaction possibilities with low-cost XR. Our approach requires no embedded accessories and serves as an inexpensive, portable solution for body gesture digitization. We provide two interactive experiences to demonstrate the feasibility of our concept.
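The abstract does not detail the reconstruction pipeline; as an illustration only, a plausible first step is to segment the dark shadow silhouette from the front-camera frame before estimating the gesture, as in the following OpenCV sketch (function name and threshold value are our assumptions, written for OpenCV 4).

```python
import cv2
import numpy as np

def extract_shadow_silhouette(frame: np.ndarray, thresh: int = 60):
    """Hypothetical first step: segment the dark shadow region in a
    front-camera frame and return the largest contour as the silhouette."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # shadow pixels are darker than the lit wall, so invert-threshold them
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)
```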

SIERA: The Seismic Information Extended Reality Analytics tool

Bryson Lawton, Hannah Sloan, Patrick Abou Gharib, Frank Maurer, Marcelo Guarido De Andrade, Ali Fathalian and Daniel Trad
Three-dimensional seismic data is notoriously challenging to visualize effectively and analyze efficiently due to its huge volumetric format and highly complex nature. The process becomes even more complicated when attempting to analyze machine learning results produced from seismic datasets and to see how such results relate back to the original data. SIERA addresses this problem by visualizing seismic data in an extended reality environment as highly customizable 3D data visualizations and by letting users navigate these large 3D information spaces easily through physical movement. This allows for a more immersive and intuitive way to interact with seismic data and machine learning results, providing an improved experience over conventional analytics tools.

Towards Rapid Prototyping of Foldable Graphical User Interfaces with Flecto

Iyad Khaddam, Jean Vanderdonckt, Salah Dowaji and Donatien Grolaux
* Demonstration accompanying an accepted paper
Whilst new patents and announcements advertise the technical availability of foldable displays, which can be folded to some extent, there is still a lack of fundamental and applied understanding of how to model, design, and prototype graphical user interfaces for these devices before actually implementing them. Without waiting for their off-the-shelf availability and without being tied to any physical folding mechanism, Flecto defines a model, an associated notation, and supporting software for prototyping graphical user interfaces running on foldable displays, such as foldable smartphones or assemblies of foldable surfaces. For this purpose, the Yoshizawa-Randlett diagramming system, used to describe the folds of origami models, is extended to characterize a foldable display and to define possible interactive actions based on its folding operations. A guiding method for rapidly prototyping foldable user interfaces is devised and supported by Flecto, a design environment where foldable user interfaces are simulated in virtual reality instead of physical reality. We report on a case study to demonstrate Flecto in action and gather feedback from users on Flecto using Microsoft Product Reaction Cards.
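To illustrate how such a notation might be carried into code, the following sketch (our hypothetical names and structure, not Flecto's actual model) represents a foldable display as rigid segments joined by mountain or valley folds, in the spirit of the Yoshizawa-Randlett system.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class FoldType(Enum):
    MOUNTAIN = "mountain"  # dash-dot-dot line in Yoshizawa-Randlett diagrams
    VALLEY = "valley"      # dashed line

@dataclass
class Fold:
    fold_type: FoldType
    angle_deg: float       # current fold angle; 0 means unfolded (flat)

@dataclass
class FoldableDisplay:
    """Hypothetical model: a strip of rigid display segments (widths in mm)
    joined by folds, each of which can trigger an interactive action."""
    segment_widths_mm: List[float]
    folds: List[Fold] = field(default_factory=list)

    def is_flat(self) -> bool:
        # the display behaves as one continuous surface when all folds are flat
        return all(abs(f.angle_deg) < 1e-6 for f in self.folds)
```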

Virtual Reality for Understanding Multidimensional Spatiotemporal Phenomena in Neuroscience

Zahra Aminolroaya, Seher Dawar, Colin Bruce Josephson, Samuel Wiebe and Frank Maurer
Neuroscientists traditionally use information representations on 2D displays to analyze multivariate spatial and temporal datasets during the evaluation stage before neurosurgery. However, it is challenging to mentally integrate the information from these datasets. Our immersive tool aims to help neuroscientists understand spatiotemporal phenomena inside the brain during this evaluation. We refined the tool through several phases by gathering feedback from medical experts. Early feedback from neurologists suggests that using virtual reality for epilepsy presurgical evaluation can be useful in the future.