The vision of always-on Mixed Reality (MR) interfaces that can be used continuously for an entire day depends on solving many difficult problems, including display technology, comfort, computing power, batteries, localization, tracking, and spatial understanding. However, solving all of those will not bring us to a truly useful experience unless we also solve the fundamental problem of how to interact effectively in MR. I believe that the solution to the MR interaction problem requires combining approaches from interaction design, perceptual science, and machine learning to yield truly novel and effective MR input and interactions. Such interfaces will need to be adaptive to the user's context, believable, and computational in nature. We are at the exciting point in the technology development curve where there are still few universally accepted standards for MR input, which leaves a wealth of opportunities for both researchers and practitioners.
Our reliance on computing technologies for decision-making and sense-making has undergone a transformation: it no longer occurs only in well-defined settings (such as on traditional PCs) but takes place in situ, supporting everyday activities. In-situ user interfaces have emerged largely from improved mobile and mixed-reality technologies and rely on mid-air input as the basis for interaction. Designers of such on-the-go user interfaces often place an emphasis on enhancing end-user performance. However, we argue that supporting end-user comfort is just as critical if in-situ interfaces are to become commonplace among the general population. In this talk, I will present some of our work on various aspects of end-user comfort for in-situ interactions. I will present models for estimating the arm fatigue induced by mid-air input and showcase interactive systems that have been specifically designed to circumvent such fatigue. I will also discuss elements of social comfort and present a framework for including such factors in the design process of end-user interfaces. I will end my presentation with a discussion of some of the open problems in this space.
Mobile devices are becoming more and more powerful; however, interfaces for exploring multivariate network data on the go remain rare and challenging to realize. In this work, we present considerations and concepts for enabling such exploration on mobile devices. Specifically, we argue that an interface combining a force-directed node-link representation with additional views for investigating multivariate aspects and with filtering functionality is a promising direction. Such an interface could further be extended by incorporating multiple mobile devices in parallel. As part of our ongoing investigations, we have realized an early prototype indicating the general feasibility of our concepts.
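To make the representation concrete, the following is a minimal sketch of a force-directed node-link layout of the kind such an interface could build on (Fruchterman-Reingold style); the graph, constants, and iteration count are illustrative and not taken from the prototype.

```python
import numpy as np

def force_directed_layout(edges, n_nodes, iters=200, k=0.2, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n_nodes, 2))
    for _ in range(iters):
        delta = pos[:, None, :] - pos[None, :, :]        # pairwise offsets
        dist = np.linalg.norm(delta, axis=-1) + 1e-9
        # Repulsion: every pair of nodes pushes apart.
        disp = ((k ** 2 / dist ** 2)[..., None] * delta / dist[..., None]).sum(axis=1)
        # Attraction: connected nodes pull together like springs.
        for u, v in edges:
            d = pos[u] - pos[v]
            f = (np.linalg.norm(d) / k) * d
            disp[u] -= f
            disp[v] += f
        pos += np.clip(disp, -0.05, 0.05)                # capped step for stability
    return pos

print(force_directed_layout([(0, 1), (1, 2), (2, 0), (2, 3)], 4))
```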
When controlling a mobile robot from a third-person perspective, a change in the robot's pose can make its direction of movement differ from the direction of the user's controller input, which confuses novice users. In this study, we propose a control method in which the direction of operation and the direction of movement always coincide, and we compare it with a conventional method. We found that people who take longer to control a mobile robot can control it better using the proposed method.
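One plausible realization of such a pose-independent mapping is sketched below under our own assumptions (a differential-drive robot and hand-picked gains; the paper's actual controller is not reproduced): the joystick vector, given in the user's frame, is compared against the robot's heading, and the robot turns toward and moves in the pushed direction.

```python
import math

def user_frame_drive(jx, jy, heading, k_lin=0.5, k_ang=2.0):
    """Map a user/world-frame joystick vector (jx, jy) to differential-drive
    commands for a robot whose current heading is given in radians."""
    mag = math.hypot(jx, jy)
    if mag < 1e-3:
        return 0.0, 0.0                       # dead zone: no input
    desired = math.atan2(jy, jx)              # direction the user pushed
    err = math.atan2(math.sin(desired - heading),
                     math.cos(desired - heading))   # wrap to [-pi, pi]
    linear = k_lin * mag * math.cos(err)      # slow down while misaligned
    angular = k_ang * err                     # turn toward the pushed direction
    return linear, angular

# Pushing "east" produces the same motion regardless of the robot's pose:
print(user_frame_drive(1.0, 0.0, heading=math.pi / 2))
```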
The authors are designing a guidance system using an interactive floor display. This paper introduces a prototype of a Smart Elevator Hall as an example of such an interactive floor. User scenarios were developed to design the detailed functions of the elevator hall, and the prototype was implemented with a sensor and LED panels. The system was tested with scenario skits performed by designers to evaluate the concept's potential and identify issues for further development.
The digitization of the Earth's mountains and terrains has made it possible to plan journeys, manage natural resources, and learn about the Earth from the comfort of our homes. We aim to develop new interactions with digital mountains through novel interfaces: a 3D-printed representation of a mountain, an immersive virtual reality visualization, and two different touch interfaces for the immersive visualization, namely a 3D-printed mountain with touch sensors and a multitouch tablet. We show how we built these prototypes from digital data retrieved from a map provider and which interactions are possible with each interaction device. We also explain how we designed and conducted our evaluation.
This paper describes the need for, design of, and evaluation of a mobile AR welding application, highlighting the potential of mobile AR interactions.
There are 36 million unemployed youth in India [6]. One of the reasons for unemployment among youth is the significant disconnect between school education and the opportunities, skills, and exposure necessary to achieve full potential and earn a livelihood [6]. Various NGOs have taken initiatives to introduce vocational training as part of mainstream education to create employment opportunities for students from low-income backgrounds upon completion of schooling. The secondary schools in which such training is imparted often face a lack of space and infrastructure, regulatory restrictions, and safety concerns. The objective of this project was to use Augmented Reality (AR) to overcome the spatial and safety barriers that affect the efficiency of these vocational training programs. An interactive learning module was designed using marker-based mobile AR to help impart both the knowledge and the skills required for welding. This learning module was evaluated with expert welding instructors and accordingly proposed for use in government schools providing vocational training in welding. This project opens up the possibility of designing safe and accessible ways to develop vocational skills.
Eye contact disorder is one of the typical characteristics of children with Autism Spectrum Disorder (ASD), and it is directly related to their deficiencies in social interaction. In this poster, we present MaskMe, a collaborative game to help children with ASD make eye contact. The game sets several rules that encourage children with ASD to make eye contact with each other; masks cover the children's faces to reduce their stress when making eye contact; and a device attached to each mask detects whether eye contact is occurring. We conducted a pilot user study of MaskMe with 18 children with autism, and the results show that our game was effective in encouraging them to make more eye contact with each other.
In this paper, we present a demonstrator that combines elements of physical prototyping and Virtual Reality. Our goal is to integrate VR into the design process of workplaces, not as a replacement for physical prototyping, but as a complement to it. We call this approach Tangible Virtual Reality. We discuss our own background in embodied interaction and present the context for our research: an industrial work cell for human-robot interaction, proposed by Audi Brussels, Kuka Belgium and FRS Robotics. We explain and illustrate how we used 3D CAD geometry to create a VR model and a physical prototype, and how we mapped them onto one another. The result is a demonstrator that offers a VR experience, enhanced with real tactile information, and channeled by the natural limitations of the physical world. We suggest where the benefits of this physical/virtual design approach lie, and discuss how it could be operationalized in workplace design practice.
There is growing interest in the ability to detect where people are looking in real time to support learning, collaboration, and efficiency. Here we present an overview of computational methods for accurately classifying the area of visual attention on a horizontal surface that we use to represent an interactive display (i.e., a tabletop). We propose a new model that uses a neural network to estimate the area of visual attention, and we closely examine the factors that contribute to the model's accuracy. Additionally, we discuss the use of this technique to model joint visual attention in collaboration. We achieved a mean classification accuracy of 75.75% (SD = 0.14) when the model was trained on data from four participants and tested on the fifth. We also achieved a mean classification accuracy of 98.8% (SD = 0.02) when different amounts of the overall data were used to test the model.
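The leave-one-participant-out protocol above can be sketched as follows, with a small neural network standing in for the paper's model; the feature layout, region count, network size, and the synthetic data are our assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_regions = 4                                    # e.g., tabletop quadrants
# Synthetic stand-in data: 5 participants, 200 samples each,
# 6-D head-pose/gaze features with a region label per sample.
participants = {p: (rng.normal(size=(200, 6)),
                    rng.integers(0, n_regions, 200))
                for p in range(5)}

held_out = 4                                     # test on the fifth participant
X_train = np.vstack([participants[p][0] for p in range(5) if p != held_out])
y_train = np.concatenate([participants[p][1] for p in range(5) if p != held_out])
X_test, y_test = participants[held_out]

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```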
Smartwatches and smartphones have their own unique advantages, and their combination allows the user to quickly access information. Our proposal, Backhand Display, is a wearable device for the back of the hand: a single device that can replace both the smartwatch and the smartphone. We evaluate its fundamental performance, including text input, vibration notification, and the response time between notification and reply. We confirm that Backhand Display offers faster text input than a smartwatch, greater notification capability than a smartphone, and quicker response to notifications than either of them. These benefits are supported by the results of a user questionnaire. Though several limitations and issues remain to be resolved, these results show the potential of Backhand Display, a wearable device on the back of the hand.
TapStr is a single-row reduced-Qwerty keyboard that enables text entry on smartphones through taps and directional strokes. It is an unambiguous keyboard and thus does not rely on a statistical decoder to function. It supports the entry of uppercase and lowercase letters, numeric characters, symbols, and emojis. Its purpose is to provide an economical alternative to virtual Qwerty when saving touchscreen real estate matters more than entry speed, and a convenient alternative when users do not want to repeatedly switch between multiple layouts for mixed-case alphanumeric text and symbols (such as a password). In a short-term study, TapStr yielded about 11 WPM with plain phrases and 8 WPM with phrases containing uppercase letters, numbers, and symbols.
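Because the keyboard is unambiguous, decoding reduces to a table lookup from (key, gesture) pairs to characters, with no language model involved; the toy mapping below is invented for illustration and does not reproduce TapStr's actual layout.

```python
GESTURES = ("tap", "up", "down", "left", "right")

# One hypothetical key from a single-row layout, hosting five characters:
LAYOUT = {
    ("abcde", "tap"): "a",
    ("abcde", "up"): "b",
    ("abcde", "down"): "c",
    ("abcde", "left"): "d",
    ("abcde", "right"): "e",
}

def decode(key, gesture):
    """Return the character for an unambiguous (key, gesture) input."""
    assert gesture in GESTURES
    return LAYOUT[(key, gesture)]

print(decode("abcde", "down"))  # -> "c"
```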
Augmented Reality (AR) has been gaining traction lately, with a plethora of AR-enabled devices on the market ranging from smartphones to commercial Head-Mounted Displays (HMDs). With the rise of AR comes the need for more natural and intuitive forms of interaction with virtual content that do not interfere with the user experience. Advanced hardware such as the Leap Motion Controller makes this possible by employing sensors that can track hands accurately in real time. This paper presents a framework, IIMR (Intangible Interactions in Mixed Reality), that operates over a network and allows for mid-air interactions using just the bare hands. The aim is to provide researchers and developers with an inexpensive and accessible solution for quickly developing and prototyping hand interactions in a mixed reality environment. The system consists of three components: a smartphone, a Google Cardboard-like HMD, and a Leap Motion Controller. Diverse applications were developed to evaluate the framework, and the results of qualitative usability studies are outlined.
Bimanual interactions using pen and touch are natural to humans and have been explored and validated in previous research. However, most previous work has been limited to using the Cartesian coordinates of the fingers and the pen tip. In this work, we go further by exploring additional pen data, such as pressure and tilt, combined with multi-touch input. We apply this combination to two data visualizations: a Bubble Chart, and a Linear Regression combined with a Radar chart. We performed a preliminary user study comparing pen-and-touch interactions with mouse input. We found that pen-and-touch interactions can take less time when looking for specific values in the Bubble Chart, whereas the mouse can be faster when looking for a specific relation in the Linear Regression and Radar.
This demonstration showcases a complete open-source Augmented Reality (AR) modular platform capable of high-fidelity natural hand interactions with virtual content, a wide field of view, and spatial mapping for environment interactions. We do this via several live desktop demonstrations. Also included in this demonstration is a complete open-source schematic, allowing anyone interested in our proposed platform to engage with high-fidelity AR. It is our hope that the work described in this demo will be a stepping stone towards bringing high-fidelity AR content to researchers and commodity users alike.
Hot pot is a popular cuisine in China and one of the first choices when young people eat out together. However, during the preparation period, especially among new friends, there are often moments of awkward silence while waiting for the food to cook. To relieve this, we designed and developed Face HotPot, an interactive device for hot pots consisting of a personalized dipping-sauce matching system and an interactive game. While waiting for the meal, users receive a personalized hot pot dipping sauce based on their facial features and obtain food by participating in the interactive projection game. The whole process continuously provides users with new conversation topics and promotes mutual understanding of each other's personalities, thus enriching their eating and social experience with hot pot. In this paper, we describe the design motivation and system implementation of Face HotPot.
This work presents an interactive navigation aid for Virtual Reality using a head-mounted display (HMD). It offers a hybrid map: a 2D map enriched with high-quality 3D models of landmarks. Hence, in addition to the information presented on the 2D map (the street layout), we represent 3D models of distinctively shaped buildings that can serve as reference points for map alignment or route following. We also implemented real-time interactions with the hybrid map that allow users to display additional information according to their preferences and needs. As digital maps on smartphones are commonly used in daily life, we offer interactions with the map through a smartphone interface; interactions using Oculus hand controllers were implemented as well. We hypothesize that interactions will be more intuitive and natural with the smartphone. In future work, a study will be conducted to compare interaction performance with the map using each device and to identify relevant information for maps.
Three-dimensional seismic data is notoriously challenging to visualize effectively and analyze efficiently due to its huge volumetric format and highly complex nature. The process becomes even more complicated when attempting to analyze machine learning results produced from seismic datasets and to see how such results relate back to the original data. SIERA addresses this problem by visualizing seismic data in an extended reality environment as highly customizable 3D data visualizations and by letting users navigate these large 3D information spaces through physical movement. This allows for a more immersive and intuitive way to interact with seismic data and machine learning results, providing an improved experience over conventional analytics tools.
Low-cost, smartphone-based extended reality (XR) headsets, such as Google Cardboard, are more accessible but lack advanced features like body tracking, which limits the expressiveness of interaction. We introduce ShadowDancXR, a technique that uses the front-facing camera of the smartphone to reconstruct users' body gestures from their projected shadows, allowing a wider range of interaction possibilities with low-cost XR. Our approach requires no embedded accessories and serves as an inexpensive, portable solution for digitizing body gestures. We provide two interactive experiences to demonstrate the feasibility of our concept.
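As a rough illustration of the underlying idea (not ShadowDancXR's actual pipeline), a shadow silhouette can be extracted from a camera frame by thresholding for dark regions and keeping the largest blob; the contour can then serve as a gesture feature. The sketch assumes OpenCV 4 and a bright background.

```python
import cv2
import numpy as np

def shadow_silhouette(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Shadows are dark: invert so the shadow becomes the foreground,
    # then let Otsu pick the threshold automatically.
    _, mask = cv2.threshold(255 - gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)   # largest blob = body shadow

frame = np.full((240, 320, 3), 200, np.uint8)         # bright synthetic frame
cv2.circle(frame, (160, 120), 40, (30, 30, 30), -1)   # fake dark shadow
print(shadow_silhouette(frame) is not None)
```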
Project Educo focuses on improving the teaching of History & Civics in Class 10. The subject is considered one of the most boring because of its theoretical approach and lack of interactivity. Through immersive technologies such as AR and VR, we expect both engagement with the subject and the number of students choosing it for further education to increase substantially. In this project, research was conducted in a preliminary phase comprising literature reviews, market analysis, and user surveys (N=26). The data was analysed using a card-sorting technique. In the ideation phase, the user interface had two possible directions, of which we chose the most affordable and versatile one. With the direction confirmed, UI explorations were carried out after the information architecture and wireframing stages. The prototype was made in Adobe XD, followed by Unity for the AR and VR content. A heuristic analysis was conducted to evaluate and improve both the experience and the visual design.
Neuroscientists traditionally use information representations on 2D displays to analyze multivariate spatial and temporal datasets during the evaluation stage before neurosurgery. However, it is challenging to mentally integrate the information across these datasets. Our immersive tool aims to help neuroscientists understand spatiotemporal phenomena inside the brain during this evaluation. We refined the tool through several phases by gathering feedback from medical experts. Early feedback from neurologists suggests that virtual reality can become a useful aid in epilepsy presurgical evaluation.
Project Pump Fit focuses on ways to make gym exercises and workouts more effective and less strenuous. Using Augmented Reality (AR), a virtual instructor shows how to perform each exercise and corrects the user when they perform it incorrectly, providing closer monitoring of each individual and making workouts more efficient and useful.
In this project, research was conducted in a preliminary phase comprising literature reviews, market analysis, and user surveys. The data was analysed using a card-sorting technique. In the ideation phase, the UI had two possible directions, of which I chose the most affordable and versatile one. With the direction confirmed, UI explorations were carried out after the information architecture and wireframing stages. The prototype was made in Adobe XD, followed by Unity for the AR and VR content. A heuristic analysis was conducted to evaluate and improve both the experience and the visual design. Throughout the project, many challenges were faced, COVID-19 chief among them; as a result, interviews were conducted on online platforms and user testing was replaced by heuristic analysis. There is always scope for improvement, which is noted in the conclusion.
Spherical displays are increasingly being used to present global data visualizations in support of Earth science education in informal learning settings such as museums. However, most spherical displays deployed to date support either no interactivity or interaction only via an external touchscreen. Only recently have spherical displays become commercially available that support multi-touch gesture interaction. Family groups visiting museums can gather around these displays to collaboratively explore global science datasets simultaneously. However, the use of multi-touch spherical displays as an educational tool is relatively new, and how to design interactions for spherical touch surfaces to support collaborative learning has not yet been explored in depth. Through my thesis work, I will explore how to design gestural interactions for multi-touch spherical displays to support collaborative learning from interactive science data visualizations in museums.
With the widespread adoption of digital design and fabrication technologies, architects have been looking for ways to connect virtual and physical design platforms. Previous research offers only a limited perspective on the feedback exchange between physical and digital modes of design. My doctoral research looks into the changing role of physical methods amid the ongoing digitalization of architecture. The research has been progressing through research prototypes and interviews. The prototypes produced so far connect tangible outcomes with the theoretical framework established during the first year of my studies. By introducing a feedback-oriented design approach linking physical and digital tools, this work aims to enhance early design processes in architecture and facilitate creative programmatic solutions for architects.
In my PhD, I aim to develop a reach-through volumetric display in which points of light are emitted from every 3D position of the display volume, yet people can insert their hands inside it to directly interact with the rendered content. Here, I present TomoLit, an inverse tomographic display in which multiple emitters project rays of different intensities at each angle, rendering a target image in mid-air. We have analysed the effect on image quality of the number of emitters, their locations, the angular resolution, and the number of intensity levels. We have developed a simple emitter and are in the process of assembling multiple of them. I also outline what I plan to do next, e.g., moving from 2D to 3D and exploring interaction techniques. The feedback obtained at this symposium will help dispel some of my doubts and guide my research career.
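At its core, the rendering problem is an inverse one: choose non-negative per-ray intensities x so that the light accumulated in each pixel, Ax, matches a target image b. A toy version under simplified geometry (our assumptions: a 2D grid, emitters along one edge, nearest-pixel ray marching) might look like this:

```python
import numpy as np
from scipy.optimize import nnls

N = 16                                     # pixels per side of the 2D slice
emitters = [(0.0, y) for y in np.linspace(1, N - 2, 8)]    # along left edge
angles = np.radians(np.linspace(-40, 40, 9))               # fan of rays

rays = []
for ex, ey in emitters:
    for a in angles:
        cover = np.zeros(N * N)
        for t in np.linspace(0, 2 * N, 4 * N):  # march along the ray
            x, y = ex + t * np.cos(a), ey + t * np.sin(a)
            if 0 <= int(x) < N and 0 <= int(y) < N:
                cover[int(y) * N + int(x)] = 1.0
        rays.append(cover)
A = np.array(rays).T                       # pixels x rays

b = np.zeros((N, N)); b[6:10, 6:10] = 1.0  # target: a bright square
x, residual = nnls(A, b.ravel())           # non-negative least squares
print("reconstruction residual:", residual)
```

The residual indicates how well this emitter arrangement can approximate the target, which mirrors the abstract's analysis of how emitter count, placement, and angular resolution affect image quality.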
Established as separate disciplines, Augmented Reality (AR) and Virtual Reality (VR) have already positioned themselves as strong research fields. However, because they are part of the same reality-virtuality continuum, as presented by Paul Milgram, it is possible to envision (i) smooth transitions between systems using different degrees of virtuality or (ii) collaboration between users of systems with different degrees of virtuality. We refer to these as cross-reality (XR) systems; compared to individual closed systems, they can better serve different modalities for a given task or context of use and potentially enable rich applications in training, education, remote assistance, or emergency response. This workshop will bring together researchers and practitioners interested in XR to identify current issues and future research directions; the long-term goal is to create a strong interdisciplinary research community and to foster the development of the discipline and future collaborations.