ALiSE: Through the Mirrored Space, the User Interacts with Avatars Naturally

Hiroki Uchida, Tadashi Ebihara, Naoto Wakatsuki, Keiichi Zempo
Augmented Reality (AR) and Virtual Reality (VR) interfaces, such as conventional head-mounted displays (HMDs), cannot share the content experience with people who are not wearing a device. To address this problem, we focus on AR mirrors and propose ALiSE (Augmented Layer Interweaved Semi-Reflecting Existence)---a display that presents images within a mirrored space using half-mirrors and gaps. We compared the service quality of this device with that of an ordinary display and an HMD, and confirmed the superiority of ALiSE over conventional displays on several measures. The results suggest that the connection formed between the service provider and the user through ALiSE is equivalent to the experience in VR. In other words, our proposed display method can provide the same level of satisfaction as services delivered in a conventional VR space. In addition, it can share the content experience with an accessibility comparable to observing digital signage, without requiring an HMD.

Artefact: A UML-based Framework for Model-driven Development of Interactive Surface Prototypes

Everett Mondliwethu Mthunzi, Florian Echtler
While interactive surface prototypes may be highly application-specific, existing prototypes hint at common, recurring design considerations. Given the rapid accumulation of near-identical prototypes, there is a need to promote design reuse. In this context, existing research prototypes motivate abstracting generic structures, architectural views, and descriptions to inform future designs. This paper proposes Artefact: a UML-based framework for model-driven development of interactive surface prototypes. We define flexible base models from existing research prototypes: initial hardware and middleware abstractions to support developers in the early design stages. For validation, we use the proposed framework to capture existing research prototypes. We then conduct an interview study to learn expert perceptions of the captured model representations. Our initial findings highlight three significant benefits: (1) an accessible graphical syntax with unambiguous model representation, (2) a system for capturing arbitrary technical specifications, and (3) flexible model representation with consistent notation. While we cannot draw any absolute conclusions, initial results suggest benefits in the model-driven approach.

Distant Assist Cursor (DAC): Designing an Augmented Reality System to Facilitate Remote Collaboration for Novice Users

Cedric Kervegant, Julien Castet, Juliette Vauchez, Charles Bailly
Many existing Augmented Reality (AR) systems facilitate remote collaboration by allowing users to share activity cues such as cursors. However, many of these systems require expensive hardware or non-negligible calibration phases, which may deter novice users. To overcome these barriers, we propose the Distant Assist Cursor (DAC): an AR remote collaboration system for sharing a virtual cursor. DAC relies on a camera and a projector sharing the same optical center. This design permits inexpensive hardware and a straightforward calibration phase. In this paper, we present the design of this system, which targets novice users. We then evaluate DAC by assessing its benefits and usability for remote collaboration tasks.
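The abstract does not spell out the calibration, but the property it relies on---a camera and projector sharing one optical center---implies that the camera-to-projector pixel mapping is a single fixed homography, independent of scene depth. Calibration can then reduce to estimating one homography from a few point correspondences. The sketch below is our own illustrative assumption, not the authors' code; the correspondence points are made up.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: fit a 3x3 matrix H so that dst ~ H @ src,
    given >= 4 point correspondences (Nx2 arrays)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H (flattened) spans the null space of A: take the last right singular vector.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, p):
    """Map a 2D point through H, with the homogeneous divide."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical calibration data: four camera pixels and the projector
# pixels where the shared cursor should be drawn for each of them.
cam = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
proj = np.array([[5, 7], [7, 7], [5, 9], [7, 9]], dtype=float)
H = estimate_homography(cam, proj)
print(apply_homography(H, (2, 3)))  # any new camera point maps straight to projector space
```

Because the mapping is depth-independent, this one-time estimate holds for any surface in view, which is what makes the calibration phase straightforward for novice users.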

Fast 3D Point-cloud Segmentation for Interactive Surfaces

Everett Mondliwethu Mthunzi, Christopher Getschmann, Florian Echtler
Easily accessible depth sensors have enabled using point-cloud data to augment tabletop surfaces in everyday environments. However, point-cloud operations are computationally expensive and challenging to perform in real-time, particularly when targeting embedded systems without a dedicated GPU. Given 3D representations of a tabletop environment, the high computational cost limits exploiting 3D techniques in real-time applications. In this paper, we propose mitigating these costs by segmenting candidate interaction regions in near real-time. We contribute an open-source solution for variable depth cameras on CPU-based architectures. For validation, we employ Microsoft's Azure Kinect and report the achieved performance. Our initial findings show that our approach takes under 35 ms to segment candidate interaction regions on a tabletop surface and reduces the data volume by up to 70%. We conclude by contrasting the performance of our solution against a model-fitting approach implemented by the SurfaceStreams toolkit. Our approach outperforms the RANSAC-based strategy within the context of our test scenario, segmenting a tabletop's interaction region up to 94% faster. Our results show promise for point-cloud-based approaches, even when targeting embedded solutions with limited resources.

How Content Drives Interaction With Public Displays

Benedikt Sprecher, Thomas Havekost, Julian Fietkau
Existing research into user engagement with public displays tends to focus on aspects of visual and interaction design, with less attention given to how the choice of content may influence user behavior. In this article, we survey the existing literature, particularly deployment studies of public displays, for lessons learned about the ramifications of content with different properties. We find that local and timely relevance of content, as well as user-driven content creation, have been independently shown to foster user engagement, but that few other solid conclusions can be drawn from the literature. On the whole, the aspect of content tends to be underspecified and not fully reflected in studies of public displays.

Sweet Spot: Displaying Interaction Areas on Everyday Home Surfaces using AR

Michael Chamunorwa, Lisa-Maria Müller, Tjado Ihmels, Dennis Diekmann, Heiko Müller, Susanne Boll
In a typical home or office setting, a work desk provides a surface that primarily holds objects and is used for everyday tasks such as writing and typing. However, given its centrality and multiple surfaces, many of which are always unoccupied, an ordinary table may find secondary usage as an alternative interface to control facets of a smart home. For tabletop surfaces to work efficiently as alternative interfaces, there is a need to determine their most reachable zones and those without too much clutter. We present Sweet Spots, an AR platform that visualizes easily accessible and non-cluttered areas on table-like surfaces, using augmented reality to assess suitable interaction areas. Sweet Spots provides designers and researchers with a tool to visualize their ideas and to explore the key parameters that make an interaction area a Sweet Spot.

Touch and Explore: A VR Game Exploration Based on Haptic-Driven Gameplay

Lokesh Kumar V M
In digital games, visuals and audio are widely used to provide immersion; with the introduction of Virtual Reality (VR) to games, novel interaction opportunities become possible. Interactions in VR are similar to those in real life, so haptics plays a crucial role in enhancing realism. In this paper, we explore the opportunity of a haptics-driven game in VR. The game is designed around experiencing the world in a completely dark space (a world without light) and navigating using the senses of touch, motion restriction, and proprioception. When the player touches a virtual surface in VR, the touch points of the object illuminate, acting as visual cues by which the player navigates the game. To facilitate realism, a low-cost arm-based motion-restriction device, developed as part of our prior work, provides the required haptics.