The conference proceedings are now available in the ACM Digital Library.

SUNDAY, NOV 25, 2018

9:30 - 17:30

Workshops

9:00 - 17:30: Approaching Aesthetics on User Interface and Interaction Design
13:00 - 17:30: Computational Augmented Reality Displays

11:00 - 17:30

18:00 - 20:00

Welcome Reception
Venue: HUB Yaesu branch. Google Maps. 10 min walk.

MONDAY, NOV 26, 2018

9:30 - 9:50

Greetings from the Chairs

10:50 - 11:10

Coffee Break

11:10 - 12:30

Session 1: Wall Displays in Our Daily Lives

Chair: Bongshin Lee

Welcome OnBoard: An Interactive Large Surface Designed for Teamwork and Flexibility in Surgical Flow Management


Juliette Rambourg, Hélène Gaspard-Boulinc, Stéphane Conversy, Marc Garbey
Effective management of surgical suites require teamwork and constant adaptation. Attempts to bring computer support might deteriorate information accuracy, be inflexible and turn staff collaborative tools into administrative tools. We present OnBoard, an 84” multitouch surface application to support surgical flow management. OnBoard supports multiple users, transposes existing coordination artifacts into interactive objects, and offers interactions such as free writing or addition or rescheduling of cases. As part of a cyber-infrastructure, OnBoard enables users to benefit from real-time activity sensors in the surgical suite while offering ways to mitigate potential glitches. OnBoard was installed in a surgical suite of 12 operating rooms for 2 months that performed about 300 procedures. We observed its use, ran interviews and questionnaires and tailored the interactions according to users' needs. OnBoard suggests that tailored, flexible tools might effectively support the surgical staff and be an important part of patients' health improvement.

Increasing Passersby Engagement with Public Large Interactive Displays: A Study of Proxemics and Conation


Mojgan Ghare, Marvin Pafla, Caroline Wong, James R. Wallace, Stacey D. Scott
Prior research has shown that large interactive displays deployed in public spaces are often underutilized, or even unnoticed, phenomena connected to 'interaction blindness' and 'display blindness', respectively. To better understand how designers can mitigate these issues, we conducted a field experiment that compared how different visual cues impacted engagement with a public display. The deployed interfaces were designed to progressively reveal more information about the display and entice interaction through the use of visual content designed to evoke direct or indirect conation (the mental faculty related to purpose or will to perform an action), and different animation triggers (random or proxemic). Our results show that random triggers were more effective than proxemic triggers at overcoming display and interaction blindness. Our study of conation - the first we are aware of - found that 'conceptual' visuals designed to evoke indirect conation were also useful in attracting people's attention.

Sage River Disaster Information (SageRDI): Demonstrating Application Data Sharing In SAGE2


Dylan Kobayashi, Matthew Ready, Alberto Gonzalez Martinez, Nurit Kirshenbaum, Tyson Seto-Mook, Jason Leigh, Jason Haga
The Scalable Amplified Group Environment (SAGE2) is an open-source, web-based middleware for driving tiled display walls. SAGE2 allows running multiple applications at once within its workspace. On large display walls, users tend to collaborate using multiple applications in the same space for simultaneous interaction and review. Unfortunately, many of these applications were created independently by different developers and were never intended to interoperate, which greatly limits their potential reusability. SAGE2 developers face system limitations where applications are data-segregated and cannot easily communicate with each other. To counter this problem, we developed the SAGE2 data sharing components. We describe the Sage River Disaster Information (SageRDI) application and the SAGE2 architectural implementations necessary for its operation. SageRDI enables river.go.jp, an existing website that provides water sensor data for Japan, to interact with other SAGE2 applications without modifying the website's server, hosted files, or any of the default SAGE2 applications.

Post-meeting Curation of Whiteboard Content Captured with Mobile Devices


Danniel Varona-Marin, Jan A. Oberholzer, Edward Tse, Stacey D. Scott
The traditional dry-erase whiteboard is a ubiquitous tool in the workplace, particularly in collaborative meeting spaces. Recent studies show that meeting participants commonly capture whiteboard content using integrated cameras on mobile devices such as smartphones or tablets. Yet, little is known about how people curate or use such whiteboard photographs after meetings, or how their curation practices relate to post-meeting actions. To better understand these post-meeting activities, we conducted a qualitative, interview-based study of 19 frequent whiteboard users to probe their post-meeting practices with whiteboard photos. The study identified a set of unmet design needs for the development of improved mobile-centric whiteboard capture systems. Design implications stemming from the study include the need for mobile devices to quickly capture and effortlessly transfer whiteboard photos to productivity-oriented devices and to shared-access tools, and the need to better support the extraction of whiteboard content directly into other productivity application tools.

12:30 - 14:00

Lunch
Note: Lunch will not be provided on this day. Please have lunch at a restaurant near the venue.

14:00 - 14:20

Demo/Poster Madness

14:20 - 16:00

Demo/Poster Session

16:00 - 17:20

Session 2: Multi-camera set-ups for multi-user spaces 

Chair: Raimund Dachselt

Browsing Group First-Person Videos with 3D Visualization


Yuki Sugita, Keita Higuchi, Ryo Yonetani, Rie Kamikubo, Yoichi Sato
This work presents a novel user interface applying 3D visualization to understand complex group activities from multiple first-person videos. The proposed interface is designed to help video viewers easily understand the collaborative relationships within a group activity based on where each worker is located in a workspace and how multiple workers are positioned relative to one another during the activity. More specifically, the interface not only shows all recorded first-person videos but also visualizes the 3D position and orientation of each viewpoint (i.e., the 3D position of each worker wearing a head-mounted camera) within a reconstructed 3D model of the workspace. Our user study confirms that the 3D visualization helps video viewers understand a worker's geometric information and the collaborative relationships of the group activity easily and accurately.

EagleView: A Video Analysis Tool for Visualising and Querying Spatial Interactions of People and Devices


Frederik Brudy, Suppachai Suwanwatcharachat, Wenyu Zhang, Steven Houben, Nicolai Marquardt
To study and understand group collaborations involving multiple handheld devices and large interactive displays, researchers frequently analyse video recordings of interaction studies to interpret people's interactions with each other and/or devices. Advances in ubicomp technologies allow researchers to record spatial information through sensors in addition to video material. However, the volume of video data and the high number of coding parameters involved in such an interaction analysis make this a time-consuming and labour-intensive process. We designed EagleView, which provides analysts with real-time visualisations during playback of videos and an accompanying data stream of tracked interactions. Real-time visualisations take into account key proxemic dimensions, such as distance and orientation. Overview visualisations show people's position and movement over longer periods of time. EagleView also allows the user to query people's interactions with an easy-to-use visual interface. Results are highlighted on the video player's timeline, enabling quick review of relevant instances. Our evaluation with expert users showed that EagleView is easy to learn and use, and that the visualisations allow analysts to gain insights into collaborative activities.
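As a generic illustration (not EagleView's actual API), the two proxemic dimensions named above, interpersonal distance and relative orientation, can be derived from tracked positions and facing angles roughly as in the sketch below; the function and parameter names are hypothetical.

```python
import math

def proxemics(pos_a, heading_a, pos_b):
    """Rough sketch: distance and relative orientation between two tracked
    entities. pos_* are (x, y) in metres; heading_a is A's facing angle in
    radians (0 = +x axis)."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)  # direction from A towards B
    # angle of B off A's facing direction, wrapped to [-pi, pi]
    rel_orientation = (bearing - heading_a + math.pi) % (2 * math.pi) - math.pi
    return distance, rel_orientation

d, theta = proxemics((0.0, 0.0), math.pi / 2, (1.0, 1.0))
# d ~ 1.41 m; theta ~ -0.79 rad, i.e. B is about 45 degrees to A's right
```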

Velt: A Framework for Multi RGB-D Camera Systems


Andreas Fender, Jörg Müller
We present Velt, a flexible framework for multi RGB-D camera systems. Velt supports modular real-time streaming and processing of multiple RGB, depth and skeleton streams in a camera network. RGB-D data from multiple devices can be combined into 3D data like point clouds. Furthermore, we present an integrated GUI, which enables viewing and controlling all streams, as well as debugging and profiling performance. The node-based GUI provides access to everything from high level parameters like frame rate to low level properties of each individual device. Velt supports modular preprocessing operations like downsampling and cropping of streaming data. Furthermore, streams can be recorded and played back. This paper presents the architecture and implementation of Velt.
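As a rough sketch of the kind of processing such a framework performs, a depth frame can be unprojected into a point cloud with the standard pinhole camera model; the snippet below is a generic NumPy illustration with hypothetical function and parameter names, not Velt's actual API.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Unproject a depth image (metres) into an Nx3 point cloud using the
    pinhole model; fx, fy, cx, cy come from per-camera calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Clouds from several cameras would then be merged by applying each
# camera's 4x4 extrinsic pose (rigid transform) before concatenation.
```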

Hands-Free Remote Collaboration Over Video: Exploring Viewer and Streamer Reactions


Baris Unver, Sarah D'Angelo, Matthew Miller, John C. Tang, Gina Venolia, Kori Inkpen
Video conferencing is often used to connect geographically distributed users and can help them engage in a shared activity in a mobile setting for remote collaboration. One key limitation of today's systems, however, is that they typically provide only one camera view, which often requires users to hold a mobile phone during the activity. In this paper, we explore the opportunities and challenges of utilizing multiple cameras to create a hands-free, mobile, remote collaboration experience. Results from a study with 54 participants (18 groups of three) revealed that remote viewers could actively participate in an activity with the help of our system and that both the local participant and remote viewers preferred hands-free mode over traditional video conferencing. Our study also revealed insights into how remote viewers manage multiple camera views and showed how automation can improve the experience.

TUESDAY, NOV 27, 2018

09:30 - 10:50

Session 3: Gestures and Selection for Interactive Surfaces 

Chair: Sylvain Malacria

Replicating User-defined Gestures for Text Editing


Poorna Talkad Sukumar, Anqing Liu, Ronald Metoyer
Although initial ideas for building intuitive and usable handwriting applications originated nearly 30 years ago, recent advances in stylus technology and handwriting recognition are now making handwriting a viable text-entry option on touchscreen devices. In this paper, we use modern methods to replicate studies from the '80s that elicit hand-drawn gestures from users for common text-editing tasks, in order to determine a 'guessable' gesture set and to determine whether the early results still apply given the ubiquity of touchscreen devices today. We analyzed 360 gestures, performed with either the finger or stylus, from 20 participants for 18 tasks on a modern tablet device. Our findings indicate that the mental model of 'writing on paper' found in past literature largely holds even today, although today's users' mental model also appears to support manipulating the paper elements as opposed to annotating. In addition, users prefer using the stylus to finger touch for text editing, and we found that manipulating 'white space' is complex. We present our findings as well as a stylus-based, user-defined gesture set for text editing.

How Memorizing Directions or Positions Affects Gesture Learning?


Bruno Fruchard, Eric Lecolinet, Olivier Chapuis
Various techniques have been proposed to speed up command selection. Many of them rely either on directional gestures (e.g., Marking menus) or on pointing gestures using a spatially stable arrangement of items (e.g., FastTap). Both types of techniques are known to leverage memorization, but not necessarily for the same reasons. In this paper, we investigate whether using directions or positions affects gesture learning. Our study shows that, while recall rates are not significantly different, participants used the novice mode more often and spent more time when learning commands with directional gestures, and they also reported more physical and mental effort. Moreover, this study highlights the importance of semantic relationships between gestural commands and reports on the memorization strategies developed by the participants.

How to Hold Your Phone When Tapping: A Comparative Study of Performance, Precision, and Errors


Florian Lehmann, Michael Kipp
We argue that future mobile interfaces should differentiate between contextual factors such as grip and active fingers, adjusting screen elements and behaviors automatically, thus moving from merely responsive design to responsive interaction. Toward this end, we conducted a systematic study of screen taps on a mobile device to find out how the way you hold your device impacts performance, precision, and error rate. In our study, we compared three commonly used grips and found that the popular one-handed grip, tapping with the thumb, yields the worst performance. The two-handed grip, tapping with the index finger, is the most precise and least error-prone method, especially in the upper and left halves of the screen. In landscape orientation (two-handed, tapping with both thumbs) we found the best overall performance, with a drop in performance in the middle of the screen. Additionally, we found differentiated trade-off relationships and directional effects. From our findings we derive design recommendations for interface designers and give an example of how to make interactions truly responsive to the context of use.

Risk Effects of Surrounding Distractors Imposing Time Penalty in Touch-Pointing Tasks


Shota Yamanaka
Optimal target size has been studied for touch-GUI design. In addition, because the degree of risk of tapping unintended targets significantly affects users' strategy, some researchers have investigated the effects of margins (or gaps) between GUI items and the risk level (here, a penalty time) on user performance. From our touch-pointing tasks with grid-arranged icons, we found that a small gap and a long penalty time did not significantly change the task completion time, but they did negatively affect the error rates. As a design implication, we recommend using 1-mm gaps to balance space occupation and user performance. We also found that we could not estimate user performance using Fitts' and FFitts' laws, probably because participants had to focus their attention on avoiding distractors while aiming for the target.
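For reference, the Shannon formulation of Fitts' law mentioned in the abstract predicts movement time MT from the distance D to a target and its width W, with empirically fitted constants a and b (FFitts' law extends this with a model of finger-touch precision, not reproduced here):

$$ MT = a + b \log_2\!\left(\frac{D}{W} + 1\right) $$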

10:50 - 11:10

Coffee Break

11:10 - 12:30

Session 4: Mobile and Wearable Text Entry 

Chair: Mark Hancock

Optimal-T9: An Optimized T9-like Keyboard for Small Touchscreen Devices


Ryan Qin, Suwen Zhu, Yu-Hao Lin, Yu-Jung Ko, Xiaojun Bi
T9-like keyboards (i.e., 3x3 layouts) have been commonly used on small touchscreen devices to mitigate the problem of tapping tiny keys with imprecise finger touch (e.g., T9 is the default keyboard on Samsung Gear 2). In this paper, we proposed a computational approach to design optimal T9-like layouts by considering three key factors: clarity, speed, and learnability. In particular, we devised a clarity metric to model the word collisions (i.e., words with identical tapping sequences), used the Fitts-Digraph model to predict speed, and introduced a Qwerty-bounded constraint to ensure high learnability. Founded upon rigorous mathematical optimization, our investigation led to Optimal-T9, an optimized T9-like layout which outperformed the original T9 and other T9-like layouts. A user study showed that its average input speed was 17% faster than T9 and 26% faster than a T9-like layout from literature. Optimal-T9 also drastically reduced the error rate by 72% over a regular Qwerty keyboard. Subjective ratings were in favor of Optimal-T9: it had the lowest physical, mental demands, and the best perceived-performance among all the tested keyboards. Overall, our investigation has led to a more efficient, and more accurate T9-like layout than the original T9. Such a layout would immediately benefit both T9-like keyboard users and small touchscreen device users.

DiaQwerty: QWERTY Variants to Better Utilize the Screen Area of a Round or Square Smartwatch


Sunbum Kim, Sunggeun Ahn, Geehyuk Lee
The QWERTY keyboard has a wide form factor, whereas smartwatches frequently have round or square form factors. Even if extra space for an input field is considered, the aspect ratio of the QWERTY keyboard may not be optimal for a round or square smartwatch. We surmised that a narrower keyboard would be better able to utilize the screen area of a round or square smartwatch, thereby enabling larger keys and improved performance. Larger keys, however, may not necessarily result in better performance because of the reduced familiarity of a modified keyboard layout. To investigate this, we designed DiaQwerty keyboards, which are QWERTY variants with an aspect ratio of 10:7, and compared them with a plain QWERTY keyboard and a SplitBoard on round and square smartwatches. For a 33 mm round watch, DiaQwerty was comparable with QWERTY and was faster than SplitBoard. For a 24 mm x 30 mm square watch, DiaQwerty was faster than both QWERTY and SplitBoard. In the latter case, DiaQwerty achieved a 15% improvement over QWERTY. We concluded that the advantages of the enlarged keys outweighed the disadvantages of the reduced familiarity of the modified layout on a square watch.

Design and Evaluation of Semi-Transparent Keyboards on a Touchscreen Tablet


Sunjun Kim, Geehyuk Lee
As tablet computers are hosting more productivity applications, efficient text entry is becoming more important. A soft keyboard, which is the primary text entry interface for tablets, however, often competes with applications for the limited screen space. A promising solution to this problem may be a semi-transparent soft keyboard (STK), which can share the screen with an application. A few commercial STK products are already available, but research questions about the STK design have not been explored in depth yet. Therefore, we conducted three experiments to answer 1) the effect of the transparency level on usability, 2) exploration of diverse design options for an STK, and 3) the effect of an STK on the different text caret positions. The results imply that STKs with 50\% opacity showed a balanced performance; well-designed STKs were acceptable in both content reading and typing situations; which could reach 90-100\% of an opaque keyboard in terms of overall performance; and the text caret could intrude the STK down to the number row.

A Small Virtual Keyboard is Better for Intermittent Text Entry on a Pen-Equipped Tablet


Jiseong Gu, Youngbo Aram Shim, Sunbum Kim, Geehyuk Lee
Tap-typing with a pen on a virtual keyboard is often used for intermittent text entry on a pen-equipped tablet; however, this has usability issues owing to the large size of the virtual keyboard. We speculated that tap-typing with a pen on a tablet may not require such a large keyboard, but there was no empirical study to help us to determine the best virtual keyboard size for it. Therefore, we conducted two experiments. In the first experiment, we compared virtual keyboards with different key sizes (3, 4, 5, 6, 8, 10, and 13 mm) using a text entry task. The results showed that keyboards with 6-, 8-, and 10-mm keys were the best when text entry speed, task load, and preference were considered. In the second experiment, a keyboard with 6-mm keys was compared with other conventional pen text entry options using a presentation slide editing task with a pen. The results showed that the keyboard with 6-mm keys performed better than the others in terms of preference, System Usability Scale (SUS) scores, and ease-of-learning scores in a Usefulness, Satisfaction, and Ease of use (USE) questionnaire.

12:30 - 14:00

Lunch

14:00 - 15:40

Demo/Poster Session

15:40 - 17:00

Session 5: Interactive Spaces: Moving and on the move 

Chair: Jan Oberholzer

FingerInput: Capturing Expressive Single-Hand Thumb-to-Finger Microgestures


Mohamed Soliman, Franziska Mueller, Lena Hegemann, Joan Sol Roo, Christian Theobalt, Jürgen Steimle
Single-hand thumb-to-finger microgestures have shown great promise for expressive, fast, and direct interactions. However, pioneering gesture recognition systems each focused on a particular subset of gestures; systems that can detect the set of possible gestures to a fuller extent are still lacking. In this paper, we present a consolidated design space for thumb-to-finger microgestures. Based on this design space, we present a thumb-to-finger gesture recognition system using depth sensing and convolutional neural networks. It is the first system that accurately detects the touch points between fingers as well as finger flexion. As a result, it can detect a broader set of gestures than existing alternatives, while also providing high-resolution information about the contact points. The system shows an average accuracy of 91% for the real-time detection of 8 demanding thumb-to-finger gesture classes. We demonstrate the potential of this technology via a set of example applications.

On the Effects of a Nomadic Multisensory Solution for Children's Playful Learning


Mirko Gelsomini, Annalisa Rotondaro, Giulia Cosentino, Mattia Gianotti, Fabiano Riccardi, Franca Garzotto
The paper describes the design of Ahù, a nomadic smart multisensory solution for children's playful learning. Ahù has a combination of features that make it unique to the current available multisensory technologies for learning: it has a totem shape, supports multimedia stimuli by communicating through voice, lights and multimedia contents projected on two different fields on the floor and allows cooperative and competitive games. The system is nomadic, so it can be moved around, and can be controlled with easy input methods. An exploratory study investigates and provides Ahù's potential to promote engagement, collaboration, socialization and learning in classrooms.

The Haptic Video Player: Using Mobile Robots to Create Tangible Video Annotations


Darren Guinness, Annika Muehlbradt, Daniel Szafir, Shaun K. Kane
Video and animation are common ways of delivering concepts that cannot be easily communicated through text. This visual information is often inaccessible to blind and visually impaired people, and alternative representations such as Braille and audio may leave out important details. Audio-haptic displays with along with supplemental descriptions allow for the presentation of complex spatial information, along with accompanying description. We introduce the Haptic Video Player, a system for authoring and presenting audio-haptic content from videos. The Haptic Video Player presents video using mobile robots that can be touched as they move over a touch screen. We describe the design of the Haptic Video Player system, and present user studies with educators and blind individuals that demonstrate the ability of this system to render dynamic visual content non-visually.

AdapTable: Extending Reach over Large Tabletops through Flexible Multi-Display Configuration


Yoshiki Kudo, Kazuki Takashima, Morten Fjeld, Yoshifumi Kitamura
Large interactive tabletops are beneficial for various tasks involving exploration and visualization, but regions of the screen far from users can be difficult to reach. We propose AdapTable; a concept and prototype of a flexible multi-display tabletop that can physically reconfigure its layout, allowing for interaction with difficult-to-reach regions. We conducted a design study where we found users preferred to change screen layouts for full-screen interaction, motivated by reduced physical demands and frustration. We then prototyped AdapTable using four actuated tabletop displays each propelled by a mobile robot, and a touch menu was chosen to control the layout. Finally, we conducted a user study to evaluate how well AdapTable addresses the reaching problem compared with a conventional panning technique. Our findings show that AdapTable provides a more efficient method for complex full-screen interaction.

17:00 - 17:30

10 Year Impact Award Presentation

18:30 - 20:30

Conference Dinner
Venue: Hokkaido Gourmet Dining. Google Maps. 15 min walk.

WEDNESDAY, NOV 28, 2018

09:30 - 10:50

Session 6: Novel Modalities: Sand, Air and Water 

Chair: Fabrice Matulic

Scoopirit: A Method of Scooping Mid-Air Images on Water Surface


Yu Matsuura, Naoya Koizumi
We propose Scoopirit, a system that displays an image standing vertically at an arbitrary three-dimensional position under and on a water surface. Users can scoop the image from an arbitrary horizontal position on water surface with their bare hands. So that it can be installed in public spaces containing water surfaces visited by an unspecified number of people, Scoopirit displays images without adding anything to the water and enables user to interact with the image without wearing a device. Because of the configuration of the system, users can easily understand the relationship between the real space and the image. This study makes three contributions. First, we propose a method to scoop the mid-air image at an arbitrary horizontal position. Second, we design the offset that is useful for measuring the water level of the water surface scooped with an RGB-D camera. Third, we propose a method to design interaction areas.

Fade-in Pixel: A Selective and Gradual Real-World Pixelization Filter


Jin Himeno, Tomohiro Yokota, Tomoko Hashida
Our goal is to explore novel techniques for blurring real-world objects with a physical filter that changes their appearance and naturally induces a line of sight. To reach this goal, we have developed a system, called Fade-in Pixel", that enables real-world objects to be pixelized gradually and selectively. We realized this gradual and selective pixelization by using 3D printing a transparent lens array filter and controlling the filling and discharging of transparent liquid having the same refractive index as that of the transparent resin. The result is that objects in the real world appear pixelized when the liquid is discharged and appear clear when the liquid is filled. This paper describes the motivation for this study, along with the design, implementation, and interactive applications of Fade-in Pixel."

Sand to Water: Manipulation of Liquidness Perception with Fluidized Sand and Spatial Augmented Reality


Keishiro Uragaki, Yasushi Matoba, Soichiro Toyohara, Hideki Koike
Recently, there has been a renewed interest in Fluidized Bed Interface. It can give us the haptic feedback of sand and fluid reversibly, but exhibits difficulty in visually changes. So, we proposed a novel spatial augmented reality technique to provide the visual liquidness impression. Our system gives visual feedback to users by projecting liquid images on the surface of the interface. The image is generated on the basis of human visual cognition and is different from a mere fluid simulation. As the result, the visual liquidness of our system was emphasized compared to the Fluidized Bed Interface. In addition, the effect of emphasizing visual liquidness is confirmed in the fluidized bed phenomenon, and it is found out that our system is superior in displaying a liquid image. Our system is expected not only to be applied in entertainments but also to contribute to the elucidation of human liquid cognitive mechanisms.

LeviCursor: Dexterous Interaction with a Levitating Object


Myroslav Bachynskyi, Viktorija Paneva, Jörg Müller
We present LeviCursor, a method for interactively moving a physical, levitating particle in 3D with high agility. The levitating object can move continuously and smoothly in any direction. We optimize the transducer phases for each possible levitation point independently. Using precomputation, our system can determine the optimal transducer phases within a few microseconds and achieves round-trip latencies of 15 ms. Due to our interpolation scheme, the levitated object can be controlled almost instantaneously with sub-millimeter accuracy. We present a particle stabilization mechanism which ensures that the levitating particle is always in the main levitation trap. Lastly, we conduct the first Fitts' law-type pointing study with a real 3D cursor, where participants control the movement of the levitated cursor between two physical targets. The results of the user study demonstrate that using LeviCursor, users reach performance comparable to that of a mouse pointer.

10:50 - 11:10

Coffee Break

11:10 - 12:30

Townhall Meeting

12:30 - 14:00

Lunch

14:00 - 15:20

Session 7: Interactive and Augmented Spaces 

Chair: Daisuke Iwai

MlioLight: Projector-camera Based Multi-layered Image Overlay System for Multiple Flashlights Interaction


Toshiki Sato, Dong-Hyun Hwang, Koike Hideki
We propose MlioLight, a projector-camera unit-based projection-mapping system for overlaying multiple images on a screen or on real world objects using multiple flashlight-type devices. We focus on detecting the areas of overlapping lights in a multiple light source scenario and overlaying multi-layered information on real world objects in these areas. To blend multiple images, we developed methods for light identification and overlapping area detection using wireless synchronization between a high-speed camera and multiple flashlight devices. In this study, we describe the concept of MlioLight as well as its prototype implementation and applications. In addition, we present an evaluation of the proposed prototype.

ColourAIze: AI-Driven Colourisation of Paper Drawings with Interactive Projection System


Fabrice Matulic
ColourAIze is an interactive system that analyses black and white drawings on paper, automatically determines realistic colour fills using artificial intelligence (AI) and projects those colours onto the paper within the line art. In addition to selecting between multiple colouring styles, the user can specify local colour preferences to the AI via simple stylus strokes in desired areas of the drawing. This allows users to immediately and directly view potential colour fills for paper sketches or published black and white artwork such as comics. ColourAIze was demonstrated at the Winter 2017 Comic Market in Tokyo, where it was used by more than a thousand visitors. This short paper describes the design of the system and reports on usability observations gathered from demonstrators at the fair.

Floor-Projected Guidance Cues for Collaborative Exploration of SAR Setups


Susanne Schmidt, Frank Steinicke, Andrew Irlitti, Bruce H. Thomas
In this paper we present a floor-based user interface (UI) that allows multiple users to explore a spatial augmented reality (SAR) environment with both monoscopic and stereoscopic projections. Such environments are characterized by a low level of user instrumentation and the capability of providing a shared interaction space for multiple users. However, projector-based systems using stereoscopic display are usually single-user setups, since they can provide the correct perspective for only one tracked person. To address this problem, we developed a set of guidance cues, which are projected onto the floor in order to assist multiple users regarding (i) the interaction with the SAR system, (ii) the identification of regions of interest and ideal viewpoints, and (iii) the collaboration with each other. In a user study with 40 participants all cues were evaluated and a set of feedback elements, which are essential to guarantee an intuitive self-explaining interaction, was identified. The results of the study also indicate that the developed UI guides users to more favorable viewpoints and therefore is able to improve the experience in a multi-user SAR environment.

Awareness Techniques to Aid Transitions between Personal and Shared Workspaces in Multi-Display Environments


Arnaud Prouzeau, Anastasia Bezerianos, Olivier Chapuis
In multi-display environments (MDEs) that include large shared displays and desktops, users can engage in both close collaboration and parallel or personal work. To transition between the displays can be challenging in complex settings, such as crisis management rooms. To provide workspace awareness and to factilitate these transitions, we design and implement three interactions techniques that display users' activities. We explore how and where to display this activity: briefly on the shared display, or more persistently on a peripheral floor display. In a user study, motivated by the context of a crisis room where multiple operators with different roles need to cooperate, we tested the usability of the techniques and provided insights on such transitions in systems running on MDEs.

15:20 - 15:40

Coffee Break

16:40 - 16:50

Closing