9:30 - 9:50
Greetings from the Chairs
10:50 - 11:10
11:10 - 12:30
Chair: Bongshin Lee
Juliette Rambourg, Hélène Gaspard-Boulinc, Stéphane Conversy, Marc Garbey
Effective management of surgical suites requires teamwork and constant adaptation. Attempts to bring in computer support can degrade information accuracy, prove inflexible, and turn staff collaboration tools into administrative tools. We present OnBoard, an 84” multitouch surface application to support surgical flow management. OnBoard supports multiple users, transposes existing coordination artifacts into interactive objects, and offers interactions such as free-form writing and the addition or rescheduling of cases. As part of a cyber-infrastructure, OnBoard enables users to benefit from real-time activity sensors in the surgical suite while offering ways to mitigate potential glitches. OnBoard was installed for 2 months in a surgical suite of 12 operating rooms, which performed about 300 procedures during that period. We observed its use, ran interviews and questionnaires, and tailored the interactions according to users' needs. OnBoard suggests that tailored, flexible tools can effectively support the surgical staff and play an important part in improving patient health.
Mojgan Ghare, Marvin Pafla, Caroline Wong, James R. Wallace, Stacey D. Scott
Prior research has shown that large interactive displays deployed in public spaces are often underutilized, or even unnoticed, phenomena connected to 'interaction blindness' and 'display blindness', respectively. To better understand how designers can mitigate these issues, we conducted a field experiment that compared how different visual cues impacted engagement with a public display. The deployed interfaces were designed to progressively reveal more information about the display and entice interaction through the use of visual content designed to evoke direct or indirect conation (the mental faculty related to purpose or will to perform an action), and different animation triggers (random or proxemic). Our results show that random triggers were more effective than proxemic triggers at overcoming display and interaction blindness. Our study of conation - the first we are aware of - found that "conceptual" visuals designed to evoke indirect conation were also useful in attracting people's attention.
Dylan Kobayashi, Matthew Ready, Alberto Gonzalez Martinez, Nurit Kirshenbaum, Tyson Seto-Mook, Jason Leigh, Jason Haga
The Scalable Amplified Group Environment (SAGE2) is an open-source, web-based middleware for driving tiled display walls. SAGE2 allows running multiple applications at once within its workspace. On large display walls, users tend to collaborate using multiple applications in the same space for simultaneous interaction and review. Unfortunately, many of these applications were created independently by different developers and were never intended to interoperate, which greatly limits their potential reusability. SAGE2 developers face system limitations where applications are data-segregated and cannot easily communicate with others. To counter this problem, we developed the SAGE2 data sharing components. We describe the Sage River Disaster Information (SageRDI) application and the SAGE2 architectural implementations necessary for its operation. SageRDI enables river.go.jp, an existing website that provides water sensor data for Japan, to interact with other SAGE2 applications without modifying the website's server, hosted files, or any of the default SAGE2 applications.
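The data-sharing pattern the abstract describes can be sketched as a simple publish/subscribe bus. This is a generic, hypothetical illustration of the general pattern, not SAGE2's actual API: producer apps publish named data streams, and consumer apps subscribe without the producers needing to know about them.

```python
# Hypothetical sketch of app-to-app data sharing via publish/subscribe.
# Names (DataBus, topic strings) are illustrative, not from SAGE2.

class DataBus:
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a consumer callback for a named data stream."""
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, value):
        """Deliver a value to every subscriber of the topic."""
        for callback in self.subscribers.get(topic, []):
            callback(value)

# E.g. a river-sensor app publishes readings; a chart app consumes them
# without either app referencing the other directly.
bus = DataBus()
readings = []
bus.subscribe("river/waterlevel", readings.append)
bus.publish("river/waterlevel", {"station": "A", "level_m": 2.4})
```

The decoupling is the point: the sensor app never holds a reference to the chart app, so either can be replaced independently.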
Danniel Varona-Marin, Jan A. Oberholzer, Edward Tse, Stacey D. Scott
The traditional dry-erase whiteboard is a ubiquitous tool in the workplace, particularly in collaborative meeting spaces. Recent studies show that meeting participants commonly capture whiteboard content using integrated cameras on mobile devices such as smartphones or tablets. Yet, little is known about how people curate or use such whiteboard photographs after meetings, or how their curation practices relate to post-meeting actions. To better understand these post-meeting activities, we conducted a qualitative, interview-based study of 19 frequent whiteboard users to probe their post-meeting practices with whiteboard photos. The study identified a set of unmet design needs for the development of improved mobile-centric whiteboard capture systems. Design implications stemming from the study include the need for mobile devices to quickly capture and effortlessly transfer whiteboard photos to productivity-oriented devices and shared-access tools, and the need to better support the extraction of whiteboard content directly into other productivity applications.
12:30 - 14:00
Note: Lunch will not be offered on this day. Please have lunch at a restaurant near the venue.
14:00 - 14:20
16:00 - 17:20
Chair: Raimund Dachselt
Yuki Sugita, Keita Higuchi, Ryo Yonetani, Rie Kamikubo, Yoichi Sato
This work presents a novel user interface applying 3D visualization to understand complex group activities from multiple first-person videos. The proposed interface is designed to help video viewers easily understand the collaborative relationships within a group activity based on where individual workers are located in a workspace and how multiple workers are positioned relative to one another during the activity. More specifically, the interface not only shows all recorded first-person videos but also visualizes the 3D position and orientation of each viewpoint (i.e., the 3D position of each worker wearing a head-mounted camera) with a reconstructed 3D model of the workspace. Our user study confirms that the 3D visualization helps video viewers understand the geometric information of a worker and the collaborative relationships of a group activity easily and accurately.
Frederik Brudy, Suppachai Suwanwatcharachat, Wenyu Zhang, Steven Houben, Nicolai Marquardt
To study and understand group collaborations involving multiple handheld devices and large interactive displays, researchers frequently analyse video recordings of interaction studies to interpret people's interactions with each other and/or devices. Advances in ubicomp technologies allow researchers to record spatial information through sensors in addition to video material. However, the volume of video data and the high number of coding parameters involved in such an interaction analysis make this a time-consuming and labour-intensive process. We designed EagleView, which provides analysts with real-time visualisations during playback of videos and an accompanying data stream of tracked interactions. Real-time visualisations take into account key proxemic dimensions, such as distance and orientation. Overview visualisations show people's position and movement over longer periods of time. EagleView also allows the user to query people's interactions with an easy-to-use visual interface. Results are highlighted on the video player's timeline, enabling quick review of relevant instances. Our evaluation with expert users showed that EagleView is easy to learn and use, and the visualisations allow analysts to gain insights into collaborative activities.
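The proxemic dimensions mentioned above, distance and orientation, can be computed directly from tracked 2D positions and headings. The sketch below is an assumed illustration of such a query predicate (not EagleView's actual code); the function names, the 45° tolerance, and the 2 m threshold are hypothetical choices.

```python
import math

def distance(p, q):
    """Euclidean distance between two tracked 2D positions (metres)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def facing(p, heading_p, q, tolerance_deg=45.0):
    """True if a person at p with the given heading (degrees) is
    oriented towards a person at q, within an angular tolerance."""
    bearing = math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    diff = (bearing - heading_p + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return abs(diff) <= tolerance_deg

def mutual_engagement(p, heading_p, q, heading_q, max_dist=2.0):
    """Hypothetical query predicate: two people are close together
    and oriented towards each other (an f-formation-like check)."""
    return (distance(p, q) <= max_dist
            and facing(p, heading_p, q)
            and facing(q, heading_q, p))
```

A query tool could evaluate such a predicate per video frame and highlight matching time spans on the player's timeline.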
Andreas Fender, Jörg Müller
We present Velt, a flexible framework for multi-camera RGB-D systems. Velt supports modular real-time streaming and processing of multiple RGB, depth, and skeleton streams in a camera network. RGB-D data from multiple devices can be combined into 3D data such as point clouds. Furthermore, we present an integrated GUI, which enables viewing and controlling all streams, as well as debugging and profiling performance. The node-based GUI provides access to everything from high-level parameters like frame rate to low-level properties of each individual device. Velt supports modular preprocessing operations such as downsampling and cropping of streaming data. In addition, streams can be recorded and played back. This paper presents the architecture and implementation of Velt.
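Combining depth frames from multiple calibrated RGB-D cameras into one point cloud typically works by back-projecting each depth pixel with the camera intrinsics and transforming the result into a shared world frame with the camera's extrinsic pose. The following is a minimal sketch of that general technique under assumed conventions, not Velt's actual implementation.

```python
# Assumed convention: depth images are row-major lists of metres,
# 0 means "no reading"; intrinsics are (fx, fy, cx, cy).

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image into 3D points in the camera frame
    using the pinhole model: X = (u - cx) * z / fx, etc."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

def transform(points, rotation, translation):
    """Apply a rigid transform (3x3 rotation matrix, 3-vector
    translation) to move points into the shared world frame."""
    out = []
    for x, y, z in points:
        out.append(tuple(
            rotation[i][0] * x + rotation[i][1] * y + rotation[i][2] * z
            + translation[i] for i in range(3)))
    return out

def merge_cameras(frames):
    """frames: list of (depth, (fx, fy, cx, cy), rotation, translation).
    Returns one combined world-space point cloud."""
    cloud = []
    for depth, (fx, fy, cx, cy), rot, trans in frames:
        cloud.extend(transform(backproject(depth, fx, fy, cx, cy), rot, trans))
    return cloud
```

In practice a system like this would also need per-camera calibration and temporal synchronisation of the streams; the sketch shows only the geometric merge step.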
Baris Unver, Sarah D'Angelo, Matthew Miller, John C. Tang, Gina Venolia, Kori Inkpen
Video conferencing is often used to connect geographically distributed users and can help them engage in a shared activity in a mobile setting for remote collaboration. One key limitation of today's systems, however, is that they typically provide only one camera view, which often requires users to hold a mobile phone during the activity. In this paper, we explore the opportunities and challenges of utilizing multiple cameras to create a hands-free, mobile, remote collaboration experience. Results from a study with 54 participants (18 groups of three) revealed that remote viewers could actively participate in an activity with the help of our system and that both the local participant and remote viewers preferred the hands-free mode over traditional video conferencing. Our study also revealed insights into how remote viewers manage multiple camera views and showed how automation can improve the experience.