9:00 - 10:20
Chair: Jinah Park
Mohd Adili Norasikin, Diego Martinez-Plasencia, Gianluca Memoli, Sriram Subramanian
Permeable mid-air displays, such as those using fog or water mist, are limited by our ability to shape and control the aerosol, and face two major issues: (1) the size and complexity of the system, and (2) the creation of laminar flow to retain display quality. Here we present SonicSpray, a technique using ultrasonic Bessel beams to create reconfigurable mid-air displays. We build a prototype from low-cost, off-the-shelf parts and explore the potential and limitations of SonicSpray to create and redirect laminar flows of fog. We demonstrate a working prototype that precisely controls laminar aerosols using only a 6x6 array of ultrasound transducers. We describe the implementation steps to build the device, verify the control and projection algorithm for the display, and evaluate its performance. Finally, we report our exploration of several useful applications in learning, entertainment, and the arts.
Tao Morisaki, Ryoma Mori, Ryosuke Mori, Yasutoshi Makino, Yuta Itoh, Yuji Yamakawa, Hiroyuki Shinoda
Physically moving real objects via a computational force connects computers and the real world and has been applied to tangible interfaces and mid-air displays. Most prior work has used computational force to control only stationary objects. Controlling a moving object, by contrast, can expand the real space that is controllable by a computer. In this paper, we explore the potential of computational force from the viewpoint of changing the trajectory of a moving object. Changing the trajectory is the primitive model for controlling a moving object, and it is a technological challenge requiring high-speed measurement and non-contact force with high spatial resolution. As a proof of concept, we introduce Hopping-Pong, which changes the trajectory of a flying Ping-Pong ball (PPB) using ultrasound force. Our results show that Hopping-Pong changes the trajectory of a PPB by 344 mm. We conclude that computational force is capable of controlling a moving object in the real world. This research contributes to expanding the computationally controlled space, with applications in augmented sports, HCI, and factory automation.
William Delamare, Chaklam Silpasuwanchai, Sayan Sarcar, Toshiaki Shiraki, Xiangshi Ren
The current gesture interaction paradigm mainly involves a one-to-one gesture-command mapping. This leads to memorability issues regarding (1) the mapping, as each new command requires a new gesture, and (2) the gestures' specifics (e.g., motion paths), which can become complex in order to support the recognition of several gestures. We explore the concept of combining 3D gestures when interacting in smart environments. We first propose a design space characterizing the temporal and spatial aspects of combination, as well as the gesture types used in combinations. We then report results from three user studies in the context of smart TV interaction. The first study reveals that end-users can create gesture sets with combinations fully optimized to reuse gestures. The second study shows that combining gestures can lead to improved memorability compared to single gestures. The third study reveals that preferences for gesture combinations appear when single gestures have an abstract gesture-command mapping.
Carlos E. Tejada, Jess McIntosh, Klaes Alexander Bergen, Sebastian Boring, Daniel Ashbrook, Asier Marzo
While pressing can enable a wide variety of interesting applications, most press-sensing techniques operate only at close distances and rely on fragile electronics. We present EchoTube, a robust, modular, simple, and inexpensive system for sensing low-resolution press events at a distance. EchoTube works by emitting ultrasonic pulses inside a flexible tube, which acts as a waveguide, and detecting reflections caused by deformations in the tube. EchoTube is deployable in a wide variety of situations: the flexibility of the tubes allows them to be wrapped around and affixed to irregular objects. Because the electronic elements are located at one end of the tube, EchoTube is robust, able to withstand crushing, impacts, water, and other adverse conditions. In this paper, we detail the design, implementation, and theory behind EchoTube; characterize its performance under different configurations; and present a variety of exemplar applications that illustrate its potential.
10:20 - 10:40
10:40 - 12:20
Chair: Sayan Sarcar
Shota Yamanaka, Hiroaki Shimono, Homei Miyashita
To achieve better touch GUI designs, researchers have studied optimal target margins, but state-of-the-art results on this topic have been obtained only for medium-sized touchscreens. In this study, to better determine practical target arrangements for smartphone GUIs, we conducted four experiments testing the effects of gaps between targets on touch-pointing performance. Two lab-based and two crowd-based experiments showed that wider gaps tend to help users reduce task completion time and the rate of accidental taps on unintended items. For 1D and 2D tasks, the results of the lab-based study with a one-handed thumb operation style showed that 4-mm gaps were the smallest size that removed the negative effects of surrounding items. The results of the crowd-based study, however, indicated no specific gap size that completely removed the negative effects.
Sven Mayer, Huy Viet Le, Markus Funk, Niels Henze
With smartphones being a prime example, touchscreens have become one of the most widely used interfaces for interacting with computing systems. Compared to other touchscreen devices, smartphones pose additional challenges, as the hand that interacts with the device is commonly also used to hold it. Consequently, determining how fingers of the hand holding the device can interact with the screen is a non-trivial challenge. A body of recent work investigated the comfortable area in controlled lab studies. This poses limitations, as it rests on the assumption that the grips used in the studies are representative of normal smartphone use. In this paper, we extend previous work by providing insights from in-the-wild studies using two different apps deployed in the Android App Store. Comparing our results with previous work, we confirm that our data fits previously proposed models. Further analyzing the data, we highlight the sweet spot: the position that is touched when input can be performed anywhere on the screen.
Hiroki Usuba, Shota Yamanaka, Homei Miyashita
In graphical user interfaces, users can select multiple objects simultaneously via lasso selection. This can be implemented, for example, by having users select objects by looping around their centers or their entire areas. Because lassoing criteria differ, we presume that their performance also differs. In this study, we compare three lassoing criteria and model lassoing motions that take these criteria into account. We conducted two experiments in which participants steered through straight-line and one-loop paths under the three criteria. The participants handled the lassoing criteria correctly and performed lassoing at speeds appropriate to each path shape. Although the drawn trajectories varied depending on the lassoing criterion, the criteria did not differ significantly in performance or subjective evaluations. Additionally, from our results, we build a baseline model that predicts movement time while accounting for the lassoing criteria. We also discuss further experiments to predict movement time under more complex conditions.
Nicole Ke Chen Pong, Sylvain Malacria
Revealing a hidden widget with a dedicated sliding gesture is a common interaction design in today's handheld devices. Such ''Swhidgets'' (for swipe-revealed hidden widgets) provide fast (and sometimes unique) access to some commands. Interestingly, swhidgets do not follow conventional design guidelines in that they have no explicit signifiers: users have to discover their existence before being able to use them. In this paper, we discuss the benefits of this signifier-less design and investigate how iOS users deal with this type of widget. We report the results of a laboratory study and an online survey investigating iOS users' experience with swhidgets. Our results suggest that swhidgets are moderately but unevenly known by participants, and that the awareness and discoverability issues of this design are worthy of further discussion.
Martin Mihajlov, Mark Springett, Effie Lai-Chong Law
Considering the potential physical limitations of older adults, the naturalness of touch-based gestures as an interaction method is questionable. If touch-based gestures are indeed natural, they should be highly learnable and amenable to enhancement through social interaction. To investigate whether social interaction can enhance the learnability of touch gestures for older adults with low digital literacy, we conducted a study with 42 technology-naïve participants aged 64 to 82. They were paired and encouraged to play two games on an interactive tabletop, with the expectation that they would use the drag gesture to complete the games socially. We then compared these results with a previous study of technology-naïve older adults playing the same games individually. The comparisons show that dyadic interaction had some benefits for the participants, helping them become comfortable with the drag gesture through negotiation and imitation. Further qualitative analysis suggested that playing in pairs generally helped learners comfortably explore the digital environment using the newly acquired skill.
15:40 - 17:00
Chair: Fabrice Matulic
Alix Ducros, Clemens N. Klokmose, Aurelien Tabard
Situated sketching and enactment aim at grounding designs in the spatial, social, and cultural practices of a particular place. This is particularly relevant when designing for public places in which human activities are open-ended, multi-faceted, and difficult to anticipate, such as libraries, train stations, or commercial areas. To investigate situated sketching and enactment, we developed Ébauche, which enables designers to collaboratively sketch interfaces, distribute them across multiple displays, and enact use cases. We present the lessons learned from six situated sketching and enactment workshops on public displays with Ébauche, as well as the results of a controlled study with 8 pairs of designers who used both paper and Ébauche. We describe the various ways in which participants leveraged the place, and how paper or Ébauche influenced the integration of their designs into it. Looking at the design outcomes, our results suggest that paper leads to broader exploration of ideas and deeper physical integration in the environment, whereas Ébauche leads to more refined sketches and more animated enactments.
Veronika Domova, Alvaro Aranda Munoz, Elsa Vaara, Petra Edoff
This paper describes an explorative design study conducted within a collaborative research project in the district heating domain. As part of the project, we arranged extensive field studies at two power plants to understand the workflows, problems, and needs of industrial operators. We relied on the knowledge gained to design and develop novel visual interfaces that communicate the overall status of the district heating system at a glance, aiming to explore potential directions and alternatives beyond conventional industrial interfaces. One particular aspect of our research concerned how the physicality of the underlying industrial processes can be expressed by purely visual means. The paper introduces three high-fidelity prototypes demonstrating the novel visualizations developed, and explains the design choices made, namely how the selected visual encodings relate to the requirements of the industrial operators' tasks. A preliminary evaluation indicates industrial operators' interest in the designed solutions. Future work will incorporate an extensive qualitative evaluation on site.
Arnaud Prouzeau, Antoine Lhuillier, Barrett Ens, Daniel Weiskopf, Tim Dwyer
In immersive display environments, such as virtual or augmented reality, we can make explicit the connections between data points in visualisations and their context in the world, or in other visualisations. This paper considers the requirements and design space for drawing such links in order to minimise occlusion and clutter. A novel possibility in immersive environments is to optimise the link layout with respect to a particular point of view. In collaborative scenarios there is the need to do this for multiple points of view. We present an algorithm to achieve such link layouts and demonstrate its applicability in a variety of practical use cases.
Yea-Seul Kim, Nathalie Henry Riche, Bongshin Lee, Matthew Brehmer, Michel Pahud, Ken Hinckley, Jessica Hullman
Externalizing one's thoughts can be helpful during data analysis, such as when one marks interesting data, notes hypotheses, or draws diagrams. In this paper, we present two exploratory studies conducted to investigate the types and use of externalizations during the analysis process. We first studied how people take notes during different stages of data analysis using VoyagerNote, a visualization recommendation system augmented to support text annotations and coupled with participants' favorite external note-taking tools (e.g., word processor, pen & paper). Externalizations manifested mostly as notes written on paper or in a word processor, with annotations atop views used almost exclusively in the initial phase of analysis. In the second study, we investigated two specific opportunities: (1) integrating digital pen input to facilitate free-form externalizations and (2) providing more explicit linking between visualizations and externalizations. We conducted the study with VoyagerInk, a visualization system that enables free-form externalization with a digital pen as well as touch interactions to link externalizations to data. Participants created more graphical externalizations with VoyagerInk and revisited over half of their externalizations via the linking mechanism. Reflecting on the findings from these two studies, we discuss implications for the design of data analysis tools.
18:00 - 18:30
10 Year Impact Award Presentation
18:30 - 20:00
With a Korean traditional music performance by Chong Ah Yul.