An Integral Illumination Device Using Heat-Resistant Hybrid Optics

Yuichiro Takeuchi, Kunihiko Nagamine
We present a high-output integral illumination device, capable of generating a wide array of programmable lighting effects with sufficient luminous intensity to provide ambient lighting for small indoor environments. The technical foundations of such devices have been laid out in prior work, but past implementations suffered from relatively low ceilings with regard to light output, which has so far limited their range of potential applications. The defining feature of our new setup is a custom, heat-resistant optical instrument that can easily be built using a combination of manual and digital fabrication.

AURA: Urban Personal Projection to Initiate the Communication

Miyo Okada, Laura Lugaresi, Dingding Zheng, Roshan L Peiris, Katrin Wolf, Cristian Norlin, Mikael Anneroth, Kai Kunze, Masa Inakage
We present the concept of AURA, an urban personal projection to initiate communication. In this work, we focus on how to break the ice with strangers through technology in urban public space. The AURA, an enlivened spiritual pet, floats around the user's feet. We introduce a simple interaction scenario: attracting a person who comes within personal distance of the user who carries the AURA. To break the ice between strangers, a projected butterfly serving as the AURA moves toward a person who comes within 2 m of the user, then back and forth to attract that person. We believe that an externalized interactive representation of the user in the form of a spiritual pet can ease and facilitate communication, serve as a conversation starter, and make interactions between people more fun.

Crafting Textiles of the Digital Era

Deepshikha, Pradeep Yammiyavar
Traditional methods of crafting with precious and semi-precious metals and embellishments have been selectively replaced to embed electronics that can make textile crafts dynamic. The idea is to merge smart materials seamlessly as part of the design elements, so that they can provide dynamicity when needed and otherwise remain neatly embedded as part of the textile motifs.

Dynamic, Flexible and Multi-dimensional Visualization of Digital Photos and their Metadata

Xin Huang, Kazuki Takashima, Kazuyuki Fujita, Yoshifumi Kitamura
Digital photos and their metadata are growing explosively, requiring better support for browsing them. We propose a dynamic and flexible visualization of digital photos and their metadata. A prototype is designed and implemented based on the algorithm of D-FLIP, our previous work on flexibly displaying large photo collections. We design various dynamic photo visualizations with up to four-dimensional meta-information using Bertin's visual variables and three-dimensional representation. This allows users to dynamically and effectively manage photo visualizations by selecting the meta-information that matches their current needs and interests.
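
The core idea of mapping several metadata dimensions onto distinct visual variables can be sketched as follows. This is an illustrative toy, not the D-FLIP implementation; the metadata keys, pixel ranges, and hue mapping are all assumptions made for the example.

```python
# Hypothetical sketch: mapping up to four photo metadata dimensions onto
# Bertin-style visual variables (x position, y position, size, color).
# Key names and visual ranges are illustrative, not from the D-FLIP system.

def normalize(values):
    """Scale a list of numeric metadata values into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def layout_photos(photos, x_key, y_key, size_key=None, color_key=None):
    """Map up to four metadata dimensions onto visual variables.

    Returns one dict of screen-space attributes per photo.
    """
    n = len(photos)
    xs = normalize([p[x_key] for p in photos])
    ys = normalize([p[y_key] for p in photos])
    sizes = normalize([p[size_key] for p in photos]) if size_key else [0.5] * n
    colors = normalize([p[color_key] for p in photos]) if color_key else [0.0] * n
    return [
        {"x": x, "y": y,
         "size": 20 + 60 * s,      # thumbnail size in pixels
         "hue": 240 * (1 - c)}     # e.g. blue (low) to red (high)
        for x, y, s, c in zip(xs, ys, sizes, colors)
    ]

photos = [
    {"year": 2015, "month": 6, "rating": 3, "faces": 1},
    {"year": 2017, "month": 1, "rating": 5, "faces": 4},
    {"year": 2019, "month": 9, "rating": 1, "faces": 0},
]
print(layout_photos(photos, "year", "month", "rating", "faces"))
```

Switching which metadata field feeds which visual variable is then a matter of changing the key arguments, which mirrors the paper's idea of letting users reselect meta-information on the fly.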

FlexFace: A Head Gesture Motion Display with Flexible Screen for Telecommunication

Miho Izuka, Kentaro Fukuchi
We propose a new telecommunication system that uses a flexible screen to display head motion gestures. Previous video-based telecommunication systems with a movable display used a flat, solid screen, and their presence felt abiotic. Traditional animation, on the other hand, offers the art of making solid objects appear to "animate". One of its most important principles is "squash and stretch", which deforms the shape of an object. This technique inspired the proposed system: we employ a flexible screen and deform it using four servo motors to represent biotic motion.

Generating Spherical Hyperlapse Videos via Recursive Intelligent Sampling for StratoJump

Keita Higuchi, Shunichi Kasahara, Kei Nitta, Yohei Yanase
StratoJump is an installation that allows participants to explore the experience of high-altitude jumping toward the edge of space. In this paper, we present a vision-based algorithm for generating hyperlapses of spherical videos captured at high altitude. Unlike previous work, we consider high-altitude videos that cannot be perfectly stabilized by 3D reconstruction-based algorithms. We propose recursive intelligent sampling to simultaneously stabilize and shorten spherical videos. Our preliminary results show that the proposed method reduces the cumulative stabilization error compared to a frame-by-frame method, in reasonable time.
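
The "intelligent sampling" family of hyperlapse methods typically selects frames by minimizing an alignment cost while keeping the skip rate near a target speed-up. The following is a minimal sketch of that general idea, not the authors' recursive algorithm; the cost function, window, and balance weight are placeholder assumptions (a real system would derive costs from inter-frame alignment of the spherical video).

```python
# Illustrative sketch of intelligent frame sampling for hyperlapse:
# dynamic programming chooses a low-motion-cost frame path from the first
# to the last frame while staying near a target speed-up factor.

def sample_frames(motion_cost, n_frames, target_step, window=2, balance=0.1):
    """Pick a low-cost frame path from frame 0 to frame n_frames - 1.

    motion_cost(i, j): alignment cost of jumping from frame i to frame j.
    target_step: desired average skip (speed-up factor).
    """
    INF = float("inf")
    best = [INF] * n_frames       # best[j]: min cost of a path ending at j
    prev = [-1] * n_frames
    best[0] = 0.0
    for j in range(1, n_frames):
        lo = max(0, j - (target_step + window))
        hi = max(0, j - max(1, target_step - window)) + 1
        for i in range(lo, min(hi, j)):
            # motion cost plus a penalty for deviating from the target skip
            c = best[i] + motion_cost(i, j) + balance * abs((j - i) - target_step)
            if c < best[j]:
                best[j], prev[j] = c, i
    path, j = [], n_frames - 1
    while j != -1:          # backtrack the optimal path
        path.append(j)
        j = prev[j]
    return list(reversed(path))
```

A recursive variant, as the abstract describes, would reapply such sampling over the already-selected frames to keep cumulative stabilization error from accumulating over long sequences.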

Handheld Haptic Interface for Rendering Size, Shape, and Stiffness of Virtual Objects

Yuqian Sun, Shigeo Yoshida, Takuji Narumi, Michitaka Hirose
PaCaPa is a handheld device that presents haptic stimuli on a user's palm when the user interacts with virtual objects using virtual tools. It is a box-shaped device with two wings. The device presents contact force and contact angle by opening and closing the wings based on the angle between the virtual tool and the hand. By changing these haptic sensations on the palm and fingers, it enables users to perceive different sizes, shapes, and stiffnesses of virtual objects.

JackIn Neck: A Neckband Wearable Telepresence System Designed for High Comfortability

Akira Matsuda, Kazunori Nozawa, Jun Rekimoto
We present a wearable telepresence system, JackIn Neck, that is worn on the neck and supports joint activities among a local user, a conversation partner who meets the local user, and a remote user of the system. While previous work demonstrated clear advantages of the neck as a location for wearable devices, a telepresence system tailored for such use had not been developed. JackIn Neck realizes this form factor by combining a camera with a fisheye lens, speakers, and microphones. Because our device is easy to put on and take off, as well as comfortable to wear, we see potential for our system to be adopted in the wild, for example in remote sightseeing and remote event participation.

MagicPAPER: Tabletop Interactive Projection Device Based on Kraft Paper

Qin Wu, Sirui Wang, Jiayuan Wang, Zixiong Qin, Tong Su
As the most common writing material in our daily life, paper is an important carrier of traditional painting, and it also has a more comfortable physical touch than electronic screens. In this study, we designed a shadow-art device for human-computer interaction called MagicPAPER, which is based on physical touch detection, gesture recognition, and reality projection. MagicPAPER consists of a pen, kraft paper, and several detection devices, such as AirBar, Kinect, LeapMotion, and WebCam. To make MagicPAPER more interesting, we developed more than a dozen applications that allow users to experience and explore creative interactions on a desktop with a pen and a piece of paper. Our user study showed that MagicPAPER received positive feedback from many different types of users, particularly children.

NiwViw: Immersive Analytics Authoring Tool

Dianna Yim, Alec McAllister, Caelum Sloan, Rachel Lee, Steven Vi, Teresa Van, Wesley Willett, Frank Maurer
Immersive analytics uses augmented and virtual reality technologies to better understand and interact with multi-dimensional data within a physical space. NiwViw is an application that allows non-technical users to create their own immersive visualizations. NiwViw currently supports the iPad, Microsoft HoloLens, and the HTC Vive.

OmniEyeball: An Interactive I/O Device For 360-Degree Video Communication

Zhengqing Li, Shio Miyafuji, Erwin Wu, Toshiki Sato, Hideaki Kuzuoka, Hideki Koike
We propose OmniEyeball (OEB), a novel interactive 360° image I/O system that combines a spherical display with an omnidirectional camera. We also present our experimental design of a user interface on the OEB, including a vision-based touch detection technique as well as several visual and interactive features. Our proposed techniques may help address the weak awareness of the opposite side of the spherical display, as well as the workload caused by walking around, in 360° symmetric video communication.

Printable Hydroponics: Digital Fabrication of Ecological Systems

Yuichiro Takeuchi
We demonstrate a technique to 3D print hydroponic systems that support the growth of various plant species. Our technique fabricates a landscape made entirely of plastic, and automatically attaches plant seeds at predesignated positions on its surface; by subjecting the printed, seed-attached landscape to water, light, and nutrient solutions, plants will eventually grow, creating a lush hydroponic "garden". Though currently tested only at modest scales, the technique is theoretically scalable, possibly to architectural and environmental scales. We expect the technique to provide a foundation on which a new field of 3D printing research and practice can be built: printable ecological systems.

Proposal of a Method of Directing by Pseudo-hologram Using Motion Body and Body Information

Masumi Kiyokawa, Kie Tsuchiya, Sae Itoh, Yasuo Kawai
In recent years, digital content has again been attracting attention, but many past exhibits have been passive, and we consider adding interactivity important. In this research, we propose a presentation method intended for uses such as exhibitions. Using an aquarium-type device we produced, two types of images are projected from different directions onto an underwater film and combined. One is an image of a moving object detected by a camera, with effects applied in real time; the other is an image with effects based on the user's physical information. With this method, the two types of images convey a sense of spatial depth, making a new dynamic presentation possible. We also conducted an evaluation study on the color of the projected images and selected the images to be used.

Real Time Animation of 3D Models with Finger Plays and Hand Shadow

Amato Tsuji, Keita Ushida, Qiu Chen
In this paper the authors report a method for animating 3D models with finger plays and hand shadows. In preparation, the motions of the models are associated with motions of the hands. Appropriate associations based on finger plays and hand shadows provide intuitive operation; for example, the user makes a hermit crab's claw pinch by pinching with the index and middle fingers. When using the system, the user does not need to wear sensors or gloves. The method is easy enough to operate that children can animate 3D models. In the evaluation, participants were mostly positive toward the method.

Real-time Visual Feedback for Golf Training Using Virtual Shadow

Atsuki Ikeda, Dong-Hyun Hwang, Hideki Koike
In this work, we propose a golf training system using real-time visual feedback. The system projects a virtual shadow of the user on the ground in front of the user; this shadow provides feedback without disrupting the user's form. Additionally, an expert's contour is overlaid on the user's virtual shadow to make the user aware of the difference between their form and the expert's. With this system, the user can receive feedback while keeping the face orientation of an actual golf swing; such feedback cannot be realized by systems that project feedback on a wall. Moreover, because the system imitates the shadows already used in conventional golf training, the cost of adapting to it can be reduced.

Real-Virtual Bridge: Operating Real Smartphones from the Virtual World

Tomomi Takashina, Mika Ikeya, Tsutomu Tamura, Makoto Nakazumi, Tatsushi Nomura, Yuji Kokumai
Modern industrial products should incorporate inclusive designs, be sustainable, and be capable of addressing emerging usage. The adaptation interface is a supplementary approach to meeting these targets. To make capacitive touchscreens adaptive, a method to simulate finger touch is needed. Takashina and others have proposed quasi-touch, which allows a touch panel to register touches electrically without using fingers. Virtual reality, after its success in entertainment, is now expected to find practical applications. In such applications, a mechanism to connect real objects and virtual objects is needed. We call this mechanism a real-virtual bridge, which is a form of an adaptation interface. As a simple example, a user might want to operate their smartphone while working in the virtual world. We therefore applied quasi-touch to allow users to operate real smartphones from virtual worlds, and constructed a proof-of-concept prototype. In a preliminary evaluation, a participant played a real smartphone game in the virtual space and reported that he felt as if he had played the game on a real smartphone.

Scalable Autostereoscopic Display for Interaction with Floating Images

Tadatoshi Kurogi, Hideaki Nii, Roshan L Peiris, Kouta Minamizawa
The purpose of this paper is to introduce scalable autostereoscopic displays that allow users to interact with floating images in public settings, such as digital signage. By tiling multiple display modules, it is possible to project larger autostereoscopic images in the air than with a single display module. By projecting autostereoscopic images, the user can understand the content in better detail than with standard 2D public displays. Furthermore, by projecting floating images, it is possible to reduce the spatial divergence between the user's body and the autostereoscopic images, so that the user can interact with them intuitively. Moreover, since the system is scalable, the screen size can be expanded just by adding display modules; this expandability makes the system easy to move and arrange in public spaces. In this paper, we propose an autostereoscopic display module that can enlarge the screen size in the horizontal direction, as a first step toward realizing our concept.

SNS Door Phone as Robotic Process Automation

Toru Kobayashi, Ryota Nakashima, Rinsuke Uchida, Kenichi Arai
We developed SNS Door Phone by turning an interphone system into an IoT device, integrating SNS and a QR-code recognition function. Thanks to the SNS connection, users can learn of a visit from a parcel delivery service through SNS even while away from home. Thanks to the QR-code recognition function, when a parcel deliveryman simply shows the parcel's QR code to SNS Door Phone, the re-delivery information is sent to the user automatically through SNS. The user can then arrange re-delivery from a smartphone without entering any additional data. We consider this kind of seamless re-delivery operation a good example of Robotic Process Automation.

SoundPond: Making Sound Visible and Intuitively Manipulable

Naoki Murata, Yoshinobu Tonomura
We present an interactive sound handling system called SoundPond, in which a sound unit is treated as a virtual sound object. A short recorded sound is visualized as an oval shape that can be changed by the user, leading to modification of the characteristics of the sound. Interactions with other sound objects and with drawn objects are also implemented. Each sound object is further affected by the pond in which it is placed. The proposed system attempts to provide intuitive and versatile interactions for rich sound handling. In the prototype system, basic functions are implemented using a multi-touch display, a microphone, and a pair of stereo speakers. Possible future application areas of SoundPond include graphical sound composition, sound performance, and a sound playground. We believe that SoundPond has the potential to be used for sound expression based on the creativity of the user.

Touchable Wall: Easy-to-Install Touch-Operated Large-Screen Projection System

Takefumi Hiraki, Masaaki Fukumoto, Yoshihiro Kawahara
Recently, small and inexpensive portable projectors have become common for presentations. For convenience, it is desirable to point directly at the screen, controlling the pointer intuitively without gripping or mounting any device. This study therefore proposes a new touch-operated large-screen projection system using acoustic vibration sensing and a projector-camera system. We apply a continuous inaudible-range signal from an actuator to a surface and capture changes in its elastic compliance as changes in acoustic resonance. We perform touch detection, position estimation, and pressure estimation using machine learning techniques. Our actuator and sensor units are small and easy to install on a surface. Furthermore, our system can detect "true" touch without requiring any devices such as pens or pointers. We demonstrate a series of novel interactions for a large-screen projection system with our proposed technology through three applications.
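
The sensing principle can be illustrated in miniature: a touch loads the surface and shifts its resonance, which appears as changed energy in a few frequency bands of the picked-up vibration. The band frequencies, sample rate, and nearest-centroid rule below are stand-ins for the paper's unspecified machine-learning pipeline.

```python
# Hypothetical sketch of resonance-based touch classification:
# extract per-band energies with single-bin DFTs, then label the feature
# vector with a nearest-centroid rule. All parameters are illustrative.

import math

def band_energy(signal, freq, rate):
    """Energy of `signal` at `freq` Hz via a single-bin discrete Fourier sum."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(signal))
    return (re * re + im * im) / n

def features(signal, rate, bands=(18000, 19000, 20000)):
    """Feature vector: energy in a few inaudible-range bands."""
    return [band_energy(signal, f, rate) for f in bands]

def nearest_centroid(x, centroids):
    """Label of the closest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))
```

In a real system the centroids (or a stronger classifier) would be trained from labeled touch and no-touch recordings, and additional regressors would estimate touch position and pressure from the same features.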

Ultrasound from Cyber Map: Intuitive Guidance Method using Binaural Parametric Loudspeaker

Ryunosuke Iwaoka, Masataka Moriga, Hisham Elser Bilal Salih, Keichi Zempo, Koichi Mizutani, Naoto Wakatsuki
People with visual impairment face difficulty in reaching their desired destinations. To secure safe and independent mobility, they must rely on non-visual information about the locations they are traversing. This paper proposes a system that automatically presents a sound image directly from the arrival point, independent of the installation position of the sound source. The system presents audible cues in the direction of the user's destination without depending on the loudspeaker location. We measured the success probability of guiding users toward a fixed destination with the proposed method, and the results revealed a success probability of about 90%. The system thus presents the direction of the destination intuitively and easily, by presenting the guidance sound image only to the person who needs the information.
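
The guidance geometry behind such a system can be sketched as follows: compute the bearing of the destination relative to the user's heading, then derive a binaural cue such as an interaural time difference (ITD). The Woodworth ITD model and all constants here are illustrative assumptions; the paper's actual binaural rendering pipeline is not specified.

```python
# Hypothetical sketch of destination-direction guidance:
# bearing of the destination relative to the user's heading, mapped to a
# frontal-plane ITD via the classic Woodworth approximation.

import math

HEAD_RADIUS = 0.0875     # average head radius in meters (assumed)
SPEED_OF_SOUND = 343.0   # m/s

def relative_bearing(user_xy, heading_deg, dest_xy):
    """Angle of the destination relative to the user's heading, in degrees.

    0 = straight ahead, positive = to the right, result in (-180, 180].
    Heading and bearings use a compass convention (0 deg = +y axis).
    """
    dx, dy = dest_xy[0] - user_xy[0], dest_xy[1] - user_xy[1]
    absolute = math.degrees(math.atan2(dx, dy))
    rel = (absolute - heading_deg + 180.0) % 360.0 - 180.0
    return 180.0 if rel == -180.0 else rel

def itd_seconds(bearing_deg):
    """Woodworth frontal-plane ITD: (r / c) * (theta + sin(theta))."""
    theta = math.radians(max(-90.0, min(90.0, bearing_deg)))
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))
```

Delaying one ear's signal by the computed ITD (roughly 0.65 ms at 90°) shifts the perceived sound image toward the destination, independent of where the parametric loudspeaker is physically installed.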