That one there! Pointing to establish device identity (2002)

Colin Swindells, John C. Dill, Melanie Tory
Simon Fraser University

Kori M. Inkpen
Dalhousie University




This paper deals with the issue of human-computer identification. As the number of computing devices per person increases, so does the number of entries in wireless network lists, making it difficult for a person to select another computer or device to send information to. Traditionally, the target device's name is selected from a list of all visible devices on the network. As more and more devices are added to that list (often with non-descriptive names), it becomes hard for a person to pick the correct one. This trend contrasts with the increasing ease with which computers can automatically enter, exit, and re-identify previously connected devices. The paper's solution to this identification problem is pointing: the user simply points at the device they want to connect to, and the computer identifies the target and connects. The paper presents a device called the gesturePen, which sends an IR signal to tags installed on the target devices. By pointing the pen at a device, its ID can be acquired and a connection established with little effort.

The paper describes some similar point-to-identify solutions and points out that all of the others constantly broadcast device IDs, which can still overwhelm the user. The gesturePen system's tags, by contrast, "are only activated when ‘pinged’ by the gesturePen."
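
To make the interaction concrete, here is a minimal sketch of the point-to-identify exchange as I understand it: the pen pings whatever tag it is pointed at, and only then does the tag reveal its device ID. All class and method names below are my own illustration, not the paper's implementation.

class IRTag:
    """Tag mounted on a target device; stays silent until pinged over IR."""
    def __init__(self, device_id):
        self.device_id = device_id

    def on_ping(self):
        return self.device_id           # reply with the ID only on demand


class GesturePen:
    """Hand-held pen that pings whatever tag it is pointed at."""
    def identify(self, tag):
        return tag.on_ping()            # line-of-sight IR ping


pen = GesturePen()
projector_tag = IRTag("meeting-room-projector")
print(pen.identify(projector_tag))      # ID to hand to the normal network stack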

Liquids, Smoke, and Soap Bubbles – Reflections on Materials for Ephemeral User Interfaces (2010)

Axel Sylvester
University of Hamburg

Tanja Döring, Albrecht Schmidt
University of Duisburg-Essen



This is a short paper intended to "provoke thoughts about durability, control, and materiality of tangible user interfaces" by introducing the concept of an "ephemeral user interface" composed of transient materials (liquid, smoke, and soap bubbles) that eludes complete user control by demanding that the inputs be treated delicately, since the bubbles will inevitably burst. The user interacts with a computer system by generating and then manipulating soap bubbles, which can be empty or filled with smoke. The interaction surface is a dark liquid on which the bubbles land after being generated.

This work is motivated by the increasing presence of computing in our everyday tasks and the lack of research on the materials used for interaction, despite studies illustrating "the importance of materials and materiality for humans." By using materials as unusual and transient as smoke and soap bubbles, this work easily provokes thought about the range of materials and "handles" that could be used for interaction, precisely through its unusualness and "contradiction to ordinary technical and durable materials of computer technology."

Soap bubbles are highly symbolic and therefore are relevant to many fields including science, art, and entertainment. A fascination with soap bubbles occurs when viewing them as "'in-between' spaces - spaces that are neither real nor fully virtual." This is easily applied and understood from a computing interface perspective.

The system consists of a small, dark, round pool of liquid with a camera beneath it tracking the bubbles, which are blown onto the surface of the liquid from above. Either empty or smoke-filled bubbles can be generated, and an overhead projector can illuminate the bubbles.

Once on the surface of the liquid, the bubbles can be moved either by blowing on them or by gently touching them. In one application, the size of a bubble determines the brightness of the ambient light in the room, and its x and y coordinates control the red and blue hues of that light.
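
As a rough sketch of that mapping (my own guess at reasonable ranges and scaling, not values from the paper), the tracked bubble's size and position could drive the room light roughly like this:

def ambient_light_from_bubble(x, y, radius, max_radius=1.0):
    """Map a tracked bubble (normalized x, y in [0, 1] and radius) to an RGB color."""
    brightness = min(radius / max_radius, 1.0)     # bigger bubble -> brighter room
    red = x                                        # x position -> red hue
    blue = y                                       # y position -> blue hue
    return (int(255 * red * brightness), 0, int(255 * blue * brightness))

print(ambient_light_from_bubble(0.25, 0.8, 0.5))   # a mid-sized bubble near one edge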

The researchers see this as a playful, entertaining, yet useful interaction mechanism as computing is further integrated into our everyday lives. For example, the paper suggests "a growing demand for user interfaces for services where specific and accurate control is not necessary and playful interaction with diverse materials suits the situation well." To illustrate this concept, the paper also suggests the concept of "buttons on demand" which could use these ephemeral materials or simple ambient displays.

----------

I like the ideas that this paper provokes. I hadn't thought of such approaches to tangible user interfaces; I tend to look at currently available hardware and think of ways to use it for interfaces. This paper inspires me to think about alternative materials, input methods, and devices.

Toward Natural Gesture/Speech HCI: A Case Study of Weather Narration

Indrajit Poddar, Yogesh Sethi, Ercan Ozyildiz, Rajeev Sharma
Pennsylvania State University



This paper discusses the limitations of current gesture recognition, claiming that the restrictions imposed in most work undermine the naturalness of the interaction. The authors therefore impose no restrictions on the user: they analyze videos of weathermen, a domain they argue is analogous to HCI. They employ computer vision techniques to locate the person's head and hands and extract five features for each hand (distances, angles, and velocities).
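
The paper's exact five features are not something I can reproduce here, but a per-hand feature vector built from distances, angles, and velocities relative to the tracked head might look roughly like this (purely illustrative):

import math

def hand_features(hand_xy, prev_hand_xy, head_xy, dt=1 / 30):
    """Illustrative five-element feature vector for one hand."""
    hx, hy = hand_xy
    px, py = prev_hand_xy
    cx, cy = head_xy
    dist_to_head = math.hypot(hx - cx, hy - cy)    # distance between hand and head
    angle_to_head = math.atan2(hy - cy, hx - cx)   # angle of the hand relative to the head
    vx = (hx - px) / dt                            # horizontal hand velocity
    vy = (hy - py) / dt                            # vertical hand velocity
    speed = math.hypot(vx, vy)                     # overall hand speed
    return [dist_to_head, angle_to_head, vx, vy, speed]

print(hand_features((320, 240), (310, 238), (300, 100)))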

They use an HMM to recognize the gestures and have defined possible causal models. The speech was also analyzed in conjunction with the hand gestures to try to improve correctness and accuracy in recognizing the gestures.

To begin with, three main types of keywords were considered: "here," which refers to a specific point; "direction," which can be something like east(ern) or north(ern); and "location," which is a proper-noun form of "here." Three classes of gestures were defined: contour, area, and point. The speech was analyzed in conjunction with the gesture to determine when the keywords were spoken: before, during, or after the gesture.

Analysis of the speech and gestures shows that relevant keywords are spoken during the gesture the majority of the time, and sometimes shortly after it. The speech can therefore be used both to classify and to verify the gesture. Comparing recognition on video alone against video plus speech shows higher correctness and accuracy when speech is included.
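
A hedged sketch of how that speech evidence might be used to verify the HMM's gesture label: keep the label only if a consistent keyword was spoken during (or shortly after) the gesture. The keyword sets and timing window below are assumptions of mine, not the paper's actual values.

KEYWORDS = {
    "point":   {"here", "location"},
    "contour": {"along", "front"},
    "area":    {"region", "eastern", "northern"},
}

def speech_supports_gesture(hmm_label, words_with_times,
                            gesture_start, gesture_end, after_window=0.5):
    """Return True if a keyword consistent with the HMM's label co-occurs with the gesture."""
    for word, t in words_with_times:
        in_window = gesture_start <= t <= gesture_end + after_window
        if in_window and word in KEYWORDS.get(hmm_label, set()):
            return True         # speech confirms the gesture class
    return False                # no supporting keyword; treat the label as uncertain

print(speech_supports_gesture("point", [("here", 2.4)], 2.1, 3.0))   # -> True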

Though the accuracy of this system is considerably lower than that of other gesture recognition systems, the authors claim it is much more natural, since the subjects were not participating in a user study at all; they were simply speaking and gesturing naturally. The authors state that this study can "serve as a basis for a statistical approach for more robust gesture/speech recognition for natural HCI."

----------

As someone who is currently working on a hand gesture recognition project (using the data glove), I am thinking about the implications of this work for my own project. We are currently envisioning a very limited gesture set, though we have been thinking about how gestures differ among users. We have been planning a user study to determine which specific gestures to use, but this paper makes me think of eventually extending my current work in a much more natural direction, where each user can perform whatever gesture feels natural, undefined by the game, and the system responds uniquely to that user. That could make for an interesting system, considering the domains we are targeting.

The Wiimote with multiple sensor bars: creating an affordable, virtual reality controller (2009)

Torben Sko, Henry Gardner
Australian National University



This paper discusses using a Wii remote as a viable way to control a virtual reality system: multiple sensor bars are used to define a much larger field of view for the remote, so that the Wii remote can be used across a surround display.

Five sensor bars were arranged vertically in front of the user, as shown in the accompanying video. Software allows the Wii remote to "bunny hop" from one sensor bar to another, since the remote can only track four IR sources at a time and each sensor bar contains two IR LEDs.
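
My rough understanding of the "bunny hop" is that the software works out which sensor bar's LED pair is currently in the camera's view and maps the pointing offset into that bar's slice of the overall display. The constants and geometry below are illustrative assumptions, not the authors' implementation:

def cursor_x(active_bar, bar_center_x, cam_width=1024,
             bars_total=5, display_width=5000.0):
    """Map the camera offset from the currently visible bar to an x position on the full display."""
    slice_width = display_width / bars_total               # each bar covers one slice of the display
    offset = (cam_width / 2 - bar_center_x) / cam_width    # normalized pointing offset within the slice
    return (active_bar + 0.5) * slice_width + offset * slice_width

# When the current bar's LED pair drops out of the camera's view, the software
# "hops" to the next visible pair and updates active_bar, so tracking continues.
print(cursor_x(active_bar=2, bar_center_x=300))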

The researchers modified the Half Life 2 engine to create a game suitable for testing. The Wii remote was able to successfully track across the whole screen, allowing the user to play the game as normal.

The biggest limitation of this system stems from the Wii remote's "bunny hopping" behavior. Because the remote only knows where it is from the currently visible IR sources, it must be kept pointed at the screen at all times, which fatigues the user.

----------

I was impressed by the effectiveness of this method. The video clearly shows that the Wii remotes are quite adequate for precise aiming across the large two-walled display. I think the limitation imposed by the Wii remote technology is not a big issue, since more specialized systems could provide a controller that the user can set down rather than keep aimed at the IR sources. The main contribution of this paper is showing that a Wii-remote-style or gun-style free-hand aiming system for a surround screen can be implemented with inexpensive parts.

That being said, I would like to see them improve on this by allowing the user to rest and lower the controller away from the screen, whether they are able to do this with the Wii remote or some other hardware. Naturally, this demo makes me think of glove applications on this type of screen, though I don't have any specific ideas yet...

The Peppermill: A Human-Powered User Interface Device (2010)

Nicolas Villar and Steve Hodges
Microsoft Research, Cambridge, UK


This paper presents the Peppermill, a wireless, batteryless interaction device. The device is powered by the user's own action and sends out a digital signal only for the moment it is being operated.

The paper gives some background on user-powered devices, beginning with the Zenith Space Commander developed in 1955. The authors also mention MIT's user-powered button. Both devices share the limitation that power is generated, and therefore interaction happens, only on the down-press of a button. The authors aim to improve on this idea by providing a method for richer interaction.

The method they came up with is a rotary control that generates its own power when the user twists it. The user can twist the knob in two directions at varying speeds. A simple circuit detects the direction and speed, along with the state of three modifier buttons. This simple device is thus capable of quite rich interaction.

The authors give an example application: using the device to control a video browser. When no buttons are pressed, rotating the control cycles through a set of videos, much like changing channels on a TV; the speed of rotation controls how fast the videos cycle, and the direction of rotation sets the direction of cycling. When the green button is held down while the control is rotated, the volume is adjusted instead; again, the speed of rotation controls how quickly the volume changes, and the direction determines whether it goes up or down.
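
To summarize that mapping in code (a sketch with hypothetical names, not the authors' software), each reported twist could be dispatched roughly as follows:

def handle_peppermill_event(direction, speed, green_held, player):
    """direction: +1 or -1; speed: rotation rate (>= 0); green_held: state of the modifier button."""
    step = direction * speed              # faster twist -> bigger change; sign sets the direction
    if green_held:
        player.adjust_volume(step)        # green button held: turn the volume up or down
    else:
        player.cycle_videos(step)         # no modifier: flip through the set of videos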

The authors talked a bit about future work, most notably a method of providing haptic feedback to the user while the knob is being turned.

----------

I was intrigued by this control, not only because it is human-powered, but also because of its unique interaction style. When I first looked at how the device is used, I didn't even realize it was human-powered. I am impressed by the versatility of a device that has neither batteries nor a cord.

This method of interaction, especially without batteries, got me thinking about our own projects. I wonder if we could come up with a glove that is somehow human-powered. That would allow greater freedom of motion than the wired gloves, and it would not need batteries the way the wireless gloves do.