3DM: A Three Dimensional Modeler Using a Head-Mounted Display (Butterworth et al., 1992)

This paper describes a 3D modeling environment that uses a head-mounted display (with a small screen for each eye) to immerse the user in a 3D world. The user can fly through the 3D geometry and scale himself up or down to get different views of the world.

A floating toolbar is always present in the user's view. It contains functions for creating, editing, and viewing geometry as well as some other basic functions.

A 6D mouse lets the user navigate and work with the 3D models more accurately.

It is reported that the system is useful for rapid prototyping, especially of organic shapes such as trees and rocks. 3DM is not well suited to creating geometry with specific constraints (such as those in CAD programs), as it does not support constraints at all.

------------

I like this work, as it seems to pioneer the virtual reality 3D navigation and modeling that we have seen in later work, such as HoloSketch. The system seems to be aimed at novices who want to rapidly create a 3D model to prove a point or illustrate something. As mentioned in the paper, it does not have any built-in constraints, though a snap-to grid was added in response to user feedback on this issue.

I think this work is a good base and can be (and has been) expanded on in future work.

No specific user studies were discussed. Instead, the authors report some feedback from "users," whoever they may be.

Comments: Paul, Kevin

TIKL: Development of a Wearable Vibrotactile Feedback Suit for Improved Human Motor Learning (2007)

by Jeff Lieberman and Cynthia Breazeal of the Department of Media Arts and Sciences, Massachusetts Institute of Technology
http://robotic.media.mit.edu/projects/robots/tikl/tikl.html

When learning a motor skill, such as in rehabilitation or a dance class, people learn through several channels: sight, sound, and touch. The most difficult channel for instruction is touch, since the instructor needs to be performing the action at the same time and cannot guide all of the student's joints at once. This research aims to create a suit embedded with vibrotactile actuators at each joint that gives corrective feedback to help teach the motor skill to the student.

The idea is that the instructor wears a suit with motion sensors and is tracked by a tracking system. The student also wears a tracked suit, equipped with motion sensors and vibrotactile actuators. As the student mimics the instructor's movements, the suit vibrates at the joints that are positioned incorrectly, with intensity proportional to the error in posture.

The suit they have created covers the wrist, elbow, shoulder, and chest. Motion sensors tracked by a Vicon system are used to model the subject's arm. The vibrotactile actuators are placed around the wrist and elbow joints, four around each joint. When the control software detects that the subject is not positioned correctly, the actuators vibrate in a specific pattern. For rotational correction, the actuators fire in sequence clockwise or counterclockwise around the joint to simulate a torque on it. For joint angle correction, two of the actuators (on the top and bottom of the wrist) vibrate, with the one in the direction the hand must move vibrating more intensely. The intent is to create a vibrational "force field" around the proper joint angles.

The suit is connected to a computer, where control software processes the motion capture data from the teacher and the student (the user) to determine the student's errors. The error data is sent to a custom hardware controller, which transforms it into the appropriate vibrations.
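The paper does not give the control code here, but a minimal sketch of such an error-to-vibration mapping might look like the following (the function names, gains, and deadband are my assumptions, not the authors' implementation):

```python
# Hypothetical sketch of a TIKL-style error-to-vibration mapping; not
# the authors' code. Joint errors are assumed to be in radians.

def joint_angle_feedback(error, gain=2.0, deadband=0.05, max_intensity=1.0):
    """Map a hinge-joint angle error to intensities for the two
    opposing actuators (e.g., top and bottom of the wrist). The
    actuator on the side the limb must move toward vibrates harder,
    proportional to the error; errors inside the deadband produce no
    vibration, forming the "force field" around the target angle."""
    if abs(error) < deadband:
        return 0.0, 0.0          # within tolerance: no feedback
    intensity = min(max_intensity, gain * abs(error))
    return (intensity, 0.0) if error > 0 else (0.0, intensity)

def rotation_feedback(error, n_actuators=4, period=0.4):
    """Map a rotational error to a firing sequence that steps the
    actuators clockwise or counterclockwise around the joint to
    simulate a torque. Returns (actuator_index, start_time) pairs."""
    order = range(n_actuators) if error > 0 else reversed(range(n_actuators))
    step = period / n_actuators
    return [(i, k * step) for k, i in enumerate(order)]
```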

A user study was performed with 40 subjects. Twenty received visual correction only, while the other 20 received visual correction plus vibrotactile feedback. Each subject was shown the same videos of an instructor posing in specific positions. The videos were shown in random order and were only a few seconds long, to ensure that no subject could perfectly mimic every pose.

In a questionnaire given immediately after the test, subjects who had the vibrotactile feedback felt that it didn't help much at the beginning but might help over time if they kept using the system. The vibrotactile group also reported more fatigue than the non-vibrotactile group.

The sessions were video recorded, and analysis of the video shows enhanced performance with vibrotactile feedback. The error rates computed by the control software were consistently lower in all trials.

Hinge-joint corrections were found to be easier to understand than rotational-joint corrections: error rates for hinge-joint correction were much lower for the vibrotactile feedback group, while error rates for rotational-joint correction were not significantly different between the groups.

Overall, the study shows a significant gain in performance (up to 27%) and accelerated learning (up to 23%). The authors point out that the positions of the vibrotactile actuators and the feedback methods were not optimized, so they expect these numbers to improve as the technology is refined and a full-body suit is created.

For future work on the technology itself, Lieberman and Breazeal plan to find a lower-cost relative positioning system to aid adoption of the technology. They also want to develop or find smaller, more powerful tactile actuators, and to increase the performance and learning gains for rotational joints. They wonder whether the system can be scaled up to the full human body, since that would require around 100 actuators, and it is unclear whether a wearer could process that much simultaneous feedback. They also want to study the long-term effects of the system.

In other future work, they want to test the system with reduced or no visual feedback, so it could be used by visually impaired or blind people. They also want to explore neurological rehabilitation and posture correction.

------------

I thought this work was very interesting and a great use of vibrotactile technology. It got me thinking about uses for this technology beyond the project Manoj and I are currently working on with our vibrotactile gloves. I especially liked the possibility of using the suit to help blind people perform certain tasks or learn certain motor skills, since that is related to my own work: helping blind people see drawings and images, and also draw, using vibrotactile feedback to the fingers.

I was impressed that the system worked well in its initial unoptimized, simplified form (covering just one arm). I hope the full suit will perform on par with or better than it did in this initial study.

I was wondering why they used video and images of the instructor in the user study rather than a live instructor. I would be interested to see how the suit performs with a human instructor. This would be a different form of visual stimulus, and might be superior.

Comments: Kevin, Josh, Paul

HoloSketch: a virtual reality sketching/animation tool

by Michael F. Deering
Sun Microsystems Computer Corporation

HoloSketch is an attempt at creating a 3D modeling and animation interface that provides a more direct form of control over 3D models. It uses stereo shutter glasses with a CRT display refreshing at 112.9 Hz; the glasses alternately block each eye so that each eye gets its own view, so to speak, and the user perceives the models in 3D. A six-axis mouse provides a direct method of manipulation, in an attempt to improve on the 2D mouse and its approximation of 3D manipulation. The head position is also tracked, so moving around a 3D object gives, to some degree, an alternate view of it, similar to the real world.

The HoloSketch system allows creation of simple 3D primitives and incorporates simple animation gadgets so that novice users can create models and animations easily. A keyboard is used in conjunction with the 3D mouse, or "wand," as it is called, and in some modes the conventional mouse is used as well.

The menu system was reworked for the HoloSketch project. Since the interface is three-dimensional, a conventional menu system would be visually intrusive and would take away valuable rendering time from the 3D geometry, which has to be rendered in real time on the much more limited hardware of the mid-1990s. The new 3D menu system is similar to a context menu: a radial (pie) menu containing many options that are selected with the wand.
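As a side note, selecting from a radial menu like this essentially reduces to an angle computation. A minimal sketch (my own construction, not Deering's code) might be:

```python
import math

# Hypothetical sketch of picking an item from a radial (pie) menu
# with the wand; my own construction, not Deering's code. The item
# is chosen by the angle of the wand tip around the menu center.

def radial_menu_item(wand_x, wand_y, center_x, center_y,
                     n_items, min_radius=0.02):
    """Return the index (0..n_items-1) of the menu item the wand
    points at, or None while the wand is in the central dead zone."""
    dx, dy = wand_x - center_x, wand_y - center_y
    if math.hypot(dx, dy) < min_radius:
        return None              # too close to the center to choose
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_items))
```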

Instead of a traditional user study with multiple participants performing specific tasks, an artist was employed to use the system for a month and give feedback. She responded positively, complaining only about a few minor interaction issues. It reportedly took a few days to learn to use the system well, and the author concluded that novices would be able to pick it up quickly.

------------

I like the attempt at direct 3D manipulation. The six-axis mouse looks like it would perform better than a 2D mouse; unexpected things sometimes happen when manipulating 3D models with a conventional mouse. I also like using head movement to change the view of the models.

It is really interesting that this work was done so long ago. I would like to see a modern instantiation of this research in combination with some of the other work we have seen. If we could combine HoloSketch with Vogel's freehand pointing work and MIT's BiDi screen, which senses in-air hand gestures, we could have a 3D modeling system in which you reach out and grab the 3D model with your bare hand and manipulate it (someone please tell me if this exists).

Comments: Kevin, Josh, Franck, Paul

Wearable EOG Goggles: Eye-Based Interaction in Everyday Environments

This work takes a different approach to eye tracking: goggles with attached electrodes that sense changes in the electric field of the eye. The paper describes the eye as a dipole with the cornea and the retina as the endpoints; when the eye moves, the resulting shift in the electric field is picked up by the electrodes. Their demo shows the goggles recognizing eight directions of movement: up, down, left, right, and the four diagonals.
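The signal processing is not detailed here, but classifying the two EOG channels into those eight directions could look roughly like this sketch (the channel layout and threshold value are assumptions on my part):

```python
# Hypothetical sketch of mapping two EOG channels to the eight
# directions the demo recognizes; thresholds and channel layout are
# my assumptions. EOG voltage changes roughly linearly with gaze
# angle: one electrode pair gives a horizontal channel, another a
# vertical channel.

def classify_saccade(dh, dv, threshold=50e-6):
    """Classify an eye movement from the change in the horizontal
    (dh) and vertical (dv) EOG signals, in volts. Returns 'left',
    'right', 'up', 'down', a diagonal like 'up-right', or None if
    the change is below the noise threshold."""
    horiz = 'right' if dh > threshold else 'left' if dh < -threshold else ''
    vert = 'up' if dv > threshold else 'down' if dv < -threshold else ''
    if not (horiz or vert):
        return None              # too small: treat as noise or drift
    if horiz and vert:
        return f'{vert}-{horiz}' # diagonal movement
    return horiz or vert
```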

Their hardware is worn entirely on the user, with no wires going to a computer or other device. The sensors are attached to the goggles, and a DSP and some other hardware are attached to the goggles and to another wearable unit. The data is transferred to the computer over Bluetooth.

------------

I like the approach the authors are taking, using other technologies to sense eye movements. I would like to know how accurate these sensors are. Could these goggles be used for pointing, for example?

I would also like to see an expanded user study to really show what these goggles can do. The study they give seems to be just a confirmation that the device kind of works. I would like to see a study that really puts the device through its paces and discovers how accurate it is and what its effects are on users. If we know the accuracy, maybe more people can use this technology for different areas of research.

Comments: Kevin, Josh, Franck, Paul

Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays

This research aims to provide an interaction mechanism for large displays that allows users to interact with the display both from a distance and up close. The authors control the cursor with the hand by attaching reflective markers to a couple of fingers and to points on the hand and wrist, and tracking those points.

There are several "modes" of interaction. In the clicking mode, a click is performed either by pushing the index finger down in the air or by bringing the thumb up to the side of the hand. Since there is no tactile feedback for the index finger and little for the thumb, feedback is provided through on-screen animation and sound.

They have also devised a couple of different pointing modes. The first is a ray-casting mode that acts as if a pointer ray is coming out of the index finger. There is also a relative movement mode, as well as a hybrid that combines ray casting with relative movement.
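At its core, the ray-casting mode is just a ray-plane intersection from the tracked finger. A bare-bones sketch (my own, not the authors' implementation) follows:

```python
import numpy as np

# Hypothetical sketch of the ray-casting pointing mode: intersect a
# ray from the tracked index finger with the display plane. Names
# and parameters are my own, not the authors' implementation.

def raycast_cursor(finger_pos, finger_dir, plane_point, plane_normal):
    """Return the 3D point where the finger ray hits the display
    plane, or None if the finger points away from (or parallel to)
    the display. All arguments are 3-element numpy arrays."""
    denom = np.dot(finger_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None              # ray parallel to the display plane
    t = np.dot(plane_point - finger_pos, plane_normal) / denom
    if t < 0:
        return None              # display is behind the finger
    return finger_pos + t * finger_dir

# Example: finger 2 m in front of the display, pointing straight at it.
hit = raycast_cursor(np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, -1.0]),
                     np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```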

They found that their methods worked pretty well and had high accuracy, though the clicking and pointing modes cannot be used together.

------------

I think this type of interaction is really interesting and cool. I really like touch interaction, and this extension of touch to control from a distance is interesting, especially because the same interaction that works at a distance also works up close when the display is touched. No other systems allow such an easy transition from far away to up close.

This research made me think of the CyberTouch gloves I am currently working on with Manoj. We already have the ability to get fingertip and wrist locations, and this particular glove has the added benefit of vibrotactile feedback. We are currently working with 2D and 3D tracking, so we can do things like this here in our lab, which is exciting to me.

Comments: Kevin, Josh, Manoj, Franck, Paul, Sashi

Noise Tolerant Selection by Gaze-Controlled Pan and Zoom in 3D

This work uses eye trackers to provide a gaze interface that navigates a 3D-like environment by panning and zooming. The approach is tolerant of noise introduced into the system, and unlike other gaze systems it does not rely on dwell time to make selections.

An application called StarGazer provides the 3D interface used for the gaze tests. It presents a circular keyboard that the user pans and zooms through to select letters, and thus to type using only the eyes.
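The paper's exact control law is not given here, but the dwell-free pan/zoom idea can be sketched as a continuous camera update in which the gaze offset from the screen center drives panning while the camera zooms steadily toward the gaze point (all gains below are my assumptions):

```python
# Hypothetical sketch of a dwell-free gaze pan/zoom loop in the
# spirit of StarGazer; the gains and structure are assumptions, not
# the paper's implementation. Looking off-center pans the view
# toward the gaze point while the camera zooms in continuously, so a
# letter is selected by keeping it near the center until it is
# reached, with no explicit dwell timer.

def update_camera(cam_x, cam_y, zoom, gaze_x, gaze_y, dt,
                  pan_gain=2.0, zoom_rate=0.5):
    """One control step. gaze_x/gaze_y are the gaze point in view
    coordinates with (0, 0) at the screen center."""
    # Pan velocity is proportional to how far the gaze is off-center,
    # so noisy gaze samples only slow convergence rather than causing
    # wrong selections, consistent with the noise tolerance above.
    cam_x += pan_gain * gaze_x * dt / zoom
    cam_y += pan_gain * gaze_y * dt / zoom
    zoom *= 1.0 + zoom_rate * dt  # steady zoom toward the target
    return cam_x, cam_y, zoom
```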

The system was found to work better than those based on dwell time. Users typed at a relatively high rate with the StarGazer program (8.16 words per minute) with a low error rate (1.23%). Notably, users remained in control even when noise was introduced into the system: only efficiency decreased, not accuracy or the number of errors produced. Additionally, users needed only about 5 minutes to learn how to use the system.

------------

I think this work is interesting because it helps pave the way toward eliminating the mouse and keyboard, which I feel are barriers to natural computer-human interaction. The system still has a way to go before it can truly replace the keyboard in terms of typing speed, though it looks like it is faster than the mouse. I would like to see it used in areas other than typing. I think the quick learning curve and the use of cheaper, off-the-shelf gear will also help this system gain popularity.

Comments: Josh, Manoj, Franck, Murat, Paul

Introduction

Hey everyone! My name is Drew.
dalogsdon@gmail.com

Originally from Midland, TX, I am a first-year MS student in computer science here at Texas A&M. I am taking this class because I am interested in computer-human interaction. I have seen many people's research in the undergrad CHI class, and most of it is really interesting to me. I think this sight and touch class will give me another glimpse of current research and a great opportunity to get more hands-on experience and do more research in this area.

In ten years, I expect to be working full time somewhere on earth. I would also like to become a somewhat known artist on the side. That's about as specific as I can get because, like Josh Peschel, opportunities just seem to come to me with little input from myself.

I can't decide what I think the next big technological advancement in computer science will be, though I really hope we get some better desktop interfaces. I want something new to come along and revolutionize computer interfaces the way windows and GUIs revolutionized computing. Perhaps mind control or something... I think we can expect something from Apple pretty soon, though, seeing what the iPod and iPhone have done for technology and CHI.

I find multi-touch very interesting: not only can it allow a more natural, intuitive computing experience, it can also enable collaborative computing on one computer with one large display. The Microsoft Surface computer and James Bond movies have illustrated this, and I think it can be a valuable tool for both business and recreation.

I don't care who I have lunch with, as long as he or she pays.

I think I will say my favorite movie is The Waterboy. I grew up in Louisiana, and I feel that this is one of the few movies that my whole family can sit down and enjoy when I visit home. I also find it consistently entertaining.

I think people would like to know that I am an artist in addition to a computer scientist. I enjoy the traditional arts, mainly drawing and painting. I don't like doing digital art, perhaps because I feel like my creativity and talents are inhibited by drawing hardware and software. Maybe I can do something in the area of computer-human interaction to help give digital artists a more creative experience.