Gameplay Issues in the Design of 3D Gestures for Video Games (2006)

John Payne, Paul Keir, Jocelyn Elgoyhen, Mairghread McLundie, Martin Naef, Martyn Horner, Paul Anderson
Digital Design Studio, Glasgow School of Art

Comments:

This paper talks about the importance of designing 3D gestures and the relationship between gestures and gameplay. It discusses some important concepts such as affordance, mapping, and feedback. It stresses the importance of simplicity and the mapping of gestures to actions.

The team developed a 3D gesture capturing device which they call the 3motion. It uses a combination of accelerometers, much like a Wiimote, to capture 3D gestures. It should be noted that this work was done almost a year before the Wii was released.

To test their device, several simple games were used. These included a tilt-ball game, an alarm game, the classic helicopter game, and a spell-casting game. Unique 3D gestures were defined for each game.

Two users were employed to test out each game and its gestures. Some games and their corresponding gestures had more success than others, which the researchers attribute to varying degrees of "informative tutorials, single word instructional phrases, effective semiotics and appropriate user feedback" among the games. These principles, they say, are very important to ensuring that "the gesture based interaction is intuitive, fun and rapidly understood."

While they did not initially consider it an important factor, the researchers came to realize that the gestures and the type of gameplay are tightly coupled and must be evaluated together.

----------

I was very interested in this paper particularly because of the final project for this class. We are also using 3D gestures, though with a glove instead of a handheld device.

We are also faced with the problem of designing the gestures for our system, and we plan on doing a preliminary study to help guide us to the correct gestures. We can use the insights of this paper to help guide our design.

I also wonder what influence the upcoming Wii and its controller had on this research, if any. I do not remember when the Wii was announced, unfortunately, though I doubt it was as early as this research.

An Architecture for Gesture-Based Control of Mobile Robots

Soshi Iba, J. Michael Vande Weghe, Christiaan J. J. Paredis, and Pradeep K. Khosla
Carnegie Mellon University


Comments:

This paper presents a system for controlling mobile robots using hand gestures.

Previous work on controlling robots exists in various forms, but most of it has relied on a keyboard and mouse. The authors deem this inappropriate for novice or unfamiliar users, so a more intuitive interface is needed.

The goal of this project is to work toward an intuitive, multi-modal system for controlling mobile robots. It introduces hand gestures as a means of control: waving in the direction the robots should move, or pointing at the location they should move to.

The system uses a CyberGlove, a Polhemus 6DOF sensor, and a GPS unit in the robot itself.

Six gestures were used to control the robot:

OPENING: Moving from a closed fist to a flat open hand
OPENED: Flat open hand
CLOSING: Moving from a flat open hand to a closed fist
POINTING: Moving from a flat open hand to index finger pointing, or from a closed fist to index finger pointing
WAVING LEFT: Fingers extended, waving to the left, as if directing someone to the left
WAVING RIGHT: Fingers extended, waving to the right

They also incorporated a "wait state," which was simply any hand configuration other than the gestures above.

Robot control can occur in two modes: local and global. In local mode, the gestures are interpreted from the robot's point of view; in global mode, they are interpreted in world coordinates, from the user's point of view. Local mode is intended for operating the robot remotely while watching video sent from the robot, whereas global mode is used when the robot is within the user's sight.

The gestures work like this for Local Control:

CLOSING: decelerates and eventually stops the robot
OPENING, OPENED: maintains the current state of the robot
POINTING: accelerates the robot
WAVING LEFT/RIGHT: increases the rotational velocity to turn left/right

The gestures work like this for Global Control:

CLOSING: decelerates and eventually stops the robot (cancels the destination if one exists)
OPENING, OPENED: maintains the current state of the robot
POINTING: "go there"
WAVING LEFT/RIGHT: directs the robot in the direction in which the hand is waving.
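
To make the local-mode mapping concrete, here is a minimal sketch of how I imagine such a dispatch step could look. This is not the authors' implementation; the gesture names, velocity increments, and the decay factor are all assumptions for illustration.

```python
# Hypothetical sketch of local-mode gesture dispatch (not from the paper).
# Each recognized gesture nudges the robot's forward and rotational velocity.

ACCEL_STEP = 0.05   # m/s added per POINTING gesture (assumed value)
TURN_STEP = 0.10    # rad/s added per WAVING gesture (assumed value)
DECEL_FACTOR = 0.5  # how quickly CLOSING slows the robot (assumed value)

def local_mode_step(gesture, forward_vel, rot_vel):
    """Return updated (forward_vel, rot_vel) after one recognized gesture."""
    if gesture == "CLOSING":
        forward_vel *= DECEL_FACTOR   # decelerate, eventually stopping
        rot_vel *= DECEL_FACTOR
    elif gesture in ("OPENING", "OPENED"):
        pass                          # maintain the current state
    elif gesture == "POINTING":
        forward_vel += ACCEL_STEP     # accelerate
    elif gesture == "WAVING_LEFT":
        rot_vel += TURN_STEP          # turn left
    elif gesture == "WAVING_RIGHT":
        rot_vel -= TURN_STEP          # turn right
    # anything else is the "wait state": leave the velocities unchanged
    return forward_vel, rot_vel
```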

A Hidden Markov Model algorithm was used to detect and recognize the gestures with an accuracy of 96%. The wait state feature helps the recognition significantly compared to systems without a wait state.
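
The paper does not spell out the recognizer's implementation, but a per-gesture HMM setup with a likelihood threshold that falls back to the wait state might look roughly like the sketch below. The use of the hmmlearn library, the number of hidden states, and the threshold value are my assumptions, not details from the paper.

```python
# Sketch of per-gesture HMM recognition with a wait-state fallback.
# Assumes hmmlearn is installed; feature frames come from the glove/tracker.
import numpy as np
from hmmlearn.hmm import GaussianHMM

GESTURES = ["OPENING", "OPENED", "CLOSING", "POINTING", "WAVING_LEFT", "WAVING_RIGHT"]

def train_models(training_data, n_states=4):
    """training_data: dict of gesture name -> (n_frames, n_features) array."""
    models = {}
    for name in GESTURES:
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(training_data[name])
        models[name] = model
    return models

def recognize(models, frames, threshold=-500.0):
    """Return the best-scoring gesture, or the wait state if nothing scores well."""
    scores = {name: m.score(frames) for name, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else "WAIT"
```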

----------

I like that these researchers are trying to get a more intuitive interface for controlling robots so that novice users can use the system. I always like this approach to projects where appropriate. It is also interesting to see a new use of the data glove that I have not thought of before.

I wonder how feedback is given from the robot/system to the user. It would be very important to know exactly how your actions are affecting the robot, so that you don't over-steer or over-accelerate it, for example. This problem would be magnified if there is any perceptible delay between a gesture and the robot's response, or if the user is in local mode, which would certainly introduce some lag.

I would like to see a usability study, obviously, to sort out issues like the one I have described, especially if the research is aimed at the general public.

I am also interested in the high-level multi-robot control to come...

Feeling the Beat Where it Counts: Fostering Multi-Limb Rhythm Skills with the Haptic Drum Kit (2010)

Simon Holland, Anders J. Bouwer, Mathew Dalgleish, Topi M. Hurtig
The Open University, UK

Comments:

This paper presents a "haptic drum kit," which adds vibrotactile actuators to the wrists and ankles to teach rhythms to people, specifically novice or unexperienced drummers.

The paper gives a background on the "human innate capacity for rhythm." Basically, all people involuntarily respond to natural rhythms and "periodic phenomena in the environment." Our brains might even have dedicated neurons for rhythmic processing. Instruction in rhythmic techniques may also help people overcome specific physical challenges or limitations.

The experience of creating rhythm depends on prior exposure to, or "feeling" of, various rhythms. Dalcroze, a famous music instructor, noted that students were better able to work with the technical and written elements of music if they had previously "felt" musical and rhythmic examples. He built this into his teaching by having students perform activities, such as walking, in specific rhythmic ways.

There is a theory known as sensory motor contingency theory that suggests that in order to learn a physical skill in some domain, the learner must be able to manipulate the domain physically. This applies to musical rhythm because people can use their arms, legs, and other parts of the body to create and modify rhythm. This theory provides support for the haptic drum kit that these researchers have devised.

Finally, the paper talks about the concept of entrainment, which is the tendency for two connected processes to converge on a common rhythm. This is important to this research because the "students" or users will be playing a drum beat along with an audio and haptic beat, and the convergence of the user to the presented rhythm is necessary to learn it. All of these theoretical views on rhythmic learning helped to inspire the Haptic Drum Kit.

The Haptic Drum Kit consists of four vibrotactile actuators attached to the wrists and ankles with wristbands. The actuators are connected to a circuit board which is connected to a computer running the drum kit software and controlling audio playback.
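
As a thought experiment, the per-limb cueing could work something like the sketch below, which steps through a drum pattern and pulses the actuator on whichever limb should strike on each step. The pattern encoding and the pulse_actuator function are hypothetical; the actual system presumably drives the actuators through its own circuit board and playback software.

```python
import time

# Hypothetical sketch of per-limb haptic cueing (not the authors' software).
# Each step of the pattern lists the limbs that should strike on that step.
PATTERN = [
    {"right_ankle", "right_wrist"},   # beat 1: kick + hi-hat
    {"right_wrist"},                  # hi-hat
    {"left_wrist", "right_wrist"},    # beat 2: snare + hi-hat
    {"right_wrist"},                  # hi-hat
]

def pulse_actuator(limb, duration=0.05):
    """Placeholder: would tell the circuit board to buzz this limb's actuator."""
    print(f"buzz {limb} for {duration}s")

def play_pattern(pattern, bpm=100, steps_per_beat=2):
    """Step through the pattern, vibrating the appropriate limbs on each step."""
    step_duration = 60.0 / bpm / steps_per_beat
    for step in pattern:
        for limb in step:
            pulse_actuator(limb)
        time.sleep(step_duration)
```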

A user study was conducted with 5 people. The goal of the study was to determine whether rhythms can be taught using a combination of audio and haptic feedback. Twenty rhythms were chosen from different rhythm classes to represent the range that typical drummers might learn.

Each rhythm was presented to the user with audio only, with haptics only, and with combined audio and haptic feedback, and the user was to play along. It is unclear exactly how the rhythm is sent to each limb, though it seems that the limb that should play the current note is vibrated, while the audio lets the user know which drum to actually hit.

Probably because of this, all users preferred the mixture of audio and haptic feedback for playing back the drum patterns.

A few issues were revealed with the vibrotactile actuators, such as being too "quiet" or "soft," or being slightly delayed, which "blurred" fast rhythms.

Using results of the study, the paper discusses the state of the hardware and possible upgrades and future work.

------------

Being someone who is interested in rhythms and drumming but lacks the training or experience, I am interested in the approach to rhythmic training using vibrotactile feedback. We have already seen a suit that employs haptic feedback to teach specific movements. This uses a similar concept to teach a different kind of "movement," that which creates rhythm.

This system also explores multi-modal learning, presenting audio tracks coupled with vibration. The researchers seemed to be pretty successful at teaching the drum patterns despite some hardware limitations.

I think this work is more promising than TIKL, perhaps due to its better mappings of vibration to action and fewer vibrotactile actuators.

Office Activity Recognition using Hand Posture Cues (2007)

Brandon Paulson, Tracy Hammond

Comments:

This paper shares the results of using simple machine learning algorithms to see if hand postures can be used to help identify tasks. Specifically, the 1-nearest-neighbor algorithm was used on a set of 12 office-related gestures.

Gestures were collected from 8 users with a CyberGlove. The users performed each of the 12 tasks 5 times. The sensor readings were captured at 10 frames per second and averaged together across the whole gesture. The algorithm was trained on both single-user and multi-user data. Per-user training yielded much higher accuracy across all gestures (94%). Only a few gestures were confused with one another; in these cases, a simple examination shows that the gestures are very similar, such as holding a mug versus stapling a paper.
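
For reference, averaging the glove frames into a single feature vector and classifying with 1-nearest-neighbor is straightforward. The sketch below shows the general idea under my own assumptions about the data layout; it is not the authors' code.

```python
import numpy as np

# Sketch of 1-NN posture classification on averaged CyberGlove frames.
# Assumed layout: each sample is an (n_frames, n_sensors) array of glove
# readings captured at 10 fps over one gesture.

def to_feature(frames):
    """Average the sensor readings across all frames of one gesture."""
    return np.asarray(frames).mean(axis=0)

def classify_1nn(train_features, train_labels, frames):
    """Return the label of the closest training example (Euclidean distance)."""
    query = to_feature(frames)
    dists = np.linalg.norm(train_features - query, axis=1)
    return train_labels[int(np.argmin(dists))]

# Usage sketch: train_features is an (n_examples, n_sensors) array built by
# applying to_feature to each recorded gesture; train_labels holds the
# corresponding task names ("dial phone", "staple paper", ...).
```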

------------

This work is similar to what we did with the RPS-15 data in this class. One of the class members achieved 95% accuracy across all users using the 1-nearest-neighbor algorithm. Our RPS gestures were more rigidly defined, using a picture of each gesture. In contrast, the users in this paper were told to perform an activity, such as dialing a phone, which can be done in many different ways.

I thought of some things that could be added to this work to help clear up some of the gesture confusions. Most simply, adding more sensors up the arm could help disambiguate certain gestures. For example, the elbow and shoulder positions will probably differ between drinking from a mug and stapling a paper, even though the hand posture is very similar (this is in the same spirit as the paper's proposal of attaching a 3D sensor to the hand). This would be easier than incorporating a fourth dimension, time, into the data. Still, analyzing gestures over time might be a very valuable addition, though the analysis would be more complex.

I am also interested in seeing the results of these same gestures with other, better learning algorithms.