Combining Voice and Gesture for Human-Computer Interaction

Teachers: 
Denis Lalanne
Student: 
Haleh Chizari
Project status: 
Finished
Year: 
2013

Goal

Recently, multimodal interfaces have received a great deal of attention as a way to provide more natural human-machine interaction, especially in applications where using a mouse or keyboard is tedious or inappropriate. Among others, two types of input are increasingly integrated into multimodal interfaces: voice and gesture. Although there are many ways to temporally fuse speech and gesture inputs, the most user-friendly combinations are the most interesting for real-world applications.
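
As a rough illustration of the temporal fusion idea, the minimal C# sketch below pairs a speech command with a gesture only when the two arrive within a short time window. The event types, the 1.5-second window, and the command names are assumptions made for the sake of the example, not part of the project's design.

    using System;

    // Hypothetical event types; the actual speech and gesture recognizers are
    // assumed to deliver timestamped results of roughly this shape.
    record SpeechEvent(string Command, DateTime Time);
    record GestureEvent(string Name, DateTime Time);

    static class TemporalFusion
    {
        // Assumed fusion window; its size is exactly the kind of parameter
        // the user evaluation would have to settle.
        static readonly TimeSpan Window = TimeSpan.FromSeconds(1.5);

        // Fuse a speech command and a gesture into one multimodal command
        // only if they occur close enough together in time.
        public static string Fuse(SpeechEvent speech, GestureEvent gesture)
        {
            TimeSpan gap = (speech.Time - gesture.Time).Duration();
            return gap <= Window ? speech.Command + " + " + gesture.Name : null;
        }

        static void Main()
        {
            DateTime now = DateTime.Now;
            var speech = new SpeechEvent("delete", now);
            var gesture = new GestureEvent("point", now.AddMilliseconds(400));
            Console.WriteLine(Fuse(speech, gesture) ?? "no fused command");
        }
    }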

This research project aims at the study, design, and evaluation of multimodal commands as a means of interaction between the user and the computer. To this end, we consider several combinations of voice and gesture inputs for typical operations, implement them, and examine them. During the evaluation step, we will ask a group of users to help us choose the most user-friendly commands.

Tasks

First, we will conduct a literature and web review of the topic to establish the current state of research.

Next, we will propose and develop several combined voice-gesture commands for different operations, and implement the proposed commands in C#.
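
To make the implementation step concrete, the short C# sketch below shows one possible way to represent combined commands: a lookup from (voice, gesture) pairs to operations. The particular words, gestures, and actions are hypothetical placeholders, not the command set the project actually develops.

    using System;
    using System.Collections.Generic;

    class CommandTable
    {
        // Illustrative (voice, gesture) pairs mapped to operations; the real
        // command set is what the project itself would define and compare.
        static readonly Dictionary<(string Voice, string Gesture), Action> Commands =
            new Dictionary<(string Voice, string Gesture), Action>
            {
                [("open", "point")]  = () => Console.WriteLine("open the pointed-at item"),
                [("move", "swipe")]  = () => Console.WriteLine("move the current selection"),
                [("zoom", "spread")] = () => Console.WriteLine("zoom in on the view"),
            };

        static void Dispatch(string voice, string gesture)
        {
            if (Commands.TryGetValue((voice, gesture), out Action action))
                action();
            else
                Console.WriteLine("no command for '" + voice + "' + '" + gesture + "'");
        }

        static void Main()
        {
            Dispatch("open", "point");   // recognised combination
            Dispatch("zoom", "wave");    // unknown combination
        }
    }

Keeping the combinations in a table like this would make it easy to swap in alternative voice-gesture pairings when comparing them during the user evaluation.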

Afterwards, we will test the implemented designs and examine their functionality on the existing system in the laboratory. Finally, we will ask a group of users to evaluate the developed project. This evaluation will help us decide which voice-gesture combinations are most suitable for the majority of users.