Using Microsoft Kinect to perform commands on virtual objects

Teachers: 
Denis Lalanne
Mattias Schwaller
Student: 
Simon Brunner
Project status: 
Finished
Year: 
2012

Goal

The goal of the project is to explore multiple ways to perform common tasks on virtual objects, such as moving, rotating, enlarging, and reducing them. The Microsoft Kinect sensor is the technology used for this project. The idea is to perform these manipulations through gesture recognition and voice recognition. One hand handles the pointing, while the second hand issues the commands so that the pointing hand stays stable. The focus of the project is on the second hand performing the commands. The various alternative techniques will be evaluated to compare their efficiency against one another.
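The two-hand split described above can be illustrated as a small dispatch loop: the pointing hand supplies a cursor position, and the gesture recognized on the command hand selects which manipulation is applied to the targeted object. The sketch below is purely illustrative and hypothetical (the project itself is implemented in C# with the Kinect for Windows SDK and Candescent NUI); the command names and the fixed rotation/scaling steps are assumptions, not the project's actual design.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    x: float = 0.0
    y: float = 0.0
    angle: float = 0.0   # rotation in degrees
    scale: float = 1.0

def apply_command(obj: VirtualObject, command: str,
                  pointer: tuple = (0.0, 0.0)) -> VirtualObject:
    """Apply a command-hand gesture (or voice command) to the pointed-at object.

    `command` stands in for the output of a gesture/voice recognizer;
    the step sizes (15 degrees, factor 1.1) are arbitrary illustrative values.
    """
    if command == "move":
        obj.x, obj.y = pointer          # object follows the pointing hand
    elif command == "rotate":
        obj.angle = (obj.angle + 15) % 360
    elif command == "enlarge":
        obj.scale *= 1.1
    elif command == "reduce":
        obj.scale /= 1.1
    return obj

# Example: the pointing hand holds position (120, 80) while the command
# hand triggers three successive manipulations.
obj = VirtualObject()
for cmd in ["move", "rotate", "enlarge"]:
    apply_command(obj, cmd, pointer=(120.0, 80.0))
print(obj)
```

Keeping the command logic in one place like this makes it straightforward to swap in alternative command techniques (different gesture sets, or voice input) for the evaluation phase, since only the recognizer that produces `command` changes.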

Tasks

Task                                                        Allocated time
T1: Hands-on with the technologies                          2 weeks
T2: Analyze needs & requirements                            2 weeks
T3: Design command strategies with feedbacks                1 week
T4: Implementation of various command gestures              5 weeks
T5: Design an evaluation test application                   1 week
T6: Implementation of test application                      3 weeks
T7: Test the application with subjects (user evaluation)    4 weeks
T8: Analysis of results                                     1 week
T9: Report writing                                          3 weeks

Resources

  • Microsoft Kinect sensor device for Windows
  • Microsoft Visual Studio 2010, C#
  • Kinect for Windows SDK
  • Candescent NUI