Technion - Israel Institute of Technology - School of Graduate Studies
M.Sc Thesis
M.Sc Student: Grinberg Maor
Subject: Comprehensive Free Handed 3D User Interface for Geometric Design Systems
Department: Department of Computer Science
Supervisor: Professor Gershon Elber
Full Thesis Text: English Version


Abstract

Three-dimensional user interfaces (3DUI) hold great potential for computer aided geometric design (CAGD), where users work in a virtual 3D space and perform 3D operations.

3D depth cameras enable the use of free hand positions, postures and gestures as inputs in interactive geometric design. 


In contrast with most previous work on 3DUI for geometric modeling, which offers only limited and/or specialized functionality, this work demonstrates a comprehensive system that combines a free, dual-handed 3DUI, using input from the Kinect sensor, with the functionality of a CAGD system for general modeling.


This entails a system that can handle multiple objects within the modeling space, and can support object selection, object transformation and navigation within the virtual 3D space.

Like other CAGD systems, it supports various geometric modeling functions, including basic operations for creating different types of curves, surfaces and solids, as well as more advanced features such as Boolean operations. A small set of postures and gestures controls the 3DUI, with consistent behavior across all modeling functions. This consistency is achieved through direct geometrical input: the geometrical parameters of each modeling function are determined by the positions of the hands while a certain posture is held. Each modeling function is accessed through a graphical menu; to avoid clutter, only the modeling functions that are relevant in the context of the currently selected object are visible.
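As an illustration of this direct geometrical input, the following minimal sketch (not taken from the thesis implementation; all type and function names are hypothetical) shows how the parameters of a simple primitive could be derived directly from the two tracked hand positions while a posture is held:

```cpp
// Hypothetical sketch of "direct geometrical input": the parameters of a
// modeling function are taken directly from the tracked hand positions
// while a given posture is held. Names (Vec3, SphereParams, ...) are
// illustrative only and are not taken from the thesis.
#include <cmath>

struct Vec3 { double x, y, z; };

struct SphereParams {
    Vec3   center;  // taken from the dominant hand
    double radius;  // distance between the two hands
};

// Derive a sphere primitive from the two hand positions: one hand places
// the center, the other determines the radius.
SphereParams SphereFromHands(const Vec3& dominantHand, const Vec3& otherHand)
{
    const double dx = otherHand.x - dominantHand.x;
    const double dy = otherHand.y - dominantHand.y;
    const double dz = otherHand.z - dominantHand.z;
    return { dominantHand, std::sqrt(dx * dx + dy * dy + dz * dz) };
}
```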

To allow precise input, the user can apply snapping constraints at any time. The constraints affect the precision of operations and transformations and also allow restricting the input to fewer degrees of freedom.
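A minimal sketch of what such snapping constraints could look like is given below; the structure and field names are assumptions for illustration only, covering grid snapping for precision and locking one axis to reduce the degrees of freedom:

```cpp
// Hypothetical sketch of snapping constraints: positions can be rounded to a
// grid step for precision, and the input can be restricted to fewer degrees
// of freedom (e.g. confined to a fixed plane). All names are illustrative.
#include <cmath>

struct Vec3 { double x, y, z; };

struct SnapConstraints {
    double gridStep   = 0.0;   // 0 means no grid snapping
    bool   lockZ      = false; // restrict motion to the XY plane
    double lockedZVal = 0.0;
};

Vec3 ApplyConstraints(Vec3 p, const SnapConstraints& c)
{
    if (c.gridStep > 0.0) {
        p.x = std::round(p.x / c.gridStep) * c.gridStep;
        p.y = std::round(p.y / c.gridStep) * c.gridStep;
        p.z = std::round(p.z / c.gridStep) * c.gridStep;
    }
    if (c.lockZ)                // reduce the input to two degrees of freedom
        p.z = c.lockedZVal;
    return p;
}
```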


The 3DUI was implemented as two modules within an existing CAGD system. The first, the input module, synthesizes the input from a Kinect sensor, converts the positions into the modeling space, and maps the hand postures to a predefined set of "user actions".
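The sketch below illustrates the role of such an input module under assumed names; the posture set, action set and coordinate transform are not the thesis' actual definitions but stand-ins for the idea:

```cpp
// Hypothetical sketch of the input module's role: transform sensor-space hand
// positions into the modeling space and map recognized postures onto a small,
// fixed set of user actions. Enum values and the transform are illustrative
// assumptions.
enum class HandPosture { Open, Closed, Pointing };

enum class UserAction  { None, Grab, Release, Select };

struct Vec3 { double x, y, z; };

// Map a recognized posture transition to an abstract user action.
UserAction ActionFromPosture(HandPosture previous, HandPosture current)
{
    if (previous == HandPosture::Open   && current == HandPosture::Closed) return UserAction::Grab;
    if (previous == HandPosture::Closed && current == HandPosture::Open)   return UserAction::Release;
    if (current == HandPosture::Pointing)                                  return UserAction::Select;
    return UserAction::None;
}

// Convert a Kinect (sensor) coordinate into modeling-space coordinates via a
// simple uniform scale and offset, standing in for the real calibration.
Vec3 SensorToModel(const Vec3& sensorPos, double scale, const Vec3& offset)
{
    return { sensorPos.x * scale + offset.x,
             sensorPos.y * scale + offset.y,
             sensorPos.z * scale + offset.z };
}
```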

The second, the main 3DUI module, handles all of the input and processes it as needed, for example by forwarding input data to the modeling functions.

This separation allows other input devices or posture-recognition algorithms to be adapted to the 3DUI with little effort.
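One way to picture this decoupling is sketched below: the main 3DUI module consumes an abstract stream of hand positions and user actions, so any device or recognizer producing that data could be plugged in. The interface and type names are illustrative assumptions, not the thesis' API:

```cpp
// Hypothetical sketch of the decoupling between the input module and the
// main 3DUI module. Interface and type names are illustrative.
struct Vec3 { double x, y, z; };
enum class UserAction { None, Grab, Release, Select };

struct HandInput {
    Vec3       leftPos, rightPos;  // positions already in modeling space
    UserAction action;
};

// Any input device (Kinect or otherwise) implements this interface.
class InputSource {
public:
    virtual ~InputSource() = default;
    virtual bool Poll(HandInput& out) = 0;  // fetch the next input sample
};

// The main 3DUI module depends only on InputSource, not on a specific sensor.
void Run3DUI(InputSource& source)
{
    HandInput in;
    while (source.Poll(in)) {
        // ...dispatch 'in' to selection, transformation, navigation or the
        // active modeling function, as described above...
    }
}
```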

Extending the functionality of the system with further modeling functions can be done with ease, by assigning or modifying their parameters according to the input positions, user actions, and active constraints.
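A minimal sketch of how such extensibility could look is given below, assuming a hypothetical registry in which each modeling function is a callable that interprets the hand positions, the user action and the active constraints; the registry mechanism and all names are illustrative, not the thesis' actual design:

```cpp
// Hypothetical sketch of adding a new modeling function: each function is a
// callable over (hand positions, user action, constraints), and the system
// keeps a registry of them. All names are illustrative assumptions.
#include <functional>
#include <map>
#include <string>

struct Vec3 { double x, y, z; };
enum class UserAction { None, Grab, Release, Select };
struct SnapConstraints { double gridStep = 0.0; };

using ModelingFunction =
    std::function<void(const Vec3& leftHand, const Vec3& rightHand,
                       UserAction action, const SnapConstraints& snap)>;

std::map<std::string, ModelingFunction>& Registry()
{
    static std::map<std::string, ModelingFunction> registry;
    return registry;
}

// Adding a modeling function amounts to registering one more entry.
void RegisterModelingFunction(const std::string& name, ModelingFunction fn)
{
    Registry()[name] = std::move(fn);
}
```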