|M.Sc. Student||Bernstein Ran|
|Subject||Laban Movement Analysis and LDA Distributed Monitoring|
|Department||Department of Computer Science||Supervisor||Prof. Assaf Schuster|
|Full Thesis text|
The first chapter of the thesis deals with Laban Movement Analysis (LMA), a method for describing, interpreting, and documenting all varieties of human movement. Analyzing movements using LMA is advantageous over a purely kinematic description, as it captures qualitative aspects of the movement in addition to the quantitative ones. As such, it has many applications, and in recent years its popularity has been growing as a preferred method for movement analysis in motor research, theater training, and the development of interactive gaming animations and robotics. In this study, we aimed to develop an automated method for recognizing 18 different Laban motor elements (motor characteristics) from markerless 3D movement data captured by the ubiquitous Kinect camera. Using machine-learning methods, we obtained a recall of 38-94% (65% on average) and a precision of 29-100% (59% on average) over the 18 motor elements that were tested.
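To make the reported per-element recall and precision figures concrete, the following is a minimal sketch of how these metrics are computed for a single motor element treated as a binary detection task. The element name and the labels are hypothetical placeholders, not data from the thesis, and the feature extraction and classifier themselves are omitted.

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for one binary motor element.

    y_true: ground-truth labels (1 = element present in the clip).
    y_pred: classifier predictions for the same clips.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical predictions for one element over 8 movement clips
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
p, r = precision_recall(y_true, y_pred)
print(p, r)  # -> 0.75 0.75
```

In a multi-element setting like the one described above, this computation would be repeated once per element, yielding the per-element range and the averages reported in the abstract.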
The second chapter of the thesis deals with systems for mining dynamic data streams, which should be able to detect changes that affect the accuracy of their model. A distributed setting is one of the main challenges in this kind of change detection. In a distributed setting, model training requires centralizing the data from all nodes (hereafter, synchronization), which is very costly in terms of communication. To minimize this communication, a monitoring algorithm should be executed locally at each node while preserving the validity of the global model (the model that would be computed if a synchronization occurred). To this end, we propose the first communication-efficient algorithm for monitoring a classification model over distributed, dynamic data streams. The classification algorithm that we chose to monitor is Linear Discriminant Analysis (LDA), a popular method for classification and dimensionality reduction in many fields. This choice was made due to the strong theoretical guarantee of correctness that we prove for the monitoring algorithm of this kind of model. In addition to this theoretical guarantee, we demonstrate that our algorithm, and a probabilistic variant of it, reduce communication volume by up to two orders of magnitude (compared to synchronizing in every round) on three real data sets from different content domains. Moreover, our approach monitors the classification model itself rather than its misclassifications, which makes it possible to detect a change before misclassification occurs.
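The core idea of local monitoring can be illustrated with a simplified sketch: each node tracks how far its local statistics have drifted from their value at the last synchronization, and requests a synchronization only when that drift crosses a threshold. The drift measure, the statistic monitored (a mean vector here), and the threshold are all illustrative placeholders; the thesis's actual algorithm and its correctness guarantee for LDA are not reproduced here.

```python
def l2_norm(v):
    """Euclidean norm of a vector given as a list of floats."""
    return sum(x * x for x in v) ** 0.5

class MonitoringNode:
    """A node that stays silent while its local statistics remain
    close to those fixed at the last synchronization."""

    def __init__(self, ref_stats, threshold):
        self.ref = ref_stats          # statistics at last synchronization
        self.threshold = threshold    # allowed local drift (illustrative)

    def needs_sync(self, local_stats):
        drift = l2_norm([a - b for a, b in zip(local_stats, self.ref)])
        return drift > self.threshold

node = MonitoringNode(ref_stats=[0.0, 0.0], threshold=1.0)
print(node.needs_sync([0.3, 0.4]))  # drift 0.5 -> False: no communication
print(node.needs_sync([1.2, 0.9]))  # drift 1.5 -> True: request sync
```

The communication savings reported in the abstract come from this asymmetry: in the common case every node's local condition holds and no messages are sent, while a genuine change in the data distribution violates some node's condition and triggers a synchronization before the global model degrades.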