Interaction plays an important role in Virtual-Environment (VE) applications.
Such interaction can be improved by detecting and reacting to the user’s head
motion. Today’s VE systems use head-mounted inertial sensors to update and
spatially stabilize the image displayed to a user through a head-mounted display.
This approach introduces latencies into the VE system's reaction to head motion.
Since motion can only be detected after it has already occurred, latencies in
the stabilization scheme can only be reduced, but never eliminated. Such
latencies slow down manual control, cause inaccuracies in matching real and
virtual objects through a half-transparent display, and reduce the sense of
presence. This work presents novel methods for reducing VE latencies by
anticipating future head motion from electromyographic (EMG) signals of the
major neck muscles together with the present head kinematics. Features
extracted from the EMG signals are used to train a feed-forward neural network
to map EMG data, given the present head kinematics, to future head motion. The
trained network is then used in real time for head motion anticipation, which
gives the VE system the time advantage necessary to compensate for the inherent
latencies. The main contribution of this work is the use of the energy of
low-pass filtered EMG signals as the key input information and the head
acceleration as the key output information of the anticipation system, which
yields improved performance over previous work that used features of the
differential EMG signals as the input and the head angular velocity as the
output.
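As a rough illustration of the pipeline described above, the sketch below computes the energy of low-pass filtered, rectified EMG as the input feature and pairs it, together with the present head kinematics, with the head acceleration several windows into the future as the prediction target for a small feed-forward network. All function names, filter settings, window lengths, lead times, and network sizes are illustrative assumptions, not the values used in this work.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.neural_network import MLPRegressor


def emg_energy_features(emg, fs, cutoff_hz=10.0, win_s=0.05):
    # Energy of low-pass filtered, rectified EMG per neck-muscle channel.
    # emg: (n_samples, n_channels) raw EMG; fs: sampling rate in Hz.
    # Cutoff frequency and window length are illustrative assumptions.
    b, a = butter(4, cutoff_hz, btype="low", fs=fs)
    envelope = filtfilt(b, a, np.abs(emg), axis=0)   # low-pass filtered rectified EMG
    n_win = int(win_s * fs)
    n_frames = envelope.shape[0] // n_win
    frames = envelope[: n_frames * n_win].reshape(n_frames, n_win, -1)
    return np.sum(frames ** 2, axis=1)               # per-window energy, (n_frames, n_channels)


def build_dataset(features, head_kin, head_accel, lead=3):
    # Pair EMG-energy features and present head kinematics with the head
    # acceleration 'lead' windows into the future (the anticipation target).
    # features, head_kin and head_accel are assumed to be sampled once per window.
    X = np.hstack([features[:-lead], head_kin[:-lead]])
    y = head_accel[lead:]
    return X, y


# Hypothetical use: a small feed-forward network maps EMG energy plus present
# kinematics to future head acceleration; layer size and iteration count are placeholders.
# X, y = build_dataset(emg_energy_features(emg, fs), head_kin, head_accel)
# net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
# future_accel = net.predict(X_live)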