Technion - Israel Institute of Technology, Graduate School
M.Sc. Thesis
M.Sc. Student: Nir Maor
Subject: Compression at the Source
Department: Department of Electrical Engineering
Supervisors: Professor Emeritus Arie Feuer
             Full Professor Yoav Schechner
Full thesis text: English version


Abstract

In this work we propose a novel approach to simultaneous resolution enhancement in both the spatial and temporal domains. We provide an analytical justification for our approach and prove that, under certain system assumptions, it is possible to reconstruct a signal perfectly from its nonuniform samples. We establish a theoretical framework for the reconstruction process under these assumptions and further show that these assumptions fit many typical video data-streams; the approach thus has not only theoretical value but also practical applicability.


A CCD in digital video cameras typically has satisfactory spatial-resolution when a single frame is generated. However, the bottleneck in the video acquisition process is the rate at which the pixels can be saved. This constrains the pixel rate of the generated video clip, since the pixel rate equals the number of pixels per frame multiplied by the frame-rate.


There is a tradeoff between the spatial-resolution of each frame and the frame-rate: the higher the spatial-resolution of each frame, the lower the frame-rate. As a result, while filming a video, one can capture either static details (high spatial-resolution) or fast dynamic events (high frame-rate), but not both.
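
To make the tradeoff concrete, the following Python sketch compares two uniform sampling options under a fixed readout budget; the sensor size and budget are invented for illustration and are not taken from the thesis.

    # Illustrative pixel-rate budget (invented numbers, for illustration only).
    pixel_budget = 1920 * 1080 * 30          # pixels per second the readout can save

    # Uniform option A: full 1920x1080 frames -> low frame-rate.
    fps_full = pixel_budget / (1920 * 1080)  # 30 frames per second

    # Uniform option B: quarter-size 960x540 frames -> high frame-rate.
    fps_quarter = pixel_budget / (960 * 540) # 120 frames per second

    print(fps_full, fps_quarter)             # 30.0 120.0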


Previous works proposed various methods for constructing a video of high spatial and temporal resolution by combining information from multiple low-resolution video sequences of the same dynamic scene, captured by multiple cameras. Unlike previous works, our approach is based on nonuniform sampling: instead of having identical spatial-resolution in all frames and a uniform frame-rate, we consider nonuniform sampling patterns.


As typical video clips consist of static parts (which require high spatial-resolution) and fast-moving parts (which require high frame-rate), we propose the following sampling pattern: capture low spatial-resolution frames at a fast frame-rate and periodically insert a frame of high spatial-resolution. We use the following observation: in typical video clips, most of the energy (when viewed in the frequency domain) is concentrated around the axes.
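
The following minimal sketch spells out this sampling pattern; the period K, frame sizes and frame-rate are invented parameters, not values from the thesis. It also checks that the average pixel rate of the pattern still fits the fixed readout budget used in the earlier sketch.

    # Sketch of the proposed nonuniform pattern (all parameters invented):
    # a low spatial-resolution frame at every time step, with a full
    # spatial-resolution frame substituted once every K steps.
    FULL = 1920 * 1080       # pixels in a high spatial-resolution frame
    LOW = 480 * 270          # pixels in a low spatial-resolution frame
    K = 15                   # assumed period of the high-resolution frames
    FPS = 120                # fast frame-rate of the low-resolution stream

    def pixels_at(n):
        """Number of pixels read out for frame index n."""
        return FULL if n % K == 0 else LOW

    avg_pixel_rate = FPS * sum(pixels_at(n) for n in range(K)) / K
    print(avg_pixel_rate)    # about 31.1 Mpixels/s, below the 62.2 Mpixels/s budget above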


Under this assumption we extend a nonuniform sampling theory established by A. Papoulis and derive an analytically based reconstruction technique that allows generating high-resolution video with true spatial and temporal scene information. We present in detail the sampling patterns we consider, describe the reconstruction algorithm to be used, and discuss the types of video clips for which these patterns incur minimal detail loss. The benefit of the proposed concept is that a high-resolution video clip can be generated from a single photo-sensor.
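
As a purely illustrative toy, the sketch below shows why the "energy concentrated around the axes" observation makes such a reconstruction possible; it is a one-dimensional-space-plus-time demonstration with idealized, band-limited measurements, not the thesis's sampling model or reconstruction algorithm. If the spectrum of the signal is supported on a cross of low spatial or low temporal frequencies, then a spatially low-passed fast stream and a temporally low-passed high-resolution stream together cover the whole support, and merging them in the frequency domain recovers the signal exactly.

    # Toy demonstration (assumptions: 1-D space, ideal band-limited measurements).
    import numpy as np

    Nx, Nt = 64, 64          # spatial samples per frame, number of frames
    U0, F0 = 6, 6            # assumed half-widths of the spectral "cross"

    rng = np.random.default_rng(0)

    # Integer frequency indices and the cross-shaped support mask.
    u = np.fft.fftfreq(Nx) * Nx
    f = np.fft.fftfreq(Nt) * Nt
    UU, FF = np.meshgrid(u, f, indexing="ij")
    cross = (np.abs(UU) <= U0) | (np.abs(FF) <= F0)

    # Build a test signal whose spectrum lives exactly on the cross support.
    G = (rng.standard_normal((Nx, Nt)) + 1j * rng.standard_normal((Nx, Nt))) * cross
    g = np.fft.ifft2(G).real          # "ground truth" space-time signal
    G = np.fft.fft2(g)                # its spectrum, still supported on the cross

    # Stream 1: fast frame-rate, low spatial-resolution -> keeps |u| <= U0.
    G_fast = G * (np.abs(UU) <= U0)

    # Stream 2: sparse high-resolution frames; under the "static details"
    # assumption they carry the slowly varying content -> keeps |f| <= F0.
    G_high = G * (np.abs(FF) <= F0)

    # Merge the two measured bands over the cross support and invert.
    G_rec = np.where(np.abs(UU) <= U0, G_fast, G_high) * cross
    g_rec = np.fft.ifft2(G_rec).real

    print(np.max(np.abs(g - g_rec)))  # on the order of machine precision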