Unmanned vehicles are increasingly employed for a range of scientific and commercial undersea applications. However, the critical dependency on a tether link, mainly for the transmission of live images to the surface for command and control, is a significant technological obstacle that limits vehicle maneuverability. Eliminating the tether requires the capability to compress a massive amount of live video data to meet the bandwidth limitations of acoustic telemetry. This paper addresses motion-compensated compression of underwater video imagery and the extraction of the required transformations by applying the brightness constancy assumption and a generalized dynamic image model for the analysis of time-varying imagery. Two approaches, suitable for automatic vision-based navigation or operator-assisted missions of unmanned submersible vehicles, are considered. In the former case, the 3D motion and position information extracted from the raw data by the vision-based navigation system is used directly to perform the compression. In the latter case, the compression is achieved using the motion and radiometric information extracted from the live and reconstructed images at the surface station. To evaluate the performance of the proposed methods, results from experiments with synthetic and real data are presented and compared to compression methods based on the brightness constancy model.
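The core idea behind motion-compensated compression under the brightness constancy assumption can be illustrated with a minimal block-matching sketch: each block of the current frame is predicted from the best-matching shifted block of the previous frame, so that only motion vectors and small residuals need to be transmitted over the low-bandwidth link. This is a generic illustration, not the paper's actual method; the function name, block size, and search range are illustrative assumptions.

```python
import numpy as np

def motion_compensate(prev, curr, block=8, search=4):
    """Illustrative block-matching motion compensation under the
    brightness constancy assumption: each block of `curr` is
    predicted by the minimum-SAD shifted block of `prev`.
    Returns the predicted frame and the per-block motion vectors."""
    h, w = curr.shape
    pred = np.zeros_like(curr)
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), np.inf
            # Exhaustive search over displacements within +/- search pixels.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = prev[y:y + block, x:x + block].astype(int)
                    sad = np.abs(target - cand).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            dy, dx = best
            pred[by:by + block, bx:bx + block] = \
                prev[by + dy:by + dy + block, bx + dx:bx + dx + block]
            vectors[(by, bx)] = best
    return pred, vectors
```

For a pure translation within the search range, interior blocks are predicted exactly and the residual to encode is zero; radiometric variations in underwater imagery violate brightness constancy, which is what motivates the paper's generalized dynamic image model.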
ASJC Scopus subject areas
- Signal Processing
- Computer Vision and Pattern Recognition