Despite their limited range, the high resolution and data rate of optical sensors motivate the investigation of vision-based technologies in support of unmanned submersible platform operations. Applications of special interest include automatic vision-guided station keeping, localization and navigation, photo-mosaicking, and 3-D mapping. The core requirement in realizing these capabilities is knowing the motion and/or position of the vehicle with high accuracy. Over the last decade, a small but growing group of research centers has committed extensive effort to underwater vision. Methods based on various approaches in motion vision (e.g., correlation-based, feature-based, optical-flow-based, or direct) are being developed and implemented for different applications on specialized platforms that may simultaneously use other positioning sensors. This makes it difficult for user groups unfamiliar with the methodologies or technologies to evaluate the capabilities of these vision systems. The main motivation for this work was to carry out experiments with various positioning and mosaicking techniques on the same data set and to report the results. However, we realized that a fair comparison cannot be achieved unless each implementation is reproduced faithfully, which is particularly difficult for methods requiring sensory information not available to us. Consequently, we undertook the less ambitious task of presenting results provided by the researchers themselves on certain common data sets. Unfortunately, we were able to collect only very limited results, primarily from the authors' research group and one other organization. We nevertheless hope to pursue this effort with a more extensive investigation based on contributions from a larger number of organizations.
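As a minimal sketch of the correlation-based family mentioned above (not the specific implementations compared in this work), the translation between two overlapping frames can be estimated by phase correlation on the normalized cross-power spectrum. The function name and the synthetic "seafloor" data below are hypothetical, for illustration only:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation mapping image b onto image a
    via the normalized cross-power spectrum (correlation-based registration)."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # normalize; epsilon guards zero bins
    corr = np.fft.ifft2(cross).real         # impulse at the relative shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak location to signed shifts (FFT output wraps around)
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

# Synthetic check: shift a random texture patch by (7, -3) pixels
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(7, -3), axis=(0, 1))
print(phase_correlation(shifted, img))      # → (7, -3)
```

In practice the peak would be interpolated for sub-pixel accuracy, and pairwise shifts chained (with global adjustment) to build a mosaic; feature-based and optical-flow methods recover richer motion models at greater cost.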