On 3-D motion estimation from feature tracks in 2-D FS sonar video

Shahriar Negahdaripour

Research output: Contribution to journal › Article › peer-review

28 Scopus citations


Visual odometry involves the computation of 3-D motion and (or) trajectory by tracking features in the video or image sequences recorded by the camera(s) on an autonomous terrestrial, aerial, or marine robotic platform. For exploration, mapping, inspection, and surveillance operations within turbid waters, high-frequency 2-D forward-scan sonar systems offer a significant advantage over cameras by providing both imagery with target details and an attractive tradeoff in range, resolution, and data rate. Operating these at grazing incidence gives larger scene coverage and improved image quality, due to the dominance of diffuse backscattered reflectance, but induces cast shadows that are typically more distinct than the brightness patterns produced by the direct reflectance of the casting objects. For the computation of 3-D motion by automatic video processing, estimation accuracy and robustness can be enhanced by integrating the visual cues from shadow dynamics with the image flow of stationary 3-D objects, both induced by sonar motion. In this paper, we present the mathematical models of image flow for 3-D objects and their cast shadows, utilize them in devising various 3-D sonar motion estimation solutions, and study their robustness. We present results of experiments with both synthetic and real data in order to assess the accuracy and performance of these methods.

Original language: English (US)
Article number: 6522179
Pages (from-to): 1016-1030
Number of pages: 15
Journal: IEEE Transactions on Robotics
Issue number: 4
State: Published - Jun 6 2013


Keywords

  • 2-D forward-scan sonar video
  • 3-D motion estimation
  • Benthic habitat mapping
  • Marine robotics
  • Visual odometry

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Control and Systems Engineering
  • Computer Science Applications
