Integration of motion cues in optical and sonar videos for 3-D positioning

Shahriar Negahdaripour, H. Pirsiavash, H. Sekkati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Target-based positioning and 3-D target reconstruction are critical capabilities in deploying submersible platforms for a range of underwater applications, e.g., search and inspection missions. While optical cameras provide high-resolution imagery and target details, they are constrained by a limited visibility range. In highly turbid waters, targets at distances of up to tens of meters can be recorded by high-frequency (MHz) 2-D sonar imaging systems that have been introduced to the commercial market in recent years. Although sonar offers lower resolution, a lower SNR, and inferior target detail compared to an optical camera under favorable visibility conditions, integrating both sensing modalities enables operation over a wider range of conditions, with generally better performance than deploying either system alone. In this paper, the estimation of the 3-D motion of the integrated system and the 3-D reconstruction of scene features are addressed. We do not require matches between optical and sonar features, referred to as opti-acoustic correspondences, but rather matches within either the sonar or the optical motion sequences. In addition to improving motion estimation accuracy, the advantages of the system include overcoming certain inherent ambiguities of monocular vision, e.g., the scale-factor ambiguity and the dual interpretation of planar scenes. We discuss how the proposed solution provides an effective strategy for addressing the rather complex opti-acoustic stereo matching problem. Experiments with real data demonstrate our technical contribution.
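The scale-factor ambiguity mentioned in the abstract can be illustrated with a toy sketch (not from the paper; all names and numbers below are hypothetical): monocular structure-from-motion recovers camera translation and scene depths only up to a common scale, while a single metric range measurement, such as one supplied by the sonar, fixes that scale for the whole reconstruction.

```python
# Toy illustration: monocular structure-from-motion recovers camera
# translation and point depths only up to a shared unknown scale s.
t_unit = (0.6, 0.0, 0.8)              # estimated translation direction (unit vector)
depths_up_to_scale = [1.0, 1.5, 2.2]  # relative depths of three tracked points

# A 2-D forward-looking sonar measures metric range to one of those points.
# For simplicity, assume that point lies near the optical axis, so range ~ depth.
sonar_range_m = 4.4                   # hypothetical measured range, in meters

# A single opti-acoustic range cue fixes the global scale factor.
s = sonar_range_m / depths_up_to_scale[2]  # -> 2.0

# The same factor converts the whole up-to-scale reconstruction to meters.
metric_depths = [s * d for d in depths_up_to_scale]
metric_translation = tuple(s * c for c in t_unit)
```

With no range cue, any positive s is equally consistent with the monocular data, which is exactly the ambiguity the integrated system removes.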

Original language: English
Title of host publication: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
DOIs: https://doi.org/10.1109/CVPR.2007.383354
State: Published - Oct 11 2007
Event: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07 - Minneapolis, MN, United States
Duration: Jun 17 2007 – Jun 22 2007

Other

Other: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
Country: United States
City: Minneapolis, MN
Period: 6/17/07 – 6/22/07

Fingerprint

Sonar
Visibility
Acoustics
Cameras
Motion estimation
Imaging systems
Inspection
Water
Experiments

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Vision and Pattern Recognition
  • Software
  • Control and Systems Engineering

Cite this

Negahdaripour, S., Pirsiavash, H., & Sekkati, H. (2007). Integration of motion cues in optical and sonar videos for 3-D positioning. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [4270352] https://doi.org/10.1109/CVPR.2007.383354

Integration of motion cues in optical and sonar videos for 3-D positioning. / Negahdaripour, Shahriar; Pirsiavash, H.; Sekkati, H.

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2007. 4270352.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Negahdaripour, S, Pirsiavash, H & Sekkati, H 2007, Integration of motion cues in optical and sonar videos for 3-D positioning. in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition., 4270352, 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07, Minneapolis, MN, United States, 6/17/07. https://doi.org/10.1109/CVPR.2007.383354
Negahdaripour S, Pirsiavash H, Sekkati H. Integration of motion cues in optical and sonar videos for 3-D positioning. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2007. 4270352 https://doi.org/10.1109/CVPR.2007.383354
Negahdaripour, Shahriar ; Pirsiavash, H. ; Sekkati, H. / Integration of motion cues in optical and sonar videos for 3-D positioning. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2007.
@inproceedings{bd8dae37aa4f420a86856e55f1585797,
title = "Integration of motion cues in optical and sonar videos for 3-D positioning",
abstract = "Target-based positioning and 3-D target reconstruction are critical capabilities in deploying submersible platforms for a range of underwater applications, e.g., search and inspection missions. While optical cameras provide high-resolution imagery and target details, they are constrained by a limited visibility range. In highly turbid waters, targets at distances of up to tens of meters can be recorded by high-frequency (MHz) 2-D sonar imaging systems that have been introduced to the commercial market in recent years. Although sonar offers lower resolution, a lower SNR, and inferior target detail compared to an optical camera under favorable visibility conditions, integrating both sensing modalities enables operation over a wider range of conditions, with generally better performance than deploying either system alone. In this paper, the estimation of the 3-D motion of the integrated system and the 3-D reconstruction of scene features are addressed. We do not require matches between optical and sonar features, referred to as opti-acoustic correspondences, but rather matches within either the sonar or the optical motion sequences. In addition to improving motion estimation accuracy, the advantages of the system include overcoming certain inherent ambiguities of monocular vision, e.g., the scale-factor ambiguity and the dual interpretation of planar scenes. We discuss how the proposed solution provides an effective strategy for addressing the rather complex opti-acoustic stereo matching problem. Experiments with real data demonstrate our technical contribution.",
author = "Shahriar Negahdaripour and H. Pirsiavash and H. Sekkati",
year = "2007",
month = "10",
day = "11",
doi = "10.1109/CVPR.2007.383354",
language = "English",
isbn = "1424411807",
booktitle = "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition",

}

TY - GEN

T1 - Integration of motion cues in optical and sonar videos for 3-D positioning

AU - Negahdaripour, Shahriar

AU - Pirsiavash, H.

AU - Sekkati, H.

PY - 2007/10/11

Y1 - 2007/10/11

N2 - Target-based positioning and 3-D target reconstruction are critical capabilities in deploying submersible platforms for a range of underwater applications, e.g., search and inspection missions. While optical cameras provide high-resolution imagery and target details, they are constrained by a limited visibility range. In highly turbid waters, targets at distances of up to tens of meters can be recorded by high-frequency (MHz) 2-D sonar imaging systems that have been introduced to the commercial market in recent years. Although sonar offers lower resolution, a lower SNR, and inferior target detail compared to an optical camera under favorable visibility conditions, integrating both sensing modalities enables operation over a wider range of conditions, with generally better performance than deploying either system alone. In this paper, the estimation of the 3-D motion of the integrated system and the 3-D reconstruction of scene features are addressed. We do not require matches between optical and sonar features, referred to as opti-acoustic correspondences, but rather matches within either the sonar or the optical motion sequences. In addition to improving motion estimation accuracy, the advantages of the system include overcoming certain inherent ambiguities of monocular vision, e.g., the scale-factor ambiguity and the dual interpretation of planar scenes. We discuss how the proposed solution provides an effective strategy for addressing the rather complex opti-acoustic stereo matching problem. Experiments with real data demonstrate our technical contribution.

AB - Target-based positioning and 3-D target reconstruction are critical capabilities in deploying submersible platforms for a range of underwater applications, e.g., search and inspection missions. While optical cameras provide high-resolution imagery and target details, they are constrained by a limited visibility range. In highly turbid waters, targets at distances of up to tens of meters can be recorded by high-frequency (MHz) 2-D sonar imaging systems that have been introduced to the commercial market in recent years. Although sonar offers lower resolution, a lower SNR, and inferior target detail compared to an optical camera under favorable visibility conditions, integrating both sensing modalities enables operation over a wider range of conditions, with generally better performance than deploying either system alone. In this paper, the estimation of the 3-D motion of the integrated system and the 3-D reconstruction of scene features are addressed. We do not require matches between optical and sonar features, referred to as opti-acoustic correspondences, but rather matches within either the sonar or the optical motion sequences. In addition to improving motion estimation accuracy, the advantages of the system include overcoming certain inherent ambiguities of monocular vision, e.g., the scale-factor ambiguity and the dual interpretation of planar scenes. We discuss how the proposed solution provides an effective strategy for addressing the rather complex opti-acoustic stereo matching problem. Experiments with real data demonstrate our technical contribution.

UR - http://www.scopus.com/inward/record.url?scp=34948840912&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=34948840912&partnerID=8YFLogxK

U2 - 10.1109/CVPR.2007.383354

DO - 10.1109/CVPR.2007.383354

M3 - Conference contribution

SN - 1424411807

SN - 9781424411801

BT - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

ER -