Direct recovery of motion and range from images of scenes with time-varying illumination

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

We present a direct closed-form solution for estimating the 3D depth and translational motion of a camera from image sequences in which the scene illumination can vary from one frame to the next. The method is based on a dynamic image model that we proposed previously for the computation of optical flow. We use only selected moments of the image and its spatio-temporal derivatives in small image regions to calculate the solution. Direct solutions for selected experiments with real images are given and compared with those obtained from the computed optical flow, showing that the former typically gives better estimates than the flow-based approach even though both techniques are based on the same dynamic image model. In particular, the direct solution is more robust with respect to image motions of more than one pixel per frame.
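
A brief sketch may help place the abstract (our reconstruction for illustration only; the exact form and the symbols m, c, and f are assumptions, not taken from the paper). In dynamic image models of this kind, the classical brightness constancy equation is replaced by a generalized constraint on the image brightness E(x, y, t) and its spatio-temporal derivatives,

    E_x u + E_y v + E_t = m E + c,

where (u, v) is the image motion and m and c are multiplier and offset terms that absorb the frame-to-frame illumination change. For purely translational camera motion (U, V, W), perspective projection with focal length f gives, up to sign conventions, u = (x W - f U)/Z and v = (y W - f V)/Z for a scene point at depth Z. Substituting these into the constraint and taking selected moments of E and its derivatives over a small image region yields a linear system in the motion, depth, and illumination parameters that admits a closed-form solution; this is the sense in which the method is "direct": no intermediate optical flow field is computed.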

Original language: English
Title of host publication: Proceedings of the IEEE International Conference on Computer Vision
Place of Publication: Piscataway, NJ, United States
Publisher: IEEE
Pages: 467-472
Number of pages: 6
State: Published - Dec 1 1995
Event: International Symposium on Computer Vision, ISCV'95 - Coral Gables, FL, USA
Duration: Nov 21 1995 - Nov 23 1995

Other

Other: International Symposium on Computer Vision, ISCV'95
City: Coral Gables, FL, USA
Period: 11/21/95 - 11/23/95

Fingerprint

  • Lighting
  • Recovery
  • Optical flows
  • Pixels
  • Cameras
  • Derivatives
  • Experiments

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Software

Cite this

Negahdaripour, S., & Lanjing, J. (1995). Direct recovery of motion and range from images of scenes with time-varying illumination. In Proceedings of the IEEE International Conference on Computer Vision (pp. 467-472). Piscataway, NJ, United States: IEEE.

Scopus record: http://www.scopus.com/inward/record.url?scp=0029489043&partnerID=8YFLogxK