Unsupervised segmentation of highly dynamic scenes through global optimization of multiscale cues

Yinhui Zhang, Mohamed Abdel-Mottaleb, Zifen He

Research output: Contribution to journal › Article › peer-review

4 Scopus citations


We propose a novel method for highly dynamic scene segmentation by formulating foreground object extraction as a global optimization framework that integrates a set of multiscale spatio-temporal cues. The multiscale features consist of a combination of motion and spectral components at the pixel level, together with spatio-temporal consistency constraints between superpixels. To compensate for the ambiguities of the foreground hypothesis caused by highly dynamic and cluttered backgrounds, we formulate salient foreground mapping as a convex optimization of a weighted total variation energy, which is solved efficiently by an alternating minimization scheme. Moreover, appearance and position spatio-temporal consistency constraints between superpixels are explicitly incorporated into a Markov random field energy functional to further refine the salient pixel-level foreground maps. This work facilitates sequential integration of multiscale probability constraints into a globally optimal segmentation framework, helping to resolve object boundary ambiguities in highly dynamic scenes. Extensive experiments on challenging dynamic scene data sets demonstrate the feasibility and superiority of the proposed segmentation approach.
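To make the weighted total variation step concrete, the following is a minimal sketch of minimizing an energy of the form E(u) = Σ w·|∇u| + λ·Σ (u − f)², where f is a pixel-level saliency map and w are per-pixel weights. Note the assumptions: the paper solves this convex problem with an alternating minimization scheme, whereas this illustration uses plain gradient descent on a smoothed (ε-regularized) TV term; the function name, parameter values, and thresholding are hypothetical and not taken from the paper.

```python
import numpy as np

def weighted_tv_segment(f, w, lam=1.0, tau=0.05, eps=1e-2, iters=200):
    """Hypothetical illustration: smoothed-gradient descent on
    E(u) = sum_x w(x) * sqrt(|grad u(x)|^2 + eps) + lam * sum_x (u(x) - f(x))^2.

    f : 2-D saliency/foreground-probability map, values in [0, 1]
    w : 2-D per-pixel TV weights (small w allows edges, large w smooths)
    Returns a binary foreground mask.
    """
    u = f.astype(np.float64).copy()
    for _ in range(iters):
        # Forward differences with replicated (Neumann-like) boundary.
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)          # smoothed gradient magnitude
        px, py = w * ux / mag, w * uy / mag          # weighted unit gradient field
        # Backward-difference divergence of (px, py).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Descent step: TV curvature force plus data-fidelity pull toward f.
        u -= tau * (-div + 2.0 * lam * (u - f))
        u = np.clip(u, 0.0, 1.0)
    return (u > 0.5).astype(np.uint8)
```

In this sketch the data term keeps the solution close to the saliency map while the weighted TV term suppresses isolated responses from dynamic background clutter; the paper's alternating minimization reaches the same convex objective's minimizer more efficiently than this simple descent.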

Original language: English (US)
Pages (from-to): 3477-3487
Number of pages: 11
Journal: Pattern Recognition
Issue number: 11
State: Published - Nov 1 2015

Keywords

  • Dynamic scene
  • Global optimization
  • Image sequence segmentation
  • Unsupervised segmentation
ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

