Semantic retrieval for videos in non-static background using motion saliency and global features

Dianting Liu, Mei-Ling Shyu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

In this paper, a video semantic retrieval framework is proposed based on a novel unsupervised motion region detection algorithm that works reasonably well with dynamic backgrounds and camera motion. The framework is inspired by biological mechanisms of human vision, in which motion saliency (defined as attention drawn by motion) attracts viewers more strongly than other low-level visual features while watching videos. Based on this observation, motion vectors in frame sequences are calculated using an optical flow algorithm to estimate the movement of a block from one frame to the next. Next, a center-surround coherency evaluation model is proposed to compute local motion saliency in a completely unsupervised manner. An integral density algorithm is employed to find the globally optimal minimum-coherency region, which is taken as the motion region and integrated into the video semantic retrieval framework to enhance video semantic analysis and understanding. The proposed framework is evaluated on video sequences with non-static backgrounds, and the promising experimental results show that semantic retrieval performance can be improved by integrating global texture and local motion information.
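The "integral density" step described in the abstract amounts to an integral-image (summed-area table) scan for the window with minimum total coherency, which makes each candidate window's sum an O(1) lookup and the exhaustive scan globally optimal. The sketch below is a minimal NumPy illustration of that idea, assuming a precomputed per-block coherency map; the function names, fixed window size, and scan strategy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def integral_image(m):
    # Summed-area table with a zero border so any region sum is O(1).
    ii = np.zeros((m.shape[0] + 1, m.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(m, axis=0), axis=1)
    return ii

def region_sum(ii, top, left, h, w):
    # Sum of the h-by-w window whose top-left corner is (top, left),
    # recovered from four corner lookups in the integral image.
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

def min_coherency_region(coherency, h, w):
    # Exhaustively scan every h-by-w window; because each window sum is
    # O(1) via the integral image, the scan finds the global minimum.
    ii = integral_image(coherency)
    best, best_pos = np.inf, (0, 0)
    for top in range(coherency.shape[0] - h + 1):
        for left in range(coherency.shape[1] - w + 1):
            s = region_sum(ii, top, left, h, w)
            if s < best:
                best, best_pos = s, (top, left)
    return best_pos, best
```

In the paper's pipeline, low coherency marks blocks whose motion disagrees with their surroundings, so the minimum-coherency window serves as the detected motion region.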

Original language: English
Title of host publication: Proceedings - 2013 IEEE 7th International Conference on Semantic Computing, ICSC 2013
Pages: 294-301
Number of pages: 8
DOI: 10.1109/ICSC.2013.57
ISBN (Print): 9780769551197
State: Published - Dec 1 2013
Event: 2013 IEEE 7th International Conference on Semantic Computing, ICSC 2013 - Irvine, CA, United States
Duration: Sep 16 2013 - Sep 18 2013

Keywords

  • Global feature
  • Motion detection
  • Motion saliency
  • Non-static background
  • Video semantic retrieval

ASJC Scopus subject areas

  • Software

Cite this

Liu, D., & Shyu, M-L. (2013). Semantic retrieval for videos in non-static background using motion saliency and global features. In Proceedings - 2013 IEEE 7th International Conference on Semantic Computing, ICSC 2013 (pp. 294-301). [6693532] https://doi.org/10.1109/ICSC.2013.57
