Video semantic event/concept detection using a subspace-based multimedia data mining framework

Mei-Ling Shyu, Zongxing Xie, Min Chen, Shu-Ching Chen

Research output: Contribution to journal › Article

108 Scopus citations

Abstract

In this paper, a subspace-based multimedia data mining framework is proposed for video semantic analysis, specifically video event/concept detection, by addressing two basic issues: the semantic gap and rare event/concept detection. The proposed framework achieves full automation via multimodal content analysis and intelligent integration of distance-based and rule-based data mining techniques. The content analysis process facilitates comprehensive video analysis by extracting low-level and middle-level features from the audio/visual channels. The integrated data mining techniques effectively address the two basic issues by alleviating the class imbalance problem throughout the process and by reconstructing and refining the feature dimensions automatically. The promising experimental performance on goal/corner event detection and sports/commercials/building concept extraction from soccer videos and TRECVID news collections demonstrates the effectiveness of the proposed framework. Furthermore, its domain-free characteristic indicates the great potential of extending the proposed multimedia data mining framework to a wide range of application domains.
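The keywords below (eigenspace, eigenvalue, principal component) suggest the flavor of subspace modeling involved. As a rough illustration only, and not the authors' actual algorithm, the following sketch fits a principal-component subspace to "normal" feature vectors and scores new observations by a Mahalanobis-style distance in that subspace, so that rare events stand out as large distances; all function names and parameters here are hypothetical.

```python
import numpy as np

def fit_subspace(X, var_ratio=0.9):
    """Fit a principal-component subspace to feature vectors X (n x d).

    Returns the sample mean, the retained eigenvectors, and their
    eigenvalues, keeping enough components to cover `var_ratio`
    of the total variance.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    # Eigen-decomposition of the sample covariance matrix.
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)       # eigh returns ascending order
    order = np.argsort(vals)[::-1]         # re-sort descending
    vals, vecs = vals[order], vecs[:, order]
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_ratio)) + 1
    return mu, vecs[:, :k], vals[:k]

def subspace_distance(x, mu, vecs, vals):
    """Variance-normalized distance of x within the principal subspace."""
    proj = vecs.T @ (x - mu)
    return float(np.sum(proj ** 2 / vals))

# Toy usage: model typical shots, then flag a far-away observation
# as a candidate rare event.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 5))   # synthetic "normal" features
mu, vecs, vals = fit_subspace(normal)
d_typical = subspace_distance(normal[0], mu, vecs, vals)
d_outlier = subspace_distance(np.full(5, 8.0), mu, vecs, vals)
```

In practice a threshold on such a distance would separate candidate rare events from typical shots; the paper's framework additionally integrates rule-based mining and multimodal features, which this sketch omits.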

Original language: English (US)
Pages (from-to): 252-259
Number of pages: 8
Journal: IEEE Transactions on Multimedia
Volume: 10
Issue number: 2
DOIs
State: Published - Feb 2008

Keywords

  • Data mining
  • Eigenspace
  • Eigenvalue
  • Event/concept detection
  • Principal component
  • Video semantics analysis

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Computer Graphics and Computer-Aided Design
  • Software

