Video semantic concept discovery using multimodal-based association classification

Lin Lin, Guy Ravitz, Mei Ling Shyu, Shu Ching Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

41 Scopus citations

Abstract

Digital audio and video have recently taken center stage in the communication world, which highlights the importance of digital media information management and indexing. It is of great interest for the multimedia research community to find methods and solutions that could help bridge the semantic gap that exists between the low-level features extracted from the audio or video data and the actual semantics of the data. In this paper, we propose a novel framework that works towards reducing this semantic gap. The proposed framework uses the Apriori algorithm and association rule mining to find frequent itemsets in the feature data set and generate classification rules that classify video shots into different concepts (semantics). We also introduce a novel pre-filtering architecture that reduces the high positive-to-negative instance ratio in the classifier training step, which helps reduce the number of misclassification errors. Our proposed framework shows promising results in classifying multiple concepts.
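The core of the abstract's approach is Apriori-style frequent itemset mining over discretized shot features, followed by association rules used as classifiers. The following is a minimal, generic sketch of that idea, not the authors' actual pipeline: the feature names ("high_motion", "loud_audio") and the concept label "goal" are hypothetical stand-ins for discretized audio-visual features and a target concept.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every itemset appearing in at least min_support transactions.

    transactions: list of frozensets of discretized feature labels.
    Uses the Apriori property: a (k+1)-itemset can only be frequent
    if it is built from frequent k-itemsets.
    """
    frequent = {}
    # Level 1: frequent single items.
    items = {frozenset([i]) for t in transactions for i in t}
    current = {s for s in items
               if sum(s <= t for t in transactions) >= min_support}
    k = 1
    while current:
        for s in current:
            frequent[s] = sum(s <= t for t in transactions)
        # Join step: combine frequent k-itemsets into (k+1)-candidates.
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k + 1}
        current = {c for c in candidates
                   if sum(c <= t for t in transactions) >= min_support}
        k += 1
    return frequent

# Toy "shot transactions": each shot is a set of discretized features
# plus (for positive examples) the concept label. All hypothetical.
shots = [frozenset(t) for t in [
    {"high_motion", "loud_audio", "goal"},
    {"high_motion", "loud_audio", "goal"},
    {"high_motion", "quiet_audio"},
    {"low_motion", "loud_audio", "goal"},
]]
freq = apriori(shots, min_support=2)

# A classification rule {high_motion, loud_audio} -> goal with its confidence:
body = frozenset({"high_motion", "loud_audio"})
head = body | {"goal"}
confidence = freq[head] / freq[body]
```

Rules whose confidence clears a threshold would then vote on the concept of an unseen shot; the pre-filtering step the abstract mentions would rebalance the training transactions before mining.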

Original language: English (US)
Title of host publication: Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007
Publisher: IEEE Computer Society
Pages: 859-862
Number of pages: 4
ISBN (Print): 1424410177, 9781424410170
DOIs
State: Published - 2007
Event: IEEE International Conference on Multimedia and Expo, ICME 2007 - Beijing, China
Duration: Jul 2, 2007 - Jul 5, 2007

Publication series

Name: Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007

Other

Other: IEEE International Conference on Multimedia and Expo, ICME 2007
Country: China
City: Beijing
Period: 7/2/07 - 7/5/07

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Software
