Video semantic concept discovery using multimodal-based association classification

Lin Lin, Guy Ravitz, Mei-Ling Shyu, Shu Ching Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

34 Citations (Scopus)

Abstract

Digital audio and video have recently taken center stage in the communication world, which highlights the importance of digital media information management and indexing. It is of great interest for the multimedia research community to find methods and solutions that help bridge the semantic gap between the low-level features extracted from audio or video data and the actual semantics of the data. In this paper, we propose a novel framework that works toward reducing this semantic gap. The proposed framework uses the Apriori algorithm and association rule mining to find frequent itemsets in the feature data set and generate classification rules that classify video shots into different concepts (semantics). We also introduce a novel pre-filtering architecture that reduces the high positive-to-negative instance ratio in the classifier training step, which helps reduce the number of misclassification errors. Our proposed framework shows promising results in classifying multiple concepts.
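The mining step the abstract describes — treating each shot's discretized low-level features as transaction items, running Apriori to find frequent itemsets, and keeping only high-confidence rules whose consequent is a concept label — can be sketched as below. This is an illustrative sketch under assumptions, not the authors' implementation: the feature item names (e.g. "fast_motion") and the support/confidence thresholds are hypothetical.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {itemset: support} for all itemsets meeting min_support,
    mined level by level (1-itemsets, then 2-itemsets, ...)."""
    n = len(transactions)
    candidates = {frozenset([item]) for t in transactions for item in t}
    frequent, k = {}, 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        # Join step: any union of two frequent k-itemsets of size k+1 is a candidate.
        keys = list(level)
        candidates = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        k += 1
    return frequent

def mine_classification_rules(shots, labels, min_support=0.3, min_conf=0.8):
    """Mine rules 'feature itemset -> concept' by folding the concept label
    into each transaction as an extra item before mining."""
    tx = [frozenset(s) | {("concept", y)} for s, y in zip(shots, labels)]
    freq = apriori(tx, min_support)
    rules = []
    for itemset, sup in freq.items():
        label_items = {i for i in itemset if isinstance(i, tuple) and i[0] == "concept"}
        if len(label_items) != 1 or len(itemset) == 1:
            continue  # need exactly one concept item plus at least one feature item
        antecedent = itemset - label_items
        # Antecedent is frequent too (downward closure), so the lookup is safe.
        conf = sup / freq[antecedent]
        if conf >= min_conf:
            rules.append((antecedent, next(iter(label_items))[1], conf))
    return rules
```

A shot is then assigned the concept of the matching rule with the highest confidence; the pre-filtering step in the paper would trim the transaction set before this mining stage.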

Original language: English
Title of host publication: Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007
Pages: 859-862
Number of pages: 4
State: Published - Dec 1 2007
Event: IEEE International Conference on Multimedia and Expo, ICME 2007 - Beijing, China
Duration: Jul 2 2007 - Jul 5 2007


Fingerprint

  • Semantics
  • Digital storage
  • Association rules
  • Information management
  • Classifiers
  • Communication

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Software

Cite this

Lin, L., Ravitz, G., Shyu, M-L., & Chen, S. C. (2007). Video semantic concept discovery using multimodal-based association classification. In Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007 (pp. 859-862). [4284786]

@inproceedings{9346daf09dc94aa599555da27094fbd1,
title = "Video semantic concept discovery using multimodal-based association classification",
abstract = "Digital audio and video have recently taken a center stage in the communication world, which highlights the importance of digital media information management and indexing. It is of great interest for the multimedia research community to find methods and solutions that could help bridge the semantic gap that exists between the low-level features extracted from the audio or video data and the actual semantics of the data. In this paper, we propose a novel framework that works towards reducing this semantic gap. The proposed framework uses the apriori algorithm and association rule mining to find frequent itemsets in the feature data set and generate classification rules to classify video shots to different concepts (semantics). We also introduce a novel pre-filtering architecture which reduces the high positive to negative instances ratio in the classifier training step. This helps reduce the amount of misclassification errors. Our proposed framework shows promising results in classifying multiple concepts.",
author = "Lin, Lin and Ravitz, Guy and Shyu, {Mei-Ling} and Chen, {Shu Ching}",
year = "2007",
month = "12",
day = "1",
language = "English",
isbn = "1424410177",
pages = "859--862",
booktitle = "Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007",

}
