Integration of Visual Temporal Information and Textual Distribution Information for News Web Video Event Mining

Chengde Zhang, Xiao Wu, Mei-Ling Shyu, Qiang Peng

Research output: Contribution to journal › Article


Abstract

News web videos exhibit several characteristics, including a limited number of features, noisy text information, and errors in near-duplicate keyframe (NDK) detection. These characteristics make mining events from news web videos a challenging task. In this paper, a novel framework is proposed to better group associated web videos into events. First, the data preprocessing stage performs feature selection and tag relevance learning. Next, multiple correspondence analysis is applied to explore the correlations between terms and events with the assistance of visual information. Co-occurrence and the visual near-duplicate feature trajectory induced from NDKs are combined to calculate the similarity between NDKs and events. Finally, a probabilistic model is proposed for news web video event mining, in which both visual temporal information and textual distribution information are integrated. Experiments on news web videos from YouTube demonstrate that the integration of visual temporal information and textual distribution information outperforms existing methods in news web video event mining.
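To make the fusion step concrete, the following is a minimal Python sketch of combining a co-occurrence score with a visual near-duplicate feature-trajectory score into an NDK-event similarity. It assumes Jaccard overlap for co-occurrence and cosine similarity between temporal trajectories as stand-ins for the paper's exact measures; the helper names and the weight alpha are hypothetical, not the authors' formulation.

```python
import numpy as np

def cooccurrence_similarity(ndk_videos, event_videos):
    """Jaccard-style overlap between the video sets of an NDK and an event
    (hypothetical stand-in for the paper's co-occurrence measure)."""
    ndk_videos, event_videos = set(ndk_videos), set(event_videos)
    if not ndk_videos or not event_videos:
        return 0.0
    return len(ndk_videos & event_videos) / len(ndk_videos | event_videos)

def trajectory_similarity(ndk_traj, event_traj):
    """Cosine similarity between two temporal feature trajectories
    (e.g., daily upload-frequency curves)."""
    ndk_traj = np.asarray(ndk_traj, dtype=float)
    event_traj = np.asarray(event_traj, dtype=float)
    denom = np.linalg.norm(ndk_traj) * np.linalg.norm(event_traj)
    return float(ndk_traj @ event_traj / denom) if denom else 0.0

def ndk_event_similarity(ndk_videos, event_videos, ndk_traj, event_traj, alpha=0.5):
    """Linear fusion of co-occurrence and trajectory similarity;
    alpha is a hypothetical weighting parameter."""
    return (alpha * cooccurrence_similarity(ndk_videos, event_videos)
            + (1 - alpha) * trajectory_similarity(ndk_traj, event_traj))

# Toy usage: one NDK shared by videos {1, 2, 3}, an event containing videos
# {2, 3, 4}, and seven-day upload-frequency trajectories for each.
sim = ndk_event_similarity(
    ndk_videos=[1, 2, 3],
    event_videos=[2, 3, 4],
    ndk_traj=[0, 1, 3, 5, 2, 1, 0],
    event_traj=[0, 2, 4, 6, 3, 1, 0],
)
print(round(sim, 3))
```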

Original language: English (US)
Journal: IEEE Transactions on Human-Machine Systems
DOIs
State: Accepted/In press - Nov 4 2015


ASJC Scopus subject areas

  • Artificial Intelligence
  • Signal Processing
  • Human Factors and Ergonomics
  • Computer Networks and Communications
  • Computer Science Applications
  • Human-Computer Interaction
  • Control and Systems Engineering
