Florida International University and University of Miami TRECVID 2011

Chao Chen, Dianting Liu, Qiusha Zhu, Tao Meng, Mei-Ling Shyu, Yimin Yang, Hsin-Yu Ha, Fausto Fleites, Shu-Ching Chen, Winnie Chen, Tiffany Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

This paper presents a summary of the work of the "Florida International University - University of Miami (FIU-UM)" team in the TRECVID 2011 tasks [1]. This year, the FIU-UM team participated in the Semantic Indexing (SIN) and Instance Search (INS) tasks. Four runs were submitted for the SIN task:

• F_A_FIU-UM-1_1: KF+Meta&Relation+Audio+SPCPE&SIFT - Fuse the results from Subspace Modeling and Ranking (SMR) using the key frame-based low-level features (KF), LibSVM classification using metadata from the meta-xml files associated with the IACC videos as well as the relationships between semantic concepts (Meta&Relation), Gaussian Mixture Models (GMM) using Mel-frequency cepstral coefficient (MFCC) audio features, and the simultaneous partition and class parameter estimation (SPCPE) algorithm with scale-invariant feature transform (SIFT) interest point matching (SPCPE&SIFT).
• F_A_FIU-UM-2_2: KF+Meta&Relation - Fuse the results from SMR using KF and LibSVM using the meta information and the relationships between semantic concepts.
• F_A_FIU-UM-3_3: Fuse the results from SMR using KF, GMM using MFCC audio features, and SPCPE&SIFT matching.
• F_A_FIU-UM-4_4: KF - The baseline model, which applies SMR to the key frame-based low-level features.

In addition, four runs were submitted for the INS task:

• FIU-UM-1: Use the 95 original example images as well as 261 self-collected images to train the Multiple Correspondence Analysis (MCA) models that rank the testing video clips for each image query; then use SIFT, K-Nearest Neighbor (KNN), and related SIN models to re-rank the returned video clips.
• FIU-UM-2: Use the 95 original example images as well as 261 self-collected images to train the MCA models that rank the testing video clips for each image query; then use the MCA model trained on the 95 original example images to re-rank the video clips.
• FIU-UM-3: Use the 95 original example images as well as 261 self-collected images to train the MCA models that rank the testing video clips for each image query; no re-ranking is performed in this run.
• FIU-UM-4: Use the 95 original example images to train the MCA models that rank the testing video clips for each image query; then re-rank the testing video clips with the KNN results obtained from the 95 original example images.

After analyzing this year's results, a few future directions are proposed to improve the current framework.
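
The abstract does not spell out how the per-model outputs are combined in the SIN runs. The sketch below shows one plausible score-level fusion of the detector outputs (SMR on key-frame features, LibSVM on metadata/relations, GMM on MFCC audio, SPCPE&SIFT matching); the min-max normalization, uniform weights, and all function names are illustrative assumptions, not the fusion rule documented in the paper.

```python
import numpy as np

def minmax_normalize(scores):
    """Scale a 1-D array of detector scores to [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    if hi == lo:
        return np.zeros_like(scores)
    return (scores - lo) / (hi - lo)

def fuse_scores(score_lists, weights=None):
    """Score-level fusion of several detectors scored over the same shots.

    score_lists : list of 1-D arrays, one per detector, aligned to the same shots.
    weights     : optional per-detector weights; uniform if omitted.
    Returns fused scores; higher means more likely to contain the concept.
    """
    normalized = np.vstack([minmax_normalize(s) for s in score_lists])
    if weights is None:
        weights = np.ones(len(score_lists))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return weights @ normalized

# Example: fuse three detector outputs over five shots and rank the shots.
smr   = [0.91, 0.12, 0.55, 0.30, 0.77]   # SMR on key-frame (KF) features
meta  = [0.40, 0.05, 0.80, 0.10, 0.60]   # LibSVM on metadata/relations
audio = [0.20, 0.15, 0.70, 0.05, 0.90]   # GMM on MFCC audio features
fused = fuse_scores([smr, meta, audio])
print(np.argsort(-fused))                # shot indices, best first
```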

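Similarly, the INS runs re-rank an initial MCA-based ranking using K-Nearest Neighbor results computed from the example images. The following NumPy-only sketch illustrates that kind of KNN re-ranking step; the feature representation, Euclidean distance, top-N cutoff, and all names are assumptions made for illustration rather than the paper's exact procedure.

```python
import numpy as np

def knn_rerank(query_feats, clip_feats, initial_ranking, k=5, top_n=100):
    """Re-rank the top-N clips of an initial (e.g. MCA-based) ranking by
    k-nearest-neighbor distance to the query example images.

    query_feats     : (Q, D) array, one feature vector per example image.
    clip_feats      : (C, D) array, one feature vector per test video clip.
    initial_ranking : clip indices ordered by the first-stage model.
    Returns the same indices with the top_n entries re-ordered.
    """
    head = list(initial_ranking[:top_n])
    tail = list(initial_ranking[top_n:])

    scores = []
    for idx in head:
        # Distance from this clip to every query example image.
        d = np.linalg.norm(query_feats - clip_feats[idx], axis=1)
        # Average distance to the k closest examples; smaller = better match.
        scores.append(np.sort(d)[:k].mean())

    reordered = [idx for _, idx in sorted(zip(scores, head))]
    return reordered + tail

# Example with random features: 95 example images, 1000 test clips.
rng = np.random.default_rng(0)
queries = rng.normal(size=(95, 128))
clips = rng.normal(size=(1000, 128))
first_stage = rng.permutation(1000)           # stand-in for the MCA ranking
reranked = knn_rerank(queries, clips, first_stage, k=5, top_n=100)
print(reranked[:10])
```
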
Original language: English
Title of host publication: 2011 TREC Video Retrieval Evaluation Notebook Papers
Publisher: National Institute of Standards and Technology
State: Published - Jan 1 2011
Event: TREC Video Retrieval Evaluation, TRECVID 2011 - Gaithersburg, MD, United States
Duration: Dec 5 2011 - Dec 7 2011

Other

Other: TREC Video Retrieval Evaluation, TRECVID 2011
Country: United States
City: Gaithersburg, MD
Period: 12/5/11 - 12/7/11

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Software
