Florida International University and University of Miami TRECVID 2011

Chao Chen, Dianting Liu, Qiusha Zhu, Tao Meng, Mei-Ling Shyu, Yimin Yang, Hsin-Yu Ha, Fausto Fleites, Shu-Ching Chen, Winnie Chen, Tiffany Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

This paper presents a summary of the work of the "Florida International University - University of Miami (FIU-UM)" team in the TRECVID 2011 tasks [1]. This year, the FIU-UM team participated in the Semantic Indexing (SIN) and Instance Search (INS) tasks. Four runs were submitted for the SIN task:

• F_A_FIU-UM-1_1: KF+Meta&Relation+Audio+SPCPE&SIFT - Fuse the results from Subspace Modeling and Ranking (SMR) using the key-frame-based low-level features (KF); LibSVM classification using metadata from the meta-XML files associated with the IACC videos together with the relationships between semantic concepts (Meta&Relation); Gaussian Mixture Models (GMM) using Mel-frequency cepstral coefficient (MFCC) audio features; and the simultaneous partition and class parameter estimation (SPCPE) algorithm with scale-invariant feature transform (SIFT) interest point matching (SPCPE&SIFT).
• F_A_FIU-UM-2_2: KF+Meta&Relation - Fuse the results from SMR using KF and LibSVM using the metadata and the relationships between semantic concepts.
• F_A_FIU-UM-3_3: Fuse the results from SMR using KF, GMM using MFCC audio features, and SPCPE&SIFT matching.
• F_A_FIU-UM-4_4: KF - The baseline model, which applies SMR to the key-frame-based low-level features.

In addition, four runs were submitted for the INS task:

• FIU-UM-1: Use the 95 original example images together with 261 self-collected images to train the Multiple Correspondence Analysis (MCA) models that rank the testing video clips for each image query; re-rank the returned video clips using SIFT, K-Nearest Neighbor (KNN), and related SIN models.
• FIU-UM-2: Use the 95 original example images together with 261 self-collected images to train the MCA models that rank the testing video clips for each image query; re-rank the video clips using the MCA model trained on the 95 original example images.
• FIU-UM-3: Use the 95 original example images together with 261 self-collected images to train the MCA models that rank the testing video clips for each image query; no re-ranking is performed in this run.
• FIU-UM-4: Use the 95 original example images to train the MCA models that rank the testing video clips for each image query; re-rank the testing video clips using the KNN results obtained from the 95 original example images.

After analyzing this year's results, several future directions are proposed to improve the current framework.
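The SIN runs above combine ranking scores from several component models (SMR on keyframe features, LibSVM on metadata and concept relations, GMMs on MFCC audio features, and SPCPE&SIFT matching). The abstract does not state the fusion rule, so the following is only a minimal sketch of a normalized-score late fusion; the component names, score arrays, and equal weights are hypothetical, not taken from the paper.

```python
import numpy as np

def min_max_normalize(scores):
    """Scale one component's shot scores to [0, 1] so components are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def late_fuse(component_scores, weights=None):
    """Weighted average of normalized per-shot scores from each component model.

    component_scores: dict mapping component name -> np.array of shot scores.
    weights: optional dict of per-component weights (defaults to equal weights).
    """
    names = list(component_scores)
    if weights is None:
        weights = {name: 1.0 / len(names) for name in names}
    normalized = [weights[name] * min_max_normalize(component_scores[name])
                  for name in names]
    return np.sum(normalized, axis=0)

# Hypothetical scores for 5 test shots from each SIN component.
scores = {
    "SMR_KF": np.array([0.9, 0.2, 0.4, 0.7, 0.1]),
    "Meta_Relation": np.array([0.6, 0.1, 0.5, 0.8, 0.3]),
    "GMM_MFCC": np.array([0.4, 0.3, 0.2, 0.9, 0.2]),
    "SPCPE_SIFT": np.array([0.7, 0.0, 0.3, 0.6, 0.1]),
}
fused = late_fuse(scores)
ranking = np.argsort(-fused)  # shot indices, best first
print(ranking)
```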
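For the INS runs, an initial MCA-based ranking is refined by re-ranking against the example images (e.g., with KNN in FIU-UM-4). The exact re-ranking scheme is not given in this summary; below is a minimal sketch, assuming the top-ranked clips are re-scored by their mean feature distance to the example-image feature vectors. All feature dimensions and data in the example are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_rerank(example_feats, clip_feats, initial_ranking, top_k=100, n_neighbors=5):
    """Re-rank the top_k clips of an initial ranking by KNN distance to example images.

    example_feats: (n_examples, d) feature vectors of the query example images.
    clip_feats: (n_clips, d) one feature vector per candidate video clip.
    initial_ranking: clip indices sorted best-first by the first-stage model.
    """
    knn = NearestNeighbors(n_neighbors=min(n_neighbors, len(example_feats)))
    knn.fit(example_feats)

    head = initial_ranking[:top_k]
    # Smaller mean distance to the nearest example images -> better match.
    dists, _ = knn.kneighbors(clip_feats[head])
    order = np.argsort(dists.mean(axis=1))
    return np.concatenate([head[order], initial_ranking[top_k:]])

# Hypothetical data: 95 example-image features and 1000 candidate clips (64-d features).
rng = np.random.default_rng(0)
examples = rng.normal(size=(95, 64))
clips = rng.normal(size=(1000, 64))
first_stage = np.arange(1000)  # stand-in for a first-stage (e.g., MCA-based) ranking
print(knn_rerank(examples, clips, first_stage)[:10])
```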

Original language: English
Title of host publication: 2011 TREC Video Retrieval Evaluation Notebook Papers
Publisher: National Institute of Standards and Technology
State: Published - Jan 1 2011
Event: TREC Video Retrieval Evaluation, TRECVID 2011 - Gaithersburg, MD, United States
Duration: Dec 5 2011 - Dec 7 2011

Other

Other: TREC Video Retrieval Evaluation, TRECVID 2011
Country: United States
City: Gaithersburg, MD
Period: 12/5/11 - 12/7/11

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Software

Cite this

Chen, C., Liu, D., Zhu, Q., Meng, T., Shyu, M.-L., Yang, Y., ... Chen, T. (2011). Florida International University and University of Miami TRECVID 2011. In 2011 TREC Video Retrieval Evaluation Notebook Papers. National Institute of Standards and Technology.
