Induction from multi-label training examples in text categorization: Combining subclassifiers (a case study)

Kanoksri Sarinnapakorn, Miroslav Kubat

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The main problem with algorithms for induction from multi-label training examples is that they often suffer from prohibitive computational costs, especially in text-categorization domains with thousands of features characterizing each document. One way to reduce these costs is to run a baseline induction algorithm separately on different subsets of features, obtaining a set of subclassifiers that are then combined. In this case study, we investigate three different ways to combine subclassifiers, including our own solution based on the Dempster-Shafer Theory.
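The abstract does not spell out the combination scheme, but the classical way to merge evidence from several sources under Dempster-Shafer Theory is Dempster's rule of combination. The sketch below is a minimal, generic illustration of that rule, not the paper's actual method: each subclassifier's output is assumed to be a mass function over sets of labels (here represented as `frozenset`s), and the label names and mass values are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions via Dempster's rule of combination.

    Each mass function is a dict mapping a frozenset of labels to a
    nonnegative mass; the masses of each function should sum to 1.
    """
    combined = {}
    conflict = 0.0  # K: total mass assigned to contradictory evidence
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # intersections that are empty sets
    if conflict >= 1.0:
        raise ValueError("sources are totally conflicting")
    # Normalize by 1 - K so the combined masses again sum to 1
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Hypothetical evidence from two subclassifiers, each trained on a
# different feature subset, over the labels {sports, politics}:
m1 = {frozenset({"sports"}): 0.7,
      frozenset({"sports", "politics"}): 0.3}
m2 = {frozenset({"sports"}): 0.6,
      frozenset({"politics"}): 0.1,
      frozenset({"sports", "politics"}): 0.3}
combined = dempster_combine(m1, m2)
```

Because both sources place most of their mass on `sports`, the combined mass function concentrates even more strongly on that label, while the small conflicting mass (one source backing `politics` alone) is discarded and the rest renormalized.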

Original language: English (US)
Title of host publication: Proceedings of the 2007 International Conference on Artificial Intelligence, ICAI 2007
Pages: 351-357
Number of pages: 7
State: Published - Dec 1 2007
Event: 2007 International Conference on Artificial Intelligence, ICAI 2007 - Las Vegas, NV, United States
Duration: Jun 25 2007 - Jun 28 2007

Publication series

Name: Proceedings of the 2007 International Conference on Artificial Intelligence, ICAI 2007
Volume: 1

Other

Other: 2007 International Conference on Artificial Intelligence, ICAI 2007
Country/Territory: United States
City: Las Vegas, NV
Period: 6/25/07 - 6/28/07

Keywords

  • Dempster-Shafer Theory
  • Multi-label examples
  • Text categorization

ASJC Scopus subject areas

  • Artificial Intelligence
