A reduction technique for nearest-neighbor classification: Small groups of examples

Miroslav Kubat, Martin Cooperson

Research output: Contribution to journal › Article › peer-review

7 Scopus citations


An important issue in nearest-neighbor classifiers is how to reduce the size of large sets of examples. Whereas many researchers recommend replacing the original set with a carefully selected subset, we investigate a mechanism that creates three or more such subsets. The idea is to ensure that each of them, when used as a 1-NN subclassifier, tends to err in a different part of the instance space. In that case, the failures of individual subclassifiers can be corrected by voting. The cost of our example-selection procedure is linear in the size of the original training set and, as our experiments demonstrate, dramatic data reduction can be achieved without a major drop in classification accuracy.
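The voting idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' procedure: the subsets here are formed by a simple random partition of toy Gaussian data, whereas the paper's contribution is a linear-cost selection mechanism that makes the subsets err in different regions.

```python
import math
import random
from collections import Counter

def nn_predict(subset, x):
    """A single 1-NN subclassifier: return the label of the closest stored example."""
    nearest = min(subset, key=lambda ex: math.dist(ex[0], x))
    return nearest[1]

def voting_predict(subsets, x):
    """Let each small subset act as a 1-NN subclassifier; majority vote decides."""
    votes = [nn_predict(s, x) for s in subsets]
    return Counter(votes).most_common(1)[0][0]

# Toy data (an assumption for illustration): class 0 near the origin,
# class 1 near (4, 4).
random.seed(0)
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(30)]
data += [((random.gauss(4, 1), random.gauss(4, 1)), 1) for _ in range(30)]

# Three disjoint subsets -- a stand-in for the paper's selection procedure.
# Each subset alone is a drastically reduced training set.
random.shuffle(data)
subsets = [data[i::3] for i in range(3)]

print(voting_predict(subsets, (0.0, 0.0)))  # class 0 region
print(voting_predict(subsets, (4.0, 4.0)))  # class 1 region
```

Because each subclassifier consults only its own small subset, the combined memory footprint stays far below the full training set, while disagreements between subclassifiers are resolved by the majority vote.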

Original language: English (US)
Pages (from-to): 463-476
Number of pages: 14
Journal: Intelligent Data Analysis
Issue number: 6
State: Published - Jan 1 2001


Keywords

  • example selection
  • nearest-neighbor classifiers
  • voting

ASJC Scopus subject areas

  • Artificial Intelligence
  • Theoretical Computer Science
  • Computer Vision and Pattern Recognition


