Semi-supervised learning for music artists style identification

Research output: Contribution to conference › Paper

Abstract

Semi-supervised learning for music artists style identification, in which a classification algorithm is trained for each feature set, is discussed. For lyrics-based style features, text-based feature extraction consists of four components: bag-of-words features, part-of-speech statistics, lexical features, and orthographic features. Accuracy measures can be applied to the lyrics-based classifier in its final form, to the acoustic-based classifier in its final form, and to the combination of the two. The results show that artist similarity can be efficiently learned from a small number of labeled samples by combining multiple data sources.
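
As an illustration of the lyrics-based side of the pipeline, the sketch below shows how bag-of-words, lexical, and orthographic features might be computed from a song's lyrics. This is a minimal, hypothetical Python sketch, not the authors' implementation: the feature names and choices (e.g. `unique_ratio`, `exclaim_count`) are assumptions, and the part-of-speech statistics are omitted here because they would require an external tagger.

```python
# Illustrative sketch only -- not the authors' feature extractor.
# Feature names and choices below are assumptions made for this example.
import re
from collections import Counter


def lyrics_features(lyrics: str) -> dict:
    """Return a simple feature dictionary for one song's lyrics."""
    tokens = re.findall(r"[A-Za-z']+", lyrics)
    lowered = [t.lower() for t in tokens]
    features = {}

    # Bag-of-words features: raw term counts, prefixed to keep a separate namespace.
    features.update({f"bow:{w}": c for w, c in Counter(lowered).items()})

    if tokens:
        # Lexical features (assumed examples): vocabulary richness and word length.
        features["lex:unique_ratio"] = len(set(lowered)) / len(tokens)
        features["lex:avg_word_len"] = sum(len(t) for t in tokens) / len(tokens)

        # Orthographic features (assumed examples): capitalization and punctuation use.
        features["orth:upper_ratio"] = sum(t[0].isupper() for t in tokens) / len(tokens)
    features["orth:exclaim_count"] = lyrics.count("!")

    return features


if __name__ == "__main__":
    # Small usage example on a toy lyric fragment.
    print(lyrics_features("Oh baby, baby! How was I supposed to know?"))
```

A lyrics-based classifier could then be trained on such vectors and combined with an acoustic-based classifier trained on its own feature set, in line with the per-feature-set training described in the abstract.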

Original language: English (US)
Pages: 152-153
Number of pages: 2
State: Published - Dec 1 2004
Event: CIKM 2004: Proceedings of the Thirteenth ACM Conference on Information and Knowledge Management - Washington, DC, United States
Duration: Nov 8 2004 - Nov 13 2004

Other

Other: CIKM 2004: Proceedings of the Thirteenth ACM Conference on Information and Knowledge Management
Country: United States
City: Washington, DC
Period: 11/8/04 - 11/13/04

Keywords

  • Artist style
  • Content
  • Lyrics
  • Semi-supervised learning

ASJC Scopus subject areas

  • Decision Sciences (all)
  • Business, Management and Accounting (all)

Cite this

Li, T., & Ogihara, M. (2004). Semi-supervised learning for music artists style identification. 152-153. Paper presented at CIKM 2004: Proceedings of the Thirteenth ACM Conference on Information and Knowledge Management, Washington, DC, United States.