Automatic soundscape classification via comparative psychometrics and machine learning

Krithika Rajagopal, Phil Minnick, Colby Leider

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Computational acoustical ecology is a relatively new field in which long-term environmental recordings are mined for meaningful data. Humans naturally and automatically associate environmental sounds with emotions and can easily identify the components of a soundscape. However, it is quite difficult to equip a computer to accurately and automatically rate unknown environmental recordings along subjective psychoacoustic dimensions, let alone to report with high accuracy the environment (e.g., beach, barnyard, home kitchen, research lab) in which the recordings were made. We present a robust algorithm for automatic soundscape classification in which psychometric data and computed audio features are compared and used to train a Naive Bayes classifier that assigns each soundscape to one of several categories. In a pilot test, the classifier achieved 88% accuracy on 20 soundscapes and outperformed human ratings in some tests. In a second test, it achieved 95% accuracy on 30 soundscapes.
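The abstract does not list the audio features or classifier internals, so as an illustration only, here is a minimal Gaussian Naive Bayes sketch of the kind of categorical soundscape classification described. The feature names, class labels, and per-class distributions below are hypothetical stand-ins, not the paper's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-recording feature vectors (e.g., averaged spectral centroid,
# RMS energy, zero-crossing rate); the paper's real feature set is not given here.
n_per_class, n_features = 40, 3
classes = ["beach", "barnyard", "kitchen"]
# Synthetic, well-separated class means purely for demonstration.
means = np.array([[0.2, 0.8, 0.1],
                  [0.7, 0.3, 0.6],
                  [0.5, 0.5, 0.9]])
X = np.vstack([rng.normal(m, 0.1, size=(n_per_class, n_features)) for m in means])
y = np.repeat(np.arange(len(classes)), n_per_class)

# "Training" a Gaussian Naive Bayes model: per-class feature means, variances,
# and log-priors estimated from the labeled examples.
mu = np.array([X[y == c].mean(axis=0) for c in range(len(classes))])
var = np.array([X[y == c].var(axis=0) for c in range(len(classes))]) + 1e-9
log_prior = np.log(np.bincount(y) / len(y))

def classify(x):
    # Log-likelihood of x under each class, assuming independent Gaussian features.
    ll = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var).sum(axis=1)
    return classes[int(np.argmax(log_prior + ll))]

print(classify(np.array([0.2, 0.8, 0.1])))  # → beach
```

With real data, each row of `X` would come from feature extraction over a long environmental recording, and the psychometric ratings mentioned in the abstract could supply additional feature dimensions alongside the computed audio features.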

Original language: English (US)
Title of host publication: 131st Audio Engineering Society Convention 2011
Number of pages: 8
State: Published - Dec 1 2011
Event: 131st Audio Engineering Society Convention 2011 - New York, NY, United States
Duration: Oct 20 2011 – Oct 23 2011

Publication series

Name: 131st Audio Engineering Society Convention 2011



ASJC Scopus subject areas

  • Modeling and Simulation
  • Acoustics and Ultrasonics


