Enhancing sparsity in linear prediction of speech by iteratively reweighted 1-norm minimization

Daniele Giacobello, Mads Græsbøll Christensen, Manohar N. Murthi, Søren Holdt Jensen, Marc Moonen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Scopus citations

Abstract

Linear prediction of speech based on 1-norm minimization has already proved to be an interesting alternative to 2-norm minimization. In particular, when the 1-norm is chosen as a convex relaxation of the 0-norm, the corresponding linear prediction model offers a sparser residual that is better suited for coding applications. In this paper, we propose a new speech modeling technique based on reweighted 1-norm minimization. The purpose of the reweighting scheme is to overcome the mismatch between 0-norm minimization and 1-norm minimization while keeping the problem solvable with convex estimation tools. Experimental results demonstrate the effectiveness of the reweighted 1-norm minimization, which offers better coding properties than plain 1-norm minimization.
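To illustrate the idea described in the abstract, below is a minimal sketch of iteratively reweighted 1-norm linear prediction, assuming the standard reweighting scheme in which each residual sample is weighted by the reciprocal of its magnitude plus a small epsilon. The use of cvxpy as the convex solver, the prediction order, the number of iterations, and the epsilon value are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: linear prediction coefficients via iteratively reweighted
# 1-norm minimization of the prediction residual. The reweighting rule
# w = 1 / (|r| + eps) is an assumed (standard) choice.
import numpy as np
import cvxpy as cp

def reweighted_l1_lp(x, order=10, n_iter=4, eps=1e-3):
    """Estimate prediction coefficients of a frame `x` by reweighted
    1-norm minimization; returns (coefficients, residual)."""
    N = len(x)
    # Regression matrix of past samples: row n holds x[n-1], ..., x[n-order].
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    target = x[order:]

    w = np.ones_like(target)          # first pass: plain 1-norm minimization
    a = cp.Variable(order)
    for _ in range(n_iter):
        residual = target - X @ a
        prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, residual), 1)))
        prob.solve()
        r = target - X @ a.value
        # Down-weight samples with large residuals so small residuals are
        # pushed toward exact zeros, mimicking 0-norm behavior.
        w = 1.0 / (np.abs(r) + eps)
    return a.value, target - X @ a.value

# Toy usage on a synthetic AR(2) signal with sparse-ish excitation.
rng = np.random.default_rng(0)
e = rng.laplace(size=400)
x = np.zeros(400)
for n in range(2, 400):
    x[n] = 1.3 * x[n - 1] - 0.7 * x[n - 2] + e[n]
coeffs, res = reweighted_l1_lp(x, order=2)
print("estimated coefficients:", coeffs)
```

In this sketch, each iteration solves a weighted 1-norm problem that remains convex, while the updated weights progressively penalize small residual entries more heavily than large ones, which is what encourages additional sparsity beyond a single 1-norm solve.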

Original language: English (US)
Title of host publication: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010 - Proceedings
Pages: 4650-4653
Number of pages: 4
DOIs
State: Published - 2010
Event: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010 - Dallas, TX, United States
Duration: Mar 14 2010 - Mar 19 2010

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISSN (Print): 1520-6149

Other

Other: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010
Country/Territory: United States
City: Dallas, TX
Period: 3/14/10 - 3/19/10

Keywords

  • 1-norm minimization
  • Linear prediction
  • Speech analysis
  • Speech coding

ASJC Scopus subject areas

  • Signal Processing
  • Software
  • Electrical and Electronic Engineering
