Neural network-based control strategy for a speech formant synthesizer

Michael S. Scordilis, John N. Gowdy

Research output: Contribution to journal › Article

Abstract

The quality of text-to-speech conversion performed by machines is still unacceptably low and does not compare well with natural speech. As a result, synthetic speech has not found wide acceptance and has been restricted mainly to applications for the handicapped. The problems associated with automatic speech synthesis are related primarily to the methods of controlling the mathematical models of the human vocal tract and its properties as they change with time during discourse. In formant synthesis, rules are applied to relate the incoming phonemic information to values of the synthesizer control vectors. Such rules are usually developed through the analysis of a representative set of utterances and adjusted with listening tests. The tedious nature of the parameter extraction process and the lack of unambiguous relationships between acoustic events and spectral information have hindered effective control of the models. In this article, artificial neural networks are employed to address the latter concern. For this purpose, a set of 56 common words composed of larynx-produced phonemes was analyzed and used to train a network cluster. The system was able to produce intelligible speech for certain phonemic combinations, and the role of neural networks in designing alternatives to the classical rule-based approach was studied.
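To make the control strategy concrete, the sketch below shows one way a feed-forward network could map a phoneme encoding to a frame of formant-synthesizer control values, in the spirit of the abstract. This is an illustrative assumption, not the authors' implementation: the feature encoding, layer sizes, and the specific control parameters (formant frequencies, bandwidths, voicing amplitude) are all hypothetical.

```python
# Minimal sketch: a small feed-forward network mapping a phoneme context
# vector to formant-synthesizer controls. All sizes, encodings, and
# parameter names are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

N_PHONEME_FEATURES = 20   # assumed size of the phoneme context encoding
N_HIDDEN = 32             # assumed hidden-layer width
N_CONTROLS = 7            # e.g., F1, F2, F3, B1, B2, B3, voicing amplitude

# Randomly initialized weights stand in for weights that would be learned
# from analyzed training utterances.
W1 = rng.normal(0.0, 0.1, (N_PHONEME_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_CONTROLS))
b2 = np.zeros(N_CONTROLS)

def control_vector(phoneme_features: np.ndarray) -> np.ndarray:
    """Map one frame's phoneme encoding to a synthesizer control vector."""
    hidden = np.tanh(phoneme_features @ W1 + b1)
    return hidden @ W2 + b2  # linear outputs: formant values, bandwidths, etc.

# Example: one frame of a (hypothetical) phoneme context encoding.
frame = rng.normal(size=N_PHONEME_FEATURES)
print(control_vector(frame))
```

In such a scheme, a sequence of these control vectors, one per analysis frame, would drive the formant synthesizer in place of hand-crafted rules.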

Original language: English (US)
Pages (from-to): 195-203
Number of pages: 9
Journal: Journal of artificial neural networks
Volume: 2
Issue number: 3
State: Published - Dec 1 1995

ASJC Scopus subject areas

  • Engineering (all)
