The quality of machine text-to-speech conversion is still unacceptably low and compares poorly with natural speech. As a result, synthetic speech has not gained wide acceptance and has been confined mainly to assistive applications for the handicapped. The problems of automatic speech synthesis stem primarily from the difficulty of controlling mathematical models of the human vocal tract, whose properties change over time during discourse. In formant synthesis, rules are applied to relate the incoming phonemic information to values of the synthesizer control vectors. Such rules are usually developed by analyzing a representative set of utterances and are refined through listening tests. The tedious nature of the parameter-extraction process and the lack of unambiguous relationships between acoustic events and spectral information have hindered effective control of these models. In this article, artificial neural networks are employed to address the latter concern. For this purpose, a set of 56 common words composed of larynx-produced (voiced) phonemes was analyzed and used to train a cluster of networks. The system was able to produce intelligible speech for certain phonemic combinations, and the role of neural networks in designing alternatives to the classical rule-based approach was studied.
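The core idea, a network learning the mapping from phonemic input to formant-synthesizer control vectors, can be sketched as a small feedforward network trained by backpropagation. This is a minimal illustration only: the phoneme inventory size, the control-vector contents (here three formant frequencies), the network dimensions, and the training targets are all assumptions for the sketch, not the configuration used in the article.

```python
import numpy as np

# Hypothetical sketch: a two-layer feedforward network that maps a one-hot
# phoneme code to a normalized formant control vector (e.g. F1, F2, F3).
# All sizes and targets below are illustrative assumptions.

rng = np.random.default_rng(0)

N_PHONEMES = 8   # assumed size of the phoneme inventory
N_CONTROLS = 3   # assumed control vector: F1, F2, F3, scaled to (0, 1)
HIDDEN = 16      # assumed hidden-layer width

# Illustrative targets: one control vector per phoneme, scaled to (0, 1).
targets = rng.uniform(0.1, 0.9, size=(N_PHONEMES, N_CONTROLS))
inputs = np.eye(N_PHONEMES)  # one-hot phoneme codes

W1 = rng.normal(0.0, 0.5, (N_PHONEMES, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.5, (HIDDEN, N_CONTROLS))
b2 = np.zeros(N_CONTROLS)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Train with plain gradient descent on mean squared error.
lr = 1.0
for _ in range(2000):
    h = sigmoid(inputs @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)             # predicted control vectors
    err = out - targets
    # Backpropagate through the two sigmoid layers.
    d_out = err * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * (h.T @ d_out) / N_PHONEMES
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (inputs.T @ d_h) / N_PHONEMES
    b1 -= lr * d_h.mean(axis=0)

mse = float(np.mean((out - targets) ** 2))
print(f"final MSE: {mse:.5f}")
```

At synthesis time, each predicted control vector would be rescaled to physical units (e.g. formant frequencies in Hz) and handed to the formant synthesizer frame by frame; the article's actual system trained a cluster of such networks on measurements extracted from the 56-word corpus.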
Original language: English (US)
Number of pages: 9
Journal: Journal of artificial neural networks
State: Published - Dec 1 1995