Summary form only given. The problems associated with automatic speech synthesis are related, to a large extent, to the methods of controlling the mathematical models of the human vocal tract and of its properties as they change with time during discourse. In formant synthesis, which is the most effective synthesis method, rules are applied to relate the incoming phonemic information to values of the synthesizer control vectors. Such rules are usually developed through the analysis of a representative set of utterances and refined through listening tests. The tedious nature of the parameter extraction process and the lack of unambiguous relationships between acoustic events and spectral information have hindered the effective control of the models. In the present work, artificial neural networks were employed to address the latter concern. For this purpose, 56 common words composed of larynx-produced phonemes were analyzed and used to train a network cluster. The system was able to produce intelligible speech for certain phonemic combinations.
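The mapping described above, from phonemic information to synthesizer control vectors, can be illustrated with a minimal sketch. The code below is not the authors' system: the phoneme inventory, network sizes, control-vector layout (formant frequencies and an amplitude), and untrained random weights are all assumptions made purely for illustration.

```python
import numpy as np

# Assumed phoneme inventory and control-vector layout: F1, F2, F3, amplitude.
PHONEMES = ["a", "e", "i", "o", "u"]
N_CONTROLS = 4

rng = np.random.default_rng(0)

def one_hot(phoneme):
    """Encode a phoneme symbol as a one-hot input vector."""
    v = np.zeros(len(PHONEMES))
    v[PHONEMES.index(phoneme)] = 1.0
    return v

class ControlNet:
    """Two-layer perceptron mapping a phoneme code to a formant-synthesizer
    control vector; weights here are random, standing in for trained ones."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))

    def forward(self, x):
        h = np.tanh(self.W1 @ x)   # hidden-layer activations
        return self.W2 @ h         # synthesizer control vector

net = ControlNet(len(PHONEMES), 8, N_CONTROLS)
controls = net.forward(one_hot("a"))
print(controls.shape)  # one control vector per input phoneme: (4,)
```

In an actual system of this kind, such a network would be trained on control vectors extracted from analyzed utterances, replacing hand-written phoneme-to-parameter rules.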