Modelling Acoustic-Feature Dependencies with Artificial Neural Networks: Trajectory RNADE

Benigno Uría, Iain Murray, Steve Renals, Cassia Valentini-Botinhao and John Bridle.

Given a transcription, sampling from a good model of acoustic feature trajectories should produce plausible realizations of an utterance. However, samples from current probabilistic speech-synthesis systems yield low-quality synthetic speech. Henter et al. demonstrated the need to capture the dependencies between acoustic features, conditioned on the phonetic labels, in order to obtain high-quality synthetic speech. These dependencies are often ignored in neural-network-based acoustic models. We address this deficiency by introducing trajectory RNADE, a probabilistic neural-network model of acoustic trajectories that is able to capture these dependencies.
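As a rough illustration of the RNADE idea the model builds on (an autoregressive factorization of the density, where each one-dimensional conditional is a mixture of Gaussians whose parameters come from a neural network whose hidden state is updated one dimension at a time), here is a minimal sketch of density evaluation with untrained, randomly initialised parameters. All names, shapes, and the sigmoid/hidden-state details are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, K = 5, 16, 3  # feature dimensions, hidden units, mixture components

# Randomly initialised (untrained) parameters; shapes are illustrative.
W = rng.normal(scale=0.1, size=(D, H))          # per-dimension input couplings
c = np.zeros(H)                                  # hidden pre-activation bias
V_pi = rng.normal(scale=0.1, size=(D, H, K))     # mixture-weight output weights
b_pi = np.zeros((D, K))
V_mu = rng.normal(scale=0.1, size=(D, H, K))     # mixture-mean output weights
b_mu = np.zeros((D, K))
V_s = rng.normal(scale=0.1, size=(D, H, K))      # log-std-dev output weights
b_s = np.zeros((D, K))


def logsumexp(v):
    """Numerically stable log(sum(exp(v)))."""
    m = v.max()
    return m + np.log(np.exp(v - m).sum())


def rnade_logpdf(x):
    """log p(x) = sum_d log p(x_d | x_<d), each conditional a Gaussian mixture."""
    a = c.copy()   # shared hidden pre-activation, updated after each dimension
    logp = 0.0
    for d in range(D):
        h = 1.0 / (1.0 + np.exp(-a))             # hidden state for conditional d
        log_pi = h @ V_pi[d] + b_pi[d]
        log_pi = log_pi - logsumexp(log_pi)      # normalised mixture log-weights
        mu = h @ V_mu[d] + b_mu[d]               # component means
        log_sig = h @ V_s[d] + b_s[d]            # component log standard deviations
        z = (x[d] - mu) * np.exp(-log_sig)
        comp = -0.5 * np.log(2 * np.pi) - log_sig - 0.5 * z ** 2
        logp += logsumexp(log_pi + comp)         # log of the mixture density
        a = a + x[d] * W[d]                      # fold x_d in for later conditionals
    return logp


x = rng.normal(size=D)
print(rnade_logpdf(x))
```

Because the hidden pre-activation accumulates one rank-one term per observed dimension, evaluating all D conditionals costs O(DH) rather than D separate networks, which is the computational trick that makes this family of models practical.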

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4465–4469, 2015.
[PDF, DjVu, GoogleViewer, BibTeX, slides and demo]