Korin Richmond

Centre for Speech Technology Research

Inversion (aka "acoustic-to-articulatory") mapping


[1] K. Richmond, Z. Ling, and J. Yamagishi. The use of articulatory movement data in speech synthesis applications: an overview - application of articulatory movements using machine learning algorithms [invited review]. Acoustical Science and Technology, 36(6):467–477, 2015. doi:10.1250/ast.36.467.

[2] K. Richmond, J. Yamagishi, and Z. Ling. Applications of articulatory movements based on machine learning. Journal of the Acoustical Society of Japan, 70(10):539–545, 2015.

[3] K. Richmond, Z. Ling, J. Yamagishi, and B. Uria. On the evaluation of inversion mapping performance in the acoustic domain. In Proc. Interspeech. Lyon, France, August 2013.

[4] B. Uria, I. Murray, S. Renals, and K. Richmond. Deep architectures for articulatory inversion. In Proc. Interspeech. Portland, Oregon, USA, September 2012.

[5] B. Uria, S. Renals, and K. Richmond. A deep neural network for acoustic-articulatory speech inversion. In Proc. NIPS 2011 Workshop on Deep Learning and Unsupervised Feature Learning. Sierra Nevada, Spain, December 2011.

[6] G. Hofer and K. Richmond. Comparison of HMM and TMDN methods for lip synchronisation. In Proc. Interspeech, 454–457. Makuhari, Japan, September 2010.

[7] G. Hofer, K. Richmond, and M. Berger. Lip synchronization by acoustic inversion. Poster presented at SIGGRAPH 2010.

[8] K. Richmond. Preliminary inversion mapping results with a new EMA corpus. In Proc. Interspeech, 2835–2838. Brighton, UK, September 2009.

[9] K. Richmond. A multitask learning perspective on acoustic-articulatory inversion. In Proc. Interspeech. Antwerp, Belgium, August 2007.

[10] K. Richmond. Trajectory mixture density networks with multiple mixtures for acoustic-articulatory inversion. In M. Chetouani, A. Hussain, B. Gas, M. Milgram, and J.-L. Zarader, editors, Advances in Nonlinear Speech Processing, International Conference on Non-Linear Speech Processing, NOLISP 2007, volume 4885 of Lecture Notes in Computer Science, 263–272. Springer-Verlag Berlin Heidelberg, December 2007. doi:10.1007/978-3-540-77347-4_23.

[11] K. Richmond. A trajectory mixture density network for the acoustic-articulatory inversion mapping. In Proc. Interspeech. Pittsburgh, USA, September 2006.

[12] K. Richmond, S. King, and P. Taylor. Modelling the uncertainty in recovering articulation from acoustics. Computer Speech and Language, 17:153–172, 2003.

[13] K. Richmond. Estimating Articulatory Parameters from the Acoustic Speech Signal. PhD thesis, The Centre for Speech Technology Research, University of Edinburgh, 2002.

[14] K. Richmond. Mixture density networks, human articulatory data and acoustic-to-articulatory inversion of continuous speech. In Proc. Workshop on Innovation in Speech Processing, 259–276. Institute of Acoustics, April 2001.

[15] A. Wrench and K. Richmond. Continuous speech recognition using articulatory data. In Proc. ICSLP. Beijing, China, 2000.

[16] J. Frankel, K. Richmond, S. King, and P. Taylor. An automatic speech recognition system using neural networks and linear dynamic models to recover and model articulatory traces. In Proc. ICSLP. Beijing, China, 2000.

[17] K. Richmond. Estimating velum height from acoustics during continuous speech. In Proc. Eurospeech, volume 1, 149–152. Budapest, Hungary, 1999.