Korin Richmond

Centre for Speech Technology Research

Promotion to Senior Research Fellow

I am delighted to have been notified this month that the University has approved my promotion to the position of Senior Research Fellow in Informatics. At the University of Edinburgh, Senior Research Fellow is a Grade 09 post, roughly equivalent to a research-only Reader (or Associate Professor, for those more familiar with American-style titles).


Three ICASSP papers accepted

Three papers have been accepted to appear at the ICASSP 2016 conference, which will be held in Shanghai at the end of March this year:

  • "Smooth Talking: Articulatory Join Costs for Unit Selection"
  • "Initial Investigation of Speech Synthesis Based on Complex-Valued Neural Networks"
  • "Testing the Consistency Assumption: Pronunciation Variant Forced Alignment in Read and Spontaneous Speech Synthesis"


Elevated to IEEE Senior grade

Have just heard that I have been elevated to IEEE Senior Member grade.


One hundred users of mngu0 corpus registered

My website for the mngu0 multimodal articulatory data corpus has just reached the milestone of 100 registered users! To date, at least 27 published papers have been based on this corpus, with many more citations.

Hopefully, more users will find the corpus helpful in the coming years, and usage will continue to grow, further consolidating the benefits of multiple people using the same data.


IEEE SLTC Newsletter article

An article I wrote for the IEEE SLTC Newsletter has just been published. This article covers the 3rd UKSpeech Conference, which took place in Edinburgh in June 2014. This two-day conference was a great opportunity for researchers and students from the UK speech community to get together and catch up with research going on at centres around the country.


1.3 million hits and still going strong!

The Festival online demo I implemented as a public demonstration of TTS, which first went live in April 2007, has just passed the milestone of serving 1.3 million synthesis requests for visitors, and it is still going strong on the original hardware! :)


Ultrafest VI successfully concluded

The Ultrafest VI meeting has just concluded after a successful three days (Wed 6th to Fri 8th November 2013). The meeting was co-organised by the CASL Research Centre (Centre for Audiology, Speech and Language, Queen Margaret University) and CSTR (the Centre for Speech Technology Research, University of Edinburgh), and took place in the Informatics Forum of the School of Informatics at the University of Edinburgh.

This recurrent meeting is aimed at researchers working with ultrasound imaging for linguistic analysis and speech technology.

Previous meetings have been held in:

  • Haskins Laboratories (Ultrafest V - 2010)
  • New York University (Ultrafest IV - 2007)
  • University of Arizona (Ultrafest III - 2005)
  • University of British Columbia (Ultrafest II - 2004)
  • Haskins Laboratories (Ultrafest - 2002)

The Organisation Committee for Ultrafest VI was:

  • Edinburgh University -- Korin Richmond
  • Queen Margaret University -- Eleanor Lawson, Zoe Roxburgh, Sonja Schaeffler, James M Scobbie, Claire Timmins, Alan Wrench & Natasha Zharkova


Double Special Session Chair at Interspeech 2013

Organised, together with Slim Ouni and Asterios Toutios, the Interspeech 2013 Special Session on "Articulatory data acquisition and processing". We decided to stage this special session to bring together many of the parties who are developing and using articulatory speech data, to encourage the identification of best practices, focusing particularly on the technical aspects of articulatory data acquisition and its exploitation.

We selected 10 oral presentations for the double session, representing an excellent cross-section of articulography-linked work. These papers were preceded by a presentation of the results of a survey into various aspects of articulatory research in the community. At the end of the session, we reserved the final slot for a 20-minute open discussion on what can be done to further support and promote the use of articulatory data in speech science and technology research.


Invited talk at SPASR 2013 (Interspeech Satellite)

Gave an invited talk ("On Measuring and Estimating Speech Articulation") at the Workshop on Speech Production in Automatic Speech Recognition (SPASR) in Lyon, France. This was a satellite workshop to the Interspeech 2013 conference - the largest international conference dedicated to speech.


Invited talk in Vienna

Gave an invited lecture on "Controllable Speech Synthesis" at the Telecommunication Forum in Vienna, Austria. This forum is run jointly between FTW Forschungszentrum Telekommunikation Wien GmbH and the Technical University of Vienna.


Invited talk in Grenoble

Gave an invited talk on "Exploiting Articulation for Speech Technology" at the GIPSA Lab in Grenoble, France. Thanks to Pierre Badin, Thomas Hueber, Gerard Bailly, Atef Ben Youssef and others for demonstrating the work done at the lab, and for the very interesting discussions!


Election to IEEE SLT Committee

Have been elected to be a member of the IEEE Speech and Language Technical Committee (SLTC).


mngu0 web forum goes live

Following presentation at ISCA's "Interspeech 2011" conference in Florence, Italy, the http://www.mngu0.org website has now been activated. Users may now register, and then download the articulatory speech data and associated software tools.

The mngu0 corpus is a set of articulatory data of different forms (EMA, MRI, video, 3D scans of the upper/lower jaw, audio, etc.) acquired from one speaker. The purpose of the dedicated web forum is two-fold. First, obviously, it is to distribute the raw data itself. But second (and at least as important!), the aim is to provide a forum and repository for all research work that uses this data. All researchers who want to use the data must first register with the website. Registration, though, is just the beginning of keeping track of who is using the data and for what. All those who use the data are strongly encouraged to contribute back to the forum. For example, if they publish work using the data, we offer to host a copy of, or a link to, that work. If a novel data processing method is developed, we will gladly host a new version of the data.

Ultimately, I want to make this data available so that as many people as possible can undertake experiments with articulatory data, and in such a way that anybody and everybody can use exactly the same data in their experiments. Hopefully, this will make it easier to compare methods directly, and prove illuminating in time!


Ultrax in the news!

The Ultrax project has received some media coverage in recent days, like this clipping from the national Metro newspaper. Some more of that coverage is listed on the Ultrax project website here.


ICASSP 2011 Awards Ceremony

Zhenhua Ling attended in person to receive the IEEE Signal Processing Society's 2010 Young Author Best Paper Award at the 36th International Conference on Acoustics, Speech and Signal Processing (ICASSP 2011) on 24th May in Prague, Czech Republic.

Once again, many congratulations to Zhenhua!


Visit to iFlyTEK and USTC, China

Junichi Yamagishi and I have recently returned from China, where we visited the University of Science and Technology of China (USTC) and the iFlyTEK speech technology company. The purpose of the visit was to give invited talks, and also to discuss future collaboration.


IEEE SPS 2010 Young Author Best Paper Award

Z. Ling, K. Richmond, J. Yamagishi, and R. Wang. Integrating articulatory features into HMM-based parametric speech synthesis. IEEE Transactions on Audio, Speech and Language Processing, 17:1171–1185, August 2009. [doi]

This paper has won the IEEE Signal Processing Society's 2010 Young Author Best Paper Award!

The paper resulted from work undertaken during Zhenhua Ling's 6-month visit to CSTR as a Marie Curie Fellow on the EdSST project. MANY CONGRATULATIONS to Zhenhua!


Ultrax project funded!

A proposal submitted to EPSRC's Healthcare Partnerships Programme has been selected to receive funding! The value of this funding will be £586,000 over three years. This project is very much interdisciplinary in nature and will involve close collaboration between Informatics at the University of Edinburgh (Steve Renals and me), CASL at Queen Margaret University (Jim Scobbie and Joanne Cleland), and Alan Wrench at Articulate Instruments Ltd.

The purpose of the project will be to develop ultrasound into an effective tool for speech therapy for children. This is a particularly timely project, since 2011 has been designated the UK national "year of speech, language and communication", and The Communication Trust will be running a year-long campaign: 'Hello: a year to help all children communicate'.