CSTR VCTK Corpus
English Multi-speaker Corpus for CSTR Voice Cloning Toolkit


OVERVIEW

The CSTR VCTK Corpus includes speech data uttered by 109 native speakers of English with various accents. Each speaker reads out about 400 sentences, most of which were selected from a newspaper, plus the Rainbow Passage and an elicitation paragraph intended to identify the speaker's accent. The newspaper texts were taken from The Herald (Glasgow), with permission from Herald & Times Group. Each speaker reads a different set of the newspaper sentences; each set was selected using a greedy algorithm designed to maximise contextual and phonetic coverage. The Rainbow Passage and the elicitation paragraph are the same for all speakers. The Rainbow Passage can be found in the International Dialects of English Archive (http://web.ku.edu/~idea/readings/rainbow.htm). The elicitation paragraph is identical to the one used for the Speech Accent Archive (http://accent.gmu.edu); details of the Speech Accent Archive can be found at http://www.ualberta.ca/~aacl2009/PDFs/WeinbergerKunath2009AACL.pdf
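The corpus documentation does not publish the exact selection procedure, but greedy maximum-coverage selection of this kind is straightforward to sketch. The following hypothetical Python snippet assumes each candidate sentence is represented as a set of phonetic/contextual units (how those units are defined is an assumption, not part of the corpus release):

```python
def greedy_select(sentences, k):
    """Greedily pick up to k sentences that maximise coverage of units.

    sentences: dict mapping sentence id -> set of phonetic/contextual
    units that sentence contains (a hypothetical representation).
    Returns the chosen ids and the set of units they jointly cover.
    """
    covered = set()
    chosen = []
    remaining = dict(sentences)
    for _ in range(min(k, len(remaining))):
        # Pick the sentence contributing the most not-yet-covered units.
        best = max(remaining, key=lambda s: len(remaining[s] - covered))
        if not remaining[best] - covered:
            break  # no sentence adds anything new; stop early
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```

Each speaker's set could then be built by running such a selection over the pool of newspaper sentences, removing already-assigned sentences between speakers so that the sets differ.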

All speech data was recorded using an identical recording setup: an omni-directional head-mounted microphone (DPA 4035) at a 96 kHz sampling frequency and 24 bits, in a hemi-anechoic chamber at the University of Edinburgh. All recordings were converted to 16 bits, downsampled to 48 kHz using STPK, and manually end-pointed. This corpus was recorded for the purpose of building HMM-based text-to-speech synthesis systems, in particular speaker-adaptive HMM-based speech synthesis using average voice models trained on multiple speakers and speaker adaptation technologies.
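The actual conversion was done with STPK, whose internals are not described here; as an illustration only, the same 96 kHz/24-bit to 48 kHz/16-bit step can be sketched in plain NumPy with a windowed-sinc anti-aliasing filter (filter length and window are arbitrary choices, not STPK's):

```python
import numpy as np

def downconvert(x, factor=2, taps=101):
    """Halve the sample rate (e.g. 96 kHz -> 48 kHz) and requantise
    24-bit samples to the 16-bit range. Illustrative sketch only."""
    # Windowed-sinc low-pass with cutoff at the new Nyquist frequency.
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / factor) / factor * np.hamming(taps)
    # Filter, then keep every `factor`-th sample.
    y = np.convolve(x.astype(np.float64), h, mode="same")[::factor]
    # Drop 8 bits of depth: divide by 2**8, round, and clip to int16.
    return np.clip(np.round(y / 256.0), -32768, 32767).astype(np.int16)
```

In practice a dedicated resampler (such as STPK, sox, or scipy's resample_poly) would be preferred; this sketch only shows the two operations the sentence above describes.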

COPYING
This corpus is licensed under the Open Data Commons Attribution License (ODC-By) v1.0.
http://opendatacommons.org/licenses/by/1.0/
http://opendatacommons.org/licenses/by/summary/

DOWNLOAD
http://homepages.inf.ed.ac.uk/jyamagis/release/VCTK-Corpus.tar.gz

ACKNOWLEDGEMENTS
The CSTR VCTK Corpus was constructed by:
Christophe Veaux (University of Edinburgh)
Junichi Yamagishi (University of Edinburgh)
Kirsten MacDonald

The research leading to these results was partly funded from EPSRC grants EP/I031022/1 (NST) and EP/J002526/1 (CAF), from the RSE-NSFC grant (61111130120), and from the JST CREST (uDialogue).