Multi-language Dataset Beta Release

The multi-language dataset is now available to the Common Voice community as a beta release! This release includes all new, multi-language data that has been collected in 2018.

There are two reasons for choosing a community-focused beta release. First, the data in this release is raw. The Common Voice team will continue improving the way the data is bundled across languages, but we also want to get the dataset in the hands of those who want to start using it immediately.

Second, before a wider release, we need the help and expertise of this community to make the data better for everyone. With your help, we are targeting a full release on the Common Voice site by the end of January.

Mozilla’s DeepSpeech team has created a CorporaCreator repository on GitHub with tools for processing the Common Voice dataset. To help clean the data, you can either write or improve a preprocessor for a language (here is the one shared across languages) or post a comment about irregularities you have noticed in the dataset. In particular, we are looking for irregularities like the following (see the sketch after this list):

  • Numbers. There should be no digits in the source text because they can cause problems when read aloud. The way a number is read depends on context and might introduce confusion in the dataset. For example, the number “2409” could be accurately read as both “twenty-four zero nine” and “two thousand four hundred nine”.
  • Abbreviations and Acronyms. Abbreviations and acronyms like “USA” or “ICE” should be avoided in the source text because they may be read in a way that does not coincide with their spelling. Additionally, there may be multiple accurate readings for a single abbreviation. For example, the acronym “ICE” could be pronounced “I-C-E” or as a single word.
  • Punctuation. Special symbols and punctuation should only be included when absolutely necessary. For example, an apostrophe is included in English words like “don’t” and “we’re” and should be included in the source text, but it is unlikely you’ll ever need a special symbol like “@” or “#.”
  • Foreign letters. Letters must be valid in the language being spoken. For example, “ж” is a letter in the Russian alphabet but is never used in English and so should never appear in any English source text.
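
As a rough illustration of the kinds of checks a language preprocessor might perform, here is a minimal Python sketch. The function name, the allowed character set, and the checks themselves are assumptions made for this example; the real preprocessor interface and per-language rules live in the CorporaCreator repository, and the real preprocessors normalize text rather than merely flagging it.

```python
# Minimal sketch only: the function name and allowed character set are assumptions
# for illustration; see the CorporaCreator repo for the actual preprocessor API.
import re

# Assumed character set for English source text (letters plus basic punctuation).
ALLOWED_EN = re.compile(r"^[A-Za-z ,.'?!-]+$")

def flag_irregularities(sentence: str) -> list:
    """Return reasons why an English source sentence may need cleanup."""
    problems = []
    if re.search(r"\d", sentence):
        problems.append("contains digits, which are ambiguous when read aloud")
    if re.search(r"\b[A-Z]{2,}\b", sentence):
        problems.append("contains an abbreviation or acronym")
    if re.search(r"[@#]", sentence):
        problems.append("contains special symbols")
    if not ALLOWED_EN.match(sentence):
        problems.append("contains characters outside the expected English alphabet")
    return problems

# Example: this sentence trips all four checks.
print(flag_irregularities("The USA shipped 2409 units @ noon"))
```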

To get started, you will need to download the dataset’s clips.tsv file and follow the instructions in the included README. This will only give you access to the text data.
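
If you just want to peek at the text before doing anything heavier, a few lines of Python are enough. The `sentence` column name is an assumption in this sketch; the README that ships with the download documents the actual schema.

```python
# Quick look at the text data in clips.tsv. The "sentence" column name is an
# assumption for this sketch; the bundled README documents the real schema.
import csv

with open("clips.tsv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t")
    for i, row in enumerate(reader):
        print(row.get("sentence"))
        if i >= 9:  # only show the first ten rows
            break
```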

For access to the full dataset, including voice clip audio, you will need to fill out this form.

Reviewing and cleaning the Common Voice data will help everyone who uses it – from academics to small companies and all the makers who need CC0 data – to move forward with a voice-enabled project. The Common Voice team is committed to building a dataset of clean and stable data so we can practice appropriate version control and provide everyone with a way to recreate any testing they need to do in the future.

Thank you for being a part of this project!


Finally!
Thanks for the release.

What languages can we expect to be released?
And for those that won’t be - what’s missing for them?

Happy new year!

We are going to release all of the languages that have data in them as of October 2018, which comes to 16 languages. You can see the full list of languages here: https://voice.mozilla.org/en/languages

Hello Everyone, the new dataset is back up and ready for use!


Hi,
thanks for your hard work on making this release happen!
I imagine you’re getting tons of requests for access to the voice data right now. Is giving access a manual process (i.e., giving access only after review of each form submission by a human)?

Hi there,

If you would like access to the voice data, you can fill out the form above, and we will be sending out links via email at the beginning of each day. Each of the sentences you hear has been reviewed by 2 humans to ensure its correctness. Does that answer your question?

Thanks, that fully answers my question. I filled out the form above earlier today (shortly after your post) but haven’t received an email yet, so I was just wondering if it had to be approved first. Again, thanks for your work on this, and I’m looking forward to working with the data!

You should be getting your email shortly!


Hi @r_LsdZVv67VKuK6fuHZ_tFpg,
I’ve found a broken audio file in the zh-TW dataset:
53777c75a47473ca6101ac395e74d3a8e9b66f2ad58ce3d7defc1a22761f5f0b7072ddf8d62fd06be02a4843587ea1322c29f90b61edf99cc608981306dc35e4.mp3
This clip is listed in other.tsv.

Finally,
thanks for the release.


Thanks for the heads-up @areyliu6!
@gregor Do you need any further information about the broken file? Let’s review it in the next sprint meeting.

I downloaded the kab dataset, but I can’t find the transcripts (sentences). I only got the audio files.
I’m going to train a first model with DeepSpeech so I can show it at an event we are organizing to demonstrate the project’s importance and recruit more recorders from Kabylia.

Thanks again for the release.

The sentences are in the clips.tsv file. If you want to get them split up by language, validity & bucket, you need to run the CorporaCreator on the file.
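
In case it helps to see the idea, here is a rough Python sketch of the kind of per-language split the CorporaCreator automates. The `locale` column name is an assumption in this example; the tool itself handles the real schema, validity buckets, and output layout.

```python
# Rough illustration of splitting clips.tsv by language; the CorporaCreator
# tool does this properly. The "locale" column name is assumed here.
import pandas as pd

clips = pd.read_csv("clips.tsv", sep="\t")

for locale, rows in clips.groupby("locale"):
    rows.to_csv(f"{locale}_clips.tsv", sep="\t", index=False)
    print(locale, len(rows), "clips")
```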


Thanks again for the help :heart_eyes::heart_eyes::heart_eyes: Wonderful!

But when I downloaded the audio files, there was no clips.tsv file. Is this file downloadable separately?

Question from a local community member about the data:

Are we only delivering clips that have been verified multiple times on the site?
If yes, what is the difference between the clips listed in valid.tsv and those in the other .tsv files (besides invalid)?

You can find that in the first post in this thread:

The clips.tsv file contains the number of votes each clip got, and the audio files are everything we have for this language up to this point (which for that release is 2018-12-19, I think). valid.tsv only contains clips which have at least 2 up-votes and more up- than down-votes.
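
For anyone who wants to reproduce that rule directly on clips.tsv, here is a small sketch. The `up_votes` and `down_votes` column names are assumptions in this example; the CorporaCreator applies the rule for you when it generates valid.tsv.

```python
# Sketch of the valid-clip rule described above: at least 2 up-votes and more
# up- than down-votes. Column names "up_votes"/"down_votes" are assumed.
import pandas as pd

clips = pd.read_csv("clips.tsv", sep="\t")
valid = clips[(clips["up_votes"] >= 2) & (clips["up_votes"] > clips["down_votes"])]
print(f"{len(valid)} of {len(clips)} clips meet the validity rule")
```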


Thanks @r_LsdZVv67VKuK6fuHZ_tFpg @gweber

Hi there. I just want to say thank you for the really great job. We have been waiting for this release for so long, and you finally did it :grinning:
