Discussion of new guidelines for recording validation

Yes I’d be happy to do that. Will update in the next day or two.

Perhaps it would make sense for me to separate out into a new thread the guidelines for validation of uploaded sentences, as those will mostly be of use in the Sentence Collector.

I’m curious what fellow validators think about this: https://github.com/mozilla/voice-web/issues/1927

I’ll quote your message there for reference:

Recently a change was made to the site to list sentences with the fewest recordings first, in order to get more unique sentences for the DeepSpeech model. I think this was a good idea overall; however, I’m starting to see something that could be a problem.

Some users are recording a LOT of sentences. In fact, over the past few days I have validated around 1500-2000 clips and I would estimate around 70% of them were recorded by the same user, all of which were unique sentences.

I’m sure that the DeepSpeech team makes certain there aren’t too many recordings by a single user, so these sentences will most likely be discarded until more recordings become available. But if the site shows sentences with the fewest recordings first, it will have to work through the thousands of unrecorded sentences before those discarded ones come up again - which may never happen if more sentences keep getting added.

The DeepSpeech team said they don’t want more than a few hundred recordings from any one user. So a user with 5000 recordings may have prevented 4700 sentences from making it into the model.

So I think the solution to this is either:

1. Put a hard limit on the total number of recordings users can make, or have a daily per-user limit.

2. Change the algorithm so that each sentence has, say, 3 recordings minimum before it’s given a lower priority in the queue.

In the coming weeks we will be working on a few experiments involving personal goals, and also on inviting more people to contribute since, as we have commented in the past, diversity is super important.

FYI, I edited the post since then to clarify a couple of things I thought were unclear, but most relevant here is that I thought of an additional option:

3. Deprioritize recordings in the validation queue from users who have x number of validated recordings. So we’re prioritizing unique users AND unique sentences.

In my opinion some combination of options 2 and 3 is best.
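
To make that combination concrete, here’s a minimal sketch of how the two queues might be ordered under options 2 and 3. This is purely illustrative - the field names (`recording_count`, `client_id`), the thresholds, and the functions are my own assumptions, not the actual voice-web implementation:

```python
# Hypothetical sketch of options 2 and 3 combined. Field names and
# thresholds are illustrative assumptions, not the voice-web schema.

MIN_RECORDINGS = 3           # option 2: target recordings per sentence
HEAVY_CONTRIBUTOR_CAP = 300  # option 3: "a few hundred" per-user cap

def sentence_priority(sentence: dict) -> tuple:
    """Option 2: sentences below the minimum recording count sort first;
    ties break on the raw count, so the least-recorded come up first."""
    at_minimum = sentence["recording_count"] >= MIN_RECORDINGS
    return (at_minimum, sentence["recording_count"])

def clip_priority(clip: dict, validated_counts: dict) -> tuple:
    """Option 3: clips from heavy contributors sort last, so recordings
    from unique speakers reach validators first."""
    user_total = validated_counts.get(clip["client_id"], 0)
    return (user_total >= HEAVY_CONTRIBUTOR_CAP, user_total)

# Usage:
#   recording_queue  = sorted(sentences, key=sentence_priority)
#   validation_queue = sorted(clips, key=lambda c: clip_priority(c, counts))
```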

Since I prepared the draft guidelines at the top of this thread, the number of sentences offered for recording verification that have errors in the written text has decreased considerably - probably because most have now been through the Sentence Collector.

To avoid overloading the recording reviewer guidelines, I’d now suggest removing the ‘Problems with the written text’ section entirely. It can be moved over into a new guideline for sentence reviewers. At present all they have is a single paragraph on this page: https://common-voice.github.io/sentence-collector/#/how-to.

I’ve moved the sentence review guidelines over to this thread: Discussion of new guidelines for uploaded sentence validation

Hello. What should I do if the audio/voice is generated by robots (TTS)? :sweat_smile:

You should reject it. If you’re finding large numbers of instances, it would be worth posting a few examples here so that the developers can remove the whole batch by script, if need be.
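
For illustration, something along these lines is the kind of one-off cleanup script I’d imagine - the table and column names are guesses, not the real Common Voice schema:

```python
# Hypothetical one-off cleanup: reject every clip from a reported
# contributor (e.g. a TTS bot). Table and column names are guesses,
# not the real Common Voice schema.
import sqlite3  # stand-in; the live site uses a server database

def reject_clips_from_user(db_path: str, client_id: str) -> int:
    """Mark all clips by one contributor as invalid; return how many."""
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        cur = conn.execute(
            "UPDATE clips SET is_valid = 0 WHERE client_id = ?",
            (client_id,),
        )
    conn.close()
    return cur.rowcount

# Usage: reject_clips_from_user("clips.db", "reported-client-id")
```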

I wonder whether registration should be made mandatory, and whether there should be a moderator, to avoid pranks that create a lot of extra work. Thanks.

How strict should I be with possessives? An example sentence would be:

James’s car was stolen.

But the following is also valid in the English language:

James’ car was stolen.

Sometimes users pronounce that extra S when it’s not there, and don’t pronounce it when it is. I am generally generous when it comes to mispronunciations if they are common (e.g. American mispronunciations of British names like “Warwickshire”), and I feel this could be classed as a particular reader’s “style”. I have not formulated any particular policy on this, and whether I approve or reject has so far depended on how generous I felt at that particular moment.

Just wanted to check what others think is appropriate or if it even matters.

I would argue that, to be valid, the sentence has to be read as written. If a reader changes what is written, the result may of course still be comprehensible, but to me that goes beyond what’s reasonable to consider merely the reader’s ‘style’. It’s the same as rejecting the reading “Weren’t” when the sentence actually says “Were not”.

Mispronunciation of proper names is a difficult one. Many errors will inevitably get through, as we’re not using solely British speakers to verify British terms nor solely American speakers to verify American ones. I don’t know if it’s the best approach, but what I do is reject any pronunciation that I know to be wrong, and allow through minor things such as slightly odd stress patterns. I’d reject “war-wick-shire” for Warwickshire as well as “ar-kan-sus” for Arkansas.

I’ve been thinking a bit more about local place names that have unusual pronunciations. Since the whole point of the project is to allow users to speak to their computer, we ought to make sure the computer’s pronunciation of a local or regional term is the same as that of the class of people who will frequently want to use it in dictation or whatever. For example, 99%+ of users who use the word ‘Warwickshire’ frequently and who will expect their computer to be able to understand it will be Brits, so we really ought to make sure their pronunciation is in the database. Having the guessed-but-incorrect “war-wick-shire” is worse than not having the word at all, as 99%+ of future users who need the word will have to teach their computer to unlearn it. The problem is particularly acute as only one reading of each sentence is being accepted for the corpus, so there will be many words where training is based only on a single reading by a single individual.

I wonder what people would think about modifying the guidelines at the top of this thread (Discussion of new guidelines for recording validation) to cover this?

As a new user, I was very glad to find these guidelines. I agree about the wall-of-words problem. It would be most helpful if they were available alongside the FAQs, say - it was their absence that brought me here.

As for Warwickshire, to me, that sounds like one for the “too hard” pile. Even among Brits, is there really but one pronunciation? And how would that be picked out? What about other cases: “St. John” as “sinjin”? “Worcestershire sauce” as it looks, as “woster sauce”, “wooster sauce”? I don’t know enough about the corpus, but if the same word appears in different sentences, wouldn’t the variants be there?

Re commas: Comma usage is highly personal - for examples, search “Oxford comma” or check Lynne Truss’s fun book about punctuation, “Eats, Shoots & Leaves” (the title refers to pandas). Usage may also vary with context - business, fiction, text to be read aloud, scripts, and so on. I dictate them explicitly when using voice-to-text (Dragon, MSSR for example) and can’t imagine it working well otherwise.

Finally, should recordings that leave a very long silence at the outset but that are otherwise correct be accepted?

Thanks for putting these together! They really helped answer several questions I had.

Yes, this is fine. The algorithm is designed to deal with this.
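
For anyone curious why leading silence is harmless: preprocessing can simply trim it before training. Here’s a minimal sketch of energy-based trimming, assuming a mono 16-bit WAV - this is an illustration, not the actual DeepSpeech pipeline:

```python
# Minimal sketch of trimming leading silence from a clip, assuming a
# mono 16-bit WAV. Illustrative only - not the DeepSpeech pipeline.
import wave
import numpy as np

def trim_leading_silence(path: str, threshold: float = 0.01) -> np.ndarray:
    """Return the samples with any quiet prefix removed."""
    with wave.open(path, "rb") as wav:
        raw = wav.readframes(wav.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
    loud = np.flatnonzero(np.abs(samples) > threshold)  # indices above threshold
    return samples[loud[0]:] if loud.size else samples
```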

Hi, I’m happy to finally find the guidelines. I believe they should be added somewhere on the main site. The phrases I found there, “Users validate the accuracy of donated clips, checking that the speaker read the sentence correctly” and “did they accurately speak the sentence?”, are just too vague.
I understand that the extensive guidelines are hidden here to avoid scaring new users, but I think they should be available on the main site for users who prefer to be precise in their decisions when validating or rejecting clips.

@Michael_Maggs
Typo: “out” is written twice in the guidelines.

Well spotted! I’ve made the correction.

Are these guidelines going to be published somewhere on the main page? Right now they’re very hard to find.

Hi @EwoutH, welcome to the Common Voice community!

This is definitely something we want to see how best to implement on the site in 2020. Right now the development focus is on infrastructure improvements.

We’ll definitely use this work to inform any changes in 2020.

Thanks!

I approve any recording that is understandable and matches the text, including incorrect pronunciations, as long as they are common ones.

It’s an inconvenient truth that any somewhat non-basic word will have multiple pronunciations around the world, but I don’t think keeping the “technically incorrect” ones out of the dataset is the solution; rather, it’s something that needs to be handled in a way that accounts for the variation.
