Discussion: Relaxation of the 10 sec recording limitation

We had a side conversation about this a while back in this topic.

Problem definition:

  • Common Voice recordings are limited to 10 seconds. This limit was mainly based on Mozilla’s DeepSpeech work from 5+ years ago, chiefly because of the VRAM limits of GPUs at that time (8 GB).
  • This limit also constrains the text corpus (by default 14 words / ~100-110 characters per sentence, which can be changed with locale-specific rules - but because of the recording limit it is not wise to increase them much). Common Voice relies mainly on CC-0 / public-domain sentences, so many sentences from public-domain works got stripped out, or we had to do the tedious work of manually splitting sentences into sub-sentences to keep the vocabulary we needed.
  • DeepSpeech is no more, and neither is its successor Coqui STT. The current state of the art points towards Whisper and similar models, which can process longer chunks.

Current Status:

  • I searched but could not find reliable market figures for the latest graphics cards; the surveys that exist are written from a gamer’s perspective. Still, the current sweet spot for ML work in the wild is the RTX 3090/3090 Ti and RTX 4090 series with 24 GB of VRAM, or more dedicated cards with 40/48 GB or more.
  • So, from the hardware perspective, capacity has at least doubled, more likely tripled, which would allow us to work with longer audio while keeping batch sizes the same.
  • Whisper has a 30-second limit per inference window, exactly 3x the current 10-second limit.

So, I propose increasing this limit to 20 seconds (workable on 16 GB VRAM) and opening it up for discussion.

Upsides:

  • More text-corpus data, possibly more vocabulary and more domain-specific data (many languages have already covered everyday speech, which tends to be short, 3-5 words).
  • Possibly more volunteers / better volunteer retention as they get more interesting sentences.
  • More voice-corpus
  • Better data for state-of-the-art models
  • Better models

Downsides:

  • My experience with volunteers shows that they are happier with shorter sentences - they can move on to the next one more quickly.
  • Longer sentences can mean more errors, running out of breath, more re-recording of the same sentence, etc.
  • That would increase the data size to be processed.
  • That would need some changes in the code, both web/server and offline work.
  • It would also affect user-side code; if that code is not written parametrically, it can take some time to adapt (see the sketch after this list).
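To illustrate the last point, here is a minimal, hypothetical sketch of what “parametric” can mean in practice: the clip-length limit lives in one configurable place instead of being hard-coded wherever clips are validated. The names (MAX_CLIP_SECONDS, clip_is_valid) are illustrative, not taken from any actual Common Voice code.

```python
# Hypothetical sketch: keep the clip-length limit in one configurable place
# so moving from 10 s to 20 s is a one-line (or environment-variable) change.
import os

# Single source of truth for the limit; overridable without touching code.
MAX_CLIP_SECONDS = float(os.environ.get("MAX_CLIP_SECONDS", "10.0"))


def clip_is_valid(duration_seconds: float) -> bool:
    """Accept a clip only if it fits within the configured limit."""
    return 0.0 < duration_seconds <= MAX_CLIP_SECONDS


if __name__ == "__main__":
    for d in (4.2, 9.9, 14.5):
        print(d, clip_is_valid(d))
```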

So, what do you think?
Ref: @ftyers, @kathyreid, anybody.

Strong agree with this proposal, for the following reasons:

  • A 20s time limit will allow for more “natural”-sounding speech. Although it will still be read speech rather than spontaneous speech, the longer limit allows for greater variance in intonation and delivery.

  • A 20s time limit gives more flexibility in written-sentence generation and allows for more variation in sentence and grammatical structure.

I would like more information on:

  • Is there any impact on forced aligners from longer audio?
  • Have we tested the compute impact?
  • How do common ASR algorithms limit or constrain audio length? DeepSpeech is all but abandoned, but how do, say, Coqui STT or the NVIDIA models constrain audio length?
  • Will this impact some languages more than others? I wonder whether this would favour agglutinative languages, which can have very long word constructions. I’m thinking of, say, German or Kiswahili, where agglutination gives you very long words within sentences. I’m not strong enough in linguistics to know which groups of languages are more agglutinative, but I bet @Francis_Tyers knows :laughing:. So there might be an argument here on the basis of equity and diversity - if under-represented or marginalised languages are better served by a longer audio length.

Thank you for more insight into the topic @kathyreid. Important points indeed.

Coqui STT

Unfortunately, it was also abandoned last month. Therefore I moved to other possibilities, initially Whisper for now… With DeepSpeech (i.e. Coqui STT), I could work with batch sizes of 128/256 in most cases on 16 GB of VRAM; if training fails, you just drop the batch size. So when you increase the duration, you should halve the batch size for the same GPU. I never worked with clips longer than 10 seconds, but I know of people working with 20-25 second clips at batch sizes around 32, which is mostly enough - it just takes more time (up to 30% more in my experiments with Coqui STT). I did not see important changes in model accuracy within these ranges in my Coqui STT experiments.
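As a rough back-of-the-envelope check of that “double the duration, halve the batch size” rule: if activation memory grows roughly linearly with clip length, the product of batch size and clip duration stays approximately constant for a fixed VRAM budget. This is only a first-order approximation (model weights and optimizer state do not scale with clip length), and the numbers below are illustrative.

```python
# Rough, first-order estimate: for a fixed VRAM budget, activation memory
# scales roughly with (batch size x clip duration), so doubling the clip
# length means roughly halving the batch size. Weights and optimizer state
# are ignored here, so treat the output as illustrative only.
def scaled_batch_size(reference_batch: int,
                      reference_seconds: float,
                      target_seconds: float) -> int:
    """Keep batch_size * duration approximately constant."""
    return max(1, int(reference_batch * reference_seconds / target_seconds))


if __name__ == "__main__":
    # e.g. if 128 clips of 10 s fit, how many 20 s or 25 s clips would?
    for target in (10.0, 20.0, 25.0):
        print(target, "s ->", scaled_batch_size(128, 10.0, target))
```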

There is always the possibility of dividing the audio of course.
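As a concrete illustration of splitting, here is a minimal sketch that cuts a waveform into fixed-length chunks. It assumes the audio is already loaded as a NumPy array (e.g. via soundfile or librosa); the function name and chunk length are illustrative.

```python
# Minimal sketch: split a waveform (already loaded as a 1-D NumPy array)
# into fixed-length chunks, e.g. to keep each piece under a model's limit.
import numpy as np


def split_waveform(samples: np.ndarray, sample_rate: int,
                   chunk_seconds: float = 10.0) -> list[np.ndarray]:
    chunk_len = int(chunk_seconds * sample_rate)
    return [samples[i:i + chunk_len] for i in range(0, len(samples), chunk_len)]


if __name__ == "__main__":
    sr = 16_000
    fake_audio = np.zeros(sr * 25)            # 25 s of silence as a stand-in
    chunks = split_waveform(fake_audio, sr, chunk_seconds=10.0)
    print([len(c) / sr for c in chunks])      # [10.0, 10.0, 5.0]
```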

AFAIK, the recommended audio duration for NVIDIA’s Jasper/QuartzNet is 5-25 seconds. It is not a hard limit, but I have read that going beyond 20 seconds starts to “hit the GPU memory”. For Whisper it is 30 seconds. I’m new to Whisper, but I have read about problems during inference with longer audio, where it tries to enforce the sentence patterns from the training set, so people tend to split the audio into shorter chunks - but that is inference. I do not have enough knowledge about fine-tuning yet; I will in a couple of weeks - “more work needed :slight_smile:”.
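For the inference side, the Hugging Face transformers ASR pipeline can do that chunking for you via chunk_length_s; a minimal sketch, where the model checkpoint and file path are placeholders:

```python
# Sketch of chunked inference with the Hugging Face ASR pipeline; it splits
# long audio into windows internally (here 30 s, matching Whisper's window).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",   # any Whisper checkpoint would do
    chunk_length_s=30,              # process long files in 30-second windows
)

result = asr("path/to/long_recording.wav")  # placeholder path
print(result["text"])
```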

As an experiment, we could custom-split a single dataset so that the total training audio stays the same but the sentence lengths fall into different ranges. But many other factors would affect such an experiment; I thought about it before and left it aside for that reason.

More info in these areas is needed of course. Anybody with knowledge, please chime in.

agglutinative languages

Turkish is one of them, nearly the most extreme: we cannot use classic n-gram LMs for a general-purpose application, because the model easily becomes huge.

But, as we know, it is mainly the total training audio duration that matters, not just the number of sentences. So instead of 10,000 sentences averaging 5 seconds, getting longer ones that bring the average up to 10 seconds will provide much better results. Marginalized languages could then collect a text corpus more easily and build better models thanks to the longer audio - I think.

Another problem with long audio with respect to training is the length distribution. If most of your audio is around 5 seconds, a single 15-second clip combined with a large batch size tuned for 5-6-second clips will at some point crash the training. This has been the reason for the hard limit. It will be an annoyance at the start, but it will even out over time. I can think of several ways to ease this, e.g. excluding the longer clips as we have done until now, or training with shorter ones (10 sec) and fine-tuning with longer ones, etc.
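The “train on short clips first, fine-tune on longer ones” idea boils down to partitioning the data by duration so each training stage sees a consistent length range. A minimal, library-free sketch, where the 10-second threshold and the clip records are purely illustrative:

```python
# Minimal sketch of the "train short first, fine-tune on long later" idea:
# partition clips by duration so each stage sees a consistent length range.
def partition_by_duration(clips, threshold_seconds=10.0):
    """clips: iterable of dicts with a 'duration' key (seconds)."""
    short, long_clips = [], []
    for clip in clips:
        (short if clip["duration"] <= threshold_seconds else long_clips).append(clip)
    return short, long_clips


if __name__ == "__main__":
    clips = [{"path": "a.mp3", "duration": 4.8},
             {"path": "b.mp3", "duration": 9.7},
             {"path": "c.mp3", "duration": 17.3}]
    base_stage, finetune_stage = partition_by_duration(clips)
    print(len(base_stage), "short clips,", len(finetune_stage), "long clips")
```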

In any case, my main concern is UX - i.e. people getting tired of reading long sentences.


FWIW, could the next ‘evolution’ of Common Voice be a collaboration or overlap with the LibriVox project? That could solve the problem of longer clips whilst providing interesting sentences to volunteers.

One piece of feedback here, after I learned HF & Whisper and fine-tuned some languages: the Trainer in HF has parameters to minimize this effect, namely length-grouped batching, where similar-length samples are sorted and padded together in batches.
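For reference, the relevant knob is group_by_length in TrainingArguments (together with a length column in the dataset), which batches similar-length samples to cut padding waste. A minimal sketch, with the output directory and column name as placeholders:

```python
# Sketch: ask the HF Trainer to batch samples of similar length together,
# which reduces padding waste when clip durations vary a lot.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="whisper-finetune",      # placeholder path
    per_device_train_batch_size=16,
    group_by_length=True,               # bucket samples of similar length
    length_column_name="input_length",  # dataset column holding sample lengths
)
```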


Most of the languages in CV have average recording durations between 3-6 seconds. This is mainly because the text corpora are drawn from everyday conversations, which were easier to collect. I’m not saying short sentences are not needed, but there are only so many combinations of them, and at some point they become similar, especially since the sentence-collection utilities do not apply normalization and/or similarity measures when checking against already existing sentences. So “Hey! How are you?” and “Hey, how are you?” become two separate sentences, yet they sound very similar and thus do not add much value.
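To make the normalization point concrete, here is a minimal sketch of the kind of check that would collapse “Hey! How are you?” and “Hey, how are you?” into one entry. The normalization rules shown are illustrative, not what any Common Voice tool actually does.

```python
# Minimal sketch: normalize sentences (lowercase, strip punctuation, collapse
# whitespace) before checking for duplicates, so near-identical variants such
# as "Hey! How are you?" and "Hey, how are you?" count as one sentence.
import re
import string


def normalize(sentence: str) -> str:
    sentence = sentence.lower()
    sentence = sentence.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", sentence).strip()


def deduplicate(sentences):
    seen, unique = set(), []
    for s in sentences:
        key = normalize(s)
        if key not in seen:
            seen.add(key)
            unique.append(s)
    return unique


if __name__ == "__main__":
    print(deduplicate(["Hey! How are you?", "Hey, how are you?"]))
    # -> ['Hey! How are you?']
```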

Longer sentences have many more permutations, so similarity drops. This also holds under the current 10-second limit.

In the meantime, I’m preparing to use cv-sentence-extractor for the first time, and I plan to get longer - if possible the longest - sentences from Wikipedia. To do this, some code changes are required, if it is legally possible of course. We had a related discussion on a GitHub issue here.

FYI…

I just wanted to ping this discussion. Will this request be given any consideration? How can it be processed?

Because the discussion on the above issue is also not concluded, I had to run the cv-sentence-extractor multiple times to fine-tune its rules. I selected a 3-word minimum but also increased the minimum sentence length to get longer sentences and maximize the total number of sentences. My working results and statistics are presented here, and I think it is the best compromise I can reach within the current limits.

Generally this is a good proposal (it would make reading and recording more interesting), but please test it beforehand - not every smartphone contributor has the latest top model.
Possible solutions (if this does not work as expected) could be:

  • Single clip upload
  • An official Mozilla app for smartphone contribution

While translating in Pontoon, I hit the following “hidden” sentence:

[screenshot of the hidden sentence in Pontoon]

So, is this happening? Or did it happen and I missed it?

Hello, and my apologies for the slow response to this thread. I wanted to get a better understanding of how and when we were going to be able to address this with the team.

The short answer is: we agree with you all, the recording time should be expanded, and we’ll be trying to get an expansion of the recording time to 15 seconds onto the engineering roadmap.

@bozden that documentation is incorrect and I appreciate you flagging it; we’ll be expanding the recording time, so the bulk sentences documentation should be corrected by this change.

I’ll put up a blog post with more details as we get closer to getting this change out and released!


@jesslynnrose: Actually, it was (re-)implemented a day after your post with this commit for v1.114.2:

As this will result in major changes to many workflows, I’ll be eagerly awaiting your blog post.
