đź—Ł Feedback needed: Languages and accents strategy

When in doubt, we will request the help of a linguistic expert to decide.

Please note that the Common Voice project’s needs might not be the same as other projects’. As I commented previously, our technical needs for training DeepSpeech models require a more restrictive definition of what a language is, one that will differ from other definitions out there.

I take note of the point about listing languages, but there is probably a fair amount of work for us in analyzing all languages in advance rather than analyzing them as they are requested.

Note that while English is one dataset, the accents metadata would allow us to create sub-datasets based on accent, so it would be possible, for example, to create a dataset for English in New Zealand if we have enough voices from there. Also note that the first priority is to be able to understand English; the possibility of adapting to local accents is an extra we will be able to add, but first we need to get the first part working :slight_smile:
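
To make the sub-dataset idea concrete, here is a minimal sketch of how a release file could be filtered by accent metadata. It assumes a Common Voice-style TSV with locale and accent columns; the exact column names, and the accent value used, are hypothetical and vary between releases:

```python
import csv

def filter_by_accent(tsv_path, locale, accent, out_path):
    """Copy the rows for one locale/accent pair into a sub-dataset file.

    Assumes a Common Voice-style TSV with 'locale' and 'accent' columns;
    column names vary between releases, so check the header first.
    """
    with open(tsv_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src, delimiter="\t")
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames, delimiter="\t")
        writer.writeheader()
        for row in reader:
            if row.get("locale") == locale and row.get("accent") == accent:
                writer.writerow(row)

# Hypothetical usage, assuming "newzealand" is the accent value in the data:
# filter_by_accent("validated.tsv", "en", "newzealand", "en_nz.tsv")
```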

People won’t have to select a city if they don’t want to. They will always be able to select a region or country instead.


I take note of the point about listing languages, but there is probably a fair amount of work for us in analyzing all languages in advance rather than analyzing them as they are requested.

You wouldn’t have to analyze all the languages in advance. The purpose is more that you wouldn’t have to take the responsibility of deciding how to organize this very complex thing that is human languages. You would delegate it to another entity, so that you can put more effort into designing and developing your tools.

By relying on an existing list of languages, you are more or less choosing a framework. There have been several attempts to define what a language is, and you don’t have to start this work from scratch.

You can still have your own definition of what a language is, tailored specifically for DeepSpeech, but that definition would have to remain internal to Common Voice and DeepSpeech. It’s not a definition you can easily impose on the rest of the world.

When you say “our technical needs to train DeepSpeech models require us to have a more restrictive definition on what a language is, that will differ from other definitions out there”, you have to be a little bit careful. With this approach you are asking users/contributors to adapt to your needs. You’ll be asking them to understand what a language is for you, rather than trying to understand what a language is for them.

I know how difficult it is to build software and I perfectly understand the rationale behind your approach. But I can tell you with a lot of confidence that the concept of language carries more than just a common set of words, grammar and writing system.

Language is, for a lot of people, something very tightly connected to their identity. It’s a facet of their culture, their history, their people. If you categorize their language in a different way than they perceive it, they won’t be happy with it, or they will be confused by it.

If you want to be as inclusive as possible, if you want to cater to diversity, you have to forget about technical requirements.

Iveskins mentioned Serbian and Croatian, and it’s an interesting example. Based on your definition of a language, you might put Serbian and Croatian – and Bosnian – under one and the same language, just as you would put American and British English under the same language. Then you might sub-categorize this language into a Croatian accent, a Serbian accent, and a Bosnian accent. But, if I may quote a Serbian who once wrote to us on this topic, “You will probably cause civil unrest if you would publicly put it as one language, from pure political reasons”.

Concretely, if you were to add Serbian, Croatian and Bosnian to your supported languages, you’d probably prefer to present them as different languages in the user interface. But under the hood you could remap the data into one “technical language” with different accents, if that’s more useful for DeepSpeech. The way you organize the data before feeding it to DeepSpeech is, after all, your own business. But you cannot tell your contributors “We grouped these languages into one because it makes more sense for DeepSpeech”; that’s not something they will all appreciate.
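
As an illustration of that under-the-hood remapping, here is a minimal sketch. The user-facing codes are real ISO 639-1 tags, and hbs is the ISO 639-3 code for the Serbo-Croatian macrolanguage, used here purely as an internal training key; the grouping itself is hypothetical, not a recommendation:

```python
# Hypothetical remapping: three user-facing languages, one internal
# training key. The grouping is an illustration, not a recommendation.
UI_TO_TRAINING = {
    "sr": ("hbs", "serbian"),    # Serbian in the UI
    "hr": ("hbs", "croatian"),   # Croatian in the UI
    "bs": ("hbs", "bosnian"),    # Bosnian in the UI
}

def training_key(ui_language):
    """Map a user-facing language code to a (technical_language, accent) pair.

    Languages that are not remapped train under their own code.
    """
    return UI_TO_TRAINING.get(ui_language, (ui_language, None))

assert training_key("hr") == ("hbs", "croatian")
assert training_key("eu") == ("eu", None)  # Basque passes through unchanged
```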

It is not a bad thing if you prefer to handle your language requests case by case and build your own list of languages along the way. I don’t want to discourage you from it. Just be aware that you’d be doing linguistic work (and difficult work at that). If that’s the path you choose, you probably want to involve linguists already at this stage, where you’re trying to define what a language is.

I don’t want to make you over-worried about it, though. I’m pretty sure you can carry on with an intuitive and technically oriented definition of language. Many people will still be very enthusiastic to donate their voice regardless of how you define what a language is. They will be understanding and they will comply with your definition. But involving linguists now can save you from awkward situations later, or at the very least get you better prepared for those awkward situations.

Thanks for bringing this perspective; I understand what you mean. I’ll have to check with the team about what is possible at a technical level; maybe there is a way to solve this for the languages where, working with linguists, we determine it’s the best thing to do.

We will also have to check whether there is a list of languages closer to our definition that we can rely on. As I said, this proposal was created based on the work of one of our linguistic experts.

Thanks again for your input!

I just want to say that in the case of the Basque language, choosing a region is OK. Choosing a city wouldn’t work and would be confusing for some speakers. Accent regions and political regions don’t match, so some people could choose the nearest city, or the city within their political region, and would make the wrong choice, because Basque accent regions aren’t distributed that way. If people with different accents choose the same city, the data will be mixed. So, if choosing a city is optional, Basque will keep using regions just as it already does (preferably without a city list, just to avoid user confusion).

Basque is mainly spoken in two countries, France and Spain. Two of the accents are used on the French side (in two different regions) and three accents on the Spanish side (in three different regions). So combining countries and regions would be possible too, but Basque doesn’t have that many regions, so just choosing one region from a simple list is enough.

Hi, I’m a bit late to the party here, but I’d like to offer a linguist’s point of view.

TL;DR: we should not crowdsource these definitions. Incorporate academic resources instead.

  1. Writing is a technology that had to be invented, and only recently.
  2. Native speakers universally acquire their native language.
  3. A natural language has an internally consistent phonology.
  4. Spoken variations form continuums; divisions into “languages” are sometimes political or historical.
  5. The official version of a language is often highly codified, constructed, and “unnatural” (far from spoken varieties).

Example: Norwegian is a language group; the varieties are largely mutually intelligible with each other, and with Swedish. The two formalised standards are Bokmål (which, half-jokingly, is a koineized version written in Danish) and Nynorsk (an imaginary proto-version), neither of which anyone “really speaks”.

The situation is comparable with Finnish. The official version that has a standard is an invention amalgamating features from natural varieties; it’s highly constructed (though it can be spoken).

Example: dialect continuum.

Consider:
ENGLISH: I am the son of my father and my mother.
SCOTS: A am the son o ma faither an ma mither.
FRISIAN: Ik bin de soan fan myn heit en myn mem.
DUTCH: Ik ben de zoon van mijn vader en mijn moeder.

Consider:
The Balkan example mentioned above.

What are the recommendations then? (A sketch of the corresponding metadata follows the list.)

  • For high-resource languages that have standards bodies, the metadata should designate the speaker’s status: whether they are producing the standardised variety, e.g. a “native” English speaker, who can use either General American or Standard Southern British.

  • For regional varieties, the metadata should designate native speakers of a variety, as defined by widely established dialectology.

  • Non-native speech should be labelled as such. There are varying levels of “accentedness”, from highly consistent L1 interference (in this case, you may say that the speaker has created a merged internal phonology in the process) to rampant lexical errors (e.g. using the wrong tone or quantity as a result of having no control over a phonemic contrast).
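
A minimal sketch of what per-speaker metadata following these recommendations might look like; every field name and value here is a hypothetical illustration, not an existing Common Voice schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeakerMetadata:
    """Hypothetical per-speaker fields following the recommendations above."""
    language: str                            # e.g. "en"
    is_native: bool                          # native vs. non-native speech
    standard_variety: Optional[str] = None   # e.g. "General American"
    regional_variety: Optional[str] = None   # per established dialectology
    accentedness: Optional[str] = None       # non-natives: "consistent-L1" ... "inconsistent"

native_standard = SpeakerMetadata("en", True, standard_variety="Standard Southern British")
regional = SpeakerMetadata("en", True, regional_variety="Glaswegian")
non_native = SpeakerMetadata("en", False, accentedness="consistent-L1")
```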

Now, in terms of ASR, conventionally there are two models: the acoustic model and the language model. At some point it may be helpful to also have a separate phonology model: e.g. which phonemes can occur together, how they change into allophones in different contexts, or, in the case of non-native phonology, substitutions, etc.
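
To illustrate the kind of knowledge such a phonology model would encode, here is a toy sketch, not DeepSpeech code, of one context-dependent allophone rule (English /t/ surfacing as a flap between vowels, as in many American varieties); the symbols are deliberately simplified:

```python
VOWELS = {"a", "e", "i", "o", "u"}  # simplified vowel inventory

def apply_allophones(phonemes):
    """Toy rule: /t/ between vowels surfaces as the flap [ɾ].

    A fuller phonology model would also encode phonotactics (which
    phoneme sequences are legal) and non-native substitution patterns.
    """
    out = list(phonemes)
    for i in range(1, len(out) - 1):
        if out[i] == "t" and out[i - 1] in VOWELS and out[i + 1] in VOWELS:
            out[i] = "ɾ"
    return out

# "butter" with simplified symbols: the medial /t/ flaps between vowels.
assert apply_allophones(["b", "a", "t", "e", "r"]) == ["b", "a", "ɾ", "e", "r"]
```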


In practical terms, what crowdsourced questions would be useful for describing the speech production itself? I imagine that, independent of the language/variety designation, we can get meaningful self-reported information along several axes (a sketch of how the answers might be stored follows the list):

Stable-Unstable
“When you speak this language, how stable is your accent over time?”
This ranges from a native variety speaker to, say, a cosmopolitan Finn who speaks convincing TV English but whose accent varies widely from week to week.
Cf. https://www.phonetik.uni-muenchen.de/~jmh/research/papers/harrington00.nature.pdf

Convincing-Accented
“How do others (especially native speakers) perceive your accent?”
This only applies when you are aiming for an idealised target, e.g. an actor in a film playing a speaker of some other variety. Note: here “accented” may be a bit misleading, since “convincing” is relative to the selected target; for example, when there is a consistent L1-mediated non-native phonology, the actor can put on a convincing “Russian accent” in English.

Regional-Koineized
“Are you speaking a variety that is used when regional locals talk to each other?”
When Glaswegians talk to Glaswegians, the production may be different from when they talk to New Zealanders.
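
A minimal sketch of how answers along these three axes could be stored alongside a recording; the field names and scales are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class SelfReportedAccent:
    """Hypothetical answers to the three questions above."""
    stability: int       # 1 = shifts week to week, 5 = stable over time
    convincingness: int  # only meaningful when imitating a target variety
    regional: bool       # True if this is the variety locals use with each other

# A Glaswegian speaking as they would to other Glaswegians:
glaswegian_local = SelfReportedAccent(stability=5, convincingness=5, regional=True)
```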


Do you have any empirical evidence that people are not able to self-classify their accents, and that the resulting classifications are not useful for the task of producing targeted speech recognition?


This is more or less the right approach.


@ftyers sometimes. Ferragne_2010_JPho.pdf has a British Isles English accent map, sort of.

Sorry everyone for taking so long to provide an update here.

We have been analyzing all your feedback, as well as consulting with more linguistic experts, both online and in person.

We are currently getting agreement on the final proposal, which I’ll share here as soon as it’s ready. It’s currently leaning towards a less restrictive approach (as a lot of you asked for).

Thanks for your patience.


October update:

We are still waiting to have a few more conversations with linguists (sorry, this is taking way longer than we expected), and we have also been trying to balance the current proposal so that it brings value to both product teams and linguistic researchers using our dataset.

A lot of this project has been learning as we go, and the complexity of providing value to everyone is higher than we initially thought.

My plan is to be able to come back with a recommendation by the end of this month.

Thanks all for your understanding!

Hi @txopi
It’s the same with the Kabyle language.

When I met some members from the Garabide foundation, we talked about the different dialects and accents. They don’t correspond to the administrative divisions. It’s the same with Kabylia.

Thanks for the lesson!!
Nice to know.

Hey everyone,

I know we haven’t published this yet, but getting to an agreement is taking a lot of time, since I’m scheduling personal conversations with different stakeholders.

My goal is to get a green light from the DeepSpeech and legal teams this week so we can share it here.

Cheers.

Update: As we have expressed here, we are definitely considering a more location-oriented metadata strategy to understand how people are likely to sound (as an improved version of the May proposal posted here).

We are right now evaluating with our legal team the requirements and limitations. We’ll share it as soon as we have agreement.


December update:

We have been working with our legal team on this proposal over the past month. Bearing in mind that the current development focus is on infrastructure, and that we won’t have the bandwidth to implement any changes on the site as a result of this proposal until at least February 2020, we agreed to give the legal team more time to finish the review and come back with recommendations.

Ideally this will happen mid- to late January, so we have clarity to start planning the implementation by February.

I’ll keep you posted and share the final proposal after the legal recommendations are out.

Thanks for your patience!


The latest version of the strategy (v5) is now published here.