🗣 Feedback needed: Languages and accents strategy

@achraf.khelil thanks for your feedback and welcome to the Common Voice community!

It seems to me that your message is centered on implementation and user experience. I would say that's something our UX experts will figure out how to make easy when we get there. In my opinion we shouldn't worry much about it now, and should focus instead on agreeing on the basic needs and strategy.

Cheers.

Thank you for your response, but I'm not focused on UX. I suggested these changes to:

  • Address the confusing situations raised by @dschridde: non-native speakers without an accent and native speakers with an accent.
  • Reduce the dataset and improve segmentation performance by bringing it closer to reality. The reality is that accents don't differ between cities, but between regions. And for non-native speakers, their region and city are not important; it's their native language and country that can influence their accent.

I would say that’s a very broad statement. I have some examples in my country where accents change between cities within the same region, so this can be different from one country to another. Also, this is what we heard from our linguistic experts.

The good thing about this proposal is that people can decide whether to self-identify with a country, region, city, or none at all. This shouldn't matter, because we have data about where each city is located (region and country), so it won't affect people who just select regions in countries where there is no difference between cities.
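For illustration, a minimal sketch of that roll-up, with a hypothetical gazetteer and field names (not our actual data model):

```python
# Sketch: because each city's region and country are known, a city-level
# self-identification can always be rolled up to region or country level.
# The gazetteer below is a hypothetical stand-in for that data.
GAZETTEER = {
    "Bilbao": ("Basque Country", "Spain"),
    "Glasgow": ("Scotland", "United Kingdom"),
}

def roll_up(city: str) -> tuple:
    """Return the (region, country) a city selection implies."""
    return GAZETTEER[city]

# A contributor who selects only a region is already at the coarser level,
# so mixing city-level and region-level answers still aggregates cleanly.
assert roll_up("Bilbao")[0] == "Basque Country"
```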


In my humble opinion and experience, there are lots of nuances to “accent” in a voice and the words used in communication. It will be difficult to capture all of these nuances with just one location.

However, given that we are dealing with this data primarily for research and development – including machine learning – I think there is value in categorizing contributions by the apparent characteristics of the contribution.

I believe this is best supported by “tags” which augment the primary category. For example, a speaker of English could have their category (“English”) described by multiple tags. #us_southern might appropriately apply to a large group of speakers in the United States, but even people in the same city might sound dramatically different. In Kansas City I think I would apply #us_midwestern to most speakers, but additionally some might be described with #urban and/or #latino even though they grew up within a few miles of each other. There are likely subtleties that would apply to someone #latino from #newyork_bronx, or someone #jewish from #newyork_bronx.

This sort of tagging is a little more complex, but I think the accent differences between geographically near locations in New York are much more pronounced than those between Indiana and southern Kansas, which are hundreds of miles apart and would both fall under a general #us_midwestern tag.

In less widely spoken languages I can imagine location defines much of this. But I also think a tagging system would serve just as well.

The challenge with this system is keeping the tags under control (avoiding both #southern and #us_southern, for example). But if you allow a dynamic, easily searchable list that admins can curate by merging similar tags – with automatic updates for users who had the merged tags – and if you keep the data associated with the individual, so that later refinements of their tags are retroactively applied to earlier contributions, then I think you would have a usable mechanism.
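To make that concrete, here is a minimal sketch of such a merge mechanism (names and storage are purely illustrative):

```python
# Sketch: admin-curated tag merging with retroactive effect. Tags are
# stored per speaker rather than per clip, so later refinements
# automatically apply to that speaker's earlier contributions.
ALIASES = {}       # merged tag -> canonical tag, e.g. "#southern" -> "#us_southern"
SPEAKER_TAGS = {}  # speaker id -> set of tags

def canonical(tag: str) -> str:
    """Follow merge chains until the current canonical tag is found."""
    while tag in ALIASES:
        tag = ALIASES[tag]
    return tag

def merge(duplicate: str, into: str) -> None:
    """Admin action: fold one tag into another, updating every speaker."""
    ALIASES[duplicate] = into
    for tags in SPEAKER_TAGS.values():
        if duplicate in tags:
            tags.discard(duplicate)
            tags.add(into)

def tags_for_clip(speaker_id: str) -> set:
    """Clips inherit the speaker's current tags, making curation retroactive."""
    return {canonical(t) for t in SPEAKER_TAGS.get(speaker_id, set())}

SPEAKER_TAGS["alice"] = {"#southern"}
merge("#southern", "#us_southern")
assert tags_for_clip("alice") == {"#us_southern"}
```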


I'm also worried about the complexity of this proposal; we don't have the bandwidth to maintain a curated list of tags or the time to have admins patrolling for duplicates in all languages.

Being more granular than city is probably better, yes, but at some point we need to strike a compromise between utility, complexity, and resources.

Hi! I'm working on a fully accent-independent way to help everyone with pronunciation remediation for free. Instead of examining whether an accented utterance is or is not "correct" according to a pronunciation expert or panel of judges, we use Nakagawa (2011) and his grad students' method of trying to predict whether a listener, whether or not they are a native English speaker, would transcribe the utterance as the speech that was supposed to have been said. Please join the Spoken Language Interest Group of the IEEE Learning Technologies Standards Committee at http://bit.ly/slig and follow the main Discourse topic at: Intelligibility remediation
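As a minimal sketch of the idea, any ASR engine can stand in as the proxy listener, with word error rate as the score (`transcribe` below is a hypothetical placeholder; `jiwer` is an off-the-shelf WER library):

```python
# Sketch: score intelligibility by whether an automatic "listener"
# recovers the intended prompt, per the transcription-prediction idea
# described above. `transcribe` is a hypothetical ASR stand-in.
import jiwer  # off-the-shelf word-error-rate library

def intelligibility(prompt: str, audio_path: str, transcribe) -> float:
    """1.0 when the proxy listener recovers the prompt exactly,
    falling toward 0.0 as word errors accumulate."""
    hypothesis = transcribe(audio_path)  # any ASR engine works here
    return max(0.0, 1.0 - jiwer.wer(prompt.lower(), hypothesis.lower()))
```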

Thank you!

One small point: in your definition of a language, in order to avoid many variants of the same language, you make reference to a common writing system. That means that for a language without a stable writing system (and they are numerous) your definition will not be operative; you will not be able to say whether it's a variant of the language with a variant of the writing system, or a new language. If you want to be normative, you may add "if a writing system exists which allows you to express the same words with the same grammar".

Thanks for the feedback @gadda and welcome to the community!

Can you provide a few languages as examples where this happens?

Thanks!

I see feedback is closing soon. I have time for a few thoughts about writing systems. (There’s much to say about “the same words and grammar”, too.)

Why include a writing system in the definition of a language? In line with @gadda, many languages do not have one writing system. Reasons for this can include:

  1. the language is not (or at least not primarily) written
  2. the language has no standard writing system
  3. the language has historically been written with multiple systems
  4. the language has multiple standard writing systems
  5. the language has multiple conventions (with or without a standard)

Examples of 4: Mongolian, Serbian. And of the last: ASL.

Moreover, scripts have variants. Do “Simplified” and “Traditional” Chinese characters represent different languages? What about various scripts for writing Assyrian Aramaic? Or, looking back in history, what about Egyptian, or the Tamil Brahmic system variants layered onto the language? I don’t know which cases matter to these projects, but they’re relevant linguistically.

I’m a bit late to this thread, hopefully you’re still reading feedback.

My concern for you is about languages.

You have a definition for what you consider to be a language, but in the end, the definition alone may not help you. When people start to request languages that are not so common, it will become difficult to know whether they're requesting a valid language.

(1) Sometimes it's very hard to decide if something is just a "variant" rather than a whole language on its own. Unless you have a team of linguists working on categorizing languages, you probably won't be able to decide properly.

(2) You may also face difficulties when accepting or rejecting constructed languages. You already have Esperanto; that one is easy to accept. But if someone requested Toki Pona, would you accept it or reject it?

Based on those two problems, in Tatoeba we ended up following the ISO 639-3 categorization. This helps us to decide what is a language, what is a dialect/variant, and which constructed languages are “officially” recognized as languages.

I can give you a concrete example of a difficult decision with Arabic.

  • Based on the ISO 639-3 categorization, our contributors are allowed to request each of the languages listed under the Arabic macrolanguage.
  • We had a request to add “Gulf Arabic” as a language: https://github.com/Tatoeba/tatoeba2/issues/1084
  • Some people disagreed with this and argued that there is only one Arabic language and that adding Gulf Arabic would make the Tatoeba corpus messy.
  • Someone counter-argued that there is linguistic evidence for separating Arabic into those many languages.
  • Gulf Arabic is valid based on the ISO 639-3 language list, and we added it.

More recently we had issues with Berber and Kabyle. I won't go into details for this one; this is just to say that Arabic was not the only difficult case.

Based on my experience, I would recommend that you check what's available out there and get a predefined list of languages, so that you can tell your contributors "These are the languages we acknowledge as languages" and don't have to worry too much about drawing the line between languages, dialects, and variants. It doesn't have to be ISO 639-3; maybe there's something else more suitable for you.

Even with a predefined list, it won’t spare you from lots of headaches, but at least it will give you a direction for deciding what languages you can accept.

Also, if it helps, here’s a snippet of Tatoeba’s instructions for language requests so that you have a concrete idea how we handle this:

Search for your language in the ISO 639-3 list of languages.

[…]

Please understand that if your language is not recognized in the ISO 639-3 standard, we cannot support it in Tatoeba. Language classification is a complex task and it is not part of Tatoeba’s mission. We rely on the ISO 639-3 standard to define what is a valid language.

There are some exceptions due to legacy reasons, but we will not make more exceptions.

If your language is missing in this standard, please contact the ISO 639-3 Registration Authority from their website: https://iso639-3.sil.org/.
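To give a concrete idea of what that gatekeeping check could look like, here is a minimal sketch against the ISO 639-3 table (using the `pycountry` package, which bundles it; the messages are illustrative):

```python
# Sketch: gatekeeping a language request against ISO 639-3, as the
# Tatoeba instructions above describe. The pycountry package bundles
# the ISO 639-3 table; the messages are illustrative.
import pycountry

def validate_request(code: str) -> str:
    lang = pycountry.languages.get(alpha_3=code)
    if lang is None:
        return f"'{code}' is not in ISO 639-3; contact the registration authority."
    return f"'{code}' is recognized: {lang.name}"

print(validate_request("afb"))  # Gulf Arabic -> recognized
print(validate_request("qaa"))  # reserved for local use -> not in the table
```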

My first language is New Zealand English. It has a common writing system with most other Englishes, but also some unique spelling/sound correspondences. These are due to the intermixing with NZ's other common language, Māori.
There are many common words borrowed from Māori in New Zealand English. These words are written with the Māori spelling/sound correspondences. E.g. 'Whakatane' is said fah-kah-TAH-nə, not wack-a-tain. Ngaruawahia = [ŋaːɾʉaˈwaːhia], with the NG being a nasal sound like in "suNG".

Macrons can sometimes also be used for long vowels in Māori, and they are kept when writing in English.
To complicate things further, there is a wide range of real-life ways that non-native and uneducated speakers of Māori or NZ English will pronounce such words. If you showed the prompt "navigate to Whakatane" to someone with no experience with New Zealand English, it would be hard for them to know how to pronounce it.
So both 'correct' and incorrect pronunciations are in common use. If someone wanted to make a voice-controlled GPS app that could be used by international tourists and locals alike, then it would be important to capture all these data points.
Google and Vodafone NZ have made a (presumably private) dataset, described here:
https://news.vodafone.co.nz/article/new-zealanders-highlight-te-reo-maori-names-be-updated-google-maps

If you are making an app for transcribing text and it is being used by someone outside of NZ, you probably don't want "fah-kah" to correspond to the letters "Whaka". So this dataset needs to be separable from other Englishes.
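One way to keep such correspondences separable is a per-locale exceptions lexicon; a minimal sketch (the lexicon format and phone strings are illustrative only):

```python
# Sketch: a per-locale exceptions lexicon so NZ English loanword
# spellings stay separable from other Englishes. The phone strings
# below are rough placeholders, not a real phone set.
NZ_ENGLISH_LEXICON = {
    "Whakatane": "f a k a t a: n @",         # Maori "wh" -> /f/, "@" = schwa
    "Ngaruawahia": "N a: r u a w a: h i a",  # initial "ng" -> /N/ (as in "sung")
}

def phones(word: str, lexicon: dict, fallback) -> str:
    """Prefer the locale lexicon; otherwise use a generic grapheme-to-phoneme routine."""
    return lexicon.get(word) or fallback(word)
```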

So there is a plurality of Englishes around the world which share many, but not all, words. If each language has just one dataset, will the unique features of each country be left out, or all mixed together? Neither seems desirable. Or will there be great duplication of words where there is overlap? Also not ideal.
Say New Zealand English and Australian English are 95% similar in terms of words and grammar. New Zealand and British are 90% alike, and New Zealand and American are also 90% alike, but in different ways. Do we need to collect completely different sentence sets for NZ, AUS, UK, US?
If not, what does an American do with the prompt "Rangitoto", or a NZer with "Arkansas", for that matter?
When I talk with English speakers from Kenya or India, they have their own unique sets of words and grammar too. These cannot simply be accounted for as accents or informal language.
This study has some good examples of the differences: http://archive.gameswithwords.org/WhichEnglish (it might need the Wayback Machine to read now). "The dog was chased the cat", etc.
(Also, does grammar even matter to an agent trained on sound files / text chunks?)

My current second language is Japanese. I am forever embarrassed by Amazon Alexa's refusal to understand a word I say, even when humans have no problem. I wouldn't consider myself near-native, but I definitely think my accent is influenced by the region I live in. This is the language community I participate in to become a Japanese speaker, so of course I pick up its habits.

As for accents, I don't think defining by cities is a good idea. Firstly, because about half the world doesn't live in one (yet). Rural people are already underserved by technology; I would hesitate to choose categories that by design make something less useful to them.
Secondly, because accent is more about language communities. There are also differences based on age, class, education level, and ethnicity. Common descriptions of English accents usually include a "cultivated" variant, because people like to show how educated they are by changing up their vowels.
Accent is of course related to how different speakers move from graphemes (signs) to morphemes (mental) to phonemes (sounds); this is segmental. There are also suprasegmental elements to accents: stress, intonation, prosody, pitch. These seem to be missing from this definition.
Sorry if there are mistakes here; I'm no expert. Also, it's hard to write on a phone.

https://i.imgur.com/56VgVmP.jpg Indeed. Which is Serbian, which is Croatian? I don't want to be the one to say.


For our practical purposes, yes. Keep in mind that our app displays text for people to read, and that text is then matched with voices and passed to a machine learning system.

We need to organize different writing systems into different datasets; that's why we consider languages based on a common writing system.

When in doubt, we will request the help of a linguistic expert to decide.

Please note that the Common Voice project's needs might not be the same as others'. As I commented previously, our technical needs for training DeepSpeech models require us to have a more restrictive definition of what a language is, which will differ from other definitions out there.

I take note of the point about listing languages, but there is probably a fair amount of work for us to analyze all languages in advance rather than analyzing them as they are requested.

Note that while English is one dataset, the accent metadata would allow us to create sub-datasets based on accent, for example, so it would be possible to create a dataset for English in New Zealand if we have enough voices from there. Also note that the first priority is to be able to understand English; the possibility to adapt to local accents is an extra thing we will be able to do, but first we need to get the first one done :slight_smile:
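As a minimal sketch of what carving out such a sub-dataset could look like, assuming per-clip metadata along the lines of a Common Voice validated.tsv (exact column names and accent values vary between releases):

```python
# Sketch: deriving an accent-specific sub-dataset from the single
# English dataset, assuming per-clip metadata like Common Voice's
# validated.tsv (exact column names and accent values vary by release).
import pandas as pd

clips = pd.read_csv("validated.tsv", sep="\t")

# Keep only clips whose speakers self-identified with a New Zealand accent.
nz_english = clips[clips["accent"] == "newzealand"]
nz_english.to_csv("validated_nz.tsv", sep="\t", index=False)
```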

People won’t have to select a city if they don’t want to. They will be always able to select region or country.


I take note of the point about listing languages, but there is probably a fair amount of work for us to analyze all languages in advance rather than analyzing them as they are requested.

You wouldn’t have to analyze all the languages in advance. The purpose is more that you wouldn’t have to take the responsibility of deciding how to organize this very complex thing that is human languages. You would delegate it to another entity, so that you can put more effort into designing and developing your tools.

By relying on an existing list of languages, you are more or less choosing a framework. There have been several attempts to define what a language is, and you don't have to start this work from scratch.

You can still have your own definition of what a language is, tailored specifically for DeepSpeech, but that definition would have to remain internal to Common Voice and DeepSpeech. It's not a definition you can easily impose on the rest of the world.

When you say “our technical needs to train DeepSpeech models require us to have a more restrictive definition on what a language is, that will differ from other definitions out there”, you have to be a little bit careful. With this approach you are asking users/contributors to adapt to your needs. You’ll be asking them to understand what a language is for you, rather than trying to understand what a language is for them.

I know how difficult it is to build software and I perfectly understand the rationale behind your approach. But I can tell you with a lot of confidence that the concept of language carries more than just a common set of words, grammar and writing system.

Language is, for a lot of people, something very tightly connected to their identity. It's a facet of their culture, their history, their people. If you categorize their language in a different way than they perceive it, they won't be happy with it, or they will be confused by it.

If you want to be as inclusive as possible, if you want to cater to diversity, you have to forget about technical requirements.

Iveskins mentioned Serbian and Croatian, and it's an interesting example. Based on your definition of a language, you might put Serbian and Croatian – and Bosnian – under one and the same language, just like you would put American and British English under the same language. Then you might sub-categorize this language into a Croatian accent, a Serbian accent, a Bosnian accent. But, if I may quote a Serbian who once wrote to us on this topic, "You will probably cause civil unrest if you would publicly put it as one language, from pure political reasons".

Concretely, if you were to add Serbian, Croatian and Bosnian to your supported languages, you'd probably prefer to present them as different languages in the user interface. But under the hood you could remap the data into one "technical language" with different accents, if that's more useful for DeepSpeech. The way you organize the data before feeding it to DeepSpeech is, after all, your own business. But you cannot tell your contributors "We grouped these languages into one because it makes more sense for DeepSpeech"; that's not something they will all appreciate.
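A minimal sketch of that separation, with an illustrative grouping code and variant labels (not an actual Common Voice scheme):

```python
# Sketch: separate languages in the UI, one "technical language" under
# the hood. The 'hbs' grouping code and variant labels are illustrative
# choices, not an actual Common Voice scheme.
UI_TO_TRAINING = {
    "sr": ("hbs", "serbian"),   # presented as Serbian
    "hr": ("hbs", "croatian"),  # presented as Croatian
    "bs": ("hbs", "bosnian"),   # presented as Bosnian
}

def training_bucket(ui_language: str) -> tuple:
    """Map a contributor-facing language to (training dataset, variant tag)."""
    return UI_TO_TRAINING.get(ui_language, (ui_language, None))
```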

It is not a bad thing if you prefer to handle your language requests case by case and build your own list of languages along the way. I don't want to discourage you from it. Just be aware that you'd be doing linguistic work (and difficult work at that). If that's the path you choose, you probably want to involve linguists already at this stage, while you're trying to define what a language is.

I don't want to make you over-worried about it, though. I'm pretty sure you can carry on with an intuitive and technically oriented definition of language. Many people will still be very enthusiastic to donate their voice regardless of how you define what a language is. They will be understanding and will comply with your definition. But involving linguists now can save you from awkward situations later, or at the very least get you better prepared for those awkward situations.

Thanks for bringing this perspective; I understand what you mean. I'll have to check with the team about what is possible at a technical level; maybe there is a way to solve this for the languages where, working with linguists, we determine it's the best thing to do.

We will also have to check if there is a list of languages closer to our definition that we can rely on. As I said, this proposal was created based on the work of one of our linguistic experts.

Thanks again for your input!

I just want to say that in the case of the Basque language, choosing a region is OK. Choosing a city wouldn't work and would be confusing for some speakers. Accent regions and political regions don't match, so some people could choose the nearest city, or a city in their political region, and would make a wrong choice, because the Basque accent regions aren't distributed that way. If people with different accents choose the same city, the data will be mixed. So, as long as choosing a city is optional, Basque will keep using regions just as it already does (preferably without a city list, just to avoid user confusion).

Basque is mainly spoken in two countries, France and Spain. Two of the accents are used on the French side (in two different regions) and three accents on the Spanish side (in three different regions). So combining countries and regions would be possible too, but in the Basque language there aren't so many regions, so just choosing one region from a simple list is enough.

Hi, I'm a bit late to the party here, but I'd like to offer a linguist's point of view.

TL;DR: we should not crowdsource these definitions; incorporate academic resources instead.

  1. Writing is a technology that was invented relatively recently.
  2. Native speakers universally acquire their native language.
  3. A natural language has an internally consistent phonology.
  4. Spoken varieties form continuums; divisions into “languages” are sometimes political or historical.
  5. The official version of a language is often highly codified, constructed, and “unnatural” (far from spoken varieties).

Example: Norwegian is a language group; the varieties are largely mutually intelligible with each other, and with Swedish. The two formalised standards are Bokmål (which, half-jokingly, is a koineized version written in Danish) and Nynorsk (an imaginary proto-version), both of which no-one “really speaks”.

The situation with Finnish is comparable. The official version that has a standard is an invention amalgamating features from natural varieties; it's highly constructed (though it can be spoken).

Example: dialect continuum.

Consider:
ENGLISH: I am the son of my father and my mother.
SCOTS: A am the son o ma faither an ma mither.
FRISIAN: Ik bin de soan fan myn heit en myn mem.
DUTCH: Ik ben de zoon van mijn vader en mijn moeder.

Consider:
The Balkan example mentioned above.

What are the recommendations then?

  • For high-resource languages that have standards bodies, the metadata should designate speaker status: whether they are producing the standardised variety, e.g. a “native” English speaker, who may use either General American or Standard Southern British.

  • For regional varieties, the metadata should designate native speakers of a variety, as defined by widely established dialectology.

  • Non-native speech should be labelled as such. There are varying levels of “accentedness”, from highly consistent L1 interference (in this case, you might say that the speaker has created a merged internal phonology in the process) to rampant lexical errors (e.g. using the wrong tone or quantity as a result of having no control over a phonemic contrast).
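A minimal sketch of what such speaker metadata could look like (field names and values are illustrative, not a proposed schema):

```python
# Sketch: speaker metadata along the lines recommended above. Field
# names and enumerated values are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeakerMetadata:
    language: str                     # e.g. "en"
    is_native: bool
    standard_variety: Optional[str]   # e.g. "General American", if producing one
    regional_variety: Optional[str]   # per established dialectology, if native
    l1: Optional[str]                 # first language, for non-native speech
    accentedness: Optional[str]       # e.g. "consistent L1 phonology" vs. "inconsistent"

sample = SpeakerMetadata(
    language="en", is_native=False, standard_variety=None,
    regional_variety=None, l1="fi", accentedness="consistent L1 phonology",
)
```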

Now in terms of ASR, conventionally there are two models: the acoustic model and the language model. At some point it may be helpful to also have a separate phonology model: e.g. which phonemes can occur together, how they change into allophones in different contexts, or, in the case of non-native phonology, what substitutions occur, etc.
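For context, a minimal sketch of how a decoder conventionally combines those models when ranking candidate transcripts (the log-linear weighting is standard practice; everything here is a placeholder):

```python
# Sketch: the conventional decomposition mentioned above. A decoder
# ranks candidate transcripts by a weighted sum of model scores; a
# phonology model would simply add a third term.
def hypothesis_score(acoustic_logp: float, lm_logp: float,
                     phonology_logp: float = 0.0,
                     lm_weight: float = 1.5,
                     phonology_weight: float = 0.0) -> float:
    """Log-linear combination of per-model log-probabilities."""
    return acoustic_logp + lm_weight * lm_logp + phonology_weight * phonology_logp
```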


In practical terms, what crowdsourced questions would be useful for describing the speech production itself? I imagine that, independent of the language/variety designation, we can get meaningful self-reported information along several axes:

Stable-Unstable
“When you speak this language, how stable is your accent over time?”
This ranges from a native variety-speaker to, say, a cosmopolitan Finn who speaks convincing TV English but whose accent varies widely from week to week.
Cf. https://www.phonetik.uni-muenchen.de/~jmh/research/papers/harrington00.nature.pdf

Convincing-Accented
“How do others (especially native speakers) perceive your accent?”
This only applies when you are aiming for an idealised target, e.g. an actor in a film playing a speaker of some other variety. Note: here “accented” may be a bit misleading, since “convincing” is relative to the selected target; for example, when there is a consistent L1-mediated non-native phonology, the actor can put on a convincing “Russian accent” in English.

Regional-Koineized
“Are you speaking a variety that is used when regional locals talk to each other?”
When Glaswegians talk to Glaswegians, the production may be different from when they talk to New Zealanders.
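A minimal sketch of those three axes encoded as survey items (keys and scale labels are illustrative only):

```python
# Sketch: the three self-report axes above, encoded as survey items.
# Keys and scale labels are illustrative only.
ACCENT_SURVEY = {
    "stability": {
        "question": "When you speak this language, how stable is your accent over time?",
        "poles": ("stable", "unstable"),
    },
    "perception": {
        "question": "How do others (especially native speakers) perceive your accent?",
        "poles": ("convincing", "accented"),
    },
    "register": {
        "question": "Are you speaking a variety used when regional locals talk to each other?",
        "poles": ("regional", "koineized"),
    },
}
```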
