Community Strategy: What does Community Health mean to you?

As part of the Common Voice community strategy, we are thinking of ways to support sustainable and healthy communities across Common Voice.

One of these ideas is the development of community health metrics, to give us a broad picture of the health of our language communities. In turn, this would help communities self-organise effectively and help me prioritise support across our growing community.

Data collection and metrics

I have taken inspiration from the CHAOSS open source project by the Linux Foundation. The project also recommends software we could use to collect data for the metrics. There could also be a bi-annual community pulse survey for contributors to self-report feedback anonymously. Some of these metrics could be shared on the Community dashboard [we hope to fix this when we have hired our two engineers].

I would love to hear your views and suggestions regarding the development of these metrics.

Please share your thoughts on this topic; feel free to use the questions below as prompts.

  • What does community health mean to you?
  • Would community health metrics be valuable to your communities?
  • How would you like to be involved in the development of the metrics?
  • How comfortable are you with us possibly using tools such as the software CHAOSS recommends?

Here are some initial thoughts, I don’t know to what extent any of them are relevant:

  • Consistent contributions
  • Contributions from a diverse range of people (gender, age, dialect, …)
  • Ability for communities to self-organise and make relevant decisions locally
  • Ability for communities to have a say in issues that have an effect on everyone
  • Everyone getting a say as to what is recorded

adding from conversation on matrix:

“As I proposed here a year or so ago, from my point of view it’s not clear enough how to keep measures healthy: number of sentences for X recording hours, max recordings per user, etc. That’s why in some languages people are making new recordings when they should be working on getting new sentences, a few recorders make too many recordings, etc. Some health indicators on each dataset (apart from age, sex and accent) would probably make people more conscious about what is needed most at any given moment; it’s not 5,000 sentences and a few people recording all day, as some teams do, especially in new languages.”


I have been consciously following the guidelines for building a balanced database; I’m working with the Abkhazian language (screenshot attached).
But once the language takes off, I would need some constraints to be enforced, such as:

  1. Limit recordings and validations per person (225 recordings, 450 validations).
  2. Limit each sentence to one recording.
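To make the first suggestion concrete, here is a minimal sketch of how such per-user limits might be checked server-side. The limit values come from the suggestion above; the function and field names are my own invention, not part of the actual Common Voice codebase:

```python
# Hypothetical enforcement of the suggested per-user limits
# (225 recordings, 450 validations).
RECORDING_LIMIT = 225
VALIDATION_LIMIT = 450

def may_contribute(user_stats: dict, action: str) -> bool:
    """Return True if the user is still under the limit for this action.

    `user_stats` is assumed to hold per-language lifetime counts,
    e.g. {"recordings": 120, "validations": 300}.
    """
    if action == "record":
        return user_stats.get("recordings", 0) < RECORDING_LIMIT
    if action == "validate":
        return user_stats.get("validations", 0) < VALIDATION_LIMIT
    raise ValueError(f"unknown action: {action!r}")
```

Whether the limits should be hard cut-offs or soft throttles is debated further down the thread.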

Another important piece is a complete overhaul of the dashboard and the rewarding system in Common Voice. I wouldn’t want contributors to start competing on the leaderboard: on the one hand we tell people they should only contribute 15 minutes, while on the other hand CV implements a rewarding system and competition that promotes the opposite!

On the dashboard, the metrics and ideas that I would be interested in:

  1. The number of sentences left in the pool for recording.
  2. Implement a rewarding system that counts recordings per invitation.
    i.e. as a contributor, once I reach 225 recordings I unlock an invitation feature, qualifying me to invite other people. Their recordings count towards both their accounts and mine, so the more they record, the more credited recordings I get.
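The invitation idea above could be sketched like this; the data model and names are entirely hypothetical, just to illustrate the crediting rule:

```python
# Hypothetical invitation rewards: invitations unlock at 225 recordings,
# and an invitee's recordings are counted for both the invitee and the
# inviter, as described above.
INVITE_THRESHOLD = 225

def can_invite(account: dict) -> bool:
    """The invite feature unlocks once a contributor reaches the threshold."""
    return account.get("recordings", 0) >= INVITE_THRESHOLD

def credit_recording(accounts: dict, contributor: str) -> None:
    """Count a new recording for the contributor and their inviter, if any."""
    account = accounts[contributor]
    account["recordings"] = account.get("recordings", 0) + 1
    inviter = account.get("invited_by")
    if inviter is not None:
        accounts[inviter]["credited"] = accounts[inviter].get("credited", 0) + 1
```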


Limit recordings and validations per person (225 recordings, 450 validations).

I don’t see the point of putting limits on validation. Especially since this is clearly an area that will always lack contributors.

I think it’s important, in order to eliminate/minimize bias.

I have been sitting with people in recording and validation sessions.
I have noticed that in some instances there are disagreements between them on what is valid and what is not.
People hear differently.

Sorry for replying late due to a long vacation; I’m merely trying to catch up. Anyway, I’d like to share what’s on my mind. This also includes some UX suggestions.

I must say that I agree with @ftyers, @daniel.abzakh and the matrix post on all points, except perhaps one: limiting recording (if it means temporarily disabling). I said otherwise on the other thread, but after some thought I changed my mind; in the end, quality recordings are what count. You may throttle new recordings from one person if he/she is recording too much (e.g. 100 sentences/day) and promote others, as I try to explain below.

If we think about this together with the Recognition, Rewards and Contribution Pathways post, I would say “promote what is missing” on the dashboard, such as:

  • Your language has xxx sentences waiting to be verified.
  • Your language is out of new sentences, go find new ones, [link]here is how[/link].
  • You are getting low on [or out of] recordings, go record some.
  • There are xxx recordings waiting to be verified, go listen to them.
  • A few people are doing most of the recording, please consider inviting others.
  • There are very few female contributors recording, can you invite some?

A casual user only gets to the dashboard. There are no visible pointers to (sub-)Discourse, and even the Sentence Collector is buried somewhere deep (and not multilingual). I could only find it after reading some Discourse posts (at the very beginning of my journey). When you click the Contribute link on the dashboard, it shows you the “speak” tab first; why not switch it to “listen”?

As mentioned in the Recognition post above, the parameters in recognition/reward calculations can be adaptive to what is needed. If each action is worth 1 point at the start, you may drop recording points to 0.5 and increase listening points to 1.5 (calculated from queue length, for example) if the listening queue keeps growing. You may even give extra/promotion points (a multiplier, perhaps) if a language achieves an increase of xx% between corpus versions. Communities might decide to run a limited-time campaign and give extra points to the missing part. You could even promote/demote people based on recording quality: every bad (rejected) recording drops some points, continuous good recordings add more…
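As a rough illustration of the adaptive weighting described above (the target queue size and the formula are my own assumptions, not an existing Common Voice mechanism):

```python
def action_points(listen_queue: int, target_queue: int = 1000) -> dict:
    """Derive per-action point weights from the listening-queue length.

    Each action starts at 1 point; as the queue of unvalidated clips
    grows past the (assumed) target, listening becomes worth more and
    recording less, nudging contributors toward what is needed.
    """
    pressure = min(listen_queue / target_queue, 2.0)  # cap the adjustment
    shift = 0.25 * pressure                           # at most +/- 0.5
    return {"record": round(1.0 - shift, 2), "listen": round(1.0 + shift, 2)}
```

With these assumed numbers, an empty queue keeps both actions at 1 point, while a queue at twice the target reaches the 0.5 / 1.5 split mentioned above.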

Very broad range of opportunities here. Marketing people would suggest many more…

If you give coins/points etc., you should probably remove the current leaderboards and give that information in people’s profiles. Perhaps giving “x recordings rejected” type info would mean a lot here. You might like to put a “contributors with top points” list back instead… People have a tendency to want to be at the top, and perhaps such a calculation would mean more for overall quality.

But please do not remove points gained from the four main tasks. Not everyone can open a stand at a fair or organize a campaign with printed materials, or has good communication and/or organizational skills. The project needs everyone, and time is the most precious thing one has, and they are giving it. You may lose some people. The emphasis should be on creating a good thing and doing it together.

In addition: Common Voice should provide some e-tools (like mass-mailing the contributors of language X) for promoting languages/communities; cold-starting and expanding a community is tough. Adding the following to the dashboard should be easy enough:

  • Your language has a sub-Discourse; you have xxx unread posts waiting for you.

The designs for promotional material are a good start though, thank you for those.


@bozden Could you explain your logic on why you disagree on limiting recordings?

@daniel.abzakh, I’m no expert in ML; I only know a bit from the courses I took. But using some common sense and expertise from my long life in civil society, I can say the following. Please correct me if I’m wrong; most of these are probably points you are already aware of, sorry for that…

First of all, please understand that I’m with you on limiting recordings if absolutely necessary (as in “throttling”, but not as in “disabling”) for those users, if only a couple out of a thousand volunteers are doing most of the recordings.

  • When you prevent a volunteer from their choice of voluntary action, you may lose him/her. This is human nature.
  • It would be more preferable to (positively) guide them to what is needed, as I explained above. Only after that, if their action disrupts/endangers the main task, first warn, then throttle them.
  • As specified everywhere, this is the only large-scale dataset under CC-0, and it can be used for a variety of purposes. This is not limited to currently available methods; we should always think of methods that will be “invented” in the future. For example, one could implement a system which ONLY responds to its master’s commands, such as security systems which must recognize their master. Such an example would need many recordings from the same person, and this data could be used to test it.
  • The dataset is raw, researchers can select whatever their application needs.
  • This is a long-term project. Each language starts slow and gradually speeds up. After a while, your enthusiastic recorder’s voice will probably be a small portion of a hopefully very large dataset. Think of a start-up language: one person takes responsibility, enters 5,000 sentences and records them solely to get into the list of supported languages. Years later, that language will have 1.8 M sentences & 2,000 hours.
  • In my opinion, no one would be [nerd|free|energetic|etc] enough to keep it up at the same speed for years. People will come and go and/or lose interest in the project.

Their voice will be heard indefinitely, because this is a time capsule. Some may want their voices to be heard, some may want their dialect/tongue to live. Probably this is why they are so persistent… Perhaps CV can prepare a questionnaire and ask everybody the following question: “Why are you volunteering?”…

BTW, on another thread I suggested throttling people if their recordings reach X times the average for that language… An even better application would be to throttle/disable people whose rejected-recording ratio exceeds a threshold, to prevent spammers…
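Both throttling rules mentioned here could be combined into one check. A minimal sketch, assuming illustrative thresholds (the multiplier and rejection ratio are placeholders, not agreed values):

```python
def should_throttle(user_recordings: int, language_average: float,
                    rejected: int, validated: int,
                    avg_multiple: float = 10.0,
                    max_reject_ratio: float = 0.5) -> bool:
    """Throttle when a user's recording count exceeds X times the
    language average, or when their rejected-recording ratio exceeds
    a threshold (a possible spammer)."""
    if language_average > 0 and user_recordings > avg_multiple * language_average:
        return True
    if validated > 0 and rejected / validated > max_reject_ratio:
        return True
    return False
```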

Edit: Typos and leftovers.

If the science says you should do it this way, then probably you should.

This is an indication of a problem, find out why and how to fix it.

There are many open source projects that need volunteers and help, we could steer their attention to those and utilize their energy more effectively where it’s really needed.

This is not enough of an excuse to allow mass recordings from one person.

Polluting the dataset adds additional overhead for researchers.

You are suggesting their work was a waste of time in the first place.

You could record e-books and open source them, this project has a different purpose.

In addition, negative impacts of allowing unlimited recordings per person:

  1. It is an addictive behavior that shouldn’t be encouraged.
  2. It delivers low quality recordings.
  3. For low resource languages, everything counts, and should be optimized, including how many sentences you can record.

Thank you for your comments! I’ll only answer one of them, as it seems to have been misunderstood.

I’m saying that any negative effects they are causing will be diminished.
