Polish dataset download

To reinforce my previous message: sentences extracted from Wikipedia are never imported into the Sentence Collector, so please don’t do this.

The process is to contribute to the rulesets and blacklist, and then the team will run the extraction and incorporate the results into the Common Voice repo.


Ahh, sorry, I didn’t understand your idea. Do you already have rulesets and blacklists for scraping the Polish Wikipedia database?

It seems that at the beginning I didn’t understand the model for the wiki scraper and the Collector :wink:. I’ll try to get another source of Polish datasets to review.

No, we don’t; we need technical Polish-speaking contributors to help :slight_smile:

:slight_smile: If you want me to prepare the rules and blacklist, just let me know.


Let’s do it together; see here: https://discourse.mozilla.org/t/coordination-of-input-for-polish-language-wiki-scrapper/53380

Hey, is there any hard limit (10 000) in the Sentence Collector? Because I’m adding and also reviewing new sentences, while the counters on my profile don’t change:

Profile: aiteam

  • 10000 sentences added
  • 10000 sentences reviewed
  • … across 1 language(s)

Hello @aiteam,
you have made a lot of contributions. Can you please add a better description of the sentences’ origin? “own choosed and edited” may not indicate that the sentences are copyright-free, in my opinion.
Some sentences seem broken, which makes them hard to read and process, like this one: “Wyskoczy do klubu tury…iej wielkości gwiazd.”
Also, I am not sure if multiple sentences in one sentence string are OK for the dataset (like this one: “Wytrzeźwiej. Odpocznij. I przyjdź do mnie.”).

It is very nice that you have made so many contributions, but keep in mind we also aim for a certain quality of the dataset ;). So if you can somehow pre-process the sentences, it would reduce review time and the percentage of rejected sentences.

If it is possible to refine the sentences you have uploaded with a script, maybe it would be good to actually remove them from the Sentence Collector for now and re-add them after refinement?
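A pre-processing pass of the kind suggested above could be sketched roughly as follows (a minimal illustration; the function name and the exact heuristics are my own assumptions, not part of the Sentence Collector):

```python
import re

def looks_clean(sentence):
    """Rough pre-upload check: reject strings that look truncated
    (ellipsis) or that contain several sentences glued into one string."""
    if "…" in sentence or "..." in sentence:
        return False
    # More than one sentence-final punctuation mark followed by a space
    # (or the end of the string) suggests multiple sentences in one entry.
    if len(re.findall(r"[.!?](?:\s|$)", sentence)) > 1:
        return False
    return True

print(looks_clean("Wytrzeźwiej. Odpocznij. I przyjdź do mnie."))  # False
print(looks_clean("Przyjdź do mnie."))                            # True
```

A real filter would need more rules (digits, abbreviations, length limits), but even a check this small would catch both of the examples quoted above.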

Hey Kuba,

I’ve built a semi-automatic solution to extract sentences (using NLTK) from old movie subtitles. The process is not fully automated, and I try to review the sentences before I add them to the Collector (sometimes I may miss a low-quality sentence). I’ll try to modify the process so that it only takes sentences whose words are all in a Polish dictionary (BTW, my blacklist is still growing, so the quality of each batch should improve with every iteration).

Do I need to write the movie title in the description of the sentences?

And what about the counters on my profile: is it safe to add new sentences?

BTW, I’ve reviewed my code: I had the English pickle on the tokenizer :wink:, so new batches should be much better now (the process is now armed with the Polish pickle).
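The dictionary filter described above could look roughly like this (a minimal sketch; the word list, blacklist, and function name are hypothetical placeholders, and a real run would load a full Polish dictionary; the tokenization step itself would use NLTK’s Polish punkt model, e.g. `sent_tokenize(text, language='polish')` rather than the English default):

```python
import re

def keep_sentence(sentence, polish_words, blacklist):
    """Keep a sentence only if every word is in the Polish dictionary
    and none of its words is on the blacklist."""
    # In Python 3, \w already matches Polish letters such as ą, ś, ź.
    words = re.findall(r"\w+", sentence.lower())
    if not words:
        return False
    if any(w in blacklist for w in words):
        return False
    return all(w in polish_words for w in words)

# Tiny illustrative word list; a real pipeline would load a full dictionary.
polish_words = {"wytrzeźwiej", "odpocznij", "i", "przyjdź", "do", "mnie"}
blacklist = {"ok"}

print(keep_sentence("Przyjdź do mnie.", polish_words, blacklist))    # True
print(keep_sentence("Przyjdź do me, ok?", polish_words, blacklist))  # False
```

Requiring every word to be in the dictionary is deliberately strict: it rejects some valid sentences (proper names, rare inflections), but it keeps OCR garbage and untranslated fragments out of the batch.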


No, this is just a display problem on the website; see here and here. You can add as many sentences as you want.

Hey Stefan

thanks for the answer. I’ve got another issue: why doesn’t the total number of sentences change either? Is it the same issue as with the counters?

  • 25097 total sentences.

Best regards

Hey, how old are the movies we are talking about? Please note that if a movie was released any later than 1950, or probably even 1940, the scripts, and in effect the subtitles, are probably still protected by copyright, and thus unsuitable for Common Voice.

Hey, most of them are from the ’90s, but the group who made the translations doesn’t exist anymore.

BTW, if you have ideas about where I can find sources for sentences, just let me know :wink:

You may consider manually scraping some material from https://pl.wikisource.org/wiki/Wikiźródła:Strona_główna; they have a fair collection of texts that are usually guaranteed to be CC0.

I had tried to add a sentence from another profile, and the number of added sentences didn’t increase.

Ruben told us not to add sentences to the Collector from Wikipedia …

Wikisource is a different project by the Wikimedia Foundation and serves basically only as a database of public-domain works. The restriction on Wikipedia applies only to the main project itself (and possibly some of the other projects), because the content there is originally created by the Wikipedia contributors, who retain the rights to their content under the CC-BY-SA license or similar (and thus it is not CC0).

Thx for the hint :wink:!!! I’m going to prepare a batch from Wikisource then :slight_smile: