Using the Europarl dataset: sentences from speeches in the European Parliament

The new swiping mode of the sentence collector makes the review process much quicker, and it would filter out the worst sentences. I would be willing to review maybe 10,000 sentences in German (I already reviewed that many for the Esperanto sentence collection). We would need at least another 19 people doing the same to import the complete dataset for one language, and likely more, since sentences need more than two votes when people disagree.

That being said, I recommend that everyone download the dataset and search for any words, topics, and phrases that come to mind that could be problematic. As far as I can see, there are very few really problematic sentences.

In the Europarl dataset, most controversial opinions are part of a longer sentence like “Mister President, I have to say that …”, which puts the opinion in a context that makes it easier to read for someone who doesn’t share it. Some people will still complain about certain sentences, since they are all highly political, but I could live with that.

Happy to hear that :slight_smile:

I didn’t review it, but if most of them follow this format or something similar, I’m totally fine with a full import and relying on the reporting function.

Are there any notable reactions to controversial sentences that exist in the dataset right now? Did you guys get any angry emails yet?

Most sentences are only recorded by one person, so the impact of a bad sentence is likely not very high. One could also delete certain topics with a blacklist as we go, based on what we find over time.
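
Such a blacklist could be a simple keyword filter over the sentence file. Here is a minimal sketch in Python; the blacklist entries and file names are made-up placeholders, not an agreed-upon list:

```python
# Minimal topic blacklist: drop any sentence containing a listed keyword.
# BLACKLIST and the file names are illustrative placeholders.
BLACKLIST = {"keyword1", "keyword2"}

with open("europarl_sentences.txt", encoding="utf-8") as src, \
        open("filtered_sentences.txt", "w", encoding="utf-8") as dst:
    for sentence in src:
        words = {w.strip(".,;:!?\"'").lower() for w in sentence.split()}
        if not words & BLACKLIST:
            dst.write(sentence)
```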

Here is a sample file with 300 random English sentences from fr-en; the only thing I changed before creating it was deleting sentences longer than 14 words:
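
For anyone who wants to reproduce a sample like this for their own language pair, a minimal sketch (the file name follows the Europarl v7 naming scheme, adjust it to your download):

```python
import random

# Keep sentences of at most 14 words, then draw 300 at random.
with open("europarl-v7.fr-en.en", encoding="utf-8") as f:
    short = [s for s in f if len(s.split()) <= 14]

random.seed(42)  # fixed seed so the sample is reproducible
for sentence in random.sample(short, 300):
    print(sentence, end="")
```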

Interesting, I will take a look at the Dutch sentences.
Concerning validation: swiping is nice, but you need a touch screen. On a desktop/laptop I’d like to have more sentences per page, say 10 or 20, with a “Check all” option to validate, instead of clicking them one by one, which is tedious.


I started a thread in the German section of this forum about this issue, to discuss what the German community wants:

Concerning the sentence collector:

True, I would love to have this too.
You can use the Selenium IDE browser plugin to automate smaller work steps for now; it lets you record and replay clicks on a website. For example, once all good sentences are successfully reviewed and only the bad sentences with one downvote are left, you can use it to automate the second downvote. But be careful with it!

I know; I have used iMacros to automate tasks in Firefox, and it works very well. In this case I would first need to read (only) five sentences before hitting a macro button to validate them, if they are correct. I read faster than I can hit 5, 10, or 20 buttons, so it would be nice to have more sentences on one page and a single button to validate them all.


I would ask everyone to avoid any kind of automation tool for the sentence collector. The whole point of the tool is to enforce human review of each sentence to ensure quality; otherwise we will end up with a bad corpus for voice collection, delaying the whole process.

If you have a big public-domain corpus (>500K sentences) coming from a trusted source, please reach out to me independently and we can figure out a QA process different from the sentence collector. But note that we currently don’t have the team bandwidth to run a process for smaller corpora that ensures the high quality we are looking for.

Thanks!

Also, as I commented over Slack, we probably want to remove the 60K Dutch sentences from the collector and see if we can follow a QA process for all languages that doesn’t involve individual review of text from large, trusted sources.

I created a pull request for the German corpus:

Anyone who wants to help with the review process is welcome :slight_smile:

This corpus has around 379K sentences after cleanup. Am I moving too fast here? Would you prefer another process?

Yes, let’s have a separate process here; this is too complex to handle in a single PR.

I’ll reach out directly to explore which options we have for this corpus.


Sounds good. I personally reviewed a thousand sentences out of the 60K I added for Dutch (so more than 1.5% of them) and didn’t find a single sentence that would be bad to have in the corpus. The two worst sentences I found were one where a space was missing (“overPakistan” instead of “over Pakistan”) and another that said, without context, “nuclear plants are bombs waiting to explode”; honestly, I don’t think anybody would be truly upset if either sentence ended up in the dataset. So I validated all the sentences myself, but was planning to let a second review happen; if there’s a different process, I’m fine with that as well. I can also provide longer sentences from that dataset, similar to what was done for German, if the German sentences are deemed OK (they should all be translations of each other in the end).

@FremyCompany I’m working with @stergro on the German ones, to avoid overloading the sentence collector.

I think we should take a unified approach for this corpus and apply it to all languages. How many sentences were available for Dutch?

About the same amount as for German, I’d say. There are only 60K in the sentence collector because I applied a strict set of selection rules, while the German sentences were selected more generously from the dataset. If the German sentences are deemed OK, I can apply the same filtering rules and get approximately the same number of sentences, since the sentences are translations of each other.

For German we are talking about almost 500K sentences. I would prefer that we do something similar for Dutch outside the sentence collector.

My advice for Dutch would be:

  1. Make sure/help ensure that at least the Wikipedia process is finished, to quickly get a lot of diverse sentences. Talk with @Fjoerfoks, who is leading this effort.
  2. Wait until we figure out with @stergro how to handle the Europarl dataset, so that we can run a similar QA process for other languages.

Cheers.

Hey @nukeador, what are the next steps now? We talked about a possible process with the Excel review sheets, and I think it sounds like quite a bit of work, but doable. Reviewing a few thousand randomly sampled sentences for statistically sound results is still better than reviewing 500K sentences. Is it okay if I just prepare such a sheet for the German corpus and we see how it goes? Maybe you can explain the process to the group again so that everyone knows what we are talking about.
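
For intuition on why a few thousand sentences are enough: under the usual normal-approximation sample-size formula, the required sample depends on the desired margin of error, not on the corpus size. A rough sketch (the confidence level and margins are assumptions, not numbers agreed in this thread):

```python
import math

# Sample size for estimating an error rate, normal approximation:
# n = z^2 * p * (1 - p) / e^2, with worst-case variance at p = 0.5.
def sample_size(z=1.96, p=0.5, e=0.02):
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(sample_size(e=0.02))   # 2401 sentences for a +/-2% margin
print(sample_size(e=0.015))  # 4269 sentences for a +/-1.5% margin
```

Either way, the number stays in the low thousands whether the corpus has 60K or 500K sentences.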

Yes, let’s kick off the process we talked about in private and see how it goes. Once that’s done, we can share the results with everyone and see how to do it for other languages.

Thanks!

Alright, I will start reviewing the sentences. Everyone who wants to help can find the link to the sheet here:

Thanks to the great help of @benekuehn and other helpers from the German forum, the 4,000 sentences are now reviewed: 94.25% are fine, 2.10% have spelling errors (most of them caused by the German spelling reform of 1996), and another 3.05% are hard to pronounce, mainly names and political terms.
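
As a sanity check on how precise that 94.25% figure is, here is a quick confidence-interval calculation, assuming the 4,000 sentences were drawn uniformly at random from the corpus:

```python
import math

# 95% confidence interval (normal approximation) for the share of
# "fine" sentences, given 4,000 uniformly sampled reviews.
n, p = 4000, 0.9425
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.2%} +/- {margin:.2%}")  # 94.25% +/- 0.72%
```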

Hey all,
What is the status of this effort at this point?