I used the TIMIT dataset to test the pre-trained model, and the WER is about 27%. I want to know what training set the pre-trained model was trained on, so I can try to improve my strategy for selecting a training set. Can anyone help me?
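For reference, the WER figure quoted here is conventionally computed as word-level edit distance divided by reference length. A minimal sketch (not the scorer actually used for this test, which isn't stated):

```python
# Hypothetical WER computation: Levenshtein distance over word tokens,
# normalized by the number of words in the reference transcript.

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words
```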
The pre-trained model was trained on Fisher, Switchboard, and Librivox training data sets.
So the training set and validation set were built by combining these three sets?
The training set was the Fisher, Switchboard, and Librivox training data sets.
The validation set was the clean Librivox validation data set.
The test set was the clean Librivox test data set.
Thank you. And is the language model the 4-gram language model with a 30,000-word vocabulary trained on the Fisher and Switchboard transcriptions, as the paper says?
No. We didn’t try to exactly reproduce the paper’s results.
We created a KenLM language model based on the Fisher, Switchboard, and Librivox training data sets, as well as part of Wikipedia.
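To make the "4-gram" terminology concrete: an n-gram language model is estimated from counts of n-word sequences in the training text. The sketch below (purely illustrative, not the KenLM pipeline itself) shows the raw 4-gram counting step that a tool like KenLM builds its smoothed probability estimates on:

```python
# Hypothetical sketch: collect 4-gram counts from transcription text.
# KenLM does this at scale and adds smoothing; this only shows the counting.
from collections import Counter

def ngram_counts(text: str, n: int = 4) -> Counter:
    """Count all contiguous n-word sequences in the given text."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

counts = ngram_counts("the quick brown fox jumps over the lazy dog")
print(counts[("the", "quick", "brown", "fox")])  # 1
```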
Is it possible to train this model further using other voice datasets?
I don’t think that’s possible until TensorFlow checkpoints are also published; a frozen out_graph.pb cannot be used for further training, AFAIK.
Does the Common Voice data set include other data sets like Librivox, Switchboard, or Fisher?
No. Common Voice, Librivox, Switchboard, and Fisher are separate, distinct data sets.
Thank you. And may I ask what the WER is on the clean Librivox test data set?
And did you use the 4-gram model to train the language model?
The WER is 6.0 percent on the clean Librivox test data set.
Not sure what you’re asking here.
Do you mean “Is the language model a 4-gram language model?”
Yes, that’s what I meant to ask.
Where can I download Fisher and Switchboard datasets?
So, the Common Voice data is not factored into the pre-trained model?