yk98
Hi,
I want to use the pre-trained DeepSpeech 0.7.0 model as is. For my use case, I saw that adding an external scorer helps.
There are several hyperparameters that affect the scorer, like "lm_alpha" and "lm_beta", along with other parameters like "beam_width".
How will changing these parameters affect the transcription output?
Also, is the scorer required to train the acoustic model?
Thanks!
This should answer the hyperparameter questions: Sequence Modeling with CTC
No, it’s only used for the test set at the end of training, which you can skip by simply not providing a --test_files parameter.
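To make it concrete, here is a minimal sketch of where those parameters actually come into play: they only influence beam-search decoding at inference time, via the 0.7 Python package. The paths are placeholders and the alpha/beta values are just illustrative, not tuned.

```python
import wave

import numpy as np
from deepspeech import Model

# Placeholder paths -- substitute your own model, scorer and audio files.
MODEL_PATH = "deepspeech-0.7.0-models.pbmm"
SCORER_PATH = "kenlm.scorer"
AUDIO_PATH = "audio.wav"  # 16 kHz, 16-bit, mono

ds = Model(MODEL_PATH)

# The scorer and its weights only affect decoding, not the acoustic model.
ds.enableExternalScorer(SCORER_PATH)
ds.setScorerAlphaBeta(0.93, 1.18)  # lm_alpha, lm_beta (illustrative values)
ds.setBeamWidth(500)               # wider beam = slower, sometimes more accurate

with wave.open(AUDIO_PATH, "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(ds.stt(audio))
```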
There’s a script in the repo, lm_optimizer.py, that can help you figure out the optimal alpha and beta parameters for your custom language model.
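The idea behind that script can be sketched in a few lines: re-run decoding on held-out audio with different alpha/beta values and keep the pair that makes the fewest errors. The rough manual version below is only an illustration (paths, candidate values and the error measure are made up); the real script does a proper WER computation and a smarter parameter search.

```python
import itertools
import wave

import numpy as np
from deepspeech import Model

MODEL_PATH = "deepspeech-0.7.0-models.pbmm"   # placeholder paths
SCORER_PATH = "kenlm.scorer"
DEV_CLIPS = [("clip1.wav", "expected transcript one"),
             ("clip2.wav", "expected transcript two")]

def load_audio(path):
    with wave.open(path, "rb") as wav:
        return np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

def word_errors(hyp, ref):
    # Crude stand-in for WER: count mismatched word positions.
    return sum(h != r for h, r in
               itertools.zip_longest(hyp.split(), ref.split()))

ds = Model(MODEL_PATH)
ds.enableExternalScorer(SCORER_PATH)

best = None
for alpha, beta in itertools.product([0.5, 0.75, 1.0], [1.0, 1.5, 2.0]):
    ds.setScorerAlphaBeta(alpha, beta)
    errors = sum(word_errors(ds.stt(load_audio(path)), ref)
                 for path, ref in DEV_CLIPS)
    if best is None or errors < best[0]:
        best = (errors, alpha, beta)

print("best alpha/beta:", best[1], best[2])
```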
yk98
Are lm_alpha and lm_beta used for acoustic model training?