CPU parallelism during training

Hi everyone, I have an HP ProLiant DL380 Gen9 server without a GPU, and I would like to use it to train a model with DeepSpeech. I have read that a GPU is strongly recommended, but I do not have one, so I wanted to understand whether the CPU training path is optimized for parallelism and whether it can use all the available cores during training of a DeepSpeech model, to reduce the time spent waiting on computation.

Didn’t you ask a very similar question last year? Training on a CPU is far too slow for any larger amount of data. Think 5 hours on a GPU versus 5 months on a CPU.
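
That said, DeepSpeech is built on TensorFlow, which already spreads work across CPU cores by default, so there is usually nothing to "enable". A minimal sketch of the generic TensorFlow thread-pool knobs, in case you want to tune them yourself (these are plain TensorFlow settings, not DeepSpeech-specific flags, and the thread counts below are illustrative assumptions):

```python
# Sketch: pinning TensorFlow's CPU thread pools to the machine's
# core count. Must run before TensorFlow executes any ops.
import os
import tensorflow as tf

cores = os.cpu_count()  # e.g. the Xeon cores on a DL380 Gen9

# Threads used *within* a single op (e.g. a large matmul).
tf.config.threading.set_intra_op_parallelism_threads(cores)
# Threads used to run independent ops concurrently.
tf.config.threading.set_inter_op_parallelism_threads(2)

print(f"intra-op threads: {cores}")
```

Even with all cores busy, the gap to a GPU remains orders of magnitude, so this only shaves the edges off the wait.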

Try training on Google Colab instead. It offers a free GPU if you can’t afford AWS, Google Cloud, or Microsoft Azure for training.
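
If you do try Colab, it is worth confirming the GPU runtime is actually active (Runtime → Change runtime type → GPU). A minimal check, assuming a TensorFlow-based setup like DeepSpeech's:

```python
# Quick sanity check inside a Colab notebook: list the GPUs
# that TensorFlow can see. An empty list means the notebook
# is still running on the CPU runtime.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
```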

@Pablo Please get in touch with @Mte90 and the other Italian community members; they might be able to help you with hardware access.

We have a server for generating models for the Italian language; reach the Italian community on Telegram via @mozitabot.

Our model is available at https://github.com/MozillaItalia/DeepSpeech-Italian-Model, but we are still working on creating new datasets and other things.
