Hello everyone. I am trying to test the DeepSpeech pretrained models.
When I install the deepspeech-gpu package and load the model (deepspeech-0.7.1-models.pbmm), it takes all of my GPU RAM.
When I install the plain deepspeech package (CPU only) and load the same model, it takes only about 100 MB of RAM on my machine.
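Here is roughly what I run in both cases (a minimal sketch; only the package I pip-install changes, the Python code stays the same, and I assume the .pbmm file sits in the working directory):

```python
from deepspeech import Model  # same import for deepspeech and deepspeech-gpu

# Load the pretrained acoustic model; this is the step where I watch RAM / GPU RAM usage
ds = Model("deepspeech-0.7.1-models.pbmm")

# Just a sanity check that the model actually loaded
print(ds.sampleRate())
```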
Can anyone explain why it takes less RAM without the GPU? Is it heavily optimized? I don't understand how it can use only ~100 MB of RAM when the model file itself is about 180 MB.