How to change batch size?

I have a GPU with 6 GB of memory and received an out-of-memory error. I reduced the batch size in the config file with:

import json
from utils.generic_utils import load_config

# Load the repo's config, override the dataset path, output path,
# and batch sizes, then write the file back in place.
CONFIG = load_config('config.json')
CONFIG['datasets'][0]['path'] = '../LJSpeech-1.1/'
CONFIG['output_path'] = '../'
CONFIG['batch_size'] = 8
CONFIG['eval_batch_size'] = 8
with open('config.json', 'w') as fp:
    json.dump(CONFIG, fp)
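(The file is rewritten in place, so training has to be launched after this cell runs; a process that already loaded the old config.json keeps the old values.)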

but I still get the memory error:

RuntimeError: CUDA out of memory. Tried to allocate 106.00 MiB (GPU 0; 5.93 GiB total capacity; 3.20 GiB already allocated; 25.44 MiB free; 3.47 GiB reserved in total by PyTorch)
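If it is useful, the allocator state at the point of failure can be printed with something like this (a rough sketch; torch.cuda.memory_reserved needs PyTorch >= 1.4, older versions call it memory_cached):

import torch

# Rough sketch: show how much CUDA memory PyTorch has actually handed out
# to tensors vs. how much its caching allocator has reserved from the GPU.
def print_gpu_memory(device=0):
    allocated = torch.cuda.memory_allocated(device) / 1024**2
    reserved = torch.cuda.memory_reserved(device) / 1024**2
    print(f"allocated: {allocated:.0f} MiB, reserved: {reserved:.0f} MiB")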

It seems you’ve overwritten the value in code instead of using the value from the config file.

I doubt that reaching alignment with a batch size of 8 is possible.

The page here says that about 9.5 GB of GPU memory is needed for standard training (batch_size 32), but that training with 4 GB of GPU memory is possible with a batch_size of 8-12.

I have 6 GB of GPU memory, so I think I have enough for a batch_size of 16.

What is the standard way to change the training batch_size?

config.json in the repo root directory is the right place. I’m just confused by the code snippet in your first post, which contains code, not config.
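One way to double-check that the file, and not something in code, is what training actually sees: load config.json the same way your snippet does and print the values (a sketch reusing the load_config helper from your first post):

from utils.generic_utils import load_config

# Load config.json the way the training code does and print the values
# that drive memory use; they should match what you wrote to the file.
CONFIG = load_config('config.json')
print(CONFIG['batch_size'], CONFIG['eval_batch_size'])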

I am following the instructions given here.

The config file is re-created, and my code sets the new batch_size. I can verify that the batch_size in the generated config file has changed, but I still get the memory error mentioned above.

A Jupyter notebook is not needed.
Try starting with the README of the Mozilla TTS repo at https://github.com/mozilla/tts
