OSError: [Errno 12] Cannot allocate memory

I installed tts on my server to synthesize with CPU only.

It has 2 GB RAM, which is not sufficient, as I get a "Cannot allocate memory" (CaM) error when trying to synthesize.

I tried

sudo dd if=/dev/zero of=/swapfile bs=1MiB count=$((4*1024)) status=progress && sync && sudo chmod 0600 /swapfile && sudo mkswap /swapfile

Either it cannot use that swap space, or it's not enough.
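To check whether the swap is actually active (as far as I understand, mkswap only formats the file and it is swapon that enables it, a step the snippet above never runs), these commands need no root:

```shell
# mkswap only formats /swapfile; until `sudo swapon /swapfile` is run,
# the kernel will not touch it. Quick checks:
grep -E 'SwapTotal|SwapFree' /proc/meminfo   # "0 kB" on both lines means no swap is enabled
swapon --show 2>/dev/null || true            # empty output likewise means no active swap
```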

How did you circumvent this error?

Here's the complete output:

2021-01-31 21:53:24.463647: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2021-01-31 21:53:24.463861: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
 > tts_models/en/ljspeech/speedy-speech-wn is already downloaded.
 > vocoder_models/universal/libri-tts/wavegrad is already downloaded.
 > Setting up Audio Processor...
 | > sample_rate:22050
 | > resample:False
 | > num_mels:80
 | > min_level_db:-100
 | > frame_shift_ms:None
 | > frame_length_ms:None
 | > ref_level_db:20
 | > fft_size:1024
 | > power:1.5
 | > preemphasis:0.0
 | > griffin_lim_iters:60
 | > signal_norm:True
 | > symmetric_norm:True
 | > mel_fmin:50.0
 | > mel_fmax:7600.0
 | > spec_gain:1.0
 | > stft_pad_mode:reflect
 | > max_norm:4.0
 | > clip_norm:True
 | > do_trim_silence:True
 | > trim_db:60
 | > do_sound_norm:False
 | > stats_path:.../scale_stats.npy
 | > hop_length:256
 | > win_length:1024
 > Using model: speedy_speech
 > Setting up Audio Processor...
 | > sample_rate:24000
 | > resample:False
 | > num_mels:80
 | > min_level_db:-100
 | > frame_shift_ms:None
 | > frame_length_ms:None
 | > ref_level_db:0
 | > fft_size:1024
 | > power:None
 | > preemphasis:0.0
 | > griffin_lim_iters:None
 | > signal_norm:True
 | > symmetric_norm:True
 | > mel_fmin:50.0
 | > mel_fmax:7600.0
 | > spec_gain:1.0
 | > stft_pad_mode:reflect
 | > max_norm:4.0
 | > clip_norm:True
 | > do_trim_silence:True
 | > trim_db:60
 | > do_sound_norm:False
 | > stats_path:.../scale_stats.npy
 | > hop_length:256
 | > win_length:1024
 > Generator Model: wavegrad
 > Text: This is working
 > Text splitted to sentences.
['This is working']
Traceback (most recent call last):
  File "/usr/local/bin/tts", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/dist-packages/TTS/bin/synthesize.py", line 206, in main
    wav = synthesizer.tts(args.text)
  File "/usr/local/lib/python3.6/dist-packages/TTS/utils/synthesizer.py", line 135, in tts
    speaker_embedding=speaker_embedding)
  File "/usr/local/lib/python3.6/dist-packages/TTS/tts/utils/synthesis.py", line 235, in synthesis
    inputs = text_to_seqvec(text, CONFIG)
  File "/usr/local/lib/python3.6/dist-packages/TTS/tts/utils/synthesis.py", line 18, in text_to_seqvec
    add_blank=CONFIG['add_blank'] if 'add_blank' in CONFIG.keys() else False),
  File "/usr/local/lib/python3.6/dist-packages/TTS/tts/utils/text/__init__.py", line 87, in phoneme_to_sequence
    to_phonemes = text2phone(clean_text, language)
  File "/usr/local/lib/python3.6/dist-packages/TTS/tts/utils/text/__init__.py", line 50, in text2phone
    ph = phonemize(text, separator=seperator, strip=False, njobs=1, backend='espeak', language=language, preserve_punctuation=True, language_switch='remove-flags')
  File "/usr/local/lib/python3.6/dist-packages/phonemizer/phonemize.py", line 161, in phonemize
    logger=logger)
  File "/usr/local/lib/python3.6/dist-packages/phonemizer/backend/espeak.py", line 148, in __init__
    preserve_punctuation=preserve_punctuation, logger=logger)
  File "/usr/local/lib/python3.6/dist-packages/phonemizer/backend/base.py", line 48, in __init__
    'initializing backend %s-%s', self.name(), self.version())
  File "/usr/local/lib/python3.6/dist-packages/phonemizer/backend/espeak.py", line 114, in version
    long_version = cls.long_version()
  File "/usr/local/lib/python3.6/dist-packages/phonemizer/backend/espeak.py", line 102, in long_version
    '{} --help'.format(cls.espeak_path()), posix=False)).decode(
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 423, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.6/subprocess.py", line 1295, in _execute_child
    restore_signals, start_new_session, preexec_fn)
OSError: [Errno 12] Cannot allocate memory

Any ideas?

ETA: sorry, I thought you were still trying to train.

The question is: how can I get synthesize.py or server.py to use swap memory?
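One guess, which I have not verified on this setup: the traceback shows the error being raised while subprocess.Popen launches espeak, and on Python 3.6 that means fork()-ing the whole Python process. Under the kernel's default heuristic overcommit, forking a large process can fail with ENOMEM even when free swap exists. Checking and, as a hypothetical workaround, relaxing the overcommit mode:

```shell
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic (default), 1 = always allow, 2 = strict
# hypothetical workaround, run as root -- let fork() succeed despite low free memory:
# sysctl vm.overcommit_memory=1
```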

I can only speak for the Tegra/Jetson/Xavier platforms, but I guess this holds elsewhere as well: GPU/CUDA cannot use virtual/swap memory.

I am using CPU only. I updated my main post with this.