When trying to train on CPU, I got:
Traceback (most recent call last):
  File "TTS/bin/train_glow_tts.py", line 647, in <module>
    main(args)
  File "TTS/bin/train_glow_tts.py", line 558, in main
    epoch)
  File "TTS/bin/train_glow_tts.py", line 190, in train
    text_input, text_lengths, mel_input, mel_lengths, attn_mask, g=speaker_c)
  File "C:\mozillatts\TTS\tts\models\glow_tts.py", line 161, in forward
    z, logdet = self.decoder(y, y_mask, g=g, reverse=False)
  File "C:\envs\project\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\mozillatts\TTS\tts\layers\glow_tts\decoder.py", line 122, in forward
    x, logdet = f(x, x_mask, g=g, reverse=reverse)
  File "C:\envs\project\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\mozillatts\TTS\tts\layers\glow_tts\glow.py", line 200, in forward
    x = self.wn(x, x_mask, g)
  File "C:\envs\project\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\mozillatts\TTS\tts\layers\generic\wavenet.py", line 105, in forward
    n_channels_tensor)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "C:\mozillatts\TTS\tts\layers\generic\wavenet.py", line 8, in <forward op>
    def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
        n_channels_int = n_channels[0]
        in_act = input_a + input_b
                 ~~~~~~~~~~~~~~~~~ <--- HERE
        t_act = torch.tanh(in_act[:, :n_channels_int, :])
        s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 25411584 bytes. Buy new RAM!
This is the first time I've hit this error, and it happened while training on CPU.
I only changed `r` to 6 and used the main Glow model (https://colab.research.google.com/drive/1NC4eQJFvVEqD8L4Rd8CVK25_Z-ypaBHD?usp=sharing).
Is this caused by the change to `r`, or by the Glow model itself?
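For context, this sketch shows essentially the only change I made before this run (a minimal sketch; the config path, the field name `r` from the stock Mozilla TTS config.json, and the previous default value are assumptions on my part):

```python
# Minimal sketch of the single config change made before this run.
# Assumptions: config lives at ./config.json and uses the stock
# Mozilla TTS field name "r" (reduction factor); nothing else was touched.
import json

with open("config.json") as f:
    config = json.load(f)

config["r"] = 6  # previously left at the repo default

with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```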