Hello @carlfm01.
Indeed, they were corrupted audio files. Thank you for your suggestion.
Now I get the following error; I don't know if you've seen it before:
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape [32,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Tacotron_model/inference/decoder/while/CustomDecoderStep/decoder_LSTM/decoder_LSTM/multi_rnn_cell/cell_1/dropout_1/random_uniform/RandomUniform (defined at /home/manuel_garcia02/Tacotron-2/tacotron/models:13)
]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[node Tacotron_model/clip_by_global_norm/mul_38 (defined at /home/manuel_garcia02/Tacotron-2/tacotron/models/tacotron.py:429)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
I already set tacotron_batch_size to 4, but it still runs out of memory.
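One thing worth noting: the tensor named in an OOM message is only the allocation that happened to fail, not necessarily the one consuming the memory. A quick sanity check of the [32,1024] float tensor from the log above (assuming float32, i.e. 4 bytes per element; the helper name below is my own):

```python
# Rough footprint of the tensor named in the OOM error. Assumes float32
# (4 bytes per element); the shape [32,1024] is taken from the log above.
def tensor_bytes(shape, bytes_per_element=4):
    size = 1
    for dim in shape:
        size *= dim
    return size * bytes_per_element

print(tensor_bytes([32, 1024]))  # 131072 bytes, i.e. only 128 KiB
```

So this tensor itself is tiny; the GPU is already nearly full from the rest of the graph when this allocation is attempted. That is why enabling report_tensor_allocations_upon_oom (as the log hints) is useful: it shows which tensors were actually holding the memory at the moment of failure.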