Error while attempting to train WaveRNN

Hi guys,

I am trying to train WaveRNN from this repo https://github.com/erogol/WaveRNN. I am using the default config:

{
"model_name": "libriTTS-360",
"model_description": "Training a universal vocoder.",

"audio":{
    // Audio processing parameters
    "num_mels": 80,         // size of the mel spec frame. 
    "num_freq": 1025,       // number of stft frequency levels. Size of the linear spectogram frame.
    "sample_rate": 16000,   // DATASET-RELATED: wav sample-rate. If different than the original data, it is resampled.
    "frame_length_ms": 50,  // stft window length in ms.
    "frame_shift_ms": 12.5, // stft window hop-lengh in ms.
    "preemphasis": 0.98,    // pre-emphasis to reduce spec noise and make it more structured. If 0.0, no -pre-emphasis.
    "min_level_db": -100,   // normalization range
    "ref_level_db": 20,     // reference level db, theoretically 20db is the sound of air.
    "power": 1.5,           // value to sharpen wav signals after GL algorithm.
    "griffin_lim_iters": 60,// #griffin-lim iterations. 30-60 is a good range. Larger the value, slower the generation.
    // Normalization parameters
    "signal_norm": true,    // normalize the spec values in range [0, 1]
    "symmetric_norm": false, // move normalization to range [-1, 1]
    "max_norm": 1,          // scale normalization to range [-max_norm, max_norm] or [0, max_norm]
    "clip_norm": true,      // clip normalized values into the range.
    "mel_fmin": 0.0,         // minimum freq level for mel-spec. ~50 for male and ~95 for female voices. Tune for dataset!!
    "mel_fmax": 8000.0,        // maximum freq level for mel-spec. Tune for dataset!!
    "do_trim_silence": true  // enable trimming of slience of audio as you load it. LJspeech (false), TWEB (false), Nancy (true)
},

"distributed":{
    "backend": "nccl",
    "url": "tcp:\/\/localhost:54321"
},

"epochs": 10000,
"grad_clip": 500,
"lr": 0.0001,
"warmup_steps": 10,
"batch_size": 64,
"checkpoint_step": 10000,
"print_step": 10,
"num_workers": 4,
"mel_len": 10,
"pad": 2,
"use_aux_net": false,
"use_upsample_net": false,
"upsample_factors": [5, 5, 12],
"mode": "mold",         // mold [string], gauss [string], bits [int]
"mulaw": true,         // apply mulaw if mode is bits

}

I only changed the sample rate to 22050 to match my dataset and set do_trim_silence to false.
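In other words, the relevant audio fields now look like this (everything else is the default from above):

    "sample_rate": 22050,     // changed from 16000 to match my dataset
    "do_trim_silence": false  // silence trimming disabled for my data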

I used this notebook to extract the tts spectrograms: https://github.com/mozilla/TTS/blob/master/notebooks/ExtractTTSpectrogram.ipynb

However, when I try to train WaveRNN I get the following error:
coarse = np.stack(coarse).astype(np.float32)
ValueError: all input arrays must have the same shape
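I guess np.stack only works when every array in the list has the same shape, so something in the batch must end up with mismatched lengths, but I don't know what causes it. This has nothing to do with the actual WaveRNN code, but a minimal reproduction of the error itself looks like this:

import numpy as np

# two "coarse" segments of different lengths, e.g. because one clip is
# shorter than the requested sequence length
a = np.zeros(2200, dtype=np.float32)
b = np.zeros(1800, dtype=np.float32)

coarse = [a, b]
coarse = np.stack(coarse).astype(np.float32)  # ValueError: all input arrays must have the same shape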

I have previously trained a Tacotron 2 model.

Some clips might be shorter than the minimum sequence length WaveRNN requires. You can check this with some simple debugging (see the sketch below). If that is the case, remove those samples from your dataset or filter them out.
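Something along these lines should do it. It is just a sketch: adjust the path to wherever your extracted mels are, and the minimum length to whatever your collate function actually needs (I am guessing mel_len + 2 * pad from the config here, so double-check that against the dataloader code):

import glob
import numpy as np

mel_dir = "path/to/extracted/mels"  # hypothetical path - wherever the notebook wrote the mel .npy files
min_frames = 10 + 2 * 2             # guess: "mel_len" + 2 * "pad" from the config; verify against the collate function

for path in sorted(glob.glob(mel_dir + "/*.npy")):
    mel = np.load(path)
    if mel.shape[-1] < min_frames:
        print("too short:", path, mel.shape[-1], "frames")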

I think I found what my issue was. I was training both Tacotron 2 and WaveRNN from the master branches, and the config files were not adequate. I trained Tacotron 2 with the config from this branch: https://github.com/mozilla/TTS/tree/Tacotron2-iter-260K-824c091.

After 300,000 steps of training WaveRNN I have gotten something that resembles the target sentence, but it is not great. Is this normal, and how many steps should pass until the speech is good? I am trying both mode "mold" and 10 bits. The loss with mold is currently 4.8, and with 10 bits it is 1.2. I don't know if this is normal and I should continue training, or if I should stop and revise everything again.

Here is a sample from Tacotron 2 + WaveRNN after 300,000 steps.
09a1aea2-77d2-11ea-9f6a-44032cb27a32.wav.zip (33.5 KB)