Using custom trained WaveRNN models

Hi,
I have trained a WaveRNN model by extracting a training spectrogram set from a set of recordings with a Tacotron1 model, and then applying the training procedure in https://github.com/erogol/WaveRNN
At the end of this process, I obtained a WaveRNN vocoder model.
My question is: is there any published code sample for using a Tacotron model and a WaveRNN model together to generate audio samples?
I have searched for synthesis scripts in both the Mozilla TTS and WaveRNN repos, but could not find any.
Best Regards.

I have found out where it can be used: it is configurable in the server config script.
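For reference, the WaveRNN-related entries in the server config would look roughly like the sketch below. The key names are an assumption, chosen to mirror the command-line flags mentioned later in this thread; check the server config file in the repo for the exact keys and any additional options.

```json
{
  "wavernn_lib_path": "/path/to/WaveRNN",
  "wavernn_file": "/path/to/wavernn/checkpoint.pth.tar",
  "wavernn_config": "/path/to/wavernn/config.json"
}
```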

Can you post the link to the procedure?

Hi,
After training a WaveRNN model, or when using a pretrained one, you can activate WaveRNN by adding three more command-line arguments to your server command:

python server.py --wavernn_lib_path [wavernn_directory] --wavernn_file [checkpoint_path.pth.tar] --wavernn_config [wavernn_config_path.json] …
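If you want to chain the two models outside the server, here is a minimal sketch of the glue logic. The `inference()` and `generate()` method names, argument lists, and output shapes are assumptions based on typical Mozilla TTS and erogol/WaveRNN usage, so they may need adjusting to the versions you trained with; this is not the official synthesis script.

```python
import torch

# All method names and shapes below are assumptions for illustration;
# the real entry points live in the Mozilla TTS and erogol/WaveRNN repos.

def tts_to_mel(tts_model, text_ids):
    """Run Tacotron inference and return a (num_mels, T) mel spectrogram."""
    with torch.no_grad():
        _, postnet_output, _, _ = tts_model.inference(text_ids)
    return postnet_output[0].T

def mel_to_audio(wavernn_model, mel):
    """Run the WaveRNN vocoder on a mel spectrogram and return a waveform tensor."""
    with torch.no_grad():
        return wavernn_model.generate(mel.unsqueeze(0))

# Usage (both models are assumed to be already loaded from their checkpoints
# and set to eval mode):
# audio = mel_to_audio(wavernn, tts_to_mel(tacotron, text_ids))
```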