Training Russian TTS

Hello, I am trying to train a Russian model and will share my results here over time.
Process: Ruslan dataset (taken from https://github.com/vlomme/Multi-Tacotron-Voice-Cloning) -> compute_statistics.py -> training the TTS model (train_tts.py) -> training the vocoder (MultiBand-MelGAN).
After 84,000 steps on the TTS model and 69,000 steps on the vocoder, I am generating something that vaguely resembles a voice. I am wondering if I am doing things right, and how many steps it took for you to get actual speech. Should I just train more?
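For context, here is a minimal sketch of what a compute_statistics.py-style step produces: per-bin mean and std of the training log-mel spectrograms, saved for mean-var normalization. The use of librosa and the key names in the saved file are my assumptions for illustration, not the script's actual internals:

```python
import glob

import librosa
import numpy as np

# Assumed paths/parameters -- match them to the "audio" section of your config.
WAV_DIR = "/home/dias/Downloads/Ruslan_mono/wavs/"
SR, N_FFT, HOP, N_MELS = 16000, 1024, 256, 80  # win_length == fft_size here

frames = []
for path in glob.glob(WAV_DIR + "*.wav"):
    wav, _ = librosa.load(path, sr=SR)
    mel = librosa.feature.melspectrogram(
        y=wav, sr=SR, n_fft=N_FFT, hop_length=HOP, n_mels=N_MELS)
    frames.append(np.log(np.maximum(mel, 1e-5)))  # log-mel, floored to avoid log(0)

# Note: this keeps everything in memory; for a big dataset use running statistics.
mels = np.concatenate(frames, axis=1)             # shape: (n_mels, total_frames)
stats = {"mel_mean": mels.mean(axis=1), "mel_std": mels.std(axis=1)}
np.save("scale_stats.npy", stats)                 # key names are an assumption
```

Whatever stats_path points to must be computed with exactly the audio settings used for training, and the same file must be shared by the TTS and vocoder configs (as it is below).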
TTS model: [TensorBoard training curves screenshot]

Vocoder: [TensorBoard training curves screenshot]

TTS config:
{
"model": "Tacotron2",
"run_name": "ljspeech-ddc",
"run_description": "tacotron2 with DDC and differential spectral loss.",

// AUDIO PARAMETERS
"audio":{
    // stft parameters
    "fft_size": 1024,         // number of stft frequency levels. Size of the linear spectogram frame.
    "win_length": 1024,      // stft window length in ms.
    "hop_length": 256,       // stft window hop-lengh in ms.
    "frame_length_ms": null, // stft window length in ms.If null, 'win_length' is used.
    "frame_shift_ms": null,  // stft window hop-lengh in ms. If null, 'hop_length' is used.

    // Audio processing parameters
    "sample_rate": 16000,   // DATASET-RELATED: wav sample-rate.
    "preemphasis": 0.0,     // pre-emphasis to reduce spec noise and make it more structured. If 0.0, no -pre-emphasis.
    "ref_level_db": 20,     // reference level db, theoretically 20db is the sound of air.

    // Silence trimming
    "do_trim_silence": true,// enable trimming of slience of audio as you load it. LJspeech (true), TWEB (false), Nancy (true)
    "trim_db": 60,          // threshold for timming silence. Set this according to your dataset.

    // Griffin-Lim
    "power": 1.5,           // value to sharpen wav signals after GL algorithm.
    "griffin_lim_iters": 60,// #griffin-lim iterations. 30-60 is a good range. Larger the value, slower the generation.

    // MelSpectrogram parameters
    "num_mels": 80,         // size of the mel spec frame.
    "mel_fmin": 50.0,        // minimum freq level for mel-spec. ~50 for male and ~95 for female voices. Tune for dataset!!
    "mel_fmax": 7600.0,     // maximum freq level for mel-spec. Tune for dataset!!
    "spec_gain": 1,

    // Normalization parameters
    "signal_norm": true,    // normalize spec values. Mean-Var normalization if 'stats_path' is defined otherwise range normalization defined by the other params.
    "min_level_db": -100,   // lower bound for normalization
    "symmetric_norm": true, // move normalization to range [-1, 1]
    "max_norm": 4.0,        // scale normalization to range [-max_norm, max_norm] or [0, max_norm]
    "clip_norm": true,      // clip normalized values into the range.
    "stats_path": "/home/dias/Downloads/Ruslan_mono/scale_stats.npy"  // DO NOT USE WITH MULTI_SPEAKER MODEL. scaler stats file computed by 'compute_statistics.py'. If it is defined, mean-std based notmalization is used and other normalization params are ignored
},

// VOCABULARY PARAMETERS
// if custom character set is not defined,
// default set in symbols.py is used
// "characters":{
//     "pad": "_",
//     "eos": "~",
//     "bos": "^",
//     "characters": "АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдеёжзийклмнопрстуфхцчшщъыьэюя!'(),-.:;? ",
//     "punctuations":"!'(),-.:;? ",
//     "phonemes":"iyɨʉɯuɪʏʊeøɘəɵɤoɛœɜɞʌɔæɐaɶɑɒᵻʘɓǀɗǃʄǂɠǁʛpbtdʈɖcɟkɡqɢʔɴŋɲɳnɱmʙrʀⱱɾɽɸβfvθðszʃʒʂʐçʝxɣχʁħʕhɦɬɮʋɹɻjɰlɭʎʟˈˌːˑʍwɥʜʢʡɕʑɺɧɚ˞ɫ"
// },

// DISTRIBUTED TRAINING
"distributed":{
    "backend": "nccl",
    "url": "tcp:\/\/localhost:54321"
},

"reinit_layers": [],    // give a list of layer names to restore from the given checkpoint. If not defined, it reloads all heuristically matching layers.

// TRAINING
"batch_size": 32,       // Batch size for training. Lower values than 32 might cause hard to learn attention. It is overwritten by 'gradual_training'.
"eval_batch_size":16,
"r": 7,                 // Number of decoder frames to predict per iteration. Set the initial values if gradual training is enabled.
"gradual_training": [[0, 7, 64], [1, 5, 64], [50000, 3, 32], [130000, 2, 32], [290000, 1, 32]], //set gradual training steps [first_step, r, batch_size]. If it is null, gradual training is disabled. For Tacotron, you might need to reduce the 'batch_size' as you proceeed.
"apex_amp_level": null,     // level of optimization with NVIDIA's apex feature for automatic mixed FP16/FP32 precision (AMP), NOTE: currently only O1 is supported, and use "O1" to activate.

// LOSS SETTINGS
"loss_masking": true,       // enable / disable loss masking against the sequence padding.
"decoder_loss_alpha": 0.5,  // decoder loss weight. If > 0, it is enabled
"postnet_loss_alpha": 0.25, // postnet loss weight. If > 0, it is enabled
"ga_alpha": 5.0,           // weight for guided attention loss. If > 0, guided attention is enabled.
"diff_spec_alpha": 0.25,     // differential spectral loss weight. If > 0, it is enabled

// VALIDATION
"run_eval": true,
"test_delay_epochs": 10,  //Until attention is aligned, testing only wastes computation time.
"test_sentences_file": null,  // set a file to load sentences to be used for testing. If it is null then we use default english sentences.

// OPTIMIZER
"noam_schedule": false,        // use noam warmup and lr schedule.
"grad_clip": 1.0,              // upper limit for gradients for clipping.
"epochs": 200,                // total number of epochs to train.
"lr": 0.0001,                  // Initial learning rate. If Noam decay is active, maximum learning rate.
"wd": 0.000001,                // Weight decay weight.
"warmup_steps": 4000,          // Noam decay steps to increase the learning rate from 0 to "lr"
"seq_len_norm": false,         // Normalize eash sample loss with its length to alleviate imbalanced datasets. Use it if your dataset is small or has skewed distribution of sequence lengths.

// TACOTRON PRENET
"memory_size": -1,             // ONLY TACOTRON - size of the memory queue used fro storing last decoder predictions for auto-regression. If < 0, memory queue is disabled and decoder only uses the last prediction frame.
"prenet_type": "original",     // "original" or "bn".
"prenet_dropout": false,       // enable/disable dropout at prenet.

// TACOTRON ATTENTION
"attention_type": "original",  // 'original' or 'graves'
"attention_heads": 4,          // number of attention heads (only for 'graves')
"attention_norm": "sigmoid",   // softmax or sigmoid.
"windowing": false,            // Enables attention windowing. Used only in eval mode.
"use_forward_attn": false,     // if it uses forward attention. In general, it aligns faster.
"forward_attn_mask": false,    // Additional masking forcing monotonicity only in eval mode.
"transition_agent": false,     // enable/disable transition agent of forward attention.
"location_attn": true,         // enable_disable location sensitive attention. It is enabled for TACOTRON by default.
"bidirectional_decoder": false,  // use https://arxiv.org/abs/1907.09006. Use it, if attention does not work well with your dataset.
"double_decoder_consistency": true,  // use DDC explained here https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency-draft/
"ddc_r": 7,                           // reduction rate for coarse decoder.

// STOPNET
"stopnet": true,               // Train stopnet predicting the end of synthesis.
"separate_stopnet": true,      // Train stopnet seperately if 'stopnet==true'. It prevents stopnet loss to influence the rest of the model. It causes a better model, but it trains SLOWER.

// TENSORBOARD and LOGGING
"print_step": 25,       // Number of steps to log training on console.
"tb_plot_step": 100,    // Number of steps to plot TB training figures.
"print_eval": false,     // If True, it prints intermediate loss values in evalulation.
"save_step": 10000,      // Number of training steps expected to save traninpg stats and checkpoints.
"checkpoint": true,     // If true, it saves checkpoints per "save_step"
"tb_model_param_stats": false,     // true, plots param stats per layer on tensorboard. Might be memory consuming, but good for debugging.

// DATA LOADING
"text_cleaner": "phoneme_cleaners",
"enable_eos_bos_chars": false, // enable/disable beginning of sentence and end of sentence chars.
"num_loader_workers": 4,        // number of training data loader processes. Don't set it too big. 4-8 are good values.
"num_val_loader_workers": 4,    // number of evaluation data loader processes.
"batch_group_size": 4,  //Number of batches to shuffle after bucketing.
"min_seq_len": 6,       // DATASET-RELATED: minimum text length to use in training
"max_seq_len": 153,     // DATASET-RELATED: maximum text length

// PATHS
"output_path": "/home/dias/Downloads/Models/Ruslan/",

// PHONEMES
"phoneme_cache_path": "/home/dias/Downloads/Models/phoneme_cache/",  // phoneme computation is slow, therefore, it caches results in the given folder.
"use_phonemes": false,           // use phonemes instead of raw characters. It is suggested for better pronounciation.
"phoneme_language": "ru",     // depending on your target language, pick one from  https://github.com/bootphon/phonemizer#languages

// MULTI-SPEAKER and GST
"use_speaker_embedding": false,      // use speaker embedding to enable multi-speaker learning.
"use_gst": false,       			    // use global style tokens
"use_external_speaker_embedding_file": false, // if true, forces the model to use external embedding per sample instead of nn.embeddings, that is, it supports external embeddings such as those used at: https://arxiv.org/abs /1806.04558
"external_speaker_embedding_file": "../../speakers-vctk-en.json", // if not null and use_external_speaker_embedding_file is true, it is used to load a specific embedding file and thus uses these embeddings instead of nn.embeddings, that is, it supports external embeddings such as those used at: https://arxiv.org/abs /1806.04558
"gst":	{			                // gst parameter if gst is enabled
    "gst_style_input": null,        // Condition the style input either on a
                                    // -> wave file [path to wave] or
                                    // -> dictionary using the style tokens {'token1': 'value', 'token2': 'value'} example {"0": 0.15, "1": 0.15, "5": -0.15}
                                    // with the dictionary being len(dict) <= len(gst_style_tokens).
    "gst_embedding_dim": 512,
    "gst_num_heads": 4,
    "gst_style_tokens": 10,
    "gst_use_speaker_embedding": false
},

// DATASETS
"datasets":   // List of datasets. They all merged and they get different speaker_ids.
    [
        {
            "name": "ljspeech",
            "path": "/home/dias/Downloads/Ruslan_mono/",
            "meta_file_train": "metadata.csv", // for vtck if list, ignore speakers id in list for train, its useful for test cloning with new speakers
            "meta_file_val": null
        }
    ]

}
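One thing worth understanding in the config above is gradual_training: each entry is [first_step, r, batch_size], and the last entry whose first_step has been reached is the one in effect. A small self-contained sketch of that lookup (my own illustration, not the trainer's actual code):

```python
# Schedule copied from the config above: [first_step, r, batch_size]
GRADUAL_TRAINING = [[0, 7, 64], [1, 5, 64], [50000, 3, 32],
                    [130000, 2, 32], [290000, 1, 32]]

def schedule_at(global_step):
    """Return the (r, batch_size) pair active at a given global step."""
    r, batch_size = GRADUAL_TRAINING[0][1], GRADUAL_TRAINING[0][2]
    for first_step, new_r, new_batch_size in GRADUAL_TRAINING:
        if global_step >= first_step:
            r, batch_size = new_r, new_batch_size
    return r, batch_size

print(schedule_at(84_000))   # -> (3, 32)
print(schedule_at(300_000))  # -> (1, 32)
```

At the 84k steps mentioned above, the decoder is still predicting r=3 frames per step; with this schedule, output typically keeps improving as r drops to 2 and then 1.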
MelGAN config:
{
"run_name": "multiband-melgan",
"run_description": "multiband melgan mean-var scaling",

// AUDIO PARAMETERS
"audio":{
    "fft_size": 1024,         // number of stft frequency levels. Size of the linear spectogram frame.
    "win_length": 1024,      // stft window length in ms.
    "hop_length": 256,       // stft window hop-lengh in ms.
    "frame_length_ms": null, // stft window length in ms.If null, 'win_length' is used.
    "frame_shift_ms": null,  // stft window hop-lengh in ms. If null, 'hop_length' is used.

    // Audio processing parameters
    "sample_rate": 16000,   // DATASET-RELATED: wav sample-rate. If different than the original data, it is resampled.
    "preemphasis": 0.0,     // pre-emphasis to reduce spec noise and make it more structured. If 0.0, no -pre-emphasis.
    "ref_level_db": 0,     // reference level db, theoretically 20db is the sound of air.

    // Silence trimming
    "do_trim_silence": true,// enable trimming of slience of audio as you load it. LJspeech (false), TWEB (false), Nancy (true)
    "trim_db": 60,          // threshold for timming silence. Set this according to your dataset.

    // MelSpectrogram parameters
    "num_mels": 80,         // size of the mel spec frame.
    "mel_fmin": 50.0,        // minimum freq level for mel-spec. ~50 for male and ~95 for female voices. Tune for dataset!!
    "mel_fmax": 7600.0,     // maximum freq level for mel-spec. Tune for dataset!!
    "spec_gain": 1.0,         // scaler value appplied after log transform of spectrogram.

    // Normalization parameters
    "signal_norm": true,    // normalize spec values. Mean-Var normalization if 'stats_path' is defined otherwise range normalization defined by the other params.
    "min_level_db": -100,   // lower bound for normalization
    "symmetric_norm": true, // move normalization to range [-1, 1]
    "max_norm": 4.0,        // scale normalization to range [-max_norm, max_norm] or [0, max_norm]
    "clip_norm": true,      // clip normalized values into the range.
    "stats_path": "/home/dias/Downloads/Ruslan_mono/scale_stats.npy"    // DO NOT USE WITH MULTI_SPEAKER MODEL. scaler stats file computed by 'compute_statistics.py'. If it is defined, mean-std based notmalization is used and other normalization params are ignored
},

// DISTRIBUTED TRAINING
// "distributed":{
//     "backend": "nccl",
//     "url": "tcp:\/\/localhost:54321"
// },

// LOSS PARAMETERS
"use_stft_loss": true,
"use_subband_stft_loss": true,  // use only with multi-band models.
"use_mse_gan_loss": true,
"use_hinge_gan_loss": false,
"use_feat_match_loss": false,  // use only with melgan discriminators

// loss weights
"stft_loss_weight": 0.5,
"subband_stft_loss_weight": 0.5,
"mse_G_loss_weight": 2.5,
"hinge_G_loss_weight": 2.5,
"feat_match_loss_weight": 25,

// multiscale stft loss parameters
"stft_loss_params": {
    "n_ffts": [1024, 2048, 512],
    "hop_lengths": [120, 240, 50],
    "win_lengths": [600, 1200, 240]
},

// subband multiscale stft loss parameters
"subband_stft_loss_params":{
    "n_ffts": [384, 683, 171],
    "hop_lengths": [30, 60, 10],
    "win_lengths": [150, 300, 60]
},

"target_loss": "avg_G_loss",  // loss value to pick the best model to save after each epoch

// DISCRIMINATOR
"discriminator_model": "melgan_multiscale_discriminator",
"discriminator_model_params":{
    "base_channels": 16,
    "max_channels":512,
    "downsample_factors":[4, 4, 4]
},
"steps_to_start_discriminator": 200000,      // steps required to start GAN trainining.1

// GENERATOR
"generator_model": "multiband_melgan_generator",
"generator_model_params": {
    "upsample_factors":[8, 4, 2],
    "num_res_blocks": 4
},

// DATASET
"data_path": "/home/dias/Downloads/Ruslan_mono/wavs/",
"feature_path": null,
"seq_len": 16384,
"pad_short": 2000,
"conv_pad": 0,
"use_noise_augment": false,
"use_cache": true,

"reinit_layers": [],    // give a list of layer names to restore from the given checkpoint. If not defined, it reloads all heuristically matching layers.

// TRAINING
"batch_size": 64,       // Batch size for training. Lower values than 32 might cause hard to learn attention. It is overwritten by 'gradual_training'.

// VALIDATION
"run_eval": true,
"test_delay_epochs": 10,  //Until attention is aligned, testing only wastes computation time.
"test_sentences_file": null,  // set a file to load sentences to be used for testing. If it is null then we use default english sentences.

// OPTIMIZER
"epochs": 200,                // total number of epochs to train.
"wd": 0.0,                // Weight decay weight.
"gen_clip_grad": -1,      // Generator gradient clipping threshold. Apply gradient clipping if > 0
"disc_clip_grad": -1,     // Discriminator gradient clipping threshold.
"lr_scheduler_gen": "MultiStepLR",   // one of the schedulers from https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"lr_scheduler_gen_params": {
    "gamma": 0.5,
    "milestones": [100000, 200000, 300000, 400000, 500000, 600000]
},
"lr_scheduler_disc": "MultiStepLR",   // one of the schedulers from https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"lr_scheduler_disc_params": {
    "gamma": 0.5,
    "milestones": [100000, 200000, 300000, 400000, 500000, 600000]
},
"lr_gen": 1e-4,                  // Initial learning rate. If Noam decay is active, maximum learning rate.
"lr_disc": 1e-4,

// TENSORBOARD and LOGGING
"print_step": 25,       // Number of steps to log traning on console.
"print_eval": false,     // If True, it prints loss values for each step in eval run.
"save_step": 25000,      // Number of training steps expected to plot training stats on TB and save model checkpoints.
"checkpoint": true,     // If true, it saves checkpoints per "save_step"
"tb_model_param_stats": false,     // true, plots param stats per layer on tensorboard. Might be memory consuming, but good for debugging.

// DATA LOADING
"num_loader_workers": 4,        // number of training data loader processes. Don't set it too big. 4-8 are good values.
"num_val_loader_workers": 4,    // number of evaluation data loader processes.
"eval_split_size": 10,

// PATHS
"output_path": "/home/dias/Downloads/Models/Ruslan/"

}
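For anyone wondering what stft_loss_params above actually drives: it is a multi-resolution STFT loss (spectral convergence plus log-magnitude L1), computed at three FFT resolutions. A minimal PyTorch sketch of the idea, not the repo's implementation:

```python
import torch
import torch.nn.functional as F

def stft_mag(x, n_fft, hop_length, win_length):
    """Magnitude spectrogram, floored to keep log() finite."""
    window = torch.hann_window(win_length)
    spec = torch.stft(x, n_fft, hop_length=hop_length, win_length=win_length,
                      window=window, return_complex=True)
    return spec.abs().clamp(min=1e-7)

def multires_stft_loss(y_hat, y, resolutions):
    """Average spectral-convergence + log-magnitude L1 loss over resolutions."""
    loss = 0.0
    for n_fft, hop_length, win_length in resolutions:
        m_hat = stft_mag(y_hat, n_fft, hop_length, win_length)
        m = stft_mag(y, n_fft, hop_length, win_length)
        sc = torch.norm(m - m_hat) / torch.norm(m)   # spectral convergence
        mag = F.l1_loss(m_hat.log(), m.log())        # log STFT magnitude loss
        loss = loss + sc + mag
    return loss / len(resolutions)

# Resolutions taken from "stft_loss_params": (n_fft, hop_length, win_length)
resolutions = [(1024, 120, 600), (2048, 240, 1200), (512, 50, 240)]
y = torch.randn(16000)        # 1 s of reference audio at 16 kHz
y_hat = torch.randn(16000)    # stand-in for generator output
print(multires_stft_loss(y_hat, y, resolutions))
```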
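Two more things worth sanity-checking before committing to long runs: the audio sections of the TTS and vocoder configs should agree (note that ref_level_db is 20 in the TTS config above and 0 here; whether that matters depends on how the normalization stats are applied, but it is worth being deliberate about), and for MultiBand-MelGAN the product of the generator's upsample_factors times the 4 subbands must equal hop_length. A quick check, assuming the configs are saved as config_tts.json and config_melgan.json (placeholder names) with their //-comments intact:

```python
import json
import math
import re

def load_commented_json(path):
    """Load a config that uses //-style comments by stripping them first.
    Naive: assumes no literal '//' inside JSON string values."""
    with open(path, encoding="utf-8") as f:
        text = re.sub(r"//[^\n]*", "", f.read())
    return json.loads(text)

tts = load_commented_json("config_tts.json")      # placeholder filename
voc = load_commented_json("config_melgan.json")   # placeholder filename

# The vocoder consumes mels produced with the TTS audio settings,
# so these keys should match between the two configs.
for key in ("sample_rate", "num_mels", "fft_size", "hop_length", "win_length",
            "mel_fmin", "mel_fmax", "ref_level_db", "spec_gain", "stats_path"):
    if tts["audio"][key] != voc["audio"][key]:
        print(f"mismatch on {key}: {tts['audio'][key]} vs {voc['audio'][key]}")

# MultiBand-MelGAN generates 4 subbands, each upsampled by prod(upsample_factors),
# so 4 * (8 * 4 * 2) must reproduce exactly one hop (256 samples) of audio.
up = math.prod(voc["generator_model_params"]["upsample_factors"])
assert up * 4 == voc["audio"]["hop_length"], "upsample factors don't match hop_length"
```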
I would appreciate any help.

The logs look fine. One tip: double-click on the graph to show the full log.

TTS should sound quite good after around 200k steps.
Vocoder will take much longer, around 600k steps or more, but it always depends.

I would suggest you try to first train a working tts model you are happy with and then focus on the vocoder.


Adding to @sanjaesc: for our German training you can already hear what the voice says at about 20k steps. It lacks word endings or has wrong pronunciations, but you can clearly make out words. Quality is not great, of course; that gets better towards 300k. And maybe train Taco first, then move on to a vocoder.


I third that :) First train Taco2 (it's a beast) and then focus on vocoding. Vocoding is a different kind of hassle and will definitely take up lots of your time.


@sanjaesc @othiele @georroussos thanks a lot! I trained the TTS for 200k steps. Is there some kind of test for checking the results, besides the audio section in TensorBoard? I still get the same result when generating a custom sentence as I did at 84k, but the samples in the audio section are pretty good.

No, if everything else looks OK, there is no metric to check the quality except your ear :) But for our model we could hear that Taco2 didn't get better after 450k steps or so.

Or do you have something else @sanjaesc, @georroussos?

Update: after 200k TTS steps and 300k MultiBand-MelGAN steps, the custom output sounds like this:

CAREFUL, VERY LOUD NOISE

It is similar to the output I had at 84k and 69k steps. Maybe my generation process is wrong?

Sorry, you can't attach sound files here. It is OK if it doesn't sound great or is too loud at this stage.

Try training a vocoder now and you should see improvements.


I realized I had given wrong paths. The model actually sounds great!
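For anyone hitting the same issue: a tiny pre-flight check on the paths before synthesis would have caught this. The filenames below are placeholders for wherever your checkpoints, configs, and stats actually live:

```python
import os

# Placeholder paths -- substitute the artifacts from your own training runs.
paths = {
    "TTS checkpoint":     "/home/dias/Downloads/Models/Ruslan/checkpoint_200000.pth.tar",
    "TTS config":         "/home/dias/Downloads/Models/Ruslan/config.json",
    "vocoder checkpoint": "/home/dias/Downloads/Models/Ruslan/vocoder/checkpoint_300000.pth.tar",
    "vocoder config":     "/home/dias/Downloads/Models/Ruslan/vocoder/config.json",
    "scale stats":        "/home/dias/Downloads/Ruslan_mono/scale_stats.npy",
}
for name, path in paths.items():
    status = "ok" if os.path.isfile(path) else "MISSING"
    print(f"{name:20s} {status:8s} {path}")
```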

Hello Dias! Could you share your experience? I also need to make a Russian TTS, but with my own narrator's voice. I'm relatively new to machine learning and not exactly sure where to start.