SpecAugment learning rate scheduler

Has anyone tried the learning rate scheduler mentioned in the SpecAugment paper (https://arxiv.org/pdf/1904.08779.pdf, section 3.2)?
Or does anyone have any insight on this?

There are several pending PRs related to that, have you had a look?

@lissyx I couldn’t find any active PR regarding a SpecAugment LR scheduler.

https://github.com/mozilla/DeepSpeech/pulls?q=is%3Apr+SpecAugment

@lissyx as written in the SpecAugment paper:
the learning rate schedule turns out to be an important factor in determining the performance of ASR networks, especially so when augmentation is present. Here, we introduce training schedules that serve two purposes. First, we use these schedules to verify that a longer schedule improves the final performance of the network, even more so with augmentation (Table 2). Second, based on this, we introduce very long schedules that are used to maximize the performance of the networks.

It seems like they used a particular scheduler to get the performance bump, but I couldn’t find any PR regarding this.
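
For reference, section 3.2 describes a schedule that ramps up to a peak learning rate, holds it, and then decays it exponentially down to 1/100 of the peak. A minimal sketch of that shape (the function and parameter names are mine, not from the paper or the DeepSpeech repo):

```python
def specaugment_style_lr(step, peak_lr, ramp_up_end, decay_start, decay_end):
    """Ramp up linearly to peak_lr, hold, then decay exponentially to peak_lr / 100.

    Illustrative sketch of the ramp-up/hold/exponential-decay shape described
    in section 3.2; names and exact form are my assumptions, not repo code.
    """
    if step < ramp_up_end:
        # Linear warm-up from 0 to the peak rate.
        return peak_lr * step / ramp_up_end
    if step < decay_start:
        # Hold at the peak rate.
        return peak_lr
    if step < decay_end:
        # Exponential decay reaching peak_lr / 100 exactly at decay_end.
        progress = (step - decay_start) / (decay_end - decay_start)
        return peak_lr * (0.01 ** progress)
    # Keep the rate constant at 1/100 of the peak afterwards.
    return peak_lr / 100
```

As I read the quoted passage, the longer schedules mostly push decay_start and decay_end further out, i.e. the network trains longer before and during the decay.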

There is no PR regarding this, and we haven’t tried it. We have found that we can continue training for many more epochs with a reduced LR after reaching apparent convergence with the original LR. This was incorporated in the 0.7.0 model/checkpoint that we’ll release soon. But sadly there’s no easy way to share this configuration, since we don’t have an LR scheduler in the code. It would be a good contribution if you’re interested!
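
For anyone picking this up, a scheduler could be as small as computing the learning rate from the global step and handing that tensor to the optimizer instead of a fixed value. A rough sketch with plain TF1 APIs (the boundary step, the rates, and the toy loss are made-up placeholders; this is not how the current DeepSpeech training code is structured):

```python
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# Toy loss so the snippet is self-contained; in DeepSpeech this would be the
# CTC loss coming out of the training graph.
loss = tf.Variable(1.0)

global_step = tf.train.get_or_create_global_step()

# Drop to a tenth of the initial rate after 100k steps, mirroring the
# "keep training with a reduced LR after apparent convergence" approach above.
learning_rate = tf.train.piecewise_constant(
    global_step, boundaries=[100000], values=[1e-4, 1e-5])

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)
```

A schedule like the one in the paper could be expressed the same way, by replacing piecewise_constant with a function of global_step that ramps up, holds, and decays.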

@ruben sure, I’ll start implementing that.