Any documentation around the TensorBoard outputs?

Hey all,

So there are tons of awesome TensorBoard outputs - scalars in particular - when I run DeepSpeech.py on Common Voice… but I’m not sure what they mean.

In particular, I was kind of hoping to see the loss function in the list.

Did these get configured somewhere? If so, does anyone know where in the code?

… to partially answer my own question: I can see that they are configured in the log_variable method, which gets called by log_grads_and_vars, which in turn receives the avg_tower_gradients across all GPUs… but what I still don’t really know is which one I should be looking at :wink:
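
For anyone else digging through this, here’s a minimal sketch (not the exact DeepSpeech code) of how that wiring typically looks with the TF 1.x graph-mode summary API that v0.5.0 builds on - log_grads_and_vars walks the averaged (gradient, variable) pairs and log_variable registers the scalar summaries you end up seeing in TensorBoard:

```python
import tensorflow as tf  # TF 1.x graph-mode API assumed

def log_variable(tensor):
    # Register a few scalar summaries plus a histogram under the tensor's name.
    name = tensor.name.replace(':', '_')
    tf.summary.scalar('%s/mean' % name, tf.reduce_mean(tensor))
    tf.summary.scalar('%s/max' % name, tf.reduce_max(tensor))
    tf.summary.scalar('%s/min' % name, tf.reduce_min(tensor))
    tf.summary.histogram(name, tensor)

def log_grads_and_vars(grads_and_vars):
    # grads_and_vars: the (gradient, variable) pairs averaged across all GPUs
    for gradient, variable in grads_and_vars:
        log_variable(variable)
        log_variable(gradient)
```

Which would explain why the Scalars tab is dominated by per-variable statistics rather than one headline loss curve.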

There is the interesting-looking total_loss variable that is only computed when doing the beam search… does that mean it can’t be added to TensorBoard? Or is that something I could look at doing?
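
If total_loss is an ordinary tensor by the time the graph is built, surfacing it should just be one more summary op. A hedged sketch, again assuming the TF 1.x API - total_loss below is a stand-in for whatever tensor the beam-search path actually computes:

```python
import tensorflow as tf

total_loss = tf.constant(0.0)  # stand-in for the real beam-search loss tensor

# One extra op puts it in TensorBoard's Scalars tab...
tf.summary.scalar('total_loss', total_loss)

# ...and the usual merge/write machinery picks it up:
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('logdir')
# inside the session loop:
#   summary = session.run(merged, feed_dict=...)
#   writer.add_summary(summary, global_step)
```

The catch is that if it’s only computed during beam search (i.e. outside the training graph), you’d have to run and log it separately rather than folding it into the training summaries.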

Failing that, the short version of this question is: what would folks recommend I focus on in the TensorBoard outputs?

For me, training the German CV dataset on the pre-trained v0.5.0 model, the TensorBoard output looks OK for the training loss, since DeepSpeech uses curriculum learning (orange), but weird for the validation loss (blue). Any experience: is this expected behavior?

Yup… that looks about right. With the curriculum learning, the samples get harder over the course of each epoch, which is why you see the loss going up each time - but if you compare, say, the beginning of each epoch, it’s definitely still going down each time…
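
To make the “harder within each epoch” point concrete, here’s roughly what the curriculum amounts to - a sketch, assuming the standard DeepSpeech training CSV columns:

```python
import pandas as pd

# Sketch of the curriculum: sort the training set by file size so each
# epoch runs from the shortest (easiest) clips to the longest (hardest)
# ones - which is why the training loss climbs over the course of an epoch.
train = pd.read_csv('train.csv')  # columns: wav_filename, wav_filesize, transcript
train = train.sort_values(by='wav_filesize')
```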


But the blue line, the validation loss, does look weird, right? Shouldn’t it increase within each epoch as well?