Guide: Using Automatic Mixed Precision for NVIDIA Tensor Cores

It should work. I’m doing the opposite: training first in fp16 and then fine-tuning in fp32. Even when you train in fp16 with AMP, the master weights are kept in fp32, so the entire saved model will be fp32.
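A minimal sketch (not from the thread) of the point above: under autocast the forward-pass math runs in reduced precision, but the model’s parameters, and therefore any saved checkpoint, stay fp32. CPU autocast with bfloat16 stands in here for GPU fp16 autocast:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 4)  # parameters are created in fp32

# Reduced-precision compute; on GPU you would use device_type="cuda" with fp16
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(torch.randn(2, 8))

print(model.weight.dtype)  # torch.float32: checkpoints saved from this model stay fp32
print(out.dtype)           # torch.bfloat16: activations computed in reduced precision
```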

No, sorry :confused:

You can ask here, maybe there are new updates: