I’ve been using the `--automatic_mixed_precision` flag for a while, and since then I’ve been able to use larger batch sizes, and training takes about 40% less time on a Tesla V100. But when I read the code I noticed this:
```python
f.DEFINE_boolean('automatic_mixed_precision', False, 'whether to allow automatic mixed precision training. USE OF THIS FLAG IS UNSUPPORTED. Checkpoints created with automatic mixed precision training will not be usable without mixed precision.')
```
Can I ask what “USE OF THIS FLAG IS UNSUPPORTED” means? The same wording appears for `inter/intra_op_parallelism_threads`. Can I set these flags to True and get better performance, or shouldn’t I?
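For context on how I’m toggling it: the project defines this as an absl-style boolean flag, so it’s switched on from the command line. Here’s a minimal stdlib sketch (using `argparse` as a stand-in for `absl.flags`, just to illustrate the on/off behavior; the names mirror the real flag but the parser itself is hypothetical):

```python
import argparse

# Stand-in for the project's DEFINE_boolean call; argparse is used here
# only as a self-contained sketch of how the boolean flag behaves.
parser = argparse.ArgumentParser()
parser.add_argument(
    '--automatic_mixed_precision',
    action='store_true',
    help='whether to allow automatic mixed precision training '
         '(flagged UNSUPPORTED in the source; checkpoints created with '
         'mixed precision will require mixed precision to load)')

# Passing the flag on the command line switches it on:
args = parser.parse_args(['--automatic_mixed_precision'])
print(args.automatic_mixed_precision)  # -> True

# Omitting it leaves the default False:
args = parser.parse_args([])
print(args.automatic_mixed_precision)  # -> False
```

That matches what I see in practice: with the flag passed, AMP kicks in and I get the speedup; without it, training runs in full FP32.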