Unsupported flags and data augmentation

I’ve been using --automatic_mixed_precision for a while now, and since then I’ve been able to use larger batch sizes, and training takes about 40% less time on a Tesla V100. But when I read the code I noticed this:

    f.DEFINE_boolean('automatic_mixed_precision', False, 'whether to allow automatic mixed precision training. USE OF THIS FLAG IS UNSUPPORTED. Checkpoints created with automatic mixed precision training will not be usable without mixed precision.')
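For reference, in TF 1.x this flag appears to correspond to the automatic mixed precision graph rewrite, which can also be enabled directly on an optimizer. A minimal sketch, assuming TensorFlow 1.14+ in graph mode (not DeepSpeech’s actual training code):

    import tensorflow as tf  # assuming TF 1.14+, graph mode

    # Build a plain optimizer as usual.
    opt = tf.train.AdamOptimizer(learning_rate=1e-4)

    # Wrap it so the graph rewrite casts eligible ops to float16
    # and applies dynamic loss scaling automatically.
    opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)

    # train_op = opt.minimize(loss)  # then train as usual

The wrapped optimizer handles loss scaling on its own, which is why checkpoints created this way may not be usable without mixed precision, as the flag’s help text warns.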

Can I ask what “USE OF THIS FLAG IS UNSUPPORTED” means? The same wording appears for inter/intra_op_parallelism_threads. Can I enable these flags and get better performance, or shouldn’t I?
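For context, those thread-count flags map onto the TF 1.x session configuration. A minimal sketch with illustrative values (the hypothetical thread counts here are assumptions, not DeepSpeech’s actual wiring):

    import tensorflow as tf  # assuming TF 1.x

    # Thread pools are configured per session via ConfigProto.
    config = tf.ConfigProto(
        inter_op_parallelism_threads=2,   # parallelism across independent ops
        intra_op_parallelism_threads=8,   # parallelism within a single op
    )

    with tf.Session(config=config) as sess:
        pass  # run training as usual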

It means we make no promises about anything working if you use it. If you run into issues, you’ll have to figure them out yourself, as we don’t use the flag.

May I ask why you are not using these flags? I’ve been testing inter/intra-op parallelism in TensorFlow, for example, and I get good training times.

Because we have enough hyperparameters to tune already.

Please don’t post the same question across multiple old threads. Open a new thread instead, give good information, and we’ll all profit from it. This feels more like trolling/spamming …