Are there straightforward ways to set specific layers to be non-trainable, i.e. so their weights do not update?
I’d like to fine-tune only the last few layers if possible.
I was hoping that the recent transfer learning commits offered such a capability, but it looks like they only re-initialize the selected layers and still train the entire set of weights.
At first glance, it seems we could pass `trainable=False` to the `tfv1.get_variable` call in `variable_on_cpu` (for the layers we want frozen), and also construct `tf.contrib.rnn.LSTMBlockFusedCell` with `trainable=False`. Would this do the trick, or are there other potential pitfalls?
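To make the intent concrete, here's a framework-agnostic toy sketch of the effect I'm after: only the parameters handed to the optimizer get updated, which is what I'd hope `trainable=False` achieves (in TF1 the analogous alternative would be passing a filtered `var_list` to `optimizer.minimize()`; the layer names and gradient below are made up for illustration):

```python
# Toy illustration of freezing layers by excluding their parameters
# from the set the optimizer updates. All names/values are invented;
# this is not DeepSpeech code, just the concept.

params = {
    "layer_1/w": 1.0,  # lower layer -> frozen
    "layer_2/w": 1.0,  # lower layer -> frozen
    "layer_5/w": 1.0,  # last layer  -> fine-tuned
}

# Select only the layers to fine-tune, by name prefix
# (analogous to filtering tf.trainable_variables() by v.name).
trainable = [name for name in params if name.startswith("layer_5")]

def grad(name):
    # Stand-in gradient; a real framework computes this from the loss.
    return 0.5

lr = 0.1
for step in range(10):
    for name in trainable:       # frozen params never appear here
        params[name] -= lr * grad(name)

# layer_1/w and layer_2/w stay at their initial values;
# only layer_5/w moves.
print(params)
```

If `trainable=False` turns out not to propagate everywhere (e.g. through the fused LSTM cell), restricting the optimizer's `var_list` this way seems like the more explicit fallback.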
Any pointers would be very much appreciated.