Retrieving tensor names from a pre-trained model

Dear all,
I am trying to inspect the hidden states of the recurrent part of the network, but I cannot find a way to retrieve the relevant tensors. Is there any way to do that?

A good starting point for me could be to retrieve the tensors corresponding to the output states as returned by tf.nn.bidirectional_dynamic_rnn(...), which is called roughly as in the sketch below (the cell and input variables are placeholders, not the actual names used in DeepSpeech.py):
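
    # Sketch of the call; fw_cell, bw_cell, layer_3 and seq_length are placeholders
    outputs, output_states = tf.nn.bidirectional_dynamic_rnn(
        cell_fw=fw_cell,
        cell_bw=bw_cell,
        inputs=layer_3,
        dtype=tf.float32,
        time_major=True,
        sequence_length=seq_length)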

My plan is to use the tensor name to retrieve the tensor value using tf.Graph.get_tensor_by_name(). I started by retrieving all available names using

    import tensorflow as tf

    # 'output_graph.pb' here stands for the frozen model file
    with open('output_graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    names = [n.name for n in graph_def.node]

but to the best of my understanding this list does not contain the names of the tensors I’m looking for. Am I on the right track? For reference, once I have a name I intend to fetch the tensor along these lines (‘some_node’ is just a placeholder):
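
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        # a tensor name is the node name plus an output index, e.g. ':0'
        tensor = graph.get_tensor_by_name('some_node:0')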

Thanks in advance.

Roberto

We only set names for the input and output layers, though internally they all have names. The way you are doing it should be okay. What you want is the tensor outputs, right?

Hi,

thanks for your answer. What @boborbt and I would actually like to do is retrieve both the output_states and outputs tensors.

Right now we are looking at all the operation names defined in the pre-trained model graph; the only name that seems connected with the output_states tensor is output_node. That is not what we are looking for, though, since it is the name that was given to the tensor representing the decoded sequence:

https://github.com/mozilla/DeepSpeech/blob/9f26c64db66a04c702e50ce30d099732f6c0db05/DeepSpeech.py#L1703.

Per our understanding, the tensors we are looking for have not been named explicitly in DeepSpeech.py. Is there any way to find out which names they have been assigned by TensorFlow?

Here is a pastebin of the operation names we are retrieving. Do any of these stand out to you as possibly being either outputs or output_states?

Have you tried having a look at the graphviz output of the graph? This should help you identify them much more easily. Naming includes the scope, so for example bidirectional_rnn/* is actually inside the bidirectional layers.

With the graph representation, you should be able to see the nodes and edges that come out of those layers and go into the concat / reshape. You also know that layer 3 / layer 5 elements will be named with some h3, b3, h5 or b5.

Maybe you could also just make your own graph, naming those elements, and then compare the dumps between your graph and ours, thus identifying the layers? A toy graph along the lines of the sketch below would be enough (sizes and names there are placeholders, not what we use):
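
    import tensorflow as tf

    # Toy bidirectional layer with the interesting tensors explicitly
    # named, so they are easy to spot in the GraphDef dump
    inputs = tf.placeholder(tf.float32, [None, 1, 64], name='toy_input')
    seq_length = tf.placeholder(tf.int32, [1], name='toy_seq_length')

    fw_cell = tf.contrib.rnn.BasicLSTMCell(128)
    bw_cell = tf.contrib.rnn.BasicLSTMCell(128)
    outputs, output_states = tf.nn.bidirectional_dynamic_rnn(
        cell_fw=fw_cell, cell_bw=bw_cell, inputs=inputs,
        sequence_length=seq_length, dtype=tf.float32, time_major=True)

    # tf.identity attaches a searchable name to an otherwise anonymous tensor
    tf.identity(tf.concat(outputs, 2), name='named_outputs')
    tf.identity(output_states[0].h, name='named_fw_state_h')

    print(sorted(n.name for n in tf.get_default_graph().as_graph_def().node))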

Ah, thanks, we were not aware of tfgraphviz; were you recommending this? We tried to visualize the frozen graph that comes with the 1.0 release, but the bidirectional_rnn function call is a bit of a black box in the resulting image:

[tfgraphviz rendering of the frozen graph]

It does not seem to show the output tensors, and filtering all the op names for ‘bidirectional_rnn’ does not really help, since the majority of ops live under that very scope. Filtering for both ‘bidirectional_rnn’ and ‘output’ yields no results at all.

Your suggestion about creating a new graph is also very sensible. Could you please give us a bit of direction on using DeepSpeech.py’s BiRNN() definition to create a graph ourselves? We are currently looking at the export() function, but it seems we need a checkpoint file for that, since it uses the freeze_graph tensorflow utility. I have a tentative script here; would you be so kind as to tell me whether I am going in the right direction?

Thanks again!

Hm no, I was thinking about ./tensorflow/tools/quantization/graph_to_dot.py: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/quantization/graph_to_dot.py. If I remember correctly, it goes roughly like this (paths are just examples), with Graphviz’s dot rendering the result:
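
    python tensorflow/tools/quantization/graph_to_dot.py \
        --graph=output_graph.pb \
        --dot_output=deepspeech.dot
    dot -Tpng deepspeech.dot -o deepspeech.png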

Alright, thanks for all your help! By looking at the output dot file we were able to locate the outputs tensor as concat.outputs[0]. In case it helps someone else, it can then be fetched by name along these lines (assuming the frozen graph is imported without a name prefix):
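
    import tensorflow as tf

    with open('output_graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        # concat.outputs[0] is addressed as 'concat:0'
        outputs = graph.get_tensor_by_name('concat:0')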

We are now able to proceed with our work, but we could not help noticing that some tensors and operations are not named. In the case of output_states, we believe this makes it very hard to identify while using visualization tools such as ./tensorflow/tools/quantization/graph_to_dot.py or tfgraphviz. Would a pull request adding names for the output_states tensor, and others if necessary, be appreciated? For concreteness, we were imagining something along the lines of the sketch below (the names are purely illustrative):
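
    # Illustrative sketch only, not the actual DeepSpeech.py code:
    output_state_fw, output_state_bw = output_states
    tf.identity(output_state_fw.h, name='output_state_fw_h')
    tf.identity(output_state_bw.h, name='output_state_bw_h')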

I guess anything that makes it possible for people to hack on top of our model is good :slight_smile: