Cached dataset iterator error

Hello,
I am trying to train on my own data; each clip is about 2 seconds long, and I have roughly 40 minutes of audio in total.
Using the transfer learning branch, I got this error from the model: "
W tensorflow/core/kernels/data/cache_dataset_ops.cc:810] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to dataset.cache().take(k).repeat(). You should use dataset.take(k).cache().repeat() instead."
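
For context, the ordering difference the warning describes can be shown with a minimal tf.data sketch (this is just an illustration, not the actual DeepSpeech input pipeline):

```python
import tensorflow as tf  # TF 1.x, same as in the log below

# cache() placed before take(k) tries to cache the *whole* dataset, but the
# iterator only ever reads k elements, so the partial cache is thrown away
# each epoch, which is exactly what the warning complains about.
ds = tf.data.Dataset.range(100)

bad = ds.cache().take(10).repeat()    # triggers the warning
good = ds.take(10).cache().repeat()   # caches only the 10 elements actually read
```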

Here is my full error:

I STARTING Optimization
Epoch 0 | Training | Elapsed Time: 0:00:00 | Steps: 0 | Loss: 0.000000 2019-07-04 23:50:48.642107: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
2019-07-04 23:50:48.792722: W tensorflow/core/kernels/data/cache_dataset_ops.cc:810] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to dataset.cache().take(k).repeat(). You should use dataset.take(k).cache().repeat() instead.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value layer_6/bias
[[{{node layer_6/bias/read}}]]
[[{{node _arg_dropout_5_0_5}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "DeepSpeech.py", line 869, in <module>
tf.app.run(main)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "DeepSpeech.py", line 853, in main
train()
File "DeepSpeech.py", line 550, in train
train_loss, _ = run_set('train', epoch, train_init_op)
File "DeepSpeech.py", line 523, in run_set
feed_dict=feed_dict)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value layer_6/bias
[[node layer_6/bias/read (defined at DeepSpeech.py:40) ]]
[[{{node _arg_dropout_5_0_5}}]]

Caused by op 'layer_6/bias/read', defined at:
File "DeepSpeech.py", line 869, in <module>
tf.app.run(main)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "DeepSpeech.py", line 853, in main
train()
File "DeepSpeech.py", line 430, in train
gradients, loss = get_tower_results(iterator, optimizer, dropout_rates, drop_source_layers)
File "DeepSpeech.py", line 253, in get_tower_results
avg_loss = calculate_mean_edit_distance_and_loss(iterator, dropout_rates, reuse=i > 0)
File "DeepSpeech.py", line 186, in calculate_mean_edit_distance_and_loss
logits, _ = create_model(batch_x, batch_seq_len, dropout, reuse=reuse)
File "DeepSpeech.py", line 154, in create_model
layers['layer_6'] = layer_6 = dense('layer_6', layer_5, Config.n_hidden_6, relu=False)
File "DeepSpeech.py", line 66, in dense
bias = variable_on_cpu('bias', [units], tf.zeros_initializer())
File "DeepSpeech.py", line 40, in variable_on_cpu
var = tf.get_variable(name=name, shape=shape, initializer=initializer)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variable_scope.py", line 1479, in get_variable
aggregation=aggregation)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variable_scope.py", line 1220, in get_variable
aggregation=aggregation)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variable_scope.py", line 547, in get_variable
aggregation=aggregation)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variable_scope.py", line 499, in _true_getter
aggregation=aggregation)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variable_scope.py", line 911, in _get_single_variable
aggregation=aggregation)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 213, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 176, in _variable_v1_call
aggregation=aggregation)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 155, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variable_scope.py", line 2495, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 217, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 1395, in __init__
constraint=constraint)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 1557, in _init_from_args
self._snapshot = array_ops.identity(self._variable, name="read")
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 81, in identity
ret = gen_array_ops.identity(input, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 3890, in identity
"Identity", input=input, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value layer_6/bias
[[node layer_6/bias/read (defined at DeepSpeech.py:40) ]]
[[{{node _arg_dropout_5_0_5}}]]

How can I solve it? Should I delete the cached file?

Thank you

The dataset error seems like the smaller problem here; the actual error you need to fix is the one about uninitialized variables.
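
In TF 1.x you can ask the session which variables it considers uninitialized and then initialize only those, along these lines (a generic sketch, not code from DeepSpeech.py; the variable name in the comment is just the one from the traceback above):

```python
import tensorflow as tf  # TF 1.x graph-mode sketch

with tf.Session() as sess:
    # After the graph is built and the checkpoint (if any) is restored,
    # list the variables that were never initialized or restored.
    uninitialized = sess.run(tf.report_uninitialized_variables())
    print(uninitialized)  # would include b'layer_6/bias' in this case

    # Initialize only the missing variables so restored weights are untouched.
    missing_names = {name.decode() for name in uninitialized}
    missing_vars = [v for v in tf.global_variables()
                    if v.op.name in missing_names]
    sess.run(tf.variables_initializer(missing_vars))
```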

How can I fix it? Also, when I switch to master, training works, but the error rate is much higher. However, the transfer learning 2 branch does give me the error above. @reuben

No idea, I’ve never seen that exact error before. The transfer learning branches require a specific setup with an existing checkpoint, new flags, etc. It’s hard to tell what’s going on without more info.
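
As a rough idea of what such a setup usually involves in TF 1.x: restore the source checkpoint for every layer you keep and freshly initialize the layers you drop (for example the output layer), so nothing is left uninitialized. The sketch below is an assumption about that general pattern, not the branch’s actual code; the layer names and checkpoint path are placeholders:

```python
import tensorflow as tf  # TF 1.x sketch; illustrative only

# Stand-ins for the real graph; in DeepSpeech these would be the variables
# created when the model is built (layer names here are placeholders).
layer_1_w = tf.get_variable('layer_1/weights', shape=[4, 4])
layer_6_b = tf.get_variable('layer_6/bias', shape=[4],
                            initializer=tf.zeros_initializer())

all_vars = tf.global_variables()
kept = [v for v in all_vars if not v.op.name.startswith('layer_6/')]
dropped = [v for v in all_vars if v.op.name.startswith('layer_6/')]

# Restore only the kept layers from the source model's checkpoint,
# then initialize the dropped/replaced layers from scratch.
saver = tf.train.Saver(var_list=kept)

with tf.Session() as sess:
    # 'source_ckpt_dir' is a placeholder, not a real flag or path.
    saver.restore(sess, tf.train.latest_checkpoint('source_ckpt_dir'))
    sess.run(tf.variables_initializer(dropped))
```

If a variable like layer_6/bias ends up in neither the restore list nor the initializer, you get exactly the FailedPreconditionError shown above.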

How can I set up an existing checkpoint for transfer learning? @reuben