The same speed with CPU and with GPU.

Which level is better? 1 or 2?

The NVIDIA model has now finished, and I will try with log output.

The one providing the most information?

So, per --helpfull, --log_level 0 is the debug level.
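
For example (just a sketch, reusing your own invocation; [...] stands for the rest of your flags):

  python3 DeepSpeech.py --helpfull            # prints every flag with its documentation
  python3 DeepSpeech.py --log_level 0 [...]   # 0 is the most verbose (debug) level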

We also don’t know what CPU you have. Feeding GPUs requires some CPU power as well as RAM.

cat /proc/cpuinfo
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i5-2300 CPU @ 2.80GHz
stepping	: 7
microcode	: 0x2f
cpu MHz		: 2659.078
cache size	: 6144 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 4
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs		: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips	: 5587.34
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 1
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i5-2300 CPU @ 2.80GHz
stepping	: 7
microcode	: 0x2f
cpu MHz		: 2760.393
cache size	: 6144 KB
physical id	: 0
siblings	: 4
core id		: 1
cpu cores	: 4
apicid		: 2
initial apicid	: 2
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs		: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips	: 5587.34
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 2
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i5-2300 CPU @ 2.80GHz
stepping	: 7
microcode	: 0x2f
cpu MHz		: 2590.279
cache size	: 6144 KB
physical id	: 0
siblings	: 4
core id		: 2
cpu cores	: 4
apicid		: 4
initial apicid	: 4
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs		: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips	: 5587.34
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 3
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i5-2300 CPU @ 2.80GHz
stepping	: 7
microcode	: 0x2f
cpu MHz		: 2761.047
cache size	: 6144 KB
physical id	: 0
siblings	: 4
core id		: 3
cpu cores	: 4
apicid		: 6
initial apicid	: 6
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs		: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips	: 5587.34
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

(deepspeech-train-venv) (base) v@gpu:~/DeepSpeech$

I have 10 GB of RAM in my system.

This comparison, if it was always done with the tensorflow-gpu package, might always have been using CUDA.

Please:

  • pip uninstall tensorflow ; pip uninstall tensorflow-gpu
  • pip install tensorflow==1.15.2
  • CUDA_VISIBLE_DEVICES=0 python3 DeepSpeech.py [...]
  • pip uninstall tensorflow && pip install tensorflow-gpu==1.15.2
  • CUDA_VISIBLE_DEVICES=1 python3 DeepSpeech.py [...]

In that order, to really verify CPU vs GPU.
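
A quick sanity check between those steps could look like this (just a sketch, assuming the training venv is active; tf.test.is_gpu_available() is the stock TF 1.x helper):

  pip list | grep -i tensorflow
  # shows which of the two packages is currently installed
  python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
  # should print False with plain tensorflow, True once tensorflow-gpu can actually reach the GPU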

So, Core i5-2300 2.8GHz, 10GB RAM and what is your storage subsystem?
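
For example, something like this would show it (a sketch; available columns depend on your util-linux version):

  lsblk -o NAME,SIZE,TYPE,TRAN,MOUNTPOINT   # TRAN tells you usb vs sata/nvme
  df -h /                                   # free space on the root filesystem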

My Ubuntu runs on a 2 GB USB disk. I understand that it is slow, but I am comparing similar packages from Mozilla and from NVIDIA under the same conditions.

root@7b9304519bbb:/workspace/OpenSeq2Seq# nvidia-smi
Sun May  3 11:56:52 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82       Driver Version: 440.82       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 00000000:01:00.0  On |                  N/A |
| 37%   34C    P8     9W / 120W |    620MiB /  3018MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

This is the state of the GPU with DeepSpeech running as-is.

So it’s not using your GPU. We’re back to square one: @othiele asked for more TensorFlow logs, I asked for them again, and you still have not shared them.

You’re comparing oranges and beef here. Also, your system is on a 2 GB USB stick, but I don’t think your data are? So that still doesn’t give us insight into how well you can feed your GPU (not to mention the size of your dataset).

 python3 DeepSpeech.py     --drop_source_layers 1     --alphabet_config_path ~/ASR/data-cv/alphabet.ru     --save_checkpoint_dir ~/ASR/ru-output-checkpoint     --load_checkpoint_dir ~/ASR/ru-release-checkpoint     --train_files   ~/ASR/data-cv/clips/train.csv     --dev_files   ~/ASR/data-cv/clips/dev.csv     --test_files  ~/ASR/data-cv/clips/test.csv --scorer_path ~/ASR/ru-release-checkpoint/deepspeech-0.7.0-models.scorer --train_batch_size 64 --dropout_rate 0.25 --learning_rate 0.00005 --dev_batch_size 64 —train_cudnn True --log_level 0
2020-05-03 14:58:48.359928: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2793670000 Hz
2020-05-03 14:58:48.360316: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d1137597c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-05-03 14:58:48.360363: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
W WARNING: You specified different values for --load_checkpoint_dir and --save_checkpoint_dir, but you are running training and testing in a single invocation. The testing step will respect --load_checkpoint_dir, and thus WILL NOT TEST THE CHECKPOINT CREATED BY THE TRAINING STEP. Train and test in two separate invocations, specifying the correct --load_checkpoint_dir in both cases, or use the same location for loading and saving.
WARNING:tensorflow:From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:347: Iterator.output_types (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_types(iterator)`.
W0503 14:58:49.436052 140558978221888 deprecation.py:323] From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:347: Iterator.output_types (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_types(iterator)`.
WARNING:tensorflow:From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:348: Iterator.output_shapes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_shapes(iterator)`.
W0503 14:58:49.436302 140558978221888 deprecation.py:323] From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:348: Iterator.output_shapes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_shapes(iterator)`.
WARNING:tensorflow:From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:350: Iterator.output_classes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_classes(iterator)`.
W0503 14:58:49.436474 140558978221888 deprecation.py:323] From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:350: Iterator.output_classes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_classes(iterator)`.
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

W0503 14:58:49.590739 140558978221888 lazy_loader.py:50] 
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

WARNING:tensorflow:From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/contrib/rnn/python/ops/lstm_ops.py:597: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.add_weight` method instead.
W0503 14:58:49.592159 140558978221888 deprecation.py:323] From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/contrib/rnn/python/ops/lstm_ops.py:597: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.add_weight` method instead.
WARNING:tensorflow:From /home/v/DeepSpeech/training/deepspeech_training/train.py:246: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0503 14:58:49.672730 140558978221888 deprecation.py:323] From /home/v/DeepSpeech/training/deepspeech_training/train.py:246: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
D Session opened.
I Could not find best validating checkpoint.
I Could not find most recent checkpoint.
I Initializing all variables.
I STARTING Optimization

Here is the log.
Next step: I will uninstall and reinstall different versions of TF (with and without GPU).

Ok, and which one is it? tensorflow? tensorflow-gpu?

Please also collect logs with the TF_CPP_MIN_VLOG_LEVEL=1 env var.
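
For example, something like this (a sketch; tee and the log file name are only suggestions, [...] is the rest of your flags):

  pip show tensorflow tensorflow-gpu | grep -E "^(Name|Version)"    # answers which one is installed; pip warns about the missing one
  TF_CPP_MIN_VLOG_LEVEL=1 CUDA_VISIBLE_DEVICES=0 python3 DeepSpeech.py [...] 2>&1 | tee train.log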

TF without GPU

 CUDA_VISIBLE_DEVICES=0 python3 DeepSpeech.py     --drop_source_layers 1     --alphabet_config_path ~/ASR/data-cv/alphabet.ru     --save_checkpoint_dir ~/ASR/ru-output-checkpoint     --load_checkpoint_dir ~/ASR/ru-release-checkpoint     --train_files   ~/ASR/data-cv/clips/train.csv     --dev_files   ~/ASR/data-cv/clips/dev.csv     --test_files  ~/ASR/data-cv/clips/test.csv --scorer_path ~/ASR/ru-release-checkpoint/deepspeech-0.7.0-models.scorer --train_batch_size 64 --dropout_rate 0.25 --learning_rate 0.00005 --dev_batch_size 64 —train_cudnn True

W WARNING: You specified different values for --load_checkpoint_dir and --save_checkpoint_dir, but you are running training and testing in a single invocation. The testing step will respect --load_checkpoint_dir, and thus WILL NOT TEST THE CHECKPOINT CREATED BY THE TRAINING STEP. Train and test in two separate invocations, specifying the correct --load_checkpoint_dir in both cases, or use the same location for loading and saving.
I Could not find best validating checkpoint.
I Could not find most recent checkpoint.
I Initializing all variables.
I STARTING Optimization
Epoch 0 |   Training | Elapsed Time: 0:00:39 | Steps: 1 | Loss: 293.825439
 CUDA_VISIBLE_DEVICES=1  python3 DeepSpeech.py     --drop_source_layers 1     --alphabet_config_path ~/ASR/data-cv/alphabet.ru     --save_checkpoint_dir ~/ASR/ru-output-checkpoint     --load_checkpoint_dir ~/ASR/ru-release-checkpoint     --train_files   ~/ASR/data-cv/clips/train.csv     --dev_files   ~/ASR/data-cv/clips/dev.csv     --test_files  ~/ASR/data-cv/clips/test.csv --scorer_path ~/ASR/ru-release-checkpoint/deepspeech-0.7.0-models.scorer --train_batch_size 64 --dropout_rate 0.25 --learning_rate 0.00005 --dev_batch_size 64 —train_cudnn True --log_level 0

2020-05-03 15:11:14.587849: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2793670000 Hz
2020-05-03 15:11:14.588300: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d2985da870 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-05-03 15:11:14.588331: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-05-03 15:11:14.591408: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-05-03 15:11:14.682084: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-05-03 15:11:14.682181: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: gpu
2020-05-03 15:11:14.682205: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: gpu
2020-05-03 15:11:14.682351: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 440.82.0
2020-05-03 15:11:14.682421: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 440.82.0
2020-05-03 15:11:14.682445: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 440.82.0
W WARNING: You specified different values for --load_checkpoint_dir and --save_checkpoint_dir, but you are running training and testing in a single invocation. The testing step will respect --load_checkpoint_dir, and thus WILL NOT TEST THE CHECKPOINT CREATED BY THE TRAINING STEP. Train and test in two separate invocations, specifying the correct --load_checkpoint_dir in both cases, or use the same location for loading and saving.
WARNING:tensorflow:From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:347: Iterator.output_types (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_types(iterator)`.
W0503 15:11:15.770550 140041799657280 deprecation.py:323] From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:347: Iterator.output_types (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_types(iterator)`.
WARNING:tensorflow:From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:348: Iterator.output_shapes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_shapes(iterator)`.
W0503 15:11:15.770819 140041799657280 deprecation.py:323] From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:348: Iterator.output_shapes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_shapes(iterator)`.
WARNING:tensorflow:From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:350: Iterator.output_classes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_classes(iterator)`.
W0503 15:11:15.770987 140041799657280 deprecation.py:323] From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py:350: Iterator.output_classes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_classes(iterator)`.
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

W0503 15:11:15.929409 140041799657280 lazy_loader.py:50] 
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

WARNING:tensorflow:From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/contrib/rnn/python/ops/lstm_ops.py:597: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.add_weight` method instead.
W0503 15:11:15.930897 140041799657280 deprecation.py:323] From /home/v/ASR/deepspeech-train-venv/lib/python3.7/site-packages/tensorflow_core/contrib/rnn/python/ops/lstm_ops.py:597: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.add_weight` method instead.
WARNING:tensorflow:From /home/v/DeepSpeech/training/deepspeech_training/train.py:246: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0503 15:11:16.015750 140041799657280 deprecation.py:323] From /home/v/DeepSpeech/training/deepspeech_training/train.py:246: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
D Session opened.
I Could not find best validating checkpoint.
I Could not find most recent checkpoint.
I Initializing all variables.
I STARTING Optimization
Epoch 0 |   Training | Elapsed Time: 0:00:39 | Steps: 1 | Loss: 293.825439
Sun May  3 15:12:04 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82       Driver Version: 440.82       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 00000000:01:00.0  On |                  N/A |
| 35%   29C    P8     7W / 120W |    621MiB /  3018MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1982      G   /usr/lib/xorg/Xorg                           374MiB |
|    0      2367      G   /usr/bin/gnome-shell                         232MiB |
|    0      3095      G   gnome-control-center                           2MiB |
|    0      3372      G   /usr/lib/firefox/firefox                       2MiB |
|    0      5284      G   /usr/lib/firefox/firefox                       2MiB |
|    0     14734      G   /usr/lib/firefox/firefox                       2MiB |
|    0     17812      G   /usr/lib/firefox/firefox                       2MiB |
+-----------------------------------------------------------------------------+
(base) v@gpu:~$

So, how is this not obvious?

Oh, sorry. That line went by so quickly.
Do you have any idea how to solve this problem?
I have an NVIDIA Docker image with TensorFlow version 1.13.1. Is it compatible with your package?

No, this is TensorFlow / system level, not DeepSpeech. There could be many, many reasons, and I really don’t have time right now.

As documented, training requires 1.15.2, so no. You could try commonvoice-fr/DeepSpeech/Dockerfile.train at master · common-voice/commonvoice-fr · GitHub, but it is quite opinionated about how things should run, and I don’t have time to provide support at the moment.
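
If you still want to try it, building that image looks roughly like this (a sketch, untested here; the image tag is arbitrary and the Dockerfile may expect build arguments, so check the repository README first):

  git clone https://github.com/common-voice/commonvoice-fr.git
  cd commonvoice-fr/DeepSpeech
  docker build -f Dockerfile.train -t commonvoice-fr-train .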