Empty inferences after training a French model

I’m running into a problem when making inferences with an exported model after training:
I trained DeepSpeech on a specific French dataset and got an output_graph.pb.
My WAV file was tested during the training process and it gave a result, but when I test the same file with my exported model “output_graph.pb”, I get empty inferences…
This is the command I’m running:
python3.6 /usr/local/bin/deepspeech --model ~/results/model_export/output_graph.pb --alphabet ~/Deepspeech/data/alphabet.txt --lm ~/DeepSpeech/data/lm/lm.binary --trie ~/DeepSpeech/data/lm/trie --audio ~/deepspeech_dataset/clips/test.wav
And this is the result I get:
Loading model from file ~/results/model_export/output_graph.pbmm
TensorFlow: v1.13.1-10-g3e0cc53
DeepSpeech: v0.5.0-alpha.11-0-g1201739
2019-06-13 09:51:17.639352: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-06-13 09:51:17.743397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-06-13 09:51:17.743809: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:04.0
totalMemory: 11.17GiB freeMemory: 11.09GiB
2019-06-13 09:51:17.743842: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-06-13 09:51:18.035008: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-06-13 09:51:18.035078: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-06-13 09:51:18.035088: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-06-13 09:51:18.035392: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10749 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
2019-06-13 09:51:18.042594: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-06-13 09:51:18.042658: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-06-13 09:51:18.042678: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-06-13 09:51:18.042885: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
Loaded model in 0.407s.
Loading language model from files ~/DeepSpeech/data/lm/lm.binary ~/data/lm/trie
Loaded language model in 0.0116s.
Running inference.
2019-06-13 09:51:18.326565: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
Inference took 0.831s for 2.112s audio file.
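In case it helps narrow things down: empty transcripts are often caused by an audio format mismatch rather than the model itself, since DeepSpeech 0.5 models are trained on 16 kHz, mono, 16-bit PCM audio. This is only a guess about my setup, but here is a small stdlib-only sketch (the file name is illustrative) that checks a clip’s format; it writes a one-second silent WAV purely so the example is self-contained:

```python
import wave

def check_wav(path):
    """Return (sample_rate, channels, bytes_per_sample) for a WAV file."""
    with wave.open(path, "rb") as w:
        return w.getframerate(), w.getnchannels(), w.getsampwidth()

# Write a one-second silent clip in the expected format, just so this
# sketch runs on its own; in practice you would point check_wav() at
# the real test clip instead.
with wave.open("test.wav", "wb") as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(16000)   # 16 kHz
    w.writeframes(b"\x00\x00" * 16000)  # one second of silence

rate, channels, width = check_wav("test.wav")
print(rate, channels, width)  # 16000 1 2
```

If the real clip reports anything other than 16000, 1, 2, resampling it (e.g. with sox) before inference would be worth trying.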

Any help, please?

I’ve already explained to you why. You should also share your training configuration again, and definitely try collaborating on the French model and training tooling that I shared.