Real-time DeepSpeech Analysis using built-in microphone

What's mic.sh? By "the code above", do you mean the code pasted in the first post? You should just use the examples from the git repo, not this one.

And your error is unreadable; please copy/paste it properly and use code formatting …

#!/usr/bin/env bash
from deepspeech import Model
import numpy as np
import speech_recognition as sr

sample_rate = 16000
beam_width = 500
lm_alpha = 0.75
lm_beta = 1.85
n_features = 26
n_context = 9

model_name = "/home/sehar/urdu-models/output_graph.pb"
alphabet = "/home/sehar/urdu-models/alphabet.txt"
language_model = "/home/sehar/urdu-models/lm.binary"
trie = "/home/sehar/urdu-models/trie"
audio_file = "/home/sehar/urdu-models/sent6urd.wav"

if __name__ == '__main__':
    ds = Model(model_name, n_features, n_context, alphabet, beam_width)
    ds.enableDecoderWithLM(alphabet, language_model, trie, lm_alpha, lm_beta)

r = sr.Recognizer()
with sr.Microphone(sample_rate=sample_rate) as source:
    print("Say Something")
    audio = r.listen(source)
    fs = audio.sample_rate
    audio = np.frombuffer(audio.frame_data, np.int16)



#fin = wave.open(audio_file, 'rb')
#fs = fin.getframerate()
#print("Framerate: ", fs)

#audio = np.frombuffer(fin.readframes(fin.getnframes()), np.int16)

#audio_length = fin.getnframes() * (1/sample_rate)
#fin.close()

print("Infering {} file".format(audio_file))

print(ds.stt(audio, fs))

This is my code for using the microphone as input for my own trained model. Can I use this code? While I was running this file I got the above error.

Ok, you need to learn some Python before using DeepSpeech, I fear. You are pasting Python code and running that with Bash. There’s no way this can work.
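
To spell it out, the script has to be launched with the Python interpreter (the file name mic.py below matches the traceback that follows):

python mic.py

Alternatively, replace the Bash shebang with a Python one (#!/usr/bin/env python).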

Traceback (most recent call last):
  File "mic.py", line 21, in <module>
    ds = Model(model_name, n_features, n_context, alphabet, beam_width)
  File "/home/sehar/.local/lib/python2.7/site-packages/deepspeech/__init__.py", line 40, in __init__
    status, impl = deepspeech.impl.CreateModel(*args, **kwargs)
TypeError: CreateModel() takes at most 2 arguments (5 given)

Now I have run this file as a Python file and I am getting the above error. Kindly help.

Please use proper code format (try the forum toolbox and the preview)

Looks like a version mismatch, please make sure that you are using the same version tags for the client and model.

A version mismatch of TensorFlow?

Of the deepspeech Python wheel. The code you wrote does not use the same API as the module you have installed …
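
For example, you can check which wheel is installed and pin it to the release you trained against (a sketch, assuming a pip install; 0.5.1 is just an illustrative version, pick the tag matching your training setup):

pip show deepspeech
pip install deepspeech==0.5.1

The 0.6.x wheels' Model() takes only a model path and a beam width, which is exactly why a five-argument call fails with "takes at most 2 arguments".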

Ok, @sehar_capricon, you seriously need to make an effort on your end and read and do what we are instructing you to do. Please read the code of the examples in the git repo; the link and the instructions were already shared with you earlier. We are welcoming to newcomers, but we cannot do this work for you. If you refuse to make any effort, we won't be able to help you.

I don't think you are using deepspeech here. I believe you are simply using speech_recognition's default STT. Without deepspeech, if you just install pyaudio and SpeechRecognition you can type python -m speech_recognition, and it will work without pointing to an STT engine.

What's the difference between output_graph.pb and output_graph.pbmm?

If you read the documentation, you will learn that it is a protocol-buffer model converted to make the file mmap()able.

How do I get the output_graph.pbmm file?

What is the process to get output_graph.pbmm?

Read the documentation
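
For reference, the documented route is TensorFlow's convert_graphdef_memmapped_format tool, which the DeepSpeech exporting documentation covers (a sketch; the paths are placeholders, and the tool has to be built from the TensorFlow sources):

convert_graphdef_memmapped_format --in_graph=output_graph.pb --out_graph=output_graph.pbmm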

Which documentation? Can you provide me the link?

Hi! Starting from @duys' script and @sehar_capricon's issue, I adapted the script to match the __init__.py of DeepSpeech 0.6.0-g6d43e21 installed on Python 3.7, and now I made the first attempts with the English pre-trained model (downloaded by following the documentation) and audio streaming (no .wav file). Here's the code:

from deepspeech import Model
import numpy as np
import speech_recognition as sr
sample_rate = 16000
beam_width = 500
lm_alpha = 0.75
lm_beta = 1.85
models_folder = 'deepspeech-0.6.0-models/'
model_name = models_folder+"output_graph.pbmm"
alphabet = models_folder+"alphabet.txt"
language_model = models_folder+"lm.binary"
trie = models_folder+"trie"

ds = Model(model_name, beam_width)
ds.enableDecoderWithLM(language_model, trie, lm_alpha, lm_beta)

r = sr.Recognizer()
with sr.Microphone(sample_rate=sample_rate) as source:
    print("Say Something")
    audio = r.listen(source)
    fs = audio.sample_rate
    audio = np.frombuffer(audio.frame_data, np.int16)
    print(ds.stt(audio))

Hope it helps
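
A usage note in case someone copies this: sr.Microphone needs PyAudio, and the script assumes the 0.6.0 wheel and the matching model package unpacked into deepspeech-0.6.0-models/. Assuming a pip-based setup:

pip install deepspeech==0.6.0 SpeechRecognition pyaudio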


Please, I have been working on real-time speech to text, and I noticed DeepSpeech can actually give me what I want. But then I noticed the algorithm only accepts a wave file, not the microphone, while I want to record and get text in real time. Does this approach finally work in real time?
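
It does, and you do not have to go through a .wav file at all: the Python wheel also exposes a streaming API (createStream / feedAudioContent / intermediateDecode / finishStream in 0.6.0), so you can feed raw microphone chunks and decode while you speak instead of waiting for r.listen() to return. Here is a minimal sketch, assuming the 0.6.0 wheel and PyAudio; the model path and chunk size are placeholders:

from deepspeech import Model
import numpy as np
import pyaudio

model_name = 'deepspeech-0.6.0-models/output_graph.pbmm'  # placeholder path
ds = Model(model_name, 500)  # beam_width=500, as in the script above

pa = pyaudio.PyAudio()
mic = pa.open(format=pyaudio.paInt16, channels=1, rate=16000,
              input=True, frames_per_buffer=1024)

ctx = ds.createStream()
print("Say something (Ctrl+C to stop)")
try:
    while True:
        # feed 16-bit mono PCM chunks straight from the microphone
        chunk = mic.read(1024, exception_on_overflow=False)
        ds.feedAudioContent(ctx, np.frombuffer(chunk, np.int16))
        # partial transcript while you speak
        print(ds.intermediateDecode(ctx), end='\r')
except KeyboardInterrupt:
    # flush the stream and print the final transcript
    print("\nFinal:", ds.finishStream(ctx))
finally:
    mic.stop_stream()
    mic.close()
    pa.terminate()

The mic_vad_streaming example in the repo's examples folder does the same thing, with voice activity detection added to segment utterances automatically.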