
My programming skills are very limited, so I apologize in advance.

I am trying to detect a DialogFlow intent from streaming audio. I am testing it with a microphone.

I referenced the following Google samples:

Microphone Streaming Audio for Google STT

Intent Detection for Google DialogFlow

Both work fine on their own, but when I try to combine the two samples I get the following error:

No handlers could be found for logger "grpc._channel"
Traceback (most recent call last):
  File "detect_intent_stream.py", line 181, in <module> detect_intent_stream(project_id, session_id, language_code)
  File "detect_intent_stream.py", line 162, in detect_intent_stream for response in responses:
  File "C:\Python27\lib\site-packages\google\api_core\grpc_helpers.py", line 83, in next six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "C:\Python27\lib\site-packages\six.py", line 737, in raise_from raise value
google.api_core.exceptions.Unknown: None Exception iterating requests!

While searching for a solution I came across the following post, but I am not sure how to implement the suggestion it provides.

Intermediate results on using session_client.streaming_detect_intent()

Below is the code I have at the moment.

def detect_intent_stream(project_id, session_id, language_code):
    import dialogflow_v2 as dialogflow
    session_client = dialogflow.SessionsClient()

    audio_encoding = dialogflow.enums.AudioEncoding.AUDIO_ENCODING_LINEAR_16
    sample_rate_hertz = 8000

    session_path = session_client.session_path(project_id, session_id)

    def request_generator(audio_config):
        query_input = dialogflow.types.QueryInput(audio_config=audio_config)
        yield dialogflow.types.StreamingDetectIntentRequest(session=session_path, query_input=query_input, single_utterance=True)

        # MicrophoneStream, RATE and CHUNK come from the Google STT
        # microphone streaming sample referenced above.
        with MicrophoneStream(RATE, CHUNK) as stream:
            #while True:
            #Temp condition
            while dialogflow.types.StreamingRecognitionResult().is_final == False:
                audio_generated = stream.generator()
                #Temp condition
                if not audio_generated:
                    break
                yield dialogflow.types.StreamingDetectIntentRequest(input_audio=audio_generated)

    audio_config = dialogflow.types.InputAudioConfig(audio_encoding=audio_encoding, language_code=language_code, sample_rate_hertz=sample_rate_hertz)

    requests = request_generator(audio_config)
    responses = session_client.streaming_detect_intent(requests)

    print('=' * 20)
    for response in responses:
        print('Intermediate transcript: "{}".'.format(response.recognition_result.transcript).encode('utf-8'))

    # After the loop, the last response holds the final query result.
    query_result = response.query_result

    print('=' * 20)
    print('Query text: {}'.format(query_result.query_text))
    print('Detected intent: {} (confidence: {})\n'.format(
        query_result.intent.display_name,
        query_result.intent_detection_confidence))
    print('Fulfillment text: {}\n'.format(
        query_result.fulfillment_text))
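
For completeness, the function is invoked at module level as the traceback shows. A minimal driver sketch (the project ID below is a placeholder; a random session ID is enough for testing):

import uuid

if __name__ == '__main__':
    # Placeholder values; replace with your own GCP project ID.
    project_id = 'my-project-id'
    session_id = str(uuid.uuid4())
    language_code = 'en-US'
    detect_intent_stream(project_id, session_id, language_code)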

Edit: I have fixed my reference code.

After some testing, it seems the error is caused by the format of the microphone stream data. The example I referred to streamed WAV files. I am trying to work out the right format for the input audio. - June

1 Answer


Solved!

So the issue was that the STT request type StreamingRecognizeRequest and the DF request type StreamingDetectIntentRequest take different parameters.

The STT sample passes the generator as the parameter, whereas the DF function takes the actual audio buffer.
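
In other words, input_audio needs the raw bytes yielded by the microphone generator, not the generator object itself. Below is a minimal sketch of a corrected request generator; it assumes MicrophoneStream, RATE and CHUNK from the Google STT microphone streaming sample and the dialogflow_v2 client used in the question:

import dialogflow_v2 as dialogflow

def request_generator(session_path, audio_config):
    query_input = dialogflow.types.QueryInput(audio_config=audio_config)

    # The first request carries only the session and the audio config.
    yield dialogflow.types.StreamingDetectIntentRequest(
        session=session_path,
        query_input=query_input,
        single_utterance=True)

    with MicrophoneStream(RATE, CHUNK) as stream:
        # stream.generator() yields chunks of raw LINEAR16 bytes;
        # pass each chunk as input_audio instead of the generator object.
        for chunk in stream.generator():
            yield dialogflow.types.StreamingDetectIntentRequest(
                input_audio=chunk)

Two things to watch for: sample_rate_hertz in InputAudioConfig should match the rate the microphone actually captures at (the STT sample records at 16000 Hz, while the question's config says 8000), and with single_utterance=True DialogFlow stops recognizing after the first detected utterance, so you may also want to stop yielding audio once the recognition result reports that the utterance has ended.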