I am trying to do custom speech-to-intent in a single function call, leveraging a custom acoustic model and a custom language model.
I am following the documentation at https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/intent and aim to use continuous custom speech with LUIS intent.
As per the documentation, the function
SpeechFactory::FromSubscription
can take either a LUIS subscription key (available at luis.ai) for speech-to-intent recognition, or a Custom Speech subscription key (available after registering at www.cris.ai) for speech-to-text.
Custom Speech can be trained with custom acoustic data and a custom language model for higher recognition accuracy.
I have trained my Custom Speech subscription with a custom language model and a custom acoustic model, and I would like to use these models directly for speech-to-intent recognition.
How do I do so?
So far, I have successfully used either Custom Speech with my acoustic and language models for speech-to-text, or my LUIS subscription key for speech-to-intent recognition, but I have been unable to link my custom models to LUIS speech-to-intent.
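To make the two working setups concrete, here is a sketch of what I have. This is illustrative only: it is based on my reading of the preview Speech SDK C++ API, and everything beyond `SpeechFactory::FromSubscription` (the header name, `CreateSpeechRecognizer`, `CreateIntentRecognizer`, the `DeploymentId` parameter, and the placeholder keys/region) is an assumption about how the SDK exposes these features, not verified API:

```cpp
// Sketch only -- member names other than SpeechFactory::FromSubscription are
// assumptions about the preview Speech SDK and may not match the real API.
#include <speechapi_cxx.h>  // assumed Speech SDK header name

using namespace Microsoft::CognitiveServices::Speech;

int main()
{
    // Setup A (works): custom speech-to-text using my cris.ai models.
    auto crisFactory = SpeechFactory::FromSubscription(
        L"<cris-subscription-key>", L"<region>");
    auto sttRecognizer = crisFactory->CreateSpeechRecognizer();
    // Assumed mechanism for selecting my trained acoustic/language models:
    sttRecognizer->Parameters[SpeechParameter::DeploymentId] =
        L"<custom-model-deployment-id>";

    // Setup B (works): speech-to-intent using my luis.ai key.
    auto luisFactory = SpeechFactory::FromSubscription(
        L"<luis-subscription-key>", L"<region>");
    auto intentRecognizer = luisFactory->CreateIntentRecognizer();

    // Missing piece: I see no way to attach the custom-model deployment ID
    // from Setup A to the intent recognizer from Setup B.
    return 0;
}
```

In other words, each factory works on its own, but I cannot find a way to combine a cris.ai model deployment with a LUIS intent recognizer in one call.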
I am using subscriptions from cris.ai and luis.ai. I am not interested in the previous Bing Speech-to-Text SDK, because I need these custom acoustic and language models for my use case.