I need to detect the called party's voice when they pick up the receiver on the other end.
Modems usually start playing the file (playback terminal) as soon as the first ring goes out. So I planned to use speech recognition instead: hold the playback until the called party says "hello", and only then start playing the file.
Even plain noise or interference on the line would be enough to start the playback.
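Roughly, the flow I'm after looks like the sketch below (simplified; StartPlayback is only a placeholder for the C4F TAPI playback call, and in the real application the input is the call audio rather than the default microphone):

using System;
using System.Speech.Recognition;

class AnswerDetector
{
    private readonly SpeechRecognitionEngine engine = new SpeechRecognitionEngine();

    public void Start()
    {
        engine.SetInputToDefaultAudioDevice();      // real app: the call's audio stream
        engine.LoadGrammar(new DictationGrammar()); // free dictation, so "hello" or any other word is caught

        // Any detected speech (or noise the engine takes for speech) is enough to start playback.
        engine.SpeechDetected += (s, e) => StartPlayback();

        engine.RecognizeAsync(RecognizeMode.Multiple);
    }

    // Placeholder for the actual playback through the modem.
    private void StartPlayback()
    {
        Console.WriteLine("Called party answered - start playing the file.");
    }
}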
I got this working with a few settings. I found a few common words that my engine detects when someone actually speaks, and the words it tends to detect while the line is still ringing. It works fine as a stand-alone application, but as soon as I integrate it into my application the SpeechHypothesized event is simply never raised.
I can't understand why this happens.
If I inspect the engine at a breakpoint, the delegate is assigned and the invocation list is initialized properly, yet the event is still never raised. For the call handling I'm using the C4F TAPI manager, and for speech recognition the System.Speech library from .NET 3.5.
The code that wires up the events is as follows:
engine.SpeechDetected += new EventHandler<SpeechDetectedEventArgs>(engine_SpeechDetected);
engine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(engine_SpeechRecognized);
engine.SpeechHypothesized += new EventHandler<SpeechHypothesizedEventArgs>(engine_SpeechHypothesized);
engine.SpeechRecognitionRejected += new EventHandler<SpeechRecognitionRejectedEventArgs>(engine_SpeechRecognitionRejected);
All events are raised except SpeechHypothesized.
Any idea why this happens?
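For comparison, in a minimal stand-alone test along the lines of the sketch below (assuming the default microphone as input and a plain dictation grammar), SpeechHypothesized fires as expected:

using System;
using System.Speech.Recognition;

class StandaloneTest
{
    static void Main()
    {
        using (SpeechRecognitionEngine engine = new SpeechRecognitionEngine())
        {
            engine.SetInputToDefaultAudioDevice();
            engine.LoadGrammar(new DictationGrammar());

            // Partial hypotheses arrive here while the utterance is still being spoken.
            engine.SpeechHypothesized += (s, e) => Console.WriteLine("Hypothesis: " + e.Result.Text);
            engine.SpeechRecognized += (s, e) => Console.WriteLine("Recognized: " + e.Result.Text);

            // Same synchronous call as in the integrated code further down.
            RecognitionResult result = engine.Recognize(new TimeSpan(0, 0, 30));
        }
    }
}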
EDIT:
The error is not thrown by the service; it's the Windows Forms application that throws it.
The speech recognition code is as follows:
System.Collections.ObjectModel.ReadOnlyCollection<RecognizerInfo> recognizedSpeeches =
    System.Speech.Recognition.SpeechRecognitionEngine.InstalledRecognizers();
if (recognizedSpeeches != null)
{
    Console.WriteLine("Recognized Speeches:");
    int recognizerNumber = 0;

    engine = new SpeechRecognitionEngine(recognizedSpeeches[recognizerNumber]);
    engine.SetInputToDefaultAudioDevice();

    // Detach any handlers that might already be attached, then re-attach them.
    engine.SpeechDetected -= new EventHandler<SpeechDetectedEventArgs>(engine_SpeechDetected);
    engine.SpeechRecognized -= new EventHandler<SpeechRecognizedEventArgs>(engine_SpeechRecognized);
    engine.SpeechHypothesized -= new EventHandler<SpeechHypothesizedEventArgs>(engine_SpeechHypothesized);
    engine.SpeechRecognitionRejected -= new EventHandler<SpeechRecognitionRejectedEventArgs>(engine_SpeechRecognitionRejected);

    engine.SpeechDetected += new EventHandler<SpeechDetectedEventArgs>(engine_SpeechDetected);
    engine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(engine_SpeechRecognized);
    engine.SpeechHypothesized += new EventHandler<SpeechHypothesizedEventArgs>(engine_SpeechHypothesized);
    engine.SpeechRecognitionRejected += new EventHandler<SpeechRecognitionRejectedEventArgs>(engine_SpeechRecognitionRejected);

    engine.LoadGrammar(new DictationGrammar());
    RecognitionResult srResult = engine.Recognize(new TimeSpan(0, 0, 30));
}
Any clue?