0 votes

I am developing a WPF application which uses speech recognition. The events do not fire when the grammar words are spoken. Also, I am not sure whether the engine starts up or not. How can I check that? Following is the code.

using System;
using System.Speech.Recognition;
using System.Windows;

namespace Summerproject_trial
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        private SpeechRecognitionEngine recEngine =
            new SpeechRecognitionEngine();

        public MainWindow()
        {
            InitializeComponent();
            Choices mychoices = new Choices();
            mychoices.Add(new string[] {"Ok", "Test", "Hello"});
            GrammarBuilder gb = new GrammarBuilder();
            gb.Append(mychoices);
            Grammar mygrammar = new Grammar(gb);
            recEngine.LoadGrammarAsync(mygrammar);          

            recEngine.SpeechRecognized += 
                               new EventHandler<SpeechRecognizedEventArgs>
                                              (recEngine_SpeechRecognized);

            recEngine.SetInputToDefaultAudioDevice();              
        }

        void recEngine_SpeechRecognized(object sender,
                                        SpeechRecognizedEventArgs e)
        {
            MessageBox.Show("You said: " + e.Result.Text);
        }    
    }
}
Have you tried to do it exactly as in the example on the SpeechRecognitionEngine MSDN page? – Clemens
Yes, exactly the same way. I think the code reflects it. – user3755903
"I think the code reflects it" – it doesn't look like it. No idea if it's important, but the MSDN sample creates the SpeechRecognitionEngine with a CultureInfo, and you don't. Then it loads a DictationGrammar; you don't. That's why I asked for exactly. – Clemens
I have seen some video tutorials, and none of them used a DictationGrammar or CultureInfo. – user3755903

2 Answers

1 vote

You forgot to start listening to input.

Try adding this at the end of your constructor:

recEngine.RecognizeAsync(RecognizeMode.Multiple);
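
In the constructor from the question, this goes right after SetInputToDefaultAudioDevice(). A minimal sketch of just the end of the constructor (the grammar and event setup stay as they are):

    // ... grammar and event-handler setup from the question ...
    recEngine.SetInputToDefaultAudioDevice();

    // Start listening. RecognizeMode.Multiple keeps recognition running
    // until RecognizeAsyncStop() or RecognizeAsyncCancel() is called.
    recEngine.RecognizeAsync(RecognizeMode.Multiple);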
0 votes

@Anri's answer is needed, but you also need to create the SpeechRecognitionEngine with a CultureInfo. (You can create a SpeechRecognitionEngine without a CultureInfo, but then you need to set the recognizer language explicitly.)
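
For reference, a minimal sketch of both options (the "en-US" culture name is just an example; use a language for which a recognizer is actually installed on the machine):

    using System.Globalization;
    using System.Linq;
    using System.Speech.Recognition;

    // Option 1: create the engine for a specific culture.
    var engine = new SpeechRecognitionEngine(new CultureInfo("en-US"));

    // Option 2: pick an installed recognizer explicitly and pass its RecognizerInfo.
    RecognizerInfo info = SpeechRecognitionEngine.InstalledRecognizers()
        .FirstOrDefault(r => r.Culture.Name == "en-US");
    if (info != null)
    {
        var engineFromInfo = new SpeechRecognitionEngine(info);
    }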

Also: mobile earphones (by which I assume you mean some sort of Bluetooth headset) will typically NOT work with System.Speech. The desktop SR engine requires higher quality audio input than it can get from Bluetooth.

So, complete code that should work:

    // Create the engine for a specific recognizer language, as described above.
    private SpeechRecognitionEngine recEngine =
        new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US"));

    public MainWindow()
    {
        InitializeComponent();

        // Build a simple grammar from a fixed set of phrases.
        Choices mychoices = new Choices();
        mychoices.Add(new string[] { "Ok", "Test", "Hello" });
        GrammarBuilder gb = new GrammarBuilder();
        gb.Append(mychoices);
        Grammar mygrammar = new Grammar(gb);
        recEngine.LoadGrammarAsync(mygrammar);

        recEngine.SpeechRecognized +=
            new EventHandler<SpeechRecognizedEventArgs>(recEngine_SpeechRecognized);

        // Use the default microphone and start continuous recognition.
        recEngine.SetInputToDefaultAudioDevice();
        recEngine.RecognizeAsync(RecognizeMode.Multiple);
    }

    void recEngine_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        MessageBox.Show("You said: " + e.Result.Text);
    }
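
As for the "how do I check whether the engine starts up" part of the question: one way is to subscribe to the engine's diagnostic events before calling RecognizeAsync. A minimal sketch (the Debug.WriteLine output is just one way to surface the information; any logging works):

    // Hook these up in the constructor, before RecognizeAsync(...).
    recEngine.AudioStateChanged += (s, args) =>
        System.Diagnostics.Debug.WriteLine("Audio state: " + args.AudioState);

    recEngine.SpeechDetected += (s, args) =>
        System.Diagnostics.Debug.WriteLine("Speech detected at " + args.AudioPosition);

    recEngine.SpeechRecognitionRejected += (s, args) =>
        System.Diagnostics.Debug.WriteLine("Speech heard but not matched by any grammar");

    recEngine.RecognizeCompleted += (s, args) =>
        System.Diagnostics.Debug.WriteLine("Recognition stopped" +
            (args.Error != null ? ": " + args.Error.Message : ""));

If AudioStateChanged never fires, the engine is not receiving any audio at all, which is what you would expect with a Bluetooth headset that System.Speech cannot use.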