Even after following the tutorial provided by Google itself, only the voice commands are recognized; the touch menu never shows up.
https://developers.google.com/glass/develop/gdk/voice?hl=de#voice-and-touch

The relevant part of my activity:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    View testView = new CardBuilder(this, CardBuilder.Layout.TEXT)
            .setText("test123")
            .getView();

    // Both window features are requested before setContentView().
    getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);
    getWindow().requestFeature(Window.FEATURE_OPTIONS_PANEL);

    setContentView(testView);
}

@Override
public boolean onCreatePanelMenu(int featureId, Menu menu) {
    // Inflate the same menu for the voice command panel and the options panel.
    if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS ||
            featureId == Window.FEATURE_OPTIONS_PANEL) {
        getMenuInflater().inflate(R.menu.menu_one, menu);
        return true;
    }
    // Pass through to super to setup touch menu.
    return super.onCreatePanelMenu(featureId, menu);
}

@Override
public boolean onMenuItemSelected(int featureId, MenuItem item) {
    if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS ||
            featureId == Window.FEATURE_OPTIONS_PANEL) {
        switch (item.getItemId()) {
            case R.id.menu_one_item:
                // Some stuff
                // ...
                break;
            default:
                return true;
        }
        return true;
    }
    return super.onMenuItemSelected(featureId, item);
}
Is it really enough to just check for FEATURE_OPTIONS_PANEL in onCreatePanelMenu() and onMenuItemSelected(), or is it necessary to add an onClick listener or some other tap handler as well (see the sketch below)? Maybe it's just me misunderstanding the instructions as they were intended.
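To make the question concrete, this is the kind of handler I have in mind. It is only a sketch, and I am assuming that a tap on the touchpad reaches the Activity as KeyEvent.KEYCODE_DPAD_CENTER and that openOptionsMenu() is what would bring up the touch menu:

@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    // Assumption: a touchpad tap is delivered to the Activity as DPAD_CENTER.
    if (keyCode == KeyEvent.KEYCODE_DPAD_CENTER) {
        openOptionsMenu();  // would this be required to show the touch menu?
        return true;
    }
    return super.onKeyDown(keyCode, event);
}

(This would go in the same Activity as the code above, with android.view.KeyEvent imported.)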
The tutorial is dated July 31st; is it possible that the API has changed since then and the page simply hasn't been updated?