My understanding is that the Amazon Alexa Skills Kit (ASK) still does not provide:
- The raw user input
- An option for a fallback intent
- An API to dynamically add sample utterances, so that Alexa can be better informed when selecting an intent.

Is this right, or am I missing some critical capabilities?
Actions on Google with Dialogflow provides:
- The raw user input for analysis: `request.body.result.resolvedQuery`
- Fallback intents: https://dialogflow.com/docs/intents#fallback_intents
- An API to dynamically add user expressions (aka sample utterances): `PUT /intents/{id}`
These tools give developers the ability to check whether the identified intent is correct and, if not, to fix it (a rough sketch of that check follows).
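For illustration, here is a minimal sketch of that check as a Node.js/Express webhook against the Dialogflow V1 protocol. The Express setup, the `userPhrases` table, and the `lookupIntentFor` helper are my own placeholders, not part of either platform:

```javascript
const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical per-user phrase table, e.g. loaded from a database
// after account linking.
const userPhrases = {
  'play my morning mix': 'PlayPlaylistIntent',
};

// Hypothetical helper: re-check the raw utterance against phrases we
// trust and return the intent we know is right, or null.
function lookupIntentFor(query) {
  return userPhrases[query.trim().toLowerCase()] || null;
}

app.post('/webhook', (req, res) => {
  // Dialogflow V1 passes the full utterance and the matched intent.
  const rawQuery = req.body.result.resolvedQuery;
  const matchedIntent = req.body.result.metadata.intentName;

  // Override the intent when Dialogflow picked the wrong one.
  const intent = lookupIntentFor(rawQuery) || matchedIntent;

  res.json({
    speech: `Handling ${intent}`,
    displayText: `Handling ${intent}`,
  });
});

app.listen(3000);
```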
I know similar questions have been asked before; here are just a few:
- How to add slot values dynamically to alexa skill
- Can Alexa skill handler receive full user input?
- Amazon Alexa dynamic variables for intent
I have far more users on my Alexa skill than on my Actions on Google app, simply because of Amazon's market dominance to date, but their experience falls short of a Google Assistant user's experience because of these limitations. I've been waiting almost a year for new Alexa capabilities here, thinking that after Amazon's guidance not to use `AMAZON.LITERAL` there would be improvements coming to custom slots. To date, this old blog post still appears to be the only guidance given.

With Google, I dynamically pull in utterance options from a database that are custom to a given user following account linking, and because I have the user's raw input, I can correct the choice of intent if necessary (see the sketch below).
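That per-user flow looks roughly like the following against the Dialogflow V1 REST API. The intent id and name, the developer-token handling, and `getPhrasesForUser` are placeholders of mine, and the `userSays` payload shape is my reading of the V1 docs, so verify the field names before relying on it:

```javascript
const fetch = require('node-fetch');

// Hypothetical: fetch the phrases stored for this user after
// account linking.
async function getPhrasesForUser(userId) {
  return ['play my morning mix', 'start my workout playlist'];
}

// Push the phrases into an existing intent as user expressions.
// Dialogflow V1 replaces the intent on PUT, so in practice you would
// GET the intent first and merge before writing it back.
async function syncUserUtterances(intentId, userId) {
  const phrases = await getPhrasesForUser(userId);
  const response = await fetch(`https://api.dialogflow.com/v1/intents/${intentId}`, {
    method: 'PUT',
    headers: {
      Authorization: `Bearer ${process.env.DIALOGFLOW_DEV_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: 'PlayPlaylistIntent', // assumed intent name
      userSays: phrases.map(text => ({ data: [{ text }] })),
    }),
  });
  if (!response.ok) throw new Error(`Update failed: ${response.status}`);
}
```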
If you've wanted these capabilities but have had to move forward without them, what tricks do you use to get accurate intent handling on Amazon when you don't know what the user will say?
EDIT 11/21/17: In September, Amazon announced the Alexa Skill Management API (SMAPI), which does provide the third bullet above (a sketch follows).
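For completeness, here is a rough sketch of pushing updated sample utterances through SMAPI's update-interaction-model endpoint. The endpoint path, the payload shape, and the Login with Amazon (LWA) token handling are my reading of the SMAPI docs, so double-check them against the current reference before use:

```javascript
const fetch = require('node-fetch');

// Sketch: update a skill's interaction model via SMAPI. The skill id,
// intent, and sample utterances are placeholders; the access token is
// an LWA token with the SMAPI scope.
async function updateInteractionModel(skillId, accessToken) {
  const model = {
    interactionModel: {
      languageModel: {
        invocationName: 'my skill', // assumed
        intents: [
          {
            name: 'PlayPlaylistIntent', // assumed intent
            slots: [],
            samples: ['play my morning mix', 'start my workout playlist'],
          },
        ],
        types: [],
      },
    },
  };

  const res = await fetch(
    `https://api.amazonalexa.com/v1/skills/${skillId}/interactionModel/locales/en-US`,
    {
      method: 'PUT',
      headers: {
        Authorization: accessToken,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(model),
    }
  );
  if (!res.ok) throw new Error(`SMAPI update failed: ${res.status}`);
}
```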