4 votes

Whenever I am inside the skill and say one completely random word, the Fallback Intent is not triggered. The Echo just emits a sound, and the Alexa simulator shows nothing. But I know for a fact that I am still inside the skill and the session has not ended, because if I say an utterance that is mapped to a certain intent, without including the word Alexa, it responds correctly. BUT, if I say TWO completely random words, the Fallback Intent is triggered. For example (already inside the skill), if I say the word "pizza" it just responds with that weird noise and stays in the current session. But if I say the words "pizza pie" it maps to the Fallback Intent.

I have observed this behavior in a skill that has many custom intents, each with many utterances configured. But when I tried the word "pizza" in a skill with only 3 custom intents, the Fallback Intent worked fine.

When you do this, are you inside a dialog management session? – German
Did you figure this out? I'm having this issue now too, exactly as you describe. – James
For future reference, in case it gets answered there, here's the question that was asked on the Amazon dev forums: forums.developer.amazon.com/questions/185075/… – James

1 Answer

1 vote

If, when you say the out-of-domain word, you get a reprompt and then an end of session, it means that Alexa assigned a very low confidence level to the mapping of that utterance to an intent. And this also applies to the fallback intent! Every time you build your model, an out-of-domain model for fallback is built in parallel. That model is supposed to catch out-of-domain utterances, but it's not perfect. Only utterances that match the fallback model with high confidence will be routed to the fallback intent. This is by design (for the current version), meaning that not every utterance will trigger fallback even when fallback is the best candidate. So what you're seeing here is an utterance that generates a low confidence for fallback (fallback is the chosen candidate, but the confidence is too low). As fallback gets better, it will become more effective at capturing these cases. A rather awkward workaround (which somewhat defeats the purpose of fallback, I guess) would be to extend the fallback intent with one-word sample utterances similar to the ones you're trying. Hope this helps...
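As a rough illustration of that workaround, the fallback intent entry in the skill's interaction model JSON might look like the sketch below. This assumes your skill builder accepts sample utterances on `AMAZON.FallbackIntent`; the sample words ("pizza", "banana", "unicorn") are placeholders you would replace with one-word utterances similar to the ones failing in your skill:

```json
{
  "name": "AMAZON.FallbackIntent",
  "samples": [
    "pizza",
    "banana",
    "unicorn"
  ]
}
```

This biases the model toward routing those single words to fallback, at the cost of hand-curating what should be an automatic catch-all.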

Update: FallbackIntent sensitivity tuning was added recently, so now, if you set it to high in the voice interaction model, it will work as you expected!
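To my knowledge, the sensitivity setting lives under `modelConfiguration` in the `languageModel` section of the interaction model JSON; a minimal sketch (invocation name and intents abbreviated) would be:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "my skill",
      "modelConfiguration": {
        "fallbackIntentSensitivity": {
          "level": "HIGH"
        }
      },
      "intents": [
        {
          "name": "AMAZON.FallbackIntent",
          "samples": []
        }
      ]
    }
  }
}
```

With `"level": "HIGH"`, the out-of-domain model is more aggressive, so single random words like "pizza" are more likely to be routed to `AMAZON.FallbackIntent` instead of being silently dropped. Rebuild the model after changing this setting.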