0
votes

I am trying to implement a hierarchical chat bot using LUIS to identify primary and secondary intents. As part of this I created and trained numerous LUIS models. However, LUIS behaves in unexpected and unpredictable ways at various points. For instance, I have a LUIS model named Leave trained with the following utterances.

  • "Am I eligible for leave of adoption?" → Leave Query
  • "What is my leave balance?" → Leave Query
  • "What is sick leave?" → Leave Query
  • "Who approves my sick leave?" → Leave Approval

After training on these utterances, queries in the leave context work as expected. However, when the following messages are validated against the Leave model with the expectation of receiving the "None" intent, LUIS returns intents other than "None", which does not make any sense.

  • "Am I eligible for loan?" — expected: None, actual: Leave Query
  • "What is my loan balance" — expected: None, actual: Leave Query
  • "Who approves my loan" — expected: None, actual: Leave Query

The issue is that "Am I eligible for loan?" does not belong to this LUIS model at all, so I expect the "None" intent. The idea is to receive the None intent whenever an utterance does not belong to the queried LUIS model, so that I can check the other models for a valid intent. However, I always get some intent instead of "None".
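For context, the dispatch logic I am aiming for looks roughly like this (a minimal Python sketch; the classifier functions here are stubs standing in for calls to each LUIS model's prediction endpoint, and the threshold is a placeholder value):

```python
def route(query, classifiers, threshold=0.5):
    """Return (model_name, intent) from the first model whose top
    intent is not "None" and scores above the threshold."""
    for name, classify in classifiers:
        result = classify(query)  # in practice, a call to that LUIS model
        if result["intent"] != "None" and result["score"] >= threshold:
            return name, result["intent"]
    return None, "None"

# Stub classifiers standing in for real LUIS models.
def leave_model(query):
    if "leave" in query.lower():
        return {"intent": "Leave Query", "score": 0.95}
    return {"intent": "None", "score": 0.8}

def loan_model(query):
    if "loan" in query.lower():
        return {"intent": "Loan Query", "score": 0.9}
    return {"intent": "None", "score": 0.8}

classifiers = [("Leave", leave_model), ("Loan", loan_model)]
print(route("What is my leave balance?", classifiers))  # ('Leave', 'Leave Query')
print(route("Am I eligible for loan?", classifiers))    # ('Loan', 'Loan Query')
```

The cascade only works if each model reliably returns "None" for out-of-domain utterances, which is exactly what is failing here.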

Not sure if I am doing something wrong here. Any help/guidance would be much appreciated.

2
can you format the queries, utterances and intents with bullets so it's a bit more readable? – Ezequiel Jadib

2 Answers

2
votes

I agree with what Steven suggested above:

  1. Training None intent is a good practice
  2. Defining entities will help

If you want to categorize your intents by domain (e.g., Leave in the present case), I would suggest creating a List entity with the value leave.

That way, if you want anything containing the word leave to go to the Leave Query intent, an utterance such as:

anything about [leave ]

Current version results

Top scoring intent
Leave Query (1)
Other intents
None (0.28)

whereas for sentences without the word leave:

anything about loan

Current version results

Top scoring intent
None (0.89)
Other intents
Leave Query (0)

The constraint here is that this makes the model more definitive: the score for Leave Query will be either 1 or 0.

It depends on your use case whether you want to take a definitive approach or a predictive approach. For machine-to-machine communication you might take the definitive approach, but for things like chatbots you might prefer the predictive approach.
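One way to get a middle ground is to post-process the predictive scores with a cutoff, treating anything below it as None. A minimal sketch (the threshold value is an assumption you would tune for your own models):

```python
def resolve_intent(top_intent, score, threshold=0.7):
    """Fall back to "None" when the top-scoring intent is not
    confident enough; otherwise keep LUIS's prediction."""
    if top_intent == "None" or score < threshold:
        return "None"
    return top_intent

# With the list-entity trick, scores are near-definitive:
print(resolve_intent("Leave Query", 1.0))   # Leave Query
print(resolve_intent("Leave Query", 0.0))   # None
print(resolve_intent("None", 0.89))         # None
```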

Nonetheless, this is a nice little trick that might help you.

Hope this helps


0
votes

How well trained is your model, and how many utterances are registered? Just to check: after you received the utterances "Am I eligible for loan?" and "Who approves my loan", did you go into the LUIS portal and train the model that they should not match the Leave intents?

Please note that until any language understanding model is thoroughly trained, it is going to be prone to errors.

When looking at your utterances I noticed that they're all very similar:

  • "Am I eligible for leave of adoption?" vs "Am I eligible for loan?"
  • "What is my leave balance?" vs "What is my loan balance?"
  • "Who approves my sick leave?" vs "Who approves my loan"

These utterances have minimal differences. They're very general questions and you haven't indicated that any entities are currently being used. While the lack of entities for these questions is understandable with your simple examples, entities definitely help LUIS in understanding which intent to match against.

To resolve this problem you'll need to train your model more and should add entities. Some additional utterances you might use are "What's my leave balance?", "Check my leave balance", "Tell me my leave balance.", "Check leave balances", et cetera.
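If you end up with many such variant utterances, the LUIS authoring API accepts them as a batch of labeled examples. Here is a sketch of building that payload (the payload shape follows the v2 authoring docs, but verify the exact endpoint and fields against the LUIS authoring API reference before using it; the intent name and utterances are just the ones from this answer):

```python
import json

def build_examples(utterances, intent):
    """Build a batch payload of labeled examples for the LUIS
    authoring API's "add batch labels" operation."""
    return [
        {"text": u, "intentName": intent, "entityLabels": []}
        for u in utterances
    ]

variants = [
    "What's my leave balance?",
    "Check my leave balance",
    "Tell me my leave balance.",
    "Check leave balances",
]

payload = build_examples(variants, "Leave Query")
print(json.dumps(payload, indent=2))
```

You would then POST this payload to your app's version-specific examples endpoint and retrain.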