
I’ve got an issue with my QnA Maker knowledge base where it’s returning ‘No good match found in KB’ for follow-up prompts that I've configured to return an answer.

I have c.200 question/answer pairs set up, and all of them have follow-up prompts linking them to other question/answer pairs. However, when I test the knowledge base in QnA Maker, the follow-up prompts return ‘No good match found in KB’.

Below is an example:

I have a question/answer pair for the question ‘What is depression’ with five follow-up prompts: Prevalence, Causes, Types, Symptoms and Related Issues:

Image of ‘What is depression’ QnA question/answer pair

As you can see in the image below, the Prevalence follow-up prompt is configured to answer using the ‘How common is depression’ question/answer pair:

Image of Prevalence follow-up prompt configuration

However, when I test this using QnA Maker’s built-in test chatbot, I get the answer ‘No good match found in KB’:

Image of Prevalence follow-up prompt answer in test chatbot

When I inspect the result I see the following:

Image of inspection of Prevalence follow-up prompt answer

As you can see, no answer is returned and the confidence score is 'None'.

Has anyone else seen this problem before, and does anyone have a solution?


2 Answers

2 votes

Follow-up prompts aren't currently supported out of the box outside of the QnA Maker portal. There are experimental C# and NodeJS samples available which demonstrate how you can integrate this functionality into your bot.

Since you haven't specified a language preference, I will go with the C# one. Basically, your QnA code needs to be updated from something like:

// Single-turn lookup: return the top answer from the KB, or a fallback message
var qnaMaker = new QnAMaker(new QnAMakerEndpoint
{
    KnowledgeBaseId = _configuration["QnAKnowledgebaseId"],
    EndpointKey = _configuration["QnAEndpointKey"],
    Host = _configuration["QnAEndpointHostName"]
},
null,
httpClient);

var response = await qnaMaker.GetAnswersAsync(turnContext);

if (response != null && response.Length > 0)
{
    await turnContext.SendActivityAsync(MessageFactory.Text(response[0].Answer), cancellationToken);
}
else
{
    await turnContext.SendActivityAsync(MessageFactory.Text("No QnA Maker answers were found."), cancellationToken);
}

to:

var qnaMaker = new QnAMaker(new QnAMakerEndpoint
{
    KnowledgeBaseId = _configuration["QnAKnowledgebaseId"],
    EndpointKey = _configuration["QnAEndpointKey"],
    Host = _configuration["QnAEndpointHostName"]
},
null,
httpClient);

var response = await qnaMaker.GetAnswersAsync(turnContext);
// Check whether the returned answer has any follow-up prompts attached to it
var qnaAnswer = response[0].Answer;
var prompts = response[0].Context?.Prompts;

if (prompts == null || prompts.Length < 1)
{
    await turnContext.SendActivityAsync(MessageFactory.Text(response[0].Answer), cancellationToken);
}
else
{
    // Set bot state only if prompts are found in the QnA result
    // (query, newState, outputActivity and CardHelper all come from the sample linked below)
    newState = new QnABotState
    {
        PreviousQnaId = response[0].Id,
        PreviousUserQuery = query
    };

    // Render the answer and its follow-up prompts as buttons on a hero card
    outputActivity = CardHelper.GetHeroCard(qnaAnswer, prompts);
}

The relevant code is in this file. As you can see, there is some additional code involved to store progress through the follow-up prompts, so it might be easier to plug your KB details into the sample and try it out to see how it works before porting it over into your bot.
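
To give a rough idea of the extra plumbing, below is a minimal sketch of the state side of it. The assumptions here are mine, not the sample's: a ConversationState instance called _conversationState injected into the bot, and a property name of "QnABotState". The state itself is just the two values set in the snippet above:

public class QnABotState
{
    public int PreviousQnaId { get; set; }
    public string PreviousUserQuery { get; set; }
}

and the turn handler loads and saves it around the QnA Maker call:

var stateAccessor = _conversationState.CreateProperty<QnABotState>("QnABotState");

// Load whatever was stored on the previous turn (null on the first turn)
var oldState = await stateAccessor.GetAsync(turnContext, () => null, cancellationToken);

// ... run the QnA Maker call from the snippet above; when the user's message is one of the
// previous answer's prompts, oldState.PreviousQnaId tells the service which pair the prompt
// belongs to, and newState is populated when the new answer has prompts of its own ...

// Persist the state so the next turn can resolve a follow-up prompt click
await stateAccessor.SetAsync(turnContext, newState, cancellationToken);
await _conversationState.SaveChangesAsync(turnContext, false, cancellationToken);

The stored PreviousQnaId is what gets sent back to QnA Maker as context on the next turn, which is why the follow-up prompts only work once this state handling is in place.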

0 votes

With Matt's help, we did some testing of the portal chatbot vs. the QnA API and found that there is a bug with the portal chatbot, as the API returns the answers as expected. I am following up by posting details of this as feedback on the BotFramework documentation page here.
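
For anyone who wants to reproduce the comparison, the direct API test was essentially a POST to the published KB's generateAnswer endpoint with the follow-up context in the body. The sketch below uses placeholder values for the host, KB ID, endpoint key and QnA ID, and the context shape follows the multi-turn documentation, so check it against the REST reference for your API version:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class GenerateAnswerTest
{
    static async Task Main()
    {
        // Placeholder values - the real ones are on the Publish page of the KB
        var host = "https://<your-qna-resource>.azurewebsites.net/qnamaker";
        var kbId = "<knowledge-base-id>";
        var endpointKey = "<endpoint-key>";

        // Ask the 'Prevalence' prompt, passing the parent pair as context
        // (previousQnAId is assumed here to be the ID of the 'What is depression' pair)
        var body = "{\"question\":\"Prevalence\",\"top\":1," +
                   "\"context\":{\"previousQnAId\":1,\"previousUserQuery\":\"What is depression\"}}";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Post,
            $"{host}/knowledgebases/{kbId}/generateAnswer");
        request.Headers.Authorization = new AuthenticationHeaderValue("EndpointKey", endpointKey);
        request.Content = new StringContent(body, Encoding.UTF8, "application/json");

        var response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}

Run against our KB, this kind of call returned the ‘How common is depression’ answer as expected, which is what pointed to the portal's test chatbot, rather than the KB itself, being at fault.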