I'm not surprised. Looking at just the characters of those questions: if your QnA KB is trained to recognize 'What is SDG 1' and does so with a high level of certainty, then 'What is SDG 1x' is going to be matched simply on the percentage of matching characters. To QnA Maker,
every 'What is SDG 1x' question looks like 'What is SDG 1'. You need to go into your QnA KB and train it so that questions like 'What is SDG 19' are matched with 100% certainty. You can check this by looking at the 'inspect' element of the 'test' feature:

As you can see from my image, 'One' was a possible answer to this question, but 'One' is what I have as the answer to 'What is SDG 1' (ignore the other answer; I do a lot of testing on this KB). If you go into inspect like this and see that the wrong answer is selected, you can simply choose the right one, then retrain your KB.
I did this myself, repeatedly choosing the wrong answer and retraining, until I got my KB (despite it already having a perfect answer) to respond with 100% certainty with the wrong answer (shown below):

You're going to have to do something similar, but with the correct answer.
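If you want to spot-check a lot of look-alike questions at once rather than typing each one into the test pane, you can also hit the knowledge base's generateAnswer runtime endpoint with `top=3` and look at the scores. Here's a minimal sketch, assuming Python with the `requests` library; the endpoint host, KB ID, and endpoint key below are placeholders you'd replace with your own values:

```python
import requests

# Placeholders - substitute your own QnA Maker runtime endpoint, KB id, and endpoint key.
ENDPOINT = "https://your-qna-resource.azurewebsites.net"
KB_ID = "your-knowledge-base-id"
ENDPOINT_KEY = "your-endpoint-key"

def top_answers(question: str, top: int = 3):
    """Call the QnA Maker generateAnswer runtime API and return the top matches with scores."""
    url = f"{ENDPOINT}/qnamaker/knowledgebases/{KB_ID}/generateAnswer"
    headers = {"Authorization": f"EndpointKey {ENDPOINT_KEY}"}
    resp = requests.post(url, headers=headers, json={"question": question, "top": top})
    resp.raise_for_status()
    return resp.json()["answers"]

# Spot-check the questions that look alike and see which answer wins, and by how much.
for q in ["What is SDG 1", "What is SDG 11", "What is SDG 19", "What is the agenda of SDG 11"]:
    print(q)
    for a in top_answers(q):
        print(f"  score={a['score']:5.1f}  answer={a['answer'][:60]}")
```

Any question where the wrong answer comes back on top (or where the scores are nearly tied) is one you'd go fix through inspect-and-retrain as described above.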
It's not just that one question, though; there are also many similar questions like 'what is the agenda of SDG 11' and 'what is the mandate of SDG 11' for each SDG, which QnA Maker currently matches to SDG 1.
The idea that one question -> one answer is a good one, but you're going to have to put in a bit more work when all the questions look so similar. Additionally, if you think this is going to come up when working with customers, you can code your bot to return the top 3 or 5 answers and ask a follow-up question like "I'm not sure I understood, did you mean '1' or '17' or '19'?", then have the user select which one they meant (a rough sketch of that idea is below).
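For that follow-up behaviour, something like the sketch below could work. It's only an illustration: the `clarify_or_answer` name, the 70/10 score thresholds, and the hard-coded sample list are all assumptions of mine, not anything QnA Maker gives you. The input is just the `answers` list that generateAnswer returns (e.g. the output of the `top_answers` helper sketched earlier):

```python
def clarify_or_answer(answers, min_score=70.0, max_gap=10.0):
    """Given QnA Maker answers sorted by score (highest first), either return the
    winning answer or a follow-up question when the top matches are too close to call."""
    if not answers:
        return "Sorry, I couldn't find anything on that."
    best = answers[0]
    runner_up = answers[1]["score"] if len(answers) > 1 else 0.0
    # Confident enough and clearly ahead of the runner-up: just answer.
    if best["score"] >= min_score and best["score"] - runner_up > max_gap:
        return best["answer"]
    # Otherwise ask the user which of the matched questions they meant.
    options = " or ".join(f"'{a['questions'][0]}'" for a in answers[:3])
    return f"I'm not sure I understood. Did you mean {options}?"

# Example with a hard-coded list shaped like a generateAnswer response:
sample = [
    {"questions": ["What is SDG 1"],  "answer": "One",      "score": 62.0},
    {"questions": ["What is SDG 11"], "answer": "Eleven",   "score": 58.5},
    {"questions": ["What is SDG 19"], "answer": "Nineteen", "score": 55.0},
]
print(clarify_or_answer(sample))
# -> I'm not sure I understood. Did you mean 'What is SDG 1' or 'What is SDG 11' or 'What is SDG 19'?
```

The exact thresholds are up to you; the point is just that when several near-identical questions score close together, the bot hands the choice back to the user instead of guessing.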