I'm reading a text file that contains Unicode characters from many different languages. The data in the file is also in JSON format.
I'm working on a CentOS machine. When I open the file in a terminal, the Unicode characters display just fine (so my terminal is configured for Unicode).
When I test my code in Eclipse, it works fine. When I run it in the terminal, it throws:

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 17: ordinal not in range(128)
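Since the same code behaves differently in Eclipse and in a terminal, my guess is that the two environments disagree on encodings somewhere; this is the quick comparison I've been running in both (I'm on Python 2):

import sys

# Compare these values between the Eclipse console and the terminal;
# my assumption is that PyDev configures at least one of them differently.
print sys.stdout.encoding        # encoding of the attached console (may be None)
print sys.getdefaultencoding()   # 'ascii' by default on Python 2

Here is the code that fails: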
import json
import sys

delim = "|"  # placeholder; my real delimiter is defined elsewhere

for line in open("data-01083"):
    try:
        tmp = line
        if tmp == "":
            break
        # each line has a fixed-width prefix; the JSON object starts at offset 41
        theData = json.loads(tmp[41:])
        for loc in theData["locList"]:
            outLine = tmp[:40]
            outLine = outLine + delim + theData["names"][0]["name"]
            outLine = outLine + delim + str(theData.get("Flagvalue"))
            outLine = outLine + delim + str(loc.get("myType"))
            flatAdd = ""
            srcAddr = loc.get("Address")
            if srcAddr is not None:
                flatAdd = delim + str(srcAddr.get("houseNumber"))
                flatAdd = flatAdd + delim + str(srcAddr.get("streetName"))
                flatAdd = flatAdd + delim + str(srcAddr.get("postalCode"))
                flatAdd = flatAdd + delim + str(srcAddr.get("CountryCode"))
            else:
                flatAdd = delim + "None" + delim + "None" + delim + "None" + delim + "None"
            outLine = outLine + flatAdd
            sys.stdout.write(("%s\n" % outLine).encode('utf-8'))
    except:
        sys.stdout.write("Error Processing record\n")
So everything works until it gets to streetName, which is where the non-ASCII characters start showing up, and there it crashes with the UnicodeDecodeError.
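After some digging, I think I can reproduce the failure mode in isolation; this is Python 2 behavior, where json.loads hands back unicode objects but the raw line read from the file is a byte string:

# Minimal reproduction: concatenating a unicode string with a byte string
# that contains UTF-8 bytes makes Python 2 decode the bytes as ASCII.
u = u"street: "        # unicode, like the values json.loads returns
b = "Stra\xc3\x9fe"    # raw UTF-8 bytes, like a line read from the file
out = u + b            # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3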
I can fix that instance by adding .encode('utf-8'):
flatAdd = flatAdd + delim + str(srcAddr.get("streetName").encode('utf-8'))
but then it crashes with the same UnicodeDecodeError further down, where the two halves are concatenated:

outLine = outLine + flatAdd
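From what I've read, the usual recommendation is to decode to unicode at the input boundary, keep everything as unicode inside the loop, and encode exactly once at output. Assuming that's the right approach (and that the 41-character prefix is pure ASCII, so character offsets match byte offsets), I think the loop would start roughly like this untested sketch:

import codecs
import json
import sys

# Sketch: every line comes in as unicode, so nothing inside the loop
# mixes byte strings with unicode strings.
for line in codecs.open("data-01083", "r", encoding="utf-8"):
    theData = json.loads(line[41:])   # line is already unicode here
    outLine = line[:40]               # unicode slice, no str()/.encode() needed
    # ... build the rest of outLine from unicode pieces only ...
    sys.stdout.write((u"%s\n" % outLine).encode("utf-8"))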
I have been stumbling through these types of issues for a month. Any feedback would be greatly appreciated!!