1
votes
I am new to Python and want to apply some preprocessing steps, but I get a decoding error:

import nltk
from nltk.tokenize import word_tokenize,sent_tokenize
from nltk.corpus import stopwords
from nltk.tag import pos_tag
from nltk.stem import PorterStemmer

ps = PorterStemmer()
print("\n Reading file without stopwords.")
text_file = open('preprocessing.txt', encoding='utf-8').read()
stop_words = set(stopwords.words("english"))
words = word_tokenize(text_file)
filtered_sentence = [w for w in words if w not in stop_words]
print(filtered_sentence)
print("\n Removed stopwords.")
print(stop_words)
print("\n Stemming.")
for w in words:  # iterate over tokens, not the characters of the raw string
    print(ps.stem(w))
    print(w)
print(sent_tokenize(text_file))
print("\n Tokenization.")
print(word_tokenize(text_file))
print("\n Part-of-speech tagging.")
print(pos_tag(words))

I want to show the result in a specific format, but instead I get this traceback:

    line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0x92 in position 257: invalid start byte
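Byte 0x92 is the right single quotation mark (’) in Windows-1252, the default encoding of many Windows editors, and it is a frequent cause of this exact error. A minimal sketch, assuming the file was saved from such an editor (the file contents below are simulated to reproduce the byte):

```python
# Simulate a Windows-1252 file containing byte 0x92 (a curly apostrophe),
# which is invalid as a UTF-8 start byte:
data = "it\u2019s a test".encode('cp1252')     # b"it\x92s a test"
with open('preprocessing.txt', 'wb') as f:
    f.write(data)

# Reading with the matching encoding decodes the apostrophe correctly:
text_file = open('preprocessing.txt', encoding='cp1252').read()
print(text_file)  # it's a test
```

Reading the same file with encoding='utf-8' raises the UnicodeDecodeError shown above.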

Are you sure your file is encoded using UTF-8? - L3viathan
No. How do I encode it? - umarsaleem
If you're unsure what encoding your file has, you can try chardet to figure it out. - L3viathan
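chardet is a third-party package (pip install chardet) that guesses an encoding from raw bytes. A stdlib-only alternative is to try a few likely encodings in order; the function name sniff_encoding below is my own, not part of any library:

```python
# Try candidate encodings until one decodes cleanly. latin-1 never fails,
# so it acts as a last-resort fallback.
def sniff_encoding(raw: bytes, candidates=('utf-8', 'cp1252', 'latin-1')):
    for enc in candidates:
        try:
            raw.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

print(sniff_encoding(b"it\x92s"))  # cp1252 (0x92 is not valid UTF-8)
```

This is only a heuristic: cp1252 and latin-1 accept almost any byte sequence, so a "successful" decode does not guarantee the text is correct.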

2 Answers

1
votes

Please try reading the data using encoding='unicode_escape'. For example:

text_file = open('preprocessing.txt', encoding='unicode_escape').read()

This resolved the UnicodeDecodeError for me.

Otherwise, you can try passing the filename as a raw string:

text_file = open(r'preprocessing.txt', encoding='unicode_escape').read()
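Note that unicode_escape will suppress the exception but can mangle non-ASCII characters. If the goal is only to avoid the crash while keeping UTF-8, open() also accepts an errors argument; a sketch (the bad byte is simulated here):

```python
# Write one byte (0x92) that is invalid in UTF-8:
open('preprocessing.txt', 'wb').write(b"it\x92s a test")

# errors='replace' substitutes U+FFFD for undecodable bytes instead of raising:
text_file = open('preprocessing.txt', encoding='utf-8', errors='replace').read()
print(text_file)  # it\ufffds a test
```

errors='ignore' drops the bad bytes entirely; both options lose information, so fixing the encoding is usually preferable.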
0
votes

Make sure your file is encoded as UTF-8. If not, open it in Notepad++, go to the Encoding menu, choose "Convert to UTF-8", and save the file.
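The same conversion can be done in Python itself. A sketch, assuming the source encoding is cp1252 (adjust if a detector such as chardet reports something else; the input file is simulated here):

```python
# Simulate a cp1252 file, then re-encode it as UTF-8 in place:
open('preprocessing.txt', 'wb').write("it\u2019s a test".encode('cp1252'))

raw = open('preprocessing.txt', 'rb').read()
text = raw.decode('cp1252')                  # source encoding: an assumption
open('preprocessing.txt', 'w', encoding='utf-8').write(text)

# The original open(..., encoding='utf-8') from the question now works:
print(open('preprocessing.txt', encoding='utf-8').read())  # it's a test
```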