I'm trying to remove URLs from a tweets dataset using PySpark, but I'm getting the following error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3' in position 58: ordinal not in range(128)
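For context, the same error reproduces outside Spark with plain Python 2 (a minimal sketch; the tweet text here is made up, u'\xe3' is just 'ã'):

# Plain Python 2, no Spark involved (assumption: the interpreter is Python 2, as the u'...' in the error suggests)
text = u'algum tweet com \xe3 e um link http://t.co/abc'   # made-up example value
str(text)   # raises UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3'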
Importing the dataframe from a CSV file:
tweetImport = spark.read.format('com.databricks.spark.csv')\
    .option('delimiter', ';')\
    .option('header', 'true')\
    .option('charset', 'utf-8')\
    .load('./output_got.csv')
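The import itself seems to go through; a quick sanity check shows every column, including text, read back as a plain string:

# Sanity check of the load; with spark-csv and no inferSchema, all columns come back as strings
tweetImport.printSchema()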
Removing URLs from the tweets:
import re
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf, lower

normalizeTextUDF = udf(lambda text: re.sub(r"(\w+:\/\/\S+)", ":url:",
                                           str(text).encode('ascii', 'ignore')),
                       StringType())
tweetsNormalized = tweetImport.select(normalizeTextUDF(
    lower(tweetImport.text)).alias('text'))
tweetsNormalized.show()
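The show() above is what triggers the error. Checking one value on the driver (a rough check, using the text column loaded above), the column comes back as Python unicode, which I assume is what str() is choking on:

# Rough check on the driver (Python 2): values of the text column are unicode objects,
# so str(value) has to encode them to ASCII first and fails on characters like u'\xe3'
sample = tweetImport.select('text').first()
print(type(sample.text))   # <type 'unicode'>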
I already tried:
normalizeTextUDF = udf(lambda text: re.sub(r"(\w+:\/\/\S+)", ":url:",
                                           str(text).encode('utf-8')),
                       StringType())
And:
normalizeTextUDF = udf(lambda text: re.sub(r"(\w+:\/\/\S+)", ":url:",
                                           unicode(str(text), 'utf-8')),
                       StringType())
Neither worked.
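My guess (could be wrong) is that str(text) raises the error before .encode('utf-8') or unicode(..., 'utf-8') ever runs, so wrapping it makes no difference; a quick check outside Spark (assuming Python 2, made-up value):

import re

value = u'ol\xe3 http://t.co/abc'              # made-up tweet text with a non-ASCII char
re.sub(r"(\w+:\/\/\S+)", ":url:", value)       # works, returns u'ol\xe3 :url:'
re.sub(r"(\w+:\/\/\S+)", ":url:", str(value))  # fails at str(value) with the same UnicodeEncodeError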
------------edit--------------
Traceback:
Py4JJavaError: An error occurred while calling o581.showString. :org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task
0.0 in stage 10.0 (TID 10, localhost, executor driver): org.apache.spark.api.python.PythonException:
Traceback (most recent call last):
File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 174, in main
process()
File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 169, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 106, in <lambda>
func = lambda _, it: map(mapper, it)
File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 92, in <lambda>
mapper = lambda a: udf(*a)
File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 70, in <lambda>
return lambda *a: f(*a)
File "<stdin>", line 3, in <lambda>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3' in position 58: ordinal not in range(128)