1 vote

I'm trying to remove URLs from a tweets dataset using PySpark, but I'm getting the following error:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3' in position 58: ordinal not in range(128)

Importing dataframe from csv file:

tweetImport=spark.read.format('com.databricks.spark.csv')\
                    .option('delimiter', ';')\
                    .option('header', 'true')\
                    .option('charset', 'utf-8')\
                    .load('./output_got.csv')

Removing urls from the tweets:

import re

from pyspark.sql.types import StringType
from pyspark.sql.functions import udf, lower

normalizeTextUDF = udf(lambda text: re.sub(r"(\w+:\/\/\S+)",
              ":url:", str(text).encode('ascii', 'ignore')),
              StringType())

tweetsNormalized = tweetImport.select(normalizeTextUDF(
              lower(tweetImport.text)).alias('text'))
tweetsNormalized.show()

Already tried:

normalizeTextUDF = udf(lambda text: re.sub(r"(\w+:\/\/\S+)",
              ":url:", str(text).encode('utf-8')),
              StringType())

And:

normalizeTextUDF = udf(lambda text: re.sub(r"(\w+:\/\/\S+)",
              ":url:", unicode(str(text), 'utf-8')),
              StringType())

Neither worked.
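For reference, the substitution itself works fine on unicode text outside Spark, without any ASCII round-trip; a minimal plain-Python sketch (the helper name `normalize_text` is my own, not part of the dataset code):

```python
import re

def normalize_text(text):
    # Replace any scheme://... URL with the :url: token, operating on the
    # unicode string directly (no .encode('ascii', ...) step needed)
    return re.sub(r"\w+:\/\/\S+", ":url:", text)

print(normalize_text(u"olha s\xe3o http://t.co/abc123"))
# olha são :url:
```

This suggests the error comes from the `str()`/`encode('ascii')` conversion, not from `re.sub` itself.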

------------edit--------------

Traceback:

Py4JJavaError: An error occurred while calling o581.showString. :org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task
0.0 in stage 10.0 (TID 10, localhost, executor driver): org.apache.spark.api.python.PythonException:
Traceback (most recent call last):
  File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 174, in main
    process()
  File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 169, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 106, in <lambda>
    func = lambda _, it: map(mapper, it)
  File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 92, in <lambda>
    mapper = lambda a: udf(*a)
  File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 70, in <lambda>
    return lambda *a: f(*a)
  File "<stdin>", line 3, in <lambda>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3' in position 58: ordinal not in range(128)
Comments:

We'll need the full traceback, because that shows what line threw the exception and how Python got there. – Martijn Pieters♦

Traceback added to the original post ^.~ – Flav Scheidt

That is unfortunately not very clear; either the exception was raised in PySpark itself or PySpark has managed to hide the actual exception traceback. – Martijn Pieters♦

2 Answers

2 votes

I figured out a way to do what I need by removing punctuation first, using the following function:

import string
import unicodedata
from pyspark.sql.functions import *

def normalizeData(text):
    # Map every punctuation character to a space
    replace_punctuation = string.maketrans(string.punctuation, ' '*len(string.punctuation))
    # Decompose accented characters so they can be folded to ASCII
    nfkd_form = unicodedata.normalize('NFKD', unicode(text))
    dataContent = nfkd_form.encode('ASCII', 'ignore').translate(replace_punctuation)
    # Collapse runs of whitespace into single spaces
    dataContentSingleLine = ' '.join(dataContent.split())
    return dataContentSingleLine

udfNormalizeData=udf(lambda text: normalizeData(text))
tweetsNorm=tweetImport.select(tweetImport.date,udfNormalizeData(lower(tweetImport.text)).alias('text'))
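On Python 3 the same idea would look roughly like this (a sketch, not the original answer's code: `str.maketrans` replaces `string.maketrans`, text is unicode by default, and `encode` returns bytes, so a `decode` is needed):

```python
import string
import unicodedata

def normalize_data(text):
    # Decompose accented characters, then fold to ASCII, dropping the rest
    ascii_text = unicodedata.normalize('NFKD', text) \
                            .encode('ascii', 'ignore').decode('ascii')
    # Map every punctuation character to a space (str.maketrans in Python 3)
    table = str.maketrans(string.punctuation, ' ' * len(string.punctuation))
    # Collapse runs of whitespace into single spaces
    return ' '.join(ascii_text.translate(table).split())

print(normalize_data(u"S\xe3o Paulo: http://t.co/abc!"))
# Sao Paulo http t co abc
```

Note this also shreds URLs into tokens, which is why the answer applies it before (or instead of) URL matching.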
0 votes

Try decoding the text first:

str(text).decode('utf-8-sig')

then run the encode:

str(text).encode('utf-8')
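For what it's worth, the only difference between the `utf-8-sig` and `utf-8` codecs is handling of a leading byte-order mark; a small sketch of that behavior:

```python
# 'utf-8-sig' strips a leading UTF-8 BOM if one is present;
# plain 'utf-8' keeps it as the character U+FEFF.
raw = b'\xef\xbb\xbftweet text'

print(raw.decode('utf-8-sig'))  # BOM removed
print(repr(raw.decode('utf-8')))  # BOM kept as '\ufeff'
```

So this helps only when the CSV was written with a BOM; it does not address a `str()` call that forces an ASCII encode.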