0

I'm trying to remove the URLs from a dataset of tweets using PySpark, but I'm getting the following error: Removing URLs from tweets - UnicodeEncodeError: 'ascii' codec can't encode character

UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3' in position 58: ordinal not in range(128)

Importing the dataframe from a csv file:

tweetImport=spark.read.format('com.databricks.spark.csv')\ 
        .option('delimiter', ';')\ 
        .option('header', 'true')\ 
        .option('charset', 'utf-8')\ 
        .load('./output_got.csv') 
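
A quick way to sanity-check that the charset option took effect (a sketch; it assumes the CSV has a text column, as the code below does):

# Eyeball a few rows to confirm accented characters survived the load
tweetImport.select('text').show(5, truncate=False)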

Removing the URLs from the tweets:

import re
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf, lower

normalizeTextUDF=udf(lambda text: re.sub(r"(\w+:\/\/\S+)", \ 
       ":url:", str(text).encode('ascii','ignore')), \   
       StringType()) 

tweetsNormalized=tweetImport.select(normalizeTextUDF(\ 
       lower(tweetImport.text)).alias('text')) 
tweetsNormalized.show() 

I've already tried:

normalizeTextUDF=udf(lambda text: re.sub(r"(\w+:\/\/\S+)", \ 
       ":url:", str(text).encode('utf-8')), \   
       StringType()) 

And:

normalizeTextUDF=udf(lambda text: re.sub(r"(\w+:\/\/\S+)", \ 
       ":url:", unicode(str(text), 'utf-8')), \   
       StringType()) 

Neither worked.
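
For context, a minimal sketch of what seems to be going wrong (assuming Python 2, as the unicode() call above suggests): calling str() on a unicode object implicitly encodes it with the ASCII codec, so the error is raised before .encode() ever runs.

# Sketch: str() on a unicode value does an implicit ASCII encode (Python 2)
text = u'ol\xe1 http://t.co/abc'  # hypothetical tweet text

try:
    str(text)  # implicit ASCII encode -> UnicodeEncodeError
except UnicodeEncodeError as err:
    print(err)

# Calling .encode() on the unicode value directly skips the implicit step:
print(text.encode('ascii', 'ignore'))  # -> 'ol http://t.co/abc'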

--- EDIT ---

Traceback:

Py4JJavaError: An error occurred while calling o581.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task 
0.0 in stage 10.0 (TID 10, localhost, executor driver): org.apache.spark.api.python.PythonException: 
Traceback (most recent call last): 
    File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 174, in main 
    process() 
    File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 169, in process 
    serializer.dump_stream(func(split_index, iterator), outfile) 
    File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 106, in <lambda> 
    func = lambda _, it: map(mapper, it) 
    File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 92, in <lambda> 
    mapper = lambda a: udf(*a) 
    File "/home/flav/zeppelin-0.7.1-bin-all/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 70, in <lambda> 
    return lambda *a: f(*a) 
    File "<stdin>", line 3, in <lambda> 
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3' in position 58: ordinal not in range(128) 
+1

We need the full traceback, since it shows which line raised the exception and how Python got there. –

+0

Added the traceback to the original post ^.~ –

+0

Unfortunately that's not very clear; either PySpark itself is raising the exception, or PySpark is managing to hide the actual exception traceback. –

Answers

1

I figured out a way to do what I needed by removing the punctuation first, using the following function:

import string 
import unicodedata 
from pyspark.sql.functions import * 

def normalizeData(text): 
    # map every punctuation character to a space
    replace_punctuation = string.maketrans(string.punctuation, ' '*len(string.punctuation)) 
    # decompose accented characters, then drop the non-ASCII marks
    nfkd_form = unicodedata.normalize('NFKD', unicode(text)) 
    dataContent = nfkd_form.encode('ASCII', 'ignore').translate(replace_punctuation) 
    # collapse runs of whitespace into single spaces
    dataContentSingleLine = ' '.join(dataContent.split()) 
    return dataContentSingleLine 

udfNormalizeData=udf(lambda text: normalizeData(text)) 
tweetsNorm=tweetImport.select(tweetImport.date,udfNormalizeData(lower(tweetImport.text)).alias('text')) 
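
As a quick sanity check, the function can also be exercised outside Spark (a sketch; the sample string is hypothetical):

# Hypothetical input: accents are stripped, punctuation becomes spaces,
# and whitespace is collapsed (Python 2, matching the calls above).
sample = u'Ol\xe1, mundo! Veja: http://t.co/abc'
print(normalizeData(sample))  # -> 'Ola mundo Veja http t co abc'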
0

First try decoding the text:

str(text).decode('utf-8-sig') 

Then run the encoding:

str(text).encode('utf-8') 
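
Put together, the round trip looks roughly like this (a sketch, Python 2; 'utf-8-sig' also strips a leading UTF-8 BOM if one is present):

raw = '\xef\xbb\xbfol\xc3\xa1'  # hypothetical UTF-8 bytes with a BOM
text = raw.decode('utf-8-sig')  # -> u'ol\xe1', BOM removed
print(text.encode('utf-8'))     # -> 'ol\xc3\xa1'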