
I want to count the occurrences of words in several csv files. First I want to show the 10 most frequently encountered words including stopwords, and then the 10 most frequent words with the stopwords removed. In short, I am counting word frequency across multiple csv files without stopwords.

Here is my code:

import nltk 
nltk.download("stopwords") 


from nltk.corpus import stopwords 


myfile = sc.textFile('./Sacramento*.csv') 


counts = myfile.flatMap(lambda line: line.split(",")).map(lambda word: (word, 1)).reduceByKey(lambda v1,v2: v1 + v2) 


sorted_counts = counts.map(lambda (a, b): (b, a)).sortByKey(0, 1).map(lambda (a, b): (b, a)) 


first_ten = sorted_counts.take(10) 


first_ten 
Out[7]: 
[(u'Residential', 917), 
(u'2', 677), 
(u'CA', 597), 
(u'3', 545), 
(u'SACRAMENTO', 439), 
(u'ours', 388), 
(u'0', 387), 
(u'4', 277), 
(u'Mon May 19 00:00:00 EDT 2008', 268), 
(u'Fri May 16 00:00:00 EDT 2008', 264)] 


cachedStopWords = stopwords.words("english") 


result_ll = counts.map(lambda (a, b): (b, a)).sortByKey(0, 1).map(lambda (a, b): (b, a)) 


print [i for i in result_ll.take(10) if i not in cachedStopWords] 

But the output still contains stopwords; "ours" is a stopword:

[(u'Residential', 917), (u'2', 677), (u'CA', 597), (u'3', 545), (u'SACRAMENTO', 439), (u'ours', 388), (u'0', 387), (u'4', 277), (u'Mon May 19 00:00:00 EDT 2008', 268), (u'Fri May 16 00:00:00 EDT 2008', 264)]

How should I change my code so that the output does not contain the stopword "ours"?

Answer


You have a mistake in the last line; it should be:

print [i for i in result_ll.take(10) if i[0] not in cachedStopWords] 

because i is a (word, count) tuple and i[0] holds the actual word. The original check compared the whole tuple against the stopword list, so nothing was ever filtered out.
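
Note that this only filters the ten rows already returned by take(10), so fewer than ten words may be printed. If the goal is the 10 most frequent non-stopwords, the filtering can be done on the full RDD before sorting. A minimal sketch, assuming the same sc context and counts RDD defined in the question (the stop_set name and the lowercasing are my own additions, not part of the original post):

from nltk.corpus import stopwords 

# build a set for fast membership tests 
stop_set = set(stopwords.words("english")) 

# keep only (word, count) pairs whose word is not a stopword 
filtered_counts = counts.filter(lambda kv: kv[0].lower() not in stop_set) 

# same swap / sortByKey / swap-back pattern as in the question, now on the filtered RDD 
top_ten = filtered_counts.map(lambda kv: (kv[1], kv[0])).sortByKey(0, 1).map(lambda kv: (kv[1], kv[0])).take(10) 

print top_ten 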