Error extracting noun phrases from a training corpus and removing stop words with NLTK

2017-04-06 142 views

I'm new to both Python and NLTK. I need to extract noun phrases from a corpus and then remove the stop words using NLTK. I've written my code, but it still gives an error. Can anyone help me fix this, or recommend a better solution? Thanks.

import nltk 
from nltk.tokenize import word_tokenize 
from nltk.corpus import stopwords 

docid='19509' 
title='Example noun-phrase and stop words' 
print('Document id:'),docid 
print('Title:'),title 

#list noun phrase 
content='This is a sample sentence, showing off the stop words filtration.' 
is_noun = lambda pos: pos[:2] == 'NN' 
tokenized = nltk.word_tokenize(content) 
nouns = [word for (word,pos) in nltk.pos_tag(tokenized) if is_noun(pos)] 
print('All Noun Phrase:'),nouns 

#remove stop words 
stop_words = set(stopwords.words("english")) 

example_words = word_tokenize(nouns) 
filtered_sentence = [] 

for w in example_words: 
    if w not in stop_words: 
        filtered_sentence.append(w) 

print('Without stop words:'),filtered_sentence 

And I got the following error:

Traceback (most recent call last): 
  File "C:\Users\User\Desktop\NLP\stop_word.py", line 20, in <module> 
    example_words = word_tokenize(nouns) 
  File "C:\Python27\lib\site-packages\nltk\tokenize\__init__.py", line 109, in word_tokenize 
    return [token for sent in sent_tokenize(text, language) 
  File "C:\Python27\lib\site-packages\nltk\tokenize\__init__.py", line 94, in sent_tokenize 
    return tokenizer.tokenize(text) 
  File "C:\Python27\lib\site-packages\nltk\tokenize\punkt.py", line 1237, in tokenize 
    return list(self.sentences_from_text(text, realign_boundaries)) 
  File "C:\Python27\lib\site-packages\nltk\tokenize\punkt.py", line 1285, in sentences_from_text 
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)] 
  File "C:\Python27\lib\site-packages\nltk\tokenize\punkt.py", line 1276, in span_tokenize 
    return [(sl.start, sl.stop) for sl in slices] 
  File "C:\Python27\lib\site-packages\nltk\tokenize\punkt.py", line 1316, in _realign_boundaries 
    for sl1, sl2 in _pair_iter(slices): 
  File "C:\Python27\lib\site-packages\nltk\tokenize\punkt.py", line 310, in _pair_iter 
    prev = next(it) 
  File "C:\Python27\lib\site-packages\nltk\tokenize\punkt.py", line 1289, in _slices_from_text 
    for match in self._lang_vars.period_context_re().finditer(text): 
TypeError: expected string or buffer 

Can you explain what exactly the error is? Which part isn't working? – christinabo


There are too many errors for me to understand. I've attached the error above @christinabo – Nur


Possible duplicate of http://stackoverflow.com/questions/5486337/how-to-remove-stop-words-using-nltk-or-python? – alvas

Answer


You are getting this error because the function word_tokenize expects a string as its argument, and you are giving it a list of strings. As far as I understand what you are trying to achieve, you don't need tokenization at this point. Up to print('All Noun Phrase:'),nouns, you already have all the nouns of your sentence. To remove the stop words, you can use:

### remove stop words ### 
stop_words = set(stopwords.words("english")) 
# find the nouns that are not in the stopwords 
nouns_without_stopwords = [noun for noun in nouns if noun not in stop_words] 
# your sentence is now clear 
print('Without stop words:',nouns_without_stopwords) 

Of course, in this case you will get the same result as nouns, since none of the nouns is a stop word.
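If you do still need word_tokenize later, note that it takes a single string, so a list must be joined first (e.g. word_tokenize(' '.join(nouns))). For the stop-word step itself, though, filtering the list directly is enough. A minimal sketch with stand-in data (the nouns list and the small stop_words set below are hypothetical substitutes for the question's pos_tag output and the full stopwords.words("english") list):

```python
# Stand-in for the nouns list produced by the question's pos_tag step
nouns = ['sample', 'sentence', 'words', 'filtration']

# A tiny stand-in for stopwords.words("english"); the real list is much larger
stop_words = {'a', 'is', 'off', 'the', 'this'}

# Filter the list directly -- no word_tokenize call needed,
# because the nouns are already individual tokens
nouns_without_stopwords = [n for n in nouns if n not in stop_words]
print(nouns_without_stopwords)  # ['sample', 'sentence', 'words', 'filtration']
```

Here the output equals the input, since none of the tagged nouns happens to be a stop word.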

I hope this helps.


Yes, it works.. Thanks a lot for helping me @christinabo – Nur
