2014-09-30

Combining text stemming and removal of punctuation in NLTK and scikit-learn

I'm using a combination of NLTK and scikit-learn's CountVectorizer for stemming words and for tokenization.

Here's an example of the plain usage of CountVectorizer:

from sklearn.feature_extraction.text import CountVectorizer 

vocab = ['The swimmer likes swimming so he swims.'] 
vec = CountVectorizer().fit(vocab) 

sentence1 = vec.transform(['The swimmer likes swimming.']) 
sentence2 = vec.transform(['The swimmer swims.']) 

print('Vocabulary: %s' %vec.get_feature_names()) 
print('Sentence 1: %s' %sentence1.toarray()) 
print('Sentence 2: %s' %sentence2.toarray()) 

which will print:

Vocabulary: ['he', 'likes', 'so', 'swimmer', 'swimming', 'swims', 'the'] 
Sentence 1: [[0 1 0 1 1 0 1]] 
Sentence 2: [[0 0 0 1 0 1 1]] 
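Worth noting: punctuation never shows up in this vocabulary, because CountVectorizer's default token_pattern, r"(?u)\b\w\w+\b", only picks up runs of two or more word characters. A quick stdlib sketch of what that default tokenization does:

```python
import re

# CountVectorizer's default token_pattern: runs of 2+ word characters,
# lowercased by the default preprocessor. Punctuation simply never matches.
def default_style_tokenize(text):
    return re.findall(r"(?u)\b\w\w+\b", text.lower())

print(default_style_tokenize('The swimmer likes swimming so he swims.'))
# ['the', 'swimmer', 'likes', 'swimming', 'so', 'he', 'swims']
```

The punctuation problem below only appears once a custom tokenizer replaces this default.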

Now, let's say that I want to remove the stop words and stem the words. One option would be to do it like this:

import nltk
from nltk.stem.porter import PorterStemmer

#######
# based on http://www.cs.duke.edu/courses/spring14/compsci290/assignments/lab02.html
stemmer = PorterStemmer()

def stem_tokens(tokens, stemmer):
    stemmed = []
    for item in tokens:
        stemmed.append(stemmer.stem(item))
    return stemmed

def tokenize(text):
    tokens = nltk.word_tokenize(text)
    stems = stem_tokens(tokens, stemmer)
    return stems
########
######## 

vect = CountVectorizer(tokenizer=tokenize, stop_words='english') 

vect.fit(vocab) 

sentence1 = vect.transform(['The swimmer likes swimming.']) 
sentence2 = vect.transform(['The swimmer swims.']) 

print('Vocabulary: %s' %vect.get_feature_names()) 
print('Sentence 1: %s' %sentence1.toarray()) 
print('Sentence 2: %s' %sentence2.toarray()) 

which prints:

Vocabulary: ['.', 'like', 'swim', 'swimmer'] 
Sentence 1: [[1 1 1 1]] 
Sentence 2: [[1 0 1 1]] 
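The stray '.' in the vocabulary comes from nltk.word_tokenize, which emits punctuation as tokens of its own, and CountVectorizer uses whatever the custom tokenizer returns verbatim. A rough stdlib approximation of that behavior (not NLTK's actual algorithm, just an illustration):

```python
import re

# Rough stand-in for nltk.word_tokenize: words and punctuation become
# separate tokens, so a trailing '.' survives as a token of its own.
def rough_tokenize(text):
    return re.findall(r"\w+|[^\w\s]+", text)

print(rough_tokenize('The swimmer swims.'))
# ['The', 'swimmer', 'swims', '.']
```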

But how can I best get rid of the punctuation in this second version?

Answer


There are a few options. You could try removing the punctuation before tokenizing, but note that this turns don't -> dont:

import string 

def tokenize(text): 
    text = "".join([ch for ch in text if ch not in string.punctuation]) 
    tokens = nltk.word_tokenize(text) 
    stems = stem_tokens(tokens, stemmer) 
    return stems 
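In Python 3, the join-based strip above can also be written with str.translate, which removes every punctuation character in a single pass and tends to be faster on long texts (a sketch, with the same don't -> dont caveat):

```python
import string

# Translation table that deletes every ASCII punctuation character.
strip_punct = str.maketrans('', '', string.punctuation)

print("The swimmer likes swimming, so he swims.".translate(strip_punct))
# The swimmer likes swimming so he swims
print("don't".translate(strip_punct))
# dont
```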

Or you could try removing the punctuation after tokenizing:

def tokenize(text): 
    tokens = nltk.word_tokenize(text) 
    tokens = [i for i in tokens if i not in string.punctuation] 
    stems = stem_tokens(tokens, stemmer) 
    return stems 

EDITED

The code above will work, but it is rather slow because it loops through the same text multiple times:

  • once to remove the punctuation
  • a second time to tokenize
  • a third time to stem

If you have more steps, such as removing digits, removing stop words, or lowercasing, it is better to lump the steps together as much as possible; other answers here do that and are more efficient if your data requires more preprocessing steps.
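A sketch of what lumping the steps together could look like: one regex pass pulls out lowercase word tokens (dropping punctuation and digits in the same step), and each token is stemmed as it is extracted. toy_stem here is a hypothetical stand-in for NLTK's PorterStemmer, just to keep the sketch self-contained; use the real stemmer in practice.

```python
import re

def toy_stem(word):
    # Hypothetical stand-in for nltk's PorterStemmer: strip a few
    # common suffixes. Not a real stemming algorithm.
    for suffix in ('ming', 'mer', 'ing', 's'):
        if word.endswith(suffix):
            return word[:len(word) - len(suffix)]
    return word

def tokenize(text):
    # Single pass: lowercase, keep alphabetic runs only (this removes
    # punctuation and digits at the same time), then stem each token.
    return [toy_stem(tok) for tok in re.findall(r'[a-z]+', text.lower())]

print(tokenize('The swimmer likes swimming, so he swims!'))
# ['the', 'swim', 'like', 'swim', 'so', 'he', 'swim']
```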

Simple and effective. Thanks! – Sebastian 2014-10-01 03:57:23

Note that the second one won't catch '...' or other multi-character punctuation. – 2014-10-01 19:04:43
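The reason '...' slips through is that `i not in string.punctuation` is a substring test against a string of single characters, so only one-character punctuation tokens get dropped. A stricter filter (a sketch) drops any token made up entirely of punctuation characters:

```python
import string

tokens = ['The', 'swimmer', 'swims', '...', '.']

# The substring test only removes single-character punctuation tokens.
naive = [t for t in tokens if t not in string.punctuation]
print(naive)   # ['The', 'swimmer', 'swims', '...']

# Drop any token that consists entirely of punctuation characters.
strict = [t for t in tokens if not all(ch in string.punctuation for ch in t)]
print(strict)  # ['The', 'swimmer', 'swims']
```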

@FredFoo and others: how would you rate gensim and scikit-learn for extracting keywords rather than whole documents? Could you answer me? http://stackoverflow.com/questions/40436110/rake-with-gensim – 2016-11-05 08:53:00