word_tokenize in NLTK not taking a list of strings as argument
from nltk.tokenize import word_tokenize 

music_comments = [['So cant you just run the bot outside of the US? ', ''], ["Just because it's illegal doesn't mean it will stop. I hope it actually gets enforced. ", ''], ['Can they do something about all the fucking bots on Tinder next? \n\nEdit: Holy crap my inbox just blew up ', '']] 

print(word_tokenize(music_comments[1])) 

I found this other question, which talks about passing a list of strings to word_tokenize, but in my case, running the code above gives me the following output:

Traceback (most recent call last):
  File "testing.py", line 5, in <module>
    print(word_tokenize(music_comments[1]))
  File "C:\Users\Shraddheya Shendre\Anaconda3\lib\site-packages\nltk\tokenize\__init__.py", line 109, in word_tokenize
    return [token for sent in sent_tokenize(text, language)
  File "C:\Users\Shraddheya Shendre\Anaconda3\lib\site-packages\nltk\tokenize\__init__.py", line 94, in sent_tokenize
    return tokenizer.tokenize(text)
  File "C:\Users\Shraddheya Shendre\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py", line 1237, in tokenize
    return list(self.sentences_from_text(text, realign_boundaries))
  File "C:\Users\Shraddheya Shendre\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py", line 1285, in sentences_from_text
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
  File "C:\Users\Shraddheya Shendre\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py", line 1276, in span_tokenize
    return [(sl.start, sl.stop) for sl in slices]
  File "C:\Users\Shraddheya Shendre\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py", line 1276, in <listcomp>
    return [(sl.start, sl.stop) for sl in slices]
  File "C:\Users\Shraddheya Shendre\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py", line 1316, in _realign_boundaries
    for sl1, sl2 in _pair_iter(slices):
  File "C:\Users\Shraddheya Shendre\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py", line 310, in _pair_iter
    prev = next(it)
  File "C:\Users\Shraddheya Shendre\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py", line 1289, in _slices_from_text
    for match in self._lang_vars.period_context_re().finditer(text):
TypeError: expected string or bytes-like object

What is the problem? What am I missing?


Pass ONE string to 'word_tokenize()', not a list. That is what the code in the linked question does. (And of course, that is the answer to your question.) – alexis

Answer


You are feeding a list of two items into word_tokenize():

["Just because it's illegal doesn't mean it will stop. I hope it actually gets enforced. ", ''] 

that is, a sentence plus an empty string.

Change your code like this and it should do the trick:

print(word_tokenize(music_comments[1][0])) 
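
Run this way, word_tokenize() receives a plain string and returns its tokens. The exact token list can vary with your NLTK version and tokenizer data, but it should look roughly like this:

['Just', 'because', 'it', "'s", 'illegal', 'does', "n't", 'mean', 'it', 'will', 'stop', '.', 'I', 'hope', 'it', 'actually', 'gets', 'enforced', '.']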
def word_tokenize(self, s): 
    """Tokenize a string to split off punctuation other than periods""" 
    return self._word_tokenizer_re().findall(s) 

This is part of the source code for nltk.tokenize.punkt.

The input to word_tokenize() should be a single string, not a list.
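
Putting the two answers together: if you want to tokenize every comment rather than just one, index out each text string before calling the tokenizer. A minimal sketch, assuming each entry of music_comments is a two-item list whose first element is the comment text:

from nltk.tokenize import word_tokenize

# Requires NLTK's 'punkt' tokenizer data: nltk.download('punkt')
music_comments = [
    ['So cant you just run the bot outside of the US? ', ''],
    ["Just because it's illegal doesn't mean it will stop. I hope it actually gets enforced. ", ''],
    ['Can they do something about all the fucking bots on Tinder next? \n\nEdit: Holy crap my inbox just blew up ', ''],
]

# word_tokenize() expects one plain string, so pass each comment's
# text (index 0) on its own and skip any empty strings.
tokenized = [word_tokenize(text) for text, _ in music_comments if text.strip()]

for tokens in tokenized:
    print(tokens)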