
I recently started using the nltk module for text analysis. I'm stuck at one point: I want to use word_tokenize on a DataFrame so I can get all the words used in a particular row. How can I use word_tokenize on a DataFrame?

data example: 
     text 
1. This is a very good site. I will recommend it to others. 
2. Can you please give me a call at 9983938428. have issues with the listings. 
3. good work! keep it up 
4. not a very helpful site in finding home decor. 

expected output: 

1. 'This','is','a','very','good','site','.','I','will','recommend','it','to','others','.' 
2. 'Can','you','please','give','me','a','call','at','9983938428','.','have','issues','with','the','listings' 
3. 'good','work','!','keep','it','up' 
4. 'not','a','very','helpful','site','in','finding','home','decor' 

Basically, I want to split out all the words and find the length of each text in the DataFrame.

I know word_tokenize works on a string, but how do I apply it to the entire DataFrame?

Please help!

Thanks in advance...


Your question is missing the input data, your code, and your expected output. Can you flesh it out? Thanks – EdChum


@EdChum: edited the question. Hope it has the required information now. – eclairs

Answers


You can use the apply method of the DataFrame API:

import pandas as pd
import nltk

df = pd.DataFrame({'sentences': [
    'This is a very good site. I will recommend it to others.',
    'Can you please give me a call at 9983938428. have issues with the listings.',
    'good work! keep it up',
]})

# Tokenize each row's text into a list of words and punctuation tokens.
df['tokenized_sents'] = df.apply(lambda row: nltk.word_tokenize(row['sentences']), axis=1)

Output:

>>> df
                                           sentences  \
0  This is a very good site. I will recommend it ...
1  Can you please give me a call at 9983938428. h...
2                              good work! keep it up

                                     tokenized_sents
0  [This, is, a, very, good, site, ., I, will, re...
1  [Can, you, please, give, me, a, call, at, 9983...
2                      [good, work, !, keep, it, up]

To find the length of each text, use apply and a lambda function again:

df['sents_length'] = df.apply(lambda row: len(row['tokenized_sents']), axis=1) 

>>> df
                                           sentences  \
0  This is a very good site. I will recommend it ...
1  Can you please give me a call at 9983938428. h...
2                              good work! keep it up

                                     tokenized_sents  sents_length
0  [This, is, a, very, good, site, ., I, will, re...            14
1  [Can, you, please, give, me, a, call, at, 9983...            15
2                      [good, work, !, keep, it, up]             6
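
As a side note, the same two steps can be written without a lambda by applying directly to the column; a minimal sketch, assuming the same df as above (the fillna('') guard is an addition: nltk.word_tokenize expects a string, so missing values in a real dataset would otherwise raise an error):

# Series-based equivalent of the two steps above; fillna('') guards
# against NaN cells, since nltk.word_tokenize only accepts strings.
df['tokenized_sents'] = df['sentences'].fillna('').apply(nltk.word_tokenize)
df['sents_length'] = df['tokenized_sents'].apply(len)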

How do we do this when there are multiple rows in the DataFrame? – eclairs


@eclairs, what do you mean? – Gregg


Getting this error message when trying to tokenize: – eclairs


pandas.Series.apply is faster than pandas.DataFrame.apply:

import time

import pandas as pd
import nltk

df = pd.read_csv("/path/to/file.csv")

# Time tokenization via Series.apply on the text column.
start = time.time()
df["unigrams"] = df["verbatim"].apply(nltk.word_tokenize)
print("series.apply", time.time() - start)

# Time the same tokenization via DataFrame.apply with a lambda.
start = time.time()
df["unigrams2"] = df.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)
print("dataframe.apply", time.time() - start)

On a sample 125 MB csv file, Series.apply was faster:

series.apply 144.428858995

dataframe.apply 201.884778976

Edit: You might be thinking that the DataFrame df is larger after series.apply(nltk.word_tokenize), which could affect the runtime of the next operation, dataframe.apply(nltk.word_tokenize).

Pandas optimizes under the hood for such a scenario; I got a similar runtime of 200s by running dataframe.apply(nltk.word_tokenize) on its own.
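
For reference, here is a self-contained sketch of the same timing comparison that needs no csv file; the synthetic 'verbatim' column of 100,000 repeated sentences is an assumption for illustration, so the absolute times will differ from the ones above:

import time

import pandas as pd
import nltk  # requires the punkt models: nltk.download('punkt')

# Synthetic stand-in for the 125 MB csv: one sentence repeated 100,000 times.
df = pd.DataFrame({'verbatim': ['This is a very good site. I will recommend it to others.'] * 100000})

start = time.time()
df['unigrams'] = df['verbatim'].apply(nltk.word_tokenize)
print('series.apply', time.time() - start)

# Run DataFrame.apply on a fresh copy without the tokenized column,
# to check that the frame's larger size is not what slows it down.
df2 = df[['verbatim']].copy()
start = time.time()
df2['unigrams2'] = df2.apply(lambda row: nltk.word_tokenize(row['verbatim']), axis=1)
print('dataframe.apply', time.time() - start)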