
I have a CSV that starts with 3 columns: a cumulative percentage cost column, a cost column, and a keyword column. The R script below works on small files, but it dies completely (never finishes) when I feed it the real file, which has a million rows. Can you help me make this script more efficient? Token.Count is the column I am unable to create. Thanks! What is the best and most efficient way to count the tokens (words)?

# Token Histogram 

# Import CSV data from Report Downloader API Feed 
Mydf <- read.csv("Output_test.csv.csv", sep=",", header = TRUE, stringsAsFactors=FALSE) 

# Helps limit the dataframe according to the HTT segment 
# Change number to: 
# .99 for big picture 
# .8 for HEAD 
limitor <- Mydf$CumuCost <= .8 
# De-comment to ONLY measure TORSO 
#limitor <- (Mydf$CumuCost <= .95 & Mydf$CumuCost > .8) 
# De-comment to ONLY measure TAIL 
#limitor <- (Mydf$CumuCost <= 1 & Mydf$CumuCost > .95) 
# De-comment to ONLY measure Non-HEAD 
#limitor <- (Mydf$CumuCost <= 1 & Mydf$CumuCost > .8) 

# Creates a column with HTT segmentation labels 
# Creates a dataframe 
HTT <- data.frame() 
# Populates dataframe according to conditions 
HTT <- ifelse(Mydf$CumuCost <= .8,"HEAD",ifelse(Mydf$CumuCost <= .95,"TORSO","TAIL")) 
# Add the column to Mydf and rename it HTT 
Mydf <- transform(Mydf, HTT = HTT) 

# Count all KWs in account by using the dimension function 
KWportfolioSize <- dim(Mydf)[1] 

# Percent of portfolio 
PercentofPortfolio <- sum(limitor)/KWportfolioSize 

# Length of Keyword -- TOO SLOW 
# Uses the Tau package 
# My function takes the row number and returns the number of tokens 
library(tau) 
Myfun = function(n) { 
    sum(sapply(Mydf$Keyword.text[n], textcnt, split = "[[:space:][:punct:]]+", method = "string", n = 1L))} 
# Creates a dataframe to hold the results 
Token.Count <- data.frame() 
# Loops until last row and store it in data.frame 
for (i in c(1:dim(Mydf)[1])) {Token.Count <- rbind(Token.Count,Myfun(i))} 
# Add the column to Mydf 
Mydf <- transform(Mydf, Token.Count = Token.Count) 
# Not quite sure why but the column needs renaming in this case 
colnames(Mydf)[dim(Mydf)[2]] <- "Token.Count" 

Could you link to a chunk of sample data? Feel free to make it synthetic, just make it representative, so people can test their approaches and make sure they are faster. – 2010-12-10 21:12:56


CumuCost     Cost    Keyword.text 
0.004394288  678.5   north+face+outlet 
0.006698245  80.05   kinect sensor 
0.008738991  79.51   x box 360 250 
– datayoda 2010-12-10 22:47:12


'data.frame': 74231 obs. of 5 variables: 
 $ CumuCost    : num 0.00439 0.0067 0.00874 0.01067 0.01258 ... 
 $ Cost        : num 1678 880 780 736 731 ... 
 $ Keyword.text: chr "north+face+outlet" "kinect sensor" "x box 360 250" ... 
 $ HTT         : Factor w/ 1 level "HEAD": 1 1 1 1 1 1 1 1 1 1 ... 
 $ Token.Count : int 3 2 4 1 4 2 2 2 2 1 ... 
– datayoda 2010-12-10 22:51:07

Answer


Preallocate your storage and fill it in within the loop. Never do what you are doing, concatenating or rbind()/cbind()-ing objects inside a loop: R has to copy the object and allocate more storage at every iteration of the loop, and that overhead is what is crippling your code.

Create Token.Count with enough rows and columns up front and fill it in within the loop. Something like:

Token.Count <- matrix(ncol = ?, nrow = nrow(Mydf)) 
for (i in seq_len(nrow(Mydf))) { 
    Token.Count[i, ] <- Myfun(i) 
} 
Token.Count <- data.frame(Token.Count) 

Sorry I can't be more specific, but I don't know how many columns Myfun returns.
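
Looking at the Myfun in the question, it returns a single number per row (the sum), so a plain preallocated numeric vector should be enough. A minimal sketch under that assumption:

Token.Count <- numeric(nrow(Mydf))       # one slot per row, allocated once 
for (i in seq_len(nrow(Mydf))) { 
    Token.Count[i] <- Myfun(i)           # Myfun (from the question) returns one count 
} 
Mydf$Token.Count <- Token.Count 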


Update 1: Having taken a look at textcnt, I think you may be able to avoid the loop entirely. You have a data frame something like this:

DF <- data.frame(CumuCost = c(0.00439, 0.0067), Cost = c(1678, 880), 
       Keyword.text = c("north+face+outlet", "kinect sensor"), 
       stringsAsFactors = FALSE) 

If we pull out the keywords and convert them to a list:

keywrds <- with(DF, as.list(Keyword.text)) 
head(keywrds) 
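
For the two-row DF above, that gives a two-element list, something like:

[[1]] 
[1] "north+face+outlet" 

[[2]] 
[1] "kinect sensor" 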

then we can call textcnt recursively on this list to count the words in each list component:

countKeys <- textcnt(keywrds, split = "[[:space:][:punct:]]+", method = "string", 
        n = 1L, recursive = TRUE) 
head(countKeys) 

The above is almost what you had, except that I have added recursive = TRUE to process each vector of the input separately. The final step is to sapply the sum function over countKeys to get the number of words:

> sapply(countKeys, sum) 
[1] 3 2 

which appears to be what you were trying to achieve with your loop and function. Have I got this right?
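
Put together against the Mydf from the question (a sketch; it assumes the Keyword.text column shown in your str() output), Update 1 collapses to a few lines:

library(tau) 
# one list element per keyword string 
keywrds <- as.list(Mydf$Keyword.text) 
# count the words in each element, no explicit loop needed 
countKeys <- textcnt(keywrds, split = "[[:space:][:punct:]]+", 
                     method = "string", n = 1L, recursive = TRUE) 
Mydf$Token.Count <- sapply(countKeys, sum) 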


Update 2: OK, if fixing the preallocation problem and the vectorized way of using textcnt are still not as fast as you'd like, we can investigate other ways of counting words. It is quite possible you don't need all the functionality of textcnt to do what you want. [I can't check whether the solution below works for all of your data, but it is much faster.]

One potential solution is to split the Keyword.text vector into words with the built-in strsplit function. Using the keywrds list generated above, and only its first element, for example:

> length(unlist(strsplit(keywrds[[1]], split = "[[:space:][:punct:]]+"))) 
[1] 3 

To use this idea, it is probably easiest to wrap it in a user function:

fooFun <- function(x) { 
    length(unlist(strsplit(x, split = "[[:space:][:punct:]]+"), 
        use.names = FALSE, recursive = FALSE)) 
} 
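
(Worth noting: use.names = FALSE in the unlist() call skips building a names attribute on the result, a small saving per call that adds up when the function is applied many times.)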

We can then apply this function over the keywrds list:

> sapply(keywrds, fooFun) 
[1] 3 2 

For this simple example data set we get the same result. What about computation times? First the solution using textcnt, combining the two steps from Update 1:

> system.time(replicate(10000, sapply(textcnt(keywrds, 
+          split = "[[:space:][:punct:]]+", 
+          method = "string", n = 1L, 
+          recursive = TRUE), sum))) 
    user system elapsed 
    4.165 0.026 4.285 

and then the solution from Update 2:

> system.time(replicate(10000, sapply(keywrds, fooFun))) 

So even for this small sample there is considerable overhead in calling textcnt. Whether that difference holds up when the two approaches are applied to the full data set remains to be seen.

Finally, we should note that the strsplit approach can be vectorized to work directly on the Keyword.text vector in DF:

> sapply(strsplit(DF$Keyword.text, split = "[[:space:][:punct:]]+"), length) 
[1] 3 2 

This gives the same result as the other two approaches and is slightly faster than the non-vectorized use of strsplit:

> system.time(replicate(10000, sapply(strsplit(DF$Keyword.text, 
+        split = "[[:space:][:punct:]]+"), length))) 
    user system elapsed 
    0.732 0.001 0.734 

Are any of these faster on your full data set?

Minor update: Replicating DF to give 130 rows of data and timing the three approaches shows that the last one (the vectorized strsplit()) scales much better:

> DF2 <- rbind(DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF,DF) 
> dim(DF2) 
[1] 130 3 
> keywrds2 <- with(DF2, as.list(Keyword.text)) 
> system.time(replicate(10000, sapply(textcnt(keywrds2, split = "[[:space:][:punct:]]+", method = "string", n = 1L, recursive = TRUE), sum))) 
    user system elapsed 
238.266 1.790 241.404 
> system.time(replicate(10000, sapply(keywrds2, fooFun))) 
    user system elapsed 
28.405 0.007 28.511 
> system.time(replicate(10000, sapply(strsplit(DF2$Keyword.text,split = "[[:space:][:punct:]]+"), length))) 
    user system elapsed 
    7.497 0.011 7.528 
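
On that evidence, the vectorized strsplit() version is the one to try first on the million-row file. A sketch, assuming Mydf as in the question:

# vectorized: split every keyword string in one call, 
# then take the length of each resulting word vector 
Mydf$Token.Count <- sapply(strsplit(Mydf$Keyword.text, 
                                    split = "[[:space:][:punct:]]+"), 
                           length) 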

Not lightning fast, but it works fine. Thanks! – datayoda 2010-12-11 00:21:18