
R: Randomly read lines from a file with fread or equivalent?

I have a very large, multi-gigabyte file that is too costly to load into memory. However, the ordering of the lines in the file is not random. Is there a way to read in a random subset of the lines using something like fread?

Something like this, for example?

data <- fread("data_file", nrows_sample = 90000) 

A GitHub post suggests that one possibility is to do something like this:

fread("shuf -n 5 data_file") 

This doesn't work for me, however. Any ideas?
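For reference, shuf ships with GNU coreutils, so it is typically present on Linux but missing on stock Windows and macOS, which may be why the command form fails. Where it is available, the shell-command route does work; newer data.table releases take the command through the explicit cmd argument. A minimal sketch under those assumptions:

library(data.table) 

# read the header alone, then sample 90,000 data lines; 
# tail -n +2 drops the header so it cannot land inside the random sample 
header  <- fread("data_file", nrows = 0) 
sampled <- fread(cmd = "tail -n +2 data_file | shuf -n 90000", header = FALSE) 
setnames(sampled, names(header)) 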

Answers


Using the tidyverse (as opposed to data.table), you could do this:

library(readr) 
library(purrr) 
library(dplyr) 

# generate some random start positions between 1 and the number of rows 
# your file has, assuming you can ballpark that count 
# 
# Generating 900 integers because we'll grab 10 rows for each start, 
# giving us a total of 9000 rows in the final sample 
start_at <- floor(runif(900, min = 1, max = (n_rows_in_your_file - 10))) 

# sort the start positions so the reads proceed sequentially through the file 
start_at <- sort(start_at) 

# read in 10 rows at a time, starting at each random position; 
# col_names = FALSE, because after an arbitrary skip the first row read 
# would otherwise be treated as a header 
sample_of_rows <- map(start_at, ~read_csv("data_file", n_max = 10, skip = .x, col_names = FALSE)) %>% 
    bind_rows() 
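
If you cannot ballpark n_rows_in_your_file, counting lines first is cheap compared to parsing the file. A minimal sketch, assuming a Unix-like system with wc on the PATH (on Windows, R.utils::countLines is one alternative):

# count lines without parsing them; wc prints "<count> <filename>" 
n_rows_in_your_file <- as.integer( 
    sub(" .*", "", trimws(system("wc -l data_file", intern = TRUE))) 
) 

Note also that because the start positions are drawn independently, nearby starts can overlap, so the same row may appear more than once in the sample.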

If your data file happens to be a plain text file, this solution using the LaF package could be useful:

library(LaF) 

# Prepare dummy data 
mat <- matrix(sample(letters, 10 * 1000000, replace = TRUE), nrow = 1000000) 

dim(mat) 
#[1] 1000000  10 

write.table(mat, "tmp.csv", 
    row.names = FALSE, 
    col.names = FALSE,  # no header line, so it cannot end up in the sample 
    sep = ",", 
    quote = FALSE) 

# Read 90,000 random lines 
start <- Sys.time() 
random_mat <- sample_lines(filename = "tmp.csv", 
    n = 90000, 
    nlines = 1000000) 
random_mat <- do.call("rbind",strsplit(random_mat,",")) 
Sys.time() - start 
#Time difference of 1.135546 secs  

dim(random_mat) 
#[1] 90000 10
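
If you then want a data frame rather than a character matrix, something like this works (the V1..V10 column names are invented here, since the dummy file was written without a header):

random_df <- as.data.frame(random_mat, stringsAsFactors = FALSE) 
names(random_df) <- paste0("V", seq_len(ncol(random_df))) 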