
I'm pulling data from several pages of a website (Link here) and trying to merge everything into a single data frame. The site has a repeating URL pattern, so I build all the links in one place and then iterate over them with a for loop. Here is the block of code I'm working with: appending/merging into a vector inside a loop in R.

library(rvest)  ## needed for html_session(), html_nodes(), html_text()

ingredientsList <- c() 
links <- paste0("http://www.bbc.co.uk/food/ingredients/by/letter/", letters) 
# prints out: 
# http://www.bbc.co.uk/food/ingredients/by/letter/a 
# http://www.bbc.co.uk/food/ingredients/by/letter/b 
# http://www.bbc.co.uk/food/ingredients/by/letter/c and so on up to z 
for (i in 1:26) { 
    session <- html_session(links[i]) 
    ingredients <- session %>% html_nodes("ol:nth-child(4) a") %>% html_text() 
    ingredientsList <- c(ingredientsList, ingredients)  # grows the vector on every iteration
} 

The result, ingredientsList, should ideally contain all ingredients from 'A' to 'Z'. I'm trying to learn R and I'm fairly new to scraping, so I'd really appreciate some guidance. Thanks.


The code block runs fine, but it doesn't display the output in a suitable format. I'd like to know what format I should use, and whether the approach above is optimal. –

Answer


You'd be better off using a list instead of a vector, and you can build it directly with lapply, like this:

library(rvest) 
library(stringr) 

url <- "http://www.bbc.co.uk/food/ingredients/by/letter/" 
urls <- paste0(url, letters) 

ingredientsList <- lapply(urls, function(u) { 
    u %>% 
    html_session() %>% 
    html_nodes("ol:nth-child(4) a") %>% 
    html_text() %>% 
    str_replace_all(pattern = "\n|Related|\\(\\d\\)|\\s{2,}", replacement = "") %>% ## clean results (remove space, etc) 
    subset(!str_detect(., "^\\s{1}"))  ## keep only entries that don't start with leftover whitespace
}) 

names(ingredientsList) <- LETTERS 
str(ingredientsList) 
## List of 26 
## $ A: chr [1:33] "Acidulated water" "Ackee" "Acorn squash" "Aduki beans" ... 
## $ B: chr [1:101] "Bacon" "Bagel" "Baguette" "Baked beans" ... 
## $ C: chr [1:174] "Cabbage" "Caerphilly" "Cake" "Calasparra rice" ... 
## $ D: chr [1:31] "Dab" "Daikon" "Damsons" "Dandelion" ... 
## $ E: chr [1:15] "Edam" "Eel" "Egg" "Egg liqueur" ... 
## $ F: chr [1:50] "Farfalle" "Fat" "Fennel" "Fennel seeds" ... 
## $ G: chr [1:53] "Galangal" "Game" "Gammon" "Garam masala" ... 
## $ H: chr [1:30] "Habañero chillies" "Haddock" "Haggis" "Hake" ... 
## $ I: chr [1:5] "Ice cream" "Iceberg lettuce" "Icing" "Icing sugar" ... 
## $ J: chr [1:12] "Jaggery" "Jam" "January King cabbage" "Japanese pumpkin" ... 
## $ K: chr [1:12] "Kabana" "Kale" "Ketchup" "Ketjap manis" ... 
## $ L: chr [1:49] "Lager" "Lamb" "Lamb breast" "Lamb chop" ... 
## $ M: chr [1:76] "Macadamia" "Macaroni" "Macaroon" "Mace" ... 
## $ N: chr [1:14] "Naan bread" "Nachos" "Nashi" "Nasturtium" ... 
## $ O: chr [1:20] "Oatcakes" "Oatmeal" "Oats" "Octopus" ... 
## $ P: chr [1:109] "Paella" "Pak choi" "Palm sugar" "Pancakes" ... 
## $ Q: chr [1:6] "Quail" "Quail's egg" "Quark" "Quatre-épices" ... 
## $ R: chr [1:62] "Rabbit" "Rack of lamb" "Radicchio" "Radish" ... 
## $ S: chr [1:125] "Safflower oil" "Saffron" "Sage" "Salad" ... 
## $ T: chr [1:47] "T-bone steak" "Tabasco" "Taco" "Tagliatelle" ... 
## $ U: chr "Unleavened bread" 
## $ V: chr [1:18] "Vacherin" "Vanilla essence" "Vanilla extract" "Vanilla pod" ... 
## $ W: chr [1:38] "Waffles" "Walnut" "Walnut oil" "Wasabi" ... 
## $ X: chr(0) 
## $ Y: chr [1:4] "Yam" "Yeast" "Yellow lentil" "Yoghurt" 
## $ Z: chr [1:2] "Zander" "Zest" 

Alternatively, we can use an approach similar to your for loop:

n <- length(letters) 
ingredientsList <- vector(mode = "list", length = n) 
names(ingredientsList) <- LETTERS 

for (i in 1:n) { 
    session <- html_session(urls[i]) 
    ingredientsList[[i]] <- session %>% 
        html_nodes("ol:nth-child(4) a") %>% 
        html_text() 
} 

But the trick is to stick with a list to hold your results.
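Since your original goal was a single data frame, you can flatten the named list afterwards with a couple of lines of base R. A minimal sketch (the column names letter and ingredient are just placeholders I chose; lengths() needs R >= 3.2):

## Sketch: collapse the named list into one long data frame,
## repeating each letter once per ingredient found under it.
ingredients_df <- data.frame(
    letter = rep(names(ingredientsList), lengths(ingredientsList)),
    ingredient = unlist(ingredientsList, use.names = FALSE),
    stringsAsFactors = FALSE
)
head(ingredients_df)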


It works much better, @dickoa. I've also come across the term hierarchical scraping, where we extract information from sub-links found inside a link. Are there any resources you could suggest that I should look up? –
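As a rough illustration of that idea (a sketch only, not part of the answer above): each ingredient link on a letter page can be followed to its own page and scraped in turn. The CSS selector on the sub-page ("title" here) and the assumption that the hrefs are relative to http://www.bbc.co.uk are placeholders to show the pattern:

## Sketch: hierarchical scraping -- follow each ingredient's own link
## and pull something from the sub-page.
library(rvest)

page  <- read_html("http://www.bbc.co.uk/food/ingredients/by/letter/a")
hrefs <- page %>% html_nodes("ol:nth-child(4) a") %>% html_attr("href")  ## relative links

details <- lapply(hrefs, function(h) {
    sub_page <- read_html(paste0("http://www.bbc.co.uk", h))  ## assumes hrefs are relative
    sub_page %>% html_node("title") %>% html_text()           ## placeholder selector
})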