2016-06-28

I am trying to scrape data from a website's HTML, but `data_table` comes back empty. When I traced the code and tried to print the data, it returned the HTML of a different page, and BeautifulSoup's `find_all` could not find the div data.

import requests 
from bs4 import BeautifulSoup 
import html.parser 
from html.parser import HTMLParser 
import time 
from random import randint 
import sys 
from IPython.display import clear_output 
import pymysql 

links = ['https://www.ptt.cc/bbs/Gossiping/index'+str(i+1)+'.html' for i in range(10)] 
data_links = [] 

for link in links: 
    res = requests.get(link) 
    soup = BeautifulSoup(res.text.encode("utf-8"),"html.parser") 
    data_table = soup.findAll("div",{"id":"r-ent"}) 
    print(data_table) 

Can you paste the HTML structure you want to get the data from? Also check the `res` value you get back from the GET request. – min2bro

Answer


When you visit the page in a browser you have to confirm you are over 18 before you can see the actual content. So to get the page, you need to send a POST to https://www.ptt.cc/ask/over18 with the data `yes=yes` and `from="/bbs/Gossiping/index{the_number}.html"`. If you print the returned source you can see the form:

<form action="/ask/over18" method="post"> 
    <input type="hidden" name="from" value="/bbs/Gossiping/index1.html"> 
    <div class="over18-button-container"> 
     <button class="btn-big" type="submit" name="yes" value="yes">我同意,我已年滿十八歲<br><small>進入</small></button> 
    </div> 
    <div class="over18-button-container"> 
     <button class="btn-big" type="submit" name="no" value="no">未滿十八歲或不同意本條款<br><small>離開</small></button> 
    </div> 
</form> 

Also, there is no id `r-ent` on the page, only divs with that class:

import requests 
from bs4 import BeautifulSoup 

links = ['https://www.ptt.cc/bbs/Gossiping/index{}.html'.format(i) for i in range(1, 11)] 
data_links = [] 
data = {"yes":"yes"} 
head = {"User-Agent":"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"} 

for ind, link in enumerate(links, 1): 
    with requests.Session() as s: 
     data["from"] = "/bbs/Gossiping/index{}.html".format(ind) 
     s.post("https://www.ptt.cc/ask/over18", data=data, headers=head) 
     res = s.get(link, headers=head) 
     soup = BeautifulSoup(res.text,"html.parser") 
     data_divs= soup.select("div.r-ent") 
     print(data_divs) 

The code above gets you all the divs with the class `r-ent`.
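Once you have the `r-ent` divs, you will usually want each post's title and link. A minimal sketch of that step, run here against a hypothetical sample of the index markup (the `div.title > a` and `div.author` selectors are assumptions about the page structure, not confirmed by the answer above):

```python
from bs4 import BeautifulSoup

# Hypothetical sample of the index page markup (not fetched live).
sample = """
<div class="r-ent">
  <div class="title"><a href="/bbs/Gossiping/M.123.A.html">[問卦] Example post</a></div>
  <div class="meta"><div class="author">someuser</div></div>
</div>
"""

soup = BeautifulSoup(sample, "html.parser")
posts = []
for div in soup.select("div.r-ent"):
    a = div.select_one("div.title a")  # deleted posts may have no <a>
    if a is None:
        continue
    posts.append({
        "title": a.get_text(strip=True),
        "link": "https://www.ptt.cc" + a["href"],
        "author": div.select_one("div.author").get_text(strip=True),
    })

print(posts)
```

Guarding against a missing `<a>` matters on PTT because deleted posts keep their `r-ent` row but lose the title link.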

Using a Session, a single POST should be enough, since the cookie will be stored, so the following should also work fine:

links = ['https://www.ptt.cc/bbs/Gossiping/index{}.html'.format(i) for i in range(1, 11)] 
data_links=[] 
data = {"yes":"yes"} 
head = {"User-Agent":"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"} 
with requests.Session() as s: 
    data["from"] = "/bbs/Gossiping/index1.html" 
    s.post("https://www.ptt.cc/ask/over18", data=data, headers=head) 
    for link in links: 
     res = s.get(link, headers=head) 
     soup = BeautifulSoup(res.text, "html.parser") 
     data_divs = soup.select("div.r-ent") 
     print(data_divs) 
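As an alternative to POSTing the form at all, the age check boils down to a cookie, so you can set it on the Session directly. This sketch assumes the `/ask/over18` handler stores the confirmation in an `over18` cookie with value `1`, which may change if the site is updated:

```python
import requests

s = requests.Session()
# Assumption: the age gate is a plain "over18" cookie; setting it by
# hand skips the POST to /ask/over18 entirely.
s.cookies.set("over18", "1", domain=".ptt.cc")

# With the cookie in place, s.get("https://www.ptt.cc/bbs/Gossiping/index1.html")
# should return the real listing instead of the age-gate form.
print(s.cookies.get("over18"))
```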