
We have already scraped the first page of www.theft-alerts.com with the code below. How can we scrape the next pages (links)?

import urllib2
from bs4 import BeautifulSoup

connection = urllib2.urlopen('http://www.theft-alerts.com')
soup = BeautifulSoup(connection.read().replace("<br>", "\n"), "html.parser")

theftalerts = []
for sp in soup.select("table div.itemspacingmodified"):
    for wd in sp.select("div.itemindentmodified"):
        text = wd.text
        if not text.startswith("Images :"):
            print(text)

Output for the first page:

STOLEN : A LARGE TAYLORS OF LOUGHBOROUGH BELL 
Stolen from Bromyard on 7 August 2014 
Item : The bell has a diameter of 37 1/2" is approx 3' tall weighs just shy of half a ton and was made by Taylor's of Loughborough in 1902. It is stamped with the numbers 232 and 11. 

The bell had come from Co-operative Wholesale Society's Crumpsall Biscuit Works in Manchester. 
Any info to : PC 2361. Tel 0300 333 3000 
Messages : Send a message 
Crime Ref : 22EJ/50213D-14 

No of items stolen : 1 

Location : UK > Hereford & Worcs 
Category : Shop, Pub, Church, Telephone Boxes & Bygones 
ID : 84377 
User : 1 ; Antique/Reclamation/Salvage Trade ; (Administrator) 
Date Created : 11 Aug 2014 15:27:57 
Date Modified : 11 Aug 2014 15:37:21; 

The site has more pages (1 to 19), but we only get page 1. How can we get the remaining pages?

We tried this:

connection = urllib2.urlopen('http://www.theft-alerts.com', 'http://www.theft-alerts.com/index-2.html', 'http://www.theft-alerts.com/index-3.html', 'http://www.theft-alerts.com/index-4.html','http://www.theft-alerts.com/index-5.html', 'http://www.theft-alerts.com/index-6.html', 'http://www.theft-alerts.com/index-7.html') 

But that doesn't work. Output:

"You can't pass both context and any of cafile, capath, and " 
ValueError: You can't pass both context and any of cafile, capath, and cadefault 

Answers


You can get the links to the next pages by accessing the `code` tag with the `resultnav` class and iterating over the `a` tags inside it:

pages_nav = soup.find('code', class_='resultnav')
pages_links = pages_nav.find_all('a')
# Access each link's `href` attribute after that
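
For example, the discovered links can be fed back into the same scraping loop. A minimal sketch, assuming the hrefs inside `resultnav` are relative paths like "index-2.html" (an assumption based on the URLs tried in the question):

import urllib2
from bs4 import BeautifulSoup

# `soup` is the parsed first page from the question's code
pages_nav = soup.find('code', class_='resultnav')
for a in pages_nav.find_all('a'):
    href = a['href']  # assumed relative, e.g. "index-2.html"
    page = urllib2.urlopen("http://www.theft-alerts.com/" + href)
    page_soup = BeautifulSoup(page.read().replace("<br>", "\n"), "html.parser")
    # run the question's select() loops on page_soup to extract the alerts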

I think you should go into a bit more detail and explain why his code doesn't work and what yours does – Whitefret


@Whitefret Actually, he didn't have any code at that point – FrozenHeart


Looking at the code that scrapes the next-page links: he was trying to open all the URLs at once – Whitefret


Why not loop over the index numbers?

for i in range(1, 20):
    connection = urllib2.urlopen("http://www.theft-alerts.com/index-%i.html" % i)
    # process the file here

A more general solution that keeps going until the next page is no longer a valid link:

i = 1
while True:
    try:
        conn = urllib2.urlopen("http://www.theft-alerts.com/index-%i.html" % i)
    except urllib2.HTTPError:  # no more pages; perhaps retry a couple of times
        break
    # process the file here
    i += 1

The problem with your code is that you are trying to pass several links to urllib2.urlopen at once, and that is not how it works. You need to pass each link separately and then process each response.
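
A minimal sketch of that pattern, using the same URLs as the failing call (one urlopen call per link):

import urllib2

urls = ["http://www.theft-alerts.com"] + \
    ["http://www.theft-alerts.com/index-%i.html" % i for i in range(2, 8)]

for url in urls:
    conn = urllib2.urlopen(url)  # one request per link
    html = conn.read()
    # parse each response here, e.g. with BeautifulSoup as on page 1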

Here is the signature of urlopen, which should explain the error you are seeing:

def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
            cafile=None, capath=None, cadefault=False, context=None)
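
To spell out how the error arises (my reading of the signature above): the seven URLs in the failing call are bound positionally, so they land like this:

# urlopen(url='http://www.theft-alerts.com',
#         data='http://www.theft-alerts.com/index-2.html',
#         timeout='http://www.theft-alerts.com/index-3.html',
#         cafile='http://www.theft-alerts.com/index-4.html',
#         capath='http://www.theft-alerts.com/index-5.html',
#         cadefault='http://www.theft-alerts.com/index-6.html',
#         context='http://www.theft-alerts.com/index-7.html')
# urlopen sees both `context` and `cafile`/`capath` set and raises the
# ValueError quoted in the question before making any request.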

I think he wants a general solution - maybe there will be more than 19 pages – FrozenHeart