Scrapy recursive crawl fails to scrape all pages

I want to recursively scrape data from a Chinese website. I have my spider follow the "next page" link until no "next page" link is available. Here is my spider:
import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from hrb.items_hrb import HrbItem

class HrbSpider(CrawlSpider):
    name = "hrb"
    allowed_domains = ["www.harbin.gov.cn"]
    start_urls = ["http://bxt.harbin.gov.cn/hrb_bzbxt/list_hf.php"]

    rules = (
        # u'\u4e0b\u4e00\u9875' is the escaped form of "下一页" ("next page")
        Rule(SgmlLinkExtractor(allow=(), restrict_xpaths=(u'//a[@title="\u4e0b\u4e00\u9875"]',)),
             callback="parse_items", follow=True),
    )

    def parse_items(self, response):
        items = []
        # Skip the header row of the listing table
        for sel in response.xpath("//table[3]//tr[position() > 1]"):
            item = HrbItem()
            item['id'] = sel.xpath("td[1]/text()").extract()[0]
            title = sel.xpath("td[3]/a/text()").extract()[0]
            item['title'] = title.encode('gbk')
            item['time1'] = sel.xpath("td[3]/text()").extract()[0][2:12]
            item['time2'] = sel.xpath("td[5]/text()").extract()[1]
            items.append(item)
        return items
The problem is that it only scraped the first 15 pages. I browsed to page 15 myself and there is still a "next page" button there, so why did the spider stop? Is the website deliberately blocking scraping, or is there a problem in my code? And if only 15 pages can be scraped per run, is there a way to start the scrape from a given page instead of from the beginning? Many thanks!
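On the last point, one possible workaround (a hedged sketch only: it assumes the listing script accepts a page-number query parameter, and the real parameter name would have to be confirmed in the browser's address bar while clicking "下一页") is to seed `start_urls` with later pages directly instead of always starting from page 1:

```python
# Hypothetical sketch: the "page" parameter name is an assumption and must be
# verified against the actual pagination URLs of list_hf.php.
BASE = "http://bxt.harbin.gov.cn/hrb_bzbxt/list_hf.php"

def resume_urls(start_page, end_page, param="page"):
    """Build listing-page URLs for pages start_page..end_page (inclusive),
    suitable for use as a spider's start_urls."""
    return ["%s?%s=%d" % (BASE, param, n)
            for n in range(start_page, end_page + 1)]

# e.g. resume a second run from page 16 onward
urls = resume_urls(16, 18)
```

With such a list assigned to `start_urls`, the same `rules` would then continue following "next page" links from wherever the seed pages leave off.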