
The Scrapy stats dump looks like this while the code is running. How can I get the number of URLs that have already been fetched (request_count) from Scrapy?

2016-11-18 06:41:38 [scrapy] INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 656, 
'downloader/request_count': 2, 
'downloader/request_method_count/GET': 2, 
'downloader/response_bytes': 2661, 
'downloader/response_count': 2, 
'downloader/response_status_count/200': 2, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2016, 11, 18, 14, 41, 38, 759760), 
'item_scraped_count': 2, 
'log_count/DEBUG': 5, 
'log_count/INFO': 7, 
'response_received_count': 2, 
'scheduler/dequeued': 2, 
'scheduler/dequeued/memory': 2, 
'scheduler/enqueued': 2, 
'scheduler/enqueued/memory': 2, 
'start_time': datetime.datetime(2016, 11, 18, 14, 41, 37, 807590)} 

My goal is to access response_count or request_count from process_response, or from any other method of the spider.

I want to close the spider once N URLs have been crawled by it.

Answer


If you want to close the spider based on the number of completed requests, I suggest using the CLOSESPIDER_PAGECOUNT setting (https://doc.scrapy.org/en/latest/topics/extensions.html#closespider-pagecount):

settings.py

CLOSESPIDER_PAGECOUNT = 20  # close the spider after 20 pages have been crawled
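
The same limit can also be set per spider rather than globally, via Scrapy's custom_settings class attribute. A minimal sketch, assuming a hypothetical spider name and start URL:

import scrapy

class LimitedSpider(scrapy.Spider):
    name = 'limited'  # hypothetical spider name
    start_urls = ['http://example.com']  # placeholder URL

    # Per-spider override of the CloseSpider extension setting:
    # the spider is closed after 20 responses have been crawled.
    custom_settings = {
        'CLOSESPIDER_PAGECOUNT': 20,
    }

    def parse(self, response):
        yield {'url': response.url}

Note that the limit is not exact: requests already in flight when the threshold is hit may still be processed, so the final counts can slightly exceed 20.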

However, if you want to access the Scrapy stats inside the spider, you can do this:

self.crawler.stats.get_value('my_stat_name')  # e.g. 'downloader/response_count' or 'downloader/request_count'
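
Putting the two together, here is a minimal sketch that closes the spider from inside a callback once N responses have been received. The spider name, start URL, and max_responses value are assumptions; crawler.stats and the CloseSpider exception are standard Scrapy APIs:

import scrapy
from scrapy.exceptions import CloseSpider

class CountingSpider(scrapy.Spider):
    name = 'counting'  # hypothetical spider name
    start_urls = ['http://example.com']  # placeholder URL

    max_responses = 20  # assumed limit: close after this many responses

    def parse(self, response):
        # The keys match the stats dump above, e.g. 'downloader/response_count'.
        count = self.crawler.stats.get_value('downloader/response_count', 0)
        if count >= self.max_responses:
            raise CloseSpider(reason='reached response limit')
        # ... normal item extraction / link following would go here ...
        yield {'url': response.url}

Raising CloseSpider triggers a graceful shutdown, so requests already scheduled may still complete before the spider actually stops.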