
My specific use case is this: I have a scraper crawling a website, and whenever an item is produced, a bound signal handler sets a key in Redis with an expiry. The next time the scraper runs, it should ignore every URL whose key still exists in Redis. Is it possible to silently dequeue a request in Scrapy?
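For context, a minimal sketch of what that first piece might look like, assuming a redis-py client, that the scraped URL is available on the response, and an arbitrary key prefix and TTL (all names here are illustrative, not the actual code):

import redis
from scrapy import signals

class DeferScrapedUrls(object):
    """Marks scraped URLs in Redis with an expiry so later runs can skip them (sketch)."""

    def __init__(self):
        self.client = redis.StrictRedis()  # assumed default host/port

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)
        return ext

    def item_scraped(self, item, response, spider):
        # Key the URL the item came from and let it expire after a day.
        self.client.set('deferred:%s' % response.url, 1, ex=86400)

# enabled e.g. via EXTENSIONS = {'myproject.extensions.DeferScrapedUrls': 500}  (assumed path)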

I have the first part working fine; for the second part I created a DownloaderMiddleware with a process_request function that looks at the incoming request object and checks whether its URL exists in Redis. If it does, it raises an IgnoreRequest exception.

What I want to know is: is there a way to quietly dequeue the request instead of raising an exception? It's more of an aesthetic thing than a hard requirement; I just don't want to see these errors in my error logs - I only want to see real errors.

I took a peek at the Scrapy source to see what they use for duplicate filtering in the main scheduler (scrapy/core/scheduler.py):

def enqueue_request(self, request):
    if not request.dont_filter and self.df.request_seen(request):
        self.df.log(request, self.spider)
        return False
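That self.df is the dupefilter, which is pluggable through the DUPEFILTER_CLASS setting, so one option is to make the dupefilter itself aware of Redis: anything request_seen() reports as seen is dropped by the scheduler with at most a debug-level log line. Below is a rough, untested sketch of that idea; the class name, key scheme, and Redis connection defaults are illustrative assumptions:

import redis
from scrapy.dupefilters import RFPDupeFilter

class RedisDeferDupeFilter(RFPDupeFilter):
    """Treats URLs marked as deferred in Redis as already seen (illustrative sketch)."""

    def __init__(self, path=None, debug=False):
        super(RedisDeferDupeFilter, self).__init__(path, debug)
        self.client = redis.StrictRedis()  # assumed default host/port

    def request_seen(self, request):
        # A deferred URL is reported as "seen", so the scheduler drops it
        # quietly instead of anything being raised or logged as an error.
        if self.client.exists('deferred:%s' % request.url):
            return True
        return super(RedisDeferDupeFilter, self).request_seen(request)

# settings.py -- the module path is an assumption:
# DUPEFILTER_CLASS = 'myproject.dupefilters.RedisDeferDupeFilter'

Note that only requests scheduled without dont_filter=True go through this check.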

Can you share what you are seeing in the logs? And what would you want instead? Regarding duplicate filtering: the default is indeed quite simple, based on [the request fingerprint (computed from the canonicalized URL, HTTP method, body and headers)](https://github.com/scrapy/scrapy/blob/75cd056223a5a8da87a361aee42a541afcf27553/scrapy/utils/request.py#L19), but [the dupefilter is customizable](http://doc.scrapy.org/en/latest/topics/settings.html?#std:setting-DUPEFILTER_CLASS). If you have ideas on how to improve, simplify or clarify the design, feel free to open a discussion on Github. –


ERROR: Error caught on signal handler: > Traceback (most recent call last): File "/Library/Python/2.7/site-packages/scrapy/utils/signal.py", line 26, in send_catch_log *arguments, **named) File "/Library/Python/2.7/site-packages/scrapy/xlib/pydispatch/robustapply.py", line 57, in robustApply return receiver(*arguments, **named) File ".../deferred.py", line 22, in process_request raise IgnoreRequest('URL is deferred') IgnoreRequest: URL is deferred – infomaniac


My middleware looks like this: 'def __init__(self, crawler): self.client = Redis() self.crawler = crawler self.crawler.signals.connect(self.process_request, signals.request_scheduled) def process_request(self, request, spider): if not self.client.is_deferred(request.url): # URL is not deferred, proceed as normal return None raise IgnoreRequest('URL is deferred')' – infomaniac

Answers

Scrapy uses the Python logging module for its logging. Since what you want is purely cosmetic, you could write a logging filter to filter out the messages you don't want to see.
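A minimal sketch of such a filter, assuming the noisy records are the "Error caught on signal handler" errors shown in the comments above and that they carry the exception in exc_info (the logger name is inferred from the traceback and may differ across Scrapy versions):

import logging
from scrapy.exceptions import IgnoreRequest

class DropIgnoreRequestRecords(logging.Filter):
    """Suppresses log records whose attached exception is an IgnoreRequest (sketch)."""

    def filter(self, record):
        if record.exc_info and record.exc_info[0] is not None:
            if issubclass(record.exc_info[0], IgnoreRequest):
                return False  # drop this record
        return True  # keep everything else

# Attach it to the logger that emits the signal-handler errors, e.g. at spider start-up:
logging.getLogger('scrapy.utils.signal').addFilter(DropIgnoreRequestRecords())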

Middleware code from the OP's comments:

def __init__(self, crawler):
    self.client = Redis()
    self.crawler = crawler
    self.crawler.signals.connect(self.process_request, signals.request_scheduled)

def process_request(self, request, spider):
    if not self.client.is_deferred(request.url):  # URL is not deferred, proceed as normal
        return None
    raise IgnoreRequest('URL is deferred')

The problem is the signal handler you added for signals.request_scheduled. If it raises an exception, it will appear in the logs.

I believe registering process_request as a signal handler here is incorrect (or unintended).

I was able to reproduce your console errors with a similar (not correct) test middleware that ignores every other request it sees:

from scrapy import log, signals
from scrapy.exceptions import IgnoreRequest

class TestMiddleware(object):

    def __init__(self, crawler):
        self.counter = 0

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler)
        crawler.signals.connect(o.open_spider, signals.spider_opened)

        # this always raises an exception and will trigger errors in the console
        crawler.signals.connect(o.process, signals.request_scheduled)
        return o

    def open_spider(self, spider):
        spider.logger.info('TestMiddleware.open_spider()')

    def process_request(self, request, spider):
        spider.logger.info('TestMiddleware.process_request()')
        self.counter += 1
        if (self.counter % 2) == 0:
            raise IgnoreRequest("ignoring request %d" % self.counter)

    def process(self, *args, **kwargs):
        raise Exception

Here is what the console says when running the spider with this middleware:

2016-04-06 00:16:58 [scrapy] ERROR: Error caught on signal handler: <bound method ?.process of <mwtest.middlewares.TestMiddleware object at 0x7f83d4a73f50>> 
Traceback (most recent call last): 
    File "/home/paul/.virtualenvs/scrapy11rc3.py27/local/lib/python2.7/site-packages/scrapy/utils/signal.py", line 30, in send_catch_log 
    *arguments, **named) 
    File "/home/paul/.virtualenvs/scrapy11rc3.py27/local/lib/python2.7/site-packages/pydispatch/robustapply.py", line 55, in robustApply 
    return receiver(*arguments, **named) 
    File "/home/paul/tmp/mwtest/mwtest/middlewares.py", line 26, in process 
    raise Exception 
Exception 

The code is here

Compare with this:

$ cat middlewares.py
from scrapy import log, signals
from scrapy.exceptions import IgnoreRequest

class TestMiddleware(object):

    def __init__(self, crawler):
        self.counter = 0

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler)
        crawler.signals.connect(o.open_spider, signals.spider_opened)
        return o

    def open_spider(self, spider):
        spider.logger.info('TestMiddleware.open_spider()')

    def process_request(self, request, spider):
        spider.logger.info('TestMiddleware.process_request()')
        self.counter += 1
        if (self.counter % 2) == 0:
            raise IgnoreRequest("ignoring request %d" % self.counter)

The IgnoreRequest is not printed in the logs, but you do get the exception counts in the stats at the end:

$ scrapy crawl httpbin 
2016-04-06 00:27:24 [scrapy] INFO: Scrapy 1.1.0rc3 started (bot: mwtest) 
(...) 
2016-04-06 00:27:24 [scrapy] INFO: Enabled downloader middlewares: 
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 
'scrapy.downloadermiddlewares.retry.RetryMiddleware', 
'mwtest.middlewares.TestMiddleware', 
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware', 
'scrapy.downloadermiddlewares.stats.DownloaderStats'] 
(...) 
2016-04-06 00:27:24 [scrapy] INFO: Spider opened 
2016-04-06 00:27:24 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.open_spider() 
2016-04-06 00:27:24 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.process_request() 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.process_request() 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.process_request() 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.process_request() 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.process_request() 
2016-04-06 00:27:24 [scrapy] DEBUG: Crawled (200) <GET http://www.httpbin.org/user-agent> (referer: None) 
2016-04-06 00:27:25 [scrapy] DEBUG: Crawled (200) <GET http://www.httpbin.org/> (referer: None) 
2016-04-06 00:27:25 [scrapy] DEBUG: Crawled (200) <GET http://www.httpbin.org/headers> (referer: None) 
2016-04-06 00:27:25 [scrapy] INFO: Closing spider (finished) 
2016-04-06 00:27:25 [scrapy] INFO: Dumping Scrapy stats: 
{'downloader/exception_count': 2, 
'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 2, 
'downloader/request_bytes': 665, 
'downloader/request_count': 3, 
'downloader/request_method_count/GET': 3, 
'downloader/response_bytes': 13006, 
'downloader/response_count': 3, 
'downloader/response_status_count/200': 3, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2016, 4, 5, 22, 27, 25, 596652), 
'log_count/DEBUG': 4, 
'log_count/INFO': 13, 
'log_count/WARNING': 1, 
'response_received_count': 3, 
'scheduler/dequeued': 5, 
'scheduler/dequeued/memory': 5, 
'scheduler/enqueued': 5, 
'scheduler/enqueued/memory': 5, 
'start_time': datetime.datetime(2016, 4, 5, 22, 27, 24, 661345)} 
2016-04-06 00:27:25 [scrapy] INFO: Spider closed (finished) 