
I started a crawl with a CrawlSpider-derived class and paused it with Ctrl+C. When I run the command again to resume it, it does not continue: the resumed Scrapy crawl doesn't crawl anything, it just finishes.

My start and resume command:

scrapy crawl mycrawler -s JOBDIR=crawls/test5_mycrawl 

Scrapy creates the JOBDIR folder; its permissions are 777.

When I resume the crawl, it only outputs:

/home/adminuser/.virtualenvs/rg_harvest/lib/python2.7/site-packages/twisted/internet/_sslverify.py:184: UserWarning: You do not have the service_identity module installed. Please install it from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL tosupport it, Twisted can perform only rudimentary TLS client hostnameverification. Many valid certificate/hostname mappings may be rejected. 
    verifyHostname, VerificationError = _selectVerifyImplementation() 
2014-11-21 11:05:10-0500 [scrapy] INFO: Scrapy 0.24.4 started (bot: rg_harvest_scrapy) 
2014-11-21 11:05:10-0500 [scrapy] INFO: Optional features available: ssl, http11, django 
2014-11-21 11:05:10-0500 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'rg_harvest_scrapy.spiders', 'SPIDER_MODULES': ['rg_harvest_scrapy.spiders'], 'BOT_NAME': 'rg_harvest_scrapy'} 
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState 
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats 
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled item pipelines: ValidateMandatory, TypeConversion, ValidateRange, ValidateLogic, RestegourmetImagesPipeline, SaveToDB 
2014-11-21 11:05:10-0500 [mycrawler] INFO: Spider opened 
2014-11-21 11:05:10-0500 [mycrawler] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2014-11-21 11:05:10-0500 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023 
2014-11-21 11:05:10-0500 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080 
2014-11-21 11:05:10-0500 [mycrawler] DEBUG: Crawled (200) <GET http://eatsmarter.de/suche/rezepte> (referer: None) 
2014-11-21 11:05:10-0500 [mycrawler] DEBUG: Filtered duplicate request: <GET http://eatsmarter.de/suche/rezepte?page=1> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates) 
2014-11-21 11:05:10-0500 [mycrawler] INFO: Closing spider (finished) 
2014-11-21 11:05:10-0500 [mycrawler] INFO: Dumping Scrapy stats: 
    {'downloader/request_bytes': 225, 
    'downloader/request_count': 1, 
    'downloader/request_method_count/GET': 1, 
    'downloader/response_bytes': 19242, 
    'downloader/response_count': 1, 
    'downloader/response_status_count/200': 1, 
    'dupefilter/filtered': 29, 
    'finish_reason': 'finished', 
    'finish_time': datetime.datetime(2014, 11, 21, 16, 5, 10, 733196), 
    'log_count/DEBUG': 4, 
    'log_count/INFO': 7, 
    'request_depth_max': 1, 
    'response_received_count': 1, 
    'scheduler/dequeued': 1, 
    'scheduler/dequeued/disk': 1, 
    'scheduler/enqueued': 1, 
    'scheduler/enqueued/disk': 1, 
    'start_time': datetime.datetime(2014, 11, 21, 16, 5, 10, 528629)} 

I have only one start URL. Could that be the reason? My crawler uses a single start URL, then follows the pagination through a LinkExtractor rule, and calls parse_item via a second rule that matches a specific URL format:

My spider code:

from datetime import datetime

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

# MyItemLoader and RecipeItem are project-specific and defined elsewhere.


class MyCrawlSpiderBase(CrawlSpider):
    name = 'test_spider'

    testmode = True
    crawl_start = datetime.utcnow().isoformat()

    def __init__(self, testmode=True, urls=None, *args, **kwargs):
        self.testmode = bool(int(testmode))
        super(MyCrawlSpiderBase, self).__init__(*args, **kwargs)

    def parse_item(self, response):
        # Item values
        l = MyItemLoader(RecipeItem(), response=response)

        l.replace_value('url', response.url)
        l.replace_value('crawl_start', self.crawl_start)

        return l.load_item()


class MyCrawlSpider(MyCrawlSpiderBase):
    name = 'example_de'
    allowed_domains = ['example.de']
    start_urls = [
        "http://example.de",
    ]

    rules = (
        Rule(
            LinkExtractor(
                allow=('/search/entry\?page=',)
            )
        ),

        Rule(
            LinkExtractor(
                allow=('/entry/[0-9A-z\-]{3,}$',),
            ),
            callback='parse_item'
        ),
    )

    def parse_item(self, response):
        item = super(MyCrawlSpider, self).parse_item(response)

        l = MyItemLoader(item=item, response=response)

        l.replace_xpath("name", "//h1[@class='fn title']/text()")

        # (...)

        return l.load_item()
Please post your spider code. Are you using cookies? Or request serialization? – 2014-11-21 19:59:51

I've added the spider code. I don't use cookies, and I'm not sure whether I use request serialization... – user1383029 2014-11-22 16:17:14

Answers

1

If you press Ctrl+C twice (force stop), the crawl cannot be resumed. Press Ctrl+C once and wait for Scrapy to shut down gracefully.

5

Since your start URL is always the same, those requests are most likely being filtered as duplicates. You can solve this in two ways:

  1. In your settings.py file, add this line:
    DUPEFILTER_CLASS = 'scrapy.dupefilter.BaseDupeFilter'
    This replaces the default RFPDupeFilter with BaseDupeFilter, which does not filter out any requests. That may not be what you want if you actually need to filter out other duplicate requests unrelated to this problem.

  2. You can get more involved in how the requests are created and build them with dont_filter=True, which disables duplicate filtering on a per-request basis. To achieve this, remove start_urls and replace it with a start_requests() method that yields the requests to parse (a minimal sketch follows this list). See the official documentation for more details.
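
A minimal sketch of the second approach, assuming the spider class and start URL from the question and the Scrapy 0.24-era import path; only the seeding of the start URL changes, the rules and callbacks stay as they are:

from scrapy.http import Request

class MyCrawlSpider(MyCrawlSpiderBase):
    name = 'example_de'
    allowed_domains = ['example.de']
    # start_urls removed: the start request is built explicitly instead

    def start_requests(self):
        # dont_filter=True lets the start URL through even though the JOBDIR
        # dupe filter already recorded it during the paused run
        yield Request("http://example.de", dont_filter=True)

On resume, the start URL is then always re-fetched, while links that were already crawled and recorded in the JOBDIR stay filtered, so only the remaining pages are requested.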