
I have Scrapy crawling my site to find links that return a 404 response and write them out to a JSON file, and that part works fine. What I'm after is getting all instances of 404 errors using Scrapy.

However, I can't figure out how to capture every instance of a broken link, because the duplicate filter catches repeated links rather than retrying them.
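
(For context, Scrapy's scheduler runs every request through its duplicate filter, keyed on a URL fingerprint, so only the first occurrence of each broken URL ever reaches the callback. Below is a minimal sketch of bypassing that per request by marking the requests a Rule extracts with dont_filter=True via its process_request hook. The helper name bypass_dupefilter is mine, the one-argument signature assumes an older Scrapy, and turning filtering off like this can make the crawl revisit pages.)

def bypass_dupefilter(request):
    # dont_filter=True tells the scheduler to skip the duplicate filter,
    # so repeated occurrences of the same URL are all fetched again
    return request.replace(dont_filter=True)

rules = (Rule(LxmlLinkExtractor(deny=(['/android/'])),
              callback='parse_item',
              follow=True,
              process_request=bypass_dupefilter),)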

Since our site has thousands of pages, with sections maintained by several different teams, I need to be able to produce a broken-link report for each section, rather than one report for the whole site that someone then has to search through and fix everywhere.
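
(One shape the per-section reports could take, as a sketch only: the class name SectionReportPipeline, the file naming, and the rule "section = first path segment of the referring page" are all my assumptions, and it expects Python 3. It is an item pipeline, enabled via ITEM_PIPELINES in settings.py, that buckets each 404 item by its referer and writes one JSON file per section when the crawl ends.)

import json
from collections import defaultdict
from urllib.parse import urlparse

class SectionReportPipeline(object):
    """Sketch: group broken-link items by the first path segment of their referer."""

    def open_spider(self, spider):
        self.by_section = defaultdict(list)

    def process_item(self, item, spider):
        referer = item.get('referer') or b''
        if isinstance(referer, bytes):
            # Header values come back as bytes; decode for grouping and JSON output
            referer = referer.decode('utf-8', 'ignore')
        # e.g. a referer of https://example.com/docs/page puts the item in "docs"
        section = urlparse(referer).path.strip('/').split('/')[0] or 'root'
        entry = dict(item)
        entry['referer'] = referer
        self.by_section[section].append(entry)
        return item

    def close_spider(self, spider):
        # One report per site section, e.g. broken_links_docs.json
        for section, items in self.by_section.items():
            with open('broken_links_%s.json' % section, 'w') as f:
                json.dump(items, f, indent=2)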

Any help would be greatly appreciated.

My current spider:

import scrapy
from datetime import datetime
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from scrapy.item import Item, Field

# Item for exporting to JSON
class DevelopersLinkItem(Item):
    url = Field()
    referer = Field()
    link_text = Field()
    status = Field()
    time = Field()

class DevelopersSpider(CrawlSpider):
    """Subclasses CrawlSpider to crawl the given site and parse each link to JSON"""

    # Spider name to be used when calling from the terminal
    name = "developers_prod"

    # Allow only the given host name(s)
    allowed_domains = ["example.com"]

    # Start crawling from this URL
    start_urls = ["https://example.com"]

    # Which non-2xx statuses should be passed to the callback instead of being dropped
    handle_httpstatus_list = [404]

    # Rules on how to extract links from the DOM, which URLs to deny, and an optional callback
    rules = (Rule(LxmlLinkExtractor(deny=(['/android/'])), callback='parse_item', follow=True),)

    # Called for each requested page and used for parsing the response
    def parse_item(self, response):
        if response.status == 404:
            item = DevelopersLinkItem()
            item['url'] = response.url
            item['referer'] = response.request.headers.get('Referer')
            item['link_text'] = response.meta.get('link_text')
            item['status'] = response.status
            # Timestamp the hit; datetime.now() replaces the undefined self.now
            item['time'] = datetime.now().strftime("%Y-%m-%d %H:%M")
            return item
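
(The spider is presumably run with the built-in JSON feed export, along the lines of the following; the output filename is just an example.)

scrapy crawl developers_prod -o broken_links.json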

I have tried a few custom de-duplication filters, but in the end none of them worked.
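
(For what it's worth, the bluntest form such a filter can take already ships with Scrapy: scrapy.dupefilters.BaseDupeFilter never marks a request as seen. A settings sketch follows, with the caveat that disabling deduplication entirely can send the crawl into loops between pages that link to each other.)

# settings.py (sketch): turn off URL deduplication for the whole crawl.
# BaseDupeFilter.request_seen() always returns False, so every extracted
# link is scheduled, including repeats of already-visited pages.
DUPEFILTER_CLASS = 'scrapy.dupefilters.BaseDupeFilter'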

Answer