2017-04-19

I have been reading through the Scrapy documentation today, trying to get a working version of the tutorial spider - https://docs.scrapy.org/en/latest/intro/tutorial.html#our-first-spider - on a real-world example. My example is slightly different in that it has two further pages to follow, i.e. I need Scrapy to scrape pages from a second set of links:

START_URL > City page > Unit page

It is the unit pages that I want to get data from.

My code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    def parse(self, response):
        for quote in response.css('div.property-body'):
            yield {
                'name': quote.xpath('//span/a/text()').extract(),
                'type': quote.xpath('//div/h4/text()').extract(),
                'price_amens': quote.xpath('//div/p/text()').extract(),
                'distance_beds': quote.xpath('//li/p/text()').extract()
            }

            # Purpose is to crawl links of cities
            next_page = response.css('a.listing-item__link::attr(href)').extract_first()
            if next_page is not None:
                next_page = response.urljoin(next_page)
                yield scrapy.Request(next_page, callback=self.parse)

            # Purpose is to crawl links of units
            next_unit_page = response.css(response.css('a.text-highlight__inner::attr(href)').extract_first())
            if next_unit_page is not None:
                next_unit_page = response.urljoin(next_unit_page)
                yield scrapy.Request(next_unit_page, callback=self.parse)

But when I run this I get:

INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 

So I assume my code is not set up to retrieve the links in the flow above, but I am not sure how best to do that.

Updated flow:

Homepage > City page > Building page > Unit page

It is still the unit pages that I want to get data from.

Updated code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    def parse(self, response):
        for quote in response.css('div.site-wrapper'):
            yield {
                'area_name': quote.xpath('//div/ul/li/a/span/text()').extract(),
                'type': quote.xpath('//div/div/div/h1/span/text()').extract(),
                'period': quote.xpath('/html/body/div/div/section/div/form/h4/span/text()').extract(),
                'duration_weekly': quote.xpath('//html/body/div/div/section/div/form/div/div/em/text()').extract(),
                'guide_total': quote.xpath('//html/body/div/div/section/div/form/div/div/p/text()').extract(),
                'amenities': quote.xpath('//div/div/div/ul/li/p/text()').extract(),
            }

            # Purpose is to crawl links of cities
            next_page = response.xpath('//html/body/div/footer/div/div/div/ul/li/a[@class="listing-item__link"]/@href').extract()
            if next_page is not None:
                next_page = response.urljoin(next_page)
                yield scrapy.Request(next_page, callback=self.parse)

            # Purpose is to crawl links of units
            next_unit_page = response.xpath('//li/div/h3/span/a/@href').extract()
            if next_unit_page is not None:
                next_unit_page = response.urljoin(next_unit_page)
                yield scrapy.Request(next_unit_page, callback=self.parse)

            # Purpose is to crawl pages of full unit info
            last_unit_page = response.xpath('//div/div/div[@class="content__btn"]/a/@href').extract()
            if last_unit_page is not None:
                last_unit_page = response.urljoin(last_unit_page)
                yield scrapy.Request(last_unit_page, callback=self.parse)

Did you test your XPaths with 'scrapy shell http://www.unitestudents.com/'? –
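
A minimal sketch of such a check in scrapy shell (the selectors shown are just ones taken from the question's code; whether they match the live site is not verified here):

$ scrapy shell http://www.unitestudents.com/
>>> # In the shell, `response` holds the downloaded page; selectors can be tried interactively.
>>> response.css('a.listing-item__link::attr(href)').extract_first()
>>> response.xpath('//div/h4/text()').extract()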


@paultrmbrth I tested them, but noticed some flaws, so I made some modifications - see the update. What do you think? – Maverick

Answer


Let's start with the logic:

  1. Scrape the homepage - get all the cities
  2. Scrape the city pages - get all the unit URLs
  3. Scrape the unit pages - get all the data you want

I have made an example of how you could implement this in the scrapy spider below. I could not find all the information you mention in your example code, but I hope the code is clear enough for you to understand what it does and how to add the information you need.

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    # Step 1
    def parse(self, response):
        # Select all cities listed in the dropdown (exclude the "Select your city" option)
        for city in response.xpath('//select[@id="frm_homeSelect_city"]/option[not(contains(text(),"Select your city"))]/text()').extract():
            yield scrapy.Request(response.urljoin("/" + city), callback=self.parse_citypage)

    # Step 2
    def parse_citypage(self, response):
        # Select the url of each property on the city page
        for url in response.xpath('//div[@class="property-header"]/h3/span/a/@href').extract():
            yield scrapy.Request(response.urljoin(url), callback=self.parse_unitpage)

        # I could not find any pagination. Otherwise it would go here.

    # Step 3
    def parse_unitpage(self, response):
        unitTypes = response.xpath('//div[@class="room-type-block"]/h5/text()').extract() + response.xpath('//h4[@class="content__header"]/text()').extract()
        # There can be multiple unit types, so we yield an item for each unit type we can find.
        for unitType in unitTypes:
            yield {
                'name': response.xpath('//h1/span/text()').extract_first(),
                'type': unitType,
                # 'price': response.xpath('XPATH GOES HERE'), # Could not find a price on the page
                # 'distance_beds': response.xpath('XPATH GOES HERE') # Could not find such info
            }

I think the code is quite clean and simple. The comments should clarify why I chose to use the for loops. If anything is unclear, let me know and I will do my best to explain it.
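
Assuming the spider is saved inside a standard Scrapy project, a typical way to run it and export the scraped items is Scrapy's built-in feed export:

# Runs the spider named "quotes" and writes each yielded item to units.json
scrapy crawl quotes -o units.json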


Thanks for your help! I was just wondering where this comes from - '//div[@class="room-type-block"]/h5/text()'? I can't seem to find it on any of the pages. – Maverick


It turns out I only looked at one unit page and assumed all the others would be the same, but that is not the case (that page is actually more of an exception). I used [this page](http://www.unitestudents.com/bristol/waverley-house), but you are right; I could not find that element on many [other unit pages](http://www.unitestudents.com/glasgow/blackfriars). I have updated the code to handle both kinds of page. (Note: I have not tested the full code) – Casper


Thank you, this works great! I was just wondering: why does this part - 'callback=self.parse_citypage' - point to the next function? Sorry, it feels like this is an RTFM question! – Maverick
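
For context on that last question: in Scrapy, the callback argument of scrapy.Request names the method that will be called with the downloaded response, which is how the answer chains parse to parse_citypage to parse_unitpage. A minimal, hypothetical sketch of that pattern (spider name and selector are illustrative only):

import scrapy


class ChainSpider(scrapy.Spider):
    # Hypothetical spider showing how callbacks chain one parse step to the next.
    name = "chain_example"
    start_urls = ['http://www.unitestudents.com/']

    def parse(self, response):
        # Yielding a Request schedules a download; when it finishes,
        # Scrapy calls the given callback with that page's response.
        for href in response.css('a::attr(href)').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_detail)

    def parse_detail(self, response):
        # Runs once per followed link, with the linked page's response.
        yield {'url': response.url}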