
I am learning Scrapy and want to scrape a few items from this page: https://www.gumtree.com/search?sort=date&search_category=flats-houses&q=box&search_location=Vale+of+Glamorgan. Instead, Scrapy crawls 0 pages (at 0 pages/min) and scrapes 0 items (at 0 items/min).

To avoid problems with the robots.txt policy and the like, I saved the page to my hard drive and tested my XPaths using the Scrapy shell. They seem to work as expected. However, when I run the spider with the scrapy crawl basic command (as recommended in the book I am reading), I get the following output:

2017-09-27 12:05:02 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: properties) 
2017-09-27 12:05:02 [scrapy.utils.log] INFO: Overridden settings: {'USER_AGENT': 'Mozila/5.0', 'SPIDER_MODULES': ['properties.spiders'], 'BOT_NAME': 'properties', 'NEWSPIDER_MODULE': 'properties.spiders'} 
2017-09-27 12:05:03 [scrapy.middleware] INFO: Enabled extensions: 
['scrapy.extensions.logstats.LogStats', 
'scrapy.extensions.memusage.MemoryUsage', 
'scrapy.extensions.telnet.TelnetConsole', 
'scrapy.extensions.corestats.CoreStats'] 
2017-09-27 12:05:03 [scrapy.middleware] INFO: Enabled downloader middlewares: 
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 
'scrapy.downloadermiddlewares.retry.RetryMiddleware', 
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 
'scrapy.downloadermiddlewares.stats.DownloaderStats'] 
2017-09-27 12:05:03 [scrapy.middleware] INFO: Enabled spider middlewares: 
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 
'scrapy.spidermiddlewares.referer.RefererMiddleware', 
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 
'scrapy.spidermiddlewares.depth.DepthMiddleware'] 
2017-09-27 12:05:03 [scrapy.middleware] INFO: Enabled item pipelines: 
[] 
2017-09-27 12:05:03 [scrapy.core.engine] INFO: Spider opened 
2017-09-27 12:05:03 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2017-09-27 12:05:03 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6026 
2017-09-27 12:05:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET file:///home/albert/Documents/programming/python/scrapy/properties/properties/tests/test_page.html> (referer: None) 
2017-09-27 12:05:04 [basic] DEBUG: title: 
2017-09-27 12:05:04 [basic] DEBUG: price: 
2017-09-27 12:05:04 [basic] DEBUG: description: 
2017-09-27 12:05:04 [basic] DEBUG: address: 
2017-09-27 12:05:04 [basic] DEBUG: image_urls: 
2017-09-27 12:05:04 [scrapy.core.engine] INFO: Closing spider (finished) 
2017-09-27 12:05:04 [scrapy.statscollectors] INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 262, 
'downloader/request_count': 1, 
'downloader/request_method_count/GET': 1, 
'downloader/response_bytes': 270547, 
'downloader/response_count': 1, 
'downloader/response_status_count/200': 1, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2017, 9, 27, 9, 5, 4, 91741), 
'log_count/DEBUG': 7, 
'log_count/INFO': 7, 
'memusage/max': 50790400, 
'memusage/startup': 50790400, 
'response_received_count': 1, 
'scheduler/dequeued': 1, 
'scheduler/dequeued/memory': 1, 
'scheduler/enqueued': 1, 
'scheduler/enqueued/memory': 1, 
'start_time': datetime.datetime(2017, 9, 27, 9, 5, 3, 718976)} 
2017-09-27 12:05:04 [scrapy.core.engine] INFO: Spider closed (finished) 

Here is my items.py:

from scrapy.item import Item, Field 


class PropertiesItem(Item): 
    title = Field() 
    price = Field() 
    description = Field() 
    address = Field() 
    image_urls = Field() 

    images = Field() 
    location = Field() 

    url = Field() 
    project = Field() 
    spider = Field() 
    server = Field() 
    date = Field() 
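
(As an aside, an Item declared this way behaves like a dict once instantiated; a minimal sketch with made-up values:)

from properties.items import PropertiesItem  # the Item class declared above

item = PropertiesItem(title='Two bedroom flat')  # made-up value
item['price'] = '650'                            # made-up value
print(dict(item))  # -> {'title': 'Two bedroom flat', 'price': '650'}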

And here is the spider, basic.py:

import scrapy 


class BasicSpider(scrapy.Spider): 
    name = 'basic' 
    start_urls = ['file:///home/albert/Documents/programming/python/scrapy/properties/properties/site/test_page.html'] 

    def parse(self, response):
        self.log('title: '.format(response.xpath(
            "//h2[@class='listing-title' and not(span)]/text()").extract()))
        self.log('price: '.format(response.xpath(
            "//meta[@itemprop='price']/@content").extract()))
        self.log("description: ".format(response.xpath(
            "//p[@itemprop='description' and not(span)]/text()").extract()))
        self.log('address: '.format(response.xpath(
            "//span[@class='truncate-line']/text()[2]").re('\|(\s+\w+.+)')))
        self.log('image_urls: '.format(response.xpath(
            "//noscript/img/@src").extract()))

The XPaths are a bit clumsy, but they do work. However, no items are collected, and I would like to know why.
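
For reference, the Scrapy shell check mentioned above looks roughly like this (the local path is illustrative):

$ scrapy shell 'file:///home/albert/Documents/programming/python/scrapy/properties/properties/tests/test_page.html'
>>> response.xpath("//h2[@class='listing-title' and not(span)]/text()").extract()
# a non-empty list here means the XPath matches the saved page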

Add 'print(response.body)' and 'print(type(response))' in the parse function to check whether you get an HtmlResponse and the correct body with all the expected HTML? – Tarun Lalwani

@TarunLalwani Let me check that. But I have already tried this saved page in the Scrapy shell and run the XPaths against it, and they worked fine, which I took as a sign that the HTML body is correct. – Albert

@TarunLalwani print(type(response)) produces <class 'scrapy.http.response.html.HtmlResponse'> and 'print(response.body)' prints the body of the HTML document. At first glance, everything seems fine. – Albert

Answers


Your problem is that you never insert the output of the format call anywhere into the string. You need to change 'title: ' to 'title: {}' so that format actually inserts the value. Also, use extract_first() instead of extract() so that you get a single string instead of a list. Both points are demonstrated after the corrected spider below:

import scrapy


class BasicSpider(scrapy.Spider):
    name = 'basic'
    start_urls = ['file:///home/albert/Documents/programming/python/scrapy/properties/properties/site/test_page.html']

    def parse(self, response):
        self.log('title: {}'.format(response.xpath(
            "//h2[@class='listing-title' and not(span)]/text()").extract_first()))
        self.log('price: {}'.format(response.xpath(
            "//meta[@itemprop='price']/@content").extract_first()))
        self.log("description: {}".format(response.xpath(
            "//p[@itemprop='description' and not(span)]/text()").extract_first()))
        self.log('address: {}'.format(response.xpath(
            "//span[@class='truncate-line']/text()[2]").re(r'\|(\s+\w+.+)')))
        self.log('image_urls: {}'.format(response.xpath(
            "//noscript/img/@src").extract_first()))
Oooh... I did not even notice it... I cannot believe I missed it... I feel silly. Thank you! Yes, now it works the way it should! – Albert


I have not tried Scrapy on local files, but whatever you want to scrape with Scrapy, you have to initialize an Item first, then assign values to it as you would a Python dict, and finally yield the item to the pipeline:

import scrapy
from properties.items import PropertiesItem


class BasicSpider(scrapy.Spider):
    name = 'basic'
    start_urls = ['file:///home/albert/Documents/programming/python/scrapy/properties/properties/site/test_page.html']

    def parse(self, response):
        # initialize the Item
        item = PropertiesItem()
        # assign the extracted values (using the XPaths from the question)
        item['title'] = response.xpath(
            "//h2[@class='listing-title' and not(span)]/text()").extract()
        item['price'] = response.xpath(
            "//meta[@itemprop='price']/@content").extract()
        item['description'] = response.xpath(
            "//p[@itemprop='description' and not(span)]/text()").extract()
        item['address'] = response.xpath(
            "//span[@class='truncate-line']/text()[2]").re(r'\|(\s+\w+.+)')
        item['image_urls'] = response.xpath(
            "//noscript/img/@src").extract()
        # yield the item so the pipeline collects it
        yield item
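
Once the spider yields items rather than only logging them, the "scraped 0 items" counter should go up, and the collected items can be checked with Scrapy's standard feed export, for example:

scrapy crawl basic -o items.json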
Hmm, this code makes more sense than what I put together from the tutorial. Actually, they were scraping some other site, and I just reused their code with everything needed to scrape the site I wanted. I do not understand how their example worked that way... Thanks for the clarification! By the way, it should be 'from properties.items import PropertiesItem'. – Albert

Yes, it should be 'from properties.items import PropertiesItem', I have edited it. – zhongjiajie