
How do I fix the python-scrapy error below? It shows no errors during execution, but the spider gives me a blank output file. My code is as follows:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from example.items import exampleItem

class MySpider(BaseSpider):
    name = "eg"
    allowed_domains = ["timeanddate.com"]
    start_urls = ["https://www.timeanddate.com/worldclock/"]

    def parse(self, response):
        titles = response.selector.xpath("/html/body/div[1]/div[8]/section[2]/div[1]/table/tbody")
        items = []
        for titles in titles:
            item = exampleItem()
            item["title"] = titles.xpath("//tr/td[@a]/text()").extract()
            item["link"] = titles.xpath("//tr/td[@class=rbi]").extract()
            items.append(item)
        return items

The Scraper.item code is as follows:

from scrapy.item import Item, Field 

class exampleItem(Item):
    title = Field()
    link = Field()

The log file output is below; the only error displayed in it is: ")>: HTTP status code is not handled or not allowed"

[scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: example)
[scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'example.spiders', 'FEED_URI': 'items.csv', 'SPIDER_MODULES': ['example.spiders'], 'BOT_NAME': 'example', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'csv'}
[scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.feedexport.FeedExporter', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.corestats.CoreStats']
[scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
[scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
[scrapy.middleware] INFO: Enabled item pipelines: []
[scrapy.core.engine] INFO: Spider opened
[scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
[scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
[scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.timeanddate.com/robots.txt> (referer: None)
[scrapy.core.engine] DEBUG: Crawled (404) <GET https://www.timeanddate.com/worldclock/)> (referer: None)
[scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.timeanddate.com/worldclock/)>: HTTP status code is not handled or not allowed
[scrapy.core.engine] INFO: Closing spider (finished)
[scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 456,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 6109,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 5, 7, 14, 2, 24, 993404),
 'log_count/DEBUG': 3,
 'log_count/INFO': 8,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 5, 7, 14, 2, 23, 158763)}
[scrapy.core.engine] INFO: Spider closed (finished)

Answer


The reason is that your XPath selectors do not match the page source.

A known issue with web-browser developer tools is that they display tbody elements even where no such element exists in the response data. So it is always worth trying to leave them out, or to bypass them.
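For instance, the mismatch is easy to reproduce with a standalone Selector and some stand-in markup (the HTML below is made up for illustration, not the real page source):

from scrapy.selector import Selector

# Minimal table markup as a server might send it: no <tbody> element,
# even though browser developer tools would show one in the inspected DOM.
html = '<table><tr><td><a href="/x">Oslo</a></td><td class="rbi">Sun 9:51</td></tr></table>'
sel = Selector(text=html)

print(sel.xpath('//table/tbody/tr'))   # [] - the path copied from dev tools finds nothing
print(sel.xpath('//table//tr'))        # matches - '//' simply skips the missing <tbody>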

Then there are also a few code issues. There is a for titles in titles:, which looks a bit off to me.

Then, the XPath expressions inside the loop also do not match what my developer tools show. Nothing matches //tr/td[@a]; it should be //tr/td[a]. But the latter matches all the anchors at once, which makes it hard to extract individual items.
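The difference between the two predicates can be checked in a few lines of Python (or in scrapy shell); the row markup here is again a made-up placeholder:

from scrapy.selector import Selector

# A single made-up row; the href and cell contents are placeholders.
sel = Selector(text='<table><tr>'
                    '<td><a href="/somewhere">Oslo</a></td>'
                    '<td class="rbi">Sun 9:51</td>'
                    '</tr></table>')

print(sel.xpath('//tr/td[@a]'))   # [] - [@a] tests for an attribute named "a"
print(sel.xpath('//tr/td[a]'))    # matches - [a] tests for an <a> child element
print(sel.xpath('//tr/td[a]/a/text()').extract_first())   # 'Oslo'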

So I rewrote the code and came up with the following:

import scrapy 

class MyItem(scrapy.Item): 
    title = scrapy.Field() 
    link = scrapy.Field() 


class MySpider(scrapy.Spider): 
    name = "eg" 
    allowed_domains = ["timeanddate.com"] 
    start_urls = ["https://www.timeanddate.com/worldclock/"] 

    def parse(self, response):

        # Iterate over every table row; '//table//tr' works whether or not
        # a <tbody> element is present in the markup.
        for sel_row in response.xpath('//table//tr'):
            # Only cells that contain an <a> child element.
            for sel_td in sel_row.xpath('./td[a]'):

                item = MyItem()
                item["title"] = sel_td.xpath('./a/text()').extract_first()
                # The first td with class "rbi" that follows the link cell.
                item["link"] = sel_td.xpath('./following::td[@class="rbi"][1]').extract_first()
                yield item

The code was tested with Scrapy 1.3.2/Python 2.7.11.


I tried to run the code with the command "scrapy crawl eg -o items.csv -t csv", but it still gives a blank file as output – Priyanka


Hmm... I end up with a file of 144 lines. What does the output of that command look like? Any errors? –


No, it does not show any errors when the command runs. I have just updated my items.py code. – Priyanka