2016-03-07

I'm working on a Scrapy project and I want to run multiple spiders at the same time. Below is the script I use to run the spiders, but I get an error. How do I run multiple spiders from a Scrapy script?

from spiders.DmozSpider import DmozSpider
from spiders.CraigslistSpider import CraigslistSpider

from scrapy import signals, log
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings

TO_CRAWL = [DmozSpider, CraigslistSpider]

RUNNING_CRAWLERS = []

def spider_closing(spider):
    """Activates on spider closed signal"""
    log.msg("Spider closed: %s" % spider, level=log.INFO)
    RUNNING_CRAWLERS.remove(spider)
    if not RUNNING_CRAWLERS:
        reactor.stop()

log.start(loglevel=log.DEBUG)
for spider in TO_CRAWL:
    settings = Settings()

    # crawl responsibly
    settings.set("USER_AGENT", "Kiran Koduru (+http://kirankoduru.github.io)")
    crawler = Crawler(settings)
    crawler_obj = spider()
    RUNNING_CRAWLERS.append(crawler_obj)

    # stop reactor when spider closes
    crawler.signals.connect(spider_closing, signal=signals.spider_closed)
    crawler.configure()
    crawler.crawl(crawler_obj)
    crawler.start()

# blocks the process; always keep this as the last statement
reactor.run()


Can you improve the formatting of your code? What error do you get? Can you provide a traceback? –

Answers


Sorry not to answer the question directly, but just to bring scrapyd and scrapinghub to your attention (at least for quick testing). reactor.run() (when you reach it) will run any number of Scrapy instances on a single CPU. Do you want that side effect? Even if you look at scrapyd's code, it does not run multiple instances in a single thread; instead it forks/spawns subprocesses.
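For reference, a minimal sketch of that fork/spawn-subprocesses idea, assuming the spiders are registered in one Scrapy project under the hypothetical names "dmoz" and "craigslist" (not taken from the question):

# Sketch of the subprocess approach mentioned above: one `scrapy crawl`
# process per spider, so each gets its own reactor and CPU time.
# The spider names below are assumptions for illustration only.
import subprocess

SPIDER_NAMES = ["dmoz", "craigslist"]

# launch all crawls in parallel
procs = [subprocess.Popen(["scrapy", "crawl", name]) for name in SPIDER_NAMES]

# wait for every crawl to finish
for proc in procs:
    proc.wait()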


You need something like the code below. You can easily find it in the Scrapy documentation :)

The first utility you can use to run your spiders is scrapy.crawler.CrawlerProcess. This class will start a Twisted reactor for you, configuring the logging and setting shutdown handlers. This class is the one used by all Scrapy commands.

# -*- coding: utf-8 -*-
import sys
import logging
import traceback
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from spiders.DmozSpider import DmozSpider
from spiders.CraigslistSpider import CraigslistSpider

SPIDER_LIST = [
    DmozSpider, CraigslistSpider
]

if __name__ == "__main__":
    try:
        # set up one crawler process and schedule every spider on it;
        # start() runs them all in a single Twisted reactor and blocks
        # until the last crawl finishes
        process = CrawlerProcess(get_project_settings())
        for spider in SPIDER_LIST:
            process.crawl(spider)
        process.start()
    except Exception:
        exc_type, exc_obj, exc_tb = sys.exc_info()
        logging.error('Error on line {}'.format(exc_tb.tb_lineno))
        logging.error("Exception: %s" % traceback.format_exc())

Reference: http://doc.scrapy.org/en/latest/topics/practices.html


Thank you, but I think this runs on a single processor. I have a list of 100,000 domains and I want to run 30 instances in AWS EC2. How do I distribute the domain-list queue across those 30 instances so that 30 spiders are running at once, one per instance? –


You can make separate scripts for the different instances, with each instance running its own set of your spiders. Sorry, but I don't fully understand your question. – hungneox
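To illustrate that "one slice per instance" idea, a minimal sketch, assuming a plain-text domains.txt (hypothetical file name) with one domain per line, and that each EC2 instance is told its zero-based index on the command line:

# Sketch: split a big domain list across N machines so each crawls only
# its own slice. The file name and the way an instance learns its index
# (a command-line argument) are assumptions, not from the thread.
import sys

NUM_INSTANCES = 30

def domains_for_instance(path, instance_index, num_instances=NUM_INSTANCES):
    """Return every num_instances-th domain, offset by this instance's index."""
    with open(path) as f:
        domains = [line.strip() for line in f if line.strip()]
    return domains[instance_index::num_instances]

if __name__ == "__main__":
    index = int(sys.argv[1])  # e.g. passed in by the EC2 launch script
    for domain in domains_for_instance("domains.txt", index):
        print(domain)  # feed these into your spider's start_urls or a queue

Striding with domains[index::num_instances] keeps the slices balanced even when the list length is not an exact multiple of 30.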