
What I need: how to restart a Scrapy spider

  1. Start the crawler
  2. The crawler finishes its job
  3. Wait 1 minute
  4. Start the crawler again

I tried this:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from time import sleep

while True:
    process = CrawlerProcess(get_project_settings())
    process.crawl('spider_name')
    process.start()  # works on the first pass, then the reactor cannot be started again
    sleep(60)

But I get this error:

twisted.internet.error.ReactorNotRestartable

Please help me do this the right way.

Python 3.6
Scrapy 1.3.2
Linux
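
For reference, a workaround that is often suggested for this ReactorNotRestartable error is to run each crawl in a fresh child process, so that every run gets its own Twisted reactor. A minimal sketch of that idea (not from the original post; it assumes the same project and the same 'spider_name'):

from multiprocessing import Process
from time import sleep

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


def run_once():
    # Each child process gets a brand-new reactor, so starting it is safe.
    process = CrawlerProcess(get_project_settings())
    process.crawl('spider_name')
    process.start()  # blocks until the crawl finishes


if __name__ == '__main__':
    while True:
        p = Process(target=run_once)
        p.start()
        p.join()   # wait for this crawl to finish
        sleep(60)  # wait one minute before starting again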


Take a look at http://stackoverflow.com/a/39955395/2572383 –

Answers


I think I found the solution:

from scrapy.utils.project import get_project_settings
from scrapy.crawler import CrawlerRunner
from twisted.internet import reactor
from twisted.internet import task


timeout = 60


def run_spider():
    # Pause the loop while the crawl is running.
    l.stop()
    runner = CrawlerRunner(get_project_settings())
    d = runner.crawl('spider_name')
    # When the crawl finishes, restart the loop; now=False delays the
    # next run by `timeout` seconds.
    d.addBoth(lambda _: l.start(timeout, False))


l = task.LoopingCall(run_spider)
l.start(timeout)  # the first call runs immediately

reactor.run()
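
The key difference from the loop in the question is that the reactor is started only once: CrawlerRunner schedules each crawl inside the already-running reactor instead of creating a new CrawlerProcess, the LoopingCall stops itself while a crawl is in progress, and the Deferred returned by runner.crawl() restarts the loop so the next run begins 60 seconds after the previous crawl finishes.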

How do I get the log output? – Baks
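
Regarding the log output: unlike CrawlerProcess, CrawlerRunner does not configure logging by itself, so the script above stays silent by default. A minimal sketch, assuming default Scrapy settings, is to call configure_logging() once before starting the loop:

from scrapy.utils.log import configure_logging

configure_logging()  # installs Scrapy's default log handler on the root logger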