
I want to save my data to a remote machine using peewee. When I run my crawler I get the error below. How can I use peewee with Scrapinghub?

File "/usr/local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run 
    self.crawler_process.crawl(spname, **opts.spargs) 
    File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 163, in crawl 
    return self._crawl(crawler, *args, **kwargs) 
    File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 167, in _crawl 
    d = crawler.crawl(*args, **kwargs) 
    File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1445, in unwindGenerator 
    return _inlineCallbacks(None, gen, Deferred()) 
--- <exception caught here> --- 
    File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks 
    result = g.send(result) 
    File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 90, in crawl 
    six.reraise(*exc_info) 
    File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 72, in crawl 
    self.engine = self._create_engine() 
    File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 97, in _create_engine 
    return ExecutionEngine(self, lambda _: self.stop()) 
    File "/usr/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 70, in __init__ 
    self.scraper = Scraper(crawler) 
    File "/usr/local/lib/python2.7/site-packages/scrapy/core/scraper.py", line 71, in __init__ 
    self.itemproc = itemproc_cls.from_crawler(crawler) 
    File "/usr/local/lib/python2.7/site-packages/scrapy/middleware.py", line 58, in from_crawler 
    return cls.from_settings(crawler.settings, crawler) 
    File "/usr/local/lib/python2.7/site-packages/scrapy/middleware.py", line 34, in from_settings 
    mwcls = load_object(clspath) 
    File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 44, in load_object 
    mod = import_module(module) 
    File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module 
    __import__(name) 
    File "/app/__main__.egg/annuaire_agence_bio/pipelines.py", line 8, in <module> 

exceptions.ImportError: No module named peewee 

Any suggestions are very welcome.
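
For reference, a pipelines.py that trips this import would look roughly like the sketch below; the database host, model and field names are placeholders rather than the real project code, and only the import peewee line at the top matters for the error.

# pipelines.py -- minimal sketch of a peewee-backed item pipeline
import peewee

# Placeholder connection details for the remote machine
db = peewee.MySQLDatabase('scraped', host='remote-host', user='user', password='secret')

class Agency(peewee.Model):
    name = peewee.CharField()
    url = peewee.CharField()

    class Meta:
        database = db

class AnnuaireAgenceBioPipeline(object):
    def open_spider(self, spider):
        db.connect()
        db.create_tables([Agency], safe=True)

    def process_item(self, item, spider):
        # Write each scraped item to the remote database
        Agency.create(name=item.get('name'), url=item.get('url'))
        return item

    def close_spider(self, spider):
        db.close()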

Answer


You can't install modules of your own choosing on Scrapinghub... as far as I know, you can only install MySQLdb to do this.

Create a file named scrapinghub.yml in your project's main folder, with the following contents.

projects: 
    default: 111149 
requirements: 
    file: requirements.txt 

Here 111149 is my project ID on Scrapinghub.

Create another file, requirements.txt, in the same directory, and list the modules you need in it, each pinned to the version you are using, like this:

MySQL-python==1.2.5 

PS: I use the MySQLdb module, so that is what I put there.


Not sure I understand the first sentence. You can actually install any module of your choice on Scrapinghub. –


Just add peewee to requirements.txt – coleifer
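
Following that suggestion, a requirements.txt for this project would simply list peewee alongside any other dependency; the version pin below is only an example, use whichever release you actually develop against.

MySQL-python==1.2.5
peewee==2.10.1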