I have a working Scrapy project and decided to clean it up. To do so, I moved my database module out of the Scrapy part of the project, and now Scrapy can no longer import it, even though it is on my PYTHONPATH. The project currently looks like this:
myProject/
    database/
        __init__.py
        model.py
        databaseFactory.py
    myScrapy/
        __init__.py
        settings.py
        myScrapy/
            __init__.py
            pipeline.py
            spiders/
                spiderA.py
                spiderB.py
    api/
        __init__.py
    config/
        __init__.py
I want to use databaseFactory inside Scrapy (only the files relevant to my question are shown).
I added the following lines to my .bashrc:
PYTHONPATH=$PYTHONPATH:my/path/to/my/project
export PYTHONPATH
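As a quick sanity check (the project path below is the one from the question; `python3` on `PATH` is assumed), you can confirm the exported variable is actually visible to a Python child process started from the same shell:

```shell
# Append the project root (path taken from the question) to PYTHONPATH
# and confirm that a Python child process receives it.
export PYTHONPATH="$PYTHONPATH:/my/path/to/my/project"
python3 -c 'import os; print(os.environ.get("PYTHONPATH", "(not set)"))'
```

If the printed value contains the project path, the `.bashrc` part is working for that shell.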
So when I launch IPython, I can do the following:
In [1]: import database.databaseFactory as databaseFactory
In [2]: databaseFactory
Out[2]: <module 'database.databaseFactory' from '/my/path/to/my/project/database/databaseFactory.pyc'>
But...
When I try to launch Scrapy with
sudo scrapy crawl spiderName 2> error.log
I get the following message:
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 11, in <module>
    sys.exit(execute())
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 60, in run
    self.crawler_process.start()
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 92, in start
    if self.start_crawling():
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 124, in start_crawling
    return self._start_crawler() is not None
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 139, in _start_crawler
    crawler.configure()
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 47, in configure
    self.engine = ExecutionEngine(self, self._spider_closed)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 65, in __init__
    self.scraper = Scraper(crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/scraper.py", line 66, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 50, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 29, in from_settings
    mwcls = load_object(clspath)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/misc.py", line 42, in load_object
    raise ImportError("Error loading object '%s': %s" % (path, e))
ImportError: Error loading object 'myScrapy.pipelines.QueueExportPipe': No module named database.databaseFactory
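One detail worth checking here (an assumption on my part, not something confirmed in the question): the crawl is launched with `sudo`, and `sudo` resets the environment by default (`env_reset` in sudoers), so a PYTHONPATH exported in `.bashrc` would never reach the crawler process. `env -i` can simulate that reset without needing root:

```shell
# Simulate sudo's default env_reset with env -i: the exported variable
# is not visible in the cleaned child environment, but it is still set
# in the current shell. (Path is the one from the question.)
export PYTHONPATH="/my/path/to/my/project"
env -i sh -c 'echo "in clean env: ${PYTHONPATH:-unset}"'
echo "in current shell: $PYTHONPATH"
```

If that turns out to be the cause, `sudo -E scrapy crawl spiderName` (which preserves the caller's environment) or running the crawl without `sudo` would be the things to try.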
Why does Scrapy ignore my PYTHONPATH? What can I do now? I really don't want to have to hack sys.path in my code.
I have already done that in my .bashrc file. If you are right and I mysteriously have to do it in the console before launching Scrapy, I tried it. Of course it doesn't work, since it doesn't change anything. – Borbag 2014-11-21 08:14:30