
The following shell command successfully retrieves robots.txt, but mechanize fails with an HTTP 403 error:

$ curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)" http://fifa-infinity.com/robots.txt 

and prints the robots.txt contents. Omitting the user-agent option makes the server respond with a 403 error. Inspecting the robots.txt file shows that crawling content under http://www.fifa-infinity.com/board is allowed. However, the following Python code fails:

import logging 
import mechanize 
from mechanize import Browser 

# Same user agent string as the curl command above
ua = 'Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)' 
br = Browser() 
br.addheaders = [('User-Agent', ua)] 
# Dump the raw request and response for debugging
br.set_debug_http(True) 
br.set_debug_responses(True) 
logging.getLogger('mechanize').setLevel(logging.DEBUG) 
br.open('http://www.fifa-infinity.com/robots.txt') 

The output on my console is:

No handlers could be found for logger "mechanize.cookies" 
send: 'GET /robots.txt HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: www.fifa-infinity.com\r\nConnection: close\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)\r\n\r\n' 
reply: 'HTTP/1.1 403 Bad Behavior\r\n' 
header: Date: Wed, 13 Feb 2013 15:37:16 GMT 
header: Server: Apache 
header: X-Powered-By: PHP/5.2.17 
header: Vary: User-Agent,Accept-Encoding 
header: Connection: close 
header: Transfer-Encoding: chunked 
header: Content-Type: text/html 
Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "/home/moshev/Projects/forumscrawler/lib/python2.7/site-packages/mechanize/_mechanize.py", line 203, in open 
    return self._mech_open(url, data, timeout=timeout) 
    File "/home/moshev/Projects/forumscrawler/lib/python2.7/site-packages/mechanize/_mechanize.py", line 255, in _mech_open 
    raise response 
mechanize._response.httperror_seek_wrapper: HTTP Error 403: Bad Behavior 

Oddly enough, using curl without setting a user agent results in "403: Forbidden" rather than "403: Bad Behavior".

Am I doing something wrong somehow, or is this a bug in mechanize/urllib2? I don't see how simply requesting robots.txt can be "bad behavior".
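
For what it's worth, the body of the 403 response can be inspected as well. Here is a minimal sketch, assuming the same URL and user agent as above; mechanize's error objects double as file-like responses, so the server's error page can be read for hints:

import mechanize 

ua = 'Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)' 
br = mechanize.Browser() 
br.addheaders = [('User-Agent', ua)] 
try: 
    br.open('http://www.fifa-infinity.com/robots.txt') 
except mechanize.HTTPError as e: 
    # The raised error is also a response object, so the 403 page 
    # returned by the server can be printed for inspection. 
    print e.code, e.msg 
    print e.read() 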


And another example of header sniffing gone bad. On the other hand, the server is looking at more than just the user agent: check which headers curl sends, compare them with what `mechanize` is using, adjust, repeat. This is *not* a Python problem. – 2013-02-13 15:48:58
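
Following that advice, a minimal compare-and-retest sketch might look like the following; the header list is illustrative, copied by hand from `curl -v` output rather than anything mechanize provides:

import mechanize 

# Headers as seen in `curl -v` output (illustrative; check your own curl) 
curl_like_headers = [ 
    ('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)'), 
    ('Accept', '*/*'),  # curl sends this by default; mechanize does not 
] 

br = mechanize.Browser() 
br.addheaders = curl_like_headers 
# Retest after each header change until the 403 goes away 
print br.open('http://www.fifa-infinity.com/robots.txt').read() 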


This question is very similar to [urllib2.HTTPError: HTTP Error 403: Forbidden](https://stackoverflow.com/questions/13303449/urllib2-httperror-http-error-403-forbidden/46213623#46213623) – djinn 2017-11-06 16:31:30

Answer


Verified by experimentation: you need to add an Accept header to specify the acceptable content types (any type will do, as long as an "Accept" header is present). For example, it works after changing:

br.addheaders = [('User-Agent', ua)] 

to:

br.addheaders = [('User-Agent', ua), ('Accept', '*/*')] 
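
Put together, a minimal working version of the script from the question (same URL and user agent) would look like this:

import mechanize 

ua = 'Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)' 
br = mechanize.Browser() 
# The mere presence of an Accept header satisfies the server's filter; 
# '*/*' is simply the most permissive value. 
br.addheaders = [('User-Agent', ua), ('Accept', '*/*')] 
print br.open('http://www.fifa-infinity.com/robots.txt').read() 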

Thanks, that's it! – Moshev 2013-02-14 07:35:23


I wish I had seen this earlier... it would have saved me hours of work! Thanks Hui! – 2015-06-11 02:31:34