2017-04-23 123 views

I am trying to download a page from scholar.google.com using the requests and BeautifulSoup modules. The URL is https://scholar.google.com/citations?mauthors=computer+science&hl=en&view_op=search_authors. When I paste the URL into Chrome, I can view the page. However, if I try to download it with requests, I get a 404 NOT FOUND error. Why can I view the page in the browser but not fetch the same URL from code?

<!DOCTYPE html> 
<html lang="en"> 
<head><meta charset="utf-8"/> 
<meta content="initial-scale=1, minimum-scale=1, width=device-width" name="viewport"/> 
<title>Error 404 (Not Found)!!1</title> 
<style> 
    *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px} 
    </style> 
</head><body><a href="//www.google.com/"><span aria-label="Google" id="logo"></span></a> 
<p><b>404.</b> <ins>That’s an error.</ins> 
</p><p>The requested URL <code>/citations</code> was not found on this server. <ins>That’s all we know.</ins> 
</p></body></html> 

The script that downloads the page (headers omitted because they are too long):

import requests 
from bs4 import BeautifulSoup 

url = "https://scholar.google.com/citations?mauthors=computer+science&hl=en&view_op=search_authors" 
# headers = {...}  # omitted here because it is too long 

for i in range(1200): 
    r = requests.get(url, headers=headers) 
    soup = BeautifulSoup(r.content, "lxml") 
    print soup 

Could anyone help me?

Answers

0

Google blocks your request because it is detected as not coming from a browser. An alternative would be curl. If you want to use it from a Python script, you can use the code below.

import os 

# The URL must be quoted: unquoted, the shell treats each & as a command 
# separator, so only the mauthors parameter ever reaches Google. 
html_content = os.popen('curl "https://scholar.google.com/citations?mauthors=computer%20science&hl=en&view_op=search_authors"').read() 
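As an aside on the quoting issue above: a sketch of the same download via `subprocess`, which passes the URL as a single argument and so needs no quoting at all. The network call itself is left commented out here; only the argument list is built.

```python
import subprocess

# Building the argument list directly means no shell is involved, so the
# & characters in the query string cannot split or background the command.
url = ("https://scholar.google.com/citations"
       "?mauthors=computer%20science&hl=en&view_op=search_authors")
cmd = ["curl", "-s", url]
# html_content = subprocess.run(cmd, capture_output=True, text=True).stdout
print(cmd)
```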
0

If you are using Python 2, you can use BeautifulSoup with urllib2 to do something like this:

from urllib2 import Request, urlopen 
from bs4 import BeautifulSoup as soup 

url = "https://scholar.google.com/citations?mauthors=computer%20science&hl=en&view_op=search_authors" 

def load_url(url): 
    request = Request(url) 
    # Add your header here 
    request.add_header('Referer', 'python.org') 
    # Note here: 
    # The charset used in your current website is: 'iso-8859-1' 
    # data = urlopen(request).read().decode("iso-8859-1") 
    data = urlopen(request).read() 

    return soup(data, "lxml") 


data = load_url(url) 
m = data.findAll("h3", {"class": "gsc_1usr_name"}) 
for k in m: 
    print k.get_text() 

Otherwise, if you are using Python 3, you can use BeautifulSoup with urllib.request like this:

from urllib.request import Request, urlopen 
from bs4 import BeautifulSoup as soup 

url = "https://scholar.google.com/citations?mauthors=computer%20science&hl=en&view_op=search_authors" 

def load_url(url): 
    request = Request(url) 
    # Add headers 
    request.add_header('Referer', 'python.org') 

    with urlopen(request) as f: 
        # The charset used 
        # charset = f.info().get_content_charset() 
        # Debug 
        # print("The current charset is:", charset) 
        data = f.read() 

    return soup(data, 'lxml') 

data = load_url(url) 
m = data.findAll("h3", {"class": "gsc_1usr_name"}) 
for k in m: 
    print(k.get_text()) 

Output (identical for both the Python 2 and Python 3 code):

Herbert Simon 
Geoffrey Hinton 
William H. Press 
Jiawei Han 
Stephen Boyd 
anupam gupta 
David S. Johnson 
Scott Shenker 
Jeffrey Ullman 
Deborah Estrin 
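The selection step used by both versions of this answer can be tried offline on a minimal, hypothetical fragment of the result markup (the real page is far larger; only the class name `gsc_1usr_name` is taken from the answer):

```python
from bs4 import BeautifulSoup

# Tiny stand-in for the author list; only the h3.gsc_1usr_name
# elements matter for the selection.
html = """
<div class="gsc_1usr">
  <h3 class="gsc_1usr_name"><a href="#">Herbert Simon</a></h3>
  <h3 class="gsc_1usr_name"><a href="#">Geoffrey Hinton</a></h3>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
names = [h3.get_text() for h3 in soup.find_all("h3", {"class": "gsc_1usr_name"})]
print(names)  # → ['Herbert Simon', 'Geoffrey Hinton']
```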

Hi, did you get blocked? I tried changing the User-Agent and adding some things to the headers, but I was still blocked after 1 minute – dashenswen


No, I wasn't blocked. Everything works fine for me. Could you try again and paste your error? –


They return a page saying they detected an unusual request, so I was blocked. My friend sends requests with random timeouts of 10-20 seconds; after 10 minutes he was blocked too... But I did check your code. What happens if you run it 1000 times? – dashenswen
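On the blocking discussed in these comments, one common mitigation is to detect the block page and back off with randomized, growing delays. A minimal sketch: `get_page` is a hypothetical callable standing in for the actual fetch, and the delay values are illustrative guesses, not anything Google documents.

```python
import random
import time

def fetch_with_backoff(get_page, url, max_tries=5, base_delay=10.0):
    """Retry a fetch, doubling the wait each time the 'unusual traffic'
    interstitial comes back instead of results."""
    delay = base_delay
    for _ in range(max_tries):
        status, html = get_page(url)
        if status == 200 and "unusual traffic" not in html:
            return html
        # Randomized wait so the request pattern is less regular.
        time.sleep(delay + random.uniform(0, delay))
        delay *= 2
    raise RuntimeError("still blocked after {} tries".format(max_tries))
```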
