
How should I optimize the time spent making requests when scraping web pages?

link = ['http://youtube.com/watch?v=JfLt7ia_mLg',
        'http://youtube.com/watch?v=RiYRxPWQnbE',
        'http://youtube.com/watch?v=tC7pBOPgqic',
        'http://youtube.com/watch?v=3EXl9xl8yOk',
        'http://youtube.com/watch?v=3vb1yIBXjlM',
        'http://youtube.com/watch?v=8UBY0N9fWtk',
        'http://youtube.com/watch?v=uRPf9uDplD8',
        'http://youtube.com/watch?v=Coattwt5iyg',
        'http://youtube.com/watch?v=WaprDDYFpjE',
        'http://youtube.com/watch?v=Pm5B-iRlZfI',
        'http://youtube.com/watch?v=op3hW7tSYCE',
        'http://youtube.com/watch?v=ogYN9bbU8bs',
        'http://youtube.com/watch?v=ObF8Wz4X4Jg',
        'http://youtube.com/watch?v=x1el0wiePt4',
        'http://youtube.com/watch?v=kkeMYeAIcXg',
        'http://youtube.com/watch?v=zUdfNvqmTOY',
        'http://youtube.com/watch?v=0ONtIsEaTGE',
        'http://youtube.com/watch?v=7QedW6FcHgQ',
        'http://youtube.com/watch?v=Sb33c9e1XbY']

I have a list of 15-20 links from the first page of YouTube search results. The task is to get the likes, dislikes, and view count from each video URL, which the code below does for me.

import requests
import bs4
import time

def parse(url, i, arr):
    req = requests.get(url)
    soup = bs4.BeautifulSoup(req.text, "lxml")  # 'html5lib' also works, but is slower
    try:
        likes = int(soup.find("button", attrs={"title": "I like this"}).get_text().replace(",", ""))
    except (AttributeError, ValueError):  # button missing or text not numeric
        likes = 0
    try:
        dislikes = int(soup.find("button", attrs={"title": "I dislike this"}).get_text().replace(",", ""))
    except (AttributeError, ValueError):
        dislikes = 0
    try:
        view = int(soup.find("div", attrs={"class": "watch-view-count"}).get_text().split()[0].replace(",", ""))
    except (AttributeError, ValueError):
        view = 0
    arr[i] = (likes, dislikes, view, url)
    time.sleep(0.3)  # small delay so the requests are not fired too aggressively

import threading

def parse_list(link):
    arr = len(link) * [0]        # one result slot per URL
    threadarr = len(link) * [0]  # one thread per URL
    a = time.perf_counter()      # time.clock() was removed in Python 3.8
    for i in range(len(link)):
        threadarr[i] = threading.Thread(target=parse, args=(link[i], i, arr))
        threadarr[i].start()
    for i in range(len(link)):
        threadarr[i].join()
    print(time.perf_counter() - a)
    return arr

arr=parse_list(link) 

Right now it takes about 6 seconds. Is there a faster way to get my results array (arr) filled, so that it takes less than 6 seconds?

Here is what the first 4 elements of my array look like, to give you a rough idea:

[(105, 11, 2836, 'http://youtube.com/watch?v=JfLt7ia_mLg'), 
(32, 18, 5420, 'http://youtube.com/watch?v=RiYRxPWQnbE'), 
(45, 3, 7988, 'http://youtube.com/watch?v=tC7pBOPgqic'), 
(106, 38, 4968, 'http://youtube.com/watch?v=3EXl9xl8yOk')] 

Thanks in advance :) 

If your code works but you are looking for improvements, you should ask your question on Code Review (https://codereview.stackexchange.com/) – Andersson

Answers


I would use a multiprocessing Pool object for this particular case.

import requests 
import bs4 
from multiprocessing import Pool, cpu_count 


links = [
    'http://youtube.com/watch?v=JfLt7ia_mLg',
    'http://youtube.com/watch?v=RiYRxPWQnbE',
    'http://youtube.com/watch?v=tC7pBOPgqic',
    'http://youtube.com/watch?v=3EXl9xl8yOk'
]

def parse_url(url):
    req = requests.get(url)
    soup = bs4.BeautifulSoup(req.text, "lxml")  # 'html5lib' also works, but is slower
    try:
        likes = int(soup.find("button", attrs={"title": "I like this"}).get_text().replace(",", ""))
    except (AttributeError, ValueError):  # button missing or text not numeric
        likes = 0
    try:
        dislikes = int(soup.find("button", attrs={"title": "I dislike this"}).get_text().replace(",", ""))
    except (AttributeError, ValueError):
        dislikes = 0
    try:
        view = int(soup.find("div", attrs={"class": "watch-view-count"}).get_text().split()[0].replace(",", ""))
    except (AttributeError, ValueError):
        view = 0
    return (likes, dislikes, view, url)

pool = Pool(cpu_count) # number of processes 
data = pool.map(parse_url, links) # this is where your results are 

This is cleaner, since only one function has to be written, and the results are exactly the same.
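
Since the work here is network-bound rather than CPU-bound, a thread-backed pool with the same map interface should do just as well without the overhead of spawning processes. A minimal sketch, assuming the parse_url function and links list defined above (the worker count of len(links) is only an illustrative choice):

from multiprocessing.dummy import Pool as ThreadPool  # thread-backed pool, same API as Pool

if __name__ == "__main__":
    # Scraping is I/O-bound, so worker threads spend most of their time waiting on the network.
    with ThreadPool(len(links)) as pool:   # one worker thread per URL for a small list
        data = pool.map(parse_url, links)  # results come back in the same order as links
    print(data)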


Error: TypeError: '<' not supported between 'method' and 'int' –
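
That error is consistent with Pool receiving cpu_count itself (the function object) rather than its return value: Pool validates its worker count with an integer comparison, which fails on a function. The likely one-line fix, keeping the rest of the snippet above unchanged:

from multiprocessing import Pool, cpu_count

pool = Pool(cpu_count())  # call cpu_count() so Pool gets an int, not the function object
data = pool.map(parse_url, links)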


This is not a workaround as such, but it spares your script the try/except blocks, which certainly play a part in slowing the operation down.

import requests
from bs4 import BeautifulSoup

for url in links:
    response = requests.get(url).text
    soup = BeautifulSoup(response, "html.parser")
    for item in soup.select("div#watch-header"):
        view = item.select("div.watch-view-count")[0].text
        likes = item.select("button[title~='like'] span.yt-uix-button-content")[0].text
        dislikes = item.select("button[title~='dislike'] span.yt-uix-button-content")[0].text
        print(view, likes, dislikes)

The try/except blocks are somewhat necessary in my program, because some videos have the display of likes and dislikes disabled, etc. –


But the links you provided above work fine without them. I tested it.. – SIM
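
A possible middle ground, sketched on the assumption that the rating elements are simply absent from the page when a video has ratings disabled: check whether the selector matched anything before indexing and fall back to a default. This keeps the no-try/except style while still tolerating such videos (first_text is a hypothetical helper, not part of either snippet above; the loop body would run inside the same for url in links loop):

def first_text(item, selector, default="0"):
    # Return the text of the first match, or a default when the element is
    # absent (e.g. the video has ratings disabled).
    matches = item.select(selector)
    return matches[0].text if matches else default

for item in soup.select("div#watch-header"):
    view = first_text(item, "div.watch-view-count")
    likes = first_text(item, "button[title~='like'] span.yt-uix-button-content")
    dislikes = first_text(item, "button[title~='dislike'] span.yt-uix-button-content")
    print(view, likes, dislikes)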


umm try this ....... "https://www.youtube.com/watch?v=frw6uu3nonQ" –