

Avoid duplicate results when multithreading in Python

I want to make my crawler multithreaded. When I enable multithreading, several instances of the function are launched and I get duplicate results.

Example:

If my function prints range(5) and I run it with 2 threads, I get 1,1,2,2,3,3,4,4,5,5.

How can I get 1,2,3,4,5 with multithreading?
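
For reference, a minimal sketch (an assumed setup, using the standard threading module) that reproduces the duplication:

import threading

def crawl():
    # Both threads run the same loop, so every value is printed twice.
    for page in range(5):
        print(page)

threads = [threading.Thread(target=crawl) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()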

My actual code is a crawler, as you can see below:

import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "http://stackoverflow.com/questions?page=" + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        # Visit every question link on the listing page.
        for link in soup.findAll('a', {'class': 'question-hyperlink'}):
            href = link.get('href')
            title = link.string
            print(title)
            get_single_item_data("http://stackoverflow.com/" + href)
        page += 1

def get_single_item_data(item_url):
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    # Extract the vote count from the question page.
    res = soup.find('span', {'class': 'vote-count-post '})
    print("UpVote : " + res.string)

trade_spider(1)

How can I call trade_spider() with multithreading without duplicating links?


Have you tried using [multiprocessing.Value](https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes)? – David Cullen


Not yet, I'll give it a try. – Pixel


@DavidCullen Could you please give me an example? I don't understand from the documentation how shared multiprocessing state works. Thanks. – Pixel

Answers


Pass the page number as an argument to the trade_spider function.

Call the function in each process with a different page number, so that each process works on a unique page.

For example:

import multiprocessing

import requests
from bs4 import BeautifulSoup

def trade_spider(page):
    url = "http://stackoverflow.com/questions?page=%s" % (page,)
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    for link in soup.findAll('a', {'class': 'question-hyperlink'}):
        href = link.get('href')
        title = link.string
        print(title)
        # get_single_item_data is the function from the question.
        get_single_item_data("http://stackoverflow.com/" + href)

if __name__ == '__main__':
    # Pool of 10 worker processes.
    max_pages = 100
    num_pages = range(1, max_pages)
    pool = multiprocessing.Pool(10)
    # Run and wait for completion. pool.map returns the results of the
    # trade_spider calls, but trade_spider returns nothing, so the
    # return value is ignored.
    pool.map(trade_spider, num_pages)
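
As a side note, if trade_spider returned data instead of printing it, pool.map's return value would become useful. A hypothetical variant (reusing the imports above; trade_spider_collect is not part of the answer):

def trade_spider_collect(page):
    # Return the titles for one listing page instead of printing them.
    url = "http://stackoverflow.com/questions?page=%s" % (page,)
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    return [link.string for link in soup.findAll('a', {'class': 'question-hyperlink'})]

if __name__ == '__main__':
    pool = multiprocessing.Pool(10)
    # all_titles is a list with one entry (a list of titles) per page.
    all_titles = pool.map(trade_spider_collect, range(1, 100))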

Could you give me an example? – Pixel


Updated with an example. – danny


Try this:

from multiprocessing import Process, Value
import time

max_pages = 100
# 'i' = signed int; both processes see and update this shared counter.
shared_page = Value('i', 1)
arg_list = (max_pages, shared_page)
process_list = list()
for x in range(2):
    spider_process = Process(target=trade_spider, args=arg_list)
    spider_process.daemon = True
    spider_process.start()
    process_list.append(spider_process)
for spider_process in process_list:
    while spider_process.is_alive():
        time.sleep(1.0)
    spider_process.join()

Change trade_spider's parameter list to

def trade_spider(max_pages, page) 

and remove

page = 1 

This creates two processes that work through the page list via the shared page value.
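
The answer does not show the reworked trade_spider body, so here is a minimal sketch of what it could look like under these assumptions, with each process atomically claiming the next page through the shared Value's built-in lock:

def trade_spider(max_pages, page):
    while True:
        # Atomically claim the next page number; a multiprocessing.Value
        # carries its own lock, exposed via get_lock().
        with page.get_lock():
            current = page.value
            page.value += 1
        if current > max_pages:
            break
        url = "http://stackoverflow.com/questions?page=" + str(current)
        # ... fetch and parse the page as in the original code ...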