I'm building a simple web spider with Sidekiq and Mechanize.

When I run this for a single domain, it works fine. When I run it for multiple domains, it fails. I believe the reason is that my web page gets overwritten when another Sidekiq worker is instantiated, but I'm not sure whether that's actually the case or how to fix it.

# my scrape_search controller's create action searches on google.
def create
  @scrape = ScrapeSearch.build(keywords: params[:keywords], profession: params[:profession])
  agent = Mechanize.new
  scrape_search = agent.get('http://google.com/') do |page|
    search_result = page.form...
    search_result.css("h3.r").map do |link|
      result = link.at_css('a')['href'] # Narrowing down to real search results
      @domain = Domain.new(some params)
      ScrapeDomainWorker.perform_async(@domain.url, @domain.id, remaining_keywords)
    end
  end
end

I create one Sidekiq job per domain. Most of the domains I'm scraping should contain only a few pages, so there's no need for per-page sub-jobs.

Here is my worker:

class ScrapeDomainWorker
  include Sidekiq::Worker
  ...

  def perform(domain_url, domain_id, keywords)
    @domain      = Domain.find(domain_id)
    @domain_link = @domain.protocol + '://' + domain_url
    @keywords    = keywords

    # First we scrape the homepage and get the first links
    @domain.to_parse = ['/'] # to_parse is an array of PATHS to parse for the domain
    mechanize_path('/')
    @domain.verified << '/' # verified is an Array field containing valid domain paths
    get_paths(@web_page) # Now we should have to_parse populated with homepage links

    @domain.scraped = 1 # Loop counter
    while @domain.scraped < 100
      @domain.to_parse.each do |path|
        @domain.to_parse.delete(path)
        @domain.scraped += 1
        mechanize_path(path) # We create a Nokogiri HTML doc with mechanize for the valid path
        ...
        get_paths(@web_page) # Fire this to repopulate to_parse !!!
      end
    end
    @domain.save
  end

  def mechanize_path(path)
    agent = Mechanize.new
    begin
      @web_page = agent.get(@domain_link + path)
    rescue Exception => e
      puts "Mechanize Exception for #{path} :: #{e.message}"
    end
  end

  def get_paths(web_page)
    paths = web_page.links.map { |link| link.href.gsub((@domain.protocol + '://' + @domain.url), "") } ## This works when I scrape a single domain, but fails with ".gsub for nil" when I scrape a few domains.
    paths.uniq.each do |path|
      @domain.to_parse << path
    end
  end

end

This works when I scrape a single domain, but fails with ".gsub for nil" on web_page when I scrape several domains.

Welcome to Stack Overflow. Please read "[mcve]" and reduce your code to the minimum needed to reproduce the problem. –

Answer

You can wrap your code in another class, and then create an object of that class inside your worker:

class ScrapeDomainWrapper
  def initialize(domain_url, domain_id, keywords)
    # ...
  end

  def mechanize_path(path)
    # ...
  end

  def get_paths(web_page)
    # ...
  end
end

And your worker:

class ScrapeDomainWorker
  include Sidekiq::Worker

  def perform(domain_url, domain_id, keywords)
    ScrapeDomainWrapper.new(domain_url, domain_id, keywords)
  end
end
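
The point is that the wrapper holds all of its state as instance variables on its own object, so two concurrent jobs never share a Mechanize agent or page. A minimal sketch of what that might look like, reconstructed from the worker code in the question (the details are illustrative assumptions, not a drop-in implementation):

    require 'mechanize'

    class ScrapeDomainWrapper
      def initialize(domain_url, domain_id, keywords)
        @domain      = Domain.find(domain_id)
        @domain_link = @domain.protocol + '://' + domain_url
        @keywords    = keywords
        @agent       = Mechanize.new # each wrapper object owns its own agent
      end

      def mechanize_path(path)
        @web_page = @agent.get(@domain_link + path)
      rescue StandardError => e
        puts "Mechanize Exception for #{path} :: #{e.message}"
        nil # callers can skip paths that failed to load
      end
    end

As a side note, rescuing StandardError rather than Exception avoids swallowing Sidekiq's shutdown signal, which is raised as an exception outside the StandardError hierarchy.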

Also, bear in mind that Mechanize::Page#links may be nil.
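
A defensive version of get_paths would guard against both that and links whose href is nil, which is what produces the "undefined method `gsub' for nil" error in the question. A sketch under those assumptions:

    def get_paths(web_page)
      return if web_page.nil? || web_page.links.nil?
      prefix = @domain.protocol + '://' + @domain.url
      web_page.links.map(&:href).compact.uniq.each do |path|
        @domain.to_parse << path.gsub(prefix, '')
      end
    end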

I wrapped it as you suggested. I also converted all instance variables to local variables (@web_page became web_page, etc.). I'm still getting an "undefined method 'gsub' for nil:NilClass" on paths = web_page.links.map { |link| link.href.gsub((@domain.protocol + '://' + @domain.url), "") }. Strangely, it works just fine if I run it on its own. – Ben

If you move the code into another class, you don't need to rename the variables. As long as they're instance variables rather than class variables, everything is fine. Also, I think Mechanize::Link#href may be nil in some cases. You should check for that. – Wikiti

Yes, I added a failsafe for that. Thanks for your help! – Ben