2017-04-12

I have a list of links, and each of those pages contains some interesting urls. How can I extract those urls from every link in the list?

start_urls = ['link1.com', 'link2.com', 'link3.com', ...,'linkN.com'] 

Using scrapy, how can I get the following?:

'link1.com' 'extracted1.link.com' 
'link2.com' 'extracted2.link.com' 
'link3.com' 'extracted3.link.com' 
... 
'linkN.com' 'extractedN.link.com' 

Since I'm new to scrapy, I tried it with just a single link:

class ToySpider(scrapy.Spider): 
    name = "toy" 
    allowed_domains = ["example.com"]  # domains only, no scheme or path 
    start_urls = ['link1.com'] 

    def parse(self, response): 
        for link in response.xpath(".//*[@id='object']//tbody//tr//td//span//a[2]"): 
            item = ToyItem() 
            item['link'] = link.xpath('@href').extract_first() 
            item['name'] = link 
            yield item 

However, this returns:

{'link': 'extracted1.link.com', 
'name': <Selector xpath=".//*[@id='object']//tbody//tr//td//span//a[2]" data='<a href="extracted1.link.com'>} 

How can I do the above for every element of start_urls and return the following list:

[ 
{'link': 'extracted1.link.com', 
    'name': 'link1.com'}, 
{'link': 'extracted2.link.com', 
    'name': 'link2.com'}, 
{'link': 'extracted3.link.com', 
    'name': 'link3.com'}, 
.... 
{'link': 'extractedN.link.com', 
    'name': 'linkN.com'} 
] 
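The desired mapping can be sketched without running Scrapy at all: the crawler calls parse() once per start URL and collects whatever the callback yields. A minimal pure-Python sketch of that flow (FakeResponse and the hard-coded hrefs are illustrative stand-ins, not Scrapy API):

```python
# Sketch: one parse() call per start URL; the crawler collects every yielded item.
# FakeResponse and the hard-coded hrefs below are stand-ins for illustration only.
class FakeResponse:
    def __init__(self, url, extracted_href):
        self.url = url
        self.extracted_href = extracted_href

def parse(response):
    # Pair the crawled page (response.url) with the link found on it.
    yield {'link': response.extracted_href, 'name': response.url}

start_urls = ['link1.com', 'link2.com', 'link3.com']
hrefs = ['extracted1.link.com', 'extracted2.link.com', 'extracted3.link.com']

items = [item
         for url, href in zip(start_urls, hrefs)
         for item in parse(FakeResponse(url, href))]
print(items)
```

In a real spider the pairing comes for free, because each response object already carries the URL it was fetched from.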

UPDATE

Trying @Granitosaurus's answer, which returns NaN for pages that have no match for response.xpath(".//*[@id='object']//tbody//tr//td//span//a[2]"), I did:

def parse(self, response): 
    links = response.xpath(".//*[@id='object']//tbody//tr//td//span//a[2]") 
    if not links: 
        item = ToyItem() 
        item['link'] = 'NaN' 
        item['name'] = response.url 
        return item 

    for links in links: 
        item = ToyItem() 
        item['link'] = links.xpath('@href').extract_first() 
        item['name'] = response.url # <-- see here 
    yield item 

    list_of_dics = [] 
    list_of_dics.append(item) 
    df = pd.DataFrame(list_of_dics) 
    print(df) 
    df.to_csv('/Users/user/Desktop/crawled_table.csv', index=False) 
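Worth noting: the placement of yield decides how many items come out. Dedented below the for loop, it fires only once, after the loop, with whatever the last iteration left in item. A minimal sketch of the two placements (plain dicts stand in for ToyItem):

```python
def parse_yield_outside(hrefs, url):
    # yield sits after the loop: only the item from the last iteration escapes
    for href in hrefs:
        item = {'link': href, 'name': url}
    yield item

def parse_yield_inside(hrefs, url):
    # yield sits inside the loop: one item per extracted link
    for href in hrefs:
        yield {'link': href, 'name': url}

hrefs = ['extracted3.link.com', 'other3.link.com']
only_last = list(parse_yield_outside(hrefs, 'link3.com'))
all_items = list(parse_yield_inside(hrefs, 'link3.com'))
```

Also, writing the CSV inside parse runs once per response and overwrites the file each time; Scrapy's built-in feed export (scrapy crawl toy -o crawled_table.csv) collects the items from all responses into one file instead.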

However, instead of returning (*):

'link1.com' 'NaN' 
'link2.com' 'NaN' 
'link3.com' 'extracted3.link.com' 

I get only:

'link3.com' 'extracted3.link.com' 

How can I get it to return (*)?

Answer


You can retrieve the URL your spider is currently crawling from the response.url attribute:

start_urls = ['link1.com', 'link2.com', 'link3.com', ...,'linkN.com'] 

def parse(self, response): 
    links = response.xpath(".//*[@id='object']//tbody//tr//td//span//a[2]") 
    if not links: 
        item = ToyItem() 
        item['link'] = None 
        item['name'] = response.url 
        yield item  # yield, not return: parse is a generator, so a returned value is discarded 
        return 
    for link in links: 
        item = ToyItem() 
        item['link'] = link.xpath('@href').extract_first() 
        item['name'] = response.url # <-- see here 
        yield item 
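One caveat on the early-exit branch: because parse contains yield, it is a generator, and an item handed to a return statement is swallowed by StopIteration and never reaches the crawler. The no-links item has to be yielded, followed by a bare return. A sketch of the difference (plain dicts, no Scrapy):

```python
def parse_with_return(links, url):
    # `return item` inside a generator: the value is discarded during iteration
    if not links:
        return {'link': None, 'name': url}
    for href in links:
        yield {'link': href, 'name': url}

def parse_with_yield(links, url):
    # yield, then a bare return: the placeholder item is actually emitted
    if not links:
        yield {'link': None, 'name': url}
        return
    for href in links:
        yield {'link': href, 'name': url}

dropped = list(parse_with_return([], 'link1.com'))   # []
emitted = list(parse_with_yield([], 'link1.com'))
```

This is why pages with no matching links can silently vanish from the output even when the empty case is handled.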

Thanks for the help. I have another question... I noticed that some 'linkN.com' pages have no match for './/*[@id='object']//tbody//tr//td//span//a[2]'. How can I return 'linkN, NaN' for those instances? – tumbleweed


@tumbleweed You can check whether any links were found, see my edit :) – Granitosaurus


Thanks a lot, could you check my update?.. I don't know how to return 'NaN' as the value for the sites where the spider finds no links – tumbleweed