2016-11-08 51 views

I'm using BeautifulSoup4 to parse a web page with the code below. How do I fix it so it finds each link only once instead of twice? (BeautifulSoup, Python)

import requests
from bs4 import BeautifulSoup

#Collect links from 'new' page 
pageRequest = requests.get('http://www.supremenewyork.com/shop/all/shirts') 
soup = BeautifulSoup(pageRequest.content, "html.parser") 
links = soup.select("div.turbolink_scroller a") 

allProductInfo = soup.find_all("a", class_="name-link") 
print(allProductInfo)

linksList1 = [] 
for href in allProductInfo: 
    linksList1.append(href.get('href')) 

print(linksList1) 

linksList1 prints two of each link it collects. I believe this is because it picks up the link from the title as well as from the item colour. I have tried a few things but can't get BS to parse only the title links, so that each link appears once in the list instead of twice. I imagine it's really simple but I'm missing it. Thanks in advance.


Make linksList1 a set() instead of a list() –
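A set discards duplicates automatically, though it does not preserve insertion order; a minimal sketch with made-up hrefs:

```python
# Deduplicate collected hrefs with a set (order is not preserved)
hrefs = ['/shop/shirts/abc', '/shop/shirts/xyz', '/shop/shirts/abc']
unique = set(hrefs)
print(unique)  # {'/shop/shirts/abc', '/shop/shirts/xyz'} in some order
```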


Thank you very much – Harvey

Answers

alldiv = soup.findAll("div", {"class": "inner-article"})
linksList1 = []
for div in alldiv:
    # each product card has exactly one title link inside its <h1>
    linksList1.append(div.h1.a['href'])

This code will give you the results without duplicates (using set() may also be a good idea, as @Tarum Gupta suggested), but I changed the way you crawl:

import requests 
from bs4 import BeautifulSoup 

#Collect links from 'new' page 
pageRequest = requests.get('http://www.supremenewyork.com/shop/all/shirts') 
soup = BeautifulSoup(pageRequest.content, "html.parser") 
links = soup.select("div.turbolink_scroller a") 

# Gets all divs with class inner-article, then searches for an <a> with the
# name-link class that is inside an <h1> tag
allProductInfo = soup.select("div.inner-article h1 a.name-link") 
# print (allProductInfo) 

linksList1 = [] 
for href in allProductInfo: 
    linksList1.append(href.get('href')) 

print(linksList1) 
linksList1 = set(linksList1)   # use set() to remove duplicate links
linksList1 = list(linksList1)  # use list() to convert the set back to a list if you need one
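Note that set() does not keep the links in page order. If order matters, `dict.fromkeys` gives an order-preserving deduplication (dicts keep insertion order in Python 3.7+); the hrefs below are made up for illustration:

```python
linksList1 = ['/shop/shirts/abc', '/shop/shirts/xyz', '/shop/shirts/abc']
# dict.fromkeys keeps the first occurrence of each link, in page order
deduped = list(dict.fromkeys(linksList1))
print(deduped)  # ['/shop/shirts/abc', '/shop/shirts/xyz']
```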