
I made a web scraper that pulls data from pages that look like this one (it scrapes the tables): https://www.techpowerup.com/gpudb/2/ How can I also scrape the subheadings from the page at that link?

The problem is that, for some reason, my program only scrapes the values, not the subheadings. For example (see the link), it scrapes "R420", "130nm", "160 million", and so on, but not "GPU Name", "Process Size", "Transistors", and so on.

What code do I need to add so that the subheadings are captured as well? Here is my code:

import csv
import requests
import bs4

url = "https://www.techpowerup.com/gpudb/2"


#obtain HTML and parse through it
response = requests.get(url)
html = response.content
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
soup = bs4.BeautifulSoup(html, "lxml")
tables = soup.findAll("table")

#reading every value in every row in each table and making a matrix
tableMatrix = []
for table in tables:
    list_of_rows = []
    for row in table.findAll('tr'):
        list_of_cells = []
        for cell in row.findAll('td'):
            text = cell.text.replace(' ', '')
            list_of_cells.append(text)
        list_of_rows.append(list_of_cells)
    tableMatrix.append((list_of_rows, list_of_cells))

#(YOU CAN PROBABLY IGNORE THIS) placeHolder used to avoid duplicate data from appearing in list
placeHolder = 0
excelTable = []
for table in tableMatrix:
    for row in table:
        if placeHolder == 0:
            for entry in row:
                excelTable.append(entry)
            placeHolder = 1
        else:
            placeHolder = 0
    excelTable.append('\n')

for value in excelTable:
    print value
    print '\n'


#create excel file and write the values into a csv
fl = open(str(count) + '.csv', 'w')
writer = csv.writer(fl)
for values in excelTable:
    writer.writerow(values)
fl.close()

Answer


If you inspect the page source, those cells are header cells, so they don't use TD tags; they use TH tags instead. You probably want to update your loop to include TH cells as well as TD cells.
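A minimal sketch of what that change could look like, keeping the rest of the question's script as-is (findAll also accepts a list of tag names, a documented alternative to the regex approach mentioned in the comments below):

# collect header cells (th) as well as value cells (td);
# passing a list of names matches any tag in the list
for cell in row.findAll(['td', 'th']):
    text = cell.text.replace(' ', '')
    list_of_cells.append(text)

With that change, each row's list should contain the subheading text (e.g. "GPU Name") alongside its value.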


When I add "th" so that it becomes "findAll('td', 'th')", it just doesn't show anything at all. Should I not add it that way? –


That is not the correct usage of the findAll method. Try this instead: 'findAll(re.compile('td|th'))'. Basically, the first argument is the tag name and the second argument is the attributes, so what you wrote tries to find a td tag with a th attribute (don't forget the import). – xycf7


Thanks! –
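For reference, the regex variant suggested in the comment above would look roughly like this (a sketch; re must be imported at the top of the script):

import re

# a compiled regex as the first argument filters on tag names,
# so this matches both the td and th cells in each row
for cell in row.findAll(re.compile('td|th')):
    text = cell.text.replace(' ', '')
    list_of_cells.append(text)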