2017-08-13 70 views
0

Web scraping with nested loops, BeautifulSoup in Python 3. I am scraping an HTML file and have written the code shown further below. The output of its print(cells) statement is the following:

[<td colspan="2" style="text-align:left"><b>Gainers (% price change)</b> 
</td>, <td width="15%">Last Trade 
</td>, <td width="20%">Change 
</td>, <td width="15%"> 
Mkt Cap 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:GFI&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Gold Fields Limited (ADR)</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:GFI&amp;ei=H7pKWbBtgoabAZ7Kv7gI">GFI</a> 
</td>, <td>3.53 
</td>, <td width="20%"> 
<span class="chg">+0.11</span> 
<span class="chg">(3.22%)</span> 
</td>, <td>2.84B 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:VALE&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Vale SA (ADR)</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:VALE&amp;ei=H7pKWbBtgoabAZ7Kv7gI">VALE</a> 
</td>, <td>7.94 
</td>, <td width="20%"> 
<span class="chg">+0.17</span> 
<span class="chg">(2.19%)</span> 
</td>, <td>39.61B 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:CLF&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Cliffs Natural Resources</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:CLF&amp;ei=H7pKWbBtgoabAZ7Kv7gI">CLF</a> 
</td>, <td>5.97 
</td>, <td width="20%"> 
<span class="chg">+0.12</span> 
<span class="chg">(2.14%)</span> 
</td>, <td>1.69B 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:AUY&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Yamana Gold Inc. (USA)</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:AUY&amp;ei=H7pKWbBtgoabAZ7Kv7gI">AUY</a> 
</td>, <td>2.40 
</td>, <td width="20%"> 
<span class="chg">+0.05</span> 
<span class="chg">(1.91%)</span> 
</td>, <td>2.27B 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:HL&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Hecla Mining Company</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:HL&amp;ei=H7pKWbBtgoabAZ7Kv7gI">HL</a> 
</td>, <td>5.20 
</td>, <td width="20%"> 
<span class="chg">+0.09</span> 
<span class="chg">(1.86%)</span> 
</td>, <td>2.03B 
</td>] 
[<td colspan="2" style="text-align:left"><b>Losers (% price change)</b> 
</td>, <td colspan="3"> 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?cid=717954&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Jaguar Mining Inc (USA)</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?cid=717954&amp;ei=H7pKWbBtgoabAZ7Kv7gI"></a> 
</td>, <td>11.92 
</td>, <td width="20%"> 
<span class="chr">-0.74</span> 
<span class="chr">(-5.85%)</span> 
</td>, <td>2.52B 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:OLN&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Olin Corporation</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:OLN&amp;ei=H7pKWbBtgoabAZ7Kv7gI">OLN</a> 
</td>, <td>28.64 
</td>, <td width="20%"> 
<span class="chr">-1.52</span> 
<span class="chr">(-5.04%)</span> 
</td>, <td>4.81B 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NASDAQ:GPRE&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Green Plains Inc</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NASDAQ:GPRE&amp;ei=H7pKWbBtgoabAZ7Kv7gI">GPRE</a> 
</td>, <td>19.12 
</td>, <td width="20%"> 
<span class="chr">-0.98</span> 
<span class="chr">(-4.85%)</span> 
</td>, <td>708.77M 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:IPI&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Intrepid Potash, Inc.</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:IPI&amp;ei=H7pKWbBtgoabAZ7Kv7gI">IPI</a> 
</td>, <td>2.09 
</td>, <td width="20%"> 
<span class="chr">-0.09</span> 
<span class="chr">(-4.13%)</span> 
</td>, <td>261.35M 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NASDAQ:CENX&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Century Aluminum Co</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NASDAQ:CENX&amp;ei=H7pKWbBtgoabAZ7Kv7gI">CENX</a> 
</td>, <td>13.62 
</td>, <td width="20%"> 
<span class="chr">-0.56</span> 
<span class="chr">(-3.95%)</span> 
</td>, <td>1.17B 
</td>] 
[<td colspan="2" style="text-align:left"><b>Most Actives (dollar volume)</b> 
</td>, <td colspan="3"> 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:X&amp;ei=H7pKWbBtgoabAZ7Kv7gI">United States Steel Corp.</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:X&amp;ei=H7pKWbBtgoabAZ7Kv7gI">X</a> 
</td>, <td>21.27 
</td>, <td width="20%"> 
<span class="chg">+0.20</span> 
<span class="chg">(0.95%)</span> 
</td>, <td>3.77B 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:DOW&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Dow Chemical Co</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:DOW&amp;ei=H7pKWbBtgoabAZ7Kv7gI">DOW</a> 
</td>, <td>64.01 
</td>, <td width="20%"> 
<span class="chr">-1.09</span> 
<span class="chr">(-1.67%)</span> 
</td>, <td>78.06B 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:NUE&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Nucor Corporation</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:NUE&amp;ei=H7pKWbBtgoabAZ7Kv7gI">NUE</a> 
</td>, <td>56.15 
</td>, <td width="20%"> 
<span class="chg">+0.02</span> 
<span class="chg">(0.04%)</span> 
</td>, <td>18.02B 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:VALE&amp;ei=H7pKWbBtgoabAZ7Kv7gI">Vale SA (ADR)</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:VALE&amp;ei=H7pKWbBtgoabAZ7Kv7gI">VALE</a> 
</td>, <td>7.94 
</td>, <td width="20%"> 
<span class="chg">+0.17</span> 
<span class="chg">(2.19%)</span> 
</td>, <td>39.61B 
</td>] 
[<td style="text-align:left;"> 
<a href="/finance?q=NYSE:MT&amp;ei=H7pKWbBtgoabAZ7Kv7gI">ArcelorMittal SA (ADR)</a> 
</td>, <td style="text-align:left;"> 
<a href="/finance?q=NYSE:MT&amp;ei=H7pKWbBtgoabAZ7Kv7gI">MT</a> 
</td>, <td>20.16 
</td>, <td width="20%"> 
<span class="chg">+0.28</span> 
<span class="chg">(1.38%)</span> 
</td>, <td>20.06B 
</td>]

This is the code that produced the output above:

from bs4 import BeautifulSoup

with open('Basic Materials.htm') as fp:
    soup = BeautifulSoup(fp, 'lxml')
    table = soup.find('div', {'class': 'sfe-break-bottom'})
    for row in table.find_all('tr'):
        cells = row.find_all('td')
        print(cells)

Now I want to find the first three 'a' tags in each row and the text of those 'a' tags. So I removed the print(cells) statement from the code above and rewrote it as given below:

with open('Basic Materials.htm') as fp:
    soup = BeautifulSoup(fp, 'lxml')
    table = soup.find('div', {'class': 'sfe-break-bottom'})
    for row in table.find_all('tr'):
        cells = row.find_all('td')
        for link in cells.find_all('a', limit=3):
            print(link.get_text())  # gets the name
            print(link.get('href'))  # gets the links

But I am getting the following error:

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      4 for row in table.find_all('tr'):
      5     cells = row.find_all('td')
----> 6     for link in cells.find_all('a', limit=3):
      7         print(link.get_text())  # gets the name
      8         print(link.get('href'))  # gets the links

~\Anaconda3\envs\practice\lib\site-packages\bs4\element.py in __getattr__(self, key)
   1805     def __getattr__(self, key):
   1806         raise AttributeError(
-> 1807             "ResultSet object has no attribute '%s'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?" % key
   1808         )

AttributeError: ResultSet object has no attribute 'find_all'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?

Please can you tell me why I am getting this error? How do I get the first three 'a' tags and the text of those tags? Thanks.

+1

If Mr. Aguiar's answer gives you what you need, you should mark his answer as accepted. –

+0

I am getting an error – Jonelya

Answers

1

cells is a list (a ResultSet), so you can't call the .findAll method on it directly. To get what you meant by cells.find_all('a', limit=3), iterate over the cells and call it on each one, for example:

for cell in cells:
    atags = cell.findAll('a', limit=3)
    for link in atags:
        print(link.text)
        print(link['href'])

Or using a list comprehension:

atags = [cell.findAll('a', limit=3) for cell in cells]
for link in atags:
    print(link[0].text)
    print(link[0]['href'])
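
If what you want is literally the first three 'a' tags of each row (each cell in your output holds at most one link), you can also skip the per-cell loop and search the row itself; find_all accepts limit on any tag. A minimal sketch, assuming the same 'Basic Materials.htm' file and the same sfe-break-bottom div as in your question:

from bs4 import BeautifulSoup

with open('Basic Materials.htm') as fp:
    soup = BeautifulSoup(fp, 'lxml')
    table = soup.find('div', {'class': 'sfe-break-bottom'})
    for row in table.find_all('tr'):
        # limit=3 stops after the first three <a> tags in this row
        for link in row.find_all('a', limit=3):
            print(link.get_text(strip=True))  # e.g. "Gold Fields Limited (ADR)"
            print(link.get('href'))           # e.g. "/finance?q=NYSE:GFI&ei=..."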
+0

@Jonelya Glad to help, please let me know if it works! –

+0

I changed the code following your guidance: with open('Basic Materials.htm') as fp: soup = BeautifulSoup(fp, 'lxml'); table = soup.find('div', {'class': 'sfe-break-bottom'}); for row in table.find_all('tr'): cells = row.find_all('td'); atags = [cell.find_all('a', limit=3) for cell in cells]; for links in atags: print(links.get_text()); print(links.get('href')). But I am getting an error again: AttributeError: ResultSet object has no attribute 'get_text'. – Jonelya

+0

@Jonelya You are right, check my edit: use link[0].text instead of .text. –
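
Regarding the "ResultSet object has no attribute 'get_text'" error mentioned in the comments: with the list-comprehension version, each element of atags is itself a ResultSet (a list of <a> tags), so you have to index into it before asking for text. A small illustration, reusing the cells variable from the answer and guarding against cells that contain no link:

atags = [cell.findAll('a', limit=3) for cell in cells]
for links in atags:      # links is a ResultSet, i.e. a list of <a> tags
    if links:            # skip cells without an <a> tag
        print(links[0].get_text())  # called on a single Tag, so this works
        print(links[0]['href'])
    # links.get_text() would raise the AttributeError from the comment above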