2016-06-28 45 views

How do I extract data from strings with no separators?

I need to extract data from four strings already parsed out with BeautifulSoup. They are:

Arkansas72.21:59 AM76.29:04 AM5.22977.37:59 AM 

Ashley71.93:39 AM78.78:59 AM0.53678.78:59 AM 

Bradley72.64:49 AM77.28:59 AM2.41877.28:49 AM 

Chicot-40.19:04 AM-40.19:04 AM2.573-40.112:09 AM 

The data in the first string, for example, is Arkansas, 72.2, 1:59 AM, 76.2, 9:04 AM, 5.2, 29, 77.3, and 7:59 AM. Is there a simple way to do this?

Edit: full code

import urllib2 
from bs4 import BeautifulSoup 
import time 

def scraper(): 

    #Arkansas State Plant Board Weather Web data 
    url1 = 'http://170.94.200.136/weather/Inversion.aspx' 

    #opens url and parses HTML into Unicode 
    page1 = urllib2.urlopen(url1) 
    soup1 = BeautifulSoup(page1, 'lxml') 

    #soup1.get_text() gives a single Unicode string of the relevant data from the url 
    #without print(), it comes back without proper spacing 
    sp1 = soup1.get_text() 

    #datasp1 is the chunk with the website data in it so the search for Arkansas doesn't return the header 
    #everything else finds locations for Unicode strings for first four stations 
    start1 = sp1.find('Today') 
    end1 = sp1.find('new Sys.') 
    datasp1 = sp1[start1:end1-10] 

    startArkansas = datasp1.find('Arkansas') 
    startAshley = datasp1.find('Ashley') 
    dataArkansas = datasp1[startArkansas:startAshley-2] 

    startBradley = datasp1.find('Bradley') 
    dataAshley = datasp1[startAshley:startBradley-2] 

    startChicot = datasp1.find('Chicot') 
    dataBradley = datasp1[startBradley:startChicot-2] 

    startCleveland = datasp1.find('Cleveland') 
    dataChicot = datasp1[startChicot:startCleveland-2] 


    print(dataArkansas) 
    print(dataAshley) 
    print(dataBradley) 
    print(dataChicot) 

Can you also show the BeautifulSoup-specific part? I suspect the problem may be in how you are extracting this data from the HTML. – alecxe


You could do it with a regular expression. – Copperfield
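For reference, the regex route could look like the sketch below. The pattern is an inference from the four sample strings above (in particular, it assumes temperatures have at most two integer digits, which is what lets a lazy wind-direction group split runs like "2977.3" into 29 and 77.3):

```python
import re

# Assumed field layout, inferred from the sample strings:
# name, low temp, time of low, high temp, time of high,
# wind speed, wind direction, current temp, current time.
ROW = re.compile(
    r"^([A-Za-z ]+?)"                        # station name
    r"(-?\d{1,2}\.\d)(\d{1,2}:\d{2} [AP]M)"  # low temp, time of low
    r"(-?\d{1,2}\.\d)(\d{1,2}:\d{2} [AP]M)"  # high temp, time of high
    r"(\d{1,2}\.\d)"                         # wind speed
    r"(\d+?)"                                # wind direction (lazy, so the
                                             # current temp can still match)
    r"(-?\d{1,2}\.\d)"                       # current temp
    r"(\d{1,2}:\d{2} [AP]M)$"                # current time
)

m = ROW.match("Arkansas72.21:59 AM76.29:04 AM5.22977.37:59 AM")
print(m.groups())
```

This recovers ('Arkansas', '72.2', '1:59 AM', '76.2', '9:04 AM', '5.2', '29', '77.3', '7:59 AM'), and it also handles the negative readings in the Chicot string. It is brittle by design, though, which is why the answers below sidestep it.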


@Copperfield: a regular expression would fit the bill. But I think alecxe is right that this is an [XY problem](http://www.perlmonks.org/?node=XY+Problem). –

Answers


Just to improve the way you extract the tabular data: I would read it with pandas.read_html() into a DataFrame, which I am sure you will find convenient to work with:

import pandas as pd 

df = pd.read_html("http://170.94.200.136/weather/Inversion.aspx", attrs={"id": "MainContent_GridView1"})[0] 
print(df) 

How do I get each table value as its own variable? –


@MichaelFisher Yes. If you haven't used pandas before, take some time to study how to use it; it's worth it. You can iterate over it in many different ways, for example: http://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe. – alecxe
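To pull individual table values out of the DataFrame, one option is to index it by station. A minimal sketch with made-up sample data (the real column labels come from the page's table headers and may differ):

```python
import pandas as pd

# Stand-in for the frame pd.read_html() would return; the column
# names and values here are assumptions, not the live page's data.
df = pd.DataFrame(
    [["Arkansas", 72.2, "1:59 AM"],
     ["Ashley", 71.9, "3:39 AM"]],
    columns=["Station", "Low Temp", "Time Of Low"],
)

# Index by station so each cell is a two-label lookup.
by_station = df.set_index("Station")
low_arkansas = by_station.loc["Arkansas", "Low Temp"]
time_arkansas = by_station.loc["Arkansas", "Time Of Low"]
print(low_arkansas, time_arkansas)
```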


This is 100 times easier than what I was doing before. I'll look into pandas more. I'm new to Python. –


You need to parse the HTML page with BeautifulSoup and retrieve the data:

from urllib2 import urlopen 
from bs4 import BeautifulSoup 

url1 = 'http://170.94.200.136/weather/Inversion.aspx' 

#opens url and parses HTML into Unicode 
page1 = urlopen(url1) 
soup1 = BeautifulSoup(page1, 'lxml') 

# get the table 
table = soup1.find(id='MainContent_GridView1') 

# find the headers 
headers = [h.get_text() for h in table.find_all('th')] 

# retrieve data 
data = {} 
tr_elems = table.find_all('tr') 
for tr in tr_elems: 
    tr_content = [td.get_text() for td in tr.find_all('td')] 
    if tr_content: 
        data[tr_content[0]] = dict(zip(headers[1:], tr_content[1:])) 

print(data) 

This example will display:

{ 
    "Greene West": { 
    "Low Temp (\u00b0F)": "67.7", 
    "Time Of High": "10:19 AM", 
    "Wind Speed (MPH)": "0.6", 
    "High Temp (\u00b0F)": "83.2", 
    "Wind Dir (\u00b0)": "20", 
    "Time Of Low": "6:04 AM", 
    "Current Time": "10:19 AM", 
    "Current Temp (\u00b0F)": "83.2" 
    }, 
    "Cleveland": { 
    "Low Temp (\u00b0F)": "70.8", 
    "Time Of High": "10:14 AM", 
    "Wind Speed (MPH)": "1.9", 
    [.....] 

}
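Once that nested dict is built (station -> {header: value}), each reading is a two-key lookup. A small sketch with sample stand-in values rather than live readings:

```python
# Same shape as the `data` dict the answer builds; the values here
# are made-up stand-ins, not readings from the live page.
data = {
    "Arkansas": {
        "Low Temp (\u00b0F)": "72.2",
        "Time Of Low": "1:59 AM",
    },
}

# Cell values are strings, so convert numeric fields as needed.
low = float(data["Arkansas"]["Low Temp (\u00b0F)"])
print(low, data["Arkansas"]["Time Of Low"])
```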