2017-01-03

This is a simple and basic question, I think, but I haven't managed to find a clear and simple answer. Here is my problem: opening and parsing, one by one, the URLs listed in a .txt file with Python.

I have a .txt file with one URL per line (about 300 of them). I produced these URLs with another Python script. I want to open these URLs one by one and run the following script on each of them to extract the information I'm interested in:

import urllib2 
from bs4 import BeautifulSoup 

# Fetch one airport page and pull the fields out of its <b> tags
page = urllib2.urlopen("http://www.aerodromes.fr/aeroport-de-saint-martin-grand-case-ttfg-a413.html") 
soup = BeautifulSoup(page, "html.parser") 
info_tag = soup.find_all('b') 
info_nom = info_tag[2].string 
info_pos = info_tag[4].next_sibling 
info_alt = info_tag[5].next_sibling 
info_pis = info_tag[6].next_sibling 
info_vil = info_tag[7].next_sibling 
print(info_nom + "," + info_pos + "," + info_alt + "," + info_pis + "," + info_vil) 
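Note that `.string` and `.next_sibling` can return None when a page's layout differs from the one tested, and joining None with `+` raises a TypeError. A small stdlib-only helper (hypothetical, not part of the original script) makes the final join robust to missing fields:

```python
def fmt_fields(*fields):
    """Join scraped values with commas, replacing None with ''
    so the join does not raise a TypeError on missing fields."""
    return ','.join('' if f is None else str(f).strip() for f in fields)

# Example with one missing field:
print(fmt_fields('Grand Case', None, '7 ft'))  # -> Grand Case,,7 ft
```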

aero-url.txt

http://www.aerodromes.fr/aeroport-de-la-reunion-roland-garros-fmee-a416.html, 
http://www.aerodromes.fr/aeroport-de-saint-pierre---pierrefonds-fmep-a417.html, 
http://www.aerodromes.fr/base-aerienne-de-moussoulens-lf34-a433.html, 
http://www.aerodromes.fr/aerodrome-d-yvetot-lf7622-a469.html, 
http://www.aerodromes.fr/aerodrome-de-dieppe---saint-aubin-lfab-a1.html, 
http://www.aerodromes.fr/aeroport-de-calais---dunkerque-lfac-a2.html, 
http://www.aerodromes.fr/aerodrome-de-compiegne---margny-lfad-a3.html, 
http://www.aerodromes.fr/aerodrome-d-eu---mers---le-treport-lfae-a4.html, 
http://www.aerodromes.fr/aerodrome-de-laon---chambry-lfaf-a5.html, 
http://www.aerodromes.fr/aeroport-de-peronne---saint-quentin-lfag-a6.html, 
http://www.aerodromes.fr/aeroport-de-nangis-les-loges-lfai-a7.html, 
... 

I think I have to use a loop, with something like this:

import urllib2 
from bs4 import BeautifulSoup 

# Open the file for reading 
infile = open("aero-url.txt", 'r') 

# Read every single line of the file into an array of lines 
lines = infile.readline().rstrip('\n\r') 

for line in infile 

page = urllib2.urlopen(lines) 
soup = BeautifulSoup(page, "html.parser") 

#find the places of each info 
info_tag = soup.find_all('b') 
info_nom =info_tag[2].string 
info_pos =info_tag[4].next_sibling 
info_alt =info_tag[5].next_sibling 
info_pis =info_tag[6].next_sibling 
info_vil =info_tag[7].next_sibling 

#Print them on the terminal. 
print(info_nom +","+ info_pos+","+ info_alt +","+ info_pis +","+info_vil) 

I will write these results to a txt file afterwards. But my problem is how to apply my parsing script to my text file of URLs.
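For writing the results out afterwards, a minimal sketch could append one comma-separated row per URL (the output filename results.txt is an assumption, not from the original post):

```python
def append_result(outpath, fields):
    # Append one comma-separated row; mode 'a' keeps earlier rows intact,
    # so the function can be called once per URL inside the loop.
    with open(outpath, 'a') as out:
        out.write(','.join(fields) + '\n')

# Hypothetical usage with already-scraped values:
append_result('results.txt', ['TFFG', '18.1N 63.0W', '7 ft'])
```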


'lines' is not a list of lines. Since you seem to intend to loop over every line of 'infile' anyway, I believe 'lines' is not needed. Also, you are missing some indentation, etc. –

Answer


Use line instead of lines in urlopen:

page = urllib2.urlopen(line) 

Since you are looping over infile, you don't need the lines line:

lines = infile.readline().rstrip('\n\r') 

The indentation of the loop body is also wrong.
With these corrections, your code should look like this:

import urllib2 
from bs4 import BeautifulSoup 

# Open the file for reading 
infile = open("aero-url.txt", 'r') 

for line in infile: 

    page = urllib2.urlopen(line) 
    soup = BeautifulSoup(page, "html.parser") 

    # find the places of each info 
    info_tag = soup.find_all('b') 
    info_nom = info_tag[2].string 
    info_pos = info_tag[4].next_sibling 
    info_alt = info_tag[5].next_sibling 
    info_pis = info_tag[6].next_sibling 
    info_vil = info_tag[7].next_sibling 

    # Print them on the terminal. 
    print(info_nom + "," + info_pos + "," + info_alt + "," + info_pis + "," + info_vil) 
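One remaining wrinkle not covered above: each line of aero-url.txt shown in the question ends with a comma and a newline, so passing the raw line to urlopen would produce an invalid URL. A small stdlib-only helper (a sketch, not part of the answer's code) can clean each line before fetching it:

```python
def clean_url(line):
    # Drop surrounding whitespace (including the newline),
    # then any trailing comma left over from the list format.
    return line.strip().rstrip(',')

print(clean_url('http://www.aerodromes.fr/aerodrome-d-yvetot-lf7622-a469.html, \n'))
# -> http://www.aerodromes.fr/aerodrome-d-yvetot-lf7622-a469.html
```

Inside the loop this would be used as `page = urllib2.urlopen(clean_url(line))`.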

Hi Anbarasan, thank you for your answer. Unfortunately, I get a syntax error on the 'for line in infile' line; I'll try to figure out what's wrong. If you have an idea, I'll take it! – Befup


The colon at the end of the for line was missing. I have edited the answer to include it. – Anbarasan