Creating a list of URLs from a specific website

This is my first attempt at using programming for something useful, so please bear with me. Constructive feedback is much appreciated :)
I am building a database of all press releases from the European Parliament. So far I have built a scraper that can retrieve the data I want from one specific URL. However, after reading several tutorials, I still cannot figure out how to create a list containing the URLs of all the press releases on this particular site.
Maybe it has to do with the way the website is built, or I am (probably) just missing something obvious that an experienced programmer would spot right away, but I really do not know how to proceed from here.
This is the start URL: http://www.europarl.europa.eu/news/en/press-room
This is my code:
import re
import time
from random import randint
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

links = [] # Until now I have just manually pasted a few links
# into this list, but I need it to contain all the URLs to scrape

# Output file for the results (the filename is just an example)
f = open("press_releases.csv", "w", encoding="utf-8")

# Regex for removing HTML tags from text
TAG_RE = re.compile(r'<[^>]+>')

def remove_tags(text):
    return TAG_RE.sub('', text)

# Regex to match dates with pattern DD-MM-YYYY
date_match = re.compile(r'\d\d-\d\d-\d\d\d\d')

# For-loop to scrape variables from each page
for link in links:
    # Opening up connection and grabbing page
    uClient = uReq(link)
    # Saves content of page in new variable (still in HTML!!)
    page_html = uClient.read()
    # Close connection
    uClient.close()
    # Parsing page with soup
    page_soup = soup(page_html, "html.parser")
    # Grabs the page container
    pr_container = page_soup.find_all("div", {"id": "website"})
    # Scrape date
    date_container = pr_container[0].time
    date = date_container.text
    date = date_match.search(date)
    date = date.group()
    # Scrape title
    title = page_soup.h1.text
    title_clean = title.replace("\n", " ")
    title_clean = title_clean.replace("\xa0", "")
    title_clean = ' '.join(title_clean.split())
    title = title_clean
    # Scrape institutions involved
    type_of_question_container = pr_container[0].find_all("div", {"class": "ep_subtitle"})
    text = type_of_question_container[0].text
    question_clean = text.replace("\n", " ")
    question_clean = question_clean.replace("\xa0", " ")
    question_clean = re.sub(r"\d+", "", question_clean) # Redundant?
    question_clean = question_clean.replace("-", "")
    question_clean = question_clean.replace(":", "")
    question_clean = question_clean.replace("Press Releases", " ")
    question_clean = ' '.join(question_clean.split())
    institutions_mentioned = question_clean
    # Scrape text
    text_container = pr_container[0].find_all("div", {"class": "ep-a_text"})
    text_with_tags = str(text_container)
    text_clean = remove_tags(text_with_tags)
    text_clean = text_clean.replace("\n", " ")
    text_clean = text_clean.replace(",", " ") # Removing commas to avoid trouble with the .csv format later on
    text_clean = text_clean.replace("\xa0", " ")
    text_clean = ' '.join(text_clean.split())
    # Calculate word count
    word_count = len(text_clean.split())
    word_count = str(word_count)
    print("Finished scraping: " + link)
    # Pause between requests to be polite to the server
    time.sleep(randint(1, 5))
    f.write(date + "," + title + "," + institutions_mentioned + "," + word_count + "," + text_clean + "\n")

f.close()
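(On the comma stripping above: Python's built-in csv module quotes fields that contain commas, so the text could be written unmodified. A minimal sketch, with an illustrative filename, column names, and example row:)

import csv

# Example row; in the real scraper these values come from the loop above
row = ["04-07-2018", "Some title", "Plenary session", "42",
       "Body text, commas and all, no stripping needed"]

with open("press_releases.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["date", "title", "institutions", "word_count", "text"])
    writer.writerow(row)  # fields containing commas are quoted automatically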
HTML has standard ways of putting URLs into a page. In HTML we have src, href and action for all links: src => ('script', 'img', 'source', 'video'), href => ('a', 'link', 'area', 'base') and action => ('form', 'iframe', 'input'). First you need to extract those tags, then pull the src, href or action attribute out of each of them (no need to parse anything else or clean up dirty strings). With this method you can extract all standard HTML URLs, and you can do it with the beautifulsoup module and two for-loops! – DRPK
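A minimal sketch of what DRPK seems to be suggesting, using the tag-to-attribute mapping from the comment and the start URL above; you would still need to filter the result down to the press-release links you actually want:

from urllib.request import urlopen
from bs4 import BeautifulSoup

# Which attribute carries the URL for which tags (mapping taken from DRPK's comment)
URL_ATTRS = {
    "src": ["script", "img", "source", "video"],
    "href": ["a", "link", "area", "base"],
    "action": ["form", "iframe", "input"],
}

html = urlopen("http://www.europarl.europa.eu/news/en/press-room").read()
page = BeautifulSoup(html, "html.parser")

urls = []
for attr, tags in URL_ATTRS.items():  # first loop: one pass per attribute
    for tag in page.find_all(tags):   # second loop: every matching tag
        value = tag.get(attr)
        if value:                     # skip tags where the attribute is absent
            urls.append(value)

print(urls)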