2015-12-02

Python csv module - only getting one cell populated

I'm currently learning Python in school and have been playing around with BeautifulSoup, which has been very straightforward. I'm now trying to export a list using Python's csv module, but it isn't behaving the way I want. Here is my code:

import csv 
import requests 
from bs4 import BeautifulSoup 
import pprint 
import sys 

url = 'http://www.yellowpages.com/search?search_terms=restaurants&geo_location_terms=Charleston%2C%20SC' 
response = requests.get(url) 
html = response.content 

soup = BeautifulSoup(html, "html.parser") 
g_data = soup.find_all("div", {"class": "info"})  # this isolates the big chunks of data which house our child tags 
for item in g_data:  # iterates through big chunks 
    try: 
        eateryName = (item.contents[0].find_all("a", {"class": "business-name"})[0].text) 
    except: 
        pass 

    print(eateryName) 
with open('csvnametest.csv', "w") as csv_file: 
    writer = csv.writer(csv_file) 
    writer.writerow([eateryName]) 

I do get all the restaurant names (as the print calls prove), but when I open the document in Excel it only contains the last name on the list, not all of them. I tried appending to eateryName, but then it put all the names into a single cell.
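The symptom can be reproduced without the scrape at all: `csv.writer.writerow` treats its argument as one row, so calling it once after the loop keeps only the last value, while wrapping each name in its own one-element list yields one row per name. A standalone sketch (hardcoded names in place of the scraped data):

```python
import csv
import io

names = ["Husk", "FIG"]  # hypothetical stand-ins for the scraped results

# What the question's code effectively does: one write after the loop,
# so only the last value of eateryName ends up in the file.
buf = io.StringIO()
csv.writer(buf).writerow([names[-1]])

# The fix: one writerow call per name, each wrapped in its own list
# so every name becomes its own single-cell row.
buf = io.StringIO()
writer = csv.writer(buf)
for name in names:
    writer.writerow([name])
print(buf.getvalue())  # "Husk\r\nFIG\r\n" - one row per name
```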


When you're working with csv in Python, I'd suggest using 'pandas'. It will make your life much easier. – burhan

Answers


You can try this:

with open('csvnametest.csv', "w") as csv_file: 
    writer = csv.writer(csv_file) 
    for row in eateryName:  # assumes eateryName was built up as a list of names 
        writer.writerow([row])  # wrap each name so it occupies one cell per row 

It looks like you're trying to write everything to the CSV at once. What you should do instead is the following:

import csv 
import requests 
from bs4 import BeautifulSoup 
import pprint 
import sys 

url = 'http://www.yellowpages.com/search?search_terms=restaurants&geo_location_terms=Charleston%2C%20SC' 
response = requests.get(url) 
html = response.content 

soup = BeautifulSoup(html, "html.parser") 
g_data = soup.find_all("div", {"class": "info"})  # this isolates the big chunks of data which house our child tags 
for item in g_data:  # iterates through big chunks 
    try: 
        eateryName = (item.contents[0].find_all("a", {"class": "business-name"})[0].text) 
    except: 
        pass 

    print(eateryName) 
    with open('csvnametest.csv', "a") as csv_file:  # "a" appends instead of overwriting 
        writer = csv.writer(csv_file) 
        writer.writerow([eateryName]) 

The reason is that your write was outside the loop, so you only wrote the last entry, and you opened the file with "w" only, which writes from scratch each time and doesn't append.
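Reopening the file on every iteration works, but a tidier variant is to open the file once in "w" mode and keep the loop inside the with block; `newline=""` is the csv module's recommended setting to avoid blank lines on Windows. A sketch with hardcoded names in place of the scraped data:

```python
import csv

# Hypothetical names standing in for the scraped eateryName values
names = ["Husk", "FIG", "Poogan's Porch"]

with open("csvnametest.csv", "w", newline="") as csv_file:
    writer = csv.writer(csv_file)
    for name in names:
        writer.writerow([name])  # one name per row, one cell each
```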
