2017-07-24 47 views
-1

I am using Python to write some text into a CSV file. Below is a screenshot of how the written data currently appears in the file. [screenshot of the CSV output]

As you can see, in the Channel Social Media Links column all of the links are written correctly in the cells of the following rows, but the first link is not written in the Channel Social Media Links column. How can I write it there? [screenshot of the desired layout]

My Python script is here:

from urllib.request import urlopen as uReq 
from bs4 import BeautifulSoup as soup 

myUrl='https://www.youtube.com/user/HolaSoyGerman/about' 


uClient = uReq(myUrl) 
page_html = uClient.read() 
uClient.close() 

page_soup = soup(page_html, "html.parser") 

containers = page_soup.findAll("h1",{"class":"branded-page-header-title"}) 

filename="Products2.csv" 
f = open(filename,"w") 

headers = "Channel Name,Channel Description,Channel Social Media Links\n" 

f.write(headers) 

channel_name = containers[0].a.text 
print("Channel Name :" + channel_name) 

# For About Section Info 
aboutUrl='https://www.youtube.com/user/HolaSoyGerman/about' 


uClient1 = uReq(aboutUrl) 
page_html1 = uClient1.read() 
uClient1.close() 

page_soup1 = soup(page_html1, "html.parser") 

description_div = page_soup.findAll("div",{"class":"about-description branded-page-box-padding"}) 
channel_description = description_div[0].pre.text 
print("Channel Description :" + channel_description) 
f.write(channel_name+ "," +channel_description) 
links = page_soup.findAll("li",{"class":"channel-links-item"}) 
for link in links: 
    social_media = link.a.get("href") 
    f.write(","+","+social_media+"\n") 
f.close() 
+1

You don't include a newline after `f.write(channel_name + "," + channel_description)`, so of course the first link will end up on the end of that line. Also note that `","+","` == `",,"`, and the csv module supports writing rows from sequences rather than you adding the commas yourself. – jonrsharpe
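To illustrate the comment above: manual comma-joining breaks as soon as a field contains a comma, while `csv.writer` takes a sequence and handles separators, quoting, and the newline itself. A minimal sketch with made-up values (written to an in-memory buffer instead of a real file):

```python
import csv
import io

# Manual concatenation: ","+"," is just ",," -- and a field that itself
# contains a comma would silently shift every later column.
manual = io.StringIO()
manual.write("Name" + "," + "," + "http://example.com/link" + "\n")
print(manual.getvalue())

# csv.writer takes a sequence, adds the commas and the line ending
# itself, and quotes any field that contains a comma.
out = io.StringIO()
csv.writer(out).writerow(["Name", "A description, with a comma", "http://example.com/link"])
print(out.getvalue())
```

Note the second version produces `Name,"A description, with a comma",http://example.com/link`: the description stays in one column because the writer quotes it.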

+0

Then how can I achieve this? Please give me an example. I don't want the social media links separated by commas; I want all of the links to be written in new cells, one per row, in the Channel Social Media Links column. –

Answer

1

It would help if you used Python's csv library when writing your file. It can convert a list of items into correctly comma-separated values.

from urllib.request import urlopen as uReq 
from bs4 import BeautifulSoup as soup 
import csv 

myUrl = 'https://www.youtube.com/user/HolaSoyGerman/about' 

uClient = uReq(myUrl) 
page_html = uClient.read() 
uClient.close() 

page_soup = soup(page_html, "html.parser") 
containers = page_soup.findAll("h1",{"class":"branded-page-header-title"}) 
filename = "Products2.csv" 

with open(filename, "w", newline='') as f: 
    csv_output = csv.writer(f) 
    headers = ["Channel Name", "Channel Description", "Channel Social Media Links"] 
    csv_output.writerow(headers) 

    channel_name = containers[0].a.text 
    print("Channel Name :" + channel_name) 

    # For About Section Info 
    aboutUrl = 'https://www.youtube.com/user/HolaSoyGerman/about' 

    uClient1 = uReq(aboutUrl) 
    page_html1 = uClient1.read() 
    uClient1.close() 

    page_soup1 = soup(page_html1, "html.parser") 

    description_div = page_soup.findAll("div",{"class":"about-description branded-page-box-padding"}) 
    channel_description = description_div[0].pre.text 
    print("Channel Description :" + channel_description) 

    links = [link.a.get('href') for link in page_soup.findAll("li",{"class":"channel-links-item"})] 
    csv_output.writerow([channel_name, channel_description, links[0]]) 

    for link in links[1:]: 
        csv_output.writerow(['', '', link]) 

This would give you a single row for each of the hrefs in the last column, for example:

Channel Name,Channel Description,Channel Social Media Links 
HolaSoyGerman.,Los Hombres De Verdad Usan Pantuflas De Perrito,http://www.twitter.com/germangarmendia 
,,http://instagram.com/germanchelo 
,,http://www.youtube.com/juegagerman 
,,http://www.youtube.com/juegagerman 
,,http://www.twitter.com/germangarmendia 
,,http://instagram.com/germanchelo 
,,https://plus.google.com/108460714456031131326 

Each writerow() call writes a list of values to the file as comma-separated values and automatically adds the newline at the end for you. All that is needed is to build a list of values for each row. First, take the first of your links and make it the last value in the list, after your channel description. Secondly, write a row for each of the remaining links with empty values in the first two columns.
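The row-building described above can be sketched on its own, with hypothetical sample values standing in for the scraped data (written to an in-memory buffer rather than a file):

```python
import csv
import io

# Made-up values standing in for the scraped channel data.
channel_name = "ExampleChannel"
channel_description = "An example description"
links = ["http://twitter.com/example",
         "http://instagram.com/example",
         "http://youtube.com/example"]

buf = io.StringIO()
csv_output = csv.writer(buf)

# The first link rides on the main row, after the description...
csv_output.writerow([channel_name, channel_description, links[0]])

# ...and each remaining link gets its own row, with the first two
# columns left empty so it lines up under the same column.
for link in links[1:]:
    csv_output.writerow(['', '', link])

print(buf.getvalue())
```

This prints one main row followed by two continuation rows, matching the layout shown in the example output above.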


To answer your comment, the following should get you started:

from urllib.request import urlopen as uReq 
from bs4 import BeautifulSoup as soup 
import csv 

def get_data(url, csv_output): 

    if not url.endswith('/about'): 
        url += '/about' 

    print("URL: {}".format(url)) 
    uClient = uReq(url) 
    page_html = uClient.read() 
    uClient.close() 

    page_soup = soup(page_html, "html.parser") 
    containers = page_soup.findAll("h1", {"class":"branded-page-header-title"}) 

    channel_name = containers[0].a.text 
    print("Channel Name :" + channel_name) 

    description_div = page_soup.findAll("div", {"class":"about-description branded-page-box-padding"}) 
    channel_description = description_div[0].pre.text 
    print("Channel Description :" + channel_description) 

    links = [link.a.get('href') for link in page_soup.findAll("li", {"class":"channel-links-item"})] 
    csv_output.writerow([channel_name, channel_description, links[0]]) 

    for link in links[1:]: 
        csv_output.writerow(['', '', link]) 

    #TODO - get list of links for the related channels 

    return related_links 


my_url = 'https://www.youtube.com/user/HolaSoyGerman' 
filename = "Products2.csv" 

with open(filename, "w", newline='') as f: 
    csv_output = csv.writer(f) 
    headers = ["Channel Name", "Channel Description", "Channel Social Media Links"] 
    csv_output.writerow(headers) 

    for _ in range(5): 
        next_links = get_data(my_url, csv_output) 
        my_url = next_links[0]  # e.g. follow the first of the related links 
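The `#TODO` in `get_data` still has to produce `related_links`. One hedged sketch of how it might be filled in: the `li` class name used below (`branded-page-related-channels-item`) is a guess at the 2017 page markup, not something confirmed here, and should be checked against the live HTML.

```python
from bs4 import BeautifulSoup as soup

def get_related_links(page_soup):
    # NOTE: this class name is an assumption about YouTube's markup
    # at the time, not confirmed by the page itself.
    items = page_soup.findAll("li", {"class": "branded-page-related-channels-item"})
    links = []
    for item in items:
        anchor = item.a
        if anchor is not None and anchor.get("href"):
            # Related-channel hrefs are site-relative, e.g. /user/SomeChannel,
            # so prefix the host before returning them.
            links.append("https://www.youtube.com" + anchor["href"])
    return links
```

Inside `get_data`, this would become `related_links = get_related_links(page_soup)` just before the `return`.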
+0

I am using web scraping to get information from a YouTube channel and save it in a CSV file. Now I want that once any YouTube channel's information has been fetched into the CSV file, the URL of the first channel in its "Related channels" section is automatically fetched into a variable, and then the whole process repeats, continuing for 5 rounds. How can I do this? –

+0

I would say that is a completely different question. You need to explain it better with an example. If my answer solved your first problem, I suggest you accept the solution (click the grey tick below the up/down buttons) and then start a more detailed second question with more information. –

+0

I want my script to follow the flow below –
