2011-08-18

File storage problem with a Python web crawler

I am using a web crawler to scrape data and store the results (tweets from Twitter pages) as a separate HTML file for each user I crawl. I plan to parse the HTML files later and load the data into a database for analysis. However, I have run into a strange problem.

When I run the program below, a small snippet from the full crawler, I am able to get a separate HTML file for each follower:

import re 
import urllib2 
import twitter 

start_follower = "NYTimesKrugman" 
depth = 3 

searched = set() 

api = twitter.Api() 

def crawl(follower, in_depth):
    if in_depth > 0:
        searched.add(follower)
        directory = "C:\\Python28\\Followertest1\\" + follower + ".html"
        output = open(directory, 'a')
        output.write(follower)
        output.write('\n\n')
        users = api.GetFriends(follower)
        names = set([str(u.screen_name) for u in users])
        names -= searched
        for name in list(names)[0:5]:
            crawl(name, in_depth-1)

crawl(start_follower, depth) 

for x in searched: 
    print x 
print "Program is completed." 

However, when I run the full crawler, I do not get a separate HTML file for each follower:

import twitter 
import urllib 
from BeautifulSoup import BeautifulSoup 
import re 
import time 

start_follower = "NYTimeskrugman" 
depth = 2 
searched = set() 

api = twitter.Api() 


def add_to_U(user): 
    U.append(user) 

def site(follower): #creates a twitter site url in string format based on the follower username 
    followersite = "http://mobile.twitter.com/" + follower 
    return followersite 

def getPage(follower): #obtains access to a webpage 
    url = site(follower) 
    response = urllib.urlopen(url) 
    return response 

def getSoup(response): #creates the parsing module 
    html = response.read() 
    soup = BeautifulSoup(html) 
    return soup 

def gettweets(soup, output):
    tags = soup.findAll('div', {'class' : "list-tweet"}) #to obtain tweet of a follower
    for tag in tags:
        a = tag.renderContents()
        b = str(a)
        output.write(b)
        output.write('\n\n')

def are_more_tweets(soup): #to check whether there is more than one page on mobile twitter
    links = soup.findAll('a', {'href': True}, {id: 'more_link'})
    for link in links:
        b = link.renderContents()
        test_b = str(b)
        if test_b.find('more') != -1:
            return True
    return False

def getnewlink(soup): #to get the link to go to the next page of tweets on twitter
    links = soup.findAll('a', {'href': True}, {id : 'more_link'})
    for link in links:
        b = link.renderContents()
        if str(b) == 'more':
            c = link['href']
            d = 'http://mobile.twitter.com' + c
            return d

def crawl(follower, in_depth): #main method of sorts
    if in_depth > 0:
        searched.add(follower)
        directory = "C:\\Python28\\Followertest2\\" + follower + ".html"
        output = open(directory, 'a')
        output.write(follower)
        output.write('\n\n')
        a = getPage(follower)
        soup = getSoup(a)
        gettweets(soup, output)
        tweets = are_more_tweets(soup)
        while tweets:
            b = getnewlink(soup)
            red = urllib.urlopen(b)
            html = red.read()
            soup = BeautifulSoup(html)
            gettweets(soup, output)
            tweets = are_more_tweets(soup)
        users = api.GetFriends(follower)
        names = set([str(u.screen_name) for u in users])
        names -= searched
        for name in list(names)[0:5]:
            print name
            crawl(name, in_depth - 1)

crawl(start_follower, depth) 
print("Program done. Look at output file.") 

More specifically, I seem to get a separate HTML file for roughly the first five followers, and then no new files appear to be created. Any help would be appreciated!


I know this probably won't help with your problem here, but having spent some time data-mining Twitter myself a while back: may I ask why you don't just use the API? – eWizardII


Python28 ?????? –


Ah.. I am using the API to get each user's friend list, but I don't want to use the API to fetch the tweets themselves because of the rate limit. I believe that with the current code (given that the crawler has to pause to fetch all of a user's tweets), I won't exceed the current rate limit. – snehoozle

Answer


The depth value is not the same in the snippet and in the full code (with depth = 2 you only get one level of recursion in the full code). On top of that, you only take the first five names from the follower list (for name in list(names)[0:5]:), so in total you end up with six people: the starting follower and their first five friends.
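
To illustrate the adjustment the answer is pointing at, here is a minimal sketch of the crawl() settings, written in the same Python 2 style as the question and assuming the rest of the question's code stays as is; MAX_BRANCH is a made-up name for the [0:5] cap (not from the original code), and the tweet-fetching steps are elided as a comment:

import twitter

start_follower = "NYTimeskrugman"
depth = 3        # was 2 in the full crawler; 3 gives the same two levels of recursion as the snippet
MAX_BRANCH = 5   # hypothetical name for the [0:5] cap; raise it (or drop the slice) to reach more friends

searched = set()
api = twitter.Api()

def crawl(follower, in_depth):
    if in_depth > 0:
        searched.add(follower)
        directory = "C:\\Python28\\Followertest2\\" + follower + ".html"
        output = open(directory, 'a')
        output.write(follower)
        output.write('\n\n')
        # ... fetch and write this follower's tweets here, exactly as in the question ...
        users = api.GetFriends(follower)
        names = set([str(u.screen_name) for u in users]) - searched
        for name in list(names)[:MAX_BRANCH]:
            crawl(name, in_depth - 1)   # with depth = 3 this recurses two levels, as in the snippet
        output.close()   # not in the original question; closing the handle once a user is done is good practice

crawl(start_follower, depth)

With depth = 3 and the cap left at 5, this visits up to 1 + 5 + 25 = 31 users, so files should appear for followers-of-followers as well, not just for the first five friends.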


Omg. I feel so silly... Okay, let me try adjusting this and see what I get... Ah! – snehoozle


Hey, just wanted to say thank you... – snehoozle