BeautifulSoup can't parse YouTube pages

I'm trying to do some simple web scraping with Python's BeautifulSoup library, and I get a UnicodeDecodeError when trying to parse most YouTube pages.
It looks like YouTube is serving invalid characters in its HTML. That's their problem, of course, but I thought the whole point of BeautifulSoup was that it could handle malformed pages and make a best-effort guess at the result. I'd be happy if it simply dropped the invalid characters. I'm far from a Unicode expert, and the various magic incantations of encode and decode I've tried have done me no good.
Does anyone have suggestions on how to deal with this error? I don't want my code to be YouTube-specific, since it needs to handle lots of user-specified web pages.
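As a hedged aside (not part of the original question): the "drop the invalid characters" behavior the question asks for is what the standard error handlers on bytes.decode provide, independent of BeautifulSoup. A minimal sketch using the 0xd7 byte from the traceback below:

```python
# A sketch of Python's lenient decode error handlers (standard library only).
# 0xd7 is the invalid continuation byte reported in the traceback.
raw = b'valid text \xd7 invalid byte'

# errors='replace' substitutes U+FFFD for each undecodable byte.
print(raw.decode('utf-8', errors='replace'))

# errors='ignore' silently drops undecodable bytes.
print(raw.decode('utf-8', errors='ignore'))
```

Decoding the page yourself with one of these handlers, and then passing the resulting unicode string to BeautifulSoup, sidesteps the strict UTF-8 decode that is failing here.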
Here's a very simple snippet that demonstrates the problem:
import urllib
from bs4 import BeautifulSoup
url='https://www.youtube.com/watch?v=W9MzrirPrCI'
text = urllib.urlopen(url).read()
soup = BeautifulSoup(text)
The last line produces the following error:
UnicodeDecodeError Traceback (most recent call last)
/cygdrive/d/home/ll-virtualenv/lib/python2.7/site-packages/Django-1.5.1-py2.7.egg/django/core/management/commands/shell.pyc in <module>()
----> 1 soup = BeautifulSoup(text)
/cygdrive/d/home/ll-virtualenv/lib/python2.7/site-packages/bs4/__init__.pyc in __init__(self, markup, features, builder, parse_only, from_encoding, **kwargs)
170
171 try:
--> 172 self._feed()
173 except StopParsing:
174 pass
/cygdrive/d/home/ll-virtualenv/lib/python2.7/site-packages/bs4/__init__.pyc in _feed(self)
183 self.builder.reset()
184
--> 185 self.builder.feed(self.markup)
186 # Close out any unfinished strings and close all the open tags.
187 self.endData()
/cygdrive/d/home/ll-virtualenv/lib/python2.7/site-packages/bs4/builder/_lxml.pyc in feed(self, markup)
193 def feed(self, markup):
194 self.parser.feed(markup)
--> 195 self.parser.close()
196
197 def test_fragment_to_document(self, fragment):
/usr/lib/python2.7/site-packages/lxml-3.1.0-py2.7-cygwin-1.7.17-i686.egg/lxml/etree.dll in lxml.etree._FeedParser.close (src/lxml/lxml.etree.c:88786)()
/usr/lib/python2.7/site-packages/lxml-3.1.0-py2.7-cygwin-1.7.17-i686.egg/lxml/etree.dll in lxml.etree._TargetParserContext._handleParseResult (src/lxml/lxml.etree.c:98085)()
/usr/lib/python2.7/site-packages/lxml-3.1.0-py2.7-cygwin-1.7.17-i686.egg/lxml/etree.dll in lxml.etree._TargetParserContext._handleParseResult (src/lxml/lxml.etree.c:97909)()
/usr/lib/python2.7/site-packages/lxml-3.1.0-py2.7-cygwin-1.7.17-i686.egg/lxml/etree.dll in lxml.etree._ExceptionContext._raise_if_stored (src/lxml/lxml.etree.c:9071)()
/usr/lib/python2.7/site-packages/lxml-3.1.0-py2.7-cygwin-1.7.17-i686.egg/lxml/etree.dll in lxml.etree._handleSaxData (src/lxml/lxml.etree.c:94081)()
UnicodeDecodeError: 'utf8' codec can't decode byte 0xd7 in position 22: invalid continuation byte
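A hedged sketch of two possible workarounds (not from the original question; the traceback shows the lxml builder raising during its strict UTF-8 decode, so both approaches hand BeautifulSoup something it does not have to guess about). The byte string below is illustrative, not taken from YouTube:

```python
from bs4 import BeautifulSoup

# Illustrative markup: latin-1 bytes that are invalid as UTF-8.
raw = b'<p>caf\xe9</p>'

# Option 1: tell BeautifulSoup the real encoding via from_encoding
# instead of letting it guess.
soup = BeautifulSoup(raw, 'html.parser', from_encoding='latin-1')
print(soup.p.get_text())

# Option 2: decode leniently yourself, then pass unicode text in,
# so no decode happens inside the parser at all.
soup = BeautifulSoup(raw.decode('utf-8', errors='replace'), 'html.parser')
```

Explicitly naming 'html.parser' (rather than the lxml builder from the traceback) is also worth trying in isolation, since different builders differ in how strictly they treat malformed input.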
Try using scrapy instead. – 2013-05-03 16:46:05
The version I'm using is 4.1.3 and it works fine for me – Moj 2013-05-03 17:05:29
If I go back to version 3 of BeautifulSoup, it works. 4.1.3 still gives me the error above. Moj, are you using the same URL as I am? – 2013-05-04 13:47:26