2016-01-22 64 views
0

I am using Nutch 2.1 to crawl an entire domain (e.g., company.com). I have run into the problem that, because of the content limit set in Apache Nutch, I do not get all the links I want to crawl. Typically, when I inspect the stored content, only the upper half of the page has been saved to the database, so the links in the lower half never get crawled (see also: Advice on using the Nutch content limit).

To work around this, I changed the content limit in nutch-site.xml as follows:

<property> 
    <name>http.content.limit</name> 
    <value>-1</value> 
    <description>The length limit for downloaded content using the http 
    protocol, in bytes. If this value is nonnegative (>=0), content longer 
    than it will be truncated; otherwise, no truncation at all. Do not 
    confuse this setting with the file.content.limit setting. 
    </description> 
</property> 

Doing so fixed the truncation, but at some point I ran into an out-of-memory error, as shown by this output from the parse step:

ParserJob: starting 
ParserJob: resuming: false 
ParserJob: forced reparse: false 
ParserJob: parsing all 
Exception in thread "main" java.lang.RuntimeException: job failed: name=parse, jobid=job_local_0001 
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:54) 
at org.apache.nutch.parse.ParserJob.run(ParserJob.java:251) 
at org.apache.nutch.parse.ParserJob.parse(ParserJob.java:259) 
at org.apache.nutch.parse.ParserJob.run(ParserJob.java:302) 
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) 
at org.apache.nutch.parse.ParserJob.main(ParserJob.java:306) 

Here is my hadoop.log (the part around the error):

2016-01-22 02:02:35,898 INFO crawl.SignatureFactory - Using Signature impl: org.apache.nutch.crawl.MD5Signature 
2016-01-22 02:02:37,255 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
2016-01-22 02:02:39,130 INFO mapreduce.GoraRecordReader - gora.buffer.read.limit = 10000 
2016-01-22 02:02:39,255 INFO mapreduce.GoraRecordWriter - gora.buffer.write.limit = 10000 
2016-01-22 02:02:39,322 INFO crawl.SignatureFactory - Using Signature impl: org.apache.nutch.crawl.MD5Signature 
2016-01-22 02:02:53,018 WARN mapred.FileOutputCommitter - Output path is null in cleanup 
2016-01-22 02:02:53,031 WARN mapred.LocalJobRunner - job_local_0001 
java.lang.OutOfMemoryError: Java heap space 
    at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3051) 
    at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2991) 
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3532) 
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:943) 
    at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1441) 
    at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2936) 
    at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:477) 
    at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2631) 
    at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1800) 
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2221) 
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2624) 
    at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2127) 
    at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2293) 
    at org.apache.gora.sql.store.SqlStore.execute(SqlStore.java:423) 
    at org.apache.gora.query.impl.QueryBase.execute(QueryBase.java:71) 
    at org.apache.gora.mapreduce.GoraRecordReader.executeQuery(GoraRecordReader.java:66) 
    at org.apache.gora.mapreduce.GoraRecordReader.nextKeyValue(GoraRecordReader.java:102) 
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532) 
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67) 
    at org.apache.hadoop.map 

I only run into this problem when I set the content limit to -1. However, if I don't, I may not get all the links I want. Any advice on how to use the content limit? Is setting it to -1 simply unwise? If so, what alternatives could I use? Thanks!
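
One alternative I am considering (not something from the Nutch docs, just an idea): instead of disabling truncation entirely with -1, raise http.content.limit to a large but finite value, so that whole pages fit while very large responses are still capped. A sketch for nutch-site.xml, with 1 MB as an arbitrary example value:

<property> 
    <name>http.content.limit</name> 
    <value>1048576</value> 
    <description>Allow up to 1 MB of downloaded content per page instead of 
    the default 65536 bytes, while still capping very large responses. 
    </description> 
</property> 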

+0

Why don't you increase the memory and see how it goes? – ameertawfik

+0

Is there a way to increase the memory allocated to Nutch? – dagitab

Answers

0

The problem is that you set the content limit to unlimited (-1). When your crawler hits heavy pages (e.g., https://en.wikipedia.org, https://wikipedia.org and https://en.wikibooks.org), it can run out of memory during the crawl. You should increase Nutch's heap by setting the NUTCH_HEAPSIZE environment variable, e.g., export NUTCH_HEAPSIZE=4000 (see the details in the nutch script). Note that this value is equivalent to Hadoop's HADOOP_HEAPSIZE. If it still does not work, you should increase your system's physical memory ^^
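
For example, a minimal sketch of what that could look like on the command line (assuming you launch jobs through the bin/nutch script; the -all argument just mirrors the "parsing all" run shown in the question, and NUTCH_HEAPSIZE is in megabytes):

# give the Nutch JVM a larger heap (value in MB) before re-running the job 
export NUTCH_HEAPSIZE=4000 
# re-run the parse step that previously died with OutOfMemoryError 
bin/nutch parse -all 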

Hope this helps,

– 李全安