
I have a problem with inject while running Nutch: the job fails. The following is the command I run:

bin/nutch inject bin/crawl/crawldb bin/urls

After running the above command, I get the following error:

Injector: starting at 2014-04-02 13:02:29 
Injector: crawlDb: bin/crawl/crawldb 
Injector: urlDir: bin/urls/seed.txt 
Injector: Converting injected urls to crawl db entries. 
Injector: total number of urls rejected by filters: 2 
Injector: total number of urls injected after normalization and filtering: 0 
Injector: Merging injected urls into crawl db. 
Injector: overwrite: false 
Injector: update: false 
Injector: java.io.IOException: Job failed! 
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357) 
    at org.apache.nutch.crawl.Injector.inject(Injector.java:294) 
    at org.apache.nutch.crawl.Injector.run(Injector.java:316) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) 
    at org.apache.nutch.crawl.Injector.main(Injector.java:306) 

This is my first time running Nutch. I have already verified that Solr and Nutch are installed correctly.

The details below are from the log file:

java.io.IOException: The temporary job-output directory file:/usr/share/apache-nutch-1.8/bin/crawl/crawldb/1639805438/_temporary doesn't exist! 
    at org.apache.hadoop.mapred.FileOutputCommitter.getWorkPath(FileOutputCommitter.java:250) 
    at org.apache.hadoop.mapred.FileOutputFormat.getTaskOutputPath(FileOutputFormat.java:244) 
    at org.apache.hadoop.mapred.MapFileOutputFormat.getRecordWriter(MapFileOutputFormat.java:46) 
    at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.<init>(ReduceTask.java:449) 
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:491) 
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:398) 
2014-04-02 12:54:46,251 ERROR crawl.Injector - Injector: java.io.IOException: Job failed! 
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357) 
    at org.apache.nutch.crawl.Injector.inject(Injector.java:294) 
    at org.apache.nutch.crawl.Injector.run(Injector.java:316) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) 
    at org.apache.nutch.crawl.Injector.main(Injector.java:306) 

Please help me. – Lussi


According to your log you have a permissions problem. The job probably does not have permission to create folders under /usr/... – Mysterion


@Mysterion Thanks for your reply. I changed the permissions as you suggested, but I still get the same error. – Lussi

Answer


Using the command bin/nutch inject bin/crawl/crawldb bin/urls for the injection,

instead of bin/nutch inject crawl/crawldb bin/urls,

solved this error.
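
For reference, the inject command takes the crawl database path followed by a directory containing seed URL files. A minimal sketch of the layout assumed here (the seed URL http://example.org/ is a placeholder and not from the question; the path bin/urls/seed.txt comes from the question's injector log):

# create the seed directory and list (path taken from the question's log: bin/urls/seed.txt)
mkdir -p bin/urls
echo "http://example.org/" > bin/urls/seed.txt   # placeholder URL, not from the question

# run the corrected inject command: <crawldb path> <seed url directory>
bin/nutch inject bin/crawl/crawldb bin/urls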

To get the URLs fetched, I made changes to the regex-urlfilter.txt file, and the URLs are now being fetched.
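
The exact edit to regex-urlfilter.txt is not shown in the answer. The injector output line "total number of urls rejected by filters: 2" means no "+" rule in conf/regex-urlfilter.txt accepted the seed URLs (or a "-" skip rule matched them first). A minimal sketch of a common fix, assuming a placeholder seed domain example.org:

# conf/regex-urlfilter.txt (sketch; example.org is a placeholder, not from the question)

# default rule that skips URLs containing ? * ! @ = ;
# comment it out if the seed URLs contain query strings
# -[?*!@=]

# accept URLs under the seed domain (or keep the shipped catch-all "+.")
+^https?://([a-z0-9-]+\.)*example\.org/

After changing the filter, re-running bin/nutch inject should report a non-zero "urls injected after normalization and filtering" count.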