2012-04-24

I just successfully installed Hadoop on a small cluster. Now I am trying to run the wordcount example, but I get this error (Job Token file not found when running the Hadoop wordcount example):

hdfs://localhost:54310/user/myname/test11 
12/04/24 13:26:45 INFO input.FileInputFormat: Total input paths to process : 1 
12/04/24 13:26:45 INFO mapred.JobClient: Running job: job_201204241257_0003 
12/04/24 13:26:46 INFO mapred.JobClient: map 0% reduce 0% 
12/04/24 13:26:50 INFO mapred.JobClient: Task Id : attempt_201204241257_0003_m_000002_0, Status : FAILED 
Error initializing attempt_201204241257_0003_m_000002_0: 
java.io.IOException: Exception reading file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken 
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:135) 
    at org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(TokenCache.java:165) 
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1179) 
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1116) 
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2404) 
    at java.lang.Thread.run(Thread.java:722) 
Caused by: java.io.FileNotFoundException: File file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken does not exist. 
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397) 
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251) 
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125) 
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283) 
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427) 
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:129) 
    ... 5 more 

Any help?

Does the path '/tmp/mapred/local' exist, and does the user the Hadoop services run under have permission to write to this directory? – 2012-04-24 18:30:59

IIRC you have to create that directory, or be a user in a group that has those permissions. Otherwise you will get a FileNotFoundException. – apesa 2012-04-24 21:50:59
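Following the comments above, a minimal sketch of checking and creating that directory (the path is taken from the stack trace; run the fix-up commands as the user the Hadoop daemons run under, or adjust ownership afterwards):

```shell
# Check whether the TaskTracker's local directory exists and who owns it
ls -ld /tmp/mapred/local 2>/dev/null || echo "missing"

# If it is missing, create it and make it traversable/writable
# for the user running the Hadoop daemons
mkdir -p /tmp/mapred/local
chmod -R 755 /tmp/mapred/local
```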

Answer

I just got past this same error. Recursively setting permissions on my Hadoop directory did not help. Following Mohyt's recommendation here, I modified core-site.xml (in the hadoop/conf/ directory) to remove the place where I had specified the temp directory (hadoop.tmp.dir in the XML). After letting Hadoop create its own temp directory, I ran without errors.
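For reference, the property being removed would look something like this in conf/core-site.xml (the value shown is only an example; use whatever path you had configured):

```xml
<!-- Removing (or commenting out) this property lets Hadoop fall back
     to its default temp directory -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value> <!-- example value only -->
</property>
```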

It is better to create your own temp directory:

<configuration> 
  <property> 
    <name>hadoop.tmp.dir</name> 
    <value>/home/unmesha/mytmpfolder/tmp</value> 
    <description>A base for other temporary directories.</description> 
  </property> 
  ..... 

And give it permission:

unmesha@ubuntu:~$ chmod 750 /home/unmesha/mytmpfolder/tmp 

Check this for core-site.xml configuration.
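Putting the steps together, a minimal sketch using a scratch path (substitute the directory you actually put in hadoop.tmp.dir; the path below is hypothetical):

```shell
# Hypothetical scratch path standing in for your hadoop.tmp.dir value
TMPDIR=/tmp/hadoop-tmp-example

# Create the directory and restrict it to the owning user and group,
# mirroring the chmod 750 from the answer above
mkdir -p "$TMPDIR"
chmod 750 "$TMPDIR"

# Verify the mode took effect (prints 750 with GNU stat)
stat -c '%a' "$TMPDIR"
```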