2016-01-22 302 views

I want to set up Hadoop on my laptop and have followed several tutorials on setting it up, but Hadoop reports that the input path does not exist. If I run the command below again, it says the directory already exists:

bin/hdfs dfs -mkdir /user/<username> 

I ran this command.
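As a side note, the "already exists" complaint on a re-run is harmless, and `-mkdir` accepts a `-p` flag that makes the command idempotent (a sketch; `<username>` is left as a placeholder):

```shell
# -p makes mkdir idempotent: re-running it no longer fails when
# /user/<username> already exists (supported in Hadoop 2.x).
bin/hdfs dfs -mkdir -p /user/<username>
```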

I tried to run the example jar file with the following command:

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep input output 'dfs[a-z.]+' 

and received this exception:

16/01/22 15:11:06 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/<username>/.staging/job_1453492366595_0006 org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/<username>/grep-temp-891167560

I am not sure whether it matters, but I received this before the error:

16/01/22 15:51:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
16/01/22 15:51:51 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 
16/01/22 15:51:51 INFO input.FileInputFormat: Total input paths to process : 33 
16/01/22 15:51:52 INFO mapreduce.JobSubmitter: number of splits:33 
16/01/22 15:51:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1453492366595_0009 
16/01/22 15:51:52 INFO impl.YarnClientImpl: Submitted application application_1453492366595_0009 
16/01/22 15:51:52 INFO mapreduce.Job: The url to track the job: http://Marys-MacBook-Pro.local:8088/proxy/application_1453492366595_0009/ 
16/01/22 15:51:52 INFO mapreduce.Job: Running job: job_1453492366595_0009 
16/01/22 15:51:56 INFO mapreduce.Job: Job job_1453492366595_0009 running in uber mode : false 
16/01/22 15:51:56 INFO mapreduce.Job: map 0% reduce 0% 
16/01/22 15:51:56 INFO mapreduce.Job: Job job_1453492366595_0009 failed with state FAILED due to: Application application_1453492366595_0009 failed 2 times due to AM Container for appattempt_1453492366595_0009_000002 exited with exitCode: 127 
For more detailed output, check application tracking page:http://Marys-MacBook-Pro.local:8088/cluster/app/application_1453492366595_0009Then, click on links to logs of each attempt. 
Diagnostics: Exception from container-launch. 
Container id: container_1453492366595_0009_02_000001 
Exit code: 127 
Stack trace: ExitCodeException exitCode=127: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545) 
    at org.apache.hadoop.util.Shell.run(Shell.java:456) 
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722) 
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211) 
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302) 
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 


Container exited with a non-zero exit code 127 
Failing this attempt. Failing the application. 

A stack trace follows this. I am on a Mac.


What does that JAR file do? 'grep input output 'dfs[a-z.]+'' are the arguments, so I assume it runs 'grep' over the 'input' directory/files with the pattern 'dfs[a-z.]+' and puts the results in the 'output' directory? –


This is the example provided by several tutorials. Your assumption seems to be correct. I am following this site: http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html – user2983836


Did you run 'bin/hdfs dfs -put etc/hadoop input', which that link mentions? –

Answers


I use Hadoop 2.7.2, and I also ran into this problem at first while following the Official Docs.

The reason was that I had forgotten to follow the "Prepare to Start the Hadoop Cluster" section.

I solved it by setting JAVA_HOME in etc/hadoop/hadoop-env.sh.
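A minimal sketch of that change, assuming a macOS install where `/usr/libexec/java_home` reports the JDK path (exit code 127 from a YARN container typically means the launch script could not find a command such as `java`):

```shell
# etc/hadoop/hadoop-env.sh
# Set JAVA_HOME explicitly rather than relying on the inherited
# environment; containers launched by YARN do not always see it.
export JAVA_HOME=$(/usr/libexec/java_home)
```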


For me, it was because the wrong JDK version was being used with Hadoop. I use Hadoop 2.6.5. At first I started Hadoop with Oracle JDK 1.8.0_131, ran the example jar, and got the error. After switching to JDK 1.7.0_80, the example worked like a charm.
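On a Mac, one way to pin Hadoop to a specific JDK release is to point JAVA_HOME at it before restarting the daemons (a sketch; the version flag and restart scripts are assumptions for a typical single-node macOS setup):

```shell
# List the installed JDKs
/usr/libexec/java_home -V
# Select a 1.7 JDK for this shell session
export JAVA_HOME=$(/usr/libexec/java_home -v 1.7)
# Restart the daemons so they pick up the new JAVA_HOME
sbin/stop-yarn.sh && sbin/stop-dfs.sh
sbin/start-dfs.sh && sbin/start-yarn.sh
```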

There is a page about supported Java versions: HadoopJavaVersions.
