2015-11-07

I am getting the following error while executing a JAR file on HDFS with the hadoop jar command:

#hadoop jar WordCountNew.jar WordCountNew /MRInput57/Input-Big.txt /MROutput57 
15/11/06 19:46:32 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
15/11/06 19:46:32 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:8020/var/lib/hadoop-0.20/cache/mapred/mapred/staging/root/.staging/job_201511061734_0003 
15/11/06 19:46:32 ERROR security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /MRInput57/Input-Big.txt already exists 
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /MRInput57/Input-Big.txt already exists 
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132) 
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:921) 
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:882) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:396) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278) 
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:882) 
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:526) 
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:556) 
    at MapReduce.WordCountNew.main(WordCountNew.java:114) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
    at java.lang.reflect.Method.invoke(Method.java:597) 
    at org.apache.hadoop.util.RunJar.main(RunJar.java:197) 


My driver class is as follows: 

    public static void main(String[] args) throws Exception { 
     // Configuration details w.r.t. the Job and JAR file 
     Configuration conf = new Configuration(); 
     Job job = new Job(conf, "WORDCOUNTJOB"); 

     // Setting Driver class 
     job.setJarByClass(MapReduceWordCount.class); 
     // Setting the Mapper class 
     job.setMapperClass(TokenizerMapper.class); 
     // Setting the Combiner class 
     job.setCombinerClass(IntSumReducer.class); 
     // Setting the Reducer class 
     job.setReducerClass(IntSumReducer.class); 
     // Setting the Output Key class 
     job.setOutputKeyClass(Text.class); 
     // Setting the Output value class 
     job.setOutputValueClass(IntWritable.class); 
     // Adding the Input path 
     FileInputFormat.addInputPath(job, new Path(args[0])); 
     // Setting the output path 
     FileOutputFormat.setOutputPath(job, new Path(args[1])); 

     // System exit strategy 
     System.exit(job.waitForCompletion(true) ? 0 : 1); 
    } 

Can someone please point out what needs to be corrected in my code?

Regards, Pranav

Answers


You need to check whether the output directory already exists and delete it if it does. MapReduce cannot (or will not) write files into a directory that already exists; it needs to create the directory itself to be sure.

Add something like this:

Path outPath = new Path(args[1]); 
FileSystem dfs = FileSystem.get(outPath.toUri(), conf); 
if (dfs.exists(outPath)) { 
    dfs.delete(outPath, true); 
} 
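If you just want to see the same exists-then-delete pattern in action without a running cluster, the sketch below mimics on the local filesystem, with `java.nio.file`, what the recursive `dfs.delete(outPath, true)` call above does on HDFS. The class name `DeleteOutputDirDemo` and the temp-directory setup are invented for this illustration.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class DeleteOutputDirDemo {

    // Delete a directory tree if it exists; returns true when something was removed.
    // This mirrors the recursive HDFS call dfs.delete(outPath, true).
    static boolean deleteIfExists(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return false;
        }
        try (Stream<Path> walk = Files.walk(dir)) {
            // Reverse order deletes children before their parent directories.
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a stale job output directory containing a part file.
        Path outDir = Files.createTempDirectory("MROutput57-demo");
        Files.write(outDir.resolve("part-r-00000"), "hello 1\n".getBytes());

        System.out.println(deleteIfExists(outDir)); // prints "true": the directory existed
        System.out.println(Files.exists(outDir));   // prints "false": it is gone now
    }
}
```

On HDFS the `FileSystem` snippet above is the equivalent; run it in the driver before `job.waitForCompletion(...)` so the job never sees a pre-existing output directory.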

The output directory should not exist before you execute the program. Either delete the existing directory, supply a new one, or delete the output directory from within the program.

I would prefer deleting the output directory from the command prompt before executing the program.

From the command prompt:

hdfs dfs -rm -r <your_output_directory_HDFS_URL> 

From Java:

Chris Gerken's code above is good enough. 
The directory you are trying to create to store the output already exists. Either delete the existing directory of the same name or change the name of the output directory.


As others have already noted, you are getting the error because the output directory already exists, most likely because you have tried to execute this job before.

You can delete the existing output directory right before running the job, i.e.:

#hadoop fs -rm -r /MROutput57 && \ 
hadoop jar WordCountNew.jar WordCountNew /MRInput57/Input-Big.txt /MROutput57