
Hadoop: Reducer class is never invoked, even though it is overridden

I am trying to run the MapReduce WordCount code in Hadoop, but the Reducer class never gets called; the program terminates after running the Mapper class.

import java.io.IOException; 
import java.util.*; 

import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.conf.*; 
import org.apache.hadoop.io.*; 
import org.apache.hadoop.mapreduce.*; 
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; 
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat; 
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; 
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat; 

public class WordCount { 

public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> { 
    private final static IntWritable one = new IntWritable(1); 
    private Text word = new Text(); 
    @Override 
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException { 
     String line = value.toString(); 
     StringTokenizer tokenizer = new StringTokenizer(line); 
     while (tokenizer.hasMoreTokens()) { 
      word.set(tokenizer.nextToken()); 
      context.write(word, one); 
     } 
    } 
} 

public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> { 


    @Override 
    public void reduce(Text key, Iterable<IntWritable> values, Context context) 
     throws IOException, InterruptedException { 
     int sum = 0; 
     for (IntWritable val : values) { 
      sum += val.get(); 
     } 
     context.write(key, new IntWritable(sum)); 
    } 
} 

public static void main(String[] args) throws Exception { 
    Configuration conf = new Configuration(); 

    Job job = new Job(conf, "wordcount"); 

    job.setOutputKeyClass(Text.class); 
    job.setOutputValueClass(IntWritable.class); 

    job.setMapperClass(Map.class); 
    job.setReducerClass(Reduce.class); 

    job.setInputFormatClass(TextInputFormat.class); 
    job.setOutputFormatClass(TextOutputFormat.class); 

    FileInputFormat.addInputPath(job, new Path(args[0])); 
    FileOutputFormat.setOutputPath(job, new Path(args[1])); 

    job.waitForCompletion(true); 
} 

} 

I have even overridden the classes as required.

IDE: Eclipse Luna
Hadoop: version 2.5


Does it terminate successfully? Any error messages? Have you checked the logs? – vefthym 2014-09-05 07:28:14


It terminates successfully, returning 0. – ayush1794 2014-09-05 12:17:20


The strangest part is that it only behaves this way when run from Eclipse, not when I run it directly with the Hadoop CLI. – ayush1794 2014-09-18 04:22:40

Answer


A Job object forms the specification of the job and gives you control over how the job runs. When we run the job on a Hadoop cluster, we package the code into a JAR file, which Hadoop distributes around the cluster.

Rather than explicitly specifying the name of the JAR file, we can pass a class to Job's setJarByClass() method, and Hadoop will locate the relevant JAR by looking for the JAR file that contains that class.
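
For context, this is how the packaged job is typically submitted: once the code is compiled into a JAR, it is launched with the hadoop jar command (the JAR name and HDFS paths below are just placeholders):

hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output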

I don't see a setJarByClass() call in your main method. Include it, then compile and run the code:

job.setJarByClass(WordCount.class);
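
For reference, here is a minimal sketch of the corrected main method, assuming everything else stays unchanged. It also uses Job.getInstance(), the non-deprecated factory method in Hadoop 2.x, although new Job(conf, ...) behaves the same way:

public static void main(String[] args) throws Exception { 
    Configuration conf = new Configuration(); 
    Job job = Job.getInstance(conf, "wordcount"); 

    // Tell Hadoop which JAR to ship to the cluster by 
    // locating the JAR that contains this class. 
    job.setJarByClass(WordCount.class); 

    job.setOutputKeyClass(Text.class); 
    job.setOutputValueClass(IntWritable.class); 

    job.setMapperClass(Map.class); 
    job.setReducerClass(Reduce.class); 

    job.setInputFormatClass(TextInputFormat.class); 
    job.setOutputFormatClass(TextOutputFormat.class); 

    FileInputFormat.addInputPath(job, new Path(args[0])); 
    FileOutputFormat.setOutputPath(job, new Path(args[1])); 

    // Exit with a non-zero status if the job fails, so failures 
    // are visible to whatever launched the job. 
    System.exit(job.waitForCompletion(true) ? 0 : 1); 
} 

The System.exit() line is a common addition rather than part of the fix itself; job.waitForCompletion(true) alone is enough for the reducer to run once setJarByClass() is in place.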
