
I haven't found a solution to my particular problem so far; at least nothing I've tried works, and it is driving me crazy. There doesn't seem to be much out there on this particular combination. From what I can tell, the error occurs as the job hands input to the mapper. The input to this job is the Avro output of another job, compressed with deflate, though I have also tried it uncompressed. The error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext

Avro: 1.7.7, Hadoop: 2.4.1

I get this error and I can't figure out why. Here are my job, mapper, and reducer. The error happens when input comes into the mapper.

Sample uncompressed Avro input file (StockReport.SCHEMA is defined this way):

{"day": 3, "month": 2, "year": 1986, "stocks": [{"symbol": "AAME", "timestamp": 507833213000, "dividend": 10.59}]} 

Job:

@Override 
public int run(String[] strings) throws Exception { 
    Job job = Job.getInstance(); 
    job.setJobName("GenerateGraphsJob"); 
    job.setJarByClass(GenerateGraphsJob.class); 

    configureJob(job); 

    int resultCode = job.waitForCompletion(true) ? 0 : 1; 

    return resultCode; 
} 

private void configureJob(Job job) throws IOException {
    try {
        Configuration config = getConf();
        Path inputPath = ConfigHelper.getChartInputPath(config);
        Path outputPath = ConfigHelper.getChartOutputPath(config);

        job.setInputFormatClass(AvroKeyInputFormat.class);
        AvroKeyInputFormat.addInputPath(job, inputPath);
        AvroJob.setInputKeySchema(job, StockReport.SCHEMA$);

        job.setMapperClass(StockAverageMapper.class);
        job.setCombinerClass(StockAverageCombiner.class);
        job.setReducerClass(StockAverageReducer.class);

        FileOutputFormat.setOutputPath(job, outputPath);
    } catch (IOException | ClassCastException e) {
        LOG.error("A job error has occurred.", e);
    }
}

Mapper:

public class StockAverageMapper extends
        Mapper<AvroKey<StockReport>, NullWritable, StockYearSymbolKey, StockReport> {

    private static Logger LOG = LoggerFactory.getLogger(StockAverageMapper.class);

    private final StockReport stockReport = new StockReport();
    private final StockYearSymbolKey stockKey = new StockYearSymbolKey();

    @Override
    protected void map(AvroKey<StockReport> inKey, NullWritable ignore, Context context)
            throws IOException, InterruptedException {
        try {
            StockReport inKeyDatum = inKey.datum();
            for (Stock stock : inKeyDatum.getStocks()) {
                updateKey(inKeyDatum, stock);
                updateValue(inKeyDatum, stock);
                context.write(stockKey, stockReport);
            }
        } catch (Exception ex) {
            LOG.debug(ex.toString());
        }
    }

Schema for the map output key:

{
    "namespace": "avro.model",
    "type": "record",
    "name": "StockYearSymbolKey",
    "fields": [
        {
            "name": "year",
            "type": "int"
        },
        {
            "name": "symbol",
            "type": "string"
        }
    ]
}

Stack trace:

java.lang.Exception: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected 
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) 
Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected 
    at org.apache.avro.mapreduce.AvroKeyInputFormat.createRecordReader(AvroKeyInputFormat.java:47) 
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:492) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:735) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 

Edit: Not that it is the problem, but the goal of this job is to reduce the data down to something I can generate JFreeChart output from. It isn't getting past the mapper, so that shouldn't be related.

Answers


The problem is that org.apache.hadoop.mapreduce.TaskAttemptContext was a class in Hadoop 1 but became an interface in Hadoop 2.

This is one of the reasons why libraries that depend on the Hadoop libs need separately compiled jar files for Hadoop 1 and Hadoop 2. Based on your stack trace, it looks like you somehow got a Hadoop 1-compiled Avro jar file, even though you are running Hadoop 2.4.1.

The download mirrors for Avro provide nice separate downloads of avro-mapred-1.7.7-hadoop1.jar and avro-mapred-1.7.7-hadoop2.jar.
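
A quick way to check which build of avro-mapred actually ends up on the classpath at runtime is to print where AvroKeyInputFormat was loaded from; a minimal sketch:

import org.apache.avro.mapreduce.AvroKeyInputFormat;

public class WhichAvroJar {
    public static void main(String[] args) {
        // Prints the jar AvroKeyInputFormat came from, e.g. .../avro-mapred-1.7.7.jar
        // (the hadoop1 build) versus .../avro-mapred-1.7.7-hadoop2.jar.
        System.out.println(AvroKeyInputFormat.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}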


I'll give that a try. These compiled Avro classes work with my other jobs; it's only this job, which uses a shared library. My pom has 1.7.7 for avro-mapred, avro-tools, and avro. I compile the Avro schemas manually with a jar named avro-tools-1.7.7.jar. – Rig 2015-04-05 02:20:24


You nailed it. Thanks. – Rig 2015-04-05 14:09:43


The problem is that Avro 1.7.7 supports two versions of Hadoop and therefore depends on both Hadoop versions. By default, the Avro 1.7.7 jar depends on the old Hadoop version. To build with Avro 1.7.7 against Hadoop 2, just add an extra classifier line to the Maven dependency:

<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-mapred</artifactId>
    <version>1.7.7</version>
    <classifier>hadoop2</classifier>
</dependency>

This tells Maven to look for avro-mapred-1.7.7-hadoop2.jar instead of avro-mapred-1.7.7.jar.

The same applies for Avro 1.7.4 and above.
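
If you want to confirm that the change took effect, running mvn dependency:tree on the project and checking that avro-mapred is resolved with the hadoop2 classifier is a quick way to verify which jar ends up on the classpath.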
