Hadoop streaming with a Java Mapper/Reducer

I want to run a Hadoop streaming job over some Wikipedia dumps (in compressed bz2 format) with a Java Mapper/Reducer. I'm trying to use WikiHadoop, an interface recently released by Wikimedia.
WikiReader_Mapper.java
```java
package courseproj.example;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Mapper: emits ("article count", 1) for every article occurrence.
public class WikiReader_Mapper extends MapReduceBase
        implements Mapper<Text, Text, Text, IntWritable> {

    // Reuse objects to save the overhead of object creation.
    private final static Text KEY = new Text();
    private final static IntWritable VALUE = new IntWritable(1);

    @Override
    public void map(Text key, Text value,
            OutputCollector<Text, IntWritable> collector, Reporter reporter)
            throws IOException {
        KEY.set("article count");
        collector.collect(KEY, VALUE);
    }
}
```
WikiReader_Reducer.java
```java
package courseproj.example;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Reducer: sums up all the counts for a key.
public class WikiReader_Reducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

    private final static IntWritable SUM = new IntWritable();

    @Override
    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> collector, Reporter reporter)
            throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        SUM.set(sum);
        collector.collect(key, SUM);
    }
}
```
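Independent of Hadoop, the computation these two classes intend is easy to sketch in plain Java. Below is a minimal in-memory simulation (my own illustration, not part of the original post, and not using the Hadoop types): the map phase emits one ("article count", 1) pair per article, and the reduce phase groups by key and sums.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory simulation of the map/reduce flow above.
public class WikiReaderSimulation {

    // Returns the total count for the "article count" key.
    static int countArticles(List<String> articles) {
        // Map phase: emit one ("article count", 1) pair per article.
        List<Map.Entry<String, Integer>> emitted = new ArrayList<>();
        for (String article : articles) {
            emitted.add(Map.entry("article count", 1));
        }

        // Shuffle + reduce phase: group by key and sum the values.
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, Integer> pair : emitted) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts.get("article count");
    }

    public static void main(String[] args) {
        System.out.println(countArticles(List.of("a", "b", "c"))); // prints 3
    }
}
```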
The command I'm running is:
```shell
hadoop jar lib/hadoop-streaming-2.0.0-cdh4.2.0.jar \
  -libjars lib2/wikihadoop-0.2.jar \
  -D mapreduce.input.fileinputformat.split.minsize=300000000 \
  -D mapreduce.task.timeout=6000000 \
  -D org.wikimedia.wikihadoop.previousRevision=false \
  -input enwiki-latest-pages-articles10.xml-p000925001p001325000.bz2 \
  -output out \
  -inputformat org.wikimedia.wikihadoop.StreamWikiDumpInputFormat \
  -mapper WikiReader_Mapper \
  -reducer WikiReader_Reducer
```
and I get the following error message:
```
Error: java.lang.RuntimeException: Error in configuring object
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:72)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:130)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:424)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:103)
Caused by: java.io.IOException: Cannot run program "WikiReader_Mapper": java.io.IOException: error=2, No such file or directory
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
	at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:209)
```
I'm more familiar with the new Hadoop API than the old one. Since my mapper and reducer code live in two different files, where do I define the JobConf configuration parameters for the job while still following the hadoop streaming command structure (explicitly setting the mapper and reducer classes)? Is there a way to wrap the mapper and reducer code into a single class (extending Configured and implementing Tool, as is done in the new API) and pass that class name on the hadoop streaming command line, instead of setting the map and reduce classes separately?
I applied the changes you suggested. Thanks, I didn't know hadoop streaming only works with the old API. I'm still getting an error message related to the JobConf configuration, though (I've updated the question with the new error message). –
hadoop streaming assumes the mapper and reducer arguments are executables, but you've passed Java classes. Why are you using hadoop streaming at all if your map and reduce implementations are written in Java? –
For information on using Java classes as the mapper/reducer implementations, see this page: http://hadoop.apache.org/docs/r1.1.2/streaming.html#Specifying+a+Java+Class+as+the+Mapper%2FReducer –
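Following that page, the streaming command could pass fully qualified class names to `-mapper` and `-reducer` rather than bare names. A hedged sketch (the jar name `wikireader.jar` is a hypothetical placeholder for wherever the compiled mapper/reducer classes are packaged; the class must be on the task classpath, e.g. via `-libjars`):

```shell
hadoop jar lib/hadoop-streaming-2.0.0-cdh4.2.0.jar \
  -libjars lib2/wikihadoop-0.2.jar,wikireader.jar \
  -D mapreduce.input.fileinputformat.split.minsize=300000000 \
  -D mapreduce.task.timeout=6000000 \
  -D org.wikimedia.wikihadoop.previousRevision=false \
  -input enwiki-latest-pages-articles10.xml-p000925001p001325000.bz2 \
  -output out \
  -inputformat org.wikimedia.wikihadoop.StreamWikiDumpInputFormat \
  -mapper courseproj.example.WikiReader_Mapper \
  -reducer courseproj.example.WikiReader_Reducer
```

Note that, as the comments point out, if both map and reduce are written in Java anyway, a regular (non-streaming) MapReduce job driver may be the simpler route.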