I have the code below in my reduce function. When I try to make a shallow copy with CollectionUtils.addAll, the copy fails: every item in the list ends up holding a reference to the LAST item from the iterator, rather than the individual items. The values of the Reducer's Iterable seem inconsistent in Java MapReduce.
Here is my reduce code:
public void reduce(Text key, Iterable<ArrayListWritable<Writable>> values, Context context)
        throws IOException, InterruptedException {
    ArrayList<ArrayListWritable<Writable>> listOfWordPairs =
            new ArrayList<ArrayListWritable<Writable>>();
    // CollectionUtils.addAll(listOfWordPairs, values.iterator());
    // listOfWordPairs seems to all be the last item in the iterator
    Iterator<ArrayListWritable<Writable>> iter = values.iterator();
    // Manually do the copy
    while (iter.hasNext()) {
        // listOfWordPairs.add(iter.next());
        // Same behaviour as CollectionUtils.addAll()
        listOfWordPairs.add(new ArrayListWritable<Writable>(iter.next()));
        // Only working way to do it -> deep copy :(
    }
}
Does anyone have any idea why this happens? I can see that implementing it this way could save MR a lot of memory, but it feels like there is some magic going on here. I'm new to MR, so hopefully this isn't too silly a question.
Here is my map code, for anyone who is interested:
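For what it's worth, the symptom can be reproduced outside Hadoop with any iterator that deserializes each value into one shared mutable object, which is the same pattern the reducer's Iterable uses. This is a minimal standalone sketch (ReuseDemo and ReusingIterator are made-up names for illustration, not Hadoop classes):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ReuseDemo {
    // Iterator that returns the SAME container object every time,
    // overwriting its contents on each call to next().
    static class ReusingIterator implements Iterator<List<Integer>> {
        private final int[] data = {1, 2, 3};
        private final List<Integer> reused = new ArrayList<>(); // single shared instance
        private int pos = 0;

        public boolean hasNext() { return pos < data.length; }

        public List<Integer> next() {
            reused.clear();            // overwrite in place...
            reused.add(data[pos++]);
            return reused;             // ...and hand back the same reference
        }
    }

    public static void main(String[] args) {
        List<List<Integer>> shallow = new ArrayList<>();
        List<List<Integer>> deep = new ArrayList<>();
        Iterator<List<Integer>> it = new ReusingIterator();
        while (it.hasNext()) {
            List<Integer> v = it.next();
            shallow.add(v);                // keeps the shared reference
            deep.add(new ArrayList<>(v));  // copies the current contents
        }
        System.out.println(shallow); // [[3], [3], [3]] -- every slot is the last value
        System.out.println(deep);    // [[1], [2], [3]]
    }
}
```

Storing the reference (`shallow`) leaves every slot pointing at the one reused object, so they all show its final state; copying the contents (`deep`) is the only way to keep each value, which matches the deep-copy workaround in the reduce code above.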
@Override
public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    Map<String, HMapStFW> stripes = new HashMap<>();
    List<String> tokens = Tokenizer.tokenize(value.toString());
    if (tokens.size() < 2) return;
    context.getCounter(StripesPmiEnums.TOTALENTRIES).increment(tokens.size());
    for (int i = 0; i < tokens.size() && i < 40; i++) {
        for (int j = 0; j < tokens.size() && j < 40; j++) {
            if (j == i)
                continue;
            // Make stripe if it doesn't exist
            if (!stripes.containsKey(tokens.get(i))) {
                HMapStFW newStripe = new HMapStFW();
                stripes.put(tokens.get(i), newStripe);
            }
            HMapStFW stripe = stripes.get(tokens.get(i));
            if (stripe.containsKey(tokens.get(j))) {
                stripe.put(tokens.get(j), stripe.get(tokens.get(j)) + 1.0f);
            } else {
                stripe.put(tokens.get(j), 1.0f);
            }
        }
    }
    for (String word1 : stripes.keySet()) {
        TEXT.set(word1);
        context.write(TEXT, stripes.get(word1));
    }
}
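The stripe-building loops can be exercised without Hadoop by running the same counting logic over plain `HashMap`s. A minimal sketch (StripesSketch and buildStripes are made-up names; it mirrors the nested loops above, including the 40-token cap and the index-only self-check):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StripesSketch {
    // Same logic as the mapper's nested loops: for each token i, count every
    // co-occurring token j (j != i by index) among the first 40 tokens.
    static Map<String, Map<String, Float>> buildStripes(List<String> tokens) {
        Map<String, Map<String, Float>> stripes = new HashMap<>();
        for (int i = 0; i < tokens.size() && i < 40; i++) {
            for (int j = 0; j < tokens.size() && j < 40; j++) {
                if (j == i) continue;
                Map<String, Float> stripe =
                        stripes.computeIfAbsent(tokens.get(i), k -> new HashMap<>());
                stripe.merge(tokens.get(j), 1.0f, Float::sum); // increment or insert 1.0f
            }
        }
        return stripes;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Float>> s =
                buildStripes(java.util.List.of("a", "b", "a"));
        System.out.println(s.get("a").get("b")); // 2.0
        System.out.println(s.get("b").get("a")); // 2.0
    }
}
```

Note that because the guard compares indices (`j == i`) rather than token strings, a repeated token still co-occurs with its other occurrences (here `a` -> `a` gets counted), which is faithful to the original mapper.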
The ArrayListWritable class is available here: https://github.com/lintool/tools/blob/master/lintools-datatypes/src/main/java/tl/lin/data/array/ArrayListWritable.java
It looks to me like you are overriding the Writable interface methods; can you share that code along with your mapper code? –
@siddhartha jain I have added it to the OP. – user3538310
[Possible duplicate of Hadoop MapReduce: Iterate over input values of a reduce call](http://stackoverflow.com/questions/15976981/hadoop-mapreduce-iterate-over-input-values-of-a-reduce-call) –