2017-02-19 57 views

How can I intersect two RDDs by key in Spark, using intersection() or filter()?

But I don't really know how to use intersection() by key.

So I tried filter(), but it did not work.

For example, here are the two RDDs:

data1 //RDD[(String, Int)] = Array(("a", 1), ("a", 2), ("b", 2), ("b", 3), ("c", 1)) 
data2 //RDD[(String, Int)] = Array(("a", 3), ("b", 5)) 

val data3 = data2.map{_._1} // RDD[String] of data2's keys

// This compares each String key with the RDD data3 itself, so nothing matches:
data1.filter{_._1 == data3}.collect // Array[(String, Int)] = Array() 
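For reference, the membership test the question is aiming for can be sketched with plain Scala collections (a hypothetical stand-in for the RDD API, no SparkContext involved):

```scala
val data1 = Seq(("a", 1), ("a", 2), ("b", 2), ("b", 3), ("c", 1))
val data2 = Seq(("a", 3), ("b", 5))

// Collect data2's keys into a Set, then keep the pairs of data1 whose key is in it.
val keys = data2.map(_._1).toSet          // Set("a", "b")
val kept = data1.filter(p => keys.contains(p._1))
// kept == Seq((a,1), (a,2), (b,2), (b,3))
```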

I want to get the (key, value) pairs from data1 whose keys also appear in data2.

Array(("a", 1), ("a", 2), ("b", 2), ("b", 3)) is the result I want.

Is there a way to solve this, using intersection() by key or filter()?

Answers


I tried to improve your filter() solution with a broadcast variable:

val data1 = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 2), ("b", 3), ("c", 1))) 
val data2 = sc.parallelize(Seq(("a", 3), ("b", 5))) 

// broadcast data2 key list to use in filter method, which runs in executor nodes 
val bcast = sc.broadcast(data2.map(_._1).collect()) 

val result = data1.filter(r => bcast.value.contains(r._1)) 


println(result.collect().toList) 
//Output 
List((a,1), (a,2), (b,2), (b,3)) 

EDIT1: (using cogroup() to address the scalability issue with collect(), as per the comments)

val data1 = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 2), ("b", 3), ("c", 1))) 
val data2 = sc.parallelize(Seq(("a", 3), ("b", 5))) 

val cogroupRdd: RDD[(String, (Iterable[Int], Iterable[Int]))] = data1.cogroup(data2) 
/* List(
    (a, (CompactBuffer(1, 2), CompactBuffer(3))), 
    (b, (CompactBuffer(2, 3), CompactBuffer(5))), 
    (c, (CompactBuffer(1), CompactBuffer())) 
) */ 

//Now keep only the keys whose two CompactBuffers are both non-empty. You can 
//also write this as filter(row => row._2._1.nonEmpty && row._2._2.nonEmpty). 
val filterRdd = cogroupRdd.filter { case (k, (v1, v2)) => v1.nonEmpty && v2.nonEmpty } 
/* List(
    (a, (CompactBuffer(1, 2), CompactBuffer(3))), 
    (b, (CompactBuffer(2, 3), CompactBuffer(5))) 
) */ 

//Since we only care about data1's values, pick the first CompactBuffer 
//by doing v1.map(val1 => (k, val1)) 
val result = filterRdd.flatMap { case (k, (v1, v2)) => v1.map(val1 => (k, val1)) } 
//List((a, 1), (a, 2), (b, 2), (b, 3)) 

EDIT2:

val resultRdd = data1.join(data2).map(r => (r._1, r._2._1)).distinct() 
//List((b,2), (b,3), (a,2), (a,1)) 

Here data1.join(data2) holds the pairs with common keys (an inner join):

//List((a,(1,3)), (a,(2,3)), (b,(2,5)), (b,(3,5))) 
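The join-then-project step can be sketched with plain Scala collections (a hypothetical stand-in for the RDD API, no SparkContext needed):

```scala
val data1 = Seq(("a", 1), ("a", 2), ("b", 2), ("b", 3), ("c", 1))
val data2 = Seq(("a", 3), ("b", 5))

// Inner join: emit a pair for every combination of matching keys.
val joined = for {
  (k1, v1) <- data1
  (k2, v2) <- data2
  if k1 == k2
} yield (k1, (v1, v2))
// joined == Seq((a,(1,3)), (a,(2,3)), (b,(2,5)), (b,(3,5)))

// Project back to data1's pairs; distinct guards against
// duplicate results when data2 has several values per key.
val result = joined.map { case (k, (v1, _)) => (k, v1) }.distinct
```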

For your problem, I think cogroup() is better suited. The intersection() method matches on both the keys and the values in your data, and will result in an empty RDD.
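To see why intersection() on the full pairs comes back empty, compare the two datasets as sets of (key, value) tuples; this plain-Scala sketch mimics the same semantics without a SparkContext:

```scala
val pairs1 = Set(("a", 1), ("a", 2), ("b", 2), ("b", 3), ("c", 1))
val pairs2 = Set(("a", 3), ("b", 5))

// intersection() keeps only elements present in both RDDs; for pair RDDs
// an element is the whole (key, value) tuple, and none coincide here.
val common = pairs1 intersect pairs2
```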

The cogroup() function groups the values of both RDDs by key, giving us (key, vals1, vals2), where vals1 and vals2 contain the values of data1 and data2, respectively, for each key. Note that if a key is not shared by both datasets, one of vals1 and vals2 will be returned as an empty Seq, so we first have to filter out these tuples to arrive at the intersection of the two RDDs.

Next, we grab vals1, which contains the values from data1 for the common keys, and convert it to the format (key, Array). Finally, we use flatMapValues() to unpack the result into the (key, value) format.

val result = (data1.cogroup(data2) 
    .filter{case (k, (vals1, vals2)) => vals1.nonEmpty && vals2.nonEmpty } 
    .map{case (k, (vals1, vals2)) => (k, vals1.toArray)} 
    .flatMapValues(identity[Array[Int]])) 

result.collect() 
// Array[(String, Int)] = Array((a,1), (a,2), (b,2), (b,3)) 
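The grouping that cogroup() performs can be imitated with plain collections; this hypothetical sketch groups both sides by key, pads missing keys with an empty Seq, and then applies the same filter-and-flatten steps as above:

```scala
val data1 = Seq(("a", 1), ("a", 2), ("b", 2), ("b", 3), ("c", 1))
val data2 = Seq(("a", 3), ("b", 5))

// Group each dataset's values by key: Map[String, Seq[Int]]
val g1 = data1.groupBy(_._1).map { case (k, ps) => (k, ps.map(_._2)) }
val g2 = data2.groupBy(_._1).map { case (k, ps) => (k, ps.map(_._2)) }

// (key, vals1, vals2), with an empty Seq where a key is missing on one side
val cogrouped = (g1.keySet ++ g2.keySet).toList.sorted.map { k =>
  (k, g1.getOrElse(k, Seq()), g2.getOrElse(k, Seq()))
}

// Same as the answer: drop keys absent from either side, then unpack vals1.
val result = cogrouped
  .filter { case (_, vals1, vals2) => vals1.nonEmpty && vals2.nonEmpty }
  .flatMap { case (k, vals1, _) => vals1.map(v => (k, v)) }
```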

I don't understand cogroup well. What if I want to apply a function inside the operation, e.g. val result = data1.filter(r => bcast.value.contains(myFuncOper(r._1))), within cogroup? –