I need to generate a full list of row_numbers for a data table with many columns. What is the SQL row_number equivalent for a Spark RDD?
In SQL, it would look like this:
select
    key_value,
    col1,
    col2,
    col3,
    row_number() over (partition by key_value order by col1, col2 desc, col3)
from
    temp
;
Now, let's say that in Spark I have an RDD of the form (K, V), where V = (col1, col2, col3), so my entries look like
(key1, (1,2,3))
(key1, (1,4,7))
(key1, (2,2,3))
(key2, (5,5,5))
(key2, (5,5,9))
(key2, (7,5,5))
etc.
and I want a new RDD with the correct row_number attached to each row:
(key1, (1,2,3), 2)
(key1, (1,4,7), 1)
(key1, (2,2,3), 3)
(key2, (5,5,5), 1)
(key2, (5,5,9), 2)
(key2, (7,5,5), 3)
etc.
I'd like to achieve this ordering using commands such as sortBy(), sortWith(), sortByKey(), zipWithIndex, etc., and end up with a new RDD.
(I don't care about the parentheses, so the result could also be of the form (K, (col1, col2, col3, rownum)).)
How do I do this?
Here is my first attempt:
val sample_data = Seq(((3,4),5,5,5),((3,4),5,5,9),((3,4),7,5,5),((1,2),1,2,3),((1,2),1,4,7),((1,2),2,2,3))
val temp1 = sc.parallelize(sample_data)
temp1.collect().foreach(println)
// ((3,4),5,5,5)
// ((3,4),5,5,9)
// ((3,4),7,5,5)
// ((1,2),1,2,3)
// ((1,2),1,4,7)
// ((1,2),2,2,3)
temp1.map(x => (x, 1)).sortByKey().zipWithIndex.collect().foreach(println)
// ((((1,2),1,2,3),1),0)
// ((((1,2),1,4,7),1),1)
// ((((1,2),2,2,3),1),2)
// ((((3,4),5,5,5),1),3)
// ((((3,4),5,5,9),1),4)
// ((((3,4),7,5,5),1),5)
// note that this isn't ordering with a partition on key value K!
val temp2 = temp1.???
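One candidate for the ??? (a sketch of an approach I'm considering, not a verified solution): group by key, sort each group in local memory, and tag every row with its position within the group. This assumes no single key has more rows than fit on one executor:

val temp2 = temp1
  .map { case (k, c1, c2, c3) => (k, (c1, c2, c3)) }  // re-key as (K, (col1, col2, col3))
  .groupByKey()                                       // gather each key's rows together
  .flatMap { case (k, vals) =>
    vals.toSeq
      .sortBy { case (c1, c2, c3) => (c1, -c2, c3) }  // col1 asc, col2 desc, col3 asc
      .zipWithIndex
      .map { case (v, i) => (k, v, i + 1) }           // row_number starts at 1
  }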
Note also that the function sortBy cannot be applied directly to an RDD; one must run collect() first, and then the output isn't an RDD either, but an array:
temp1.collect().sortBy(a => a._2 -> -a._3 -> a._4).foreach(println)
// ((1,2),1,4,7)
// ((1,2),1,2,3)
// ((1,2),2,2,3)
// ((3,4),5,5,5)
// ((3,4),5,5,9)
// ((3,4),7,5,5)
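(For what it's worth, Spark 1.0+ does expose sortBy directly on RDDs, so the collect() above shouldn't be needed just to sort. A sketch, which still sorts globally rather than per key:)

// RDD.sortBy exists in Spark 1.0+; negating a._3 emulates the "col2 desc" part.
temp1.sortBy(a => (a._2, -a._3, a._4)).collect().foreach(println)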
Here's a little more progress, but it's still not partitioned:
val temp2 = sc.parallelize(
    temp1.map(a => (a._1, (a._2, a._3, a._4)))
      .collect()
      .sortBy(a => a._2._1 -> -a._2._2 -> a._2._3)
  ).zipWithIndex
  .map(a => (a._1._1, a._1._2._1, a._1._2._2, a._1._2._3, a._2 + 1))
temp2.collect().foreach(println)
// ((1,2),1,4,7,1)
// ((1,2),1,2,3,2)
// ((1,2),2,2,3,3)
// ((3,4),5,5,5,4)
// ((3,4),5,5,9,5)
// ((3,4),7,5,5,6)
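Since the global sort keeps each key's rows contiguous, one way to turn the global index into a per-key row number (an illustrative sketch, not part of the attempt above) is to subtract, for each key, the smallest global index that key received:

val keyed = temp2.map { case (k, c1, c2, c3, i) => (k, (c1, c2, c3, i)) }
val firstIdx = keyed.mapValues(_._4).reduceByKey((a, b) => math.min(a, b))  // smallest index per key
val temp3 = keyed.join(firstIdx).map {
  case (k, ((c1, c2, c3, i), base)) => (k, c1, c2, c3, i - base + 1)        // per-key row_number
}
temp3.collect().foreach(println)  // note: output order after the join is not guaranteed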
Several other questions cover parts of this problem, i.e. http://stackoverflow.com/questions/23838614/how-to-sort-an-rdd-in-scala-spark, http://qnalist.com/questions/5086896/spark-sql-how-to-select-first-row-in-each-group-by-group, http://mail-archives.apache.org/mod_mbox/spark-user/201408.mbox/%3CD01B658B.2BF52%[email protected]%3E, http://stackoverflow.com/questions/27022059/filter-rdd-based-on-row-number, http://stackoverflow.com/questions/24677180/how-do-i-select-a-range-of-elements-in-spark-rdd – 2014-11-20 22:03:13
I'm also looking for an answer to this. [Hive added analytic functions (including row_number()) in 0.11](https://issues.apache.org/jira/browse/HIVE-896), and Spark 1.1 supports HiveQL/Hive 0.12, so it seems like sqlContext.hql("select row_number() over (partition by ...") should work, but I get an error. – dnlbrky 2014-11-23 03:52:44
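(For readers on later versions: Spark 1.4 added window functions to the DataFrame API, which express the SQL at the top directly. A sketch assuming a DataFrame df with columns key_value, col1, col2, col3, and using the row_number function name as it appears in more recent releases:)

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

// Window spec mirroring "partition by key_value order by col1, col2 desc, col3".
val w = Window.partitionBy("key_value").orderBy(col("col1"), col("col2").desc, col("col3"))
df.withColumn("rownum", row_number().over(w)).show()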