Reduce a Spark DataFrame to omit empty cells

2016-02-29

I have a DataFrame like this:

val df = sc.parallelize(List((1, 2012, 3, 5), (2, 2012, 4, 7), (1,2013, 1, 3), (2, 2013, 9, 5))).toDF("id", "year", "propA", "propB") 

Using code inspired by Pivot Spark Dataframe:

import org.apache.spark.sql.functions._ 
import sq.implicits._ 
val years = List("2012", "2013") 
val numYears = years.length - 1 
// 
var query2 = "select id, " 
for (i <- 0 to numYears-1) { 
    query2 += "case when year = " + years(i) + " then propA else 0 end as " + "propA" + years(i) + ", " 
    query2 += "case when year = " + years(i) + " then propB else 0 end as " + "propB" + years(i) + ", " 
} 
query2 += "case when year = " + years.last + " then propA else 0 end as " + "propA" + years.last + ", " 
query2 += "case when year = " + years.last + " then propB else 0 end as " + "propB" + years.last + " from myTable" 
// 
df.registerTempTable("myTable") 
// 
val myDF1 = sq.sql(query2) 
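For years = List("2012", "2013"), the loop above builds a query along these lines (reformatted here for readability):

```sql
SELECT id,
       CASE WHEN year = 2012 THEN propA ELSE 0 END AS propA2012,
       CASE WHEN year = 2012 THEN propB ELSE 0 END AS propB2012,
       CASE WHEN year = 2013 THEN propA ELSE 0 END AS propA2013,
       CASE WHEN year = 2013 THEN propB ELSE 0 END AS propB2013
FROM myTable
```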

I managed to get:

+---+---------+---------+---------+---------+
| id|propA2012|propB2012|propA2013|propB2013|
+---+---------+---------+---------+---------+
|  1|        3|        5|        0|        0|
|  2|        4|        7|        0|        0|
|  1|        0|        0|        1|        3|
|  2|        0|        0|        9|        5|
+---+---------+---------+---------+---------+

which I managed to reduce to:

id propA2012 propB2012 propA2013 propB2013
 1         3         5         1         3
 2         4         7         9         5

using:

val df2 = myDF1.groupBy("id").agg(
    "propA2012" -> "sum", 
    "propA2013" -> "sum", 
    "propB2013" -> "sum", 
    "propB2012" -> "sum") 

Is there a way to simply loop over all the columns, without specifying each column name?

+1

Maybe [.groupBy](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=dataframe#pyspark.sql.DataFrame.groupBy)? –

+0

If I use myDF1.groupBy("id") I don't get a DataFrame as a result but grouped data, and I don't know how to handle it... if you can produce a snippet I will accept your answer – user299791

+1

Yes, you get [GroupedData](https://spark.apache.org/docs/1.5.2/api/python/pyspark.sql.html#pyspark.sql.GroupedData), so you need to aggregate –
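As the comments note, groupBy alone returns a GroupedData object that must be aggregated before you get a DataFrame back. One loop-free way to do that, sketched here under the assumption that every pivoted column can be recognised by its "prop" name prefix, is to build the aggregation map from the schema and pass it to agg(Map[String, String]):

```scala
// Build ("columnName" -> "sum") pairs for every pivoted column,
// assuming they all share the "prop" prefix (hypothetical convention
// taken from the column names in this question).
val aggMap = myDF1.columns
  .filter(_.startsWith("prop"))
  .map(name => name -> "sum")
  .toMap

// agg(Map[String, String]) applies the named aggregate to each column;
// the grouping key "id" is kept automatically. Result columns are
// named like sum(propA2012).
val df2 = myDF1.groupBy("id").agg(aggMap)
```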

Answer

3

Off the top of my head, here is one way to do it using a list of aggregate expressions:

import org.apache.spark.sql.Column 
import org.apache.spark.sql.functions.sum 

val funs: List[(String => Column)] = List(sum) 
val exprs = myDF1.dtypes 
  .filter(_._1.contains("prop")) 
  .flatMap(ct => funs.map(fun => fun(ct._1))) 
  .toList 

myDF1.groupBy('id).agg(exprs.head, exprs.tail :_*).show 

// +---+--------------+--------------+--------------+--------------+ 
// | id|sum(propA2012)|sum(propB2012)|sum(propA2013)|sum(propB2013)| 
// +---+--------------+--------------+--------------+--------------+ 
// |  1|             3|             5|             1|             3| 
// |  2|             4|             7|             9|             5| 
// +---+--------------+--------------+--------------+--------------+ 
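For what it's worth, Spark 1.6 added a built-in pivot method on grouped data, which avoids building the case-when query by hand entirely. A minimal sketch, assuming Spark 1.6+ and the original df from the question:

```scala
import org.apache.spark.sql.functions.sum

// pivot(column, values) produces one output column per (value, aggregate)
// pair; passing the year values explicitly avoids an extra pass over the
// data to discover them.
val pivoted = df.groupBy("id")
  .pivot("year", Seq(2012, 2013))
  .agg(sum("propA"), sum("propB"))
// With multiple aggregates, the pivoted value is prefixed onto each
// column name, e.g. 2012_sum(propA), 2012_sum(propB), ...
```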
+1

The top of your head is a clever place, mate – user299791

+0

Thanks, I'm still learning. – eliasah