2016-08-19

How to partition a DataFrame column into ranges?

I have a DataFrame with the following schema:

root 
|-- school: string (nullable = true) 
|-- questionName: string (nullable = true) 
|-- difficultyValue: double (nullable = true) 

The data looks like this:

school  | questionName | difficultyValue 
school1 | q1           | 0.32 
school1 | q2           | 0.13 
school1 | q3           | 0.58 
school1 | q4           | 0.67 
school1 | q5           | 0.59 
school1 | q6           | 0.43 
school1 | q7           | 0.31 
school1 | q8           | 0.15 
school1 | q9           | 0.21 
school1 | q10          | 0.92 

But now I want to partition the field "difficultyValue" according to its value, and transform this DataFrame into a new DataFrame with the following schema:

root 
|-- school: string (nullable = true) 
|-- difficulty1: double (nullable = true) 
|-- difficulty2: double (nullable = true) 
|-- difficulty3: double (nullable = true) 
|-- difficulty4: double (nullable = true) 
|-- difficulty5: double (nullable = true) 

The new table would look like this:

school  | difficulty1 | difficulty2 | difficulty3 | difficulty4 | difficulty5 
school1 | 2           | 3           | 3           | 1           | 1 

The value of field "difficulty1" is the count of rows with "difficultyValue" < 0.2;

the value of field "difficulty2" is the count of rows with 0.2 <= "difficultyValue" < 0.4;

the value of field "difficulty3" is the count of rows with 0.4 <= "difficultyValue" < 0.6;

the value of field "difficulty4" is the count of rows with 0.6 <= "difficultyValue" < 0.8;

the value of field "difficulty5" is the count of rows with 0.8 <= "difficultyValue" < 1.0.
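As a sanity check, the five rules above are equivalent to the bucket index min(floor(v * 5) + 1, 5). A minimal plain-Scala sketch (no Spark needed; the helper name `bucket` is mine) reproduces the counts in the expected table:

```scala
// Map a difficultyValue in [0.0, 1.0] to its bucket index 1..5.
// floor(v * 5) + 1 realizes the five half-open ranges above;
// min(..., 5) clamps the edge case v == 1.0 into bucket 5.
def bucket(v: Double): Int = math.min(math.floor(v * 5).toInt + 1, 5)

// The ten school1 values from the sample table above.
val values = Seq(0.32, 0.13, 0.58, 0.67, 0.59, 0.43, 0.31, 0.15, 0.21, 0.92)

// Count the values per bucket: Map(1 -> 2, 2 -> 3, 3 -> 3, 4 -> 1, 5 -> 1)
val counts: Map[Int, Int] = values.groupBy(bucket).map { case (b, vs) => (b, vs.size) }
```

This matches the school1 row of the expected output: 2, 3, 3, 1, 1.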

I don't know how to transform it. What should I do?

Answers
// First create a test data frame with the schema of your given source. 
val df = { 
    import org.apache.spark.sql._ 
    import org.apache.spark.sql.types._ 
    import scala.collection.JavaConverters._ 

    val simpleSchema = StructType(
     StructField("school", StringType, false) :: 
     StructField("questionName", StringType, false) :: 
     StructField("difficultyValue", DoubleType) :: Nil) 

    val data = List(
     Row("school1", "q1", 0.32), 
     Row("school1", "q2", 0.45), 
     Row("school1", "q3", 0.22), 
     Row("school1", "q4", 0.12), 
     Row("school2", "q1", 0.32), 
     Row("school2", "q2", 0.42), 
     Row("school2", "q3", 0.52), 
     Row("school2", "q4", 0.62) 
    )  

    spark.createDataFrame(data.asJava, simpleSchema) 
} 
// Add a new column with the 1-5 difficulty category. 
import org.apache.spark.sql.functions.{col, floor} 
val df2 = df.withColumn("difficultyCat", floor(col("difficultyValue").multiply(5.0)) + 1) 
// Note: a difficultyValue of exactly 1.0 lands in category 6; clamp it first if that can occur. 
// groupBy and pivot to get the final view that you want. 
// Here we know the values 1-5 before-hand; if you don't, you can omit the sequence at a performance cost. 
val df3 = df2.groupBy("school").pivot("difficultyCat", Seq(1, 2, 3, 4, 5)).count() 

df3.show() 
Hi clay, your answer is great. Since I only have five columns, I can specify the distinct value list to pivot on, like `val df3 = df2.groupBy("schoolID").pivot("difficultyCat", Seq(1, 2, 3, 4, 5)).count()`. Thank you very much. – StrongYoung

Yes, you are right. If you know the possible values in advance, as we do in this case, you should pass them to the pivot function for performance reasons. I have updated the code in the answer. – clay

The following function:

def valueToIndex(v: Double): Int = scala.math.ceil(v*5).toInt 

will determine the desired index for your difficulty values, since you simply want 5 uniform bins. You can use this function to create a new derived column with `withColumn` and a `udf`, and then use `pivot` to count the rows for each index.
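Putting that together, the wiring described above might look like this (a sketch, not tested against a live cluster; the helper name `toDifficultyCounts` and the variable names are mine):

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, udf}

def valueToIndex(v: Double): Int = scala.math.ceil(v * 5).toInt

// Wrap the function as a UDF, derive the bucket column,
// then pivot to get one count column per index.
def toDifficultyCounts(df: DataFrame): DataFrame = {
  val indexUdf = udf(valueToIndex _)
  df.withColumn("difficultyCat", indexUdf(col("difficultyValue")))
    .groupBy("school")
    .pivot("difficultyCat", Seq(1, 2, 3, 4, 5))
    .count()
}
```

Note that with ceil, a difficultyValue of exactly 0.0 maps to index 0, outside the 1..5 range, so clamp it first if that value can occur in your data.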