
Converting a List of Lists or an RDD into a DataFrame in Spark-Scala

So basically what I am trying to achieve is this: I have a table with (say) 4 columns, exposed as a DataFrame, DF1. Now I want to store every row of DF1 into another Hive table (basically DF2, whose schema is Column1, Column2, Column3), where the value of Column3 is the row of DF1 joined with a "-" separator.

import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.Column

val df = hiveContext.sql("from hive_table SELECT *")
val writeToHiveDf = df.filter(new Column("id").isNotNull)

var builder: List[(String, String, String)] = Nil
val finalOne = new ListBuffer[List[(String, String, String)]]()
writeToHiveDf.rdd.collect().foreach { row =>
  val item = row.mkString("-")   // build the "-"-delimited row string
  builder = List(List("dummy", "NEVER_NULL_CONSTRAINT", "some alpha")).map { case List(a, b, c) => (a, b, c) }
  finalOne += builder
}
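For clarity, here is a plain-Scala sketch of the structure the loop above ends up building (the sample rows and the "-" separator are stand-ins based on the description; the original snippet fills the third slot with a literal rather than item):

import scala.collection.mutable.ListBuffer

// Stand-in for writeToHiveDf.rdd.collect()
val rows = Seq(Seq("1", "a", "b", "c"), Seq("2", "d", "e", "f"))

var builder: List[(String, String, String)] = Nil
val finalOne = new ListBuffer[List[(String, String, String)]]()

rows.foreach { row =>
  val item = row.mkString("-")                     // "-"-delimited row string
  builder = List(("dummy", "NEVER_NULL_CONSTRAINT", item))
  finalOne += builder                              // appends a whole List, not a tuple
}
// finalOne: ListBuffer[List[(String, String, String)]] -- one extra level of nesting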

Now I have finalOne, a list of lists, and I want to convert it into a DataFrame, either directly or via an RDD.

var listRDD = sc.parallelize(finalOne) //Converts to RDD - It works. 
val dataFrameForHive : DataFrame = listRDD.toDF("table_name", "constraint_applied", "data") //Doesn't work 

Error:

java.lang.ClassCastException: org.apache.spark.sql.types.ArrayType cannot be cast to org.apache.spark.sql.types.StructType 
    at org.apache.spark.sql.SQLContext.createDataFrame(SQLContext.scala:414) 
    at org.apache.spark.sql.SQLImplicits.rddToDataFrameHolder(SQLImplicits.scala:94) 

Can someone help me understand the right way to convert this into a DataFrame? Thanks in advance for your support.


What schema do you want the DataFrame to have: 3 columns of type String, or 1 column of array type whose elements are structs of 3 strings?

Answer

If you want 3 columns of type String in your DataFrame, you should flatten the List[List[(String, String, String)]] into a List[(String, String, String)]. Spark infers ArrayType for the List elements, which cannot be cast to the StructType that createDataFrame expects, whereas an RDD of tuples maps to the 3-column schema you want:

var listRDD = sc.parallelize(finalOne.flatten) // makes List[(String,String,String)] 
val dataFrameForHive : DataFrame = listRDD.toDF("table_name", "constraint_applied", "data") 
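For completeness, a minimal end-to-end sketch of the flattened conversion, assuming Spark 1.x with sc and hiveContext in scope as in the question (the implicits import is what enables .toDF on an RDD of tuples):

import org.apache.spark.sql.DataFrame
import hiveContext.implicits._                     // required for .toDF on RDDs of tuples

// flatten: List[List[(String, String, String)]] -> List[(String, String, String)]
val flatRDD = sc.parallelize(finalOne.flatten)     // RDD[(String, String, String)]
val hiveDF: DataFrame = flatRDD.toDF("table_name", "constraint_applied", "data")

hiveDF.printSchema()
// root
//  |-- table_name: string (nullable = true)
//  |-- constraint_applied: string (nullable = true)
//  |-- data: string (nullable = true)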