
Spark Streaming - Kafka - createStream - RDD to DataFrame

We have a stream of data coming in through a Kafka topic, which I read using Spark Streaming:

val ssc = new StreamingContext(l_sparkcontext, Seconds(30))
val kafkaStream = KafkaUtils.createStream(ssc, "xxxx.xx.xx.com:2181", "new-spark-streaming-group", Map("event_log" -> 10))
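For context, here is a minimal sketch of the wiring these snippets assume (Spark 1.x with the receiver-based Kafka API; the class name kafka_receive_messages comes from the stack trace below, while the SQLContext and the start/await calls are my assumptions):

    import java.util.Calendar

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.{SQLContext, SaveMode}
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object kafka_receive_messages {
      def main(args: Array[String]): Unit = {
        val l_sparkconf    = new SparkConf().setAppName("kafka_receive_messages")
        val l_sparkcontext = new SparkContext(l_sparkconf)
        val sqlContext     = new SQLContext(l_sparkcontext)
        import sqlContext.implicits._          // required for rdd.toDF(...)

        val ssc = new StreamingContext(l_sparkcontext, Seconds(30))
        val kafkaStream = KafkaUtils.createStream(ssc, "xxxx.xx.xx.com:2181",
          "new-spark-streaming-group", Map("event_log" -> 10))

        // ... the foreachRDD processing discussed below goes here ...

        ssc.start()
        ssc.awaitTermination()
      }
    }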

This works well. What I want next is to assign column names to the streamed data and write it out as Parquet files, so I do the following:

kafkaStream.foreachRDD(rdd => {
  if (rdd.count() == 0) {
    println("No new SKU's received in this time interval " + Calendar.getInstance().getTime())
  }
  else {
    println("No of SKUs received " + rdd.count())
    rdd.map(record => {
      record._2
    }).toDF("customer_id","sku","type","event","control_group","event_date").write.mode(SaveMode.Append).format("parquet").save(outputPath)
  }
})

However, this gives an error:

java.lang.IllegalArgumentException: requirement failed: The number of columns doesn't match. 
Old column names (1): _1 
New column names (6): customer_id, sku, type, event, control_group, event_date 
    at scala.Predef$.require(Predef.scala:233) 
    at org.apache.spark.sql.DataFrame.toDF(DataFrame.scala:224) 
    at org.apache.spark.sql.DataFrameHolder.toDF(DataFrameHolder.scala:36) 
    at kafka_receive_messages$$anonfun$main$1.apply(kafka_receive_messages.scala:77) 
    at kafka_receive_messages$$anonfun$main$1.apply(kafka_receive_messages.scala:69) 

What mistake am I making here, please? Should we split the record inside the map? And if we do that, can we still convert it with toDF("..columns..")?

Thanks for your help.

Regards

Bala

Answer


Thanks for stopping by. I have sorted it out; it was a coding issue. For anyone who wants to do this in the future, please replace the else part as follows:

kafkaStream.foreachRDD(rdd => {
  if (rdd.count() == 0) {
    println("No new SKU's received in this time interval " + Calendar.getInstance().getTime())
  }
  else {
    println("No of SKUs received " + rdd.count())
    rdd.map(record => record._2.split(","))          // split the comma-separated message value into fields
      .map(r => (r(0).replace(Quote, "").toInt,      // strip surrounding quotes and cast the ids to Int
                 r(1).replace(Quote, "").toInt,
                 r(2), r(3), r(4), r(5)))
      .toDF("customer_id", "sku", "type", "event", "control_group", "event_date")
      .write.mode(SaveMode.Append).format("parquet").save(outputPath)
  }
})
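For completeness: the original version failed because record._2 is the entire message value as a single String, so toDF only saw one column (_1); splitting on the comma first yields the six fields the column names expect. The snippet above also relies on a couple of definitions that are not shown here; a guess at what they might look like (the values are purely illustrative):

    // Assumed to be defined elsewhere in the job; the values are illustrative guesses.
    val Quote = "\""                             // quote character stripped from the id fields
    val outputPath = "hdfs:///tmp/event_log"     // hypothetical Parquet output directory

    // toDF on an RDD of tuples also needs the SQLContext implicits in scope:
    import sqlContext.implicits._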

Thanks again

Bala
