
My DataFrame looks like this. How can I get the first value of each probability list?

+-------------------+----------------------------------------+ 
|ID                 |probability                             | 
+-------------------+----------------------------------------+ 
|583190715ccb64f503a|[0.49128147201958017,0.5087185279804199]| 
|58326da75fc764ad200|[0.42143416087939345,0.5785658391206066]| 
|583270ff17c76455610|[0.3949217100212508,0.6050782899787492] | 
|583287c97ec7641b2d4|[0.4965059792664432,0.5034940207335569] | 
|5832d7e279c764f52e4|[0.49128147201958017,0.5087185279804199]| 
|5832e5023ec76406760|[0.4775830044196701,0.52241699558033]   | 
|5832f88859cb64960ea|[0.4360509428173421,0.563949057182658]  | 
|58332e6238c7643e6a7|[0.48730029128352853,0.5126997087164714]| 
+-------------------+----------------------------------------+ 

I get the probability values with:

val proVal = Data.select("probability").rdd.map(r => r(0)).collect() 
proVal.foreach(println) 

The resulting column is:

[0.49128147201958017,0.5087185279804199] 
[0.42143416087939345,0.5785658391206066] 
[0.3949217100212508,0.6050782899787492] 
[0.4965059792664432,0.5034940207335569] 
[0.49128147201958017,0.5087185279804199] 
[0.4775830044196701,0.52241699558033] 
[0.4360509428173421,0.563949057182658] 
[0.48730029128352853,0.5126997087164714] 

but I only want the first element of each row, like this:

0.49128147201958017 
0.42143416087939345 
0.3949217100212508 
0.4965059792664432 
0.49128147201958017 
0.4775830044196701 
0.4360509428173421 
0.48730029128352853 

How can I do this?

The input is standard random forest output; the DataFrame above was produced with val Data = predictions.select("docID", "probability").

predictions.printSchema() 

root 
 |-- docID: string (nullable = true) 
 |-- label: double (nullable = false) 
 |-- features: vector (nullable = true) 
 |-- indexedLabel: double (nullable = true) 
 |-- rawPrediction: vector (nullable = true) 
 |-- probability: vector (nullable = true) 
 |-- prediction: double (nullable = true) 
 |-- predictedLabel: string (nullable = true) 

I want to get the first value of the "probability" column.

Answer


You can use the Column.apply method to get the n-th item of an array column, in this case the first item (index 0):

import sqlContext.implicits._ 
val proVal = Data.select($"probability"(0)).rdd.map(r => r(0)).collect() 
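Here $"probability"(0) is shorthand for $"probability".getItem(0); both resolve to the same item-extraction expression.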

By the way, if you are using Spark 1.6 or later, you can also use the Dataset API as a cleaner way to convert the DataFrame into Doubles:

val proVal = Data.select($"probability"(0)).as[Double].collect() 
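Note that .as[Double] relies on the implicit encoders brought in by the import sqlContext.implicits._ above (spark.implicits._ on a Spark 2.x SparkSession).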

Thanks, I used this approach, but both methods throw the same error: Exception in thread "main" org.apache.spark.sql.AnalysisException: Can't extract value from probability#177; yet row 177 has the same structure as the other rows. – John


If you can provide a sample input that fails, I can try to help; otherwise I can't see any obvious cause. Also, could you edit the question and add the result of Data.printSchema()? –


The input is standard random forest output, and what I want in the end is the first value of the "probability" column. The result of Data.printSchema() is: root |-- docID: string (nullable = true) |-- label: double (nullable = false) |-- features: vector (nullable = true) |-- indexedLabel: double (nullable = true) |-- rawPrediction: vector (nullable = true) |-- probability: vector (nullable = true) |-- prediction: double (nullable = true) |-- predictedLabel: string (nullable = true) – John
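The schema in the comments explains the error: probability is a vector column (Spark ML's VectorUDT), not an array, and item access with $"probability"(0) only works on array and map columns, hence "Can't extract value". A minimal sketch of a UDF-based workaround, assuming Spark 2.x vectors and a SparkSession named spark (on Spark 1.6, import org.apache.spark.mllib.linalg.Vector instead):

// Assumes the DataFrame Data from the question is in scope. 
import org.apache.spark.ml.linalg.Vector 
import org.apache.spark.sql.functions.udf 
import spark.implicits._ // for $"..." and .as[Double] 

// UDF that pulls element 0 out of an ML Vector column. 
val firstElem = udf((v: Vector) => v(0)) 

// Select the first probability of each row and collect as Doubles. 
val proVal = Data.select(firstElem($"probability")).as[Double].collect() 
proVal.foreach(println) 

Wrapping the vector access in a UDF sidesteps the extraction rule that rejects vector columns, while keeping the rest of the pipeline in the DataFrame API.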
