
Splitting rows of a DataFrame into simple rows in PySpark

I have the schema below, and I want to split the contents of the result struct into separate columns so that I get col1: EventCode, col2: Message, and so on. I'm using PySpark and tried the explode function, but it does not seem to work on a StructType. Is there a way to do this in Spark?

root 
|-- result: struct (nullable = true) 
| |-- EventCode: string (nullable = true) 
| |-- Message: string (nullable = true) 
| |-- _bkt: string (nullable = true) 
| |-- _cd: string (nullable = true) 
| |-- _indextime: string (nullable = true) 
| |-- _pre_msg: string (nullable = true) 
| |-- _raw: string (nullable = true) 
| |-- _serial: string (nullable = true) 
| |-- _si: array (nullable = true) 
| | |-- element: string (containsNull = true) 
| |-- _sourcetype: string (nullable = true) 
| |-- _time: string (nullable = true) 
| |-- host: string (nullable = true) 
| |-- index: string (nullable = true) 
| |-- linecount: string (nullable = true) 
| |-- source: string (nullable = true) 
| |-- sourcetype: string (nullable = true) 
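
For reference, here is a minimal sketch of a DataFrame with a nested result struct of the same shape; the field values are made up for illustration and only a few of the fields from the schema above are included:

from pyspark.sql import SparkSession, Row

spark = SparkSession.builder.getOrCreate()

# Build a DataFrame whose single top-level column "result" is a struct
# of string fields, mirroring a slice of the schema in the question.
df = spark.createDataFrame([
    Row(result=Row(EventCode="4624", Message="An account was logged on",
                   host="server01", sourcetype="WinEventLog")),
])

df.printSchema()  # shows a "result" struct column with nested string fields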

Answer


Splitting the rows of a DataFrame into simple rows is easy. You just select all the fields of the result struct from the DataFrame and assign them to a new DataFrame, like this:

simpleDF = df.select("result.*") 

This turns the schema given above into the following one:

simpleDF.printSchema() 

root 
|-- EventCode: string (nullable = true) 
|-- Message: string (nullable = true) 
|-- _bkt: string (nullable = true) 
|-- _cd: string (nullable = true) 
|-- _indextime: string (nullable = true) 
|-- _pre_msg: string (nullable = true) 
|-- _raw: string (nullable = true) 
|-- _serial: string (nullable = true) 
|-- _si: array (nullable = true) 
| |-- element: string (containsNull = true) 
|-- _sourcetype: string (nullable = true) 
|-- _time: string (nullable = true) 
|-- host: string (nullable = true) 
|-- index: string (nullable = true) 
|-- linecount: string (nullable = true) 
|-- source: string (nullable = true) 
|-- sourcetype: string (nullable = true)
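
If you only need some of the nested fields, or want to rename them while flattening, you can also select them individually instead of using result.*. A sketch, using column names from the schema above (the event_time alias is just an example):

from pyspark.sql.functions import col

simpleDF = df.select(
    col("result.EventCode").alias("EventCode"),
    col("result.Message").alias("Message"),
    col("result._time").alias("event_time"),  # rename while flattening
)

As a side note, explode applies to array and map columns, not to a StructType, which is why it did not help here; selecting the struct's fields is the way to flatten it.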