I have a DataFrame of data returned by a SQL query. How can I convert this flat DataFrame into nested JSON in Spark (Scala or Java)?
id,type,name,ppu,batter.id,batter.type,topping.id,topping.type
101,donut,cake,0.55,1001,Regular,5001,None
101,donut,cake,0.55,1002,Chocolate,5001,None
101,donut,cake,0.55,1003,Blueberry,5001,None
101,donut,cake,0.55,1004,Devil's Food,5001,None
101,donut,cake,0.55,1001,Regular,5002,Glazed
101,donut,cake,0.55,1002,Chocolate,5002,Glazed
101,donut,cake,0.55,1003,Blueberry,5002,Glazed
101,donut,cake,0.55,1004,Devil's Food,5002,Glazed
101,donut,cake,0.55,1001,Regular,5003,Chocolate
101,donut,cake,0.55,1002,Chocolate,5003,Chocolate
101,donut,cake,0.55,1003,Blueberry,5003,Chocolate
101,donut,cake,0.55,1004,Devil's Food,5003,Chocolate
Given this setup, I need to convert it into a nested JSON structure like this:
{
  "id": "101",
  "type": "donut",
  "name": "Cake",
  "ppu": 0.55,
  "batter":
    [
      { "id": "1001", "type": "Regular" },
      { "id": "1002", "type": "Chocolate" },
      { "id": "1003", "type": "Blueberry" },
      { "id": "1004", "type": "Devil's Food" }
    ],
  "topping":
    [
      { "id": "5001", "type": "None" },
      { "id": "5002", "type": "Glazed" },
      { "id": "5003", "type": "Chocolate" }
    ]
}
Is it possible to do this with a DataFrame aggregation, or with a custom transformation that I would have to write?
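For reference, this kind of nesting can be expressed entirely as a DataFrame aggregation: group on the parent columns and collapse each child pair into an array of structs. A minimal sketch, assuming the dotted column names from the query are renamed to underscored ones (`batter_id`, etc.) and using `collect_set` to drop the duplicate rows introduced by the flattening; the inline sample data and session setup are illustrative:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().master("local[*]").appName("nest-json").getOrCreate()
import spark.implicits._

// A few of the flat rows produced by the SQL query (sample data, not the full set).
val flat = Seq(
  ("101", "donut", "cake", 0.55, "1001", "Regular", "5001", "None"),
  ("101", "donut", "cake", 0.55, "1002", "Chocolate", "5001", "None"),
  ("101", "donut", "cake", 0.55, "1001", "Regular", "5002", "Glazed"),
  ("101", "donut", "cake", 0.55, "1002", "Chocolate", "5002", "Glazed")
).toDF("id", "type", "name", "ppu", "batter_id", "batter_type", "topping_id", "topping_type")

// Group on the parent columns and rebuild each child as an array of structs.
// collect_set removes the duplicates that the flattening cross-join created.
val nested = flat
  .groupBy($"id", $"type", $"name", $"ppu")
  .agg(
    collect_set(struct($"batter_id".as("id"), $"batter_type".as("type"))).as("batter"),
    collect_set(struct($"topping_id".as("id"), $"topping_type".as("type"))).as("topping")
  )

// Each row now serializes as one nested JSON document.
nested.toJSON.show(truncate = false)
```

Because this is a single `groupBy`/`agg`, it runs as an ordinary distributed aggregation; only rows sharing a key are shuffled to the same executor.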
I found a similar question here, Writing nested JSON in spark scala, but it doesn't quite have the right answer.
Thanks Chitral, but unfortunately in my case this could get complicated, with 100 columns and at least 2 levels of nesting. Also, in my case the data may not fit on the driver, so I am looking to do the transformation on the distributed nodes. I can use a UDAF to some extent, but only for simple objects with a single nested object. I have heard that Spark 2.0 has better support for this, but I'm not sure. Thanks again for your effort :) – BRKumaran
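On the driver-memory concern: once a DataFrame has array-of-struct columns, `DataFrame.write.json` serializes it to nested JSON on the executors, one part file per partition, so the result never has to be collected to the driver. A small sketch under that assumption (the path and the tuple field names `_1`/`_2` are placeholders):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("write-nested").getOrCreate()
import spark.implicits._

// A DataFrame that already carries an array-of-struct column, as produced
// by a collect_set/struct aggregation (sample data for illustration).
val nested = Seq(
  ("101", Seq(("1001", "Regular"), ("1002", "Chocolate")))
).toDF("id", "batter")

// The write runs on the executors; each partition becomes its own
// newline-delimited JSON part file, so nothing is pulled to the driver.
nested.write.mode("overwrite").json("/tmp/nested_json_out")
```

This sidesteps a UDAF entirely; deeper nesting is just further `struct`/`collect_set` aggregations applied level by level before the write.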