I use Jupyter notebooks with pandas, but when I work with Spark I want to do the transformations and computations with Spark DataFrames instead of pandas. Please help me convert the computations below to a Spark DataFrame or RDD, i.e. find Spark DataFrame operators equivalent to pandas' nunique and Series multiplication.
The DataFrame:
df =
+--------+-------+---------+--------+
| userId | item | price | value |
+--------+-------+---------+--------+
| 169 | I0111 | 5300 | 1 |
| 169 | I0973 | 70 | 1 |
| 336 | C0174 | 455 | 1 |
| 336 | I0025 | 126 | 1 |
| 336 | I0973 | 4 | 1 |
| 770963 | B0166 | 2 | 1 |
| 1294537| I0110 | 90 | 1 |
+--------+-------+---------+--------+
1. Computing with pandas:
(1) userItem = df.groupby(['userId'])['item'].nunique()
The result is a Series object:
+--------+------+
| userId | item |
+--------+------+
| 169 | 2 |
| 336 | 3 |
| 770963 | 1 |
| 1294537| 1 |
+--------+------+
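For (1), a minimal PySpark sketch, assuming df is loaded as a Spark DataFrame (the sample rows below just mirror the table above): pandas' nunique maps to countDistinct inside an agg.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# Sample data mirroring the table above
df = spark.createDataFrame(
    [(169, "I0111", 5300, 1), (169, "I0973", 70, 1),
     (336, "C0174", 455, 1), (336, "I0025", 126, 1),
     (336, "I0973", 4, 1), (770963, "B0166", 2, 1),
     (1294537, "I0110", 90, 1)],
    ["userId", "item", "price", "value"])

# (1) pandas groupby(['userId'])['item'].nunique() -> groupBy + countDistinct
userItem = df.groupBy("userId").agg(F.countDistinct("item").alias("n_items"))
userItem.show()

Unlike the pandas version, the result is a two-column DataFrame rather than an indexed Series; the column name n_items is my own choice, not a fixed name.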
2. Using multiplication:
data_sum = df.groupby(['userId', 'item'])['value'].sum() --> the result is a Series object
average_played = np.mean(userItem) --> the result is a number
(2) weighted_games_played = data_sum * (average_played/userItem)
Please help me do (1) and (2) with Spark DataFrames and Spark operators.
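A sketch of (2), continuing with the same df, imports, and userItem as in the sketch above: the implicit index alignment that pandas performs when multiplying Series becomes an explicit join on userId, and the scalar average_played is collected with agg. The column names value_sum, n_items, and weighted are my own assumptions.

# Per-(userId, item) sum of value, the Spark analogue of data_sum
data_sum = df.groupBy("userId", "item").agg(F.sum("value").alias("value_sum"))

# Scalar mean of the per-user distinct-item counts, i.e. np.mean(userItem)
average_played = userItem.agg(F.avg("n_items")).first()[0]

# (2) pandas Series multiplication aligns on the index; in Spark that
# alignment is an explicit join on userId before the column arithmetic
weighted_games_played = (
    data_sum.join(userItem, "userId")
            .withColumn("weighted",
                        F.col("value_sum") * (F.lit(average_played) / F.col("n_items"))))
weighted_games_played.show()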
What I mean is multiplication between pandas Series objects; with Spark I can't do weighted_games_played = data_sum * (average_played/userItem). –
Gotcha, I'll modify the answer. – ags29
Hmm, it works. –