I need to generate auto-incrementing values for the id field. My approach is to use a window function combined with max: on a Spark DataFrame, add the result of a window function to a regular function such as max to get the auto-increment.

I'm trying to find a pure DataFrame solution (no RDDs).

So after doing a right outer join, I end up with this DataFrame:

df2 = sqlContext.createDataFrame([(1,2), (3, None), (5, None)], ['someattr', 'id']) 

# Notice the null values? Those are new records that don't have an id yet.
# The task is to generate ids for them, preferably in one query.

df2.show() 

+--------+----+
|someattr|  id|
+--------+----+
|       1|   2|
|       3|null|
|       5|null|
+--------+----+

I need to generate auto-incrementing values for the id field. My approach is to use a window function:

df2.withColumn('id', when(df2.id.isNull(), row_number().over(Window.partitionBy('id').orderBy('id')) + max('id')).otherwise(df2.id)) 

When I do this, the following exception is raised:

AnalysisException       Traceback (most recent call last) 
<ipython-input-102-b3221098e895> in <module>() 
    10 
    11 
---> 12 df2.withColumn('hello', when(df2.id.isNull(), row_number().over(Window.partitionBy('id').orderBy('id')) + max('id')).otherwise(df2.id)).show() 

/Users/ipolynets/workspace/spark-2.0.0/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col) 
    1371   """ 
    1372   assert isinstance(col, Column), "col should be Column" 
-> 1373   return DataFrame(self._jdf.withColumn(colName, col._jc), self.sql_ctx) 
    1374 
    1375  @ignore_unicode_prefix 

/Users/ipolynets/workspace/spark-2.0.0/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py in __call__(self, *args) 
    931   answer = self.gateway_client.send_command(command) 
    932   return_value = get_return_value(
--> 933    answer, self.gateway_client, self.target_id, self.name) 
    934 
    935   for temp_arg in temp_args: 

/Users/ipolynets/workspace/spark-2.0.0/python/pyspark/sql/utils.pyc in deco(*a, **kw) 
    67            e.java_exception.getStackTrace())) 
    68    if s.startswith('org.apache.spark.sql.AnalysisException: '): 
---> 69     raise AnalysisException(s.split(': ', 1)[1], stackTrace) 
    70    if s.startswith('org.apache.spark.sql.catalyst.analysis'): 
    71     raise AnalysisException(s.split(': ', 1)[1], stackTrace) 

AnalysisException: u"expression '`someattr`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;" 

To be honest, I don't understand what this exception is complaining about.

Notice how I add the window function to the regular max() function:

row_number().over(Window.partitionBy('id').orderBy('id')) + max('id')

I don't know if this is even allowed.
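As it turns out, Spark does accept such an expression, as long as max itself is evaluated as a window function (over the whole frame), so that nothing in the projection forces an aggregation. A minimal sketch of that variant, assuming the pyspark.sql imports shown; orderBy('someattr') is used only to make the row numbering deterministic:

from pyspark.sql import Window
from pyspark.sql.functions import when, row_number, max as max_

w_all = Window.partitionBy()                          # one window spanning all rows
w_new = Window.partitionBy('id').orderBy('someattr')  # all null ids share one partition

df2.withColumn(
    'id',
    when(df2.id.isNull(),
         row_number().over(w_new) + max_('id').over(w_all))
    .otherwise(df2.id)
).show()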

Oh, and this is the expected output of the query, as you have probably already figured out:

+--------+----+
|someattr|  id|
+--------+----+
|       1|   2|
|       3|   3|
|       5|   4|
+--------+----+

Answer


You are adding a column, so the someattr column will also be present in the resulting DataFrame.

You have to either include someattr in the group by or use it in some aggregate function.
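In other words, a bare max('id') turns the whole query into an aggregation, and every other projected column must then be grouped, aggregated, or wrapped in first(). A small illustration of the rule (these calls are for demonstration only):

from pyspark.sql.functions import first, max as max_

df2.agg(max_('id')).show()                        # OK: everything is aggregated
df2.groupBy('someattr').agg(max_('id')).show()    # OK: someattr is grouped
df2.select(first('someattr'), max_('id')).show()  # OK: someattr is wrapped in first()
# df2.select('someattr', max_('id'))              # raises the AnalysisException above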

However, it is simpler to do it like this:

df2.registerTempTable("test")
df3 = sqlContext.sql("""
    select t.someattr,
           nvl(t.id, row_number() over (partition by t.id order by t.someattr) + m.maxId) as id
    from test t
    cross join (select max(id) as maxId from test) m
""")
df3.show()

Of course, you could translate this into the DSL, but SQL seems easier to me for this task.
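For completeness, here is a rough sketch of what that DSL translation could look like; collecting max(id) to the driver stands in for the cross join, and the names max_id, w, and df3 are only illustrative:

from pyspark.sql import Window
from pyspark.sql.functions import coalesce, lit, row_number, max as max_

max_id = df2.agg(max_('id')).first()[0]           # scalar max replaces the cross join
w = Window.partitionBy('id').orderBy('someattr')  # null ids all land in one partition

df3 = df2.withColumn('id', coalesce(df2.id, row_number().over(w) + lit(max_id)))
df3.show()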
