2016-03-08

I've read this thread about grouping and getting the max: Apply vs transform on a group object.

It works perfectly when the max is unique within a group, but I've run into a problem: I want to ignore duplicated values within each group, take the max of the remaining unique values, and then put the result back into the Series.

Input (named df1):

date  val 
2004-01-01 0 
2004-02-01 0 
2004-03-01 0 
2004-04-01 0 
2004-05-01 0 
2004-06-01 0 
2004-07-01 0 
2004-08-01 0 
2004-09-01 0 
2004-10-01 0 
2004-11-01 0 
2004-12-01 0 
2005-01-01 11 
2005-02-01 11 
2005-03-01 8 
2005-04-01 5 
2005-05-01 0 
2005-06-01 0 
2005-07-01 2 
2005-08-01 1 
2005-09-01 0 
2005-10-01 0 
2005-11-01 3 
2005-12-01 3 

My code:

df1['peak_month'] = df1.groupby(df1.date.dt.year)['val'].transform(max) == df1['val'] 

My output:

date  val max 
2004-01-01 0  true #notice how all duplicates are true in 2004 
2004-02-01 0  true 
2004-03-01 0  true 
2004-04-01 0  true 
2004-05-01 0  true 
2004-06-01 0  true 
2004-07-01 0  true 
2004-08-01 0  true 
2004-09-01 0  true 
2004-10-01 0  true 
2004-11-01 0  true 
2004-12-01 0  true 
2005-01-01 11 true #notice how these two values 
2005-02-01 11 true #are the max values for 2005 and are true 
2005-03-01 8  false 
2005-04-01 5  false 
2005-05-01 0  false 
2005-06-01 0  false 
2005-07-01 2  false 
2005-08-01 1  false 
2005-09-01 0  false 
2005-10-01 0  false 
2005-11-01 3  false 
2005-12-01 3  false 

Expected output:

date  val max 
2004-01-01 0  false #notice how all duplicates are false in 2004 
2004-02-01 0  false #because they are the same and all vals are max 
2004-03-01 0  false 
2004-04-01 0  false 
2004-05-01 0  false 
2004-06-01 0  false 
2004-07-01 0  false 
2004-08-01 0  false 
2004-09-01 0  false 
2004-10-01 0  false 
2004-11-01 0  false 
2004-12-01 0  false 
2005-01-01 11 false #notice how these two values 
2005-02-01 11 false #are the max values for 2005 but are false 
2005-03-01 8  true #this is the second max val and is true 
2005-04-01 5  false 
2005-05-01 0  false 
2005-06-01 0  false 
2005-07-01 2  false 
2005-08-01 1  false 
2005-09-01 0  false 
2005-10-01 0  false 
2005-11-01 3  false 
2005-12-01 3  false 

For reference:

import pandas as pd
df1 = pd.DataFrame({'val': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 11, 11, 8, 5, 0, 0, 2, 1, 0, 0, 3, 3],
                    'date': ['2004-01-01','2004-02-01','2004-03-01','2004-04-01','2004-05-01','2004-06-01','2004-07-01','2004-08-01','2004-09-01','2004-10-01','2004-11-01','2004-12-01','2005-01-01','2005-02-01','2005-03-01','2005-04-01','2005-05-01','2005-06-01','2005-07-01','2005-08-01','2005-09-01','2005-10-01','2005-11-01','2005-12-01']})
df1['date'] = pd.to_datetime(df1['date'])  # needed so df1.date.dt.year works

This question isn't clear, and you've posted too much data to make your point. I don't see why you want to ignore duplicates: the max of [5, 5, 2, 2] is the same as the max of [5, 2]. – Alexander


I need the max value for the year, or nothing if they are all the same. – ethanenglish

Answer


Not an elegant solution, but it works. The idea is to first determine the values that appear only once within each year, and then run the max transform on those unique values.

# Determine the unique values appearing in each year. 
df1['year'] = df1.date.dt.year 
unique_vals = df1.drop_duplicates(subset=['year', 'val'], keep=False) 

# Max transform on the unique values. 
df1['peak_month'] = unique_vals.groupby('year')['val'].transform(max) == unique_vals['val'] 

# Fill NaN's as False, drop extra column. 
df1['peak_month'].fillna(False, inplace=True) 
df1.drop('year', axis=1, inplace=True) 
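
For comparison, here's a more compact sketch of the same idea that skips the helper column and the intermediate frame. It counts how often each value occurs within its year and only keeps values that occur exactly once; the names below are my own, and it assumes df1['date'] is already a datetime column:

# Count how often each value occurs within its year; True where it occurs exactly once.
year = df1.date.dt.year
occurs_once = df1.groupby([year, 'val'])['val'].transform('size') == 1

# Max of the single-occurrence values per year, broadcast back to every row (NaN if none).
unique_max = df1['val'].where(occurs_once).groupby(year).transform('max')

df1['peak_month'] = occurs_once & (df1['val'] == unique_max)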

No, the 'keep=False' keyword argument forces 'drop_duplicates' to drop all copies of the duplicated data. Without that keyword argument your concern would be valid, since by default 'drop_duplicates' keeps the first of each set of duplicates. My code produces the expected output. – root
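
As a quick illustration of the keep argument (a toy example of my own, not from the thread):

import pandas as pd

df = pd.DataFrame({'x': [1, 1, 2]})
df.drop_duplicates(subset='x')              # default keep='first' keeps one of the 1's -> x = 1, 2
df.drop_duplicates(subset='x', keep=False)  # drops every row that has a duplicate -> x = 2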


@Parfait Works like a charm. Thanks for reviewing it and walking through the logic! – ethanenglish