2017-05-09

I have a dictionary of lists (of varying lengths) and I'm looking for an efficient way to create a DataFrame from it.
Assume I know the minimum list length, so I can truncate the longer lists when creating the DataFrame.
Here is my sample data for creating a DataFrame from lists of unequal length:

data_dict = {'a': [1,2,3,4], 'b': [1,2,3], 'c': [2,45,67,93,82,92]} 
min_length = 3 
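
If the minimum length is not known up front, it can be computed from the dictionary itself; a small sketch using the question's `data_dict`:

```python
data_dict = {'a': [1, 2, 3, 4], 'b': [1, 2, 3], 'c': [2, 45, 67, 93, 82, 92]}

# the shortest list across all values gives the truncation length
min_length = min(len(v) for v in data_dict.values())
print(min_length)  # → 3
```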

The dictionary can have 10K or 20K keys, so I'm looking for an efficient way to create a DataFrame like the one below:

>>> df
   a  b   c
0  1  1   2
1  2  2  45
2  3  3  67

Answers


You can filter the `dict` values in a dict comprehension, and then the `DataFrame` constructor works perfectly:

print ({k:v[:min_length] for k,v in data_dict.items()}) 
{'b': [1, 2, 3], 'c': [2, 45, 67], 'a': [1, 2, 3]} 


import pandas as pd

df = pd.DataFrame({k: v[:min_length] for k, v in data_dict.items()})
print(df)
   a  b   c
0  1  1   2
1  2  2  45
2  3  3  67

If it's possible that some lists are shorter than `min_length`, wrap the values in a `Series`:

data_dict = {'a': [1,2,3,4], 'b': [1,2], 'c': [2,45,67,93,82,92]} 
min_length = 3 

df = pd.DataFrame({k: pd.Series(v[:min_length]) for k, v in data_dict.items()})
print(df)
   a    b   c
0  1  1.0   2
1  2  2.0  45
2  3  NaN  67
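
Note that wrapping in `Series` upcasts any column containing `NaN` to float. If integer dtype matters, one option (a sketch, not part of the original answer) is to drop the incomplete rows afterwards and cast back:

```python
import pandas as pd

data_dict = {'a': [1, 2, 3, 4], 'b': [1, 2], 'c': [2, 45, 67, 93, 82, 92]}
min_length = 3

df = pd.DataFrame({k: pd.Series(v[:min_length]) for k, v in data_dict.items()})

# keep only rows where every column has a value, then restore int dtype
df_complete = df.dropna().astype(int)
print(df_complete)
```

This keeps two rows here, since `'b'` contributes only two values.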

Timings

In [355]: %timeit (pd.DataFrame({k:v[:min_length] for k,v in data_dict.items()})) 
The slowest run took 5.32 times longer than the fastest. This could mean that an intermediate result is being cached. 
1000 loops, best of 3: 520 µs per loop 

In [356]: %timeit (pd.DataFrame({k:pd.Series(v[:min_length]) for k,v in data_dict.items()})) 
The slowest run took 4.50 times longer than the fastest. This could mean that an intermediate result is being cached. 
1000 loops, best of 3: 937 µs per loop 

#Allen's solution 
In [357]: %timeit (pd.DataFrame.from_dict(data_dict,orient='index').T.dropna()) 
1 loop, best of 3: 16.7 s per loop 

Setup code for the timings:

import numpy as np

np.random.seed(123)
L = list('ABCDEFGH') 
N = 500000 
min_length = 10000 

data_dict = {k:np.random.randint(10, size=np.random.randint(N)) for k in L} 
I'm somewhat familiar with this solution. I was wondering, is there another efficient way to do this (without iterating over all the keys)? –

One of the strangest things I noticed in your results: "The slowest run took 5.32 times longer than the fastest. This could mean that an intermediate result is being cached." Does pandas cache results? –

Hard question, I really don't know the answer. Maybe it's best to create a new question or search for an answer on Stack Overflow. – jezrael


A one-liner solution:

# Construct the df horizontally and then transpose. Finally drop rows with NaN.
pd.DataFrame.from_dict(data_dict, orient='index').T.dropna()
Out[326]:
     a    b     c
0  1.0  1.0   2.0
1  2.0  2.0  45.0
2  3.0  3.0  67.0
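
For context on how this one-liner behaves: `from_dict(orient='index')` builds one row per key and pads shorter rows with `NaN`, so transposing and calling `dropna()` keeps only the rows where every key contributed a value, i.e. it truncates to the shortest list (and the padding forces float dtype). A minimal sketch with the question's data:

```python
import pandas as pd

data_dict = {'a': [1, 2, 3, 4], 'b': [1, 2, 3], 'c': [2, 45, 67, 93, 82, 92]}

# one row per key; rows shorter than the longest list are padded with NaN
wide = pd.DataFrame.from_dict(data_dict, orient='index')

# transpose to one column per key, then drop rows that have any NaN
df = wide.T.dropna()
print(df.shape)  # → (3, 3)
```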