
I have two dataframes and want to match strings between them, assigning a word from the second dataframe to each sentence in the first:

(1st Dataframe) 
**Sentences** 
hello world 
live in the world 
haystack in the needle 

(2nd Dataframe in descending order by Weight) 
**Words** **Weight** 
world   80 
hello   60 
haystack  40 
needle   20 

For each sentence in the first dataframe, I want to check whether any word in the sentence appears in the second dataframe, and among the matches pick the word with the highest listed weight. I then assign that heaviest word to the sentence in the first dataframe. So the result should be:

**Sentence**    **Assigned Word** 
hello world     world 
live in the world    world 
haystack in the needle  haystack 

I could do this with two for loops, roughly like the sketch below, but performance may be slow if there are millions of sentences or words. What is the best way to do this in Python? Thanks!
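For reference, a minimal sketch of the nested-loop version (df1 holds the sentences and df2 the weighted words, matching the data above):

import pandas as pd 

df1 = pd.DataFrame({'Sentences': ['hello world', 
                                  'live in the world', 
                                  'haystack in the needle']}) 
df2 = pd.DataFrame({'Words': ['world', 'hello', 'haystack', 'needle'], 
                    'Weight': [80, 60, 40, 20]}) 

assigned = [] 
for sentence in df1['Sentences']: 
    words = sentence.split(' ') 
    best, best_weight = None, -1  # heaviest matching word so far 
    for word, weight in zip(df2['Words'], df2['Weight']): 
        if word in words and weight > best_weight: 
            best, best_weight = word, weight 
    assigned.append(best) 
df1['Assigned Word'] = assigned 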

Answer


Cartesian product -> filter -> sort -> groupby.head(1)

This approach involves a few steps, but it is the most pandas-idiomatic way I can think of.

import pandas as pd 

list1 = ['hello world', 
         'live in the world', 
         'haystack in the needle'] 

list2 = [['world', 80], 
         ['hello', 60], 
         ['haystack', 40], 
         ['needle', 20]] 

df1 = pd.DataFrame(list1, columns=['Sentences']) 
df2 = pd.DataFrame(list2, columns=['Words', 'Weight']) 


# Creating a new column `Word_List` 
df1['Word_List'] = df1['Sentences'].apply(lambda x: x.split(' ')) 

# Need a common key for cartesian product 
df1['common_key'] = 1 
df2['common_key'] = 1 

# Cartesian Product 
df3 = pd.merge(df1, df2, on='common_key', copy=False) 
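# Note: on pandas >= 1.2 the dummy key is unnecessary -- 
# pd.merge(df1, df2, how='cross') yields the same Cartesian product. 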

# Filtering only words that matched 
df3['Match'] = df3.apply(lambda x: x['Words'] in x['Word_List'], axis=1) 
df3 = df3[df3['Match']] 

# Sorting values by sentences and weight 
df3.sort_values(['Sentences', 'Weight'], ascending=False, inplace=True) 

# Keeping only the first element in each group 
final_df = df3.groupby('Sentences').head(1).reset_index()[['Sentences', 'Words']] 
final_df 

Output:

                Sentences     Words 
0       live in the world     world 
1             hello world     world 
2  haystack in the needle  haystack 

Performance: 10 loops, best of 3: 41.5 ms per loop
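
If the full Cartesian product is too large for millions of sentences, a variant worth trying (a sketch, reusing df1 and df2 from above and assuming pandas >= 0.25 for DataFrame.explode) splits each sentence into one row per word and merges directly on the word, so only actual matches are ever materialized:

# One row per (sentence, word) pair instead of a full cross join 
exploded = df1.assign(Words=df1['Sentences'].str.split(' ')).explode('Words') 

# Inner merge keeps only the words that appear in df2 
matched = exploded.merge(df2, on='Words') 

# Heaviest match per sentence 
final_df = (matched.sort_values('Weight', ascending=False) 
                   .drop_duplicates('Sentences')[['Sentences', 'Words']]) 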