
LSTM model with auxiliary input

I have a dataset with 2 columns, each containing a set of documents. I have to match the documents in Col A with the documents provided in Col B. This is a supervised classification problem, so my training data contains a label column indicating whether the two documents match.

To solve this problem, I created a set of features, say F1-F25 (by comparing the two documents), and then trained a binary classifier on these features. This approach works reasonably well, but now I would like to evaluate Deep Learning models (specifically LSTMs) on this problem.
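
For reference, a minimal sketch of the feature-based baseline described above, assuming the F1-F25 comparison features have already been computed into a NumPy array (the file names and the choice of logistic regression are illustrative, not the original setup):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X_feats: (n_pairs, 25) matrix of the F1-F25 comparison features
# y: (n_pairs,) binary labels indicating whether each document pair matches
X_feats = np.load('pair_features.npy')  # hypothetical file
y = np.load('pair_labels.npy')          # hypothetical file

X_train, X_test, y_train, y_test = train_test_split(X_feats, y, test_size=0.2)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))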

I am using the keras library in Python. After going through the keras documentation and other tutorials available online, I have managed to do the following:

import keras
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model

# Each document contains a series of 200 words
# The necessary text pre-processing steps have been completed to transform
# each doc to a fixed length seq
main_input1 = Input(shape=(200,), dtype='int32', name='main_input1') 
main_input2 = Input(shape=(200,), dtype='int32', name='main_input2') 

# Next I add a word embedding layer (embed_matrix is separately created
# for each word in my vocabulary by reading from a pre-trained embedding model)
x = Embedding(output_dim=300, input_dim=20000,
              input_length=200, weights=[embed_matrix])(main_input1)
y = Embedding(output_dim=300, input_dim=20000,
              input_length=200, weights=[embed_matrix])(main_input2)

# Next separately pass each embedded sequence thru an LSTM layer to transform
# the seq of vectors into a single vector
lstm_out_x1 = LSTM(32)(x) 
lstm_out_x2 = LSTM(32)(y) 

# concatenate the 2 layers and stack a dense layer on top 
x = keras.layers.concatenate([lstm_out_x1, lstm_out_x2]) 
x = Dense(64, activation='relu')(x) 
# generate intermediate output 
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(x) 

# add auxiliary input - the auxiliary input contains 25 features for each document pair
auxiliary_input = Input(shape=(25,), name='aux_input') 

# merge aux output with aux input and stack dense layer on top 
main_input = keras.layers.concatenate([auxiliary_output, auxiliary_input]) 
x = Dense(64, activation='relu')(main_input) 
x = Dense(64, activation='relu')(x) 

# finally add the main output layer 
main_output = Dense(1, activation='sigmoid', name='main_output')(x) 

model = Model(inputs=[main_input1, main_input2, auxiliary_input], outputs=main_output)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) 

model.fit([x1, x2, aux_input], y,
          epochs=3, batch_size=32)
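
As an aside, the embed_matrix referenced in the Embedding layers is typically built by reading a pre-trained vector file into a lookup matrix. A minimal sketch, assuming a GloVe-style text file and a word_index dict mapping each vocabulary word to an integer id (both are assumptions; they are not shown in the original code):

import numpy as np

VOCAB_SIZE = 20000
EMBEDDING_DIM = 300

# word_index: dict mapping word -> integer id, built during pre-processing (assumed)
embed_matrix = np.zeros((VOCAB_SIZE, EMBEDDING_DIM))
with open('glove.6B.300d.txt', encoding='utf-8') as f:  # hypothetical file
    for line in f:
        values = line.split()
        word, vector = values[0], np.asarray(values[1:], dtype='float32')
        idx = word_index.get(word)
        if idx is not None and idx < VOCAB_SIZE:
            embed_matrix[idx] = vector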

However, when I score this model on the training data, I get the same probability score for all cases. The issue seems to be related to the way the auxiliary input is fed in (when I remove the auxiliary input, the model produces meaningful output). I also tried inserting the auxiliary input at different places in the network, but somehow I could not get this to work.

Any pointers?


Not sure if this is intended, but only auxiliary_output has shape (1). Is that really what you expect, merging the 25 auxiliary inputs with just a single result? Is the model before auxiliary_output intended to be "non-trainable", so that you only train the last part? –


Well, this is a binary classification model, so the final output is (1,). Should the auxiliary output be different? I am simply adding 25 features via the auxiliary input, hence the (25,) shape – Dataminer


Have you tried more epochs? –

Answer


Well, this has been open for several months, and people keep upvoting it.
I recently did something very similar using this dataset, which can be used to predict credit card defaults. It contains categorical data about customers (gender, education, marital status, etc.) as well as payment history as a time series, so I had to merge a time series with non-series data. My solution was very similar to yours, combining an LSTM with auxiliary data, and I tried to adapt that approach to your problem. What worked for me was a dense branch on the auxiliary input.

Also, in your case a shared layer makes sense, so that the same weights are used to "read" both documents. My proposal, for testing on your data:

import keras
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model

# Each document contains a series of 200 words
# The necessary text pre-processing steps have been completed to transform
# each doc to a fixed length seq
main_input1 = Input(shape=(200,), dtype='int32', name='main_input1') 
main_input2 = Input(shape=(200,), dtype='int32', name='main_input2') 

# Next I add a word embedding layer (embed_matrix is separately created
# for each word in my vocabulary by reading from a pre-trained embedding model)
x1 = Embedding(output_dim=300, input_dim=20000,
               input_length=200, weights=[embed_matrix])(main_input1)
x2 = Embedding(output_dim=300, input_dim=20000,
               input_length=200, weights=[embed_matrix])(main_input2)

# Next pass each embedded sequence thru an LSTM layer to transform the seq of
# vectors into a single vector
# Comment Manngo: Here I changed to a shared LSTM layer
# Also renamed the embedding outputs, which were x and y, to x1 and x2,
# as y was confusing
lstm_reader = LSTM(32) 
lstm_out_x1 = lstm_reader(x1) 
lstm_out_x2 = lstm_reader(x2) 

# concatenate the 2 layers and stack a dense layer on top 
x = keras.layers.concatenate([lstm_out_x1, lstm_out_x2]) 
x = Dense(64, activation='relu')(x) 
x = Dense(32, activation='relu')(x) 
# generate intermediate output 
# Comment Manngo: This is created as a dead-end 
# It will not be used as an input of any layers below 
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(x) 

# add auxiliary input - the auxiliary input contains 25 features for each document pair
# Comment Manngo: Dense branch on the comparison features
# (keep the Input tensor in its own variable so it can be passed to Model below)
auxiliary_input = Input(shape=(25,), name='aux_input')
aux_branch = Dense(64, activation='relu')(auxiliary_input)
aux_branch = Dense(32, activation='relu')(aux_branch)

# OLD: merge aux output with aux input and stack dense layer on top
# Comment Manngo: this actually merges the dense layer feeding aux_output (x)
# with the dense branch over the auxiliary input
main_input = keras.layers.concatenate([x, aux_branch])
main = Dense(64, activation='relu')(main_input) 
main = Dense(64, activation='relu')(main) 

# finally add the main output layer 
main_output = Dense(1, activation='sigmoid', name='main_output')(main) 

# Build and compile
# Comment Manngo: also define weighting of outputs, main as 1, auxiliary as 0.5
model = Model(inputs=[main_input1, main_input2, auxiliary_input],
              outputs=[main_output, auxiliary_output])
model.compile(optimizer='adam',
              loss={'main_output': 'binary_crossentropy', 'aux_output': 'binary_crossentropy'},
              loss_weights={'main_output': 1., 'aux_output': 0.5},
              metrics=['accuracy'])

# Train model on main_output and on auxiliary_output as a support 
# Comment Manngo: Unknown information marked with placeholders ____ 
# We have 3 inputs: x1 and x2: the 2 strings 
# aux_in: the 25 features 
# We have 2 outputs: main and auxiliary; both have the same targets -> (binary)y 


model.fit({'main_input1': __x1__, 'main_input2': __x2__, 'aux_input': __aux_in__},
          {'main_output': __y__, 'aux_output': __y__},
          epochs=1000,
          batch_size=__,
          validation_split=0.1,
          callbacks=[____])
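
To sanity-check the wiring before plugging in real data, one could fit the model above on small random arrays. A throwaway sketch (all shapes and values are synthetic, purely to confirm that the two-output graph compiles and trains; it assumes the model above was built with some embed_matrix available):

import numpy as np

n = 64  # tiny synthetic sample, just for a smoke test
x1_demo = np.random.randint(0, 20000, size=(n, 200))
x2_demo = np.random.randint(0, 20000, size=(n, 200))
aux_demo = np.random.rand(n, 25)
y_demo = np.random.randint(0, 2, size=(n, 1))

model.fit({'main_input1': x1_demo, 'main_input2': x2_demo, 'aux_input': aux_demo},
          {'main_output': y_demo, 'aux_output': y_demo},
          epochs=2, batch_size=16)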

I don't know how much this helps, since I don't have your data and therefore cannot try it. Still, this is my best shot.
For obvious reasons I did not run the code above.