2017-08-12

I don't understand the Embedding layer of Keras. Although there are plenty of articles explaining it, I am still confused. For example, take the code below from an IMDB sentiment analysis example: how does the Embedding layer work in Keras?

top_words = 5000 
max_review_length = 500 
embedding_vecor_length = 32  

model = Sequential() 
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length)) 
model.add(LSTM(100)) 
model.add(Dense(1, activation='sigmoid')) 
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) 
print(model.summary()) 
model.fit(X_train, y_train, nb_epoch=3, batch_size=64) 

What exactly does the Embedding layer do in this code, and what is its output? It would be great if someone could explain it with a few examples!


Possible duplicate of [What is an Embedding in Keras?](https://stackoverflow.com/questions/38189713/what-is-an-embedding-in-keras) – DJK


That one explains it with Theano, but it would be easier to understand through an example in Keras. – user1670773


The math of the layer follows the same principles. – DJK

Answer


The Embedding layer creates embedding vectors out of the input words (I still don't understand the math myself), similarly to what word2vec or precomputed GloVe vectors would do.
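
For illustration, here is a minimal sketch (my addition, not from the original answer) of how precomputed vectors such as GloVe could be plugged into the layer; embedding_matrix is a hypothetical (vocab_size x dim) array whose row i would hold the pretrained vector of word index i:

import numpy as np 
from keras.layers import Embedding 

vocab_size, dim = 10, 3 
embedding_matrix = np.random.rand(vocab_size, dim) #stand-in for real GloVe rows 

pretrained_layer = Embedding(vocab_size, dim, 
       weights=[embedding_matrix], #start from the precomputed vectors 
       input_length=6, 
       trainable=False) #freeze them if you don't want fine-tuning 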

Before getting to your code, let's make a short example with the following texts:

texts = ['This is a text','This is not a text'] 

First, we turn these sentences into vectors of integers, where each word is replaced by the number assigned to it in the tokenizer's dictionary and the order of the numbers mirrors the order of the words in the sentence.

from keras.preprocessing.text import Tokenizer 
from keras.preprocessing.sequence import pad_sequences 
from keras.utils import to_categorical 

max_review_length = 6 #maximum length of the sentence 
embedding_vecor_length = 3 
top_words = 10 

#num_words is the number of unique words to keep in the sequences; if there are more, only the top_words most frequent ones are used 
tokenizer = Tokenizer(top_words) 
tokenizer.fit_on_texts(texts) 
sequences = tokenizer.texts_to_sequences(texts) 
word_index = tokenizer.word_index 
input_dim = len(word_index) + 1 
print('Found %s unique tokens.' % len(word_index)) 

#max_review_length is the maximum length of the input text, so every sequence is padded to a vector like [... 0, 0, 1, 3, 50] where 1, 3, 50 are word indices and 0 is padding 
data = pad_sequences(sequences, max_review_length) 

print('Shape of data tensor:', data.shape) 
print(data) 

[Out:] 
'This is a text' --> [0 0 1 2 3 4] 
'This is not a text' --> [0 1 2 5 3 4] 
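
To see where those integers come from, you can print the tokenizer's dictionary (my addition, using the variables defined above); note that 0 is not in the dictionary, it is the padding value added by pad_sequences:

print(word_index) 
#[Out:] something like {'this': 1, 'is': 2, 'a': 3, 'text': 4, 'not': 5} 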

Now these can be fed into the Embedding layer:

from keras.models import Sequential 
from keras.layers import Embedding 

model = Sequential() 
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length, mask_zero=True)) #mask_zero=True lets downstream layers ignore the 0 padding 
model.compile(optimizer='adam', loss='categorical_crossentropy') 
output_array = model.predict(data) #returns the embedding vector for every word index in data 

output_array contains an array of size (2, 6, 3): 2 input reviews or sentences in my case, 6 is the maximum number of words in each review (max_review_length) and 3 is embedding_vecor_length. For example:

array([[[-0.01494285, -0.007915 , 0.01764857], 
    [-0.01494285, -0.007915 , 0.01764857], 
    [-0.03019481, -0.02910612, 0.03518577], 
    [-0.0046863 , 0.04763055, -0.02629668], 
    [ 0.02297204, 0.02146662, 0.03114786], 
    [ 0.01634104, 0.02296363, -0.02348827]], 

    [[-0.01494285, -0.007915 , 0.01764857], 
    [-0.03019481, -0.02910612, 0.03518577], 
    [-0.0046863 , 0.04763055, -0.02629668], 
    [-0.01736645, -0.03719328, 0.02757809], 
    [ 0.02297204, 0.02146662, 0.03114786], 
    [ 0.01634104, 0.02296363, -0.02348827]]], dtype=float32) 

In your case, you have a list of 5000 words, which can create reviews of at most 500 words each (longer ones are cut off), and each of these 500 words is turned into a vector of size 32.
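
To make those shapes concrete, here is a minimal sketch with the numbers from your code (my addition, not from the original answer):

from keras.models import Sequential 
from keras.layers import Embedding 

m = Sequential() 
m.add(Embedding(5000, 32, input_length=500)) 
#every batch of reviews comes out as (batch_size, 500, 32): one 32-dimensional vector per word position 
print(m.layers[0].output_shape) #(None, 500, 32) 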

You can get the mapping between the word indices and the embedding vectors by running:

model.layers[0].get_weights() 

In the case below top_words was 10, so we have a mapping for 10 words, and you can see that the rows for 0, 1, 2, 3, 4 and 5 equal the output_array above.

[array([[-0.01494285, -0.007915 , 0.01764857], 
    [-0.03019481, -0.02910612, 0.03518577], 
    [-0.0046863 , 0.04763055, -0.02629668], 
    [ 0.02297204, 0.02146662, 0.03114786], 
    [ 0.01634104, 0.02296363, -0.02348827], 
    [-0.01736645, -0.03719328, 0.02757809], 
    [ 0.0100757 , -0.03956784, 0.03794377], 
    [-0.02672029, -0.00879055, -0.039394 ], 
    [-0.00949502, -0.02805768, -0.04179233], 
    [ 0.0180716 , 0.03622523, 0.02232374]], dtype=float32)] 
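
As a quick sanity check (my addition, assuming the variables from the example above are still in scope): position 2 of the first padded review holds word index 1 ('this'), so its output vector should equal row 1 of this weight matrix, and the padding index 0 maps to row 0:

import numpy as np 

weights = model.layers[0].get_weights()[0] #shape (top_words, embedding_vecor_length) 
print(np.allclose(output_array[0, 2], weights[1])) #True 
print(np.allclose(output_array[0, 0], weights[0])) #True 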

As https://stats.stackexchange.com/questions/270546/how-does-keras-embedding-layer-work mentions, these vectors are initialized randomly and are then optimized by the network optimizer just like any other parameter of the network.
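
If you want to watch that happen, here is a rough sketch (my own, not from the linked answer) that trains a tiny classifier on the two example texts with made-up labels and checks that the embedding rows have moved:

import numpy as np 
from keras.models import Sequential 
from keras.layers import Embedding, Flatten, Dense 

m = Sequential() 
m.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length)) 
m.add(Flatten()) 
m.add(Dense(1, activation='sigmoid')) 
m.compile(optimizer='adam', loss='binary_crossentropy') 

before = m.layers[0].get_weights()[0].copy() 
labels = np.array([1, 0]) #dummy labels for the two example sentences 
m.fit(data, labels, epochs=2, verbose=0) #use nb_epoch=2 in older Keras versions 
after = m.layers[0].get_weights()[0] 
print(np.allclose(before, after)) #False: the embedding vectors were updated by the optimizer 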