
I am trying to build an autoencoder model following the example given in this blog. This is the definition of the decoder layers in the autoencoder model for the MNIST case:

from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 32

input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded)
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)

# this model maps an input to its reconstruction
autoencoder = Model(input=input_img, output=decoded)

encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(input=encoded_input, output=decoded)  # the modified line (see below)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

The modification I made is decoder = Model(input=encoded_input, output=decoded); in the original post this line is written as decoder = Model(input=encoded_input, output=decoder_layer(encoded_input)). The earlier version works for a single hidden layer, which is why I made the change above. However, compiling the model above produces the following error message. Any suggestions are highly appreciated.

Traceback (most recent call last): 
File "train.py", line 37, in <module> 
decoder = Model(input=encoded_input, output=decoded) 
File "tfw/lib/python3.4/site-packages/Keras-1.0.3-py3.4.egg/keras/engine/topology.py", line 1713, in __init__ 
str(layers_with_complete_input)) 
Exception: Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(?, 784), dtype=float32) at layer "input_1". The following previous layers were accessed without issue: [] 
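
For reference, the single-hidden-layer setup from the blog that the decoder = Model(input=encoded_input, output=decoder_layer(encoded_input)) line comes from looks roughly like this (a sketch reconstructed from the blog post, not copied verbatim):

# single-hidden-layer autoencoder, roughly as in the blog post
encoding_dim = 32
input_img = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input=input_img, output=decoded)

# with a single decoding layer, wiring it to a fresh input works fine
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]  # the Dense(784) layer
decoder = Model(input=encoded_input, output=decoder_layer(encoded_input))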

Answer


I had this same problem and managed to put together a messy but working solution. Change the line where you define the decoder to:

decoder = Model(input=encoded_input, output=autoencoder.layers[6](autoencoder.layers[5](autoencoder.layers[4](encoded_input)))) 

The error you are seeing indicates a disconnected graph. In this case, the input tensor defined as encoded_input was being fed straight into the final output, the last decoding layer (the 784-unit Dense layer), while the intermediate tensors (the 64- and 128-unit Dense layers) were skipped. My solution nests the layers so that each one serves as the input to the next, with the innermost call taking encoded_input as its input.
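
If you prefer not to hard-code the nested calls, an equivalent way to build the decoder (a sketch, assuming the layer ordering above, i.e. the three decoding layers sit at indices 4-6 of autoencoder.layers) is to chain them in a loop:

# rebuild the decoder by applying the trained decoding layers to the new input
decoded_output = encoded_input
for layer in autoencoder.layers[4:]:  # Dense(64) -> Dense(128) -> Dense(784)
    decoded_output = layer(decoded_output)
decoder = Model(input=encoded_input, output=decoded_output)

This reuses the same layer objects, and therefore the same trained weights, as the autoencoder, exactly like the nested version above.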
