2017-04-17

I am trying to get the output of an intermediate layer in Keras. Below is my code:

XX = model.input # Keras Sequential() model object 
YY = model.layers[0].output 
F = K.function([XX], [YY]) # K refers to keras.backend 


Xaug = X_train[:9] 
Xresult = F([Xaug.astype('float32')]) 

Running this, I got an error:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'dropout_1/keras_learning_phase' with dtype bool 

I learned that because I use a Dropout layer in my model, I have to pass a learning_phase() flag to my function, as described in the Keras documentation. I changed my code as follows:

XX = model.input 
YY = model.layers[0].output 
F = K.function([XX, K.learning_phase()], [YY]) 


Xaug = X_train[:9] 
Xresult = F([Xaug.astype('float32'), 0]) 

Now I get a new error that I cannot figure out:

TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a int into a Tensor. 

Any help would be appreciated. PS: I am new to TensorFlow and Keras.
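(Editorial context on why the flag exists: layers such as Dropout behave differently at train and test time, so K.function needs to be told which phase to run in. A minimal numpy-only sketch of the idea; the function below is illustrative, not the Keras API:)

```python
import numpy as np

def dropout(x, rate, learning_phase):
    """Toy dropout: only active when learning_phase == 1 (training)."""
    if learning_phase == 1:
        mask = (np.random.rand(*x.shape) >= rate).astype(x.dtype)
        return x * mask / (1.0 - rate)  # inverted-dropout scaling
    return x  # learning_phase == 0 (test): identity, deterministic

x = np.ones((4, 4), dtype='float32')
# At test time (flag 0), the layer passes inputs through unchanged
assert np.array_equal(dropout(x, 0.5, learning_phase=0), x)
```

Feeding 0 as the phase, as in F([Xaug, 0]), therefore requests the deterministic test-time behavior.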

EDIT 1: Below is the complete code I am using. I am using the Spatial Transformer Network discussed in this NIPS paper, and its implementation here.

input_shape = X_train.shape[1:] 

# initial weights 
b = np.zeros((2, 3), dtype='float32') 
b[0, 0] = 1 
b[1, 1] = 1 
W = np.zeros((100, 6), dtype='float32') 
weights = [W, b.flatten()] 
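(A note on these initial weights, as I understand the STN setup: with W all zeros, the final Dense layer of the localization net initially outputs only its bias, i.e. the flattened 2x3 identity affine matrix [[1, 0, 0], [0, 1, 0]], so the transformer starts out as an identity mapping. A quick numpy check of that initialization, with an arbitrary feature vector standing in for the Dense layer's input:)

```python
import numpy as np

b = np.zeros((2, 3), dtype='float32')
b[0, 0] = 1
b[1, 1] = 1
W = np.zeros((100, 6), dtype='float32')

features = np.random.rand(1, 100).astype('float32')  # any 100-d feature vector
theta = features.dot(W) + b.flatten()                # Dense layer: xW + b
theta = theta.reshape(2, 3)

# theta is the identity affine transform, regardless of the input features
assert np.array_equal(theta, np.array([[1, 0, 0], [0, 1, 0]], dtype='float32'))
```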

locnet = Sequential() 
locnet.add(Convolution2D(64, (3, 3), input_shape=input_shape, padding='same')) 
locnet.add(Activation('relu')) 
locnet.add(Convolution2D(64, (3, 3), padding='same')) 
locnet.add(Activation('relu')) 
locnet.add(MaxPooling2D(pool_size=(2, 2))) 
locnet.add(Convolution2D(128, (3, 3), padding='same')) 
locnet.add(Activation('relu')) 
locnet.add(Convolution2D(128, (3, 3), padding='same')) 
locnet.add(Activation('relu')) 
locnet.add(MaxPooling2D(pool_size=(2, 2))) 
locnet.add(Convolution2D(256, (3, 3), padding='same')) 
locnet.add(Activation('relu')) 
locnet.add(Convolution2D(256, (3, 3), padding='same')) 
locnet.add(Activation('relu')) 
locnet.add(MaxPooling2D(pool_size=(2, 2))) 
locnet.add(Dropout(0.5)) 
locnet.add(Flatten()) 
locnet.add(Dense(100)) 
locnet.add(Activation('relu')) 
locnet.add(Dense(6, weights=weights)) 


model = Sequential() 

model.add(SpatialTransformer(localization_net=locnet, 
          output_size=(128, 128), input_shape=input_shape)) 

model.add(Convolution2D(64, (3, 3), padding='same')) 
model.add(Activation('relu')) 
model.add(Convolution2D(64, (3, 3), padding='same')) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2, 2))) 
model.add(Convolution2D(128, (3, 3), padding='same')) 
model.add(Activation('relu')) 
model.add(Convolution2D(128, (3, 3), padding='same')) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2, 2))) 
model.add(Convolution2D(256, (3, 3), padding='same')) 
model.add(Activation('relu')) 
model.add(Convolution2D(256, (3, 3), padding='same')) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2, 2))) 
model.add(Convolution2D(256, (3, 3), padding='same')) 
model.add(Activation('relu')) 
model.add(Convolution2D(256, (3, 3), padding='same')) 
model.add(Activation('relu')) 
model.add(MaxPooling2D(pool_size=(2, 2))) 
model.add(Dropout(0.5)) 
model.add(Flatten()) 
model.add(Dense(256)) 
model.add(Activation('relu')) 

model.add(Dense(num_classes)) 
model.add(Activation('softmax')) 

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) 

#============================================================================== 
# Start Training 
#============================================================================== 
#define training results logger callback 
csv_logger = keras.callbacks.CSVLogger(training_logs_path+'.csv') 
model.fit(X_train, y_train, 
      batch_size=batch_size, 
      epochs=20, 
      validation_data=(X_valid, y_valid), 
      shuffle=True, 
      callbacks=[SaveModelCallback(), csv_logger]) 




#============================================================================== 
# Visualize what Transformer layer has learned 
#============================================================================== 

XX = model.input 
YY = model.layers[0].output 
F = K.function([XX, K.learning_phase()], [YY]) 


Xaug = X_train[:9] 
Xresult = F([Xaug.astype('float32'), 0]) 

# input 
for i in range(9): 
    plt.subplot(3, 3, i+1) 
    plt.imshow(np.squeeze(Xaug[i])) 
    plt.axis('off') 

for i in range(9): 
    plt.subplot(3, 3, i + 1) 
    plt.imshow(np.squeeze(Xresult[0][i])) 
    plt.axis('off') 
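(On the np.squeeze calls above: a single-channel batch typically has shape (N, H, W, 1), and plt.imshow wants a 2-D array for grayscale images, so squeeze drops the trailing channel axis. The shapes below are illustrative:)

```python
import numpy as np

Xaug = np.zeros((9, 128, 128, 1), dtype='float32')  # batch of single-channel images
img = np.squeeze(Xaug[0])  # (128, 128, 1) -> (128, 128), suitable for imshow
assert img.shape == (128, 128)
```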

This should work. 1) Can you show us your model? 2) Can you try another layer of the model? 3) If it is not too much trouble, can you try building the model with the functional API? – putonspectacles


@putonspectacles I have also added my model architecture. – Ahmed

Answer


The easiest way is to create a new Keras model, without calling the backend at all. You need the functional Model API for this:

from keras.models import Model 

XX = model.input 
YY = model.layers[0].output 
new_model = Model(XX, YY) 

Xaug = X_train[:9] 
Xresult = new_model.predict(Xaug)
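One indexing difference worth noting between the two approaches: K.function returns a list of output arrays (one per requested output tensor), which is why the earlier code indexes Xresult[0][i], whereas predict on a single-output Model returns the array directly. A numpy mock of the two return shapes (illustrative, not the Keras API):

```python
import numpy as np

batch = np.zeros((9, 128, 128, 1), dtype='float32')

# K.function-style result: a list of outputs, one per output tensor
backend_style = [batch]   # images are backend_style[0][i]
# Model.predict-style result for a single-output model: the array itself
predict_style = batch     # images are predict_style[i]

assert backend_style[0].shape == (9, 128, 128, 1)
assert predict_style.shape == (9, 128, 128, 1)
```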