2017-07-25

Given a predefined Keras model, I am trying to first load pre-trained weights, then remove one to three of the model's internal (non-final) layers and replace them with another layer. In other words: delete a layer and insert a new intermediate layer into a Keras model.

I can't seem to find any documentation on keras.io about doing this kind of thing, or about removing layers from a predefined model at all.

The model I am using is the good ole VGG-16 network, instantiated in the functional style as shown below:

def model(self, output_shape): 

    # Prepare image for input to model 
    img_input = Input(shape=self._input_shape) 

    # Block 1 
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input) 
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x) 
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x) 

    # Block 2 
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x) 
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x) 
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x) 

    # Block 3 
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x) 
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x) 
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x) 
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x) 

    # Block 4 
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x) 
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x) 
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x) 
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x) 

    # Block 5 
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x) 
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x) 
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x) 
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x) 

    # Classification block 
    x = Flatten(name='flatten')(x) 
    x = Dense(4096, activation='relu', name='fc1')(x) 
    x = Dropout(0.5)(x) 
    x = Dense(4096, activation='relu', name='fc2')(x) 
    x = Dropout(0.5)(x) 
    x = Dense(output_shape, activation='softmax', name='predictions')(x) 

    inputs = img_input 

    # Create model. 
    model = Model(inputs, x, name=self._name) 

    return model 

So, as an example, I would like to take the two Conv layers in Block 1 and replace them with just a single Conv layer, after loading the original weights into all the other layers.

Any ideas?

Answer


Suppose you have a model, initialized either via the function above or via keras.applications.VGG16(weights='imagenet'). Now you need to insert a new layer in the middle in such a way that the weights of all the other layers are preserved.

The idea is to disassemble the whole network into separate layers and then assemble it back. Here is the code specifically for your task:

from keras import applications
from keras.layers import Conv2D
from keras.models import Model

vgg_model = applications.VGG16(include_top=True, weights='imagenet')

# Disassemble layers
layers = [l for l in vgg_model.layers]

# Define the new convolutional layer.
# Important: the number of filters should stay the same!
# Note: the receptive field of two 3x3 convolutions is 5x5.
new_conv = Conv2D(filters=64,
                  kernel_size=(5, 5),
                  name='new_conv',
                  padding='same')(layers[0].output)

# Now stack everything back.
# Note: if you are going to fine-tune the model, do not forget to
# mark the other layers as non-trainable.
x = new_conv
for i in range(3, len(layers)):  # skip the input layer and the two replaced convs
    layers[i].trainable = False
    x = layers[i](x)

# Final touch
result_model = Model(inputs=layers[0].input, outputs=x)
result_model.summary() 
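The same disassemble-and-reassemble pattern can be checked on a toy model without downloading the VGG weights: because the retained layer objects are reused rather than copied, their weights carry over to the rebuilt model automatically. This is an illustrative sketch only; the layer names and sizes are made up, and it uses the tensorflow.keras import path rather than standalone keras:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Toy stand-in for VGG: an input, two layers we will replace, one we keep.
inp = Input(shape=(4,))
x = Dense(8, name='old1')(inp)
x = Dense(8, name='old2')(x)
out = Dense(2, name='kept')(x)
original = Model(inp, out)

# Disassemble, swap the two middle layers for one new layer, reassemble.
layers = list(original.layers)               # [InputLayer, old1, old2, kept]
y = Dense(8, name='new_mid')(layers[0].output)
for layer in layers[3:]:                     # everything after the replaced span
    layer.trainable = False
    y = layer(y)
rebuilt = Model(layers[0].input, y)

# The kept layer is the very same object, so its trained weights survive.
assert rebuilt.get_layer('kept') is original.get_layer('kept')
assert np.array_equal(original.get_layer('kept').get_weights()[0],
                      rebuilt.get_layer('kept').get_weights()[0])
```

The key point is that calling an already-built layer on a new, shape-compatible tensor adds a node to the graph but does not reset the layer's weights.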

And the output of the code above:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_50 (InputLayer)        (None, 224, 224, 3)       0
_________________________________________________________________
new_conv (Conv2D)            (None, 224, 224, 64)      1792
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000
=================================================================
Total params: 138,320,616
Trainable params: 1,792
Non-trainable params: 138,318,824
_________________________________________________________________

Elegance: debatable. Functionality: absolutely. Thanks @FalconUA! I am still open to a less destroy-and-rebuild approach if one turns up! – RACKGNOME
