2017-10-05

I want to implement a classification task with a CNN, and I want to see how the weights are optimized in each epoch. For that I need the values of the penultimate layer, because I plan to write the final layer and the backpropagation myself. Which APIs would be useful for this? How do I get the values of the penultimate layer of a convolutional neural network (CNN)?

Edit: I have added the code from the Keras examples below and plan to modify it. This link provides some clues. I have marked the layer whose output I need.

from __future__ import print_function 

from keras.preprocessing import sequence 
from keras.models import Sequential 
from keras.layers import Dense, Dropout, Activation 
from keras.layers import Embedding 
from keras.layers import Conv1D, GlobalMaxPooling1D 
from keras.datasets import imdb 

# set parameters: 
max_features = 5000 
maxlen = 400 
batch_size = 100 
embedding_dims = 50 
filters = 250 
kernel_size = 3 
hidden_dims = 250 
epochs = 100 

print('Loading data...') 
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features) 
print(len(x_train), 'train sequences') 
print(len(x_test), 'test sequences') 

print('Pad sequences (samples x time)') 
x_train = sequence.pad_sequences(x_train, maxlen=maxlen) 
x_test = sequence.pad_sequences(x_test, maxlen=maxlen) 
print('x_train shape:', x_train.shape) 
print('x_test shape:', x_test.shape) 

print('Build model...') 
model = Sequential() 

# we start off with an efficient embedding layer which maps 
# our vocab indices into embedding_dims dimensions 
model.add(Embedding(max_features, 
        embedding_dims, 
        input_length=maxlen)) 
model.add(Dropout(0.2)) 

# we add a Convolution1D, which will learn
# word group filters of size kernel_size:
model.add(Conv1D(filters, 
       kernel_size, 
       padding='valid', 
       activation='relu', 
       strides=1)) 
# we use max pooling: 
model.add(GlobalMaxPooling1D()) 

# We add a vanilla hidden layer: 
model.add(Dense(hidden_dims)) 
model.add(Dropout(0.2)) 
model.add(Activation('relu')) 

# We project onto a single unit output layer, and squash it with a sigmoid: 
model.add(Dense(1)) 
model.add(Activation('sigmoid')) #<======== I need output after this. 



model.compile(loss='binary_crossentropy', 
       optimizer='adam', 
       metrics=['accuracy']) 
model.fit(x_train, y_train, 
      batch_size=batch_size, 
      epochs=epochs, 
      validation_data=(x_test, y_test)) 

Answer

You can get an individual layer of your model like this:

num_layer = 7 # Dense(1) layer 
layer = model.layers[num_layer] 
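
If you are unsure which index corresponds to which layer, a quick way to check (a small sketch, using only the model built above) is to enumerate the layers:

for i, lyr in enumerate(model.layers):
    print(i, lyr.name, lyr.output_shape)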

"I want to see how the weights are optimized in each epoch."

To get the weights of that layer, use layer.get_weights() like this:

w, b = layer.get_weights() # weights and bias of Dense(1) 
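
To watch those weights change over training (the "every epoch" part of the question), one option is a Keras callback. Here is a minimal sketch; the class name WeightLogger is just illustrative:

from keras.callbacks import Callback

class WeightLogger(Callback):
    # print the Dense(1) weights after every epoch
    def on_epoch_end(self, epoch, logs=None):
        w, b = self.model.layers[7].get_weights()
        print('epoch', epoch, 'Dense(1) weights:', w.ravel()[:5], 'bias:', b)

# then pass it to training:
# model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
#           callbacks=[WeightLogger()])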

"I need the values of the penultimate layer."

To get the evaluated values of the last layer, use model.predict():

prediction = model.predict(x_test) 

To get the evaluation of any other layer, you can do it with TensorFlow like this:

import tensorflow as tf  # TF 1.x API

input = tf.placeholder(tf.float32)   # create input placeholder
layer_output = layer(input)          # apply the Keras layer to the placeholder

init_op = tf.global_variables_initializer()  # op that initializes variables

with tf.Session() as sess:
    sess.run(init_op)

    # evaluate the layer output on the test data
    # (note: what you feed must already have the shape this layer expects)
    output = sess.run(layer_output, feed_dict={input: x_test})
    print(output)
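
(Not in the original answer, but worth noting: the same intermediate output can be obtained with Keras alone by wrapping the already-trained layers in a second Model. The sketch below assumes the Activation('relu') layer at index 6 is the penultimate layer you want.)

from keras.models import Model

# reuse the trained graph up to the Activation('relu') layer
intermediate_model = Model(inputs=model.input,
                           outputs=model.layers[6].output)
penultimate_output = intermediate_model.predict(x_test)  # shape (25000, 250)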

I want to get the output of the penultimate layer, i.e. before it goes into the last layer. Actually, I want to use my own optimizer instead of any optimizer provided by Keras. I think the output of the penultimate layer is the output of the model.add(Activation('relu')) layer, so for 25000 data points I expect an output of shape 25000 * 250. Correct me if I am wrong somewhere. –


The last part of my answer lets you do exactly that; just make sure to use the right layer, layer = model.layers[8]. Then layer_output is a tensor, so you can keep adding pure TensorFlow logic on top of it. –
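
For illustration, "adding pure TensorFlow logic" on top of such a tensor could look like the following TF 1.x sketch, which bolts a hand-written final layer and optimizer onto a 250-dimensional feature tensor (the names features and labels are hypothetical placeholders, not from the answer above):

import tensorflow as tf

features = tf.placeholder(tf.float32, shape=(None, 250))  # penultimate-layer values
labels = tf.placeholder(tf.float32, shape=(None, 1))

# hand-written final layer: logits = features . W + b
W = tf.Variable(tf.random_normal([250, 1], stddev=0.05))
b = tf.Variable(tf.zeros([1]))
logits = tf.matmul(features, W) + b

# binary cross-entropy loss and a plain SGD step, i.e. your own training loop
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)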


I used the code I mentioned in this question (https://stackoverflow.com/questions/46885680/why-different-intermediate-layer-ouput-of-cnn-in-keras) to get the intermediate layer output. –
