
Tensorflow + Keras + Convolution2d: ValueError: Filter must not be larger than the input: Filter: (5, 5) Input: (3, 350)

I have been trying to run the code below, which I got from here, and even though I changed almost nothing apart from the image size (350, 350 instead of 150, 150), I still cannot get it to work. I get the filter error above (the one in the title), but I do not think I have done anything wrong, so I do not understand it. It basically says I cannot have more nodes than inputs, right?

I was eventually able to hack my way to a fix by changing this line:

model.add(Convolution2D(32, 5, 5, border_mode='valid', input_shape=(3, IMG_WIDTH, IMG_HEIGHT))) 

to this:

model.add(Convolution2D(32, 5, 5, border_mode='valid', input_shape=(IMG_WIDTH, IMG_HEIGHT, 3))) 

but I would still like to understand why this works.

Here is the code, followed by the error I get. I would appreciate some help (I am using Python Anaconda 2.7.11).

# IMPORT LIBRARIES --------------------------------------------------------------------------------# 
import glob 
import tensorflow 
from keras.preprocessing.image import ImageDataGenerator 
from keras.models import Sequential 
from keras.layers import Convolution2D, MaxPooling2D 
from keras.layers import Activation, Dropout, Flatten, Dense 
from settings import RAW_DATA_ROOT 

# GLOBAL VARIABLES --------------------------------------------------------------------------------# 
TRAIN_PATH = RAW_DATA_ROOT + "/train/" 
TEST_PATH = RAW_DATA_ROOT + "/test/" 

IMG_WIDTH, IMG_HEIGHT = 350, 350 

NB_TRAIN_SAMPLES = len(glob.glob(TRAIN_PATH + "*")) 
NB_VALIDATION_SAMPLES = len(glob.glob(TEST_PATH + "*")) 
NB_EPOCH = 50 

# FUNCTIONS ---------------------------------------------------------------------------------------# 
def baseline_model(): 
    """ 
    The Keras library provides wrapper classes to allow you to use neural network models developed
    with Keras in scikit-learn. The code snippet below constructs a simple stack of 3 convolution
    layers with ReLU activations, each followed by a max-pooling layer. This is very similar to the
    architectures that Yann LeCun advocated in the 1990s for image classification (with the
    exception of ReLU).
    :return: The training model. 
    """ 
    model = Sequential() 
    model.add(Convolution2D(32, 5, 5, border_mode='valid', input_shape=(3, IMG_WIDTH, IMG_HEIGHT))) 
    model.add(Activation('relu')) 
    model.add(MaxPooling2D(pool_size=(2, 2))) 

    model.add(Convolution2D(32, 5, 5, border_mode='valid')) 
    model.add(Activation('relu')) 
    model.add(MaxPooling2D(pool_size=(2, 2))) 

    model.add(Convolution2D(64, 5, 5, border_mode='valid')) 
    model.add(Activation('relu')) 
    model.add(MaxPooling2D(pool_size=(2, 2))) 

    # Add a fully connected layer that converts our 3D feature maps to 1D feature vectors
    model.add(Flatten()) 
    model.add(Dense(64)) 
    model.add(Activation('relu')) 

    # Use a dropout layer to reduce over-fitting by preventing a layer from seeing the exact
    # same pattern twice (it works by switching off a node once in a while across epochs...).
    # The Dense(8) + softmax below serves as our output layer.
    model.add(Dropout(0.5)) 
    model.add(Dense(8)) 
    model.add(Activation('softmax')) 

    # Compile model 
    model.compile(loss='categorical_crossentropy', 
        optimizer='adam', 
        metrics=['accuracy']) 

    return model 

def train_model(model): 
    """ 
    Simple script that uses the baseline model and returns a trained model. 
    :param model: model 
    :return: model 
    """ 

    # Define the augmentation configuration we will use for training 
    TRAIN_DATAGEN = ImageDataGenerator(
      rescale=1./255, 
      shear_range=0.2, 
      zoom_range=0.2, 
      horizontal_flip=True) 

    # Build the train generator 
    TRAIN_GENERATOR = TRAIN_DATAGEN.flow_from_directory(
      TRAIN_PATH, 
      target_size=(IMG_WIDTH, IMG_HEIGHT), 
      batch_size=32, 
      class_mode='categorical') 

    TEST_DATAGEN = ImageDataGenerator(rescale=1./255) 

    # Build the validation generator 
    TEST_GENERATOR = TEST_DATAGEN.flow_from_directory(
      TEST_PATH, 
      target_size=(IMG_WIDTH, IMG_HEIGHT), 
      batch_size=32, 
      class_mode='categorical') 

    # Train model 
    model.fit_generator(
      TRAIN_GENERATOR, 
      samples_per_epoch=NB_TRAIN_SAMPLES, 
      nb_epoch=NB_EPOCH, 
      validation_data=TEST_GENERATOR, 
      nb_val_samples=NB_VALIDATION_SAMPLES) 

    # Always save your weights after training or during training 
    model.save_weights('first_try.h5') 

# END OF FILE -------------------------------------------------------------------------------------# 

And the error:

Using TensorFlow backend. 
Training set: 0 files. 
Test set: 0 files. 
Traceback (most recent call last): 
    File "/Users/christoshadjinikolis/GitHub_repos/datareplyuk/ODSC_Facial_Sentiment_Analysis/src/model/__init__.py", line 79, in <module> 
    model = baseline_model() 
    File "/Users/christoshadjinikolis/GitHub_repos/datareplyuk/ODSC_Facial_Sentiment_Analysis/src/model/training_module.py", line 31, in baseline_model 
    model.add(Convolution2D(32, 5, 5, border_mode='valid', input_shape=(3, IMG_WIDTH, IMG_HEIGHT))) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/keras/models.py", line 276, in add 
    layer.create_input_layer(batch_input_shape, input_dtype) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/keras/engine/topology.py", line 370, in create_input_layer 
    self(x) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/keras/engine/topology.py", line 514, in __call__ 
    self.add_inbound_node(inbound_layers, node_indices, tensor_indices) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/keras/engine/topology.py", line 572, in add_inbound_node 
    Node.create_node(self, inbound_layers, node_indices, tensor_indices) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/keras/engine/topology.py", line 149, in create_node 
    output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0])) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/keras/layers/convolutional.py", line 466, in call 
    filter_shape=self.W_shape) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1579, in conv2d 
    x = tf.nn.conv2d(x, kernel, strides, padding=padding) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 394, in conv2d 
    data_format=data_format, name=name) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 703, in apply_op 
    op_def=op_def) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2319, in create_op 
    set_shapes_for_outputs(ret) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1711, in set_shapes_for_outputs 
    shapes = shape_func(op) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 246, in conv2d_shape 
    padding) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 184, in get2d_conv_output_size 
    (row_stride, col_stride), padding_type) 
    File "/Users/christoshadjinikolis/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 149, in get_conv_output_size 
    "Filter: %r Input: %r" % (filter_size, input_size)) 
ValueError: Filter must not be larger than the input: Filter: (5, 5) Input: (3, 350) 

TensorFlow normally uses the NHWC format, which means shapes are specified as (batch_size, height, width, channels). From a quick look at the keras documentation (https://keras.io/getting-started/sequential-model-guide/), one option in keras is to specify the shape as (channels, height, width), with batch_size given separately, which is also what your example does. So it looks like your example is correct and should have worked, and the fix should not have been necessary. If I were you, I would use pdb to step through the call stack and find out where the shape goes wrong between keras and tensorflow. –
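To make the two conventions concrete, here is a small illustrative sketch (the shapes are hypothetical and numpy is used only for illustration) of how a batch of 350x350 RGB images is laid out under each ordering:

import numpy as np

# Theano-style / 'th' ordering: (batch, channels, height, width)
batch_th = np.zeros((32, 3, 350, 350))

# TensorFlow-style / 'tf' (NHWC) ordering: (batch, height, width, channels)
batch_tf = np.zeros((32, 350, 350, 3))

# Converting between the two is just a transpose that moves the channel axis
assert batch_th.transpose(0, 2, 3, 1).shape == batch_tf.shape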


Thanks, I will take a look later next week and post my findings. –


Another possibility is that the example was only meant for a framework other than Tensorflow, and that framework specifies shapes in the order (channels, height, width). For Tensorflow you may indeed need to change the order. But that confuses me, because I thought keras was supposed to be portable across different machine learning frameworks. –

Answers


The problem is that the ordering of input_shape changes depending on the backend you are using (tensorflow or theano).

The best solution I found is to define this ordering in the file ~/.keras/keras.json.

Try using the theano ordering with the tensorflow backend, or the theano ordering with the theano backend.

Create the keras directory in your home folder and create the keras JSON file: mkdir ~/.keras && touch ~/.keras/keras.json

{ 
    "image_dim_ordering": "th", 
    "epsilon": 1e-07, 
    "floatx": "float32", 
    "backend": "tensorflow" 
} 
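As an alternative to editing the configuration file, the ordering can also be checked and overridden from code. A minimal sketch, assuming a Keras 1.x install whose backend module exposes image_dim_ordering() / set_image_dim_ordering():

from keras import backend as K

# Show the ordering Keras is currently configured with:
# 'th' -> (channels, rows, cols), 'tf' -> (rows, cols, channels)
print(K.image_dim_ordering())

# Force the theano-style ordering so that input_shape=(3, IMG_WIDTH, IMG_HEIGHT)
# is interpreted correctly even on the TensorFlow backend
K.set_image_dim_ordering('th')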

~/.keras/keras.json may already exist. Modifying it might be better than creating a new one, since there may be other settings in it that you do not want to change. – pyan


@pyan The command I mentioned creates a **directory** and a **keras.json**, and it only takes effect when there is no **keras.json** file yet. So it is safe to run and will not modify an existing file. – psylo


Ran into exactly the same problem myself while following the tutorial. As @Yao Zhang pointed out, the error is caused by the ordering in input_shape. There are several ways to fix this.

  • Option 1: change the ordering in input_shape

The line of code

model.add(Convolution2D(32, 5, 5, border_mode='valid', input_shape=(3, IMG_WIDTH, IMG_HEIGHT))) 

should be changed to

model.add(Convolution2D(32, 5, 5, border_mode='valid', input_shape=(IMG_WIDTH, IMG_HEIGHT, 3))) 

That should then work fine.

  • Option 2: specify image_dim_ordering in your layers (see the sketch after this list)

  • Option 3: modify the keras configuration file ~/.keras/keras.json, changing 'tf' to 'th'
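For option 2, a minimal sketch of what the per-layer setting could look like, assuming Keras 1.x where Convolution2D and MaxPooling2D accept a dim_ordering argument:

from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D

model = Sequential()
# dim_ordering='th' tells this layer to expect (channels, rows, cols) input,
# regardless of the backend's default image ordering
model.add(Convolution2D(32, 5, 5, border_mode='valid', dim_ordering='th',
                        input_shape=(3, 350, 350)))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering='th'))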