Theano convolution: TypeError: conv2d() got multiple values for argument 'input'

I am trying to build a "double" layer that first convolves and then max-pools. The network is fed 20x20 input images and should output one class from [0, 25]. When I try to build the function, activating the conv-pool layer fails with the error TypeError: conv2d() got multiple values for argument 'input'.

class ConvPoolLayer:
    conv_func = T.nnet.conv2d
    pool_func = max_pool_2d

    def __init__(self, image_shape, n_feature_maps, act_func,
                 local_receptive_field_size=(5, 5), pool_size=(2, 2),
                 init_weight_func=init_rand_weights, init_bias_weight_func=init_rand_weights):
        """
        Generate a convolutional and a subsequent pooling layer with one bias node for each channel in the pooling layer.
        :param image_shape: tuple(batch size, input channels, input rows, input columns) where
            input_channels = number of feature maps in upstream layer
            input rows, input columns = output size of upstream layer
        :param n_feature_maps: number of feature maps/filters in this layer
        :param local_receptive_field_size: size of local receptive field
        :param pool_size:
        :param act_func:
        :param init_weight_func:
        :param init_bias_weight_func:
        """
        self.image_shape = image_shape
        self.filter_shape = (n_feature_maps, image_shape[1]) + local_receptive_field_size
        self.act_func = act_func
        self.pool_size = pool_size
        self.weights = init_weight_func(self.filter_shape)
        self.bias_weights = init_bias_weight_func((n_feature_maps,))
        self.params = [self.weights, self.bias_weights]
        self.output_values = None

    def activate(self, input_values):
        """
        :param input_values: the output from the upstream layer (which is input to this layer)
        :return:
        """
        input_values = input_values.reshape(self.image_shape)
        conv = self.conv_func(
            input=input_values,
            image_shape=self.image_shape,
            filters=self.weights,
            filter_shape=self.filter_shape
        )
        pooled = self.pool_func(
            input=conv,
            ds=self.pool_size,
            ignore_border=True
        )
        self.output_values = self.act_func(pooled + self.bias_weights.dimshuffle('x', 0, 'x', 'x'))

    def output(self):
        assert self.output_values is not None, 'Asking for output before activating layer'
        return self.output_values


def test_conv_layer(): 
    batch_size = 10 
    input_shape = (20, 20) 
    output_shape = (26,) 
    image_shape = (batch_size, 1) + input_shape # e.g image_shape = (10, 1, 20, 20) 
    n_feature_maps = 10 
    convpool_layer = ConvPoolLayer(image_shape, n_feature_maps, T.nnet.relu) 

    x = T.fmatrix('X') 
    y = T.fmatrix('Y') 

    convpool_layer.activate(x) 


test_conv_layer() 

Answer

The problem is that you have made conv_func() a method of the class ConvPoolLayer(). So when you do this:

conv = self.conv_func(input=input_values,
                      image_shape=self.image_shape,
                      filters=self.weights,
                      filter_shape=self.filter_shape)

Python, behind the scenes, turns that into this:

conv = ConvPoolLayer.conv_func(self, input=input_values,
                               image_shape=self.image_shape,
                               filters=self.weights,
                               filter_shape=self.filter_shape)

And since input is the first argument, you end up supplying more than one value for it.
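To see the mechanism in isolation, here is a minimal plain-Python sketch (no Theano needed; fake_conv2d and Layer are made-up names) that reproduces the same TypeError:

# A function stored as a class attribute becomes a bound method on attribute
# lookup, so `self` is silently passed as the first positional argument.
def fake_conv2d(input, filters):
    return (input, filters)

class Layer:
    conv_func = fake_conv2d          # same pattern as ConvPoolLayer.conv_func

    def activate(self):
        # Behaves like Layer.conv_func(self, input=..., filters=...):
        # `self` already fills the `input` slot, and the keyword argument
        # then supplies it a second time.
        return self.conv_func(input=[1, 2, 3], filters=[4, 5])

try:
    Layer().activate()
except TypeError as e:
    print(e)   # fake_conv2d() got multiple values for argument 'input'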

You can avoid this by wrapping the method in staticmethod(), like this:

conv_func = staticmethod(T.nnet.conv2d) 

Or by setting the conv_func attribute from within __init__. Note that you will run into the same problem with pool_func.
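Continuing the same standalone sketch (fake_conv2d again stands in for T.nnet.conv2d), the two fixes look roughly like this:

def fake_conv2d(input, filters):
    return (input, filters)

class LayerStatic:
    conv_func = staticmethod(fake_conv2d)    # fix 1: wrap in staticmethod()

    def activate(self):
        return self.conv_func(input=[1, 2, 3], filters=[4, 5])   # no implicit self

class LayerInit:
    def __init__(self):
        self.conv_func = fake_conv2d         # fix 2: set the attribute on the instance

    def activate(self):
        return self.conv_func(input=[1, 2, 3], filters=[4, 5])   # plain function, no binding

print(LayerStatic().activate())   # ([1, 2, 3], [4, 5])
print(LayerInit().activate())     # ([1, 2, 3], [4, 5])

In the question's class, that means conv_func = staticmethod(T.nnet.conv2d) and pool_func = staticmethod(max_pool_2d), or assigning both functions to self inside __init__.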


Thank you so much. I have been grinding at this for hours and getting nowhere! Optional follow-up question: why do the bias weights need to be dimshuffled before being added to the pooled layer, as in 'self.output_values = self.act_func(pooled + self.bias_weights.dimshuffle('x', 0, 'x', 'x'))'? Would it be possible to shape the bias weights correctly in the first place? – tsorn


You can shape the bias_weights as (1, n_feature_maps, 1, 1) if you want, but the dimshuffling is only temporary. The reason you need it is that things must have the same (broadcast-compatible) shape in order to be added together. – abergeron
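A small NumPy sketch of the broadcasting rule the comment refers to (the shapes here are just illustrative; Theano follows the same alignment rule):

import numpy as np

n_feature_maps = 10
pooled = np.zeros((10, n_feature_maps, 8, 8))   # (batch, channels, rows, cols)
bias = np.arange(n_feature_maps, dtype=float)   # shape (n_feature_maps,)

# pooled + bias would align the bias with the trailing (columns) axis and fail
# (or broadcast along the wrong axis if the sizes happened to match). Adding
# length-1 axes lines the bias up with the channel axis instead:
out = pooled + bias.reshape(1, n_feature_maps, 1, 1)
print(out.shape)   # (10, 10, 8, 8)

The dimshuffle('x', 0, 'x', 'x') in the question produces the same (1, n_feature_maps, 1, 1) layout, with the new axes explicitly marked as broadcastable.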