2016-02-12

Input dimension mismatch with binary crossentropy in Lasagne and Theano

I have read all the posts on the net addressing the issue where people forgot to change the target vector to a matrix, and since the problem persists after this change, I decided to ask my question here. A workaround is mentioned below, but new problems showed up, so I would be thankful for your suggestions!

Using a convolutional network setup with binary crossentropy and a sigmoid activation function, I get a dimension mismatch problem, but not on the training data, only during validation/test evaluation. For some strange reason, one of my validation set vectors has its dimensions switched, and I have no idea why. Training, as mentioned above, works fine. The code follows below; thanks a lot for the help (and sorry for hijacking the thread, but I saw no reason to create a new one). Most of it is taken from the lasagne tutorial examples.

Workaround and new problems:

  1. Removing "axis=1" in the valAcc definition helps, but then the validation accuracy stays at zero and the test classification always returns the same result, no matter how many nodes, layers, filters etc. I use. Even changing the training set size (I have roughly 350 samples per class, each a 48x64 grayscale image) does not change this. So something seems to be off.

Network creation:

def build_cnn(imgSet, input_var=None): 
# As a third model, we'll create a CNN of two convolution + pooling stages 
# and a fully-connected hidden layer in front of the output layer. 

# Input layer using shape information from training 
network = lasagne.layers.InputLayer(shape=(None,
    imgSet.shape[1], imgSet.shape[2], imgSet.shape[3]), input_var=input_var) 
# This time we do not apply input dropout, as it tends to work less well 
# for convolutional layers. 

# Convolutional layer with 32 kernels of size 5x5. Strided and padded 
# convolutions are supported as well; see the docstring. 
network = lasagne.layers.Conv2DLayer(
     network, num_filters=32, filter_size=(5, 5), 
     nonlinearity=lasagne.nonlinearities.rectify, 
     W=lasagne.init.GlorotUniform()) 

# Max-pooling layer of factor 2 in both dimensions: 
network = lasagne.layers.MaxPool2DLayer(network, pool_size=(2, 2)) 

# Another convolution with 16 5x5 kernels, and another 2x2 pooling: 
network = lasagne.layers.Conv2DLayer(
     network, num_filters=16, filter_size=(5, 5), 
     nonlinearity=lasagne.nonlinearities.rectify) 

network = lasagne.layers.MaxPool2DLayer(network, pool_size=(2, 2)) 

# A fully-connected layer of 64 units with 25% dropout on its inputs: 
network = lasagne.layers.DenseLayer(
     lasagne.layers.dropout(network, p=.25), 
     num_units=64, 
     nonlinearity=lasagne.nonlinearities.rectify) 

# And, finally, the 2-unit output layer with 50% dropout on its inputs: 
network = lasagne.layers.DenseLayer(
     lasagne.layers.dropout(network, p=.5), 
     num_units=1, 
     nonlinearity=lasagne.nonlinearities.sigmoid) 

return network 
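As a side note, the spatial size after the two conv/pool stages can be checked by hand. A minimal sketch, assuming the 48x64 grayscale inputs mentioned above and Lasagne's default 'valid' convolutions (no padding, stride 1) with 2x2 max-pooling:

```python
def conv_pool_shape(h, w, filter_size=5, pool=2):
    # A 'valid' convolution shrinks each spatial side by filter_size - 1,
    # then the 2x2 max-pooling halves it (integer division).
    h = (h - (filter_size - 1)) // pool
    w = (w - (filter_size - 1)) // pool
    return h, w

h, w = conv_pool_shape(48, 64)   # after the first conv + pool: (22, 30)
h, w = conv_pool_shape(h, w)     # after the second conv + pool: (9, 13)
print(h, w)  # 9 13
```

So the dense layer sees 16 feature maps of 9x13 before the single sigmoid output unit.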

All target matrices are then created like this (the training target vector as an example):

targetsTrain = np.vstack((targetsTrain, [[targetClass], ]*numTr)) 
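As a quick NumPy illustration (with made-up values for numTr and targetClass), this stacking yields a column matrix of shape (numTr, 1), one label per sample, which matches the (batch, 1) output of the single sigmoid unit:

```python
import numpy as np

numTr = 5         # hypothetical number of samples for this class
targetClass = 1   # hypothetical class label

# Start from an empty (0, 1) matrix and stack numTr label rows below it,
# mirroring the np.vstack call above.
targetsTrain = np.zeros((0, 1), dtype=np.int8)
targetsTrain = np.vstack((targetsTrain, [[targetClass], ] * numTr))
print(targetsTrain.shape)  # (5, 1)
```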

... and the theano variables like this:

inputVar = T.tensor4('inputs') 
targetVar = T.imatrix('targets') 
network = build_cnn(trainset, inputVar) 
predictions = lasagne.layers.get_output(network) 
loss = lasagne.objectives.binary_crossentropy(predictions, targetVar) 
loss = loss.mean() 
params = lasagne.layers.get_all_params(network, trainable=True) 
updates = lasagne.updates.nesterov_momentum(loss, params, learning_rate=0.01, momentum=0.9) 
valPrediction = lasagne.layers.get_output(network, deterministic=True) 
valLoss = lasagne.objectives.binary_crossentropy(valPrediction, targetVar) 
valLoss = valLoss.mean() 
valAcc = T.mean(T.eq(T.argmax(valPrediction, axis=1), targetVar), dtype=theano.config.floatX) 
train_fn = function([inputVar, targetVar], loss, updates=updates, allow_input_downcast=True) 
val_fn = function([inputVar, targetVar], [valLoss, valAcc]) 

Finally, here are the two loops, training and testing. The first one runs fine, the second one throws the error excerpted below.

# -- Neural network training itself -- # 
numIts = 100 
for itNr in range(numIts): 
    train_err = 0 
    train_batches = 0 
    for batch in iterate_minibatches(trainset.astype('float32'), targetsTrain.astype('int8'), len(trainset)//4, shuffle=True): 
        inputs, targets = batch 
        print(inputs.shape) 
        print(targets.shape) 
        train_err += train_fn(inputs, targets) 
        train_batches += 1 

    # And a full pass over the validation data: 
    val_err = 0 
    val_acc = 0 
    val_batches = 0 

    for batch in iterate_minibatches(valset.astype('float32'), targetsVal.astype('int8'), len(valset)//3, shuffle=False): 
        inputs, targets = batch 
        err, acc = val_fn(inputs, targets) 
        val_err += err 
        val_acc += acc 
        val_batches += 1 

Error excerpt:

Exception "unhandled ValueError" 
Input dimension mis-match. (input[0].shape[1] = 52, input[1].shape[1] = 1) 
Apply node that caused the error: Elemwise{eq,no_inplace}(DimShuffle{x,0}.0, targets) 
Toposort index: 36 
Inputs types: [TensorType(int64, row), TensorType(int32, matrix)] 
Inputs shapes: [(1, 52), (52, 1)] 
Inputs strides: [(416, 8), (4, 4)] 
Inputs values: ['not shown', 'not shown'] 
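The shapes in this traceback can be reproduced in plain NumPy (a sketch, independent of Theano): argmax with axis=1 over the (52, 1) prediction matrix always yields zeros of shape (52,), which Theano dimshuffles into a (1, 52) row before comparing it against the (52, 1) target matrix:

```python
import numpy as np

n = 52
predictions = np.random.rand(n, 1)           # one sigmoid output per sample
targets = np.random.randint(0, 2, (n, 1))    # target matrix, shape (52, 1)

# argmax over axis=1 of a (52, 1) matrix is always 0; adding the leading
# axis mimics Theano's DimShuffle{x,0}, giving a (1, 52) row.
row = np.argmax(predictions, axis=1)[np.newaxis, :]

# NumPy silently broadcasts (1, 52) against (52, 1) to a (52, 52) result;
# Theano instead raises "Input dimension mis-match", because the matrix's
# second dimension (length 1) is not declared broadcastable.
comparison = (row == targets)
print(row.shape, comparison.shape)  # (1, 52) (52, 52)
```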

Again, thanks for your help!

Answer


So it seems the error is in the evaluation of the validation accuracy. When you remove "axis=1" from the calculation, argmax runs over the whole array and returns only a single number. That scalar is then broadcast in the comparison, which is why you see the same value over the whole set.
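A small NumPy sketch of that effect (with hypothetical values): without an axis argument, argmax flattens the array and returns a single scalar index, which is then broadcast against every target in the mean, so the result has nothing to do with accuracy:

```python
import numpy as np

valPrediction = np.array([[0.2], [0.9], [0.7]])  # (3, 1) sigmoid outputs
targets = np.array([[0], [1], [1]])              # (3, 1) target matrix

idx = np.argmax(valPrediction)   # no axis: flattened, returns the scalar 1
acc = np.mean(idx == targets)    # one index compared against every label:
                                 # just the fraction of labels equal to idx
print(idx, acc)
```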

But judging from the error you posted, the "T.eq" op throws the error because it has to compare a 52 x 1 with a 1 x 52 vector (matrix for theano/numpy). So, I suggest you try replacing the line with:

valAcc = T.mean(T.eq(T.argmax(valPrediction, axis=1), targetVar.T)) 

I hope this fixes the error, but I have not tested it myself.

Edit: The error lies in the argmax op call. Normally, argmax is there to determine which of the output units is most active. However, in your setting you only have one output neuron, which means the argmax over all output neurons will always return 0 (for the first argument).

This is why your network gives you the impression of always returning 0 as its output.

By replacing:

valAcc = T.mean(T.eq(T.argmax(valPrediction, axis=1), targetVar.T)) 

with:

binaryPrediction = valPrediction > .5 
valAcc = T.mean(T.eq(binaryPrediction, targetVar.T)) 

you should get the desired result.

I am just not sure whether the transpose is still necessary.
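For what it is worth, the shapes of the fixed version can be checked in plain NumPy (a sketch with made-up values): with a (batch, 1) prediction and a (batch, 1) target matrix the shapes already agree elementwise, so the transpose should indeed be unnecessary:

```python
import numpy as np

valPrediction = np.array([[0.2], [0.9], [0.7], [0.4]])  # (4, 1) sigmoid outputs
targets = np.array([[0], [1], [1], [1]])                # (4, 1) target matrix

binaryPrediction = valPrediction > .5          # elementwise threshold, (4, 1)
valAcc = np.mean(binaryPrediction == targets)  # shapes match, no transpose
print(valAcc)  # 0.75
```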


As mentioned above, the dimension mismatch occurs with axis=1. Once I removed it, the error disappeared, but training does not seem to work. I also tried transposing the prediction, which led to another dimension error. – gilgamash


Ok, using the transpose works, but now the validation accuracy stays constant at every step, from the first to the last iteration... – gilgamash


Could you post the dimensions of the network output? Your input seems to be a tensor4 with shape _batchsize_ x _stacksize_ x _row_ x _col_, or is it not? – romeasy