2017-07-05
0

Cannot fit data to a 3D convolutional U-Net in Keras

I have run into a problem: I want to build a 3D convolutional U-Net, and I am using Keras for it.

My data are MRI images from the Data Science Bowl 2017 competition (all pixel values are between 0 and 1). All of the MRIs are stored in a numpy array with shape:

data_ch.shape 
(94, 50, 50, 50, 1) 

94 patients, 50 MRI slices of 50×50 pixels each, 1 channel. [Image: the patients' MRI from the dataset]
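For reference, here is a minimal sketch of how an array with that shape might be assembled (the volumes list and its contents are hypothetical; only the final shape matches the question):

import numpy as np

# hypothetical list of 94 preprocessed scans, each a (50, 50, 50) volume
# with voxel intensities already scaled into the 0..1 range
volumes = [np.random.rand(50, 50, 50).astype('float32') for _ in range(94)]

# stack along a new patient axis and append a single channel axis
data_ch = np.stack(volumes, axis=0)[..., np.newaxis]
print(data_ch.shape)  # (94, 50, 50, 50, 1)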

I want to build a 3D convolutional U-Net, so the input and the output of this network are the same 3D arrays. The 3D U-Net:

from keras.models import Model
from keras.layers import Input, Conv3D, MaxPooling3D, UpSampling3D

input_img = Input(shape=(data_ch.shape[1], data_ch.shape[2], data_ch.shape[3], data_ch.shape[4])) 
x=Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(input_img) 
x=MaxPooling3D(pool_size=(2, 2, 2), padding='same')(x) 
x=Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(x) 
x=MaxPooling3D(pool_size=(2, 2, 2), padding='same')(x) 

x=UpSampling3D(size=(2, 2, 2))(x) 
x=Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(x) # PADDING IS NOT THE SAME!!!!! 
x=UpSampling3D(size=(2, 2, 2))(x) 
x=Conv3D(filters=1, kernel_size=(3, 3, 3), activation='sigmoid')(x) 

model=Model(input_img, x) 
model.compile(optimizer='adadelta', loss='binary_crossentropy') 

model.summary() 
Layer (type)                 Output Shape              Param # 
================================================================= 
input_5 (InputLayer)         (None, 50, 50, 50, 1)     0 
_________________________________________________________________ 
conv3d_27 (Conv3D)           (None, 50, 50, 50, 8)     224 
_________________________________________________________________ 
max_pooling3d_12 (MaxPooling (None, 25, 25, 25, 8)     0 
_________________________________________________________________ 
conv3d_28 (Conv3D)           (None, 25, 25, 25, 8)     1736 
_________________________________________________________________ 
max_pooling3d_13 (MaxPooling (None, 13, 13, 13, 8)     0 
_________________________________________________________________ 
up_sampling3d_12 (UpSampling (None, 26, 26, 26, 8)     0 
_________________________________________________________________ 
conv3d_29 (Conv3D)           (None, 26, 26, 26, 8)     1736 
_________________________________________________________________ 
up_sampling3d_13 (UpSampling (None, 52, 52, 52, 8)     0 
_________________________________________________________________ 
conv3d_30 (Conv3D)           (None, 50, 50, 50, 1)     217 
================================================================= 
Total params: 3,913 
Trainable params: 3,913 
Non-trainable params: 0 
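For reference, the shapes in this summary follow the usual rounding rules: pooling with padding='same' rounds odd sizes up (25 becomes 13), and the final Conv3D uses the default padding='valid', which trims the borders. A small sketch of the arithmetic, only reproducing the numbers shown above:

from math import ceil

size = 50
size = ceil(size / 2)  # max_pooling3d_12, padding='same': 25
size = ceil(size / 2)  # max_pooling3d_13, padding='same': 13
size = size * 2        # up_sampling3d_12: 26
size = size * 2        # up_sampling3d_13: 52
size = size - (3 - 1)  # conv3d_30, padding='valid', kernel size 3: 50
print(size)            # 50 -- the output shape matches the input again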

But when I try to fit the data to this network:

model.fit(data_ch, data_ch, epochs=1, batch_size=10, shuffle=True, verbose=1) 

the program raises an error:

ValueError        Traceback (most recent call last) 
C:\Users\Taranov\Anaconda3\lib\site-packages\theano\compile\function_module.py in __call__(self, *args, **kwargs) 
    883    outputs =\ 
--> 884     self.fn() if output_subset is None else\ 
    885     self.fn(output_subset=output_subset) 

ValueError: CudaNdarray_CopyFromCudaNdarray: need same dimensions for dim 1, destination=13, source=14 

During handling of the above exception, another exception occurred: 

ValueError        Traceback (most recent call last) 
<ipython-input-26-b334d38d9608> in <module>() 
----> 1 model.fit(data_ch, data_ch, epochs=1, batch_size=10, shuffle=True, verbose=1) 

C:\Users\Taranov\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs) 
    1496        val_f=val_f, val_ins=val_ins, shuffle=shuffle, 
    1497        callback_metrics=callback_metrics, 
-> 1498        initial_epoch=initial_epoch) 
    1499 
    1500  def evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None): 

C:\Users\Taranov\Anaconda3\lib\site-packages\keras\engine\training.py in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch) 
    1150     batch_logs['size'] = len(batch_ids) 
    1151     callbacks.on_batch_begin(batch_index, batch_logs) 
-> 1152     outs = f(ins_batch) 
    1153     if not isinstance(outs, list): 
    1154      outs = [outs] 

C:\Users\Taranov\Anaconda3\lib\site-packages\keras\backend\theano_backend.py in __call__(self, inputs) 
    1156  def __call__(self, inputs): 
    1157   assert isinstance(inputs, (list, tuple)) 
-> 1158   return self.function(*inputs) 
    1159 
    1160 

C:\Users\Taranov\Anaconda3\lib\site-packages\theano\compile\function_module.py in __call__(self, *args, **kwargs) 
    896      node=self.fn.nodes[self.fn.position_of_error], 
    897      thunk=thunk, 
--> 898      storage_map=getattr(self.fn, 'storage_map', None)) 
    899    else: 
    900     # old-style linkers raise their own exceptions 

C:\Users\Taranov\Anaconda3\lib\site-packages\theano\gof\link.py in raise_with_op(node, thunk, exc_info, storage_map) 
    323   # extra long error message in that case. 
    324   pass 
--> 325  reraise(exc_type, exc_value, exc_trace) 
    326 
    327 

C:\Users\Taranov\Anaconda3\lib\site-packages\six.py in reraise(tp, value, tb) 
    683    value = tp() 
    684   if value.__traceback__ is not tb: 
--> 685    raise value.with_traceback(tb) 
    686   raise value 
    687 

C:\Users\Taranov\Anaconda3\lib\site-packages\theano\compile\function_module.py in __call__(self, *args, **kwargs) 
    882   try: 
    883    outputs =\ 
--> 884     self.fn() if output_subset is None else\ 
    885     self.fn(output_subset=output_subset) 
    886   except Exception: 

ValueError: CudaNdarray_CopyFromCudaNdarray: need same dimensions for dim 1, destination=13, source=14 
Apply node that caused the error: GpuAlloc(GpuDimShuffle{0,2,x,3,4,1}.0, Shape_i{0}.0, TensorConstant{13}, TensorConstant{2}, TensorConstant{13}, TensorConstant{13}, TensorConstant{8}) 
Toposort index: 163 
Inputs types: [CudaNdarrayType(float32, (False, False, True, False, False, False)), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int8, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar)] 
Inputs shapes: [(10, 14, 1, 14, 14, 8),(),(),(),(),(),()] 
Inputs strides: [(21952, 196, 0, 14, 1, 2744),(),(),(),(),(),()] 
Inputs values: ['not shown', array(10, dtype=int64), array(13, dtype=int64), array(2, dtype=int8), array(13, dtype=int64), array(13, dtype=int64), array(8, dtype=int64)] 
Outputs clients: [[GpuReshape{5}(GpuAlloc.0, MakeVector{dtype='int64'}.0)]] 

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'. 
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node. 

I tried to follow the hint and set the Theano flags:

import theano 
import os 
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32, optimizer='None',exception_verbosity=high" 

But it still doesn't work.
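(One likely reason the flags had no effect: as far as I know, THEANO_FLAGS is only read when Theano is first imported, and the optimizer value is normally written without quotes. A minimal sketch of how the same flags could be set, assuming a fresh Python session:)

import os
# the environment variable must be set before Theano is imported,
# e.g. at the very top of the script or on the command line
os.environ["THEANO_FLAGS"] = ("mode=FAST_RUN,device=gpu,floatX=float32,"
                              "optimizer=None,exception_verbosity=high")

import theano  # Theano reads THEANO_FLAGS only at this point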

Could you help me? Thank you very much!

+0

The problem is not in the code you posted. How are you calling the "fit" method? What are the shapes of all the arrays you pass to that method? –

+0

I have edited my question. I used model.fit(data_ch, data_ch, epochs=1, batch_size=10, shuffle=True, verbose=1). The shape of the array is (94, 50, 50, 50, 1): 94 patients, 50 slices, 50×50 pixels, 1 channel. –

Answers

0

OK... it sounds strange, but MaxPooling3D seems to have some bug with padding='same'. So I wrote your code without it and added an initial padding just to make your dimensions compatible:

import keras.backend as K 
from keras.models import Model
from keras.layers import Input, Conv3D, MaxPooling3D, UpSampling3D, Lambda

inputShape = (data_ch.shape[1], data_ch.shape[2], data_ch.shape[3], data_ch.shape[4]) 
paddedShape = (data_ch.shape[1]+2, data_ch.shape[2]+2, data_ch.shape[3]+2, data_ch.shape[4]) 

#initial padding 
input_img= Input(shape=inputShape) 
x = Lambda(lambda x: K.spatial_3d_padding(x, padding=((1, 1), (1, 1), (1, 1))), 
    output_shape=paddedShape)(input_img) #Lambda layers require output_shape 

#your original code without padding for MaxPooling layers (replace input_img with x) 
x=Conv3D(filters=8, kernel_size=3, activation='relu', padding='same')(x) 
x=MaxPooling3D(pool_size=2)(x) 
x=Conv3D(filters=8, kernel_size=3, activation='relu', padding='same')(x) 
x=MaxPooling3D(pool_size=2)(x) 

x=UpSampling3D(size=2)(x) 
x=Conv3D(filters=8, kernel_size=3, activation='relu', padding='same')(x) # PADDING IS NOT THE SAME!!!!! 
x=UpSampling3D(size=2)(x) 
x=Conv3D(filters=1, kernel_size=3, activation='sigmoid')(x) 

model=Model(input_img, x) 
model.compile(optimizer='adadelta', loss='binary_crossentropy') 
model.summary() 
print(model.predict(data_ch)[1]) 
model.fit(data_ch,data_ch,epochs=1,verbose=2,batch_size=10) 
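The initial padding is what makes the plain (valid) pooling work out: 50 is padded to 52, the two pooling layers give 52 -> 26 -> 13 (both inputs are even, so padding='same' is not needed), the two upsampling layers bring it back up to 26 -> 52, and the final Conv3D with the default padding='valid' and kernel size 3 trims 52 back to 50. A quick way to confirm that the shapes round-trip:

print(model.output_shape)  # expected: (None, 50, 50, 50, 1)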
+0

It works! Thank you!!! Do I understand you correctly that this error is only related to padding='same'? –

+0

Yes, it's related only to padding='same' in the MaxPooling layers (which, in theory, we shouldn't need to use there anyway, I think...). It is meant to be used in convolutional layers. –

+0

Daniel, sorry, one more question. Could you explain what K.spatial_3d_padding does? I can't understand it ((( Is it the same as padding='same' in pooling?) –
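(For what it's worth, K.spatial_3d_padding simply adds a fixed border of zeros around the three spatial dimensions of a 5D tensor; it is explicit, up-front padding, not the on-demand padding that padding='same' applies inside a pooling or convolution layer. A minimal sketch of its effect on shapes, assuming the same imports as in the answer above:)

inp = Input(shape=(50, 50, 50, 1))
padded = Lambda(lambda t: K.spatial_3d_padding(t, padding=((1, 1), (1, 1), (1, 1))),
                output_shape=(52, 52, 52, 1))(inp)
print(K.int_shape(padded))  # (None, 52, 52, 52, 1)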

0

Try reducing the batch size to something like 2. If you see that your network needs more GPU memory, then consider upgrading the GPU as well.
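For example, assuming the same model and data as above, that would be:

model.fit(data_ch, data_ch, epochs=1, batch_size=2, shuffle=True, verbose=1)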