
I'm using TensorFlow to build a convolutional neural network. Given a tensor of shape (None, 16, 16, 4, 192), I want to perform a transposed convolution that results in shape (None, 32, 32, 7, 192).

Would a transposed convolution (deconvolution) with a filter size of [2,2,4,192,192] and strides of [2,2,1,1,1] produce the output shape I want?
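
For reference, with VALID padding a transposed convolution maps each spatial dimension as out = (in − 1) · stride + kernel, so the target sizes are reachable with kernel 2, stride 2 in height/width and kernel 4, stride 1 in depth. A quick sketch of that arithmetic, assuming VALID padding:

# Transposed-convolution output size per spatial dimension (VALID padding).
def deconv_size(size, kernel, stride):
    return (size - 1) * stride + kernel

print(deconv_size(16, 2, 2))  # 32: height and width, kernel 2, stride 2
print(deconv_size(4, 4, 1))   # 7:  depth, kernel 4, stride 1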


What happens when you try it? – mrry

Answer


Yes, you're almost correct.

One minor correction: tf.nn.conv3d_transpose expects its input in NDHWC or NCDHW format (yours appears to be NHWDC), and the filter shape is expected to be [depth, height, width, output_channels, in_channels]. This affects the order of the dimensions in filter and strides:

import tensorflow as tf

# Original format: NHWDC.
original = tf.placeholder(dtype=tf.float32, shape=[None, 16, 16, 4, 192])
print(original.shape)

# Convert to NDHWC format. Note this needs a transpose (an axis
# permutation), not a reshape: a reshape keeps the same memory order
# and would scramble the data across dimensions.
input = tf.transpose(original, perm=[0, 3, 1, 2, 4])
print(input.shape)

# input shape:  [batch, depth, height, width, in_channels].
# filter shape: [depth, height, width, output_channels, in_channels].
# output shape: [batch, depth, height, width, output_channels].
filter = tf.get_variable('filter', shape=[4, 2, 2, 192, 192], dtype=tf.float32)

# Depth must grow from 4 to 7, which the stride-1 depth dimension only
# allows under VALID padding: (4 - 1) * 1 + 4 = 7. With SAME padding the
# depth would stay at 4. (To actually run the op, the -1 batch entry
# typically needs a concrete size, e.g. tf.shape(input)[0]; for shape
# inference -1 is fine.)
conv = tf.nn.conv3d_transpose(input,
                              filter=filter,
                              output_shape=[-1, 7, 32, 32, 192],
                              strides=[1, 1, 2, 2, 1],
                              padding='VALID')
print(conv.shape)

# Permute back to the original NHWDC layout.
final = tf.transpose(conv, perm=[0, 2, 3, 1, 4])
print(final.shape)

This outputs:

(?, 16, 16, 4, 192) 
(?, 4, 16, 16, 192) 
(?, 7, 32, 32, 192) 
(?, 32, 32, 7, 192)
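
As an aside, on a newer TensorFlow with the Keras API, the same upsampling can be expressed with tf.keras.layers.Conv3DTranspose, which also confirms the size arithmetic. A minimal sketch, assuming TF 2.x eager execution:

import tensorflow as tf

# kernel_size and strides are in (depth, height, width) order for NDHWC input.
layer = tf.keras.layers.Conv3DTranspose(
    filters=192,
    kernel_size=(4, 2, 2),
    strides=(1, 2, 2),
    padding='valid')

x = tf.zeros([1, 4, 16, 16, 192])  # [batch, depth, height, width, channels]
print(layer(x).shape)              # (1, 7, 32, 32, 192)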