
I am working on a CNN-related project in TensorFlow. I read in images as input (20 of them) and process them with conv2d:

for filename in glob.glob('input_data/*.jpg'):
    input_images.append(cv2.imread(filename, 0))

image_size_input = len(input_images[0])

These images have shape (250, 250), since they are grayscale. But conv2d needs a 4D input tensor to feed. My input tensor looks like this:

x = tf.placeholder(tf.float32,shape=[None,image_size_output,image_size_output,1], name='x') 

So I cannot convert the 2D images above into the required (4D) shape. How do I handle the "None" field? I tried this:

input_images_padded = []
for image in input_images:
    temp = np.zeros((1, image_size_output, image_size_output, 1))
    for i in range(image_size_input):
        for j in range(image_size_input):
            temp[0, i, j, 0] = image[i, j]
    input_images_padded.append(temp)

I got the following error:

File "/opt/intel/intelpython3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 975, in _run 
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape()))) 

ValueError: Cannot feed value of shape (20, 1, 250, 250, 1) for Tensor 'x_11:0', which has shape '(?, 250, 250, 1)' 

Here is the whole code (just for reference):

import tensorflow as tf 
from PIL import Image 
import glob 
import cv2 
import os 
import numpy as np 
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' 

input_images = [] 
output_images = [] 

for filename in glob.glob('input_data/*.jpg'): 
    input_images.append(cv2.imread(filename,0)) 

for filename in glob.glob('output_data/*.jpg'): 
    output_images.append(cv2.imread(filename,0))  

image_size_input = len(input_images[0]) 
image_size_output = len(output_images[0]) 

''' 
now adding padding to the input images to convert from 125x125 to 250x250 sized images 
''' 
input_images_padded = []
for image in input_images:
    temp = np.zeros((1, image_size_output, image_size_output, 1))
    for i in range(image_size_input):
        for j in range(image_size_input):
            temp[0, i, j, 0] = image[i, j]
    input_images_padded.append(temp)

output_images_padded = []
for image in output_images:
    temp = np.zeros((1, image_size_output, image_size_output, 1))
    for i in range(image_size_input):
        for j in range(image_size_input):
            temp[0, i, j, 0] = image[i, j]
    output_images_padded.append(temp)



sess = tf.Session() 
''' 
Creating tensor for the input 
''' 
x = tf.placeholder(tf.float32,shape= [None,image_size_output,image_size_output,1], name='x') 
''' 
Creating tensor for the output 
''' 
y = tf.placeholder(tf.float32,shape= [None,image_size_output,image_size_output,1], name='y') 


def create_weights(shape): 
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05)) 

def create_biases(size): 
    return tf.Variable(tf.constant(0.05, shape=[size])) 

def create_convolutional_layer(input, bias_count, filter_height, filter_width, num_input_channels, num_out_channels, activation_function):

    weights = create_weights(shape=[filter_height, filter_width, num_input_channels, num_out_channels])

    biases = create_biases(bias_count)

    layer = tf.nn.conv2d(input=input,
                         filter=weights,
                         strides=[1, 1, 1, 1],
                         padding='SAME')

    layer += biases

    layer = tf.nn.max_pool(value=layer,
                           ksize=[1, 2, 2, 1],
                           strides=[1, 1, 1, 1],
                           padding='SAME')

    if activation_function == "relu":
        layer = tf.nn.relu(layer)

    return layer


''' 
Conv. Layer 1: Patch extraction 
64 filters of size 1 x 9 x 9 
Activation function: ReLU 
Output: 64 feature maps 
Parameters to optimize: 
    1 x 9 x 9 x 64 = 5184 weights and 64 biases 
''' 
layer1 = create_convolutional_layer(input=x, 
           bias_count=64, 
           filter_height=9, 
           filter_width=9, 
           num_input_channels=1, 
           num_out_channels=64, 
           activation_function="relu") 

''' 
Conv. Layer 2: Non-linear mapping 
32 filters of size 64 x 1 x 1 
Activation function: ReLU 
Output: 32 feature maps 
Parameters to optimize: 64 x 1 x 1 x 32 = 2048 weights and 32 biases 
''' 

layer2 = create_convolutional_layer(input=layer1, 
           bias_count=32, 
           filter_height=1, 
           filter_width=1, 
           num_input_channels=64, 
           num_out_channels=32, 
           activation_function="relu") 

'''Conv. Layer 3: Reconstruction 
1 filter of size 32 x 5 x 5 
Activation function: Identity 
Output: HR image 
Parameters to optimize: 32 x 5 x 5 x 1 = 800 weights and 1 bias''' 
layer3 = create_convolutional_layer(input=layer2, 
           bias_count=1, 
           filter_height=5, 
           filter_width=5, 
           num_input_channels=32, 
           num_out_channels=1, 
           activation_function="identity") 

'''print(layer1.get_shape().as_list()) 
print(layer2.get_shape().as_list()) 
print(layer3.get_shape().as_list())''' 

''' 
    applying gradient descent algorithm 
''' 
#loss_function 
loss = tf.reduce_sum(tf.square(layer3-y)) 
#optimiser 
optimizer = tf.train.GradientDescentOptimizer(0.01) 
train = optimizer.minimize(loss) 


init = tf.global_variables_initializer() 
sess.run(init) 
for i in range(len(input_images)): 
    sess.run(train,{x: input_images_padded, y:output_images_padded}) 


curr_loss = sess.run([loss], {x: input_images_padded, y: output_images_padded}) 
print("loss: %s"%(curr_loss)) 

Also, why do you have the 1 at the end of the shape 'shape=[None,image_size_output,image_size_output,1]'? Do you mean it is for "grayscale" images, i.e. only one channel? – kmario23


Yes, the 1 stands for the number of channels. Since the images are grayscale, I put 1 there. –

Answers


I think your image_padded is not correct. I have no experience writing tf code (though I have read some). But try this:

# imgs is your input image sequence
# padded is the array to feed
cnt = len(imgs)
H, W = imgs[0].shape[:2]
padded = np.zeros((cnt, H, W, 1))
for i in range(cnt):
    padded[i, :, :, 0] = imgs[i]
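
The same stacking can also be done without the explicit loop; this is only a minimal sketch, assuming input_images is the list of 2D grayscale arrays read in the question:

import numpy as np

# Stack the 20 grayscale (250, 250) arrays into one batch, then add a
# trailing channel axis so the result matches the (None, 250, 250, 1)
# placeholder: the final shape is (20, 250, 250, 1).
batch = np.stack(input_images, axis=0)
batch = batch[..., np.newaxis].astype(np.float32)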

One option is to leave out the shape when you create the placeholder; it will then accept a tensor of whatever shape you feed during sess.run().

From the documentation:

shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape.

Alternatively, you can specify 20, which is your batch size. Note that the first dimension of the tensor always corresponds to batch_size.
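
For concreteness, a short sketch of the two placeholder variants described above (the 250x250 grayscale sizes and the batch of 20 are taken from the question; the variable names are illustrative):

import tensorflow as tf

# Variant 1: no shape given, so any shape fed at sess.run() time is accepted.
x_any = tf.placeholder(tf.float32, name='x')

# Variant 2: fix the batch size (20) together with height, width and channels.
x_fixed = tf.placeholder(tf.float32, shape=[20, 250, 250, 1], name='x_fixed')

# In both cases the value that is fed must be a single 4D array of shape
# (20, 250, 250, 1), not a list of (1, 250, 250, 1) arrays (which is what
# produced the (20, 1, 250, 250, 1) error in the question).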


Thanks for the information. It works. –


I don't have many reputation points, so I can't upvote it right now. –


Accepting the answer should work, I guess. – kmario23