2017-06-12 98 views

I've been working on this all day and I can't get tf.train.shuffle_batch() to work properly!

I have a .png file of which I made >400 copies. [I will need to use images with different shapes, but for now I just want to get this running.]

Here is the code I use to read the images into tensors together with their labels:

import tensorflow as tf 
import os 
import numpy 
batch_Size = 20 
num_epochs = 100 
files = os.listdir("Test_PNG") 
files = ["Test_PNG/" + s for s in files] 
files = [os.path.abspath(s) for s in files] 


def read_my_png_files(filename_queue): 
    reader = tf.WholeFileReader() 
    imgName,imgTensor = reader.read(filename_queue) 
    img = tf.image.decode_png(imgTensor,channels=0) 
    # Processing should be added here 
    return img,imgName 

def inputPipeline(filenames, batch_Size, num_epochs= None): 
    filename_queue = tf.train.string_input_producer(filenames, num_epochs=num_epochs,shuffle =True) 
    img_file, label = read_my_png_files(filename_queue) 
    min_after_dequeue = 100 
    capacity = min_after_dequeue+3*batch_Size 
    img_batch,label_batch = tf.train.shuffle_batch([img_file,label],batch_size=batch_Size,enqueue_many=True, 
                allow_smaller_final_batch=True, capacity=capacity, 
                min_after_dequeue =min_after_dequeue, shapes=[w,h,d]) 
    return img_batch,label_batch 

images, Labels = inputPipeline(files,batch_Size,num_epochs) 

I know I should get batches of 20 image tensors with their labels. But when I run the code above, here is what I get:

--------------------------------------------------------------------------- 
ValueError        Traceback (most recent call last) 
<ipython-input-3-08857195e465> in <module>() 
    34  return img_batch,label_batch 
    35 
---> 36 images, Labels = inputPipeline(files,batch_Size,num_epochs) 

<ipython-input-3-08857195e465> in inputPipeline(filenames, batch_Size, num_epochs) 
    31  img_batch,label_batch = tf.train.shuffle_batch([img_file,label],batch_size=batch_Size,enqueue_many=True, 
    32              allow_smaller_final_batch=True, capacity=capacity, 
---> 33              min_after_dequeue =min_after_dequeue, shapes=[w,h,d]) 
    34  return img_batch,label_batch 
    35 

c:\users\engine\appdata\local\programs\python\python35\lib\site-packages\tensorflow\python\training\input.py in shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads, seed, enqueue_many, shapes, allow_smaller_final_batch, shared_name, name) 
    1212  allow_smaller_final_batch=allow_smaller_final_batch, 
    1213  shared_name=shared_name, 
-> 1214  name=name) 
    1215 
    1216 

c:\users\engine\appdata\local\programs\python\python35\lib\site-packages\tensorflow\python\training\input.py in _shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, keep_input, num_threads, seed, enqueue_many, shapes, allow_smaller_final_batch, shared_name, name) 
    767  queue = data_flow_ops.RandomShuffleQueue(
    768   capacity=capacity, min_after_dequeue=min_after_dequeue, seed=seed, 
--> 769   dtypes=types, shapes=shapes, shared_name=shared_name) 
    770  _enqueue(queue, tensor_list, num_threads, enqueue_many, keep_input) 
    771  full = (math_ops.cast(math_ops.maximum(0, queue.size() - min_after_dequeue), 

c:\users\engine\appdata\local\programs\python\python35\lib\site-packages\tensorflow\python\ops\data_flow_ops.py in __init__(self, capacity, min_after_dequeue, dtypes, shapes, names, seed, shared_name, name) 
    626   shared_name=shared_name, name=name) 
    627 
--> 628  super(RandomShuffleQueue, self).__init__(dtypes, shapes, names, queue_ref) 
    629 
    630 

c:\users\engine\appdata\local\programs\python\python35\lib\site-packages\tensorflow\python\ops\data_flow_ops.py in __init__(self, dtypes, shapes, names, queue_ref) 
    151  if shapes is not None: 
    152  if len(shapes) != len(dtypes): 
--> 153   raise ValueError("Queue shapes must have the same length as dtypes") 
    154  self._shapes = [tensor_shape.TensorShape(s) for s in shapes] 
    155  else: 

ValueError: Queue shapes must have the same length as dtypes 

I did declare the shapes used in the tf.train.shuffle_batch function, but I still get a shape error!

Any idea how to solve this?

Answer


Your problem comes either from:

  • the enqueue_many=True argument, or
  • the shapes argument, where the shape of label is missing.

So I would try enqueue_many=False and shapes=[[h, w, c], []].

Indeed, if you look at the shuffle_batch doc:

If enqueue_many is False, tensors is assumed to represent a single example. An input tensor with shape [x, y, z] will be output as a tensor with shape [batch_size, x, y, z].

If enqueue_many is True, tensors is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of tensors should have the same size in the first dimension. If an input tensor has shape [*, x, y, z], the output will have shape [batch_size, x, y, z].

But in your code, it seems you dequeue only a single file: img_file, label = read_my_png_files(filename_queue), and you pass it directly to the shuffle_batch function: img_batch, label_batch = tf.train.shuffle_batch([img_file, label], ...). So the * dimension is missing, while TensorFlow expects the first dimension of [img_file, label] to index a number of examples.
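The ValueError in the traceback comes from exactly this pairing rule: the queue expects one shape per tensor (one per dtype), while shapes=[w, h, d] is parsed as three separate scalar shapes for only two tensors. A minimal pure-Python sketch of that length check (the dimension values are hypothetical, and validate_queue_args only mimics the real queue's validation):

```python
# Hypothetical dimensions standing in for the question's w, h, d
h, w, c = 480, 640, 3

def validate_queue_args(tensors, shapes):
    """Mimics RandomShuffleQueue's check: one shape per tensor (dtype)."""
    if shapes is not None and len(shapes) != len(tensors):
        raise ValueError("Queue shapes must have the same length as dtypes")
    return True

# The question's call: two tensors, but shapes=[w, h, d] is read as
# THREE separate shapes, so the lengths do not match and it raises.
try:
    validate_queue_args(["img_file", "label"], [w, h, c])
except ValueError as e:
    print(e)  # Queue shapes must have the same length as dtypes

# The suggested fix: one shape per tensor, [h, w, c] for the image
# and [] (a scalar) for the label string.
validate_queue_args(["img_file", "label"], [[h, w, c], []])  # passes
```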

Also note that enqueue_many and dequeue_many are independent; i.e.:

  • *: the number of examples you enqueue into the queue is independent of
  • batch_size: the size of the new batches pulled out of the queue.
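That independence can be illustrated with a toy, pure-Python model of the enqueue semantics (shuffle_batch_sim is a hypothetical stand-in, not the real TensorFlow API):

```python
import random

def shuffle_batch_sim(inputs, batch_size, enqueue_many, seed=0):
    """Toy model of tf.train.shuffle_batch's enqueue semantics.

    With enqueue_many=True the input's first dimension indexes many
    examples; with enqueue_many=False the whole input is ONE example.
    """
    queue = []
    if enqueue_many:
        queue.extend(inputs)   # first dimension indexes examples
    else:
        queue.append(inputs)   # the entire input is a single example
    random.Random(seed).shuffle(queue)
    return [queue[i:i + batch_size] for i in range(0, len(queue), batch_size)]

# enqueue_many=True: 5 examples go in, batches of 2 come out.
batches = shuffle_batch_sim([1, 2, 3, 4, 5], batch_size=2, enqueue_many=True)
print([len(b) for b in batches])  # [2, 2, 1]

# enqueue_many=False: the same input counts as ONE example,
# so the only "batch" holds that single example.
print(shuffle_batch_sim([1, 2, 3, 4, 5], batch_size=2, enqueue_many=False))
```

The number of examples enqueued (5, or 1) never has to match batch_size; the queue just keeps handing out batch_size elements at a time.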

Thanks for your reply. The default value of enqueue_many is False; I had set it to True because I thought the batch would have the shape batchSize times the PNG shape. Either way, it didn't work! – Engine


Did you try it? You can use any PNG file you have! – Engine


Thank you very much for your help, it's working now, but I posted a question to understand the mechanics behind it. tf.train.shuffle_batch([img_file, label], ...) tells the batching function which queue it should use to get the files and their labels, and the batch_size argument tells it how many elements to dequeue, right? – Engine