I want to feed a CNN with the tensor `images`. When the placeholder `is_training` is True, I want this tensor to contain images from the training set (which have a FIXED size); otherwise, I want it to contain images from the test set (which do not have a fixed size). How can I use TensorFlow's `tf.cond()` with two different dataset iterators without iterating both?
This is necessary because during training I take random fixed-size crops from the training images, while at test time I want to perform dense evaluation and feed the whole image through the network (it is fully convolutional, so it will accept any size).
The approach that currently does not work is to create two different iterators and try to select the training/test input with `tf.cond()` at `session.run(images, {is_training: True/False})`.
The problem is that both iterators get evaluated. The training and test datasets also have different sizes, so I cannot simply iterate both of them to the end. Is there a way to make this work, or to rewrite this in a smarter way?
I have seen some answers to this problem, but they always use `tf.assign`, which takes a numpy array and assigns it to a tensor. Here I cannot use `tf.assign` because I already have tensors coming from the iterators.
My current code is the one below. It simply checks the shape of the tensor `images`:
train_filenames, train_labels = list_images(args.train_dir)
val_filenames, val_labels = list_images(args.val_dir)

graph = tf.Graph()
with graph.as_default():
    # Preprocessing (for both training and validation):
    def _parse_function(filename, label):
        image_string = tf.read_file(filename)
        image_decoded = tf.image.decode_jpeg(image_string, channels=3)
        image = tf.cast(image_decoded, tf.float32)
        return image, label

    # Preprocessing (for training)
    def training_preprocess(image, label):
        # Random flip and crop
        image = tf.image.random_flip_left_right(image)
        image = tf.random_crop(image, [args.crop, args.crop, 3])
        return image, label

    # Preprocessing (for validation)
    def val_preprocess(image, label):
        flipped_image = tf.image.flip_left_right(image)
        batch = tf.stack([image, flipped_image], axis=0)
        return batch, label

    # Training dataset
    train_filenames = tf.constant(train_filenames)
    train_labels = tf.constant(train_labels)
    train_dataset = tf.contrib.data.Dataset.from_tensor_slices((train_filenames, train_labels))
    train_dataset = train_dataset.map(_parse_function, num_threads=args.num_workers, output_buffer_size=args.batch_size)
    train_dataset = train_dataset.map(training_preprocess, num_threads=args.num_workers, output_buffer_size=args.batch_size)
    train_dataset = train_dataset.shuffle(buffer_size=10000)
    batched_train_dataset = train_dataset.batch(args.batch_size)

    # Validation dataset
    val_filenames = tf.constant(val_filenames)
    val_labels = tf.constant(val_labels)
    val_dataset = tf.contrib.data.Dataset.from_tensor_slices((val_filenames, val_labels))
    val_dataset = val_dataset.map(_parse_function, num_threads=1, output_buffer_size=1)
    val_dataset = val_dataset.map(val_preprocess, num_threads=1, output_buffer_size=1)

    train_iterator = tf.contrib.data.Iterator.from_structure(batched_train_dataset.output_types, batched_train_dataset.output_shapes)
    val_iterator = tf.contrib.data.Iterator.from_structure(val_dataset.output_types, val_dataset.output_shapes)

    train_images, train_labels = train_iterator.get_next()
    val_images, val_labels = val_iterator.get_next()

    train_init_op = train_iterator.make_initializer(batched_train_dataset)
    val_init_op = val_iterator.make_initializer(val_dataset)

    # Indicates whether we are in training or in test mode
    is_training = tf.placeholder(tf.bool)

    def f_true():
        with tf.control_dependencies([tf.identity(train_images)]):
            return tf.identity(train_images)

    def f_false():
        return val_images

    images = tf.cond(is_training, f_true, f_false)
    num_images = images.shape

with tf.Session(graph=graph) as sess:
    sess.run(train_init_op)
    # sess.run(val_init_op)
    img = sess.run(images, {is_training: True})
    print(img.shape)
The problem is that when I want to use only the training iterator, I comment out the line that initializes `val_init_op`, but then I get the following error:
FailedPreconditionError (see above for traceback): GetNext() failed because the iterator has not been initialized. Ensure that you have run the initializer operation for this iterator before getting the next element.
[[Node: IteratorGetNext_1 = IteratorGetNext[output_shapes=[[2,?,?,3], []], output_types=[DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/cpu:0"](Iterator_1)]]
If I do not comment out that line, everything works as expected: when `is_training` is True I get training images, and when `is_training` is False I get validation images. The problem is that both iterators then need to be initialized, and when I evaluate one of them, the other one is also advanced. Since, as I said, they have different sizes, this causes a problem.
I hope there is a way to solve it! Thanks in advance.
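One way to avoid advancing both iterators is to create the `get_next()` calls *inside* the `tf.cond()` branch functions: ops created within a branch function only execute when that branch is taken, so only the selected iterator advances. Below is a minimal sketch of this idea, with toy in-memory datasets standing in for the image pipelines; it uses `tf.compat.v1` so the graph-mode API also runs under TensorFlow 2.x.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Toy datasets of different lengths, standing in for the real pipelines.
train_dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
val_dataset = tf.data.Dataset.from_tensor_slices([10, 20])

train_iterator = tf.data.make_initializable_iterator(train_dataset)
val_iterator = tf.data.make_initializable_iterator(val_dataset)

is_training = tf.placeholder(tf.bool)

def f_true():
    # get_next() is created *inside* the branch, so this op only runs
    # when is_training is fed as True.
    return train_iterator.get_next()

def f_false():
    return val_iterator.get_next()

next_element = tf.cond(is_training, f_true, f_false)

with tf.Session() as sess:
    sess.run(train_iterator.initializer)
    sess.run(val_iterator.initializer)
    a = sess.run(next_element, {is_training: True})
    b = sess.run(next_element, {is_training: True})
    c = sess.run(next_element, {is_training: False})
    print(a, b, c)  # the first two runs did not advance the validation iterator
```

An alternative in the same API is a feedable iterator (`tf.data.Iterator.from_string_handle`), which selects between pipelines via a string handle fed at run time instead of a `tf.cond()`.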
It works! Thank you very much! Just one small (silly) correction to make it actually work: adding the return values to the two functions. – simo23
Is there a better way to feed these validation images of different sizes? Right now I need to feed each validation image separately (together with its flipped version), which as you can imagine is very slow. – simo23
Thanks, I corrected the code sample. There isn't much support for batching images of different sizes. One option with the `tf.data` API is to move all of the validation computation into a parallel `Dataset.map()` transformation. By setting the `num_parallel_calls` argument to `N`, you can process `N` images in parallel. – mrry
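The parallel-map suggestion above can be sketched as follows. The fixed-size 32×32 toy images and the simple flip-and-stack step are stand-ins for the real variable-size validation pipeline; again this uses `tf.compat.v1` graph mode.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

N = 4  # number of images processed in parallel by the map

def val_preprocess(image):
    # Per-image validation work; in the real pipeline this could include
    # any heavy per-image computation, not just the flip-and-stack.
    flipped = tf.image.flip_left_right(image)
    return tf.stack([image, flipped], axis=0)

# Toy fixed-size "validation images" so the sketch is self-contained.
images = tf.random.uniform([8, 32, 32, 3])
val_dataset = (tf.data.Dataset.from_tensor_slices(images)
               .map(val_preprocess, num_parallel_calls=N)
               .prefetch(1))

val_iterator = tf.data.make_initializable_iterator(val_dataset)
next_pair = val_iterator.get_next()

with tf.Session() as sess:
    sess.run(val_iterator.initializer)
    first = sess.run(next_pair)
    print(first.shape)  # each element is the image stacked with its flipped copy
```

With `num_parallel_calls=N`, up to `N` elements are preprocessed concurrently, and `prefetch(1)` overlaps preprocessing with whatever consumes the batch.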