I have the following code, which reads a batch of 10 images from files on local disk. Why is reading a single batch so slow?
The problem is that the code seems to run very slowly: it takes around 5-6 minutes to complete. The directory containing the images holds approximately 25,000 images.
Is the code correct, or am I doing something stupid?
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
import tensorflow as tf

image_width = 202
image_height = 180
num_channels = 3

filenames = tf.train.match_filenames_once("./train/Resized/*.jpg")

def read_image(filename_queue):
    image_reader = tf.WholeFileReader()
    key, image_filename = image_reader.read(filename_queue)
    image = tf.image.decode_jpeg(image_filename)
    image.set_shape((image_height, image_width, num_channels))
    return image

def input_pipeline(filenames, batch_size, num_epochs=None):
    filename_queue = tf.train.string_input_producer(
        filenames, num_epochs=num_epochs, shuffle=True)
    input_image = read_image(filename_queue)
    min_after_dequeue = 10000
    capacity = min_after_dequeue + 3 * batch_size
    image_batch = tf.train.shuffle_batch(
        [input_image], batch_size=batch_size, capacity=capacity,
        min_after_dequeue=min_after_dequeue)
    return image_batch

new_batch = input_pipeline(filenames, 10)

with tf.Session() as sess:
    # Required to get the filename matching to run.
    tf.global_variables_initializer().run()
    # Coordinate the loading of image files.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    b1 = sess.run(new_batch)
    # Finish off the filename queue coordinator.
    coord.request_stop()
    coord.join(threads)
To narrow down the problem, you could time each function call you suspect is the culprit, e.g. time `image_reader.read(..)` and `tf.image.decode_jpeg(..)`. – kaufmanu
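Following up on that suggestion, here is a minimal timing sketch. The `time_call` helper is hypothetical (my own name, not from any library), and the commented lines assume `input_image` is exposed from `input_pipeline` and that the queue runners above have already been started:

```python
import time

def time_call(fn, *args, repeats=10):
    """Return the average wall-clock seconds per call of fn(*args) over `repeats` runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

# With the pipeline above, one could compare, for example:
#   per_image = time_call(sess.run, input_image)  # one read + decode
#   per_batch = time_call(sess.run, new_batch)    # one shuffled batch of 10
```

Comparing the per-image time against the per-batch time should show whether the cost is in reading/decoding individual files or in filling the shuffle queue before the first batch is produced.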