2017-05-29

I enqueue items into a TensorFlow FIFOQueue in a specific order and expect to be able to dequeue them in the same order, but that is not the behavior I observe. Is the TensorFlow FIFOQueue not FIFO?

Running the self-contained code below demonstrates the approach and the behavior. This was run with TensorFlow 1.1 on Python 2.7 (but may also work on Python 3).


The expected output is

Batch 0, step 0 
[[ 0 1] 
[ 5 6] 
[10 11]] 
Batch 0, step 1 
[[ 2 3] 
[ 7 8] 
[12 13]] 
Batch 0, step 2 
[[ 4] 
[ 9] 
[14]] 
Batch 1, step 0 
[[15 16] 
[20 21] 
[25 26]] 
Batch 1, step 1 
[[17 18] 
[22 23] 
[27 28]] 
Batch 1, step 2 
[[19] 
[24] 
[29]] 
Batch 2, step 0 
[[30 31]] 
Batch 2, step 1 
[[32 33]] 
Batch 2, step 2 
[[34]] 
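The expected output above can be reproduced with ordinary NumPy slicing, with no queue involved. This sketch (my own illustration, not part of the original question) derives each batch/step block from the same data array:

```python
import math
import numpy

row_count, column_count = 7, 5
batch_size, step_size = 3, 2

data = numpy.arange(row_count * column_count).reshape((row_count, column_count))
batch_count = int(math.ceil(row_count / batch_size))   # 3 (last batch has 1 row)
step_count = int(math.ceil(column_count / step_size))  # 3 (last step has 1 column)

for batch_index in range(batch_count):
    # Rows for this batch; the final batch is smaller (allow_smaller_final_batch).
    batch = data[batch_index * batch_size:(batch_index + 1) * batch_size]
    for step_index in range(step_count):
        # Columns for this step; the final step is narrower.
        step = batch[:, step_index * step_size:(step_index + 1) * step_size]
        print("Batch %d, step %d" % (batch_index, step_index))
        print(step)
```

If the queue were strictly FIFO, the session would print exactly these blocks in exactly this order.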

The actual output is

Batch 0, step 0 
[[ 0 1] 
[ 5 6] 
[10 11]] 
Batch 0, step 1 
[[ 4] 
[ 9] 
[14]] 
Batch 0, step 2 
[[ 2 3] 
[ 7 8] 
[12 13]] 
Batch 1, step 0 
[[15 16] 
[20 21] 
[25 26]] 
Batch 1, step 1 
[[19] 
[24] 
[29]] 
Batch 1, step 2 
[[17 18] 
[22 23] 
[27 28]] 
Batch 2, step 0 
[[30 31]] 
Batch 2, step 1 
[[32 33]] 
Batch 2, step 2 
[[34]] 

Note that the steps of batches 0 and 1 come out in the wrong order. I have been unable to determine what order the steps come out in. It appears that the batches are always in order, but the steps within each batch come out in a "random" order: it looks deterministic, but it is not FIFO.
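The observation above can be checked mechanically. This small pure-Python sketch (my own illustration) records which expected step each printed block actually corresponds to, and verifies that batches stay in order while steps within a batch are merely permuted:

```python
# (batch, step) labels in expected FIFO order.
expected = [(b, s) for b in range(3) for s in range(3)]

# Labels of the blocks actually printed: batches 0 and 1 emit
# step 0, then step 2, then step 1; batch 2 happens to be in order.
actual = [(0, 0), (0, 2), (0, 1),
          (1, 0), (1, 2), (1, 1),
          (2, 0), (2, 1), (2, 2)]

# Batches still appear in order...
assert [b for b, _ in actual] == [b for b, _ in expected]
# ...and each batch contains the same three steps...
for batch in range(3):
    assert sorted(s for b, s in actual if b == batch) == [0, 1, 2]
# ...but the overall sequence is not FIFO.
assert actual != expected
```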

I have tried this both with and without the explicit dependency declarations used in the code above. I have tried setting the queue capacity to 1. I have tried passing enqueue_ops=enqueue_ops instead of using tf.group, but none of these changes helped, and the last one produced very strange output.

Perhaps tf.group somehow ignores the dependencies?

Answer


It looks like tensorflow.python.ops.control_flow_ops.with_dependencies does not work correctly, or I was using it incorrectly. If I switch to using tf.control_dependencies instead, I get the behavior I need:

from __future__ import division, print_function, unicode_literals
import math
import numpy
import tensorflow as tf
from tensorflow.python.training import queue_runner

row_count, column_count = 7, 5
batch_size, step_size = 3, 2

# Create some data to feed in (sequential, so ordering is easy to check).
data = numpy.arange(row_count * column_count).reshape(
    (row_count, column_count))
print(data)

batch_count = int(math.ceil(row_count / batch_size))
step_count = int(math.ceil(column_count / step_size))
print(batch_count, step_count)

slices = tf.train.slice_input_producer([data], num_epochs=1, shuffle=False)
batch = tf.train.batch(slices, batch_size, allow_smaller_final_batch=True)

queue = tf.FIFOQueue(32, dtypes=[batch.dtype])
enqueue_ops = []
dependency = None

for step_index in range(step_count):
    # Slice out the columns belonging to this step.
    step = tf.strided_slice(
        batch, [0, step_index * step_size],
        [tf.shape(batch)[0], (step_index + 1) * step_size])

    # Chain each enqueue behind the previous one so they run in order.
    if dependency is None:
        dependency = queue.enqueue(step)
    else:
        with tf.control_dependencies([dependency]):
            dependency = queue.enqueue(step)

    enqueue_ops.append(dependency)

queue_runner.add_queue_runner(queue_runner.QueueRunner(
    queue=queue, enqueue_ops=[tf.group(*enqueue_ops)]))
step = queue.dequeue()

supervisor = tf.train.Supervisor()

with supervisor.managed_session() as session:
    for batch_index in range(batch_count):
        for step_index in range(step_count):
            print("Batch %d, step %d" % (batch_index, step_index))
            print(session.run(step))

This answer was motivated by an answer to another SO question.
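For reference, the tf.strided_slice call in the loop above (with the default stride of 1) selects the same elements as ordinary basic slicing. This NumPy sketch (my own illustration, assuming one full batch of 3 rows) shows the step tensors each enqueue pushes:

```python
import numpy

batch = numpy.arange(15).reshape((3, 5))  # one full batch: 3 rows, 5 columns
step_size = 2

# tf.strided_slice(batch, [0, i * step_size], [rows, (i + 1) * step_size])
# selects the same elements as batch[:, i * step_size:(i + 1) * step_size].
steps = [batch[:, i * step_size:(i + 1) * step_size] for i in range(3)]

# The final step is narrower because 5 columns do not divide evenly by 2.
print([s.shape for s in steps])
```

Chaining the three enqueues with tf.control_dependencies guarantees these slices enter the queue in index order, which is what restores the FIFO behavior.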