We have read the TensorFlow paper on scheduling. As we understand it, the placer can run a simulated execution of the graph ahead of time to find the "right" device on which to place each operation. Can TensorFlow automatically schedule operations across all available GPUs?
We tested with tf.Session(config=tf.ConfigProto(log_device_placement=True)), without specifying any device, and found that all operations were placed on the first GPU.
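To reproduce, here is a minimal sketch of the test we ran (TensorFlow 1.x API; the toy ops stand in for our real model):

    import tensorflow as tf

    # Build a small graph without pinning any op to a device.
    a = tf.constant([1.0, 2.0, 3.0], name='a')
    b = tf.constant([4.0, 5.0, 6.0], name='b')
    c = tf.multiply(a, b, name='c')

    # log_device_placement=True makes the runtime log where each op lands.
    config = tf.ConfigProto(log_device_placement=True)
    with tf.Session(config=config) as sess:
        print(sess.run(c))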
The log looks like this:
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/epsilon: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/beta2: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/beta1: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Adam/learning_rate: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam_1: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam_1/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam_1/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_3/Adam/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam_1: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam_1/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam_1/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam/read: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_2/Adam/Assign: /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:818] Variable_1/Adam_1: /job:localhost/replica:0/task:0/gpu:0
The Variable ops are also placed on the GPU. I bet the placer is not smart enough yet, and that the best practice is for users to specify explicitly which operations should run on the CPU or on a particular GPU, especially when there are multiple GPUs. Is that right?
Great, thanks for the detailed explanation. We will use CUDA_VISIBLE_DEVICES and tf.device() to place them flexibly. – tobe
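For later readers, a minimal sketch of the two approaches mentioned above (TensorFlow 1.x API; the device indices and op names are illustrative):

    # Approach 1: limit which GPUs the process can see at all.
    # Launch as: CUDA_VISIBLE_DEVICES=0,1 python train.py
    import tensorflow as tf

    # Approach 2: pin individual ops/variables to explicit devices.
    with tf.device('/cpu:0'):
        # Keep shared variables on the CPU so every GPU tower can read them.
        w = tf.get_variable('w', shape=[1024, 1024])

    with tf.device('/gpu:0'):
        y0 = tf.matmul(w, w, name='tower_0')

    with tf.device('/gpu:1'):
        y1 = tf.matmul(w, w, name='tower_1')

    # allow_soft_placement lets TensorFlow fall back to another device
    # instead of failing when a pinned device is unavailable.
    config = tf.ConfigProto(log_device_placement=True,
                            allow_soft_placement=True)
    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())
        sess.run([y0, y1])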