
Selecting which GPU TensorFlow uses from multiple GPUs

I'm new to TensorFlow. Following the instructions on the TensorFlow website, I installed CUDA 7.5 and cuDNN v4, adjusted the TensorFlow configuration, and then tried to run the example below from the site:

python -m tensorflow.models.image.mnist.convolutional 

I'm fairly sure TensorFlow is using one of the GPUs but not the other, and I would like it to use the faster one. I'd like to know whether this example code defaults to the first GPU it finds. If so, how can I choose which GPU is used in my TensorFlow code?

When I run the example code, I get the following messages:

ldt-tesla:~$ python -m tensorflow.models.image.mnist.convolutional 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally 
Extracting data/train-images-idx3-ubyte.gz 
Extracting data/train-labels-idx1-ubyte.gz 
Extracting data/t10k-images-idx3-ubyte.gz 
Extracting data/t10k-labels-idx1-ubyte.gz 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: Tesla K20c 
major: 3 minor: 5 memoryClockRate (GHz) 0.7055 
pciBusID 0000:03:00.0 
Total memory: 4.63GiB 
Free memory: 4.57GiB 
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x2f27390 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 1 with properties: 
name: Quadro K2200 
major: 5 minor: 0 memoryClockRate (GHz) 1.124 
pciBusID 0000:02:00.0 
Total memory: 3.95GiB 
Free memory: 3.62GiB 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 0 to device ordinal 1 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 1 to device ordinal 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 1 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y N 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 1: N Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K20c, pci bus id: 0000:03:00.0) 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:793] Ignoring gpu device (device: 1, name: Quadro K2200, pci bus id: 0000:02:00.0) with Cuda multiprocessor count: 5. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT. 
Initialized! 

Answer

You can set the CUDA_VISIBLE_DEVICES environment variable to expose only the devices you want TensorFlow to see. Quoting the example from masking gpus:

CUDA_VISIBLE_DEVICES=1 Only device 1 will be seen 
CUDA_VISIBLE_DEVICES=0,1 Devices 0 and 1 will be visible 
CUDA_VISIBLE_DEVICES="0,1" Same as above, quotation marks are optional 
CUDA_VISIBLE_DEVICES=0,2,3 Devices 0, 2, 3 will be visible; device 1 is masked 
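
The same masking can also be done from inside a Python script instead of the shell, as long as the variable is set before TensorFlow initializes CUDA. A minimal sketch (the device index "1" is only an illustration; pick whichever GPU you actually want):

import os
# Must be set before TensorFlow first touches CUDA, so do it before the import.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"   # expose only device 1 to this process

import tensorflow as tf

# The single visible GPU is now addressed as /gpu:0 inside this process.
a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(a + b))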

Thanks! That seems to do the job and got rid of that error :). I also get a message saying "Ignoring gpu device ... with Cuda multiprocessor count: 5. The minimum required count is 8. You can adjust this requirement with ...". Doing the same thing you suggest, I can use an environment variable to change that count, but I don't know what it means. What do the count / minimum count refer to? Thanks! –
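
(Side note on that message: the "count" is the GPU's CUDA multiprocessor count; the Quadro K2200 in the log above reports 5, below TensorFlow's default threshold of 8, so it is skipped. A minimal sketch of lowering the threshold with the environment variable named in the message, assuming you really do want TensorFlow to consider that card:)

import os
# Lower the multiprocessor-count threshold from the default 8 to 5 so that a
# 5-multiprocessor GPU such as the Quadro K2200 is no longer ignored.
# Set it before importing TensorFlow.
os.environ["TF_MIN_GPU_MULTIPROCESSOR_COUNT"] = "5"

import tensorflow as tf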


Another option - https://stackoverflow.com/questions/40069883/how-to-set-specific-gpu-in-tensorflow/44848050#44848050 – Nandeesh
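
For completeness, the graph-level alternative to masking devices is to pin operations to a specific GPU with tf.device inside the TensorFlow code itself. A minimal sketch under the 2016-era API; "/gpu:1" is only an illustration, and allow_soft_placement lets ops without a GPU kernel fall back to another device:

import tensorflow as tf

# Pin these ops to the second GPU as seen by TensorFlow.
with tf.device("/gpu:1"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(b))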