0 votes
2 views
in Machine Learning by (19k points)

I have 3 GTX Titan GPUs in my machine. When I ran the CIFAR-10 example with cifar10_train.py, I got the following output:

I tensorflow/core/common_runtime/gpu/gpu_init.cc:60] cannot enable peer access from device ordinal 0 to device ordinal 1

I tensorflow/core/common_runtime/gpu/gpu_init.cc:60] cannot enable peer access from device ordinal 1 to device ordinal 0

I tensorflow/core/common_runtime/gpu/gpu_init.cc:127] DMA: 0 1 

I tensorflow/core/common_runtime/gpu/gpu_init.cc:137] 0:   Y N 

I tensorflow/core/common_runtime/gpu/gpu_init.cc:137] 1:   N Y 

I tensorflow/core/common_runtime/gpu/gpu_device.cc:694] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN, pci bus id: 0000:03:00.0)

I tensorflow/core/common_runtime/gpu/gpu_device.cc:694] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX TITAN, pci bus id: 0000:84:00.0)

It looks to me that TensorFlow is trying to initialize itself on two devices (gpu0 and gpu1).

My question is: why does TensorFlow do this on only two of the devices, and is there any way to prevent it? (I only want it to run as if there were a single GPU.)

1 Answer

0 votes
by (33.1k points)

Using a single GPU on a multi-GPU system

If you have more than one GPU in your system, the GPU with the lowest ID will be selected by default. 

Using a different GPU:

import tensorflow as tf

# Creates a graph and pins its ops to the third GPU.
with tf.device('/gpu:2'):
  a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
  b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
  c = tf.matmul(a, b)

# Creates a session with log_device_placement set to True so that the
# device chosen for each op is logged.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

# Runs the op.
print(sess.run(c))
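If the goal is to make the process behave as if only one GPU exists, the other GPUs can be hidden before TensorFlow initializes. Below is a minimal sketch (assuming TF 1.x as in the example above, and picking device 0 arbitrarily): CUDA_VISIBLE_DEVICES is the CUDA environment variable that limits which devices the process can see, and visible_device_list is a tf.GPUOptions field that does the same from within TensorFlow.

import os

# Option 1: hide every GPU except device 0 from the whole process.
# This must be set before TensorFlow (or any CUDA library) is initialized.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import tensorflow as tf

# Option 2: tell TensorFlow itself to use only device 0.
config = tf.ConfigProto(
    gpu_options=tf.GPUOptions(visible_device_list='0'),
    log_device_placement=True)

# Build a small graph; with either option it can only be placed on the
# single visible GPU.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)

sess = tf.Session(config=config)
print(sess.run(c))

The environment variable can also be set from the shell, e.g. CUDA_VISIBLE_DEVICES=0 python cifar10_train.py, which is often the simplest way to run an existing script on a single GPU.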


Hope this answer helps you!
