in Python by (47.6k points)

I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.

For small to moderate size models, the 12 GB of the Titan X is usually enough for 2-3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the Titan X, this can actually result in a speedup compared with running one training process after the other. Even in cases where concurrent access to the GPU slows down the individual training time, it is still nice to have the flexibility of several users running things on the GPUs at once.

The problem with TensorFlow is that, by default, it allocates the full amount of available memory on the GPU when it is launched. Even for a small 2-layer neural network, I see that the 12 GB of the Titan X is used up.

Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that that amount is enough for a given model?

1 Answer

by (106k points)

To prevent TensorFlow from allocating all of the GPU's memory at startup, you can enable on-demand memory growth (TensorFlow 1.x API):

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory as needed, not all up front
sess = tf.Session(config=config)
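
If you know a fixed budget is enough for your model (say, roughly 4 GB of the Titan X's 12 GB), you can instead cap the allocation with per_process_gpu_options_memory_fraction's counterpart, per_process_gpu_memory_fraction. A minimal sketch, still on the TensorFlow 1.x API; the 0.33 fraction is just an illustrative value:

import tensorflow as tf

config = tf.ConfigProto()
# Reserve at most ~1/3 of the GPU's memory for this process
# (about 4 GB on a 12 GB Titan X)
config.gpu_options.per_process_gpu_memory_fraction = 0.33
sess = tf.Session(config=config)

Note that the fraction is reserved up front and acts as a hard upper bound for the process, so pick it according to how many users share the card; allow_growth, by contrast, grabs memory only as the model needs it.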

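If you are on TensorFlow 2.x, ConfigProto and Session no longer exist; the equivalent controls live under tf.config. A sketch, assuming a recent TF 2.x install (the 4096 MB limit mirrors the 4 GB example above):

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Hard-cap this process at ~4 GB on the first GPU; this must run
    # before any op touches the device
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])

Alternatively, tf.config.experimental.set_memory_growth(gpus[0], True) reproduces the allow_growth behaviour shown above.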