in AWS by (19.1k points)

After installing TensorFlow and its dependencies on a g2.2xlarge EC2 instance I tried to run an MNIST example from the getting started page:

python tensorflow/models/image/mnist/convolutional.py

But I get the following warning:

I tensorflow/core/common_runtime/gpu/gpu_device.cc:611] Ignoring gpu device
(device: 0, name: GRID K520, pci bus id: 0000:00:03.0) with Cuda compute
capability 3.0. The minimum required Cuda capability is 3.5.

Is this a hard requirement? Any chance I could comment that check out in a fork of TensorFlow? It would be super nice to be able to train models in AWS.

1 Answer

by (44.4k points)

There is a GitHub issue tracking this exact problem, which you can follow here:

https://github.com/tensorflow/tensorflow/issues/25

There is also a fix, but it requires building TensorFlow from source. Run the configure script with the unofficial-settings flag enabled:

$ TF_UNOFFICIAL_SETTING=1 ./configure

# Same as the official settings above

WARNING: You are configuring unofficial settings in TensorFlow. Because some
external libraries are not backward compatible, these settings are largely
untested and unsupported.

Please specify a list of comma-separated Cuda compute capabilities you want to
build with. You can find the compute capability of your device at:
https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases
your build time and binary size. [Default is: "3.5,5.2"]: 3.0

Setting up Cuda include
Setting up Cuda lib64
Setting up Cuda bin
Setting up Cuda nvvm
Configuration finished

Entering 3.0 at the [Default is: "3.5,5.2"] prompt is what resolves your issue: it tells the build to target compute capability 3.0, which is what the GRID K520 on a g2.2xlarge supports.
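After configuring, you still need to build and install the package. A rough sketch of the remaining steps for a source build of that era follows; the exact Bazel target and wheel filename depend on your TensorFlow version and platform, so treat these as assumptions and check the build instructions in your checkout:

```shell
# Build the GPU-enabled pip package (target name assumed from
# TensorFlow's source-build instructions; verify against your checkout)
bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

# Generate the wheel file into /tmp/tensorflow_pkg
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

# Install the freshly built wheel (exact filename varies by version/platform)
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```

Once the wheel built with compute capability 3.0 is installed, TensorFlow should no longer ignore the GRID K520 device.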
