0 votes

I have created a DLAMI (Deep Learning AMI (Amazon Linux) Version 8.0 - ami-9109beee, on a g2.8xlarge instance) and installed Jupyter Notebook to build a simple Keras LSTM. When I try to turn my Keras model into a multi-GPU model using the multi_gpu_model function, the following error is logged:

Ignoring visible gpu device (device: 0, name: GRID K520, pci bus id: 0000:00:03.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.

I've tried reinstalling tensorflow-gpu, to no avail. Is there any way to get a TensorFlow build that works with this GPU on this AMI?
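
For reference, the setup is roughly the following (layer sizes and input shape are placeholders, not my real values); the warning is logged at the multi_gpu_model call:

    from keras.models import Sequential
    from keras.layers import LSTM, Dense
    from keras.utils import multi_gpu_model

    # Simple single-device LSTM model (shapes are placeholders)
    model = Sequential()
    model.add(LSTM(32, input_shape=(10, 8)))
    model.add(Dense(1))

    # g2.8xlarge exposes 4 GRID K520 GPUs; wrapping the model here
    # is what triggers the compute-capability warning
    parallel_model = multi_gpu_model(model, gpus=4)
    parallel_model.compile(loss='mse', optimizer='adam')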

That looks more like the GPU or CUDA version is too old. – e0k
Your GPU is correctly identified as the rather old compute capability level 3.0. This level is not enough for the software you are using. – Klaus D.

2 Answers

2 votes

This was resolved by uninstalling, then reinstalling tensorflow-gpu through the conda environment provided by the AMI.
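
For anyone who hits the same issue, the steps were roughly these (the environment name below is the one my DLAMI shipped with; run conda env list to see yours):

    # Activate the TensorFlow conda environment provided by the DLAMI
    # (tensorflow_p36 is the name on my AMI; yours may differ)
    source activate tensorflow_p36

    # Remove and reinstall the GPU build of TensorFlow inside that environment
    pip uninstall -y tensorflow-gpu
    pip install tensorflow-gpu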

1 vote

The TensorFlow binaries that you install through pip (or similar) are built to support only CUDA compute capability 3.5 and higher, even though TensorFlow itself supports compute capability 3.0.

Unfortunately, the only way to obtain a TensorFlow installation that supports compute capability 3.0 is to build it from source.
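
If you go that route, the build roughly follows the official from-source instructions; the important part is answering 3.0 when ./configure asks for CUDA compute capabilities (or presetting it via the environment variable below). Paths and the output directory here are illustrative:

    # Clone and configure TensorFlow with compute capability 3.0
    git clone https://github.com/tensorflow/tensorflow.git
    cd tensorflow
    export TF_CUDA_COMPUTE_CAPABILITIES=3.0
    ./configure

    # Build the pip package with CUDA support, then install the resulting wheel
    bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    pip install /tmp/tensorflow_pkg/tensorflow-*.whl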