
I am using TensorFlow-GPU 1.12.0 with CUDA 9, cuDNN 7.0.5, Bazel 0.15, and Python 3.5.2, as recommended for compatibility in Which TensorFlow and CUDA version combinations are compatible?. The machine has Nvidia driver 384.130.

However, TensorFlow does not detect my machine's GPU when I run the following command:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

The output is the following:

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 13408836213255819255
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 17981738971591465658
physical_device_desc: "device: XLA_CPU device"
]
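
As an additional check (a sketch using the TensorFlow 1.x test API, not part of my original run), the build's CUDA support and runtime GPU visibility can be confirmed with:

import tensorflow as tf

# True if this TensorFlow build was compiled with CUDA support
print(tf.test.is_built_with_cuda())

# True only if a CUDA-capable GPU is actually usable at runtime
print(tf.test.is_gpu_available())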

I have tried the solutions from the following links:

1- Tensorflow doesn't seem to see my gpu

2- list_local_device tensorflow does not detect gpu


1 Answer


This solution worked in my case.

The problem was with the cuDNN version in the configuration above. I updated cuDNN to 7.1.4 for CUDA 9, and my code started using the GPU.
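
After updating cuDNN, a quick way to verify the fix (a sketch reusing the same device-listing call from the question) is to confirm that a GPU device now appears:

from tensorflow.python.client import device_lib

# A "/device:GPU:0" entry should now be listed alongside the CPU devices
gpu_devices = [d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU']
print(gpu_devices)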