I have the tensorflow and tensorflow-gpu 1.8.0 conda (not pip) packages installed in a conda environment on Ubuntu 16.04.4:
conda list t.*flow
# packages in environment at /home/lebedov/miniconda3/envs/TF:
#
# Name                 Version    Build         Channel
_tflow_180_select      1.0        gpu
tensorflow             1.8.0      py36_1        conda-forge
tensorflow-gpu         1.8.0      h7b35bdc_0
I have CUDA 9.0 installed on my system, which has a Quadro M2200 GPU. I can see the GPU listed in the output of nvidia-smi and can also access the GPU using other deep learning frameworks such as PyTorch 0.4.0, but for some reason TensorFlow doesn't seem to see it:
Python 3.6.5 | packaged by conda-forge | (default, Apr 6 2018, 13:39:56)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import tensorflow as tf
...: sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
2018-07-11 23:21:11.827064: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Device mapping: no known devices.
2018-07-11 23:21:11.827942: I tensorflow/core/common_runtime/direct_session.cc:284] Device mapping:
If I downgrade to tensorflow-gpu 1.7.0, however, TensorFlow does see the GPU. Any thoughts as to why the GPU isn't being detected by TensorFlow 1.8.0?
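One TensorFlow-independent sanity check worth including (just a sketch, not a definitive diagnostic) is whether the dynamic loader can even locate the CUDA driver library, since the GPU build of TensorFlow loads libcuda.so.1 at runtime:

```python
import ctypes.util

def cuda_driver_visible():
    """Check whether the dynamic loader can locate the CUDA driver library.

    TensorFlow's GPU build loads libcuda.so.1 at runtime; if the loader
    cannot find it, TensorFlow typically reports no GPU devices and falls
    back to CPU, much like the "no known devices" output above.
    """
    # find_library returns the library name (e.g. 'libcuda.so.1') or None.
    return ctypes.util.find_library("cuda") is not None

print("libcuda visible to the loader:", cuda_driver_visible())
```

If this prints False, the problem is the loader environment (e.g. LD_LIBRARY_PATH) rather than the TensorFlow package itself; in my case it does not explain the 1.7.0 vs. 1.8.0 difference.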