3 votes

The docs say that, when using the TensorFlow backend, Keras automatically runs on a GPU if one is detected. I'm logged into a remote GPU server, but when I run a Keras program it only uses the CPUs for some reason. How can I force my Keras program to run on the GPU to speed things up?

If it helps, this is what the model looks like:

from keras.models import Sequential
from keras.layers import SimpleRNN, Dense

model = Sequential()
model.add(SimpleRNN(out_dim, input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=False))
model.add(Dense(num_classes, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
hist = model.fit(X_train, dummy_y, validation_data=(X_test, dummy_y_test), nb_epoch=epochs, batch_size=b_size)

and here's the output of which python and proof that Keras is using the TensorFlow backend:

user@GPU6:~$ which python
/mnt/data/user/pkgs/anaconda2/bin/python
user@GPU6:~$ python
Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul  2 2016, 17:42:40) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import keras
Using TensorFlow backend.
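
In case it's useful, here's a quick way to list the devices TensorFlow itself registers (a sketch assuming TensorFlow 1.x, where device_lib is part of TensorFlow's Python client):

# lists every device TensorFlow has registered; a GPU build should
# show '/gpu:0' (and so on) entries alongside the CPU
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())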

and here's the output of nvidia-smi. I have several processes like the one above currently running, but they're only using the CPU:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX TIT...  Off  | 0000:03:00.0     Off |                  N/A |
| 26%   27C    P8    13W / 250W |      9MiB /  6082MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX TIT...  Off  | 0000:83:00.0     Off |                  N/A |
| 26%   31C    P8    13W / 250W |      9MiB /  6082MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX TIT...  Off  | 0000:84:00.0     Off |                  N/A |
| 26%   31C    P8    14W / 250W |      9MiB /  6082MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      2408    G   Xorg                                             9MiB |
|    1      2408    G   Xorg                                             9MiB |
|    2      2408    G   Xorg                                             9MiB |
+-----------------------------------------------------------------------------+

None of my processes are running on the GPU. How can I fix this?


1 Answer

3 votes

You may have the CPU-only version of TensorFlow installed.

Since it seems you are using Anaconda with Python 2.7, follow these steps to install the GPU version of TensorFlow in a conda environment:

# create and activate a new environment (Python 2.7, to match your setup)
conda create -n tensorflow python=2.7
source activate tensorflow

# install the GPU-enabled TensorFlow 1.0.1 wheel built for Python 2.7
pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.1-cp27-none-linux_x86_64.whl
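
After reinstalling, a quick sanity check (a sketch, assuming TensorFlow 1.x) is to open a session with device placement logging; with the GPU build active you should see ops assigned to gpu:0, and the python process should appear in nvidia-smi:

import tensorflow as tf

# log_device_placement makes TensorFlow print which device each op
# runs on; with the GPU build you should see gpu:0 in the log output
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(tf.constant('GPU build check')))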

See this GitHub issue for more details.
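
Once the GPU build is installed, Keras should pick up a GPU on its own. If you want to pin the model to a specific device (your nvidia-smi output shows three GPUs), one common pattern with the TensorFlow backend is to build the model inside a tf.device scope. A minimal sketch reusing the variable names from your question:

import tensorflow as tf
from keras.models import Sequential
from keras.layers import SimpleRNN, Dense

# build the model under an explicit device scope; '/gpu:0' is just an
# example, any of gpu:0 through gpu:2 from your nvidia-smi output works
with tf.device('/gpu:0'):
    model = Sequential()
    model.add(SimpleRNN(out_dim, input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=False))
    model.add(Dense(num_classes, activation='sigmoid'))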