
I only have one GPU (GTX 1070, 8GB VRAM) and I would like to use tensorflow-gpu and another piece of CUDA code simultaneously, on the same GPU. However, running the CUDA code and tensorflow-gpu at the same time slows tensorflow-gpu down by roughly a factor of two. Is there any way to speed things up when tensorflow-gpu and CUDA code are used together?

The one word answer is no. – talonmies

1 Answer


A slightly longer version of @talonmies' comment:

GPUs are awesome, but they still have finite resources. Any competently built application that uses the GPU will do its best to saturate the device, leaving few resources for other applications. In fact, one of the goals and challenges of optimizing GPU code, whether it's a shader or a CUDA or OpenCL kernel, is making sure that all compute units (CUs) are used as efficiently as possible.

Assuming that TF is already doing that: when you run another GPU-heavy application alongside it, you're sharing a resource that is already running full tilt. So, things slow down.
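One practical aside, not raised above: by default tensorflow-gpu reserves nearly all of the card's memory for itself, which makes coexisting with another CUDA process on an 8 GB card harder still. Capping that reservation does not fix the compute contention described here, but it is a common first step when two processes must share one GPU. A minimal sketch, assuming a TF 1.x-style session (the 0.6 split is just an illustrative value):

```python
import tensorflow as tf

# Cap TF's up-front VRAM grab so the other CUDA process still has memory to work with.
# This helps coexistence only; it does not give either workload more compute.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.6  # leave ~40% of VRAM for the CUDA app
config.gpu_options.allow_growth = True                    # allocate lazily instead of all at once

with tf.Session(config=config) as sess:
    # build and run your graph as usual
    pass
```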

Some options are:

  1. Get a second, or faster, GPU.

  2. Optimize your CUDA kernels to reduce their resource requirements, and simplify your TF model. While this is always worth keeping in mind when developing for GPGPU, it's unlikely to solve your current problem.

  3. Don't run these things at the same time. Running them back to back may turn out to be faster overall than the quasi time-slicing situation you currently have (a minimal sketch follows below).
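For option 3, here is a minimal sketch of running the two workloads sequentially instead of concurrently; the binary and script names are placeholders, not taken from the question:

```python
import subprocess

# Run the standalone CUDA application first, then the TensorFlow job,
# so each one gets the whole GPU to itself.
subprocess.run(["./my_cuda_app"], check=True)                  # hypothetical CUDA binary
subprocess.run(["python", "train_with_tf.py"], check=True)     # hypothetical TF training script
```

Each job then has the device to itself while it runs, which is usually faster end to end than interleaving two workloads that each try to saturate the same GPU.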