17
votes

A piece of code that takes well over 1 minute on the command line completes in a matter of seconds in NVIDIA Visual Profiler (running the same .exe). So the natural question is: why? Is there something wrong with the command line run, or does Visual Profiler do something different and not really execute everything the way the command line does?

I'm using CUBLAS, Thrust and cuRAND.

Incidentally, there has been a noticeable slowdown in compiled code on my machine very recently, even in old code that previously ran quickly, which is why I'm getting suspicious.

Update:

  • I have checked that the calculated output on the command line and in Visual Profiler is identical - i.e. all required code has been run in both cases (a minimal error-checking sketch follows this list).
  • GPU-shark indicated that my performance state was unchanged at P0 when I switched from the command line to Visual Profiler.
  • However, GPU usage was reported at 0.0% when run with Visual Profiler, but went as high as 98% when run from the command line.
  • Moreover, far less memory is used with Visual Profiler. When run from the command line, Task Manager indicates usage of 650-700MB of memory (spiking at the first cudaFree(0) call). In Visual Profiler that figure goes down to ~100MB.
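For reference, here is a minimal sketch of the kind of error checking that can confirm every CUDA call and kernel launch actually completes - the CUDA_CHECK macro is purely illustrative and not code from my project:

    // Illustrative sketch only - not code from the project described above.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Abort with the CUDA error string if a runtime call or kernel launch failed.
    #define CUDA_CHECK(call)                                                \
        do {                                                                \
            cudaError_t err_ = (call);                                      \
            if (err_ != cudaSuccess) {                                      \
                std::fprintf(stderr, "CUDA error '%s' at %s:%d\n",          \
                             cudaGetErrorString(err_), __FILE__, __LINE__); \
                std::exit(EXIT_FAILURE);                                    \
            }                                                               \
        } while (0)

    int main() {
        CUDA_CHECK(cudaFree(0));             // forces context creation (the memory spike mentioned above)
        // ... kernel launches and library calls go here ...
        CUDA_CHECK(cudaGetLastError());      // catches launch-configuration errors
        CUDA_CHECK(cudaDeviceSynchronize()); // catches asynchronous execution errors
        return 0;
    }
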
3
Well, the piece of code in question is actually a project spanning 15 interdependent files, so it's probably beyond the scope of this question. I was simply wondering if anyone else had encountered the Visual Profiler phenomenon and had an explanation for it. – mchen
The CUDA profilers (Nsight VSE, Visual Profiler, nvprof, and the CUDA command line profiler) put the GPU in the highest P-state to make sure the results are consistent. This should not make a difference of more than a few percent. The more likely cause is that your application is failing when you run it under the profiler. Can you confirm that your application runs to completion and that no errors occur? – Greg Smith
And what is a P-state? – mchen
I mean that not all the code in your application is running (or running the same volume of instructions) when you are profiling it. – talonmies
If the code runs 10x faster in the tool while GPU usage is so much lower at the same time, the only decent idea I have is that some emulation mode is used when you run under the tool. For this particular workload, running on the CPU may well yield better performance - which is not rare, given that much of the caching happens automatically on the CPU side, while it requires explicit thought and work in environments like CUDA and OpenCL. I'd recommend you look through the various build options and tool settings to see if there is anything about an emulation mode. – Alexey Alexandrov

3 Answers

6
votes

This is an old question, but I've just finished chasing the same issue (though the cause may not be the same).

Namely: my app achieved between 900 and 1100 frames per second (with synchronous launches) when running under NVVP, but only around 100-120 when running outside the profiler.

The cause appears to be a status message I was printing to the console via cout. I had intended for this to only happen about once every 100-200 frames. Instead, it was printing the status message for every frame, and the console IO became the bottleneck.

Once I printed the status message only every 100 frames (the optimal interval will depend on your application), the frame rate jumped back up to match what I was seeing in NVVP. Of course, the printing could also be handled by a separate CPU thread if that sort of overhead is unacceptable in your circumstances.
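The throttling amounts to nothing more than the following (a minimal sketch - the names are illustrative, not taken from my actual code):

    #include <iostream>

    // Illustrative sketch: touch the console only once every kStatusInterval
    // frames so the per-frame cost of std::cout stays negligible.
    constexpr long long kStatusInterval = 100;   // tune for your application

    void onFrameCompleted(long long frameCount, double framesPerSecond) {
        if (frameCount % kStatusInterval == 0) {
            std::cout << "frame " << frameCount
                      << " (" << framesPerSecond << " fps)\n";   // '\n' avoids std::endl's extra flush
        }
    }
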

NVVP has to redirect stdout to its own internal buffer in order to capture the application's output (which it shows in its console tab). That buffering mechanism evidently has far less overhead than letting the operating system handle each write directly - it looks as though NVVP accumulates the output and flushes it to its console tab from a separate thread, or only once some threshold is reached.
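If per-frame status output is genuinely needed, the same idea can be applied inside the application itself. Here is a sketch of the separate-CPU-thread approach mentioned above (illustrative only, and certainly not NVVP's actual mechanism) - the frame loop enqueues messages cheaply while a background thread performs the slow console writes:

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    // Illustrative sketch: producers enqueue messages; one background thread
    // does the (slow) console writes.
    class AsyncConsole {
    public:
        AsyncConsole() : worker_([this] { run(); }) {}

        ~AsyncConsole() {
            {
                std::lock_guard<std::mutex> lock(m_);
                done_ = true;
            }
            cv_.notify_one();
            worker_.join();
        }

        void log(std::string msg) {
            {
                std::lock_guard<std::mutex> lock(m_);
                queue_.push(std::move(msg));
            }
            cv_.notify_one();
        }

    private:
        void run() {
            std::unique_lock<std::mutex> lock(m_);
            while (!done_ || !queue_.empty()) {
                cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
                while (!queue_.empty()) {
                    std::string msg = std::move(queue_.front());
                    queue_.pop();
                    lock.unlock();               // don't hold the lock during the slow write
                    std::cout << msg << '\n';
                    lock.lock();
                }
            }
        }

        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::string> queue_;
        bool done_ = false;
        std::thread worker_;                     // declared last so it starts after the other members
    };

    int main() {
        AsyncConsole console;
        for (long long frame = 0; frame < 1000; ++frame) {
            console.log("frame " + std::to_string(frame));
            // ... per-frame GPU work would go here ...
        }
    }   // the destructor drains any remaining messages and joins the worker
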

So, my advice would be to disable any console IO, and see if or how that affects things.

(It didn't help that VS2012 refused to profile my CUDA app. It would have been nice to see that 80% of the execution time was spent on console IO.)

Hope this helps!

0
votes

This should not happen. I've never seen anything like it; probably something in your setup.

0
votes

It could be that some JIT-compile step is skipped by the profiler. This could explain the difference in memory usage. Try creating a fat binary?
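For example (a sketch only - the file name is made up and compute capability 3.0 is just a placeholder for whatever your GPU actually is), embedding both SASS for the target architecture and PTX for forward compatibility avoids the driver's JIT step at application start:

    nvcc -O2 myapp.cu -o myapp \
         -gencode arch=compute_30,code=sm_30 \
         -gencode arch=compute_30,code=compute_30

With matching SASS embedded, the runtime can load the kernels directly instead of JIT-compiling the PTX on first use.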