2 votes

I am currently profiling my code, which is C99 compliant, with gprof. My program is recursive. I am compiling with gcc, using the -pg flag, some warnings enabled, and no optimisation flags. Then I run the program, followed by the command:

gprof ./myProgram > outputFile.txt
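For reference, the full build-and-profile sequence looks something like the following. Only -pg, C99 mode, warnings, and the absence of optimisation flags are stated above; the exact warning flags and file name are assumptions:

gcc -std=c99 -Wall -pg -o myProgram myProgram.c   # -pg instruments the binary for gprof
./myProgram                                       # writes gmon.out to the current directory
gprof ./myProgram gmon.out > outputFile.txt       # gmon.out is the default, named here for clarity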

I am very confused by the output, though. It says I am spending a lot of time in a function that I know for a fact I do not spend much time in. The function in question contains a loop and performs a tiny task a set number of times. If I increase the number of repetitions from, say, 5 to 100, I see no visible change in my program's total execution time. Yet gprof tells me I have spent almost 50% of the execution time in this function.

I have no idea what the issue is here. Has anyone encountered something like this before? Can I resolve it myself? I am using a CentOS distribution of Linux. I tried to install OProfile as an alternative profiler, but I don't have write access to /usr/local, so the make install fails; I am therefore not sure whether this issue is specific to gprof.

Here are the relevant parts of the output file. The weird output is for the function nl_jacobi_to_converge:

From the flat profile I have

  %   cumulative   self              self     total           
 time   seconds   seconds    calls   s/call   s/call  name    
 49.02      7.50     7.50     1562     0.00     0.01  calculate_resid_numeric_jac_diags_tc41
 16.41     10.01     2.51 323878716     0.00     0.00  c
 14.97     12.30     2.29 323878716     0.00     0.00  b
  6.01     13.22     0.92       89     0.01     0.02  calculate_resid_tc41
  5.69     14.09     0.87 10967082     0.00     0.00  rhs_function
  1.11     14.26     0.17     1562     0.00     0.00  single_nl_jacobi
  ...
  0.00     15.30     0.00       11     0.00     0.76  nl_jacobi_to_converge

And from the granularity I have

index % time    self  children    called     name
                0.00    8.32      11/11          nl_multigrid [3]
[5]     54.4    0.00    8.32      11         nl_jacobi_to_converge [5]
                5.28    2.92    1100/1562        calculate_resid_numeric_jac_diags_tc41 [4]
                0.12    0.00    1100/1562        single_nl_jacobi [17]
It would help if you pasted the essential part of your output file. – Chiel
@Chiel I have put in some extra information from the file. The function nl_jacobi_to_converge, although it sounds like it would take a long time, operates on a very small dataset and should have no effect on the overall run time. – Keeran Brabazon
As Wikipedia says, "Mutual recursion and non-trivial cycles are not resolvable by the gprof approach (context-insensitive call graph)". So you can try gperftools' CPU profiler, which records the full call chain (context-sensitive). – osgx
@osgx: Re: gperftools CPU profiler: sigh - so near yet so far. 1) If only it sampled on wall-clock, not CPU time, it could detect needless blocked time. 2) If only it counted only one "hit" per function per stack sample, it would handle recursion properly. 3) If only it let users actually see random samples, they could see line-of-code and other issues not exposed by function summaries. 4) If users could more easily throttle the sampling, the sample set would be relevant to the time experienced by the users. – Mike Dunlavey

2 Answers

3 votes

You are one of numerous people trying to figure out what gprof is telling them, to four decimal places. Instead, I use random pausing and examination of the stack.

In the first place, gprof is a "CPU profiler": during IO, mutex waits, paging, or any other blocking system call, it is shut off and does not count that time. You say you're doing nothing of the sort, but it could be happening deep inside some library function; if it is, gprof masks it. On the other hand, a single stack sample will show what the program is waiting for, with probability equal to the fraction of time it spends waiting.
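A minimal sketch of taking such a sample by hand with gdb (the process name myProgram is from the question; how many samples you take is up to you):

gdb -p $(pidof myProgram)   # attach to the running program; it stops where it is
(gdb) bt                    # one stack sample: shows exactly where it is, or what it is blocked on
(gdb) continue              # let it run; interrupt with Ctrl-C and repeat bt a handful of times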

Second, as @keshlam said, it's important to understand "self time". If it is high, the program counter was found in that routine for a good percentage of the CPU time. However, if it is low, that does not mean the function isn't guilty: the function could be responsible for lots of time, but spend it by calling subfunctions. To see that, you need gprof's "total" column expressed as a percentage of total time, not as an absolute time per call, which is what it gives you. If you take a stack sample, any routine will appear on it with probability equal to the fraction of time it is responsible for. What's more, you will know exactly why that time is being spent, because the sample shows you the precise line numbers where the calls occur.

ADDED: gprof attempts to handle recursive functions, but as its authors point out, it does not succeed. However, stack samples have no problem with recursion. If a stack sample is taken during a recursive function call, the function appears more than once on the stack, possibly many times. However, it is still the case that the inclusive time cost of a function, or of any line of code that calls a function, is simply the fraction of time it is on the stack. To see this, suppose samples are taken at a constant frequency, for a total of M samples, and a particular function or line of code appears on fraction F of them. If that function or line of code could be made to take no time, such as by deleting it, branching around it, or passing it off to an infinitely fast processor, then it would have no exposure to being sampled. Then the M*F samples on which it had appeared would disappear, shortening execution time by fraction F.
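To put rough numbers on that, using the 54.4% inclusive figure for nl_jacobi_to_converge from the call graph above purely for illustration:

M = 1000 samples, F = 0.544
samples containing the function  ≈ M * F = 544
make it cost nothing, and those 544 samples vanish,
shortening execution time by F  = 54.4%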

0 votes

"Self" time is time actually spent in that function's code. "Cumulative" time is time spent in that function and anything that function calls (self plus children).

Note that percentages are percentages of measured time. A function that runs a great many times (such as your b) will obviously affect the runtime more than a function that runs only a few times (your nl_jacobi_to_converge). The time spent in it per call (/call) may be too small to report, but the total can still be significant, and may still suggest that it's worth looking for ways to reduce the number of calls and/or improve the efficiency of each invocation.
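For instance, in the flat profile above, b has 2.29 s of self time spread over 323,878,716 calls: roughly 7 nanoseconds per call, which rounds to the 0.00 s/call shown, yet it still accounts for about 15% of the measured time.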

WARNING: Measurements before the JIT stabilizes are meaningless. And since JIT is nondeterministic, especially but not solely in large multithreaded applications, performance may vary quite substantially from run to run even when the Java code is unchanged. Take multiple measurement passes, with sufficient warmup before measuring, and even then be prepared to sanity-check the profiler's suggestions because they may be misleading.

ANOTHER WARNING: Check whether semaphore-wait time is being measured, and make sure you know what's happening in all the threads. If you aren't counting waits, you may miss performance bottlenecks that aren't in your direct call stack.

(I spent a lot of time optimizing Apache Xalan and its IBM-internal descendants. There, we also had code generation in the mix, which had its own nondeterminacies. The initial performance improvements were easy and obvious; after that it became much more challenging!)