For our project written in C++, we run processor cores in poll mode to poll the driver (DPDK), but in poll mode the CPU utilization shows up as 100% in top/htop. After we started seeing occasional glitches with packet drops, we calculated the number of loops (polls) executed per second on a core; this number varies with the processor speed and type.
Sample code used to calculate the polls per second, with and without the overhead of the driver poll function:
#include <iostream>
#include <sys/time.h>

int main() {
    unsigned long long counter = 0;   // was uninitialized in the original
    struct timeval tv1, tv2;
    gettimeofday(&tv1, NULL);
    while (1) {
        gettimeofday(&tv2, NULL);
        // Some function here to measure the overhead
        // Poll the driver
        double elapsed = (double)(tv2.tv_sec - tv1.tv_sec)
                       + (double)(tv2.tv_usec - tv1.tv_usec) / 1000000.0;
        if (elapsed > 1.0) {
            std::cout << "Executions per second = " << counter << std::endl;
            counter = 0;              // restart the one-second window
            gettimeofday(&tv1, NULL);
        }
        counter++;
    }
}
The poll count results vary; sometimes we see a glitch where the number drops to 50% or less of the regular count. We thought this could be a problem with Linux scheduling the task, so we isolated the cores on the kernel command line (isolcpus=...), set CPU affinity, raised the process/thread priority (nice), and switched the scheduling class to real-time (RT).
But none of this made a difference.
So the questions are: can we rely on the number of loops/polls per second executed on a processor core in poll mode?
Is there a way to calculate the real CPU occupancy in poll mode, given that the core's CPU utilization always shows up as 100% in top?
Is this the right approach for this problem?
Environment:
- Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
- 8 GB RAM
- Ubuntu virtual machine on a VMware hypervisor
I'm not sure if this has been answered before; any references would be helpful.