
I've set IRQ affinity in the past on Linux by writing values to the proc files. [1] However, I noticed that when I do this on a system that uses MSI-X for the device (PCIe) I want to set affinity for, e.g. a NIC, the /proc/interrupts counters increment on every core for that IRQ, not just on the single core I set it to. On a non-MSI-X system the specified core answers the interrupts.
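
Concretely, what I mean by writing to the proc files is along these lines (45 is just a placeholder IRQ number):

echo 1 > /proc/irq/45/smp_affinity     # mask 0x1 = CPU0 only
cat /proc/irq/45/smp_affinity          # read the mask back
egrep '^ *45:' /proc/interrupts        # per-CPU counters for that IRQ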

I'm using Linux kernel 3.11.

Short: Can IRQ affinity be set for devices that use MSI-X interrupts?

[1] https://www.kernel.org/doc/Documentation/IRQ-affinity.txt

good freakin question – Colin Godsey

1 Answer


Unburying this thread: I am trying to set IRQ (MSI-X) CPU affinity for my SATA controller in order to avoid CPU switching delays. So far, I got the currently used IRQ via:

IRQ=$(grep ahci /proc/interrupts | awk -F':' '{gsub(/ /, "", $1); print $1}')
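
Keep in mind that an MSI-X device can expose several vectors (a multi-queue NIC typically has one per queue), so a match like the one above may return more than one IRQ number, and each vector then needs its own mask. A sketch, reusing the same ahci match:

for IRQ in $(grep ahci /proc/interrupts | awk -F':' '{gsub(/ /, "", $1); print $1}'); do
    echo 02 > /proc/irq/$IRQ/smp_affinity    # same mask for every vector
done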

Just looking at the interrupts via cat /proc/interrupts shows that multiple CPUs are involved in handling my SATA controller.

I then set the IRQ affinity via the following (the value is a hexadecimal CPU bitmask; 0x02 is the second CPU, shown as CPU1 in /proc/interrupts):

echo 02 > /proc/irq/$IRQ/smp_affinity
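
If the hex bitmask feels error-prone, the same setting can be made through the list form of the file, which takes plain CPU numbers (this should be equivalent to the mask above):

echo 1 > /proc/irq/$IRQ/smp_affinity_list    # CPU number instead of a bitmask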

I can test the effective affinity with

cat /proc/irq/$IRQ/effective_affinity
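
smp_affinity is the requested mask and effective_affinity is what the kernel actually programmed, so comparing the two is a quick sanity check:

cat /proc/irq/$IRQ/smp_affinity /proc/irq/$IRQ/effective_affinity    # both should show the same mask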

After a while of disk benchmarking, I noticed that the affinity stays as configured. Example:

Before the benchmark, having bound IRQ 134 to CPU1 (mask 0x02):

 cat /proc/interrupts | egrep "ahci|CPU"
             CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
  134:   12421581          1          0         17       4166          0          0          0  IR-PCI-MSI 376832-edge      ahci[0000:00:17.0]

After the benchmark:

 cat /proc/interrupts | egrep "ahci|CPU"
            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
 134:   12421581    2724836          0         17       4166          0          0          0  IR-PCI-MSI 376832-edge      ahci[0000:00:17.0]
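
To watch where the interrupts land while the benchmark is running, something along these lines does the job (134 is my IRQ number, adjust it for yours):

watch -n1 "egrep 'CPU|134:' /proc/interrupts"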

So in my case, the affinity that I've set up stayed as it should. I can only imagine that you have irqbalance running as a service. Have you checked that? In my case, running irqbalance redistributes the affinity and overrides the one I set up.
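
A quick way to check, and to rule it out for a test (assuming the usual systemd service name, irqbalance):

systemctl status irqbalance              # is it running?
systemctl stop irqbalance                # stop it for the duration of the test
echo 02 > /proc/irq/$IRQ/smp_affinity    # re-apply the mask afterwards

If stopping it entirely is not an option, irqbalance can also be told to leave specific IRQs alone (its --banirq option), but I have not tested that here.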

My test system: CentOS 8.2 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Wed Jun 10 11:09:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

In the end, I did not achieve better disk utilization / performance. My initial problem was that fio benchmarks do not use the disk at 100%, merely somewhere between 75% and 85% (and sometimes 97%, without me knowing why).