- As we know, we can bind the IRQs of a device to specific CPU cores by using IRQ affinity on Linux (a minimal sketch follows after the links below):

    echo <hex-cpu-mask> > /proc/irq/<irq-num>/smp_affinity

See:
- http://www.alexonlinux.com/smp-affinity-and-proper-interrupt-handling-in-linux
- https://community.mellanox.com/docs/DOC-2123
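
For illustration, a minimal sketch of reading and setting the affinity of a single IRQ (the IRQ number 77 and the mask 0f below are arbitrary placeholders, not values from a real system):

    # show the current CPU affinity of IRQ 77 as a hex bitmask of cores
    cat /proc/irq/77/smp_affinity
    # the same information as a human-readable list of cores
    cat /proc/irq/77/smp_affinity_list
    # restrict IRQ 77 to cores 0-3 (mask 0x0f); needs root
    echo 0f > /proc/irq/77/smp_affinity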
Also:
- We know that on NUMA systems we can check which NUMA node (i.e. which physical CPU socket on the motherboard) a hardware interrupt belongs to: https://events.linuxfoundation.org/sites/events/files/eeus13_shelton.pdf

    cat /proc/irq/<irq-num>/node
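
And the NUMA node of the PCIe device itself can be read from sysfs, which lets us compare it with the node reported for its IRQs (a sketch; eth4, the PCI address 0000:04:00.0 and IRQ 77 are placeholders):

    # NUMA node of the PCIe device behind a network interface (placeholder name eth4)
    cat /sys/class/net/eth4/device/numa_node
    # the same, directly by PCI address (placeholder address 0000:04:00.0)
    cat /sys/bus/pci/devices/0000:04:00.0/numa_node
    # NUMA node that the kernel reports for a given IRQ (placeholder IRQ 77)
    cat /proc/irq/77/node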
But if one PCIe device (Ethernet, GPU, ...) is connected to NUMA node 0 and another PCIe device is connected to NUMA node 1, then it would be optimal to handle each device's interrupts on the NUMA node (CPU) to which that device is connected, so as to avoid high-latency communication between the nodes (see: Is CPU access asymmetric to Network card).
Does Linux automatically bind IRQs to the NUMA nodes to which their PCIe devices are connected, or does this have to be done manually?
And if it has to be done manually, what is the best way to do it? Is something like the sketch below the right approach?
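
This is roughly what I assume the manual procedure would look like (only a sketch: eth4 is a placeholder interface name, it assumes the NIC exposes its MSI-X vectors under .../device/msi_irqs, that the comma-separated hex mask in node<N>/cpumap is accepted by smp_affinity, and that irqbalance is stopped so it does not overwrite the masks):

    #!/bin/bash
    # Sketch: pin all MSI-X IRQs of a NIC to the CPUs of its local NUMA node.
    IFACE=eth4                                    # placeholder interface name

    # NUMA node the PCIe device is attached to (-1 if the BIOS does not report it)
    NODE=$(cat /sys/class/net/$IFACE/device/numa_node)
    if [ "$NODE" -lt 0 ]; then
        echo "no NUMA node reported for $IFACE" >&2
        exit 1
    fi

    # hex CPU mask of that node, in the same comma-separated format smp_affinity takes
    MASK=$(cat /sys/devices/system/node/node$NODE/cpumap)

    # write the node's CPU mask into every IRQ belonging to this device
    for IRQ in $(ls /sys/class/net/$IFACE/device/msi_irqs/); do
        echo $MASK > /proc/irq/$IRQ/smp_affinity
    done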
Particularly interested in Linux x86_64: Debian 8 (Kernel 3.16) and Red Hat Enterprise Linux 7 (Kernel 3.10), and others...
Motherboard chipsets: Intel C612 / Intel C610, and others...
Ethernet cards: Solarflare Flareon Ultra SFN7142Q Dual-Port 40GbE QSFP+ PCIe 3.0 Server I/O Adapter - Part ID: SFN7142Q