4 votes

I have two physical NICs on my machine. Based on this post, it seems that DPDK should be able to work with virtual NICs.

Thus I created 3 virtual interfaces using the following commands in Linux, where eno1d1 is the name of my physical NIC.

sudo ifconfig eno1d1:0 10.10.1.107
sudo ifconfig eno1d1:1 10.10.1.207
sudo ifconfig eno1d1:2 10.10.2.107

However, when I run my dpdk application, the function rte_eth_dev_count still returns only 2.

What do I need to do to get DPDK to recognize the virtual NICs?

Here's some information about my DPDK version, which is logged at the beginning of my application.

Using DPDK version DPDK 16.11.0
DPDK: EAL: Detected 16 lcore(s)
DPDK: EAL: Probing VFIO support...
DPDK: EAL: PCI device 0000:09:00.0 on NUMA socket 0
DPDK: EAL:   probe driver: 15b3:1007 net_mlx4
DPDK: PMD: net_mlx4: PCI information matches, using device "mlx4_0" (VF: false)
DPDK: PMD: net_mlx4: 2 port(s) detected
DPDK: PMD: net_mlx4: port 1 MAC address is ec:b1:d7:85:3a:12
DPDK: PMD: net_mlx4: port 2 MAC address is ec:b1:d7:85:3a:13
DPDK: PMD: net_mlx4: 0xae6000: TX queues number update: 0 -> 1
DPDK: PMD: net_mlx4: 0xae6000: RX queues number update: 0 -> 1

Here is the output of ifconfig on my machine.

eno1      Link encap:Ethernet  HWaddr ec:b1:d7:85:1a:12  
          inet addr:128.110.153.148  Bcast:128.110.155.255  Mask:255.255.252.0
          inet6 addr: fe80::eeb1:d7ff:fe85:1a12/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15241610 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11238825 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:4530541723 (4.5 GB)  TX bytes:8168066799 (8.1 GB)

eno1d1    Link encap:Ethernet  HWaddr ec:b1:d7:85:1a:13  
          inet addr:10.10.1.7  Bcast:10.10.1.255  Mask:255.255.255.0
          inet6 addr: fe80::eeb1:d7ff:fe85:1a13/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3787661978 errors:0 dropped:66084 overruns:0 frame:0
          TX packets:4758273664 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1905977969665 (1.9 TB)  TX bytes:3897938668285 (3.8 TB)

eno1d1:0  Link encap:Ethernet  HWaddr ec:b1:d7:85:1a:13  
          inet addr:10.10.1.107  Bcast:10.255.255.255  Mask:255.0.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eno1d1:1  Link encap:Ethernet  HWaddr ec:b1:d7:85:1a:13  
          inet addr:10.10.1.207  Bcast:10.255.255.255  Mask:255.0.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eno1d1:2  Link encap:Ethernet  HWaddr ec:b1:d7:85:1a:13  
          inet addr:10.10.2.107  Bcast:10.255.255.255  Mask:255.0.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:62313 errors:0 dropped:0 overruns:0 frame:0
          TX packets:62313 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:3557508 (3.5 MB)  TX bytes:3557508 (3.5 MB)
2 votes
Those are NIC aliases, not virtual NICs. They're also obsolescent (use ip address add instead). – Ignacio Vazquez-Abrams
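For reference, a minimal sketch of the ip equivalent of the alias commands above, assuming eno1d1 is the physical interface and a /24 prefix is intended (the original ifconfig calls defaulted to /8):

    # Add additional addresses directly to the interface instead of creating aliases
    sudo ip address add 10.10.1.107/24 dev eno1d1
    sudo ip address add 10.10.1.207/24 dev eno1d1
    sudo ip address add 10.10.2.107/24 dev eno1d1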

2 Answers

5 votes

eno1d1:0 Link encap:Ethernet HWaddr ec:b1:d7:85:1a:13

Those are not virtual NICs; they are network aliases, i.e. different Linux kernel netdevs referring to the same NIC. Since DPDK bypasses the Linux kernel network stack, we cannot use those aliases to run DPDK apps.

Nevertheless, there are a few options for running a DPDK app without using physical NICs:

Running DPDK inside a Virtual Machine

  1. Run a virtual machine with as many NICs as you need.
  2. Inside the virtual machine, bind the NICs to UIO (see the sketch below).
  3. Inside the virtual machine, run DPDK; it should work fine with the NICs inside the virtual machine.

For more information, please have a look at DPDK Poll Mode Driver for Emulated Virtio NIC.
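As a rough sketch of step 2, assuming a guest NIC at PCI address 0000:00:04.0 and a DPDK 16.11-style tree where the binding script lives under tools/ (the address and paths are placeholders; check dpdk-devbind.py --status for the real ones):

    # Inside the virtual machine: load UIO and rebind the guest NIC to igb_uio
    sudo modprobe uio
    sudo insmod <dpdk-build>/kmod/igb_uio.ko
    sudo <dpdk-src>/tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0
    <dpdk-src>/tools/dpdk-devbind.py --status    # verify the new binding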

Using NIC Virtual Functions:

  1. Configure SR-IOV support on the host.
  2. Configure a few Virtual Functions on the host NIC by passing num_vfs to the mlx4 kernel module driver (sketched below).
  3. On the host, bind a few NIC Virtual Functions to vfio-pci.
  4. On the host, run DPDK; it should work fine with the NIC Virtual Functions.

For more information, please have a look at the DPDK MLX4 Poll Mode Driver documentation and HowTo Configure SR-IOV for ConnectX-3.

For a general description of SR-IOV, you might find the DPDK Intel Virtual Function Driver documentation useful. Please note that the configuration for the Mellanox kernel module is slightly different: you should use num_vfs as described in the links above instead.
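As a rough sketch of step 2 for a ConnectX-3, assuming the module options are set via /etc/modprobe.d and that 4 Virtual Functions are enough (the exact values are examples; follow the Mellanox HowTo above):

    # Ask mlx4_core for 4 Virtual Functions at the next module load / reboot
    echo 'options mlx4_core num_vfs=4 probe_vf=4' | sudo tee /etc/modprobe.d/mlx4_sriov.conf
    # Rebuild the initramfs (Debian/Ubuntu) and reboot so the option takes effect
    sudo update-initramfs -u && sudo reboot
    # After reboot, the VFs should appear as extra PCI functions
    lspci | grep -i mellanox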

Using DPDK Virtual Device

  1. Compile DPDK with libpcap support.
  2. Configure the host to run a DPDK app as usual (i.e. enable huge pages etc.).
  3. Do not bind any NICs to UIO.
  4. Create a few TUN/TAP interfaces and bridge them with a physical NIC (see the sketch below).
  5. Run the DPDK application as usual, but pass a few --vdev arguments to create the Virtual Devices, for example:

    testpmd -l 0-3 -n 4 \
        --vdev 'net_pcap0,iface=tun0' \
        --vdev 'net_pcap1,iface=tun1' ...

For more information, please have a look at DPDK libpcap Poll Mode Driver.
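As a rough sketch of steps 1 and 4, assuming a DPDK 16.11-style build configuration and placeholder names tap0/br0 (a TAP interface is used here because only L2 interfaces can be added to a bridge):

    # Step 1: enable the libpcap PMD before building DPDK
    #   set CONFIG_RTE_LIBRTE_PMD_PCAP=y in config/common_base, then rebuild
    # Step 4: create a TAP interface and bridge it with the physical NIC
    sudo ip tuntap add dev tap0 mode tap
    sudo ip link add name br0 type bridge
    sudo ip link set dev eno1d1 master br0
    sudo ip link set dev tap0 master br0
    sudo ip link set dev tap0 up
    sudo ip link set dev br0 up
    # The interface name then goes into the --vdev argument, e.g. 'net_pcap0,iface=tap0'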

Hope one of those options suits your needs.

0 votes

You're not talking about the same kinds of virtual NICs. That post refers to NICs for virtual machines (e.g., virtio or an emulated e1000), whereas you're trying to have DPDK listen on a Linux virtual NIC.

In that post, Zhandos Zhylkaidar is simply saying you could run DPDK inside a virtual machine, in which case the NICs DPDK sees aren't necessarily physical NICs.