My question is related to a question I asked earlier: Forward packets between SR-IOV Virtual Function (VF) NICs. Basically, what I want to do is use 4 SR-IOV virtual functions of an Intel 82599ES and direct traffic between the VFs as I need. The setup is something like this (don't mind the X710; I am using an 82599ES now).
For the sake of simplicity while testing, I'm only using one VM running warp17 to generate traffic, send it through VF1, and receive it back from VF3. Since the newer DPDK versions have a switching function, as described in https://doc.dpdk.org/guides-18.11/prog_guide/switch_representation.html?highlight=switch , I'm trying to use testpmd to configure the switching. But testpmd doesn't accept any of the flow commands I enter; all I get is "Bad argument". For example, it fails with this command:
flow create 1 ingress pattern / end actions port_id id 3 / end
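One thing worth trying before `flow create` is `flow validate`, which asks the PMD whether it can support the rule without actually creating it; that helps distinguish a testpmd parser error ("Bad argument") from a rule the ixgbe driver simply cannot offload. A sketch, assuming the same port numbering as above and adding an explicit `eth` pattern item in case the empty pattern is what the parser rejects:

```
testpmd> flow validate 1 ingress pattern eth / end actions port_id id 3 / end
testpmd> flow create 1 ingress pattern eth / end actions port_id id 3 / end
```

If `flow validate` also fails, the message it prints should say whether the problem is the syntax or an unsupported action; note that support for the `port_id` action varies by PMD, and it may simply not be implemented for ixgbe in 18.11.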
My procedure is as follows:
Bind my PF (82599ES) to the igb_uio driver.
Create 4 VFs using the following command:
echo "4" | sudo tee /sys/bus/pci/devices/0000:65:00.0/max_vfs
Bind 2 VFs to the vfio-pci driver using:

echo "8086 10ed" | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id
sudo ./usertools/dpdk-devbind.py -b vfio-pci 0000:65:10.0 0000:65:10.2
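As a sanity check that the two addresses handed to dpdk-devbind.py really are VF 0 and VF 1 of this PF, the SR-IOV routing-ID arithmetic can be sketched in Python. The 0x80 offset and stride of 2 below are assumptions that happen to match the addresses seen on this 82599ES setup; the authoritative values are in the PF's `sriov_offset` and `sriov_stride` sysfs files.

```python
# Sketch: derive VF PCI addresses from the PF address using the SR-IOV
# routing-ID formula: VFn routing ID = PF routing ID + first-VF offset + n * stride.
# offset=0x80 and stride=2 are assumed example values matching this setup;
# read the real ones from /sys/bus/pci/devices/<PF>/sriov_offset and sriov_stride.

def vf_address(pf_bdf: str, vf_index: int, offset: int = 0x80, stride: int = 2) -> str:
    domain, bus, devfn = pf_bdf.split(":")
    dev, fn = devfn.split(".")
    # Flatten bus/device/function into a single 16-bit routing ID.
    rid = (int(bus, 16) << 8) | (int(dev, 16) << 3) | int(fn, 16)
    rid += offset + vf_index * stride
    return f"{domain}:{(rid >> 8) & 0xff:02x}:{(rid >> 3) & 0x1f:02x}.{rid & 0x7}"

for n in range(4):
    print(vf_address("0000:65:00.0", n))
```

With those values, the four VFs come out as 0000:65:10.0, 0000:65:10.2, 0000:65:10.4, and 0000:65:10.6, so binding 0000:65:10.0 and 0000:65:10.2 does pick VF 0 and VF 1.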
Use PCI passthrough to assign the VFs to the VM and start the VM:
sudo qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -hda WARP17-disk1.qcow2 -m 6144 \
    -display vnc=:0 -redir tcp:2222::22 \
    -net nic,model=e1000 -net user,name=mynet0 \
    -device pci-assign,romfile=,host=0000:65:10.0 \
    -device pci-assign,romfile=,host=0000:65:10.2

Run testpmd with the PF and 2 port representors of the VFs:
sudo ./testpmd --lcores 1,2 -n 4 -w 65:00.0,representor=0-1 --socket-mem 1024 --proc-type auto --file-prefix testpmd-pf -- -i --port-topology=chained
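Before issuing any flow rules, it is worth confirming that testpmd actually enumerated the representor ports: if the `representor=0-1` device argument was not accepted, port 1 and port 2 may simply not exist, and flow commands referring to them will fail. A quick check from the testpmd console (a sketch; port numbering assumed to follow the whitelist order, PF first):

```
testpmd> show port info all    # the PF plus two representor ports should be listed
testpmd> show port stats all
```

If only one port shows up, the problem is the representor enumeration rather than the flow rule syntax.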
Am I doing something wrong, or is this the nature of testpmd? My DPDK version is 18.11.9.

Comment: With -w 65:00.0, the ingress and egress traffic goes out through the PF. But please tell me: is your expectation that the PF driver will forward the packets, or that the 82599ES ASIC will switch them? - Vipin Varghese