Primary interface configuration
You can choose different ways to consume the host’s primary interface with VPP, usually with a tradeoff between performance and simplicity of configuration. Here are the main supported configurations.
- virtio: the interface is consumed with a native VPP driver. Performance is good and setup is simple, but only virtio interfaces are supported
- avf: we create a virtual function and consume it with a native VPP driver. Performance is good and setup is simple, but only Intel AVF interfaces are supported
- af_packet: the interface stays in Linux, which passes packets to VPP. Performance is low, but it works out of the box with any interface
- af_xdp: packets are passed via eBPF. This requires a >= 5.4 kernel, but works out of the box with good performance
- dpdk: the interface is removed from Linux and consumed with the DPDK library. Performance and support are good, but setup can be complex
- other native VPP drivers bring better performance than dpdk but require complex manual setup
General mechanics
The main interface configuration is controlled by a variable named CALICOVPP_NATIVE_DRIVER that is passed to the vpp container.
You can edit the .yaml file as follows or use the overlays provided in ./yaml/overlays/*.
By default, leaving CALICOVPP_NATIVE_DRIVER empty (or unspecified) will try all drivers supported in your setup, starting with the most performant. You’ll still
need to allocate hugepages if you want e.g. Virtio to work.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-vpp-node
  namespace: calico-vpp-dataplane
spec:
  template:
    spec:
      containers:
        - name: vpp
          env:
            - name: CALICOVPP_NATIVE_DRIVER
              value: "af_packet"
For most deployments (except DPDK itself), you should ensure that the vpp configuration has the dpdk stanza removed and the dpdk plugin disabled:
vpp_config_template: |-
...
# removed dpdk { ... }
...
plugins {
plugin default { enable }
plugin calico_plugin.so { enable }
plugin dpdk_plugin.so { disable }
}
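To double-check which plugins ended up enabled, you can list them from one of the running pods. This assumes the vppctl binary is shipped in the vpp container, which may not hold for every image, so treat it as a hint:

# Hedged sketch: list the plugins loaded by a running vpp instance
kubectl -n calico-vpp-dataplane exec ds/calico-vpp-node -c vpp -- vppctl show plugins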
Using the native Virtio driver
You can use this driver if your primary interface is a virtio device (realpath /sys/bus/pci/devices/<PCI_ID>/driver gives .../virtio-net).
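If you only know the interface name, a small sketch for running that check is below; ethtool and the eth0 placeholder are assumptions, substitute your actual primary interface:

# Sketch: find the PCI ID of the primary interface, then check its kernel driver
IF=eth0
PCI=$(ethtool -i "$IF" | awk '/bus-info/ {print $2}')
realpath "/sys/bus/pci/devices/$PCI/driver"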
- Ensure you have hugepages available on your system (sysctl -w vm.nr_hugepages=256)
- Ensure vfio-pci is loaded (sudo modprobe vfio-pci)
- Pass CALICOVPP_NATIVE_DRIVER = virtio to the vpp container (see the example manifest after this list)
- Also ensure that your vpp config has no dpdk stanza and that the dpdk plugin is disabled
- Optionally, CALICOVPP_RX_QUEUES controls the number of queues used and CALICOVPP_RING_SIZE their size
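For reference, the corresponding vpp container environment could look like the manifest below; the queue count and ring size values are purely illustrative, not recommendations:

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-vpp-node
  namespace: calico-vpp-dataplane
spec:
  template:
    spec:
      containers:
        - name: vpp
          env:
            - name: CALICOVPP_NATIVE_DRIVER
              value: "virtio"
            - name: CALICOVPP_RX_QUEUES
              value: "1"
            - name: CALICOVPP_RING_SIZE
              value: "1024"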
Using the native AVF driver
You can use this driver if your primary interface is supported by AVF (realpath /sys/bus/pci/devices/<PCI_ID>/driver gives .../i40e).
- Ensure vfio-pci is loaded (sudo modprobe vfio-pci)
- Pass CALICOVPP_NATIVE_DRIVER = avf to the vpp container
- Also ensure that your vpp config has no dpdk stanza and that the dpdk plugin is disabled
- Optionally, CALICOVPP_RX_QUEUES controls the number of queues used and CALICOVPP_RING_SIZE their size
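The virtual function itself is normally created with the avf.sh script mentioned at the end of this page. Purely as an illustration of what that involves (the interface name, VF index and settings are assumptions, prefer the provided script):

# Hypothetical sketch of creating and preparing a VF on the physical interface
IF=eth1                                                      # placeholder: the physical (PF) interface
echo 1 | sudo tee "/sys/class/net/$IF/device/sriov_numvfs"   # create one VF
sudo ip link set "$IF" vf 0 trust on                         # the AVF driver typically needs a trusted VF
sudo ip link set "$IF" vf 0 spoofchk off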
Using AF_XDP
Caution: Ensure your kernel is at least 5.4 (check with uname -r)
- Pass CALICOVPP_NATIVE_DRIVER = af_xdp to the vpp container
- Also ensure that your vpp config has no dpdk stanza and that the dpdk plugin is disabled
- Optionally, CALICOVPP_RX_QUEUES controls the number of queues used and CALICOVPP_RING_SIZE their size
- CALICOVPP_RX_MODE controls whether we busy-poll the interface (polling), only use interrupts to wake us up (interrupt), or switch between both depending on the load (adaptive)
- Finally, FELIX_XDPENABLED should be set to false on the calico-node container, otherwise Felix will periodically clean up the VPP configuration

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-vpp-node
  namespace: calico-vpp-dataplane
spec:
  template:
    spec:
      containers:
        - name: vpp
          env:
            - name: CALICOVPP_NATIVE_DRIVER
              value: "af_xdp"
            - name: CALICOVPP_RING_SIZE
              value: "1024"
            - name: CALICOVPP_RX_QUEUES
              value: "1"
            - name: CALICOVPP_RX_MODE
              value: "polling"
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: calico-node
          env:
            - name: FELIX_XDPENABLED
              value: "false"
With kustomize use kubectl kustomize ./yaml/overlays/af-xdp | kubectl apply -f -
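Once the pods are up, a hedged way to confirm that an XDP program was actually attached to the primary interface (eth0 is a placeholder) is to inspect the link or the loaded eBPF programs:

# Sketch: a driver-attached XDP program shows up in the link details
ip link show dev eth0 | grep -i xdp
# alternatively, list loaded eBPF programs (requires bpftool)
sudo bpftool prog list | grep -i xdp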
Side notes
- AF_XDP won't start if you specify buffers { buffers-per-numa } to be too big (65536 should work)
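In other words, if your vpp_config_template contains a buffers stanza, keep the value moderate, for instance (65536 being the value reported to work above):

buffers {
  buffers-per-numa 65536
}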
Using AF_PACKET
- Pass CALICOVPP_NATIVE_DRIVER = af_packet to the vpp container
- Also ensure that your vpp config has no dpdk stanza and that the dpdk plugin is disabled
You can also use kubectl kustomize ./yaml/overlays/af-packet | kubectl apply -f -
Using DPDK
With available hugepages
- Ensure you have hugepages available on your system (sysctl -w vm.nr_hugepages=256)
- Pass CALICOVPP_NATIVE_DRIVER = none to the vpp container
- The vpp config in the calico-config ConfigMap should look like the following; __PCI_DEVICE_ID__ will be automatically populated with the PCI ID of CALICOVPP_INTERFACE at startup. CALICOVPP_RX_QUEUES and CALICOVPP_RING_SIZE no longer have any effect: they are controlled by their counterparts in the dpdk {} stanza, namely num-rx-queues and num-rx-desc
vpp_config_template: |-
unix {
nodaemon
full-coredump
cli-listen /var/run/vpp/cli.sock
}
api-trace { on }
cpu {
main-core 1
workers 0
}
socksvr {
socket-name /var/run/vpp/vpp-api.sock
}
dpdk {
dev __PCI_DEVICE_ID__ { num-rx-queues 1 num-tx-queues 1 }
}
plugins {
plugin default { enable }
plugin calico_plugin.so { enable }
}
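Note that sysctl -w only lasts until the next reboot. A minimal sketch for making the hugepage reservation persistent, assuming a distribution that reads /etc/sysctl.d/ (the file name is arbitrary):

# Sketch: persist the hugepage reservation across reboots
echo "vm.nr_hugepages = 256" | sudo tee /etc/sysctl.d/90-hugepages.conf
sudo sysctl -p /etc/sysctl.d/90-hugepages.conf    # apply it immediately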
Without hugepages
DPDK can also run without hugepages, with a configuration similar to the previous one.
- Pass CALICOVPP_NATIVE_DRIVER = none to the vpp container
- The vpp config in the calico-config ConfigMap should look like the following, with __PCI_DEVICE_ID__ automatically populated with the PCI ID of CALICOVPP_INTERFACE at startup. CALICOVPP_RX_QUEUES and CALICOVPP_RING_SIZE no longer have any effect: they are controlled by their counterparts in the dpdk {} stanza, namely num-rx-queues and num-rx-desc
You also need to tell DPDK not to try allocating hugepages on its own:
- dpdk { no-hugetlb iova-mode va } does this for the dpdk plugin
- buffers { no-hugetlb } does this for the buffers backing VPP's packets
Caution: This won't run with VFIO's unsafe no-IOMMU mode enabled. You need to disable it with:
echo N | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
vpp_config_template: |-
unix {
nodaemon
full-coredump
cli-listen /var/run/vpp/cli.sock
}
api-trace { on }
cpu {
main-core 1
workers 0
}
socksvr {
socket-name /var/run/vpp/vpp-api.sock
}
dpdk {
dev __PCI_DEVICE_ID__ { num-rx-queues 1 num-tx-queues 1 }
iova-mode va
no-hugetlb
}
buffers {
no-hugetlb
}
plugins {
plugin default { enable }
plugin calico_plugin.so { enable }
}
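Since unsafe no-IOMMU mode has to stay disabled in this mode, it is worth checking that the host actually has a working IOMMU. A rough sketch of such a check (not taken from this project's tooling):

# Sketch: a populated /sys/kernel/iommu_groups indicates the IOMMU is enabled
ls /sys/kernel/iommu_groups | wc -l     # 0 usually means no usable IOMMU
sudo dmesg | grep -i -e DMAR -e IOMMU | head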
Using native drivers with vpp’s CLI
This is a rather advanced/experimental setup. We'll take the AVF driver as an example, configured through the vpp CLI, but any vpp driver can be used this way. This allows other interface types to be supported efficiently.
- Pass CALICOVPP_NATIVE_DRIVER = none to the vpp container
- As before, remove the dpdk { ... } section in vpp_config_template and update the plugins { ... } definitions as follows
- Also add an exec /etc/vpp/startup.exec entry in unix { ... }
vpp_config_template: |-
unix {
nodaemon
full-coredump
log /var/run/vpp/vpp.log
cli-listen /var/run/vpp/cli.sock
exec /etc/vpp/startup.exec
}
...
# removed dpdk { ... }
...
plugins {
plugin default { enable }
plugin calico_plugin.so { enable }
plugin dpdk_plugin.so { disable }
}
Then update the CALICOVPP_CONFIG_EXEC_TEMPLATE environment variable to pass the interface creation CLI command(s).
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-vpp-node
  namespace: calico-vpp-dataplane
spec:
  template:
    spec:
      containers:
        - name: vpp
          env:
            - name: CALICOVPP_CONFIG_EXEC_TEMPLATE
              value: "create interface avf 0000:ab:cd.1 num-rx-queues 1"
In the specific case of the AVF driver, the PCI id must belong to a VF that can be created with the avf.sh script. Different drivers will have different requirements.
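As a hint for finding the PCI ID to put in that CLI, the VFs created on a physical interface are exposed as virtfn* symlinks in sysfs (eth1 is a placeholder for the PF):

# Sketch: each virtfn* symlink under the PF points to one VF's PCI device
ls -l /sys/class/net/eth1/device/virtfn*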