Scenarios
This document describes how to use DPDK to test CVM instances for high-throughput network performance.
Directions
Compiling and installing DPDK
1. Run the following command to install the dependencies:
yum install -y sysstat wget tar automake make gcc
2. Run the following commands to download and decompress DPDK 17.11:
wget http://git.dpdk.org/dpdk/snapshot/dpdk-17.11.tar.gz
tar -xf dpdk-17.11.tar.gz
3. Modify the txonly engine so that each DPDK sender CPU core varies the UDP port, generating multiple data streams.
Run the following command to modify the dpdk/app/test-pmd/txonly.c file:
vim dpdk/app/test-pmd/txonly.c
Press i to enter the edit mode and make the following changes:
3.1.1 Locate #include "testpmd.h" and enter the following content on the next line:
RTE_DEFINE_PER_LCORE(struct udp_hdr, lcore_udp_hdr);
RTE_DEFINE_PER_LCORE(uint16_t, test_port);
3.1.2 Locate ol_flags |= PKT_TX_MACSEC; and append the following content on the next lines:
/* dummy test udp port */
memcpy(&RTE_PER_LCORE(lcore_udp_hdr), &pkt_udp_hdr, sizeof(pkt_udp_hdr));
3.1.3 Locate for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) { and add the following on a new line:
RTE_PER_LCORE(test_port)++;
RTE_PER_LCORE(lcore_udp_hdr).src_port = rte_cpu_to_be_16(2222);
RTE_PER_LCORE(lcore_udp_hdr).dst_port = rte_cpu_to_be_16(rte_lcore_id() * 2000 + RTE_PER_LCORE(test_port) % 64);
3.1.4 Replace copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt, with the following content:
copy_buf_to_pkt(&RTE_PER_LCORE(lcore_udp_hdr), sizeof(RTE_PER_LCORE(lcore_udp_hdr)), pkt,
Press Esc and enter :wq to save and close the file.
Run the following command to modify the dpdk/config/common_base file:
vim dpdk/config/common_base
Press i to enter the edit mode and change CONFIG_RTE_MAX_MEMSEG=256 to CONFIG_RTE_MAX_MEMSEG=1024.
Locate CONFIG_RTE_MAX_LCORE=128 and change the value to 256 if your instance has more than 128 CPU cores.
Press Esc and enter :wq to save and close the file.
Note:
Modify these configuration files on both the sender and receiver servers. To avoid repeating the modifications, you can run the following commands to send the modified files to the peer end:
scp -P 22 /root/dpdk/app/test-pmd/txonly.c root@<IP>:/root/dpdk/app/test-pmd/
scp -P 22 /root/dpdk/config/common_base root@<IP>:/root/dpdk/config
4. Run the following command to replace the IP addresses in dpdk/app/test-pmd/txonly.c with the test server IPs:
vim dpdk/app/test-pmd/txonly.c
Press i to enter the edit mode and locate the address definitions. Replace the octets 198, 18, 0, and 1 (that is, 198.18.0.1) with the actual server IPs: set SRC_ADDR to the sender IP and DST_ADDR to the receiver IP.
5. Run the OS-specific command to install the numa library.
CentOS:
yum install numactl-devel
Ubuntu:
apt-get install libnuma-dev
6. Run the following command in the dpdk/ directory to disable KNI:
sed -i "s/\\(^CONFIG_.*KNI.*\\)=y/\\1=n/g" ./config/*
7. If your OS uses a later kernel version (for example, 5.3), run the following commands to suppress the resulting compilation errors:
sed -i "s/\\(^WERROR_FLAGS += -Wundef -Wwrite-strings$\\)/\\1 -Wno-address-of-packed-member/g" ./mk/toolchain/gcc/rte.vars.mk
sed -i "s/fall back/falls through -/g" ./lib/librte_eal/linuxapp/igb_uio/igb_uio.c
8. Run the following commands to compile DPDK:
make defconfig
make -j
Configuring huge pages
Run the following command to configure huge pages.
echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
If an error message appears, there is not enough free memory to reserve that many huge pages. In this case, reduce the number, for example:
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
Loading the kernel module and binding the interface
Note:
You need to use Python for this step. Go to the Python official website to download and install an appropriate version. This document uses Python 3.6.8 as an example.
1. Log in to the Linux instance via VNC. After the ENI driver is bound to the igb_uio user-mode driver, the ENI can only be accessed via VNC or the console, not via an SSH key or IP address.
2. Run the following commands successively to load the UIO module and bind the virtio interface:
insmod /root/dpdk/build/kmod/igb_uio.ko
python3 dpdk-devbind.py --bind=igb_uio 00:05.0
Note:
Replace 00:05.0 in the command with the actual ENI address, which can be obtained using the following command:
python3 dpdk-devbind.py -s
After completing the tests, run the following command to restore the ENI:
python3 dpdk-devbind.py --bind=virtio-pci 00:05.0
Testing bandwidth and throughput
Note:
The tests use the txpkts parameter to control the packet size: for example, 1430B for the bandwidth test and 64B for the pps test.
The command parameters provided in this step apply to CentOS 8.2. Modify them to suit other system image versions and test again. For example, due to the performance difference between the CentOS 7.4 kernel (3.10) and the CentOS 8.2 kernel (4.18), change nb-cores in the bandwidth test to 2. For more information about the command parameters, see testpmd command-line options.
1. Run the following command to start testpmd in txonly mode on the sender and in rxonly mode on the receiver.
Sender:
/root/dpdk/build/app/testpmd -l 8-191 -w 0000:00:05.0 -- --burst=128 --nb-cores=32 --txd=512 --rxd=512 --txq=16 --rxq=16 --forward-mode=txonly --txpkts=1430 --stats-period=1
Note:
Replace -l 8-191 -w 0000:00:05.0
with the actual value of your test environment.
Receiver:
/root/dpdk/build/app/testpmd -l 8-191 -w 0000:00:05.0 -- --burst=128 --nb-cores=32 --txd=512 --rxd=512 --txq=16 --rxq=16 --forward-mode=rxonly --stats-period=1
2. Run the following command to test pps (UDP 64B packets).
Sender:
/root/dpdk/build/app/testpmd -l 8-191 -w 0000:00:05.0 -- --burst=128 --nb-cores=32 --txd=512 --rxd=512 --txq=16 --rxq=16 --forward-mode=txonly --txpkts=64 --stats-period=1
Receiver:
/root/dpdk/build/app/testpmd -l 8-191 -w 0000:00:05.0 -- --burst=128 --nb-cores=32 --txd=512 --rxd=512 --txq=16 --rxq=16 --forward-mode=rxonly --stats-period=1
Calculating the network bandwidth
The current receive bandwidth can be calculated from the pps and packet length on the receiver using the following formula:
PPS × packet length × 8 bit/B × 10⁻⁹ = bandwidth (Gbps)
Substituting the test result gives the current bandwidth:
4692725 pps × 1430 B × 8 bit/B × 10⁻⁹ ≈ 53 Gbps
Note:
The packet length is 1430B, including 14B Ethernet header, 4B CRC and 20B IP header.
Rx-pps in the test result is an instantaneous statistical value. You can conduct several tests and calculate the average to make the result more accurate.