[Intel-wired-lan] [PATCH v2 0/5] Introducing ixgbe AF_XDP ZC support
William Tu
u9012063 at gmail.com
Tue Oct 2 18:43:27 UTC 2018
On Tue, Oct 2, 2018 at 11:39 AM Björn Töpel <bjorn.topel at intel.com> wrote:
>
> On 2018-10-02 20:23, William Tu wrote:
> > On Tue, Oct 2, 2018 at 1:01 AM Björn Töpel <bjorn.topel at gmail.com> wrote:
> >>
> >> From: Björn Töpel <bjorn.topel at intel.com>
> >>
> >> Jeff: Please remove the v1 patches from your dev-queue!
> >>
> >> This patch set introduces zero-copy AF_XDP support for Intel's ixgbe
> >> driver.
> >>
> >> The ixgbe zero-copy code is located in its own file ixgbe_xsk.[ch],
> >> analogous to the i40e ZC support. Again, as in i40e, code paths have
> >> been copied from the XDP path to the zero-copy path. Going forward we
> >> will try to generalize more code between the AF_XDP ZC drivers, and
> >> also reduce the heavy copy-and-paste.
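
For context, below is a minimal user-space sketch of the zero-copy bind
that this series enables. It is not part of the patch set: the interface
name, queue id, and buffer sizes are assumptions, it assumes >= 4.18
kernel uapi headers, and the ring setup steps are elided, so as written
the bind() will be rejected until the fill/completion and Rx/Tx rings are
configured. With the series applied, such a bind reaches the new
ixgbe_xsk.[ch] paths instead of failing with -EOPNOTSUPP.

/*
 * Minimal sketch (not part of this patch set) of a zero-copy AF_XDP
 * bind.  Interface name, queue id, and sizes are assumptions; error
 * handling and the ring setup steps are elided for brevity, so the
 * bind() below is rejected until the fill/completion and Rx/Tx rings
 * are configured via the corresponding setsockopts.
 */
#include <linux/if_xdp.h>
#include <net/if.h>
#include <sys/socket.h>
#include <stdlib.h>
#include <unistd.h>

#ifndef AF_XDP			/* older libc headers */
#define AF_XDP 44
#endif
#ifndef SOL_XDP
#define SOL_XDP 283
#endif

#define NUM_FRAMES 4096
#define FRAME_SIZE 2048

int main(void)
{
	struct xdp_umem_reg umem = {0};
	struct sockaddr_xdp sxdp = {0};
	void *bufs;
	int fd;

	fd = socket(AF_XDP, SOCK_RAW, 0);

	/* Register user memory as the UMEM the NIC DMAs packets into. */
	posix_memalign(&bufs, getpagesize(), NUM_FRAMES * FRAME_SIZE);
	umem.addr = (unsigned long)bufs;
	umem.len = NUM_FRAMES * FRAME_SIZE;
	umem.chunk_size = FRAME_SIZE;
	umem.headroom = 0;
	setsockopt(fd, SOL_XDP, XDP_UMEM_REG, &umem, sizeof(umem));

	/* ... XDP_UMEM_FILL_RING, XDP_UMEM_COMPLETION_RING, XDP_RX_RING,
	 * XDP_TX_RING setsockopts and the ring mmap()s go here ... */

	/* Bind to one HW queue and insist on zero-copy.  Without this
	 * series, ixgbe rejects XDP_ZEROCOPY with -EOPNOTSUPP. */
	sxdp.sxdp_family = AF_XDP;
	sxdp.sxdp_ifindex = if_nametoindex("eth0");	/* assumption */
	sxdp.sxdp_queue_id = 0;
	sxdp.sxdp_flags = XDP_ZEROCOPY;
	bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp));

	return 0;
}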
> >>
> >> We have run some benchmarks on a dual-socket system with two Broadwell
> >> E5 2660 CPUs @ 2.0 GHz with hyperthreading turned off. Each socket has
> >> 14 cores, for a total of 28, but only two cores are used in these
> >> experiments: one for TX/RX and one for the user-space application. The
> >> memory is DDR4 @ 2133 MT/s (1067 MHz); each DIMM is 8192 MB, and with 8
> >> of them the system has 64 GB of memory in total. The compiler used is
> >> GCC 7.3.0. The NIC is an Intel 82599ES/X520-2 10 Gbit/s card using the
> >> ixgbe driver.
> >>
> >> Below are the results in Mpps of the 82599ES/X520-2 NIC benchmark runs
> >> for 64B and 1500B packets, generated by a commercial packet generator
> >> HW blasting packets at the full 10 Gbit/s line rate. The results are
> >> with retpoline and all the other Spectre and Meltdown mitigations in
> >> place.
> >>
> >> AF_XDP performance, 64B packets (Mpps):
> >> Benchmark       XDP_DRV with zerocopy
> >> rxdrop          14.7
> >> txpush          14.6
> >> l2fwd           11.1
> >>
> >> AF_XDP performance, 1500B packets (Mpps):
> >> Benchmark       XDP_DRV with zerocopy
> >> rxdrop          0.8
> >> l2fwd           0.8
> >>
> >> XDP performance on our system, as a baseline:
> >>
> >> 64B packets:
> >> XDP stats       CPU     Mpps        issue-pps
> >> XDP-RX CPU      16      14.7        0
> >>
> >> 1500B packets:
> >> XDP stats       CPU     Mpps        issue-pps
> >> XDP-RX CPU      16      0.8         0
> >>
> >> The structure of the patch set is as follows:
> >>
> >> Patch 1: Introduce Rx/Tx ring enable/disable functionality
> >> Patch 2: Preparatory patch to ixgbe driver code for RX
> >> Patch 3: ixgbe zero-copy support for RX
> >> Patch 4: Preparatory patch to ixgbe driver code for TX
> >> Patch 5: ixgbe zero-copy support for TX
> >>
> >> Changes since v1:
> >>
> >> * Removed redundant AF_XDP precondition checks, pointed out by
> >> Jakub. Now, the preconditions are only checked at XDP enable time.
> >> * Fixed a crash in the egress path, due to incorrect usage of
> >> ixgbe_ring queue_index member. In v2 a ring_idx back reference is
> >> introduced, and used in favor of queue_index. William reported the
> >> crash, and helped me smoke out the issue. Kudos!
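
The gist of that fix, as a small self-contained sketch with simplified,
hypothetical types (not the actual ixgbe structures): queue_index names a
hardware queue, which is not guaranteed to be a valid index into the
driver's ring array, so each ring stores its own array index as a back
reference.

/*
 * Illustrative sketch of the back-reference idea, with simplified,
 * hypothetical types (not the actual ixgbe structures).
 */
#include <stdio.h>

#define NUM_RINGS 4

struct ring {
	int queue_index;	/* HW queue id; not an array index */
	int ring_idx;		/* back reference: index into rings[] */
};

int main(void)
{
	struct ring rings[NUM_RINGS];
	int i;

	for (i = 0; i < NUM_RINGS; i++) {
		rings[i].queue_index = 8 + i;	/* e.g. offset HW queue ids */
		rings[i].ring_idx = i;		/* the back reference */
	}

	/* An egress-path lookup indexed by queue_index (8..11) would run
	 * past the end of rings[]; indexing by ring_idx is always valid. */
	for (i = 0; i < NUM_RINGS; i++)
		printf("rings[%d] -> HW queue %d\n",
		       rings[i].ring_idx, rings[i].queue_index);
	return 0;
}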
> >
> > Thanks! I tested this series and there are no more crashes.
>
> Thank you for spending time on this!
>
> > The numbers are pretty good (*without* Spectre and Meltdown fixes):
> > model name: Intel(R) Xeon(R) CPU E5-2440 v2 @ 1.90GHz, 16 cores in total
> >
> > AF_XDP performance, 64B packets (Mpps):
> > Benchmark       XDP_DRV with zerocopy
> > rxdrop          20
> > txpush          18
> > l2fwd           20
Sorry, please ignore these numbers!
It's actually 2 Mpps from xdpsock, but that's because my sender only
sends 2 Mpps.
>
> What is 20 here? Given that 14.8 Mpps is the maximum for 64B at
> 10 Gbit/s on one queue, is this multiple queues? Is this xdpsock or OvS
> with AF_XDP?
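(As a sanity check on that 14.8 Mpps ceiling: a 64B frame occupies
64 + 20 bytes on the wire once the 7B preamble, 1B start-of-frame
delimiter, and 12B inter-frame gap are counted, so
10 Gbit/s / (84 B * 8 bit/B) ≈ 14.88 Mpps per queue.)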
I'm redoing the experiments with a higher traffic rate and will report
back later.
William