[Intel-wired-lan] [net-next PATCH v2 2/2] e1000: bundle xdp xmit routines

Tom Herbert tom at herbertland.com
Sat Sep 10 03:12:52 UTC 2016


On Fri, Sep 9, 2016 at 6:40 PM, Alexei Starovoitov
<alexei.starovoitov at gmail.com> wrote:
> On Fri, Sep 09, 2016 at 06:19:56PM -0700, Tom Herbert wrote:
>> On Fri, Sep 9, 2016 at 6:12 PM, John Fastabend <john.fastabend at gmail.com> wrote:
>> > On 16-09-09 06:04 PM, Tom Herbert wrote:
>> >> On Fri, Sep 9, 2016 at 5:01 PM, John Fastabend <john.fastabend at gmail.com> wrote:
>> >>> On 16-09-09 04:44 PM, Tom Herbert wrote:
>> >>>> On Fri, Sep 9, 2016 at 2:29 PM, John Fastabend <john.fastabend at gmail.com> wrote:
>> >>>>> e1000 supports a single TX queue, so it is shared with the stack
>> >>>>> when XDP runs the XDP_TX action. This requires taking the xmit lock
>> >>>>> to ensure we don't corrupt the tx ring. To avoid taking and dropping
>> >>>>> the lock per packet, this patch adds a bundling implementation that
>> >>>>> submits a bundle of packets to the xmit routine.
>> >>>>>
>> >>>>> I tested this patch by running e1000 in a KVM guest over a tap
>> >>>>> device, using pktgen to generate traffic along with 'ping -f -l 100'.
>> >>>>>
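(As an aside, for anyone following along: the bundling pattern I read the
patch to be using is roughly the sketch below. This is not the patch
itself -- the bundle struct, e1000_xdp_flush_bundle() and the
e1000_xmit_raw_frame() signature are my guesses; only netdev_get_tx_queue()
and __netif_tx_lock()/__netif_tx_unlock() are the stock kernel API.)

/* in e1000_main.c: frames queued by XDP_TX during one NAPI poll */
#define E1000_XDP_BUNDLE_MAX 32         /* made-up bound */

struct e1000_xdp_bundle {
        void *data[E1000_XDP_BUNDLE_MAX];
        u16 len[E1000_XDP_BUNDLE_MAX];
        int count;
};

/* Flush everything queued during this poll with a single take/release
 * of the tx queue lock instead of one lock/unlock pair per packet.
 */
static void e1000_xdp_flush_bundle(struct e1000_adapter *adapter,
                                   struct e1000_xdp_bundle *b)
{
        struct net_device *dev = adapter->netdev;
        struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);
        int i;

        __netif_tx_lock(txq, smp_processor_id());
        for (i = 0; i < b->count; i++)
                e1000_xmit_raw_frame(adapter, b->data[i], b->len[i]);
        __netif_tx_unlock(txq);
        b->count = 0;
}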
>> >>>> Hi John,
>> >>>>
>> >>>> How does this interact with BQL on e1000?
>> >>>>
>> >>>> Tom
>> >>>>
>> >>>
>> >>> Let me check that I have the API correct. When we enqueue a packet to
>> >>> be sent we must issue a netdev_sent_queue() call, and then, when the
>> >>> hardware completes the transmission, issue a netdev_completed_queue().
>> >>>
>> >>> The patch attached here missed a few things though.
>> >>>
>> >>> But it looks like I just need to call netdev_sent_queue() from the
>> >>> e1000_xmit_raw_frame() routine and then let the tx completion logic
>> >>> kick in, which will call netdev_completed_queue() correctly.
>> >>>
>> >>> I'll need to add a check for the queue state as well. So if I do these
>> >>> three things,
>> >>>
>> >>>         check __QUEUE_STATE_XOFF before sending
>> >>>         netdev_sent_queue() on XDP_TX
>> >>>         netdev_completed_queue() on tx completion
>> >>>
>> >>> it should work, agreed? Now, should we do this even when XDP owns the
>> >>> queue? Or is this purely an issue with sharing the queue between
>> >>> XDP and the stack?
>> >>>
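To spell those three steps out, something like the sketch below is what I
would expect (a minimal sketch only: where the hooks live in the e1000
paths and the e1000_xmit_raw_frame() shape are my assumptions; the
netif_/netdev_ calls are the stock queue-state and BQL API):

/* 1. honor __QUEUE_STATE_XOFF before touching the ring */
static int e1000_xmit_raw_frame(struct e1000_adapter *adapter,
                                void *data, unsigned int len)
{
        struct net_device *dev = adapter->netdev;

        if (netif_queue_stopped(dev))
                return -EBUSY;  /* XDP_TX has no qdisc to fall back on */

        /* ... set up the descriptor and bump the tail as today ... */

        /* 2. account the bytes toward BQL for the XDP_TX frame */
        netdev_sent_queue(dev, len);
        return 0;
}

/* 3. nothing new needed on completion: e1000_clean_tx_irq() already
 *    calls netdev_completed_queue() for the bytes it reclaims, which
 *    covers XDP_TX frames too once step 2 has counted them.
 */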
>> >> But what is the action for XDP_TX if the queue is stopped? There is no
>> >> qdisc to provide backpressure in the XDP path. Would we just start
>> >> dropping packets then?
>> >
>> > Yep, that is what the patch does: if there is any sort of error, packets
>> > get dropped on the floor. I don't think there is anything else that
>> > can be done.
>> >
>> That probably means that the stack will always win out under load.
>> Trying to use the same queue where half of the packets are well
>> managed by a qdisc and half aren't is going to leave someone unhappy.
>> Maybe in this case, where we have to share the queue, we can
>> allocate an skb on returning XDP_TX and send it through the normal
>> qdisc for the device.
>
> I wouldn't go to such extremes for e1k.
> The only reason to have xdp in e1k is to use it for testing
> of xdp programs. Nothing else. e1k is, best case, a 1Gbps adapter.

I imagine someone may want this for non-forwarding use cases like
early drop for DoS mitigation. Regardless of the use case, I don't
think we can break the fundamental assumptions made for qdiscs or the
rest of the transmit path. If XDP must transmit on a queue shared with
the stack, we need to abide by the stack's rules for transmitting on
the queue -- which would mean allocating an skbuff and going through
the qdisc (which really shouldn't be difficult to implement).
Emulating various functions of the stack in the XDP TX path, like this
patch seems to be doing for xmit_more, potentially gets us into a
whack-a-mole situation trying to keep things coherent.
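
To make that concrete, the fallback I have in mind is roughly the sketch
below. It assumes the XDP buffer sits in a page with headroom before the
frame and room for the shared info after it, as build_skb() requires;
xdp_tx_via_stack() is a made-up name, not anything in the patch:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static int xdp_tx_via_stack(struct net_device *dev, void *hard_start,
                            unsigned int headroom, unsigned int len,
                            unsigned int truesize)
{
        struct sk_buff *skb;

        /* wrap the existing buffer rather than copying it */
        skb = build_skb(hard_start, truesize);
        if (!skb)
                return -ENOMEM;

        skb_reserve(skb, headroom);
        skb_put(skb, len);
        skb->dev = dev;

        /* the frame already carries its Ethernet header; hand it to the
         * qdisc like any locally generated packet so BQL, queue stop and
         * fairness with the stack all keep working
         */
        return dev_queue_xmit(skb);
}

The obvious cost is an skb allocation per XDP_TX packet, but on a part
that tops out at 1Gbps that seems like a fair trade for keeping the
shared queue coherent.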

> Existing stack with skb is perfectly fine as it is.
> No need to do recycling, batching or any other complex things.
> xdp for e1k cannot be used as an example for other drivers either,
> since there is only one tx ring and any high-performance adapter
> has more, which makes the driver support quite different.
>

