[Intel-wired-lan] [next PATCH v3 11/15] i40e/i40evf: Enable support for SKB_GSO_UDP_TUNNEL_CSUM
Alexander Duyck
alexander.duyck at gmail.com
Wed Feb 3 00:06:56 UTC 2016
On Tue, Feb 2, 2016 at 2:49 PM, Jesse Brandeburg
<jesse.brandeburg at intel.com> wrote:
> On Sun, 24 Jan 2016 21:17:29 -0800
> Alexander Duyck <aduyck at mirantis.com> wrote:
>
>> The XL722 has support for providing the outer UDP tunnel checksum on
>
> X722, not XL722
>
>> transmits. Make use of this feature to support segmenting UDP tunnels with
>> outer checksums enabled.
>>
>> Signed-off-by: Alexander Duyck <aduyck at mirantis.com>
>> ---
>
> ...
>
>> + if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM) {
>> + /* determine offset of outer transport header */
>> + l4_offset = l4.hdr - skb->data;
>> +
>> + /* remove payload length from outer checksum */
>> + paylen = (__force u16)l4.udp->check;
>> + paylen += ntohs(1) * (u16)~(skb->len - l4_offset);
>
> Can we get a comment about how this is supposed to work? Doesn't it
> have endian problems? I understand these lines remove the payload
> length from the checksum by taking the payload length, inverting it,
> and adding it into the checksum, which is then folded and written
> back below, but it is really hard to follow why you use ntohs(1)
> without some explanation.
The logic is actually based off of csum_tcpudp_nofold
(http://lxr.free-electrons.com/source/lib/checksum.c#L193). The byte
swapping is taken care of by the combination of the ntohs(1) multiply
and the csum_fold call below. On little-endian ntohs(1) evaluates to
0x0100, so the multiply is really a shift left by 8: the low byte of
~(skb->len - l4_offset) is pushed into the high byte, and the high
byte spills into the upper 16 bits of the 32-bit accumulator. When
csum_fold runs it shifts those upper 16 bits back down by 16 and adds
them to the lower 16, which completes the other half of the rotation,
so the length ends up byte-swapped into network order as it is folded
into the checksum. On big-endian ntohs(1) is just 1 and no swap is
needed. There is a small standalone sketch of the math at the bottom
of this mail.
>> + l4.udp->check = ~csum_fold((__force __wsum)paylen);
>> + }
>> +
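To make the arithmetic easier to follow, here is a rough userspace
sketch of the same trick. This is not the driver code; fold16() just
mimics ~csum_fold(), and the starting checksum value is made up. The
idea is that the raw-byte-order path using the ntohs(1) multiply lands
on the same folded result as first converting the field to host order.

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

/* Fold a 32-bit ones' complement accumulator down to 16 bits,
 * same as ~csum_fold() in the kernel. */
static uint16_t fold16(uint32_t sum)
{
	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

int main(void)
{
	/* Pretend the outer UDP check field holds the folded
	 * pseudo-header sum 0x1234, stored in network byte order. */
	uint16_t check_net = htons(0x1234);
	uint32_t len = 1400;	/* host-order payload length to remove */

	/* Reference: convert to host order first.  Adding ~len
	 * subtracts len in ones' complement arithmetic. */
	uint16_t want = fold16(ntohs(check_net) + (uint16_t)~len);

	/* Trick from the patch: leave the field in raw byte order and
	 * let the ntohs(1) multiply pre-shift ~len so that the fold
	 * finishes the byte swap. */
	uint32_t paylen = check_net;
	paylen += ntohs(1) * (uint16_t)~len;
	uint16_t got = ntohs(fold16(paylen));	/* back to host order */

	printf("reference 0x%04x, shifted+folded 0x%04x\n",
	       (unsigned)want, (unsigned)got);
	return 0;
}

On either a little-endian or big-endian machine the two values should
come out the same, which is why the driver code does not need an
explicit htons() on the length.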