[Intel-wired-lan] [PATCH next-queue v1 0/3] igc: Add support for multiple TX tstamp requests
Vladimir Oltean
vladimir.oltean at nxp.com
Tue Feb 28 18:27:07 UTC 2023
On Mon, Feb 27, 2023 at 09:45:31PM -0800, Vinicius Costa Gomes wrote:
> Patch 3 - More of an optimization. Use the ptp_aux_work kthread to do
> the work, and also try to do the work "inline" if the timestamp
> is already available. Suggested by Vladimir Oltean and Kurt
> Kanzenbach.
>
> Evaluation
> ----------
>
> To do the evaluation I am using a simple application that sends
> packets (and waits for the timestamp to be received before sending the
> next packet) and takes two measurements:
If the application never generates multiple requests in flight, then
this evaluation is only testing patch 3 (and patches 1 and 2 only to the
extent that they don't cause a regression), right?
> 1. from the HW timestamp value to the time the application
> retrieves the timestamp (called "HW to Timestamp");
> 2. from just before the sendto() being called in the application to
> the time the application retrieves the timestamp (called "Send to
> Timestamp"). I think this measurement is useful to make sure that
> the total time to send a packet and retrieve its timestamp hasn't
> degraded.
>
> (all tests were done for 1M packets, and times are in nanoseconds)
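For context, and to make the "multiple requests in flight" point above
concrete, the per-packet sequence I'd expect such an application to
follow is roughly the sketch below (my guess, not necessarily your
actual test app; note that SOF_TIMESTAMPING_OPT_ID is what would let a
future variant keep several requests in flight and still match each
timestamp to its packet):

/* Minimal sketch of per-packet TX hardware timestamping from user
 * space. Assumes hardware timestamping was already enabled on the
 * interface (SIOCSHWTSTAMP, e.g. "hwstamp_ctl -i eth0 -t 1") and that
 * "fd" is a connected UDP socket.
 */
#include <linux/errqueue.h>
#include <linux/net_tstamp.h>
#include <poll.h>
#include <sys/socket.h>
#include <time.h>

#ifndef SCM_TIMESTAMPING
#define SCM_TIMESTAMPING SO_TIMESTAMPING
#endif

static int enable_tx_tstamp(int fd)
{
	int flags = SOF_TIMESTAMPING_TX_HARDWARE |
		    SOF_TIMESTAMPING_RAW_HARDWARE |
		    SOF_TIMESTAMPING_OPT_ID; /* match tstamps to packets */

	return setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
			  &flags, sizeof(flags));
}

/* Wait for the TX timestamp of a previously sent packet to appear on
 * the socket error queue, and return the raw hardware time in *hwts.
 */
static int get_tx_tstamp(int fd, struct timespec *hwts)
{
	char ctrl[512];
	struct msghdr msg = {
		.msg_control = ctrl,
		.msg_controllen = sizeof(ctrl),
	};
	struct pollfd pfd = { .fd = fd };
	struct cmsghdr *cm;

	/* errqueue readiness is signaled through POLLERR in revents,
	 * no requested events needed
	 */
	if (poll(&pfd, 1, 1000) <= 0)
		return -1;
	if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
		return -1;

	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		if (cm->cmsg_level == SOL_SOCKET &&
		    cm->cmsg_type == SCM_TIMESTAMPING) {
			struct scm_timestamping *tss = (void *)CMSG_DATA(cm);

			*hwts = tss->ts[2]; /* ts[2] = raw HW timestamp */
			return 0;
		}
	}
	return -1;
}

The "Send to Timestamp" clock would then be read just before sendto()
and again right after get_tx_tstamp() returns.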
>
> Before:
>
> HW to Timestamp
> min: 9130
> max: 143183
What margin of error did phc2sys have here? Tens, hundreds, thousands of
ns, more? Was it a controlled variable? "HW to Timestamp" implies a
comparison of 2 times from 2 different time sources, kept in sync with
each other.
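As a quick sanity check of that error while phc2sys is running, reading
the PHC bracketed between two CLOCK_REALTIME reads bounds the residual
offset; a rough sketch (assuming /dev/ptp0 is the PHC of the igc port
under test):

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define CLOCKFD 3
#define FD_TO_CLOCKID(fd)	((~(clockid_t) (fd) << 3) | CLOCKFD)

static long long ts_delta(const struct timespec *a, const struct timespec *b)
{
	return (a->tv_sec - b->tv_sec) * 1000000000LL +
	       (a->tv_nsec - b->tv_nsec);
}

int main(void)
{
	struct timespec sys1, phc, sys2;
	int fd = open("/dev/ptp0", O_RDONLY); /* assumed: the igc PHC */

	if (fd < 0)
		return 1;

	/* Bracket the PHC read between two system clock reads: the
	 * reported offset is accurate to within the bracket's width.
	 */
	clock_gettime(CLOCK_REALTIME, &sys1);
	clock_gettime(FD_TO_CLOCKID(fd), &phc);
	clock_gettime(CLOCK_REALTIME, &sys2);

	printf("phc - sys: %lld ns (+/- %lld ns)\n",
	       ts_delta(&phc, &sys1), ts_delta(&sys2, &sys1));

	close(fd);
	return 0;
}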
> percentile 99: 10379
> percentile 99.99: 11510
> Send to Timestamp
> min: 18431
> max: 196798
> percentile 99: 19937
> percentile 99.99: 26066
>
> After:
>
> HW to Timestamp
> min: 7933
> max: 31934
So the reduction of the max "HW to Timestamp" from 143 us to 32 us all
the way to user space is mostly due to the inline processing of the TX
timestamp, within the hardirq handler, right? Can you measure how much
of it is due to that, and how much is due to the PTP kthread (the
simplest way would be to keep the kthread but remove the inline
processing)? How many reschedules of the kthread are there per TX
timestamp? Even a single set of 4 numbers, denoting the maximum number
of reschedules per timestamp request, would be useful information.
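To be clear about what I'm proposing to measure, my mental model of
patch 3 is the pattern below; igc_tsync_interrupt() does exist in the
driver, but the body here is my sketch, and igc_tstamp_ready() /
igc_tstamp_complete() are made-up helper names, not the actual patch
contents:

#include <linux/jiffies.h>
#include <linux/ptp_clock_kernel.h>

#include "igc.h"

/* Sketch only: the interrupt handler tries to complete the timestamp
 * inline, and defers to the PTP aux kthread when it isn't ready yet.
 */
static void igc_tsync_interrupt(struct igc_adapter *adapter)
{
	if (igc_tstamp_ready(adapter))
		igc_tstamp_complete(adapter); /* fast path, in hardirq */
	else
		ptp_schedule_worker(adapter->ptp_clock, 0);
}

/* do_aux_work callback: returns the delay (in jiffies) until the next
 * run, or a negative value when no rescheduling is needed.
 */
static long igc_ptp_aux_work(struct ptp_clock_info *ptp)
{
	struct igc_adapter *adapter = container_of(ptp, struct igc_adapter,
						   ptp_caps);

	if (!igc_tstamp_ready(adapter))
		return msecs_to_jiffies(1); /* not ready, reschedule */

	igc_tstamp_complete(adapter);
	return -1;
}

Removing the inline igc_tstamp_complete() call from the hardirq path
(i.e. always going through ptp_schedule_worker()) would isolate the
kthread's contribution, and a counter incremented on the reschedule
branch would give the statistics I'm asking about.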
> percentile 99: 8690
> percentile 99.99: 10598
> Send to Timestamp
> min: 17291
> max: 46327
> percentile 99: 18268
> percentile 99.99: 21575
>
> The minimum times are not that different
Right, probably because the time to do a context switch to user space dominates.
> , but we can see a big improvement in the 'maximum' time.