[Intel-wired-lan] [PATCH v3 1/4] i40e: Sync DMA region prior to skbuff allocation

Björn Töpel bjorn.topel at gmail.com
Tue Dec 13 13:40:04 UTC 2016


From: Björn Töpel <bjorn.topel at intel.com>

This patch prepares i40e_fetch_rx_buffer() for upcoming XDP support,
where the device buffers need to be accessed prior to skbuff
allocation.

Signed-off-by: Björn Töpel <bjorn.topel at intel.com>
---
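To make the motivation concrete, here is a rough sketch of the intended
ordering after this change. This is not the driver source: xdp_peek() and
build_skb_from_va() are hypothetical placeholders for the pre-skb consumer
and the skb construction that the real code performs.

/* Rough sketch only -- xdp_peek() and build_skb_from_va() are
 * hypothetical, shown to illustrate why the sync must come first.
 */
static struct sk_buff *fetch_rx_buffer_sketch(struct i40e_ring *rx_ring,
					      struct i40e_rx_buffer *rx_buffer,
					      unsigned int size)
{
	void *va;

	/* Sync for CPU access before anything touches the packet data,
	 * so both an XDP-style hook and the skb path see valid bytes.
	 */
	dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma,
				      rx_buffer->page_offset, size,
				      DMA_FROM_DEVICE);

	va = page_address(rx_buffer->page) + rx_buffer->page_offset;

	if (xdp_peek(rx_ring, va, size) == XDP_DROP)	/* hypothetical hook */
		return NULL;				/* packet consumed/dropped */

	return build_skb_from_va(rx_ring, va, size);	/* hypothetical helper */
}

With the sync done up front, adding an XDP hook later is purely additive;
no further reordering of the DMA handling is needed.
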
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index e88e335c34ef..7dacde6608d1 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1646,6 +1646,13 @@ struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
 	page = rx_buffer->page;
 	prefetchw(page);
 
+	/* we are reusing so sync this buffer for CPU use */
+	dma_sync_single_range_for_cpu(rx_ring->dev,
+				      rx_buffer->dma,
+				      rx_buffer->page_offset,
+				      size,
+				      DMA_FROM_DEVICE);
+
 	if (likely(!skb)) {
 		void *page_addr = page_address(page) + rx_buffer->page_offset;
 
@@ -1671,13 +1678,6 @@ struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
 		prefetchw(skb->data);
 	}
 
-	/* we are reusing so sync this buffer for CPU use */
-	dma_sync_single_range_for_cpu(rx_ring->dev,
-				      rx_buffer->dma,
-				      rx_buffer->page_offset,
-				      size,
-				      DMA_FROM_DEVICE);
-
 	/* pull page into skb */
 	if (i40e_add_rx_frag(rx_ring, rx_buffer, size, skb)) {
 		/* hand second half of page back to the ring */
-- 
2.9.3


