net: netsec: Sync dma for device on buffer allocation
author: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Thu, 4 Jul 2019 14:11:09 +0000 (17:11 +0300)
committer: David S. Miller <davem@davemloft.net>
Fri, 5 Jul 2019 22:41:24 +0000 (15:41 -0700)
Quoting Arnd,

We have to do a sync_single_for_device /somewhere/ before the
buffer is given to the device. On a non-cache-coherent machine with
a write-back cache, there may be dirty cache lines that get written back
after the device DMAs data into it (e.g. from a previous memset
from before the buffer got freed), so you absolutely need to flush any
dirty cache lines on it first.

Since coherency is configurable on this device, make sure we cover
all configurations by explicitly syncing the allocated buffer for the
device before refilling its descriptors.

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
drivers/net/ethernet/socionext/netsec.c

index d8d640b011191e284082de4140a4b8f6097c85dd..f6e261c6a0595a360ab16cb463343d1d3dc71946 100644 (file)
@@ -726,6 +726,7 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv,
 {
 
        struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+       enum dma_data_direction dma_dir;
        struct page *page;
 
        page = page_pool_dev_alloc_pages(dring->page_pool);
@@ -741,6 +742,10 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv,
         * cases and reserve enough space for headroom + skb_shared_info
         */
        *desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;
+       dma_dir = page_pool_get_dma_dir(dring->page_pool);
+       dma_sync_single_for_device(priv->dev,
+                                  *dma_handle - NETSEC_RXBUF_HEADROOM,
+                                  PAGE_SIZE, dma_dir);
 
        return page_address(page);
 }
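
For context, with the hunks applied the allocation path reads roughly as
follows. This is a sketch, not the full upstream function: error handling
and the exact *dma_handle computation are assumptions inferred from the
headroom offset used in the sync call (page_pool maps the page at
allocation time, and the handle is presumed to point past
NETSEC_RXBUF_HEADROOM):

```c
static void *netsec_alloc_rx_data(struct netsec_priv *priv,
				  dma_addr_t *dma_handle, u16 *desc_len)
{
	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
	enum dma_data_direction dma_dir;
	struct page *page;

	page = page_pool_dev_alloc_pages(dring->page_pool);
	if (!page)
		return NULL;

	/* Assumed: page_pool maps the page when it is allocated, and the
	 * DMA handle handed to the descriptor skips the reserved headroom.
	 */
	*dma_handle = page_pool_get_dma_addr(page) + NETSEC_RXBUF_HEADROOM;
	*desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;

	/* The fix: flush any dirty cache lines covering the whole page
	 * before the device DMAs into it. The sync direction comes from
	 * the pool, so both coherent and non-coherent configurations are
	 * handled; the offset rewinds past the headroom so the sync spans
	 * the entire page.
	 */
	dma_dir = page_pool_get_dma_dir(dring->page_pool);
	dma_sync_single_for_device(priv->dev,
				   *dma_handle - NETSEC_RXBUF_HEADROOM,
				   PAGE_SIZE, dma_dir);

	return page_address(page);
}
```

Note that syncing the full PAGE_SIZE starting at the page's base address
(handle minus headroom) is what guarantees no stale dirty line anywhere in
the buffer can be written back over DMA'd data.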