nvme: add support for pre-mapped IO buffers
author	Jens Axboe <axboe@kernel.dk>
Fri, 17 Dec 2021 19:33:56 +0000 (12:33 -0700)
committer	Jens Axboe <axboe@kernel.dk>
Sun, 27 Mar 2022 21:20:07 +0000 (15:20 -0600)
commit	03c40e57f86a1ee75af742e3be28e86fe7481759
tree	864550c139381f748a43e60d033e7549bf77b028
parent	126eebb0b4015cf5752f4cefdfa280a837848225
nvme: add support for pre-mapped IO buffers

Normally when an IO comes in, NVMe will DMA map the page(s) and set up
the PRP or SG list. When the IO completes, the mappings are undone. This
per-IO map and unmap obviously takes time.
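
For context, the conventional per-IO path follows the pattern sketched
below: build a scatterlist for the request, DMA map it at submission,
and unmap it again at completion. This is an illustrative outline only;
the struct and function names are made up and merely stand in for what
nvme_map_data()/nvme_unmap_data() do in drivers/nvme/host/pci.c.

	/* Illustrative per-IO mapping pattern; not the actual NVMe code. */
	#include <linux/blkdev.h>
	#include <linux/blk-mq.h>
	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>

	#define DEMO_MAX_SEGS	128	/* arbitrary segment limit for the sketch */

	struct demo_iod {
		struct scatterlist sg[DEMO_MAX_SEGS];
		int nents;
	};

	static int demo_map_data(struct device *dev, struct request *rq,
				 struct demo_iod *iod)
	{
		/* Build a scatterlist from the request's bios. */
		sg_init_table(iod->sg, DEMO_MAX_SEGS);
		iod->nents = blk_rq_map_sg(rq->q, rq, iod->sg);

		/* DMA map it, for every single IO. */
		if (!dma_map_sg(dev, iod->sg, iod->nents, rq_dma_dir(rq)))
			return -EIO;

		/* PRP or SGL entries are then built from the mapped addresses. */
		return 0;
	}

	static void demo_unmap_data(struct device *dev, struct request *rq,
				    struct demo_iod *iod)
	{
		/* ...and the mapping is torn down when the IO completes. */
		dma_unmap_sg(dev, iod->sg, iod->nents, rq_dma_dir(rq));
	}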

Add support for mapping buffers up front by filling in the
mq_ops->dma_map() handler. Ownership of the mappings goes to the
caller, and the caller is responsible for tearing them down.
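
The exact ->dma_map() interface is not reproduced in this message, so
the sketch below is only a hypothetical illustration of the ownership
model described above: the buffer is mapped once, the caller keeps the
resulting mapping, submissions reuse the pre-mapped addresses, and the
caller unmaps when it is done. All names here (demo_premap,
demo_dma_map, demo_dma_unmap) are invented for the example.

	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>
	#include <linux/slab.h>

	/* Hypothetical cookie the caller holds on to; not a real kernel type. */
	struct demo_premap {
		struct device *dev;
		struct scatterlist *sg;
		int nents;
	};

	/* Map the caller's buffer once, up front. */
	static struct demo_premap *demo_dma_map(struct device *dev,
						struct scatterlist *sg,
						int nents)
	{
		struct demo_premap *map;

		map = kzalloc(sizeof(*map), GFP_KERNEL);
		if (!map)
			return NULL;

		if (!dma_map_sg(dev, sg, nents, DMA_BIDIRECTIONAL)) {
			kfree(map);
			return NULL;
		}

		map->dev = dev;
		map->sg = sg;
		map->nents = nents;
		return map;
	}

	/*
	 * Submission can now build PRPs/SGLs straight from sg_dma_address()
	 * of the already-mapped entries; there is no dma_map_sg() or
	 * dma_unmap_sg() left in the per-IO hot path.
	 */

	/* Ownership stays with the caller, which tears the mapping down. */
	static void demo_dma_unmap(struct demo_premap *map)
	{
		dma_unmap_sg(map->dev, map->sg, map->nents, DMA_BIDIRECTIONAL);
		kfree(map);
	}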

This is good for a ~4% improvement in peak IOPS without an IOMMU, and
considerably more with an IOMMU enabled, where per-IO mapping and
unmapping is more expensive.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
drivers/nvme/host/pci.c