nvme: call nvme_complete_rq when nvmf_check_ready fails for mpath I/O
authorJames Smart <jsmart2021@gmail.com>
Thu, 27 Sep 2018 23:58:54 +0000 (16:58 -0700)
committerChristoph Hellwig <hch@lst.de>
Mon, 1 Oct 2018 21:16:13 +0000 (14:16 -0700)
When an I/O is rejected by nvmf_check_ready() due to validation of the
controller state, nvmf_fail_nonready_command() will normally return
BLK_STS_RESOURCE to requeue and retry.  However, if the controller is
dying or the I/O is marked for NVMe multipath, the I/O is failed so that
the controller can terminate or so that the I/O can be issued on a
different path.  Unfortunately, as this reject point occurs before the
transport has accepted the command, blk-mq ends up completing the I/O
itself and never calls nvme_complete_rq(), which is where multipath may
preserve or re-route the I/O.  The end result is that the device user
sees an EIO error.
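
For illustration, this reject point sits at the top of each fabrics
transport's ->queue_rq() handler, before the request has been started.
A minimal sketch (nvme_xxx_queue_rq() is a stand-in for the real
per-transport handlers such as nvme_rdma_queue_rq(); the ctrl and
queue_ready lookups from hctx and all other setup are omitted):

  static blk_status_t nvme_xxx_queue_rq(struct blk_mq_hw_ctx *hctx,
                                        const struct blk_mq_queue_data *bd)
  {
          struct request *rq = bd->rq;

          /* reject point: queue or controller not ready for this command */
          if (!nvmf_check_ready(ctrl, rq, queue_ready))
                  return nvmf_fail_nonready_command(ctrl, rq);

          blk_mq_start_request(rq);
          /* ... build and send the command on the transport ... */
          return BLK_STS_OK;
  }

Since the old code returned BLK_STS_IOERR from this point, blk-mq ended
the request itself and nvme_complete_rq() was never entered.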

Example: single-path connectivity, the controller is under load, and a
reset is induced.  An I/O is received:

  a) while the reset state has been set but the queues have yet to be
     stopped; or
  b) after queues are started (at end of reset) but before the reconnect
     has completed.

The I/O finishes with an EIO status.

This patch makes the following changes:

  - Adds the HOST_PATH_ERROR pathing status from TP4028.
  - Modifies the reject point such that it appears to queue successfully,
    but actually completes the I/O with the new pathing status and calls
    nvme_complete_rq().
  - nvme_complete_rq() recognizes the new status and, via the multipath
    failover code, avoids resetting the controller (a reset was likely
    already done in order to produce this status) and clears the current
    path that errored.  This allows the next command (retry or new
    command) to select a new path if there is one (see the sketch after
    this list).
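
For context, a minimal sketch of the completion path the fix routes the
I/O into, based on this era's nvme_complete_rq() in
drivers/nvme/host/core.c (tracing and the non-multipath requeue leg are
elided):

  void nvme_complete_rq(struct request *req)
  {
          blk_status_t status = nvme_error_status(req);

          if (unlikely(status != BLK_STS_OK && nvme_req_needs_retry(req))) {
                  /* multipath I/O that failed on a path: try another one */
                  if ((req->cmd_flags & REQ_NVME_MPATH) &&
                      blk_path_error(status)) {
                          nvme_failover_req(req);
                          return;
                  }
                  /* ... non-multipath requeue/retry handling ... */
          }
          blk_mq_end_request(req, status);
  }

NVME_SC_HOST_PATH_ERROR carries no DNR bit and counts as a path error
per blk_path_error(), so a multipath request completed with it reaches
nvme_failover_req(), where the new case below clears the current path.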

Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
drivers/nvme/host/fabrics.c
drivers/nvme/host/multipath.c
include/linux/nvme.h

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 206d63cb1afc841507ab60edb898a35177eb5c22..bcd09d3a44dad5cf0d8d99de5c96f25a1aa1ff84 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -552,8 +552,11 @@ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
            ctrl->state != NVME_CTRL_DEAD &&
            !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
                return BLK_STS_RESOURCE;
-       nvme_req(rq)->status = NVME_SC_ABORT_REQ;
-       return BLK_STS_IOERR;
+
+       nvme_req(rq)->status = NVME_SC_HOST_PATH_ERROR;
+       blk_mq_start_request(rq);
+       nvme_complete_rq(rq);
+       return BLK_STS_OK;
 }
 EXPORT_SYMBOL_GPL(nvmf_fail_nonready_command);
 
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index bfbc6d5b1d93beb20d3a8c5d57c13891a345ff27..ac16093a7928dfc305fe521c5fa5a94b2c664774 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -77,6 +77,13 @@ void nvme_failover_req(struct request *req)
                        queue_work(nvme_wq, &ns->ctrl->ana_work);
                }
                break;
+       case NVME_SC_HOST_PATH_ERROR:
+               /*
+                * Temporary transport disruption in talking to the controller.
+                * Try to send on a new path.
+                */
+               nvme_mpath_clear_current_path(ns);
+               break;
        default:
                /*
                 * Reset the controller for any non-ANA error as we don't know
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 68e91ef5494c11783ebc32057fd99ee29bca47b2..818dbe9331be3c99d62078a5c0b95f76c5133f84 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -1241,6 +1241,7 @@ enum {
        NVME_SC_ANA_PERSISTENT_LOSS     = 0x301,
        NVME_SC_ANA_INACCESSIBLE        = 0x302,
        NVME_SC_ANA_TRANSITION          = 0x303,
+       NVME_SC_HOST_PATH_ERROR         = 0x370,
 
        NVME_SC_DNR                     = 0x4000,
 };