IB/hfi1: Fix deferred ack race with qp destroy
author    Mike Marciniszyn <mike.marciniszyn@intel.com>
          Sun, 25 Sep 2016 14:41:46 +0000 (07:41 -0700)
committer Doug Ledford <dledford@redhat.com>
          Sun, 2 Oct 2016 12:42:14 +0000 (08:42 -0400)
There is a bug in the deferred ack code that causes a race with the
destroy of a QP.

A packet causes a deferred ack to be pended by putting the QP
into an rcd queue.
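
For reference, the pend side looks roughly like the sketch below.  It is
simplified and assumes the rc_defered_ack() helper in rc.c together with
the qp->rspwait / rcd->qp_wait_list linkage and the RVT_R_RSP_NAK flag;
exact names can differ between kernel versions.

  static inline void rc_defered_ack(struct hfi1_ctxtdata *rcd,
                                    struct rvt_qp *qp)
  {
          if (list_empty(&qp->rspwait)) {
                  qp->r_flags |= RVT_R_RSP_NAK;
                  /* hold a reference while the QP sits on the rcd queue */
                  atomic_inc(&qp->refcount);
                  list_add_tail(&qp->rspwait, &rcd->qp_wait_list);
          }
  }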

On return from driver interrupt processing, that rcd queue of QPs is
processed and a direct send of the ack is attempted.  At this point no
locks are held, and the above QP could now be put in the reset state by
the qp destroy logic.  A refcount protects the QP while it is in the
rcd queue, so it isn't going anywhere yet.
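
The queue-draining side in driver.c is roughly the loop sketched below
(a simplified sketch assuming a process_rcv_qp_work()-style walker and
the same field names as above; not a verbatim copy of the driver code):

  static void process_rcv_qp_work(struct hfi1_ctxtdata *rcd)
  {
          struct rvt_qp *qp, *nqp;

          /* No qp->s_lock is held while walking the wait list. */
          list_for_each_entry_safe(qp, nqp, &rcd->qp_wait_list, rspwait) {
                  list_del_init(&qp->rspwait);
                  if (qp->r_flags & RVT_R_RSP_NAK) {
                          qp->r_flags &= ~RVT_R_RSP_NAK;
                          /* direct send; may fall back to queue_ack: */
                          hfi1_send_rc_ack(rcd, qp, 0);
                  }
                  /* the reference taken at queue time is dropped here;
                   * see the refcount sketch further below
                   */
                  if (atomic_dec_and_test(&qp->refcount))
                          wake_up(&qp->wait);
          }
  }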

If the direct send fails to allocate a pio buffer,
hfi1_schedule_send() is called to trigger sending an ack from the
send engine. There is no state test in that code path.
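
Before this patch, the queue_ack fallback in hfi1_send_rc_ack()
scheduled the send engine unconditionally; nothing re-checks the QP
state under the s_lock.  Reconstructed from the hunk below (elided
lines marked):

  queue_ack:
          this_cpu_inc(*ibp->rvp.rc_qacks);
          spin_lock_irqsave(&qp->s_lock, flags);
          qp->s_flags |= RVT_S_ACK_PENDING | RVT_S_RESP_PENDING;
          qp->s_nak_state = qp->r_nak_state;
          qp->s_ack_psn = qp->r_ack_psn;
          /* ... */
          /* Schedule the send tasklet -- even for a QP already reset. */
          hfi1_schedule_send(qp);
          spin_unlock_irqrestore(&qp->s_lock, flags);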

The refcount is then dropped by the driver.c caller, potentially
allowing the qp destroy to continue from its refcount wait in
parallel with the workqueue scheduling of the qp.
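
The two sides of that window are, in rough outline, as below.  This is
a sketch assuming the qp->refcount / qp->wait pairing and the
rvt_destroy_qp()/rvt_reset_qp() flow in rdmavt; it is simplified, not a
verbatim copy.

  /* driver.c caller, after the ack was queued/scheduled above */
  if (atomic_dec_and_test(&qp->refcount))
          wake_up(&qp->wait);

  /* rvt_destroy_qp(), simplified: move the QP to the reset state,
   * then wait for the last reference before tearing it down.
   */
  spin_lock_irq(&qp->r_lock);
  spin_lock(&qp->s_lock);
  rvt_reset_qp(rdi, qp, ibqp->qp_type);
  spin_unlock(&qp->s_lock);
  spin_unlock_irq(&qp->r_lock);

  wait_event(qp->wait, !atomic_read(&qp->refcount));

With the state test added below, the queue_ack path bails out once the
QP has left a state with RVT_PROCESS_RECV_OK set, so a reset QP is no
longer handed to hfi1_schedule_send().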

Cc: stable@vger.kernel.org
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
drivers/infiniband/hw/hfi1/rc.c

index d32f0c86a6234e88aab71e83893cb44033196561..e9623d0166df888afbf3987e51779d48b946cc48 100644
@@ -926,8 +926,10 @@ void hfi1_send_rc_ack(struct hfi1_ctxtdata *rcd, struct rvt_qp *qp,
        return;
 
 queue_ack:
-       this_cpu_inc(*ibp->rvp.rc_qacks);
        spin_lock_irqsave(&qp->s_lock, flags);
+       if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK))
+               goto unlock;
+       this_cpu_inc(*ibp->rvp.rc_qacks);
        qp->s_flags |= RVT_S_ACK_PENDING | RVT_S_RESP_PENDING;
        qp->s_nak_state = qp->r_nak_state;
        qp->s_ack_psn = qp->r_ack_psn;
@@ -936,6 +938,7 @@ queue_ack:
 
        /* Schedule the send tasklet. */
        hfi1_schedule_send(qp);
+unlock:
        spin_unlock_irqrestore(&qp->s_lock, flags);
 }