vhost: Use virtqueue mutex for swapping worker
Author:     Mike Christie <michael.christie@oracle.com>
AuthorDate: Sat, 16 Mar 2024 00:47:04 +0000 (19:47 -0500)
Commit:     Michael S. Tsirkin <mst@redhat.com>
CommitDate: Wed, 22 May 2024 12:31:15 +0000 (08:31 -0400)
__vhost_vq_attach_worker uses the vhost_dev mutex to serialize the
swapping of a virtqueue's worker. This was done for simplicity because
we are already holding that mutex.

In the next patches, where the worker can be killed while in use, we need
finer grained locking because some drivers will hold the vhost_dev mutex
while flushing. In addition, the SIGKILL handler added in those patches
will need to be able to swap workers (set the current one to NULL), kill
queued works, and stop new flushes while flushes are in progress.

To prepare for that, switch __vhost_vq_attach_worker to using the
virtqueue mutex for swapping workers instead of the vhost_dev one.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20240316004707.45557-7-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 45aa83880b2d2d020c3008127c1c7912b22aacb4..245133171a223b3a2429f1b4046c4e36f6bc04de 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -652,16 +652,22 @@ static void __vhost_vq_attach_worker(struct vhost_virtqueue *vq,
 {
        struct vhost_worker *old_worker;
 
-       old_worker = rcu_dereference_check(vq->worker,
-                                          lockdep_is_held(&vq->dev->mutex));
-
        mutex_lock(&worker->mutex);
-       worker->attachment_cnt++;
-       mutex_unlock(&worker->mutex);
+       mutex_lock(&vq->mutex);
+
+       old_worker = rcu_dereference_check(vq->worker,
+                                          lockdep_is_held(&vq->mutex));
        rcu_assign_pointer(vq->worker, worker);
+       worker->attachment_cnt++;
 
-       if (!old_worker)
+       if (!old_worker) {
+               mutex_unlock(&vq->mutex);
+               mutex_unlock(&worker->mutex);
                return;
+       }
+       mutex_unlock(&vq->mutex);
+       mutex_unlock(&worker->mutex);
+
        /*
         * Take the worker mutex to make sure we see the work queued from
         * device wide flushes which doesn't use RCU for execution.
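For illustration only, below is a minimal userspace sketch of the attach
path after this patch. It is an analogy, not the kernel code: pthread
mutexes stand in for the kernel mutexes, a C11 atomic pointer stands in
for the RCU-protected vq->worker, and the names (vq_attach_worker,
struct worker, struct virtqueue) are hypothetical.

    /* Userspace analogy of __vhost_vq_attach_worker after this patch.
     * Lock order mirrors the hunk above: worker->mutex (outer), then
     * vq->mutex (inner); the pointer swap and counter update happen
     * under both.
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stddef.h>

    struct worker {
            pthread_mutex_t mutex;
            int attachment_cnt;
    };

    struct virtqueue {
            pthread_mutex_t mutex;
            _Atomic(struct worker *) worker; /* stands in for the RCU pointer */
    };

    static struct worker *vq_attach_worker(struct virtqueue *vq,
                                           struct worker *w)
    {
            struct worker *old;

            pthread_mutex_lock(&w->mutex);  /* outer: attach vs. flush */
            pthread_mutex_lock(&vq->mutex); /* inner: protects the swap */

            old = atomic_load(&vq->worker); /* ~ rcu_dereference_check() */
            atomic_store(&vq->worker, w);   /* ~ rcu_assign_pointer() */
            w->attachment_cnt++;

            pthread_mutex_unlock(&vq->mutex);
            pthread_mutex_unlock(&w->mutex);

            /* The kernel code goes on to flush and detach the old worker;
             * that part is truncated in the hunk above, so this sketch
             * just hands the old worker back to the caller.
             */
            return old;
    }

This ordering (worker mutex outside the virtqueue mutex) is what should
let the later SIGKILL handler hold the worker mutex to stop new flushes
while it swaps vq->worker to NULL under the virtqueue mutex.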