workqueue: lock cwq access in drain_workqueue
author	Thomas Tuttle <ttuttle@chromium.org>
Wed, 14 Sep 2011 23:22:28 +0000 (16:22 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
Thu, 15 Sep 2011 01:09:38 +0000 (18:09 -0700)
Take cwq->gcwq->lock to avoid a race between drain_workqueue() checking
that the workqueues are empty and cwq_dec_nr_in_flight() decrementing
and then re-incrementing nr_active when it activates a delayed work.
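
For illustration, a rough sketch of the interleaving, assuming the
cwq_dec_nr_in_flight()/cwq_activate_first_delayed() structure of this
kernel (condensed here); without the lock, the drainer's check can land
in the window between the decrement and the re-increment:

	/*
	 * worker CPU (holds gcwq->lock)        draining CPU (unlocked check)
	 *
	 * cwq_dec_nr_in_flight():
	 *   cwq->nr_active--;                  reads cwq->nr_active == 0
	 *   cwq_activate_first_delayed():
	 *     moves work off delayed_works     sees list_empty() == true
	 *     cwq->nr_active++;
	 *
	 * The drainer concludes the cwq is drained even though a delayed
	 * work was just activated, and destroy_workqueue() later trips
	 * the BUG_ON.
	 */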

We discovered this when a corner case in one of our drivers resulted in
us trying to destroy a workqueue in which the remaining work would
always requeue itself on the same workqueue.  We would hit this race
condition and trip the BUG_ON at workqueue.c:3080.
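
For reference, a minimal sketch of such a self-requeueing work item;
the names (example_wq, example_fn, example_work) are hypothetical and
not from the driver in question:

	#include <linux/workqueue.h>

	static struct workqueue_struct *example_wq;	/* hypothetical queue */

	static void example_fn(struct work_struct *work)
	{
		/* ... handle one unit of work ... */

		/* Unconditionally requeue ourselves on the same workqueue,
		 * so there is always either an active or a delayed work
		 * item while the queue is being drained for destruction. */
		queue_work(example_wq, work);
	}
	static DECLARE_WORK(example_work, example_fn);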

Signed-off-by: Thomas Tuttle <ttuttle@chromium.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
kernel/workqueue.c

index 25fb1b0e53faa2c0d008d37986ae495495482b96..1783aabc6128f3792b9a68d27481999c1211d783 100644 (file)
@@ -2412,8 +2412,13 @@ reflush:
 
        for_each_cwq_cpu(cpu, wq) {
                struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
+               bool drained;
 
-               if (!cwq->nr_active && list_empty(&cwq->delayed_works))
+               spin_lock_irq(&cwq->gcwq->lock);
+               drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
+               spin_unlock_irq(&cwq->gcwq->lock);
+
+               if (drained)
                        continue;
 
                if (++flush_cnt == 10 ||