md: avoid a possibility that a read error can wrongly propagate through md/raid1...
author NeilBrown <neilb@suse.de>
Thu, 10 May 2007 10:15:50 +0000 (03:15 -0700)
committer Linus Torvalds <torvalds@woody.linux-foundation.org>
Thu, 10 May 2007 16:26:53 +0000 (09:26 -0700)
When a raid1 has only one working drive, we want a read error to propagate up
to the filesystem, as there is no point in failing the last drive in an array.

Currently the code performing this check is racy.  If a write and a read are
both submitted to a device in a 2-drive raid1, and the write fails followed
by the read failing, the read will see that there is only one working drive
and will pass the failure up, even though the one working drive is actually
the *other* one.

So, tighten up the locking.
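
To make the difference concrete, here is a small userspace model of the two
checks (plain C with pthreads, not the kernel code; fail_drive(),
give_up_racy() and give_up_locked() are made-up names standing in for error()
and the retry decision in raid1_end_read_request()):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define RAID_DISKS 2

static pthread_mutex_t device_lock = PTHREAD_MUTEX_INITIALIZER;
static int degraded;                    /* drives known to have failed */
static bool faulty[RAID_DISKS];         /* per-drive Faulty flag */

/* A write error arrives on drive d (the role played by error()). */
static void fail_drive(int d)
{
        pthread_mutex_lock(&device_lock);
        degraded++;
        faulty[d] = true;               /* post-patch: set under the same lock */
        pthread_mutex_unlock(&device_lock);
}

/* Pre-patch style check: counts drives, never asks *which* drive failed. */
static bool give_up_racy(int mirror)
{
        return RAID_DISKS - degraded <= 1;
}

/* Post-patch style check: give up only if no other mirror could serve us. */
static bool give_up_locked(int mirror)
{
        bool ret;

        pthread_mutex_lock(&device_lock);
        ret = degraded == RAID_DISKS ||
              (degraded == RAID_DISKS - 1 && !faulty[mirror]);
        pthread_mutex_unlock(&device_lock);
        return ret;
}

int main(void)
{
        fail_drive(0);                  /* the write to drive 0 fails first */

        /* ...then the read that also went to drive 0 fails.            */
        printf("old check gives up: %d\n", give_up_racy(0));   /* 1: error goes up   */
        printf("new check gives up: %d\n", give_up_locked(0)); /* 0: retry on drive 1 */
        return 0;
}

With the write failure on drive 0 already counted, the old count-only check
gives up on the read even though drive 1 could still serve it; the locked
check sees that this mirror is the Faulty one and lets the read be retried.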

Signed-off-by: Neil Brown <neilb@suse.de>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
drivers/md/raid1.c

index 97ee870b265d866ffa818bdf83dc7f551cd40d7b..3a95cc5e029c932eaddf0b85df6d6b0f744c36b8 100644 (file)
@@ -271,21 +271,25 @@ static int raid1_end_read_request(struct bio *bio, unsigned int bytes_done, int
         */
        update_head_pos(mirror, r1_bio);
 
-       if (uptodate || (conf->raid_disks - conf->mddev->degraded) <= 1) {
-               /*
-                * Set R1BIO_Uptodate in our master bio, so that
-                * we will return a good error code for to the higher
-                * levels even if IO on some other mirrored buffer fails.
-                *
-                * The 'master' represents the composite IO operation to
-                * user-side. So if something waits for IO, then it will
-                * wait for the 'master' bio.
+       if (uptodate)
+               set_bit(R1BIO_Uptodate, &r1_bio->state);
+       else {
+               /* If all other devices have failed, we want to return
+                * the error upwards rather than fail the last device.
+                * Here we redefine "uptodate" to mean "Don't want to retry"
                 */
-               if (uptodate)
-                       set_bit(R1BIO_Uptodate, &r1_bio->state);
+               unsigned long flags;
+               spin_lock_irqsave(&conf->device_lock, flags);
+               if (r1_bio->mddev->degraded == conf->raid_disks ||
+                   (r1_bio->mddev->degraded == conf->raid_disks-1 &&
+                    !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags)))
+                       uptodate = 1;
+               spin_unlock_irqrestore(&conf->device_lock, flags);
+       }
 
+       if (uptodate)
                raid_end_bio_io(r1_bio);
-       } else {
+       else {
                /*
                 * oops, read error:
                 */
@@ -992,13 +996,14 @@ static void error(mddev_t *mddev, mdk_rdev_t *rdev)
                unsigned long flags;
                spin_lock_irqsave(&conf->device_lock, flags);
                mddev->degraded++;
+               set_bit(Faulty, &rdev->flags);
                spin_unlock_irqrestore(&conf->device_lock, flags);
                /*
                 * if recovery is running, make sure it aborts.
                 */
                set_bit(MD_RECOVERY_ERR, &mddev->recovery);
-       }
-       set_bit(Faulty, &rdev->flags);
+       } else
+               set_bit(Faulty, &rdev->flags);
        set_bit(MD_CHANGE_DEVS, &mddev->flags);
        printk(KERN_ALERT "raid1: Disk failure on %s, disabling device. \n"
                "       Operation continuing on %d devices\n",