md/raid1: Fix stack memory use after return in raid1_reshape
author	Wang Jinchao <wangjinchao600@gmail.com>
Thu, 12 Jun 2025 11:28:40 +0000 (19:28 +0800)
committer	Yu Kuai <yukuai3@huawei.com>
Sat, 5 Jul 2025 11:17:37 +0000 (19:17 +0800)
In raid1_reshape(), newpool is allocated on the stack and then copied
into conf->r1bio_pool by struct assignment. As a result,
conf->r1bio_pool.wait.head points at an address inside raid1_reshape()'s
stack frame, and accessing it after the function returns can lead to a
kernel panic.

Example access path:

raid1_reshape()
{
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to a stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request()
{
	alloc_r1bio()
	{
		mempool_alloc()
		{
			// if pool->alloc fails
			remove_element()
			{
				--pool->curr_nr;
			}
		}
	}
}

mempool_free()
{
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address,
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix:
reinitialize conf->r1bio_pool.wait with init_waitqueue_head() after
assigning newpool, so the wait queue's list head points into
conf->r1bio_pool itself rather than at the dead stack frame.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@gmail.com
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
drivers/md/raid1.c

index 19c5a0ce5a408f82855718acc47999e2d23f6cfd..fd4ce2a4136f3953883c850ae508bcb1f768a920 100644 (file)
@@ -3428,6 +3428,7 @@ static int raid1_reshape(struct mddev *mddev)
        /* ok, everything is stopped */
        oldpool = conf->r1bio_pool;
        conf->r1bio_pool = newpool;
+       init_waitqueue_head(&conf->r1bio_pool.wait);
 
        for (d = d2 = 0; d < conf->raid_disks; d++) {
                struct md_rdev *rdev = conf->mirrors[d].rdev;