zsmalloc: replace per zpage lock with pool->migrate_lock
author Minchan Kim <minchan@kernel.org>
Thu, 30 Dec 2021 09:28:50 +0000 (20:28 +1100)
committer Stephen Rothwell <sfr@canb.auug.org.au>
Sun, 16 Jan 2022 07:36:57 +0000 (18:36 +1100)
commit af54b40a10b425d055b6605c79133b8aa3491e1d
tree 449d7d535e5eb38a289bb99cd61de9b08e5a7d75
parent 118d633eb2bb56618916210a92018465fec73e43
zsmalloc: replace per zpage lock with pool->migrate_lock

zsmalloc has been using a bit spin_lock in the zpage handle to keep the zpage
object alive during several operations.  However, it causes problems for
PREEMPT_RT as well as making the code too complicated.
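For reference, the old scheme keeps the object pinned by spinning on a bit
inside the handle word, roughly along the following lines (a minimal sketch
only; the bit name and helpers are illustrative, not copied from
mm/zsmalloc.c):

  #include <linux/bit_spinlock.h>

  /* Illustrative: one bit of the handle word doubles as a lock bit. */
  #define HANDLE_PIN_BIT        0

  /* Pin the zpage object so migration cannot move it from under us. */
  static void pin_handle(unsigned long handle)
  {
          bit_spin_lock(HANDLE_PIN_BIT, (unsigned long *)handle);
  }

  static void unpin_handle(unsigned long handle)
  {
          bit_spin_unlock(HANDLE_PIN_BIT, (unsigned long *)handle);
  }

bit_spin_lock() is what hurts on PREEMPT_RT: it spins with preemption
disabled and, being a single bit, cannot be substituted with a sleeping lock.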

This patch replaces the bit spin_lock with the pool->migrate_lock rwlock.  It
makes the code simpler and lets zsmalloc work under PREEMPT_RT.
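As a rough illustration of the new split (a sketch under simplified
assumptions; only the read_lock/write_lock use of pool->migrate_lock mirrors
the patch, the helper names and bodies are abbreviated):

  #include <linux/spinlock.h>

  struct zs_pool {
          rwlock_t migrate_lock;  /* initialized with rwlock_init() at pool
                                     creation; stabilizes handle -> zpage
                                     mapping */
          /* ... rest of the pool state ... */
  };

  /* IO paths (zs_malloc, zs_free, zs_map_object/zs_unmap_object): read side */
  static void io_path(struct zs_pool *pool, unsigned long handle)
  {
          read_lock(&pool->migrate_lock);
          /* handle -> page/offset lookup cannot race with migration here */
          read_unlock(&pool->migrate_lock);
  }

  /* Compaction / page migration: write side, excludes every IO reader */
  static void migrate_path(struct zs_pool *pool)
  {
          write_lock(&pool->migrate_lock);
          /* copy page contents and fix up object locations */
          write_unlock(&pool->migrate_lock);
  }

Under PREEMPT_RT, rwlock_t is substituted with a sleeping lock, which is what
makes this scheme RT-friendly where the bit spin_lock was not.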

The drawback is that pool->migrate_lock has coarser granularity than the per
zpage lock, so contention would be higher than before when IO-related
operations (i.e., zs_malloc, zs_free, zs_[map|unmap]_object) and compaction
(page/zpage migration) run in parallel (note the migrate_lock is an rwlock
and the IO-related functions all take the read side, so there is no
contention among them).  However, the write side is fast enough (the dominant
overhead is just the page copy), so it shouldn't matter much.  If the lock
granularity becomes more of a problem later, we could introduce table locks
hashed by handle, as sketched below.
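If that ever became necessary, the idea could look roughly like the following
(purely hypothetical sketch; no such table exists in the tree and all names
here are made up):

  #include <linux/hash.h>
  #include <linux/spinlock.h>

  #define ZS_LOCK_TABLE_BITS    6
  #define ZS_LOCK_TABLE_SIZE    (1 << ZS_LOCK_TABLE_BITS)

  /* One rwlock per hash bucket instead of one rwlock per pool;
   * each entry initialized with rwlock_init() at module init (omitted). */
  static rwlock_t zs_handle_locks[ZS_LOCK_TABLE_SIZE];

  static rwlock_t *handle_to_lock(unsigned long handle)
  {
          return &zs_handle_locks[hash_long(handle, ZS_LOCK_TABLE_BITS)];
  }

IO paths would then contend only with work whose handles hash to the same
bucket, trading a little memory for lower contention.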

Link: https://lkml.kernel.org/r/20211115185909.3949505-9-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
mm/zsmalloc.c