zsmalloc: replace per zpage lock with pool->migrate_lock
zsmalloc has been using a bit spin_lock in the zpage handle to keep the zpage
object alive during several operations. However, this is problematic for
PREEMPT_RT and also makes the code too complicated.
This patch replaces the bit spin_lock with the pool->migrate_lock rwlock.
This simplifies the code and lets zsmalloc work under PREEMPT_RT.
The drawback is that pool->migrate_lock has coarser granularity than the
per-zpage lock, so contention could be higher than before when IO-related
operations (i.e., zs_malloc, zs_free, zs_[map|unmap]_object) and
compaction (page/zpage migration) run in parallel (note: migrate_lock is an
rwlock and the IO-related functions all take the read side, so there is no
contention among them). However, the write side is fast enough (the dominant
overhead is just the page copy), so it should not hurt much. If the lock
granularity becomes more of a problem later, we could introduce table locks
keyed by a hash of the handle.
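
To illustrate the read/write split described above, here is a minimal,
simplified sketch (not the actual mm/zsmalloc.c code): object-lifetime
paths take the read side of pool->migrate_lock so they can run in
parallel, while migration takes the write side for the duration of the
page copy. The *_sketch function names and the stubbed-out bodies are
hypothetical; only the rwlock primitives and the migrate_lock field are
from the patch.

  #include <linux/spinlock.h>

  struct zs_pool_sketch {
  	rwlock_t migrate_lock;	/* protects objects against migration */
  	/* ... other pool fields elided ... */
  };

  /* IO-side path (malloc/free/map/unmap): read lock, runs concurrently */
  static void *zs_map_object_sketch(struct zs_pool_sketch *pool,
  				  unsigned long handle)
  {
  	void *addr;

  	read_lock(&pool->migrate_lock);
  	/* handle -> zspage -> page lookup and mapping would happen here */
  	addr = NULL;	/* placeholder for the mapped address */
  	read_unlock(&pool->migrate_lock);
  	return addr;
  }

  /* Compaction/migration path: write lock excludes readers while pages move */
  static void zs_migrate_zspage_sketch(struct zs_pool_sketch *pool)
  {
  	write_lock(&pool->migrate_lock);
  	/* copy objects to the destination page and fix up handles here */
  	write_unlock(&pool->migrate_lock);
  }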
Link: https://lkml.kernel.org/r/20211115185909.3949505-9-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>