mm/slub.c: replace cpu_slab->partial with wrapped APIs
author chenqiwu <chenqiwu@xiaomi.com>
Thu, 2 Apr 2020 04:04:16 +0000 (21:04 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Thu, 2 Apr 2020 16:35:26 +0000 (09:35 -0700)
There are slub_percpu_partial() and slub_set_percpu_partial() APIs to wrap
kmem_cache_cpu->partial.  Use the two wrappers to replace the open-coded
accesses to cpu_slab->partial in slub code.
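
For reference, the two wrappers live in include/linux/slub_def.h; with
CONFIG_SLUB_CPU_PARTIAL enabled they expand roughly as below (a sketch of
the upstream definitions, not a verbatim copy):

	#ifdef CONFIG_SLUB_CPU_PARTIAL
	/* Read the head of the per-cpu partial list. */
	#define slub_percpu_partial(c)		((c)->partial)

	/* Pop @p: the head is set to (p)->next, not to p itself. */
	#define slub_set_percpu_partial(c, p)		\
	({						\
		slub_percpu_partial(c) = (p)->next;	\
	})
	#else
	#define slub_percpu_partial(c)		NULL
	#define slub_set_percpu_partial(c, p)
	#endif	/* CONFIG_SLUB_CPU_PARTIAL */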

Signed-off-by: chenqiwu <chenqiwu@xiaomi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Link: http://lkml.kernel.org/r/1581951895-3038-1-git-send-email-qiwuchen55@gmail.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/slub.c b/mm/slub.c
index 6589b41d5a6056013a2f31270ad21476bcecb477..db0f657c09a12808c5d2e3e47ba458ff60ec301f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2205,11 +2205,11 @@ static void unfreeze_partials(struct kmem_cache *s,
        struct kmem_cache_node *n = NULL, *n2 = NULL;
        struct page *page, *discard_page = NULL;
 
-       while ((page = c->partial)) {
+       while ((page = slub_percpu_partial(c))) {
                struct page new;
                struct page old;
 
-               c->partial = page->next;
+               slub_set_percpu_partial(c, page);
 
                n2 = get_node(s, page_to_nid(page));
                if (n != n2) {
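
Note that slub_set_percpu_partial(c, page) takes the page being popped but
assigns page->next to the list head, so the hunk above is equivalent to the
removed open-coded lines after preprocessing (a sketch, assuming
CONFIG_SLUB_CPU_PARTIAL=y):

	page = slub_percpu_partial(c);     /* page = c->partial;       */
	slub_set_percpu_partial(c, page);  /* c->partial = page->next; */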