mm: fix shmem THP counters on migration
Author:     Jan Glauber <jglauber@digitalocean.com>
AuthorDate: Mon, 19 Jun 2023 10:33:51 +0000 (12:33 +0200)
Commit:     Andrew Morton <akpm@linux-foundation.org>
CommitDate: Fri, 23 Jun 2023 23:59:26 +0000 (16:59 -0700)
The per-node numa_stat values for shmem do not change on page migration of
THP:

  grep shmem /sys/fs/cgroup/machine.slice/.../memory.numa_stat:

    shmem N0=1092616192 N1=10485760
    shmem_thp N0=1092616192 N1=10485760

  migratepages 9181 0 1:

    shmem N0=0 N1=1103101952
    shmem_thp N0=1092616192 N1=10485760

Fix that by updating the shmem_thp counters on page migration in the same way
as the shmem counters.
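
With the fix applied, the shmem_thp counters follow the shmem counters to the
destination node.  A quick re-check, reusing the commands from above (the PID
and the elided cgroup path are the example values from the report above, not
fixed names):

  migratepages 9181 0 1
  grep shmem /sys/fs/cgroup/machine.slice/.../memory.numa_stat

Both shmem and shmem_thp should now report the migrated pages on node 1 only.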

[jglauber@digitalocean.com: use folio_test_pmd_mappable instead of folio_test_transhuge]
Link: https://lkml.kernel.org/r/20230622094720.510540-1-jglauber@digitalocean.com
Link: https://lkml.kernel.org/r/20230619103351.234837-1-jglauber@digitalocean.com
Signed-off-by: Jan Glauber <jglauber@digitalocean.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/migrate.c

index ce35afdbc1e31f42e40376a77fcef80505f6e130..eca3bf0e93b82912be84639ad4996464c7a9d25f 100644
@@ -486,6 +486,11 @@ int folio_migrate_mapping(struct address_space *mapping,
                if (folio_test_swapbacked(folio) && !folio_test_swapcache(folio)) {
                        __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
                        __mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
+
+                       if (folio_test_pmd_mappable(folio)) {
+                               __mod_lruvec_state(old_lruvec, NR_SHMEM_THPS, -nr);
+                               __mod_lruvec_state(new_lruvec, NR_SHMEM_THPS, nr);
+                       }
                }
 #ifdef CONFIG_SWAP
                if (folio_test_swapcache(folio)) {