From: Miaohe Lin
Date: Thu, 19 May 2022 12:50:27 +0000 (+0800)
Subject: mm/swapfile: fix lost swap bits in unuse_pte()
X-Git-Tag: for-5.19/block-exec-2022-06-02~9^2~7
X-Git-Url: https://git.kernel.dk/?a=commitdiff_plain;h=14a762dd1977cf811516fd97b0262b747cac88f7;p=linux-2.6-block.git

mm/swapfile: fix lost swap bits in unuse_pte()

This was observed by code review only, not from any real report. When we
turn off swapping we could lose the bits stored in the swap ptes: the new
rmap-exclusive bit is fine, since that turned into a page flag, but the
soft-dirty and uffd-wp bits are not. Add them back when restoring the
present pte.

Link: https://lkml.kernel.org/r/20220519125030.21486-3-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin
Suggested-by: Peter Xu
Reviewed-by: David Hildenbrand
Cc: Alistair Popple
Cc: David Howells
Cc: Hugh Dickins
Cc: Matthew Wilcox (Oracle)
Cc: Naoya Horiguchi
Cc: NeilBrown
Cc: Ralph Campbell
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton
---

diff --git a/mm/swapfile.c b/mm/swapfile.c
index b86d1cc8d00b..e45874fb2ec7 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1774,7 +1774,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 {
 	struct page *swapcache;
 	spinlock_t *ptl;
-	pte_t *pte;
+	pte_t *pte, new_pte;
 	int ret = 1;
 
 	swapcache = page;
@@ -1823,8 +1823,12 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		page_add_new_anon_rmap(page, vma, addr);
 		lru_cache_add_inactive_or_unevictable(page, vma);
 	}
-	set_pte_at(vma->vm_mm, addr, pte,
-		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
+	new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
+	if (pte_swp_soft_dirty(*pte))
+		new_pte = pte_mksoft_dirty(new_pte);
+	if (pte_swp_uffd_wp(*pte))
+		new_pte = pte_mkuffd_wp(new_pte);
+	set_pte_at(vma->vm_mm, addr, pte, new_pte);
 	swap_free(entry);
 out:
 	pte_unmap_unlock(pte, ptl);