mm: introduce fault_in_exact_writeable() to probe for sub-page faults
Patch series "Avoid live-lock in fault-in+uaccess loops with sub-page faults".
There are a few places in the filesystem layer where a uaccess is
performed in a loop with page faults disabled, together with a
fault_in_*() call to pre-fault the pages. On architectures like arm64
with MTE (Memory Tagging Extension) or SPARC ADI, the uaccess can
still fault indefinitely even if the fault_in_*() call succeeded,
because the fault is triggered at sub-page granularity while
fault_in_*() only probes at page granularity.
In general this is not an issue since such code restarts the fault_in_*()
from where the uaccess failed, therefore guaranteeing forward progress.
The btrfs search_ioctl(), however, rewinds the fault_in_*() position and
it can live-lock. This was reported by Al here:
https://lore.kernel.org/r/YSqOUb7yZ7kBoKRY@zeniv-ca.linux.org.uk
There's also an analysis by Al of other fault-in places:
https://lore.kernel.org/r/YSldx9uhMYhT/G8X@zeniv-ca.linux.org.uk
and another sub-thread on the same topic:
https://lore.kernel.org/r/YXBFqD9WVuU8awIv@arm.com
So far only btrfs search_ioctl() seems to be affected and that's what this
series addresses. The existing loops like generic_perform_write() already
guarantee forward progress.
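
For illustration, here is a hedged sketch of the problematic shape,
loosely modelled on btrfs search_ioctl(); the identifiers (ubuf,
sk_offset, buf_size, kbuf, len) and the error handling are simplified
and not the verbatim kernel code:

    while (1) {
            /*
             * The fault-in position is rewound: sk_offset does not
             * advance past the faulting address, so the same range
             * is probed again on every iteration.
             */
            if (fault_in_writeable(ubuf + sk_offset, *buf_size - sk_offset))
                    return -EFAULT;

            pagefault_disable();
            /* copy_to_user() returns the number of bytes not copied */
            ret = copy_to_user(ubuf + sk_offset, kbuf, len);
            pagefault_enable();

            if (ret == 0)
                    break;
            /*
             * With arm64 MTE or SPARC ADI, fault_in_writeable() only
             * touches the first byte of each page, so it can keep
             * succeeding while the uaccess above keeps faulting on a
             * mismatched tag further into the page: a live-lock.
             */
    }

The last patch of the series switches this fault_in_writeable() call
to fault_in_exact_writeable(), so a range that is genuinely
inaccessible at sub-page granularity fails the fault-in and the loop
bails out with -EFAULT instead of spinning.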
Andreas raised a concern about O_DIRECT accesses, since on a fault the
user address is rewound to a block-size boundary. I tried ext4, btrfs
and gfs2 and could not get any of them to live-lock. Depending on the
alignment of the user buffer (page-aligned or not), I found two
behaviours:
- the copy to or from the user buffer succeeds entirely if it goes
  through the kernel mapping (GUP, kmap'ed page; user MTE tags are not
  checked), or
- the copy partially succeeds after a few attempts at uaccess on the
  same faulting address (the highest number of attempts in my tests was
  11 with btrfs).
Given the high cost of such sub-page probing (which is done prior to
the uaccess), my proposal is to only change the btrfs search_ioctl()
(as per the last patch). We can extend the API to other call sites in
the future if needed, but I hope filesystems already deal with this in
other ways.
This patch (of 3):
On hardware with features like arm64 MTE or SPARC ADI, an access fault can
be triggered at sub-page granularity. Depending on how the fault_in_*()
functions are used, the caller can get into a live-lock by continuously
retrying the fault-in on an address different from the one where the
uaccess failed.
In the majority of cases progress is ensured by the following
conditions (a sketch of the resulting loop shape follows the list):
1. copy_{to,from}_user() guarantees access to at least one byte if the
user address is not faulting;
2. The fault_in_*() is attempted on the next address that could not be
accessed by copy_*_user().
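
A minimal sketch of a loop satisfying both conditions, shaped like
generic_perform_write(); here ubuf/len are the user buffer and its
size, kbuf is an assumed kernel bounce buffer, and fault_in_readable()
is the read-side counterpart, used because this example copies from
user space:

    char __user *p = ubuf;
    size_t left = len;

    while (left) {
            size_t copied;

            pagefault_disable();
            /* copy_from_user() returns the number of bytes not copied */
            copied = left - copy_from_user(kbuf, p, left);
            pagefault_enable();

            /* ... consume the 'copied' bytes in kbuf here ... */

            /* condition 1: at least one byte copied unless p faults */
            p += copied;
            left -= copied;

            /* condition 2: fault in at the exact address that faulted */
            if (left && fault_in_readable(p, left))
                    return -EFAULT;
    }

Because the probe starts at the exact failing byte, even a sub-page
(e.g. MTE tag) fault makes fault_in_readable() fail and the loop
return -EFAULT rather than spin.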
In the places where the above conditions are not met, or the
fault-in/uaccess loop does not have a mechanism to bail out, the new
fault_in_exact_writeable() ensures that the arch code probes the range
in question at sub-page fault granularity (e.g. 16 bytes for arm64
MTE). For large ranges this is significantly more expensive than the
non-exact versions, which only probe a single byte in each page or use
GUP.
The architecture code has to select ARCH_HAS_SUBPAGE_FAULTS and implement
probe_user_writeable().
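
For reference, a hedged sketch of how the generic wrapper could look;
the return convention and the probe_user_writeable() prototype below
are assumptions for illustration, not necessarily the exact patch:

    /*
     * Returns the number of bytes not faulted in, matching the
     * fault_in_writeable() convention.
     */
    size_t fault_in_exact_writeable(char __user *uaddr, size_t size)
    {
            /* fault in at page granularity first, as fault_in_writeable() does */
            size_t faulted_in = size - fault_in_writeable(uaddr, size);

            /*
             * Probe the faulted-in part at sub-page granularity (e.g.
             * 16-byte MTE granules).  probe_user_writeable() is assumed
             * to return the number of bytes that could not be probed;
             * without ARCH_HAS_SUBPAGE_FAULTS it would be a stub
             * returning 0.
             */
            if (faulted_in)
                    faulted_in -= probe_user_writeable(uaddr, faulted_in);

            return size - faulted_in;
    }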
Link: https://lkml.kernel.org/r/20211124192024.2408218-1-catalin.marinas@arm.com
Link: https://lkml.kernel.org/r/20211124192024.2408218-2-catalin.marinas@arm.com
Fixes: a48b73eca4ce ("btrfs: fix potential deadlock in the search ioctl")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Cc: David Sterba <dsterba@suse.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Will Deacon <will@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>