mm: make lock_folio_maybe_drop_mmap() VMA lock aware
Patch series "Handle more faults under the VMA lock", v2. At this point, we're handling the majority of file-backed page faults under the VMA lock, using the ->map_pages entry point. This patch set attempts to expand that for the following siutations: - We have to do a read. This could be because we've hit the point in the readahead window where we need to kick off the next readahead, or because the page is simply not present in cache. - We're handling a write fault. Most applications don't do I/O by writes to shared mmaps for very good reasons, but some do, and it'd be nice to not make that slow unnecessarily. - We're doing a COW of a private mapping (both PTE already present and PTE not-present). These are two different codepaths and I handle both of them in this patch set. There is no support in this patch set for drivers to mark themselves as being VMA lock friendly; they could implement the ->map_pages vm_operation, but if they do, they would be the first. This is probably something we want to change at some point in the future, and I've marked where to make that change in the code. There is very little performance change in the benchmarks we've run; mostly because the vast majority of page faults are handled through the other paths. I still think this patch series is useful for workloads that may take these paths more often, and just for cleaning up the fault path in general (it's now clearer why we have to retry in these cases). This patch (of 6): Drop the VMA lock instead of the mmap_lock if that's the one which is held. Link: https://lkml.kernel.org/r/20231006195318.4087158-1-willy@infradead.org Link: https://lkml.kernel.org/r/20231006195318.4087158-2-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Suren Baghdasaryan <surenb@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 1431996bf9
commit 5d74b2ab2c

 mm/filemap.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)
@@ -3090,7 +3090,7 @@ static int lock_folio_maybe_drop_mmap(struct vm_fault *vmf, struct folio *folio,
 
 	/*
 	 * NOTE! This will make us return with VM_FAULT_RETRY, but with
-	 * the mmap_lock still held. That's how FAULT_FLAG_RETRY_NOWAIT
+	 * the fault lock still held. That's how FAULT_FLAG_RETRY_NOWAIT
 	 * is supposed to work. We have way too many special cases..
 	 */
 	if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT)
@@ -3100,13 +3100,14 @@ static int lock_folio_maybe_drop_mmap(struct vm_fault *vmf, struct folio *folio,
 	if (vmf->flags & FAULT_FLAG_KILLABLE) {
 		if (__folio_lock_killable(folio)) {
 			/*
-			 * We didn't have the right flags to drop the mmap_lock,
-			 * but all fault_handlers only check for fatal signals
-			 * if we return VM_FAULT_RETRY, so we need to drop the
-			 * mmap_lock here and return 0 if we don't have a fpin.
+			 * We didn't have the right flags to drop the
+			 * fault lock, but all fault_handlers only check
+			 * for fatal signals if we return VM_FAULT_RETRY,
+			 * so we need to drop the fault lock here and
+			 * return 0 if we don't have a fpin.
 			 */
 			if (*fpin == NULL)
-				mmap_read_unlock(vmf->vma->vm_mm);
+				release_fault_lock(vmf);
 			return 0;
 		}
 	} else
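
The hunk above replaces a bare mmap_read_unlock() with
release_fault_lock(), which drops whichever lock the fault path is
actually holding.  For reference, a minimal sketch of that helper's
dispatch, assuming the FAULT_FLAG_VMA_LOCK flag and vma_end_read()
from the per-VMA lock infrastructure (roughly as found in
include/linux/mm.h when CONFIG_PER_VMA_LOCK is enabled):

	static inline void release_fault_lock(struct vm_fault *vmf)
	{
		/* A fault taken under the VMA lock has FAULT_FLAG_VMA_LOCK set. */
		if (vmf->flags & FAULT_FLAG_VMA_LOCK)
			vma_end_read(vmf->vma);			/* drop the per-VMA read lock */
		else
			mmap_read_unlock(vmf->vma->vm_mm);	/* drop the mmap read lock */
	}

This is also why the comments in the diff now say "fault lock" rather
than "mmap_lock": after this change the function no longer knows, or
needs to know, which of the two locks is being dropped.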