mm: report success more often from filemap_map_folio_range()
Even though we had successfully mapped the relevant page, we would
rarely return success from filemap_map_folio_range().  That leads to
falling back from the VMA lock path to the mmap_lock path, which is a
speed & scalability issue.  Found by inspection.

Link: https://lkml.kernel.org/r/20230920035336.854212-1-willy@infradead.org
Fixes: 617c28ecab22 ("filemap: batch PTE mappings")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 7c31515857
commit a501a07030
@@ -3503,7 +3503,7 @@ skip:
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		folio_ref_add(folio, count);
-		if (in_range(vmf->address, addr, count))
+		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
 
@@ -3517,7 +3517,7 @@ skip:
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		folio_ref_add(folio, count);
-		if (in_range(vmf->address, addr, count))
+		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
 