mm/readahead: limit page cache size in page_cache_ra_order()
In page_cache_ra_order(), the maximal order of the page cache to be
allocated shouldn't be larger than MAX_PAGECACHE_ORDER.  Otherwise,
it's possible the large page cache can't be supported by xarray when
the corresponding xarray entry is split.

For example, HPAGE_PMD_ORDER is 13 on ARM64 when the base page size is
64KB.  The PMD-sized page cache can't be supported by xarray.

Link: https://lkml.kernel.org/r/20240627003953.1262512-3-gshan@redhat.com
Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
Signed-off-by: Gavin Shan <gshan@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Darrick J. Wong <djwong@kernel.org>
Cc: Don Dutile <ddutile@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Zhenyu Zhang <zhenyzha@redhat.com>
Cc: <stable@vger.kernel.org> [5.18+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
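For illustration only, below is a minimal userspace sketch of the clamping
arithmetic the patch introduces.  It is not kernel code: the cap value
EXAMPLE_MAX_PAGECACHE_ORDER and the helpers ilog2_u()/clamp_new_order() are
assumptions made up for the example; only the shape of the logic (grow the
order, then unconditionally clamp it against the cap and against ilog2 of the
readahead size) mirrors the patched page_cache_ra_order().

#include <stdio.h>

/* Hypothetical stand-in for MAX_PAGECACHE_ORDER, chosen for illustration. */
#define EXAMPLE_MAX_PAGECACHE_ORDER	11U

/* Userspace stand-in for the kernel's ilog2(). */
static unsigned int ilog2_u(unsigned long n)
{
	unsigned int r = 0;

	while (n >>= 1)
		r++;
	return r;
}

static unsigned int clamp_new_order(unsigned int new_order, unsigned long ra_size)
{
	/* Grow the order as before, but only while it is below the cap. */
	if (new_order < EXAMPLE_MAX_PAGECACHE_ORDER)
		new_order += 2;

	/* Clamp unconditionally, so an oversized request can never slip through. */
	if (new_order > EXAMPLE_MAX_PAGECACHE_ORDER)
		new_order = EXAMPLE_MAX_PAGECACHE_ORDER;
	if (new_order > ilog2_u(ra_size))
		new_order = ilog2_u(ra_size);

	return new_order;
}

int main(void)
{
	/* HPAGE_PMD_ORDER is 13 on ARM64 with a 64KB base page (see above). */
	printf("requested order 13 -> clamped to %u\n",
	       clamp_new_order(13, 1UL << 13));
	return 0;
}

With the old code, a request for order 13 skipped the clamping entirely (13 is
not below the cap, so the body of the "if" never ran) and the oversized order
was used as-is; with the new code the clamp always applies.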
parent 099d90642a
commit 1f789a45c3
@@ -503,11 +503,11 @@ void page_cache_ra_order(struct readahead_control *ractl,
 
 	limit = min(limit, index + ra->size - 1);
 
-	if (new_order < MAX_PAGECACHE_ORDER) {
+	if (new_order < MAX_PAGECACHE_ORDER)
 		new_order += 2;
-		new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
-		new_order = min_t(unsigned int, new_order, ilog2(ra->size));
-	}
+
+	new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
+	new_order = min_t(unsigned int, new_order, ilog2(ra->size));
 
 	/* See comment in page_cache_ra_unbounded() */
 	nofs = memalloc_nofs_save();