mm/gup: check every subpage of a compound page during isolation
When pages are isolated in check_and_migrate_movable_pages() we skip a
compound number of pages at a time. However, as Jason noted, it is not
necessarily correct that pages[i] corresponds to the pages that we
skipped: the addresses in this range may have gone through
split_huge_pmd()/split_huge_pud(), and those functions do not update the
compound page metadata, so entries in the skipped range can belong to
entirely different pages.
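To illustrate, here is a toy userspace model, not kernel code: compound_head()
and compound_nr() are simplified stand-ins for the kernel helpers, and the
pages[] contents are made up. It shows how the old stepping scheme skips
entries that no longer belong to the compound page, while checking every entry
with a prev_head dedup does not:

#include <stdio.h>

/* Toy stand-ins for the kernel's struct page helpers. */
struct page {
	struct page *head;	/* compound head, or the page itself */
	unsigned long nr;	/* valid on the head: sub-pages in the compound */
};

static struct page *compound_head(struct page *p)
{
	return p->head;
}

static unsigned long compound_nr(struct page *h)
{
	return h->nr;
}

int main(void)
{
	/* A 4-sub-page compound page plus an order-0 page that was
	 * faulted into the middle of the range after a split. */
	struct page thp[4] = {
		{ &thp[0], 4 }, { &thp[0], 0 }, { &thp[0], 0 }, { &thp[0], 0 },
	};
	struct page order0 = { &order0, 1 };

	/* GUP started from a tail page; slot 1 is NOT part of the THP. */
	struct page *pages[] = { &thp[1], &order0, &thp[3] };
	unsigned long nr_pages = 3, i, step;
	struct page *head, *prev_head;

	/* Old scheme: step over what looks like the rest of the compound;
	 * slot 1 (the order-0 page) is never checked. */
	for (i = 0; i < nr_pages;) {
		head = compound_head(pages[i]);
		step = compound_nr(head) - (unsigned long)(pages[i] - head);
		printf("old: checked slot %lu, skipped %lu slot(s)\n",
		       i, step - 1);
		i += step;
	}

	/* New scheme: look at every slot, dedup adjacent entries that
	 * share a compound head; all three slots get checked. */
	prev_head = NULL;
	for (i = 0; i < nr_pages; i++) {
		head = compound_head(pages[i]);
		if (head == prev_head)
			continue;
		prev_head = head;
		printf("new: checked slot %lu\n", i);
	}
	return 0;
}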
The problem can be reproduced if something like this occurs:

1. User faulted huge pages.
2. split_huge_pmd() was called for some reason.
3. User unmapped some sub-pages in the range.
4. User tries to longterm pin the addresses.

The resulting pages[i] can end up containing pages that are not aligned
to a compound page boundary.
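For reference, a hedged userspace sketch of the four steps above, assuming
io_uring buffer registration as the longterm-pin path (registered buffers are
pinned via GUP) and 2MB PMD-sized THPs. Whether a huge page actually backs the
mapping and then gets split depends on kernel config and timing, so this only
illustrates the scenario rather than deterministically reproducing the bug
(build with -luring; error handling mostly elided):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <liburing.h>

#define HPAGE_SZ (2UL << 20)	/* assumed PMD huge page size */
#define SUBPAGE  4096UL		/* assumed base page size */

int main(void)
{
	struct io_uring ring;
	struct iovec iov;
	char *raw, *buf;
	int ret;

	/* Carve out one huge-page-aligned region. */
	raw = mmap(NULL, 2 * HPAGE_SZ, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED)
		return 1;
	buf = (char *)(((uintptr_t)raw + HPAGE_SZ - 1) & ~(HPAGE_SZ - 1));

	/* 1. Fault in a huge page. */
	madvise(buf, HPAGE_SZ, MADV_HUGEPAGE);
	memset(buf, 1, HPAGE_SZ);

	/* 2. + 3. Unmapping one sub-page forces split_huge_pmd() and
	 * leaves a hole inside the formerly huge mapping. */
	munmap(buf + SUBPAGE, SUBPAGE);

	/* 4. Longterm-pin the remainder: GUP starts from a tail page,
	 * so the pages[] it builds is not compound-size aligned. */
	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;
	iov.iov_base = buf + 2 * SUBPAGE;
	iov.iov_len = HPAGE_SZ - 2 * SUBPAGE;
	ret = io_uring_register_buffers(&ring, &iov, 1);
	if (ret < 0)
		fprintf(stderr, "register_buffers: %s\n", strerror(-ret));
	io_uring_queue_exit(&ring);
	return 0;
}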
Link: https://lkml.kernel.org/r/20210215161349.246722-3-pasha.tatashin@soleen.com
Fixes: aa712399c1 ("mm/gup: speed up check_and_migrate_cma_pages() on huge page")
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reported-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: James Morris <jmorris@namei.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Tyler Hicks <tyhicks@linux.microsoft.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 83c02c23d0
parent c991ffef7b
 mm/gup.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1609,26 +1609,23 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 					unsigned int gup_flags)
 {
 	unsigned long i;
-	unsigned long step;
 	bool drain_allow = true;
 	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
+	struct page *prev_head, *head;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
 		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};
 
 check_again:
-	for (i = 0; i < nr_pages;) {
-
-		struct page *head = compound_head(pages[i]);
-
-		/*
-		 * gup may start from a tail page. Advance step by the left
-		 * part.
-		 */
-		step = compound_nr(head) - (pages[i] - head);
+	prev_head = NULL;
+	for (i = 0; i < nr_pages; i++) {
+		head = compound_head(pages[i]);
+		if (head == prev_head)
+			continue;
+		prev_head = head;
 		/*
 		 * If we get a page from the CMA zone, since we are going to
 		 * be pinning these entries, we might as well move them out
@@ -1652,8 +1649,6 @@ check_again:
 				}
 			}
 		}
-
-		i += step;
 	}
 
 	if (!list_empty(&cma_page_list)) {