hugetlb: correct page count for surplus huge pages
Free pages in the hugetlb pool are free and as such have a reference count of
zero. Pages allocated into the pool from the buddy allocator are "freed" into
the pool, which drops their page count to zero. However, surplus pages can be
handed directly to the caller without first being freed into the pool, so a
call to put_page_testzero() is in order to hand such a page to the caller
with a correct count.

This has not affected end users because the bad page count is reset before
the page is handed off. However, under CONFIG_DEBUG_VM this triggers a BUG
when the page count is validated.

Thanks go to Mel for first spotting this issue and providing an initial fix.

Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
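To make the invariant concrete, here is a small userspace sketch, purely
illustrative: struct toy_page, buddy_alloc() and adopt_page() are invented
names, not kernel code. It models the rule the patch enforces: pages owned
by the pool hold a reference count of zero, so the buddy allocator's
reference must be dropped on adoption even when the page is handed straight
to the caller.

	#include <assert.h>
	#include <stdio.h>
	#include <stdlib.h>

	/*
	 * Toy model of a page: the "buddy allocator" hands pages out
	 * with one reference already taken, just as alloc_pages() does.
	 */
	struct toy_page {
		int count;
	};

	static struct toy_page *buddy_alloc(void)
	{
		struct toy_page *p = malloc(sizeof(*p));

		assert(p);
		p->count = 1;	/* the allocator's own reference */
		return p;
	}

	/*
	 * Pool invariant: a page owned by the pool has count == 0.
	 * Dropping the allocator's reference on adoption keeps the
	 * invariant whether the page is enqueued as free or handed
	 * straight to a caller.
	 */
	static void adopt_page(struct toy_page *p)
	{
		p->count--;		/* analogue of put_page_testzero() */
		assert(p->count == 0);	/* analogue of VM_BUG_ON(page_count(page)) */
	}

	int main(void)
	{
		struct toy_page *surplus = buddy_alloc();

		adopt_page(surplus);
		surplus->count++;	/* the caller takes its reference from zero */
		printf("surplus page count: %d\n", surplus->count);
		free(surplus);
		return 0;
	}

In the toy model, skipping adopt_page() is exactly the bug the patch fixes:
the caller would receive the page with a count of two instead of one.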
commit 2668db9111
parent 842078054d
mm/hugetlb.c (13 lines changed)
@@ -286,6 +286,12 @@ static struct page *alloc_buddy_huge_page(struct vm_area_struct *vma,
 
 	spin_lock(&hugetlb_lock);
 	if (page) {
+		/*
+		 * This page is now managed by the hugetlb allocator and has
+		 * no users -- drop the buddy allocator's reference.
+		 */
+		put_page_testzero(page);
+		VM_BUG_ON(page_count(page));
 		nid = page_to_nid(page);
 		set_compound_page_dtor(page, free_huge_page);
 		/*
@@ -369,13 +375,14 @@ free:
 			enqueue_huge_page(page);
 		else {
 			/*
-			 * Decrement the refcount and free the page using its
-			 * destructor. This must be done with hugetlb_lock
+			 * The page has a reference count of zero already, so
+			 * call free_huge_page directly instead of using
+			 * put_page. This must be done with hugetlb_lock
 			 * unlocked which is safe because free_huge_page takes
 			 * hugetlb_lock before deciding how to free the page.
 			 */
 			spin_unlock(&hugetlb_lock);
-			put_page(page);
+			free_huge_page(page);
 			spin_lock(&hugetlb_lock);
 		}
 	}
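One consequence worth spelling out for the second hunk: because surplus pages
are now adopted at a reference count of zero, the surplus-shrinking path can
no longer use put_page(), which decrements an assumed-positive count and
would underflow here (tripping the same CONFIG_DEBUG_VM validation). Calling
the free_huge_page() destructor directly is the correct way to release a
zero-count pool page. Dropping hugetlb_lock around the call remains safe, as
the updated comment notes, because free_huge_page() takes the lock itself
before deciding how to dispose of the page.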