mm/hugetlb.c: avoid bogus counter of surplus huge page
If we have to hand back the newly allocated huge page to the page allocator, for any reason, the changed counter should be recovered. This affects only s390 at present.

Signed-off-by: Hillf Danton <dhillf@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit ea5768c74b
parent 1ebb7044c9
@@ -800,7 +800,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
 
 	if (page && arch_prepare_hugepage(page)) {
 		__free_pages(page, huge_page_order(h));
-		return NULL;
+		page = NULL;
 	}
 
 	spin_lock(&hugetlb_lock);