mm/huge_memory: access vm_page_prot with READ_ONCE in remove_migration_pmd
vma->vm_page_prot is read locklessly from the rmap_walk; it may be updated
concurrently.  Use READ_ONCE() to prevent the risk of reading intermediate
values.

Link: https://lkml.kernel.org/r/20220704132201.14611-3-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zach O'Keefe <zokeefe@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 7c38f1812d
commit 4286f14748
@@ -3205,7 +3205,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	get_page(new);
-	pmde = pmd_mkold(mk_huge_pmd(new, vma->vm_page_prot));
+	pmde = pmd_mkold(mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot)));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))
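The one-line change above narrows the lockless read of vma->vm_page_prot to a
single, untorn load. Below is a minimal userspace sketch of the READ_ONCE
pattern being applied, assuming simplified volatile-cast stand-ins for the
kernel's READ_ONCE()/WRITE_ONCE() macros from <linux/compiler.h>; prot_t,
reader() and writer() are hypothetical names for illustration, not kernel code.

/*
 * Minimal userspace sketch of the READ_ONCE pattern.  The volatile-cast
 * definitions below are simplified stand-ins for the kernel macros, and
 * prot_t / reader() / writer() are illustrative names only.
 */
#include <stdio.h>

/* Force a single load/store the compiler may not re-read, cache, or split. */
#define READ_ONCE(x)		(*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))

typedef unsigned long prot_t;

static prot_t vm_page_prot;	/* updated concurrently, read locklessly */

/* Writer side: e.g. an mprotect()-style update of the protection bits. */
static void writer(prot_t newprot)
{
	WRITE_ONCE(vm_page_prot, newprot);
}

/*
 * Reader side: analogous to remove_migration_pmd() building a new PMD.
 * With a plain read the compiler could load vm_page_prot more than once
 * (or in pieces), so a racing writer could make us observe an
 * intermediate value; READ_ONCE() pins it to one untorn access.
 */
static prot_t reader(void)
{
	return READ_ONCE(vm_page_prot);
}

int main(void)
{
	writer(0x25);
	printf("prot = %#lx\n", reader());
	return 0;
}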