Miaohe Lin d56c5eac84 mm/memremap: fix memunmap_pages() race with get_dev_pagemap()
[ Upstream commit 1e57ffb6e3fd9583268c6462c4e3853575b21701 ]

Consider the following race between two CPUs:

 CPU1			CPU2
 memunmap_pages
   percpu_ref_exit
     __percpu_ref_exit
       free_percpu(percpu_count);
         /* percpu_count is freed here! */
			 get_dev_pagemap
			   xa_load(&pgmap_array, PHYS_PFN(phys))
			     /* pgmap still in the pgmap_array */
			   percpu_ref_tryget_live(&pgmap->ref)
			     if __ref_is_percpu
			       /* __PERCPU_REF_ATOMIC_DEAD not set yet */
			       this_cpu_inc(*percpu_count)
			         /* access freed percpu_count here! */
      ref->percpu_count_ptr = __PERCPU_REF_ATOMIC_DEAD;
        /* too late... */
   pageunmap_range
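
For reference, the reader side looks roughly like this (abbreviated
from mm/memremap.c as of the time of this fix; details may differ in
other kernel versions):

	struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
			struct dev_pagemap *pgmap)
	{
		resource_size_t phys = PFN_PHYS(pfn);
		...
		/* fall back to slow path lookup */
		rcu_read_lock();
		pgmap = xa_load(&pgmap_array, PHYS_PFN(phys));
		if (pgmap && !percpu_ref_tryget_live(&pgmap->ref))
			pgmap = NULL;
		rcu_read_unlock();

		return pgmap;
	}

Nothing orders the xa_load() against the teardown on CPU1: as long as
pgmap has not yet been erased from pgmap_array, the lookup succeeds,
and percpu_ref_tryget_live() may then dereference percpu counters
that free_percpu() has already released.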

To fix the issue, call percpu_ref_exit() only after pgmap_array has
been emptied, so that percpu_ref_tryget_live() can never run against
a percpu_ref that is in the middle of being freed.
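
The change amounts to moving one call; a rough sketch of the
resulting memunmap_pages() (abbreviated, surrounding code may vary
between kernel versions):

	void memunmap_pages(struct dev_pagemap *pgmap)
	{
		int i;

		percpu_ref_kill(&pgmap->ref);
		for (i = 0; i < pgmap->nr_range; i++)
			percpu_ref_put_many(&pgmap->ref, pfn_len(pgmap, i));
		wait_for_completion(&pgmap->done);

		/* pageunmap_range() xa_erase()s pgmap from pgmap_array */
		for (i = 0; i < pgmap->nr_range; i++)
			pageunmap_range(pgmap, i);

		/*
		 * Moved here from right after wait_for_completion():
		 * once the array is empty, get_dev_pagemap() can no
		 * longer find pgmap, so tearing down the ref is safe.
		 */
		percpu_ref_exit(&pgmap->ref);
		...
	}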

Link: https://lkml.kernel.org/r/20220609121305.2508-1-linmiaohe@huawei.com
Fixes: b7b3c01b1915 ("mm/memremap_pages: support multiple ranges per invocation")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-08-17 14:23:44 +02:00