arm64/mm: Reorganize pfn_valid()

There are multiple instances of pfn_to_section_nr() and __pfn_to_section()
when CONFIG_SPARSEMEM is enabled. This can be optimized if the memory
section is fetched once and reused. Also replace the open-coded PFN and
physical-address conversions with the PFN_PHYS() and PHYS_PFN() helpers
and, while there, add a comment explaining the round-trip pfn check.
This does not cause any functional change.
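
To illustrate why the round-trip check matters, here is a minimal
stand-alone sketch (an illustration only, not kernel code: it assumes
4K pages, i.e. PAGE_SHIFT = 12, a 64-bit physical address type, and
redefines PFN_PHYS()/PHYS_PFN() locally to mirror the kernel helpers):

#include <stdio.h>
#include <stdint.h>

/* Local stand-ins for the kernel helpers (assumes 4K pages). */
#define PAGE_SHIFT	12
#define PFN_PHYS(pfn)	((uint64_t)(pfn) << PAGE_SHIFT)
#define PHYS_PFN(addr)	((uint64_t)(addr) >> PAGE_SHIFT)

int main(void)
{
	/* A bogus pfn with one of its upper PAGE_SHIFT bits set. */
	uint64_t pfn = (1ULL << 53) | 0x1234;

	/* The upper bit is shifted out of the 64-bit address. */
	uint64_t addr = PFN_PHYS(pfn);

	/* Shifting back no longer matches, exposing the overflow. */
	printf("pfn=%#llx addr=%#llx round-trip pfn=%#llx -> %s\n",
	       (unsigned long long)pfn,
	       (unsigned long long)addr,
	       (unsigned long long)PHYS_PFN(addr),
	       PHYS_PFN(addr) != pfn ? "reject" : "accept");
	return 0;
}

A pfn with any of its upper PAGE_SHIFT bits set loses those bits in the
left shift, so PHYS_PFN(addr) != pfn and pfn_valid() can bail out early
instead of matching a different, valid pfn with the same lower bits.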

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1614921898-4099-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>

@@ -219,16 +219,26 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 
 int pfn_valid(unsigned long pfn)
 {
-	phys_addr_t addr = pfn << PAGE_SHIFT;
+	phys_addr_t addr = PFN_PHYS(pfn);
 
-	if ((addr >> PAGE_SHIFT) != pfn)
+	/*
+	 * Ensure the upper PAGE_SHIFT bits are clear in the
+	 * pfn. Else it might lead to false positives when
+	 * some of the upper bits are set, but the lower bits
+	 * match a valid pfn.
+	 */
+	if (PHYS_PFN(addr) != pfn)
 		return 0;
 
 #ifdef CONFIG_SPARSEMEM
+{
+	struct mem_section *ms;
+
 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
 		return 0;
 
-	if (!valid_section(__pfn_to_section(pfn)))
+	ms = __pfn_to_section(pfn);
+	if (!valid_section(ms))
 		return 0;
 
 	/*
@@ -240,8 +250,9 @@ int pfn_valid(unsigned long pfn)
 	 * memory sections covering all of hotplug memory including
 	 * both normal and ZONE_DEVICE based.
 	 */
-	if (!early_section(__pfn_to_section(pfn)))
-		return pfn_section_valid(__pfn_to_section(pfn), pfn);
+	if (!early_section(ms))
+		return pfn_section_valid(ms, pfn);
+}
 #endif
 	return memblock_is_map_memory(addr);
 }
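
For reference, this is how pfn_valid() reads with both hunks applied,
reconstructed from the diff above; the context between the hunks (the
middle of the comment block) is elided here just as it is in the diff:

int pfn_valid(unsigned long pfn)
{
	phys_addr_t addr = PFN_PHYS(pfn);

	/*
	 * Ensure the upper PAGE_SHIFT bits are clear in the
	 * pfn. Else it might lead to false positives when
	 * some of the upper bits are set, but the lower bits
	 * match a valid pfn.
	 */
	if (PHYS_PFN(addr) != pfn)
		return 0;

#ifdef CONFIG_SPARSEMEM
{
	struct mem_section *ms;

	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
		return 0;

	ms = __pfn_to_section(pfn);
	if (!valid_section(ms))
		return 0;

	/*
	 * ... (comment elided between the two hunks) ...
	 * memory sections covering all of hotplug memory including
	 * both normal and ZONE_DEVICE based.
	 */
	if (!early_section(ms))
		return pfn_section_valid(ms, pfn);
}
#endif
	return memblock_is_map_memory(addr);
}

Note how the section lookup now happens exactly once, with the result
cached in the local variable ms and reused by valid_section(),
early_section() and pfn_section_valid().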