arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region
[ Upstream commit c8a43c18a97845e7f94ed7d181c11f41964976a2 ]

When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), the top 4K of kernel
virtual address space may be mapped to physical addresses despite being
reserved for ERR_PTR values.

Fix the randomization of the linear region so that we avoid mapping the
last page of the virtual address space.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: liyueyi <liyueyi@live.com>
[will: rewrote commit message; merged in suggestion from Ard]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin (Microsoft) <sashal@kernel.org>
parent 91f69a3c91
commit da6c4933cd
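For context, the last 4K page of the virtual address space must never be backed by a real mapping because the kernel encodes error numbers as pointers in that range. Below is a minimal, simplified user-space sketch of that convention, mirroring the idea of include/linux/err.h; the main() driver and its sample value are illustrative assumptions, not part of the commit.

/*
 * Simplified mirror of the kernel's ERR_PTR convention: errnos in the
 * range -MAX_ERRNO..-1 are encoded as pointers that land in the last
 * 4KiB of the address space, so no valid mapping may live there.
 */
#include <stdio.h>

#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
	void *p = ERR_PTR(-12);	/* e.g. -ENOMEM */

	/* On a 64-bit build, p is 0xffff...fff4, inside the reserved top page. */
	printf("p = %p, IS_ERR = %d\n", p, IS_ERR(p));
	return 0;
}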
@@ -272,7 +272,7 @@ void __init arm64_memblock_init(void)
 		 * memory spans, randomize the linear region as well.
 		 */
 		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
-			range = range / ARM64_MEMSTART_ALIGN + 1;
+			range /= ARM64_MEMSTART_ALIGN;
 			memstart_addr -= ARM64_MEMSTART_ALIGN *
 					 ((range * memstart_offset_seed) >> 16);
 		}
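To see why the old formula could consume the entire randomization slack, here is a minimal user-space sketch of the offset arithmetic. The 1 GiB stand-in for ARM64_MEMSTART_ALIGN, the 16 GiB slack, and the helper names are assumptions chosen for the example, not values taken from the commit.

/*
 * Sketch of the KASLR offset arithmetic, assuming an illustrative
 * ARM64_MEMSTART_ALIGN of 1 GiB and a worst-case seed of 0xffff.
 */
#include <stdio.h>
#include <stdint.h>

#define ALIGN	((uint64_t)1 << 30)	/* stand-in for ARM64_MEMSTART_ALIGN */

/* Old formula: range / ALIGN + 1 buckets, so the offset can use the whole slack. */
static uint64_t old_max_offset(uint64_t slack, uint16_t seed)
{
	uint64_t range = slack / ALIGN + 1;

	return ALIGN * ((range * seed) >> 16);
}

/* Fixed formula: range / ALIGN buckets, leaving at least ALIGN unused at the top. */
static uint64_t new_max_offset(uint64_t slack, uint16_t seed)
{
	uint64_t range = slack / ALIGN;

	return ALIGN * ((range * seed) >> 16);
}

int main(void)
{
	uint64_t slack = 16 * ALIGN;	/* linear region size minus DRAM span */
	uint16_t seed = 0xffff;		/* maximum memstart_offset_seed */

	/*
	 * Old: 1 GiB * ((17 * 0xffff) >> 16) = 16 GiB, i.e. the entire slack,
	 * so the linear map can reach the last page of the VA space.
	 * New: 1 GiB * ((16 * 0xffff) >> 16) = 15 GiB, keeping one
	 * ARM64_MEMSTART_ALIGN-sized chunk (and hence the ERR_PTR page) clear.
	 */
	printf("old max offset: %llu GiB\n",
	       (unsigned long long)(old_max_offset(slack, seed) >> 30));
	printf("new max offset: %llu GiB\n",
	       (unsigned long long)(new_max_offset(slack, seed) >> 30));
	return 0;
}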