x86/exceptions: Enable IST guard pages

All usage sites which expected the exception stacks in the CPU entry
area to be mapped linearly have been fixed up. Enable guard pages between
the IST stacks.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190414160145.349862042@linutronix.de
commit 1bdb67e5aa
parent 3207426925
Author:    Thomas Gleixner <tglx@linutronix.de>  2019-04-14 17:59:56 +02:00
Committer: Borislav Petkov <bp@suse.de>

@@ -26,13 +26,9 @@ struct exception_stacks {
 	ESTACKS_MEMBERS(0)
 };
 
-/*
- * The effective cpu entry area mapping with guard pages. Guard size is
- * zero until the code which makes assumptions about linear mappings is
- * cleaned up.
- */
+/* The effective cpu entry area mapping with guard pages. */
 struct cea_exception_stacks {
-	ESTACKS_MEMBERS(0)
+	ESTACKS_MEMBERS(PAGE_SIZE)
 };
 
 /*
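
For reference, a minimal sketch of how a guard-size parameter like the one
taken by ESTACKS_MEMBERS() can produce guard pages. The member list below is
an illustrative approximation, not the exact arch/x86 header contents: the
per-vector stack names, EXCEPTION_STKSZ and PAGE_SIZE values are assumptions.
With guardsize 0 the stacks are packed back to back (the physically
contiguous backing store); with guardsize PAGE_SIZE every stack is preceded
by a page-sized hole which the effective CPU entry area mapping leaves
unmapped, so an IST stack overflow faults on the guard page instead of
silently corrupting the adjacent stack.

    /*
     * Illustrative sketch only -- names and sizes are assumptions, not the
     * real kernel header. Zero-length guard arrays rely on the GNU C
     * extension, as kernel code does.
     */
    #define PAGE_SIZE		4096
    #define EXCEPTION_STKSZ	(4 * PAGE_SIZE)

    #define ESTACKS_MEMBERS(guardsize)				\
    	char	DF_stack_guard[guardsize];			\
    	char	DF_stack[EXCEPTION_STKSZ];			\
    	char	NMI_stack_guard[guardsize];			\
    	char	NMI_stack[EXCEPTION_STKSZ];			\
    	char	MCE_stack_guard[guardsize];			\
    	char	MCE_stack[EXCEPTION_STKSZ];			\
    	char	IST_top_guard[guardsize];

    /* Physically contiguous backing store: no holes between the stacks. */
    struct exception_stacks {
    	ESTACKS_MEMBERS(0)
    };

    /* Effective CPU entry area layout: a PAGE_SIZE guard before each stack. */
    struct cea_exception_stacks {
    	ESTACKS_MEMBERS(PAGE_SIZE)
    };

The point of keeping a single member list parameterized by guard size is that
the same names describe both layouts: the backing pages are mapped into the
cea_exception_stacks layout page by page, and the *_guard ranges are simply
never mapped.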