/*
 *  linux/arch/x86_64/entry.S
 *
 *  Copyright (C) 1991, 1992  Linus Torvalds
 *  Copyright (C) 2000, 2001, 2002  Andi Kleen SuSE Labs
 *  Copyright (C) 2000  Pavel Machek <pavel@suse.cz>
 *
 * entry.S contains the system-call and fault low-level handling routines.
 *
 * Some of this is documented in Documentation/x86/entry_64.txt
 *
 * A note on terminology:
 * - iret frame:	Architecture defined interrupt frame from SS to RIP
 *			at the top of the kernel process stack.
 *
 * Some macro usage:
 * - ENTRY/END:		Define functions in the symbol table.
 * - TRACE_IRQ_*:	Trace hardirq state for lock debugging.
 * - idtentry:		Define exception entry points.
 */
#include <linux/linkage.h>
#include <asm/segment.h>
#include <asm/cache.h>
#include <asm/errno.h>
#include "calling.h"
#include <asm/asm-offsets.h>
#include <asm/msr.h>
#include <asm/unistd.h>
#include <asm/thread_info.h>
#include <asm/hw_irq.h>
#include <asm/page_types.h>
#include <asm/irqflags.h>
#include <asm/paravirt.h>
#include <asm/percpu.h>
#include <asm/asm.h>
#include <asm/smap.h>
#include <asm/pgtable_types.h>
#include <linux/err.h>

/* Avoid __ASSEMBLER__'ifying <linux/audit.h> just for this.  */
#include <linux/elf-em.h>
#define AUDIT_ARCH_X86_64		(EM_X86_64|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE)
#define __AUDIT_ARCH_64BIT		0x80000000
#define __AUDIT_ARCH_LE			0x40000000
.code64
.section .entry.text, "ax"

#ifdef CONFIG_PARAVIRT
ENTRY(native_usergs_sysret64)
	swapgs
	sysretq
ENDPROC(native_usergs_sysret64)
#endif /* CONFIG_PARAVIRT */
.macro TRACE_IRQS_IRETQ
#ifdef CONFIG_TRACE_IRQFLAGS
	bt	$9, EFLAGS(%rsp)		/* interrupts off? */
	jnc	1f
	TRACE_IRQS_ON
1:
#endif
.endm
/*
 * When the dynamic function tracer is enabled it will add a breakpoint
 * to all locations that it is about to modify, sync CPUs, update
 * all the code, sync CPUs, then remove the breakpoints. During this time
 * if lockdep is enabled, it might jump back into the debug handler
 * outside the updating of the IST protection. (TRACE_IRQS_ON/OFF).
 *
 * We need to change the IDT table before calling TRACE_IRQS_ON/OFF to
 * make sure the stack pointer does not get reset back to the top
 * of the debug stack, and instead just reuses the current stack.
 */
#if defined(CONFIG_DYNAMIC_FTRACE) && defined(CONFIG_TRACE_IRQFLAGS)

.macro TRACE_IRQS_OFF_DEBUG
	call	debug_stack_set_zero
	TRACE_IRQS_OFF
	call	debug_stack_reset
.endm

.macro TRACE_IRQS_ON_DEBUG
	call	debug_stack_set_zero
	TRACE_IRQS_ON
	call	debug_stack_reset
.endm

.macro TRACE_IRQS_IRETQ_DEBUG
	bt	$9, EFLAGS(%rsp)		/* interrupts off? */
	jnc	1f
	TRACE_IRQS_ON_DEBUG
1:
.endm

#else
# define TRACE_IRQS_OFF_DEBUG		TRACE_IRQS_OFF
# define TRACE_IRQS_ON_DEBUG		TRACE_IRQS_ON
# define TRACE_IRQS_IRETQ_DEBUG		TRACE_IRQS_IRETQ
#endif
/*
 * 64-bit SYSCALL instruction entry. Up to 6 arguments in registers.
 *
 * 64-bit SYSCALL saves rip to rcx, clears rflags.RF, then saves rflags to r11,
 * then loads new ss, cs, and rip from previously programmed MSRs.
 * rflags gets masked by a value from another MSR (so CLD and CLAC
 * are not needed). SYSCALL does not save anything on the stack
 * and does not change rsp.
 *
 * Registers on entry:
 * rax  system call number
 * rcx  return address
 * r11  saved rflags (note: r11 is callee-clobbered register in C ABI)
 * rdi  arg0
 * rsi  arg1
 * rdx  arg2
 * r10  arg3 (needs to be moved to rcx to conform to C ABI)
 * r8   arg4
 * r9   arg5
 * (note: r12-r15, rbp, rbx are callee-preserved in C ABI)
 *
 * Only called from user space.
 *
 * When user can change pt_regs->foo always force IRET. That is because
 * it deals with non-canonical addresses better. SYSRET has trouble
 * with them due to bugs in both AMD and Intel CPUs.
 */
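/*
 * For illustration only (user code, not assembled as part of this file):
 * a minimal caller that enters here via SYSCALL under the register
 * convention above, assuming a 14-byte user buffer 'msg' --
 * write(1, msg, 14):
 *
 *	movl	$1, %eax		# rax: __NR_write
 *	movl	$1, %edi		# rdi: arg0 (fd)
 *	leaq	msg(%rip), %rsi		# rsi: arg1 (buf)
 *	movl	$14, %edx		# rdx: arg2 (count)
 *	syscall				# CPU: rcx := rip, r11 := rflags
 */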
ENTRY(entry_SYSCALL_64)
	/*
	 * Interrupts are off on entry.
	 * We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON,
	 * it is too small to ever cause noticeable irq latency.
	 */
	SWAPGS_UNSAFE_STACK
	/*
	 * A hypervisor implementation might want to use a label
	 * after the swapgs, so that it can do the swapgs
	 * for the guest and jump here on syscall.
	 */
GLOBAL(entry_SYSCALL_64_after_swapgs)

	movq	%rsp, PER_CPU_VAR(rsp_scratch)
	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
	/* Construct struct pt_regs on stack */
	pushq	$__USER_DS			/* pt_regs->ss */
	pushq	PER_CPU_VAR(rsp_scratch)	/* pt_regs->sp */
	/*
	 * Re-enable interrupts.
	 * We use 'rsp_scratch' as a scratch space, hence the irq-off block
	 * above must execute atomically in the face of possible
	 * interrupt-driven task preemption. We must enable interrupts only
	 * after we're done with using rsp_scratch:
	 */
	ENABLE_INTERRUPTS(CLBR_NONE)
	pushq	%r11				/* pt_regs->flags */
	pushq	$__USER_CS			/* pt_regs->cs */
	pushq	%rcx				/* pt_regs->ip */
	pushq	%rax				/* pt_regs->orig_ax */
	pushq	%rdi				/* pt_regs->di */
	pushq	%rsi				/* pt_regs->si */
	pushq	%rdx				/* pt_regs->dx */
	pushq	%rcx				/* pt_regs->cx */
	pushq	$-ENOSYS			/* pt_regs->ax */
	pushq	%r8				/* pt_regs->r8 */
	pushq	%r9				/* pt_regs->r9 */
	pushq	%r10				/* pt_regs->r10 */
	pushq	%r11				/* pt_regs->r11 */
	sub	$(6*8), %rsp			/* pt_regs->bp, bx, r12-15 not saved */
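	/*
	 * At this point the frame matches struct pt_regs from the highest
	 * address down: ss, sp, flags, cs, ip, orig_ax, di, si, dx, cx,
	 * ax, r8, r9, r10, r11, then 6*8 bytes of uninitialized space for
	 * bp, bx and r12-r15 (filled in later by SAVE_EXTRA_REGS if the
	 * slow path needs them).
	 */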
	testl	$_TIF_WORK_SYSCALL_ENTRY, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jnz	tracesys
entry_SYSCALL_64_fastpath:
#if __SYSCALL_MASK == ~0
	cmpq	$__NR_syscall_max, %rax
#else
	andl	$__SYSCALL_MASK, %eax
	cmpl	$__NR_syscall_max, %eax
#endif
	ja	1f				/* return -ENOSYS (already in pt_regs->ax) */
	movq	%r10, %rcx

	/*
	 * This call instruction is handled specially in stub_ptregs_64.
	 * It might end up jumping to the slow path.  If it jumps, RAX is
	 * clobbered.
	 */
	call	*sys_call_table(, %rax, 8)
.Lentry_SYSCALL_64_after_fastpath_call:

	movq	%rax, RAX(%rsp)
1:
	/*
	 * Syscall return path ending with SYSRET (fast path).
	 * Has incompletely filled pt_regs.
	 */
	LOCKDEP_SYS_EXIT
	/*
	 * We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON,
	 * it is too small to ever cause noticeable irq latency.
	 */
	DISABLE_INTERRUPTS(CLBR_NONE)

	/*
	 * We must check ti flags with interrupts (or at least preemption)
	 * off because we must *never* return to userspace without
	 * processing exit work that is enqueued if we're preempted here.
	 * In particular, returning to userspace with any of the one-shot
	 * flags (TIF_NOTIFY_RESUME, TIF_USER_RETURN_NOTIFY, etc) set is
	 * very bad.
	 */
	testl	$_TIF_ALLWORK_MASK, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jnz	int_ret_from_sys_call_irqs_off	/* Go to the slow path */
	RESTORE_C_REGS_EXCEPT_RCX_R11
	movq	RIP(%rsp), %rcx
	movq	EFLAGS(%rsp), %r11
	movq	RSP(%rsp), %rsp
	/*
	 * 64-bit SYSRET restores rip from rcx,
	 * rflags from r11 (but RF and VM bits are forced to 0),
	 * cs and ss are loaded from MSRs.
	 * Restoration of rflags re-enables interrupts.
	 *
	 * NB: On AMD CPUs with the X86_BUG_SYSRET_SS_ATTRS bug, the ss
	 * descriptor is not reinitialized.  This means that we should
	 * avoid SYSRET with SS == NULL, which could happen if we schedule,
	 * exit the kernel, and re-enter using an interrupt vector.  (All
	 * interrupt entries on x86_64 set SS to NULL.)  We prevent that
	 * from happening by reloading SS in __switch_to.  (Actually
	 * detecting the failure in 64-bit userspace is tricky but can be
	 * done.)
	 */
	USERGS_SYSRET64

GLOBAL(int_ret_from_sys_call_irqs_off)
	TRACE_IRQS_ON
	ENABLE_INTERRUPTS(CLBR_NONE)
	jmp	int_ret_from_sys_call

	/* Do syscall entry tracing */
tracesys:
	SAVE_EXTRA_REGS
	movq	%rsp, %rdi
	call	syscall_trace_enter

	/*
	 * Reload registers from stack in case ptrace changed them.
	 * We don't reload %rax because syscall_trace_enter() returned
	 * the value it wants us to use in the table lookup.
	 */
	RESTORE_C_REGS_EXCEPT_RAX
#if __SYSCALL_MASK == ~0
	cmpq	$__NR_syscall_max, %rax
#else
	andl	$__SYSCALL_MASK, %eax
	cmpl	$__NR_syscall_max, %eax
#endif
	ja	1f				/* return -ENOSYS (already in pt_regs->ax) */
	movq	%r10, %rcx			/* fixup for C */
	call	*sys_call_table(, %rax, 8)
	movq	%rax, RAX(%rsp)
	RESTORE_EXTRA_REGS
1:

	/* Use IRET because user could have changed pt_regs->foo */

/*
 * Syscall return path ending with IRET.
 * Has correct iret frame.
 */
GLOBAL(int_ret_from_sys_call)
	SAVE_EXTRA_REGS
	movq	%rsp, %rdi
	call	syscall_return_slowpath	/* returns with IRQs disabled */
	RESTORE_EXTRA_REGS
	TRACE_IRQS_IRETQ		/* we're about to change IF */

	/*
	 * Try to use SYSRET instead of IRET if we're returning to
	 * a completely clean 64-bit userspace context.
	 */
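	/*
	 * The checks below, in order: pt_regs->cx must equal pt_regs->ip,
	 * that value must be canonical, CS must be __USER_CS, pt_regs->r11
	 * must equal pt_regs->flags with RF and TF clear, and SS must be
	 * __USER_DS.  Any failure falls back to IRET via
	 * opportunistic_sysret_failed.
	 */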
	movq	RCX(%rsp), %rcx
	movq	RIP(%rsp), %r11
	cmpq	%rcx, %r11			/* RCX == RIP */
	jne	opportunistic_sysret_failed

	/*
	 * On Intel CPUs, SYSRET with non-canonical RCX/RIP will #GP
	 * in kernel space.  This essentially lets the user take over
	 * the kernel, since userspace controls RSP.
	 *
	 * If width of "canonical tail" ever becomes variable, this will need
	 * to be updated to remain correct on both old and new CPUs.
	 */
	.ifne __VIRTUAL_MASK_SHIFT - 47
	.error "virtual address width changed -- SYSRET checks need update"
	.endif

	/* Change top 16 bits to be the sign-extension of 47th bit */
	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
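	/*
	 * Worked example: with __VIRTUAL_MASK_SHIFT == 47 both shifts are
	 * by 16.  A canonical address such as 0x00007fffffffffff or
	 * 0xffff800000000000 comes back unchanged; a non-canonical one
	 * such as 0x0000800000000000 comes back as 0xffff800000000000,
	 * so the compare below catches it.
	 */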
	/* If this changed %rcx, it was not canonical */
	cmpq	%rcx, %r11
	jne	opportunistic_sysret_failed

	cmpq	$__USER_CS, CS(%rsp)		/* CS must match SYSRET */
	jne	opportunistic_sysret_failed

	movq	R11(%rsp), %r11
	cmpq	%r11, EFLAGS(%rsp)		/* R11 == RFLAGS */
	jne	opportunistic_sysret_failed
	/*
	 * SYSRET can't restore RF.  SYSRET can restore TF, but unlike IRET,
	 * restoring TF results in a trap from userspace immediately after
	 * SYSRET.  This would cause an infinite loop whenever #DB happens
	 * with register state that satisfies the opportunistic SYSRET
	 * conditions.  For example, single-stepping this user code:
	 *
	 *           movq	$stuck_here, %rcx
	 *           pushfq
	 *           popq	%r11
	 * stuck_here:
	 *
	 * would never get past 'stuck_here'.
	 */
	testq	$(X86_EFLAGS_RF|X86_EFLAGS_TF), %r11
	jnz	opportunistic_sysret_failed

	/* nothing to check for RSP */

	cmpq	$__USER_DS, SS(%rsp)		/* SS must match SYSRET */
	jne	opportunistic_sysret_failed

	/*
	 * We win! This label is here just for ease of understanding
	 * perf profiles. Nothing jumps here.
	 */
syscall_return_via_sysret:
	/* rcx and r11 are already restored (see code above) */
	RESTORE_C_REGS_EXCEPT_RCX_R11
	movq	RSP(%rsp), %rsp
	USERGS_SYSRET64

opportunistic_sysret_failed:
	SWAPGS
	jmp	restore_c_regs_and_iret
END(entry_SYSCALL_64)

ENTRY(stub_ptregs_64)
	/*
	 * Syscalls marked as needing ptregs land here.
	 * If we are on the fast path, we need to save the extra regs.
	 * If we are on the slow path, the extra regs are already saved.
	 *
	 * RAX stores a pointer to the C function implementing the syscall.
	 */
	cmpq	$.Lentry_SYSCALL_64_after_fastpath_call, (%rsp)
	jne	1f

	/* Called from fast path -- pop return address and jump to slow path */
	popq	%rax
	jmp	tracesys	/* called from fast path */

1:
	/* Called from C */
	jmp	*%rax				/* called from C */
END(stub_ptregs_64)

.macro ptregs_stub func
ENTRY(ptregs_\func)
	leaq	\func(%rip), %rax
	jmp	stub_ptregs_64
END(ptregs_\func)
.endm

/* Instantiate ptregs_stub for each ptregs-using syscall */
#define __SYSCALL_64_QUAL_(sym)
#define __SYSCALL_64_QUAL_ptregs(sym) ptregs_stub sym
#define __SYSCALL_64(nr, sym, qual) __SYSCALL_64_QUAL_##qual(sym)
#include <asm/syscalls_64.h>
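/*
 * For illustration: for a ptregs-qualified entry such as sys_clone,
 * the macro above expands to (sketch, not additional code):
 *
 *	ENTRY(ptregs_sys_clone)
 *		leaq	sys_clone(%rip), %rax
 *		jmp	stub_ptregs_64
 *	END(ptregs_sys_clone)
 *
 * while non-ptregs entries expand to nothing here; the syscall table
 * then points at ptregs_sys_clone instead of sys_clone.
 */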

/*
 * A newly forked process directly context switches into this address.
 *
 * rdi: prev task we switched from
 */
ENTRY(ret_from_fork)
	LOCK ; btr $TIF_FORK, TI_flags(%r8)

	pushq	$0x0002
	popfq					/* reset kernel eflags */

	call	schedule_tail			/* rdi: 'prev' task parameter */

	testb	$3, CS(%rsp)			/* from kernel_thread? */
	jnz	1f

	/*
	 * We came from kernel_thread.  This code path is quite twisted, and
	 * someone should clean it up.
	 *
	 * copy_thread_tls stashes the function pointer in RBX and the
	 * parameter to be passed in RBP.  The called function is permitted
	 * to call do_execve and thereby jump to user mode.
	 */
	movq	RBP(%rsp), %rdi
	call	*RBX(%rsp)
	movl	$0, RAX(%rsp)

	/*
	 * Fall through as though we're exiting a syscall.  This makes a
	 * twisted sort of sense if we just called do_execve.
	 */

1:
	movq	%rsp, %rdi
	call	syscall_return_slowpath	/* returns with IRQs disabled */
	TRACE_IRQS_ON			/* user mode is traced as IRQS on */
	SWAPGS
	jmp	restore_regs_and_iret
END(ret_from_fork)

/*
 * Build the entry stubs with some assembler magic.
 * We pack 1 stub into every 8-byte block.
 */
	.align 8
ENTRY(irq_entries_start)
    vector=FIRST_EXTERNAL_VECTOR
    .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
	pushq	$(~vector+0x80)			/* Note: always in signed byte range */
    vector=vector+1
	jmp	common_interrupt
	.align	8
    .endr
END(irq_entries_start)
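/*
 * Worked example of the encoding above: for vector 0x20
 * (FIRST_EXTERNAL_VECTOR), ~0x20 + 0x80 = 0x5f, which fits in a signed
 * byte, keeping each stub small enough for its 8-byte slot;
 * common_interrupt undoes the +0x80 bias.
 */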

/*
 * Interrupt entry/exit.
 *
 * Interrupt entry points save only callee clobbered registers in fast path.
 *
 * Entry runs with interrupts off.
 */

/* 0(%rsp): ~(interrupt number) */
	.macro interrupt func
	cld
	ALLOC_PT_GPREGS_ON_STACK
	SAVE_C_REGS
	SAVE_EXTRA_REGS
	testb	$3, CS(%rsp)
	jz	1f

	/*
	 * IRQ from user mode.  Switch to kernel gsbase and inform context
	 * tracking that we're in kernel mode.
	 */
	SWAPGS

	/*
	 * We need to tell lockdep that IRQs are off.  We can't do this until
	 * we fix gsbase, and we should do it before enter_from_user_mode
	 * (which can take locks).  Since TRACE_IRQS_OFF is idempotent,
	 * the simplest way to handle it is to just call it twice if
	 * we enter from user mode.  There's no reason to optimize this since
	 * TRACE_IRQS_OFF is a no-op if lockdep is off.
	 */
	TRACE_IRQS_OFF

	CALL_enter_from_user_mode
x86/asm/entry/64: Always allocate a complete "struct pt_regs" on the kernel stack
The 64-bit entry code was using six stack slots less by not
saving/restoring registers which are callee-preserved according
to the C ABI, and was not allocating space for them.
Only when syscalls needed a complete "struct pt_regs" was
the complete area allocated and filled in.
As an additional twist, on interrupt entry a "slightly less
truncated pt_regs" trick is used, to make nested interrupt
stacks easier to unwind.
This proved to be a source of significant obfuscation and subtle
bugs. For example, 'stub_fork' had to pop the return address,
extend the struct, save registers, and push return address back.
Ugly. 'ia32_ptregs_common' pops return address and "returns" via
jmp insn, throwing a wrench into CPU return stack cache.
This patch changes the code to always allocate a complete
"struct pt_regs" on the kernel stack. The saving of registers
is still done lazily.
"Partial pt_regs" trick on interrupt stack is retained.
Macros which manipulate "struct pt_regs" on stack are reworked:
- ALLOC_PT_GPREGS_ON_STACK allocates the structure.
- SAVE_C_REGS saves to it those registers which are clobbered
by C code.
- SAVE_EXTRA_REGS saves to it all other registers.
- Corresponding RESTORE_* and REMOVE_PT_GPREGS_FROM_STACK macros
reverse it.
'ia32_ptregs_common', 'stub_fork' and friends lost their ugly dance
with the return pointer.
LOAD_ARGS32 in ia32entry.S now uses symbolic stack offsets
instead of magic numbers.
'error_entry' and 'save_paranoid' now use SAVE_C_REGS +
SAVE_EXTRA_REGS instead of having it open-coded yet again.
Patch was run-tested: 64-bit executables, 32-bit executables,
strace works.
Timing tests did not show measurable difference in 32-bit
and 64-bit syscalls.
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Will Drewry <wad@chromium.org>
Link: http://lkml.kernel.org/r/1423778052-21038-2-git-send-email-dvlasenk@redhat.com
Link: http://lkml.kernel.org/r/b89763d354aa23e670b9bdf3a40ae320320a7c2e.1424989793.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-02-26 14:40:27 -08:00
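To illustrate the shape these macros take (a sketch only -- the offsets and
register order here are assumptions; the authoritative definitions live in
calling.h):

	/* Sketch: reserve room for all 15 GP-register slots of pt_regs. */
	.macro ALLOC_PT_GPREGS_ON_STACK
	addq	$-(15*8), %rsp
	.endm

	/* Sketch: save the callee-preserved ("extra") registers. */
	.macro SAVE_EXTRA_REGS
	movq	%r15, 0*8(%rsp)
	movq	%r14, 1*8(%rsp)
	movq	%r13, 2*8(%rsp)
	movq	%r12, 3*8(%rsp)
	movq	%rbp, 4*8(%rsp)
	movq	%rbx, 5*8(%rsp)
	.endm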
1:
2015-01-08 17:25:15 +01:00
	/*
2015-02-26 14:40:28 -08:00
	 * Save previous stack pointer, optionally switch to interrupt stack.
2015-01-08 17:25:15 +01:00
	 * irq_count is used to check if a CPU is already on an interrupt stack
	 * or not. While this is essentially redundant with preempt_count it is
	 * a little cheaper to use a separate counter in the PDA (short of
	 * moving irq_enter into assembly, which would be too much work).
	 * The incl below sets ZF only when irq_count goes from -1 to 0, i.e.
	 * on first entry, so cmovzq switches to the irq stack exactly once.
	 */
2015-07-03 12:44:30 -07:00
	movq	%rsp, %rdi
2015-06-08 20:43:07 +02:00
	incl	PER_CPU_VAR(irq_count)
	cmovzq	PER_CPU_VAR(irq_stack_ptr), %rsp
2015-07-03 12:44:30 -07:00
	pushq	%rdi
2015-01-08 17:25:15 +01:00
	/* We entered an interrupt context - irqs are off: */
	TRACE_IRQS_OFF
2015-07-03 12:44:30 -07:00
	call	\func			/* rdi points to pt_regs */
2005-04-16 15:20:36 -07:00
.endm
2008-11-13 13:50:20 +01:00
/*
 * The interrupt stubs push (~vector+0x80) onto the stack and
 * then jump to common_interrupt.
 */
2008-11-11 13:51:52 -08:00
	.p2align CONFIG_X86_L1_CACHE_SHIFT
common_interrupt:
2012-11-02 11:18:39 +00:00
	ASM_CLAC
2015-06-08 20:43:07 +02:00
	addq	$-0x80, (%rsp)		/* Adjust vector to [-256, -1] range */
2005-04-16 15:20:36 -07:00
	interrupt do_IRQ
2015-03-23 14:03:59 +01:00
	/* 0(%rsp): old RSP */
2005-09-12 18:49:24 +02:00
ret_from_intr:
2008-01-30 13:32:08 +01:00
	DISABLE_INTERRUPTS(CLBR_NONE)
2006-07-03 00:24:45 -07:00
	TRACE_IRQS_OFF
2015-06-08 20:43:07 +02:00
	decl	PER_CPU_VAR(irq_count)
2011-01-06 15:22:47 +01:00
2011-07-02 16:52:45 +02:00
	/* Restore saved previous stack */
2015-07-03 12:44:29 -07:00
	popq	%rsp
2011-01-06 15:22:47 +01:00
x86/asm/entry/64: Clean up usage of TEST insns
By the nature of TEST operation, it is often possible
to test a narrower part of the operand:
"testl $3, mem" -> "testb $3, mem"
This results in shorter insns, because the TEST insn has no
sign-extending byte-immediate form, unlike other ALU ops.
text data bss dec hex filename
11674 0 0 11674 2d9a entry_64.o.before
11658 0 0 11658 2d8a entry_64.o
Changes in object code:
- f7 84 24 88 00 00 00 03 00 00 00 testl $0x3,0x88(%rsp)
+ f6 84 24 88 00 00 00 03 testb $0x3,0x88(%rsp)
- f7 44 24 68 03 00 00 00 testl $0x3,0x68(%rsp)
+ f6 44 24 68 03 testb $0x3,0x68(%rsp)
- f7 84 24 90 00 00 00 03 00 00 00 testl $0x3,0x90(%rsp)
+ f6 84 24 90 00 00 00 03 testb $0x3,0x90(%rsp)
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Drewry <wad@chromium.org>
Link: http://lkml.kernel.org/r/1430140912-7960-2-git-send-email-dvlasenk@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-04-27 15:21:52 +02:00
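To spell out the saving (an illustrative pair; the encodings are the ones
quoted in the object-code diff above):

	testl	$0x3, 0x68(%rsp)	/* f7 44 24 68 03 00 00 00 -- imm32, 8 bytes */
	testb	$0x3, 0x68(%rsp)	/* f6 44 24 68 03          -- imm8,  5 bytes */

Both set ZF identically here because the mask fits in the low byte.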
	testb	$3, CS(%rsp)
2015-04-27 15:21:51 +02:00
	jz	retint_kernel
2015-06-08 20:43:07 +02:00
2015-07-03 12:44:31 -07:00
	/* Interrupt came from user space */
GLOBAL(retint_user)
	mov	%rsp,%rdi
	call	prepare_exit_to_usermode
2006-07-03 00:24:45 -07:00
	TRACE_IRQS_IRETQ
2008-01-30 13:32:08 +01:00
	SWAPGS
2015-07-03 12:44:29 -07:00
	jmp	restore_regs_and_iret
2006-07-03 00:24:45 -07:00
2015-03-30 20:09:31 +02:00
/* Returning to kernel space */
2015-03-31 19:00:05 +02:00
retint_kernel:
2015-03-30 20:09:31 +02:00
#ifdef CONFIG_PREEMPT
	/* Interrupts are off */
	/* Check if we need preemption */
2015-06-08 20:43:07 +02:00
	bt	$9, EFLAGS(%rsp)	/* were interrupts off? */
2015-03-31 19:00:05 +02:00
	jnc	1f
2015-06-08 20:43:07 +02:00
0:	cmpl	$0, PER_CPU_VAR(__preempt_count)
2015-03-31 19:00:07 +02:00
	jnz	1f
2015-03-30 20:09:31 +02:00
	call	preempt_schedule_irq
2015-03-31 19:00:07 +02:00
	jmp	0b
2015-03-31 19:00:05 +02:00
1:
2015-03-30 20:09:31 +02:00
#endif
2006-07-03 00:24:45 -07:00
	/*
	 * The iretq could re-enable interrupts:
	 */
	TRACE_IRQS_IRETQ
2015-04-02 18:46:59 +02:00
	/*
	 * At this label, code paths which return to kernel and to user,
	 * which come from interrupts/exception and from syscalls, merge.
	 */
2015-10-05 17:48:09 -07:00
GLOBAL(restore_regs_and_iret)
2015-07-03 12:44:29 -07:00
	RESTORE_EXTRA_REGS
2015-04-02 18:46:59 +02:00
restore_c_regs_and_iret:
x86/asm/entry/64: Always allocate a complete "struct pt_regs" on the kernel stack
2015-02-26 14:40:27 -08:00
	RESTORE_C_REGS
	REMOVE_PT_GPREGS_FROM_STACK 8
2014-07-23 08:34:11 -07:00
	INTERRUPT_RETURN
ENTRY(native_iret)
x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack
The IRET instruction, when returning to a 16-bit segment, only
restores the bottom 16 bits of the user space stack pointer. This
causes some 16-bit software to break, but it also leaks kernel state
to user space. We have a software workaround for that ("espfix") for
the 32-bit kernel, but it relies on a nonzero stack segment base which
is not available in 64-bit mode.
In checkin:
b3b42ac2cbae x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels
we "solved" this by forbidding 16-bit segments on 64-bit kernels, with
the logic that 16-bit support is crippled on 64-bit kernels anyway (no
V86 support), but it turns out that people are doing stuff like
running old Win16 binaries under Wine and expect it to work.
We work around this by creating percpu "ministacks", each of which
is mapped 2^16 times 64K apart. When we detect that the return SS is
on the LDT, we copy the IRET frame to the ministack and use the
relevant alias to return to userspace. The ministacks are mapped
readonly, so if IRET faults we promote #GP to #DF which is an IST
vector and thus has its own stack; we then do the fixup in the #DF
handler.
(Making #GP an IST exception would make the msr_safe functions unsafe
in NMI/MC context, and quite possibly have other effects.)
Special thanks to:
- Andy Lutomirski, for the suggestion of using very small stack slots
and copy (as opposed to map) the IRET frame there, and for the
suggestion to mark them readonly and let the fault promote to #DF.
- Konrad Wilk for paravirt fixup and testing.
- Borislav Petkov for testing help and useful comments.
Reported-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Andrew Lutomriski <amluto@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Dirk Hohndel <dirk@hohndel.org>
Cc: Arjan van de Ven <arjan.van.de.ven@intel.com>
Cc: comex <comexk@gmail.com>
Cc: Alexander van Heukelum <heukelum@fastmail.fm>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: <stable@vger.kernel.org> # consider after upstream merge
2014-04-29 16:46:09 -07:00
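Restated as arithmetic (an illustrative note on the copy-and-switch sequence
below, not code from the patch): the aliases repeat every 64K, so the kernel
can pick the alias whose bits 31:16 equal those of the user's own RSP, and
the truncating IRET then reveals nothing the user did not already know:

	/*
	 * aliased_rsp = PER_CPU_VAR(espfix_stack) | (user_rsp & 0xffff0000)
	 *
	 * Bits 31:16 mirror the user's RSP; espfix_stack supplies the rest,
	 * pointing into the read-only ministack holding the copied frame.
	 */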
	/*
	 * Are we returning to a stack segment from the LDT? Note: in
	 * 64-bit mode SS:RSP on the exception stack is always valid.
	 */
2014-05-04 10:36:22 -07:00
#ifdef CONFIG_X86_ESPFIX64
2015-06-08 20:43:07 +02:00
	testb	$4, (SS-RIP)(%rsp)
	jnz	native_irq_return_ldt
2014-05-04 10:36:22 -07:00
#endif
x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack
2014-04-29 16:46:09 -07:00
2014-11-22 18:00:31 -08:00
.global native_irq_return_iret
2014-07-23 08:34:11 -07:00
native_irq_return_iret:
x86_64, traps: Rework bad_iret
It's possible for iretq to userspace to fail. This can happen because
of a bad CS, SS, or RIP.
Historically, we've handled it by fixing up an exception from iretq to
land at bad_iret, which pretends that the failed iret frame was really
the hardware part of #GP(0) from userspace. To make this work, there's
an extra fixup to fudge the gs base into a usable state.
This is suboptimal because it loses the original exception. It's also
buggy because there's no guarantee that we were on the kernel stack to
begin with. For example, if the failing iret happened on return from an
NMI, then we'll end up executing general_protection on the NMI stack.
This is bad for several reasons, the most immediate of which is that
general_protection, as a non-paranoid idtentry, will try to deliver
signals and/or schedule from the wrong stack.
This patch throws out bad_iret entirely. As a replacement, it augments
the existing swapgs fudge into a full-blown iret fixup, mostly written
in C. It should be clearer and more correct.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-11-22 18:00:33 -08:00
	/*
	 * This may fault.  Non-paranoid faults on return to userspace are
	 * handled by fixup_bad_iret.  These include #SS, #GP, and #NP.
	 * Double-faults due to espfix64 are handled in do_double_fault.
	 * Other faults here are fatal.
	 */
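A sketch of how the fixup path can recognize such a fault (illustrative; the
fixup label name is hypothetical -- the real check lives in the error-entry
code, which compares the faulting RIP against the label above):

	leaq	native_irq_return_iret(%rip), %rcx
	cmpq	%rcx, RIP+8(%rsp)		/* did we fault on the iretq below? */
	je	.Lfixup_bad_iret_sketch		/* hypothetical: hand off to fixup_bad_iret */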
2005-04-16 15:20:36 -07:00
iretq
2008-02-09 23:24:08 +01:00
2014-05-04 10:36:22 -07:00
#ifdef CONFIG_X86_ESPFIX64
2014-07-23 08:34:11 -07:00
native_irq_return_ldt:
2015-06-08 20:43:07 +02:00
	pushq	%rax
	pushq	%rdi
x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack
2014-04-29 16:46:09 -07:00
SWAPGS
2015-06-08 20:43:07 +02:00
	movq	PER_CPU_VAR(espfix_waddr), %rdi
	movq	%rax, (0*8)(%rdi)		/* RAX */
	movq	(2*8)(%rsp), %rax		/* RIP */
	movq	%rax, (1*8)(%rdi)
	movq	(3*8)(%rsp), %rax		/* CS */
	movq	%rax, (2*8)(%rdi)
	movq	(4*8)(%rsp), %rax		/* RFLAGS */
	movq	%rax, (3*8)(%rdi)
	movq	(6*8)(%rsp), %rax		/* SS */
	movq	%rax, (5*8)(%rdi)
	movq	(5*8)(%rsp), %rax		/* RSP */
	movq	%rax, (4*8)(%rdi)
	andl	$0xffff0000, %eax
	popq	%rdi
	orq	PER_CPU_VAR(espfix_stack), %rax
x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack
2014-04-29 16:46:09 -07:00
SWAPGS
2015-06-08 20:43:07 +02:00
	movq	%rax, %rsp
	popq	%rax
	jmp	native_irq_return_iret
2014-05-04 10:36:22 -07:00
#endif
2006-06-26 13:56:55 +02:00
END(common_interrupt)
x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack
2014-04-29 16:46:09 -07:00
2005-04-16 15:20:36 -07:00
/*
 * APIC interrupts.
2008-11-16 15:29:00 +01:00
 */
x86, trace: Add irq vector tracepoints
[Purpose of this patch]
As Vaibhav explained in the thread below, tracepoints for irq vectors
are useful.
http://www.spinics.net/lists/mm-commits/msg85707.html
<snip>
The current interrupt traces from irq_handler_entry and irq_handler_exit
provide when an interrupt is handled. They provide good data about when
the system has switched to kernel space and how it affects the currently
running processes.
There are some IRQ vectors which trigger the system into kernel space,
which are not handled in generic IRQ handlers. Tracing such events gives
us the information about IRQ interaction with other system events.
The trace also tells where the system is spending its time. We want to
know which cores are handling interrupts and how they are affecting other
processes in the system. Also, the trace provides information about when
the cores are idle and which interrupts are changing that state.
<snip>
On the other hand, my use case is tracing just the local timer event and
getting the value of the instruction pointer.
I previously suggested adding an argument to the local timer event to get the instruction pointer,
but there is another way to get it, with an external module like systemtap,
so I don't need to add any argument to the irq vector tracepoints now.
[Patch Description]
Vaibhav's patch shared one tracepoint, irq_vector_entry/irq_vector_exit, across all events.
But the use case above calls for tracing a specific irq vector rather than all events,
and in that case we are concerned about the overhead of unwanted events.
So, add the following tracepoints instead of introducing irq_vector_entry/exit,
so that we can enable them independently.
- local_timer_vector
- reschedule_vector
- call_function_vector
- call_function_single_vector
- irq_work_entry_vector
- error_apic_vector
- thermal_apic_vector
- threshold_apic_vector
- spurious_apic_vector
- x86_platform_ipi_vector
Also, introduce logic to switch the IDT at enable/disable time so that the time penalty
is zero when tracepoints are disabled. Detailed explanations are as follows.
- Create trace irq handlers with entering_irq()/exiting_irq().
- Create a new IDT, trace_idt_table, at boot time by adding logic to
_set_gate(). It is just a copy of the original idt table.
- Register the new handlers for tracepoints in the new IDT by introducing
macros to alloc_intr_gate() called at registration time of irq_vector handlers.
- Add a check of whether irq vector tracing is on/off into load_current_idt().
This has to be done below the debug check for these reasons:
- Switching to the debug IDT may be kicked while tracing is enabled.
- On the other hand, switching to the trace IDT is kicked only when debugging
is disabled.
In addition, the new IDT is created only when CONFIG_TRACING is enabled to avoid being
used for other purposes.
Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
Link: http://lkml.kernel.org/r/51C323ED.5050708@hds.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
2013-06-20 11:46:53 -04:00
.macro apicinterrupt3 num sym do_sym
2008-11-23 10:08:28 +01:00
ENTRY(\sym)
2012-11-02 11:18:39 +00:00
	ASM_CLAC
2015-06-08 20:43:07 +02:00
	pushq	$~(\num)
2011-11-29 11:03:46 +00:00
.Lcommon_\sym:
2008-11-23 10:08:28 +01:00
	interrupt \do_sym
2015-06-08 20:43:07 +02:00
	jmp	ret_from_intr
2008-11-23 10:08:28 +01:00
END(\sym)
.endm
2005-04-16 15:20:36 -07:00
x86, trace: Add irq vector tracepoints
2013-06-20 11:46:53 -04:00
#ifdef CONFIG_TRACING
#define trace(sym) trace_##sym
#define smp_trace(sym) smp_trace_##sym
.macro trace_apicinterrupt num sym
apicinterrupt3 \num trace(\sym) smp_trace(\sym)
.endm
#else
.macro trace_apicinterrupt num sym do_sym
.endm
#endif
.macro apicinterrupt num sym do_sym
apicinterrupt3 \num \sym \do_sym
trace_apicinterrupt \num \sym
.endm
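For example (mirroring the registrations that follow), a single line such as:

	apicinterrupt LOCAL_TIMER_VECTOR	apic_timer_interrupt	smp_apic_timer_interrupt

emits the plain stub via apicinterrupt3 and, under CONFIG_TRACING, a second
trace_apic_timer_interrupt stub that enters the smp_trace_ variant of the
handler, so the traced and untraced vectors can be enabled independently.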
2008-11-23 10:08:28 +01:00
#ifdef CONFIG_SMP
2015-06-08 20:43:07 +02:00
apicinterrupt3 IRQ_MOVE_CLEANUP_VECTOR	irq_move_cleanup_interrupt	smp_irq_move_cleanup_interrupt
apicinterrupt3 REBOOT_VECTOR		reboot_interrupt		smp_reboot_interrupt
2008-11-23 10:08:28 +01:00
#endif
2005-04-16 15:20:36 -07:00
2009-01-20 04:36:04 +01:00
#ifdef CONFIG_X86_UV
2015-06-08 20:43:07 +02:00
apicinterrupt3 UV_BAU_MESSAGE		uv_bau_message_intr1		uv_bau_message_interrupt
2009-01-20 04:36:04 +01:00
#endif
2015-06-08 20:43:07 +02:00
apicinterrupt LOCAL_TIMER_VECTOR	apic_timer_interrupt		smp_apic_timer_interrupt
apicinterrupt X86_PLATFORM_IPI_VECTOR	x86_platform_ipi		smp_x86_platform_ipi
2005-11-05 17:25:53 +01:00
2013-04-11 19:25:11 +08:00
#ifdef CONFIG_HAVE_KVM
2015-06-08 20:43:07 +02:00
apicinterrupt3 POSTED_INTR_VECTOR	kvm_posted_intr_ipi		smp_kvm_posted_intr_ipi
apicinterrupt3 POSTED_INTR_WAKEUP_VECTOR kvm_posted_intr_wakeup_ipi	smp_kvm_posted_intr_wakeup_ipi
2013-04-11 19:25:11 +08:00
#endif
2013-06-22 07:33:30 -04:00
#ifdef CONFIG_X86_MCE_THRESHOLD
2015-06-08 20:43:07 +02:00
apicinterrupt THRESHOLD_APIC_VECTOR	threshold_interrupt		smp_threshold_interrupt
2013-06-22 07:33:30 -04:00
#endif
2015-05-06 06:58:56 -05:00
#ifdef CONFIG_X86_MCE_AMD
2015-06-08 20:43:07 +02:00
apicinterrupt DEFERRED_ERROR_VECTOR	deferred_error_interrupt	smp_deferred_error_interrupt
2015-05-06 06:58:56 -05:00
#endif
2013-06-22 07:33:30 -04:00
#ifdef CONFIG_X86_THERMAL_VECTOR
2015-06-08 20:43:07 +02:00
apicinterrupt THERMAL_APIC_VECTOR	thermal_interrupt		smp_thermal_interrupt
2013-06-22 07:33:30 -04:00
#endif
2008-06-02 08:56:14 -05:00
2008-11-23 10:08:28 +01:00
#ifdef CONFIG_SMP
2015-06-08 20:43:07 +02:00
apicinterrupt CALL_FUNCTION_SINGLE_VECTOR call_function_single_interrupt smp_call_function_single_interrupt
apicinterrupt CALL_FUNCTION_VECTOR	call_function_interrupt		smp_call_function_interrupt
apicinterrupt RESCHEDULE_VECTOR		reschedule_interrupt		smp_reschedule_interrupt
2008-11-23 10:08:28 +01:00
#endif
2005-04-16 15:20:36 -07:00
2015-06-08 20:43:07 +02:00
apicinterrupt ERROR_APIC_VECTOR		error_interrupt			smp_error_interrupt
apicinterrupt SPURIOUS_APIC_VECTOR	spurious_interrupt		smp_spurious_interrupt
2008-11-16 15:29:00 +01:00
2010-10-14 14:01:34 +08:00
#ifdef CONFIG_IRQ_WORK
2015-06-08 20:43:07 +02:00
apicinterrupt IRQ_WORK_VECTOR		irq_work_interrupt		smp_irq_work_interrupt
2008-12-03 10:39:53 +01:00
#endif
2005-04-16 15:20:36 -07:00
/*
 * Exception entry points.
2008-11-16 15:29:00 +01:00
 */
2015-03-05 19:19:07 -08:00
#define CPU_TSS_IST(x) PER_CPU_VAR(cpu_tss) + (TSS_ist + ((x) - 1) * 8)
/* e.g. CPU_TSS_IST(1) addresses tss.ist[0], the first IST slot of this CPU */
2014-05-21 15:07:09 -07:00
.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
2008-11-23 10:08:28 +01:00
ENTRY(\sym)
2014-05-21 15:07:09 -07:00
	/* Sanity check */
	.if \shift_ist != -1 && \paranoid == 0
	.error "using shift_ist requires paranoid=1"
	.endif
2012-11-02 11:18:39 +00:00
	ASM_CLAC
2008-11-21 16:44:28 +01:00
	PARAVIRT_ADJUST_EXCEPTION_FRAME
2014-05-21 15:07:08 -07:00
	.ifeq \has_error_code
2015-06-08 20:43:07 +02:00
	pushq	$-1			/* ORIG_RAX: no syscall to restart */
2014-05-21 15:07:08 -07:00
	.endif
x86/asm/entry/64: Always allocate a complete "struct pt_regs" on the kernel stack
2015-02-26 14:40:27 -08:00
	ALLOC_PT_GPREGS_ON_STACK
2014-05-21 15:07:08 -07:00
	.if \paranoid
2014-11-11 12:49:41 -08:00
	.if \paranoid == 1
2015-06-08 20:43:07 +02:00
	testb	$3, CS(%rsp)		/* If coming from userspace, switch stacks */
	jnz	1f
2014-11-11 12:49:41 -08:00
	.endif
2015-06-08 20:43:07 +02:00
	call	paranoid_entry
2014-05-21 15:07:08 -07:00
	.else
2015-06-08 20:43:07 +02:00
	call	error_entry
2014-05-21 15:07:08 -07:00
	.endif
2015-02-26 14:40:34 -08:00
/* returned flag: ebx=0: need swapgs on exit, ebx=1: don't need it */
2014-05-21 15:07:08 -07:00
	.if \paranoid
2014-05-21 15:07:09 -07:00
	.if \shift_ist != -1
2015-06-08 20:43:07 +02:00
	TRACE_IRQS_OFF_DEBUG		/* reload IDT in case of recursion */
2014-05-21 15:07:09 -07:00
	.else
2008-11-21 16:44:28 +01:00
	TRACE_IRQS_OFF
2014-05-21 15:07:08 -07:00
	.endif
2014-05-21 15:07:09 -07:00
	.endif
2014-05-21 15:07:08 -07:00
2015-06-08 20:43:07 +02:00
	movq	%rsp, %rdi		/* pt_regs pointer */
2014-05-21 15:07:08 -07:00
	.if \has_error_code
2015-06-08 20:43:07 +02:00
	movq	ORIG_RAX(%rsp), %rsi	/* get error code */
	movq	$-1, ORIG_RAX(%rsp)	/* no syscall to restart */
2014-05-21 15:07:08 -07:00
	.else
2015-06-08 20:43:07 +02:00
	xorl	%esi, %esi		/* no error code */
2014-05-21 15:07:08 -07:00
	.endif
2014-05-21 15:07:09 -07:00
	.if \shift_ist != -1
2015-06-08 20:43:07 +02:00
	subq	$EXCEPTION_STKSZ, CPU_TSS_IST(\shift_ist)
2014-05-21 15:07:09 -07:00
	.endif
2015-06-08 20:43:07 +02:00
	call	\do_sym
2014-05-21 15:07:08 -07:00
2014-05-21 15:07:09 -07:00
	.if \shift_ist != -1
2015-06-08 20:43:07 +02:00
	addq	$EXCEPTION_STKSZ, CPU_TSS_IST(\shift_ist)
2014-05-21 15:07:09 -07:00
	.endif
2015-02-26 14:40:34 -08:00
	/* these procedures expect "no swapgs" flag in ebx */
2014-05-21 15:07:08 -07:00
	.if \paranoid
2015-06-08 20:43:07 +02:00
	jmp	paranoid_exit
2014-05-21 15:07:08 -07:00
	.else
2015-06-08 20:43:07 +02:00
	jmp	error_exit
2014-05-21 15:07:08 -07:00
	.endif
2014-11-11 12:49:41 -08:00
	.if \paranoid == 1
	/*
	 * Paranoid entry from userspace.  Switch stacks and treat it
	 * as a normal entry.  This means that paranoid handlers
	 * run in real process context if user_mode(regs).
	 */
1:
2015-06-08 20:43:07 +02:00
	call	error_entry
2014-11-11 12:49:41 -08:00
2015-06-08 20:43:07 +02:00
	movq	%rsp, %rdi		/* pt_regs pointer */
	call	sync_regs
	movq	%rax, %rsp		/* switch stack */
2014-11-11 12:49:41 -08:00
2015-06-08 20:43:07 +02:00
	movq	%rsp, %rdi		/* pt_regs pointer */
2014-11-11 12:49:41 -08:00
	.if \has_error_code
2015-06-08 20:43:07 +02:00
	movq	ORIG_RAX(%rsp), %rsi	/* get error code */
	movq	$-1, ORIG_RAX(%rsp)	/* no syscall to restart */
2014-11-11 12:49:41 -08:00
	.else
2015-06-08 20:43:07 +02:00
	xorl	%esi, %esi		/* no error code */
2014-11-11 12:49:41 -08:00
	.endif
2015-06-08 20:43:07 +02:00
	call	\do_sym
2014-11-11 12:49:41 -08:00
2015-06-08 20:43:07 +02:00
	jmp	error_exit		/* %ebx: no swapgs flag */
2014-11-11 12:49:41 -08:00
	.endif
2008-11-24 13:24:28 +01:00
END(\sym)
2008-11-23 10:08:28 +01:00
.endm
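For example (taken from the registrations below), an ordinary exception uses
the defaults, while the double-fault entry asks for the paranoid path and,
via paranoid=2, skips the from-user stack-switch shortcut, since a double
fault cannot trust the saved frame:

	idtentry overflow	do_overflow	has_error_code=0
	idtentry double_fault	do_double_fault	has_error_code=1 paranoid=2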
2008-11-21 16:44:28 +01:00
2013-10-30 16:37:00 -04:00
#ifdef CONFIG_TRACING
2014-05-21 15:07:08 -07:00
.macro trace_idtentry sym do_sym has_error_code:req
idtentry trace(\sym) trace(\do_sym) has_error_code=\has_error_code
idtentry \sym \do_sym has_error_code=\has_error_code
2013-10-30 16:37:00 -04:00
.endm
#else
2014-05-21 15:07:08 -07:00
.macro trace_idtentry sym do_sym has_error_code:req
idtentry \sym \do_sym has_error_code=\has_error_code
2013-10-30 16:37:00 -04:00
.endm
#endif
2015-06-08 20:43:07 +02:00
idtentry divide_error			do_divide_error			has_error_code=0
idtentry overflow			do_overflow			has_error_code=0
idtentry bounds				do_bounds			has_error_code=0
idtentry invalid_op			do_invalid_op			has_error_code=0
idtentry device_not_available		do_device_not_available		has_error_code=0
idtentry double_fault			do_double_fault			has_error_code=1 paranoid=2
idtentry coprocessor_segment_overrun	do_coprocessor_segment_overrun	has_error_code=0
idtentry invalid_TSS			do_invalid_TSS			has_error_code=1
idtentry segment_not_present		do_segment_not_present		has_error_code=1
idtentry spurious_interrupt_bug		do_spurious_interrupt_bug	has_error_code=0
idtentry coprocessor_error		do_coprocessor_error		has_error_code=0
idtentry alignment_check		do_alignment_check		has_error_code=1
idtentry simd_coprocessor_error		do_simd_coprocessor_error	has_error_code=0
/*
 * Reload gs selector with exception handling
 * edi: new selector
 */
2008-06-25 00:19:32 -04:00
ENTRY(native_load_gs_index)
x86/debug: Remove perpetually broken, unmaintainable dwarf annotations
So the dwarf2 annotations in low level assembly code have
become an increasing hindrance: unreadable, messy macros
mixed into some of the most security sensitive code paths
of the Linux kernel.
These debug info annotations don't even buy the upstream
kernel anything: dwarf driven stack unwinding has caused
problems in the past so it's out of tree, and the upstream
kernel only uses the much more robust framepointers based
stack unwinding method.
In addition to that there's a steady, slow bitrot going
on with these annotations, requiring frequent fixups.
There's no tooling and no functionality upstream that
keeps it correct.
So burn down the sick forest, allowing new, healthier growth:
27 files changed, 350 insertions(+), 1101 deletions(-)
Someone who has the willingness and time to do this
properly can attempt to reintroduce dwarf debuginfo in x86
assembly code plus dwarf unwinding from first principles,
with the following conditions:
- it should be maximally readable, and maximally low-key to
'ordinary' code reading and maintenance.
- find a build time method to insert dwarf annotations
automatically in the most common cases, for pop/push
instructions that manipulate the stack pointer. This could
be done for example via a preprocessing step that just
looks for common patterns - plus special annotations for
the few cases where we want to depart from the default.
We have hundreds of CFI annotations, so automating most of
that makes sense.
- it should come with build tooling checks that ensure that
CFI annotations are sensible. We've seen such efforts from
the framepointer side, and there's no reason it couldn't be
done on the dwarf side.
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-28 12:21:47 +02:00
pushfq
2009-01-28 14:35:03 -08:00
	DISABLE_INTERRUPTS(CLBR_ANY & ~CLBR_RDI)
2008-11-27 21:10:08 +03:00
	SWAPGS
2008-11-16 15:29:00 +01:00
gs_change:
2015-06-08 20:43:07 +02:00
	movl	%edi, %gs
2:	mfence					/* workaround */
2008-01-30 13:32:08 +01:00
SWAPGS
x86/debug: Remove perpetually broken, unmaintainable dwarf annotations
2015-05-28 12:21:47 +02:00
popfq
2008-11-27 21:10:08 +03:00
ret
2008-11-23 10:15:32 +01:00
END(native_load_gs_index)
2008-11-16 15:29:00 +01:00
2015-06-08 20:43:07 +02:00
	_ASM_EXTABLE(gs_change, bad_gs)
	.section .fixup, "ax"
2005-04-16 15:20:36 -07:00
	/* running with kernelgs */
2008-11-16 15:29:00 +01:00
bad_gs:
2015-06-08 20:43:07 +02:00
	SWAPGS					/* switch back to user gs */
	xorl	%eax, %eax
	movl	%eax, %gs
	jmp	2b
2008-11-27 21:10:08 +03:00
.previous
2008-11-16 15:29:00 +01:00
2006-08-02 22:37:28 +02:00
/* Call softirq on interrupt stack. Interrupts are off. */
2013-09-05 15:49:45 +02:00
ENTRY(do_softirq_own_stack)
2015-06-08 20:43:07 +02:00
	pushq	%rbp
	mov	%rsp, %rbp
	incl	PER_CPU_VAR(irq_count)
	cmove	PER_CPU_VAR(irq_stack_ptr), %rsp	/* first entry: switch to irq stack */
	push	%rbp				/* frame pointer backlink */
	call	__do_softirq
2006-08-02 22:37:28 +02:00
	leaveq
2015-06-08 20:43:07 +02:00
	decl	PER_CPU_VAR(irq_count)
2005-07-28 21:15:49 -07:00
ret
2013-09-05 15:49:45 +02:00
END(do_softirq_own_stack)
2007-06-23 02:29:25 +02:00
2008-07-08 15:06:49 -07:00
#ifdef CONFIG_XEN
2014-05-21 15:07:08 -07:00
idtentry xen_hypervisor_callback xen_do_hypervisor_callback has_error_code=0
2008-07-08 15:06:49 -07:00
/*
2008-11-27 21:10:08 +03:00
 * A note on the "critical region" in our callback handler.
 * We want to avoid stacking callback handlers due to events occurring
 * during handling of the last event. To do this, we keep events disabled
 * until we've done all processing. HOWEVER, we must enable events before
 * popping the stack frame (can't be done atomically) and so it would still
 * be possible to get enough handler activations to overflow the stack.
 * Although unlikely, bugs of that kind are hard to track down, so we'd
 * like to avoid the possibility.
 * So, on entry to the handler we detect whether we interrupted an
 * existing activation in its critical region -- if so, we pop the current
 * activation and restart the handler using the previous one.
 */
2015-06-08 20:43:07 +02:00
ENTRY(xen_do_hypervisor_callback)		/* do_hypervisor_callback(struct *pt_regs) */
2008-11-27 21:10:08 +03:00
	/*
	 * Since we don't modify %rdi, evtchn_do_upcall(struct *pt_regs) will
	 * see the correct pointer to the pt_regs
	 */
2015-06-08 20:43:07 +02:00
	movq	%rdi, %rsp			/* we don't return, adjust the stack frame */
11:	incl	PER_CPU_VAR(irq_count)
	movq	%rsp, %rbp
	cmovzq	PER_CPU_VAR(irq_stack_ptr), %rsp
	pushq	%rbp				/* frame pointer backlink */
	call	xen_evtchn_do_upcall
	popq	%rsp
	decl	PER_CPU_VAR(irq_count)
2015-02-19 15:23:17 +00:00
#ifndef CONFIG_PREEMPT
2015-06-08 20:43:07 +02:00
	call	xen_maybe_preempt_hcall
2015-02-19 15:23:17 +00:00
#endif
2015-06-08 20:43:07 +02:00
	jmp	error_exit
x86, binutils, xen: Fix another wrong size directive
The latest binutils (2.21.0.20110302/Ubuntu) breaks the build
yet another time, under CONFIG_XEN=y due to a .size directive that
refers to a slightly differently named (hence, to the now very
strict and unforgiving assembler, non-existent) symbol.
[ mingo:
This unnecessary build breakage caused by new binutils
version 2.21 gets escalated back several kernel releases spanning
several years of Linux history, affecting over 130,000 upstream
kernel commits (!), on CONFIG_XEN=y 64-bit kernels (i.e. essentially
affecting all major Linux distro kernel configs).
Git annotate tells us that this slight debug symbol code mismatch
bug has been introduced in 2008 in commit 3d75e1b8:
3d75e1b8 (Jeremy Fitzhardinge 2008-07-08 15:06:49 -0700 1231) ENTRY(xen_do_hypervisor_callback) # do_hypervisor_callback(struct *pt_regs)
The 'bug' is just a slight asymmetry in ENTRY()/END()
debug-symbols sequences, with lots of assembly code between the
ENTRY() and the END():
ENTRY(xen_do_hypervisor_callback) # do_hypervisor_callback(struct *pt_regs)
...
END(do_hypervisor_callback)
Human reviewers almost never catch such small mismatches, and binutils
never even warned about it either.
This new binutils version thus breaks the Xen build on all upstream kernels
since v2.6.27, out of the blue.
This makes a straightforward Git bisection of all 64-bit Xen-enabled kernels
impossible on such binutils, for a bisection window of over hundred
thousand historic commits. (!)
This is a major fail on the side of binutils and binutils needs to turn
this show-stopper build failure into a warning ASAP. ]
Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: H.J. Lu <hjl.tools@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kees Cook <kees.cook@canonical.com>
LKML-Reference: <1299877178-26063-1-git-send-email-heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-03-11 21:59:38 +01:00
END(xen_do_hypervisor_callback)
2008-07-08 15:06:49 -07:00
/*
2008-11-27 21:10:08 +03:00
 * Hypervisor uses this for application faults while it executes.
 * We get here for two reasons:
 *  1. Fault while reloading DS, ES, FS or GS
 *  2. Fault while executing IRET
 * Category 1 we do not need to fix up as Xen has already reloaded all segment
 * registers that could be reloaded and zeroed the others.
 * Category 2 we fix up by killing the current process. We cannot use the
 * normal Linux return path in this case because if we use the IRET hypercall
 * to pop the stack frame we end up in an infinite loop of failsafe callbacks.
 * We distinguish between categories by comparing each saved segment register
 * with its current contents: any discrepancy means we are in category 1.
 */
2008-07-08 15:06:49 -07:00
ENTRY(xen_failsafe_callback)
2015-06-08 20:43:07 +02:00
	movl	%ds, %ecx
	cmpw	%cx, 0x10(%rsp)
	jne	1f
	movl	%es, %ecx
	cmpw	%cx, 0x18(%rsp)
	jne	1f
	movl	%fs, %ecx
	cmpw	%cx, 0x20(%rsp)
	jne	1f
	movl	%gs, %ecx
	cmpw	%cx, 0x28(%rsp)
	jne	1f
2008-07-08 15:06:49 -07:00
/* All segments match their saved values => Category 2 (Bad IRET). */
2015-06-08 20:43:07 +02:00
	movq	(%rsp), %rcx
	movq	8(%rsp), %r11
	addq	$0x30, %rsp
	pushq	$0				/* RIP */
	pushq	%r11
	pushq	%rcx
	jmp	general_protection
2008-07-08 15:06:49 -07:00
1:	/* Segment mismatch => Category 1 (Bad segment). Retry the IRET. */
2015-06-08 20:43:07 +02:00
	movq	(%rsp), %rcx
	movq	8(%rsp), %r11
	addq	$0x30, %rsp
	pushq	$-1				/* orig_ax = -1 => not a system call */
x86/asm/entry/64: Always allocate a complete "struct pt_regs" on the kernel stack
2015-02-26 14:40:27 -08:00
	ALLOC_PT_GPREGS_ON_STACK
	SAVE_C_REGS
	SAVE_EXTRA_REGS
2015-06-08 20:43:07 +02:00
	jmp	error_exit
2008-07-08 15:06:49 -07:00
END(xen_failsafe_callback)
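[ Note: the ALLOC_PT_GPREGS_ON_STACK / SAVE_C_REGS / SAVE_EXTRA_REGS macros
used above are the ones described in the "Always allocate a complete
struct pt_regs" message earlier; a minimal sketch of their shape, with
illustrative numeric offsets rather than the real symbolic calling.h
definitions: ]

/*
 * Hedged sketch of the calling.h macros -- illustrative only, assuming
 * the usual pt_regs layout (r15 lowest, rdi highest); not the exact
 * upstream definitions:
 */
.macro ALLOC_PT_GPREGS_ON_STACK addskip=0
	subq	$15*8+\addskip, %rsp		/* room for 15 GP registers */
.endm

.macro SAVE_C_REGS offset=0
	movq	%rdi, 14*8+\offset(%rsp)	/* registers clobbered by C code */
	movq	%rsi, 13*8+\offset(%rsp)
	movq	%rdx, 12*8+\offset(%rsp)
	movq	%rcx, 11*8+\offset(%rsp)
	movq	%rax, 10*8+\offset(%rsp)
	movq	%r8,   9*8+\offset(%rsp)
	movq	%r9,   8*8+\offset(%rsp)
	movq	%r10,  7*8+\offset(%rsp)
	movq	%r11,  6*8+\offset(%rsp)
.endm

.macro SAVE_EXTRA_REGS offset=0
	movq	%rbx,  5*8+\offset(%rsp)	/* callee-preserved registers */
	movq	%rbp,  4*8+\offset(%rsp)
	movq	%r12,  3*8+\offset(%rsp)
	movq	%r13,  2*8+\offset(%rsp)
	movq	%r14,  1*8+\offset(%rsp)
	movq	%r15,  0*8+\offset(%rsp)
.endm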
x86, trace: Add irq vector tracepoints
[Purpose of this patch]
As Vaibhav explained in the thread below, tracepoints for irq vectors
are useful.
http://www.spinics.net/lists/mm-commits/msg85707.html
<snip>
The current interrupt traces from irq_handler_entry and irq_handler_exit
provide when an interrupt is handled. They provide good data about when
the system has switched to kernel space and how it affects the currently
running processes.
There are some IRQ vectors which trigger the system into kernel space,
which are not handled in generic IRQ handlers. Tracing such events gives
us the information about IRQ interaction with other system events.
The trace also tells where the system is spending its time. We want to
know which cores are handling interrupts and how they are affecting other
processes in the system. Also, the trace provides information about when
the cores are idle and which interrupts are changing that state.
<snip>
On the other hand, my usecase is tracing just local timer event and
getting a value of instruction pointer.
I previously suggested adding an argument to the local timer event to get the instruction pointer.
But there is another way to get it with an external module like systemtap.
So, I don't need to add any argument to irq vector tracepoints now.
[Patch Description]
Vaibhav's patch shared a single tracepoint, irq_vector_entry/irq_vector_exit, across all events.
But the use case above calls for tracing a specific irq vector rather than all events.
In this case, we are concerned about overhead due to unwanted events.
So, add the following tracepoints instead of introducing irq_vector_entry/exit,
so that we can enable them independently.
- local_timer_vector
- reschedule_vector
- call_function_vector
- call_function_single_vector
- irq_work_entry_vector
- error_apic_vector
- thermal_apic_vector
- threshold_apic_vector
- spurious_apic_vector
- x86_platform_ipi_vector
Also, introduce logic to switch the IDT at enable/disable time so that the time penalty
becomes zero when tracepoints are disabled. Detailed explanations are as follows.
- Create trace irq handlers with entering_irq()/exiting_irq().
- Create a new IDT, trace_idt_table, at boot time by adding logic to
_set_gate(). It is just a copy of the original IDT.
- Register the new tracepoint handlers in the new IDT by introducing
macros around alloc_intr_gate(), called when the irq_vector handlers are registered.
- Add a check of whether irq vector tracing is on or off to load_current_idt().
This has to be done after the debug check, for these reasons:
- Switching to the debug IDT may be triggered while tracing is enabled.
- On the other hand, switching to the trace IDT is triggered only when debugging
is disabled.
In addition, the new IDT is created only when CONFIG_TRACING is enabled to avoid being
used for other purposes.
Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
Link: http://lkml.kernel.org/r/51C323ED.5050708@hds.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
2013-06-20 11:46:53 -04:00
apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \
2010-05-14 12:40:51 +01:00
	xen_hvm_callback_vector xen_evtchn_do_upcall
2008-07-08 15:06:49 -07:00
#endif /* CONFIG_XEN */
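[ Note: apicinterrupt3 (defined earlier in this file) stamps out a small
entry stub per vector; a rough, hedged sketch of the shape of one
expansion, using the Xen upcall above as the example -- the "_sketch"
name is hypothetical, and the real macro is the authority: ]

ENTRY(xen_hvm_callback_vector_sketch)
	ASM_CLAC
	pushq	$~(HYPERVISOR_CALLBACK_VECTOR)	/* negated vector number as marker */
	interrupt xen_evtchn_do_upcall		/* build pt_regs, call the C handler */
	jmp	ret_from_intr
END(xen_hvm_callback_vector_sketch)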
2008-11-24 13:24:28 +01:00
2013-02-03 17:22:39 -08:00
#if IS_ENABLED(CONFIG_HYPERV)
2013-06-20 11:46:53 -04:00
apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \
2013-02-03 17:22:39 -08:00
	hyperv_callback_vector hyperv_vector_handler
#endif /* CONFIG_HYPERV */
2015-06-08 20:43:07 +02:00
idtentry debug			do_debug		has_error_code=0	paranoid=1 shift_ist=DEBUG_STACK
idtentry int3			do_int3			has_error_code=0	paranoid=1 shift_ist=DEBUG_STACK
idtentry stack_segment		do_stack_segment	has_error_code=1
2009-03-29 19:56:29 -07:00
#ifdef CONFIG_XEN
2015-06-08 20:43:07 +02:00
idtentry xen_debug		do_debug		has_error_code=0
idtentry xen_int3		do_int3			has_error_code=0
idtentry xen_stack_segment	do_stack_segment	has_error_code=1
2009-03-29 19:56:29 -07:00
#endif
2015-06-08 20:43:07 +02:00
idtentry general_protection	do_general_protection	has_error_code=1
trace_idtentry page_fault	do_page_fault		has_error_code=1
2010-10-14 11:22:52 +02:00
#ifdef CONFIG_KVM_GUEST
2015-06-08 20:43:07 +02:00
idtentry async_page_fault	do_async_page_fault	has_error_code=1
2010-10-14 11:22:52 +02:00
#endif
2015-06-08 20:43:07 +02:00
2008-11-24 13:24:28 +01:00
#ifdef CONFIG_X86_MCE
2015-06-08 20:43:07 +02:00
idtentry machine_check		has_error_code=0	paranoid=1 do_sym=*machine_check_vector(%rip)
2008-11-24 13:24:28 +01:00
#endif
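[ Note: each idtentry invocation above expands to an exception stub. A
heavily condensed, hedged sketch of the non-paranoid, has_error_code=1
shape -- illustrative only; the real macro also handles paranoid=1,
shift_ist and IST stack switching, and the "_sketch" name is
hypothetical: ]

ENTRY(general_protection_sketch)
	ASM_CLAC
	ALLOC_PT_GPREGS_ON_STACK
	call	error_entry			/* save regs, switch gs if needed */
	movq	%rsp, %rdi			/* pt_regs pointer */
	movq	ORIG_RAX(%rsp), %rsi		/* error code pushed by hardware */
	movq	$-1, ORIG_RAX(%rsp)		/* no syscall to restart */
	call	do_general_protection
	jmp	error_exit
END(general_protection_sketch)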
2015-02-26 14:40:34 -08:00
/*
 * Save all registers in pt_regs, and switch gs if needed.
 * Use slow, but surefire "are we in kernel?" check.
 * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
 */
ENTRY(paranoid_entry)
2015-02-26 14:40:33 -08:00
cld
	SAVE_C_REGS 8
	SAVE_EXTRA_REGS 8
2015-06-08 20:43:07 +02:00
	movl	$1, %ebx
	movl	$MSR_GS_BASE, %ecx
2015-02-26 14:40:33 -08:00
rdmsr
2015-06-08 20:43:07 +02:00
	testl	%edx, %edx
	js	1f				/* negative -> in kernel */
2015-02-26 14:40:33 -08:00
SWAPGS
2015-06-08 20:43:07 +02:00
	xorl	%ebx, %ebx
2015-02-26 14:40:33 -08:00
1:	ret
2015-02-26 14:40:34 -08:00
END(paranoid_entry)
2008-11-24 13:24:28 +01:00
2015-02-26 14:40:34 -08:00
/*
 * "Paranoid" exit path from exception stack. This is invoked
 * only on return from non-NMI IST interrupts that came
 * from kernel space.
 *
 * We may be returning to very strange contexts (e.g. very early
 * in syscall entry), so checking for preemption here would
 * be complicated. Fortunately, there's no good reason to try
 * to handle preemption here.
2015-06-08 20:43:07 +02:00
 *
 * On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it)
2015-02-26 14:40:34 -08:00
 */
2008-11-24 13:24:28 +01:00
ENTRY(paranoid_exit)
	DISABLE_INTERRUPTS(CLBR_NONE)
2012-05-30 11:54:53 -04:00
	TRACE_IRQS_OFF_DEBUG
2015-06-08 20:43:07 +02:00
	testl	%ebx, %ebx			/* swapgs needed? */
	jnz	paranoid_exit_no_swapgs
2015-02-26 14:40:30 -08:00
	TRACE_IRQS_IRETQ
2008-11-24 13:24:28 +01:00
	SWAPGS_UNSAFE_STACK
2015-06-08 20:43:07 +02:00
	jmp	paranoid_exit_restore
2015-02-26 14:40:29 -08:00
paranoid_exit_no_swapgs:
2015-02-26 14:40:30 -08:00
	TRACE_IRQS_IRETQ_DEBUG
2015-02-26 14:40:29 -08:00
paranoid_exit_restore:
2015-02-26 14:40:27 -08:00
	RESTORE_EXTRA_REGS
	RESTORE_C_REGS
	REMOVE_PT_GPREGS_FROM_STACK 8
2014-11-11 12:49:41 -08:00
	INTERRUPT_RETURN
2008-11-24 13:24:28 +01:00
END(paranoid_exit)
/*
2015-02-26 14:40:34 -08:00
 * Save all registers in pt_regs, and switch gs if needed.
2015-06-09 12:36:01 -07:00
 * Return: EBX=0: came from usermode; EBX=1: otherwise
2008-11-24 13:24:28 +01:00
 */
ENTRY(error_entry)
cld
2015-02-26 14:40:27 -08:00
	SAVE_C_REGS 8
	SAVE_EXTRA_REGS 8
2015-06-08 20:43:07 +02:00
	xorl	%ebx, %ebx
x86/asm/entry/64: Clean up usage of TEST insns
By the nature of TEST operation, it is often possible
to test a narrower part of the operand:
"testl $3, mem" -> "testb $3, mem"
This results in shorter insns, because TEST insn has no
sign-extending byte-immediate forms, unlike other ALU ops.
text data bss dec hex filename
11674 0 0 11674 2d9a entry_64.o.before
11658 0 0 11658 2d8a entry_64.o
Changes in object code:
- f7 84 24 88 00 00 00 03 00 00 00 testl $0x3,0x88(%rsp)
+ f6 84 24 88 00 00 00 03 testb $0x3,0x88(%rsp)
- f7 44 24 68 03 00 00 00 testl $0x3,0x68(%rsp)
+ f6 44 24 68 03 testb $0x3,0x68(%rsp)
- f7 84 24 90 00 00 00 03 00 00 00 testl $0x3,0x90(%rsp)
+ f6 84 24 90 00 00 00 03 testb $0x3,0x90(%rsp)
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Drewry <wad@chromium.org>
Link: http://lkml.kernel.org/r/1430140912-7960-2-git-send-email-dvlasenk@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-04-27 15:21:52 +02:00
	testb	$3, CS+8(%rsp)
2015-07-03 12:44:27 -07:00
	jz	.Lerror_kernelspace
2015-06-09 12:36:01 -07:00
2015-07-03 12:44:27 -07:00
.Lerror_entry_from_usermode_swapgs:
	/*
	 * We entered from user mode or we're pretending to have entered
	 * from user mode due to an IRET fault.
	 */
2008-11-24 13:24:28 +01:00
SWAPGS
2015-06-09 12:36:01 -07:00
2015-07-03 12:44:27 -07:00
.Lerror_entry_from_usermode_after_swapgs:
2015-11-12 12:59:00 -08:00
	/*
	 * We need to tell lockdep that IRQs are off. We can't do this until
	 * we fix gsbase, and we should do it before enter_from_user_mode
	 * (which can take locks).
	 */
	TRACE_IRQS_OFF
2015-11-12 12:59:04 -08:00
	CALL_enter_from_user_mode
2015-11-12 12:59:00 -08:00
ret
2015-07-03 12:44:31 -07:00
2015-07-03 12:44:27 -07:00
.Lerror_entry_done:
2008-11-24 13:24:28 +01:00
	TRACE_IRQS_OFF
ret
2015-02-26 14:40:34 -08:00
/*
 * There are two places in the kernel that can potentially fault with
 * usergs. Handle them here. B stepping K8s sometimes report a
 * truncated RIP for IRET exceptions returning to compat mode. Check
 * for these here too.
 */
2015-07-03 12:44:27 -07:00
.Lerror_kernelspace:
2015-06-08 20:43:07 +02:00
	incl	%ebx
	leaq	native_irq_return_iret(%rip), %rcx
	cmpq	%rcx, RIP+8(%rsp)
2015-07-03 12:44:27 -07:00
	je	.Lerror_bad_iret
2015-06-08 20:43:07 +02:00
	movl	%ecx, %eax			/* zero extend */
	cmpq	%rax, RIP+8(%rsp)
2015-07-03 12:44:27 -07:00
	je	.Lbstep_iret
2015-06-08 20:43:07 +02:00
	cmpq	$gs_change, RIP+8(%rsp)
2015-07-03 12:44:27 -07:00
	jne	.Lerror_entry_done
2015-06-09 12:36:01 -07:00
	/*
	 * hack: gs_change can fail with user gsbase. If this happens, fix up
	 * gsbase and proceed. We'll fix up the exception and land in
	 * gs_change's error handler with kernel gsbase.
	 */
2015-07-03 12:44:27 -07:00
	jmp	.Lerror_entry_from_usermode_swapgs
2009-10-12 10:18:23 -04:00
2015-07-03 12:44:27 -07:00
.Lbstep_iret:
2009-10-12 10:18:23 -04:00
/* Fix truncated RIP */
2015-06-08 20:43:07 +02:00
	movq	%rcx, RIP+8(%rsp)
x86_64, traps: Rework bad_iret
It's possible for iretq to userspace to fail. This can happen because
of a bad CS, SS, or RIP.
Historically, we've handled it by fixing up an exception from iretq to
land at bad_iret, which pretends that the failed iret frame was really
the hardware part of #GP(0) from userspace. To make this work, there's
an extra fixup to fudge the gs base into a usable state.
This is suboptimal because it loses the original exception. It's also
buggy because there's no guarantee that we were on the kernel stack to
begin with. For example, if the failing iret happened on return from an
NMI, then we'll end up executing general_protection on the NMI stack.
This is bad for several reasons, the most immediate of which is that
general_protection, as a non-paranoid idtentry, will try to deliver
signals and/or schedule from the wrong stack.
This patch throws out bad_iret entirely. As a replacement, it augments
the existing swapgs fudge into a full-blown iret fixup, mostly written
in C. It should be clearer and more correct.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-11-22 18:00:33 -08:00
/* fall through */
2015-07-03 12:44:27 -07:00
.Lerror_bad_iret:
2015-06-09 12:36:01 -07:00
	/*
	 * We came from an IRET to user mode, so we have user gsbase.
	 * Switch to kernel gsbase:
	 */
2014-11-22 18:00:33 -08:00
SWAPGS
2015-06-09 12:36:01 -07:00
	/*
	 * Pretend that the exception came from user mode: set up pt_regs
	 * as if we faulted immediately after IRET and clear EBX so that
	 * error_exit knows that we will be returning to user mode.
	 */
2015-06-08 20:43:07 +02:00
	mov	%rsp, %rdi
	call	fixup_bad_iret
	mov	%rax, %rsp
2015-06-09 12:36:01 -07:00
	decl	%ebx
2015-07-03 12:44:27 -07:00
	jmp	.Lerror_entry_from_usermode_after_swapgs
2008-11-24 13:24:28 +01:00
END(error_entry)
2015-06-09 12:36:01 -07:00
/*
 * On entry, EBX is a "return to kernel mode" flag:
 *	1: already in kernel mode, don't need SWAPGS
 *	0: user gsbase is loaded, we need SWAPGS and standard preparation for return to usermode
 */
2008-11-24 13:24:28 +01:00
ENTRY(error_exit)
2015-06-08 20:43:07 +02:00
	movl	%ebx, %eax
2008-11-24 13:24:28 +01:00
	DISABLE_INTERRUPTS(CLBR_NONE)
	TRACE_IRQS_OFF
2015-06-08 20:43:07 +02:00
	testl	%eax, %eax
	jnz	retint_kernel
	jmp	retint_user
2008-11-24 13:24:28 +01:00
END(error_exit)
2015-04-01 16:50:57 +02:00
/* Runs on exception stack */
2008-11-24 13:24:28 +01:00
ENTRY(nmi)
x86/paravirt: Replace the paravirt nop with a bona fide empty function
PARAVIRT_ADJUST_EXCEPTION_FRAME generates this code (using nmi as an
example, trimmed for readability):
ff 15 00 00 00 00 callq *0x0(%rip) # 2796 <nmi+0x6>
2792: R_X86_64_PC32 pv_irq_ops+0x2c
That's a call through a function pointer to a regular C function that
does nothing on native boots, but that function isn't protected
against kprobes, isn't marked notrace, and is certainly not
guaranteed to preserve any registers if the compiler is feeling
perverse. This is bad news for a CLBR_NONE operation.
Of course, if everything works correctly, once paravirt ops are
patched, it gets nopped out, but what if we hit this code before
paravirt ops are patched in? This can potentially cause breakage
that is very difficult to debug.
A more subtle failure is possible here, too: if _paravirt_nop uses
the stack at all (even just to push RBP), it will overwrite the "NMI
executing" variable if it's called in the NMI prologue.
The Xen case, perhaps surprisingly, is fine, because it's already
written in asm.
Fix all of the cases that default to paravirt_nop (including
adjust_exception_frame) with a big hammer: replace paravirt_nop with
an asm function that is just a ret instruction.
The Xen case may have other problems, so document them.
This is part of a fix for some random crashes that Sasha saw.
Reported-and-tested-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/8f5d2ba295f9d73751c33d97fda03e0495d9ade0.1442791737.git.luto@kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-09-20 16:32:04 -07:00
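[ Note: in asm terms, the "bona fide empty function" described above is
nothing more than a bare ret that touches neither registers nor stack
beyond its own return address; a hedged sketch -- the real replacement
lives with the paravirt code, not in this file, and the "_sketch" name
is hypothetical: ]

ENTRY(paravirt_nop_sketch)
	ret				/* clobbers nothing, uses no stack slots */
END(paravirt_nop_sketch)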
/*
 * Fix up the exception frame if we're on Xen.
 * PARAVIRT_ADJUST_EXCEPTION_FRAME is guaranteed to push at most
 * one value to the stack on native, so it may clobber the rdx
 * scratch slot, but it won't clobber any of the important
 * slots past it.
 *
 * Xen is a different story, because the Xen frame itself overlaps
 * the "NMI executing" variable.
 */
2008-11-24 13:24:28 +01:00
	PARAVIRT_ADJUST_EXCEPTION_FRAME
x86: Add workaround to NMI iret woes
In x86, when an NMI goes off, the CPU goes into an NMI context that
prevents other NMIs from triggering on that CPU. If an NMI is supposed to
trigger, it has to wait till the previous NMI leaves NMI context.
At that time, the next NMI can trigger (note, only one more NMI will
trigger, as only one can be latched at a time).
The way x86 gets out of NMI context is by calling iret. The problem
with this is that it causes trouble if the NMI handler either
triggers an exception, or a breakpoint. Both the exception and the
breakpoint handlers will finish with an iret. If this happens while
in NMI context, the CPU will leave NMI context and a new NMI may come
in. As NMI handlers are not made to be re-entrant, this can cause
havoc with the system, not to mention that the nested NMI will write
all over the previous NMI's stack.
Linus Torvalds proposed the following workaround to this problem:
https://lkml.org/lkml/2010/7/14/264
"In fact, I wonder if we couldn't just do a software NMI disable
instead? Have a per-cpu variable (in the _core_ percpu areas that get
allocated statically) that points to the NMI stack frame, and just
make the NMI code itself do something like
NMI entry:
- load percpu NMI stack frame pointer
- if non-zero we know we're nested, and should ignore this NMI:
- we're returning to kernel mode, so return immediately by using
"popf/ret", which also keeps NMI's disabled in the hardware until the
"real" NMI iret happens.
- before the popf/iret, use the NMI stack pointer to make the NMI
return stack be invalid and cause a fault
- set the NMI stack pointer to the current stack pointer
NMI exit (not the above "immediate exit because we nested"):
clear the percpu NMI stack pointer
Just do the iret.
Now, the thing is, now the "iret" is atomic. If we had a nested NMI,
we'll take a fault, and that re-does our "delayed" NMI - and NMI's
will stay masked.
And if we didn't have a nested NMI, that iret will now unmask NMI's,
and everything is happy."
I first tried to follow this advice but as I started implementing this
code, a few gotchas showed up.
One, is accessing per-cpu variables in the NMI handler.
The problem is that per-cpu variables use the %gs register to get the
variable for the given CPU. But as the NMI may happen in userspace,
we must first perform a SWAPGS to get to it. The NMI handler already
does this later in the code, but it's too late as we have saved off
all the registers and we don't want to do that for a disabled NMI.
Peter Zijlstra suggested to keep all variables on the stack. This
simplifies things greatly and it has the added benefit of cache locality.
Two, faulting on the iret.
I really wanted to make this work, but it was becoming very hacky, and
I never got it to be stable. The iret already had a fault handler for
userspace faulting with bad segment registers, and getting NMI to trigger
a fault and detect it was very tricky. But for strange reasons, the system
would usually take a double fault and crash. I never figured out why
and decided to go with a simple "jmp" approach. The new approach I took
also simplified things.
Finally, the last problem with Linus's approach was to have the nested
NMI handler do a ret instead of an iret to give the first NMI NMI-context
again.
The problem is that ret is much more limited than an iret. I couldn't figure
out how to get the stack back where it belonged. I could have copied the
current stack, pushed the return onto it, but my fear here is that there
may be some place that writes data below the stack pointer. I know that
is not something code should depend on, but I don't want to chance it.
I may add this feature later, but for now, an NMI handler that loses NMI
context will not get it back.
Here's what is done:
When an NMI comes in, the HW pushes the interrupt stack frame onto the
per cpu NMI stack that is selected by the IST.
A special location on the NMI stack holds a variable that is set when
the first NMI handler runs. If this variable is set then we know that
this is a nested NMI and we process the nested NMI code.
There is still a race when this variable is cleared and an NMI comes
in just before the first NMI does the return. For this case, if the
variable is cleared, we also check if the interrupted stack is the
NMI stack. If it is, then we process the nested NMI code.
Why the two tests and not just test the interrupted stack?
If the first NMI hits a breakpoint and loses NMI context, and then it
hits another breakpoint and while processing that breakpoint we get a
nested NMI. When processing a breakpoint, the stack changes to the
breakpoint stack. If another NMI comes in here we can't rely on the
interrupted stack to be the NMI stack.
If the variable is not set and the interrupted task's stack is not the
NMI stack, then we know this is the first NMI and we can process things
normally. But in order to do so, we need to do a few things first.
1) Set the stack variable that tells us that we are in an NMI handler
2) Make two copies of the interrupt stack frame.
One copy is used to return on iret
The other is used to restore the first one if we have a nested NMI.
This is what the stack will look like:
+-------------------------+
| original SS |
| original Return RSP |
| original RFLAGS |
| original CS |
| original RIP |
+-------------------------+
| temp storage for rdx |
+-------------------------+
| NMI executing variable |
+-------------------------+
| Saved SS |
| Saved Return RSP |
| Saved RFLAGS |
| Saved CS |
| Saved RIP |
+-------------------------+
| copied SS |
| copied Return RSP |
| copied RFLAGS |
| copied CS |
| copied RIP |
+-------------------------+
| pt_regs |
+-------------------------+
The original stack frame contains what the HW put in when we entered
the NMI.
We store %rdx as a temp variable to use. Both the original HW stack
frame and this %rdx storage will be clobbered by nested NMIs so we
can not rely on them later in the first NMI handler.
The next item is the special stack variable that is set when we execute
the rest of the NMI handler.
Then we have two copies of the interrupt stack. The second copy is
modified by any nested NMIs to let the first NMI know that we triggered
a second NMI (latched) and that we should repeat the NMI handler.
If the first NMI hits an exception or breakpoint that takes it out of
NMI context, if a second NMI comes in before the first one finishes,
it will update the copied interrupt stack to point to a fix up location
to trigger another NMI.
When the first NMI calls iret, it will instead jump to the fix up
location. This fix up location will copy the saved interrupt stack back
to the copy and execute the nmi handler again.
Note, the nested NMI knows enough to check if it preempted a previous
NMI handler while it is in the fixup location. If it has, it will not
modify the copied interrupt stack and will just leave as if nothing
happened. As the NMI handler is about to execute again, there's no reason
to latch now.
To test all this, I forced the NMI handler to call iret and take itself
out of NMI context. I also added assembly code to write to the serial port to
make sure that it hits the nested path as well as the fix up path.
Everything seems to be working fine.
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul Turner <pjt@google.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-12-08 12:36:23 -05:00
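[ Note: the "NMI executing" test described above boils down to a couple
of instructions at NMI entry; a hedged sketch with an illustrative stack
offset and a hypothetical label -- the real checks appear in the nmi
body later in this file: ]

	cmpl	$1, -8(%rsp)			/* "NMI executing" variable set? */
	je	nested_nmi_sketch		/* if so, handle as a nested NMI */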
/*
 * We allow breakpoints in NMIs. If a breakpoint occurs, then
 * the iretq it performs will take us out of NMI context.
 * This means that we can have nested NMIs where the next
 * NMI is using the top of the stack of the previous NMI. We
 * can't let it execute because the nested NMI will corrupt the
 * stack of the previous NMI. NMI handlers are not re-entrant
 * anyway.
 *
 * To handle this case we do the following:
 *	Check a special location on the stack that contains
 *	a variable that is set when NMIs are executing.
 *	The interrupted task's stack is also checked to see if it
 *	is an NMI stack.
 *	If the variable is not set and the stack is not the NMI
 *	stack then:
 *	  o Set the special variable on the stack
2015-07-15 10:29:36 -07:00
 *	  o Copy the interrupt frame into an "outermost" location on the
 *	    stack
 *	  o Copy the interrupt frame into an "iret" location on the stack
2011-12-08 12:36:23 -05:00
 *	  o Continue processing the NMI
 *	If the variable is set or the previous stack is the NMI stack:
2015-07-15 10:29:36 -07:00
 *	  o Modify the "iret" location to jump to the repeat_nmi
2011-12-08 12:36:23 -05:00
 *	  o return back to the first NMI
 *
 * Now on exit of the first NMI, we first clear the stack variable.
 * The NMI stack will tell any nested NMIs at that point that it is
 * nested. Then we pop the stack normally with iret, and if there was
 * a nested NMI that updated the copied interrupt stack frame, a
 * jump will be made to the repeat_nmi code that will handle the second
 * NMI.
2015-07-15 10:29:35 -07:00
 *
 * However, espfix prevents us from directly returning to userspace
 * with a single IRET instruction. Similarly, IRET to user mode
 * can fault. We therefore handle NMIs from userspace like
 * other IST entries.
 */
2015-03-25 18:18:13 +01:00
/* Use %rdx as our temp variable throughout */
2015-06-08 20:43:07 +02:00
	pushq	%rdx
2015-07-15 10:29:35 -07:00
	testb	$3, CS-RIP+8(%rsp)
	jz	.Lnmi_from_kernel

	/*
	 * NMI from user mode.  We need to run on the thread stack, but we
	 * can't go through the normal entry paths: NMIs are masked, and
	 * we don't want to enable interrupts, because then we'll end
	 * up in an awkward situation in which IRQs are on but NMIs
	 * are off.
2015-09-20 16:32:05 -07:00
	 *
	 * We also must not push anything to the stack before switching
	 * stacks lest we corrupt the "NMI executing" variable.
2015-07-15 10:29:35 -07:00
	 */
2015-09-20 16:32:05 -07:00
	SWAPGS_UNSAFE_STACK
2015-07-15 10:29:35 -07:00
	cld
	movq	%rsp, %rdx
	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
	pushq	5*8(%rdx)	/* pt_regs->ss */
	pushq	4*8(%rdx)	/* pt_regs->rsp */
	pushq	3*8(%rdx)	/* pt_regs->flags */
	pushq	2*8(%rdx)	/* pt_regs->cs */
	pushq	1*8(%rdx)	/* pt_regs->rip */
	pushq	$-1		/* pt_regs->orig_ax */
	pushq	%rdi		/* pt_regs->di */
	pushq	%rsi		/* pt_regs->si */
	pushq	(%rdx)		/* pt_regs->dx */
	pushq	%rcx		/* pt_regs->cx */
	pushq	%rax		/* pt_regs->ax */
	pushq	%r8		/* pt_regs->r8 */
	pushq	%r9		/* pt_regs->r9 */
	pushq	%r10		/* pt_regs->r10 */
	pushq	%r11		/* pt_regs->r11 */
	pushq	%rbx		/* pt_regs->rbx */
	pushq	%rbp		/* pt_regs->rbp */
	pushq	%r12		/* pt_regs->r12 */
	pushq	%r13		/* pt_regs->r13 */
	pushq	%r14		/* pt_regs->r14 */
	pushq	%r15		/* pt_regs->r15 */

	/*
	 * At this point we no longer need to worry about stack damage
	 * due to nesting -- we're on the normal thread stack and we're
	 * done with the NMI stack.
	 */
	movq	%rsp, %rdi
	movq	$-1, %rsi
	call	do_nmi
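	/*
	 * Conceptually, the stack switch and pushes above do something
	 * like the following C sketch (illustrative only; hw_frame is
	 * where 1*8(%rdx) points, i.e. the RIP slot of the hardware
	 * iret frame on the NMI stack, and the field names follow the
	 * kernel's struct pt_regs):
	 *
	 *	regs = (struct pt_regs *)thread_stack_top - 1;
	 *	regs->ss      = hw_frame[4];
	 *	regs->sp      = hw_frame[3];
	 *	regs->flags   = hw_frame[2];
	 *	regs->cs      = hw_frame[1];
	 *	regs->ip      = hw_frame[0];
	 *	regs->orig_ax = -1;
	 *	(general-purpose registers saved here)
	 *	do_nmi(regs, -1);
	 */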
2012-02-19 16:43:37 -05:00
	/*
2015-07-15 10:29:35 -07:00
	 * Return back to user mode.  We must *not* do the normal exit
	 * work, because we don't want to enable interrupts.  Fortunately,
	 * do_nmi doesn't modify pt_regs.
2012-02-19 16:43:37 -05:00
	 */
2015-07-15 10:29:35 -07:00
	SWAPGS
	jmp	restore_c_regs_and_iret
2012-02-19 16:43:37 -05:00
2015-07-15 10:29:35 -07:00
.Lnmi_from_kernel:
	/*
2015-07-15 10:29:36 -07:00
	 * Here's what our stack frame will look like:
	 * +---------------------------------------------------------+
	 * | original SS                                             |
	 * | original Return RSP                                     |
	 * | original RFLAGS                                         |
	 * | original CS                                             |
	 * | original RIP                                            |
	 * +---------------------------------------------------------+
	 * | temp storage for rdx                                    |
	 * +---------------------------------------------------------+
	 * | "NMI executing" variable                                |
	 * +---------------------------------------------------------+
	 * | iret SS          } Copied from "outermost" frame        |
	 * | iret Return RSP  } on each loop iteration; overwritten  |
	 * | iret RFLAGS      } by a nested NMI to force another     |
	 * | iret CS          } iteration if needed.                 |
	 * | iret RIP         }                                      |
	 * +---------------------------------------------------------+
	 * | outermost SS          } initialized in first_nmi;       |
	 * | outermost Return RSP  } will not be changed before      |
	 * | outermost RFLAGS      } NMI processing is done.         |
	 * | outermost CS          } Copied to "iret" frame on each  |
	 * | outermost RIP         } iteration.                      |
	 * +---------------------------------------------------------+
	 * | pt_regs                                                 |
	 * +---------------------------------------------------------+
	 *
	 * The "original" frame is used by hardware.  Before re-enabling
	 * NMIs, we need to be done with it, and we need to leave enough
	 * space for the asm code here.
	 *
	 * We return by executing IRET while RSP points to the "iret" frame.
	 * That will either return for real or it will loop back into NMI
	 * processing.
	 *
	 * The "outermost" frame is copied to the "iret" frame on each
	 * iteration of the loop, so each iteration starts with the "iret"
	 * frame pointing to the final return target.
	 */
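	/*
	 * Purely as a reader's aid, the layout above corresponds to a C
	 * struct along these lines (hypothetical; the asm addresses these
	 * slots via raw %rsp offsets, and the stack grows down, so the
	 * first member sits at the lowest address):
	 *
	 *	struct nmi_stack_top {
	 *		struct pt_regs regs;
	 *		struct iret_frame outermost; -- copied to "iret" each loop
	 *		struct iret_frame iret;      -- the frame IRET pops
	 *		unsigned long nmi_executing;
	 *		unsigned long saved_rdx;     -- temp storage for rdx
	 *		struct iret_frame original;  -- written by hardware
	 *	};
	 *
	 * where struct iret_frame is { rip, cs, rflags, rsp, ss } in
	 * ascending address order.
	 */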
2012-02-19 16:43:37 -05:00
	/*
2015-07-15 10:29:36 -07:00
	 * Determine whether we're a nested NMI.
	 *
2015-07-15 10:29:37 -07:00
	 * If we interrupted kernel code between repeat_nmi and
	 * end_repeat_nmi, then we are a nested NMI.  We must not
	 * modify the "iret" frame because it's being written by
	 * the outer NMI.  That's okay; the outer NMI handler is
	 * about to call do_nmi anyway, so we can just resume
	 * the outer NMI.
2012-02-19 16:43:37 -05:00
	 */
2015-07-15 10:29:37 -07:00
	movq	$repeat_nmi, %rdx
	cmpq	8(%rsp), %rdx
	ja	1f
	movq	$end_repeat_nmi, %rdx
	cmpq	8(%rsp), %rdx
	ja	nested_nmi_out
1:
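	/*
	 * In C terms, the two compares above implement this check
	 * (illustrative helper; repeat_nmi and end_repeat_nmi are the
	 * labels bounding the re-run trampoline, and 8(%rsp) holds the
	 * interrupted RIP):
	 *
	 *	bool in_repeat_nmi(unsigned long rip)
	 *	{
	 *		return rip >= (unsigned long)repeat_nmi &&
	 *		       rip <  (unsigned long)end_repeat_nmi;
	 *	}
	 *
	 * If that is true, we branch to nested_nmi_out and resume the
	 * outer NMI.
	 */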
2012-02-19 16:43:37 -05:00
	/*
2015-07-15 10:29:37 -07:00
	 * Now check "NMI executing".  If it's set, then we're nested.
2015-07-15 10:29:36 -07:00
	 * This will not detect if we interrupted an outer NMI just
	 * before IRET.
	 */
2015-06-08 20:43:07 +02:00
	cmpl	$1, -8(%rsp)
	je	nested_nmi
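	/*
	 * In C terms (illustrative): the word at -8(%rsp) is the "NMI
	 * executing" variable from the layout above, so this is simply
	 *
	 *	if (nmi_executing == 1)
	 *		goto nested_nmi;
	 */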
	/*
2015-07-15 10:29:36 -07:00
	 * Now test if the previous stack was an NMI stack.  This covers
	 * the case where we interrupt an outer NMI after it clears
2015-07-15 10:29:38 -07:00
* " NMI e x e c u t i n g " b u t b e f o r e I R E T . W e n e e d t o b e c a r e f u l , t h o u g h :
* there i s o n e c a s e i n w h i c h R S P c o u l d p o i n t t o t h e N M I s t a c k
* despite t h e r e b e i n g n o N M I a c t i v e : n a u g h t y u s e r s p a c e c o n t r o l s
* RSP a t t h e v e r y b e g i n n i n g o f t h e S Y S C A L L t a r g e t s . W e c a n
* pull a f a s t o n e o n n a u g h t y u s e r s p a c e , t h o u g h : w e p r o g r a m
* SYSCALL t o m a s k D F , s o u s e r s p a c e c a n n o t c a u s e D F t o b e s e t
* if i t c o n t r o l s t h e k e r n e l ' s R S P . W e s e t D F b e f o r e w e c l e a r
* " NMI e x e c u t i n g " .
x86: Add workaround to NMI iret woes
In x86, when an NMI goes off, the CPU goes into an NMI context that
prevents other NMIs to trigger on that CPU. If an NMI is suppose to
trigger, it has to wait till the previous NMI leaves NMI context.
At that time, the next NMI can trigger (note, only one more NMI will
trigger, as only one can be latched at a time).
The way x86 gets out of NMI context is by calling iret. The problem
with this is that this causes problems if the NMI handle either
triggers an exception, or a breakpoint. Both the exception and the
breakpoint handlers will finish with an iret. If this happens while
in NMI context, the CPU will leave NMI context and a new NMI may come
in. As NMI handlers are not made to be re-entrant, this can cause
havoc with the system, not to mention, the nested NMI will write
all over the previous NMI's stack.
Linus Torvalds proposed the following workaround to this problem:
https://lkml.org/lkml/2010/7/14/264
"In fact, I wonder if we couldn't just do a software NMI disable
instead? Hav ea per-cpu variable (in the _core_ percpu areas that get
allocated statically) that points to the NMI stack frame, and just
make the NMI code itself do something like
NMI entry:
- load percpu NMI stack frame pointer
- if non-zero we know we're nested, and should ignore this NMI:
- we're returning to kernel mode, so return immediately by using
"popf/ret", which also keeps NMI's disabled in the hardware until the
"real" NMI iret happens.
- before the popf/iret, use the NMI stack pointer to make the NMI
return stack be invalid and cause a fault
- set the NMI stack pointer to the current stack pointer
NMI exit (not the above "immediate exit because we nested"):
clear the percpu NMI stack pointer
Just do the iret.
Now, the thing is, now the "iret" is atomic. If we had a nested NMI,
we'll take a fault, and that re-does our "delayed" NMI - and NMI's
will stay masked.
And if we didn't have a nested NMI, that iret will now unmask NMI's,
and everything is happy."
I first tried to follow this advice but as I started implementing this
code, a few gotchas showed up.
One, is accessing per-cpu variables in the NMI handler.
The problem is that per-cpu variables use the %gs register to get the
variable for the given CPU. But as the NMI may happen in userspace,
we must first perform a SWAPGS to get to it. The NMI handler already
does this later in the code, but its too late as we have saved off
all the registers and we don't want to do that for a disabled NMI.
Peter Zijlstra suggested to keep all variables on the stack. This
simplifies things greatly and it has the added benefit of cache locality.
Two, faulting on the iret.
I really wanted to make this work, but it was becoming very hacky, and
I never got it to be stable. The iret already had a fault handler for
userspace faulting with bad segment registers, and getting NMI to trigger
a fault and detect it was very tricky. But for strange reasons, the system
would usually take a double fault and crash. I never figured out why
and decided to go with a simple "jmp" approach. The new approach I took
also simplified things.
Finally, the last problem with Linus's approach was to have the nested
NMI handler do a ret instead of an iret to give the first NMI NMI-context
again.
The problem is that ret is much more limited than an iret. I couldn't figure
out how to get the stack back where it belonged. I could have copied the
current stack, pushed the return onto it, but my fear here is that there
may be some place that writes data below the stack pointer. I know that
is not something code should depend on, but I don't want to chance it.
I may add this feature later, but for now, an NMI handler that loses NMI
context will not get it back.
Here's what is done:
When an NMI comes in, the HW pushes the interrupt stack frame onto the
per cpu NMI stack that is selected by the IST.
A special location on the NMI stack holds a variable that is set when
the first NMI handler runs. If this variable is set then we know that
this is a nested NMI and we process the nested NMI code.
There is still a race when this variable is cleared and an NMI comes
in just before the first NMI does the return. For this case, if the
variable is cleared, we also check if the interrupted stack is the
NMI stack. If it is, then we process the nested NMI code.
Why the two tests and not just test the interrupted stack?
If the first NMI hits a breakpoint and loses NMI context, and then it
hits another breakpoint and while processing that breakpoint we get a
nested NMI. When processing a breakpoint, the stack changes to the
breakpoint stack. If another NMI comes in here we can't rely on the
interrupted stack to be the NMI stack.
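In sketch form, the first test is just a compare against the on-stack
variable (the -8(%rsp) slot is illustrative and assumes %rdx was saved
on the top of the stack), while the second is the stack-range check
shown in the code below:
	cmpl	$1, -8(%rsp)		/* is "NMI executing" set? */
	je	nested_nmi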
If the variable is not set and the interrupted task's stack is not the
NMI stack, then we know this is the first NMI and we can process things
normally. But in order to do so, we need to do a few things first.
1) Set the stack variable that tells us that we are in an NMI handler
2) Make two copies of the interrupt stack frame.
One copy is used to return on iret
The other is used to restore the first one if we have a nested NMI.
This is what the stack will look like:
+-------------------------+
| original SS |
| original Return RSP |
| original RFLAGS |
| original CS |
| original RIP |
+-------------------------+
| temp storage for rdx |
+-------------------------+
| NMI executing variable |
+-------------------------+
| Saved SS |
| Saved Return RSP |
| Saved RFLAGS |
| Saved CS |
| Saved RIP |
+-------------------------+
| copied SS |
| copied Return RSP |
| copied RFLAGS |
| copied CS |
| copied RIP |
+-------------------------+
| pt_regs |
+-------------------------+
The original stack frame contains what the HW put in when we entered
the NMI.
We store %rdx in a temporary slot so we have a scratch register. Both
the original HW stack frame and this %rdx storage will be clobbered by
nested NMIs, so we cannot rely on them later in the first NMI handler.
The next item is the special stack variable that is set when we execute
the rest of the NMI handler.
Then we have two copies of the interrupt stack. The second copy is
modified by any nested NMIs to let the first NMI know that we triggered
a second NMI (latched) and that we should repeat the NMI handler.
If the first NMI hits an exception or breakpoint that takes it out of
NMI context, and a second NMI comes in before the first one finishes,
the second NMI will update the copied interrupt stack to point to a
fixup location, which triggers another NMI.
When the first NMI calls iret, it will instead jump to the fixup
location. This fixup location will copy the saved interrupt stack back
to the copy and execute the NMI handler again.
Note, the nested NMI knows enough to check if it preempted a previous
NMI handler while it is in the fixup location. If it has, it will not
modify the copied interrupt stack and will just leave as if nothing
happened. As the NMI handler is about to execute again, there's no
reason to latch now.
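A minimal sketch of the setup described above (the 11*8 offset follows
the stack diagram but is illustrative):
	pushq	$0			/* slot for the "NMI executing" variable */
	subq	$(5*8), %rsp		/* leave room for one copy of the frame */
	.rept 5
	pushq	11*8(%rsp)		/* replicate the five HW iret frame words */
	.endr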
To test all this, I forced the NMI handler to call iret and take itself
out of NMI context. I also added assembly code that writes to the serial
port to make sure that it hits the nested path as well as the fixup path.
Everything seems to be working fine.
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul Turner <pjt@google.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-12-08 12:36:23 -05:00
 */
2015-04-01 16:50:57 +02:00
	lea	6*8(%rsp), %rdx
/* Compare the NMI stack (rdx) with the stack we came from (4*8(%rsp)) */
	cmpq	%rdx, 4*8(%rsp)
/* If the stack pointer is above the NMI stack, this is a normal NMI */
	ja	first_nmi
2015-06-08 20:43:07 +02:00
2015-04-01 16:50:57 +02:00
	subq	$EXCEPTION_STKSZ, %rdx
	cmpq	%rdx, 4*8(%rsp)
/* If it is below the NMI stack, it is a normal NMI */
	jb	first_nmi
2015-07-15 10:29:38 -07:00
/* Ah, it is within the NMI stack. */
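	/*
	 * Being in range is not proof by itself: userspace can point RSP
	 * at the NMI stack just before SYSCALL switches stacks, and
	 * SYSCALL clears DF via FMASK. The NMI exit path sets DF (std)
	 * before clearing "NMI executing", so in the race window DF is
	 * set; a clear DF means RSP was user controlled.
	 */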
	testb	$(X86_EFLAGS_DF >> 8), (3*8 + 1)(%rsp)
	jz	first_nmi	/* RSP was user controlled. */
/* This is a nested NMI. */
2015-04-01 16:50:57 +02:00
nested_nmi:
	/*
2015-07-15 10:29:36 -07:00
	 * Modify the "iret" frame to point to repeat_nmi, forcing another
	 * iteration of NMI handling.
	 */
2015-07-15 10:29:39 -07:00
	subq	$8, %rsp
2015-06-08 20:43:07 +02:00
	leaq	-10*8(%rsp), %rdx
	pushq	$__KERNEL_DS
	pushq	%rdx
x86/debug: Remove perpetually broken, unmaintainable dwarf annotations
So the dwarf2 annotations in low level assembly code have
become an increasing hindrance: unreadable, messy macros
mixed into some of the most security sensitive code paths
of the Linux kernel.
These debug info annotations don't even buy the upstream
kernel anything: dwarf driven stack unwinding has caused
problems in the past so it's out of tree, and the upstream
kernel only uses the much more robust framepointers based
stack unwinding method.
In addition to that there's a steady, slow bitrot going
on with these annotations, requiring frequent fixups.
There's no tooling and no functionality upstream that
keeps it correct.
So burn down the sick forest, allowing new, healthier growth:
27 files changed, 350 insertions(+), 1101 deletions(-)
Someone who has the willingness and time to do this
properly can attempt to reintroduce dwarf debuginfo in x86
assembly code plus dwarf unwinding from first principles,
with the following conditions:
- it should be maximally readable, and maximally low-key to
'ordinary' code reading and maintenance.
- find a build time method to insert dwarf annotations
automatically in the most common cases, for pop/push
instructions that manipulate the stack pointer. This could
be done for example via a preprocessing step that just
looks for common patterns - plus special annotations for
the few cases where we want to depart from the default.
We have hundreds of CFI annotations, so automating most of
that makes sense.
- it should come with build tooling checks that ensure that
CFI annotations are sensible. We've seen such efforts from
the framepointer side, and there's no reason it couldn't be
done on the dwarf side.
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-05-28 12:21:47 +02:00
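/*
 * For reference, the removed annotations looked roughly like this
 * (an illustrative pattern, not a specific deleted hunk):
 *
 *	pushq	%rdx
 *	CFI_ADJUST_CFA_OFFSET 8
 *	CFI_REL_OFFSET rdx, 0
 *	...
 *	popq	%rdx
 *	CFI_ADJUST_CFA_OFFSET -8
 *	CFI_RESTORE rdx
 */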
	pushfq
2015-06-08 20:43:07 +02:00
	pushq	$__KERNEL_CS
	pushq	$repeat_nmi
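	/*
	 * The five pushes above built a complete iret frame: SS, RSP
	 * (%rdx), RFLAGS, CS, and RIP = repeat_nmi, so the first NMI's
	 * iret will land in repeat_nmi instead of returning.
	 */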
/* Put stack back */
2015-06-08 20:43:07 +02:00
	addq	$(6*8), %rsp
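	/* The addq undoes the subq $8 plus the five pushes above: 6 words. */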
nested_nmi_out:
2015-06-08 20:43:07 +02:00
	popq	%rdx
2015-07-15 10:29:36 -07:00
/* We are returning to kernel mode, so this cannot result in a fault. */
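	/* INTERRUPT_RETURN expands to a plain iretq on native kernels. */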
	INTERRUPT_RETURN

first_nmi:
2015-07-15 10:29:36 -07:00
/* Restore rdx. */
2015-06-08 20:43:07 +02:00
	movq	(%rsp), %rdx
2012-02-24 14:54:37 +00:00
2015-07-15 10:29:40 -07:00
/* Make room for "NMI executing". */
	pushq	$0
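	/*
	 * The slot starts out as 0 and is set to 1 further down; until
	 * then, a nested NMI arriving during this setup is caught by the
	 * stack-range check instead.
	 */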
x86: Add workaround to NMI iret woes
In x86, when an NMI goes off, the CPU goes into an NMI context that
prevents other NMIs to trigger on that CPU. If an NMI is suppose to
trigger, it has to wait till the previous NMI leaves NMI context.
At that time, the next NMI can trigger (note, only one more NMI will
trigger, as only one can be latched at a time).
The way x86 gets out of NMI context is by calling iret. The problem
with this is that this causes problems if the NMI handle either
triggers an exception, or a breakpoint. Both the exception and the
breakpoint handlers will finish with an iret. If this happens while
in NMI context, the CPU will leave NMI context and a new NMI may come
in. As NMI handlers are not made to be re-entrant, this can cause
havoc with the system, not to mention, the nested NMI will write
all over the previous NMI's stack.
Linus Torvalds proposed the following workaround to this problem:
https://lkml.org/lkml/2010/7/14/264
"In fact, I wonder if we couldn't just do a software NMI disable
instead? Hav ea per-cpu variable (in the _core_ percpu areas that get
allocated statically) that points to the NMI stack frame, and just
make the NMI code itself do something like
NMI entry:
- load percpu NMI stack frame pointer
- if non-zero we know we're nested, and should ignore this NMI:
- we're returning to kernel mode, so return immediately by using
"popf/ret", which also keeps NMI's disabled in the hardware until the
"real" NMI iret happens.
- before the popf/iret, use the NMI stack pointer to make the NMI
return stack be invalid and cause a fault
- set the NMI stack pointer to the current stack pointer
NMI exit (not the above "immediate exit because we nested"):
clear the percpu NMI stack pointer
Just do the iret.
Now, the thing is, now the "iret" is atomic. If we had a nested NMI,
we'll take a fault, and that re-does our "delayed" NMI - and NMI's
will stay masked.
And if we didn't have a nested NMI, that iret will now unmask NMI's,
and everything is happy."
I first tried to follow this advice but as I started implementing this
code, a few gotchas showed up.
One, is accessing per-cpu variables in the NMI handler.
The problem is that per-cpu variables use the %gs register to get the
variable for the given CPU. But as the NMI may happen in userspace,
we must first perform a SWAPGS to get to it. The NMI handler already
does this later in the code, but its too late as we have saved off
all the registers and we don't want to do that for a disabled NMI.
Peter Zijlstra suggested to keep all variables on the stack. This
simplifies things greatly and it has the added benefit of cache locality.
Two, faulting on the iret.
I really wanted to make this work, but it was becoming very hacky, and
I never got it to be stable. The iret already had a fault handler for
userspace faulting with bad segment registers, and getting NMI to trigger
a fault and detect it was very tricky. But for strange reasons, the system
would usually take a double fault and crash. I never figured out why
and decided to go with a simple "jmp" approach. The new approach I took
also simplified things.
Finally, the last problem with Linus's approach was to have the nested
NMI handler do a ret instead of an iret to give the first NMI NMI-context
again.
The problem is that ret is much more limited than an iret. I couldn't figure
out how to get the stack back where it belonged. I could have copied the
current stack, pushed the return onto it, but my fear here is that there
may be some place that writes data below the stack pointer. I know that
is not something code should depend on, but I don't want to chance it.
I may add this feature later, but for now, an NMI handler that loses NMI
context will not get it back.
Here's what is done:
When an NMI comes in, the HW pushes the interrupt stack frame onto the
per cpu NMI stack that is selected by the IST.
A special location on the NMI stack holds a variable that is set when
the first NMI handler runs. If this variable is set then we know that
this is a nested NMI and we process the nested NMI code.
There is still a race when this variable is cleared and an NMI comes
in just before the first NMI does the return. For this case, if the
variable is cleared, we also check if the interrupted stack is the
NMI stack. If it is, then we process the nested NMI code.
Why the two tests, instead of just testing the interrupted stack?
Suppose the first NMI hits a breakpoint and loses NMI context, then
hits another breakpoint, and while processing that breakpoint we get a
nested NMI. When processing a breakpoint, the stack changes to the
breakpoint stack. If another NMI comes in here, we can't rely on the
interrupted stack being the NMI stack.
If the variable is not set and the interrupted task's stack is not the
NMI stack, then we know this is the first NMI and we can process things
normally. But in order to do so, we need to do a few things first.
1) Set the stack variable that tells us that we are in an NMI handler
2) Make two copies of the interrupt stack frame.
One copy is used to return on iret
The other is used to restore the first one if we have a nested NMI.
This is what the stack will look like:
+-------------------------+
| original SS |
| original Return RSP |
| original RFLAGS |
| original CS |
| original RIP |
+-------------------------+
| temp storage for rdx |
+-------------------------+
| NMI executing variable |
+-------------------------+
| Saved SS |
| Saved Return RSP |
| Saved RFLAGS |
| Saved CS |
| Saved RIP |
+-------------------------+
| copied SS |
| copied Return RSP |
| copied RFLAGS |
| copied CS |
| copied RIP |
+-------------------------+
| pt_regs |
+-------------------------+
The original stack frame contains what the HW put in when we entered
the NMI.
We store %rdx in a temporary slot so we have a scratch register to use.
Both the original HW stack frame and this %rdx storage will be
clobbered by nested NMIs, so we cannot rely on them later in the
first NMI handler.
The next item is the special stack variable that is set when we execute
the rest of the NMI handler.
Then we have two copies of the interrupt stack. The second copy is
modified by any nested NMIs to let the first NMI know that we triggered
a second NMI (latched) and that we should repeat the NMI handler.
If the first NMI hits an exception or breakpoint that takes it out of
NMI context, and a second NMI comes in before the first one finishes,
the second NMI will update the copied interrupt stack to point to a
fixup location, to trigger another NMI.
When the first NMI executes its iret, it will instead jump to the fixup
location. This fixup location will copy the saved interrupt stack back
to the copy and execute the NMI handler again.
Note, the nested NMI knows enough to check if it preempted a previous
NMI handler while it is in the fixup location. If it has, it will not
modify the copied interrupt stack and will just leave as if nothing
happened. As the NMI handler is about to execute again, there's no
reason to latch now.
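The bookkeeping described above can be sketched as compilable C. This is
illustrative only: the struct, field, and function names are made up, and
the layout mirrors the diagram rather than the exact %rsp offsets used by
the assembly.

#include <stdbool.h>
#include <stdint.h>

struct iret_frame {			/* memory order, lowest address first */
	uint64_t rip, cs, rflags, rsp, ss;
};

struct nmi_stack_top {			/* top of the per-CPU NMI stack */
	struct iret_frame copied;	/* frame the first NMI irets from;
					   rewritten by nested NMIs */
	struct iret_frame saved;	/* pristine backup used for restores */
	uint64_t nmi_executing;		/* the special stack variable */
	uint64_t rdx_tmp;		/* temp storage for %rdx */
	struct iret_frame original;	/* what the HW pushed on entry */
};

/* First NMI: set the flag and make the two copies. */
void first_nmi_setup(struct nmi_stack_top *t)
{
	t->nmi_executing = 1;
	t->saved = t->original;
	t->copied = t->original;
}

/* The two tests: the flag, or an interrupted RSP inside the NMI stack. */
bool nmi_is_nested(const struct nmi_stack_top *t,
		   uint64_t stack_lo, uint64_t stack_hi)
{
	return t->nmi_executing ||
	       (t->original.rsp >= stack_lo && t->original.rsp < stack_hi);
}

/* Nested NMI: redirect the outer NMI's iret to the fixup location. */
void nested_nmi_latch(struct nmi_stack_top *t, uint64_t repeat_nmi_rip)
{
	t->copied.rip = repeat_nmi_rip;
}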
To test all this, I forced the NMI handler to call iret and take itself
out of NMI context. I also added assembly code to write to the serial
port, to make sure that it hits the nested path as well as the fixup path.
Everything seems to be working fine.
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul Turner <pjt@google.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-12-08 12:36:23 -05:00
2015-07-15 10:29:36 -07:00
/* Leave room for the "iret" frame */
2015-06-08 20:43:07 +02:00
	subq	$(5*8), %rsp
2012-10-01 17:29:25 -07:00
2015-07-15 10:29:36 -07:00
/* Copy the "original" frame to the "outermost" frame */
2011-12-08 12:36:23 -05:00
.rept 5
2015-06-08 20:43:07 +02:00
	pushq	11*8(%rsp)
2011-12-08 12:36:23 -05:00
.endr
2012-02-24 14:54:37 +00:00
2012-02-24 15:55:13 -05:00
/* Everything up to here is safe from nested NMIs */
2015-07-15 10:29:41 -07:00
#ifdef CONFIG_DEBUG_ENTRY
	/*
	 * For ease of testing, unmask NMIs right away.  Disabled by
	 * default because IRET is very expensive.
	 */
	pushq	$0		/* SS */
	pushq	%rsp		/* RSP (minus 8 because of the previous push) */
	addq	$8, (%rsp)	/* Fix up RSP */
	pushfq			/* RFLAGS */
	pushq	$__KERNEL_CS	/* CS */
	pushq	$1f		/* RIP */
	INTERRUPT_RETURN	/* continues at repeat_nmi below */
1:
#endif
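For reference, the five pushes above hand IRET a frame shaped like this
(a sketch; debug_iret_frame and its field names are made up):

#include <stdint.h>

struct debug_iret_frame {	/* memory order, lowest address first */
	uint64_t rip;		/* $1f: the label just past INTERRUPT_RETURN */
	uint64_t cs;		/* $__KERNEL_CS */
	uint64_t rflags;	/* current flags, from pushfq */
	uint64_t rsp;		/* the pre-frame %rsp, after the addq fixup:
				   pushq %rsp stored the value %rsp had after
				   the preceding SS push, hence "minus 8" */
	uint64_t ss;		/* 0: a null SS is fine for a 64-bit
				   kernel-mode return */
};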
2015-07-15 10:29:36 -07:00
repeat_nmi:
2012-02-24 14:54:37 +00:00
	/*
	 * If there was a nested NMI, the first NMI's iret will return
	 * here. But NMIs are still enabled and we can take another
	 * nested NMI. The nested NMI checks the interrupted RIP to see
	 * if it is between repeat_nmi and end_repeat_nmi, and if so
	 * it will just return, as we are about to repeat an NMI anyway.
	 * This makes it safe to copy to the stack frame that a nested
	 * NMI will update.
	 *
	 * RSP is pointing to "outermost RIP". gsbase is unknown, but, if
	 * we're repeating an NMI, gsbase has the same value that it had on
	 * the first iteration.  paranoid_entry will load the kernel
	 * gsbase if needed before we call do_nmi.  "NMI executing"
	 * is zero.
	 */
2015-07-15 10:29:40 -07:00
	movq	$1, 10*8(%rsp)		/* Set "NMI executing". */
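The RIP-range test described in the comment above amounts to the sketch
below (the two label arrays are dummy stand-ins; in the real code they
are the repeat_nmi and end_repeat_nmi assembly labels bracketing the
copy sequence):

#include <stdbool.h>
#include <stdint.h>

/* Dummy stand-ins for the assembly labels; only the comparison matters. */
static const char repeat_nmi_label[1], end_repeat_nmi_label[1];

/* A nested NMI leaves the "iret" frame alone if it preempted the outer
 * NMI inside the copy sequence: a repeat is already on the way, so
 * there is no reason to latch again. */
bool interrupted_in_repeat(uintptr_t rip)
{
	return rip >= (uintptr_t)repeat_nmi_label &&
	       rip <  (uintptr_t)end_repeat_nmi_label;
}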
2012-02-24 14:54:37 +00:00
	/*
	 * Copy the "outermost" frame to the "iret" frame.  NMIs that nest
	 * here must not modify the "iret" frame while we're writing to
	 * it or it will end up containing garbage.
	 */
2015-06-08 20:43:07 +02:00
	addq	$(10*8), %rsp
2011-12-08 12:36:23 -05:00
.rept 5
2015-06-08 20:43:07 +02:00
	pushq	-6*8(%rsp)
2011-12-08 12:36:23 -05:00
.endr
2015-06-08 20:43:07 +02:00
	subq	$(5*8), %rsp
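In terms of struct nmi_stack_top from the earlier sketch (an assumption
carried over from there, not the real data layout), the addq/pushq/subq
sequence above amounts to:

/* Refresh the frame the first NMI will iret from, using the pristine
 * backup; the asm does this with five pushq's at fixed offsets. */
void repeat_nmi_refresh(struct nmi_stack_top *t)
{
	t->copied = t->saved;
}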
2012-02-24 14:54:37 +00:00
end_repeat_nmi:
2011-12-08 12:36:23 -05:00
	/*
	 * Everything below this point can be preempted by a nested NMI.
	 * If this happens, then the inner NMI will change the "iret"
	 * frame to point back to repeat_nmi.
	 */
x86: Add workaround to NMI iret woes
In x86, when an NMI goes off, the CPU goes into an NMI context that
prevents other NMIs to trigger on that CPU. If an NMI is suppose to
trigger, it has to wait till the previous NMI leaves NMI context.
At that time, the next NMI can trigger (note, only one more NMI will
trigger, as only one can be latched at a time).
The way x86 gets out of NMI context is by calling iret. The problem
with this is that this causes problems if the NMI handle either
triggers an exception, or a breakpoint. Both the exception and the
breakpoint handlers will finish with an iret. If this happens while
in NMI context, the CPU will leave NMI context and a new NMI may come
in. As NMI handlers are not made to be re-entrant, this can cause
havoc with the system, not to mention, the nested NMI will write
all over the previous NMI's stack.
Linus Torvalds proposed the following workaround to this problem:
https://lkml.org/lkml/2010/7/14/264
"In fact, I wonder if we couldn't just do a software NMI disable
instead? Hav ea per-cpu variable (in the _core_ percpu areas that get
allocated statically) that points to the NMI stack frame, and just
make the NMI code itself do something like
NMI entry:
- load percpu NMI stack frame pointer
- if non-zero we know we're nested, and should ignore this NMI:
- we're returning to kernel mode, so return immediately by using
"popf/ret", which also keeps NMI's disabled in the hardware until the
"real" NMI iret happens.
- before the popf/iret, use the NMI stack pointer to make the NMI
return stack be invalid and cause a fault
- set the NMI stack pointer to the current stack pointer
NMI exit (not the above "immediate exit because we nested"):
clear the percpu NMI stack pointer
Just do the iret.
Now, the thing is, now the "iret" is atomic. If we had a nested NMI,
we'll take a fault, and that re-does our "delayed" NMI - and NMI's
will stay masked.
And if we didn't have a nested NMI, that iret will now unmask NMI's,
and everything is happy."
I first tried to follow this advice but as I started implementing this
code, a few gotchas showed up.
One, is accessing per-cpu variables in the NMI handler.
The problem is that per-cpu variables use the %gs register to get the
variable for the given CPU. But as the NMI may happen in userspace,
we must first perform a SWAPGS to get to it. The NMI handler already
does this later in the code, but its too late as we have saved off
all the registers and we don't want to do that for a disabled NMI.
Peter Zijlstra suggested to keep all variables on the stack. This
simplifies things greatly and it has the added benefit of cache locality.
Two, faulting on the iret.
I really wanted to make this work, but it was becoming very hacky, and
I never got it to be stable. The iret already had a fault handler for
userspace faulting with bad segment registers, and getting NMI to trigger
a fault and detect it was very tricky. But for strange reasons, the system
would usually take a double fault and crash. I never figured out why
and decided to go with a simple "jmp" approach. The new approach I took
also simplified things.
Finally, the last problem with Linus's approach was to have the nested
NMI handler do a ret instead of an iret to give the first NMI NMI-context
again.
The problem is that ret is much more limited than an iret. I couldn't figure
out how to get the stack back where it belonged. I could have copied the
current stack, pushed the return onto it, but my fear here is that there
may be some place that writes data below the stack pointer. I know that
is not something code should depend on, but I don't want to chance it.
I may add this feature later, but for now, an NMI handler that loses NMI
context will not get it back.
Here's what is done:
When an NMI comes in, the HW pushes the interrupt stack frame onto the
per cpu NMI stack that is selected by the IST.
A special location on the NMI stack holds a variable that is set when
the first NMI handler runs. If this variable is set then we know that
this is a nested NMI and we process the nested NMI code.
There is still a race when this variable is cleared and an NMI comes
in just before the first NMI does the return. For this case, if the
variable is cleared, we also check if the interrupted stack is the
NMI stack. If it is, then we process the nested NMI code.
Why the two tests and not just test the interrupted stack?
Suppose the first NMI hits a breakpoint and loses NMI context, and then
hits another breakpoint; while processing that breakpoint we get a
nested NMI. When processing a breakpoint, the stack changes to the
breakpoint stack, so if another NMI comes in here we can't rely on the
interrupted stack being the NMI stack.
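Put together, the detection logic is roughly the following sketch (in the
spirit of this file; exact offsets vary between versions):

	pushq	%rdx			/* park a scratch register */

	/* Test 1: is the "NMI executing" stack variable set? */
	cmpl	$1, -8(%rsp)
	je	nested_nmi

	/* Test 2: does the interrupted RSP fall within the NMI stack? */
	lea	6*8(%rsp), %rdx		/* top of the NMI IST stack */
	cmpq	%rdx, 4*8(%rsp)		/* 4*8(%rsp) holds the interrupted RSP */
	ja	first_nmi		/* above the NMI stack: not nested */
	subq	$EXCEPTION_STKSZ, %rdx	/* bottom of the NMI IST stack */
	cmpq	%rdx, 4*8(%rsp)
	jb	first_nmi		/* below it: not nested either */
	jmp	nested_nmi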
If the variable is not set and the interrupted task's stack is not the
NMI stack, then we know this is the first NMI and we can process things
normally. But in order to do so, we need to do a few things first.
1) Set the stack variable that tells us that we are in an NMI handler.
2) Make two copies of the interrupt stack frame.
   One copy is used to return on iret.
   The other is used to restore the first one if we have a nested NMI.
This is what the stack will look like:
+-------------------------+
| original SS |
| original Return RSP |
| original RFLAGS |
| original CS |
| original RIP |
+-------------------------+
| temp storage for rdx |
+-------------------------+
| NMI executing variable |
+-------------------------+
| Saved SS |
| Saved Return RSP |
| Saved RFLAGS |
| Saved CS |
| Saved RIP |
+-------------------------+
| copied SS |
| copied Return RSP |
| copied RFLAGS |
| copied CS |
| copied RIP |
+-------------------------+
| pt_regs |
+-------------------------+
The original stack frame contains what the HW put in when we entered
the NMI.
We store %rdx as a temporary variable. Both the original HW stack
frame and this %rdx storage will be clobbered by nested NMIs, so we
cannot rely on them later in the first NMI handler.
The next item is the special stack variable that is set when we execute
the rest of the NMI handler.
Then we have two copies of the interrupt stack. The second copy is
modified by any nested NMIs to let the first NMI know that we triggered
a second NMI (latched) and that we should repeat the NMI handler.
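Setting up this layout takes only a handful of pushes once %rdx has been
parked; a sketch consistent with the diagram above:

	/* %rdx already occupies the temp storage slot at this point. */
	pushq	$1			/* the "NMI executing" variable */
	.rept 5
	pushq	6*8(%rsp)		/* "saved" copy of the original frame */
	.endr
	.rept 5
	pushq	4*8(%rsp)		/* "copied" frame, the one iret consumes */
	.endr

Note that a pushq with an RSP-relative source reads its operand using the
pre-push RSP, so the fixed offset walks down the source frame one word
per iteration.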
If the first NMI hits an exception or breakpoint that takes it out of
NMI context, and a second NMI comes in before the first one finishes,
the second NMI will update the copied interrupt stack to point to a
fixup location that triggers another NMI.
When the first NMI calls iret, it will instead jump to the fixup
location. This fixup location copies the saved interrupt stack back
to the copy and executes the NMI handler again.
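Assuming RSP sits at the bottom of the "copied" frame in the layout above,
the fixup can be as small as this sketch (the label name is illustrative):

repeat_nmi_sketch:
	movq	$1, 10*8(%rsp)		/* re-arm the "NMI executing" variable */
	addq	$(5*8), %rsp		/* discard the stale "copied" frame */
	.rept 5
	pushq	4*8(%rsp)		/* rebuild it from the "saved" frame */
	.endr
	/* RSP is back at the "copied" frame; fall through and run the
	 * NMI handler body again. */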
Note, the nested NMI knows enough to check if it preempted a previous
NMI handler while it is in the fixup location. If it has, it will not
modify the copied interrupt stack and will just leave as if nothing
happened. As the NMI handler is about to execute again, there's no reason
to latch now.
To test all this, I forced the NMI handler to call iret and take itself
out of NMI context. I also added assembly code that writes to the serial
port to make sure that it hits the nested path as well as the fixup path.
Everything seems to be working fine.
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul Turner <pjt@google.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-12-08 12:36:23 -05:00
*/
2015-06-08 20:43:07 +02:00
	pushq	$-1			/* ORIG_RAX: no syscall to restart */
x86/asm/entry/64: Always allocate a complete "struct pt_regs" on the kernel stack
The 64-bit entry code used six fewer stack slots by not
saving/restoring registers which are callee-preserved according
to the C ABI, and did not allocate space for them.
Only when syscalls needed a complete "struct pt_regs" was
the complete area allocated and filled in.
As an additional twist, on interrupt entry a "slightly less
truncated pt_regs" trick is used, to make nested interrupt
stacks easier to unwind.
This proved to be a source of significant obfuscation and subtle
bugs. For example, 'stub_fork' had to pop the return address,
extend the struct, save registers, and push the return address back.
Ugly. 'ia32_ptregs_common' popped the return address and "returned" via
a jmp insn, throwing a wrench into the CPU's return stack cache.
This patch changes the code to always allocate a complete
"struct pt_regs" on the kernel stack. The saving of registers
is still done lazily.
"Partial pt_regs" trick on interrupt stack is retained.
Macros which manipulate "struct pt_regs" on stack are reworked:
- ALLOC_PT_GPREGS_ON_STACK allocates the structure.
- SAVE_C_REGS saves to it those registers which are clobbered
by C code.
- SAVE_EXTRA_REGS saves to it all other registers.
- Corresponding RESTORE_* and REMOVE_PT_GPREGS_FROM_STACK macros
reverse it.
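Simplified versions of these macros might look like the sketch below (the
real definitions live in calling.h; the offsets follow the pt_regs layout,
with r15 at the bottom and rdi at the top):

	.macro ALLOC_PT_GPREGS_ON_STACK
	subq	$15*8, %rsp		/* room for all 15 GP registers */
	.endm

	.macro SAVE_C_REGS
	/* Only the registers the C ABI lets a callee clobber: */
	movq	%rdi, 14*8(%rsp)
	movq	%rsi, 13*8(%rsp)
	movq	%rdx, 12*8(%rsp)
	movq	%rcx, 11*8(%rsp)
	movq	%rax, 10*8(%rsp)
	movq	%r8,  9*8(%rsp)
	movq	%r9,  8*8(%rsp)
	movq	%r10, 7*8(%rsp)
	movq	%r11, 6*8(%rsp)
	.endm

	.macro REMOVE_PT_GPREGS_FROM_STACK addskip=0
	addq	$15*8+\addskip, %rsp	/* drop the pt_regs area */
	.endm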
'ia32_ptregs_common', 'stub_fork' and friends lost their ugly dance
with the return pointer.
LOAD_ARGS32 in ia32entry.S now uses symbolic stack offsets
instead of magic numbers.
'error_entry' and 'save_paranoid' now use SAVE_C_REGS +
SAVE_EXTRA_REGS instead of having it open-coded yet again.
The patch was run-tested: 64-bit executables, 32-bit executables, and
strace all work. Timing tests did not show a measurable difference in
32-bit and 64-bit syscall performance.
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Will Drewry <wad@chromium.org>
Link: http://lkml.kernel.org/r/1423778052-21038-2-git-send-email-dvlasenk@redhat.com
Link: http://lkml.kernel.org/r/b89763d354aa23e670b9bdf3a40ae320320a7c2e.1424989793.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-02-26 14:40:27 -08:00
	ALLOC_PT_GPREGS_ON_STACK
2011-12-08 12:32:27 -05:00
/*
2015-02-26 14:40:34 -08:00
 * Use paranoid_entry to handle SWAPGS, but no need to use paranoid_exit
2011-12-08 12:32:27 -05:00
 * as we should not be calling schedule in NMI context.
 * Even with normal interrupts enabled. An NMI should not be
 * setting NEED_RESCHED or anything that normal interrupts and
 * exceptions might do.
 */
2015-06-08 20:43:07 +02:00
	call	paranoid_entry
2012-06-07 10:21:21 -04:00
2008-11-24 13:24:28 +01:00
/* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */
2015-06-08 20:43:07 +02:00
	movq	%rsp, %rdi
	movq	$-1, %rsi
	call	do_nmi
2012-06-07 10:21:21 -04:00
2015-06-08 20:43:07 +02:00
	testl	%ebx, %ebx			/* swapgs needed? */
	jnz	nmi_restore
2008-11-24 13:24:28 +01:00
nmi_swapgs:
	SWAPGS_UNSAFE_STACK
nmi_restore:
2015-02-26 14:40:27 -08:00
	RESTORE_EXTRA_REGS
	RESTORE_C_REGS
2015-07-15 10:29:36 -07:00
/* Point RSP at the "iret" frame. */
2015-02-26 14:40:27 -08:00
	REMOVE_PT_GPREGS_FROM_STACK 6*8
2012-10-01 17:29:25 -07:00
2015-07-15 10:29:38 -07:00
/*
 * Clear "NMI executing". Set DF first so that we can easily
 * distinguish the remaining code between here and IRET from
 * the SYSCALL entry and exit paths. On a native kernel, we
 * could just inspect RIP, but, on paravirt kernels,
 * INTERRUPT_RETURN can translate into a jump into a
 * hypercall page.
 */
	std
	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
2015-07-15 10:29:36 -07:00
/*
 * INTERRUPT_RETURN reads the "iret" frame and exits the NMI
 * stack in a single instruction. We are returning to kernel
 * mode, so this cannot result in a fault.
 */
2015-06-04 13:24:29 -07:00
	INTERRUPT_RETURN
2008-11-24 13:24:28 +01:00
END(nmi)
ENTRY(ignore_sysret)
2015-06-08 20:43:07 +02:00
	mov	$-ENOSYS, %eax
2008-11-24 13:24:28 +01:00
sysret
END(ignore_sysret)