/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (c) 2013-2021, Arm Limited.
 *
 * Adapted from the original at:
 * https://github.com/ARM-software/optimized-routines/blob/e823e3abf5f89ecb/string/aarch64/memcmp.S
 */

#include <linux/linkage.h>
#include <asm/assembler.h>

/* Assumptions:
 *
 * ARMv8-a, AArch64, unaligned accesses.
 */

#define L(label) .L ## label
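
/*
 * L() prefixes each label with ".L", which the assembler treats as a
 * local symbol: the labels stay out of the final symbol table and
 * cannot clash with other kernel symbols.
 */
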
/* Parameters and result. */
#define src1		x0
#define src2		x1
#define limit		x2
#define result		w0
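/*
 * Per AAPCS64 the three arguments arrive in x0-x2 and the int result
 * is returned in w0; result aliases the low half of src1, which is
 * dead by the time the result is written.
 */
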
/* Internal variables. */
#define data1		x3
#define data1w		w3
#define data1h		x4
#define data2		x5
#define data2w		w5
#define data2h		x6
#define tmp1		x7
#define tmp2		x8
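
/*
 * data1h/data2h receive the high 8 bytes of each 16-byte ldp load in
 * the main loop; tmp1 holds the src1 alignment adjustment.
 */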

SYM_FUNC_START(__pi_memcmp)
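	/* Return zero if the regions match, a negative value if the
	   first differing byte of src1 is lower than that of src2, and
	   a positive value otherwise.  */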
	subs	limit, limit, 8
	b.lo	L(less8)

	ldr	data1, [src1], 8
	ldr	data2, [src2], 8
	cmp	data1, data2
	b.ne	L(return)

	subs	limit, limit, 8
	b.gt	L(more16)
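
	/*
	 * 8-16 bytes total: src1/src2 have advanced by 8 and limit now
	 * holds length - 16, in [-8..0], so [src1, limit] addresses the
	 * final 8 bytes, which may overlap the 8 already compared.
	 */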
	ldr	data1, [src1, limit]
	ldr	data2, [src2, limit]
	b	L(return)

L(more16):
	ldr	data1, [src1], 8
	ldr	data2, [src2], 8
	cmp	data1, data2
	bne	L(return)

	/* Jump directly to comparing the last 16 bytes for 32 byte (or less)
	   strings.  */
	subs	limit, limit, 16
	b.ls	L(last_bytes)

	/* We overlap loads between 0-32 bytes at either side of SRC1 when we
	   try to align, so limit it only to strings larger than 128 bytes.  */
	cmp	limit, 96
	b.ls	L(loop16)

	/* Align src1 and adjust src2 with bytes not yet done.  */
	and	tmp1, src1, 15
	add	limit, limit, tmp1
	sub	src1, src1, tmp1
	sub	src2, src2, tmp1
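
	/*
	 * Stepping both pointers back by src1's misalignment re-compares
	 * up to 15 bytes that are already known to be equal, but leaves
	 * src1 16-byte aligned for the ldp loop below.
	 */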

	/* Loop performing 16 bytes per iteration using aligned src1.
	   Limit is pre-decremented by 16 and must be larger than zero.
	   Exit if <= 16 bytes left to do or if the data is not equal.  */
	.p2align 4
L(loop16):
	ldp	data1, data1h, [src1], 16
	ldp	data2, data2h, [src2], 16
	subs	limit, limit, 16
	ccmp	data1, data2, 0, hi
	ccmp	data1h, data2h, 0, eq
	b.eq	L(loop16)
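
	/*
	 * The conditional-compare chain above folds three tests into one
	 * branch: the first ccmp compares data1/data2 only while more
	 * than 16 bytes remain (HI from the subs), the second compares
	 * the high halves only if the low halves matched, and either
	 * failure clears Z so b.eq falls through.
	 */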
	cmp	data1, data2
	bne	L(return)
	mov	data1, data1h
	mov	data2, data2h
	cmp	data1, data2
	bne	L(return)

	/* Compare last 1-16 bytes using unaligned access.  */
L(last_bytes):
	add	src1, src1, limit
	add	src2, src2, limit
	ldp	data1, data1h, [src1]
	ldp	data2, data2h, [src2]
	cmp	data1, data2
	bne	L(return)
	mov	data1, data1h
	mov	data2, data2h
	cmp	data1, data2

	/* Compare data bytes and set return value to 0, -1 or 1.  */
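	/*
	 * On little-endian the first differing byte sits in the least
	 * significant bits of the loaded word, so byte-reverse both words
	 * first: the unsigned comparison then orders by memory byte
	 * order, as memcmp() requires.
	 */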
L(return):
#ifndef __AARCH64EB__
	rev	data1, data1
	rev	data2, data2
#endif
	cmp	data1, data2
L(ret_eq):
	cset	result, ne
	cneg	result, result, lo
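	/* cset yields 0 when equal and 1 otherwise; cneg turns that 1
	   into -1 when data1 compared unsigned-lower than data2.  */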
	ret

	.p2align 4
	/* Compare up to 8 bytes.  Limit is [-8..-1].  */
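	/*
	 * limit holds length - 8 here; the adds below rebias it so the
	 * carry flag selects between a 4-byte compare and the
	 * byte-at-a-time loop.
	 */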
L(less8):
	adds	limit, limit, 4
	b.lo	L(less4)
	ldr	data1w, [src1], 4
	ldr	data2w, [src2], 4
	cmp	data1w, data2w
	b.ne	L(return)
	sub	limit, limit, 4

L(less4):
	adds	limit, limit, 4
	beq	L(ret_eq)
L(byte_loop):
	ldrb	data1w, [src1], 1
	ldrb	data2w, [src2], 1
	subs	limit, limit, 1
	ccmp	data1w, data2w, 0, ne	/* NZCV = 0b0000.  */
	b.eq	L(byte_loop)
	sub	result, data1w, data2w
	ret

SYM_FUNC_END(__pi_memcmp)
SYM_FUNC_ALIAS_WEAK(memcmp, __pi_memcmp)
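
/*
 * memcmp is a weak alias of the canonical __pi_memcmp so that an
 * instrumented implementation can take its place; likewise the export
 * below is skipped on KASAN builds, which supply their own memcmp.
 */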
EXPORT_SYMBOL_NOKASAN(memcmp)