/* SPDX-License-Identifier: GPL-2.0-or-later */
crypto: chacha20 - Add an eight block AVX2 variant for x86_64
Extends the x86_64 ChaCha20 implementation by a function processing eight
ChaCha20 blocks in parallel using AVX2.
For large messages, throughput increases by ~55-70% compared to four block
SSSE3:
testing speed of chacha20 (chacha20-simd) encryption
test 0 (256 bit key, 16 byte blocks): 42249230 operations in 10 seconds (675987680 bytes)
test 1 (256 bit key, 64 byte blocks): 46441641 operations in 10 seconds (2972265024 bytes)
test 2 (256 bit key, 256 byte blocks): 33028112 operations in 10 seconds (8455196672 bytes)
test 3 (256 bit key, 1024 byte blocks): 11568759 operations in 10 seconds (11846409216 bytes)
test 4 (256 bit key, 8192 byte blocks): 1448761 operations in 10 seconds (11868250112 bytes)
testing speed of chacha20 (chacha20-simd) encryption
test 0 (256 bit key, 16 byte blocks): 41999675 operations in 10 seconds (671994800 bytes)
test 1 (256 bit key, 64 byte blocks): 45805908 operations in 10 seconds (2931578112 bytes)
test 2 (256 bit key, 256 byte blocks): 32814947 operations in 10 seconds (8400626432 bytes)
test 3 (256 bit key, 1024 byte blocks): 19777167 operations in 10 seconds (20251819008 bytes)
test 4 (256 bit key, 8192 byte blocks): 2279321 operations in 10 seconds (18672197632 bytes)
Benchmark results from a Core i5-4670T.
Signed-off-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
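The quoted gain can be checked directly from the benchmark listings: dividing the bytes processed in 10 seconds (second run, AVX2) by the same figure from the first run (four-block SSSE3) for the large-block tests reproduces the ~55-70% claim. A small sketch in plain Python (not part of the patch):

```python
# Bytes processed in 10 seconds, taken from the tcrypt listings above.
ssse3 = {1024: 11846409216, 8192: 11868250112}   # four-block SSSE3 run
avx2  = {1024: 20251819008, 8192: 18672197632}   # eight-block AVX2 run

for size in (1024, 8192):
    gain = avx2[size] / ssse3[size] - 1.0
    print(f"{size:5d}-byte blocks: +{gain * 100:.0f}%")   # ~+71% and ~+57%
```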
/*
 * ChaCha 256-bit cipher algorithm, x64 AVX2 functions
*
 * Copyright (C) 2015 Martin Willi
 */

#include <linux/linkage.h>
crypto: x86 - make constants readonly, allow linker to merge them
A lot of asm-optimized routines in arch/x86/crypto/ keep their
constants in .data. This is wrong; they should be in .rodata.
Many of these constants are the same in different modules.
For example, 128-bit shuffle mask 0x000102030405060708090A0B0C0D0E0F
exists in at least half a dozen places.
There is a way to let linker merge them and use just one copy.
The rules are as follows: mergeable objects of different sizes
should not share sections. You can't put them all in one .rodata
section, they will lose "mergeability".
GCC puts its mergeable constants in ".rodata.cstSIZE" sections,
or ".rodata.cstSIZE.<object_name>" if -fdata-sections is used.
This patch does the same:
.section .rodata.cst16.SHUF_MASK, "aM", @progbits, 16
It is important that all data in such a section consists of
16-byte elements, not larger ones, and that there is no implicit
use of one element from another.
When this is not the case, use non-mergeable section:
.section .rodata[.VAR_NAME], "a", @progbits
This reduces .data by ~15 kbytes:
text data bss dec hex filename
11097415 2705840 2630712 16433967 fac32f vmlinux-prev.o
11112095 2690672 2630712 16433479 fac147 vmlinux.o
Merged objects are visible in System.map:
ffffffff81a28810 r POLY
ffffffff81a28810 r POLY
ffffffff81a28820 r TWOONE
ffffffff81a28820 r TWOONE
ffffffff81a28830 r PSHUFFLE_BYTE_FLIP_MASK <- merged regardless of
ffffffff81a28830 r SHUF_MASK <------------- the name difference
ffffffff81a28830 r SHUF_MASK
ffffffff81a28830 r SHUF_MASK
..
ffffffff81a28d00 r K512 <- merged three identical 640-byte tables
ffffffff81a28d00 r K512
ffffffff81a28d00 r K512
Use of object names in section name suffixes is not strictly necessary,
but might help if the link stage someday uses garbage collection
to eliminate unused sections (ld --gc-sections).
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: Josh Poimboeuf <jpoimboe@redhat.com>
CC: Xiaodong Liu <xiaodong.liu@intel.com>
CC: Megha Dey <megha.dey@intel.com>
CC: linux-crypto@vger.kernel.org
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
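The "~15 kbytes" figure follows from the size table quoted above; recomputing the deltas (plain Python, illustration only):

```python
# Size-table values quoted in the commit message above.
text_prev, data_prev = 11097415, 2705840   # vmlinux-prev.o
text_new,  data_new  = 11112095, 2690672   # vmlinux.o

print(data_prev - data_new)   # .data shrinks by 15168 bytes (~15 kbytes)
print(text_new - text_prev)   # text grows by 14680 bytes (merged .rodata)
```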
.section	.rodata.cst32.ROT8, "aM", @progbits, 32
.align 32
ROT8:	.octa 0x0e0d0c0f0a09080b0605040702010003
.octa 0x0e0d0c0f0a09080b0605040702010003
.section	.rodata.cst32.ROT16, "aM", @progbits, 32
.align 32
ROT16:	.octa 0x0d0c0f0e09080b0a0504070601000302
.octa 0x0d0c0f0e09080b0a0504070601000302
.section	.rodata.cst32.CTRINC, "aM", @progbits, 32
.align 32
CTRINC:	.octa 0x00000003000000020000000100000000
.octa 0x00000007000000060000000500000004
.section	.rodata.cst32.CTR2BL, "aM", @progbits, 32
.align 32
CTR2BL:	.octa 0x00000000000000000000000000000000
	.octa 0x00000000000000000000000000000001

.section	.rodata.cst32.CTR4BL, "aM", @progbits, 32
.align 32
CTR4BL:	.octa 0x00000000000000000000000000000002
	.octa 0x00000000000000000000000000000003
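The counter constants above (CTRINC for eight blocks, CTR2BL/CTR4BL for the second lanes of the two- and four-block variants) are simply per-lane 32-bit block-counter increments. A sketch decoding CTRINC's two .octa halves into dwords (plain Python, illustration only; .octa stores the value little-endian):

```python
import struct

# The two .octa values of CTRINC as written in the source above.
lo = 0x00000003000000020000000100000000
hi = 0x00000007000000060000000500000004

# Each 32-bit lane k of the 256-bit register holds the increment k.
words = struct.unpack('<8I', lo.to_bytes(16, 'little') + hi.to_bytes(16, 'little'))
print(words)  # (0, 1, 2, 3, 4, 5, 6, 7)
```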
.text

SYM_FUNC_START(chacha_2block_xor_avx2)
	# %rdi: Input state matrix, s
	# %rsi: up to 2 data blocks output, o
	# %rdx: up to 2 data blocks input, i
	# %rcx: input/output length in bytes
	# %r8d: nrounds

	# This function encrypts two ChaCha blocks by loading the state
	# matrix twice across four AVX registers. It performs matrix operations
	# on four words in each matrix in parallel, but requires shuffling to
	# rearrange the words after each round.
vzeroupper
	# x0..3[0-2] = s0..3
	vbroadcasti128	0x00(%rdi),%ymm0
	vbroadcasti128	0x10(%rdi),%ymm1
	vbroadcasti128	0x20(%rdi),%ymm2
	vbroadcasti128	0x30(%rdi),%ymm3
	vpaddd		CTR2BL(%rip),%ymm3,%ymm3

	vmovdqa		%ymm0,%ymm8
	vmovdqa		%ymm1,%ymm9
	vmovdqa		%ymm2,%ymm10
	vmovdqa		%ymm3,%ymm11

	vmovdqa		ROT8(%rip),%ymm4
	vmovdqa		ROT16(%rip),%ymm5

	mov		%rcx,%rax

.Ldoubleround:
	# x0 += x1, x3 = rotl32(x3 ^ x0, 16)
	vpaddd		%ymm1,%ymm0,%ymm0
	vpxor		%ymm0,%ymm3,%ymm3
	vpshufb		%ymm5,%ymm3,%ymm3

	# x2 += x3, x1 = rotl32(x1 ^ x2, 12)
	vpaddd		%ymm3,%ymm2,%ymm2
	vpxor		%ymm2,%ymm1,%ymm1
	vmovdqa		%ymm1,%ymm6
	vpslld		$12,%ymm6,%ymm6
	vpsrld		$20,%ymm1,%ymm1
	vpor		%ymm6,%ymm1,%ymm1

	# x0 += x1, x3 = rotl32(x3 ^ x0, 8)
	vpaddd		%ymm1,%ymm0,%ymm0
	vpxor		%ymm0,%ymm3,%ymm3
	vpshufb		%ymm4,%ymm3,%ymm3

	# x2 += x3, x1 = rotl32(x1 ^ x2, 7)
	vpaddd		%ymm3,%ymm2,%ymm2
	vpxor		%ymm2,%ymm1,%ymm1
	vmovdqa		%ymm1,%ymm7
	vpslld		$7,%ymm7,%ymm7
	vpsrld		$25,%ymm1,%ymm1
	vpor		%ymm7,%ymm1,%ymm1

	# x1 = shuffle32(x1, MASK(0, 3, 2, 1))
	vpshufd		$0x39,%ymm1,%ymm1
	# x2 = shuffle32(x2, MASK(1, 0, 3, 2))
	vpshufd		$0x4e,%ymm2,%ymm2
	# x3 = shuffle32(x3, MASK(2, 1, 0, 3))
	vpshufd		$0x93,%ymm3,%ymm3

	# x0 += x1, x3 = rotl32(x3 ^ x0, 16)
	vpaddd		%ymm1,%ymm0,%ymm0
	vpxor		%ymm0,%ymm3,%ymm3
	vpshufb		%ymm5,%ymm3,%ymm3

	# x2 += x3, x1 = rotl32(x1 ^ x2, 12)
	vpaddd		%ymm3,%ymm2,%ymm2
	vpxor		%ymm2,%ymm1,%ymm1
	vmovdqa		%ymm1,%ymm6
	vpslld		$12,%ymm6,%ymm6
	vpsrld		$20,%ymm1,%ymm1
	vpor		%ymm6,%ymm1,%ymm1

	# x0 += x1, x3 = rotl32(x3 ^ x0, 8)
	vpaddd		%ymm1,%ymm0,%ymm0
	vpxor		%ymm0,%ymm3,%ymm3
	vpshufb		%ymm4,%ymm3,%ymm3

	# x2 += x3, x1 = rotl32(x1 ^ x2, 7)
	vpaddd		%ymm3,%ymm2,%ymm2
	vpxor		%ymm2,%ymm1,%ymm1
	vmovdqa		%ymm1,%ymm7
	vpslld		$7,%ymm7,%ymm7
	vpsrld		$25,%ymm1,%ymm1
	vpor		%ymm7,%ymm1,%ymm1

	# x1 = shuffle32(x1, MASK(2, 1, 0, 3))
	vpshufd		$0x93,%ymm1,%ymm1
	# x2 = shuffle32(x2, MASK(1, 0, 3, 2))
	vpshufd		$0x4e,%ymm2,%ymm2
	# x3 = shuffle32(x3, MASK(0, 3, 2, 1))
	vpshufd		$0x39,%ymm3,%ymm3

	sub		$2,%r8d
	jnz		.Ldoubleround
	# o0 = i0 ^ (x0 + s0)
	vpaddd		%ymm8,%ymm0,%ymm7
	cmp		$0x10,%rax
	jl		.Lxorpart2
	vpxor		0x00(%rdx),%xmm7,%xmm6
	vmovdqu		%xmm6,0x00(%rsi)
	vextracti128	$1,%ymm7,%xmm0
	# o1 = i1 ^ (x1 + s1)
	vpaddd		%ymm9,%ymm1,%ymm7
	cmp		$0x20,%rax
	jl		.Lxorpart2
	vpxor		0x10(%rdx),%xmm7,%xmm6
	vmovdqu		%xmm6,0x10(%rsi)
	vextracti128	$1,%ymm7,%xmm1
	# o2 = i2 ^ (x2 + s2)
	vpaddd		%ymm10,%ymm2,%ymm7
	cmp		$0x30,%rax
	jl		.Lxorpart2
	vpxor		0x20(%rdx),%xmm7,%xmm6
	vmovdqu		%xmm6,0x20(%rsi)
	vextracti128	$1,%ymm7,%xmm2
	# o3 = i3 ^ (x3 + s3)
	vpaddd		%ymm11,%ymm3,%ymm7
	cmp		$0x40,%rax
	jl		.Lxorpart2
	vpxor		0x30(%rdx),%xmm7,%xmm6
	vmovdqu		%xmm6,0x30(%rsi)
	vextracti128	$1,%ymm7,%xmm3

	# xor and write second block
	vmovdqa		%xmm0,%xmm7
	cmp		$0x50,%rax
	jl		.Lxorpart2
	vpxor		0x40(%rdx),%xmm7,%xmm6
	vmovdqu		%xmm6,0x40(%rsi)

	vmovdqa		%xmm1,%xmm7
	cmp		$0x60,%rax
	jl		.Lxorpart2
	vpxor		0x50(%rdx),%xmm7,%xmm6
	vmovdqu		%xmm6,0x50(%rsi)

	vmovdqa		%xmm2,%xmm7
	cmp		$0x70,%rax
	jl		.Lxorpart2
	vpxor		0x60(%rdx),%xmm7,%xmm6
	vmovdqu		%xmm6,0x60(%rsi)

	vmovdqa		%xmm3,%xmm7
	cmp		$0x80,%rax
	jl		.Lxorpart2
	vpxor		0x70(%rdx),%xmm7,%xmm6
	vmovdqu		%xmm6,0x70(%rsi)

.Ldone2:
	vzeroupper
	ret
.Lxorpart2:
	# xor remaining bytes from partial register into output
	mov		%rax,%r9
	and		$0x0f,%r9
	jz		.Ldone2
	and		$~0x0f,%rax

	mov		%rsi,%r11

	lea		8(%rsp),%r10
	sub		$0x10,%rsp
	and		$~31,%rsp

	lea		(%rdx,%rax),%rsi
	mov		%rsp,%rdi
	mov		%r9,%rcx
	rep movsb

	vpxor		0x00(%rsp),%xmm7,%xmm7
	vmovdqa		%xmm7,0x00(%rsp)

	mov		%rsp,%rsi
	lea		(%r11,%rax),%rdi
	mov		%r9,%rcx
	rep movsb

	lea		-8(%r10),%rsp
	jmp		.Ldone2
SYM_FUNC_END(chacha_2block_xor_avx2)
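The rotl32 steps commented in the round loop above (rotation amounts 16, 12, 8, 7) are the standard ChaCha quarter-round. A scalar Python model for reference (not the kernel code; the vector version runs many lanes of this at once):

```python
def rotl32(v, n):
    """Rotate a 32-bit word left by n bits."""
    return ((v << n) | (v >> (32 - n))) & 0xffffffff

def quarter_round(x, a, b, c, d):
    """One ChaCha quarter-round over state words a, b, c, d, matching
    the add/xor/rotate sequence commented in the assembly above."""
    x[a] = (x[a] + x[b]) & 0xffffffff; x[d] = rotl32(x[d] ^ x[a], 16)
    x[c] = (x[c] + x[d]) & 0xffffffff; x[b] = rotl32(x[b] ^ x[c], 12)
    x[a] = (x[a] + x[b]) & 0xffffffff; x[d] = rotl32(x[d] ^ x[a], 8)
    x[c] = (x[c] + x[d]) & 0xffffffff; x[b] = rotl32(x[b] ^ x[c], 7)
```

RFC 8439's quarter-round test vector can be used to sanity-check the model.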

SYM_FUNC_START(chacha_4block_xor_avx2)
	# %rdi: Input state matrix, s
	# %rsi: up to 4 data blocks output, o
	# %rdx: up to 4 data blocks input, i
	# %rcx: input/output length in bytes
	# %r8d: nrounds

	# This function encrypts four ChaCha blocks by loading the state
	# matrix four times across eight AVX registers. It performs matrix
	# operations on four words in two matrices in parallel, sequentially
	# to the operations on the four words of the other two matrices. The
	# required word shuffling has a rather high latency, so by interleaving
	# the arithmetic on two matrix-pairs it can proceed without much slowdown.
vzeroupper
	# x0..3[0-4] = s0..3
	vbroadcasti128	0x00(%rdi),%ymm0
	vbroadcasti128	0x10(%rdi),%ymm1
	vbroadcasti128	0x20(%rdi),%ymm2
	vbroadcasti128	0x30(%rdi),%ymm3
	vmovdqa		%ymm0,%ymm4
	vmovdqa		%ymm1,%ymm5
	vmovdqa		%ymm2,%ymm6
	vmovdqa		%ymm3,%ymm7
	vpaddd		CTR2BL(%rip),%ymm3,%ymm3
	vpaddd		CTR4BL(%rip),%ymm7,%ymm7

	vmovdqa		%ymm0,%ymm11
	vmovdqa		%ymm1,%ymm12
	vmovdqa		%ymm2,%ymm13
	vmovdqa		%ymm3,%ymm14
	vmovdqa		%ymm7,%ymm15

	vmovdqa		ROT8(%rip),%ymm8
	vmovdqa		ROT16(%rip),%ymm9

	mov		%rcx,%rax

.Ldoubleround4:
	# x0 += x1, x3 = rotl32(x3 ^ x0, 16)
	vpaddd		%ymm1,%ymm0,%ymm0
	vpxor		%ymm0,%ymm3,%ymm3
	vpshufb		%ymm9,%ymm3,%ymm3

	vpaddd		%ymm5,%ymm4,%ymm4
	vpxor		%ymm4,%ymm7,%ymm7
	vpshufb		%ymm9,%ymm7,%ymm7

	# x2 += x3, x1 = rotl32(x1 ^ x2, 12)
	vpaddd		%ymm3,%ymm2,%ymm2
	vpxor		%ymm2,%ymm1,%ymm1
	vmovdqa		%ymm1,%ymm10
	vpslld		$12,%ymm10,%ymm10
	vpsrld		$20,%ymm1,%ymm1
	vpor		%ymm10,%ymm1,%ymm1

	vpaddd		%ymm7,%ymm6,%ymm6
	vpxor		%ymm6,%ymm5,%ymm5
	vmovdqa		%ymm5,%ymm10
	vpslld		$12,%ymm10,%ymm10
	vpsrld		$20,%ymm5,%ymm5
	vpor		%ymm10,%ymm5,%ymm5

	# x0 += x1, x3 = rotl32(x3 ^ x0, 8)
	vpaddd		%ymm1,%ymm0,%ymm0
	vpxor		%ymm0,%ymm3,%ymm3
	vpshufb		%ymm8,%ymm3,%ymm3

	vpaddd		%ymm5,%ymm4,%ymm4
	vpxor		%ymm4,%ymm7,%ymm7
	vpshufb		%ymm8,%ymm7,%ymm7

	# x2 += x3, x1 = rotl32(x1 ^ x2, 7)
	vpaddd		%ymm3,%ymm2,%ymm2
	vpxor		%ymm2,%ymm1,%ymm1
	vmovdqa		%ymm1,%ymm10
	vpslld		$7,%ymm10,%ymm10
	vpsrld		$25,%ymm1,%ymm1
	vpor		%ymm10,%ymm1,%ymm1

	vpaddd		%ymm7,%ymm6,%ymm6
	vpxor		%ymm6,%ymm5,%ymm5
	vmovdqa		%ymm5,%ymm10
	vpslld		$7,%ymm10,%ymm10
	vpsrld		$25,%ymm5,%ymm5
	vpor		%ymm10,%ymm5,%ymm5

	# x1 = shuffle32(x1, MASK(0, 3, 2, 1))
	vpshufd		$0x39,%ymm1,%ymm1
	vpshufd		$0x39,%ymm5,%ymm5
	# x2 = shuffle32(x2, MASK(1, 0, 3, 2))
	vpshufd		$0x4e,%ymm2,%ymm2
	vpshufd		$0x4e,%ymm6,%ymm6
	# x3 = shuffle32(x3, MASK(2, 1, 0, 3))
	vpshufd		$0x93,%ymm3,%ymm3
	vpshufd		$0x93,%ymm7,%ymm7

	# x0 += x1, x3 = rotl32(x3 ^ x0, 16)
	vpaddd		%ymm1,%ymm0,%ymm0
	vpxor		%ymm0,%ymm3,%ymm3
	vpshufb		%ymm9,%ymm3,%ymm3

	vpaddd		%ymm5,%ymm4,%ymm4
	vpxor		%ymm4,%ymm7,%ymm7
	vpshufb		%ymm9,%ymm7,%ymm7

	# x2 += x3, x1 = rotl32(x1 ^ x2, 12)
	vpaddd		%ymm3,%ymm2,%ymm2
	vpxor		%ymm2,%ymm1,%ymm1
	vmovdqa		%ymm1,%ymm10
	vpslld		$12,%ymm10,%ymm10
	vpsrld		$20,%ymm1,%ymm1
	vpor		%ymm10,%ymm1,%ymm1

	vpaddd		%ymm7,%ymm6,%ymm6
	vpxor		%ymm6,%ymm5,%ymm5
	vmovdqa		%ymm5,%ymm10
	vpslld		$12,%ymm10,%ymm10
	vpsrld		$20,%ymm5,%ymm5
	vpor		%ymm10,%ymm5,%ymm5

	# x0 += x1, x3 = rotl32(x3 ^ x0, 8)
	vpaddd		%ymm1,%ymm0,%ymm0
	vpxor		%ymm0,%ymm3,%ymm3
	vpshufb		%ymm8,%ymm3,%ymm3

	vpaddd		%ymm5,%ymm4,%ymm4
	vpxor		%ymm4,%ymm7,%ymm7
	vpshufb		%ymm8,%ymm7,%ymm7

	# x2 += x3, x1 = rotl32(x1 ^ x2, 7)
	vpaddd		%ymm3,%ymm2,%ymm2
	vpxor		%ymm2,%ymm1,%ymm1
	vmovdqa		%ymm1,%ymm10
	vpslld		$7,%ymm10,%ymm10
	vpsrld		$25,%ymm1,%ymm1
	vpor		%ymm10,%ymm1,%ymm1

	vpaddd		%ymm7,%ymm6,%ymm6
	vpxor		%ymm6,%ymm5,%ymm5
	vmovdqa		%ymm5,%ymm10
	vpslld		$7,%ymm10,%ymm10
	vpsrld		$25,%ymm5,%ymm5
	vpor		%ymm10,%ymm5,%ymm5

	# x1 = shuffle32(x1, MASK(2, 1, 0, 3))
	vpshufd		$0x93,%ymm1,%ymm1
	vpshufd		$0x93,%ymm5,%ymm5
	# x2 = shuffle32(x2, MASK(1, 0, 3, 2))
	vpshufd		$0x4e,%ymm2,%ymm2
	vpshufd		$0x4e,%ymm6,%ymm6
	# x3 = shuffle32(x3, MASK(0, 3, 2, 1))
	vpshufd		$0x39,%ymm3,%ymm3
	vpshufd		$0x39,%ymm7,%ymm7

	sub		$2,%r8d
	jnz		.Ldoubleround4
# o0 = i 0 ^ ( x0 + s0 ) , f i r s t b l o c k
vpaddd % y m m 1 1 ,% y m m 0 ,% y m m 1 0
cmp $ 0 x10 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x00 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x00 ( % r s i )
vextracti1 2 8 $ 1 ,% y m m 1 0 ,% x m m 0
# o1 = i 1 ^ ( x1 + s1 ) , f i r s t b l o c k
vpaddd % y m m 1 2 ,% y m m 1 ,% y m m 1 0
cmp $ 0 x20 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x10 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x10 ( % r s i )
vextracti1 2 8 $ 1 ,% y m m 1 0 ,% x m m 1
# o2 = i 2 ^ ( x2 + s2 ) , f i r s t b l o c k
vpaddd % y m m 1 3 ,% y m m 2 ,% y m m 1 0
cmp $ 0 x30 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x20 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x20 ( % r s i )
vextracti1 2 8 $ 1 ,% y m m 1 0 ,% x m m 2
# o3 = i 3 ^ ( x3 + s3 ) , f i r s t b l o c k
vpaddd % y m m 1 4 ,% y m m 3 ,% y m m 1 0
cmp $ 0 x40 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x30 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x30 ( % r s i )
vextracti1 2 8 $ 1 ,% y m m 1 0 ,% x m m 3
# xor a n d w r i t e s e c o n d b l o c k
vmovdqa % x m m 0 ,% x m m 1 0
cmp $ 0 x50 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x40 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x40 ( % r s i )
vmovdqa % x m m 1 ,% x m m 1 0
cmp $ 0 x60 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x50 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x50 ( % r s i )
vmovdqa % x m m 2 ,% x m m 1 0
cmp $ 0 x70 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x60 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x60 ( % r s i )
vmovdqa % x m m 3 ,% x m m 1 0
cmp $ 0 x80 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x70 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x70 ( % r s i )
# o0 = i 0 ^ ( x0 + s0 ) , t h i r d b l o c k
vpaddd % y m m 1 1 ,% y m m 4 ,% y m m 1 0
cmp $ 0 x90 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x80 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x80 ( % r s i )
vextracti1 2 8 $ 1 ,% y m m 1 0 ,% x m m 4
# o1 = i 1 ^ ( x1 + s1 ) , t h i r d b l o c k
vpaddd % y m m 1 2 ,% y m m 5 ,% y m m 1 0
cmp $ 0 x a0 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x90 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x90 ( % r s i )
vextracti1 2 8 $ 1 ,% y m m 1 0 ,% x m m 5
# o2 = i 2 ^ ( x2 + s2 ) , t h i r d b l o c k
vpaddd % y m m 1 3 ,% y m m 6 ,% y m m 1 0
cmp $ 0 x b0 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x a0 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x a0 ( % r s i )
vextracti1 2 8 $ 1 ,% y m m 1 0 ,% x m m 6
# o3 = i 3 ^ ( x3 + s3 ) , t h i r d b l o c k
vpaddd % y m m 1 5 ,% y m m 7 ,% y m m 1 0
cmp $ 0 x c0 ,% r a x
jl . L x o r p a r t 4
vpxor 0 x b0 ( % r d x ) ,% x m m 1 0 ,% x m m 9
vmovdqu % x m m 9 ,0 x b0 ( % r s i )
vextracti1 2 8 $ 1 ,% y m m 1 0 ,% x m m 7
	# xor and write fourth block
	vmovdqa		%xmm4,%xmm10
	cmp		$0xd0,%rax
	jl		.Lxorpart4
	vpxor		0xc0(%rdx),%xmm10,%xmm9
	vmovdqu		%xmm9,0xc0(%rsi)

	vmovdqa		%xmm5,%xmm10
	cmp		$0xe0,%rax
	jl		.Lxorpart4
	vpxor		0xd0(%rdx),%xmm10,%xmm9
	vmovdqu		%xmm9,0xd0(%rsi)

	vmovdqa		%xmm6,%xmm10
	cmp		$0xf0,%rax
	jl		.Lxorpart4
	vpxor		0xe0(%rdx),%xmm10,%xmm9
	vmovdqu		%xmm9,0xe0(%rsi)

	vmovdqa		%xmm7,%xmm10
	cmp		$0x100,%rax
	jl		.Lxorpart4
	vpxor		0xf0(%rdx),%xmm10,%xmm9
	vmovdqu		%xmm9,0xf0(%rsi)
.Ldone4:
	vzeroupper
	ret

.Lxorpart4:
	# xor remaining bytes from partial register into output
	mov		%rax,%r9
	and		$0x0f,%r9
	jz		.Ldone4
	and		$~0x0f,%rax

	mov		%rsi,%r11

	lea		8(%rsp),%r10
	sub		$0x10,%rsp
	and		$~31,%rsp

	lea		(%rdx,%rax),%rsi
	mov		%rsp,%rdi
	mov		%r9,%rcx
	rep movsb

	vpxor		0x00(%rsp),%xmm10,%xmm10
	vmovdqa		%xmm10,0x00(%rsp)

	mov		%rsp,%rsi
	lea		(%r11,%rax),%rdi
	mov		%r9,%rcx
	rep movsb

	lea		-8(%r10),%rsp
	jmp		.Ldone4

SYM_FUNC_END(chacha_4block_xor_avx2)

SYM_FUNC_START(chacha_8block_xor_avx2)
	# %rdi: Input state matrix, s
	# %rsi: up to 8 data blocks output, o
	# %rdx: up to 8 data blocks input, i
	# %rcx: input/output length in bytes
	# %r8d: nrounds
	# This function encrypts eight consecutive ChaCha blocks by loading
	# the state matrix in AVX registers eight times. As we need some
	# scratch registers, we save the first four registers on the stack. The
	# algorithm performs each operation on the corresponding word of each
	# state matrix, hence requires no word shuffling. For final XORing step
	# we transpose the matrix by interleaving 32-, 64- and then 128-bit
	# words, which allows us to do XOR in AVX registers. 8/16-bit word
	# rotation is done with the slightly better performing byte shuffling,
	# 7/12-bit word rotation uses traditional shift+OR.

	vzeroupper
	# 4 * 32 byte stack, 32-byte aligned
	lea		8(%rsp),%r10
	and		$~31, %rsp
	sub		$0x80, %rsp

	mov		%rcx,%rax
	# x0..15[0-7] = s[0..15]
	vpbroadcastd	0x00(%rdi),%ymm0
	vpbroadcastd	0x04(%rdi),%ymm1
	vpbroadcastd	0x08(%rdi),%ymm2
	vpbroadcastd	0x0c(%rdi),%ymm3
	vpbroadcastd	0x10(%rdi),%ymm4
	vpbroadcastd	0x14(%rdi),%ymm5
	vpbroadcastd	0x18(%rdi),%ymm6
	vpbroadcastd	0x1c(%rdi),%ymm7
	vpbroadcastd	0x20(%rdi),%ymm8
	vpbroadcastd	0x24(%rdi),%ymm9
	vpbroadcastd	0x28(%rdi),%ymm10
	vpbroadcastd	0x2c(%rdi),%ymm11
	vpbroadcastd	0x30(%rdi),%ymm12
	vpbroadcastd	0x34(%rdi),%ymm13
	vpbroadcastd	0x38(%rdi),%ymm14
	vpbroadcastd	0x3c(%rdi),%ymm15

	# x0..3 on stack
	vmovdqa		%ymm0,0x00(%rsp)
	vmovdqa		%ymm1,0x20(%rsp)
	vmovdqa		%ymm2,0x40(%rsp)
	vmovdqa		%ymm3,0x60(%rsp)

	vmovdqa		CTRINC(%rip),%ymm1
	vmovdqa		ROT8(%rip),%ymm2
	vmovdqa		ROT16(%rip),%ymm3

	# x12 += counter values 0-7
	vpaddd		%ymm1,%ymm12,%ymm12
.Ldoubleround8:
	# x0 += x4, x12 = rotl32(x12 ^ x0, 16)
	vpaddd		0x00(%rsp),%ymm4,%ymm0
	vmovdqa		%ymm0,0x00(%rsp)
	vpxor		%ymm0,%ymm12,%ymm12
	vpshufb		%ymm3,%ymm12,%ymm12
	# x1 += x5, x13 = rotl32(x13 ^ x1, 16)
	vpaddd		0x20(%rsp),%ymm5,%ymm0
	vmovdqa		%ymm0,0x20(%rsp)
	vpxor		%ymm0,%ymm13,%ymm13
	vpshufb		%ymm3,%ymm13,%ymm13
	# x2 += x6, x14 = rotl32(x14 ^ x2, 16)
	vpaddd		0x40(%rsp),%ymm6,%ymm0
	vmovdqa		%ymm0,0x40(%rsp)
	vpxor		%ymm0,%ymm14,%ymm14
	vpshufb		%ymm3,%ymm14,%ymm14
	# x3 += x7, x15 = rotl32(x15 ^ x3, 16)
	vpaddd		0x60(%rsp),%ymm7,%ymm0
	vmovdqa		%ymm0,0x60(%rsp)
	vpxor		%ymm0,%ymm15,%ymm15
	vpshufb		%ymm3,%ymm15,%ymm15
	# x8 += x12, x4 = rotl32(x4 ^ x8, 12)
	vpaddd		%ymm12,%ymm8,%ymm8
	vpxor		%ymm8,%ymm4,%ymm4
	vpslld		$12,%ymm4,%ymm0
	vpsrld		$20,%ymm4,%ymm4
	vpor		%ymm0,%ymm4,%ymm4
	# x9 += x13, x5 = rotl32(x5 ^ x9, 12)
	vpaddd		%ymm13,%ymm9,%ymm9
	vpxor		%ymm9,%ymm5,%ymm5
	vpslld		$12,%ymm5,%ymm0
	vpsrld		$20,%ymm5,%ymm5
	vpor		%ymm0,%ymm5,%ymm5
	# x10 += x14, x6 = rotl32(x6 ^ x10, 12)
	vpaddd		%ymm14,%ymm10,%ymm10
	vpxor		%ymm10,%ymm6,%ymm6
	vpslld		$12,%ymm6,%ymm0
	vpsrld		$20,%ymm6,%ymm6
	vpor		%ymm0,%ymm6,%ymm6
	# x11 += x15, x7 = rotl32(x7 ^ x11, 12)
	vpaddd		%ymm15,%ymm11,%ymm11
	vpxor		%ymm11,%ymm7,%ymm7
	vpslld		$12,%ymm7,%ymm0
	vpsrld		$20,%ymm7,%ymm7
	vpor		%ymm0,%ymm7,%ymm7
	# x0 += x4, x12 = rotl32(x12 ^ x0, 8)
	vpaddd		0x00(%rsp),%ymm4,%ymm0
	vmovdqa		%ymm0,0x00(%rsp)
	vpxor		%ymm0,%ymm12,%ymm12
	vpshufb		%ymm2,%ymm12,%ymm12
	# x1 += x5, x13 = rotl32(x13 ^ x1, 8)
	vpaddd		0x20(%rsp),%ymm5,%ymm0
	vmovdqa		%ymm0,0x20(%rsp)
	vpxor		%ymm0,%ymm13,%ymm13
	vpshufb		%ymm2,%ymm13,%ymm13
	# x2 += x6, x14 = rotl32(x14 ^ x2, 8)
	vpaddd		0x40(%rsp),%ymm6,%ymm0
	vmovdqa		%ymm0,0x40(%rsp)
	vpxor		%ymm0,%ymm14,%ymm14
	vpshufb		%ymm2,%ymm14,%ymm14
	# x3 += x7, x15 = rotl32(x15 ^ x3, 8)
	vpaddd		0x60(%rsp),%ymm7,%ymm0
	vmovdqa		%ymm0,0x60(%rsp)
	vpxor		%ymm0,%ymm15,%ymm15
	vpshufb		%ymm2,%ymm15,%ymm15
	# x8 += x12, x4 = rotl32(x4 ^ x8, 7)
	vpaddd		%ymm12,%ymm8,%ymm8
	vpxor		%ymm8,%ymm4,%ymm4
	vpslld		$7,%ymm4,%ymm0
	vpsrld		$25,%ymm4,%ymm4
	vpor		%ymm0,%ymm4,%ymm4
	# x9 += x13, x5 = rotl32(x5 ^ x9, 7)
	vpaddd		%ymm13,%ymm9,%ymm9
	vpxor		%ymm9,%ymm5,%ymm5
	vpslld		$7,%ymm5,%ymm0
	vpsrld		$25,%ymm5,%ymm5
	vpor		%ymm0,%ymm5,%ymm5
	# x10 += x14, x6 = rotl32(x6 ^ x10, 7)
	vpaddd		%ymm14,%ymm10,%ymm10
	vpxor		%ymm10,%ymm6,%ymm6
	vpslld		$7,%ymm6,%ymm0
	vpsrld		$25,%ymm6,%ymm6
	vpor		%ymm0,%ymm6,%ymm6
	# x11 += x15, x7 = rotl32(x7 ^ x11, 7)
	vpaddd		%ymm15,%ymm11,%ymm11
	vpxor		%ymm11,%ymm7,%ymm7
	vpslld		$7,%ymm7,%ymm0
	vpsrld		$25,%ymm7,%ymm7
	vpor		%ymm0,%ymm7,%ymm7
	# x0 += x5, x15 = rotl32(x15 ^ x0, 16)
	vpaddd		0x00(%rsp),%ymm5,%ymm0
	vmovdqa		%ymm0,0x00(%rsp)
	vpxor		%ymm0,%ymm15,%ymm15
	vpshufb		%ymm3,%ymm15,%ymm15
	# x1 += x6, x12 = rotl32(x12 ^ x1, 16)
	vpaddd		0x20(%rsp),%ymm6,%ymm0
	vmovdqa		%ymm0,0x20(%rsp)
	vpxor		%ymm0,%ymm12,%ymm12
	vpshufb		%ymm3,%ymm12,%ymm12
	# x2 += x7, x13 = rotl32(x13 ^ x2, 16)
	vpaddd		0x40(%rsp),%ymm7,%ymm0
	vmovdqa		%ymm0,0x40(%rsp)
	vpxor		%ymm0,%ymm13,%ymm13
	vpshufb		%ymm3,%ymm13,%ymm13
	# x3 += x4, x14 = rotl32(x14 ^ x3, 16)
	vpaddd		0x60(%rsp),%ymm4,%ymm0
	vmovdqa		%ymm0,0x60(%rsp)
	vpxor		%ymm0,%ymm14,%ymm14
	vpshufb		%ymm3,%ymm14,%ymm14
	# x10 += x15, x5 = rotl32(x5 ^ x10, 12)
	vpaddd		%ymm15,%ymm10,%ymm10
	vpxor		%ymm10,%ymm5,%ymm5
	vpslld		$12,%ymm5,%ymm0
	vpsrld		$20,%ymm5,%ymm5
	vpor		%ymm0,%ymm5,%ymm5
	# x11 += x12, x6 = rotl32(x6 ^ x11, 12)
	vpaddd		%ymm12,%ymm11,%ymm11
	vpxor		%ymm11,%ymm6,%ymm6
	vpslld		$12,%ymm6,%ymm0
	vpsrld		$20,%ymm6,%ymm6
	vpor		%ymm0,%ymm6,%ymm6
	# x8 += x13, x7 = rotl32(x7 ^ x8, 12)
	vpaddd		%ymm13,%ymm8,%ymm8
	vpxor		%ymm8,%ymm7,%ymm7
	vpslld		$12,%ymm7,%ymm0
	vpsrld		$20,%ymm7,%ymm7
	vpor		%ymm0,%ymm7,%ymm7
	# x9 += x14, x4 = rotl32(x4 ^ x9, 12)
	vpaddd		%ymm14,%ymm9,%ymm9
	vpxor		%ymm9,%ymm4,%ymm4
	vpslld		$12,%ymm4,%ymm0
	vpsrld		$20,%ymm4,%ymm4
	vpor		%ymm0,%ymm4,%ymm4
	# x0 += x5, x15 = rotl32(x15 ^ x0, 8)
	vpaddd		0x00(%rsp),%ymm5,%ymm0
	vmovdqa		%ymm0,0x00(%rsp)
	vpxor		%ymm0,%ymm15,%ymm15
	vpshufb		%ymm2,%ymm15,%ymm15
	# x1 += x6, x12 = rotl32(x12 ^ x1, 8)
	vpaddd		0x20(%rsp),%ymm6,%ymm0
	vmovdqa		%ymm0,0x20(%rsp)
	vpxor		%ymm0,%ymm12,%ymm12
	vpshufb		%ymm2,%ymm12,%ymm12
	# x2 += x7, x13 = rotl32(x13 ^ x2, 8)
	vpaddd		0x40(%rsp),%ymm7,%ymm0
	vmovdqa		%ymm0,0x40(%rsp)
	vpxor		%ymm0,%ymm13,%ymm13
	vpshufb		%ymm2,%ymm13,%ymm13
	# x3 += x4, x14 = rotl32(x14 ^ x3, 8)
	vpaddd		0x60(%rsp),%ymm4,%ymm0
	vmovdqa		%ymm0,0x60(%rsp)
	vpxor		%ymm0,%ymm14,%ymm14
	vpshufb		%ymm2,%ymm14,%ymm14
	# x10 += x15, x5 = rotl32(x5 ^ x10, 7)
	vpaddd		%ymm15,%ymm10,%ymm10
	vpxor		%ymm10,%ymm5,%ymm5
	vpslld		$7,%ymm5,%ymm0
	vpsrld		$25,%ymm5,%ymm5
	vpor		%ymm0,%ymm5,%ymm5
	# x11 += x12, x6 = rotl32(x6 ^ x11, 7)
	vpaddd		%ymm12,%ymm11,%ymm11
	vpxor		%ymm11,%ymm6,%ymm6
	vpslld		$7,%ymm6,%ymm0
	vpsrld		$25,%ymm6,%ymm6
	vpor		%ymm0,%ymm6,%ymm6
	# x8 += x13, x7 = rotl32(x7 ^ x8, 7)
	vpaddd		%ymm13,%ymm8,%ymm8
	vpxor		%ymm8,%ymm7,%ymm7
	vpslld		$7,%ymm7,%ymm0
	vpsrld		$25,%ymm7,%ymm7
	vpor		%ymm0,%ymm7,%ymm7
	# x9 += x14, x4 = rotl32(x4 ^ x9, 7)
	vpaddd		%ymm14,%ymm9,%ymm9
	vpxor		%ymm9,%ymm4,%ymm4
	vpslld		$7,%ymm4,%ymm0
	vpsrld		$25,%ymm4,%ymm4
	vpor		%ymm0,%ymm4,%ymm4
	sub		$2,%r8d
	jnz		.Ldoubleround8
	# x0..15[0-7] += s[0..15]
	vpbroadcastd	0x00(%rdi),%ymm0
	vpaddd		0x00(%rsp),%ymm0,%ymm0
	vmovdqa		%ymm0,0x00(%rsp)
	vpbroadcastd	0x04(%rdi),%ymm0
	vpaddd		0x20(%rsp),%ymm0,%ymm0
	vmovdqa		%ymm0,0x20(%rsp)
	vpbroadcastd	0x08(%rdi),%ymm0
	vpaddd		0x40(%rsp),%ymm0,%ymm0
	vmovdqa		%ymm0,0x40(%rsp)
	vpbroadcastd	0x0c(%rdi),%ymm0
	vpaddd		0x60(%rsp),%ymm0,%ymm0
	vmovdqa		%ymm0,0x60(%rsp)
	vpbroadcastd	0x10(%rdi),%ymm0
	vpaddd		%ymm0,%ymm4,%ymm4
	vpbroadcastd	0x14(%rdi),%ymm0
	vpaddd		%ymm0,%ymm5,%ymm5
	vpbroadcastd	0x18(%rdi),%ymm0
	vpaddd		%ymm0,%ymm6,%ymm6
	vpbroadcastd	0x1c(%rdi),%ymm0
	vpaddd		%ymm0,%ymm7,%ymm7
	vpbroadcastd	0x20(%rdi),%ymm0
	vpaddd		%ymm0,%ymm8,%ymm8
	vpbroadcastd	0x24(%rdi),%ymm0
	vpaddd		%ymm0,%ymm9,%ymm9
	vpbroadcastd	0x28(%rdi),%ymm0
	vpaddd		%ymm0,%ymm10,%ymm10
	vpbroadcastd	0x2c(%rdi),%ymm0
	vpaddd		%ymm0,%ymm11,%ymm11
	vpbroadcastd	0x30(%rdi),%ymm0
	vpaddd		%ymm0,%ymm12,%ymm12
	vpbroadcastd	0x34(%rdi),%ymm0
	vpaddd		%ymm0,%ymm13,%ymm13
	vpbroadcastd	0x38(%rdi),%ymm0
	vpaddd		%ymm0,%ymm14,%ymm14
	vpbroadcastd	0x3c(%rdi),%ymm0
	vpaddd		%ymm0,%ymm15,%ymm15

	# x12 += counter values 0-7
	vpaddd		%ymm1,%ymm12,%ymm12
	# interleave 32-bit words in state n, n+1
	vmovdqa		0x00(%rsp),%ymm0
	vmovdqa		0x20(%rsp),%ymm1
	vpunpckldq	%ymm1,%ymm0,%ymm2
	vpunpckhdq	%ymm1,%ymm0,%ymm1
	vmovdqa		%ymm2,0x00(%rsp)
	vmovdqa		%ymm1,0x20(%rsp)
	vmovdqa		0x40(%rsp),%ymm0
	vmovdqa		0x60(%rsp),%ymm1
	vpunpckldq	%ymm1,%ymm0,%ymm2
	vpunpckhdq	%ymm1,%ymm0,%ymm1
	vmovdqa		%ymm2,0x40(%rsp)
	vmovdqa		%ymm1,0x60(%rsp)
	vmovdqa		%ymm4,%ymm0
	vpunpckldq	%ymm5,%ymm0,%ymm4
	vpunpckhdq	%ymm5,%ymm0,%ymm5
	vmovdqa		%ymm6,%ymm0
	vpunpckldq	%ymm7,%ymm0,%ymm6
	vpunpckhdq	%ymm7,%ymm0,%ymm7
	vmovdqa		%ymm8,%ymm0
	vpunpckldq	%ymm9,%ymm0,%ymm8
	vpunpckhdq	%ymm9,%ymm0,%ymm9
	vmovdqa		%ymm10,%ymm0
	vpunpckldq	%ymm11,%ymm0,%ymm10
	vpunpckhdq	%ymm11,%ymm0,%ymm11
	vmovdqa		%ymm12,%ymm0
	vpunpckldq	%ymm13,%ymm0,%ymm12
	vpunpckhdq	%ymm13,%ymm0,%ymm13
	vmovdqa		%ymm14,%ymm0
	vpunpckldq	%ymm15,%ymm0,%ymm14
	vpunpckhdq	%ymm15,%ymm0,%ymm15

	# interleave 64-bit words in state n, n+2
	vmovdqa		0x00(%rsp),%ymm0
	vmovdqa		0x40(%rsp),%ymm2
	vpunpcklqdq	%ymm2,%ymm0,%ymm1
	vpunpckhqdq	%ymm2,%ymm0,%ymm2
	vmovdqa		%ymm1,0x00(%rsp)
	vmovdqa		%ymm2,0x40(%rsp)
	vmovdqa		0x20(%rsp),%ymm0
	vmovdqa		0x60(%rsp),%ymm2
	vpunpcklqdq	%ymm2,%ymm0,%ymm1
	vpunpckhqdq	%ymm2,%ymm0,%ymm2
	vmovdqa		%ymm1,0x20(%rsp)
	vmovdqa		%ymm2,0x60(%rsp)
	vmovdqa		%ymm4,%ymm0
	vpunpcklqdq	%ymm6,%ymm0,%ymm4
	vpunpckhqdq	%ymm6,%ymm0,%ymm6
	vmovdqa		%ymm5,%ymm0
	vpunpcklqdq	%ymm7,%ymm0,%ymm5
	vpunpckhqdq	%ymm7,%ymm0,%ymm7
	vmovdqa		%ymm8,%ymm0
	vpunpcklqdq	%ymm10,%ymm0,%ymm8
	vpunpckhqdq	%ymm10,%ymm0,%ymm10
	vmovdqa		%ymm9,%ymm0
	vpunpcklqdq	%ymm11,%ymm0,%ymm9
	vpunpckhqdq	%ymm11,%ymm0,%ymm11
	vmovdqa		%ymm12,%ymm0
	vpunpcklqdq	%ymm14,%ymm0,%ymm12
	vpunpckhqdq	%ymm14,%ymm0,%ymm14
	vmovdqa		%ymm13,%ymm0
	vpunpcklqdq	%ymm15,%ymm0,%ymm13
	vpunpckhqdq	%ymm15,%ymm0,%ymm15
	# interleave 128-bit words in state n, n+4
	# xor/write first four blocks
	vmovdqa		0x00(%rsp),%ymm1
	vperm2i128	$0x20,%ymm4,%ymm1,%ymm0
	cmp		$0x0020,%rax
	jl		.Lxorpart8
	vpxor		0x0000(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x0000(%rsi)
	vperm2i128	$0x31,%ymm4,%ymm1,%ymm4
	vperm2i128	$0x20,%ymm12,%ymm8,%ymm0
	cmp		$0x0040,%rax
	jl		.Lxorpart8
	vpxor		0x0020(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x0020(%rsi)
	vperm2i128	$0x31,%ymm12,%ymm8,%ymm12

	vmovdqa		0x40(%rsp),%ymm1
	vperm2i128	$0x20,%ymm6,%ymm1,%ymm0
	cmp		$0x0060,%rax
	jl		.Lxorpart8
	vpxor		0x0040(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x0040(%rsi)
	vperm2i128	$0x31,%ymm6,%ymm1,%ymm6
	vperm2i128	$0x20,%ymm14,%ymm10,%ymm0
	cmp		$0x0080,%rax
	jl		.Lxorpart8
	vpxor		0x0060(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x0060(%rsi)
	vperm2i128	$0x31,%ymm14,%ymm10,%ymm14

	vmovdqa		0x20(%rsp),%ymm1
	vperm2i128	$0x20,%ymm5,%ymm1,%ymm0
	cmp		$0x00a0,%rax
	jl		.Lxorpart8
	vpxor		0x0080(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x0080(%rsi)
	vperm2i128	$0x31,%ymm5,%ymm1,%ymm5
	vperm2i128	$0x20,%ymm13,%ymm9,%ymm0
	cmp		$0x00c0,%rax
	jl		.Lxorpart8
	vpxor		0x00a0(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x00a0(%rsi)
	vperm2i128	$0x31,%ymm13,%ymm9,%ymm13
	vmovdqa		0x60(%rsp),%ymm1
	vperm2i128	$0x20,%ymm7,%ymm1,%ymm0
	cmp		$0x00e0,%rax
	jl		.Lxorpart8
	vpxor		0x00c0(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x00c0(%rsi)
	vperm2i128	$0x31,%ymm7,%ymm1,%ymm7
	vperm2i128	$0x20,%ymm15,%ymm11,%ymm0
	cmp		$0x0100,%rax
	jl		.Lxorpart8
	vpxor		0x00e0(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x00e0(%rsi)
	vperm2i128	$0x31,%ymm15,%ymm11,%ymm15

	# xor remaining blocks, write to output
	vmovdqa		%ymm4,%ymm0
	cmp		$0x0120,%rax
	jl		.Lxorpart8
	vpxor		0x0100(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x0100(%rsi)
	vmovdqa		%ymm12,%ymm0
	cmp		$0x0140,%rax
	jl		.Lxorpart8
	vpxor		0x0120(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x0120(%rsi)
	vmovdqa		%ymm6,%ymm0
	cmp		$0x0160,%rax
	jl		.Lxorpart8
	vpxor		0x0140(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x0140(%rsi)
	vmovdqa		%ymm14,%ymm0
	cmp		$0x0180,%rax
	jl		.Lxorpart8
	vpxor		0x0160(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x0160(%rsi)
	vmovdqa		%ymm5,%ymm0
	cmp		$0x01a0,%rax
	jl		.Lxorpart8
	vpxor		0x0180(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x0180(%rsi)
	vmovdqa		%ymm13,%ymm0
	cmp		$0x01c0,%rax
	jl		.Lxorpart8
	vpxor		0x01a0(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x01a0(%rsi)
	vmovdqa		%ymm7,%ymm0
	cmp		$0x01e0,%rax
	jl		.Lxorpart8
	vpxor		0x01c0(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x01c0(%rsi)
	vmovdqa		%ymm15,%ymm0
	cmp		$0x0200,%rax
	jl		.Lxorpart8
	vpxor		0x01e0(%rdx),%ymm0,%ymm0
	vmovdqu		%ymm0,0x01e0(%rsi)
.Ldone8:
	vzeroupper
	lea		-8(%r10),%rsp
	ret
.Lxorpart8:
	# xor remaining bytes from partial register into output
	mov		%rax,%r9
	and		$0x1f,%r9
	jz		.Ldone8
	and		$~0x1f,%rax

	mov		%rsi,%r11

	lea		(%rdx,%rax),%rsi
	mov		%rsp,%rdi
	mov		%r9,%rcx
	rep movsb

	vpxor		0x00(%rsp),%ymm0,%ymm0
	vmovdqa		%ymm0,0x00(%rsp)

	mov		%rsp,%rsi
	lea		(%r11,%rax),%rdi
	mov		%r9,%rcx
	rep movsb

	jmp		.Ldone8
SYM_FUNC_END(chacha_8block_xor_avx2)
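The `.Lxorpart8` path above handles a message whose length is not a multiple of the 32-byte register width: the leftover bytes are staged through a 32-byte scratch slot on the stack with `rep movsb`, XORed against the full keystream register, then copied back, so the wide `vpxor`/`vmovdqu` never touch memory past the end of the message. A minimal Python model of that tail handling follows; the function name and buffer layout are illustrative, not taken from the kernel source.

```python
def xor_partial_block(keystream: bytes, src: bytes, dst: bytearray, total_len: int) -> None:
    # Hypothetical model of the .Lxorpart8 path: the final (total_len % 32)
    # bytes are staged through a 32-byte scratch buffer so the full-width
    # XOR never reads or writes past the end of the message.
    rem = total_len & 0x1f                # %r9: leftover bytes after full blocks
    if rem == 0:                          # jz .Ldone8 -- nothing partial to do
        return
    off = total_len & ~0x1f               # %rax: offset of the partial block
    scratch = bytearray(32)               # stands in for the stack slot at (%rsp)
    scratch[:rem] = src[off:off + rem]    # rep movsb: source tail -> scratch
    for i in range(32):                   # vpxor against the keystream register
        scratch[i] ^= keystream[i]
    dst[off:off + rem] = scratch[:rem]    # rep movsb: scratch -> destination tail
```

Only the first `rem` bytes of the XORed scratch buffer are copied out, which is what makes the full-register XOR safe.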