crypto: poly1305 - Add a four block AVX2 variant for x86_64
Extends the x86_64 Poly1305 authenticator by a function processing four
consecutive Poly1305 blocks in parallel using AVX2 instructions.
For large messages, throughput increases by ~15-45% compared to two
block SSE2:
testing speed of poly1305 (poly1305-simd)
test 0 ( 96 byte blocks, 16 bytes per update, 6 updates): 3809514 opers/sec, 365713411 bytes/sec
test 1 ( 96 byte blocks, 32 bytes per update, 3 updates): 5973423 opers/sec, 573448627 bytes/sec
test 2 ( 96 byte blocks, 96 bytes per update, 1 updates): 9446779 opers/sec, 906890803 bytes/sec
test 3 ( 288 byte blocks, 16 bytes per update, 18 updates): 1364814 opers/sec, 393066691 bytes/sec
test 4 ( 288 byte blocks, 32 bytes per update, 9 updates): 2045780 opers/sec, 589184697 bytes/sec
test 5 ( 288 byte blocks, 288 bytes per update, 1 updates): 3711946 opers/sec, 1069040592 bytes/sec
test 6 ( 1056 byte blocks, 32 bytes per update, 33 updates): 573686 opers/sec, 605812732 bytes/sec
test 7 ( 1056 byte blocks, 1056 bytes per update, 1 updates): 1647802 opers/sec, 1740079440 bytes/sec
test 8 ( 2080 byte blocks, 32 bytes per update, 65 updates): 292970 opers/sec, 609378224 bytes/sec
test 9 ( 2080 byte blocks, 2080 bytes per update, 1 updates): 943229 opers/sec, 1961916528 bytes/sec
test 10 ( 4128 byte blocks, 4128 bytes per update, 1 updates): 494623 opers/sec, 2041804569 bytes/sec
test 11 ( 8224 byte blocks, 8224 bytes per update, 1 updates): 254045 opers/sec, 2089271014 bytes/sec
testing speed of poly1305 (poly1305-simd)
test 0 ( 96 byte blocks, 16 bytes per update, 6 updates): 3826224 opers/sec, 367317552 bytes/sec
test 1 ( 96 byte blocks, 32 bytes per update, 3 updates): 5948638 opers/sec, 571069267 bytes/sec
test 2 ( 96 byte blocks, 96 bytes per update, 1 updates): 9439110 opers/sec, 906154627 bytes/sec
test 3 ( 288 byte blocks, 16 bytes per update, 18 updates): 1367756 opers/sec, 393913872 bytes/sec
test 4 ( 288 byte blocks, 32 bytes per update, 9 updates): 2056881 opers/sec, 592381958 bytes/sec
test 5 ( 288 byte blocks, 288 bytes per update, 1 updates): 3711153 opers/sec, 1068812179 bytes/sec
test 6 ( 1056 byte blocks, 32 bytes per update, 33 updates): 574940 opers/sec, 607136745 bytes/sec
test 7 ( 1056 byte blocks, 1056 bytes per update, 1 updates): 1948830 opers/sec, 2057964585 bytes/sec
test 8 ( 2080 byte blocks, 32 bytes per update, 65 updates): 293308 opers/sec, 610082096 bytes/sec
test 9 ( 2080 byte blocks, 2080 bytes per update, 1 updates): 1235224 opers/sec, 2569267792 bytes/sec
test 10 ( 4128 byte blocks, 4128 bytes per update, 1 updates): 684405 opers/sec, 2825226316 bytes/sec
test 11 ( 8224 byte blocks, 8224 bytes per update, 1 updates): 367101 opers/sec, 3019039446 bytes/sec
Benchmark results from a Core i5-4670T.
Signed-off-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-07-16 19:14:08 +02:00
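The unrolled recurrence the patch relies on, h = (h + m) * r applied four times equals (h + m1)*r^4 + m2*r^3 + m3*r^2 + m4*r, can be checked with a short arbitrary-precision sketch. This is plain Python with made-up example values, working directly mod 2^130 - 5 rather than in the kernel's radix-2^26 limb representation:

```python
P = (1 << 130) - 5  # the Poly1305 prime

def poly_sequential(h, blocks, r):
    # Reference recurrence: one block at a time, h = (h + m) * r mod p
    for m in blocks:
        h = (h + m) * r % P
    return h

def poly_4block(h, m1, m2, m3, m4, r):
    # Four-block unrolling: h = (h + m1)*r^4 + m2*r^3 + m3*r^2 + m4*r mod p
    r2 = r * r % P
    r3 = r2 * r % P
    r4 = r3 * r % P
    return ((h + m1) * r4 + m2 * r3 + m3 * r2 + m4 * r) % P

# Arbitrary example values, not a real key or message
h = 0x123456789abcdef
r = 0x0ffffffc0ffffffc0ffffffc0fffffff  # largest value permitted by r clamping
blocks = [10**30 + i for i in range(4)]

assert poly_sequential(h, blocks, r) == poly_4block(h, *blocks, r)
```

The equivalence is what lets the AVX2 code multiply four independent block lanes by the precomputed powers r, r^2, r^3, r^4 and sum the lanes at the end.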
/*
 * Poly1305 authenticator algorithm, RFC 7539, x64 AVX2 functions
 *
 * Copyright (C) 2015 Martin Willi
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 */

#include <linux/linkage.h>
crypto: x86 - make constants readonly, allow linker to merge them
A lot of asm-optimized routines in arch/x86/crypto/ keep their
constants in .data. This is wrong; they should be in .rodata.
Many of these constants are the same in different modules.
For example, 128-bit shuffle mask 0x000102030405060708090A0B0C0D0E0F
exists in at least half a dozen places.
There is a way to let linker merge them and use just one copy.
The rules are as follows: mergeable objects of different sizes
should not share sections. You can't put them all in one .rodata
section; they would lose "mergeability".
GCC puts its mergeable constants in ".rodata.cstSIZE" sections,
or ".rodata.cstSIZE.<object_name>" if -fdata-sections is used.
This patch does the same:
.section .rodata.cst16.SHUF_MASK, "aM", @progbits, 16
It is important that all data in such a section consists of
16-byte elements, not larger ones, and that there is no implicit
use of one element from another.
When this is not the case, use non-mergeable section:
.section .rodata[.VAR_NAME], "a", @progbits
This reduces .data by ~15 kbytes:
text data bss dec hex filename
11097415 2705840 2630712 16433967 fac32f vmlinux-prev.o
11112095 2690672 2630712 16433479 fac147 vmlinux.o
Merged objects are visible in System.map:
ffffffff81a28810 r POLY
ffffffff81a28810 r POLY
ffffffff81a28820 r TWOONE
ffffffff81a28820 r TWOONE
ffffffff81a28830 r PSHUFFLE_BYTE_FLIP_MASK <- merged regardless of
ffffffff81a28830 r SHUF_MASK <------------- the name difference
ffffffff81a28830 r SHUF_MASK
ffffffff81a28830 r SHUF_MASK
..
ffffffff81a28d00 r K512 <- merged three identical 640-byte tables
ffffffff81a28d00 r K512
ffffffff81a28d00 r K512
Use of object names in section name suffixes is not strictly necessary,
but might help if the link stage someday uses garbage collection
to eliminate unused sections (ld --gc-sections).
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: Josh Poimboeuf <jpoimboe@redhat.com>
CC: Xiaodong Liu <xiaodong.liu@intel.com>
CC: Megha Dey <megha.dey@intel.com>
CC: linux-crypto@vger.kernel.org
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-01-19 22:33:04 +01:00
.section .rodata.cst32.ANMASK, "aM", @progbits, 32
.align 32
ANMASK:	.octa 0x0000000003ffffff0000000003ffffff
	.octa 0x0000000003ffffff0000000003ffffff
.section .rodata.cst32.ORMASK, "aM", @progbits, 32
.align 32
ORMASK:	.octa 0x00000000010000000000000001000000
	.octa 0x00000000010000000000000001000000
.text
#define h0 0x00(%rdi)
#define h1 0x04(%rdi)
#define h2 0x08(%rdi)
#define h3 0x0c(%rdi)
#define h4 0x10(%rdi)
#define r0 0x00(%rdx)
#define r1 0x04(%rdx)
#define r2 0x08(%rdx)
#define r3 0x0c(%rdx)
#define r4 0x10(%rdx)
#define u0 0x00(%r8)
#define u1 0x04(%r8)
#define u2 0x08(%r8)
#define u3 0x0c(%r8)
#define u4 0x10(%r8)
#define w0 0x14(%r8)
#define w1 0x18(%r8)
#define w2 0x1c(%r8)
#define w3 0x20(%r8)
#define w4 0x24(%r8)
#define y0 0x28(%r8)
#define y1 0x2c(%r8)
#define y2 0x30(%r8)
#define y3 0x34(%r8)
#define y4 0x38(%r8)
#define m %rsi
#define hc0 %ymm0
#define hc1 %ymm1
#define hc2 %ymm2
#define hc3 %ymm3
#define hc4 %ymm4
#define hc0x %xmm0
#define hc1x %xmm1
#define hc2x %xmm2
#define hc3x %xmm3
#define hc4x %xmm4
#define t1 %ymm5
#define t2 %ymm6
#define t1x %xmm5
#define t2x %xmm6
#define ruwy0 %ymm7
#define ruwy1 %ymm8
#define ruwy2 %ymm9
#define ruwy3 %ymm10
#define ruwy4 %ymm11
#define ruwy0x %xmm7
#define ruwy1x %xmm8
#define ruwy2x %xmm9
#define ruwy3x %xmm10
#define ruwy4x %xmm11
#define svxz1 %ymm12
#define svxz2 %ymm13
#define svxz3 %ymm14
#define svxz4 %ymm15
#define d0 %r9
#define d1 %r10
#define d2 %r11
#define d3 %r12
#define d4 %r13
ENTRY(poly1305_4block_avx2)
	# %rdi: Accumulator h[5]
	# %rsi: 64 byte input block m
	# %rdx: Poly1305 key r[5]
	# %rcx: Quadblock count
	# %r8:  Poly1305 derived key r^2 u[5], r^3 w[5], r^4 y[5]

	# This four-block variant uses loop unrolled block processing. It
	# requires 4 Poly1305 keys: r, r^2, r^3 and r^4:
	# h = (h + m) * r  =>  h = (h + m1) * r^4 + m2 * r^3 + m3 * r^2 + m4 * r
	vzeroupper
	push		%rbx
	push		%r12
	push		%r13

	# combine r0,u0,w0,y0
	vmovd		y0,ruwy0x
	vmovd		w0,t1x
	vpunpcklqdq	t1,ruwy0,ruwy0
	vmovd		u0,t1x
	vmovd		r0,t2x
	vpunpcklqdq	t2,t1,t1
	vperm2i128	$0x20,t1,ruwy0,ruwy0

	# combine r1,u1,w1,y1 and s1=r1*5,v1=u1*5,x1=w1*5,z1=y1*5
	vmovd		y1,ruwy1x
	vmovd		w1,t1x
	vpunpcklqdq	t1,ruwy1,ruwy1
	vmovd		u1,t1x
	vmovd		r1,t2x
	vpunpcklqdq	t2,t1,t1
	vperm2i128	$0x20,t1,ruwy1,ruwy1
	vpslld		$2,ruwy1,svxz1
	vpaddd		ruwy1,svxz1,svxz1

	# combine r2,u2,w2,y2 and s2=r2*5,v2=u2*5,x2=w2*5,z2=y2*5
	vmovd		y2,ruwy2x
	vmovd		w2,t1x
	vpunpcklqdq	t1,ruwy2,ruwy2
	vmovd		u2,t1x
	vmovd		r2,t2x
	vpunpcklqdq	t2,t1,t1
	vperm2i128	$0x20,t1,ruwy2,ruwy2
	vpslld		$2,ruwy2,svxz2
	vpaddd		ruwy2,svxz2,svxz2

	# combine r3,u3,w3,y3 and s3=r3*5,v3=u3*5,x3=w3*5,z3=y3*5
	vmovd		y3,ruwy3x
	vmovd		w3,t1x
	vpunpcklqdq	t1,ruwy3,ruwy3
	vmovd		u3,t1x
	vmovd		r3,t2x
	vpunpcklqdq	t2,t1,t1
	vperm2i128	$0x20,t1,ruwy3,ruwy3
	vpslld		$2,ruwy3,svxz3
	vpaddd		ruwy3,svxz3,svxz3

	# combine r4,u4,w4,y4 and s4=r4*5,v4=u4*5,x4=w4*5,z4=y4*5
	vmovd		y4,ruwy4x
	vmovd		w4,t1x
	vpunpcklqdq	t1,ruwy4,ruwy4
	vmovd		u4,t1x
	vmovd		r4,t2x
	vpunpcklqdq	t2,t1,t1
	vperm2i128	$0x20,t1,ruwy4,ruwy4
	vpslld		$2,ruwy4,svxz4
	vpaddd		ruwy4,svxz4,svxz4
.Ldoblock4:
	# hc0 = [m[48-51] & 0x3ffffff, m[32-35] & 0x3ffffff,
	#	 m[16-19] & 0x3ffffff, m[ 0- 3] & 0x3ffffff + h0]
	vmovd		0x00(m),hc0x
	vmovd		0x10(m),t1x
	vpunpcklqdq	t1,hc0,hc0
	vmovd		0x20(m),t1x
	vmovd		0x30(m),t2x
	vpunpcklqdq	t2,t1,t1
	vperm2i128	$0x20,t1,hc0,hc0
	vpand		ANMASK(%rip),hc0,hc0
	vmovd		h0,t1x
	vpaddd		t1,hc0,hc0
	# hc1 = [(m[51-54] >> 2) & 0x3ffffff, (m[35-38] >> 2) & 0x3ffffff,
	#	 (m[19-22] >> 2) & 0x3ffffff, (m[ 3- 6] >> 2) & 0x3ffffff + h1]
	vmovd		0x03(m),hc1x
	vmovd		0x13(m),t1x
	vpunpcklqdq	t1,hc1,hc1
	vmovd		0x23(m),t1x
	vmovd		0x33(m),t2x
	vpunpcklqdq	t2,t1,t1
	vperm2i128	$0x20,t1,hc1,hc1
	vpsrld		$2,hc1,hc1
	vpand		ANMASK(%rip),hc1,hc1
	vmovd		h1,t1x
	vpaddd		t1,hc1,hc1
	# hc2 = [(m[54-57] >> 4) & 0x3ffffff, (m[38-41] >> 4) & 0x3ffffff,
	#	 (m[22-25] >> 4) & 0x3ffffff, (m[ 6- 9] >> 4) & 0x3ffffff + h2]
	vmovd		0x06(m),hc2x
	vmovd		0x16(m),t1x
	vpunpcklqdq	t1,hc2,hc2
	vmovd		0x26(m),t1x
	vmovd		0x36(m),t2x
	vpunpcklqdq	t2,t1,t1
	vperm2i128	$0x20,t1,hc2,hc2
	vpsrld		$4,hc2,hc2
	vpand		ANMASK(%rip),hc2,hc2
	vmovd		h2,t1x
	vpaddd		t1,hc2,hc2
	# hc3 = [(m[57-60] >> 6) & 0x3ffffff, (m[41-44] >> 6) & 0x3ffffff,
	#	 (m[25-28] >> 6) & 0x3ffffff, (m[ 9-12] >> 6) & 0x3ffffff + h3]
	vmovd		0x09(m),hc3x
	vmovd		0x19(m),t1x
	vpunpcklqdq	t1,hc3,hc3
	vmovd		0x29(m),t1x
	vmovd		0x39(m),t2x
	vpunpcklqdq	t2,t1,t1
	vperm2i128	$0x20,t1,hc3,hc3
	vpsrld		$6,hc3,hc3
	vpand		ANMASK(%rip),hc3,hc3
	vmovd		h3,t1x
	vpaddd		t1,hc3,hc3
	# hc4 = [(m[60-63] >> 8) | (1<<24), (m[44-47] >> 8) | (1<<24),
	#	 (m[28-31] >> 8) | (1<<24), (m[12-15] >> 8) | (1<<24) + h4]
	vmovd		0x0c(m),hc4x
	vmovd		0x1c(m),t1x
	vpunpcklqdq	t1,hc4,hc4
	vmovd		0x2c(m),t1x
	vmovd		0x3c(m),t2x
	vpunpcklqdq	t2,t1,t1
	vperm2i128	$0x20,t1,hc4,hc4
	vpsrld		$8,hc4,hc4
	vpor		ORMASK(%rip),hc4,hc4
	vmovd		h4,t1x
	vpaddd		t1,hc4,hc4
	# t1 = [hc0[3] * r0, hc0[2] * u0, hc0[1] * w0, hc0[0] * y0]
	vpmuludq	hc0,ruwy0,t1
	# t1 += [hc1[3] * s4, hc1[2] * v4, hc1[1] * x4, hc1[0] * z4]
	vpmuludq	hc1,svxz4,t2
	vpaddq		t2,t1,t1
	# t1 += [hc2[3] * s3, hc2[2] * v3, hc2[1] * x3, hc2[0] * z3]
	vpmuludq	hc2,svxz3,t2
	vpaddq		t2,t1,t1
	# t1 += [hc3[3] * s2, hc3[2] * v2, hc3[1] * x2, hc3[0] * z2]
	vpmuludq	hc3,svxz2,t2
	vpaddq		t2,t1,t1
	# t1 += [hc4[3] * s1, hc4[2] * v1, hc4[1] * x1, hc4[0] * z1]
	vpmuludq	hc4,svxz1,t2
	vpaddq		t2,t1,t1
	# d0 = t1[0] + t1[1] + t1[2] + t1[3]
	vpermq		$0xee,t1,t2
	vpaddq		t2,t1,t1
	vpsrldq		$8,t1,t2
	vpaddq		t2,t1,t1
	vmovq		t1x,d0
	# t1 = [hc0[3] * r1, hc0[2] * u1, hc0[1] * w1, hc0[0] * y1]
	vpmuludq	hc0,ruwy1,t1
	# t1 += [hc1[3] * r0, hc1[2] * u0, hc1[1] * w0, hc1[0] * y0]
	vpmuludq	hc1,ruwy0,t2
	vpaddq		t2,t1,t1
	# t1 += [hc2[3] * s4, hc2[2] * v4, hc2[1] * x4, hc2[0] * z4]
	vpmuludq	hc2,svxz4,t2
	vpaddq		t2,t1,t1
	# t1 += [hc3[3] * s3, hc3[2] * v3, hc3[1] * x3, hc3[0] * z3]
	vpmuludq	hc3,svxz3,t2
	vpaddq		t2,t1,t1
	# t1 += [hc4[3] * s2, hc4[2] * v2, hc4[1] * x2, hc4[0] * z2]
	vpmuludq	hc4,svxz2,t2
	vpaddq		t2,t1,t1
	# d1 = t1[0] + t1[1] + t1[2] + t1[3]
	vpermq		$0xee,t1,t2
	vpaddq		t2,t1,t1
	vpsrldq		$8,t1,t2
	vpaddq		t2,t1,t1
	vmovq		t1x,d1
	# t1 = [hc0[3] * r2, hc0[2] * u2, hc0[1] * w2, hc0[0] * y2]
	vpmuludq	hc0,ruwy2,t1
	# t1 += [hc1[3] * r1, hc1[2] * u1, hc1[1] * w1, hc1[0] * y1]
	vpmuludq	hc1,ruwy1,t2
	vpaddq		t2,t1,t1
	# t1 += [hc2[3] * r0, hc2[2] * u0, hc2[1] * w0, hc2[0] * y0]
	vpmuludq	hc2,ruwy0,t2
	vpaddq		t2,t1,t1
	# t1 += [hc3[3] * s4, hc3[2] * v4, hc3[1] * x4, hc3[0] * z4]
	vpmuludq	hc3,svxz4,t2
	vpaddq		t2,t1,t1
	# t1 += [hc4[3] * s3, hc4[2] * v3, hc4[1] * x3, hc4[0] * z3]
	vpmuludq	hc4,svxz3,t2
	vpaddq		t2,t1,t1
	# d2 = t1[0] + t1[1] + t1[2] + t1[3]
	vpermq		$0xee,t1,t2
	vpaddq		t2,t1,t1
	vpsrldq		$8,t1,t2
	vpaddq		t2,t1,t1
	vmovq		t1x,d2
	# t1 = [hc0[3] * r3, hc0[2] * u3, hc0[1] * w3, hc0[0] * y3]
	vpmuludq	hc0,ruwy3,t1
	# t1 += [hc1[3] * r2, hc1[2] * u2, hc1[1] * w2, hc1[0] * y2]
	vpmuludq	hc1,ruwy2,t2
	vpaddq		t2,t1,t1
	# t1 += [hc2[3] * r1, hc2[2] * u1, hc2[1] * w1, hc2[0] * y1]
	vpmuludq	hc2,ruwy1,t2
	vpaddq		t2,t1,t1
	# t1 += [hc3[3] * r0, hc3[2] * u0, hc3[1] * w0, hc3[0] * y0]
	vpmuludq	hc3,ruwy0,t2
	vpaddq		t2,t1,t1
	# t1 += [hc4[3] * s4, hc4[2] * v4, hc4[1] * x4, hc4[0] * z4]
	vpmuludq	hc4,svxz4,t2
	vpaddq		t2,t1,t1
	# d3 = t1[0] + t1[1] + t1[2] + t1[3]
	vpermq		$0xee,t1,t2
	vpaddq		t2,t1,t1
	vpsrldq		$8,t1,t2
	vpaddq		t2,t1,t1
	vmovq		t1x,d3
	# t1 = [hc0[3] * r4, hc0[2] * u4, hc0[1] * w4, hc0[0] * y4]
	vpmuludq	hc0,ruwy4,t1
	# t1 += [hc1[3] * r3, hc1[2] * u3, hc1[1] * w3, hc1[0] * y3]
	vpmuludq	hc1,ruwy3,t2
	vpaddq		t2,t1,t1
	# t1 += [hc2[3] * r2, hc2[2] * u2, hc2[1] * w2, hc2[0] * y2]
	vpmuludq	hc2,ruwy2,t2
	vpaddq		t2,t1,t1
	# t1 += [hc3[3] * r1, hc3[2] * u1, hc3[1] * w1, hc3[0] * y1]
	vpmuludq	hc3,ruwy1,t2
	vpaddq		t2,t1,t1
	# t1 += [hc4[3] * r0, hc4[2] * u0, hc4[1] * w0, hc4[0] * y0]
	vpmuludq	hc4,ruwy0,t2
	vpaddq		t2,t1,t1
	# d4 = t1[0] + t1[1] + t1[2] + t1[3]
	vpermq		$0xee,t1,t2
	vpaddq		t2,t1,t1
	vpsrldq		$8,t1,t2
	vpaddq		t2,t1,t1
	vmovq		t1x,d4
	# d1 += d0 >> 26
	mov		d0,%rax
	shr		$26,%rax
	add		%rax,d1
	# h0 = d0 & 0x3ffffff
	mov		d0,%rbx
	and		$0x3ffffff,%ebx

	# d2 += d1 >> 26
	mov		d1,%rax
	shr		$26,%rax
	add		%rax,d2
	# h1 = d1 & 0x3ffffff
	mov		d1,%rax
	and		$0x3ffffff,%eax
	mov		%eax,h1

	# d3 += d2 >> 26
	mov		d2,%rax
	shr		$26,%rax
	add		%rax,d3
	# h2 = d2 & 0x3ffffff
	mov		d2,%rax
	and		$0x3ffffff,%eax
	mov		%eax,h2

	# d4 += d3 >> 26
	mov		d3,%rax
	shr		$26,%rax
	add		%rax,d4
	# h3 = d3 & 0x3ffffff
	mov		d3,%rax
	and		$0x3ffffff,%eax
	mov		%eax,h3

	# h0 += (d4 >> 26) * 5
	mov		d4,%rax
	shr		$26,%rax
	lea		(%eax,%eax,4),%eax
	add		%eax,%ebx
	# h4 = d4 & 0x3ffffff
	mov		d4,%rax
	and		$0x3ffffff,%eax
	mov		%eax,h4

	# h1 += h0 >> 26
	mov		%ebx,%eax
	shr		$26,%eax
	add		%eax,h1
	# h0 = h0 & 0x3ffffff
	andl		$0x3ffffff,%ebx
	mov		%ebx,h0

	add		$0x40,m
	dec		%rcx
	jnz		.Ldoblock4

	vzeroupper
	pop		%r13
	pop		%r12
	pop		%rbx
	ret
ENDPROC(poly1305_4block_avx2)