License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
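In practice the identifier is a single comment line at the top of each file.
As an illustration only of the two comment styles involved (source files and
headers are tagged differently; see the note on comment types further below):
  // SPDX-License-Identifier: GPL-2.0      (C source .c files)
  /* SPDX-License-Identifier: GPL-2.0 */   (header and uapi files)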
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results of the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners could not find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is partly based on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
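The tagging script itself is not included here; purely as an illustration of
the comment-type decision it has to make (all names below are hypothetical,
not the actual tool), the core of such a step could be sketched in C as:

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical sketch: pick the SPDX line format by file type. */
  static const char *spdx_line(const char *path)
  {
      size_t len = strlen(path);

      /* .c files get a C++-style comment, headers get a C-style one. */
      if (len > 2 && !strcmp(path + len - 2, ".c"))
          return "// SPDX-License-Identifier: GPL-2.0\n";
      return "/* SPDX-License-Identifier: GPL-2.0 */\n";
  }

  int main(int argc, char **argv)
  {
      int i;

      for (i = 1; i < argc; i++)
          printf("%s: %s", argv[i], spdx_line(argv[i]));
      return 0;
  }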
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 15:07:57 +01:00
// SPDX-License-Identifier: GPL-2.0
2012-07-31 16:23:59 +02:00
/*
 * BPF Jit compiler for s390.
 *
2015-04-01 16:08:32 +02:00
 * Minimum build requirements:
 *
 *  - HAVE_MARCH_Z196_FEATURES: laal, laalg
 *  - HAVE_MARCH_Z10_FEATURES: msfi, cgrj, clgrj
 *  - HAVE_MARCH_Z9_109_FEATURES: alfi, llilf, clfi, oilf, nilf
 *  - 64BIT
 *
 * Copyright IBM Corp. 2012,2015
2012-07-31 16:23:59 +02:00
 *
 * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
2015-04-01 16:08:32 +02:00
 *            Michael Holzheu <holzheu@linux.vnet.ibm.com>
2012-07-31 16:23:59 +02:00
 */
2015-04-01 16:08:32 +02:00
#define KMSG_COMPONENT "bpf_jit"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
2012-07-31 16:23:59 +02:00
#include <linux/netdevice.h>
#include <linux/filter.h>
2013-07-17 14:26:50 +02:00
#include <linux/init.h>
s390/bpf: implement bpf_tail_call() helper
bpf_tail_call() arguments:
- ctx......: Context pointer
- jmp_table: One of BPF_MAP_TYPE_PROG_ARRAY maps used as the jump table
- index....: Index in the jump table
In this implementation s390x JIT does stack unwinding and jumps into the
callee program prologue. Caller and callee use the same stack.
With this patch a tail call generates the following code on s390x:
if (index >= array->map.max_entries)
goto out
000003ff8001c7e4: e31030100016 llgf %r1,16(%r3)
000003ff8001c7ea: ec41001fa065 clgrj %r4,%r1,10,3ff8001c828
if (tail_call_cnt++ > MAX_TAIL_CALL_CNT)
goto out;
000003ff8001c7f0: a7080001 lhi %r0,1
000003ff8001c7f4: eb10f25000fa laal %r1,%r0,592(%r15)
000003ff8001c7fa: ec120017207f clij %r1,32,2,3ff8001c828
prog = array->prog[index];
if (prog == NULL)
goto out;
000003ff8001c800: eb140003000d sllg %r1,%r4,3
000003ff8001c806: e31310800004 lg %r1,128(%r3,%r1)
000003ff8001c80c: ec18000e007d clgij %r1,0,8,3ff8001c828
Restore registers before calling function
000003ff8001c812: eb68f2980004 lmg %r6,%r8,664(%r15)
000003ff8001c818: ebbff2c00004 lmg %r11,%r15,704(%r15)
goto *(prog->bpf_func + tail_call_start);
000003ff8001c81e: e31100200004 lg %r1,32(%r1,%r0)
000003ff8001c824: 47f01006 bc 15,6(%r1)
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-08 21:51:06 -07:00
#include <linux/bpf.h>
2019-11-07 15:18:38 +01:00
#include <linux/mm.h>
2019-11-18 19:03:36 +01:00
#include <linux/kernel.h>
2012-07-31 16:23:59 +02:00
#include <asm/cacheflush.h>
2022-02-28 11:22:12 +01:00
#include <asm/extable.h>
2013-09-13 13:36:25 +02:00
#include <asm/dis.h>
2018-04-23 14:31:36 +02:00
#include <asm/facility.h>
#include <asm/nospec-branch.h>
2017-05-08 15:58:08 -07:00
#include <asm/set_memory.h>
2015-04-01 16:08:32 +02:00
#include "bpf_jit.h"
2012-07-31 16:23:59 +02:00
2015-04-01 16:08:32 +02:00
struct bpf_jit {
    u32 seen;           /* Flags to remember seen eBPF instructions */
    u32 seen_reg[16];   /* Array to remember which registers are used */
    u32 *addrs;         /* Array with relative instruction addresses */
    u8 *prg_buf;        /* Start of program */
    int size;           /* Size of program and literal pool */
    int size_prg;       /* Size of program */
    int prg;            /* Current position in program */
2019-11-18 19:03:36 +01:00
    int lit32_start;    /* Start of 32-bit literal pool */
    int lit32;          /* Current position in 32-bit literal pool */
    int lit64_start;    /* Start of 64-bit literal pool */
    int lit64;          /* Current position in 64-bit literal pool */
2015-04-01 16:08:32 +02:00
    int base_ip;        /* Base address for literal pool */
    int exit_ip;        /* Address of exit */
2018-04-23 14:31:36 +02:00
    int r1_thunk_ip;    /* Address of expoline thunk for 'br %r1' */
    int r14_thunk_ip;   /* Address of expoline thunk for 'br %r14' */
2015-06-08 21:51:06 -07:00
    int tail_call_start;    /* Tail call start offset */
2020-06-24 14:55:22 +02:00
    int excnt;              /* Number of exception table entries */
2015-04-01 16:08:32 +02:00
};
2019-11-07 12:40:33 +01:00
#define SEEN_MEM        BIT(0)  /* use mem[] for temporary storage */
#define SEEN_LITERAL    BIT(1)  /* code uses literals */
#define SEEN_FUNC       BIT(2)  /* calls C functions */
#define SEEN_TAIL_CALL  BIT(3)  /* code uses tail calls */
2018-05-04 01:08:22 +02:00
#define SEEN_STACK      (SEEN_FUNC | SEEN_MEM)
2015-04-01 16:08:32 +02:00
2012-07-31 16:23:59 +02:00
/*
2015-04-01 16:08:32 +02:00
 * s390 registers
2012-07-31 16:23:59 +02:00
 */
2016-05-13 19:08:35 +02:00
#define REG_W0      (MAX_BPF_JIT_REG + 0)   /* Work register 1 (even) */
#define REG_W1      (MAX_BPF_JIT_REG + 1)   /* Work register 2 (odd) */
2018-05-04 01:08:22 +02:00
#define REG_L       (MAX_BPF_JIT_REG + 2)   /* Literal pool register */
#define REG_15      (MAX_BPF_JIT_REG + 3)   /* Register 15 */
2015-04-01 16:08:32 +02:00
#define REG_0       REG_W0                  /* Register 0 */
2015-06-08 21:51:06 -07:00
#define REG_1       REG_W1                  /* Register 1 */
2015-04-01 16:08:32 +02:00
#define REG_2       BPF_REG_1               /* Register 2 */
#define REG_14      BPF_REG_0               /* Register 14 */
2012-07-31 16:23:59 +02:00
2015-04-01 16:08:32 +02:00
/*
 * Mapping of BPF registers to s390 registers
 */
static const int reg2hex[] = {
    /* Return code */
    [BPF_REG_0]  = 14,
    /* Function parameters */
    [BPF_REG_1]  = 2,
    [BPF_REG_2]  = 3,
    [BPF_REG_3]  = 4,
    [BPF_REG_4]  = 5,
    [BPF_REG_5]  = 6,
    /* Call saved registers */
    [BPF_REG_6]  = 7,
    [BPF_REG_7]  = 8,
    [BPF_REG_8]  = 9,
    [BPF_REG_9]  = 10,
    /* BPF stack pointer */
    [BPF_REG_FP] = 13,
2018-05-04 01:08:22 +02:00
    /* Register for blinding */
2016-05-13 19:08:35 +02:00
    [BPF_REG_AX] = 12,
2015-04-01 16:08:32 +02:00
    /* Work registers for s390x backend */
    [REG_W0]     = 0,
    [REG_W1]     = 1,
    [REG_L]      = 11,
    [REG_15]     = 15,
2012-07-31 16:23:59 +02:00
};
2015-04-01 16:08:32 +02:00
static inline u32 reg(u32 dst_reg, u32 src_reg)
{
    return reg2hex[dst_reg] << 4 | reg2hex[src_reg];
}

static inline u32 reg_high(u32 reg)
{
    return reg2hex[reg] << 4;
}

static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)
{
    u32 r1 = reg2hex[b1];

2021-07-15 13:57:12 +01:00
    if (r1 >= 6 && r1 <= 15 && !jit->seen_reg[r1])
2015-04-01 16:08:32 +02:00
        jit->seen_reg[r1] = 1;
}

#define REG_SET_SEEN(b1)        \
({                              \
    reg_set_seen(jit, b1);      \
})

#define REG_SEEN(b1) jit->seen_reg[reg2hex[(b1)]]
/*
 * EMIT macros for code generation
 */

#define _EMIT2(op)                                          \
({                                                          \
    if (jit->prg_buf)                                       \
2019-11-07 12:32:11 +01:00
        *(u16 *)(jit->prg_buf + jit->prg) = (op);           \
2015-04-01 16:08:32 +02:00
    jit->prg += 2;                                          \
})
2012-07-31 16:23:59 +02:00
2015-04-01 16:08:32 +02:00
#define EMIT2(op, b1, b2)                                   \
({                                                          \
2019-11-07 12:32:11 +01:00
    _EMIT2((op) | reg(b1, b2));                             \
2015-04-01 16:08:32 +02:00
    REG_SET_SEEN(b1);                                       \
    REG_SET_SEEN(b2);                                       \
2012-07-31 16:23:59 +02:00
})
2015-04-01 16:08:32 +02:00
#define _EMIT4(op)                                          \
({                                                          \
    if (jit->prg_buf)                                       \
2019-11-07 12:32:11 +01:00
        *(u32 *)(jit->prg_buf + jit->prg) = (op);           \
2015-04-01 16:08:32 +02:00
    jit->prg += 4;                                          \
2012-07-31 16:23:59 +02:00
})
2015-04-01 16:08:32 +02:00
#define EMIT4(op, b1, b2)                                   \
({                                                          \
2019-11-07 12:32:11 +01:00
    _EMIT4((op) | reg(b1, b2));                             \
2015-04-01 16:08:32 +02:00
    REG_SET_SEEN(b1);                                       \
    REG_SET_SEEN(b2);                                       \
2012-07-31 16:23:59 +02:00
})
2015-04-01 16:08:32 +02:00
#define EMIT4_RRF(op, b1, b2, b3)                           \
({                                                          \
2019-11-07 12:32:11 +01:00
    _EMIT4((op) | reg_high(b3) << 8 | reg(b1, b2));         \
2015-04-01 16:08:32 +02:00
    REG_SET_SEEN(b1);                                       \
    REG_SET_SEEN(b2);                                       \
    REG_SET_SEEN(b3);                                       \
2012-07-31 16:23:59 +02:00
})
2015-04-01 16:08:32 +02:00
#define _EMIT4_DISP(op, disp)                               \
({                                                          \
    unsigned int __disp = (disp) & 0xfff;                   \
2019-11-07 12:32:11 +01:00
    _EMIT4((op) | __disp);                                  \
2012-07-31 16:23:59 +02:00
})
2015-04-01 16:08:32 +02:00
#define EMIT4_DISP(op, b1, b2, disp)                        \
({                                                          \
2019-11-07 12:32:11 +01:00
    _EMIT4_DISP((op) | reg_high(b1) << 16 |                 \
                reg_high(b2) << 8, (disp));                 \
2015-04-01 16:08:32 +02:00
    REG_SET_SEEN(b1);                                       \
    REG_SET_SEEN(b2);                                       \
2012-07-31 16:23:59 +02:00
})
2015-04-01 16:08:32 +02:00
#define EMIT4_IMM(op, b1, imm)                              \
({                                                          \
    unsigned int __imm = (imm) & 0xffff;                    \
2019-11-07 12:32:11 +01:00
    _EMIT4((op) | reg_high(b1) << 16 | __imm);              \
2015-04-01 16:08:32 +02:00
    REG_SET_SEEN(b1);                                       \
2012-07-31 16:23:59 +02:00
})
2015-04-01 16:08:32 +02:00
#define EMIT4_PCREL(op, pcrel)                              \
({                                                          \
    long __pcrel = ((pcrel) >> 1) & 0xffff;                 \
2019-11-07 12:32:11 +01:00
    _EMIT4((op) | __pcrel);                                 \
2012-08-28 15:36:14 +02:00
})
2019-11-18 19:03:35 +01:00
#define EMIT4_PCREL_RIC(op, mask, target)                   \
({                                                          \
    int __rel = ((target) - jit->prg) / 2;                  \
    _EMIT4((op) | (mask) << 20 | (__rel & 0xffff));         \
})
2015-04-01 16:08:32 +02:00
#define _EMIT6(op1, op2)                                    \
({                                                          \
    if (jit->prg_buf) {                                     \
2019-11-07 12:32:11 +01:00
        *(u32 *)(jit->prg_buf + jit->prg) = (op1);          \
        *(u16 *)(jit->prg_buf + jit->prg + 4) = (op2);      \
2015-04-01 16:08:32 +02:00
    }                                                       \
    jit->prg += 6;                                          \
2012-07-31 16:23:59 +02:00
})
2015-04-01 16:08:32 +02:00
#define _EMIT6_DISP(op1, op2, disp)                         \
({                                                          \
    unsigned int __disp = (disp) & 0xfff;                   \
2019-11-07 12:32:11 +01:00
    _EMIT6((op1) | __disp, op2);                            \
2012-07-31 16:23:59 +02:00
})
2015-04-01 16:08:32 +02:00
#define _EMIT6_DISP_LH(op1, op2, disp)                      \
({                                                          \
2019-11-07 12:32:11 +01:00
    u32 _disp = (u32)(disp);                                \
2015-07-29 21:15:15 +02:00
    unsigned int __disp_h = _disp & 0xff000;                \
    unsigned int __disp_l = _disp & 0x00fff;                \
2019-11-07 12:32:11 +01:00
    _EMIT6((op1) | __disp_l, (op2) | __disp_h >> 4);        \
2015-04-01 16:08:32 +02:00
})
#define EMIT6_DISP_LH(op1, op2, b1, b2, b3, disp)           \
({                                                          \
2019-11-07 12:32:11 +01:00
    _EMIT6_DISP_LH((op1) | reg(b1, b2) << 16 |              \
2015-04-01 16:08:32 +02:00
                   reg_high(b3) << 8, op2, disp);           \
    REG_SET_SEEN(b1);                                       \
    REG_SET_SEEN(b2);                                       \
    REG_SET_SEEN(b3);                                       \
})
2020-09-10 01:21:41 +02:00
#define EMIT6_PCREL_RIEB(op1, op2, b1, b2, mask, target)    \
2015-06-08 21:51:06 -07:00
({                                                          \
2020-09-10 01:21:41 +02:00
    unsigned int rel = (int)((target) - jit->prg) / 2;      \
2019-11-07 12:32:11 +01:00
    _EMIT6((op1) | reg(b1, b2) << 16 | (rel & 0xffff),      \
           (op2) | (mask) << 12);                           \
2015-06-08 21:51:06 -07:00
    REG_SET_SEEN(b1);                                       \
    REG_SET_SEEN(b2);                                       \
})
2020-09-10 01:21:41 +02:00
#define EMIT6_PCREL_RIEC(op1, op2, b1, imm, mask, target)   \
2015-06-08 21:51:06 -07:00
({                                                          \
2020-09-10 01:21:41 +02:00
    unsigned int rel = (int)((target) - jit->prg) / 2;      \
2019-11-07 12:32:11 +01:00
    _EMIT6((op1) | (reg_high(b1) | (mask)) << 16 |          \
           (rel & 0xffff), (op2) | ((imm) & 0xff) << 8);    \
2015-06-08 21:51:06 -07:00
    REG_SET_SEEN(b1);                                       \
2019-11-07 12:32:11 +01:00
    BUILD_BUG_ON(((unsigned long)(imm)) > 0xff);            \
2015-06-08 21:51:06 -07:00
})
2015-04-01 16:08:32 +02:00
#define EMIT6_PCREL(op1, op2, b1, b2, i, off, mask)         \
({                                                          \
2021-09-07 11:58:59 +02:00
    int rel = (addrs[(i) + (off) + 1] - jit->prg) / 2;      \
2019-11-07 12:32:11 +01:00
    _EMIT6((op1) | reg(b1, b2) << 16 | (rel & 0xffff), (op2) | (mask));\
2015-04-01 16:08:32 +02:00
    REG_SET_SEEN(b1);                                       \
    REG_SET_SEEN(b2);                                       \
})
2018-04-23 14:31:36 +02:00
#define EMIT6_PCREL_RILB(op, b, target)                     \
({                                                          \
2019-11-18 19:03:35 +01:00
    unsigned int rel = (int)((target) - jit->prg) / 2;      \
2019-11-07 12:32:11 +01:00
    _EMIT6((op) | reg_high(b) << 16 | rel >> 16, rel & 0xffff);\
2018-04-23 14:31:36 +02:00
    REG_SET_SEEN(b);                                        \
})
#define EMIT6_PCREL_RIL(op, target)                         \
({                                                          \
2019-11-18 19:03:35 +01:00
    unsigned int rel = (int)((target) - jit->prg) / 2;      \
2019-11-07 12:32:11 +01:00
    _EMIT6((op) | rel >> 16, rel & 0xffff);                 \
2018-04-23 14:31:36 +02:00
})
2019-11-18 19:03:35 +01:00
#define EMIT6_PCREL_RILC(op, mask, target)                  \
({                                                          \
    EMIT6_PCREL_RIL((op) | (mask) << 20, (target));         \
})
2015-04-01 16:08:32 +02:00
#define _EMIT6_IMM(op, imm)                                 \
({                                                          \
    unsigned int __imm = (imm);                             \
2019-11-07 12:32:11 +01:00
    _EMIT6((op) | (__imm >> 16), __imm & 0xffff);           \
2015-04-01 16:08:32 +02:00
})
#define EMIT6_IMM(op, b1, imm)                              \
({                                                          \
2019-11-07 12:32:11 +01:00
    _EMIT6_IMM((op) | reg_high(b1) << 16, imm);             \
2015-04-01 16:08:32 +02:00
    REG_SET_SEEN(b1);                                       \
})
2019-11-18 19:03:38 +01:00
#define _EMIT_CONST_U32(val)                                \
2015-04-01 16:08:32 +02:00
({                                                          \
    unsigned int ret;                                       \
2019-11-18 19:03:38 +01:00
    ret = jit->lit32;                                       \
2015-04-01 16:08:32 +02:00
    if (jit->prg_buf)                                       \
2019-11-18 19:03:36 +01:00
        *(u32 *)(jit->prg_buf + jit->lit32) = (u32)(val);   \
    jit->lit32 += 4;                                        \
2015-04-01 16:08:32 +02:00
    ret;                                                    \
})
2019-11-18 19:03:38 +01:00
#define EMIT_CONST_U32(val)                                 \
2015-04-01 16:08:32 +02:00
({                                                          \
    jit->seen |= SEEN_LITERAL;                              \
2019-11-18 19:03:38 +01:00
    _EMIT_CONST_U32(val) - jit->base_ip;                    \
})
#define _EMIT_CONST_U64(val)                                \
({                                                          \
    unsigned int ret;                                       \
    ret = jit->lit64;                                       \
2015-04-01 16:08:32 +02:00
    if (jit->prg_buf)                                       \
2019-11-18 19:03:36 +01:00
        *(u64 *)(jit->prg_buf + jit->lit64) = (u64)(val);   \
    jit->lit64 += 8;                                        \
2015-04-01 16:08:32 +02:00
    ret;                                                    \
})
2019-11-18 19:03:38 +01:00
#define EMIT_CONST_U64(val)                                 \
({                                                          \
    jit->seen |= SEEN_LITERAL;                              \
    _EMIT_CONST_U64(val) - jit->base_ip;                    \
})
2015-04-01 16:08:32 +02:00
#define EMIT_ZERO(b1)                                       \
({                                                          \
2019-05-24 23:25:24 +01:00
    if (!fp->aux->verifier_zext) {                          \
        /* llgfr %dst,%dst (zero extend to 64 bit) */       \
        EMIT4(0xb9160000, b1, b1);                          \
        REG_SET_SEEN(b1);                                   \
    }                                                       \
2015-04-01 16:08:32 +02:00
})
2019-11-14 16:18:20 +01:00
/*
 * Return whether this is the first pass. The first pass is special, since we
 * don't know any sizes yet, and thus must be conservative.
 */
static bool is_first_pass(struct bpf_jit *jit)
{
    return jit->size == 0;
}

/*
 * Return whether this is the code generation pass. The code generation pass is
 * special, since we should change as little as possible.
 */
static bool is_codegen_pass(struct bpf_jit *jit)
{
    return jit->prg_buf;
}
2019-11-18 19:03:35 +01:00
/*
 * Return whether "rel" can be encoded as a short PC-relative offset
 */
static bool is_valid_rel(int rel)
{
    return rel >= -65536 && rel <= 65534;
}

/*
 * Return whether "off" can be reached using a short PC-relative offset
 */
static bool can_use_rel(struct bpf_jit *jit, int off)
{
    return is_valid_rel(off - jit->prg);
}
2019-11-18 19:03:37 +01:00
/*
 * Return whether the given displacement can be encoded using the
 * Long-Displacement Facility
 */
static bool is_valid_ldisp(int disp)
{
    return disp >= -524288 && disp <= 524287;
}
2019-11-18 19:03:39 +01:00
/*
 * Return whether the next 32-bit literal pool entry can be referenced using
 * the Long-Displacement Facility
 */
static bool can_use_ldisp_for_lit32(struct bpf_jit *jit)
{
    return is_valid_ldisp(jit->lit32 - jit->base_ip);
}

/*
 * Return whether the next 64-bit literal pool entry can be referenced using
 * the Long-Displacement Facility
 */
static bool can_use_ldisp_for_lit64(struct bpf_jit *jit)
{
    return is_valid_ldisp(jit->lit64 - jit->base_ip);
}
2015-04-01 16:08:32 +02:00
/*
 * Fill whole space with illegal instructions
 */
static void jit_fill_hole(void *area, unsigned int size)
2014-09-08 08:04:47 +02:00
{
    memset(area, 0, size);
}
2015-04-01 16:08:32 +02:00
/*
 * Save registers from "rs" (register start) to "re" (register end) on stack
 */
static void save_regs(struct bpf_jit *jit, u32 rs, u32 re)
{
2015-06-08 21:51:06 -07:00
    u32 off = STK_OFF_R6 + (rs - 6) * 8;
2015-04-01 16:08:32 +02:00
    if (rs == re)
        /* stg %rs,off(%r15) */
        _EMIT6(0xe300f000 | rs << 20 | off, 0x0024);
    else
        /* stmg %rs,%re,off(%r15) */
        _EMIT6_DISP(0xeb00f000 | rs << 20 | re << 16, 0x0024, off);
}
/*
 * Restore registers from "rs" (register start) to "re" (register end) on stack
 */
2017-11-07 19:16:25 +01:00
static void restore_regs(struct bpf_jit *jit, u32 rs, u32 re, u32 stack_depth)
2012-07-31 16:23:59 +02:00
{
2015-06-08 21:51:06 -07:00
    u32 off = STK_OFF_R6 + (rs - 6) * 8;
2015-04-01 16:08:32 +02:00
    if (jit->seen & SEEN_STACK)
2017-11-07 19:16:25 +01:00
        off += STK_OFF + stack_depth;
2015-04-01 16:08:32 +02:00
    if (rs == re)
        /* lg %rs,off(%r15) */
        _EMIT6(0xe300f000 | rs << 20 | off, 0x0004);
    else
        /* lmg %rs,%re,off(%r15) */
        _EMIT6_DISP(0xeb00f000 | rs << 20 | re << 16, 0x0004, off);
}
2012-07-31 16:23:59 +02:00
2015-04-01 16:08:32 +02:00
/*
 * Return first seen register (from start)
 */
static int get_start(struct bpf_jit *jit, int start)
{
    int i;

    for (i = start; i <= 15; i++) {
        if (jit->seen_reg[i])
            return i;
    }
    return 0;
}

/*
 * Return last seen register (from start) (gap >= 2)
 */
static int get_end(struct bpf_jit *jit, int start)
{
    int i;

    for (i = start; i < 15; i++) {
        if (!jit->seen_reg[i] && !jit->seen_reg[i + 1])
            return i - 1;
    }
    return jit->seen_reg[15] ? 15 : 14;
}
#define REGS_SAVE    1
#define REGS_RESTORE 0
/*
 * Save and restore clobbered registers (6-15) on stack.
 * We save/restore registers in chunks with gap >= 2 registers.
 */
2017-11-07 19:16:25 +01:00
static void save_restore_regs(struct bpf_jit *jit, int op, u32 stack_depth)
2015-04-01 16:08:32 +02:00
{
2019-11-14 16:18:20 +01:00
    const int last = 15, save_restore_size = 6;
2015-04-01 16:08:32 +02:00
    int re = 6, rs;

2019-11-14 16:18:20 +01:00
    if (is_first_pass(jit)) {
        /*
         * We don't know yet which registers are used. Reserve space
         * conservatively.
         */
        jit->prg += (last - re + 1) * save_restore_size;
        return;
    }
2015-04-01 16:08:32 +02:00
    do {
        rs = get_start(jit, re);
        if (!rs)
            break;
        re = get_end(jit, rs + 1);
        if (op == REGS_SAVE)
            save_regs(jit, rs, re);
        else
2017-11-07 19:16:25 +01:00
            restore_regs(jit, rs, re, stack_depth);
2015-04-01 16:08:32 +02:00
        re++;
2019-11-14 16:18:20 +01:00
    } while (re <= last);
2015-04-01 16:08:32 +02:00
}
2020-07-17 18:53:25 +02:00
static void bpf_skip(struct bpf_jit *jit, int size)
{
    if (size >= 6 && !is_valid_rel(size)) {
        /* brcl 0xf,size */
        EMIT6_PCREL_RIL(0xc0f4000000, size);
        size -= 6;
    } else if (size >= 4 && is_valid_rel(size)) {
        /* brc 0xf,size */
        EMIT4_PCREL(0xa7f40000, size);
        size -= 4;
    }
    while (size >= 2) {
        /* bcr 0,%0 */
        _EMIT2(0x0700);
        size -= 2;
    }
}
2015-04-01 16:08:32 +02:00
/*
 * Emit function prologue
 *
 * Save registers and create stack frame if necessary.
2023-01-28 01:06:45 +01:00
 * See stack frame layout description in "bpf_jit.h"!
2015-04-01 16:08:32 +02:00
 */
2017-11-07 19:16:25 +01:00
static void bpf_jit_prologue(struct bpf_jit *jit, u32 stack_depth)
2015-04-01 16:08:32 +02:00
{
2015-06-08 21:51:06 -07:00
    if (jit->seen & SEEN_TAIL_CALL) {
        /* xc STK_OFF_TCCNT(4,%r15),STK_OFF_TCCNT(%r15) */
        _EMIT6(0xd703f000 | STK_OFF_TCCNT, 0xf000 | STK_OFF_TCCNT);
    } else {
2020-07-17 18:53:26 +02:00
        /*
         * There are no tail calls. Insert nops in order to have
         * tail_call_start at a predictable offset.
         */
        bpf_skip(jit, 6);
2015-06-08 21:51:06 -07:00
    }
    /* Tail calls have to skip above initialization */
    jit->tail_call_start = jit->prg;
2015-04-01 16:08:32 +02:00
    /* Save registers */
2017-11-07 19:16:25 +01:00
    save_restore_regs(jit, REGS_SAVE, stack_depth);
2012-07-31 16:23:59 +02:00
    /* Setup literal pool */
2019-11-14 16:18:20 +01:00
    if (is_first_pass(jit) || (jit->seen & SEEN_LITERAL)) {
2019-11-18 19:03:37 +01:00
        if (!is_first_pass(jit) &&
            is_valid_ldisp(jit->size - (jit->prg + 2))) {
            /* basr %l,0 */
            EMIT2(0x0d00, REG_L, REG_0);
            jit->base_ip = jit->prg;
        } else {
            /* larl %l,lit32_start */
            EMIT6_PCREL_RILB(0xc0000000, REG_L, jit->lit32_start);
            jit->base_ip = jit->lit32_start;
        }
2012-07-31 16:23:59 +02:00
    }
2015-04-01 16:08:32 +02:00
    /* Setup stack and backchain */
2019-11-14 16:18:20 +01:00
    if (is_first_pass(jit) || (jit->seen & SEEN_STACK)) {
        if (is_first_pass(jit) || (jit->seen & SEEN_FUNC))
2015-06-01 22:48:35 -07:00
            /* lgr %w1,%r15 (backchain) */
            EMIT4(0xb9040000, REG_W1, REG_15);
        /* la %bfp,STK_160_UNUSED(%r15) (BPF frame pointer) */
        EMIT4_DISP(0x41000000, BPF_REG_FP, REG_15, STK_160_UNUSED);
2015-04-01 16:08:32 +02:00
        /* aghi %r15,-STK_OFF */
2017-11-07 19:16:25 +01:00
        EMIT4_IMM(0xa70b0000, REG_15, -(STK_OFF + stack_depth));
2019-11-14 16:18:20 +01:00
        if (is_first_pass(jit) || (jit->seen & SEEN_FUNC))
2015-06-01 22:48:35 -07:00
            /* stg %w1,152(%r15) (backchain) */
            EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0,
2015-04-01 16:08:32 +02:00
                          REG_15, 152);
    }
2012-07-31 16:23:59 +02:00
}
2015-04-01 16:08:32 +02:00
/*
 * Function epilogue
 */
2017-11-07 19:16:25 +01:00
static void bpf_jit_epilogue(struct bpf_jit *jit, u32 stack_depth)
2012-07-31 16:23:59 +02:00
{
    jit->exit_ip = jit->prg;
2015-04-01 16:08:32 +02:00
    /* Load exit code: lgr %r2,%b0 */
    EMIT4(0xb9040000, REG_2, BPF_REG_0);
2012-07-31 16:23:59 +02:00
    /* Restore registers */
2017-11-07 19:16:25 +01:00
    save_restore_regs(jit, REGS_RESTORE, stack_depth);
2021-10-04 08:51:06 +02:00
    if (nospec_uses_trampoline()) {
2018-04-23 14:31:36 +02:00
        jit->r14_thunk_ip = jit->prg;
        /* Generate __s390_indirect_jump_r14 thunk */
2022-02-24 22:43:31 +01:00
        /* exrl %r0,.+10 */
        EMIT6_PCREL_RIL(0xc6000000, jit->prg + 10);
2018-04-23 14:31:36 +02:00
        /* j . */
        EMIT4_PCREL(0xa7f40000, 0);
    }
2012-07-31 16:23:59 +02:00
    /* br %r14 */
2015-04-01 16:08:32 +02:00
    _EMIT2(0x07fe);
2018-04-23 14:31:36 +02:00
2021-10-04 08:51:06 +02:00
    if ((nospec_uses_trampoline()) &&
2019-11-14 16:18:20 +01:00
        (is_first_pass(jit) || (jit->seen & SEEN_FUNC))) {
2018-04-23 14:31:36 +02:00
        jit->r1_thunk_ip = jit->prg;
        /* Generate __s390_indirect_jump_r1 thunk */
2022-02-24 22:43:31 +01:00
        /* exrl %r0,.+10 */
        EMIT6_PCREL_RIL(0xc6000000, jit->prg + 10);
        /* j . */
        EMIT4_PCREL(0xa7f40000, 0);
        /* br %r1 */
        _EMIT2(0x07f1);
2018-04-23 14:31:36 +02:00
    }
2012-07-31 16:23:59 +02:00
}
2020-06-24 14:55:22 +02:00
static int get_probe_mem_regno(const u8 *insn)
{
    /*
     * insn must point to llgc, llgh, llgf or lg, which have destination
     * register at the same position.
     */
    if (insn[0] != 0xe3) /* common llgc, llgh, llgf and lg prefix */
        return -1;
    if (insn[5] != 0x90 && /* llgc */
        insn[5] != 0x91 && /* llgh */
        insn[5] != 0x16 && /* llgf */
        insn[5] != 0x04) /* lg */
        return -1;
    return insn[1] >> 4;
}
2022-02-28 14:52:42 +01:00
bool ex_handler_bpf(const struct exception_table_entry *x, struct pt_regs *regs)
2020-06-24 14:55:22 +02:00
{
    regs->psw.addr = extable_fixup(x);
2022-02-27 21:32:54 +01:00
    regs->gprs[x->data] = 0;
2020-06-24 14:55:22 +02:00
    return true;
}
static int bpf_jit_probe_mem(struct bpf_jit *jit, struct bpf_prog *fp,
                             int probe_prg, int nop_prg)
{
    struct exception_table_entry *ex;
2022-02-27 21:32:54 +01:00
    int reg, prg;
2020-06-24 14:55:22 +02:00
    s64 delta;
    u8 *insn;
    int i;

    if (!fp->aux->extable)
        /* Do nothing during early JIT passes. */
        return 0;
    insn = jit->prg_buf + probe_prg;
2022-02-27 21:32:54 +01:00
    reg = get_probe_mem_regno(insn);
    if (WARN_ON_ONCE(reg < 0))
2020-06-24 14:55:22 +02:00
        /* JIT bug - unexpected probe instruction. */
        return -1;
    if (WARN_ON_ONCE(probe_prg + insn_length(*insn) != nop_prg))
        /* JIT bug - gap between probe and nop instructions. */
        return -1;
    for (i = 0; i < 2; i++) {
        if (WARN_ON_ONCE(jit->excnt >= fp->aux->num_exentries))
            /* Verifier bug - not enough entries. */
            return -1;
        ex = &fp->aux->extable[jit->excnt];
        /* Add extable entries for probe and nop instructions. */
        prg = i == 0 ? probe_prg : nop_prg;
        delta = jit->prg_buf + prg - (u8 *)&ex->insn;
        if (WARN_ON_ONCE(delta < INT_MIN || delta > INT_MAX))
            /* JIT bug - code and extable must be close. */
            return -1;
        ex->insn = delta;
        /*
         * Always land on the nop. Note that the extable infrastructure
         * ignores the fixup field; it is handled by ex_handler_bpf().
         */
        delta = jit->prg_buf + nop_prg - (u8 *)&ex->fixup;
        if (WARN_ON_ONCE(delta < INT_MIN || delta > INT_MAX))
            /* JIT bug - landing pad and extable must be close. */
            return -1;
        ex->fixup = delta;
2022-02-28 14:52:42 +01:00
        ex->type = EX_TYPE_BPF;
2022-02-27 21:32:54 +01:00
        ex->data = reg;
2020-06-24 14:55:22 +02:00
        jit->excnt++;
    }
    return 0;
}
2012-07-31 16:23:59 +02:00
/*
2015-04-01 16:08:32 +02:00
 * Compile one eBPF instruction into s390x code
s390/bpf: Fix gcov stack space problem
When compiling the kernel for GCOV (CONFIG_GCOV_KERNEL,-fprofile-arcs),
gcc allocates a lot of stack space because of the large switch statement
in bpf_jit_insn().
This leads to the following compile warning:
arch/s390/net/bpf_jit_comp.c: In function 'bpf_jit_prog':
arch/s390/net/bpf_jit_comp.c:1144:1: warning: frame size of
function 'bpf_jit_prog' is 12592 bytes which is more than
half the stack size. The dynamic check would not be reliable.
No check emitted for this function.
arch/s390/net/bpf_jit_comp.c:1144:1: warning: the frame size of 12504
bytes is larger than 1024 bytes [-Wframe-larger-than=]
And indeed gcc allocates 12592 bytes of stack space:
# objdump -d arch/s390/net/bpf_jit_comp.o
...
0000000000000c60 <bpf_jit_prog>:
c60: eb 6f f0 48 00 24 stmg %r6,%r15,72(%r15)
c66: b9 04 00 ef lgr %r14,%r15
c6a: e3 f0 fe d0 fc 71 lay %r15,-12592(%r15)
As a workaround of that problem we now define bpf_jit_insn() as
noinline which then reduces the stack space.
# objdump -d arch/s390/net/bpf_jit_comp.o
...
0000000000000070 <bpf_jit_insn>:
70: eb 6f f0 48 00 24 stmg %r6,%r15,72(%r15)
76: c0 d0 00 00 00 00 larl %r13,76 <bpf_jit_insn+0x6>
7c: a7 f1 3f 80 tmll %r15,16256
80: b9 04 00 ef lgr %r14,%r15
84: e3 f0 ff a0 ff 71 lay %r15,-96(%r15)
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2015-04-29 18:45:03 +02:00
 *
 * NOTE: Use noinline because for gcov (-fprofile-arcs) gcc allocates a lot of
 * stack space for the large switch statement.
2012-07-31 16:23:59 +02:00
 */
bpf: s390: add JIT support for multi-function programs
This adds support for bpf-to-bpf function calls in the s390 JIT
compiler. The JIT compiler converts the bpf call instructions to
native branch instructions. After a round of the usual passes, the
start addresses of the JITed images for the callee functions are
known. Finally, to fixup the branch target addresses, we need to
perform an extra pass.
Because of the address range in which JITed images are allocated on
s390, the offsets of the start addresses of these images from
__bpf_call_base are as large as 64 bits. So, for a function call,
the imm field of the instruction cannot be used to determine the
callee's address. Use bpf_jit_get_func_addr() helper instead.
The patch borrows a lot from:
commit 8c11ea5ce13d ("bpf, arm64: fix getting subprog addr from aux
for calls")
commit e2c95a61656d ("bpf, ppc64: generalize fetching subprog into
bpf_jit_get_func_addr")
commit 8484ce8306f9 ("bpf: powerpc64: add JIT support for
multi-function programs")
(including the commit message).
test_verifier (5.3-rc6 with CONFIG_BPF_JIT_ALWAYS_ON=y):
without patch:
Summary: 1501 PASSED, 0 SKIPPED, 47 FAILED
with patch:
Summary: 1540 PASSED, 0 SKIPPED, 8 FAILED
Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-08-28 21:28:46 +03:00
static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
2020-06-02 19:43:39 +02:00
                                 int i, bool extra_pass, u32 stack_depth)
2012-07-31 16:23:59 +02:00
{
2015-04-01 16:08:32 +02:00
    struct bpf_insn *insn = &fp->insnsi[i];
    u32 dst_reg = insn->dst_reg;
    u32 src_reg = insn->src_reg;
2019-11-18 19:03:35 +01:00
    int last, insn_count = 1;
2015-04-01 16:08:32 +02:00
    u32 *addrs = jit->addrs;
    s32 imm = insn->imm;
    s16 off = insn->off;
2020-06-24 14:55:22 +02:00
    int probe_prg = -1;
2018-05-04 01:08:22 +02:00
    unsigned int mask;
2020-06-24 14:55:22 +02:00
    int nop_prg;
    int err;

    if (BPF_CLASS(insn->code) == BPF_LDX &&
        BPF_MODE(insn->code) == BPF_PROBE_MEM)
        probe_prg = jit->prg;
2012-07-31 16:23:59 +02:00
2015-04-01 16:08:32 +02:00
    switch (insn->code) {
    /*
     * BPF_MOV
     */
    case BPF_ALU | BPF_MOV | BPF_X: /* dst = (u32) src */
        /* llgfr %dst,%src */
        EMIT4(0xb9160000, dst_reg, src_reg);
2019-05-24 23:25:24 +01:00
        if (insn_is_zext(&insn[1]))
            insn_count = 2;
2015-04-01 16:08:32 +02:00
        break;
    case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
        /* lgr %dst,%src */
        EMIT4(0xb9040000, dst_reg, src_reg);
        break;
    case BPF_ALU | BPF_MOV | BPF_K: /* dst = (u32) imm */
        /* llilf %dst,imm */
        EMIT6_IMM(0xc00f0000, dst_reg, imm);
2019-05-24 23:25:24 +01:00
        if (insn_is_zext(&insn[1]))
            insn_count = 2;
2015-04-01 16:08:32 +02:00
        break;
    case BPF_ALU64 | BPF_MOV | BPF_K: /* dst = imm */
        /* lgfi %dst,imm */
        EMIT6_IMM(0xc0010000, dst_reg, imm);
        break;
    /*
     * BPF_LD 64
     */
    case BPF_LD | BPF_IMM | BPF_DW: /* dst = (u64) imm */
    {
        /* 16 byte instruction that uses two 'struct bpf_insn' */
        u64 imm64;

        imm64 = (u64)(u32) insn[0].imm | ((u64)(u32) insn[1].imm) << 32;
2019-11-18 19:03:38 +01:00
        /* lgrl %dst,imm */
        EMIT6_PCREL_RILB(0xc4080000, dst_reg, _EMIT_CONST_U64(imm64));
2015-04-01 16:08:32 +02:00
        insn_count = 2;
        break;
    }
    /*
     * BPF_ADD
     */
    case BPF_ALU | BPF_ADD | BPF_X: /* dst = (u32) dst + (u32) src */
        /* ar %dst,%src */
        EMIT2(0x1a00, dst_reg, src_reg);
        EMIT_ZERO(dst_reg);
        break;
    case BPF_ALU64 | BPF_ADD | BPF_X: /* dst = dst + src */
        /* agr %dst,%src */
        EMIT4(0xb9080000, dst_reg, src_reg);
        break;
    case BPF_ALU | BPF_ADD | BPF_K: /* dst = (u32) dst + (u32) imm */
2021-09-06 15:04:14 +02:00
        if (imm != 0) {
            /* alfi %dst,imm */
            EMIT6_IMM(0xc20b0000, dst_reg, imm);
        }
2015-04-01 16:08:32 +02:00
        EMIT_ZERO(dst_reg);
        break;
    case BPF_ALU64 | BPF_ADD | BPF_K: /* dst = dst + imm */
        if (!imm)
            break;
        /* agfi %dst,imm */
        EMIT6_IMM(0xc2080000, dst_reg, imm);
        break;
    /*
     * BPF_SUB
     */
    case BPF_ALU | BPF_SUB | BPF_X: /* dst = (u32) dst - (u32) src */
        /* sr %dst,%src */
        EMIT2(0x1b00, dst_reg, src_reg);
        EMIT_ZERO(dst_reg);
2012-07-31 16:23:59 +02:00
        break;
2015-04-01 16:08:32 +02:00
    case BPF_ALU64 | BPF_SUB | BPF_X: /* dst = dst - src */
        /* sgr %dst,%src */
        EMIT4(0xb9090000, dst_reg, src_reg);
2012-07-31 16:23:59 +02:00
        break;
2015-04-01 16:08:32 +02:00
    case BPF_ALU | BPF_SUB | BPF_K: /* dst = (u32) dst - (u32) imm */
2021-09-06 15:04:14 +02:00
        if (imm != 0) {
            /* alfi %dst,-imm */
            EMIT6_IMM(0xc20b0000, dst_reg, -imm);
        }
2015-04-01 16:08:32 +02:00
        EMIT_ZERO(dst_reg);
2012-07-31 16:23:59 +02:00
        break;
2015-04-01 16:08:32 +02:00
    case BPF_ALU64 | BPF_SUB | BPF_K: /* dst = dst - imm */
        if (!imm)
            break;
2021-09-07 13:41:16 +02:00
        if (imm == -0x80000000) {
            /* algfi %dst,0x80000000 */
            EMIT6_IMM(0xc20a0000, dst_reg, 0x80000000);
        } else {
            /* agfi %dst,-imm */
            EMIT6_IMM(0xc2080000, dst_reg, -imm);
        }
2015-04-01 16:08:32 +02:00
        break;
    /*
     * BPF_MUL
     */
    case BPF_ALU | BPF_MUL | BPF_X: /* dst = (u32) dst * (u32) src */
        /* msr %dst,%src */
        EMIT4(0xb2520000, dst_reg, src_reg);
        EMIT_ZERO(dst_reg);
        break;
    case BPF_ALU64 | BPF_MUL | BPF_X: /* dst = dst * src */
        /* msgr %dst,%src */
        EMIT4(0xb90c0000, dst_reg, src_reg);
        break;
    case BPF_ALU | BPF_MUL | BPF_K: /* dst = (u32) dst * (u32) imm */
2021-09-06 15:04:14 +02:00
        if (imm != 1) {
            /* msfi %r5,imm */
            EMIT6_IMM(0xc2010000, dst_reg, imm);
        }
2015-04-01 16:08:32 +02:00
        EMIT_ZERO(dst_reg);
        break;
    case BPF_ALU64 | BPF_MUL | BPF_K: /* dst = dst * imm */
        if (imm == 1)
2014-01-15 06:50:07 -08:00
            break;
2015-04-01 16:08:32 +02:00
        /* msgfi %dst,imm */
        EMIT6_IMM(0xc2000000, dst_reg, imm);
        break;
    /*
     * BPF_DIV / BPF_MOD
     */
    case BPF_ALU | BPF_DIV | BPF_X: /* dst = (u32) dst / (u32) src */
    case BPF_ALU | BPF_MOD | BPF_X: /* dst = (u32) dst % (u32) src */
    {
        int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;

        /* lhi %w0,0 */
        EMIT4_IMM(0xa7080000, REG_W0, 0);
        /* lr %w1,%dst */
        EMIT2(0x1800, REG_W1, dst_reg);
        /* dlr %w0,%src */
        EMIT4(0xb9970000, REG_W0, src_reg);
        /* llgfr %dst,%rc */
        EMIT4(0xb9160000, dst_reg, rc_reg);
2019-05-24 23:25:24 +01:00
        if (insn_is_zext(&insn[1]))
            insn_count = 2;
2015-04-01 16:08:32 +02:00
        break;
    }
2015-04-27 11:12:25 +02:00
case BPF_ALU64 | BPF_DIV | BPF_X : /* dst = dst / src */
case BPF_ALU64 | BPF_MOD | BPF_X : /* dst = dst % src */
2015-04-01 16:08:32 +02:00
{
int rc_reg = BPF_OP ( insn - > code ) = = BPF_DIV ? REG_W1 : REG_W0 ;
/* lghi %w0,0 */
EMIT4_IMM ( 0xa7090000 , REG_W0 , 0 ) ;
/* lgr %w1,%dst */
EMIT4 ( 0xb9040000 , REG_W1 , dst_reg ) ;
		/* dlgr %w0,%src */
		EMIT4(0xb9870000, REG_W0, src_reg);
		/* lgr %dst,%rc */
		EMIT4(0xb9040000, dst_reg, rc_reg);
		break;
	}
	case BPF_ALU | BPF_DIV | BPF_K: /* dst = (u32) dst / (u32) imm */
	case BPF_ALU | BPF_MOD | BPF_K: /* dst = (u32) dst % (u32) imm */
	{
		int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;

		if (imm == 1) {
			if (BPF_OP(insn->code) == BPF_MOD)
				/* lghi %dst,0 */
				EMIT4_IMM(0xa7090000, dst_reg, 0);
			else
				EMIT_ZERO(dst_reg);
			break;
		}
		/* lhi %w0,0 */
		EMIT4_IMM(0xa7080000, REG_W0, 0);
		/* lr %w1,%dst */
		EMIT2(0x1800, REG_W1, dst_reg);
		if (!is_first_pass(jit) && can_use_ldisp_for_lit32(jit)) {
			/* dl %w0,<d(imm)>(%l) */
			EMIT6_DISP_LH(0xe3000000, 0x0097, REG_W0, REG_0, REG_L,
				      EMIT_CONST_U32(imm));
		} else {
			/* lgfrl %dst,imm */
			EMIT6_PCREL_RILB(0xc40c0000, dst_reg,
					 _EMIT_CONST_U32(imm));
			jit->seen |= SEEN_LITERAL;
			/* dlr %w0,%dst */
			EMIT4(0xb9970000, REG_W0, dst_reg);
		}
		/* llgfr %dst,%rc */
		EMIT4(0xb9160000, dst_reg, rc_reg);
		if (insn_is_zext(&insn[1]))
			insn_count = 2;
		break;
	}
	case BPF_ALU64 | BPF_DIV | BPF_K: /* dst = dst / imm */
	case BPF_ALU64 | BPF_MOD | BPF_K: /* dst = dst % imm */
	{
		int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;

		if (imm == 1) {
			if (BPF_OP(insn->code) == BPF_MOD)
				/* lghi %dst,0 */
				EMIT4_IMM(0xa7090000, dst_reg, 0);
			break;
		}
		/* lghi %w0,0 */
		EMIT4_IMM(0xa7090000, REG_W0, 0);
		/* lgr %w1,%dst */
		EMIT4(0xb9040000, REG_W1, dst_reg);
		if (!is_first_pass(jit) && can_use_ldisp_for_lit64(jit)) {
			/* dlg %w0,<d(imm)>(%l) */
			EMIT6_DISP_LH(0xe3000000, 0x0087, REG_W0, REG_0, REG_L,
				      EMIT_CONST_U64(imm));
		} else {
			/* lgrl %dst,imm */
			EMIT6_PCREL_RILB(0xc4080000, dst_reg,
					 _EMIT_CONST_U64(imm));
			jit->seen |= SEEN_LITERAL;
			/* dlgr %w0,%dst */
			EMIT4(0xb9870000, REG_W0, dst_reg);
		}
		/* lgr %dst,%rc */
		EMIT4(0xb9040000, dst_reg, rc_reg);
		break;
	}
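	/*
	 * Note on the DIV/MOD cases above: dlr/dlgr expect the unsigned
	 * dividend in the even/odd work register pair %w0:%w1 and leave the
	 * remainder in the even register (%w0) and the quotient in the odd
	 * one (%w1). That is why %w0 is cleared first, the dividend is copied
	 * into %w1, and rc_reg picks %w1 for BPF_DIV but %w0 for BPF_MOD.
	 */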
	/*
	 * BPF_AND
	 */
	case BPF_ALU | BPF_AND | BPF_X: /* dst = (u32) dst & (u32) src */
		/* nr %dst,%src */
		EMIT2(0x1400, dst_reg, src_reg);
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_AND | BPF_X: /* dst = dst & src */
		/* ngr %dst,%src */
		EMIT4(0xb9800000, dst_reg, src_reg);
		break;
	case BPF_ALU | BPF_AND | BPF_K: /* dst = (u32) dst & (u32) imm */
		/* nilf %dst,imm */
		EMIT6_IMM(0xc00b0000, dst_reg, imm);
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_AND | BPF_K: /* dst = dst & imm */
		if (!is_first_pass(jit) && can_use_ldisp_for_lit64(jit)) {
			/* ng %dst,<d(imm)>(%l) */
			EMIT6_DISP_LH(0xe3000000, 0x0080,
				      dst_reg, REG_0, REG_L,
				      EMIT_CONST_U64(imm));
		} else {
			/* lgrl %w0,imm */
			EMIT6_PCREL_RILB(0xc4080000, REG_W0,
					 _EMIT_CONST_U64(imm));
			jit->seen |= SEEN_LITERAL;
			/* ngr %dst,%w0 */
			EMIT4(0xb9800000, dst_reg, REG_W0);
		}
		break;
	/*
	 * BPF_OR
	 */
	case BPF_ALU | BPF_OR | BPF_X: /* dst = (u32) dst | (u32) src */
		/* or %dst,%src */
		EMIT2(0x1600, dst_reg, src_reg);
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_OR | BPF_X: /* dst = dst | src */
		/* ogr %dst,%src */
		EMIT4(0xb9810000, dst_reg, src_reg);
		break;
	case BPF_ALU | BPF_OR | BPF_K: /* dst = (u32) dst | (u32) imm */
		/* oilf %dst,imm */
		EMIT6_IMM(0xc00d0000, dst_reg, imm);
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_OR | BPF_K: /* dst = dst | imm */
		if (!is_first_pass(jit) && can_use_ldisp_for_lit64(jit)) {
			/* og %dst,<d(imm)>(%l) */
			EMIT6_DISP_LH(0xe3000000, 0x0081,
				      dst_reg, REG_0, REG_L,
				      EMIT_CONST_U64(imm));
		} else {
			/* lgrl %w0,imm */
			EMIT6_PCREL_RILB(0xc4080000, REG_W0,
					 _EMIT_CONST_U64(imm));
			jit->seen |= SEEN_LITERAL;
			/* ogr %dst,%w0 */
			EMIT4(0xb9810000, dst_reg, REG_W0);
		}
		break;
	/*
	 * BPF_XOR
	 */
	case BPF_ALU | BPF_XOR | BPF_X: /* dst = (u32) dst ^ (u32) src */
		/* xr %dst,%src */
		EMIT2(0x1700, dst_reg, src_reg);
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_XOR | BPF_X: /* dst = dst ^ src */
		/* xgr %dst,%src */
		EMIT4(0xb9820000, dst_reg, src_reg);
		break;
	case BPF_ALU | BPF_XOR | BPF_K: /* dst = (u32) dst ^ (u32) imm */
		if (imm != 0) {
			/* xilf %dst,imm */
			EMIT6_IMM(0xc0070000, dst_reg, imm);
		}
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_XOR | BPF_K: /* dst = dst ^ imm */
		if (!is_first_pass(jit) && can_use_ldisp_for_lit64(jit)) {
			/* xg %dst,<d(imm)>(%l) */
			EMIT6_DISP_LH(0xe3000000, 0x0082,
				      dst_reg, REG_0, REG_L,
				      EMIT_CONST_U64(imm));
		} else {
			/* lgrl %w0,imm */
			EMIT6_PCREL_RILB(0xc4080000, REG_W0,
					 _EMIT_CONST_U64(imm));
			jit->seen |= SEEN_LITERAL;
			/* xgr %dst,%w0 */
			EMIT4(0xb9820000, dst_reg, REG_W0);
		}
		break;
	/*
	 * BPF_LSH
	 */
	case BPF_ALU | BPF_LSH | BPF_X: /* dst = (u32) dst << (u32) src */
		/* sll %dst,0(%src) */
		EMIT4_DISP(0x89000000, dst_reg, src_reg, 0);
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_LSH | BPF_X: /* dst = dst << src */
		/* sllg %dst,%dst,0(%src) */
		EMIT6_DISP_LH(0xeb000000, 0x000d, dst_reg, dst_reg, src_reg, 0);
		break;
	case BPF_ALU | BPF_LSH | BPF_K: /* dst = (u32) dst << (u32) imm */
		if (imm != 0) {
			/* sll %dst,imm(%r0) */
			EMIT4_DISP(0x89000000, dst_reg, REG_0, imm);
		}
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_LSH | BPF_K: /* dst = dst << imm */
		if (imm == 0)
			break;
		/* sllg %dst,%dst,imm(%r0) */
		EMIT6_DISP_LH(0xeb000000, 0x000d, dst_reg, dst_reg, REG_0, imm);
		break;
	/*
	 * BPF_RSH
	 */
	case BPF_ALU | BPF_RSH | BPF_X: /* dst = (u32) dst >> (u32) src */
		/* srl %dst,0(%src) */
		EMIT4_DISP(0x88000000, dst_reg, src_reg, 0);
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_RSH | BPF_X: /* dst = dst >> src */
		/* srlg %dst,%dst,0(%src) */
		EMIT6_DISP_LH(0xeb000000, 0x000c, dst_reg, dst_reg, src_reg, 0);
		break;
	case BPF_ALU | BPF_RSH | BPF_K: /* dst = (u32) dst >> (u32) imm */
		if (imm != 0) {
			/* srl %dst,imm(%r0) */
			EMIT4_DISP(0x88000000, dst_reg, REG_0, imm);
		}
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_RSH | BPF_K: /* dst = dst >> imm */
		if (imm == 0)
			break;
		/* srlg %dst,%dst,imm(%r0) */
		EMIT6_DISP_LH(0xeb000000, 0x000c, dst_reg, dst_reg, REG_0, imm);
		break;
	/*
	 * BPF_ARSH
	 */
	case BPF_ALU | BPF_ARSH | BPF_X: /* ((s32) dst) >>= src */
		/* sra %dst,%dst,0(%src) */
		EMIT4_DISP(0x8a000000, dst_reg, src_reg, 0);
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_ARSH | BPF_X: /* ((s64) dst) >>= src */
		/* srag %dst,%dst,0(%src) */
		EMIT6_DISP_LH(0xeb000000, 0x000a, dst_reg, dst_reg, src_reg, 0);
		break;
	case BPF_ALU | BPF_ARSH | BPF_K: /* ((s32) dst) >>= imm */
		if (imm != 0) {
			/* sra %dst,imm(%r0) */
			EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm);
		}
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_ARSH | BPF_K: /* ((s64) dst) >>= imm */
		if (imm == 0)
			break;
		/* srag %dst,%dst,imm(%r0) */
		EMIT6_DISP_LH(0xeb000000, 0x000a, dst_reg, dst_reg, REG_0, imm);
		break;
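	/*
	 * Note on the shift cases above: the s390 shift instructions take the
	 * shift count from the low six bits of the second-operand address, so
	 * the BPF count can be used directly as the displacement imm(%r0) or
	 * taken from 0(%src) without an explicit mask.
	 */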
	/*
	 * BPF_NEG
	 */
	case BPF_ALU | BPF_NEG: /* dst = (u32) -dst */
		/* lcr %dst,%dst */
		EMIT2(0x1300, dst_reg, dst_reg);
		EMIT_ZERO(dst_reg);
		break;
	case BPF_ALU64 | BPF_NEG: /* dst = -dst */
		/* lcgr %dst,%dst */
		EMIT4(0xb9030000, dst_reg, dst_reg);
		break;
	/*
	 * BPF_FROM_BE / LE
	 */
	case BPF_ALU | BPF_END | BPF_FROM_BE:
		/* s390 is big endian, therefore only clear high order bytes */
		switch (imm) {
		case 16: /* dst = (u16) cpu_to_be16(dst) */
			/* llghr %dst,%dst */
			EMIT4(0xb9850000, dst_reg, dst_reg);
			if (insn_is_zext(&insn[1]))
				insn_count = 2;
			break;
		case 32: /* dst = (u32) cpu_to_be32(dst) */
			if (!fp->aux->verifier_zext)
				/* llgfr %dst,%dst */
				EMIT4(0xb9160000, dst_reg, dst_reg);
			break;
		case 64: /* dst = (u64) cpu_to_be64(dst) */
			break;
		}
		break;
	case BPF_ALU | BPF_END | BPF_FROM_LE:
		switch (imm) {
		case 16: /* dst = (u16) cpu_to_le16(dst) */
			/* lrvr %dst,%dst */
			EMIT4(0xb91f0000, dst_reg, dst_reg);
			/* srl %dst,16(%r0) */
			EMIT4_DISP(0x88000000, dst_reg, REG_0, 16);
			/* llghr %dst,%dst */
			EMIT4(0xb9850000, dst_reg, dst_reg);
			if (insn_is_zext(&insn[1]))
				insn_count = 2;
			break;
		case 32: /* dst = (u32) cpu_to_le32(dst) */
			/* lrvr %dst,%dst */
			EMIT4(0xb91f0000, dst_reg, dst_reg);
			if (!fp->aux->verifier_zext)
				/* llgfr %dst,%dst */
				EMIT4(0xb9160000, dst_reg, dst_reg);
			break;
		case 64: /* dst = (u64) cpu_to_le64(dst) */
			/* lrvgr %dst,%dst */
			EMIT4(0xb90f0000, dst_reg, dst_reg);
			break;
		}
		break;
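	/*
	 * Because s390 is big endian, BPF_FROM_BE above only needs to
	 * truncate to the requested width (and is a no-op for 64 bit),
	 * whereas BPF_FROM_LE must byte-reverse with lrvr/lrvgr before
	 * truncating.
	 */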
	/*
	 * BPF_NOSPEC (speculation barrier)
	 */
	case BPF_ST | BPF_NOSPEC:
		break;
	/*
	 * BPF_ST(X)
	 */
	case BPF_STX | BPF_MEM | BPF_B: /* *(u8 *)(dst + off) = src_reg */
		/* stcy %src,off(%dst) */
		EMIT6_DISP_LH(0xe3000000, 0x0072, src_reg, dst_reg, REG_0, off);
		jit->seen |= SEEN_MEM;
		break;
	case BPF_STX | BPF_MEM | BPF_H: /* (u16 *)(dst + off) = src */
		/* sthy %src,off(%dst) */
		EMIT6_DISP_LH(0xe3000000, 0x0070, src_reg, dst_reg, REG_0, off);
		jit->seen |= SEEN_MEM;
		break;
	case BPF_STX | BPF_MEM | BPF_W: /* *(u32 *)(dst + off) = src */
		/* sty %src,off(%dst) */
		EMIT6_DISP_LH(0xe3000000, 0x0050, src_reg, dst_reg, REG_0, off);
		jit->seen |= SEEN_MEM;
		break;
	case BPF_STX | BPF_MEM | BPF_DW: /* (u64 *)(dst + off) = src */
		/* stg %src,off(%dst) */
		EMIT6_DISP_LH(0xe3000000, 0x0024, src_reg, dst_reg, REG_0, off);
		jit->seen |= SEEN_MEM;
		break;
	case BPF_ST | BPF_MEM | BPF_B: /* *(u8 *)(dst + off) = imm */
		/* lhi %w0,imm */
		EMIT4_IMM(0xa7080000, REG_W0, (u8) imm);
		/* stcy %w0,off(dst) */
		EMIT6_DISP_LH(0xe3000000, 0x0072, REG_W0, dst_reg, REG_0, off);
		jit->seen |= SEEN_MEM;
		break;
	case BPF_ST | BPF_MEM | BPF_H: /* (u16 *)(dst + off) = imm */
		/* lhi %w0,imm */
		EMIT4_IMM(0xa7080000, REG_W0, (u16) imm);
		/* sthy %w0,off(dst) */
		EMIT6_DISP_LH(0xe3000000, 0x0070, REG_W0, dst_reg, REG_0, off);
		jit->seen |= SEEN_MEM;
		break;
	case BPF_ST | BPF_MEM | BPF_W: /* *(u32 *)(dst + off) = imm */
		/* llilf %w0,imm */
		EMIT6_IMM(0xc00f0000, REG_W0, (u32) imm);
		/* sty %w0,off(%dst) */
		EMIT6_DISP_LH(0xe3000000, 0x0050, REG_W0, dst_reg, REG_0, off);
		jit->seen |= SEEN_MEM;
		break;
	case BPF_ST | BPF_MEM | BPF_DW: /* *(u64 *)(dst + off) = imm */
		/* lgfi %w0,imm */
		EMIT6_IMM(0xc0010000, REG_W0, imm);
		/* stg %w0,off(%dst) */
		EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W0, dst_reg, REG_0, off);
		jit->seen |= SEEN_MEM;
		break;
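	/*
	 * For the BPF_ST cases above the immediate is a signed 32-bit value:
	 * the 64-bit store loads it with lgfi, which sign-extends, while the
	 * narrower stores only keep the low bytes of it anyway.
	 */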
	/*
	 * BPF_ATOMIC
	 */
	case BPF_STX | BPF_ATOMIC | BPF_DW:
	case BPF_STX | BPF_ATOMIC | BPF_W:
	{
		bool is32 = BPF_SIZE(insn->code) == BPF_W;

		switch (insn->imm) {
/* {op32|op64} {%w0|%src},%src,off(%dst) */
#define EMIT_ATOMIC(op32, op64) do {					\
	EMIT6_DISP_LH(0xeb000000, is32 ? (op32) : (op64),		\
		      (insn->imm & BPF_FETCH) ? src_reg : REG_W0,	\
		      src_reg, dst_reg, off);				\
	if (is32 && (insn->imm & BPF_FETCH))				\
		EMIT_ZERO(src_reg);					\
} while (0)
		case BPF_ADD:
		case BPF_ADD | BPF_FETCH:
			/* {laal|laalg} */
			EMIT_ATOMIC(0x00fa, 0x00ea);
			break;
		case BPF_AND:
		case BPF_AND | BPF_FETCH:
			/* {lan|lang} */
			EMIT_ATOMIC(0x00f4, 0x00e4);
			break;
		case BPF_OR:
		case BPF_OR | BPF_FETCH:
			/* {lao|laog} */
			EMIT_ATOMIC(0x00f6, 0x00e6);
			break;
		case BPF_XOR:
		case BPF_XOR | BPF_FETCH:
			/* {lax|laxg} */
			EMIT_ATOMIC(0x00f7, 0x00e7);
			break;
#undef EMIT_ATOMIC
		case BPF_XCHG:
			/* {ly|lg} %w0,off(%dst) */
			EMIT6_DISP_LH(0xe3000000,
				      is32 ? 0x0058 : 0x0004, REG_W0, REG_0,
				      dst_reg, off);
			/* 0: {csy|csg} %w0,%src,off(%dst) */
			EMIT6_DISP_LH(0xeb000000, is32 ? 0x0014 : 0x0030,
				      REG_W0, src_reg, dst_reg, off);
			/* brc 4,0b */
			EMIT4_PCREL_RIC(0xa7040000, 4, jit->prg - 6);
			/* {llgfr|lgr} %src,%w0 */
			EMIT4(is32 ? 0xb9160000 : 0xb9040000, src_reg, REG_W0);
			if (is32 && insn_is_zext(&insn[1]))
				insn_count = 2;
			break;
		case BPF_CMPXCHG:
			/* 0: {csy|csg} %b0,%src,off(%dst) */
			EMIT6_DISP_LH(0xeb000000, is32 ? 0x0014 : 0x0030,
				      BPF_REG_0, src_reg, dst_reg, off);
			break;
		default:
			pr_err("Unknown atomic operation %02x\n", insn->imm);
			return -1;
		}

		jit->seen |= SEEN_MEM;
		break;
	}
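	/*
	 * In the BPF_ATOMIC block above, add/and/or/xor map directly to the
	 * interlocked-access instructions (laal, lan, lao, lax and their
	 * 64-bit forms); the old value is written to %src only when BPF_FETCH
	 * is set, otherwise it is discarded into %w0. BPF_XCHG has no single
	 * instruction and is built as a compare-and-swap loop: brc 4 (CC 1,
	 * compare mismatch) branches back until the csy/csg succeeds.
	 */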
	/*
	 * BPF_LDX
	 */
	case BPF_LDX | BPF_MEM | BPF_B: /* dst = *(u8 *)(ul) (src + off) */
	case BPF_LDX | BPF_PROBE_MEM | BPF_B:
		/* llgc %dst,0(off,%src) */
		EMIT6_DISP_LH(0xe3000000, 0x0090, dst_reg, src_reg, REG_0, off);
		jit->seen |= SEEN_MEM;
		if (insn_is_zext(&insn[1]))
			insn_count = 2;
		break;
	case BPF_LDX | BPF_MEM | BPF_H: /* dst = *(u16 *)(ul) (src + off) */
	case BPF_LDX | BPF_PROBE_MEM | BPF_H:
		/* llgh %dst,0(off,%src) */
		EMIT6_DISP_LH(0xe3000000, 0x0091, dst_reg, src_reg, REG_0, off);
		jit->seen |= SEEN_MEM;
		if (insn_is_zext(&insn[1]))
			insn_count = 2;
		break;
	case BPF_LDX | BPF_MEM | BPF_W: /* dst = *(u32 *)(ul) (src + off) */
	case BPF_LDX | BPF_PROBE_MEM | BPF_W:
		/* llgf %dst,off(%src) */
		jit->seen |= SEEN_MEM;
		EMIT6_DISP_LH(0xe3000000, 0x0016, dst_reg, src_reg, REG_0, off);
		if (insn_is_zext(&insn[1]))
			insn_count = 2;
		break;
	case BPF_LDX | BPF_MEM | BPF_DW: /* dst = *(u64 *)(ul) (src + off) */
	case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
		/* lg %dst,0(off,%src) */
		jit->seen |= SEEN_MEM;
		EMIT6_DISP_LH(0xe3000000, 0x0004, dst_reg, src_reg, REG_0, off);
		break;
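	/*
	 * The loads above (llgc, llgh, llgf, lg) already zero-extend into the
	 * full 64-bit destination, so an explicit zero-extension inserted by
	 * the verifier as the next instruction can be folded away by
	 * returning insn_count = 2.
	 */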
	/*
	 * BPF_JMP / CALL
	 */
	case BPF_JMP | BPF_CALL:
	{
		u64 func;
		bool func_addr_fixed;
		int ret;

		ret = bpf_jit_get_func_addr(fp, insn, extra_pass,
					    &func, &func_addr_fixed);
		if (ret < 0)
			return -1;

		REG_SET_SEEN(BPF_REG_5);
		jit->seen |= SEEN_FUNC;
		/* lgrl %w1,func */
		EMIT6_PCREL_RILB(0xc4080000, REG_W1, _EMIT_CONST_U64(func));
		if (nospec_uses_trampoline()) {
			/* brasl %r14,__s390_indirect_jump_r1 */
			EMIT6_PCREL_RILB(0xc0050000, REG_14, jit->r1_thunk_ip);
		} else {
			/* basr %r14,%w1 */
			EMIT2(0x0d00, REG_14, REG_W1);
		}
		/* lgr %b0,%r2: load return value into %b0 */
		EMIT4(0xb9040000, BPF_REG_0, REG_2);
		break;
	}
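	/*
	 * A helper or subprogram call above becomes: load the 64-bit target
	 * address from the literal pool with lgrl, branch either through the
	 * expoline thunk (when nospec_uses_trampoline()) or directly via
	 * basr, then copy the return value from %r2 into BPF register 0.
	 */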
	case BPF_JMP | BPF_TAIL_CALL: {
		int patch_1_clrj, patch_2_clij, patch_3_brc;
		/*
		 * Implicit input:
		 *  B1: pointer to ctx
		 *  B2: pointer to bpf_array
		 *  B3: index in bpf_array
		 */
		jit->seen |= SEEN_TAIL_CALL;

		/*
		 * if (index >= array->map.max_entries)
		 *         goto out;
		 */

		/* llgf %w1,map.max_entries(%b2) */
		EMIT6_DISP_LH(0xe3000000, 0x0016, REG_W1, REG_0, BPF_REG_2,
			      offsetof(struct bpf_array, map.max_entries));
		/* if ((u32)%b3 >= (u32)%w1) goto out; */
		/* clrj %b3,%w1,0xa,out */
		patch_1_clrj = jit->prg;
		EMIT6_PCREL_RIEB(0xec000000, 0x0077, BPF_REG_3, REG_W1, 0xa,
				 jit->prg);
		/*
		 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
		 *         goto out;
		 */
		if (jit->seen & SEEN_STACK)
			off = STK_OFF_TCCNT + STK_OFF + stack_depth;
		else
			off = STK_OFF_TCCNT;
		/* lhi %w0,1 */
		EMIT4_IMM(0xa7080000, REG_W0, 1);
		/* laal %w1,%w0,off(%r15) */
		EMIT6_DISP_LH(0xeb000000, 0x00fa, REG_W1, REG_W0, REG_15, off);
		/* clij %w1,MAX_TAIL_CALL_CNT-1,0x2,out */
		patch_2_clij = jit->prg;
		EMIT6_PCREL_RIEC(0xec000000, 0x007f, REG_W1, MAX_TAIL_CALL_CNT - 1,
				 2, jit->prg);
		/*
		 * prog = array->ptrs[index];
		 * if (prog == NULL)
		 *         goto out;
		 */
		/* llgfr %r1,%b3: %r1 = (u32) index */
		EMIT4(0xb9160000, REG_1, BPF_REG_3);
		/* sllg %r1,%r1,3: %r1 *= 8 */
		EMIT6_DISP_LH(0xeb000000, 0x000d, REG_1, REG_1, REG_0, 3);
		/* ltg %r1,prog(%b2,%r1) */
		EMIT6_DISP_LH(0xe3000000, 0x0002, REG_1, BPF_REG_2,
			      REG_1, offsetof(struct bpf_array, ptrs));
		/* brc 0x8,out */
		patch_3_brc = jit->prg;
		EMIT4_PCREL_RIC(0xa7040000, 8, jit->prg);
		/*
		 * Restore registers before calling function
		 */
		save_restore_regs(jit, REGS_RESTORE, stack_depth);
		/*
		 * goto *(prog->bpf_func + tail_call_start);
		 */
		/* lg %r1,bpf_func(%r1) */
		EMIT6_DISP_LH(0xe3000000, 0x0004, REG_1, REG_1, REG_0,
			      offsetof(struct bpf_prog, bpf_func));
		if (nospec_uses_trampoline()) {
			jit->seen |= SEEN_FUNC;
			/* aghi %r1,tail_call_start */
			EMIT4_IMM(0xa70b0000, REG_1, jit->tail_call_start);
			/* brcl 0xf,__s390_indirect_jump_r1 */
			EMIT6_PCREL_RILC(0xc0040000, 0xf, jit->r1_thunk_ip);
		} else {
			/* bc 0xf,tail_call_start(%r1) */
			_EMIT4(0x47f01000 + jit->tail_call_start);
		}
		/* out: */
		if (jit->prg_buf) {
			*(u16 *)(jit->prg_buf + patch_1_clrj + 2) =
				(jit->prg - patch_1_clrj) >> 1;
			*(u16 *)(jit->prg_buf + patch_2_clij + 2) =
				(jit->prg - patch_2_clij) >> 1;
			*(u16 *)(jit->prg_buf + patch_3_brc + 2) =
				(jit->prg - patch_3_brc) >> 1;
		}
		break;
	}
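	/*
	 * The three guard branches above (clrj, clij, brc) are first emitted
	 * with a dummy target and their locations remembered in patch_1_clrj,
	 * patch_2_clij and patch_3_brc; once the address of the "out" label
	 * is known, their 16-bit halfword offsets are patched in at
	 * instruction offset + 2, but only if an output buffer (jit->prg_buf)
	 * already exists.
	 */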
	case BPF_JMP | BPF_EXIT: /* return b0 */
		last = (i == fp->len - 1) ? 1 : 0;
		if (last)
			break;
		if (!is_first_pass(jit) && can_use_rel(jit, jit->exit_ip))
			/* brc 0xf, <exit> */
			EMIT4_PCREL_RIC(0xa7040000, 0xf, jit->exit_ip);
		else
			/* brcl 0xf, <exit> */
			EMIT6_PCREL_RILC(0xc0040000, 0xf, jit->exit_ip);
		break;
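	/*
	 * brc carries a 16-bit relative offset (about +-64 KiB), brcl a
	 * 32-bit one; can_use_rel() presumably checks whether the short form
	 * still reaches the epilogue at jit->exit_ip, and the long form is
	 * used on the first pass, when addresses are not yet known.
	 */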
	/*
	 * Branch relative (number of skipped instructions) to offset on
	 * condition.
	 *
	 * Condition code to mask mapping:
	 *
	 *  CC | Description        | Mask
	 *  ------------------------------
	 *   0 | Operands equal     |    8
	 *   1 | First operand low  |    4
	 *   2 | First operand high |    2
	 *   3 | Unused             |    1
	 *
	 * For s390x relative branches: ip = ip + off_bytes
	 * For BPF relative branches: insn = insn + off_insns + 1
	 *
	 * For example for s390x with offset 0 we jump to the branch
	 * instruction itself (loop) and for BPF with offset 0 we
	 * branch to the instruction behind the branch.
	 */
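	/*
	 * Worked example for the mapping below: a BPF conditional jump with
	 * off = 3 at instruction i targets BPF instruction i + 3 + 1, so the
	 * JIT branches to addrs[i + off + 1]; a signed "greater than" uses
	 * mask 0x2000, i.e. mask bit 2, CC 2 ("first operand high").
	 */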
	case BPF_JMP | BPF_JA: /* if (true) */
		mask = 0xf000; /* j */
		goto branch_oc;
	case BPF_JMP | BPF_JSGT | BPF_K: /* ((s64) dst > (s64) imm) */
	case BPF_JMP32 | BPF_JSGT | BPF_K: /* ((s32) dst > (s32) imm) */
		mask = 0x2000; /* jh */
		goto branch_ks;
	case BPF_JMP | BPF_JSLT | BPF_K: /* ((s64) dst < (s64) imm) */
	case BPF_JMP32 | BPF_JSLT | BPF_K: /* ((s32) dst < (s32) imm) */
		mask = 0x4000; /* jl */
		goto branch_ks;
	case BPF_JMP | BPF_JSGE | BPF_K: /* ((s64) dst >= (s64) imm) */
	case BPF_JMP32 | BPF_JSGE | BPF_K: /* ((s32) dst >= (s32) imm) */
		mask = 0xa000; /* jhe */
		goto branch_ks;
	case BPF_JMP | BPF_JSLE | BPF_K: /* ((s64) dst <= (s64) imm) */
	case BPF_JMP32 | BPF_JSLE | BPF_K: /* ((s32) dst <= (s32) imm) */
		mask = 0xc000; /* jle */
		goto branch_ks;
	case BPF_JMP | BPF_JGT | BPF_K: /* (dst_reg > imm) */
	case BPF_JMP32 | BPF_JGT | BPF_K: /* ((u32) dst_reg > (u32) imm) */
		mask = 0x2000; /* jh */
		goto branch_ku;
	case BPF_JMP | BPF_JLT | BPF_K: /* (dst_reg < imm) */
	case BPF_JMP32 | BPF_JLT | BPF_K: /* ((u32) dst_reg < (u32) imm) */
		mask = 0x4000; /* jl */
		goto branch_ku;
	case BPF_JMP | BPF_JGE | BPF_K: /* (dst_reg >= imm) */
	case BPF_JMP32 | BPF_JGE | BPF_K: /* ((u32) dst_reg >= (u32) imm) */
		mask = 0xa000; /* jhe */
		goto branch_ku;
	case BPF_JMP | BPF_JLE | BPF_K: /* (dst_reg <= imm) */
	case BPF_JMP32 | BPF_JLE | BPF_K: /* ((u32) dst_reg <= (u32) imm) */
		mask = 0xc000; /* jle */
		goto branch_ku;
	case BPF_JMP | BPF_JNE | BPF_K: /* (dst_reg != imm) */
	case BPF_JMP32 | BPF_JNE | BPF_K: /* ((u32) dst_reg != (u32) imm) */
		mask = 0x7000; /* jne */
		goto branch_ku;
	case BPF_JMP | BPF_JEQ | BPF_K: /* (dst_reg == imm) */
	case BPF_JMP32 | BPF_JEQ | BPF_K: /* ((u32) dst_reg == (u32) imm) */
		mask = 0x8000; /* je */
		goto branch_ku;
	case BPF_JMP | BPF_JSET | BPF_K: /* (dst_reg & imm) */
	case BPF_JMP32 | BPF_JSET | BPF_K: /* ((u32) dst_reg & (u32) imm) */
		mask = 0x7000; /* jnz */
		if (BPF_CLASS(insn->code) == BPF_JMP32) {
			/* llilf %w1,imm (load zero extend imm) */
			EMIT6_IMM(0xc00f0000, REG_W1, imm);
			/* nr %w1,%dst */
			EMIT2(0x1400, REG_W1, dst_reg);
		} else {
			/* lgfi %w1,imm (load sign extend imm) */
			EMIT6_IMM(0xc0010000, REG_W1, imm);
			/* ngr %w1,%dst */
			EMIT4(0xb9800000, REG_W1, dst_reg);
		}
		goto branch_oc;
	case BPF_JMP | BPF_JSGT | BPF_X: /* ((s64) dst > (s64) src) */
	case BPF_JMP32 | BPF_JSGT | BPF_X: /* ((s32) dst > (s32) src) */
		mask = 0x2000; /* jh */
		goto branch_xs;
	case BPF_JMP | BPF_JSLT | BPF_X: /* ((s64) dst < (s64) src) */
	case BPF_JMP32 | BPF_JSLT | BPF_X: /* ((s32) dst < (s32) src) */
		mask = 0x4000; /* jl */
		goto branch_xs;
	case BPF_JMP | BPF_JSGE | BPF_X: /* ((s64) dst >= (s64) src) */
	case BPF_JMP32 | BPF_JSGE | BPF_X: /* ((s32) dst >= (s32) src) */
		mask = 0xa000; /* jhe */
		goto branch_xs;
	case BPF_JMP | BPF_JSLE | BPF_X: /* ((s64) dst <= (s64) src) */
	case BPF_JMP32 | BPF_JSLE | BPF_X: /* ((s32) dst <= (s32) src) */
		mask = 0xc000; /* jle */
		goto branch_xs;
	case BPF_JMP | BPF_JGT | BPF_X: /* (dst > src) */
	case BPF_JMP32 | BPF_JGT | BPF_X: /* ((u32) dst > (u32) src) */
		mask = 0x2000; /* jh */
		goto branch_xu;
	case BPF_JMP | BPF_JLT | BPF_X: /* (dst < src) */
	case BPF_JMP32 | BPF_JLT | BPF_X: /* ((u32) dst < (u32) src) */
		mask = 0x4000; /* jl */
		goto branch_xu;
	case BPF_JMP | BPF_JGE | BPF_X: /* (dst >= src) */
	case BPF_JMP32 | BPF_JGE | BPF_X: /* ((u32) dst >= (u32) src) */
		mask = 0xa000; /* jhe */
		goto branch_xu;
	case BPF_JMP | BPF_JLE | BPF_X: /* (dst <= src) */
	case BPF_JMP32 | BPF_JLE | BPF_X: /* ((u32) dst <= (u32) src) */
		mask = 0xc000; /* jle */
		goto branch_xu;
	case BPF_JMP | BPF_JNE | BPF_X: /* (dst != src) */
	case BPF_JMP32 | BPF_JNE | BPF_X: /* ((u32) dst != (u32) src) */
		mask = 0x7000; /* jne */
		goto branch_xu;
	case BPF_JMP | BPF_JEQ | BPF_X: /* (dst == src) */
	case BPF_JMP32 | BPF_JEQ | BPF_X: /* ((u32) dst == (u32) src) */
		mask = 0x8000; /* je */
		goto branch_xu;
	case BPF_JMP | BPF_JSET | BPF_X: /* (dst & src) */
	case BPF_JMP32 | BPF_JSET | BPF_X: /* ((u32) dst & (u32) src) */
	{
		bool is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;

		mask = 0x7000; /* jnz */
		/* nrk or ngrk %w1,%dst,%src */
		EMIT4_RRF((is_jmp32 ? 0xb9f40000 : 0xb9e40000),
			  REG_W1, dst_reg, src_reg);
		goto branch_oc;
branch_ks:
		is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
		/* cfi or cgfi %dst,imm */
		EMIT6_IMM(is_jmp32 ? 0xc20d0000 : 0xc20c0000,
			  dst_reg, imm);
		if (!is_first_pass(jit) &&
		    can_use_rel(jit, addrs[i + off + 1])) {
			/* brc mask,off */
			EMIT4_PCREL_RIC(0xa7040000,
					mask >> 12, addrs[i + off + 1]);
		} else {
			/* brcl mask,off */
			EMIT6_PCREL_RILC(0xc0040000,
					 mask >> 12, addrs[i + off + 1]);
		}
		break;
branch_ku:
		/* lgfi %w1,imm (load sign extend imm) */
		src_reg = REG_1;
		EMIT6_IMM(0xc0010000, src_reg, imm);
		goto branch_xu;
branch_xs:
		is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
		if (!is_first_pass(jit) &&
		    can_use_rel(jit, addrs[i + off + 1])) {
			/* crj or cgrj %dst,%src,mask,off */
			EMIT6_PCREL(0xec000000, (is_jmp32 ? 0x0076 : 0x0064),
				    dst_reg, src_reg, i, off, mask);
		} else {
			/* cr or cgr %dst,%src */
			if (is_jmp32)
				EMIT2(0x1900, dst_reg, src_reg);
			else
				EMIT4(0xb9200000, dst_reg, src_reg);
			/* brcl mask,off */
			EMIT6_PCREL_RILC(0xc0040000,
					 mask >> 12, addrs[i + off + 1]);
		}
		break;
branch_xu:
		is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
		if (!is_first_pass(jit) &&
		    can_use_rel(jit, addrs[i + off + 1])) {
			/* clrj or clgrj %dst,%src,mask,off */
			EMIT6_PCREL(0xec000000, (is_jmp32 ? 0x0077 : 0x0065),
				    dst_reg, src_reg, i, off, mask);
		} else {
			/* clr or clgr %dst,%src */
			if (is_jmp32)
				EMIT2(0x1500, dst_reg, src_reg);
			else
				EMIT4(0xb9210000, dst_reg, src_reg);
			/* brcl mask,off */
			EMIT6_PCREL_RILC(0xc0040000,
					 mask >> 12, addrs[i + off + 1]);
		}
		break;
branch_oc:
		if (!is_first_pass(jit) &&
		    can_use_rel(jit, addrs[i + off + 1])) {
			/* brc mask,off */
			EMIT4_PCREL_RIC(0xa7040000,
					mask >> 12, addrs[i + off + 1]);
		} else {
			/* brcl mask,off */
			EMIT6_PCREL_RILC(0xc0040000,
					 mask >> 12, addrs[i + off + 1]);
		}
		break;
	}
	default: /* too complex, give up */
		pr_err("Unknown opcode %02x\n", insn->code);
		return -1;
	}

	if (probe_prg != -1) {
		/*
		 * Handlers of certain exceptions leave psw.addr pointing to
		 * the instruction directly after the failing one. Therefore,
		 * create two exception table entries and also add a nop in
		 * case two probing instructions come directly after each
		 * other.
		 */
		nop_prg = jit->prg;
		/* bcr 0,%0 */
		_EMIT2(0x0700);
		err = bpf_jit_probe_mem(jit, fp, probe_prg, nop_prg);
		if (err < 0)
			return err;
	}

	return insn_count;
}
/*
 * Return whether new i-th instruction address does not violate any invariant
 */
static bool bpf_is_new_addr_sane(struct bpf_jit *jit, int i)
{
	/* On the first pass anything goes */
	if (is_first_pass(jit))
		return true;
	/* The codegen pass must not change anything */
	if (is_codegen_pass(jit))
		return jit->addrs[i] == jit->prg;
	/* Passes in between must not increase code size */
	return jit->addrs[i] >= jit->prg;
}

/*
 * Update the address of i-th instruction
 */
static int bpf_set_addr(struct bpf_jit *jit, int i)
{
	int delta;

	if (is_codegen_pass(jit)) {
		delta = jit->prg - jit->addrs[i];
		if (delta < 0)
			bpf_skip(jit, -delta);
	}
	if (WARN_ON_ONCE(!bpf_is_new_addr_sane(jit, i)))
		return -1;
	jit->addrs[i] = jit->prg;
	return 0;
}
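/*
 * Taken together, the two helpers above enforce that instruction addresses
 * may only shrink from one pass to the next and must be stable by the
 * codegen pass; if an instruction turns out shorter during codegen,
 * bpf_skip() pads the difference, presumably so that branch offsets computed
 * on the previous pass stay correct.
 */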
/*
* Compile eBPF program into s390x code
*/
bpf: s390: add JIT support for multi-function programs
This adds support for bpf-to-bpf function calls in the s390 JIT
compiler. The JIT compiler converts the bpf call instructions to
native branch instructions. After a round of the usual passes, the
start addresses of the JITed images for the callee functions are
known. Finally, to fixup the branch target addresses, we need to
perform an extra pass.
Because of the address range in which JITed images are allocated on
s390, the offsets of the start addresses of these images from
__bpf_call_base can be as large as 64 bits. So, for a function call,
the imm field of the instruction cannot be used to determine the
callee's address. Use the bpf_jit_get_func_addr() helper instead.
The patch borrows a lot from:
commit 8c11ea5ce13d ("bpf, arm64: fix getting subprog addr from aux
for calls")
commit e2c95a61656d ("bpf, ppc64: generalize fetching subprog into
bpf_jit_get_func_addr")
commit 8484ce8306f9 ("bpf: powerpc64: add JIT support for
multi-function programs")
(including the commit message).
test_verifier (5.3-rc6 with CONFIG_BPF_JIT_ALWAYS_ON=y):
without patch:
Summary: 1501 PASSED, 0 SKIPPED, 47 FAILED
with patch:
Summary: 1540 PASSED, 0 SKIPPED, 8 FAILED
Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-08-28 21:28:46 +03:00
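As a rough sketch of the scheme this message describes (this is not the s390 emitter itself, which follows below): bpf_jit_get_func_addr() is the generic helper from the BPF core, while sketch_emit_call(), emit_load_imm64() and emit_basr() are hypothetical names used only for illustration.

/* Hedged sketch: resolving a bpf-to-bpf call target in a JIT backend. */
static int sketch_emit_call(struct bpf_jit *jit, struct bpf_prog *fp,
			    const struct bpf_insn *insn, bool extra_pass)
{
	bool func_addr_fixed;
	u64 func_addr;
	int err;

	/*
	 * A callee image may live anywhere in the 64-bit address space, so
	 * insn->imm (a 32-bit offset from __bpf_call_base) cannot encode it.
	 * Ask the core for the absolute address instead; before the extra
	 * pass, subprogs are not JITed yet and a placeholder is returned,
	 * which is fine because the code is regenerated on the extra pass.
	 */
	err = bpf_jit_get_func_addr(fp, insn, extra_pass,
				    &func_addr, &func_addr_fixed);
	if (err)
		return err;
	/* Load the 64-bit target and branch-and-link to it. */
	emit_load_imm64(jit, func_addr);	/* hypothetical emitter */
	emit_basr(jit);				/* hypothetical emitter */
	return 0;
}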
static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp,
			bool extra_pass, u32 stack_depth)
{
	int i, insn_count, lit32_size, lit64_size;

	jit->lit32 = jit->lit32_start;
	jit->lit64 = jit->lit64_start;
	jit->prg = 0;
	jit->excnt = 0;

	bpf_jit_prologue(jit, stack_depth);
	if (bpf_set_addr(jit, 0) < 0)
		return -1;
	for (i = 0; i < fp->len; i += insn_count) {
		insn_count = bpf_jit_insn(jit, fp, i, extra_pass, stack_depth);
		if (insn_count < 0)
			return -1;
bpf, s390: fix jit branch offset related to ldimm64
While testing some other work that required JIT modifications, I
ran into test_bpf causing a hang when the JIT was enabled on s390. The
problematic test case was the one from ddc665a4bb4b (bpf, arm64:
fix jit branch offset related to ldimm64), and it turns out that we
have a similar issue on s390 as well. In bpf_jit_prog() we
update the next instruction address after returning from bpf_jit_insn()
with an insn_count. bpf_jit_insn() returns either -1 in case of
error (e.g. unsupported insn), 1 or 2. The latter is only the
case for ldimm64, since it spans 2 insns; however, the next address
was only set to i + 1 without taking the actual insn_count into
account, so the fix is to use insn_count instead of 1. bpf_jit_enable
in mode 2 also provides a disassembly on s390:
Before fix:
000003ff800349b6: a7f40003 brc 15,3ff800349bc ; target
000003ff800349ba: 0000 unknown
000003ff800349bc: e3b0f0700024 stg %r11,112(%r15)
000003ff800349c2: e3e0f0880024 stg %r14,136(%r15)
000003ff800349c8: 0db0 basr %r11,%r0
000003ff800349ca: c0ef00000000 llilf %r14,0
000003ff800349d0: e320b0360004 lg %r2,54(%r11)
000003ff800349d6: e330b03e0004 lg %r3,62(%r11)
000003ff800349dc: ec23ffeda065 clgrj %r2,%r3,10,3ff800349b6 ; jmp
000003ff800349e2: e3e0b0460004 lg %r14,70(%r11)
000003ff800349e8: e3e0b04e0004 lg %r14,78(%r11)
000003ff800349ee: b904002e lgr %r2,%r14
000003ff800349f2: e3b0f0700004 lg %r11,112(%r15)
000003ff800349f8: e3e0f0880004 lg %r14,136(%r15)
000003ff800349fe: 07fe bcr 15,%r14
After fix:
000003ff80ef3db4: a7f40003 brc 15,3ff80ef3dba
000003ff80ef3db8: 0000 unknown
000003ff80ef3dba: e3b0f0700024 stg %r11,112(%r15)
000003ff80ef3dc0: e3e0f0880024 stg %r14,136(%r15)
000003ff80ef3dc6: 0db0 basr %r11,%r0
000003ff80ef3dc8: c0ef00000000 llilf %r14,0
000003ff80ef3dce: e320b0360004 lg %r2,54(%r11)
000003ff80ef3dd4: e330b03e0004 lg %r3,62(%r11)
000003ff80ef3dda: ec230006a065 clgrj %r2,%r3,10,3ff80ef3de6 ; jmp
000003ff80ef3de0: e3e0b0460004 lg %r14,70(%r11)
000003ff80ef3de6: e3e0b04e0004 lg %r14,78(%r11) ; target
000003ff80ef3dec: b904002e lgr %r2,%r14
000003ff80ef3df0: e3b0f0700004 lg %r11,112(%r15)
000003ff80ef3df6: e3e0f0880004 lg %r14,136(%r15)
000003ff80ef3dfc: 07fe bcr 15,%r14
test_bpf.ko suite runs fine after the fix.
Fixes: 054623105728 ("s390/bpf: Add s390x eBPF JIT compiler backend")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-08-04 14:20:54 +02:00
		/* Next instruction address */
		if (bpf_set_addr(jit, i + insn_count) < 0)
			return -1;
	}
	bpf_jit_epilogue(jit, stack_depth);
	lit32_size = jit->lit32 - jit->lit32_start;
	lit64_size = jit->lit64 - jit->lit64_start;
	jit->lit32_start = jit->prg;
	if (lit32_size)
		jit->lit32_start = ALIGN(jit->lit32_start, 4);
	jit->lit64_start = jit->lit32_start + lit32_size;
	if (lit64_size)
		jit->lit64_start = ALIGN(jit->lit64_start, 8);
	jit->size = jit->lit64_start + lit64_size;
	jit->size_prg = jit->prg;

	if (WARN_ON_ONCE(fp->aux->extable &&
			 jit->excnt != fp->aux->num_exentries))
		/* Verifier bug - too many entries. */
		return -1;

	return 0;
}
bool bpf_jit_needs_zext(void)
{
	return true;
}
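Returning true here tells the verifier that this JIT does not implicitly clear the upper 32 bits on 32-bit ALU writes, so the verifier should insert explicit zero-extensions where they are needed; the note below is only a rough illustration of the effect, not verifier code.

/*
 * Effect, roughly: for a 32-bit op such as "w1 = w2 + 1" whose full 64-bit
 * value is read later, the verifier patches in a 32-bit move of the register
 * onto itself that is marked as a zero-extension, so the JIT can emit plain
 * 32-bit s390 instructions without worrying about stale upper halves.
 */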
struct s390_jit_data {
	struct bpf_binary_header *header;
	struct bpf_jit ctx;
	int pass;
};

static struct bpf_binary_header *bpf_jit_alloc(struct bpf_jit *jit,
					       struct bpf_prog *fp)
{
	struct bpf_binary_header *header;
	u32 extable_size;
	u32 code_size;

	/* We need two entries per insn. */
	fp->aux->num_exentries *= 2;
	code_size = roundup(jit->size,
			    __alignof__(struct exception_table_entry));
	extable_size = fp->aux->num_exentries *
		sizeof(struct exception_table_entry);
	header = bpf_jit_binary_alloc(code_size + extable_size, &jit->prg_buf,
				      8, jit_fill_hole);
	if (!header)
		return NULL;
	fp->aux->extable = (struct exception_table_entry *)
		(jit->prg_buf + code_size);
	return header;
}
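A worked layout example with made-up numbers may help picture what bpf_jit_alloc() produces (the figures below are illustrative, not measured):

/*
 * If jit->size == 1000 bytes and the verifier counted 3 exception table
 * entries (doubled to 6 here, since each probing insn gets one entry for
 * itself and one for its trailing nop), then code_size is 1000 rounded up
 * to the alignment of struct exception_table_entry, extable_size is
 * 6 * sizeof(struct exception_table_entry), and the binary image is laid
 * out as [ code | extable ], with fp->aux->extable pointing just past the
 * code at jit->prg_buf + code_size.
 */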
/*
 * Compile eBPF program "fp"
 */
struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
{
	u32 stack_depth = round_up(fp->aux->stack_depth, 8);
	struct bpf_prog *tmp, *orig_fp = fp;
	struct bpf_binary_header *header;
	struct s390_jit_data *jit_data;
	bool tmp_blinded = false;
	bool extra_pass = false;
	struct bpf_jit jit;
	int pass;

	if (!fp->jit_requested)
		return orig_fp;

	tmp = bpf_jit_blind_constants(fp);
	/*
	 * If blinding was requested and we failed during blinding,
	 * we must fall back to the interpreter.
	 */
	if (IS_ERR(tmp))
		return orig_fp;
	if (tmp != fp) {
		tmp_blinded = true;
		fp = tmp;
	}
	jit_data = fp->aux->jit_data;
	if (!jit_data) {
		jit_data = kzalloc(sizeof(*jit_data), GFP_KERNEL);
		if (!jit_data) {
			fp = orig_fp;
			goto out;
		}
		fp->aux->jit_data = jit_data;
	}
	if (jit_data->ctx.addrs) {
		jit = jit_data->ctx;
		header = jit_data->header;
		extra_pass = true;
		pass = jit_data->pass + 1;
		goto skip_init_ctx;
	}

	memset(&jit, 0, sizeof(jit));
	jit.addrs = kvcalloc(fp->len + 1, sizeof(*jit.addrs), GFP_KERNEL);
	if (jit.addrs == NULL) {
		fp = orig_fp;
		goto free_addrs;
	}
	/*
	 * Three initial passes:
	 *   - 1/2: Determine clobbered registers
	 *   - 3:   Calculate program size and addrs array
	 */
	for (pass = 1; pass <= 3; pass++) {
		if (bpf_jit_prog(&jit, fp, extra_pass, stack_depth)) {
			fp = orig_fp;
			goto free_addrs;
		}
	}
	/*
	 * Final pass: Allocate and generate program
	 */
	header = bpf_jit_alloc(&jit, fp);
	if (!header) {
		fp = orig_fp;
		goto free_addrs;
	}
skip_init_ctx:
	if (bpf_jit_prog(&jit, fp, extra_pass, stack_depth)) {
		bpf_jit_binary_free(header);
		fp = orig_fp;
		goto free_addrs;
	}
	if (bpf_jit_enable > 1) {
		bpf_jit_dump(fp->len, jit.size, pass, jit.prg_buf);
		print_fn_code(jit.prg_buf, jit.size_prg);
	}
	if (!fp->is_func || extra_pass) {
		bpf_jit_binary_lock_ro(header);
	} else {
		jit_data->header = header;
		jit_data->ctx = jit;
		jit_data->pass = pass;
	}
	fp->bpf_func = (void *) jit.prg_buf;
	fp->jited = 1;
	fp->jited_len = jit.size;
	if (!fp->is_func || extra_pass) {
		bpf_prog_fill_jited_linfo(fp, jit.addrs + 1);
free_addrs:
		kvfree(jit.addrs);
		kfree(jit_data);
		fp->aux->jit_data = NULL;
	}
out:
	if (tmp_blinded)
		bpf_jit_prog_release_other(fp, fp == orig_fp ?
					   tmp : orig_fp);
	return fp;
}