bpf: Fix a data-race around bpf_jit_limit.
While bpf_jit_limit is being read, it can be changed concurrently via sysctl
(WRITE_ONCE() in __do_proc_doulongvec_minmax()). The size of bpf_jit_limit is
long, so we need to add a paired READ_ONCE() to avoid load-tearing.
Fixes: ede95a63b5 ("bpf: add bpf_jit_limit knob to restrict unpriv allocations")
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220823215804.2177-1-kuniyu@amazon.com
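
For context, the change relies on the standard WRITE_ONCE()/READ_ONCE() pairing for a
long-sized variable shared between a sysctl writer and lockless readers. Below is a
minimal sketch in kernel-style C; the helper names around bpf_jit_limit are
illustrative only and are not taken from the tree.

#include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE() */

static long bpf_jit_limit;

/* Writer side (what the sysctl handler effectively does): one untorn store. */
static void jit_limit_set(long new_limit)
{
	WRITE_ONCE(bpf_jit_limit, new_limit);
}

/*
 * Reader side: one untorn load, so a concurrent update cannot be observed
 * half-written even if the compiler would otherwise split a plain long
 * access into smaller loads.
 */
static bool jit_limit_exceeded(long charged)
{
	return charged > READ_ONCE(bpf_jit_limit);
}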
commit 0947ae1121 (parent 7d6620f107)
@@ -971,7 +971,7 @@ pure_initcall(bpf_jit_charge_init);
 
 int bpf_jit_charge_modmem(u32 size)
 {
-	if (atomic_long_add_return(size, &bpf_jit_current) > bpf_jit_limit) {
+	if (atomic_long_add_return(size, &bpf_jit_current) > READ_ONCE(bpf_jit_limit)) {
 		if (!bpf_capable()) {
 			atomic_long_sub(size, &bpf_jit_current);
 			return -EPERM;