linux/kernel/bpf
Andrey Ignatov d7af7e497f bpf: Fix possible out of bound write in narrow load handling
Fix a verifier bug found by the smatch static checker in [0].

To the best of my knowledge this problem has never been seen in
production. Fixing it still seems to be a good idea, since it's hard to
say for sure whether a combination of convert_ctx_access() and a narrow
load could lead to an out-of-bounds write.

When a narrow load is handled, one or two new instructions are appended
to the insn_buf array, but until now the only bounds check was

	cnt >= ARRAY_SIZE(insn_buf)

which only guarantees one free slot: it is safe to write
insn_buf[cnt++] once, and a second write goes out of bounds. That is
exactly what can happen when `shift` is non-zero, because then both a
BPF_RSH and a BPF_AND instruction are emitted.
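
For illustration (an assumed example, not taken from the report): on a
little-endian host, a 1-byte narrow load at byte offset 1 of a 4-byte
context field yields shift = 8 and mask = 0xff, so the rewrite appends
two instructions:

	/* shift = bpf_ctx_narrow_access_offset(1, 1, 4) * 8 = 8; with
	 * shift == 0 only the BPF_AND below would be appended.
	 */
	insn_buf[cnt++] = BPF_ALU32_IMM(BPF_RSH, insn->dst_reg, 8);
	insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg, 0xff);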

Fix it by making sure that if the BPF_RSH instruction has to be added in
addition to BPF_AND then there is enough space for two more instructions
in insn_buf.
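
A minimal sketch of that check, assuming it is placed right after
`shift` is computed (the exact verbose() message wording is an
assumption made for illustration):

	/* The BPF_RSH + BPF_AND pair needs two free slots in insn_buf,
	 * but the earlier cnt >= ARRAY_SIZE(insn_buf) check only
	 * guarantees one, so bail out when shift is set and a second
	 * slot is not available.
	 */
	if (shift && cnt + 1 >= ARRAY_SIZE(insn_buf)) {
		verbose(env, "bpf verifier narrow ctx load misconfigured\n");
		return -EINVAL;
	}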

The full report [0] is below:

kernel/bpf/verifier.c:12304 convert_ctx_accesses() warn: offset 'cnt' incremented past end of array
kernel/bpf/verifier.c:12311 convert_ctx_accesses() warn: offset 'cnt' incremented past end of array

kernel/bpf/verifier.c
    12282
    12283 			insn->off = off & ~(size_default - 1);
    12284 			insn->code = BPF_LDX | BPF_MEM | size_code;
    12285 		}
    12286
    12287 		target_size = 0;
    12288 		cnt = convert_ctx_access(type, insn, insn_buf, env->prog,
    12289 					 &target_size);
    12290 		if (cnt == 0 || cnt >= ARRAY_SIZE(insn_buf) ||
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Bounds check.

    12291 		    (ctx_field_size && !target_size)) {
    12292 			verbose(env, "bpf verifier is misconfigured\n");
    12293 			return -EINVAL;
    12294 		}
    12295
    12296 		if (is_narrower_load && size < target_size) {
    12297 			u8 shift = bpf_ctx_narrow_access_offset(
    12298 				off, size, size_default) * 8;
    12299 			if (ctx_field_size <= 4) {
    12300 				if (shift)
    12301 					insn_buf[cnt++] = BPF_ALU32_IMM(BPF_RSH,
                                                         ^^^^^
increment beyond end of array

    12302 									insn->dst_reg,
    12303 									shift);
--> 12304 				insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
                                                 ^^^^^
out of bounds write

    12305 								(1 << size * 8) - 1);
    12306 			} else {
    12307 				if (shift)
    12308 					insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
    12309 									insn->dst_reg,
    12310 									shift);
    12311 				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
                                        ^^^^^^^^^^^^^^^
Same.

    12312 								(1ULL << size * 8) - 1);
    12313 			}
    12314 		}
    12315
    12316 		new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
    12317 		if (!new_prog)
    12318 			return -ENOMEM;
    12319
    12320 		delta += cnt - 1;
    12321
    12322 		/* keep walking new program and skip insns we just inserted */
    12323 		env->prog = new_prog;
    12324 		insn      = new_prog->insnsi + i + delta;
    12325 	}
    12326
    12327 	return 0;
    12328 }

[0] https://lore.kernel.org/bpf/20210817050843.GA21456@kili/

v1->v2:
- clarify that the problem was only seen by the static checker, not in prod;

Fixes: 46f53a65d2 ("bpf: Allow narrow loads with offset > 0")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210820163935.1902398-1-rdna@fb.com
2021-08-24 14:32:26 -07:00
preload libbpf: Move BPF_SEQ_PRINTF and BPF_SNPRINTF to bpf_helpers.h 2021-05-26 10:45:41 -07:00
arraymap.c bpf: Add map side support for bpf timers. 2021-07-15 22:31:10 +02:00
bpf_inode_storage.c bpf: Fix spelling mistakes 2021-05-24 21:13:05 -07:00
bpf_iter.c bpf: Refactor BPF_PROG_RUN into a function 2021-08-17 00:45:07 +02:00
bpf_local_storage.c bpf: Prevent deadlock from recursive bpf_task_storage_[get|delete] 2021-02-26 11:51:48 -08:00
bpf_lru_list.c bpf_lru_list: Read double-checked variable once without lock 2021-02-10 15:54:26 -08:00
bpf_lru_list.h bpf: Fix a typo "inacitve" -> "inactive" 2020-04-06 21:54:10 +02:00
bpf_lsm.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next 2021-06-17 11:54:56 -07:00
bpf_struct_ops_types.h
bpf_struct_ops.c bpf: Fix fexit trampoline. 2021-03-18 00:22:51 +01:00
bpf_task_storage.c bpf: Make symbol 'bpf_task_storage_busy' static 2021-03-16 12:24:20 -07:00
btf.c bpf: Emit better log message if bpf_iter ctx arg btf_id == 0 2021-07-29 15:10:11 -07:00
cgroup.c bpf: Migrate cgroup_bpf to internal cgroup_bpf_attach_type enum 2021-08-23 17:50:24 -07:00
core.c bpf: Undo off-by-one in interpreter tail call count limit 2021-08-19 18:33:37 +02:00
cpumap.c bpf: cpumap: Implement generic cpumap 2021-07-07 20:01:45 -07:00
devmap.c bpf, devmap: Exclude XDP broadcast to master device 2021-08-09 23:25:14 +02:00
disasm.c bpf: Introduce BPF nospec instruction for mitigating Spectre v4 2021-07-29 00:20:56 +02:00
disasm.h
dispatcher.c bpf: Remove bpf_image tree 2020-03-13 12:49:52 -07:00
hashtab.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net 2021-08-13 06:41:22 -07:00
helpers.c bpf: Support "%c" in bpf_bprintf_prepare(). 2021-08-15 00:13:33 -07:00
inode.c bpf: Fix regression on BPF_OBJ_GET with non-O_RDWR flags 2021-06-22 14:57:43 +02:00
Kconfig sock_map: Relax config dependency to CONFIG_NET 2021-07-15 18:17:49 -07:00
local_storage.c bpf: Increase supported cgroup storage value size 2021-07-27 15:59:29 -07:00
lpm_trie.c bpf: Allow RCU-protected lookups to happen from bh context 2021-06-24 19:41:15 +02:00
Makefile bpf: Enable task local storage for tracing programs 2021-02-26 11:51:47 -08:00
map_in_map.c bpf: Remember BTF of inner maps. 2021-07-15 22:31:10 +02:00
map_in_map.h bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
map_iter.c bpf: Implement link_query callbacks in map element iterators 2020-08-21 14:01:39 -07:00
net_namespace.c bpf: Add support for forced LINK_DETACH command 2020-08-01 20:38:28 -07:00
offload.c
percpu_freelist.c bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
percpu_freelist.h bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
prog_iter.c bpf: Refactor bpf_iter_reg to have separate seq_info member 2020-07-25 20:16:32 -07:00
queue_stack_maps.c bpf: Eliminate rlimit-based memory accounting for queue_stack_maps maps 2020-12-02 18:32:46 -08:00
reuseport_array.c bpf: Fix spelling mistakes 2021-05-24 21:13:05 -07:00
ringbuf.c bpf: Fix false positive kmemleak report in bpf_ringbuf_area_alloc() 2021-06-28 15:57:46 +02:00
stackmap.c bpf: Refcount task stack in bpf_get_task_stack 2021-04-01 13:58:07 -07:00
syscall.c bpf: Use kvmalloc for map keys in syscalls 2021-08-20 00:09:49 +02:00
sysfs_btf.c bpf: Load and verify kernel module BTFs 2020-11-10 15:25:53 -08:00
task_iter.c bpf: Introduce task_vma bpf_iter 2021-02-12 12:56:53 -08:00
tnum.c bpf, tnums: Provably sound, faster, and more precise algorithm for tnum_mul 2021-06-01 13:34:15 +02:00
trampoline.c bpf: Refactor BPF_PROG_RUN into a function 2021-08-17 00:45:07 +02:00
verifier.c bpf: Fix possible out of bound write in narrow load handling 2021-08-24 14:32:26 -07:00