linux/kernel/bpf
Daniel Borkmann 179ee84a89 bpf: Fix incorrect pruning due to atomic fetch precision tracking
For a BPF_STX instruction with BPF_ATOMIC and BPF_FETCH set, the src
register (or r0 in case of BPF_CMPXCHG) is not only an input but also
acts as a destination, receiving the old value from the memory
location.

The current backtracking logic in backtrack_insn() does not account
for this: it treats atomic fetch operations the same as regular stores,
where the src register is only an input. As a result, backtrack_insn()
fails to propagate precision to the stack location, which then never
gets marked as precise!

Later, the verifier's path pruning can then incorrectly consider two
states equivalent even though they differ in their stack state. Meaning,
two branches can be treated as equivalent and thus get pruned when they
should not be.

Fix it as follows: Extend the BPF_LDX handling in backtrack_insn() to
also cover atomic fetch operations via an is_atomic_fetch_insn() helper.
When the fetch's dst register is being tracked for precision, clear it
and propagate the precision over to the stack slot instead. For
non-stack memory, the precision walk stops at the atomic instruction,
same as for regular BPF_LDX. This covers all fetch variants.

Before:

  0: (b7) r1 = 8                        ; R1=8
  1: (7b) *(u64 *)(r10 -8) = r1         ; R1=8 R10=fp0 fp-8=8
  2: (b7) r2 = 0                        ; R2=0
  3: (db) r2 = atomic64_fetch_add((u64 *)(r10 -8), r2)          ; R2=8 R10=fp0 fp-8=mmmmmmmm
  4: (bf) r3 = r10                      ; R3=fp0 R10=fp0
  5: (0f) r3 += r2
  mark_precise: frame0: last_idx 5 first_idx 0 subseq_idx -1
  mark_precise: frame0: regs=r2 stack= before 4: (bf) r3 = r10
  mark_precise: frame0: regs=r2 stack= before 3: (db) r2 = atomic64_fetch_add((u64 *)(r10 -8), r2)
  mark_precise: frame0: regs=r2 stack= before 2: (b7) r2 = 0
  6: R2=8 R3=fp8
  6: (b7) r0 = 0                        ; R0=0
  7: (95) exit

After:

  0: (b7) r1 = 8                        ; R1=8
  1: (7b) *(u64 *)(r10 -8) = r1         ; R1=8 R10=fp0 fp-8=8
  2: (b7) r2 = 0                        ; R2=0
  3: (db) r2 = atomic64_fetch_add((u64 *)(r10 -8), r2)          ; R2=8 R10=fp0 fp-8=mmmmmmmm
  4: (bf) r3 = r10                      ; R3=fp0 R10=fp0
  5: (0f) r3 += r2
  mark_precise: frame0: last_idx 5 first_idx 0 subseq_idx -1
  mark_precise: frame0: regs=r2 stack= before 4: (bf) r3 = r10
  mark_precise: frame0: regs=r2 stack= before 3: (db) r2 = atomic64_fetch_add((u64 *)(r10 -8), r2)
  mark_precise: frame0: regs= stack=-8 before 2: (b7) r2 = 0
  mark_precise: frame0: regs= stack=-8 before 1: (7b) *(u64 *)(r10 -8) = r1
  mark_precise: frame0: regs=r1 stack= before 0: (b7) r1 = 8
  6: R2=8 R3=fp8
  6: (b7) r0 = 0                        ; R0=0
  7: (95) exit

Fixes: 5ffa25502b ("bpf: Add instructions for atomic_[cmp]xchg")
Fixes: 5ca419f286 ("bpf: Add BPF_FETCH field / create atomic_fetch_add instruction")
Reported-by: STAR Labs SG <info@starlabs.sg>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20260331222020.401848-1-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-04-02 09:57:59 -07:00
preload
Kconfig
Makefile bpf: annotate file argument as __nullable in bpf_lsm_mmap_file 2025-12-21 10:56:33 -08:00
arena.c bpf: Lose const-ness of map in map_check_btf() 2026-02-27 15:39:00 -08:00
arraymap.c bpf: Lose const-ness of map in map_check_btf() 2026-02-27 15:39:00 -08:00
bloom_filter.c bpf: Lose const-ness of map in map_check_btf() 2026-02-27 15:39:00 -08:00
bpf_cgrp_storage.c bpf: Switch to bpf_selem_unlink_nofail in bpf_local_storage_{map_free, destroy} 2026-02-06 14:47:59 -08:00
bpf_inode_storage.c bpf: Switch to bpf_selem_unlink_nofail in bpf_local_storage_{map_free, destroy} 2026-02-06 14:47:59 -08:00
bpf_insn_array.c bpf: Lose const-ness of map in map_check_btf() 2026-02-27 15:39:00 -08:00
bpf_iter.c Convert 'alloc_obj' family to use the new default GFP_KERNEL argument 2026-02-21 17:09:51 -08:00
bpf_local_storage.c bpf: Retire rcu_trace_implies_rcu_gp() from local storage 2026-02-27 15:39:00 -08:00
bpf_lru_list.c
bpf_lru_list.h
bpf_lsm.c bpf: annotate file argument as __nullable in bpf_lsm_mmap_file 2025-12-21 10:56:33 -08:00
bpf_lsm_proto.c bpf: annotate file argument as __nullable in bpf_lsm_mmap_file 2025-12-21 10:56:33 -08:00
bpf_struct_ops.c Convert 'alloc_obj' family to use the new default GFP_KERNEL argument 2026-02-21 17:09:51 -08:00
bpf_task_storage.c bpf: Switch to bpf_selem_unlink_nofail in bpf_local_storage_{map_free, destroy} 2026-02-06 14:47:59 -08:00
btf.c bpf: Release module BTF IDR before module unload 2026-03-18 17:26:40 -07:00
btf_iter.c
btf_relocate.c
cgroup.c Convert 'alloc_obj' family to use the new default GFP_KERNEL argument 2026-02-21 17:09:51 -08:00
cgroup_iter.c bpf: add new BPF_CGROUP_ITER_CHILDREN control option 2026-01-27 09:05:54 -08:00
core.c bpf: Fix undefined behavior in interpreter sdiv/smod for INT_MIN 2026-03-21 13:12:16 -07:00
cpumap.c bpf: Fix race in cpumap on PREEMPT_RT 2026-02-27 16:07:14 -08:00
cpumask.c bpf: Remove redundant KF_TRUSTED_ARGS flag from all kfuncs 2026-01-02 12:04:28 -08:00
crypto.c Convert 'alloc_obj' family to use the new default GFP_KERNEL argument 2026-02-21 17:09:51 -08:00
devmap.c bpf: Fix race in devmap on PREEMPT_RT 2026-02-27 16:08:10 -08:00
disasm.c
disasm.h
dispatcher.c
dmabuf_iter.c bpf: Fix truncated dmabuf iterator reads 2025-12-09 23:48:34 -08:00
hashtab.c bpf: Lose const-ness of map in map_check_btf() 2026-02-27 15:39:00 -08:00
helpers.c Convert 'alloc_obj' family to use the new default GFP_KERNEL argument 2026-02-21 17:09:51 -08:00
inode.c Convert 'alloc_obj' family to use the new default GFP_KERNEL argument 2026-02-21 17:09:51 -08:00
kmem_cache_iter.c
link_iter.c
liveness.c treewide: Replace kmalloc with kmalloc_obj for non-scalar types 2026-02-21 01:02:28 -08:00
local_storage.c bpf: Lose const-ness of map in map_check_btf() 2026-02-27 15:39:00 -08:00
log.c
lpm_trie.c bpf: Lose const-ness of map in map_check_btf() 2026-02-27 15:39:00 -08:00
map_in_map.c
map_in_map.h
map_iter.c bpf: Remove redundant KF_TRUSTED_ARGS flag from all kfuncs 2026-01-02 12:04:28 -08:00
memalloc.c bpf: Register dtor for freeing special fields 2026-02-27 15:39:00 -08:00
mmap_unlock_work.h
mprog.c
net_namespace.c treewide: Replace kmalloc with kmalloc_obj for non-scalar types 2026-02-21 01:02:28 -08:00
offload.c Convert 'alloc_obj' family to use the new default GFP_KERNEL argument 2026-02-21 17:09:51 -08:00
percpu_freelist.c
percpu_freelist.h
prog_iter.c
queue_stack_maps.c
range_tree.c bpf: arena: Reintroduce memcg accounting 2026-01-02 14:31:59 -08:00
range_tree.h
relo_core.c
reuseport_array.c
ringbuf.c bpf: Add SPDX license identifiers to a few files 2026-01-16 14:50:00 -08:00
rqspinlock.c mm.git review status for linus..mm-nonmm-stable 2026-02-12 12:13:01 -08:00
rqspinlock.h
stackmap.c
stream.c bpf: Add bpf_stream_print_stack stack dumping kfunc 2026-02-03 10:41:16 -08:00
syscall.c bpf: Fix grace period wait for tracepoint bpf_link 2026-03-31 16:01:13 -07:00
sysfs_btf.c
task_iter.c
tcx.c treewide: Replace kmalloc with kmalloc_obj for non-scalar types 2026-02-21 01:02:28 -08:00
tnum.c bpf: Introduce tnum_step to step through tnum's members 2026-02-27 16:11:50 -08:00
token.c treewide: Replace kmalloc with kmalloc_obj for non-scalar types 2026-02-21 01:02:28 -08:00
trampoline.c bpf: Fix a UAF issue in bpf_trampoline_link_cgroup_shim 2026-03-03 15:13:51 -08:00
verifier.c bpf: Fix incorrect pruning due to atomic fetch precision tracking 2026-04-02 09:57:59 -07:00