Merge branch 'bpf-token'
Andrii Nakryiko says:

====================
BPF token

This patch set is a combination of three BPF token-related patch sets ([0], [1], [2]) with fixes ([3]) to the kernel-side token_fd passing APIs incorporated into the relevant patches, the bpf_token_capable() changes requested by Christian Brauner, and the necessary libbpf and BPF selftests adjustments.

This patch set introduces the ability to delegate a subset of BPF subsystem functionality from a privileged system-wide daemon (e.g., systemd or any other container manager) to a *trusted* unprivileged application, through special mount options for a userns-bound BPF FS. Trust is the key here. This functionality is not about allowing unconditional unprivileged BPF usage. Establishing trust is completely up to the discretion of the respective privileged application that creates and mounts a BPF FS instance with delegation enabled; different production setups can and do achieve it through a combination of different means (signing, LSM, code reviews, etc.), and it is undesirable and infeasible for the kernel to enforce any particular way of validating the trustworthiness of a particular process.

The main motivation for this work is the desire to enable containerized BPF applications to be used together with user namespaces. This is currently impossible because CAP_BPF, required for BPF subsystem usage, cannot be namespaced or sandboxed, as a general rule. E.g., tracing BPF programs, thanks to BPF helpers like bpf_probe_read_kernel() and bpf_probe_read_user(), can safely read arbitrary memory, and it's impossible to ensure that they only read memory of processes belonging to any given namespace. This means it is impossible to have a mechanically verifiable, namespace-aware CAP_BPF capability, and so another mechanism for safe usage of BPF functionality is necessary. BPF FS delegation mount options and a BPF token derived from such a BPF FS instance are such a mechanism.
The kernel makes no assumption about what "trusted" constitutes in any particular case; it's up to specific privileged applications and their surrounding infrastructure to decide that. What the kernel provides is a set of APIs to set up and mount a special BPF FS instance and derive BPF tokens from it. BPF FS and BPF tokens are both bound to their owning userns and in this way are constrained inside the intended container. Users can then pass a BPF token FD to privileged bpf() syscall commands, like BPF map creation and BPF program loading, to perform such operations without having init-userns privileges.

This version incorporates feedback and suggestions ([4]) received on earlier iterations of the BPF token approach: instead of allowing BPF tokens to be created directly assuming capable(CAP_SYS_ADMIN), we instead enhance BPF FS to accept a few new delegation mount options. If these options are used and BPF FS itself is properly created, set up, and mounted inside the user-namespaced container, a user application is able to derive a BPF token object from the BPF FS instance and pass that token to the bpf() syscall. As explained in patch #3, the BPF token itself doesn't grant access to BPF functionality; instead it allows the kernel to do namespaced capability checks (ns_capable() vs capable()) for CAP_BPF, CAP_PERFMON, CAP_NET_ADMIN, and CAP_SYS_ADMIN, as applicable. So it forms one half of the puzzle and gives container managers and sysadmins safe and flexible configuration options: determining which containers get delegation of BPF functionality through BPF FS, and then which applications within such containers are allowed to perform bpf() commands, based on namespaced capabilities.

A previous attempt at addressing this very same problem ([5]) used an authoritative LSM approach, but was conclusively rejected by upstream LSM maintainers. The BPF token concept does not change anything about the LSM approach, but it can be combined with LSM hooks for very fine-grained security policy.
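As a concrete illustration of the delegation mount options, a privileged container manager might mount a userns-bound BPF FS like this. This is a sketch under the assumptions of this patch set; the option names and the "any" / lower-cased-enum-name syntax are the ones emitted by bpf_show_options() in the diff below.

```shell
# Privileged container manager: create a delegation-enabled BPF FS
# instance inside the container's mount namespace.
mount -t bpf bpffs /path/to/container/sys/fs/bpf \
      -o delegate_cmds=any,delegate_maps=any,delegate_progs=any,delegate_attachs=any

# Individual values use lower-cased enum names without the common prefix,
# ':'-separated; e.g., only allow map creation and program loading:
#   -o delegate_cmds=map_create:prog_load
```

A trusted application inside the container can then derive a BPF token from an FD of that mount point via BPF_TOKEN_CREATE.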
Some ideas about making BPF token more convenient to use with LSM (in particular custom BPF LSM programs) were briefly described in a recent LSF/MM/BPF 2023 presentation ([6]): e.g., an ability to specify user-provided data (context), which in combination with BPF LSM would allow implementing very dynamic and fine-grained custom security policies on top of BPF token. In the interest of minimizing API surface area and discussions, this was relegated to follow-up patches, as it's not essential to the fundamental concept of a delegatable BPF token.

It should be noted that BPF token is conceptually quite similar to the idea of a /dev/bpf device file, proposed by Song a while ago ([7]). The biggest difference is the idea of using a virtual anon_inode file to hold the BPF token and allowing multiple independent instances of them, each (potentially) with its own set of restrictions. Also, crucially, the BPF token approach does not use any special stateful task-scoped flags. Instead, the bpf() syscall accepts a token_fd parameter explicitly for each relevant BPF command. This addresses the main concerns brought up during the /dev/bpf discussion, and fits better with the overall BPF subsystem design.

The second part of this patch set adds full support for BPF token in libbpf's high-level BPF object API. A good chunk of the changes rework libbpf's feature detection internals, which are the most affected by BPF token presence. Besides internal refactorings, libbpf allows the user to pass the location of the BPF FS from which a BPF token should be created. This can be done explicitly through a new bpf_object_open_opts.bpf_token_path field. But we also add implicit BPF token creation logic to the BPF object load step, even without any explicit involvement of the user: if the environment is set up properly, a BPF token will be created transparently and used implicitly. This allows all existing applications to gain BPF token support by just linking with the latest version of the libbpf library.
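A minimal sketch of the explicit libbpf API mentioned above. It assumes a libbpf version that carries the bpf_object_open_opts.bpf_token_path field from this series; "prog.bpf.o" is a placeholder object file, not part of the patch set.

```c
/* Sketch of explicit BPF token usage via libbpf (requires a libbpf with
 * bpf_object_open_opts.bpf_token_path, added by this series). */
#include <bpf/libbpf.h>
#include <stdio.h>

int main(void)
{
	LIBBPF_OPTS(bpf_object_open_opts, opts,
		/* derive a BPF token from this delegation-enabled BPF FS */
		.bpf_token_path = "/sys/fs/bpf",
	);
	struct bpf_object *obj;

	obj = bpf_object__open_file("prog.bpf.o", &opts);
	if (!obj) {
		fprintf(stderr, "open failed\n");
		return 1;
	}
	/* the token is created and passed along with BPF_PROG_LOAD,
	 * BPF_MAP_CREATE, etc. during the load step */
	if (bpf_object__load(obj)) {
		fprintf(stderr, "load failed\n");
		bpf_object__close(obj);
		return 1;
	}
	bpf_object__close(obj);
	return 0;
}
```

Without bpf_token_path, the same token creation happens implicitly during load when a delegation-enabled BPF FS is mounted at the default location.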
No source code modifications are required. All of that is under the assumption that the privileged container management agent has properly set up a default BPF FS instance at /sys/fs/bpf to allow BPF token creation.

libbpf also supports overriding the default BPF FS location for BPF token creation through the LIBBPF_BPF_TOKEN_PATH envvar. This allows admins or container managers to mount a BPF token-enabled BPF FS at a non-standard location without the need to coordinate with applications. LIBBPF_BPF_TOKEN_PATH can also be used to disable implicit BPF token creation altogether, by setting it to an empty value.

  [0] https://patchwork.kernel.org/project/netdevbpf/list/?series=805707&state=*
  [1] https://patchwork.kernel.org/project/netdevbpf/list/?series=810260&state=*
  [2] https://patchwork.kernel.org/project/netdevbpf/list/?series=809800&state=*
  [3] https://patchwork.kernel.org/project/netdevbpf/patch/20231219053150.336991-1-andrii@kernel.org/
  [4] https://lore.kernel.org/bpf/20230704-hochverdient-lehne-eeb9eeef785e@brauner/
  [5] https://lore.kernel.org/bpf/20230412043300.360803-1-andrii@kernel.org/
  [6] http://vger.kernel.org/bpfconf2023_material/Trusted_unprivileged_BPF_LSFMM2023.pdf
  [7] https://lore.kernel.org/bpf/20190627201923.2589391-2-songliubraving@fb.com/

v1->v2:
  - disable BPF token creation in init userns, and simplify bpf_token_capable() logic (Christian);
  - use kzalloc/kfree instead of kvzalloc/kvfree (Linus);
  - few more selftest cases to validate LSM and BPF token interactions.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
====================

Link: https://lore.kernel.org/r/20240124022127.2379740-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
commit c8632acf19
@@ -110,7 +110,7 @@ lirc_mode2_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
	case BPF_FUNC_get_prandom_u32:
		return &bpf_get_prandom_u32_proto;
	case BPF_FUNC_trace_printk:
-		if (perfmon_capable())
+		if (bpf_token_capable(prog->aux->token, CAP_PERFMON))
			return bpf_get_trace_printk_proto();
		fallthrough;
	default:
@@ -52,6 +52,10 @@ struct module;
 struct bpf_func_state;
 struct ftrace_ops;
 struct cgroup;
+struct bpf_token;
+struct user_namespace;
+struct super_block;
+struct inode;
 
 extern struct idr btf_idr;
 extern spinlock_t btf_idr_lock;

@@ -1485,6 +1489,7 @@ struct bpf_prog_aux {
 #ifdef CONFIG_SECURITY
	void *security;
 #endif
+	struct bpf_token *token;
	struct bpf_prog_offload *offload;
	struct btf *btf;
	struct bpf_func_info *func_info;

@@ -1609,6 +1614,31 @@ struct bpf_link_primer {
	u32 id;
 };
 
+struct bpf_mount_opts {
+	kuid_t uid;
+	kgid_t gid;
+	umode_t mode;
+
+	/* BPF token-related delegation options */
+	u64 delegate_cmds;
+	u64 delegate_maps;
+	u64 delegate_progs;
+	u64 delegate_attachs;
+};
+
+struct bpf_token {
+	struct work_struct work;
+	atomic64_t refcnt;
+	struct user_namespace *userns;
+	u64 allowed_cmds;
+	u64 allowed_maps;
+	u64 allowed_progs;
+	u64 allowed_attachs;
+#ifdef CONFIG_SECURITY
+	void *security;
+#endif
+};
+
 struct bpf_struct_ops_value;
 struct btf_member;
 

@@ -2097,6 +2127,7 @@ static inline void bpf_enable_instrumentation(void)
	migrate_enable();
 }
 
+extern const struct super_operations bpf_super_ops;
 extern const struct file_operations bpf_map_fops;
 extern const struct file_operations bpf_prog_fops;
 extern const struct file_operations bpf_iter_fops;

@@ -2231,24 +2262,26 @@ static inline void bpf_map_dec_elem_count(struct bpf_map *map)
 
 extern int sysctl_unprivileged_bpf_disabled;
 
-static inline bool bpf_allow_ptr_leaks(void)
+bool bpf_token_capable(const struct bpf_token *token, int cap);
+
+static inline bool bpf_allow_ptr_leaks(const struct bpf_token *token)
 {
-	return perfmon_capable();
+	return bpf_token_capable(token, CAP_PERFMON);
 }
 
-static inline bool bpf_allow_uninit_stack(void)
+static inline bool bpf_allow_uninit_stack(const struct bpf_token *token)
 {
-	return perfmon_capable();
+	return bpf_token_capable(token, CAP_PERFMON);
 }
 
-static inline bool bpf_bypass_spec_v1(void)
+static inline bool bpf_bypass_spec_v1(const struct bpf_token *token)
 {
-	return cpu_mitigations_off() || perfmon_capable();
+	return cpu_mitigations_off() || bpf_token_capable(token, CAP_PERFMON);
 }
 
-static inline bool bpf_bypass_spec_v4(void)
+static inline bool bpf_bypass_spec_v4(const struct bpf_token *token)
 {
-	return cpu_mitigations_off() || perfmon_capable();
+	return cpu_mitigations_off() || bpf_token_capable(token, CAP_PERFMON);
 }
 
 int bpf_map_new_fd(struct bpf_map *map, int flags);

@@ -2265,8 +2298,21 @@ int bpf_link_new_fd(struct bpf_link *link);
 struct bpf_link *bpf_link_get_from_fd(u32 ufd);
 struct bpf_link *bpf_link_get_curr_or_next(u32 *id);
 
+void bpf_token_inc(struct bpf_token *token);
+void bpf_token_put(struct bpf_token *token);
+int bpf_token_create(union bpf_attr *attr);
+struct bpf_token *bpf_token_get_from_fd(u32 ufd);
+
+bool bpf_token_allow_cmd(const struct bpf_token *token, enum bpf_cmd cmd);
+bool bpf_token_allow_map_type(const struct bpf_token *token, enum bpf_map_type type);
+bool bpf_token_allow_prog_type(const struct bpf_token *token,
+			       enum bpf_prog_type prog_type,
+			       enum bpf_attach_type attach_type);
+
 int bpf_obj_pin_user(u32 ufd, int path_fd, const char __user *pathname);
 int bpf_obj_get_user(int path_fd, const char __user *pathname, int flags);
+struct inode *bpf_get_inode(struct super_block *sb, const struct inode *dir,
+			    umode_t mode);
 
 #define BPF_ITER_FUNC_PREFIX "bpf_iter_"
 #define DEFINE_BPF_ITER_FUNC(target, args...)

@@ -2507,7 +2553,8 @@ int btf_find_next_decl_tag(const struct btf *btf, const struct btf_type *pt,
 struct bpf_prog *bpf_prog_by_id(u32 id);
 struct bpf_link *bpf_link_by_id(u32 id);
 
-const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
+const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id,
+						 const struct bpf_prog *prog);
 void bpf_task_storage_free(struct task_struct *task);
 void bpf_cgrp_storage_free(struct cgroup *cgroup);
 bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);

@@ -2626,6 +2673,24 @@ static inline int bpf_obj_get_user(const char __user *pathname, int flags)
	return -EOPNOTSUPP;
 }
 
+static inline bool bpf_token_capable(const struct bpf_token *token, int cap)
+{
+	return capable(cap) || (cap != CAP_SYS_ADMIN && capable(CAP_SYS_ADMIN));
+}
+
+static inline void bpf_token_inc(struct bpf_token *token)
+{
+}
+
+static inline void bpf_token_put(struct bpf_token *token)
+{
+}
+
+static inline struct bpf_token *bpf_token_get_from_fd(u32 ufd)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
 static inline void __dev_flush(void)
 {
 }

@@ -2749,7 +2814,7 @@ static inline int btf_struct_access(struct bpf_verifier_log *log,
 }
 
 static inline const struct bpf_func_proto *
-bpf_base_func_proto(enum bpf_func_id func_id)
+bpf_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
	return NULL;
 }
@@ -1140,7 +1140,7 @@ static inline bool bpf_jit_blinding_enabled(struct bpf_prog *prog)
		return false;
	if (!bpf_jit_harden)
		return false;
-	if (bpf_jit_harden == 1 && bpf_capable())
+	if (bpf_jit_harden == 1 && bpf_token_capable(prog->aux->token, CAP_BPF))
		return false;
 
	return true;
@@ -404,10 +404,17 @@ LSM_HOOK(void, LSM_RET_VOID, audit_rule_free, void *lsmrule)
 LSM_HOOK(int, 0, bpf, int cmd, union bpf_attr *attr, unsigned int size)
 LSM_HOOK(int, 0, bpf_map, struct bpf_map *map, fmode_t fmode)
 LSM_HOOK(int, 0, bpf_prog, struct bpf_prog *prog)
-LSM_HOOK(int, 0, bpf_map_alloc_security, struct bpf_map *map)
-LSM_HOOK(void, LSM_RET_VOID, bpf_map_free_security, struct bpf_map *map)
-LSM_HOOK(int, 0, bpf_prog_alloc_security, struct bpf_prog_aux *aux)
-LSM_HOOK(void, LSM_RET_VOID, bpf_prog_free_security, struct bpf_prog_aux *aux)
+LSM_HOOK(int, 0, bpf_map_create, struct bpf_map *map, union bpf_attr *attr,
+	 struct bpf_token *token)
+LSM_HOOK(void, LSM_RET_VOID, bpf_map_free, struct bpf_map *map)
+LSM_HOOK(int, 0, bpf_prog_load, struct bpf_prog *prog, union bpf_attr *attr,
+	 struct bpf_token *token)
+LSM_HOOK(void, LSM_RET_VOID, bpf_prog_free, struct bpf_prog *prog)
+LSM_HOOK(int, 0, bpf_token_create, struct bpf_token *token, union bpf_attr *attr,
+	 struct path *path)
+LSM_HOOK(void, LSM_RET_VOID, bpf_token_free, struct bpf_token *token)
+LSM_HOOK(int, 0, bpf_token_cmd, const struct bpf_token *token, enum bpf_cmd cmd)
+LSM_HOOK(int, 0, bpf_token_capable, const struct bpf_token *token, int cap)
 #endif /* CONFIG_BPF_SYSCALL */
 
 LSM_HOOK(int, 0, locked_down, enum lockdown_reason what)
@@ -32,6 +32,7 @@
 #include <linux/string.h>
 #include <linux/mm.h>
 #include <linux/sockptr.h>
+#include <linux/bpf.h>
 #include <uapi/linux/lsm.h>
 
 struct linux_binprm;

@@ -2064,15 +2065,22 @@ static inline void securityfs_remove(struct dentry *dentry)
 union bpf_attr;
 struct bpf_map;
 struct bpf_prog;
-struct bpf_prog_aux;
+struct bpf_token;
 #ifdef CONFIG_SECURITY
 extern int security_bpf(int cmd, union bpf_attr *attr, unsigned int size);
 extern int security_bpf_map(struct bpf_map *map, fmode_t fmode);
 extern int security_bpf_prog(struct bpf_prog *prog);
-extern int security_bpf_map_alloc(struct bpf_map *map);
+extern int security_bpf_map_create(struct bpf_map *map, union bpf_attr *attr,
+				   struct bpf_token *token);
 extern void security_bpf_map_free(struct bpf_map *map);
-extern int security_bpf_prog_alloc(struct bpf_prog_aux *aux);
-extern void security_bpf_prog_free(struct bpf_prog_aux *aux);
+extern int security_bpf_prog_load(struct bpf_prog *prog, union bpf_attr *attr,
+				  struct bpf_token *token);
+extern void security_bpf_prog_free(struct bpf_prog *prog);
+extern int security_bpf_token_create(struct bpf_token *token, union bpf_attr *attr,
+				     struct path *path);
+extern void security_bpf_token_free(struct bpf_token *token);
+extern int security_bpf_token_cmd(const struct bpf_token *token, enum bpf_cmd cmd);
+extern int security_bpf_token_capable(const struct bpf_token *token, int cap);
 #else
 static inline int security_bpf(int cmd, union bpf_attr *attr,
			       unsigned int size)

@@ -2090,7 +2098,8 @@ static inline int security_bpf_prog(struct bpf_prog *prog)
	return 0;
 }
 
-static inline int security_bpf_map_alloc(struct bpf_map *map)
+static inline int security_bpf_map_create(struct bpf_map *map, union bpf_attr *attr,
+					  struct bpf_token *token)
 {
	return 0;
 }

@@ -2098,13 +2107,33 @@ static inline int security_bpf_map_alloc(struct bpf_map *map)
 static inline void security_bpf_map_free(struct bpf_map *map)
 { }
 
-static inline int security_bpf_prog_alloc(struct bpf_prog_aux *aux)
+static inline int security_bpf_prog_load(struct bpf_prog *prog, union bpf_attr *attr,
+					 struct bpf_token *token)
 {
	return 0;
 }
 
-static inline void security_bpf_prog_free(struct bpf_prog_aux *aux)
+static inline void security_bpf_prog_free(struct bpf_prog *prog)
 { }
 
+static inline int security_bpf_token_create(struct bpf_token *token, union bpf_attr *attr,
+					    struct path *path)
+{
+	return 0;
+}
+
+static inline void security_bpf_token_free(struct bpf_token *token)
+{ }
+
+static inline int security_bpf_token_cmd(const struct bpf_token *token, enum bpf_cmd cmd)
+{
+	return 0;
+}
+
+static inline int security_bpf_token_capable(const struct bpf_token *token, int cap)
+{
+	return 0;
+}
 #endif /* CONFIG_SECURITY */
 #endif /* CONFIG_BPF_SYSCALL */
@@ -847,6 +847,36 @@ union bpf_iter_link_info {
 *		Returns zero on success. On error, -1 is returned and *errno*
 *		is set appropriately.
 *
+ * BPF_TOKEN_CREATE
+ *	Description
+ *		Create BPF token with embedded information about what
+ *		BPF-related functionality it allows:
+ *		- a set of allowed bpf() syscall commands;
+ *		- a set of allowed BPF map types to be created with
+ *		  BPF_MAP_CREATE command, if BPF_MAP_CREATE itself is allowed;
+ *		- a set of allowed BPF program types and BPF program attach
+ *		  types to be loaded with BPF_PROG_LOAD command, if
+ *		  BPF_PROG_LOAD itself is allowed.
+ *
+ *		BPF token is created (derived) from an instance of BPF FS,
+ *		assuming it has necessary delegation mount options specified.
+ *		This BPF token can be passed as an extra parameter to various
+ *		bpf() syscall commands to grant BPF subsystem functionality to
+ *		unprivileged processes.
+ *
+ *		When created, BPF token is "associated" with the owning
+ *		user namespace of BPF FS instance (super block) that it was
+ *		derived from, and subsequent BPF operations performed with
+ *		BPF token would be performing capabilities checks (i.e.,
+ *		CAP_BPF, CAP_PERFMON, CAP_NET_ADMIN, CAP_SYS_ADMIN) within
+ *		that user namespace. Without BPF token, such capabilities
+ *		have to be granted in init user namespace, making bpf()
+ *		syscall incompatible with user namespace, for the most part.
+ *
+ *	Return
+ *		A new file descriptor (a nonnegative integer), or -1 if an
+ *		error occurred (in which case, *errno* is set appropriately).
+ *
 * NOTES
 *	eBPF objects (maps and programs) can be shared between processes.
 *

@@ -901,6 +931,8 @@ enum bpf_cmd {
	BPF_ITER_CREATE,
	BPF_LINK_DETACH,
	BPF_PROG_BIND_MAP,
+	BPF_TOKEN_CREATE,
+	__MAX_BPF_CMD,
 };
 
 enum bpf_map_type {

@@ -951,6 +983,7 @@ enum bpf_map_type {
	BPF_MAP_TYPE_BLOOM_FILTER,
	BPF_MAP_TYPE_USER_RINGBUF,
	BPF_MAP_TYPE_CGRP_STORAGE,
+	__MAX_BPF_MAP_TYPE
 };
 
 /* Note that tracing related programs such as

@@ -995,6 +1028,7 @@ enum bpf_prog_type {
	BPF_PROG_TYPE_SK_LOOKUP,
	BPF_PROG_TYPE_SYSCALL, /* a program that can execute syscalls */
	BPF_PROG_TYPE_NETFILTER,
+	__MAX_BPF_PROG_TYPE
 };
 
 enum bpf_attach_type {

@@ -1333,6 +1367,9 @@ enum {
 
	/* Flag for value_type_btf_obj_fd, the fd is available */
	BPF_F_VTYPE_BTF_OBJ_FD	= (1U << 15),
+
+	/* BPF token FD is passed in a corresponding command's token_fd field */
+	BPF_F_TOKEN_FD		= (1U << 16),
 };
 
 /* Flags for BPF_PROG_QUERY. */

@@ -1411,6 +1448,10 @@ union bpf_attr {
					 * type data for
					 * btf_vmlinux_value_type_id.
					 */
+		/* BPF token FD to use with BPF_MAP_CREATE operation.
+		 * If provided, map_flags should have BPF_F_TOKEN_FD flag set.
+		 */
+		__s32	map_token_fd;
	};
 
	struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */

@@ -1480,6 +1521,10 @@ union bpf_attr {
		 * truncated), or smaller (if log buffer wasn't filled completely).
		 */
		__u32		log_true_size;
+		/* BPF token FD to use with BPF_PROG_LOAD operation.
+		 * If provided, prog_flags should have BPF_F_TOKEN_FD flag set.
+		 */
+		__s32		prog_token_fd;
	};
 
	struct { /* anonymous struct used by BPF_OBJ_* commands */

@@ -1592,6 +1637,11 @@ union bpf_attr {
		 * truncated), or smaller (if log buffer wasn't filled completely).
		 */
		__u32		btf_log_true_size;
+		__u32		btf_flags;
+		/* BPF token FD to use with BPF_BTF_LOAD operation.
+		 * If provided, btf_flags should have BPF_F_TOKEN_FD flag set.
+		 */
+		__s32		btf_token_fd;
	};
 
	struct {

@@ -1722,6 +1772,11 @@ union bpf_attr {
		__u32		flags;		/* extra flags */
	} prog_bind_map;
 
+	struct { /* struct used by BPF_TOKEN_CREATE command */
+		__u32		flags;
+		__u32		bpffs_fd;
+	} token_create;
+
 } __attribute__((aligned(8)));
 
 /* The description below is an attempt at providing documentation to eBPF
@@ -6,7 +6,7 @@ cflags-nogcse-$(CONFIG_X86)$(CONFIG_CC_IS_GCC) := -fno-gcse
 endif
 CFLAGS_core.o += $(call cc-disable-warning, override-init) $(cflags-nogcse-yy)
 
-obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o
+obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o token.o
 obj-$(CONFIG_BPF_SYSCALL) += bpf_iter.o map_iter.o task_iter.o prog_iter.o link_iter.o
 obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o bloom_filter.o
 obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o
@@ -82,7 +82,7 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
	bool percpu = attr->map_type == BPF_MAP_TYPE_PERCPU_ARRAY;
	int numa_node = bpf_map_attr_numa_node(attr);
	u32 elem_size, index_mask, max_entries;
-	bool bypass_spec_v1 = bpf_bypass_spec_v1();
+	bool bypass_spec_v1 = bpf_bypass_spec_v1(NULL);
	u64 array_size, mask64;
	struct bpf_array *array;
 
@@ -260,9 +260,15 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 BTF_SET_START(sleepable_lsm_hooks)
 BTF_ID(func, bpf_lsm_bpf)
 BTF_ID(func, bpf_lsm_bpf_map)
-BTF_ID(func, bpf_lsm_bpf_map_alloc_security)
-BTF_ID(func, bpf_lsm_bpf_map_free_security)
+BTF_ID(func, bpf_lsm_bpf_map_create)
+BTF_ID(func, bpf_lsm_bpf_map_free)
 BTF_ID(func, bpf_lsm_bpf_prog)
+BTF_ID(func, bpf_lsm_bpf_prog_load)
+BTF_ID(func, bpf_lsm_bpf_prog_free)
+BTF_ID(func, bpf_lsm_bpf_token_create)
+BTF_ID(func, bpf_lsm_bpf_token_free)
+BTF_ID(func, bpf_lsm_bpf_token_cmd)
+BTF_ID(func, bpf_lsm_bpf_token_capable)
 BTF_ID(func, bpf_lsm_bprm_check_security)
 BTF_ID(func, bpf_lsm_bprm_committed_creds)
 BTF_ID(func, bpf_lsm_bprm_committing_creds)

@@ -357,9 +363,8 @@ BTF_ID(func, bpf_lsm_userns_create)
 BTF_SET_END(sleepable_lsm_hooks)
 
 BTF_SET_START(untrusted_lsm_hooks)
-BTF_ID(func, bpf_lsm_bpf_map_free_security)
-BTF_ID(func, bpf_lsm_bpf_prog_alloc_security)
-BTF_ID(func, bpf_lsm_bpf_prog_free_security)
+BTF_ID(func, bpf_lsm_bpf_map_free)
+BTF_ID(func, bpf_lsm_bpf_prog_free)
 BTF_ID(func, bpf_lsm_file_alloc_security)
 BTF_ID(func, bpf_lsm_file_free_security)
 #ifdef CONFIG_SECURITY_NETWORK
@@ -1630,7 +1630,7 @@ cgroup_dev_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
	case BPF_FUNC_perf_event_output:
		return &bpf_event_output_data_proto;
	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_base_func_proto(func_id, prog);
	}
 }
 

@@ -2191,7 +2191,7 @@ sysctl_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
	case BPF_FUNC_perf_event_output:
		return &bpf_event_output_data_proto;
	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_base_func_proto(func_id, prog);
	}
 }
 

@@ -2348,7 +2348,7 @@ cg_sockopt_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
	case BPF_FUNC_perf_event_output:
		return &bpf_event_output_data_proto;
	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_base_func_proto(func_id, prog);
	}
 }
 
@@ -682,7 +682,7 @@ static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp)
 void bpf_prog_kallsyms_add(struct bpf_prog *fp)
 {
	if (!bpf_prog_kallsyms_candidate(fp) ||
-	    !bpf_capable())
+	    !bpf_token_capable(fp->aux->token, CAP_BPF))
		return;
 
	bpf_prog_ksym_set_addr(fp);

@@ -2779,6 +2779,7 @@ void bpf_prog_free(struct bpf_prog *fp)
 
	if (aux->dst_prog)
		bpf_prog_put(aux->dst_prog);
+	bpf_token_put(aux->token);
	INIT_WORK(&aux->work, bpf_prog_free_deferred);
	schedule_work(&aux->work);
 }
@@ -1680,7 +1680,7 @@ const struct bpf_func_proto bpf_probe_read_kernel_str_proto __weak;
 const struct bpf_func_proto bpf_task_pt_regs_proto __weak;
 
 const struct bpf_func_proto *
-bpf_base_func_proto(enum bpf_func_id func_id)
+bpf_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
	switch (func_id) {
	case BPF_FUNC_map_lookup_elem:

@@ -1731,7 +1731,7 @@ bpf_base_func_proto(enum bpf_func_id func_id)
		break;
	}
 
-	if (!bpf_capable())
+	if (!bpf_token_capable(prog->aux->token, CAP_BPF))
		return NULL;
 
	switch (func_id) {

@@ -1789,7 +1789,7 @@ bpf_base_func_proto(enum bpf_func_id func_id)
		break;
	}
 
-	if (!perfmon_capable())
+	if (!bpf_token_capable(prog->aux->token, CAP_PERFMON))
		return NULL;
 
	switch (func_id) {
@@ -20,6 +20,7 @@
 #include <linux/filter.h>
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
+#include <linux/kstrtox.h>
 #include "preload/bpf_preload.h"
 
 enum bpf_type {

@@ -98,9 +99,9 @@ static const struct inode_operations bpf_prog_iops = { };
 static const struct inode_operations bpf_map_iops = { };
 static const struct inode_operations bpf_link_iops = { };
 
-static struct inode *bpf_get_inode(struct super_block *sb,
-				   const struct inode *dir,
-				   umode_t mode)
+struct inode *bpf_get_inode(struct super_block *sb,
+			    const struct inode *dir,
+			    umode_t mode)
 {
	struct inode *inode;
 
@ -594,6 +595,136 @@ struct bpf_prog *bpf_prog_get_type_path(const char *name, enum bpf_prog_type typ
|
||||
}
|
||||
EXPORT_SYMBOL(bpf_prog_get_type_path);
|
||||
|
||||
struct bpffs_btf_enums {
|
||||
const struct btf *btf;
|
||||
const struct btf_type *cmd_t;
|
||||
const struct btf_type *map_t;
|
||||
const struct btf_type *prog_t;
|
||||
const struct btf_type *attach_t;
|
||||
};
|
||||
|
||||
static int find_bpffs_btf_enums(struct bpffs_btf_enums *info)
|
||||
{
|
||||
const struct btf *btf;
|
||||
const struct btf_type *t;
|
||||
const char *name;
|
||||
int i, n;
|
||||
|
||||
memset(info, 0, sizeof(*info));
|
||||
|
||||
btf = bpf_get_btf_vmlinux();
|
||||
if (IS_ERR(btf))
|
||||
return PTR_ERR(btf);
|
||||
if (!btf)
|
||||
return -ENOENT;
|
||||
|
||||
info->btf = btf;
|
||||
|
||||
for (i = 1, n = btf_nr_types(btf); i < n; i++) {
|
||||
t = btf_type_by_id(btf, i);
|
||||
if (!btf_type_is_enum(t))
|
||||
continue;
|
||||
|
||||
name = btf_name_by_offset(btf, t->name_off);
|
||||
if (!name)
|
||||
continue;
|
||||
|
||||
if (strcmp(name, "bpf_cmd") == 0)
|
||||
info->cmd_t = t;
|
||||
else if (strcmp(name, "bpf_map_type") == 0)
|
||||
info->map_t = t;
|
||||
else if (strcmp(name, "bpf_prog_type") == 0)
|
||||
info->prog_t = t;
|
||||
else if (strcmp(name, "bpf_attach_type") == 0)
|
||||
info->attach_t = t;
|
||||
else
|
||||
continue;
|
||||
|
||||
if (info->cmd_t && info->map_t && info->prog_t && info->attach_t)
|
||||
return 0;
|
||||
}
|
||||
|
||||
return -ESRCH;
|
||||
}
|
||||
|
||||
static bool find_btf_enum_const(const struct btf *btf, const struct btf_type *enum_t,
|
||||
const char *prefix, const char *str, int *value)
|
||||
{
|
||||
const struct btf_enum *e;
|
||||
const char *name;
|
||||
int i, n, pfx_len = strlen(prefix);
|
||||
|
||||
*value = 0;
|
||||
|
||||
if (!btf || !enum_t)
|
||||
return false;
|
||||
|
||||
for (i = 0, n = btf_vlen(enum_t); i < n; i++) {
|
||||
e = &btf_enum(enum_t)[i];
|
||||
|
||||
name = btf_name_by_offset(btf, e->name_off);
|
||||
if (!name || strncasecmp(name, prefix, pfx_len) != 0)
|
||||
continue;
|
||||
|
||||
/* match symbolic name case insensitive and ignoring prefix */
|
||||
if (strcasecmp(name + pfx_len, str) == 0) {
|
||||
*value = e->val;
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
+
+static void seq_print_delegate_opts(struct seq_file *m,
+				    const char *opt_name,
+				    const struct btf *btf,
+				    const struct btf_type *enum_t,
+				    const char *prefix,
+				    u64 delegate_msk, u64 any_msk)
+{
+	const struct btf_enum *e;
+	bool first = true;
+	const char *name;
+	u64 msk;
+	int i, n, pfx_len = strlen(prefix);
+
+	delegate_msk &= any_msk; /* clear unknown bits */
+
+	if (delegate_msk == 0)
+		return;
+
+	seq_printf(m, ",%s", opt_name);
+	if (delegate_msk == any_msk) {
+		seq_printf(m, "=any");
+		return;
+	}
+
+	if (btf && enum_t) {
+		for (i = 0, n = btf_vlen(enum_t); i < n; i++) {
+			e = &btf_enum(enum_t)[i];
+			name = btf_name_by_offset(btf, e->name_off);
+			if (!name || strncasecmp(name, prefix, pfx_len) != 0)
+				continue;
+			msk = 1ULL << e->val;
+			if (delegate_msk & msk) {
+				/* emit lower-case name without prefix */
+				seq_printf(m, "%c", first ? '=' : ':');
+				name += pfx_len;
+				while (*name) {
+					seq_printf(m, "%c", tolower(*name));
+					name++;
+				}
+
+				delegate_msk &= ~msk;
+				first = false;
+			}
+		}
+	}
+	if (delegate_msk)
+		seq_printf(m, "%c0x%llx", first ? '=' : ':', delegate_msk);
+}
 
 /*
  * Display the mount options in /proc/mounts.
  */
@@ -601,6 +732,8 @@ static int bpf_show_options(struct seq_file *m, struct dentry *root)
 {
 	struct inode *inode = d_inode(root);
 	umode_t mode = inode->i_mode & S_IALLUGO & ~S_ISVTX;
+	struct bpf_mount_opts *opts = root->d_sb->s_fs_info;
+	u64 mask;
 
 	if (!uid_eq(inode->i_uid, GLOBAL_ROOT_UID))
 		seq_printf(m, ",uid=%u",
@@ -610,6 +743,35 @@ static int bpf_show_options(struct seq_file *m, struct dentry *root)
 			   from_kgid_munged(&init_user_ns, inode->i_gid));
 	if (mode != S_IRWXUGO)
 		seq_printf(m, ",mode=%o", mode);
+
+	if (opts->delegate_cmds || opts->delegate_maps ||
+	    opts->delegate_progs || opts->delegate_attachs) {
+		struct bpffs_btf_enums info;
+
+		/* ignore errors, fallback to hex */
+		(void)find_bpffs_btf_enums(&info);
+
+		mask = (1ULL << __MAX_BPF_CMD) - 1;
+		seq_print_delegate_opts(m, "delegate_cmds",
+					info.btf, info.cmd_t, "BPF_",
+					opts->delegate_cmds, mask);
+
+		mask = (1ULL << __MAX_BPF_MAP_TYPE) - 1;
+		seq_print_delegate_opts(m, "delegate_maps",
+					info.btf, info.map_t, "BPF_MAP_TYPE_",
+					opts->delegate_maps, mask);
+
+		mask = (1ULL << __MAX_BPF_PROG_TYPE) - 1;
+		seq_print_delegate_opts(m, "delegate_progs",
+					info.btf, info.prog_t, "BPF_PROG_TYPE_",
+					opts->delegate_progs, mask);
+
+		mask = (1ULL << __MAX_BPF_ATTACH_TYPE) - 1;
+		seq_print_delegate_opts(m, "delegate_attachs",
+					info.btf, info.attach_t, "BPF_",
+					opts->delegate_attachs, mask);
+	}
+
 	return 0;
 }
 
@@ -624,7 +786,7 @@ static void bpf_free_inode(struct inode *inode)
 	free_inode_nonrcu(inode);
 }
 
-static const struct super_operations bpf_super_ops = {
+const struct super_operations bpf_super_ops = {
 	.statfs		= simple_statfs,
 	.drop_inode	= generic_delete_inode,
 	.show_options	= bpf_show_options,
@@ -635,28 +797,30 @@ enum {
 	OPT_UID,
 	OPT_GID,
 	OPT_MODE,
+	OPT_DELEGATE_CMDS,
+	OPT_DELEGATE_MAPS,
+	OPT_DELEGATE_PROGS,
+	OPT_DELEGATE_ATTACHS,
 };
 
 static const struct fs_parameter_spec bpf_fs_parameters[] = {
 	fsparam_u32	("uid",			OPT_UID),
 	fsparam_u32	("gid",			OPT_GID),
 	fsparam_u32oct	("mode",		OPT_MODE),
+	fsparam_string	("delegate_cmds",	OPT_DELEGATE_CMDS),
+	fsparam_string	("delegate_maps",	OPT_DELEGATE_MAPS),
+	fsparam_string	("delegate_progs",	OPT_DELEGATE_PROGS),
+	fsparam_string	("delegate_attachs",	OPT_DELEGATE_ATTACHS),
 	{}
 };
 
-struct bpf_mount_opts {
-	kuid_t uid;
-	kgid_t gid;
-	umode_t mode;
-};
-
 static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
 {
-	struct bpf_mount_opts *opts = fc->fs_private;
+	struct bpf_mount_opts *opts = fc->s_fs_info;
 	struct fs_parse_result result;
 	kuid_t uid;
 	kgid_t gid;
-	int opt;
+	int opt, err;
 
 	opt = fs_parse(fc, bpf_fs_parameters, param, &result);
 	if (opt < 0) {
@@ -708,6 +872,67 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
 	case OPT_MODE:
 		opts->mode = result.uint_32 & S_IALLUGO;
 		break;
+	case OPT_DELEGATE_CMDS:
+	case OPT_DELEGATE_MAPS:
+	case OPT_DELEGATE_PROGS:
+	case OPT_DELEGATE_ATTACHS: {
+		struct bpffs_btf_enums info;
+		const struct btf_type *enum_t;
+		const char *enum_pfx;
+		u64 *delegate_msk, msk = 0;
+		char *p;
+		int val;
+
+		/* ignore errors, fallback to hex */
+		(void)find_bpffs_btf_enums(&info);
+
+		switch (opt) {
+		case OPT_DELEGATE_CMDS:
+			delegate_msk = &opts->delegate_cmds;
+			enum_t = info.cmd_t;
+			enum_pfx = "BPF_";
+			break;
+		case OPT_DELEGATE_MAPS:
+			delegate_msk = &opts->delegate_maps;
+			enum_t = info.map_t;
+			enum_pfx = "BPF_MAP_TYPE_";
+			break;
+		case OPT_DELEGATE_PROGS:
+			delegate_msk = &opts->delegate_progs;
+			enum_t = info.prog_t;
+			enum_pfx = "BPF_PROG_TYPE_";
+			break;
+		case OPT_DELEGATE_ATTACHS:
+			delegate_msk = &opts->delegate_attachs;
+			enum_t = info.attach_t;
+			enum_pfx = "BPF_";
+			break;
+		default:
+			return -EINVAL;
+		}
+
+		while ((p = strsep(&param->string, ":"))) {
+			if (strcmp(p, "any") == 0) {
+				msk |= ~0ULL;
+			} else if (find_btf_enum_const(info.btf, enum_t, enum_pfx, p, &val)) {
+				msk |= 1ULL << val;
+			} else {
+				err = kstrtou64(p, 0, &msk);
+				if (err)
+					return err;
+			}
+		}
+
+		/* Setting delegation mount options requires privileges */
+		if (msk && !capable(CAP_SYS_ADMIN))
+			return -EPERM;
+
+		*delegate_msk |= msk;
+		break;
+	}
 	default:
 		/* ignore unknown mount options */
 		break;
 	}
 
 	return 0;
@@ -784,10 +1009,14 @@ out:
 static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
 {
 	static const struct tree_descr bpf_rfiles[] = { { "" } };
-	struct bpf_mount_opts *opts = fc->fs_private;
+	struct bpf_mount_opts *opts = sb->s_fs_info;
 	struct inode *inode;
 	int ret;
 
+	/* Mounting an instance of BPF FS requires privileges */
+	if (fc->user_ns != &init_user_ns && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
 	ret = simple_fill_super(sb, BPF_FS_MAGIC, bpf_rfiles);
 	if (ret)
 		return ret;
@@ -811,7 +1040,7 @@ static int bpf_get_tree(struct fs_context *fc)
 
 static void bpf_free_fc(struct fs_context *fc)
 {
-	kfree(fc->fs_private);
+	kfree(fc->s_fs_info);
 }
 
 static const struct fs_context_operations bpf_context_ops = {
@@ -835,17 +1064,32 @@ static int bpf_init_fs_context(struct fs_context *fc)
 	opts->uid = current_fsuid();
 	opts->gid = current_fsgid();
 
-	fc->fs_private = opts;
+	/* start out with no BPF token delegation enabled */
+	opts->delegate_cmds = 0;
+	opts->delegate_maps = 0;
+	opts->delegate_progs = 0;
+	opts->delegate_attachs = 0;
+
+	fc->s_fs_info = opts;
 	fc->ops = &bpf_context_ops;
 	return 0;
 }
 
+static void bpf_kill_super(struct super_block *sb)
+{
+	struct bpf_mount_opts *opts = sb->s_fs_info;
+
+	kill_litter_super(sb);
+	kfree(opts);
+}
+
 static struct file_system_type bpf_fs_type = {
 	.owner		= THIS_MODULE,
 	.name		= "bpf",
 	.init_fs_context = bpf_init_fs_context,
 	.parameters	= bpf_fs_parameters,
-	.kill_sb	= kill_litter_super,
+	.kill_sb	= bpf_kill_super,
+	.fs_flags	= FS_USERNS_MOUNT,
};
 
 static int __init bpf_init(void)
@@ -1011,8 +1011,8 @@ int map_check_no_btf(const struct bpf_map *map,
 	return -ENOTSUPP;
 }
 
-static int map_check_btf(struct bpf_map *map, const struct btf *btf,
-			 u32 btf_key_id, u32 btf_value_id)
+static int map_check_btf(struct bpf_map *map, struct bpf_token *token,
+			 const struct btf *btf, u32 btf_key_id, u32 btf_value_id)
 {
 	const struct btf_type *key_type, *value_type;
 	u32 key_size, value_size;
@@ -1040,7 +1040,7 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
 	if (!IS_ERR_OR_NULL(map->record)) {
 		int i;
 
-		if (!bpf_capable()) {
+		if (!bpf_token_capable(token, CAP_BPF)) {
 			ret = -EPERM;
 			goto free_map_tab;
 		}
@@ -1123,14 +1123,21 @@ free_map_tab:
 	return ret;
 }
 
-#define BPF_MAP_CREATE_LAST_FIELD value_type_btf_obj_fd
+static bool bpf_net_capable(void)
+{
+	return capable(CAP_NET_ADMIN) || capable(CAP_SYS_ADMIN);
+}
+
+#define BPF_MAP_CREATE_LAST_FIELD map_token_fd
 /* called via syscall */
 static int map_create(union bpf_attr *attr)
 {
 	const struct bpf_map_ops *ops;
+	struct bpf_token *token = NULL;
 	int numa_node = bpf_map_attr_numa_node(attr);
+	u32 map_type = attr->map_type;
 	struct bpf_map *map;
+	bool token_flag;
 	int f_flags;
 	int err;
 
@@ -1138,6 +1145,12 @@ static int map_create(union bpf_attr *attr)
 	if (err)
 		return -EINVAL;
 
+	/* check BPF_F_TOKEN_FD flag, remember if it's set, and then clear it
+	 * to avoid per-map type checks tripping on unknown flag
+	 */
+	token_flag = attr->map_flags & BPF_F_TOKEN_FD;
+	attr->map_flags &= ~BPF_F_TOKEN_FD;
+
 	if (attr->btf_vmlinux_value_type_id) {
 		if (attr->map_type != BPF_MAP_TYPE_STRUCT_OPS ||
 		    attr->btf_key_type_id || attr->btf_value_type_id)
@@ -1178,14 +1191,32 @@ static int map_create(union bpf_attr *attr)
 	if (!ops->map_mem_usage)
 		return -EINVAL;
 
+	if (token_flag) {
+		token = bpf_token_get_from_fd(attr->map_token_fd);
+		if (IS_ERR(token))
+			return PTR_ERR(token);
+
+		/* if current token doesn't grant map creation permissions,
+		 * then we can't use this token, so ignore it and rely on
+		 * system-wide capabilities checks
+		 */
+		if (!bpf_token_allow_cmd(token, BPF_MAP_CREATE) ||
+		    !bpf_token_allow_map_type(token, attr->map_type)) {
+			bpf_token_put(token);
+			token = NULL;
+		}
+	}
+
+	err = -EPERM;
+
 	/* Intent here is for unprivileged_bpf_disabled to block BPF map
 	 * creation for unprivileged users; other actions depend
 	 * on fd availability and access to bpffs, so are dependent on
 	 * object creation success. Even with unprivileged BPF disabled,
 	 * capability checks are still carried out.
 	 */
-	if (sysctl_unprivileged_bpf_disabled && !bpf_capable())
-		return -EPERM;
+	if (sysctl_unprivileged_bpf_disabled && !bpf_token_capable(token, CAP_BPF))
+		goto put_token;
 
 	/* check privileged map type permissions */
 	switch (map_type) {
@@ -1218,25 +1249,27 @@ static int map_create(union bpf_attr *attr)
 	case BPF_MAP_TYPE_LRU_PERCPU_HASH:
 	case BPF_MAP_TYPE_STRUCT_OPS:
 	case BPF_MAP_TYPE_CPUMAP:
-		if (!bpf_capable())
-			return -EPERM;
+		if (!bpf_token_capable(token, CAP_BPF))
+			goto put_token;
 		break;
 	case BPF_MAP_TYPE_SOCKMAP:
 	case BPF_MAP_TYPE_SOCKHASH:
 	case BPF_MAP_TYPE_DEVMAP:
 	case BPF_MAP_TYPE_DEVMAP_HASH:
 	case BPF_MAP_TYPE_XSKMAP:
-		if (!capable(CAP_NET_ADMIN))
-			return -EPERM;
+		if (!bpf_token_capable(token, CAP_NET_ADMIN))
+			goto put_token;
 		break;
 	default:
 		WARN(1, "unsupported map type %d", map_type);
-		return -EPERM;
+		goto put_token;
 	}
 
 	map = ops->map_alloc(attr);
-	if (IS_ERR(map))
-		return PTR_ERR(map);
+	if (IS_ERR(map)) {
+		err = PTR_ERR(map);
+		goto put_token;
+	}
 	map->ops = ops;
 	map->map_type = map_type;
 
@@ -1273,7 +1306,7 @@ static int map_create(union bpf_attr *attr)
 	map->btf = btf;
 
 	if (attr->btf_value_type_id) {
-		err = map_check_btf(map, btf, attr->btf_key_type_id,
+		err = map_check_btf(map, token, btf, attr->btf_key_type_id,
 				    attr->btf_value_type_id);
 		if (err)
 			goto free_map;
@@ -1285,15 +1318,16 @@ static int map_create(union bpf_attr *attr)
 			attr->btf_vmlinux_value_type_id;
 	}
 
-	err = security_bpf_map_alloc(map);
+	err = security_bpf_map_create(map, attr, token);
 	if (err)
-		goto free_map;
+		goto free_map_sec;
 
 	err = bpf_map_alloc_id(map);
 	if (err)
 		goto free_map_sec;
 
 	bpf_map_save_memcg(map);
+	bpf_token_put(token);
 
 	err = bpf_map_new_fd(map, f_flags);
 	if (err < 0) {
@@ -1314,6 +1348,8 @@ free_map_sec:
 free_map:
 	btf_put(map->btf);
 	map->ops->map_free(map);
+put_token:
+	bpf_token_put(token);
 	return err;
 }
 
@@ -2144,7 +2180,7 @@ static void __bpf_prog_put_rcu(struct rcu_head *rcu)
 	kvfree(aux->func_info);
 	kfree(aux->func_info_aux);
 	free_uid(aux->user);
-	security_bpf_prog_free(aux);
+	security_bpf_prog_free(aux->prog);
 	bpf_prog_free(aux->prog);
 }
 
@@ -2590,13 +2626,15 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type)
 }
 
 /* last field in 'union bpf_attr' used by this command */
-#define BPF_PROG_LOAD_LAST_FIELD log_true_size
+#define BPF_PROG_LOAD_LAST_FIELD prog_token_fd
 
 static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 {
 	enum bpf_prog_type type = attr->prog_type;
 	struct bpf_prog *prog, *dst_prog = NULL;
 	struct btf *attach_btf = NULL;
+	struct bpf_token *token = NULL;
+	bool bpf_cap;
 	int err;
 	char license[128];
 
@@ -2610,13 +2648,35 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 				 BPF_F_TEST_RND_HI32 |
 				 BPF_F_XDP_HAS_FRAGS |
 				 BPF_F_XDP_DEV_BOUND_ONLY |
-				 BPF_F_TEST_REG_INVARIANTS))
+				 BPF_F_TEST_REG_INVARIANTS |
+				 BPF_F_TOKEN_FD))
 		return -EINVAL;
 
+	bpf_prog_load_fixup_attach_type(attr);
+
+	if (attr->prog_flags & BPF_F_TOKEN_FD) {
+		token = bpf_token_get_from_fd(attr->prog_token_fd);
+		if (IS_ERR(token))
+			return PTR_ERR(token);
+		/* if current token doesn't grant prog loading permissions,
+		 * then we can't use this token, so ignore it and rely on
+		 * system-wide capabilities checks
+		 */
+		if (!bpf_token_allow_cmd(token, BPF_PROG_LOAD) ||
+		    !bpf_token_allow_prog_type(token, attr->prog_type,
+					       attr->expected_attach_type)) {
+			bpf_token_put(token);
+			token = NULL;
+		}
+	}
+
+	bpf_cap = bpf_token_capable(token, CAP_BPF);
+	err = -EPERM;
+
 	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) &&
 	    (attr->prog_flags & BPF_F_ANY_ALIGNMENT) &&
-	    !bpf_capable())
-		return -EPERM;
+	    !bpf_cap)
+		goto put_token;
 
 	/* Intent here is for unprivileged_bpf_disabled to block BPF program
 	 * creation for unprivileged users; other actions depend
@@ -2625,21 +2685,23 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 	 * capability checks are still carried out for these
 	 * and other operations.
 	 */
-	if (sysctl_unprivileged_bpf_disabled && !bpf_capable())
-		return -EPERM;
+	if (sysctl_unprivileged_bpf_disabled && !bpf_cap)
+		goto put_token;
 
 	if (attr->insn_cnt == 0 ||
-	    attr->insn_cnt > (bpf_capable() ? BPF_COMPLEXITY_LIMIT_INSNS : BPF_MAXINSNS))
-		return -E2BIG;
+	    attr->insn_cnt > (bpf_cap ? BPF_COMPLEXITY_LIMIT_INSNS : BPF_MAXINSNS)) {
+		err = -E2BIG;
+		goto put_token;
+	}
 	if (type != BPF_PROG_TYPE_SOCKET_FILTER &&
 	    type != BPF_PROG_TYPE_CGROUP_SKB &&
-	    !bpf_capable())
-		return -EPERM;
+	    !bpf_cap)
+		goto put_token;
 
-	if (is_net_admin_prog_type(type) && !capable(CAP_NET_ADMIN) && !capable(CAP_SYS_ADMIN))
-		return -EPERM;
-	if (is_perfmon_prog_type(type) && !perfmon_capable())
-		return -EPERM;
+	if (is_net_admin_prog_type(type) && !bpf_token_capable(token, CAP_NET_ADMIN))
+		goto put_token;
+	if (is_perfmon_prog_type(type) && !bpf_token_capable(token, CAP_PERFMON))
+		goto put_token;
 
 	/* attach_prog_fd/attach_btf_obj_fd can specify fd of either bpf_prog
 	 * or btf, we need to check which one it is
@@ -2649,27 +2711,33 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 		if (IS_ERR(dst_prog)) {
 			dst_prog = NULL;
 			attach_btf = btf_get_by_fd(attr->attach_btf_obj_fd);
-			if (IS_ERR(attach_btf))
-				return -EINVAL;
+			if (IS_ERR(attach_btf)) {
+				err = -EINVAL;
+				goto put_token;
+			}
 			if (!btf_is_kernel(attach_btf)) {
 				/* attaching through specifying bpf_prog's BTF
 				 * objects directly might be supported eventually
 				 */
 				btf_put(attach_btf);
-				return -ENOTSUPP;
+				err = -ENOTSUPP;
+				goto put_token;
 			}
 		}
 	} else if (attr->attach_btf_id) {
 		/* fall back to vmlinux BTF, if BTF type ID is specified */
 		attach_btf = bpf_get_btf_vmlinux();
-		if (IS_ERR(attach_btf))
-			return PTR_ERR(attach_btf);
-		if (!attach_btf)
-			return -EINVAL;
+		if (IS_ERR(attach_btf)) {
+			err = PTR_ERR(attach_btf);
+			goto put_token;
+		}
+		if (!attach_btf) {
+			err = -EINVAL;
+			goto put_token;
+		}
 		btf_get(attach_btf);
 	}
 
-	bpf_prog_load_fixup_attach_type(attr);
 	if (bpf_prog_load_check_attach(type, attr->expected_attach_type,
 				       attach_btf, attr->attach_btf_id,
 				       dst_prog)) {
@@ -2677,7 +2745,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 			bpf_prog_put(dst_prog);
 		if (attach_btf)
 			btf_put(attach_btf);
-		return -EINVAL;
+		err = -EINVAL;
+		goto put_token;
 	}
 
 	/* plain bpf_prog allocation */
@@ -2687,7 +2756,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 			bpf_prog_put(dst_prog);
 		if (attach_btf)
 			btf_put(attach_btf);
-		return -ENOMEM;
+		err = -ENOMEM;
+		goto put_token;
 	}
 
 	prog->expected_attach_type = attr->expected_attach_type;
@@ -2698,9 +2768,9 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 	prog->aux->sleepable = attr->prog_flags & BPF_F_SLEEPABLE;
 	prog->aux->xdp_has_frags = attr->prog_flags & BPF_F_XDP_HAS_FRAGS;
 
-	err = security_bpf_prog_alloc(prog->aux);
-	if (err)
-		goto free_prog;
+	/* move token into prog->aux, reuse taken refcnt */
+	prog->aux->token = token;
+	token = NULL;
 
 	prog->aux->user = get_current_user();
 	prog->len = attr->insn_cnt;
@@ -2709,12 +2779,12 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 	if (copy_from_bpfptr(prog->insns,
 			     make_bpfptr(attr->insns, uattr.is_kernel),
 			     bpf_prog_insn_size(prog)) != 0)
-		goto free_prog_sec;
+		goto free_prog;
 	/* copy eBPF program license from user space */
 	if (strncpy_from_bpfptr(license,
 				make_bpfptr(attr->license, uattr.is_kernel),
 				sizeof(license) - 1) < 0)
-		goto free_prog_sec;
+		goto free_prog;
 	license[sizeof(license) - 1] = 0;
 
 	/* eBPF programs must be GPL compatible to use GPL-ed functions */
@@ -2728,14 +2798,14 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 	if (bpf_prog_is_dev_bound(prog->aux)) {
 		err = bpf_prog_dev_bound_init(prog, attr);
 		if (err)
-			goto free_prog_sec;
+			goto free_prog;
 	}
 
 	if (type == BPF_PROG_TYPE_EXT && dst_prog &&
 	    bpf_prog_is_dev_bound(dst_prog->aux)) {
 		err = bpf_prog_dev_bound_inherit(prog, dst_prog);
 		if (err)
-			goto free_prog_sec;
+			goto free_prog;
 	}
 
 	/*
@@ -2757,12 +2827,16 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 	/* find program type: socket_filter vs tracing_filter */
 	err = find_prog_type(type, prog);
 	if (err < 0)
-		goto free_prog_sec;
+		goto free_prog;
 
 	prog->aux->load_time = ktime_get_boottime_ns();
 	err = bpf_obj_name_cpy(prog->aux->name, attr->prog_name,
 			       sizeof(attr->prog_name));
 	if (err < 0)
 		goto free_prog;
 
+	err = security_bpf_prog_load(prog, attr, token);
+	if (err)
+		goto free_prog_sec;
+
 	/* run eBPF verifier */
@@ -2808,13 +2882,16 @@ free_used_maps:
 	 */
 	__bpf_prog_put_noref(prog, prog->aux->real_func_cnt);
 	return err;
 
 free_prog_sec:
-	free_uid(prog->aux->user);
-	security_bpf_prog_free(prog->aux);
+	security_bpf_prog_free(prog);
 free_prog:
+	free_uid(prog->aux->user);
 	if (prog->aux->attach_btf)
 		btf_put(prog->aux->attach_btf);
 	bpf_prog_free(prog);
+put_token:
+	bpf_token_put(token);
 	return err;
 }
 
@@ -3822,7 +3899,7 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
 	case BPF_PROG_TYPE_SK_LOOKUP:
 		return attach_type == prog->expected_attach_type ? 0 : -EINVAL;
 	case BPF_PROG_TYPE_CGROUP_SKB:
-		if (!capable(CAP_NET_ADMIN))
+		if (!bpf_token_capable(prog->aux->token, CAP_NET_ADMIN))
 			/* cg-skb progs can be loaded by unpriv user.
 			 * check permissions at attach time.
 			 */
@@ -4025,7 +4102,7 @@ static int bpf_prog_detach(const union bpf_attr *attr)
 static int bpf_prog_query(const union bpf_attr *attr,
 			  union bpf_attr __user *uattr)
 {
-	if (!capable(CAP_NET_ADMIN))
+	if (!bpf_net_capable())
 		return -EPERM;
 	if (CHECK_ATTR(BPF_PROG_QUERY))
 		return -EINVAL;
@@ -4795,15 +4872,34 @@ static int bpf_obj_get_info_by_fd(const union bpf_attr *attr,
 	return err;
 }
 
-#define BPF_BTF_LOAD_LAST_FIELD btf_log_true_size
+#define BPF_BTF_LOAD_LAST_FIELD btf_token_fd
 
 static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr, __u32 uattr_size)
 {
+	struct bpf_token *token = NULL;
+
 	if (CHECK_ATTR(BPF_BTF_LOAD))
 		return -EINVAL;
 
-	if (!bpf_capable())
+	if (attr->btf_flags & ~BPF_F_TOKEN_FD)
 		return -EINVAL;
 
+	if (attr->btf_flags & BPF_F_TOKEN_FD) {
+		token = bpf_token_get_from_fd(attr->btf_token_fd);
+		if (IS_ERR(token))
+			return PTR_ERR(token);
+		if (!bpf_token_allow_cmd(token, BPF_BTF_LOAD)) {
+			bpf_token_put(token);
+			token = NULL;
+		}
+	}
+
+	if (!bpf_token_capable(token, CAP_BPF)) {
+		bpf_token_put(token);
+		return -EPERM;
+	}
+
+	bpf_token_put(token);
+
 	return btf_new_fd(attr, uattr, uattr_size);
 }
@@ -5421,6 +5517,20 @@ out_prog_put:
 	return ret;
 }
 
+#define BPF_TOKEN_CREATE_LAST_FIELD token_create.bpffs_fd
+
+static int token_create(union bpf_attr *attr)
+{
+	if (CHECK_ATTR(BPF_TOKEN_CREATE))
+		return -EINVAL;
+
+	/* no flags are supported yet */
+	if (attr->token_create.flags)
+		return -EINVAL;
+
+	return bpf_token_create(attr);
+}
+
 static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size)
 {
 	union bpf_attr attr;
@@ -5554,6 +5664,9 @@ static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size)
 	case BPF_PROG_BIND_MAP:
 		err = bpf_prog_bind_map(&attr);
 		break;
+	case BPF_TOKEN_CREATE:
+		err = token_create(&attr);
+		break;
 	default:
 		err = -EINVAL;
 		break;
@@ -5660,7 +5773,7 @@ static const struct bpf_func_proto bpf_sys_bpf_proto = {
 const struct bpf_func_proto * __weak
 tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
-	return bpf_base_func_proto(func_id);
+	return bpf_base_func_proto(func_id, prog);
 }
 
 BPF_CALL_1(bpf_sys_close, u32, fd)
@@ -5710,7 +5823,8 @@ syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
 	switch (func_id) {
 	case BPF_FUNC_sys_bpf:
-		return !perfmon_capable() ? NULL : &bpf_sys_bpf_proto;
+		return !bpf_token_capable(prog->aux->token, CAP_PERFMON)
+		       ? NULL : &bpf_sys_bpf_proto;
 	case BPF_FUNC_btf_find_by_name_kind:
 		return &bpf_btf_find_by_name_kind_proto;
 	case BPF_FUNC_sys_close:

kernel/bpf/token.c (new file, 278 lines)
@@ -0,0 +1,278 @@
+#include <linux/bpf.h>
+#include <linux/vmalloc.h>
+#include <linux/fdtable.h>
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/idr.h>
+#include <linux/namei.h>
+#include <linux/user_namespace.h>
+#include <linux/security.h>
+
+static bool bpf_ns_capable(struct user_namespace *ns, int cap)
+{
+	return ns_capable(ns, cap) || (cap != CAP_SYS_ADMIN && ns_capable(ns, CAP_SYS_ADMIN));
+}
+
+bool bpf_token_capable(const struct bpf_token *token, int cap)
+{
+	struct user_namespace *userns;
+
+	/* BPF token allows ns_capable() level of capabilities */
+	userns = token ? token->userns : &init_user_ns;
+	if (!bpf_ns_capable(userns, cap))
+		return false;
+	if (token && security_bpf_token_capable(token, cap) < 0)
+		return false;
+	return true;
+}
+
+void bpf_token_inc(struct bpf_token *token)
+{
+	atomic64_inc(&token->refcnt);
+}
+
+static void bpf_token_free(struct bpf_token *token)
+{
+	security_bpf_token_free(token);
+	put_user_ns(token->userns);
+	kfree(token);
+}
+
+static void bpf_token_put_deferred(struct work_struct *work)
+{
+	struct bpf_token *token = container_of(work, struct bpf_token, work);
+
+	bpf_token_free(token);
+}
+
+void bpf_token_put(struct bpf_token *token)
+{
+	if (!token)
+		return;
+
+	if (!atomic64_dec_and_test(&token->refcnt))
+		return;
+
+	INIT_WORK(&token->work, bpf_token_put_deferred);
+	schedule_work(&token->work);
+}
+
+static int bpf_token_release(struct inode *inode, struct file *filp)
+{
+	struct bpf_token *token = filp->private_data;
+
+	bpf_token_put(token);
+	return 0;
+}
+
+static void bpf_token_show_fdinfo(struct seq_file *m, struct file *filp)
+{
+	struct bpf_token *token = filp->private_data;
+	u64 mask;
+
+	BUILD_BUG_ON(__MAX_BPF_CMD >= 64);
+	mask = (1ULL << __MAX_BPF_CMD) - 1;
+	if ((token->allowed_cmds & mask) == mask)
+		seq_printf(m, "allowed_cmds:\tany\n");
+	else
+		seq_printf(m, "allowed_cmds:\t0x%llx\n", token->allowed_cmds);
+
+	BUILD_BUG_ON(__MAX_BPF_MAP_TYPE >= 64);
+	mask = (1ULL << __MAX_BPF_MAP_TYPE) - 1;
+	if ((token->allowed_maps & mask) == mask)
+		seq_printf(m, "allowed_maps:\tany\n");
+	else
+		seq_printf(m, "allowed_maps:\t0x%llx\n", token->allowed_maps);
+
+	BUILD_BUG_ON(__MAX_BPF_PROG_TYPE >= 64);
+	mask = (1ULL << __MAX_BPF_PROG_TYPE) - 1;
+	if ((token->allowed_progs & mask) == mask)
+		seq_printf(m, "allowed_progs:\tany\n");
+	else
+		seq_printf(m, "allowed_progs:\t0x%llx\n", token->allowed_progs);
+
+	BUILD_BUG_ON(__MAX_BPF_ATTACH_TYPE >= 64);
+	mask = (1ULL << __MAX_BPF_ATTACH_TYPE) - 1;
+	if ((token->allowed_attachs & mask) == mask)
+		seq_printf(m, "allowed_attachs:\tany\n");
+	else
+		seq_printf(m, "allowed_attachs:\t0x%llx\n", token->allowed_attachs);
+}
+
+#define BPF_TOKEN_INODE_NAME "bpf-token"
+
+static const struct inode_operations bpf_token_iops = { };
+
+static const struct file_operations bpf_token_fops = {
+	.release	= bpf_token_release,
+	.show_fdinfo	= bpf_token_show_fdinfo,
+};
+
+int bpf_token_create(union bpf_attr *attr)
+{
+	struct bpf_mount_opts *mnt_opts;
+	struct bpf_token *token = NULL;
+	struct user_namespace *userns;
+	struct inode *inode;
+	struct file *file;
+	struct path path;
+	struct fd f;
+	umode_t mode;
+	int err, fd;
+
+	f = fdget(attr->token_create.bpffs_fd);
+	if (!f.file)
+		return -EBADF;
+
+	path = f.file->f_path;
+	path_get(&path);
+	fdput(f);
+
+	if (path.dentry != path.mnt->mnt_sb->s_root) {
+		err = -EINVAL;
+		goto out_path;
+	}
+	if (path.mnt->mnt_sb->s_op != &bpf_super_ops) {
+		err = -EINVAL;
+		goto out_path;
+	}
+	err = path_permission(&path, MAY_ACCESS);
+	if (err)
+		goto out_path;
+
+	userns = path.dentry->d_sb->s_user_ns;
+	/*
+	 * Enforce that creators of BPF tokens are in the same user
+	 * namespace as the BPF FS instance. This makes reasoning about
+	 * permissions a lot easier and we can always relax this later.
+	 */
+	if (current_user_ns() != userns) {
+		err = -EPERM;
+		goto out_path;
+	}
+	if (!ns_capable(userns, CAP_BPF)) {
+		err = -EPERM;
+		goto out_path;
+	}
+
+	/* Creating BPF token in init_user_ns doesn't make much sense. */
+	if (current_user_ns() == &init_user_ns) {
+		err = -EOPNOTSUPP;
+		goto out_path;
+	}
+
+	mnt_opts = path.dentry->d_sb->s_fs_info;
+	if (mnt_opts->delegate_cmds == 0 &&
+	    mnt_opts->delegate_maps == 0 &&
+	    mnt_opts->delegate_progs == 0 &&
+	    mnt_opts->delegate_attachs == 0) {
+		err = -ENOENT; /* no BPF token delegation is set up */
+		goto out_path;
+	}
+
+	mode = S_IFREG | ((S_IRUSR | S_IWUSR) & ~current_umask());
+	inode = bpf_get_inode(path.mnt->mnt_sb, NULL, mode);
+	if (IS_ERR(inode)) {
+		err = PTR_ERR(inode);
+		goto out_path;
+	}
+
+	inode->i_op = &bpf_token_iops;
+	inode->i_fop = &bpf_token_fops;
+	clear_nlink(inode); /* make sure it is unlinked */
+
+	file = alloc_file_pseudo(inode, path.mnt, BPF_TOKEN_INODE_NAME, O_RDWR, &bpf_token_fops);
+	if (IS_ERR(file)) {
+		iput(inode);
+		err = PTR_ERR(file);
+		goto out_path;
+	}
+
+	token = kzalloc(sizeof(*token), GFP_USER);
+	if (!token) {
+		err = -ENOMEM;
+		goto out_file;
+	}
+
+	atomic64_set(&token->refcnt, 1);
+
+	/* remember bpffs owning userns for future ns_capable() checks */
+	token->userns = get_user_ns(userns);
+
+	token->allowed_cmds = mnt_opts->delegate_cmds;
+	token->allowed_maps = mnt_opts->delegate_maps;
+	token->allowed_progs = mnt_opts->delegate_progs;
+	token->allowed_attachs = mnt_opts->delegate_attachs;
+
+	err = security_bpf_token_create(token, attr, &path);
+	if (err)
+		goto out_token;
+
+	fd = get_unused_fd_flags(O_CLOEXEC);
+	if (fd < 0) {
+		err = fd;
+		goto out_token;
+	}
+
+	file->private_data = token;
+	fd_install(fd, file);
+
+	path_put(&path);
+	return fd;
+
+out_token:
+	bpf_token_free(token);
+out_file:
+	fput(file);
+out_path:
+	path_put(&path);
+	return err;
+}
+
+struct bpf_token *bpf_token_get_from_fd(u32 ufd)
+{
+	struct fd f = fdget(ufd);
+	struct bpf_token *token;
+
+	if (!f.file)
+		return ERR_PTR(-EBADF);
+	if (f.file->f_op != &bpf_token_fops) {
+		fdput(f);
+		return ERR_PTR(-EINVAL);
+	}
+
+	token = f.file->private_data;
+	bpf_token_inc(token);
+	fdput(f);
+
+	return token;
+}
+
+bool bpf_token_allow_cmd(const struct bpf_token *token, enum bpf_cmd cmd)
+{
+	if (!token)
+		return false;
+	if (!(token->allowed_cmds & (1ULL << cmd)))
+		return false;
+	return security_bpf_token_cmd(token, cmd) == 0;
+}
+
+bool bpf_token_allow_map_type(const struct bpf_token *token, enum bpf_map_type type)
+{
+	if (!token || type >= __MAX_BPF_MAP_TYPE)
+		return false;
+
+	return token->allowed_maps & (1ULL << type);
+}
+
+bool bpf_token_allow_prog_type(const struct bpf_token *token,
+			       enum bpf_prog_type prog_type,
+			       enum bpf_attach_type attach_type)
+{
+	if (!token || prog_type >= __MAX_BPF_PROG_TYPE || attach_type >= __MAX_BPF_ATTACH_TYPE)
+		return false;
+
+	return (token->allowed_progs & (1ULL << prog_type)) &&
+	       (token->allowed_attachs & (1ULL << attach_type));
+}
@@ -20830,7 +20830,12 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 	env->prog = *prog;
 	env->ops = bpf_verifier_ops[env->prog->type];
 	env->fd_array = make_bpfptr(attr->fd_array, uattr.is_kernel);
-	is_priv = bpf_capable();
+
+	env->allow_ptr_leaks = bpf_allow_ptr_leaks(env->prog->aux->token);
+	env->allow_uninit_stack = bpf_allow_uninit_stack(env->prog->aux->token);
+	env->bypass_spec_v1 = bpf_bypass_spec_v1(env->prog->aux->token);
+	env->bypass_spec_v4 = bpf_bypass_spec_v4(env->prog->aux->token);
+	env->bpf_capable = is_priv = bpf_token_capable(env->prog->aux->token, CAP_BPF);

 	bpf_get_btf_vmlinux();

@@ -20862,12 +20867,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 	if (attr->prog_flags & BPF_F_ANY_ALIGNMENT)
 		env->strict_alignment = false;

-	env->allow_ptr_leaks = bpf_allow_ptr_leaks();
-	env->allow_uninit_stack = bpf_allow_uninit_stack();
-	env->bypass_spec_v1 = bpf_bypass_spec_v1();
-	env->bypass_spec_v4 = bpf_bypass_spec_v4();
-	env->bpf_capable = bpf_capable();
-
 	if (is_priv)
 		env->test_state_freq = attr->prog_flags & BPF_F_TEST_STATE_FREQ;
 	env->test_reg_invariants = attr->prog_flags & BPF_F_TEST_REG_INVARIANTS;
@@ -1629,7 +1629,7 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_trace_vprintk:
 		return bpf_get_trace_vprintk_proto();
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_base_func_proto(func_id, prog);
 	}
 }
@@ -87,7 +87,7 @@
 #include "dev.h"

 static const struct bpf_func_proto *
-bpf_sk_base_func_proto(enum bpf_func_id func_id);
+bpf_sk_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog);

 int copy_bpf_fprog_from_user(struct sock_fprog *dst, sockptr_t src, int len)
 {
@@ -7862,7 +7862,7 @@ sock_filter_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_ktime_get_coarse_ns:
 		return &bpf_ktime_get_coarse_ns_proto;
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_base_func_proto(func_id, prog);
 	}
 }

@@ -7955,7 +7955,7 @@ sock_addr_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 			return NULL;
 		}
 	default:
-		return bpf_sk_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id, prog);
 	}
 }

@@ -7974,7 +7974,7 @@ sk_filter_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_perf_event_output:
 		return &bpf_skb_event_output_proto;
 	default:
-		return bpf_sk_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id, prog);
 	}
 }

@@ -8161,7 +8161,7 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 #endif
 #endif
 	default:
-		return bpf_sk_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id, prog);
 	}
 }

@@ -8220,7 +8220,7 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 #endif
 #endif
 	default:
-		return bpf_sk_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id, prog);
 	}

 #if IS_MODULE(CONFIG_NF_CONNTRACK) && IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES)
@@ -8281,7 +8281,7 @@ sock_ops_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_tcp_sock_proto;
 #endif /* CONFIG_INET */
 	default:
-		return bpf_sk_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id, prog);
 	}
 }

@@ -8323,7 +8323,7 @@ sk_msg_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_get_cgroup_classid_curr_proto;
 #endif
 	default:
-		return bpf_sk_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id, prog);
 	}
 }

@@ -8367,7 +8367,7 @@ sk_skb_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_skc_lookup_tcp_proto;
 #endif
 	default:
-		return bpf_sk_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id, prog);
 	}
 }

@@ -8378,7 +8378,7 @@ flow_dissector_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_skb_load_bytes:
 		return &bpf_flow_dissector_load_bytes_proto;
 	default:
-		return bpf_sk_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id, prog);
 	}
 }

@@ -8405,7 +8405,7 @@ lwt_out_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_skb_under_cgroup:
 		return &bpf_skb_under_cgroup_proto;
 	default:
-		return bpf_sk_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id, prog);
 	}
 }

@@ -8580,7 +8580,7 @@ static bool cg_skb_is_valid_access(int off, int size,
 		return false;
 	case bpf_ctx_range(struct __sk_buff, data):
 	case bpf_ctx_range(struct __sk_buff, data_end):
-		if (!bpf_capable())
+		if (!bpf_token_capable(prog->aux->token, CAP_BPF))
 			return false;
 		break;
 	}
@@ -8592,7 +8592,7 @@ static bool cg_skb_is_valid_access(int off, int size,
 	case bpf_ctx_range_till(struct __sk_buff, cb[0], cb[4]):
 		break;
 	case bpf_ctx_range(struct __sk_buff, tstamp):
-		if (!bpf_capable())
+		if (!bpf_token_capable(prog->aux->token, CAP_BPF))
 			return false;
 		break;
 	default:
@@ -11236,7 +11236,7 @@ sk_reuseport_func_proto(enum bpf_func_id func_id,
 	case BPF_FUNC_ktime_get_coarse_ns:
 		return &bpf_ktime_get_coarse_ns_proto;
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_base_func_proto(func_id, prog);
 	}
 }

@@ -11418,7 +11418,7 @@ sk_lookup_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_sk_release:
 		return &bpf_sk_release_proto;
 	default:
-		return bpf_sk_base_func_proto(func_id);
+		return bpf_sk_base_func_proto(func_id, prog);
 	}
 }

@@ -11752,7 +11752,7 @@ const struct bpf_func_proto bpf_sock_from_file_proto = {
 };

 static const struct bpf_func_proto *
-bpf_sk_base_func_proto(enum bpf_func_id func_id)
+bpf_sk_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
 	const struct bpf_func_proto *func;

@@ -11781,10 +11781,10 @@ bpf_sk_base_func_proto(enum bpf_func_id func_id)
 	case BPF_FUNC_ktime_get_coarse_ns:
 		return &bpf_ktime_get_coarse_ns_proto;
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_base_func_proto(func_id, prog);
 	}

-	if (!perfmon_capable())
+	if (!bpf_token_capable(prog->aux->token, CAP_PERFMON))
 		return NULL;

 	return func;
@@ -197,7 +197,7 @@ bpf_tcp_ca_get_func_proto(enum bpf_func_id func_id,
 	case BPF_FUNC_ktime_get_coarse_ns:
 		return &bpf_ktime_get_coarse_ns_proto;
 	default:
-		return bpf_base_func_proto(func_id);
+		return bpf_base_func_proto(func_id, prog);
 	}
 }

@@ -314,7 +314,7 @@ static bool nf_is_valid_access(int off, int size, enum bpf_access_type type,
 static const struct bpf_func_proto *
 bpf_nf_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
-	return bpf_base_func_proto(func_id);
+	return bpf_base_func_proto(func_id, prog);
 }

 const struct bpf_verifier_ops netfilter_verifier_ops = {
@@ -5410,29 +5410,87 @@ int security_bpf_prog(struct bpf_prog *prog)
 }

 /**
- * security_bpf_map_alloc() - Allocate a bpf map LSM blob
- * @map: bpf map
+ * security_bpf_map_create() - Check if BPF map creation is allowed
+ * @map: BPF map object
+ * @attr: BPF syscall attributes used to create BPF map
+ * @token: BPF token used to grant user access
  *
- * Initialize the security field inside bpf map.
+ * Do a check when the kernel creates a new BPF map. This is also the
+ * point where LSM blob is allocated for LSMs that need them.
  *
  * Return: Returns 0 on success, error on failure.
  */
-int security_bpf_map_alloc(struct bpf_map *map)
+int security_bpf_map_create(struct bpf_map *map, union bpf_attr *attr,
+			    struct bpf_token *token)
 {
-	return call_int_hook(bpf_map_alloc_security, 0, map);
+	return call_int_hook(bpf_map_create, 0, map, attr, token);
 }

 /**
- * security_bpf_prog_alloc() - Allocate a bpf program LSM blob
- * @aux: bpf program aux info struct
+ * security_bpf_prog_load() - Check if loading of BPF program is allowed
+ * @prog: BPF program object
+ * @attr: BPF syscall attributes used to create BPF program
+ * @token: BPF token used to grant user access to BPF subsystem
  *
- * Initialize the security field inside bpf program.
+ * Perform an access control check when the kernel loads a BPF program and
+ * allocates associated BPF program object. This hook is also responsible for
+ * allocating any required LSM state for the BPF program.
  *
  * Return: Returns 0 on success, error on failure.
  */
-int security_bpf_prog_alloc(struct bpf_prog_aux *aux)
+int security_bpf_prog_load(struct bpf_prog *prog, union bpf_attr *attr,
+			   struct bpf_token *token)
 {
-	return call_int_hook(bpf_prog_alloc_security, 0, aux);
+	return call_int_hook(bpf_prog_load, 0, prog, attr, token);
 }

+/**
+ * security_bpf_token_create() - Check if creating of BPF token is allowed
+ * @token: BPF token object
+ * @attr: BPF syscall attributes used to create BPF token
+ * @path: path pointing to BPF FS mount point from which BPF token is created
+ *
+ * Do a check when the kernel instantiates a new BPF token object from BPF FS
+ * instance. This is also the point where LSM blob can be allocated for LSMs.
+ *
+ * Return: Returns 0 on success, error on failure.
+ */
+int security_bpf_token_create(struct bpf_token *token, union bpf_attr *attr,
+			      struct path *path)
+{
+	return call_int_hook(bpf_token_create, 0, token, attr, path);
+}
+
+/**
+ * security_bpf_token_cmd() - Check if BPF token is allowed to delegate
+ * requested BPF syscall command
+ * @token: BPF token object
+ * @cmd: BPF syscall command requested to be delegated by BPF token
+ *
+ * Do a check when the kernel decides whether provided BPF token should allow
+ * delegation of requested BPF syscall command.
+ *
+ * Return: Returns 0 on success, error on failure.
+ */
+int security_bpf_token_cmd(const struct bpf_token *token, enum bpf_cmd cmd)
+{
+	return call_int_hook(bpf_token_cmd, 0, token, cmd);
+}
+
+/**
+ * security_bpf_token_capable() - Check if BPF token is allowed to delegate
+ * requested BPF-related capability
+ * @token: BPF token object
+ * @cap: capabilities requested to be delegated by BPF token
+ *
+ * Do a check when the kernel decides whether provided BPF token should allow
+ * delegation of requested BPF-related capabilities.
+ *
+ * Return: Returns 0 on success, error on failure.
+ */
+int security_bpf_token_capable(const struct bpf_token *token, int cap)
+{
+	return call_int_hook(bpf_token_capable, 0, token, cap);
+}
+
 /**
@@ -5443,18 +5501,29 @@ int security_bpf_prog_alloc(struct bpf_prog_aux *aux)
  */
 void security_bpf_map_free(struct bpf_map *map)
 {
-	call_void_hook(bpf_map_free_security, map);
+	call_void_hook(bpf_map_free, map);
 }

 /**
- * security_bpf_prog_free() - Free a bpf program's LSM blob
- * @aux: bpf program aux info struct
+ * security_bpf_prog_free() - Free a BPF program's LSM blob
+ * @prog: BPF program struct
  *
- * Clean up the security information stored inside bpf prog.
+ * Clean up the security information stored inside BPF program.
  */
-void security_bpf_prog_free(struct bpf_prog_aux *aux)
+void security_bpf_prog_free(struct bpf_prog *prog)
 {
-	call_void_hook(bpf_prog_free_security, aux);
+	call_void_hook(bpf_prog_free, prog);
 }

+/**
+ * security_bpf_token_free() - Free a BPF token's LSM blob
+ * @token: BPF token struct
+ *
+ * Clean up the security information stored inside BPF token.
+ */
+void security_bpf_token_free(struct bpf_token *token)
+{
+	call_void_hook(bpf_token_free, token);
+}
 #endif /* CONFIG_BPF_SYSCALL */
@@ -6920,7 +6920,8 @@ static int selinux_bpf_prog(struct bpf_prog *prog)
 			    BPF__PROG_RUN, NULL);
 }

-static int selinux_bpf_map_alloc(struct bpf_map *map)
+static int selinux_bpf_map_create(struct bpf_map *map, union bpf_attr *attr,
+				  struct bpf_token *token)
 {
 	struct bpf_security_struct *bpfsec;

@@ -6942,7 +6943,8 @@ static void selinux_bpf_map_free(struct bpf_map *map)
 	kfree(bpfsec);
 }

-static int selinux_bpf_prog_alloc(struct bpf_prog_aux *aux)
+static int selinux_bpf_prog_load(struct bpf_prog *prog, union bpf_attr *attr,
+				 struct bpf_token *token)
 {
 	struct bpf_security_struct *bpfsec;

@@ -6951,16 +6953,39 @@ static int selinux_bpf_prog_alloc(struct bpf_prog_aux *aux)
 		return -ENOMEM;

 	bpfsec->sid = current_sid();
-	aux->security = bpfsec;
+	prog->aux->security = bpfsec;

 	return 0;
 }

-static void selinux_bpf_prog_free(struct bpf_prog_aux *aux)
+static void selinux_bpf_prog_free(struct bpf_prog *prog)
 {
-	struct bpf_security_struct *bpfsec = aux->security;
+	struct bpf_security_struct *bpfsec = prog->aux->security;

-	aux->security = NULL;
+	prog->aux->security = NULL;
 	kfree(bpfsec);
 }

+static int selinux_bpf_token_create(struct bpf_token *token, union bpf_attr *attr,
+				    struct path *path)
+{
+	struct bpf_security_struct *bpfsec;
+
+	bpfsec = kzalloc(sizeof(*bpfsec), GFP_KERNEL);
+	if (!bpfsec)
+		return -ENOMEM;
+
+	bpfsec->sid = current_sid();
+	token->security = bpfsec;
+
+	return 0;
+}
+
+static void selinux_bpf_token_free(struct bpf_token *token)
+{
+	struct bpf_security_struct *bpfsec = token->security;
+
+	token->security = NULL;
+	kfree(bpfsec);
+}
 #endif
@@ -7324,8 +7349,9 @@ static struct security_hook_list selinux_hooks[] __ro_after_init = {
 	LSM_HOOK_INIT(bpf, selinux_bpf),
 	LSM_HOOK_INIT(bpf_map, selinux_bpf_map),
 	LSM_HOOK_INIT(bpf_prog, selinux_bpf_prog),
-	LSM_HOOK_INIT(bpf_map_free_security, selinux_bpf_map_free),
-	LSM_HOOK_INIT(bpf_prog_free_security, selinux_bpf_prog_free),
+	LSM_HOOK_INIT(bpf_map_free, selinux_bpf_map_free),
+	LSM_HOOK_INIT(bpf_prog_free, selinux_bpf_prog_free),
+	LSM_HOOK_INIT(bpf_token_free, selinux_bpf_token_free),
 #endif

 #ifdef CONFIG_PERF_EVENTS
@@ -7382,8 +7408,9 @@ static struct security_hook_list selinux_hooks[] __ro_after_init = {
 	LSM_HOOK_INIT(audit_rule_init, selinux_audit_rule_init),
 #endif
 #ifdef CONFIG_BPF_SYSCALL
-	LSM_HOOK_INIT(bpf_map_alloc_security, selinux_bpf_map_alloc),
-	LSM_HOOK_INIT(bpf_prog_alloc_security, selinux_bpf_prog_alloc),
+	LSM_HOOK_INIT(bpf_map_create, selinux_bpf_map_create),
+	LSM_HOOK_INIT(bpf_prog_load, selinux_bpf_prog_load),
+	LSM_HOOK_INIT(bpf_token_create, selinux_bpf_token_create),
 #endif
 #ifdef CONFIG_PERF_EVENTS
 	LSM_HOOK_INIT(perf_event_alloc, selinux_perf_event_alloc),
@@ -847,6 +847,36 @@ union bpf_iter_link_info {
  *		Returns zero on success. On error, -1 is returned and *errno*
  *		is set appropriately.
  *
+ * BPF_TOKEN_CREATE
+ *	Description
+ *		Create BPF token with embedded information about what
+ *		BPF-related functionality it allows:
+ *		- a set of allowed bpf() syscall commands;
+ *		- a set of allowed BPF map types to be created with
+ *		  BPF_MAP_CREATE command, if BPF_MAP_CREATE itself is allowed;
+ *		- a set of allowed BPF program types and BPF program attach
+ *		  types to be loaded with BPF_PROG_LOAD command, if
+ *		  BPF_PROG_LOAD itself is allowed.
+ *
+ *		BPF token is created (derived) from an instance of BPF FS,
+ *		assuming it has necessary delegation mount options specified.
+ *		This BPF token can be passed as an extra parameter to various
+ *		bpf() syscall commands to grant BPF subsystem functionality to
+ *		unprivileged processes.
+ *
+ *		When created, BPF token is "associated" with the owning
+ *		user namespace of BPF FS instance (super block) that it was
+ *		derived from, and subsequent BPF operations performed with
+ *		BPF token would be performing capabilities checks (i.e.,
+ *		CAP_BPF, CAP_PERFMON, CAP_NET_ADMIN, CAP_SYS_ADMIN) within
+ *		that user namespace. Without BPF token, such capabilities
+ *		have to be granted in init user namespace, making bpf()
+ *		syscall incompatible with user namespace, for the most part.
+ *
+ *	Return
+ *		A new file descriptor (a nonnegative integer), or -1 if an
+ *		error occurred (in which case, *errno* is set appropriately).
+ *
  * NOTES
  *	eBPF objects (maps and programs) can be shared between processes.
  *
@@ -901,6 +931,8 @@ enum bpf_cmd {
 	BPF_ITER_CREATE,
 	BPF_LINK_DETACH,
 	BPF_PROG_BIND_MAP,
+	BPF_TOKEN_CREATE,
+	__MAX_BPF_CMD,
 };

 enum bpf_map_type {
@@ -951,6 +983,7 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_BLOOM_FILTER,
 	BPF_MAP_TYPE_USER_RINGBUF,
 	BPF_MAP_TYPE_CGRP_STORAGE,
+	__MAX_BPF_MAP_TYPE
 };

 /* Note that tracing related programs such as
@@ -995,6 +1028,7 @@ enum bpf_prog_type {
 	BPF_PROG_TYPE_SK_LOOKUP,
 	BPF_PROG_TYPE_SYSCALL, /* a program that can execute syscalls */
 	BPF_PROG_TYPE_NETFILTER,
+	__MAX_BPF_PROG_TYPE
 };

 enum bpf_attach_type {
@@ -1333,6 +1367,9 @@ enum {

 	/* Flag for value_type_btf_obj_fd, the fd is available */
 	BPF_F_VTYPE_BTF_OBJ_FD	= (1U << 15),
+
+	/* BPF token FD is passed in a corresponding command's token_fd field */
+	BPF_F_TOKEN_FD		= (1U << 16),
 };

 /* Flags for BPF_PROG_QUERY. */
@@ -1411,6 +1448,10 @@ union bpf_attr {
 						 * type data for
 						 * btf_vmlinux_value_type_id.
 						 */
+		/* BPF token FD to use with BPF_MAP_CREATE operation.
+		 * If provided, map_flags should have BPF_F_TOKEN_FD flag set.
+		 */
+		__s32	map_token_fd;
 	};

 	struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */
@@ -1480,6 +1521,10 @@ union bpf_attr {
 		 * truncated), or smaller (if log buffer wasn't filled completely).
 		 */
 		__u32		log_true_size;
+		/* BPF token FD to use with BPF_PROG_LOAD operation.
+		 * If provided, prog_flags should have BPF_F_TOKEN_FD flag set.
+		 */
+		__s32		prog_token_fd;
 	};

 	struct { /* anonymous struct used by BPF_OBJ_* commands */
@@ -1592,6 +1637,11 @@ union bpf_attr {
 		 * truncated), or smaller (if log buffer wasn't filled completely).
 		 */
 		__u32		btf_log_true_size;
+		__u32		btf_flags;
+		/* BPF token FD to use with BPF_BTF_LOAD operation.
+		 * If provided, btf_flags should have BPF_F_TOKEN_FD flag set.
+		 */
+		__s32		btf_token_fd;
 	};

 	struct {
@@ -1722,6 +1772,11 @@ union bpf_attr {
 		__u32		flags;		/* extra flags */
 	} prog_bind_map;

+	struct { /* struct used by BPF_TOKEN_CREATE command */
+		__u32		flags;
+		__u32		bpffs_fd;
+	} token_create;
+
 } __attribute__((aligned(8)));

 /* The description below is an attempt at providing documentation to eBPF
@@ -1,4 +1,4 @@
 libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o \
	    netlink.o bpf_prog_linfo.o libbpf_probes.o hashmap.o \
	    btf_dump.o ringbuf.o strset.o linker.o gen_loader.o relo_core.o \
-	    usdt.o zip.o elf.o
+	    usdt.o zip.o elf.o features.o
@@ -103,7 +103,7 @@ int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts)
  * [0] https://lore.kernel.org/bpf/20201201215900.3569844-1-guro@fb.com/
  * [1] d05512618056 ("bpf: Add bpf_ktime_get_coarse_ns helper")
  */
-int probe_memcg_account(void)
+int probe_memcg_account(int token_fd)
 {
 	const size_t attr_sz = offsetofend(union bpf_attr, attach_btf_obj_fd);
 	struct bpf_insn insns[] = {
@@ -120,6 +120,9 @@ int probe_memcg_account(void)
 	attr.insns = ptr_to_u64(insns);
 	attr.insn_cnt = insn_cnt;
 	attr.license = ptr_to_u64("GPL");
+	attr.prog_token_fd = token_fd;
+	if (token_fd)
+		attr.prog_flags |= BPF_F_TOKEN_FD;

 	prog_fd = sys_bpf_fd(BPF_PROG_LOAD, &attr, attr_sz);
 	if (prog_fd >= 0) {
@@ -146,7 +149,7 @@ int bump_rlimit_memlock(void)
 	struct rlimit rlim;

 	/* if kernel supports memcg-based accounting, skip bumping RLIMIT_MEMLOCK */
-	if (memlock_bumped || kernel_supports(NULL, FEAT_MEMCG_ACCOUNT))
+	if (memlock_bumped || feat_supported(NULL, FEAT_MEMCG_ACCOUNT))
 		return 0;

 	memlock_bumped = true;
@@ -169,8 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
 		   __u32 max_entries,
 		   const struct bpf_map_create_opts *opts)
 {
-	const size_t attr_sz = offsetofend(union bpf_attr,
-					   value_type_btf_obj_fd);
+	const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd);
 	union bpf_attr attr;
 	int fd;

@@ -182,7 +184,7 @@ int bpf_map_create(enum bpf_map_type map_type,
 		return libbpf_err(-EINVAL);

 	attr.map_type = map_type;
-	if (map_name && kernel_supports(NULL, FEAT_PROG_NAME))
+	if (map_name && feat_supported(NULL, FEAT_PROG_NAME))
 		libbpf_strlcpy(attr.map_name, map_name, sizeof(attr.map_name));
 	attr.key_size = key_size;
 	attr.value_size = value_size;
@@ -200,6 +202,8 @@ int bpf_map_create(enum bpf_map_type map_type,
 	attr.numa_node = OPTS_GET(opts, numa_node, 0);
 	attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);

+	attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
+
 	fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
 	return libbpf_err_errno(fd);
 }
@@ -234,7 +238,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 		  const struct bpf_insn *insns, size_t insn_cnt,
 		  struct bpf_prog_load_opts *opts)
 {
-	const size_t attr_sz = offsetofend(union bpf_attr, log_true_size);
+	const size_t attr_sz = offsetofend(union bpf_attr, prog_token_fd);
 	void *finfo = NULL, *linfo = NULL;
 	const char *func_info, *line_info;
 	__u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd;
@@ -263,8 +267,9 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 	attr.prog_flags = OPTS_GET(opts, prog_flags, 0);
 	attr.prog_ifindex = OPTS_GET(opts, prog_ifindex, 0);
 	attr.kern_version = OPTS_GET(opts, kern_version, 0);
+	attr.prog_token_fd = OPTS_GET(opts, token_fd, 0);

-	if (prog_name && kernel_supports(NULL, FEAT_PROG_NAME))
+	if (prog_name && feat_supported(NULL, FEAT_PROG_NAME))
 		libbpf_strlcpy(attr.prog_name, prog_name, sizeof(attr.prog_name));
 	attr.license = ptr_to_u64(license);

@@ -1184,7 +1189,7 @@ int bpf_raw_tracepoint_open(const char *name, int prog_fd)

 int bpf_btf_load(const void *btf_data, size_t btf_size, struct bpf_btf_load_opts *opts)
 {
-	const size_t attr_sz = offsetofend(union bpf_attr, btf_log_true_size);
+	const size_t attr_sz = offsetofend(union bpf_attr, btf_token_fd);
 	union bpf_attr attr;
 	char *log_buf;
 	size_t log_size;
@@ -1209,6 +1214,10 @@ int bpf_btf_load(const void *btf_data, size_t btf_size, struct bpf_btf_load_opts

 	attr.btf = ptr_to_u64(btf_data);
 	attr.btf_size = btf_size;
+
+	attr.btf_flags = OPTS_GET(opts, btf_flags, 0);
+	attr.btf_token_fd = OPTS_GET(opts, token_fd, 0);
+
 	/* log_level == 0 and log_buf != NULL means "try loading without
 	 * log_buf, but retry with log_buf and log_level=1 on error", which is
 	 * consistent across low-level and high-level BTF and program loading
@@ -1289,3 +1298,20 @@ int bpf_prog_bind_map(int prog_fd, int map_fd,
 	ret = sys_bpf(BPF_PROG_BIND_MAP, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
+
+int bpf_token_create(int bpffs_fd, struct bpf_token_create_opts *opts)
+{
+	const size_t attr_sz = offsetofend(union bpf_attr, token_create);
+	union bpf_attr attr;
+	int fd;
+
+	if (!OPTS_VALID(opts, bpf_token_create_opts))
+		return libbpf_err(-EINVAL);
+
+	memset(&attr, 0, attr_sz);
+	attr.token_create.bpffs_fd = bpffs_fd;
+	attr.token_create.flags = OPTS_GET(opts, flags, 0);
+
+	fd = sys_bpf_fd(BPF_TOKEN_CREATE, &attr, attr_sz);
+	return libbpf_err_errno(fd);
+}
@@ -52,9 +52,11 @@ struct bpf_map_create_opts {
 	__u32 numa_node;
 	__u32 map_ifindex;
 	__s32 value_type_btf_obj_fd;
-	size_t:0;
+
+	__u32 token_fd;
+	size_t :0;
 };
-#define bpf_map_create_opts__last_field value_type_btf_obj_fd
+#define bpf_map_create_opts__last_field token_fd

 LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
			      const char *map_name,
@@ -104,9 +106,10 @@ struct bpf_prog_load_opts {
	 * If kernel doesn't support this feature, log_size is left unchanged.
	 */
 	__u32 log_true_size;
+	__u32 token_fd;
 	size_t :0;
 };
-#define bpf_prog_load_opts__last_field log_true_size
+#define bpf_prog_load_opts__last_field token_fd

 LIBBPF_API int bpf_prog_load(enum bpf_prog_type prog_type,
			     const char *prog_name, const char *license,
@@ -132,9 +135,12 @@ struct bpf_btf_load_opts {
	 * If kernel doesn't support this feature, log_size is left unchanged.
	 */
 	__u32 log_true_size;
+
+	__u32 btf_flags;
+	__u32 token_fd;
 	size_t :0;
 };
-#define bpf_btf_load_opts__last_field log_true_size
+#define bpf_btf_load_opts__last_field token_fd

 LIBBPF_API int bpf_btf_load(const void *btf_data, size_t btf_size,
			    struct bpf_btf_load_opts *opts);
@@ -642,6 +648,30 @@ struct bpf_test_run_opts {
 LIBBPF_API int bpf_prog_test_run_opts(int prog_fd,
				      struct bpf_test_run_opts *opts);

+struct bpf_token_create_opts {
+	size_t sz; /* size of this struct for forward/backward compatibility */
+	__u32 flags;
+	size_t :0;
+};
+#define bpf_token_create_opts__last_field flags
+
+/**
+ * @brief **bpf_token_create()** creates a new instance of BPF token derived
+ * from specified BPF FS mount point.
+ *
+ * BPF token created with this API can be passed to bpf() syscall for
+ * commands like BPF_PROG_LOAD, BPF_MAP_CREATE, etc.
+ *
+ * @param bpffs_fd FD for BPF FS instance from which to derive a BPF token
+ * instance.
+ * @param opts optional BPF token creation options, can be NULL
+ *
+ * @return BPF token FD > 0, on success; negative error code, otherwise (errno
+ * is also set to the error code)
+ */
+LIBBPF_API int bpf_token_create(int bpffs_fd,
+				struct bpf_token_create_opts *opts);
+
 #ifdef __cplusplus
 } /* extern "C" */
 #endif
@@ -1317,7 +1317,9 @@ struct btf *btf__parse_split(const char *path, struct btf *base_btf)

 static void *btf_get_raw_data(const struct btf *btf, __u32 *size, bool swap_endian);

-int btf_load_into_kernel(struct btf *btf, char *log_buf, size_t log_sz, __u32 log_level)
+int btf_load_into_kernel(struct btf *btf,
+			 char *log_buf, size_t log_sz, __u32 log_level,
+			 int token_fd)
 {
 	LIBBPF_OPTS(bpf_btf_load_opts, opts);
 	__u32 buf_sz = 0, raw_size;
@@ -1367,6 +1369,10 @@ retry_load:
 		opts.log_level = log_level;
 	}

+	opts.token_fd = token_fd;
+	if (token_fd)
+		opts.btf_flags |= BPF_F_TOKEN_FD;
+
 	btf->fd = bpf_btf_load(raw_data, raw_size, &opts);
 	if (btf->fd < 0) {
 		/* time to turn on verbose mode and try again */
@@ -1394,7 +1400,7 @@ done:

 int btf__load_into_kernel(struct btf *btf)
 {
-	return btf_load_into_kernel(btf, NULL, 0, 0);
+	return btf_load_into_kernel(btf, NULL, 0, 0, 0);
 }

 int btf__fd(const struct btf *btf)
@@ -11,8 +11,6 @@
 #include "libbpf_internal.h"
 #include "str_error.h"

-#define STRERR_BUFSIZE 128
-
 /* A SHT_GNU_versym section holds 16-bit words. This bit is set if
  * the symbol is hidden and can only be seen when referenced using an
  * explicit version number. This is a GNU extension.

tools/lib/bpf/features.c (new file, 503 lines)
@ -0,0 +1,503 @@
// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
#include <linux/kernel.h>
#include <linux/filter.h>
#include "bpf.h"
#include "libbpf.h"
#include "libbpf_common.h"
#include "libbpf_internal.h"
#include "str_error.h"

static inline __u64 ptr_to_u64(const void *ptr)
{
	return (__u64)(unsigned long)ptr;
}

int probe_fd(int fd)
{
	if (fd >= 0)
		close(fd);
	return fd >= 0;
}
static int probe_kern_prog_name(int token_fd)
{
	const size_t attr_sz = offsetofend(union bpf_attr, prog_name);
	struct bpf_insn insns[] = {
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
	};
	union bpf_attr attr;
	int ret;

	memset(&attr, 0, attr_sz);
	attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
	attr.license = ptr_to_u64("GPL");
	attr.insns = ptr_to_u64(insns);
	attr.insn_cnt = (__u32)ARRAY_SIZE(insns);
	attr.prog_token_fd = token_fd;
	if (token_fd)
		attr.prog_flags |= BPF_F_TOKEN_FD;
	libbpf_strlcpy(attr.prog_name, "libbpf_nametest", sizeof(attr.prog_name));

	/* make sure loading with name works */
	ret = sys_bpf_prog_load(&attr, attr_sz, PROG_LOAD_ATTEMPTS);
	return probe_fd(ret);
}

static int probe_kern_global_data(int token_fd)
{
	char *cp, errmsg[STRERR_BUFSIZE];
	struct bpf_insn insns[] = {
		BPF_LD_MAP_VALUE(BPF_REG_1, 0, 16),
		BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 42),
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
	};
	LIBBPF_OPTS(bpf_map_create_opts, map_opts,
		.token_fd = token_fd,
		.map_flags = token_fd ? BPF_F_TOKEN_FD : 0,
	);
	LIBBPF_OPTS(bpf_prog_load_opts, prog_opts,
		.token_fd = token_fd,
		.prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
	);
	int ret, map, insn_cnt = ARRAY_SIZE(insns);

	map = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_global", sizeof(int), 32, 1, &map_opts);
	if (map < 0) {
		ret = -errno;
		cp = libbpf_strerror_r(ret, errmsg, sizeof(errmsg));
		pr_warn("Error in %s():%s(%d). Couldn't create simple array map.\n",
			__func__, cp, -ret);
		return ret;
	}

	insns[0].imm = map;

	ret = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, &prog_opts);
	close(map);
	return probe_fd(ret);
}
static int probe_kern_btf(int token_fd)
{
	static const char strs[] = "\0int";
	__u32 types[] = {
		/* int */
		BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),
	};

	return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
					     strs, sizeof(strs), token_fd));
}

static int probe_kern_btf_func(int token_fd)
{
	static const char strs[] = "\0int\0x\0a";
	/* void x(int a) {} */
	__u32 types[] = {
		/* int */
		BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),  /* [1] */
		/* FUNC_PROTO */                                /* [2] */
		BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 0),
		BTF_PARAM_ENC(7, 1),
		/* FUNC x */                                    /* [3] */
		BTF_TYPE_ENC(5, BTF_INFO_ENC(BTF_KIND_FUNC, 0, 0), 2),
	};

	return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
					     strs, sizeof(strs), token_fd));
}

static int probe_kern_btf_func_global(int token_fd)
{
	static const char strs[] = "\0int\0x\0a";
	/* static void x(int a) {} */
	__u32 types[] = {
		/* int */
		BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),  /* [1] */
		/* FUNC_PROTO */                                /* [2] */
		BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 0),
		BTF_PARAM_ENC(7, 1),
		/* FUNC x BTF_FUNC_GLOBAL */                    /* [3] */
		BTF_TYPE_ENC(5, BTF_INFO_ENC(BTF_KIND_FUNC, 0, BTF_FUNC_GLOBAL), 2),
	};

	return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
					     strs, sizeof(strs), token_fd));
}

static int probe_kern_btf_datasec(int token_fd)
{
	static const char strs[] = "\0x\0.data";
	/* static int a; */
	__u32 types[] = {
		/* int */
		BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),  /* [1] */
		/* VAR x */                                     /* [2] */
		BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1),
		BTF_VAR_STATIC,
		/* DATASEC val */                               /* [3] */
		BTF_TYPE_ENC(3, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
		BTF_VAR_SECINFO_ENC(2, 0, 4),
	};

	return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
					     strs, sizeof(strs), token_fd));
}

static int probe_kern_btf_float(int token_fd)
{
	static const char strs[] = "\0float";
	__u32 types[] = {
		/* float */
		BTF_TYPE_FLOAT_ENC(1, 4),
	};

	return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
					     strs, sizeof(strs), token_fd));
}

static int probe_kern_btf_decl_tag(int token_fd)
{
	static const char strs[] = "\0tag";
	__u32 types[] = {
		/* int */
		BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),  /* [1] */
		/* VAR x */                                     /* [2] */
		BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1),
		BTF_VAR_STATIC,
		/* attr */
		BTF_TYPE_DECL_TAG_ENC(1, 2, -1),
	};

	return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
					     strs, sizeof(strs), token_fd));
}

static int probe_kern_btf_type_tag(int token_fd)
{
	static const char strs[] = "\0tag";
	__u32 types[] = {
		/* int */
		BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),          /* [1] */
		/* attr */
		BTF_TYPE_TYPE_TAG_ENC(1, 1),                            /* [2] */
		/* ptr */
		BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2),   /* [3] */
	};

	return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
					     strs, sizeof(strs), token_fd));
}

static int probe_kern_array_mmap(int token_fd)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts,
		.map_flags = BPF_F_MMAPABLE | (token_fd ? BPF_F_TOKEN_FD : 0),
		.token_fd = token_fd,
	);
	int fd;

	fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_mmap", sizeof(int), sizeof(int), 1, &opts);
	return probe_fd(fd);
}

static int probe_kern_exp_attach_type(int token_fd)
{
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.expected_attach_type = BPF_CGROUP_INET_SOCK_CREATE,
		.token_fd = token_fd,
		.prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
	);
	struct bpf_insn insns[] = {
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
	};
	int fd, insn_cnt = ARRAY_SIZE(insns);

	/* use any valid combination of program type and (optional)
	 * non-zero expected attach type (i.e., not a BPF_CGROUP_INET_INGRESS)
	 * to see if kernel supports expected_attach_type field for
	 * BPF_PROG_LOAD command
	 */
	fd = bpf_prog_load(BPF_PROG_TYPE_CGROUP_SOCK, NULL, "GPL", insns, insn_cnt, &opts);
	return probe_fd(fd);
}

static int probe_kern_probe_read_kernel(int token_fd)
{
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.token_fd = token_fd,
		.prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
	);
	struct bpf_insn insns[] = {
		BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),	/* r1 = r10 (fp) */
		BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),	/* r1 += -8 */
		BPF_MOV64_IMM(BPF_REG_2, 8),		/* r2 = 8 */
		BPF_MOV64_IMM(BPF_REG_3, 0),		/* r3 = 0 */
		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_probe_read_kernel),
		BPF_EXIT_INSN(),
	};
	int fd, insn_cnt = ARRAY_SIZE(insns);

	fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, &opts);
	return probe_fd(fd);
}

static int probe_prog_bind_map(int token_fd)
{
	char *cp, errmsg[STRERR_BUFSIZE];
	struct bpf_insn insns[] = {
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
	};
	LIBBPF_OPTS(bpf_map_create_opts, map_opts,
		.token_fd = token_fd,
		.map_flags = token_fd ? BPF_F_TOKEN_FD : 0,
	);
	LIBBPF_OPTS(bpf_prog_load_opts, prog_opts,
		.token_fd = token_fd,
		.prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
	);
	int ret, map, prog, insn_cnt = ARRAY_SIZE(insns);

	map = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_det_bind", sizeof(int), 32, 1, &map_opts);
	if (map < 0) {
		ret = -errno;
		cp = libbpf_strerror_r(ret, errmsg, sizeof(errmsg));
		pr_warn("Error in %s():%s(%d). Couldn't create simple array map.\n",
			__func__, cp, -ret);
		return ret;
	}

	prog = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, &prog_opts);
	if (prog < 0) {
		close(map);
		return 0;
	}

	ret = bpf_prog_bind_map(prog, map, NULL);

	close(map);
	close(prog);

	return ret >= 0;
}

static int probe_module_btf(int token_fd)
{
	static const char strs[] = "\0int";
	__u32 types[] = {
		/* int */
		BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),
	};
	struct bpf_btf_info info;
	__u32 len = sizeof(info);
	char name[16];
	int fd, err;

	fd = libbpf__load_raw_btf((char *)types, sizeof(types), strs, sizeof(strs), token_fd);
	if (fd < 0)
		return 0; /* BTF not supported at all */

	memset(&info, 0, sizeof(info));
	info.name = ptr_to_u64(name);
	info.name_len = sizeof(name);

	/* check that BPF_OBJ_GET_INFO_BY_FD supports specifying name pointer;
	 * kernel's module BTF support coincides with support for
	 * name/name_len fields in struct bpf_btf_info.
	 */
	err = bpf_btf_get_info_by_fd(fd, &info, &len);
	close(fd);
	return !err;
}

static int probe_perf_link(int token_fd)
{
	struct bpf_insn insns[] = {
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
	};
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.token_fd = token_fd,
		.prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
	);
	int prog_fd, link_fd, err;

	prog_fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL",
				insns, ARRAY_SIZE(insns), &opts);
	if (prog_fd < 0)
		return -errno;

	/* use invalid perf_event FD to get EBADF, if link is supported;
	 * otherwise EINVAL should be returned
	 */
	link_fd = bpf_link_create(prog_fd, -1, BPF_PERF_EVENT, NULL);
	err = -errno; /* close() can clobber errno */

	if (link_fd >= 0)
		close(link_fd);
	close(prog_fd);

	return link_fd < 0 && err == -EBADF;
}

static int probe_uprobe_multi_link(int token_fd)
{
	LIBBPF_OPTS(bpf_prog_load_opts, load_opts,
		.expected_attach_type = BPF_TRACE_UPROBE_MULTI,
		.token_fd = token_fd,
		.prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
	);
	LIBBPF_OPTS(bpf_link_create_opts, link_opts);
	struct bpf_insn insns[] = {
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
	};
	int prog_fd, link_fd, err;
	unsigned long offset = 0;

	prog_fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, "GPL",
				insns, ARRAY_SIZE(insns), &load_opts);
	if (prog_fd < 0)
		return -errno;

	/* Creating uprobe in '/' binary should fail with -EBADF. */
	link_opts.uprobe_multi.path = "/";
	link_opts.uprobe_multi.offsets = &offset;
	link_opts.uprobe_multi.cnt = 1;

	link_fd = bpf_link_create(prog_fd, -1, BPF_TRACE_UPROBE_MULTI, &link_opts);
	err = -errno; /* close() can clobber errno */

	if (link_fd >= 0)
		close(link_fd);
	close(prog_fd);

	return link_fd < 0 && err == -EBADF;
}

static int probe_kern_bpf_cookie(int token_fd)
{
	struct bpf_insn insns[] = {
		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_attach_cookie),
		BPF_EXIT_INSN(),
	};
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.token_fd = token_fd,
		.prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
	);
	int ret, insn_cnt = ARRAY_SIZE(insns);

	ret = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, &opts);
	return probe_fd(ret);
}

static int probe_kern_btf_enum64(int token_fd)
{
	static const char strs[] = "\0enum64";
	__u32 types[] = {
		BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_ENUM64, 0, 0), 8),
	};

	return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
					     strs, sizeof(strs), token_fd));
}

typedef int (*feature_probe_fn)(int /* token_fd */);

static struct kern_feature_cache feature_cache;

static struct kern_feature_desc {
	const char *desc;
	feature_probe_fn probe;
} feature_probes[__FEAT_CNT] = {
	[FEAT_PROG_NAME] = {
		"BPF program name", probe_kern_prog_name,
	},
	[FEAT_GLOBAL_DATA] = {
		"global variables", probe_kern_global_data,
	},
	[FEAT_BTF] = {
		"minimal BTF", probe_kern_btf,
	},
	[FEAT_BTF_FUNC] = {
		"BTF functions", probe_kern_btf_func,
	},
	[FEAT_BTF_GLOBAL_FUNC] = {
		"BTF global function", probe_kern_btf_func_global,
	},
	[FEAT_BTF_DATASEC] = {
		"BTF data section and variable", probe_kern_btf_datasec,
	},
	[FEAT_ARRAY_MMAP] = {
		"ARRAY map mmap()", probe_kern_array_mmap,
	},
	[FEAT_EXP_ATTACH_TYPE] = {
		"BPF_PROG_LOAD expected_attach_type attribute",
		probe_kern_exp_attach_type,
	},
	[FEAT_PROBE_READ_KERN] = {
		"bpf_probe_read_kernel() helper", probe_kern_probe_read_kernel,
	},
	[FEAT_PROG_BIND_MAP] = {
		"BPF_PROG_BIND_MAP support", probe_prog_bind_map,
	},
	[FEAT_MODULE_BTF] = {
		"module BTF support", probe_module_btf,
	},
	[FEAT_BTF_FLOAT] = {
		"BTF_KIND_FLOAT support", probe_kern_btf_float,
	},
	[FEAT_PERF_LINK] = {
		"BPF perf link support", probe_perf_link,
	},
	[FEAT_BTF_DECL_TAG] = {
		"BTF_KIND_DECL_TAG support", probe_kern_btf_decl_tag,
	},
	[FEAT_BTF_TYPE_TAG] = {
		"BTF_KIND_TYPE_TAG support", probe_kern_btf_type_tag,
	},
	[FEAT_MEMCG_ACCOUNT] = {
		"memcg-based memory accounting", probe_memcg_account,
	},
	[FEAT_BPF_COOKIE] = {
		"BPF cookie support", probe_kern_bpf_cookie,
	},
	[FEAT_BTF_ENUM64] = {
		"BTF_KIND_ENUM64 support", probe_kern_btf_enum64,
	},
	[FEAT_SYSCALL_WRAPPER] = {
		"Kernel using syscall wrapper", probe_kern_syscall_wrapper,
	},
	[FEAT_UPROBE_MULTI_LINK] = {
		"BPF multi-uprobe link support", probe_uprobe_multi_link,
	},
};

bool feat_supported(struct kern_feature_cache *cache, enum kern_feature_id feat_id)
{
	struct kern_feature_desc *feat = &feature_probes[feat_id];
	int ret;

	/* assume global feature cache, unless custom one is provided */
	if (!cache)
		cache = &feature_cache;

	if (READ_ONCE(cache->res[feat_id]) == FEAT_UNKNOWN) {
		ret = feat->probe(cache->token_fd);
		if (ret > 0) {
			WRITE_ONCE(cache->res[feat_id], FEAT_SUPPORTED);
		} else if (ret == 0) {
			WRITE_ONCE(cache->res[feat_id], FEAT_MISSING);
		} else {
			pr_warn("Detection of kernel %s support failed: %d\n", feat->desc, ret);
			WRITE_ONCE(cache->res[feat_id], FEAT_MISSING);
		}
	}

	return READ_ONCE(cache->res[feat_id]) == FEAT_SUPPORTED;
}
@@ -59,6 +59,8 @@
 #define BPF_FS_MAGIC		0xcafe4a11
 #endif

+#define BPF_FS_DEFAULT_PATH "/sys/fs/bpf"
+
 #define BPF_INSN_SZ (sizeof(struct bpf_insn))

 /* vsprintf() in __base_pr() uses nonliteral format string. It may break
@@ -695,6 +697,10 @@ struct bpf_object {

 	struct usdt_manager *usdt_man;

+	struct kern_feature_cache *feat_cache;
+	char *token_path;
+	int token_fd;
+
 	char path[];
 };

@@ -2231,7 +2237,7 @@ static int build_map_pin_path(struct bpf_map *map, const char *path)
 	int err;

 	if (!path)
-		path = "/sys/fs/bpf";
+		path = BPF_FS_DEFAULT_PATH;

 	err = pathname_concat(buf, sizeof(buf), path, bpf_map__name(map));
 	if (err)
@@ -3240,7 +3246,7 @@ static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj)
 	} else {
 		/* currently BPF_BTF_LOAD only supports log_level 1 */
 		err = btf_load_into_kernel(kern_btf, obj->log_buf, obj->log_size,
-					   obj->log_level ? 1 : 0);
+					   obj->log_level ? 1 : 0, obj->token_fd);
 	}
 	if (sanitize) {
 		if (!err) {
@@ -4561,6 +4567,58 @@ int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
 	return 0;
 }

+static int bpf_object_prepare_token(struct bpf_object *obj)
+{
+	const char *bpffs_path;
+	int bpffs_fd = -1, token_fd, err;
+	bool mandatory;
+	enum libbpf_print_level level;
+
+	/* token is explicitly prevented */
+	if (obj->token_path && obj->token_path[0] == '\0') {
+		pr_debug("object '%s': token is prevented, skipping...\n", obj->name);
+		return 0;
+	}
+
+	mandatory = obj->token_path != NULL;
+	level = mandatory ? LIBBPF_WARN : LIBBPF_DEBUG;
+
+	bpffs_path = obj->token_path ?: BPF_FS_DEFAULT_PATH;
+	bpffs_fd = open(bpffs_path, O_DIRECTORY, O_RDWR);
+	if (bpffs_fd < 0) {
+		err = -errno;
+		__pr(level, "object '%s': failed (%d) to open BPF FS mount at '%s'%s\n",
+		     obj->name, err, bpffs_path,
+		     mandatory ? "" : ", skipping optional step...");
+		return mandatory ? err : 0;
+	}
+
+	token_fd = bpf_token_create(bpffs_fd, 0);
+	close(bpffs_fd);
+	if (token_fd < 0) {
+		if (!mandatory && token_fd == -ENOENT) {
+			pr_debug("object '%s': BPF FS at '%s' doesn't have BPF token delegation set up, skipping...\n",
+				 obj->name, bpffs_path);
+			return 0;
+		}
+		__pr(level, "object '%s': failed (%d) to create BPF token from '%s'%s\n",
+		     obj->name, token_fd, bpffs_path,
+		     mandatory ? "" : ", skipping optional step...");
+		return mandatory ? token_fd : 0;
+	}
+
+	obj->feat_cache = calloc(1, sizeof(*obj->feat_cache));
+	if (!obj->feat_cache) {
+		close(token_fd);
+		return -ENOMEM;
+	}
+
+	obj->token_fd = token_fd;
+	obj->feat_cache->token_fd = token_fd;
+
+	return 0;
+}
 static int
 bpf_object__probe_loading(struct bpf_object *obj)
 {
@@ -4570,6 +4628,10 @@ bpf_object__probe_loading(struct bpf_object *obj)
 		BPF_EXIT_INSN(),
 	};
 	int ret, insn_cnt = ARRAY_SIZE(insns);
+	LIBBPF_OPTS(bpf_prog_load_opts, opts,
+		.token_fd = obj->token_fd,
+		.prog_flags = obj->token_fd ? BPF_F_TOKEN_FD : 0,
+	);

 	if (obj->gen_loader)
 		return 0;
@@ -4579,9 +4641,9 @@ bpf_object__probe_loading(struct bpf_object *obj)
 		pr_warn("Failed to bump RLIMIT_MEMLOCK (err = %d), you might need to do it explicitly!\n", ret);

 	/* make sure basic loading works */
-	ret = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, NULL);
+	ret = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, &opts);
 	if (ret < 0)
-		ret = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, NULL);
+		ret = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, &opts);
 	if (ret < 0) {
 		ret = errno;
 		cp = libbpf_strerror_r(ret, errmsg, sizeof(errmsg));
@ -4596,462 +4658,18 @@ bpf_object__probe_loading(struct bpf_object *obj)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int probe_fd(int fd)
|
||||
{
|
||||
if (fd >= 0)
|
||||
close(fd);
|
||||
return fd >= 0;
|
||||
}
|
||||
|
||||
static int probe_kern_prog_name(void)
|
||||
{
|
||||
const size_t attr_sz = offsetofend(union bpf_attr, prog_name);
|
||||
struct bpf_insn insns[] = {
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
};
|
||||
union bpf_attr attr;
|
||||
int ret;
|
||||
|
||||
memset(&attr, 0, attr_sz);
|
||||
attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
|
||||
attr.license = ptr_to_u64("GPL");
|
||||
attr.insns = ptr_to_u64(insns);
|
||||
attr.insn_cnt = (__u32)ARRAY_SIZE(insns);
|
||||
libbpf_strlcpy(attr.prog_name, "libbpf_nametest", sizeof(attr.prog_name));
|
||||
|
||||
/* make sure loading with name works */
|
||||
ret = sys_bpf_prog_load(&attr, attr_sz, PROG_LOAD_ATTEMPTS);
|
||||
return probe_fd(ret);
|
||||
}
|
||||
|
||||
static int probe_kern_global_data(void)
|
||||
{
|
||||
char *cp, errmsg[STRERR_BUFSIZE];
|
||||
struct bpf_insn insns[] = {
|
||||
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 16),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 42),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
};
|
||||
int ret, map, insn_cnt = ARRAY_SIZE(insns);
|
||||
|
||||
map = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_global", sizeof(int), 32, 1, NULL);
|
||||
if (map < 0) {
|
||||
ret = -errno;
|
||||
cp = libbpf_strerror_r(ret, errmsg, sizeof(errmsg));
|
||||
pr_warn("Error in %s():%s(%d). Couldn't create simple array map.\n",
|
||||
__func__, cp, -ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
insns[0].imm = map;
|
||||
|
||||
ret = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, NULL);
|
||||
close(map);
|
||||
return probe_fd(ret);
|
||||
}
|
||||
|
||||
static int probe_kern_btf(void)
|
||||
{
|
||||
static const char strs[] = "\0int";
|
||||
__u32 types[] = {
|
||||
/* int */
|
||||
BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),
|
||||
};
|
||||
|
||||
return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
|
||||
strs, sizeof(strs)));
|
||||
}
|
||||
|
||||
static int probe_kern_btf_func(void)
|
||||
{
|
||||
static const char strs[] = "\0int\0x\0a";
|
||||
/* void x(int a) {} */
|
||||
__u32 types[] = {
|
||||
/* int */
|
||||
BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
|
||||
/* FUNC_PROTO */ /* [2] */
|
||||
BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 0),
|
||||
BTF_PARAM_ENC(7, 1),
|
||||
/* FUNC x */ /* [3] */
|
||||
BTF_TYPE_ENC(5, BTF_INFO_ENC(BTF_KIND_FUNC, 0, 0), 2),
|
||||
};
|
||||
|
||||
return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
|
||||
strs, sizeof(strs)));
|
||||
}
|
||||
|
||||
static int probe_kern_btf_func_global(void)
|
||||
{
|
||||
static const char strs[] = "\0int\0x\0a";
|
||||
/* static void x(int a) {} */
|
||||
__u32 types[] = {
|
||||
/* int */
|
||||
BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
|
||||
/* FUNC_PROTO */ /* [2] */
|
||||
BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 0),
|
||||
BTF_PARAM_ENC(7, 1),
|
||||
/* FUNC x BTF_FUNC_GLOBAL */ /* [3] */
|
||||
BTF_TYPE_ENC(5, BTF_INFO_ENC(BTF_KIND_FUNC, 0, BTF_FUNC_GLOBAL), 2),
|
||||
};
|
||||
|
||||
return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
|
||||
strs, sizeof(strs)));
|
||||
}
|
||||
|
||||
static int probe_kern_btf_datasec(void)
|
||||
{
|
||||
static const char strs[] = "\0x\0.data";
|
||||
/* static int a; */
|
||||
__u32 types[] = {
|
||||
/* int */
|
||||
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
|
||||
/* VAR x */ /* [2] */
|
||||
BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1),
|
||||
BTF_VAR_STATIC,
|
||||
/* DATASEC val */ /* [3] */
|
||||
BTF_TYPE_ENC(3, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
|
||||
BTF_VAR_SECINFO_ENC(2, 0, 4),
|
||||
};
|
||||
|
||||
return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
|
||||
strs, sizeof(strs)));
|
||||
}
|
||||
|
||||
static int probe_kern_btf_float(void)
|
||||
{
|
||||
static const char strs[] = "\0float";
|
||||
__u32 types[] = {
|
||||
/* float */
|
||||
BTF_TYPE_FLOAT_ENC(1, 4),
|
||||
};
|
||||
|
||||
return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
|
||||
strs, sizeof(strs)));
|
||||
}
|
||||
|
||||
static int probe_kern_btf_decl_tag(void)
|
||||
{
|
||||
static const char strs[] = "\0tag";
|
||||
__u32 types[] = {
|
||||
/* int */
|
||||
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
|
||||
/* VAR x */ /* [2] */
|
||||
BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1),
|
||||
BTF_VAR_STATIC,
|
||||
/* attr */
|
||||
BTF_TYPE_DECL_TAG_ENC(1, 2, -1),
|
||||
};
|
||||
|
||||
return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
|
||||
strs, sizeof(strs)));
|
||||
}
|
||||
|
||||
static int probe_kern_btf_type_tag(void)
|
||||
{
|
||||
static const char strs[] = "\0tag";
|
||||
__u32 types[] = {
|
||||
/* int */
|
||||
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
|
||||
/* attr */
|
||||
BTF_TYPE_TYPE_TAG_ENC(1, 1), /* [2] */
|
||||
/* ptr */
|
||||
BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2), /* [3] */
|
||||
};
|
||||
|
||||
return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
|
||||
strs, sizeof(strs)));
|
||||
}
|
||||
|
||||
static int probe_kern_array_mmap(void)
|
||||
{
|
||||
LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_MMAPABLE);
|
||||
int fd;
|
||||
|
||||
fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_mmap", sizeof(int), sizeof(int), 1, &opts);
|
||||
return probe_fd(fd);
|
||||
}
|
||||
|
||||
static int probe_kern_exp_attach_type(void)
|
||||
{
|
||||
LIBBPF_OPTS(bpf_prog_load_opts, opts, .expected_attach_type = BPF_CGROUP_INET_SOCK_CREATE);
|
||||
struct bpf_insn insns[] = {
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
};
|
||||
int fd, insn_cnt = ARRAY_SIZE(insns);
|
||||
|
||||
/* use any valid combination of program type and (optional)
|
||||
* non-zero expected attach type (i.e., not a BPF_CGROUP_INET_INGRESS)
|
||||
* to see if kernel supports expected_attach_type field for
|
||||
* BPF_PROG_LOAD command
|
||||
*/
|
||||
fd = bpf_prog_load(BPF_PROG_TYPE_CGROUP_SOCK, NULL, "GPL", insns, insn_cnt, &opts);
|
||||
return probe_fd(fd);
|
||||
}
|
||||
|
||||
static int probe_kern_probe_read_kernel(void)
|
||||
{
|
||||
struct bpf_insn insns[] = {
|
||||
BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), /* r1 = r10 (fp) */
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), /* r1 += -8 */
|
||||
BPF_MOV64_IMM(BPF_REG_2, 8), /* r2 = 8 */
|
||||
BPF_MOV64_IMM(BPF_REG_3, 0), /* r3 = 0 */
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_probe_read_kernel),
|
||||
BPF_EXIT_INSN(),
|
||||
};
|
||||
int fd, insn_cnt = ARRAY_SIZE(insns);
|
||||
|
||||
fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, NULL);
|
||||
return probe_fd(fd);
|
||||
}
|
||||
|
||||
static int probe_prog_bind_map(void)
|
||||
{
|
||||
char *cp, errmsg[STRERR_BUFSIZE];
|
||||
struct bpf_insn insns[] = {
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
};
|
||||
int ret, map, prog, insn_cnt = ARRAY_SIZE(insns);
|
||||
|
||||
map = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_det_bind", sizeof(int), 32, 1, NULL);
|
||||
if (map < 0) {
|
||||
ret = -errno;
|
||||
cp = libbpf_strerror_r(ret, errmsg, sizeof(errmsg));
|
||||
pr_warn("Error in %s():%s(%d). Couldn't create simple array map.\n",
|
||||
__func__, cp, -ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
prog = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, NULL);
|
||||
if (prog < 0) {
|
||||
close(map);
|
||||
return 0;
|
||||
}
|
||||
|
||||
ret = bpf_prog_bind_map(prog, map, NULL);
|
||||
|
||||
close(map);
|
||||
close(prog);
|
||||
|
||||
return ret >= 0;
|
||||
}
|
||||
|
||||
static int probe_module_btf(void)
|
||||
{
|
||||
static const char strs[] = "\0int";
|
||||
__u32 types[] = {
|
||||
/* int */
|
||||
BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),
|
||||
};
|
||||
struct bpf_btf_info info;
|
||||
__u32 len = sizeof(info);
|
||||
char name[16];
|
||||
int fd, err;
|
||||
|
||||
fd = libbpf__load_raw_btf((char *)types, sizeof(types), strs, sizeof(strs));
|
||||
if (fd < 0)
|
||||
return 0; /* BTF not supported at all */
|
||||
|
||||
memset(&info, 0, sizeof(info));
|
||||
info.name = ptr_to_u64(name);
|
||||
info.name_len = sizeof(name);
|
||||
|
||||
/* check that BPF_OBJ_GET_INFO_BY_FD supports specifying name pointer;
|
||||
* kernel's module BTF support coincides with support for
|
||||
* name/name_len fields in struct bpf_btf_info.
|
||||
*/
|
||||
err = bpf_btf_get_info_by_fd(fd, &info, &len);
|
||||
close(fd);
|
||||
return !err;
|
||||
}
|
||||
|
||||
static int probe_perf_link(void)
|
||||
{
|
||||
struct bpf_insn insns[] = {
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
};
|
||||
int prog_fd, link_fd, err;
|
||||
|
||||
prog_fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL",
|
||||
insns, ARRAY_SIZE(insns), NULL);
|
||||
if (prog_fd < 0)
|
||||
return -errno;
|
||||
|
||||
/* use invalid perf_event FD to get EBADF, if link is supported;
|
||||
* otherwise EINVAL should be returned
|
||||
*/
|
||||
link_fd = bpf_link_create(prog_fd, -1, BPF_PERF_EVENT, NULL);
|
||||
err = -errno; /* close() can clobber errno */
|
||||
|
||||
if (link_fd >= 0)
|
||||
close(link_fd);
|
||||
close(prog_fd);
|
||||
|
||||
return link_fd < 0 && err == -EBADF;
|
||||
}
|
||||
|
||||
static int probe_uprobe_multi_link(void)
|
||||
{
|
||||
LIBBPF_OPTS(bpf_prog_load_opts, load_opts,
|
||||
.expected_attach_type = BPF_TRACE_UPROBE_MULTI,
|
||||
);
|
||||
LIBBPF_OPTS(bpf_link_create_opts, link_opts);
|
||||
struct bpf_insn insns[] = {
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
};
|
||||
int prog_fd, link_fd, err;
|
||||
unsigned long offset = 0;
|
||||
|
||||
prog_fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, "GPL",
|
||||
insns, ARRAY_SIZE(insns), &load_opts);
|
||||
if (prog_fd < 0)
|
||||
return -errno;
|
||||
|
||||
/* Creating uprobe in '/' binary should fail with -EBADF. */
|
||||
link_opts.uprobe_multi.path = "/";
|
||||
link_opts.uprobe_multi.offsets = &offset;
|
||||
link_opts.uprobe_multi.cnt = 1;
|
||||
|
||||
link_fd = bpf_link_create(prog_fd, -1, BPF_TRACE_UPROBE_MULTI, &link_opts);
|
||||
err = -errno; /* close() can clobber errno */
|
||||
|
||||
if (link_fd >= 0)
|
||||
close(link_fd);
|
||||
close(prog_fd);
|
||||
|
||||
return link_fd < 0 && err == -EBADF;
|
||||
}
|
||||
|
||||
static int probe_kern_bpf_cookie(void)
|
||||
{
|
||||
struct bpf_insn insns[] = {
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_attach_cookie),
|
||||
BPF_EXIT_INSN(),
|
||||
};
|
||||
int ret, insn_cnt = ARRAY_SIZE(insns);
|
||||
|
||||
ret = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, "GPL", insns, insn_cnt, NULL);
|
||||
return probe_fd(ret);
|
||||
}
|
||||
|
||||
static int probe_kern_btf_enum64(void)
|
||||
{
|
||||
static const char strs[] = "\0enum64";
|
||||
__u32 types[] = {
|
||||
BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_ENUM64, 0, 0), 8),
|
||||
};
|
||||
|
||||
return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
|
||||
strs, sizeof(strs)));
|
||||
}
|
||||
|
||||
static int probe_kern_syscall_wrapper(void);
|
||||
|
||||
enum kern_feature_result {
|
||||
FEAT_UNKNOWN = 0,
|
||||
FEAT_SUPPORTED = 1,
|
||||
FEAT_MISSING = 2,
|
||||
};
|
||||
|
||||
typedef int (*feature_probe_fn)(void);
|
||||
|
||||
static struct kern_feature_desc {
	const char *desc;
	feature_probe_fn probe;
	enum kern_feature_result res;
} feature_probes[__FEAT_CNT] = {
	[FEAT_PROG_NAME] = {
		"BPF program name", probe_kern_prog_name,
	},
	[FEAT_GLOBAL_DATA] = {
		"global variables", probe_kern_global_data,
	},
	[FEAT_BTF] = {
		"minimal BTF", probe_kern_btf,
	},
	[FEAT_BTF_FUNC] = {
		"BTF functions", probe_kern_btf_func,
	},
	[FEAT_BTF_GLOBAL_FUNC] = {
		"BTF global function", probe_kern_btf_func_global,
	},
	[FEAT_BTF_DATASEC] = {
		"BTF data section and variable", probe_kern_btf_datasec,
	},
	[FEAT_ARRAY_MMAP] = {
		"ARRAY map mmap()", probe_kern_array_mmap,
	},
	[FEAT_EXP_ATTACH_TYPE] = {
		"BPF_PROG_LOAD expected_attach_type attribute",
		probe_kern_exp_attach_type,
	},
	[FEAT_PROBE_READ_KERN] = {
		"bpf_probe_read_kernel() helper", probe_kern_probe_read_kernel,
	},
	[FEAT_PROG_BIND_MAP] = {
		"BPF_PROG_BIND_MAP support", probe_prog_bind_map,
	},
	[FEAT_MODULE_BTF] = {
		"module BTF support", probe_module_btf,
	},
	[FEAT_BTF_FLOAT] = {
		"BTF_KIND_FLOAT support", probe_kern_btf_float,
	},
	[FEAT_PERF_LINK] = {
		"BPF perf link support", probe_perf_link,
	},
	[FEAT_BTF_DECL_TAG] = {
		"BTF_KIND_DECL_TAG support", probe_kern_btf_decl_tag,
	},
	[FEAT_BTF_TYPE_TAG] = {
		"BTF_KIND_TYPE_TAG support", probe_kern_btf_type_tag,
	},
	[FEAT_MEMCG_ACCOUNT] = {
		"memcg-based memory accounting", probe_memcg_account,
	},
	[FEAT_BPF_COOKIE] = {
		"BPF cookie support", probe_kern_bpf_cookie,
	},
	[FEAT_BTF_ENUM64] = {
		"BTF_KIND_ENUM64 support", probe_kern_btf_enum64,
	},
	[FEAT_SYSCALL_WRAPPER] = {
		"Kernel using syscall wrapper", probe_kern_syscall_wrapper,
	},
	[FEAT_UPROBE_MULTI_LINK] = {
		"BPF multi-uprobe link support", probe_uprobe_multi_link,
	},
};

bool kernel_supports(const struct bpf_object *obj, enum kern_feature_id feat_id)
{
	struct kern_feature_desc *feat = &feature_probes[feat_id];
	int ret;

	if (obj && obj->gen_loader)
		/* To generate loader program assume the latest kernel
		 * to avoid doing extra prog_load, map_create syscalls.
		 */
		return true;

	if (READ_ONCE(feat->res) == FEAT_UNKNOWN) {
		ret = feat->probe();
		if (ret > 0) {
			WRITE_ONCE(feat->res, FEAT_SUPPORTED);
		} else if (ret == 0) {
			WRITE_ONCE(feat->res, FEAT_MISSING);
		} else {
			pr_warn("Detection of kernel %s support failed: %d\n", feat->desc, ret);
			WRITE_ONCE(feat->res, FEAT_MISSING);
		}
	}
	if (obj->token_fd)
		return feat_supported(obj->feat_cache, feat_id);

-	return READ_ONCE(feat->res) == FEAT_SUPPORTED;
+	return feat_supported(NULL, feat_id);
}

static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)

@@ -5175,6 +4793,9 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
 	create_attr.map_flags = def->map_flags;
 	create_attr.numa_node = map->numa_node;
 	create_attr.map_extra = map->map_extra;
+	create_attr.token_fd = obj->token_fd;
+	if (obj->token_fd)
+		create_attr.map_flags |= BPF_F_TOKEN_FD;

 	if (bpf_map__is_struct_ops(map)) {
 		create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
@@ -6887,7 +6508,7 @@ static int probe_kern_arg_ctx_tag(void)
 	if (cached_result >= 0)
 		return cached_result;

-	btf_fd = libbpf__load_raw_btf((char *)types, sizeof(types), strs, sizeof(strs));
+	btf_fd = libbpf__load_raw_btf((char *)types, sizeof(types), strs, sizeof(strs), 0);
 	if (btf_fd < 0)
 		return 0;
@@ -7496,6 +7117,10 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
 	load_attr.prog_flags = prog->prog_flags;
 	load_attr.fd_array = obj->fd_array;

+	load_attr.token_fd = obj->token_fd;
+	if (obj->token_fd)
+		load_attr.prog_flags |= BPF_F_TOKEN_FD;
+
 	/* adjust load_attr if sec_def provides custom preload callback */
 	if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
 		err = prog->sec_def->prog_prepare_load_fn(prog, &load_attr, prog->sec_def->cookie);
@@ -7941,7 +7566,7 @@ static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object
 static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf, size_t obj_buf_sz,
 					  const struct bpf_object_open_opts *opts)
 {
-	const char *obj_name, *kconfig, *btf_tmp_path;
+	const char *obj_name, *kconfig, *btf_tmp_path, *token_path;
 	struct bpf_object *obj;
 	char tmp_name[64];
 	int err;
@@ -7978,6 +7603,16 @@ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf,
 	if (log_size && !log_buf)
 		return ERR_PTR(-EINVAL);

+	token_path = OPTS_GET(opts, bpf_token_path, NULL);
+	/* if user didn't specify bpf_token_path explicitly, check if
+	 * LIBBPF_BPF_TOKEN_PATH envvar was set and treat it as bpf_token_path
+	 * option
+	 */
+	if (!token_path)
+		token_path = getenv("LIBBPF_BPF_TOKEN_PATH");
+	if (token_path && strlen(token_path) >= PATH_MAX)
+		return ERR_PTR(-ENAMETOOLONG);
+
 	obj = bpf_object__new(path, obj_buf, obj_buf_sz, obj_name);
 	if (IS_ERR(obj))
 		return obj;
@@ -7986,6 +7621,14 @@ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf,
 	obj->log_size = log_size;
 	obj->log_level = log_level;

+	if (token_path) {
+		obj->token_path = strdup(token_path);
+		if (!obj->token_path) {
+			err = -ENOMEM;
+			goto out;
+		}
+	}
+
 	btf_tmp_path = OPTS_GET(opts, btf_custom_path, NULL);
 	if (btf_tmp_path) {
 		if (strlen(btf_tmp_path) >= PATH_MAX) {
@@ -8496,7 +8139,8 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
 	if (obj->gen_loader)
 		bpf_gen__init(obj->gen_loader, extra_log_level, obj->nr_programs, obj->nr_maps);

-	err = bpf_object__probe_loading(obj);
+	err = bpf_object_prepare_token(obj);
+	err = err ? : bpf_object__probe_loading(obj);
 	err = err ? : bpf_object__load_vmlinux_btf(obj, false);
 	err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
 	err = err ? : bpf_object__sanitize_maps(obj);
@@ -9031,6 +8675,11 @@ void bpf_object__close(struct bpf_object *obj)
 	}
 	zfree(&obj->programs);

+	zfree(&obj->feat_cache);
+	zfree(&obj->token_path);
+	if (obj->token_fd > 0)
+		close(obj->token_fd);
+
 	free(obj);
 }

@@ -11053,7 +10702,7 @@ static const char *arch_specific_syscall_pfx(void)
 #endif
 }

-static int probe_kern_syscall_wrapper(void)
+int probe_kern_syscall_wrapper(int token_fd)
 {
 	char syscall_name[64];
 	const char *ksys_pfx;

@@ -177,10 +177,29 @@ struct bpf_object_open_opts {
 	 * logs through its print callback.
 	 */
 	__u32 kernel_log_level;
+	/* Path to BPF FS mount point to derive BPF token from.
+	 *
+	 * Created BPF token will be used for all bpf() syscall operations
+	 * that accept BPF token (e.g., map creation, BTF and program loads,
+	 * etc) automatically within instantiated BPF object.
+	 *
+	 * If bpf_token_path is not specified, libbpf will consult
+	 * LIBBPF_BPF_TOKEN_PATH environment variable. If set, it will be
+	 * taken as a value of bpf_token_path option and will force libbpf to
+	 * either create BPF token from provided custom BPF FS path, or will
+	 * disable implicit BPF token creation, if envvar value is an empty
+	 * string. bpf_token_path overrides LIBBPF_BPF_TOKEN_PATH, if both are
+	 * set at the same time.
+	 *
+	 * Setting bpf_token_path option to empty string disables libbpf's
+	 * automatic attempt to create BPF token from default BPF FS mount
+	 * point (/sys/fs/bpf), in case this default behavior is undesirable.
+	 */
+	const char *bpf_token_path;

 	size_t :0;
 };
-#define bpf_object_open_opts__last_field kernel_log_level
+#define bpf_object_open_opts__last_field bpf_token_path

 /**
  * @brief **bpf_object__open()** creates a bpf_object by opening

@@ -411,4 +411,5 @@ LIBBPF_1.3.0 {
 } LIBBPF_1.2.0;

 LIBBPF_1.4.0 {
+	bpf_token_create;
 } LIBBPF_1.3.0;

@@ -361,15 +361,32 @@ enum kern_feature_id {
 	__FEAT_CNT,
 };

-int probe_memcg_account(void);
+enum kern_feature_result {
+	FEAT_UNKNOWN = 0,
+	FEAT_SUPPORTED = 1,
+	FEAT_MISSING = 2,
+};
+
+struct kern_feature_cache {
+	enum kern_feature_result res[__FEAT_CNT];
+	int token_fd;
+};
+
+bool feat_supported(struct kern_feature_cache *cache, enum kern_feature_id feat_id);
 bool kernel_supports(const struct bpf_object *obj, enum kern_feature_id feat_id);

+int probe_kern_syscall_wrapper(int token_fd);
+int probe_memcg_account(int token_fd);
 int bump_rlimit_memlock(void);

 int parse_cpu_mask_str(const char *s, bool **mask, int *mask_sz);
 int parse_cpu_mask_file(const char *fcpu, bool **mask, int *mask_sz);
 int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
-			 const char *str_sec, size_t str_len);
-int btf_load_into_kernel(struct btf *btf, char *log_buf, size_t log_sz, __u32 log_level);
+			 const char *str_sec, size_t str_len,
+			 int token_fd);
+int btf_load_into_kernel(struct btf *btf,
+			 char *log_buf, size_t log_sz, __u32 log_level,
+			 int token_fd);

 struct btf *btf_get_from_fd(int btf_fd, struct btf *base_btf);
 void btf_get_kernel_prefix_kind(enum bpf_attach_type attach_type,
@@ -533,6 +550,17 @@ static inline bool is_ldimm64_insn(struct bpf_insn *insn)
 	return insn->code == (BPF_LD | BPF_IMM | BPF_DW);
 }

+/* Unconditionally dup FD, ensuring it doesn't use [0, 2] range.
+ * Original FD is not closed or altered in any other way.
+ * Preserves original FD value, if it's invalid (negative).
+ */
+static inline int dup_good_fd(int fd)
+{
+	if (fd < 0)
+		return fd;
+	return fcntl(fd, F_DUPFD_CLOEXEC, 3);
+}
+
 /* if fd is stdin, stdout, or stderr, dup to a fd greater than 2
  * Takes ownership of the fd passed in, and closes it if calling
  * fcntl(fd, F_DUPFD_CLOEXEC, 3).
@@ -544,7 +572,7 @@ static inline int ensure_good_fd(int fd)
 	if (fd < 0)
 		return fd;
 	if (fd < 3) {
-		fd = fcntl(fd, F_DUPFD_CLOEXEC, 3);
+		fd = dup_good_fd(fd);
 		saved_errno = errno;
 		close(old_fd);
 		errno = saved_errno;
@@ -623,4 +651,6 @@ int elf_resolve_syms_offsets(const char *binary_path, int cnt,
 int elf_resolve_pattern_offsets(const char *binary_path, const char *pattern,
 				unsigned long **poffsets, size_t *pcnt);

+int probe_fd(int fd);
+
 #endif /* __LIBBPF_LIBBPF_INTERNAL_H */
@@ -219,7 +219,8 @@ int libbpf_probe_bpf_prog_type(enum bpf_prog_type prog_type, const void *opts)
 }

 int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
-			 const char *str_sec, size_t str_len)
+			 const char *str_sec, size_t str_len,
+			 int token_fd)
 {
 	struct btf_header hdr = {
 		.magic = BTF_MAGIC,
@@ -229,6 +230,10 @@ int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
 		.str_off = types_len,
 		.str_len = str_len,
 	};
+	LIBBPF_OPTS(bpf_btf_load_opts, opts,
+		.token_fd = token_fd,
+		.btf_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+	);
 	int btf_fd, btf_len;
 	__u8 *raw_btf;

@@ -241,7 +246,7 @@ int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
 	memcpy(raw_btf + hdr.hdr_len, raw_types, hdr.type_len);
 	memcpy(raw_btf + hdr.hdr_len + hdr.type_len, str_sec, hdr.str_len);

-	btf_fd = bpf_btf_load(raw_btf, btf_len, NULL);
+	btf_fd = bpf_btf_load(raw_btf, btf_len, &opts);

 	free(raw_btf);
 	return btf_fd;
@@ -271,7 +276,7 @@ static int load_local_storage_btf(void)
 	};

 	return libbpf__load_raw_btf((char *)types, sizeof(types),
-				    strs, sizeof(strs));
+				    strs, sizeof(strs), 0);
 }

 static int probe_map_create(enum bpf_map_type map_type)

@@ -2,5 +2,8 @@
 #ifndef __LIBBPF_STR_ERROR_H
 #define __LIBBPF_STR_ERROR_H

+#define STRERR_BUFSIZE 128
+
 char *libbpf_strerror_r(int err, char *dst, int len);

 #endif /* __LIBBPF_STR_ERROR_H */
@@ -30,6 +30,8 @@ void test_libbpf_probe_prog_types(void)

 		if (prog_type == BPF_PROG_TYPE_UNSPEC)
 			continue;
+		if (strcmp(prog_type_name, "__MAX_BPF_PROG_TYPE") == 0)
+			continue;

 		if (!test__start_subtest(prog_type_name))
 			continue;
@@ -68,6 +70,8 @@ void test_libbpf_probe_map_types(void)

 		if (map_type == BPF_MAP_TYPE_UNSPEC)
 			continue;
+		if (strcmp(map_type_name, "__MAX_BPF_MAP_TYPE") == 0)
+			continue;

 		if (!test__start_subtest(map_type_name))
 			continue;

@@ -132,6 +132,9 @@ static void test_libbpf_bpf_map_type_str(void)
 		const char *map_type_str;
 		char buf[256];

+		if (map_type == __MAX_BPF_MAP_TYPE)
+			continue;
+
 		map_type_name = btf__str_by_offset(btf, e->name_off);
 		map_type_str = libbpf_bpf_map_type_str(map_type);
 		ASSERT_OK_PTR(map_type_str, map_type_name);
@@ -186,6 +189,9 @@ static void test_libbpf_bpf_prog_type_str(void)
 		const char *prog_type_str;
 		char buf[256];

+		if (prog_type == __MAX_BPF_PROG_TYPE)
+			continue;
+
 		prog_type_name = btf__str_by_offset(btf, e->name_off);
 		prog_type_str = libbpf_bpf_prog_type_str(prog_type);
 		ASSERT_OK_PTR(prog_type_str, prog_type_name);

1052	tools/testing/selftests/bpf/prog_tests/token.c (new file; diff suppressed because it is too large)
13	tools/testing/selftests/bpf/progs/priv_map.c (new file)
@@ -0,0 +1,13 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

struct {
	__uint(type, BPF_MAP_TYPE_QUEUE);
	__uint(max_entries, 1);
	__type(value, __u32);
} priv_map SEC(".maps");
13	tools/testing/selftests/bpf/progs/priv_prog.c (new file)
@@ -0,0 +1,13 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

SEC("kprobe")
int kprobe_prog(void *ctx)
{
	return 1;
}
32	tools/testing/selftests/bpf/progs/token_lsm.c (new file)
@@ -0,0 +1,32 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

int my_pid;
bool reject_capable;
bool reject_cmd;

SEC("lsm/bpf_token_capable")
int BPF_PROG(token_capable, struct bpf_token *token, int cap)
{
	if (my_pid == 0 || my_pid != (bpf_get_current_pid_tgid() >> 32))
		return 0;
	if (reject_capable)
		return -1;
	return 0;
}

SEC("lsm/bpf_token_cmd")
int BPF_PROG(token_cmd, struct bpf_token *token, enum bpf_cmd cmd)
{
	if (my_pid == 0 || my_pid != (bpf_get_current_pid_tgid() >> 32))
		return 0;
	if (reject_cmd)
		return -1;
	return 0;
}