Merge patch series "nstree: listns()"

Christian Brauner <brauner@kernel.org> says:

As announced a while ago this is the next step building on the nstree
work from prior cycles. There's a bunch of fixes and semantic cleanups
in here and a ton of tests.

listns() relies on active namespace reference counts, which are
introduced alongside this series.

While a namespace is on the namespace trees with a valid reference count
it is possible to reopen it through a namespace file handle. This is all
fine but has some issues that should be addressed.

On current kernels a namespace is visible to userspace in the
following cases:

(1) The namespace is in use by a task.
(2) The namespace is persisted through a VFS object (namespace file
    descriptor or bind-mount).
    Note that (2) only cares about direct persistence of the namespace
    itself, not indirect pinning via e.g., file->f_cred references or
    similar.
(3) The namespace is a hierarchical namespace type and is the parent of
    one or more child namespaces.

Case (3) is interesting because a parent namespace might fulfill
neither (1) nor (2), i.e., be invisible to userspace, yet still be
resurrected through the NS_GET_PARENT ioctl().

Namespace file handles currently allow much broader access to
namespaces than what is possible via (1)-(3). The reason is that
namespaces may remain pinned for completely internal reasons yet be
inaccessible to userspace.

For example, a user namespace may remain pinned by get_cred() calls
that stash the opener's credentials into file->f_cred. As it stands,
file handles allow resurrecting such a user namespace even though this
should not be possible via (1)-(3). This is a fundamental uapi change
that we shouldn't make if we don't have to.

Consider the following insane case: Various architectures support the
CONFIG_MMU_LAZY_TLB_REFCOUNT option which uses lazy TLB destruction.
When this option is set, a userspace task's struct mm_struct may be used
for kernel threads such as the idle task and will only be destroyed once
the cpu's runqueue switches back to another task. But because of ptrace()
permission checks, struct mm_struct stashes the user namespace of the
task it originally belonged to. The kernel thread takes a reference on
the struct mm_struct and thus pins it.

So on an idle system user namespaces can be persisted for arbitrary
amounts of time, which also means that they can be resurrected using
namespace file handles. That makes no sense whatsoever. The problem is
of course exacerbated on large systems with a huge number of cpus.

To handle this nicely we introduce an active reference count which
tracks (1)-(3). This is easy to do as all of these things are already
managed centrally. Only (1)-(3) will count towards the active reference
count and only namespaces which are active may be opened via namespace
file handles.

The problem is that namespaces may be resurrected, which means that
they can become temporarily inactive and will be reactivated some time
later. Currently the only example of this is the SIOCGSKNS socket
ioctl. The SIOCGSKNS ioctl allows opening a network namespace file
descriptor based on a socket file descriptor.

If a socket is tied to a network namespace that subsequently becomes
inactive but that socket is persisted by another process in another
network namespace (e.g., via SCM_RIGHTS or pidfd_getfd()) then the
SIOCGSKNS ioctl will resurrect this network namespace.

So calls to open_related_ns() and open_namespace() will end up
resurrecting the corresponding namespace tree.

Note that the active reference count does not regulate the lifetime of
the namespace itself. This is still done by the normal reference count.
The active reference count can only be elevated if the regular reference
count is elevated.

The active reference count also doesn't regulate the presence of a
namespace on the namespace trees. It only regulates its visibility to
namespace file handles (and in later patches to listns()).

A namespace remains on the namespace trees from creation until its
actual destruction. This will allow the kernel to always reach any
namespace trivially and it will also enable subsystems like bpf to walk
the namespace lists on the system for tracing or general introspection
purposes.

Note that different namespaces have different visibility lifetimes on
current kernels. While most namespaces are immediately released when the
last task using them exits, the user- and pid namespaces are persisted
and thus both remain accessible via /proc/<pid>/ns/<ns_type>.

The user namespace lifetime is aligned with struct cred and is only
released through exit_creds(). However, it becomes inaccessible to
userspace once the last task using it is reaped, i.e., when
release_task() is called and all proc entries are flushed. Similarly,
the pid namespace is also visible until the last task using it has been
reaped and the associated pid numbers are freed.

The active reference counts of the user- and pid namespace are
decremented once the task is reaped.

Based on the namespace trees and the active reference count, a new
listns() system call is introduced that allows userspace to iterate
through the namespaces on the system. This provides a programmatic
interface to discover and inspect namespaces, complementing the
existing namespace APIs.

Currently, there is no direct way for userspace to enumerate namespaces
in the system. Applications must resort to scanning /proc/<pid>/ns/
across all processes, which is:

1. Inefficient - requires iterating over all processes
2. Incomplete - misses inactive namespaces that aren't attached to any
   running process but are kept alive by file descriptors, bind mounts,
   or parent namespace references
3. Permission-heavy - requires access to /proc for many processes
4. Unordered - results carry no ordering or ownership information
5. Unfiltered - no per-type filtering; all namespaces must be iterated
   and checked

The list goes on. The listns() system call solves these problems by
providing direct kernel-level enumeration of namespaces. It is similar
to listmount() but obviously tailored to namespaces.

/*
 * @req: Pointer to struct ns_id_req specifying search parameters
 * @ns_ids: User buffer to receive namespace IDs
 * @nr_ns_ids: Size of ns_ids buffer (maximum number of IDs to return)
 * @flags: Reserved for future use (must be 0)
 */
ssize_t listns(const struct ns_id_req *req, u64 *ns_ids,
               size_t nr_ns_ids, unsigned int flags);

Returns:
- On success: Number of namespace IDs written to ns_ids
- On error: Negative error code

/*
 * @size: Structure size
 * @ns_id: Starting point for iteration; use 0 for first call, then
 *         use the last returned ID for subsequent calls to paginate
 * @ns_type: Bitmask of namespace types to include (from enum ns_type):
 *           0: Return all namespace types
 *           MNT_NS: Mount namespaces
 *           NET_NS: Network namespaces
 *           USER_NS: User namespaces
 *           etc. Can be OR'd together
 * @user_ns_id: Filter results to namespaces owned by this user namespace:
 *              0: Return all namespaces (subject to permission checks)
 *              LISTNS_CURRENT_USER: Namespaces owned by caller's user namespace
 *              Other value: Namespaces owned by the specified user namespace ID
 */
struct ns_id_req {
        __u32 size;         /* sizeof(struct ns_id_req) */
        __u32 spare;        /* Reserved, must be 0 */
        __u64 ns_id;        /* Last seen namespace ID (for pagination) */
        __u32 ns_type;      /* Filter by namespace type(s) */
        __u32 spare2;       /* Reserved, must be 0 */
        __u64 user_ns_id;   /* Filter by owning user namespace */
};

Example 1: List all namespaces

void list_all_namespaces(void)
{
	struct ns_id_req req = {
		.size = sizeof(req),
		.ns_id = 0,      /* Start from beginning */
		.ns_type = 0,    /* All types */
		.user_ns_id = 0, /* All user namespaces */
	};
	uint64_t ids[100];
	ssize_t ret;

	printf("All namespaces in the system:\n");
	do {
		ret = listns(&req, ids, 100, 0);
		if (ret < 0) {
			perror("listns");
			break;
		}

		for (ssize_t i = 0; i < ret; i++)
			printf("  Namespace ID: %llu\n", (unsigned long long)ids[i]);

		/* Continue from last seen ID */
		if (ret > 0)
			req.ns_id = ids[ret - 1];
	} while (ret == 100); /* Buffer was full, more may exist */
}

Example 2: List network namespaces only

void list_network_namespaces(void)
{
	struct ns_id_req req = {
		.size = sizeof(req),
		.ns_id = 0,
		.ns_type = NET_NS, /* Only network namespaces */
		.user_ns_id = 0,
	};
	uint64_t ids[100];
	ssize_t ret;

	ret = listns(&req, ids, 100, 0);
	if (ret < 0) {
		perror("listns");
		return;
	}

	printf("Network namespaces: %zd found\n", ret);
	for (ssize_t i = 0; i < ret; i++)
		printf("  netns ID: %llu\n", (unsigned long long)ids[i]);
}

Example 3: List namespaces owned by current user namespace

void list_owned_namespaces(void)
{
	struct ns_id_req req = {
		.size = sizeof(req),
		.ns_id = 0,
		.ns_type = 0,                      /* All types */
		.user_ns_id = LISTNS_CURRENT_USER, /* Current userns */
	};
	uint64_t ids[100];
	ssize_t ret;

	ret = listns(&req, ids, 100, 0);
	if (ret < 0) {
		perror("listns");
		return;
	}

	printf("Namespaces owned by my user namespace: %zd\n", ret);
	for (ssize_t i = 0; i < ret; i++)
		printf("  ns ID: %llu\n", (unsigned long long)ids[i]);
}

Example 4: List multiple namespace types

void list_network_and_mount_namespaces(void)
{
	struct ns_id_req req = {
		.size = sizeof(req),
		.ns_id = 0,
		.ns_type = NET_NS | MNT_NS, /* Network and mount */
		.user_ns_id = 0,
	};
	uint64_t ids[100];
	ssize_t ret;

	ret = listns(&req, ids, 100, 0);
	if (ret < 0) {
		perror("listns");
		return;
	}

	printf("Network and mount namespaces: %zd found\n", ret);
}

Example 5: Pagination through large namespace sets

void list_all_with_pagination(void)
{
	struct ns_id_req req = {
		.size = sizeof(req),
		.ns_id = 0,
		.ns_type = 0,
		.user_ns_id = 0,
	};
	uint64_t ids[50];
	size_t total = 0;
	ssize_t ret;

	printf("Enumerating all namespaces with pagination:\n");

	while (1) {
		ret = listns(&req, ids, 50, 0);
		if (ret < 0) {
			perror("listns");
			break;
		}
		if (ret == 0)
			break; /* No more namespaces */

		total += ret;
		printf("  Batch: %zd namespaces\n", ret);

		/* Last ID in this batch becomes start of next batch */
		req.ns_id = ids[ret - 1];

		if (ret < 50)
			break; /* Partial batch = end of results */
	}

	printf("Total: %zu namespaces\n", total);
}

listns() respects namespace isolation and capabilities:

(1) Global listing (user_ns_id = 0):
    - Requires CAP_SYS_ADMIN in the namespace's owning user namespace
    - OR the namespace must be in the caller's namespace context (e.g.,
      a namespace the caller is currently using)
    - User namespaces additionally allow listing if the caller has
      CAP_SYS_ADMIN in that user namespace itself
(2) Owner-filtered listing (user_ns_id != 0):
    - Requires CAP_SYS_ADMIN in the specified owner user namespace
    - OR the namespace must be in the caller's namespace context
    - This allows unprivileged processes to enumerate namespaces they own
(3) Visibility:
    - Only "active" namespaces are listed
    - A namespace is active if it has a non-zero __ns_ref_active count
    - This includes namespaces used by running processes, held by open
      file descriptors, or kept active by bind mounts
    - Inactive namespaces (kept alive only by internal kernel
      references) are not visible via listns()

* patches from https://patch.msgid.link/20251029-work-namespace-nstree-listns-v4-0-2e6f823ebdc0@kernel.org: (74 commits)
  selftests/namespace: test listns() pagination
  selftests/namespace: add stress test
  selftests/namespace: commit_creds() active reference tests
  selftests/namespace: third threaded active reference count test
  selftests/namespace: second threaded active reference count test
  selftests/namespace: first threaded active reference count test
  selftests/namespaces: twelfth inactive namespace resurrection test
  selftests/namespaces: eleventh inactive namespace resurrection test
  selftests/namespaces: tenth inactive namespace resurrection test
  selftests/namespaces: ninth inactive namespace resurrection test
  selftests/namespaces: eighth inactive namespace resurrection test
  selftests/namespaces: seventh inactive namespace resurrection test
  selftests/namespaces: sixth inactive namespace resurrection test
  selftests/namespaces: fifth inactive namespace resurrection test
  selftests/namespaces: fourth inactive namespace resurrection test
  selftests/namespaces: third inactive namespace resurrection test
  selftests/namespaces: second inactive namespace resurrection test
  selftests/namespaces: first inactive namespace resurrection test
  selftests/namespaces: seventh listns() permission test
  selftests/namespaces: sixth listns() permission test
  ...

Link: https://patch.msgid.link/20251029-work-namespace-nstree-listns-v4-0-2e6f823ebdc0@kernel.org
Signed-off-by: Christian Brauner <brauner@kernel.org>
This commit is contained in:
commit 8ebfb9896c (Christian Brauner, 2025-10-30 13:04:20 +01:00)
GPG Key ID: 91C61BC06578DCA2
56 changed files with 8859 additions and 106 deletions

@@ -509,3 +509,4 @@
577 common open_tree_attr sys_open_tree_attr
578 common file_getattr sys_file_getattr
579 common file_setattr sys_file_setattr
580 common listns sys_listns

@@ -484,3 +484,4 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns

@@ -481,3 +481,4 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns

@@ -469,3 +469,4 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns

@@ -475,3 +475,4 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns

@@ -408,3 +408,4 @@
467 n32 open_tree_attr sys_open_tree_attr
468 n32 file_getattr sys_file_getattr
469 n32 file_setattr sys_file_setattr
470 n32 listns sys_listns

@@ -384,3 +384,4 @@
467 n64 open_tree_attr sys_open_tree_attr
468 n64 file_getattr sys_file_getattr
469 n64 file_setattr sys_file_setattr
470 n64 listns sys_listns

@@ -457,3 +457,4 @@
467 o32 open_tree_attr sys_open_tree_attr
468 o32 file_getattr sys_file_getattr
469 o32 file_setattr sys_file_setattr
470 o32 listns sys_listns

@@ -468,3 +468,4 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns

@@ -560,3 +560,4 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns

@@ -472,3 +472,4 @@
467 common open_tree_attr sys_open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr sys_file_setattr
470 common listns sys_listns sys_listns

@@ -473,3 +473,4 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns

@@ -515,3 +515,4 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns

@@ -475,3 +475,4 @@
467 i386 open_tree_attr sys_open_tree_attr
468 i386 file_getattr sys_file_getattr
469 i386 file_setattr sys_file_setattr
470 i386 listns sys_listns

@@ -394,6 +394,7 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns
#
# Due to a historical design error, certain syscalls are numbered differently

@@ -440,3 +440,4 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns

@@ -680,6 +680,7 @@ static int pseudo_fs_fill_super(struct super_block *s, struct fs_context *fc)
s->s_export_op = ctx->eops;
s->s_xattr = ctx->xattr;
s->s_time_gran = 1;
s->s_d_flags |= ctx->s_d_flags;
root = new_inode(s);
if (!root)
return -ENOMEM;

@@ -4094,7 +4094,7 @@ static struct mnt_namespace *alloc_mnt_ns(struct user_namespace *user_ns, bool a
return ERR_PTR(ret);
}
if (!anon)
ns_tree_gen_id(&new_ns->ns);
ns_tree_gen_id(new_ns);
refcount_set(&new_ns->passive, 1);
new_ns->mounts = RB_ROOT;
init_waitqueue_head(&new_ns->poll);
@@ -5985,11 +5985,8 @@ SYSCALL_DEFINE4(listmount, const struct mnt_id_req __user *, req,
}
struct mnt_namespace init_mnt_ns = {
.ns.inum = ns_init_inum(&init_mnt_ns),
.ns.ops = &mntns_operations,
.ns = NS_COMMON_INIT(init_mnt_ns, 1),
.user_ns = &init_user_ns,
.ns.__ns_ref = REFCOUNT_INIT(1),
.ns.ns_type = ns_common_type(&init_mnt_ns),
.passive = REFCOUNT_INIT(1),
.mounts = RB_ROOT,
.poll = __WAIT_QUEUE_HEAD_INITIALIZER(init_mnt_ns.poll),

fs/nsfs.c

@@ -58,6 +58,8 @@ const struct dentry_operations ns_dentry_operations = {
static void nsfs_evict(struct inode *inode)
{
struct ns_common *ns = inode->i_private;
__ns_ref_active_put(ns);
clear_inode(inode);
ns->ops->put(ns);
}
@@ -408,6 +410,7 @@ static const struct super_operations nsfs_ops = {
.statfs = simple_statfs,
.evict_inode = nsfs_evict,
.show_path = nsfs_show_path,
.drop_inode = inode_just_drop,
};
static int nsfs_init_inode(struct inode *inode, void *data)
@@ -418,6 +421,16 @@ static int nsfs_init_inode(struct inode *inode, void *data)
inode->i_mode |= S_IRUGO;
inode->i_fop = &ns_file_operations;
inode->i_ino = ns->inum;
/*
* Bring the namespace subtree back to life if we have to. This
* can happen when e.g., all processes using a network namespace
* and all namespace files or namespace file bind-mounts have
* died but there are still sockets pinning it. The SIOCGSKNS
* ioctl on such a socket will resurrect the relevant namespace
* subtree.
*/
__ns_ref_active_resurrect(ns);
return 0;
}
@@ -458,6 +471,45 @@ static int nsfs_encode_fh(struct inode *inode, u32 *fh, int *max_len,
return FILEID_NSFS;
}
bool is_current_namespace(struct ns_common *ns)
{
switch (ns->ns_type) {
#ifdef CONFIG_CGROUPS
case CLONE_NEWCGROUP:
return current_in_namespace(to_cg_ns(ns));
#endif
#ifdef CONFIG_IPC_NS
case CLONE_NEWIPC:
return current_in_namespace(to_ipc_ns(ns));
#endif
case CLONE_NEWNS:
return current_in_namespace(to_mnt_ns(ns));
#ifdef CONFIG_NET_NS
case CLONE_NEWNET:
return current_in_namespace(to_net_ns(ns));
#endif
#ifdef CONFIG_PID_NS
case CLONE_NEWPID:
return current_in_namespace(to_pid_ns(ns));
#endif
#ifdef CONFIG_TIME_NS
case CLONE_NEWTIME:
return current_in_namespace(to_time_ns(ns));
#endif
#ifdef CONFIG_USER_NS
case CLONE_NEWUSER:
return current_in_namespace(to_user_ns(ns));
#endif
#ifdef CONFIG_UTS_NS
case CLONE_NEWUTS:
return current_in_namespace(to_uts_ns(ns));
#endif
default:
VFS_WARN_ON_ONCE(true);
return false;
}
}
static struct dentry *nsfs_fh_to_dentry(struct super_block *sb, struct fid *fh,
int fh_len, int fh_type)
{
@@ -483,18 +535,35 @@ static struct dentry *nsfs_fh_to_dentry(struct super_block *sb, struct fid *fh,
return NULL;
}
if (!fid->ns_id)
return NULL;
/* Either both are set or both are unset. */
if (!fid->ns_inum != !fid->ns_type)
return NULL;
scoped_guard(rcu) {
ns = ns_tree_lookup_rcu(fid->ns_id, fid->ns_type);
if (!ns)
return NULL;
VFS_WARN_ON_ONCE(ns->ns_id != fid->ns_id);
VFS_WARN_ON_ONCE(ns->ns_type != fid->ns_type);
if (ns->inum != fid->ns_inum)
if (fid->ns_inum && (fid->ns_inum != ns->inum))
return NULL;
if (fid->ns_type && (fid->ns_type != ns->ns_type))
return NULL;
if (!__ns_ref_get(ns))
/*
* This is racy because we're not actually taking an
* active reference. IOW, it could happen that the
* namespace becomes inactive after this check.
* We don't care because nsfs_init_inode() will just
* resurrect the relevant namespace tree for us. If it
* has been active here we just allow its resurrection.
* We could try to take an active reference here and
* then drop it again. But really, why bother.
*/
if (!ns_get_unless_inactive(ns))
return NULL;
}
@@ -590,6 +659,8 @@ static int nsfs_init_fs_context(struct fs_context *fc)
struct pseudo_fs_context *ctx = init_pseudo(fc, NSFS_MAGIC);
if (!ctx)
return -ENOMEM;
fc->s_iflags |= SB_I_NOEXEC | SB_I_NODEV;
ctx->s_d_flags |= DCACHE_DONTCACHE;
ctx->ops = &nsfs_ops;
ctx->eops = &nsfs_export_operations;
ctx->dops = &ns_dentry_operations;
@@ -612,3 +683,27 @@ void __init nsfs_init(void)
nsfs_root_path.mnt = nsfs_mnt;
nsfs_root_path.dentry = nsfs_mnt->mnt_root;
}
void nsproxy_ns_active_get(struct nsproxy *ns)
{
ns_ref_active_get(ns->mnt_ns);
ns_ref_active_get(ns->uts_ns);
ns_ref_active_get(ns->ipc_ns);
ns_ref_active_get(ns->pid_ns_for_children);
ns_ref_active_get(ns->cgroup_ns);
ns_ref_active_get(ns->net_ns);
ns_ref_active_get(ns->time_ns);
ns_ref_active_get(ns->time_ns_for_children);
}
void nsproxy_ns_active_put(struct nsproxy *ns)
{
ns_ref_active_put(ns->mnt_ns);
ns_ref_active_put(ns->uts_ns);
ns_ref_active_put(ns->ipc_ns);
ns_ref_active_put(ns->pid_ns_for_children);
ns_ref_active_put(ns->cgroup_ns);
ns_ref_active_put(ns->net_ns);
ns_ref_active_put(ns->time_ns);
ns_ref_active_put(ns->time_ns_for_children);
}

@@ -1022,6 +1022,7 @@ static int pidfs_init_fs_context(struct fs_context *fc)
fc->s_iflags |= SB_I_NOEXEC;
fc->s_iflags |= SB_I_NODEV;
ctx->s_d_flags |= DCACHE_DONTCACHE;
ctx->ops = &pidfs_sops;
ctx->eops = &pidfs_export_operations;
ctx->dops = &pidfs_dentry_operations;

@@ -4,7 +4,9 @@
#include <linux/refcount.h>
#include <linux/rbtree.h>
#include <linux/vfsdebug.h>
#include <uapi/linux/sched.h>
#include <uapi/linux/nsfs.h>
struct proc_ns_operations;
@@ -37,6 +39,67 @@ extern const struct proc_ns_operations cgroupns_operations;
extern const struct proc_ns_operations timens_operations;
extern const struct proc_ns_operations timens_for_children_operations;
/*
* Namespace lifetimes are managed via a two-tier reference counting model:
*
* (1) __ns_ref (refcount_t): Main reference count tracking memory
* lifetime. Controls when the namespace structure itself is freed.
* It also pins the namespace on the namespace trees whereas (2)
* only regulates their visibility to userspace.
*
* (2) __ns_ref_active (atomic_t): Reference count tracking active users.
* Controls visibility of the namespace in the namespace trees.
* Any live task that uses the namespace (via nsproxy or cred) holds
* an active reference. Any open file descriptor or bind-mount of
* the namespace holds an active reference. Once all tasks have
* exited their namespaces and all file descriptors and
* bind-mounts have been released, the active reference count drops
* to zero and the namespace becomes inactive. IOW, the namespace
* cannot be listed or opened via file handles anymore.
*
* Note that it is valid to transition from active to inactive and
* back from inactive to active e.g., when resurrecting an inactive
* namespace tree via the SIOCGSKNS ioctl().
*
* Relationship and lifecycle states:
*
* - Active (__ns_ref_active > 0):
* Namespace is actively used and visible to userspace. The namespace
* can be reopened via /proc/<pid>/ns/<ns_type>, via namespace file
* handles, or discovered via listns().
*
* - Inactive (__ns_ref_active == 0, __ns_ref > 0):
* No tasks are actively using the namespace and it isn't pinned by
* any bind-mounts or open file descriptors anymore. But the namespace
* is still kept alive by internal references. For example, the user
* namespace could be pinned by an open file through file->f_cred
* references when one of the now defunct tasks had opened a file and
* handed the file descriptor off to another process via a UNIX
* socket. Such references keep the namespace structure alive through
* __ns_ref but will not hold an active reference.
*
* - Destroyed (__ns_ref == 0):
* No references remain. The namespace is removed from the tree and freed.
*
* State transitions:
*
* Active -> Inactive:
* When the last task using the namespace exits it drops its active
* references to all namespaces. However, user and pid namespaces
* remain accessible until the task has been reaped.
*
* Inactive -> Active:
* An inactive namespace tree might be resurrected due to e.g., the
* SIOCGSKNS ioctl() on a socket.
*
* Inactive -> Destroyed:
* When __ns_ref drops to zero the namespace is removed from the
* namespaces trees and the memory is freed (after RCU grace period).
*
* Initial namespaces:
* Boot-time namespaces (init_net, init_pid_ns, etc.) start with
* __ns_ref_active = 1 and remain active forever.
*/
struct ns_common {
u32 ns_type;
struct dentry *stashed;
@@ -46,15 +109,37 @@ struct ns_common {
union {
struct {
u64 ns_id;
struct /* global namespace rbtree and list */ {
struct rb_node ns_unified_tree_node;
struct list_head ns_unified_list_node;
};
struct /* per type rbtree and list */ {
struct rb_node ns_tree_node;
struct list_head ns_list_node;
};
struct /* namespace ownership rbtree and list */ {
struct rb_root ns_owner_tree; /* rbtree of namespaces owned by this namespace */
struct list_head ns_owner; /* list of namespaces owned by this namespace */
struct rb_node ns_owner_tree_node; /* node in the owner namespace's rbtree */
struct list_head ns_owner_entry; /* node in the owner namespace's ns_owned list */
};
atomic_t __ns_ref_active; /* do not use directly */
};
struct rcu_head ns_rcu;
};
};
bool is_current_namespace(struct ns_common *ns);
int __ns_common_init(struct ns_common *ns, u32 ns_type, const struct proc_ns_operations *ops, int inum);
void __ns_common_free(struct ns_common *ns);
struct ns_common *__must_check ns_owner(struct ns_common *ns);
static __always_inline bool is_initial_namespace(struct ns_common *ns)
{
VFS_WARN_ON_ONCE(ns->inum == 0);
return unlikely(in_range(ns->inum, MNT_NS_INIT_INO,
IPC_NS_INIT_INO - MNT_NS_INIT_INO + 1));
}
#define to_ns_common(__ns) \
_Generic((__ns), \
@@ -97,6 +182,17 @@ void __ns_common_free(struct ns_common *ns);
struct user_namespace *: &init_user_ns, \
struct uts_namespace *: &init_uts_ns)
#define ns_init_id(__ns) \
_Generic((__ns), \
struct cgroup_namespace *: CGROUP_NS_INIT_ID, \
struct ipc_namespace *: IPC_NS_INIT_ID, \
struct mnt_namespace *: MNT_NS_INIT_ID, \
struct net *: NET_NS_INIT_ID, \
struct pid_namespace *: PID_NS_INIT_ID, \
struct time_namespace *: TIME_NS_INIT_ID, \
struct user_namespace *: USER_NS_INIT_ID, \
struct uts_namespace *: UTS_NS_INIT_ID)
#define to_ns_operations(__ns) \
_Generic((__ns), \
struct cgroup_namespace *: (IS_ENABLED(CONFIG_CGROUPS) ? &cgroupns_operations : NULL), \
@@ -119,6 +215,21 @@ void __ns_common_free(struct ns_common *ns);
struct user_namespace *: CLONE_NEWUSER, \
struct uts_namespace *: CLONE_NEWUTS)
#define NS_COMMON_INIT(nsname, refs) \
{ \
.ns_type = ns_common_type(&nsname), \
.ns_id = ns_init_id(&nsname), \
.inum = ns_init_inum(&nsname), \
.ops = to_ns_operations(&nsname), \
.stashed = NULL, \
.__ns_ref = REFCOUNT_INIT(refs), \
.__ns_ref_active = ATOMIC_INIT(1), \
.ns_list_node = LIST_HEAD_INIT(nsname.ns.ns_list_node), \
.ns_owner_entry = LIST_HEAD_INIT(nsname.ns.ns_owner_entry), \
.ns_owner = LIST_HEAD_INIT(nsname.ns.ns_owner), \
.ns_unified_list_node = LIST_HEAD_INIT(nsname.ns.ns_unified_list_node), \
}
#define ns_common_init(__ns) \
__ns_common_init(to_ns_common(__ns), \
ns_common_type(__ns), \
@@ -133,21 +244,91 @@ void __ns_common_free(struct ns_common *ns);
#define ns_common_free(__ns) __ns_common_free(to_ns_common((__ns)))
static __always_inline __must_check int __ns_ref_active_read(const struct ns_common *ns)
{
return atomic_read(&ns->__ns_ref_active);
}
static __always_inline __must_check bool __ns_ref_put(struct ns_common *ns)
{
return refcount_dec_and_test(&ns->__ns_ref);
if (refcount_dec_and_test(&ns->__ns_ref)) {
VFS_WARN_ON_ONCE(__ns_ref_active_read(ns));
return true;
}
return false;
}
static __always_inline __must_check bool __ns_ref_get(struct ns_common *ns)
{
return refcount_inc_not_zero(&ns->__ns_ref);
if (refcount_inc_not_zero(&ns->__ns_ref))
return true;
VFS_WARN_ON_ONCE(__ns_ref_active_read(ns));
return false;
}
#define ns_ref_read(__ns) refcount_read(&to_ns_common((__ns))->__ns_ref)
static __always_inline __must_check int __ns_ref_read(const struct ns_common *ns)
{
return refcount_read(&ns->__ns_ref);
}
#define ns_ref_read(__ns) __ns_ref_read(to_ns_common((__ns)))
#define ns_ref_inc(__ns) refcount_inc(&to_ns_common((__ns))->__ns_ref)
#define ns_ref_get(__ns) __ns_ref_get(to_ns_common((__ns)))
#define ns_ref_put(__ns) __ns_ref_put(to_ns_common((__ns)))
#define ns_ref_put_and_lock(__ns, __lock) \
refcount_dec_and_lock(&to_ns_common((__ns))->__ns_ref, (__lock))
#define ns_ref_active_read(__ns) \
((__ns) ? __ns_ref_active_read(to_ns_common(__ns)) : 0)
void __ns_ref_active_get_owner(struct ns_common *ns);
static __always_inline void __ns_ref_active_get(struct ns_common *ns)
{
WARN_ON_ONCE(atomic_add_negative(1, &ns->__ns_ref_active));
VFS_WARN_ON_ONCE(is_initial_namespace(ns) && __ns_ref_active_read(ns) <= 0);
}
#define ns_ref_active_get(__ns) \
do { if (__ns) __ns_ref_active_get(to_ns_common(__ns)); } while (0)
static __always_inline bool __ns_ref_active_get_not_zero(struct ns_common *ns)
{
if (atomic_inc_not_zero(&ns->__ns_ref_active)) {
VFS_WARN_ON_ONCE(!__ns_ref_read(ns));
return true;
}
return false;
}
#define ns_ref_active_get_owner(__ns) \
do { if (__ns) __ns_ref_active_get_owner(to_ns_common(__ns)); } while (0)
void __ns_ref_active_put_owner(struct ns_common *ns);
static __always_inline void __ns_ref_active_put(struct ns_common *ns)
{
if (atomic_dec_and_test(&ns->__ns_ref_active)) {
VFS_WARN_ON_ONCE(is_initial_namespace(ns));
VFS_WARN_ON_ONCE(!__ns_ref_read(ns));
__ns_ref_active_put_owner(ns);
}
}
#define ns_ref_active_put(__ns) \
do { if (__ns) __ns_ref_active_put(to_ns_common(__ns)); } while (0)
static __always_inline struct ns_common *__must_check ns_get_unless_inactive(struct ns_common *ns)
{
VFS_WARN_ON_ONCE(__ns_ref_active_read(ns) && !__ns_ref_read(ns));
if (!__ns_ref_active_read(ns))
return NULL;
if (!__ns_ref_get(ns))
return NULL;
return ns;
}
void __ns_ref_active_resurrect(struct ns_common *ns);
#define ns_ref_active_resurrect(__ns) \
do { if (__ns) __ns_ref_active_resurrect(to_ns_common(__ns)); } while (0)
#endif

@@ -37,4 +37,7 @@ void nsfs_init(void);
#define current_in_namespace(__ns) (__current_namespace_from_type(__ns) == __ns)
void nsproxy_ns_active_get(struct nsproxy *ns);
void nsproxy_ns_active_put(struct nsproxy *ns);
#endif /* _LINUX_NSFS_H */

@@ -93,7 +93,10 @@ static inline struct cred *nsset_cred(struct nsset *set)
*/
int copy_namespaces(u64 flags, struct task_struct *tsk);
void exit_task_namespaces(struct task_struct *tsk);
void switch_cred_namespaces(const struct cred *old, const struct cred *new);
void exit_nsproxy_namespaces(struct task_struct *tsk);
void get_cred_namespaces(struct task_struct *tsk);
void exit_cred_namespaces(struct task_struct *tsk);
void switch_task_namespaces(struct task_struct *tsk, struct nsproxy *new);
int exec_task_namespaces(void);
void free_nsproxy(struct nsproxy *ns);

@@ -1,4 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2025 Christian Brauner <brauner@kernel.org> */
#ifndef _LINUX_NSTREE_H
#define _LINUX_NSTREE_H
@@ -8,6 +9,7 @@
#include <linux/seqlock.h>
#include <linux/rculist.h>
#include <linux/cookie.h>
#include <uapi/linux/nsfs.h>
extern struct ns_tree cgroup_ns_tree;
extern struct ns_tree ipc_ns_tree;
@@ -29,7 +31,11 @@ extern struct ns_tree uts_ns_tree;
struct user_namespace *: &(user_ns_tree), \
struct uts_namespace *: &(uts_ns_tree))
u64 ns_tree_gen_id(struct ns_common *ns);
#define ns_tree_gen_id(__ns) \
__ns_tree_gen_id(to_ns_common(__ns), \
(((__ns) == ns_init_ns(__ns)) ? ns_init_id(__ns) : 0))
u64 __ns_tree_gen_id(struct ns_common *ns, u64 id);
void __ns_tree_add_raw(struct ns_common *ns, struct ns_tree *ns_tree);
void __ns_tree_remove(struct ns_common *ns, struct ns_tree *ns_tree);
struct ns_common *ns_tree_lookup_rcu(u64 ns_id, int ns_type);
@@ -37,9 +43,9 @@ struct ns_common *__ns_tree_adjoined_rcu(struct ns_common *ns,
struct ns_tree *ns_tree,
bool previous);
static inline void __ns_tree_add(struct ns_common *ns, struct ns_tree *ns_tree)
static inline void __ns_tree_add(struct ns_common *ns, struct ns_tree *ns_tree, u64 id)
{
ns_tree_gen_id(ns);
__ns_tree_gen_id(ns, id);
__ns_tree_add_raw(ns, ns_tree);
}
@@ -59,7 +65,9 @@ static inline void __ns_tree_add(struct ns_common *ns, struct ns_tree *ns_tree)
* This function assigns a new id to the namespace and adds it to the
* appropriate namespace tree and list.
*/
#define ns_tree_add(__ns) __ns_tree_add(to_ns_common(__ns), to_ns_tree(__ns))
#define ns_tree_add(__ns) \
__ns_tree_add(to_ns_common(__ns), to_ns_tree(__ns), \
(((__ns) == ns_init_ns(__ns)) ? ns_init_id(__ns) : 0))
/**
* ns_tree_remove - Remove a namespace from a namespace tree

View File

@@ -9,6 +9,7 @@ struct pseudo_fs_context {
const struct xattr_handler * const *xattr;
const struct dentry_operations *dops;
unsigned long magic;
unsigned int s_d_flags;
};
struct pseudo_fs_context *init_pseudo(struct fs_context *fc,

View File

@@ -77,6 +77,7 @@ struct cachestat_range;
struct cachestat;
struct statmount;
struct mnt_id_req;
struct ns_id_req;
struct xattr_args;
struct file_attr;
@@ -437,6 +438,9 @@ asmlinkage long sys_statmount(const struct mnt_id_req __user *req,
asmlinkage long sys_listmount(const struct mnt_id_req __user *req,
u64 __user *mnt_ids, size_t nr_mnt_ids,
unsigned int flags);
asmlinkage long sys_listns(const struct ns_id_req __user *req,
u64 __user *ns_ids, size_t nr_ns_ids,
unsigned int flags);
asmlinkage long sys_truncate(const char __user *path, long length);
asmlinkage long sys_ftruncate(unsigned int fd, off_t length);
#if BITS_PER_LONG == 32

View File

@@ -166,13 +166,13 @@ static inline void set_userns_rlimit_max(struct user_namespace *ns,
ns->rlimit_max[type] = max <= LONG_MAX ? max : LONG_MAX;
}
#ifdef CONFIG_USER_NS
static inline struct user_namespace *to_user_ns(struct ns_common *ns)
{
return container_of(ns, struct user_namespace, ns);
}
#ifdef CONFIG_USER_NS
static inline struct user_namespace *get_user_ns(struct user_namespace *ns)
{
if (ns)

View File

@@ -857,9 +857,11 @@ __SYSCALL(__NR_open_tree_attr, sys_open_tree_attr)
__SYSCALL(__NR_file_getattr, sys_file_getattr)
#define __NR_file_setattr 469
__SYSCALL(__NR_file_setattr, sys_file_setattr)
#define __NR_listns 470
__SYSCALL(__NR_listns, sys_listns)
#undef __NR_syscalls
#define __NR_syscalls 470
#define __NR_syscalls 471
/*
* 32 bit systems traditionally used different

View File

@@ -67,4 +67,62 @@ struct nsfs_file_handle {
#define NSFS_FILE_HANDLE_SIZE_VER0 16 /* sizeof first published struct */
#define NSFS_FILE_HANDLE_SIZE_LATEST sizeof(struct nsfs_file_handle) /* sizeof latest published struct */
enum init_ns_id {
IPC_NS_INIT_ID = 1ULL,
UTS_NS_INIT_ID = 2ULL,
USER_NS_INIT_ID = 3ULL,
PID_NS_INIT_ID = 4ULL,
CGROUP_NS_INIT_ID = 5ULL,
TIME_NS_INIT_ID = 6ULL,
NET_NS_INIT_ID = 7ULL,
MNT_NS_INIT_ID = 8ULL,
#ifdef __KERNEL__
NS_LAST_INIT_ID = MNT_NS_INIT_ID,
#endif
};
enum ns_type {
TIME_NS = (1ULL << 7), /* CLONE_NEWTIME */
MNT_NS = (1ULL << 17), /* CLONE_NEWNS */
CGROUP_NS = (1ULL << 25), /* CLONE_NEWCGROUP */
UTS_NS = (1ULL << 26), /* CLONE_NEWUTS */
IPC_NS = (1ULL << 27), /* CLONE_NEWIPC */
USER_NS = (1ULL << 28), /* CLONE_NEWUSER */
PID_NS = (1ULL << 29), /* CLONE_NEWPID */
NET_NS = (1ULL << 30), /* CLONE_NEWNET */
};
/**
* struct ns_id_req - namespace ID request structure
* @size: size of this structure
* @spare: reserved for future use, must be zero
* @ns_id: last namespace id (or zero)
* @ns_type: filter mask of namespace types to list (zero means all types)
* @spare2: reserved for future use, must be zero
* @user_ns_id: owning user namespace ID
*
* Structure for passing namespace ID and miscellaneous parameters to
* statns(2) and listns(2).
*
* For listns(2) @ns_id holds the last listed namespace id (or zero) so
* that iteration can be resumed.
*/
struct ns_id_req {
__u32 size;
__u32 spare;
__u64 ns_id;
struct /* listns */ {
__u32 ns_type;
__u32 spare2;
__u64 user_ns_id;
};
};
/*
* Special @user_ns_id value that can be passed to listns()
*/
#define LISTNS_CURRENT_USER 0xffffffffffffffff /* Caller's userns */
/* List of all ns_id_req versions. */
#define NS_ID_REQ_SIZE_VER0 32 /* sizeof first published struct */
#endif /* __LINUX_NSFS_H */

View File

@@ -8,8 +8,7 @@
#include <linux/utsname.h>
struct uts_namespace init_uts_ns = {
.ns.ns_type = ns_common_type(&init_uts_ns),
.ns.__ns_ref = REFCOUNT_INIT(2),
.ns = NS_COMMON_INIT(init_uts_ns, 2),
.name = {
.sysname = UTS_SYSNAME,
.nodename = UTS_NODENAME,
@@ -19,10 +18,6 @@ struct uts_namespace init_uts_ns = {
.domainname = UTS_DOMAINNAME,
},
.user_ns = &init_user_ns,
.ns.inum = ns_init_inum(&init_uts_ns),
#ifdef CONFIG_UTS_NS
.ns.ops = &utsns_operations,
#endif
};
/* FIXED STRINGS! Don't touch! */

View File

@@ -27,13 +27,8 @@ DEFINE_SPINLOCK(mq_lock);
* and not CONFIG_IPC_NS.
*/
struct ipc_namespace init_ipc_ns = {
.ns.__ns_ref = REFCOUNT_INIT(1),
.ns = NS_COMMON_INIT(init_ipc_ns, 1),
.user_ns = &init_user_ns,
.ns.inum = ns_init_inum(&init_ipc_ns),
#ifdef CONFIG_IPC_NS
.ns.ops = &ipcns_operations,
#endif
.ns.ns_type = ns_common_type(&init_ipc_ns),
};
struct msg_msgseg {

View File

@@ -250,12 +250,9 @@ bool cgroup_enable_per_threadgroup_rwsem __read_mostly;
/* cgroup namespace for init task */
struct cgroup_namespace init_cgroup_ns = {
.ns.__ns_ref = REFCOUNT_INIT(2),
.ns = NS_COMMON_INIT(init_cgroup_ns, 2),
.user_ns = &init_user_ns,
.ns.ops = &cgroupns_operations,
.ns.inum = ns_init_inum(&init_cgroup_ns),
.root_cset = &init_css_set,
.ns.ns_type = ns_common_type(&init_cgroup_ns),
};
static struct file_system_type cgroup2_fs_type;
@@ -1522,9 +1519,9 @@ static struct cgroup *current_cgns_cgroup_dfl(void)
} else {
/*
* NOTE: This function may be called from bpf_cgroup_from_id()
* on a task which has already passed exit_task_namespaces() and
* nsproxy == NULL. Fall back to cgrp_dfl_root which will make all
* cgroups visible for lookups.
* on a task which has already passed exit_nsproxy_namespaces()
* and nsproxy == NULL. Fall back to cgrp_dfl_root which will
* make all cgroups visible for lookups.
*/
return &cgrp_dfl_root.cgrp;
}

View File

@@ -30,7 +30,6 @@ static struct cgroup_namespace *alloc_cgroup_ns(void)
ret = ns_common_init(new_ns);
if (ret)
return ERR_PTR(ret);
ns_tree_add(new_ns);
return no_free_ptr(new_ns);
}
@@ -86,6 +85,7 @@ struct cgroup_namespace *copy_cgroup_ns(u64 flags,
new_ns->ucounts = ucounts;
new_ns->root_cset = cset;
ns_tree_add(new_ns);
return new_ns;
}

View File

@@ -306,6 +306,7 @@ int copy_creds(struct task_struct *p, u64 clone_flags)
kdebug("share_creds(%p{%ld})",
p->cred, atomic_long_read(&p->cred->usage));
inc_rlimit_ucounts(task_ucounts(p), UCOUNT_RLIMIT_NPROC, 1);
get_cred_namespaces(p);
return 0;
}
@@ -343,6 +344,8 @@ int copy_creds(struct task_struct *p, u64 clone_flags)
p->cred = p->real_cred = get_cred(new);
inc_rlimit_ucounts(task_ucounts(p), UCOUNT_RLIMIT_NPROC, 1);
get_cred_namespaces(p);
return 0;
error_put:
@@ -435,10 +438,13 @@ int commit_creds(struct cred *new)
*/
if (new->user != old->user || new->user_ns != old->user_ns)
inc_rlimit_ucounts(new->ucounts, UCOUNT_RLIMIT_NPROC, 1);
rcu_assign_pointer(task->real_cred, new);
rcu_assign_pointer(task->cred, new);
if (new->user != old->user || new->user_ns != old->user_ns)
dec_rlimit_ucounts(old->ucounts, UCOUNT_RLIMIT_NPROC, 1);
if (new->user_ns != old->user_ns)
switch_cred_namespaces(old, new);
/* send notifications */
if (!uid_eq(new->uid, old->uid) ||

View File

@@ -291,6 +291,7 @@ void release_task(struct task_struct *p)
write_unlock_irq(&tasklist_lock);
/* @thread_pid can't go away until free_pids() below */
proc_flush_pid(thread_pid);
exit_cred_namespaces(p);
add_device_randomness(&p->se.sum_exec_runtime,
sizeof(p->se.sum_exec_runtime));
free_pids(post.pids);
@@ -962,7 +963,7 @@ void __noreturn do_exit(long code)
exit_fs(tsk);
if (group_dead)
disassociate_ctty(1);
exit_task_namespaces(tsk);
exit_nsproxy_namespaces(tsk);
exit_task_work(tsk);
exit_thread(tsk);

View File

@@ -2453,7 +2453,7 @@ __latent_entropy struct task_struct *copy_process(
if (p->io_context)
exit_io_context(p);
bad_fork_cleanup_namespaces:
exit_task_namespaces(p);
exit_nsproxy_namespaces(p);
bad_fork_cleanup_mm:
if (p->mm) {
mm_clear_owner(p->mm, p);
@@ -2487,6 +2487,7 @@ __latent_entropy struct task_struct *copy_process(
delayacct_tsk_free(p);
bad_fork_cleanup_count:
dec_rlimit_ucounts(task_ucounts(p), UCOUNT_RLIMIT_NPROC, 1);
exit_cred_namespaces(p);
exit_creds(p);
bad_fork_free:
WRITE_ONCE(p->__state, TASK_DEAD);

View File

@@ -1,7 +1,9 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2025 Christian Brauner <brauner@kernel.org> */
#include <linux/ns_common.h>
#include <linux/proc_ns.h>
#include <linux/user_namespace.h>
#include <linux/vfsdebug.h>
#ifdef CONFIG_DEBUG_VFS
@@ -52,13 +54,21 @@ static void ns_debug(struct ns_common *ns, const struct proc_ns_operations *ops)
int __ns_common_init(struct ns_common *ns, u32 ns_type, const struct proc_ns_operations *ops, int inum)
{
int ret;
refcount_set(&ns->__ns_ref, 1);
ns->stashed = NULL;
ns->ops = ops;
ns->ns_id = 0;
ns->ns_type = ns_type;
RB_CLEAR_NODE(&ns->ns_tree_node);
RB_CLEAR_NODE(&ns->ns_unified_tree_node);
RB_CLEAR_NODE(&ns->ns_owner_tree_node);
INIT_LIST_HEAD(&ns->ns_list_node);
INIT_LIST_HEAD(&ns->ns_unified_list_node);
ns->ns_owner_tree = RB_ROOT;
INIT_LIST_HEAD(&ns->ns_owner);
INIT_LIST_HEAD(&ns->ns_owner_entry);
#ifdef CONFIG_DEBUG_VFS
ns_debug(ns, ops);
@@ -68,10 +78,219 @@ int __ns_common_init(struct ns_common *ns, u32 ns_type, const struct proc_ns_ope
ns->inum = inum;
return 0;
}
return proc_alloc_inum(&ns->inum);
ret = proc_alloc_inum(&ns->inum);
if (ret)
return ret;
/*
* The active ref starts at 0. It's incremented when the namespace
* enters active use (is installed in an nsproxy) and decremented when all
* active uses are gone. Initial namespaces are always active.
*/
if (is_initial_namespace(ns))
atomic_set(&ns->__ns_ref_active, 1);
else
atomic_set(&ns->__ns_ref_active, 0);
return 0;
}
void __ns_common_free(struct ns_common *ns)
{
proc_free_inum(ns->inum);
}
struct ns_common *__must_check ns_owner(struct ns_common *ns)
{
struct user_namespace *owner;
if (unlikely(!ns->ops))
return NULL;
VFS_WARN_ON_ONCE(!ns->ops->owner);
owner = ns->ops->owner(ns);
VFS_WARN_ON_ONCE(!owner && ns != to_ns_common(&init_user_ns));
if (!owner)
return NULL;
/* Skip init_user_ns as it's always active */
if (owner == &init_user_ns)
return NULL;
return to_ns_common(owner);
}
void __ns_ref_active_get_owner(struct ns_common *ns)
{
ns = ns_owner(ns);
if (ns)
WARN_ON_ONCE(atomic_add_negative(1, &ns->__ns_ref_active));
}
/*
* The active reference count works by having each namespace that gets
* created take a single active reference on its owning user namespace.
* That single reference is only released once the child namespace's
* active count itself goes down.
*
* A regular namespace tree might look as follows:
* Legend:
* + : adding active reference
* - : dropping active reference
* x : always active (initial namespace)
*
*
* net_ns pid_ns
* \ /
* + +
* user_ns1 (2)
* |
* ipc_ns | uts_ns
* \ | /
* + + +
* user_ns2 (3)
* |
* cgroup_ns | mnt_ns
* \ | /
* x x x
* init_user_ns (1)
*
* If both net_ns and pid_ns put their last active reference on
* themselves it will cascade to user_ns1 dropping its own active
* reference and dropping one active reference on user_ns2:
*
* net_ns pid_ns
* \ /
* - -
* user_ns1 (0)
* |
* ipc_ns | uts_ns
* \ | /
* + - +
* user_ns2 (2)
* |
* cgroup_ns | mnt_ns
* \ | /
* x x x
* init_user_ns (1)
*
* The iteration stops once we reach a namespace that still has active
* references.
*/
void __ns_ref_active_put_owner(struct ns_common *ns)
{
for (;;) {
ns = ns_owner(ns);
if (!ns)
return;
if (!atomic_dec_and_test(&ns->__ns_ref_active))
return;
}
}
/*
* The active reference count works by having each namespace that gets
* created take a single active reference on its owning user namespace.
* That single reference is only released once the child namespace's
* active count itself goes down. This makes it possible to efficiently
* resurrect a namespace tree:
*
* A regular namespace tree might look as follows:
* Legend:
* + : adding active reference
* - : dropping active reference
* x : always active (initial namespace)
*
*
* net_ns pid_ns
* \ /
* + +
* user_ns1 (2)
* |
* ipc_ns | uts_ns
* \ | /
* + + +
* user_ns2 (3)
* |
* cgroup_ns | mnt_ns
* \ | /
* x x x
* init_user_ns (1)
*
* If both net_ns and pid_ns put their last active reference on
* themselves it will cascade to user_ns1 dropping its own active
* reference and dropping one active reference on user_ns2:
*
* net_ns pid_ns
* \ /
* - -
* user_ns1 (0)
* |
* ipc_ns | uts_ns
* \ | /
* + - +
* user_ns2 (2)
* |
* cgroup_ns | mnt_ns
* \ | /
* x x x
* init_user_ns (1)
*
* Assume the whole tree is dead but all namespaces are still active:
*
* net_ns pid_ns
* \ /
* - -
* user_ns1 (0)
* |
* ipc_ns | uts_ns
* \ | /
* - - -
* user_ns2 (0)
* |
* cgroup_ns | mnt_ns
* \ | /
* x x x
* init_user_ns (1)
*
* Now assume the net_ns gets resurrected (e.g., via the SIOCGSKNS ioctl()):
*
* net_ns pid_ns
* \ /
* + -
* user_ns1 (0)
* |
* ipc_ns | uts_ns
* \ | /
* - + -
* user_ns2 (0)
* |
* cgroup_ns | mnt_ns
* \ | /
* x x x
* init_user_ns (1)
*
* If net_ns had a zero reference count and we bumped it we also need to
* take another reference on its owning user namespace. Similarly, if
* pid_ns had a zero reference count it also needs to take another
* reference on its owning user namespace. So both net_ns and pid_ns
* will each have their own reference on the owning user namespace.
*
* If the owning user namespace user_ns1 had a zero reference count then
* it also needs to take another reference on its owning user namespace
* and so on.
*/
void __ns_ref_active_resurrect(struct ns_common *ns)
{
/* If we didn't resurrect the namespace we're done. */
if (atomic_fetch_add(1, &ns->__ns_ref_active))
return;
/*
* We did resurrect it. Walk the ownership hierarchy upwards
* until we find an owning user namespace that is already active.
*/
for (;;) {
ns = ns_owner(ns);
if (!ns)
return;
if (atomic_fetch_add(1, &ns->__ns_ref_active))
return;
}
}

View File

@@ -26,6 +26,7 @@
#include <linux/syscalls.h>
#include <linux/cgroup.h>
#include <linux/perf_event.h>
#include <linux/nstree.h>
static struct kmem_cache *nsproxy_cachep;
@@ -179,12 +180,15 @@ int copy_namespaces(u64 flags, struct task_struct *tsk)
if ((flags & CLONE_VM) == 0)
timens_on_fork(new_ns, tsk);
nsproxy_ns_active_get(new_ns);
tsk->nsproxy = new_ns;
return 0;
}
void free_nsproxy(struct nsproxy *ns)
{
nsproxy_ns_active_put(ns);
put_mnt_ns(ns->mnt_ns);
put_uts_ns(ns->uts_ns);
put_ipc_ns(ns->ipc_ns);
@@ -232,6 +236,9 @@ void switch_task_namespaces(struct task_struct *p, struct nsproxy *new)
might_sleep();
if (new)
nsproxy_ns_active_get(new);
task_lock(p);
ns = p->nsproxy;
p->nsproxy = new;
@@ -241,11 +248,27 @@ void switch_task_namespaces(struct task_struct *p, struct nsproxy *new)
put_nsproxy(ns);
}
void exit_task_namespaces(struct task_struct *p)
void exit_nsproxy_namespaces(struct task_struct *p)
{
switch_task_namespaces(p, NULL);
}
void switch_cred_namespaces(const struct cred *old, const struct cred *new)
{
ns_ref_active_get(new->user_ns);
ns_ref_active_put(old->user_ns);
}
void get_cred_namespaces(struct task_struct *tsk)
{
ns_ref_active_get(tsk->real_cred->user_ns);
}
void exit_cred_namespaces(struct task_struct *tsk)
{
ns_ref_active_put(tsk->real_cred->user_ns);
}
int exec_task_namespaces(void)
{
struct task_struct *tsk = current;

View File

@@ -1,34 +1,38 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2025 Christian Brauner <brauner@kernel.org> */
#include <linux/nstree.h>
#include <linux/proc_ns.h>
#include <linux/rculist.h>
#include <linux/vfsdebug.h>
#include <linux/syscalls.h>
#include <linux/user_namespace.h>
static __cacheline_aligned_in_smp DEFINE_SEQLOCK(ns_tree_lock);
static struct rb_root ns_unified_tree = RB_ROOT; /* protected by ns_tree_lock */
static LIST_HEAD(ns_unified_list); /* protected by ns_tree_lock */
/**
* struct ns_tree - Namespace tree
* @ns_tree: Rbtree of namespaces of a particular type
* @ns_list: Sequentially walkable list of all namespaces of this type
* @ns_tree_lock: Seqlock to protect the tree and list
* @type: type of namespaces in this tree
*/
struct ns_tree {
struct rb_root ns_tree;
struct list_head ns_list;
seqlock_t ns_tree_lock;
int type;
};
struct ns_tree mnt_ns_tree = {
.ns_tree = RB_ROOT,
.ns_list = LIST_HEAD_INIT(mnt_ns_tree.ns_list),
.ns_tree_lock = __SEQLOCK_UNLOCKED(mnt_ns_tree.ns_tree_lock),
.type = CLONE_NEWNS,
};
struct ns_tree net_ns_tree = {
.ns_tree = RB_ROOT,
.ns_list = LIST_HEAD_INIT(net_ns_tree.ns_list),
.ns_tree_lock = __SEQLOCK_UNLOCKED(net_ns_tree.ns_tree_lock),
.type = CLONE_NEWNET,
};
EXPORT_SYMBOL_GPL(net_ns_tree);
@@ -36,47 +40,39 @@ EXPORT_SYMBOL_GPL(net_ns_tree);
struct ns_tree uts_ns_tree = {
.ns_tree = RB_ROOT,
.ns_list = LIST_HEAD_INIT(uts_ns_tree.ns_list),
.ns_tree_lock = __SEQLOCK_UNLOCKED(uts_ns_tree.ns_tree_lock),
.type = CLONE_NEWUTS,
};
struct ns_tree user_ns_tree = {
.ns_tree = RB_ROOT,
.ns_list = LIST_HEAD_INIT(user_ns_tree.ns_list),
.ns_tree_lock = __SEQLOCK_UNLOCKED(user_ns_tree.ns_tree_lock),
.type = CLONE_NEWUSER,
};
struct ns_tree ipc_ns_tree = {
.ns_tree = RB_ROOT,
.ns_list = LIST_HEAD_INIT(ipc_ns_tree.ns_list),
.ns_tree_lock = __SEQLOCK_UNLOCKED(ipc_ns_tree.ns_tree_lock),
.type = CLONE_NEWIPC,
};
struct ns_tree pid_ns_tree = {
.ns_tree = RB_ROOT,
.ns_list = LIST_HEAD_INIT(pid_ns_tree.ns_list),
.ns_tree_lock = __SEQLOCK_UNLOCKED(pid_ns_tree.ns_tree_lock),
.type = CLONE_NEWPID,
};
struct ns_tree cgroup_ns_tree = {
.ns_tree = RB_ROOT,
.ns_list = LIST_HEAD_INIT(cgroup_ns_tree.ns_list),
.ns_tree_lock = __SEQLOCK_UNLOCKED(cgroup_ns_tree.ns_tree_lock),
.type = CLONE_NEWCGROUP,
};
struct ns_tree time_ns_tree = {
.ns_tree = RB_ROOT,
.ns_list = LIST_HEAD_INIT(time_ns_tree.ns_list),
.ns_tree_lock = __SEQLOCK_UNLOCKED(time_ns_tree.ns_tree_lock),
.type = CLONE_NEWTIME,
};
DEFINE_COOKIE(namespace_cookie);
static inline struct ns_common *node_to_ns(const struct rb_node *node)
{
if (!node)
@@ -84,30 +80,54 @@ static inline struct ns_common *node_to_ns(const struct rb_node *node)
return rb_entry(node, struct ns_common, ns_tree_node);
}
static inline int ns_cmp(struct rb_node *a, const struct rb_node *b)
static inline struct ns_common *node_to_ns_unified(const struct rb_node *node)
{
struct ns_common *ns_a = node_to_ns(a);
struct ns_common *ns_b = node_to_ns(b);
u64 ns_id_a = ns_a->ns_id;
u64 ns_id_b = ns_b->ns_id;
if (!node)
return NULL;
return rb_entry(node, struct ns_common, ns_unified_tree_node);
}
if (ns_id_a < ns_id_b)
static inline struct ns_common *node_to_ns_owner(const struct rb_node *node)
{
if (!node)
return NULL;
return rb_entry(node, struct ns_common, ns_owner_tree_node);
}
static int ns_id_cmp(u64 id_a, u64 id_b)
{
if (id_a < id_b)
return -1;
if (ns_id_a > ns_id_b)
if (id_a > id_b)
return 1;
return 0;
}
static int ns_cmp(struct rb_node *a, const struct rb_node *b)
{
return ns_id_cmp(node_to_ns(a)->ns_id, node_to_ns(b)->ns_id);
}
static int ns_cmp_unified(struct rb_node *a, const struct rb_node *b)
{
return ns_id_cmp(node_to_ns_unified(a)->ns_id, node_to_ns_unified(b)->ns_id);
}
static int ns_cmp_owner(struct rb_node *a, const struct rb_node *b)
{
return ns_id_cmp(node_to_ns_owner(a)->ns_id, node_to_ns_owner(b)->ns_id);
}
void __ns_tree_add_raw(struct ns_common *ns, struct ns_tree *ns_tree)
{
struct rb_node *node, *prev;
const struct proc_ns_operations *ops = ns->ops;
VFS_WARN_ON_ONCE(!ns->ns_id);
write_seqlock(&ns_tree->ns_tree_lock);
VFS_WARN_ON_ONCE(ns->ns_type != ns_tree->type);
write_seqlock(&ns_tree_lock);
node = rb_find_add_rcu(&ns->ns_tree_node, &ns_tree->ns_tree, ns_cmp);
/*
* If there's no previous entry simply add it after the
@@ -119,22 +139,83 @@ void __ns_tree_add_raw(struct ns_common *ns, struct ns_tree *ns_tree)
else
list_add_rcu(&ns->ns_list_node, &node_to_ns(prev)->ns_list_node);
write_sequnlock(&ns_tree->ns_tree_lock);
/* Add to unified tree and list */
rb_find_add_rcu(&ns->ns_unified_tree_node, &ns_unified_tree, ns_cmp_unified);
prev = rb_prev(&ns->ns_unified_tree_node);
if (!prev)
list_add_rcu(&ns->ns_unified_list_node, &ns_unified_list);
else
list_add_rcu(&ns->ns_unified_list_node, &node_to_ns_unified(prev)->ns_unified_list_node);
if (ops) {
struct user_namespace *user_ns;
VFS_WARN_ON_ONCE(!ops->owner);
user_ns = ops->owner(ns);
if (user_ns) {
struct ns_common *owner = &user_ns->ns;
VFS_WARN_ON_ONCE(owner->ns_type != CLONE_NEWUSER);
/* Insert into owner's rbtree */
rb_find_add_rcu(&ns->ns_owner_tree_node, &owner->ns_owner_tree, ns_cmp_owner);
/* Insert into owner's list in sorted order */
prev = rb_prev(&ns->ns_owner_tree_node);
if (!prev)
list_add_rcu(&ns->ns_owner_entry, &owner->ns_owner);
else
list_add_rcu(&ns->ns_owner_entry, &node_to_ns_owner(prev)->ns_owner_entry);
} else {
/* Only the initial user namespace doesn't have an owner. */
VFS_WARN_ON_ONCE(ns != to_ns_common(&init_user_ns));
}
}
write_sequnlock(&ns_tree_lock);
VFS_WARN_ON_ONCE(node);
/*
* Take an active reference on the owner namespace. This ensures
* that the owner remains visible while any of its child namespaces
* are active. For init namespaces this is a no-op as ns_owner()
* returns NULL for namespaces owned by init_user_ns.
*/
__ns_ref_active_get_owner(ns);
}
void __ns_tree_remove(struct ns_common *ns, struct ns_tree *ns_tree)
{
const struct proc_ns_operations *ops = ns->ops;
struct user_namespace *user_ns;
VFS_WARN_ON_ONCE(RB_EMPTY_NODE(&ns->ns_tree_node));
VFS_WARN_ON_ONCE(list_empty(&ns->ns_list_node));
VFS_WARN_ON_ONCE(ns->ns_type != ns_tree->type);
write_seqlock(&ns_tree->ns_tree_lock);
write_seqlock(&ns_tree_lock);
rb_erase(&ns->ns_tree_node, &ns_tree->ns_tree);
list_bidir_del_rcu(&ns->ns_list_node);
RB_CLEAR_NODE(&ns->ns_tree_node);
write_sequnlock(&ns_tree->ns_tree_lock);
list_bidir_del_rcu(&ns->ns_list_node);
rb_erase(&ns->ns_unified_tree_node, &ns_unified_tree);
RB_CLEAR_NODE(&ns->ns_unified_tree_node);
list_bidir_del_rcu(&ns->ns_unified_list_node);
/* Remove from owner's rbtree if this namespace has an owner */
if (ops) {
user_ns = ops->owner(ns);
if (user_ns) {
struct ns_common *owner = &user_ns->ns;
rb_erase(&ns->ns_owner_tree_node, &owner->ns_owner_tree);
RB_CLEAR_NODE(&ns->ns_owner_tree_node);
}
list_bidir_del_rcu(&ns->ns_owner_entry);
}
write_sequnlock(&ns_tree_lock);
}
EXPORT_SYMBOL_GPL(__ns_tree_remove);
@@ -150,6 +231,17 @@ static int ns_find(const void *key, const struct rb_node *node)
return 0;
}
static int ns_find_unified(const void *key, const struct rb_node *node)
{
const u64 ns_id = *(u64 *)key;
const struct ns_common *ns = node_to_ns_unified(node);
if (ns_id < ns->ns_id)
return -1;
if (ns_id > ns->ns_id)
return 1;
return 0;
}
static struct ns_tree *ns_tree_from_type(int ns_type)
{
@@ -175,33 +267,51 @@ static struct ns_tree *ns_tree_from_type(int ns_type)
return NULL;
}
struct ns_common *ns_tree_lookup_rcu(u64 ns_id, int ns_type)
static struct ns_common *__ns_unified_tree_lookup_rcu(u64 ns_id)
{
struct rb_node *node;
unsigned int seq;
do {
seq = read_seqbegin(&ns_tree_lock);
node = rb_find_rcu(&ns_id, &ns_unified_tree, ns_find_unified);
if (node)
break;
} while (read_seqretry(&ns_tree_lock, seq));
return node_to_ns_unified(node);
}
static struct ns_common *__ns_tree_lookup_rcu(u64 ns_id, int ns_type)
{
struct ns_tree *ns_tree;
struct rb_node *node;
unsigned int seq;
RCU_LOCKDEP_WARN(!rcu_read_lock_held(), "suspicious ns_tree_lookup_rcu() usage");
ns_tree = ns_tree_from_type(ns_type);
if (!ns_tree)
return NULL;
do {
seq = read_seqbegin(&ns_tree->ns_tree_lock);
seq = read_seqbegin(&ns_tree_lock);
node = rb_find_rcu(&ns_id, &ns_tree->ns_tree, ns_find);
if (node)
break;
} while (read_seqretry(&ns_tree->ns_tree_lock, seq));
if (!node)
return NULL;
VFS_WARN_ON_ONCE(node_to_ns(node)->ns_type != ns_type);
} while (read_seqretry(&ns_tree_lock, seq));
return node_to_ns(node);
}
struct ns_common *ns_tree_lookup_rcu(u64 ns_id, int ns_type)
{
RCU_LOCKDEP_WARN(!rcu_read_lock_held(), "suspicious ns_tree_lookup_rcu() usage");
if (ns_type)
return __ns_tree_lookup_rcu(ns_id, ns_type);
return __ns_unified_tree_lookup_rcu(ns_id);
}
/**
* ns_tree_adjoined_rcu - find the next/previous namespace in the same
* tree
@@ -233,15 +343,416 @@ struct ns_common *__ns_tree_adjoined_rcu(struct ns_common *ns,
/**
* ns_tree_gen_id - generate a new namespace id
* @ns: namespace to generate id for
* @id: if non-zero, this is the initial namespace and this is a fixed id
*
* Generates a new namespace id and assigns it to the namespace. All
* namespace types share the same id space, so ids can be compared
* directly. IOW, if two namespaces have the same id, they are the
* same namespace.
*/
u64 ns_tree_gen_id(struct ns_common *ns)
u64 __ns_tree_gen_id(struct ns_common *ns, u64 id)
{
guard(preempt)();
ns->ns_id = gen_cookie_next(&namespace_cookie);
static atomic64_t namespace_cookie = ATOMIC64_INIT(NS_LAST_INIT_ID + 1);
if (id)
ns->ns_id = id;
else
ns->ns_id = atomic64_inc_return(&namespace_cookie);
return ns->ns_id;
}
struct klistns {
u64 __user *uns_ids;
u32 nr_ns_ids;
u64 last_ns_id;
u64 user_ns_id;
u32 ns_type;
struct user_namespace *user_ns;
bool userns_capable;
struct ns_common *first_ns;
};
static void __free_klistns_free(const struct klistns *kls)
{
if (kls->user_ns_id != LISTNS_CURRENT_USER)
put_user_ns(kls->user_ns);
if (kls->first_ns && kls->first_ns->ops)
kls->first_ns->ops->put(kls->first_ns);
}
#define NS_ALL (PID_NS | USER_NS | MNT_NS | UTS_NS | IPC_NS | NET_NS | CGROUP_NS | TIME_NS)
static int copy_ns_id_req(const struct ns_id_req __user *req,
struct ns_id_req *kreq)
{
int ret;
size_t usize;
BUILD_BUG_ON(sizeof(struct ns_id_req) != NS_ID_REQ_SIZE_VER0);
ret = get_user(usize, &req->size);
if (ret)
return -EFAULT;
if (unlikely(usize > PAGE_SIZE))
return -E2BIG;
if (unlikely(usize < NS_ID_REQ_SIZE_VER0))
return -EINVAL;
memset(kreq, 0, sizeof(*kreq));
ret = copy_struct_from_user(kreq, sizeof(*kreq), req, usize);
if (ret)
return ret;
if (kreq->spare != 0)
return -EINVAL;
if (kreq->ns_type & ~NS_ALL)
return -EOPNOTSUPP;
return 0;
}
static inline int prepare_klistns(struct klistns *kls, struct ns_id_req *kreq,
u64 __user *ns_ids, size_t nr_ns_ids)
{
kls->last_ns_id = kreq->ns_id;
kls->user_ns_id = kreq->user_ns_id;
kls->nr_ns_ids = nr_ns_ids;
kls->ns_type = kreq->ns_type;
kls->uns_ids = ns_ids;
return 0;
}
/*
* Lookup a namespace owned by owner with id >= ns_id.
* Returns the namespace with the smallest id that is >= ns_id.
*/
static struct ns_common *lookup_ns_owner_at(u64 ns_id, struct ns_common *owner)
{
struct ns_common *ret = NULL;
struct rb_node *node;
VFS_WARN_ON_ONCE(owner->ns_type != CLONE_NEWUSER);
read_seqlock_excl(&ns_tree_lock);
node = owner->ns_owner_tree.rb_node;
while (node) {
struct ns_common *ns;
ns = node_to_ns_owner(node);
if (ns_id <= ns->ns_id) {
ret = ns;
if (ns_id == ns->ns_id)
break;
node = node->rb_left;
} else {
node = node->rb_right;
}
}
if (ret)
ret = ns_get_unless_inactive(ret);
read_sequnlock_excl(&ns_tree_lock);
return ret;
}
static struct ns_common *lookup_ns_id(u64 mnt_ns_id, int ns_type)
{
struct ns_common *ns;
guard(rcu)();
ns = ns_tree_lookup_rcu(mnt_ns_id, ns_type);
if (!ns)
return NULL;
if (!ns_get_unless_inactive(ns))
return NULL;
return ns;
}
static inline bool __must_check ns_requested(const struct klistns *kls,
const struct ns_common *ns)
{
return !kls->ns_type || (kls->ns_type & ns->ns_type);
}
static inline bool __must_check may_list_ns(const struct klistns *kls,
struct ns_common *ns)
{
if (kls->user_ns) {
if (kls->userns_capable)
return true;
} else {
struct ns_common *owner;
struct user_namespace *user_ns;
owner = ns_owner(ns);
if (owner)
user_ns = to_user_ns(owner);
else
user_ns = &init_user_ns;
if (ns_capable_noaudit(user_ns, CAP_SYS_ADMIN))
return true;
}
if (is_current_namespace(ns))
return true;
if (ns->ns_type != CLONE_NEWUSER)
return false;
if (ns_capable_noaudit(to_user_ns(ns), CAP_SYS_ADMIN))
return true;
return false;
}
static void __ns_put(struct ns_common *ns)
{
if (ns->ops)
ns->ops->put(ns);
}
DEFINE_FREE(ns_put, struct ns_common *, if (!IS_ERR_OR_NULL(_T)) __ns_put(_T))
static inline struct ns_common *__must_check legitimize_ns(const struct klistns *kls,
struct ns_common *candidate)
{
struct ns_common *ns __free(ns_put) = NULL;
if (!ns_requested(kls, candidate))
return NULL;
ns = ns_get_unless_inactive(candidate);
if (!ns)
return NULL;
if (!may_list_ns(kls, ns))
return NULL;
return no_free_ptr(ns);
}
static ssize_t do_listns_userns(struct klistns *kls)
{
u64 __user *ns_ids = kls->uns_ids;
size_t nr_ns_ids = kls->nr_ns_ids;
struct ns_common *ns = NULL, *first_ns = NULL;
const struct list_head *head;
ssize_t ret;
VFS_WARN_ON_ONCE(!kls->user_ns_id);
if (kls->user_ns_id == LISTNS_CURRENT_USER)
ns = to_ns_common(current_user_ns());
else if (kls->user_ns_id)
ns = lookup_ns_id(kls->user_ns_id, CLONE_NEWUSER);
if (!ns)
return -EINVAL;
kls->user_ns = to_user_ns(ns);
/*
* Use the rbtree to find the first namespace we care about and
* then use its list entry to iterate from there.
*/
if (kls->last_ns_id) {
kls->first_ns = lookup_ns_owner_at(kls->last_ns_id + 1, ns);
if (!kls->first_ns)
return -ENOENT;
first_ns = kls->first_ns;
}
ret = 0;
head = &to_ns_common(kls->user_ns)->ns_owner;
kls->userns_capable = ns_capable_noaudit(kls->user_ns, CAP_SYS_ADMIN);
rcu_read_lock();
if (!first_ns)
first_ns = list_entry_rcu(head->next, typeof(*ns), ns_owner_entry);
for (ns = first_ns; &ns->ns_owner_entry != head && nr_ns_ids;
ns = list_entry_rcu(ns->ns_owner_entry.next, typeof(*ns), ns_owner_entry)) {
struct ns_common *valid __free(ns_put);
valid = legitimize_ns(kls, ns);
if (!valid)
continue;
rcu_read_unlock();
if (put_user(valid->ns_id, ns_ids + ret))
return -EINVAL;
nr_ns_ids--;
ret++;
rcu_read_lock();
}
rcu_read_unlock();
return ret;
}
/*
* Lookup a namespace with id >= ns_id in either the unified tree or a type-specific tree.
* Returns the namespace with the smallest id that is >= ns_id.
*/
static struct ns_common *lookup_ns_id_at(u64 ns_id, int ns_type)
{
struct ns_common *ret = NULL;
struct ns_tree *ns_tree = NULL;
struct rb_node *node;
if (ns_type) {
ns_tree = ns_tree_from_type(ns_type);
if (!ns_tree)
return NULL;
}
read_seqlock_excl(&ns_tree_lock);
if (ns_tree)
node = ns_tree->ns_tree.rb_node;
else
node = ns_unified_tree.rb_node;
while (node) {
struct ns_common *ns;
if (ns_type)
ns = node_to_ns(node);
else
ns = node_to_ns_unified(node);
if (ns_id <= ns->ns_id) {
if (ns_type)
ret = node_to_ns(node);
else
ret = node_to_ns_unified(node);
if (ns_id == ns->ns_id)
break;
node = node->rb_left;
} else {
node = node->rb_right;
}
}
if (ret)
ret = ns_get_unless_inactive(ret);
read_sequnlock_excl(&ns_tree_lock);
return ret;
}
static inline struct ns_common *first_ns_common(const struct list_head *head,
struct ns_tree *ns_tree)
{
if (ns_tree)
return list_entry_rcu(head->next, struct ns_common, ns_list_node);
return list_entry_rcu(head->next, struct ns_common, ns_unified_list_node);
}
static inline struct ns_common *next_ns_common(struct ns_common *ns,
struct ns_tree *ns_tree)
{
if (ns_tree)
return list_entry_rcu(ns->ns_list_node.next, struct ns_common, ns_list_node);
return list_entry_rcu(ns->ns_unified_list_node.next, struct ns_common, ns_unified_list_node);
}
static inline bool ns_common_is_head(struct ns_common *ns,
const struct list_head *head,
struct ns_tree *ns_tree)
{
if (ns_tree)
return &ns->ns_list_node == head;
return &ns->ns_unified_list_node == head;
}
static ssize_t do_listns(struct klistns *kls)
{
u64 __user *ns_ids = kls->uns_ids;
size_t nr_ns_ids = kls->nr_ns_ids;
struct ns_common *ns, *first_ns = NULL;
struct ns_tree *ns_tree = NULL;
const struct list_head *head;
u32 ns_type;
ssize_t ret;
if (hweight32(kls->ns_type) == 1)
ns_type = kls->ns_type;
else
ns_type = 0;
if (ns_type) {
ns_tree = ns_tree_from_type(ns_type);
if (!ns_tree)
return -EINVAL;
}
if (kls->last_ns_id) {
kls->first_ns = lookup_ns_id_at(kls->last_ns_id + 1, ns_type);
if (!kls->first_ns)
return -ENOENT;
first_ns = kls->first_ns;
}
ret = 0;
if (ns_tree)
head = &ns_tree->ns_list;
else
head = &ns_unified_list;
rcu_read_lock();
if (!first_ns)
first_ns = first_ns_common(head, ns_tree);
for (ns = first_ns; !ns_common_is_head(ns, head, ns_tree) && nr_ns_ids;
ns = next_ns_common(ns, ns_tree)) {
struct ns_common *valid __free(ns_put) = legitimize_ns(kls, ns);
if (!valid)
continue;
rcu_read_unlock();
if (put_user(valid->ns_id, ns_ids + ret))
return -EFAULT;
nr_ns_ids--;
ret++;
rcu_read_lock();
}
rcu_read_unlock();
return ret;
}
SYSCALL_DEFINE4(listns, const struct ns_id_req __user *, req,
u64 __user *, ns_ids, size_t, nr_ns_ids, unsigned int, flags)
{
struct klistns klns __free(klistns_free) = {};
const size_t maxcount = 1000000;
struct ns_id_req kreq;
ssize_t ret;
if (flags)
return -EINVAL;
if (unlikely(nr_ns_ids > maxcount))
return -EOVERFLOW;
if (!access_ok(ns_ids, nr_ns_ids * sizeof(*ns_ids)))
return -EFAULT;
ret = copy_ns_id_req(req, &kreq);
if (ret)
return ret;
ret = prepare_klistns(&klns, &kreq, ns_ids, nr_ns_ids);
if (ret)
return ret;
if (kreq.user_ns_id)
return do_listns_userns(&klns);
return do_listns(&klns);
}

View File

@@ -71,21 +71,16 @@ static int pid_max_max = PID_MAX_LIMIT;
* the scheme scales to up to 4 million PIDs, runtime.
*/
struct pid_namespace init_pid_ns = {
.ns.__ns_ref = REFCOUNT_INIT(2),
.ns = NS_COMMON_INIT(init_pid_ns, 2),
.idr = IDR_INIT(init_pid_ns.idr),
.pid_allocated = PIDNS_ADDING,
.level = 0,
.child_reaper = &init_task,
.user_ns = &init_user_ns,
.ns.inum = ns_init_inum(&init_pid_ns),
#ifdef CONFIG_PID_NS
.ns.ops = &pidns_operations,
#endif
.pid_max = PID_MAX_DEFAULT,
#if defined(CONFIG_SYSCTL) && defined(CONFIG_MEMFD_CREATE)
.memfd_noexec_scope = MEMFD_NOEXEC_SCOPE_EXEC,
#endif
.ns.ns_type = ns_common_type(&init_pid_ns),
};
EXPORT_SYMBOL_GPL(init_pid_ns);
@@ -117,9 +112,13 @@ static void delayed_put_pid(struct rcu_head *rhp)
void free_pid(struct pid *pid)
{
int i;
struct pid_namespace *active_ns;
lockdep_assert_not_held(&tasklist_lock);
active_ns = pid->numbers[pid->level].ns;
ns_ref_active_put(active_ns);
spin_lock(&pidmap_lock);
for (i = 0; i <= pid->level; i++) {
struct upid *upid = pid->numbers + i;
@@ -283,6 +282,7 @@ struct pid *alloc_pid(struct pid_namespace *ns, pid_t *set_tid,
}
spin_unlock(&pidmap_lock);
idr_preload_end();
ns_ref_active_get(ns);
return pid;

View File

@@ -478,11 +478,8 @@ const struct proc_ns_operations timens_for_children_operations = {
};
struct time_namespace init_time_ns = {
.ns.ns_type = ns_common_type(&init_time_ns),
.ns.__ns_ref = REFCOUNT_INIT(3),
.ns = NS_COMMON_INIT(init_time_ns, 3),
.user_ns = &init_user_ns,
.ns.inum = ns_init_inum(&init_time_ns),
.ns.ops = &timens_operations,
.frozen_offsets = true,
};

View File

@@ -35,6 +35,7 @@ EXPORT_SYMBOL_GPL(init_binfmt_misc);
* and 1 for... ?
*/
struct user_namespace init_user_ns = {
.ns = NS_COMMON_INIT(init_user_ns, 3),
.uid_map = {
{
.extent[0] = {
@@ -65,14 +66,8 @@ struct user_namespace init_user_ns = {
.nr_extents = 1,
},
},
.ns.ns_type = ns_common_type(&init_user_ns),
.ns.__ns_ref = REFCOUNT_INIT(3),
.owner = GLOBAL_ROOT_UID,
.group = GLOBAL_ROOT_GID,
.ns.inum = ns_init_inum(&init_user_ns),
#ifdef CONFIG_USER_NS
.ns.ops = &userns_operations,
#endif
.flags = USERNS_INIT_FLAGS,
#ifdef CONFIG_KEYS
.keyring_name_list = LIST_HEAD_INIT(init_user_ns.keyring_name_list),

View File

@@ -439,7 +439,7 @@ static __net_init int setup_net(struct net *net)
LIST_HEAD(net_exit_list);
int error = 0;
net->net_cookie = ns_tree_gen_id(&net->ns);
net->net_cookie = ns_tree_gen_id(net);
list_for_each_entry(ops, &pernet_list, list) {
error = ops_init(ops, net);

View File

@@ -410,3 +410,4 @@
467 common open_tree_attr sys_open_tree_attr
468 common file_getattr sys_file_getattr
469 common file_setattr sys_file_setattr
470 common listns sys_listns

View File

@@ -53,6 +53,76 @@ enum init_ns_ino {
TIME_NS_INIT_INO = 0xEFFFFFFAU,
NET_NS_INIT_INO = 0xEFFFFFF9U,
MNT_NS_INIT_INO = 0xEFFFFFF8U,
#ifdef __KERNEL__
MNT_NS_ANON_INO = 0xEFFFFFF7U,
#endif
};
struct nsfs_file_handle {
__u64 ns_id;
__u32 ns_type;
__u32 ns_inum;
};
#define NSFS_FILE_HANDLE_SIZE_VER0 16 /* sizeof first published struct */
#define NSFS_FILE_HANDLE_SIZE_LATEST sizeof(struct nsfs_file_handle) /* sizeof latest published struct */
enum init_ns_id {
IPC_NS_INIT_ID = 1ULL,
UTS_NS_INIT_ID = 2ULL,
USER_NS_INIT_ID = 3ULL,
PID_NS_INIT_ID = 4ULL,
CGROUP_NS_INIT_ID = 5ULL,
TIME_NS_INIT_ID = 6ULL,
NET_NS_INIT_ID = 7ULL,
MNT_NS_INIT_ID = 8ULL,
#ifdef __KERNEL__
NS_LAST_INIT_ID = MNT_NS_INIT_ID,
#endif
};
enum ns_type {
TIME_NS = (1ULL << 7), /* CLONE_NEWTIME */
MNT_NS = (1ULL << 17), /* CLONE_NEWNS */
CGROUP_NS = (1ULL << 25), /* CLONE_NEWCGROUP */
UTS_NS = (1ULL << 26), /* CLONE_NEWUTS */
IPC_NS = (1ULL << 27), /* CLONE_NEWIPC */
USER_NS = (1ULL << 28), /* CLONE_NEWUSER */
PID_NS = (1ULL << 29), /* CLONE_NEWPID */
NET_NS = (1ULL << 30), /* CLONE_NEWNET */
};
/**
* struct ns_id_req - namespace ID request structure
* @size: size of this structure
* @spare: reserved for future use
* @ns_id: last listed namespace id (or zero to start from the beginning)
* @ns_type: namespace type to filter by (a single CLONE_NEW* bit, or zero
*           to list all types)
* @spare2: reserved for future use
* @user_ns_id: ID of the owning user namespace to filter by (or zero)
*
* Structure for passing namespace ID and miscellaneous parameters to
* listns(2). @ns_id is the pagination cursor: listing resumes at the
* first namespace with an id greater than @ns_id.
*/
struct ns_id_req {
__u32 size;
__u32 spare;
__u64 ns_id;
struct /* listns */ {
__u32 ns_type;
__u32 spare2;
__u64 user_ns_id;
};
};
/*
* Special @user_ns_id value that can be passed to listns()
*/
#define LISTNS_CURRENT_USER 0xffffffffffffffff /* Caller's userns */
/* List of all ns_id_req versions. */
#define NS_ID_REQ_SIZE_VER0 32 /* sizeof first published struct */
#endif /* __LINUX_NSFS_H */

View File

@@ -487,7 +487,7 @@ int setup_userns(void)
uid_t uid = getuid();
gid_t gid = getgid();
ret = unshare(CLONE_NEWNS|CLONE_NEWUSER|CLONE_NEWPID);
ret = unshare(CLONE_NEWNS|CLONE_NEWUSER);
if (ret) {
ksft_exit_fail_msg("unsharing mountns and userns: %s\n",
strerror(errno));

View File

@@ -1,3 +1,10 @@
nsid_test
file_handle_test
init_ino_test
ns_active_ref_test
listns_test
listns_permissions_test
siocgskns_test
cred_change_test
stress_test
listns_pagination_bug

View File

@@ -1,7 +1,25 @@
# SPDX-License-Identifier: GPL-2.0-only
CFLAGS += -Wall -O0 -g $(KHDR_INCLUDES) $(TOOLS_INCLUDES)
LDLIBS += -lcap
TEST_GEN_PROGS := nsid_test file_handle_test init_ino_test
TEST_GEN_PROGS := nsid_test \
file_handle_test \
init_ino_test \
ns_active_ref_test \
listns_test \
listns_permissions_test \
siocgskns_test \
cred_change_test \
stress_test \
listns_pagination_bug
include ../lib.mk
$(OUTPUT)/ns_active_ref_test: ../filesystems/utils.c
$(OUTPUT)/listns_test: ../filesystems/utils.c
$(OUTPUT)/listns_permissions_test: ../filesystems/utils.c
$(OUTPUT)/siocgskns_test: ../filesystems/utils.c
$(OUTPUT)/cred_change_test: ../filesystems/utils.c
$(OUTPUT)/stress_test: ../filesystems/utils.c
$(OUTPUT)/listns_pagination_bug: ../filesystems/utils.c

View File

@@ -0,0 +1,814 @@
// SPDX-License-Identifier: GPL-2.0
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/capability.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <linux/nsfs.h>
#include "../kselftest_harness.h"
#include "../filesystems/utils.h"
#include "wrappers.h"
/*
* Test credential changes and their impact on namespace active references.
*/
/*
* Test setuid() in a user namespace properly swaps active references.
* Create a user namespace with multiple UIDs mapped, then setuid() between them.
* Verify that the user namespace remains active throughout.
*/
TEST(setuid_preserves_active_refs)
{
pid_t pid;
int status;
__u64 userns_id;
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[256];
ssize_t ret;
int i;
bool found = false;
int pipefd[2];
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
/* Child process */
int fd, userns_fd;
__u64 child_userns_id;
uid_t orig_uid = getuid();
int setuid_count;
close(pipefd[0]);
/* Create new user namespace with multiple UIDs mapped (0-9) */
userns_fd = get_userns_fd(0, orig_uid, 10);
if (userns_fd < 0) {
close(pipefd[1]);
exit(1);
}
if (setns(userns_fd, CLONE_NEWUSER) < 0) {
close(userns_fd);
close(pipefd[1]);
exit(1);
}
close(userns_fd);
/* Get user namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &child_userns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
/* Send namespace ID to parent */
write(pipefd[1], &child_userns_id, sizeof(child_userns_id));
/*
* Perform multiple setuid() calls.
* Each setuid() triggers commit_creds() which should properly
* swap active references via switch_cred_namespaces().
*/
for (setuid_count = 0; setuid_count < 50; setuid_count++) {
uid_t target_uid = (setuid_count % 10);
if (setuid(target_uid) < 0) {
if (errno != EPERM) {
close(pipefd[1]);
exit(1);
}
}
}
close(pipefd[1]);
exit(0);
}
/* Parent process */
close(pipefd[1]);
if (read(pipefd[0], &userns_id, sizeof(userns_id)) != sizeof(userns_id)) {
close(pipefd[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
SKIP(return, "Failed to get namespace ID from child");
}
close(pipefd[0]);
TH_LOG("Child user namespace ID: %llu", (unsigned long long)userns_id);
/* Verify namespace is active while child is running */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
ASSERT_GE(ret, 0);
}
for (i = 0; i < ret; i++) {
if (ns_ids[i] == userns_id) {
found = true;
break;
}
}
ASSERT_TRUE(found);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
/* Verify namespace becomes inactive after child exits */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
ASSERT_GE(ret, 0);
found = false;
for (i = 0; i < ret; i++) {
if (ns_ids[i] == userns_id) {
found = true;
break;
}
}
ASSERT_FALSE(found);
TH_LOG("setuid() correctly preserved active references (no leak)");
}
/*
* Test setgid() in a user namespace properly handles active references.
*/
TEST(setgid_preserves_active_refs)
{
pid_t pid;
int status;
__u64 userns_id;
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[256];
ssize_t ret;
int i;
bool found = false;
int pipefd[2];
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
/* Child process */
int fd, userns_fd;
__u64 child_userns_id;
uid_t orig_uid = getuid();
int setgid_count;
close(pipefd[0]);
/* Create new user namespace with multiple GIDs mapped */
userns_fd = get_userns_fd(0, orig_uid, 10);
if (userns_fd < 0) {
close(pipefd[1]);
exit(1);
}
if (setns(userns_fd, CLONE_NEWUSER) < 0) {
close(userns_fd);
close(pipefd[1]);
exit(1);
}
close(userns_fd);
/* Get user namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &child_userns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
write(pipefd[1], &child_userns_id, sizeof(child_userns_id));
/* Perform multiple setgid() calls */
for (setgid_count = 0; setgid_count < 50; setgid_count++) {
gid_t target_gid = (setgid_count % 10);
if (setgid(target_gid) < 0) {
if (errno != EPERM) {
close(pipefd[1]);
exit(1);
}
}
}
close(pipefd[1]);
exit(0);
}
/* Parent process */
close(pipefd[1]);
if (read(pipefd[0], &userns_id, sizeof(userns_id)) != sizeof(userns_id)) {
close(pipefd[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
SKIP(return, "Failed to get namespace ID from child");
}
close(pipefd[0]);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
/* Verify namespace becomes inactive */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
ASSERT_GE(ret, 0);
}
for (i = 0; i < ret; i++) {
if (ns_ids[i] == userns_id) {
found = true;
break;
}
}
ASSERT_FALSE(found);
TH_LOG("setgid() correctly preserved active references (no leak)");
}
/*
* Test setresuid() which changes real, effective, and saved UIDs.
* This should properly swap active references via commit_creds().
*/
TEST(setresuid_preserves_active_refs)
{
pid_t pid;
int status;
__u64 userns_id;
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[256];
ssize_t ret;
int i;
bool found = false;
int pipefd[2];
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
/* Child process */
int fd, userns_fd;
__u64 child_userns_id;
uid_t orig_uid = getuid();
int setres_count;
close(pipefd[0]);
/* Create new user namespace */
userns_fd = get_userns_fd(0, orig_uid, 10);
if (userns_fd < 0) {
close(pipefd[1]);
exit(1);
}
if (setns(userns_fd, CLONE_NEWUSER) < 0) {
close(userns_fd);
close(pipefd[1]);
exit(1);
}
close(userns_fd);
/* Get user namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &child_userns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
write(pipefd[1], &child_userns_id, sizeof(child_userns_id));
/* Perform multiple setresuid() calls */
for (setres_count = 0; setres_count < 30; setres_count++) {
uid_t uid1 = (setres_count % 5);
uid_t uid2 = ((setres_count + 1) % 5);
uid_t uid3 = ((setres_count + 2) % 5);
if (setresuid(uid1, uid2, uid3) < 0) {
if (errno != EPERM) {
close(pipefd[1]);
exit(1);
}
}
}
close(pipefd[1]);
exit(0);
}
/* Parent process */
close(pipefd[1]);
if (read(pipefd[0], &userns_id, sizeof(userns_id)) != sizeof(userns_id)) {
close(pipefd[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
SKIP(return, "Failed to get namespace ID from child");
}
close(pipefd[0]);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
/* Verify namespace becomes inactive */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
ASSERT_GE(ret, 0);
}
for (i = 0; i < ret; i++) {
if (ns_ids[i] == userns_id) {
found = true;
break;
}
}
ASSERT_FALSE(found);
TH_LOG("setresuid() correctly preserved active references (no leak)");
}
/*
* Test credential changes across multiple user namespaces.
* Create nested user namespaces and verify active reference tracking.
*/
TEST(cred_change_nested_userns)
{
pid_t pid;
int status;
__u64 parent_userns_id, child_userns_id;
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[256];
ssize_t ret;
int i;
bool found_parent = false, found_child = false;
int pipefd[2];
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
/* Child process */
int fd, userns_fd;
__u64 parent_id, child_id;
uid_t orig_uid = getuid();
close(pipefd[0]);
/* Create first user namespace */
userns_fd = get_userns_fd(0, orig_uid, 1);
if (userns_fd < 0) {
close(pipefd[1]);
exit(1);
}
if (setns(userns_fd, CLONE_NEWUSER) < 0) {
close(userns_fd);
close(pipefd[1]);
exit(1);
}
close(userns_fd);
/* Get first namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &parent_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
/* Create nested user namespace */
userns_fd = get_userns_fd(0, 0, 1);
if (userns_fd < 0) {
close(pipefd[1]);
exit(1);
}
if (setns(userns_fd, CLONE_NEWUSER) < 0) {
close(userns_fd);
close(pipefd[1]);
exit(1);
}
close(userns_fd);
/* Get nested namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &child_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
/* Send both IDs to parent */
write(pipefd[1], &parent_id, sizeof(parent_id));
write(pipefd[1], &child_id, sizeof(child_id));
/* Perform some credential changes in nested namespace */
setuid(0);
setgid(0);
close(pipefd[1]);
exit(0);
}
/* Parent process */
close(pipefd[1]);
/* Read both namespace IDs */
if (read(pipefd[0], &parent_userns_id, sizeof(parent_userns_id)) != sizeof(parent_userns_id)) {
close(pipefd[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
SKIP(return, "Failed to get parent namespace ID");
}
if (read(pipefd[0], &child_userns_id, sizeof(child_userns_id)) != sizeof(child_userns_id)) {
close(pipefd[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
SKIP(return, "Failed to get child namespace ID");
}
close(pipefd[0]);
TH_LOG("Parent userns: %llu, Child userns: %llu",
(unsigned long long)parent_userns_id,
(unsigned long long)child_userns_id);
/* Verify both namespaces are active */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
ASSERT_GE(ret, 0);
}
for (i = 0; i < ret; i++) {
if (ns_ids[i] == parent_userns_id)
found_parent = true;
if (ns_ids[i] == child_userns_id)
found_child = true;
}
ASSERT_TRUE(found_parent);
ASSERT_TRUE(found_child);
/* Wait for child */
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
/* Verify both namespaces become inactive */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
ASSERT_GE(ret, 0);
found_parent = false;
found_child = false;
for (i = 0; i < ret; i++) {
if (ns_ids[i] == parent_userns_id)
found_parent = true;
if (ns_ids[i] == child_userns_id)
found_child = true;
}
ASSERT_FALSE(found_parent);
ASSERT_FALSE(found_child);
TH_LOG("Nested user namespace credential changes preserved active refs (no leak)");
}
/*
* Test rapid credential changes don't cause refcount imbalances.
* This stress-tests the switch_cred_namespaces() logic.
*/
TEST(rapid_cred_changes_no_leak)
{
pid_t pid;
int status;
__u64 userns_id;
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[256];
ssize_t ret;
int i;
bool found = false;
int pipefd[2];
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
/* Child process */
int fd, userns_fd;
__u64 child_userns_id;
uid_t orig_uid = getuid();
int change_count;
close(pipefd[0]);
/* Create new user namespace with wider range of UIDs/GIDs */
userns_fd = get_userns_fd(0, orig_uid, 100);
if (userns_fd < 0) {
close(pipefd[1]);
exit(1);
}
if (setns(userns_fd, CLONE_NEWUSER) < 0) {
close(userns_fd);
close(pipefd[1]);
exit(1);
}
close(userns_fd);
/* Get user namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &child_userns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
write(pipefd[1], &child_userns_id, sizeof(child_userns_id));
/*
* Perform many rapid credential changes.
* Mix setuid, setgid, setreuid, setregid, setresuid, setresgid.
*/
for (change_count = 0; change_count < 200; change_count++) {
switch (change_count % 6) {
case 0:
setuid(change_count % 50);
break;
case 1:
setgid(change_count % 50);
break;
case 2:
setreuid(change_count % 50, (change_count + 1) % 50);
break;
case 3:
setregid(change_count % 50, (change_count + 1) % 50);
break;
case 4:
setresuid(change_count % 50, (change_count + 1) % 50, (change_count + 2) % 50);
break;
case 5:
setresgid(change_count % 50, (change_count + 1) % 50, (change_count + 2) % 50);
break;
}
}
close(pipefd[1]);
exit(0);
}
/* Parent process */
close(pipefd[1]);
if (read(pipefd[0], &userns_id, sizeof(userns_id)) != sizeof(userns_id)) {
close(pipefd[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
SKIP(return, "Failed to get namespace ID from child");
}
close(pipefd[0]);
TH_LOG("Testing with user namespace ID: %llu", (unsigned long long)userns_id);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
/* Verify namespace becomes inactive (no leaked active refs) */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
ASSERT_GE(ret, 0);
}
for (i = 0; i < ret; i++) {
if (ns_ids[i] == userns_id) {
found = true;
break;
}
}
ASSERT_FALSE(found);
TH_LOG("200 rapid credential changes completed with no active ref leak");
}
/*
* Test setfsuid/setfsgid which change filesystem UID/GID.
* These also trigger credential changes but may have different code paths.
*/
TEST(setfsuid_preserves_active_refs)
{
pid_t pid;
int status;
__u64 userns_id;
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[256];
ssize_t ret;
int i;
bool found = false;
int pipefd[2];
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
/* Child process */
int fd, userns_fd;
__u64 child_userns_id;
uid_t orig_uid = getuid();
int change_count;
close(pipefd[0]);
/* Create new user namespace */
userns_fd = get_userns_fd(0, orig_uid, 10);
if (userns_fd < 0) {
close(pipefd[1]);
exit(1);
}
if (setns(userns_fd, CLONE_NEWUSER) < 0) {
close(userns_fd);
close(pipefd[1]);
exit(1);
}
close(userns_fd);
/* Get user namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &child_userns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
write(pipefd[1], &child_userns_id, sizeof(child_userns_id));
/* Perform multiple setfsuid/setfsgid calls */
for (change_count = 0; change_count < 50; change_count++) {
setfsuid(change_count % 10);
setfsgid(change_count % 10);
}
close(pipefd[1]);
exit(0);
}
/* Parent process */
close(pipefd[1]);
if (read(pipefd[0], &userns_id, sizeof(userns_id)) != sizeof(userns_id)) {
close(pipefd[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
SKIP(return, "Failed to get namespace ID from child");
}
close(pipefd[0]);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
/* Verify namespace becomes inactive */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
ASSERT_GE(ret, 0);
}
for (i = 0; i < ret; i++) {
if (ns_ids[i] == userns_id) {
found = true;
break;
}
}
ASSERT_FALSE(found);
TH_LOG("setfsuid/setfsgid correctly preserved active references (no leak)");
}
TEST_HARNESS_MAIN

View File

@@ -0,0 +1,138 @@
// SPDX-License-Identifier: GPL-2.0
#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include "../kselftest_harness.h"
#include "../filesystems/utils.h"
#include "wrappers.h"
/*
* Minimal test case to reproduce KASAN out-of-bounds in listns pagination.
*
* The bug occurs when:
* 1. Filtering by a specific namespace type (e.g., CLONE_NEWUSER)
* 2. Using pagination (req.ns_id != 0)
* 3. The lookup_ns_id_at() call in do_listns() passes ns_type=0 instead of
* the filtered type, causing it to search the unified tree and potentially
* return a namespace of the wrong type.
*/
TEST(pagination_with_type_filter)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER, /* Filter by user namespace */
.spare2 = 0,
.user_ns_id = 0,
};
pid_t pids[10];
int num_children = 10;
int i;
int sv[2];
__u64 first_batch[3];
ssize_t ret;
ASSERT_EQ(socketpair(AF_UNIX, SOCK_STREAM, 0, sv), 0);
/* Create children with user namespaces */
for (i = 0; i < num_children; i++) {
pids[i] = fork();
ASSERT_GE(pids[i], 0);
if (pids[i] == 0) {
char c;
close(sv[0]);
if (setup_userns() < 0) {
close(sv[1]);
exit(1);
}
/* Signal parent we're ready */
if (write(sv[1], &c, 1) != 1) {
close(sv[1]);
exit(1);
}
/* Wait for parent signal to exit */
if (read(sv[1], &c, 1) != 1) {
close(sv[1]);
exit(1);
}
close(sv[1]);
exit(0);
}
}
close(sv[1]);
/* Wait for all children to signal ready */
for (i = 0; i < num_children; i++) {
char c;
if (read(sv[0], &c, 1) != 1) {
close(sv[0]);
for (int j = 0; j < num_children; j++)
kill(pids[j], SIGKILL);
for (int j = 0; j < num_children; j++)
waitpid(pids[j], NULL, 0);
ASSERT_TRUE(false);
}
}
/* First batch - this should work */
ret = sys_listns(&req, first_batch, 3, 0);
if (ret < 0) {
if (errno == ENOSYS) {
close(sv[0]);
for (i = 0; i < num_children; i++)
kill(pids[i], SIGKILL);
for (i = 0; i < num_children; i++)
waitpid(pids[i], NULL, 0);
SKIP(return, "listns() not supported");
}
ASSERT_GE(ret, 0);
}
TH_LOG("First batch returned %zd entries", ret);
if (ret == 3) {
__u64 second_batch[3];
/* Second batch - pagination triggers the bug */
req.ns_id = first_batch[2]; /* Continue from last ID */
ret = sys_listns(&req, second_batch, 3, 0);
TH_LOG("Second batch returned %zd entries", ret);
ASSERT_GE(ret, 0);
}
/* Signal all children to exit */
for (i = 0; i < num_children; i++) {
char c = 'X';
if (write(sv[0], &c, 1) != 1) {
close(sv[0]);
for (int j = i; j < num_children; j++)
kill(pids[j], SIGKILL);
for (int j = 0; j < num_children; j++)
waitpid(pids[j], NULL, 0);
ASSERT_TRUE(false);
}
}
close(sv[0]);
/* Cleanup */
for (i = 0; i < num_children; i++) {
int status;
waitpid(pids[i], &status, 0);
}
}
TEST_HARNESS_MAIN

View File

@@ -0,0 +1,759 @@
// SPDX-License-Identifier: GPL-2.0
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <linux/nsfs.h>
#include <sys/capability.h>
#include <sys/ioctl.h>
#include <sys/prctl.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include "../kselftest_harness.h"
#include "../filesystems/utils.h"
#include "wrappers.h"
/*
* Test that unprivileged users can only see namespaces they're currently in.
* Create a namespace, drop privileges, verify we can only see our own namespaces.
*/
TEST(listns_unprivileged_current_only)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWNET,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[100];
ssize_t ret;
int pipefd[2];
pid_t pid;
int status;
bool found_ours;
int unexpected_count;
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
int fd;
__u64 our_netns_id;
bool found_ours;
int unexpected_count;
close(pipefd[0]);
/* Create user namespace to be unprivileged */
if (setup_userns() < 0) {
close(pipefd[1]);
exit(1);
}
/* Create a network namespace */
if (unshare(CLONE_NEWNET) < 0) {
close(pipefd[1]);
exit(1);
}
/* Get our network namespace ID */
fd = open("/proc/self/ns/net", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &our_netns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
/* Now we're unprivileged - list all network namespaces */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
close(pipefd[1]);
exit(1);
}
/* We should only see our own network namespace */
found_ours = false;
unexpected_count = 0;
for (ssize_t i = 0; i < ret; i++) {
if (ns_ids[i] == our_netns_id) {
found_ours = true;
} else {
/* This is either init_net (which we can see) or unexpected */
unexpected_count++;
}
}
/* Send results to parent */
write(pipefd[1], &found_ours, sizeof(found_ours));
write(pipefd[1], &unexpected_count, sizeof(unexpected_count));
close(pipefd[1]);
exit(0);
}
/* Parent */
close(pipefd[1]);
found_ours = false;
unexpected_count = 0;
read(pipefd[0], &found_ours, sizeof(found_ours));
read(pipefd[0], &unexpected_count, sizeof(unexpected_count));
close(pipefd[0]);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
/* Child should have seen its own namespace */
ASSERT_TRUE(found_ours);
TH_LOG("Unprivileged child saw its own namespace, plus %d others (likely init_net)",
unexpected_count);
}
/*
* Test that users with CAP_SYS_ADMIN in a user namespace can see
* all namespaces owned by that user namespace.
*/
TEST(listns_cap_sys_admin_in_userns)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = 0, /* All types */
.spare2 = 0,
.user_ns_id = 0, /* Will be set to our created user namespace */
};
__u64 ns_ids[100];
int pipefd[2];
pid_t pid;
int status;
bool success;
ssize_t count;
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
int fd;
__u64 userns_id;
ssize_t ret;
int min_expected;
bool success;
close(pipefd[0]);
/* Create user namespace - we'll have CAP_SYS_ADMIN in it */
if (setup_userns() < 0) {
close(pipefd[1]);
exit(1);
}
/* Get the user namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &userns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
/* Create several namespaces owned by this user namespace */
unshare(CLONE_NEWNET);
unshare(CLONE_NEWUTS);
unshare(CLONE_NEWIPC);
/* List namespaces owned by our user namespace */
req.user_ns_id = userns_id;
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
close(pipefd[1]);
exit(1);
}
/*
* We have CAP_SYS_ADMIN in this user namespace,
* so we should see all namespaces owned by it.
* That includes: net, uts, ipc, and the user namespace itself.
*/
min_expected = 4;
success = (ret >= min_expected);
write(pipefd[1], &success, sizeof(success));
write(pipefd[1], &ret, sizeof(ret));
close(pipefd[1]);
exit(0);
}
/* Parent */
close(pipefd[1]);
success = false;
count = 0;
read(pipefd[0], &success, sizeof(success));
read(pipefd[0], &count, sizeof(count));
close(pipefd[0]);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
ASSERT_TRUE(success);
TH_LOG("User with CAP_SYS_ADMIN saw %zd namespaces owned by their user namespace",
count);
}
/*
* Test that users cannot see namespaces from unrelated user namespaces.
* Create two sibling user namespaces, verify they can't see each other's
* owned namespaces.
*/
TEST(listns_cannot_see_sibling_userns_namespaces)
{
int pipefd[2];
pid_t pid1, pid2;
int status;
__u64 netns_a_id;
int pipefd2[2];
bool found_sibling_netns;
/* Use a socketpair, not a pipe: the first child must later block reading its own end. */
ASSERT_EQ(socketpair(AF_UNIX, SOCK_STREAM, 0, pipefd), 0);
/* Fork first child - creates user namespace A */
pid1 = fork();
ASSERT_GE(pid1, 0);
if (pid1 == 0) {
int fd;
__u64 netns_a_id;
char buf;
close(pipefd[0]);
/* Create user namespace A */
if (setup_userns() < 0) {
close(pipefd[1]);
exit(1);
}
/* Create network namespace owned by user namespace A */
if (unshare(CLONE_NEWNET) < 0) {
close(pipefd[1]);
exit(1);
}
/* Get network namespace ID */
fd = open("/proc/self/ns/net", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &netns_a_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
/* Send namespace ID to parent */
write(pipefd[1], &netns_a_id, sizeof(netns_a_id));
/* Block here, keeping our namespaces alive, until the parent closes its end */
read(pipefd[1], &buf, 1);
close(pipefd[1]);
exit(0);
}
/* Parent reads namespace A ID */
close(pipefd[1]);
netns_a_id = 0;
read(pipefd[0], &netns_a_id, sizeof(netns_a_id));
TH_LOG("User namespace A created network namespace with ID %llu",
(unsigned long long)netns_a_id);
/* Fork second child - creates user namespace B */
ASSERT_EQ(pipe(pipefd2), 0);
pid2 = fork();
ASSERT_GE(pid2, 0);
if (pid2 == 0) {
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWNET,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[100];
ssize_t ret;
bool found_sibling_netns;
close(pipefd[0]);
close(pipefd2[0]);
/* Create user namespace B (sibling to A) */
if (setup_userns() < 0) {
close(pipefd2[1]);
exit(1);
}
/* Try to list all network namespaces */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
found_sibling_netns = false;
if (ret > 0) {
for (ssize_t i = 0; i < ret; i++) {
if (ns_ids[i] == netns_a_id) {
found_sibling_netns = true;
break;
}
}
}
/* We should NOT see the sibling's network namespace */
write(pipefd2[1], &found_sibling_netns, sizeof(found_sibling_netns));
close(pipefd2[1]);
exit(0);
}
/* Parent reads result from second child */
close(pipefd2[1]);
found_sibling_netns = false;
read(pipefd2[0], &found_sibling_netns, sizeof(found_sibling_netns));
close(pipefd2[0]);
/* Signal first child to exit */
close(pipefd[0]);
/* Wait for both children */
waitpid(pid2, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
waitpid(pid1, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
/* Second child should NOT have seen first child's namespace */
ASSERT_FALSE(found_sibling_netns);
TH_LOG("User namespace B correctly could not see sibling namespace A's network namespace");
}
/*
* Test permission checking with LISTNS_CURRENT_USER.
* Verify that listing with LISTNS_CURRENT_USER respects permissions.
*/
TEST(listns_current_user_permissions)
{
int pipefd[2];
pid_t pid;
int status;
bool success;
ssize_t count;
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = 0,
.spare2 = 0,
.user_ns_id = LISTNS_CURRENT_USER,
};
__u64 ns_ids[100];
ssize_t ret;
bool success;
close(pipefd[0]);
/* Create user namespace */
if (setup_userns() < 0) {
close(pipefd[1]);
exit(1);
}
/* Create some namespaces owned by this user namespace */
if (unshare(CLONE_NEWNET) < 0) {
close(pipefd[1]);
exit(1);
}
if (unshare(CLONE_NEWUTS) < 0) {
close(pipefd[1]);
exit(1);
}
/* List with LISTNS_CURRENT_USER - should see our owned namespaces */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
success = (ret >= 3); /* At least user, net, uts */
write(pipefd[1], &success, sizeof(success));
write(pipefd[1], &ret, sizeof(ret));
close(pipefd[1]);
exit(0);
}
/* Parent */
close(pipefd[1]);
success = false;
count = 0;
read(pipefd[0], &success, sizeof(success));
read(pipefd[0], &count, sizeof(count));
close(pipefd[0]);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
ASSERT_TRUE(success);
TH_LOG("LISTNS_CURRENT_USER returned %zd namespaces", count);
}
/*
* Test that CAP_SYS_ADMIN in parent user namespace allows seeing
* child user namespace's owned namespaces.
*/
TEST(listns_parent_userns_cap_sys_admin)
{
int pipefd[2];
pid_t pid;
int status;
bool found_child_userns;
ssize_t count;
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
int fd;
__u64 parent_userns_id;
__u64 child_userns_id;
struct ns_id_req req;
__u64 ns_ids[100];
ssize_t ret;
bool found_child_userns;
close(pipefd[0]);
/* Create parent user namespace - we have CAP_SYS_ADMIN in it */
if (setup_userns() < 0) {
close(pipefd[1]);
exit(1);
}
/* Get parent user namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &parent_userns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
/* Create child user namespace */
if (setup_userns() < 0) {
close(pipefd[1]);
exit(1);
}
/* Get child user namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &child_userns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
/* Create namespaces owned by child user namespace */
if (unshare(CLONE_NEWNET) < 0) {
close(pipefd[1]);
exit(1);
}
/* List namespaces owned by parent user namespace */
req.size = sizeof(req);
req.spare = 0;
req.ns_id = 0;
req.ns_type = 0;
req.spare2 = 0;
req.user_ns_id = parent_userns_id;
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
/* Should see child user namespace in the list */
found_child_userns = false;
if (ret > 0) {
for (ssize_t i = 0; i < ret; i++) {
if (ns_ids[i] == child_userns_id) {
found_child_userns = true;
break;
}
}
}
write(pipefd[1], &found_child_userns, sizeof(found_child_userns));
write(pipefd[1], &ret, sizeof(ret));
close(pipefd[1]);
exit(0);
}
/* Parent */
close(pipefd[1]);
found_child_userns = false;
count = 0;
read(pipefd[0], &found_child_userns, sizeof(found_child_userns));
read(pipefd[0], &count, sizeof(count));
close(pipefd[0]);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
ASSERT_TRUE(found_child_userns);
TH_LOG("Process with CAP_SYS_ADMIN in parent user namespace saw child user namespace (total: %zd)",
count);
}
/*
* Test that we can see user namespaces we have CAP_SYS_ADMIN inside of.
* This is different from seeing namespaces owned by a user namespace.
*/
TEST(listns_cap_sys_admin_inside_userns)
{
int pipefd[2];
pid_t pid;
int status;
bool found_ours;
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
int fd;
__u64 our_userns_id;
struct ns_id_req req;
__u64 ns_ids[100];
ssize_t ret;
bool found_ours;
close(pipefd[0]);
/* Create user namespace - we have CAP_SYS_ADMIN inside it */
if (setup_userns() < 0) {
close(pipefd[1]);
exit(1);
}
/* Get our user namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &our_userns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
/* List all user namespaces globally */
req.size = sizeof(req);
req.spare = 0;
req.ns_id = 0;
req.ns_type = CLONE_NEWUSER;
req.spare2 = 0;
req.user_ns_id = 0;
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
/* We should be able to see our own user namespace */
found_ours = false;
if (ret > 0) {
for (ssize_t i = 0; i < ret; i++) {
if (ns_ids[i] == our_userns_id) {
found_ours = true;
break;
}
}
}
write(pipefd[1], &found_ours, sizeof(found_ours));
close(pipefd[1]);
exit(0);
}
/* Parent */
close(pipefd[1]);
found_ours = false;
read(pipefd[0], &found_ours, sizeof(found_ours));
close(pipefd[0]);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
ASSERT_TRUE(found_ours);
TH_LOG("Process can see user namespace it has CAP_SYS_ADMIN inside of");
}
/*
* Test that dropping CAP_SYS_ADMIN restricts what we can see.
*/
TEST(listns_drop_cap_sys_admin)
{
cap_t caps;
cap_value_t cap_list[1] = { CAP_SYS_ADMIN };
/* This test needs to start with CAP_SYS_ADMIN */
caps = cap_get_proc();
if (!caps) {
SKIP(return, "Cannot get capabilities");
}
cap_flag_value_t cap_val;
if (cap_get_flag(caps, CAP_SYS_ADMIN, CAP_EFFECTIVE, &cap_val) < 0) {
cap_free(caps);
SKIP(return, "Cannot check CAP_SYS_ADMIN");
}
if (cap_val != CAP_SET) {
cap_free(caps);
SKIP(return, "Test needs CAP_SYS_ADMIN to start");
}
cap_free(caps);
int pipefd[2];
pid_t pid;
int status;
bool correct;
ssize_t count_before, count_after;
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWNET,
.spare2 = 0,
.user_ns_id = LISTNS_CURRENT_USER,
};
__u64 ns_ids_before[100];
ssize_t count_before;
__u64 ns_ids_after[100];
ssize_t count_after;
bool correct;
close(pipefd[0]);
/* Create user namespace */
if (setup_userns() < 0) {
close(pipefd[1]);
exit(1);
}
/* Count namespaces with CAP_SYS_ADMIN */
count_before = sys_listns(&req, ns_ids_before, ARRAY_SIZE(ns_ids_before), 0);
/* Drop CAP_SYS_ADMIN */
caps = cap_get_proc();
if (caps) {
cap_set_flag(caps, CAP_EFFECTIVE, 1, cap_list, CAP_CLEAR);
cap_set_flag(caps, CAP_PERMITTED, 1, cap_list, CAP_CLEAR);
cap_set_proc(caps);
cap_free(caps);
}
/* Ensure we can't regain the capability */
prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
/* Count namespaces without CAP_SYS_ADMIN */
count_after = sys_listns(&req, ns_ids_after, ARRAY_SIZE(ns_ids_after), 0);
/* Without CAP_SYS_ADMIN, we should see same or fewer namespaces */
correct = (count_after <= count_before);
write(pipefd[1], &correct, sizeof(correct));
write(pipefd[1], &count_before, sizeof(count_before));
write(pipefd[1], &count_after, sizeof(count_after));
close(pipefd[1]);
exit(0);
}
/* Parent */
close(pipefd[1]);
correct = false;
count_before = 0;
count_after = 0;
read(pipefd[0], &correct, sizeof(correct));
read(pipefd[0], &count_before, sizeof(count_before));
read(pipefd[0], &count_after, sizeof(count_after));
close(pipefd[0]);
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
ASSERT_TRUE(correct);
TH_LOG("With CAP_SYS_ADMIN: %zd namespaces, without: %zd namespaces",
count_before, count_after);
}
TEST_HARNESS_MAIN


@@ -0,0 +1,679 @@
// SPDX-License-Identifier: GPL-2.0
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <linux/nsfs.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include "../kselftest_harness.h"
#include "../filesystems/utils.h"
#include "wrappers.h"
/*
* Test basic listns() functionality with the unified namespace tree.
* List all active namespaces globally.
*/
TEST(listns_basic_unified)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = 0, /* All types */
.spare2 = 0,
.user_ns_id = 0, /* Global listing */
};
__u64 ns_ids[100];
ssize_t ret;
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
TH_LOG("listns failed: %s (errno=%d)", strerror(errno), errno);
ASSERT_TRUE(false);
}
/* Should find at least the initial namespaces */
ASSERT_GT(ret, 0);
TH_LOG("Found %zd active namespaces", ret);
/* Verify all returned IDs are non-zero */
for (ssize_t i = 0; i < ret; i++) {
ASSERT_NE(ns_ids[i], 0);
TH_LOG(" [%zd] ns_id: %llu", i, (unsigned long long)ns_ids[i]);
}
}
/*
* Test listns() with type filtering.
* List only network namespaces.
*/
TEST(listns_filter_by_type)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWNET, /* Only network namespaces */
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[100];
ssize_t ret;
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
TH_LOG("listns failed: %s (errno=%d)", strerror(errno), errno);
ASSERT_TRUE(false);
}
ASSERT_GE(ret, 0);
/* Should find at least init_net */
ASSERT_GT(ret, 0);
TH_LOG("Found %zd active network namespaces", ret);
/* Verify we can open each namespace and it's actually a network namespace */
for (ssize_t i = 0; i < ret && i < 5; i++) {
struct nsfs_file_handle nsfh = {
.ns_id = ns_ids[i],
.ns_type = CLONE_NEWNET,
.ns_inum = 0,
};
struct file_handle *fh;
int fd;
fh = (struct file_handle *)malloc(sizeof(*fh) + sizeof(nsfh));
ASSERT_NE(fh, NULL);
fh->handle_bytes = sizeof(nsfh);
fh->handle_type = 0;
memcpy(fh->f_handle, &nsfh, sizeof(nsfh));
fd = open_by_handle_at(-10003 /* FD_NSFS_ROOT */, fh, O_RDONLY);
free(fh);
if (fd >= 0) {
int ns_type;
/* Verify it's a network namespace via ioctl */
ns_type = ioctl(fd, NS_GET_NSTYPE);
if (ns_type >= 0) {
ASSERT_EQ(ns_type, CLONE_NEWNET);
}
close(fd);
}
}
}
/*
* Test listns() pagination.
* List namespaces in batches.
*/
TEST(listns_pagination)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = 0,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 batch1[2], batch2[2];
ssize_t ret1, ret2;
/* Get first batch */
ret1 = sys_listns(&req, batch1, ARRAY_SIZE(batch1), 0);
if (ret1 < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
TH_LOG("listns failed: %s (errno=%d)", strerror(errno), errno);
ASSERT_TRUE(false);
}
ASSERT_GE(ret1, 0);
if (ret1 == 0)
SKIP(return, "No namespaces found");
TH_LOG("First batch: %zd namespaces", ret1);
/* Get second batch using last ID from first batch */
if (ret1 == ARRAY_SIZE(batch1)) {
req.ns_id = batch1[ret1 - 1];
ret2 = sys_listns(&req, batch2, ARRAY_SIZE(batch2), 0);
ASSERT_GE(ret2, 0);
TH_LOG("Second batch: %zd namespaces (after ns_id=%llu)",
ret2, (unsigned long long)req.ns_id);
/* If we got more results, verify IDs are monotonically increasing */
if (ret2 > 0) {
ASSERT_GT(batch2[0], batch1[ret1 - 1]);
TH_LOG("Pagination working: %llu > %llu",
(unsigned long long)batch2[0],
(unsigned long long)batch1[ret1 - 1]);
}
} else {
TH_LOG("All namespaces fit in first batch");
}
}
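The cursor logic above depends on two properties the test asserts: IDs come back in strictly increasing order, and passing the last ID of a full batch as req.ns_id resumes the listing after it. Here is a minimal userspace sketch of that loop against an in-memory table (`fake_listns` and `collect_all_ids` are illustrative helpers invented for this sketch, not part of the kselftest or the kernel ABI):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative stand-in for listns() pagination: return up to nr IDs
 * from a sorted table that are strictly greater than cursor, the way
 * the kernel resumes a listing after req.ns_id.
 */
static size_t fake_listns(const uint64_t *table, size_t table_len,
			  uint64_t cursor, uint64_t *out, size_t nr)
{
	size_t found = 0;

	for (size_t i = 0; i < table_len && found < nr; i++) {
		if (table[i] > cursor)
			out[found++] = table[i];
	}
	return found;
}

/* Drain everything in small batches, mirroring the loop in the test. */
static size_t collect_all_ids(const uint64_t *table, size_t table_len,
			      uint64_t *out, size_t out_len)
{
	uint64_t cursor = 0;
	size_t total = 0;

	for (;;) {
		uint64_t batch[2];
		size_t got = fake_listns(table, table_len, cursor, batch, 2);

		for (size_t i = 0; i < got && total < out_len; i++)
			out[total++] = batch[i];
		if (got < 2)
			break;	/* short batch: listing exhausted */
		cursor = batch[got - 1];
	}
	return total;
}
```

For a table of {1, 3, 7, 9, 12} the helper returns all five IDs in order; the third call produces a short batch, which is what terminates the loop.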
/*
* Test listns() with LISTNS_CURRENT_USER.
* List namespaces owned by current user namespace.
*/
TEST(listns_current_user)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = 0,
.spare2 = 0,
.user_ns_id = LISTNS_CURRENT_USER,
};
__u64 ns_ids[100];
ssize_t ret;
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
TH_LOG("listns failed: %s (errno=%d)", strerror(errno), errno);
ASSERT_TRUE(false);
}
ASSERT_GE(ret, 0);
/* Should find at least the initial namespaces if we're in init_user_ns */
TH_LOG("Found %zd namespaces owned by current user namespace", ret);
for (ssize_t i = 0; i < ret; i++)
TH_LOG(" [%zd] ns_id: %llu", i, (unsigned long long)ns_ids[i]);
}
/*
* Test that listns() only returns active namespaces.
* Create a namespace, let it become inactive, verify it's not listed.
*/
TEST(listns_only_active)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWNET,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids_before[100], ns_ids_after[100];
ssize_t ret_before, ret_after;
int pipefd[2];
pid_t pid;
__u64 new_ns_id = 0;
int status;
/* Get initial list */
ret_before = sys_listns(&req, ns_ids_before, ARRAY_SIZE(ns_ids_before), 0);
if (ret_before < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
TH_LOG("listns failed: %s (errno=%d)", strerror(errno), errno);
ASSERT_TRUE(false);
}
ASSERT_GE(ret_before, 0);
TH_LOG("Before: %zd active network namespaces", ret_before);
/* Create a new namespace in a child process and get its ID */
ASSERT_EQ(pipe(pipefd), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
int fd;
__u64 ns_id;
close(pipefd[0]);
/* Create new network namespace */
if (unshare(CLONE_NEWNET) < 0) {
close(pipefd[1]);
exit(1);
}
/* Get its ID */
fd = open("/proc/self/ns/net", O_RDONLY);
if (fd < 0) {
close(pipefd[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &ns_id) < 0) {
close(fd);
close(pipefd[1]);
exit(1);
}
close(fd);
/* Send ID to parent */
write(pipefd[1], &ns_id, sizeof(ns_id));
close(pipefd[1]);
/* Keep namespace active briefly */
usleep(100000);
exit(0);
}
/* Parent reads the new namespace ID */
{
int bytes;
close(pipefd[1]);
bytes = read(pipefd[0], &new_ns_id, sizeof(new_ns_id));
close(pipefd[0]);
if (bytes == sizeof(new_ns_id)) {
__u64 ns_ids_during[100];
ssize_t ret_during;
TH_LOG("Child created namespace with ID %llu", (unsigned long long)new_ns_id);
/* List namespaces while child is still alive - should see new one */
ret_during = sys_listns(&req, ns_ids_during, ARRAY_SIZE(ns_ids_during), 0);
ASSERT_GE(ret_during, 0);
TH_LOG("During: %zd active network namespaces", ret_during);
/* Should have more namespaces than before */
ASSERT_GE(ret_during, ret_before);
}
}
/* Wait for child to exit */
waitpid(pid, &status, 0);
/* Give time for namespace to become inactive */
usleep(100000);
/* List namespaces after child exits - should not see new one */
ret_after = sys_listns(&req, ns_ids_after, ARRAY_SIZE(ns_ids_after), 0);
ASSERT_GE(ret_after, 0);
TH_LOG("After: %zd active network namespaces", ret_after);
/* Verify the new namespace ID is not in the after list */
if (new_ns_id != 0) {
bool found = false;
for (ssize_t i = 0; i < ret_after; i++) {
if (ns_ids_after[i] == new_ns_id) {
found = true;
break;
}
}
ASSERT_FALSE(found);
}
}
/*
* Test listns() with specific user namespace ID.
* Create a user namespace and list namespaces it owns.
*/
TEST(listns_specific_userns)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = 0,
.spare2 = 0,
.user_ns_id = 0, /* Will be filled with created userns ID */
};
__u64 ns_ids[100];
int sv[2];
pid_t pid;
int status;
__u64 user_ns_id = 0;
int bytes;
ssize_t ret;
ASSERT_EQ(socketpair(AF_UNIX, SOCK_STREAM, 0, sv), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
int fd;
__u64 ns_id;
char buf;
close(sv[0]);
/* Create new user namespace */
if (setup_userns() < 0) {
close(sv[1]);
exit(1);
}
/* Get user namespace ID */
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(sv[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &ns_id) < 0) {
close(fd);
close(sv[1]);
exit(1);
}
close(fd);
/* Send ID to parent */
if (write(sv[1], &ns_id, sizeof(ns_id)) != sizeof(ns_id)) {
close(sv[1]);
exit(1);
}
/* Create some namespaces owned by this user namespace */
unshare(CLONE_NEWNET);
unshare(CLONE_NEWUTS);
/* Wait for parent signal */
if (read(sv[1], &buf, 1) != 1) {
close(sv[1]);
exit(1);
}
close(sv[1]);
exit(0);
}
/* Parent */
close(sv[1]);
bytes = read(sv[0], &user_ns_id, sizeof(user_ns_id));
if (bytes != sizeof(user_ns_id)) {
close(sv[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
SKIP(return, "Failed to get user namespace ID from child");
}
TH_LOG("Child created user namespace with ID %llu", (unsigned long long)user_ns_id);
/* List namespaces owned by this user namespace */
req.user_ns_id = user_ns_id;
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
TH_LOG("listns failed: %s (errno=%d)", strerror(errno), errno);
close(sv[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
if (errno == ENOSYS) {
SKIP(return, "listns() not supported");
}
ASSERT_GE(ret, 0);
}
TH_LOG("Found %zd namespaces owned by user namespace %llu", ret,
(unsigned long long)user_ns_id);
/* Should find at least the network and UTS namespaces we created */
if (ret > 0) {
for (ssize_t i = 0; i < ret && i < 10; i++)
TH_LOG(" [%zd] ns_id: %llu", i, (unsigned long long)ns_ids[i]);
}
/* Signal child to exit */
if (write(sv[0], "X", 1) != 1) {
close(sv[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
ASSERT_TRUE(false);
}
close(sv[0]);
waitpid(pid, &status, 0);
}
/*
* Test listns() with multiple namespace types filter.
*/
TEST(listns_multiple_types)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWNET | CLONE_NEWUTS, /* Network and UTS */
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[100];
ssize_t ret;
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
TH_LOG("listns failed: %s (errno=%d)", strerror(errno), errno);
ASSERT_TRUE(false);
}
ASSERT_GE(ret, 0);
TH_LOG("Found %zd active network/UTS namespaces", ret);
for (ssize_t i = 0; i < ret; i++)
TH_LOG(" [%zd] ns_id: %llu", i, (unsigned long long)ns_ids[i]);
}
/*
* Test that hierarchical active reference propagation keeps parent
* user namespaces visible in listns().
*/
TEST(listns_hierarchical_visibility)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 parent_ns_id = 0, child_ns_id = 0;
int sv[2];
pid_t pid;
int status;
int bytes;
__u64 ns_ids[100];
ssize_t ret;
bool found_parent, found_child;
ASSERT_EQ(socketpair(AF_UNIX, SOCK_STREAM, 0, sv), 0);
pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
int fd;
char buf;
close(sv[0]);
/* Create parent user namespace */
if (setup_userns() < 0) {
close(sv[1]);
exit(1);
}
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(sv[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &parent_ns_id) < 0) {
close(fd);
close(sv[1]);
exit(1);
}
close(fd);
/* Create child user namespace */
if (setup_userns() < 0) {
close(sv[1]);
exit(1);
}
fd = open("/proc/self/ns/user", O_RDONLY);
if (fd < 0) {
close(sv[1]);
exit(1);
}
if (ioctl(fd, NS_GET_ID, &child_ns_id) < 0) {
close(fd);
close(sv[1]);
exit(1);
}
close(fd);
/* Send both IDs to parent */
if (write(sv[1], &parent_ns_id, sizeof(parent_ns_id)) != sizeof(parent_ns_id)) {
close(sv[1]);
exit(1);
}
if (write(sv[1], &child_ns_id, sizeof(child_ns_id)) != sizeof(child_ns_id)) {
close(sv[1]);
exit(1);
}
/* Wait for parent signal */
if (read(sv[1], &buf, 1) != 1) {
close(sv[1]);
exit(1);
}
close(sv[1]);
exit(0);
}
/* Parent */
close(sv[1]);
/* Read both namespace IDs */
bytes = read(sv[0], &parent_ns_id, sizeof(parent_ns_id));
bytes += read(sv[0], &child_ns_id, sizeof(child_ns_id));
if (bytes != (int)(2 * sizeof(__u64))) {
close(sv[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
SKIP(return, "Failed to get namespace IDs from child");
}
TH_LOG("Parent user namespace ID: %llu", (unsigned long long)parent_ns_id);
TH_LOG("Child user namespace ID: %llu", (unsigned long long)child_ns_id);
/* List all user namespaces */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0 && errno == ENOSYS) {
close(sv[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
SKIP(return, "listns() not supported");
}
ASSERT_GE(ret, 0);
TH_LOG("Found %zd active user namespaces", ret);
/* Both parent and child should be visible (active due to child process) */
found_parent = false;
found_child = false;
for (ssize_t i = 0; i < ret; i++) {
if (ns_ids[i] == parent_ns_id)
found_parent = true;
if (ns_ids[i] == child_ns_id)
found_child = true;
}
TH_LOG("Parent namespace %s, child namespace %s",
found_parent ? "found" : "NOT FOUND",
found_child ? "found" : "NOT FOUND");
ASSERT_TRUE(found_child);
/* With hierarchical propagation, parent should also be active */
ASSERT_TRUE(found_parent);
/* Signal child to exit */
if (write(sv[0], "X", 1) != 1) {
close(sv[0]);
kill(pid, SIGKILL);
waitpid(pid, NULL, 0);
ASSERT_TRUE(false);
}
close(sv[0]);
waitpid(pid, &status, 0);
}
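The visibility assertions above rest on the series' hierarchical propagation rule: taking an active reference on a namespace also pins each of its ancestors, so a parent user namespace stays listable while any descendant is active. A toy model of just that rule (`struct toy_ns` and its helpers are invented for illustration; the kernel's real bookkeeping lives in the nstree code):

```c
#include <stddef.h>

/*
 * Toy model of hierarchical active reference propagation. An active
 * reference on a node also counts against every ancestor, and a
 * namespace is visible to listns() only while its count is non-zero.
 */
struct toy_ns {
	struct toy_ns *parent;
	long active;		/* own + propagated active references */
};

static void toy_ns_get_active(struct toy_ns *ns)
{
	for (; ns; ns = ns->parent)
		ns->active++;
}

static void toy_ns_put_active(struct toy_ns *ns)
{
	for (; ns; ns = ns->parent)
		ns->active--;
}

static int toy_ns_visible(const struct toy_ns *ns)
{
	return ns->active > 0;
}
```

One active reference on the child makes both levels visible; dropping it hides both again, which matches the before/after behaviour the tests in this file check for.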
/*
* Test error cases for listns().
*/
TEST(listns_error_cases)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = 0,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids[10];
int ret;
/* Test with invalid flags */
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0xFFFF);
if (ret < 0 && errno == ENOSYS) {
/* listns() not supported, skip this check */
} else {
ASSERT_LT(ret, 0);
ASSERT_EQ(errno, EINVAL);
}
/* Test with NULL ns_ids array */
ret = sys_listns(&req, NULL, 10, 0);
ASSERT_LT(ret, 0);
/* Test with invalid spare field */
req.spare = 1;
ret = sys_listns(&req, ns_ids, ARRAY_SIZE(ns_ids), 0);
if (ret < 0 && errno == ENOSYS) {
/* listns() not supported, skip this check */
} else {
ASSERT_LT(ret, 0);
ASSERT_EQ(errno, EINVAL);
}
req.spare = 0;
/* Test with huge nr_ns_ids */
ret = sys_listns(&req, ns_ids, 2000000, 0);
if (ret < 0 && errno == ENOSYS) {
/* listns() not supported, skip this check */
} else {
ASSERT_LT(ret, 0);
}
}
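The rejections exercised above can be summarised as a userspace predicate. The struct layout below mirrors the ns_id_req initializers used throughout this file; the flag, reserved-field, and size rules are inferred from the test's expectations, and the exact cap on nr_ns_ids is an assumption, so treat this as a sketch rather than the kernel's actual validation code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Mirrors the ns_id_req fields the tests fill in. */
struct ns_id_req_sketch {
	uint32_t size;
	uint32_t spare;		/* reserved, must be zero */
	uint64_t ns_id;		/* pagination cursor */
	uint32_t ns_type;	/* CLONE_NEW* mask, 0 == all types */
	uint32_t spare2;	/* reserved, must be zero */
	uint64_t user_ns_id;	/* owner filter, 0 == global listing */
};

/*
 * Sketch of the argument validation listns_error_cases probes:
 * undefined flags, non-zero reserved fields, a mismatched size, and
 * an absurdly large result count are all rejected.
 */
static bool ns_id_req_looks_valid(const struct ns_id_req_sketch *req,
				  unsigned int flags, size_t nr_ns_ids)
{
	if (flags != 0)		/* no listns() flags defined yet */
		return false;
	if (req->spare != 0 || req->spare2 != 0)
		return false;
	if (req->size != sizeof(*req))
		return false;
	if (nr_ns_ids > 1000000)	/* assumed sanity cap */
		return false;
	return true;
}
```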
TEST_HARNESS_MAIN

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,626 @@
// SPDX-License-Identifier: GPL-2.0
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <linux/nsfs.h>
#include "../kselftest_harness.h"
#include "../filesystems/utils.h"
#include "wrappers.h"
/*
* Stress tests for namespace active reference counting.
*
* These tests validate that the active reference counting system can handle
* high load scenarios including rapid namespace creation/destruction, large
* numbers of concurrent namespaces, and various edge cases under stress.
*/
/*
* Test rapid creation and destruction of user namespaces.
* Create and destroy namespaces in quick succession to stress the
* active reference tracking and ensure no leaks occur.
*/
TEST(rapid_namespace_creation_destruction)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids_before[256], ns_ids_after[256];
ssize_t ret_before, ret_after;
int i;
/* Get baseline count of active user namespaces */
ret_before = sys_listns(&req, ns_ids_before, ARRAY_SIZE(ns_ids_before), 0);
if (ret_before < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
ASSERT_GE(ret_before, 0);
}
TH_LOG("Baseline: %zd active user namespaces", ret_before);
/* Rapidly create and destroy 100 user namespaces */
for (i = 0; i < 100; i++) {
pid_t pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
/* Child: create user namespace and immediately exit */
if (setup_userns() < 0)
exit(1);
exit(0);
}
/* Parent: wait for child */
int status;
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
ASSERT_EQ(WEXITSTATUS(status), 0);
}
/* Verify we're back to baseline (no leaked namespaces) */
ret_after = sys_listns(&req, ns_ids_after, ARRAY_SIZE(ns_ids_after), 0);
ASSERT_GE(ret_after, 0);
TH_LOG("After 100 rapid create/destroy cycles: %zd active user namespaces", ret_after);
ASSERT_EQ(ret_before, ret_after);
}
/*
* Test creating many concurrent namespaces.
* Verify that listns() correctly tracks all of them and that they all
* become inactive after processes exit.
*/
TEST(many_concurrent_namespaces)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids_before[512], ns_ids_during[512], ns_ids_after[512];
ssize_t ret_before, ret_during, ret_after;
pid_t pids[50];
int num_children = 50;
int i;
int sv[2];
/* Get baseline */
ret_before = sys_listns(&req, ns_ids_before, ARRAY_SIZE(ns_ids_before), 0);
if (ret_before < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
ASSERT_GE(ret_before, 0);
}
TH_LOG("Baseline: %zd active user namespaces", ret_before);
ASSERT_EQ(socketpair(AF_UNIX, SOCK_STREAM, 0, sv), 0);
/* Create many children, each with their own user namespace */
for (i = 0; i < num_children; i++) {
pids[i] = fork();
ASSERT_GE(pids[i], 0);
if (pids[i] == 0) {
/* Child: create user namespace and wait for parent signal */
char c;
close(sv[0]);
if (setup_userns() < 0) {
close(sv[1]);
exit(1);
}
/* Signal parent we're ready */
if (write(sv[1], &c, 1) != 1) {
close(sv[1]);
exit(1);
}
/* Wait for parent signal to exit */
if (read(sv[1], &c, 1) != 1) {
close(sv[1]);
exit(1);
}
close(sv[1]);
exit(0);
}
}
close(sv[1]);
/* Wait for all children to signal ready */
for (i = 0; i < num_children; i++) {
char c;
if (read(sv[0], &c, 1) != 1) {
/* If we fail to read, kill all children and exit */
close(sv[0]);
for (int j = 0; j < num_children; j++)
kill(pids[j], SIGKILL);
for (int j = 0; j < num_children; j++)
waitpid(pids[j], NULL, 0);
ASSERT_TRUE(false);
}
}
/* List namespaces while all children are running */
ret_during = sys_listns(&req, ns_ids_during, ARRAY_SIZE(ns_ids_during), 0);
ASSERT_GE(ret_during, 0);
TH_LOG("With %d children running: %zd active user namespaces", num_children, ret_during);
/* Should have at least num_children more namespaces than baseline */
ASSERT_GE(ret_during, ret_before + num_children);
/* Signal all children to exit */
for (i = 0; i < num_children; i++) {
char c = 'X';
if (write(sv[0], &c, 1) != 1) {
/* If we fail to write, kill remaining children */
close(sv[0]);
for (int j = i; j < num_children; j++)
kill(pids[j], SIGKILL);
for (int j = 0; j < num_children; j++)
waitpid(pids[j], NULL, 0);
ASSERT_TRUE(false);
}
}
close(sv[0]);
/* Wait for all children */
for (i = 0; i < num_children; i++) {
int status;
waitpid(pids[i], &status, 0);
ASSERT_TRUE(WIFEXITED(status));
}
/* Verify we're back to baseline */
ret_after = sys_listns(&req, ns_ids_after, ARRAY_SIZE(ns_ids_after), 0);
ASSERT_GE(ret_after, 0);
TH_LOG("After all children exit: %zd active user namespaces", ret_after);
ASSERT_EQ(ret_before, ret_after);
}
/*
* Test rapid namespace creation with different namespace types.
* Create multiple types of namespaces rapidly to stress the tracking system.
*/
TEST(rapid_mixed_namespace_creation)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = 0, /* All types */
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids_before[512], ns_ids_after[512];
ssize_t ret_before, ret_after;
int i;
/* Get baseline count */
ret_before = sys_listns(&req, ns_ids_before, ARRAY_SIZE(ns_ids_before), 0);
if (ret_before < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
ASSERT_GE(ret_before, 0);
}
TH_LOG("Baseline: %zd active namespaces (all types)", ret_before);
/* Rapidly create and destroy namespaces with multiple types */
for (i = 0; i < 50; i++) {
pid_t pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
/* Child: create multiple namespace types */
if (setup_userns() < 0)
exit(1);
/* Create additional namespace types */
if (unshare(CLONE_NEWNET) < 0)
exit(1);
if (unshare(CLONE_NEWUTS) < 0)
exit(1);
if (unshare(CLONE_NEWIPC) < 0)
exit(1);
exit(0);
}
/* Parent: wait for child */
int status;
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
}
/* Verify we're back to baseline */
ret_after = sys_listns(&req, ns_ids_after, ARRAY_SIZE(ns_ids_after), 0);
ASSERT_GE(ret_after, 0);
TH_LOG("After 50 rapid mixed namespace cycles: %zd active namespaces", ret_after);
ASSERT_EQ(ret_before, ret_after);
}
/*
* Test nested namespace creation under stress.
* Create deeply nested namespace hierarchies and verify proper cleanup.
*/
TEST(nested_namespace_stress)
{
struct ns_id_req req = {
.size = sizeof(req),
.spare = 0,
.ns_id = 0,
.ns_type = CLONE_NEWUSER,
.spare2 = 0,
.user_ns_id = 0,
};
__u64 ns_ids_before[512], ns_ids_after[512];
ssize_t ret_before, ret_after;
int i;
/* Get baseline */
ret_before = sys_listns(&req, ns_ids_before, ARRAY_SIZE(ns_ids_before), 0);
if (ret_before < 0) {
if (errno == ENOSYS)
SKIP(return, "listns() not supported");
ASSERT_GE(ret_before, 0);
}
TH_LOG("Baseline: %zd active user namespaces", ret_before);
/* Create 20 processes, each with nested user namespaces */
for (i = 0; i < 20; i++) {
pid_t pid = fork();
ASSERT_GE(pid, 0);
if (pid == 0) {
int userns_fd;
uid_t orig_uid = getuid();
int depth;
/* Create nested user namespaces (up to 5 levels) */
for (depth = 0; depth < 5; depth++) {
userns_fd = get_userns_fd(0, (depth == 0) ? orig_uid : 0, 1);
if (userns_fd < 0)
exit(1);
if (setns(userns_fd, CLONE_NEWUSER) < 0) {
close(userns_fd);
exit(1);
}
close(userns_fd);
}
exit(0);
}
/* Parent: wait for child */
int status;
waitpid(pid, &status, 0);
ASSERT_TRUE(WIFEXITED(status));
}
/* Verify we're back to baseline */
ret_after = sys_listns(&req, ns_ids_after, ARRAY_SIZE(ns_ids_after), 0);
ASSERT_GE(ret_after, 0);
TH_LOG("After 20 nested namespace hierarchies: %zd active user namespaces", ret_after);
ASSERT_EQ(ret_before, ret_after);
}

/*
 * Test listns() pagination under stress.
 * Create many namespaces and verify pagination works correctly.
 */
TEST(listns_pagination_stress)
{
	struct ns_id_req req = {
		.size = sizeof(req),
		.spare = 0,
		.ns_id = 0,
		.ns_type = CLONE_NEWUSER,
		.spare2 = 0,
		.user_ns_id = 0,
	};
	pid_t pids[30];
	int num_children = 30;
	int i;
	int sv[2];
	__u64 all_ns_ids[512];
	int total_found = 0;

	ASSERT_EQ(socketpair(AF_UNIX, SOCK_STREAM, 0, sv), 0);

	/* Create many children with user namespaces */
	for (i = 0; i < num_children; i++) {
		pids[i] = fork();
		ASSERT_GE(pids[i], 0);

		if (pids[i] == 0) {
			char c;

			close(sv[0]);
			if (setup_userns() < 0) {
				close(sv[1]);
				exit(1);
			}

			/* Signal parent we're ready */
			if (write(sv[1], &c, 1) != 1) {
				close(sv[1]);
				exit(1);
			}

			/* Wait for parent signal to exit */
			if (read(sv[1], &c, 1) != 1) {
				close(sv[1]);
				exit(1);
			}

			close(sv[1]);
			exit(0);
		}
	}

	close(sv[1]);

	/* Wait for all children to signal ready */
	for (i = 0; i < num_children; i++) {
		char c;

		if (read(sv[0], &c, 1) != 1) {
			/* If we fail to read, kill all children and exit */
			close(sv[0]);
			for (int j = 0; j < num_children; j++)
				kill(pids[j], SIGKILL);
			for (int j = 0; j < num_children; j++)
				waitpid(pids[j], NULL, 0);
			ASSERT_TRUE(false);
		}
	}

	/* Paginate through all namespaces using small batch sizes */
	req.ns_id = 0;
	while (1) {
		__u64 batch[5]; /* Small batch size to force pagination */
		ssize_t ret;

		ret = sys_listns(&req, batch, ARRAY_SIZE(batch), 0);
		if (ret < 0) {
			if (errno == ENOSYS) {
				close(sv[0]);
				for (i = 0; i < num_children; i++)
					kill(pids[i], SIGKILL);
				for (i = 0; i < num_children; i++)
					waitpid(pids[i], NULL, 0);
				SKIP(return, "listns() not supported");
			}
			ASSERT_GE(ret, 0);
		}
		if (ret == 0)
			break;

		/* Store results */
		for (i = 0; i < ret && total_found < 512; i++)
			all_ns_ids[total_found++] = batch[i];

		/* Update cursor for next batch */
		if (ret == ARRAY_SIZE(batch))
			req.ns_id = batch[ret - 1];
		else
			break;
	}

	TH_LOG("Paginated through %d user namespaces", total_found);

	/* Verify no duplicates in pagination */
	for (i = 0; i < total_found; i++) {
		for (int j = i + 1; j < total_found; j++) {
			if (all_ns_ids[i] == all_ns_ids[j]) {
				TH_LOG("Found duplicate ns_id: %llu at positions %d and %d",
				       (unsigned long long)all_ns_ids[i], i, j);
				ASSERT_TRUE(false);
			}
		}
	}

	/* Signal all children to exit */
	for (i = 0; i < num_children; i++) {
		char c = 'X';

		if (write(sv[0], &c, 1) != 1) {
			close(sv[0]);
			for (int j = i; j < num_children; j++)
				kill(pids[j], SIGKILL);
			for (int j = 0; j < num_children; j++)
				waitpid(pids[j], NULL, 0);
			ASSERT_TRUE(false);
		}
	}

	close(sv[0]);

	/* Wait for all children */
	for (i = 0; i < num_children; i++) {
		int status;

		waitpid(pids[i], &status, 0);
	}
}

/*
 * Test concurrent namespace operations.
 * Multiple processes creating, querying, and destroying namespaces concurrently.
 */
TEST(concurrent_namespace_operations)
{
	struct ns_id_req req = {
		.size = sizeof(req),
		.spare = 0,
		.ns_id = 0,
		.ns_type = 0,
		.spare2 = 0,
		.user_ns_id = 0,
	};
	__u64 ns_ids_before[512], ns_ids_after[512];
	ssize_t ret_before, ret_after;
	pid_t pids[20];
	int num_workers = 20;
	int i;

	/* Get baseline */
	ret_before = sys_listns(&req, ns_ids_before, ARRAY_SIZE(ns_ids_before), 0);
	if (ret_before < 0) {
		if (errno == ENOSYS)
			SKIP(return, "listns() not supported");
		ASSERT_GE(ret_before, 0);
	}
	TH_LOG("Baseline: %zd active namespaces", ret_before);

	/* Create worker processes that do concurrent operations */
	for (i = 0; i < num_workers; i++) {
		pids[i] = fork();
		ASSERT_GE(pids[i], 0);

		if (pids[i] == 0) {
			/* Each worker: create namespaces, list them, repeat */
			int iterations;

			for (iterations = 0; iterations < 10; iterations++) {
				int userns_fd;
				__u64 temp_ns_ids[100];
				ssize_t ret;

				/* Create a user namespace */
				userns_fd = get_userns_fd(0, getuid(), 1);
				if (userns_fd < 0)
					continue;

				/* List namespaces */
				ret = sys_listns(&req, temp_ns_ids, ARRAY_SIZE(temp_ns_ids), 0);
				(void)ret;

				close(userns_fd);

				/* Small delay */
				usleep(1000);
			}
			exit(0);
		}
	}

	/* Wait for all workers */
	for (i = 0; i < num_workers; i++) {
		int status;

		waitpid(pids[i], &status, 0);
		ASSERT_TRUE(WIFEXITED(status));
		ASSERT_EQ(WEXITSTATUS(status), 0);
	}

	/* Verify we're back to baseline */
	ret_after = sys_listns(&req, ns_ids_after, ARRAY_SIZE(ns_ids_after), 0);
	ASSERT_GE(ret_after, 0);
	TH_LOG("After concurrent operations: %zd active namespaces", ret_after);
	ASSERT_EQ(ret_before, ret_after);
}

/*
 * Test namespace churn - continuous creation and destruction.
 * Simulates high-churn scenarios like container orchestration.
 */
TEST(namespace_churn)
{
	struct ns_id_req req = {
		.size = sizeof(req),
		.spare = 0,
		.ns_id = 0,
		.ns_type = CLONE_NEWUSER | CLONE_NEWNET | CLONE_NEWUTS,
		.spare2 = 0,
		.user_ns_id = 0,
	};
	__u64 ns_ids_before[512], ns_ids_after[512];
	ssize_t ret_before, ret_after;
	int cycle;

	/* Get baseline */
	ret_before = sys_listns(&req, ns_ids_before, ARRAY_SIZE(ns_ids_before), 0);
	if (ret_before < 0) {
		if (errno == ENOSYS)
			SKIP(return, "listns() not supported");
		ASSERT_GE(ret_before, 0);
	}
	TH_LOG("Baseline: %zd active namespaces", ret_before);

	/* Simulate churn: batches of namespaces created and destroyed */
	for (cycle = 0; cycle < 10; cycle++) {
		pid_t batch_pids[10];
		int i;

		/* Create batch */
		for (i = 0; i < 10; i++) {
			batch_pids[i] = fork();
			ASSERT_GE(batch_pids[i], 0);

			if (batch_pids[i] == 0) {
				/* Create multiple namespace types */
				if (setup_userns() < 0)
					exit(1);
				if (unshare(CLONE_NEWNET) < 0)
					exit(1);
				if (unshare(CLONE_NEWUTS) < 0)
					exit(1);

				/* Keep namespaces alive briefly */
				usleep(10000);
				exit(0);
			}
		}

		/* Wait for batch to complete */
		for (i = 0; i < 10; i++) {
			int status;

			waitpid(batch_pids[i], &status, 0);
		}
	}

	/* Verify we're back to baseline */
	ret_after = sys_listns(&req, ns_ids_after, ARRAY_SIZE(ns_ids_after), 0);
	ASSERT_GE(ret_after, 0);
	TH_LOG("After 10 churn cycles (100 namespace sets): %zd active namespaces", ret_after);
	ASSERT_EQ(ret_before, ret_after);
}

TEST_HARNESS_MAIN

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __SELFTESTS_NAMESPACES_WRAPPERS_H__
#define __SELFTESTS_NAMESPACES_WRAPPERS_H__

#include <linux/nsfs.h>
#include <linux/types.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_listns
#if defined __alpha__
#define __NR_listns 580
#elif defined _MIPS_SIM
#if _MIPS_SIM == _MIPS_SIM_ABI32	/* o32 */
#define __NR_listns 4470
#endif
#if _MIPS_SIM == _MIPS_SIM_NABI32	/* n32 */
#define __NR_listns 6470
#endif
#if _MIPS_SIM == _MIPS_SIM_ABI64	/* n64 */
#define __NR_listns 5470
#endif
#else
#define __NR_listns 470
#endif
#endif

static inline int sys_listns(const struct ns_id_req *req, __u64 *ns_ids,
			     size_t nr_ns_ids, unsigned int flags)
{
	return syscall(__NR_listns, req, ns_ids, nr_ns_ids, flags);
}

#endif /* __SELFTESTS_NAMESPACES_WRAPPERS_H__ */