Merge tag 'core-rseq-2025-11-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull rseq updates from Thomas Gleixner:
"A large overhaul of the restartable sequences and CID management:
The recent enablement of RSEQ in glibc resulted in regressions which
are caused by the related overhead. It turned out that the decision
to invoke the exit-to-user work was not really a decision: more or
less every context switch triggered it. There is a long list of
small issues which adds up nicely and results in a 3-4% regression
in I/O benchmarks.
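To make the "not really a decision" point concrete, here is a toy
standalone C sketch of the pre-rework shape. All names are
hypothetical; this illustrates the pattern, not the actual kernel
code:

```c
#include <stdbool.h>

struct task {
	bool notify_resume;   /* stands in for TIF_NOTIFY_RESUME */
};

static void rseq_slow_path(struct task *t) { (void)t; }

/* Every context switch raised the flag unconditionally... */
static void on_context_switch(struct task *next)
{
	next->notify_resume = true;
}

/* ...so this "decision" on exit to user space always said yes. */
static void on_exit_to_user(struct task *t)
{
	if (t->notify_resume) {
		rseq_slow_path(t);
		t->notify_resume = false;
	}
}
```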
The other detail which caused issues, due to extra work in context
switch and task migration, is the CID (memory context ID)
management. It also requires task work to consolidate the CID space,
which is executed in the context of an arbitrary task and results in
sporadic, uncontrolled exit latencies.
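Why arbitrary-task work hurts, in the same toy style (again, all
names hypothetical, not the kernel's task-work API): any code path
may park the consolidation on whichever task happens to be current,
and that task pays for it on its next return to user space.

```c
#include <stddef.h>

/* A toy model of deferred "task work", not the kernel's API. */
typedef void (*work_fn)(void);

struct task {
	work_fn pending_work;   /* one deferred callback, for simplicity */
};

static void cid_compact(void)
{
	/* stand-in for scanning all CPUs to compact the CID space */
}

/* The consolidation lands on whatever task is current... */
static void queue_on_current(struct task *curr)
{
	curr->pending_work = cid_compact;
}

/* ...which then pays a sporadic latency on its way out to user space. */
static void exit_to_user(struct task *t)
{
	if (t->pending_work) {
		t->pending_work();
		t->pending_work = NULL;
	}
}
```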
The rewrite addresses this by:
- Removing deprecated and long-unsupported functionality.
- Moving the related data into dedicated data structures which are
  optimized for fast path processing.
- Caching values so that actual decisions can be made instead of
  unconditionally taking the slow path.
- Replacing the current implementation with an optimized inlined
  variant.
- Separating the fast and slow paths for architectures which use the
  generic entry code, so that only fault and error handling goes into
  the TIF_NOTIFY_RESUME handler (see the first sketch after this
  list).
- Rewriting the CID management so that it becomes mostly invisible in
  the context switch path. That moves the work of switching modes into
  the fork/exit path, which is a reasonable tradeoff. That work is
  only required when a process creates more threads than there are
  CPUs in the cpuset it is allowed to run on, or when enough threads
  exit after that. An artificial thread pool benchmark which triggers
  this did not degrade; it actually improved significantly (see the
  second sketch after this list).
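The first sketch: the fast/slow split described above, in the same
toy C style (hypothetical names; the cached event flag makes the
common case a cheap, real decision, and only a fault on the
user-visible rseq area falls back to the notify-resume slow path):

```c
#include <stdbool.h>

struct task {
	bool rseq_event_pending;  /* cached: anything changed since last exit? */
	bool notify_resume;       /* stands in for TIF_NOTIFY_RESUME */
};

/* Pretend copy to user space: returns 0 on success, -1 on "fault". */
static int write_ids_to_user(struct task *t) { (void)t; return 0; }

/* Fast path, inlined into the generic exit-to-user code. */
static inline void rseq_exit_fast(struct task *t)
{
	if (!t->rseq_event_pending)
		return;                        /* common case: nothing to do */
	if (write_ids_to_user(t) == 0) {       /* usually succeeds */
		t->rseq_event_pending = false;
		return;
	}
	t->notify_resume = true;               /* fault: defer to slow path */
}
```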
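The second sketch: the ownership-mode decision from the last item,
again with hypothetical names. The point is that the threshold check
lives in fork/exit, so the context switch path never has to reason
about the mode:

```c
#include <stdbool.h>

struct mm_cid_state {
	int nr_threads;        /* threads sharing this mm */
	int nr_allowed_cpus;   /* CPUs in the cpuset the process may run on */
	bool per_cpu_mode;     /* CIDs owned by CPUs rather than by tasks */
};

/* Called from fork/exit only, never from the context switch path. */
static void mm_cid_update_mode(struct mm_cid_state *s)
{
	bool want_per_cpu = s->nr_threads > s->nr_allowed_cpus;

	if (want_per_cpu != s->per_cpu_mode) {
		s->per_cpu_mode = want_per_cpu;
		/* A real implementation reassigns CIDs here. */
	}
}
```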
The main effect in migration-heavy scenarios is that runqueue lock
held time, and therefore contention, goes down significantly"
* tag 'core-rseq-2025-11-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (54 commits)
sched/mmcid: Switch over to the new mechanism
sched/mmcid: Implement deferred mode change
irqwork: Move data struct to a types header
sched/mmcid: Provide CID ownership mode fixup functions
sched/mmcid: Provide new scheduler CID mechanism
sched/mmcid: Introduce per task/CPU ownership infrastructure
sched/mmcid: Serialize sched_mm_cid_fork()/exit() with a mutex
sched/mmcid: Provide precomputed maximal value
sched/mmcid: Move initialization out of line
signal: Move MMCID exit out of sighand lock
sched/mmcid: Convert mm CID mask to a bitmap
cpumask: Cache num_possible_cpus()
sched/mmcid: Use cpumask_weighted_or()
cpumask: Introduce cpumask_weighted_or()
sched/mmcid: Prevent pointless work in mm_update_cpus_allowed()
sched/mmcid: Move scheduler code out of global header
sched: Fixup whitespace damage
sched/mmcid: Cacheline align MM CID storage
sched/mmcid: Use proper data structures
sched/mmcid: Revert the complex CID management
...