Commit a6537be9 authored by Steven Rostedt, committed by Linus Torvalds

[PATCH] pi-futex: rt mutex docs

Add rt-mutex documentation.

[rostedt@goodmis.org: Update rt-mutex-design.txt as per Randy Dunlap suggestions]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Cc: "Randy.Dunlap" <rdunlap@xenotime.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Parent 23f78d4a
Lightweight PI-futexes
----------------------
We are calling them lightweight for 3 reasons:
- in the user-space fastpath a PI-enabled futex involves no kernel work
(or any other PI complexity) at all. No registration, no extra kernel
calls - just pure fast atomic ops in userspace.
- even in the slowpath, the system call and scheduling pattern is very
similar to normal futexes.
- the in-kernel PI implementation is streamlined around the mutex
abstraction, with strict rules that keep the implementation
relatively simple: only a single owner may own a lock (i.e. no
read-write lock support), only the owner may unlock a lock, no
recursive locking, etc.
Priority Inheritance - why?
---------------------------
The short reply: user-space PI helps achieve/improve determinism for
user-space applications. In the best case, it can help achieve
determinism and well-bounded latencies. Even in the worst case, PI will
improve the statistical distribution of locking-related application
delays.
The longer reply:
-----------------
Firstly, sharing locks between multiple tasks is a common programming
technique that often cannot be replaced with lockless algorithms. As we
can see in the kernel [which is a quite complex program in itself],
lockless structures are the exception rather than the norm - the current
ratio of lockless vs. lock-based code for shared data structures is
somewhere between 1:10 and 1:100. Lockless is hard, and the complexity
of lockless algorithms often endangers the ability to do robust reviews
of said code.
I.e. critical RT apps often choose lock structures to protect critical
data structures, instead of lockless algorithms. Furthermore, there are
cases (like shared hardware, or other resource limits) where lockless
access is mathematically impossible.
Media players (such as Jack) are an example of reasonable application
design with multiple tasks (with multiple priority levels) sharing
short-held locks: for example, a highprio audio playback thread is
combined with medium-prio construct-audio-data threads and low-prio
display-colory-stuff threads. Add video and decoding to the mix and
we've got even more priority levels.
So once we accept that synchronization objects (locks) are an
unavoidable fact of life, and once we accept that multi-task userspace
apps have a very fair expectation of being able to use locks, we've got
to think about how to offer the option of a deterministic locking
implementation to user-space.
Most of the technical counter-arguments against doing priority
inheritance only apply to kernel-space locks. But user-space locks are
different, there we cannot disable interrupts or make the task
non-preemptible in a critical section, so the 'use spinlocks' argument
does not apply (user-space spinlocks have the same priority inversion
problems as other user-space locking constructs). Fact is, pretty much
the only technique that currently enables good determinism for userspace
locks (such as futex-based pthread mutexes) is priority inheritance:
Currently (without PI), if a high-prio and a low-prio task share a lock
[this is a quite common scenario for most non-trivial RT applications],
even if all critical sections are coded carefully to be deterministic
(i.e. all critical sections are short in duration and only execute a
limited number of instructions), the kernel cannot guarantee any
deterministic execution of the high-prio task: any medium-priority task
could preempt the low-prio task while it holds the shared lock and
executes the critical section, and could delay it indefinitely.
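As an illustration (not part of this patch), userspace typically asks
for priority inheritance through the standard pthread mutex attribute;
a libc that supports PI-futexes is assumed to map this onto the futex
operations described below:

  #include <pthread.h>

  pthread_mutex_t lock;
  pthread_mutexattr_t attr;

  pthread_mutexattr_init(&attr);
  /* request a PI-aware mutex; the libc is expected to implement this
     via the FUTEX_LOCK_PI/FUTEX_UNLOCK_PI operations described below */
  pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
  pthread_mutex_init(&lock, &attr);
  pthread_mutexattr_destroy(&attr);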
Implementation:
---------------
As mentioned before, the userspace fastpath of PI-enabled pthread
mutexes involves no kernel work at all - they behave quite similarly to
normal futex-based locks: a 0 value means unlocked, and a value==TID
means locked. (This is the same method as used by list-based robust
futexes.) Userspace uses atomic ops to lock/unlock these mutexes without
entering the kernel.
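A rough sketch of that fastpath (the helper name and the use of compiler
atomic builtins are purely illustrative - this is not the glibc
implementation):

  #include <stdint.h>
  #include <unistd.h>
  #include <sys/syscall.h>

  /* 0 == unlocked, TID == locked by that thread */
  static int pi_trylock_fast(uint32_t *futex)
  {
          uint32_t expected = 0;
          uint32_t tid = syscall(SYS_gettid);

          /* atomic 0 -> TID transition: on success we own the lock
             without having entered the kernel at all */
          return __atomic_compare_exchange_n(futex, &expected, tid, 0,
                                             __ATOMIC_ACQUIRE,
                                             __ATOMIC_RELAXED);
  }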
To handle the slowpath, we have added two new futex ops:
FUTEX_LOCK_PI
FUTEX_UNLOCK_PI
If the lock-acquire fastpath fails, [i.e. an atomic transition from 0 to
TID fails], then FUTEX_LOCK_PI is called. The kernel does all the
remaining work: if there is no futex-queue attached to the futex address
yet then the code looks up the task that owns the futex [it has put its
own TID into the futex value], and attaches a 'PI state' structure to
the futex-queue. The pi_state includes an rt-mutex, which is a PI-aware,
kernel-based synchronization object. The 'other' task is made the owner
of the rt-mutex, and the FUTEX_WAITERS bit is atomically set in the
futex value. Then this task tries to lock the rt-mutex, on which it
blocks. Once it returns, it has the mutex acquired, and it sets the
futex value to its own TID and returns. Userspace has no other work to
perform - it now owns the lock, and the futex value contains
FUTEX_WAITERS|TID.
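Putting fastpath and slowpath together, a lock operation could look
roughly like this (sketch only; pi_trylock_fast() is the illustrative
helper from above, and error handling is omitted):

  #include <linux/futex.h>

  static void pi_lock(uint32_t *futex)
  {
          if (pi_trylock_fast(futex))
                  return;         /* uncontended: no kernel work at all */

          /* contended: the kernel attaches the pi_state/rt-mutex, boosts
             the current owner if necessary and returns once we own the
             lock; the futex value is then FUTEX_WAITERS|TID.
             (A real implementation would retry on -EINTR.) */
          syscall(SYS_futex, futex, FUTEX_LOCK_PI, 0, NULL, NULL, 0);
  }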
If the unlock side fastpath succeeds, [i.e. userspace manages to do a
TID -> 0 atomic transition of the futex value], then no kernel work is
triggered.
If the unlock fastpath fails (because the FUTEX_WAITERS bit is set),
then FUTEX_UNLOCK_PI is called, and the kernel unlocks the futex on
behalf of userspace - and it also unlocks the attached
pi_state->rt_mutex and thus wakes up any potential waiters.
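The matching unlock operation, again only as an illustrative sketch:

  static void pi_unlock(uint32_t *futex)
  {
          uint32_t tid = syscall(SYS_gettid);

          /* uncontended: TID -> 0 transition, no kernel entry needed */
          if (__atomic_compare_exchange_n(futex, &tid, 0, 0,
                                          __ATOMIC_RELEASE,
                                          __ATOMIC_RELAXED))
                  return;

          /* FUTEX_WAITERS was set: let the kernel hand the lock to the
             highest priority waiter via the attached rt-mutex */
          syscall(SYS_futex, futex, FUTEX_UNLOCK_PI, 0, NULL, NULL, 0);
  }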
Note that under this approach, contrary to previous PI-futex approaches,
there is no prior 'registration' of a PI-futex. [which is not quite
possible anyway, due to existing ABI properties of pthread mutexes.]
Also, under this scheme, 'robustness' and 'PI' are two orthogonal
properties of futexes, and all four combinations are possible: futex,
robust-futex, PI-futex, robust+PI-futex.
More details about priority inheritance can be found in
Documentation/rt-mutex.txt.
RT-mutex subsystem with PI support
----------------------------------
RT-mutexes with priority inheritance are used to support PI-futexes,
which enable pthread_mutex_t priority inheritance attributes
(PTHREAD_PRIO_INHERIT). [See Documentation/pi-futex.txt for more details
about PI-futexes.]
This technology was developed in the -rt tree and streamlined for
pthread_mutex support.
Basic principles:
-----------------
RT-mutexes extend the semantics of simple mutexes with the priority
inheritance protocol.
A low priority owner of an rt-mutex inherits the priority of a higher
priority waiter until the rt-mutex is released. If the temporarily
boosted owner blocks on an rt-mutex itself, it propagates the priority
boosting to the owner of the other rt_mutex it gets blocked on. The
priority boosting is immediately removed once the rt_mutex has been
unlocked.
This approach allows us to shorten the blocking time of high-prio tasks
on mutexes which protect shared resources. Priority inheritance is not a
magic bullet for poorly designed applications, but it allows
well-designed applications to use userspace locks in critical parts of
a high-priority thread, without losing determinism.
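From the kernel's point of view an rt-mutex is used like an ordinary
mutex; a minimal sketch (illustrative only, the names are invented):

  #include <linux/rtmutex.h>

  static DEFINE_RT_MUTEX(example_lock);

  static void example_critical_section(void)
  {
          rt_mutex_lock(&example_lock);
          /* ... touch the shared resource; if a higher priority task
             blocks on example_lock meanwhile, we run with its priority
             until the unlock below ... */
          rt_mutex_unlock(&example_lock);
  }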
The enqueueing of the waiters into the rtmutex waiter list is done in
priority order. For same priorities FIFO order is chosen. For each
rtmutex, only the top priority waiter is enqueued into the owner's
priority waiters list. This list too queues in priority order. Whenever
the top priority waiter of a task changes (for example it timed out or
got a signal), the priority of the owner task is readjusted. [The
priority enqueueing is handled by "plists", see include/linux/plist.h
for more details.]
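As a rough illustration of how such a priority-sorted list behaves
(illustrative only; the initialization of the plist head is omitted
since its exact form has varied between kernel versions):

  #include <linux/plist.h>

  static void plist_example(struct plist_head *head)
  {
          static struct plist_node a, b;

          plist_node_init(&a, 10);        /* prio value 10 */
          plist_node_init(&b, 5);         /* prio value 5 = more important */

          plist_add(&a, head);
          plist_add(&b, head);

          /* plist_first(head) now returns &b: lowest prio value first,
             equal prio values keep FIFO order */
  }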
RT-mutexes are optimized for fastpath operations and have no internal
locking overhead when locking an uncontended mutex or unlocking a mutex
without waiters. The optimized fastpath operations require cmpxchg
support. [If that is not available then the rt-mutex internal spinlock
is used.]
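Conceptually the fastpath is a single cmpxchg on the owner field
described below; a simplified sketch (not the actual code in
kernel/rtmutex.c):

  static inline int rt_mutex_fast_acquire(struct rt_mutex *lock,
                                          struct task_struct *task)
  {
          /* free -> held, no internal spinlock taken */
          return cmpxchg(&lock->owner, NULL, task) == NULL;
  }

  static inline int rt_mutex_fast_release(struct rt_mutex *lock,
                                          struct task_struct *task)
  {
          /* held with no waiters -> free */
          return cmpxchg(&lock->owner, task, NULL) == task;
  }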
The state of the rt-mutex is tracked via the owner field of the rt-mutex
structure:
rt_mutex->owner holds the task_struct pointer of the owner. Bits 0 and 1
are used to keep track of the "owner is pending" and "rtmutex has
waiters" state.
owner           bit1    bit0
NULL            0       0       mutex is free (fast acquire possible)
NULL            0       1       invalid state
NULL            1       0       Transitional state*
NULL            1       1       invalid state
taskpointer     0       0       mutex is held (fast release possible)
taskpointer     0       1       task is pending owner
taskpointer     1       0       mutex is held and has waiters
taskpointer     1       1       task is pending owner and mutex has waiters
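Since the low two bits of the owner word carry state, the task pointer
has to be masked out before it is used; roughly (the constant names here
are for illustration - see the rt-mutex implementation headers for the
real definitions):

  #define RT_MUTEX_OWNER_PENDING  1UL     /* bit 0: pending owner       */
  #define RT_MUTEX_HAS_WAITERS    2UL     /* bit 1: mutex has waiters   */
  #define RT_MUTEX_OWNER_MASKALL  3UL

  static inline struct task_struct *rt_mutex_owner(struct rt_mutex *lock)
  {
          return (struct task_struct *)
                  ((unsigned long)lock->owner & ~RT_MUTEX_OWNER_MASKALL);
  }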
Pending-ownership handling is a performance optimization:
pending-ownership is assigned to the first (highest priority) waiter of
the mutex, when the mutex is released. The thread is woken up and once
it starts executing it can acquire the mutex. Until the mutex is taken
by it (bit 0 is cleared) a competing higher priority thread can "steal"
the mutex which puts the woken up thread back on the waiters list.
The pending-ownership optimization is especially important for the
uninterrupted workflow of high-prio tasks which repeatedly
take/release locks that have lower-prio waiters. Without this
optimization the higher-prio thread would ping-pong between itself and
the lower-prio task [because at unlock time we always assign a new
owner].
(*) The "mutex has waiters" bit gets set to take the lock. If the lock
doesn't already have an owner, this bit is quickly cleared if there are
no waiters. So this is a transitional state to synchronize with looking
at the owner field of the mutex and the mutex owner releasing the lock.