- 27 January 2011, 1 commit

Committed by Thomas Gleixner
Peter Zijlstra pointed out that the only user of asmregparm (x86) already compiles the kernel with -mregparm=3, so the annotation of the rwsem functions is redundant. Remove it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
LKML-Reference: <alpine.LFD.2.00.1101262130450.31804@localhost6.localdomain6>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
- 10 August 2010, 5 commits

Committed by Michel Lespinasse
More code can be pushed from rwsem_down_read_failed and rwsem_down_write_failed into rwsem_down_failed_common. The following change, which adds down_read_critical infrastructure support, also benefits from having the flags available in a register rather than having to fish them out of the struct rwsem_waiter.

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
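A hedged sketch of the wrapper shape this describes; the names mirror the kernel's, but the bodies, types, and the exact adjustment values are illustrative stand-ins, not the real implementation:

```c
/* Illustrative stand-ins for the kernel types; not the real definitions. */
struct rw_semaphore { long count; };

#define RWSEM_WAITING_FOR_READ  0x00000001
#define RWSEM_WAITING_FOR_WRITE 0x00000002

/* Common slow path: taking the waiter flags and the count adjustment as
 * explicit arguments keeps them in registers instead of re-reading them
 * from the queued rwsem_waiter structure later on. */
static struct rw_semaphore *
rwsem_down_failed_common(struct rw_semaphore *sem,
                         unsigned int flags, long adjustment)
{
        /* ... queue the waiter, apply the adjustment, sleep until woken ... */
        (void)flags;
        (void)adjustment;
        return sem;
}

/* The per-direction entry points shrink to thin wrappers. */
struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem)
{
        return rwsem_down_failed_common(sem, RWSEM_WAITING_FOR_READ, -1L);
}

struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem)
{
        return rwsem_down_failed_common(sem, RWSEM_WAITING_FOR_WRITE, -1L);
}
```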
Committed by Michel Lespinasse
This change addresses the following situation:

- Thread A acquires the rwsem for read.
- Thread B tries to acquire the rwsem for write, notices there is already an active owner for the rwsem.
- Thread C tries to acquire the rwsem for read, notices that thread B already tried to acquire it.
- Thread C grabs the spinlock and queues itself on the wait queue.
- Thread B grabs the spinlock and queues itself behind C. At this point A is the only remaining active owner on the rwsem.

In this situation thread B could notice that it was the last active writer on the rwsem, and decide to wake C to let it proceed in parallel with A since they both only want the rwsem for read.

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
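A minimal sketch of the decision described above, using a hypothetical simplified state model rather than the real rwsem count and wait-list bookkeeping:

```c
/* Hypothetical, simplified model: a writer that has just queued itself
 * checks whether the lock is still only read-owned and whether readers are
 * queued ahead of it; if so, those readers can safely run in parallel with
 * the existing read owners (thread A in the scenario above). */
struct sem_snapshot {
        int active_readers;     /* thread A                        */
        int active_writers;     /* zero in the scenario above      */
        int queued_readers;     /* thread C                        */
};

int writer_should_wake_queued_readers(const struct sem_snapshot *s)
{
        return s->active_writers == 0 && s->queued_readers > 0;
}
```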
Committed by Michel Lespinasse
Previously each waiting thread added a bias of RWSEM_WAITING_BIAS. With this change, the bias is added only once to indicate that the wait list is non-empty. This has a few nice properties which will be used in following changes:

- when the spinlock is held and the waiter list is known to be non-empty, count < RWSEM_WAITING_BIAS <=> there is an active writer on that sem
- count == RWSEM_WAITING_BIAS <=> there are waiting threads and no active readers/writers on that sem

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
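A small user-space model of that encoding, assuming 32-bit style constants (the real values live in the per-architecture rwsem headers and differ between builds):

```c
#include <stdio.h>

/* Assumed, 32-bit style layout: the low 16 bits count active threads,
 * and the waiting bias is a single negative chunk above them. */
#define RWSEM_ACTIVE_MASK       0x0000ffffL
#define RWSEM_ACTIVE_BIAS       0x00000001L
#define RWSEM_WAITING_BIAS      (-0x00010000L)

int main(void)
{
        long count = 0;

        count += RWSEM_ACTIVE_BIAS;     /* one reader acquires the sem          */
        count += RWSEM_WAITING_BIAS;    /* first waiter queues: bias added once */

        /* With the spinlock held and the wait list known non-empty: */
        printf("active writer?      %d\n", count < RWSEM_WAITING_BIAS);

        /* A writer acquiring would add RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS,
         * pushing count below RWSEM_WAITING_BIAS for as long as it holds it. */

        count -= RWSEM_ACTIVE_BIAS;     /* the reader releases                  */
        printf("waiters, sem idle?  %d\n", count == RWSEM_WAITING_BIAS);
        return 0;
}
```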
Committed by Michel Lespinasse
In __rwsem_do_wake(), we can skip the active count check unless we come there from up_xxxx(). Also, when checking the active count, it is not actually necessary to increment it; this allows us to get rid of the read-side undo code and simplify the calculation of the final rwsem count adjustment once we've counted the reader threads to wake.

The basic observation is the following. When there are waiter threads on a rwsem and the spinlock is held, other threads can only increment the active count by trying to grab the rwsem in down_xxxx(). However, down_xxxx() will notice there are waiter threads and take the down_failed path, blocking to acquire the spinlock on the way there. Therefore, a thread observing an active count of zero with waiters queued and the spinlock held is protected against other threads acquiring the rwsem until it wakes the last waiter or releases the spinlock.

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Committed by Michel Lespinasse
This is in preparation for later changes in the series.

In __rwsem_do_wake(), the first queued waiter is checked first in order to determine whether it's a writer or a reader. The code paths diverge at this point. The code that checks and increments the rwsem active count is duplicated on both sides - the point is that later changes in the series will be able to independently modify both sides.

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 13 May 2010, 1 commit

Committed by Michel Lespinasse
If there are no active threads using a semaphore, it is always correct to unqueue blocked threads. This seems to be what was intended in the undo code. What was done instead was to look for a sem count of zero - an impossible situation, given that at least one thread is known to be queued on the semaphore. The code might be correct as written, but it's hard to reason about and it's not what was intended (otherwise the goto out would have been unconditional). Go for checking the active count - the alternative is not worth the headache.

Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
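A hedged sketch of the check being described, assuming the same 32-bit style count layout as above; the surrounding undo path is elided:

```c
/* Assumed layout: the low bits of the count hold the active reader/writer
 * tally. Waking blocked threads is safe exactly when that part is zero,
 * regardless of what the waiting bias contributes to the full count. */
#define RWSEM_ACTIVE_MASK 0x0000ffffL

int safe_to_wake_waiters(long count)
{
        return (count & RWSEM_ACTIVE_MASK) == 0;
}
```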
- 30 January 2008, 1 commit

Committed by Ingo Molnar
introduce the "asmregparm" calling convention: for functions implemented in assembly with a fixed regparm input parameters calling convention. mark the semaphore and rwsem slowpath functions with that. Signed-off-by: NIngo Molnar <mingo@elte.hu> Signed-off-by: NMiklos Szeredi <mszeredi@suse.cz> Signed-off-by: NThomas Gleixner <tglx@linutronix.de>
- 18 December 2007, 1 commit

Committed by Livio Soares
The following commit http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=fdf8cb0909b531f9ae8f9b9d7e4eb35ba3505f07 un-inlined a low-level rwsem function, but did not mark it as __sched. The result is that it now shows up as the thread wchan (which also affects /proc/profile stats). The following simple patch fixes this by properly marking rwsem_down_failed_common() as a __sched function.

This patch also, which is up for discussion, marks down_read() and down_write() proper as __sched. For profiling, it is pretty much useless to know that a semaphore is being held - it is necessary to know _which_ one. By going up another frame on the stack, the information becomes much more useful.

In summary, the below change to lib/rwsem.c should be applied; the changes to kernel/rwsem.c could be applied if other kernel hackers agree with my proposal that down_read()/down_write() in the profile is not enough.

[ akpm@linux-foundation.org: build fix ]
Signed-off-by: Livio Soares <livio@eecg.toronto.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
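For reference, a hedged sketch of how the __sched marker is spelled and applied; the definition below is an assumption about its shape (the real one lives in include/linux/sched.h), and the function body is a placeholder:

```c
/* Functions placed in the .sched.text section are skipped when the kernel
 * walks a blocked task's stack to compute its wchan, so the caller shows up
 * in wchan output and profiles instead of the locking primitive itself. */
#define __sched __attribute__((__section__(".sched.text")))

struct rw_semaphore;

/* Marking the un-inlined slow path keeps it out of wchan results: */
struct rw_semaphore __sched *rwsem_down_failed_common(struct rw_semaphore *sem)
{
        /* ... blocking slow path elided ... */
        return sem;
}
```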
- 11 October 2006, 1 commit

Committed by Peter Zijlstra
This annotation makes it possible to assign a subclass on lock init. This annotation is meant to reduce the _nested() annotations by assigning a default subclass. One could do without this annotation and rely on lockdep_set_class() exclusively, but that would require a manual stack of struct lock_class_key objects.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
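A hedged sketch of the usage pattern this enables, assuming the lockdep_set_subclass() form of the annotation; the parent/child structures are invented for illustration:

```c
#include <linux/rwsem.h>
#include <linux/lockdep.h>

/* Illustrative only: a child object whose semaphore is routinely taken
 * while its parent's is already held. Tagging the child's lock with a
 * non-zero subclass at init time means plain down_read()/down_write()
 * calls suffice later, instead of down_read_nested() at every nested
 * call site. */
struct demo_parent {
        struct rw_semaphore sem;        /* subclass 0 (the default)     */
};

struct demo_child {
        struct demo_parent *owner;
        struct rw_semaphore sem;        /* tagged as subclass 1 at init */
};

static void demo_child_init(struct demo_child *c, struct demo_parent *p)
{
        c->owner = p;
        init_rwsem(&c->sem);
        lockdep_set_subclass(&c->sem, 1);
}
```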
- 30 September 2006, 1 commit

Committed by Andreas Mohr
Un-inlining rwsem_down_failed_common() (two callsites) reduced lib/rwsem.o on my Athlon (gcc 4.1.2) from 5935 to 5480 bytes (455 bytes saved). I thus guess that the reduced icache footprint (and better function caching) is worth more than any function call overhead.

Signed-off-by: Andreas Mohr <andi@lisas.de>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
- 04 July 2006, 2 commits

Committed by Ingo Molnar
Use the lock validator framework to prove rwsem locking correctness.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Committed by Ingo Molnar
Clean up rwsems.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
- 01 May 2005, 1 commit

Committed by akpm@osdl.org
Replace a number of memory barriers with smp_ variants. This means we won't take the unnecessary hit on UP machines.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
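The distinction being relied on, sketched with conventional definitions; this is an assumption about their shape, not a copy of any particular architecture header:

```c
/* On SMP builds the smp_* barriers emit a real hardware fence; on UP builds
 * they degrade to a compiler barrier, since a uniprocessor never needs to
 * order its memory accesses against another CPU. Replacing mb() with
 * smp_mb() therefore costs nothing on SMP and removes the fence on UP. */
#ifdef CONFIG_SMP
#define smp_mb()        mb()            /* full fence, as before        */
#else
#define smp_mb()        barrier()       /* compiler-only: no CPU cost   */
#endif
```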
- 17 April 2005, 1 commit

Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!