1. 09 May 2007 (2 commits)
  2. 17 March 2007 (1 commit)
  3. 17 February 2007 (1 commit)
  4. 09 December 2006 (1 commit)
  5. 08 December 2006 (5 commits)
  6. 04 November 2006 (1 commit)
  7. 11 October 2006 (1 commit)
  8. 02 October 2006 (1 commit)
  9. 30 September 2006 (2 commits)
  10. 09 September 2006 (1 commit)
      [PATCH] Use the correct restart option for futex_lock_pi · c5780e97
      Committed by Thomas Gleixner
      The current implementation of futex_lock_pi returns -ERESTART_RESTARTBLOCK
      in case the lock operation has been interrupted by a signal.  This
      results in a return of -EINTR to userspace in case there is a handler for
      the signal.  This is wrong, because userspace expects that the lock
      function does not return on any signal delivery.
      
      This was not caught by my insufficient test case, but triggered a nasty
      userspace problem in a high-load application scenario.  Unfortunately,
      glibc does not check for this invalid return value either.
      
      Using -ERESTARTNOINTR makes sure that the interrupted syscall is restarted.
      The restart-block-related code can be safely removed, as the possible
      timeout argument is an absolute time value.
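
      A hedged sketch of the corrected error path (the surrounding
      futex_lock_pi() logic is abbreviated and the exact variable names
      are assumed):

            /*
             * The rt-mutex acquisition was interrupted by a signal.  The
             * timeout, if any, is an absolute time value, so the syscall
             * can simply be restarted with its original arguments - no
             * restart block is needed.
             */
            if (ret == -EINTR)
                    ret = -ERESTARTNOINTR;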
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  11. 28 August 2006 (1 commit)
  12. 15 August 2006 (1 commit)
      [PATCH] futex_handle_fault always fails · e579dcbf
      Committed by john stultz
      We found this issue last week with the -RT kernel, but it seems the same
      issue is in mainline as well.
      
      Basically it is possible for futex_unlock_pi to return without actually
      freeing the lock.  This is due to buggy logic in the use of
      futex_handle_fault() and its attempt argument in a failure case.
      
      Looking at futex.c, the logic is as follows:
      
      1) In futex_unlock_pi() we start with ret = 0 and go down to the first
         futex_atomic_cmpxchg_inatomic(), where we find uval == -EFAULT.  We then
         jump to the pi_faulted label.
      
      2) From pi_faulted: We increment attempt, unlock the sem and hit the
         retry label.
      
      3) From the retry label, with ret still zero, we again hit EFAULT on the
         first futex_atomic_cmpxchg_inatomic(), and again goto the pi_faulted
         label.
      
      4) Again from pi_faulted: we increment attempt and enter the
         conditional, where we call futex_handle_fault.
      
      5) futex_handle_fault fails, and we goto the out_unlock_release_sem
         label.
      
      6) From out_unlock_release_sem we return, and since ret is still zero,
         we return without error, while never actually unlocking the lock.
      
      Issue #1: at the first futex_atomic_cmpxchg_inatomic() we should probably
      be setting ret = -EFAULT before jumping to pi_faulted.  However, in our case
      this doesn't really affect anything, as the glibc we're using ignores the
      error value from futex_unlock_pi().
      
      Issue #2: Look at futex_handle_fault(): its first conditional will return
      -EFAULT if attempt is >= 2.  However, from the "if(attempt++)
      futex_handle_fault(attempt)" logic above, we'll *never* call
      futex_handle_fault when attempt is less than two.  So we never get a chance
      to even try to fault the page in.
      
      The following patch addresses these two issues by 1) always setting ret to
      -EFAULT if futex_handle_fault fails, and 2) removing the '=' in
      futex_handle_fault's (attempt >= 2) check.
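
      A hedged sketch of those two changes, using the identifiers named above
      (uaddr and the exact call signature are assumptions; surrounding code
      abbreviated):

            /* futex_unlock_pi(), at the pi_faulted label: */
            if (attempt++) {
                    if (futex_handle_fault((unsigned long)uaddr, attempt)) {
                            ret = -EFAULT;  /* change 1: report the fault */
                            goto out_unlock_release_sem;
                    }
            }

            /* futex_handle_fault(), first conditional: */
            if (attempt > 2)                /* change 2: was (attempt >= 2) */
                    return -EFAULT;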
      
      I'm really not sure this is the right fix, but wanted to bring it up so
      folks knew the issue is alive and well in the current -git tree.  From
      looking at the git logs the logic was first introduced (then later copied
      to other places) in the following commit almost a year ago:
      
      http://www.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=4732efbeb997189d9f9b04708dc26bf8613ed721;hp=5b039e681b8c5f30aac9cc04385cc94be45d0823
      
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Ingo Molnar <mingo@elte.hu>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
  13. 06 August 2006 (1 commit)
      [PATCH] bug in futex unqueue_me · e91467ec
      Committed by Christian Borntraeger
      This patch adds a barrier() in futex unqueue_me to prevent the compiler
      from re-reading q->lock_ptr between the NULL check and the spin_lock() call.
      
      On my s390x system I saw the following oops:
      
      Unable to handle kernel pointer dereference at virtual kernel address
      0000000000000000
      Oops: 0004 [#1]
      CPU:    0    Not tainted
      Process mytool (pid: 13613, task: 000000003ecb6ac0, ksp: 00000000366bdbd8)
      Krnl PSW : 0704d00180000000 00000000003c9ac2 (_spin_lock+0xe/0x30)
      Krnl GPRS: 00000000ffffffff 000000003ecb6ac0 0000000000000000 0700000000000000
                 0000000000000000 0000000000000000 000001fe00002028 00000000000c091f
                 000001fe00002054 000001fe00002054 0000000000000000 00000000366bddc0
                 00000000005ef8c0 00000000003d00e8 0000000000144f91 00000000366bdcb8
      Krnl Code: ba 4e 20 00 12 44 b9 16 00 3e a7 84 00 08 e3 e0 f0 88 00 04
      Call Trace:
      ([<0000000000144f90>] unqueue_me+0x40/0xe4)
       [<0000000000145a0c>] do_futex+0x33c/0xc40
       [<000000000014643e>] sys_futex+0x12e/0x144
       [<000000000010bb00>] sysc_noemu+0x10/0x16
       [<000002000003741c>] 0x2000003741c
      
      The code in question is:
      
      static int unqueue_me(struct futex_q *q)
      {
              int ret = 0;
              spinlock_t *lock_ptr;
      
              /* In the common case we don't take the spinlock, which is nice. */
       retry:
              lock_ptr = q->lock_ptr;
              if (lock_ptr != 0) {
                      spin_lock(lock_ptr);
                /*
                       * q->lock_ptr can change between reading it and
                       * spin_lock(), causing us to take the wrong lock.  This
                       * corrects the race condition.
      [...]
      
      and my compiler (gcc 4.1.0) generates the following code from it:
      
      00000000000003c8 <unqueue_me>:
           3c8:       eb bf f0 70 00 24       stmg    %r11,%r15,112(%r15)
           3ce:       c0 d0 00 00 00 00       larl    %r13,3ce <unqueue_me+0x6>
                              3d0: R_390_PC32DBL      .rodata+0x2a
           3d4:       a7 f1 1e 00             tml     %r15,7680
           3d8:       a7 84 00 01             je      3da <unqueue_me+0x12>
           3dc:       b9 04 00 ef             lgr     %r14,%r15
           3e0:       a7 fb ff d0             aghi    %r15,-48
           3e4:       b9 04 00 b2             lgr     %r11,%r2
           3e8:       e3 e0 f0 98 00 24       stg     %r14,152(%r15)
           3ee:       e3 c0 b0 28 00 04       lg      %r12,40(%r11)
      		/* write q->lock_ptr in r12 */
           3f4:       b9 02 00 cc             ltgr    %r12,%r12
           3f8:       a7 84 00 4b             je      48e <unqueue_me+0xc6>
      		/* if r12 is zero then jump over the code.... */
           3fc:       e3 20 b0 28 00 04       lg      %r2,40(%r11)
      		/* write q->lock_ptr in r2 */
           402:       c0 e5 00 00 00 00       brasl   %r14,402 <unqueue_me+0x3a>
                              404: R_390_PC32DBL      _spin_lock+0x2
      		/* use r2 as parameter for spin_lock */
      
      So the code becomes more or less:
      if (q->lock_ptr != 0) spin_lock(q->lock_ptr)
      instead of
      if (lock_ptr != 0) spin_lock(lock_ptr)
      
      This caused the oops above.
      After adding a barrier(), gcc generates code without this problem:
      [...] (the same)
           3ee:       e3 c0 b0 28 00 04       lg      %r12,40(%r11)
           3f4:       b9 02 00 cc             ltgr    %r12,%r12
           3f8:       b9 04 00 2c             lgr     %r2,%r12
           3fc:       a7 84 00 48             je      48c <unqueue_me+0xc4>
           400:       c0 e5 00 00 00 00       brasl   %r14,400 <unqueue_me+0x38>
                              402: R_390_PC32DBL      _spin_lock+0x2
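
      At the C level the fix is a one-liner; a sketch of the patched read
      (barrier() forbids gcc to re-read q->lock_ptr after the test):

       retry:
              lock_ptr = q->lock_ptr;
              barrier();
              if (lock_ptr != 0) {
                      spin_lock(lock_ptr);
      [...]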
      
      As a general note, this unqueue_me code seems a bit fishy.  The retry logic
      of unqueue_me only works if we can guarantee that the original value of
      q->lock_ptr is always a spinlock (otherwise we overwrite kernel memory).  We
      know that q->lock_ptr can change.  I don't know what happens with the original
      spinlock, as I am not an expert on the futex code.
      
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@timesys.com>
      Signed-off-by: Christian Borntraeger <borntrae@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  14. 29 July 2006 (2 commits)
  15. 11 July 2006 (1 commit)
  16. 04 July 2006 (1 commit)
  17. 02 July 2006 (2 commits)
  18. 28 June 2006 (3 commits)
      [PATCH] futex_requeue() optimization · 59e0e0ac
      Committed by Sebastien Dugue
      In futex_requeue(), when the two futex keys hash to the same bucket, there
      is no need to move the futex_q to the end of the bucket list.
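
      A hedged sketch of the change in futex_requeue()'s requeue loop (hb2,
      head1 and this are assumed to be the bucket, chain and iterator names
      used in futex.c of this era):

            /* Only move the futex_q if the two buckets actually differ. */
            if (likely(head1 != &hb2->chain)) {
                    list_move_tail(&this->list, &hb2->chain);
                    this->lock_ptr = &hb2->lock;
            }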
      Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      [PATCH] pi-futex: futex_lock_pi/futex_unlock_pi support · c87e2837
      Committed by Ingo Molnar
      This adds the actual pi-futex implementation, based on rt-mutexes.
      
      [dino@in.ibm.com: fix an oops-causing race]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Dinakar Guniguntala <dino@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      [PATCH] pi-futex: futex code cleanups · e2970f2f
      Committed by Ingo Molnar
      We are pleased to announce "lightweight userspace priority inheritance" (PI)
      support for futexes.  The following patchset and glibc patch implement it,
      on top of the robust-futexes patchset which is included in 2.6.16-mm1.
      
      We are calling it lightweight for 3 reasons:
      
       - in the user-space fastpath a PI-enabled futex involves no kernel work
         (or any other PI complexity) at all.  No registration, no extra kernel
         calls - just pure fast atomic ops in userspace.
      
       - in the slowpath (in the lock-contention case), the system call and
         scheduling pattern is in fact better than that of normal futexes, due to
         the 'integrated' nature of FUTEX_LOCK_PI.  [more about that further down]
      
       - the in-kernel PI implementation is streamlined around the mutex
         abstraction, with strict rules that keep the implementation relatively
         simple: only a single owner may own a lock (i.e.  no read-write lock
         support), only the owner may unlock a lock, no recursive locking, etc.
      
        Priority Inheritance - why, oh why???
        -------------------------------------
      
      Many of you have heard the horror stories about the evil PI code circling
      Linux for years, which makes no real sense at all and is only used by buggy
      applications and which has horrible overhead.  Some of you have dreaded this
      very moment, when someone actually submits working PI code ;-)
      
      So why would we like to see PI support for futexes?
      
      We'd like to see it done purely for technological reasons.  We don't think it's
      a buggy concept; we think it's useful functionality to offer to applications,
      functionality that cannot be achieved in other ways.  We also think it's the
      right thing to do, and we think we've got the right arguments and the right
      numbers to prove that.  We also believe that we can address all the
      counter-arguments as well.  For these reasons (and the reasons outlined below)
      we are submitting this patch-set for upstream kernel inclusion.
      
      What are the benefits of PI?
      
        The short reply:
        ----------------
      
      User-space PI helps achieve and improve determinism for user-space
      applications.  In the best case, it can help achieve determinism and
      well-bound latencies.  Even in the worst case, PI will improve the statistical
      distribution of locking-related application delays.
      
        The longer reply:
        -----------------
      
      Firstly, sharing locks between multiple tasks is a common programming
      technique that often cannot be replaced with lockless algorithms.  As we can
      see in the kernel [which is a quite complex program in itself], lockless
      structures are the exception rather than the norm - the current ratio of
      lockless vs.  locky code for shared data structures is somewhere between 1:10
      and 1:100.  Lockless is hard, and the complexity of lockless algorithms often
      endangers the ability to do robust reviews of said code.  I.e.  critical RT
      apps often choose lock structures to protect critical data structures, instead
      of lockless algorithms.  Furthermore, there are cases (like shared hardware,
      or other resource limits) where lockless access is mathematically impossible.
      
      Media players (such as Jack) are an example of reasonable application design
      with multiple tasks (with multiple priority levels) sharing short-held locks:
      for example, a highprio audio playback thread is combined with medium-prio
      construct-audio-data threads and low-prio display-colory-stuff threads.  Add
      video and decoding to the mix and we've got even more priority levels.
      
      So once we accept that synchronization objects (locks) are an unavoidable fact
      of life, and once we accept that multi-task userspace apps have a very fair
      expectation of being able to use locks, we've got to think about how to offer
      the option of a deterministic locking implementation to user-space.
      
      Most of the technical counter-arguments against doing priority inheritance
      only apply to kernel-space locks.  But user-space locks are different: there
      we cannot disable interrupts or make the task non-preemptible in a critical
      section, so the 'use spinlocks' argument does not apply (user-space spinlocks
      have the same priority inversion problems as other user-space locking
      constructs).  Fact is, pretty much the only technique that currently enables
      good determinism for userspace locks (such as futex-based pthread mutexes) is
      priority inheritance:
      
      Currently (without PI), if a high-prio and a low-prio task share a lock [this
      is a quite common scenario for most non-trivial RT applications], even if all
      critical sections are coded carefully to be deterministic (i.e.  all critical
      sections are short in duration and only execute a limited number of
      instructions), the kernel cannot guarantee any deterministic execution of the
      high-prio task: any medium-priority task could preempt the low-prio task while
      it holds the shared lock and executes the critical section, and could delay it
      indefinitely.
      
        Implementation:
        ---------------
      
      As mentioned before, the userspace fastpath of PI-enabled pthread mutexes
      involves no kernel work at all - they behave quite similarly to normal
      futex-based locks: a 0 value means unlocked, and a value==TID means locked.
      (This is the same method as used by list-based robust futexes.) Userspace uses
      atomic ops to lock/unlock these mutexes without entering the kernel.
      
      To handle the slowpath, we have added two new futex ops:
      
        FUTEX_LOCK_PI
        FUTEX_UNLOCK_PI
      
      If the lock-acquire fastpath fails, [i.e.  an atomic transition from 0 to TID
      fails], then FUTEX_LOCK_PI is called.  The kernel does all the remaining work:
      if there is no futex-queue attached to the futex address yet then the code
      looks up the task that owns the futex [it has put its own TID into the futex
      value], and attaches a 'PI state' structure to the futex-queue.  The pi_state
      includes an rt-mutex, which is a PI-aware, kernel-based synchronization
      object.  The 'other' task is made the owner of the rt-mutex, and the
      FUTEX_WAITERS bit is atomically set in the futex value.  Then this task tries
      to lock the rt-mutex, on which it blocks.  Once it returns, it has the mutex
      acquired, and it sets the futex value to its own TID and returns.  Userspace
      has no other work to perform - it now owns the lock, and the futex value contains
      FUTEX_WAITERS|TID.
      
      If the unlock side fastpath succeeds, [i.e.  userspace manages to do a TID ->
      0 atomic transition of the futex value], then no kernel work is triggered.
      
      If the unlock fastpath fails (because the FUTEX_WAITERS bit is set), then
      FUTEX_UNLOCK_PI is called, and the kernel unlocks the futex on behalf of
      userspace - and it also unlocks the attached pi_state->rt_mutex and thus wakes
      up any potential waiters.
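
      To make the fastpath/slowpath split concrete, here is a minimal, hedged
      userspace sketch (illustrative only - real PI mutexes live in glibc, and
      pi_lock()/pi_unlock() are hypothetical helper names):

            #include <stdatomic.h>
            #include <unistd.h>
            #include <sys/syscall.h>
            #include <linux/futex.h>

            /* lock: try the 0 -> TID transition; on contention let the
             * kernel do the PI work via FUTEX_LOCK_PI */
            void pi_lock(atomic_uint *futex, unsigned int tid)
            {
                    unsigned int expected = 0;
                    if (!atomic_compare_exchange_strong(futex, &expected, tid))
                            syscall(SYS_futex, futex, FUTEX_LOCK_PI, 0, NULL, NULL, 0);
            }

            /* unlock: try the TID -> 0 transition; if FUTEX_WAITERS is set
             * the cmpxchg fails and the kernel wakes the waiters */
            void pi_unlock(atomic_uint *futex, unsigned int tid)
            {
                    unsigned int expected = tid;
                    if (!atomic_compare_exchange_strong(futex, &expected, 0))
                            syscall(SYS_futex, futex, FUTEX_UNLOCK_PI, 0, NULL, NULL, 0);
            }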
      
      Note that under this approach, contrary to other PI-futex approaches, there is
      no prior 'registration' of a PI-futex.  [which is not quite possible anyway,
      due to existing ABI properties of pthread mutexes.]
      
      Also, under this scheme, 'robustness' and 'PI' are two orthogonal properties
      of futexes, and all four combinations are possible: futex, robust-futex,
      PI-futex, robust+PI-futex.
      
        glibc support:
        --------------
      
      Ulrich Drepper and Jakub Jelinek have written glibc support for PI-futexes
      (and robust futexes), enabling robust and PI (PTHREAD_PRIO_INHERIT) POSIX
      mutexes.  (PTHREAD_PRIO_PROTECT support will be added later on too; no
      additional kernel changes are needed for that.)  [NOTE: The glibc patch is
      obviously unofficial and unsupported without matching upstream kernel
      functionality.]
      
      The patch-queue and the glibc patch can also be downloaded from:
      
        http://redhat.com/~mingo/PI-futex-patches/
      
      Many thanks go to the people who helped us create this kernel feature: Steven
      Rostedt, Esben Nielsen, Benedikt Spranger, Daniel Walker, John Cooper, Arjan
      van de Ven, Oleg Nesterov and others.  Credit for related prior projects goes
      to Dirk Grambow, Inaky Perez-Gonzalez, Bill Huey and many others.
      
      Clean up the futex code before adding more features to it:
      
       - use u32 as the futex field type - that's the ABI
       - use __user and pointers to u32 instead of unsigned long
       - code style / comment style cleanups
       - rename hash-bucket name from 'bh' to 'hb'.
      
      I checked the pre and post futex.o object files to make sure this
      patch has no code effects.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Ulrich Drepper <drepper@redhat.com>
      Cc: Jakub Jelinek <jakub@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  19. 23 June 2006 (1 commit)
      [PATCH] VFS: Permit filesystem to override root dentry on mount · 454e2398
      Committed by David Howells
      Extend the get_sb() filesystem operation to take an extra argument that
      permits the VFS to pass in the target vfsmount that defines the mountpoint.
      
      The filesystem is then required to manually set the superblock and root dentry
      pointers.  For most filesystems, this should be done with simple_set_mnt()
      which will set the superblock pointer and then set the root dentry to the
      superblock's s_root (as per the old default behaviour).
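
      In rough terms, simple_set_mnt() amounts to the following (a sketch of
      the helper as described above, not a verbatim copy):

            int simple_set_mnt(struct vfsmount *mnt, struct super_block *sb)
            {
                    mnt->mnt_sb = sb;
                    mnt->mnt_root = dget(sb->s_root);
                    return 0;       /* always 0, so get_sb() can tail-call it */
            }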
      
      The get_sb() op now returns an integer as there's now no need to return the
      superblock pointer.
      
      This patch permits a superblock to be implicitly shared amongst several mount
      points, such as can be done with NFS to avoid potential inode aliasing.  In
      such a case, simple_set_mnt() would not be called, and instead the mnt_root
      and mnt_sb would be set directly.
      
      The patch also makes the following changes:
      
       (*) the get_sb_*() convenience functions in the core kernel now take a vfsmount
           pointer argument and return an integer, so most filesystems have to change
           very little.
      
       (*) If one of the convenience functions is not used, then get_sb() should
           normally call simple_set_mnt() to instantiate the vfsmount. This will
           always return 0, and so can be tail-called from get_sb().
      
       (*) generic_shutdown_super() now calls shrink_dcache_sb() to clean up the
           dcache upon superblock destruction rather than shrink_dcache_anon().
      
           This is required because the superblock may now have multiple trees that
           aren't actually bound to s_root, but that still need to be cleaned up. The
           currently called functions assume that the whole tree is rooted at s_root,
           and that anonymous dentries are not the roots of trees, which results in
           dentries being left unculled.
      
           However, with the way NFS superblock sharing is currently set to be
           implemented, these assumptions are violated: the root of the filesystem is
           simply a dummy dentry and inode (the real inode for '/' may well be
           inaccessible), and all the vfsmounts are rooted on anonymous[*] dentries
           with child trees.
      
           [*] Anonymous until discovered from another tree.
      
       (*) The documentation has been adjusted accordingly, including changing
           ext2_* into foo_* throughout.
      
      [akpm@osdl.org: convert ipath_fs, do other stuff]
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Nathan Scott <nathans@sgi.com>
      Cc: Roland Dreier <rolandd@cisco.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  20. 01 April 2006 (1 commit)
  21. 28 March 2006 (2 commits)
  22. 07 January 2006 (1 commit)
  23. 25 December 2005 (1 commit)
  24. 24 November 2005 (1 commit)
  25. 07 November 2005 (1 commit)
      [PATCH] FUTEX_WAKE_OP: enhanced error handling · 796f8d9b
      Committed by David Gibson
      The code for FUTEX_WAKE_OP calls an arch callback,
      futex_atomic_op_inuser().  That callback can return an error code, but
      currently the caller assumes any error is EFAULT, and will try various
      things to resolve the fault before eventually giving up with EFAULT
      (regardless of the original error code).  This is not a theoretical case -
      arch callbacks currently return -ENOSYS if the opcode they are given is
      bogus.
      
      This patch alters the code to detect non-EFAULT errors and return them
      directly to the user.
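
      A hedged sketch of the altered check (op_ret and the surrounding labels
      are assumed names; only -EFAULT now enters the fault-handling path):

            op_ret = futex_atomic_op_inuser(op, uaddr2);
            if (unlikely(op_ret < 0)) {
                    if (op_ret != -EFAULT) {
                            ret = op_ret;   /* e.g. -ENOSYS: pass it through */
                            goto out;
                    }
                    /* -EFAULT: fall through to the existing fault-handling
                     * and retry logic, as before */
            }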
      
      Of course, whether -ENOSYS is the correct return value for the bogus-opcode
      case, or whether -EINVAL would be more appropriate, is another question.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Jamie Lokier <jamie@shareable.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  26. 30 October 2005 (1 commit)
      [PATCH] mm: follow_page with inner ptlock · deceb6cd
      Committed by Hugh Dickins
      Final step in pushing down the common core's page_table_lock.  follow_page no
      longer wants the caller to hold page_table_lock; it uses pte_offset_map_lock
      itself, and so no page_table_lock is taken in get_user_pages itself.
      
      But get_user_pages (and get_futex_key) do then need follow_page to pin the
      page for them: take Daniel's suggestion of bitflags to follow_page.
      
      We need one flag for WRITE, another for TOUCH (it was the accessed flag
      before: it vanished along with check_user_page_readable, but surely
      get_numa_maps is wrong to mark every page it finds as accessed), and
      another for GET.
      
      And another, ANON, to dispose of untouched_anonymous_page: it seems silly for
      that to descend a second time; let follow_page observe if there was no page
      table and return ZERO_PAGE if so.  Fix a minor bug in that: check VM_LOCKED -
      make_pages_present ought to make readonly anonymous pages present.
      
      Give get_numa_maps a cond_resched while we're there.
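
      The resulting flag set, roughly as described above (a sketch; the FOLL_*
      names are the conventional ones in include/linux/mm.h):

            #define FOLL_WRITE      0x01    /* check pte is writable */
            #define FOLL_TOUCH      0x02    /* mark page accessed */
            #define FOLL_GET        0x04    /* do get_page on page */
            #define FOLL_ANON       0x08    /* give ZERO_PAGE if no pgtable */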
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  27. 08 September 2005 (2 commits)
      [PATCH] futex: remove duplicate code · 39ed3fde
      Committed by Pekka Enberg
      This patch cleans up the error path of futex_fd() by removing duplicate
      code.
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      [PATCH] FUTEX_WAKE_OP: pthread_cond_signal() speedup · 4732efbe
      Committed by Jakub Jelinek
      At the moment, pthread_cond_signal is unnecessarily slow, because it wakes one waiter
      (which at least on UP usually means an immediate context switch to one of
      the waiter threads).  This waiter wakes up and after a few instructions it
      attempts to acquire the cv internal lock, but that lock is still held by
      the thread calling pthread_cond_signal.  So it goes to sleep and eventually
      the signalling thread is scheduled in, unlocks the internal lock and wakes
      the waiter again.
      
      Now, before 2003-09-21 NPTL was using FUTEX_REQUEUE in pthread_cond_signal
      to avoid this performance issue, but it was removed when locks were
      redesigned to the 3-state scheme (unlocked, locked uncontended, locked
      contended).
      
      The following scenario shows why simply using FUTEX_REQUEUE in
      pthread_cond_signal together with using lll_mutex_unlock_force in place of
      lll_mutex_unlock is not enough, and probably why it was disabled at
      that time:
      
      The number is the value of cv->__data.__lock.
              thr1            thr2            thr3
      0       pthread_cond_wait
      1       lll_mutex_lock (cv->__data.__lock)
      0       lll_mutex_unlock (cv->__data.__lock)
      0       lll_futex_wait (&cv->__data.__futex, futexval)
      0                       pthread_cond_signal
      1                       lll_mutex_lock (cv->__data.__lock)
      1                                       pthread_cond_signal
      2                                       lll_mutex_lock (cv->__data.__lock)
      2                                         lll_futex_wait (&cv->__data.__lock, 2)
      2                       lll_futex_requeue (&cv->__data.__futex, 0, 1, &cv->__data.__lock)
                                # FUTEX_REQUEUE, not FUTEX_CMP_REQUEUE
      2                       lll_mutex_unlock_force (cv->__data.__lock)
      0                         cv->__data.__lock = 0
      0                         lll_futex_wake (&cv->__data.__lock, 1)
      1       lll_mutex_lock (cv->__data.__lock)
      0       lll_mutex_unlock (cv->__data.__lock)
                # Here, lll_mutex_unlock doesn't know there are threads waiting
                # on the internal cv's lock
      
      Now, I believe it is possible to use FUTEX_REQUEUE in pthread_cond_signal,
      but it will cost us not one but two extra syscalls and, what's worse, one of
      these extra syscalls will be done for every single waiting loop in
      pthread_cond_*wait.
      
      We would need to use lll_mutex_unlock_force in pthread_cond_signal after
      requeue and lll_mutex_cond_lock in pthread_cond_*wait after lll_futex_wait.
      
      Another alternative is to do the unlocking pthread_cond_signal needs to do
      (the lock can't be unlocked before lll_futex_wake, as that is racy) in the
      kernel.
      
      I have implemented both variants: futex-requeue-glibc.patch is the first
      one, and futex-wake_op{,-glibc}.patch does the unlocking inside the kernel.
      The kernel interface allows userland to specify exactly what such an unlocking
      operation should look like (some atomic arithmetic operation with an optional
      constant argument, plus a comparison of the previous futex value with another
      constant).
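
      The resulting encoding looks roughly like the FUTEX_OP() macro in
      linux/futex.h (semantics summarized in the comment):

            /*
             * FUTEX_WAKE_OP computes:  oldval = *(int *)uaddr2;
             *                          *(int *)uaddr2 = oldval <op> oparg;
             * It wakes waiters on uaddr, and additionally wakes waiters on
             * uaddr2 if (oldval <cmp> cmparg) holds.
             */
            #define FUTEX_OP(op, oparg, cmp, cmparg) \
                    (((op & 0xf) << 28) | ((cmp & 0xf) << 24) | \
                     ((oparg & 0xfff) << 12) | (cmparg & 0xfff))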
      
      It has been implemented just for ppc*, x86_64 and i?86; for other
      architectures I'm including just a stub header which can be used as a
      starting point by maintainers to write support for their arches, and which
      at the moment will just return -ENOSYS for FUTEX_WAKE_OP.  The requeue patch
      has been (lightly) tested just on x86_64, the wake_op patch on a ppc64 kernel
      running 32-bit and 64-bit NPTL and an x86_64 kernel running 32-bit and
      64-bit NPTL.
      
      With the following benchmark on UP x86-64 I get:
      
      for i in nptl-orig nptl-requeue nptl-wake_op; do echo time elf/ld.so --library-path .:$i /tmp/bench; \
      for j in 1 2; do ( time elf/ld.so --library-path .:$i /tmp/bench ) 2>&1; done; done
      time elf/ld.so --library-path .:nptl-orig /tmp/bench
      real 0m0.655s user 0m0.253s sys 0m0.403s
      real 0m0.657s user 0m0.269s sys 0m0.388s
      time elf/ld.so --library-path .:nptl-requeue /tmp/bench
      real 0m0.496s user 0m0.225s sys 0m0.271s
      real 0m0.531s user 0m0.242s sys 0m0.288s
      time elf/ld.so --library-path .:nptl-wake_op /tmp/bench
      real 0m0.380s user 0m0.176s sys 0m0.204s
      real 0m0.382s user 0m0.175s sys 0m0.207s
      
      The benchmark is at:
      http://sourceware.org/ml/libc-alpha/2005-03/txt00001.txt
      Older futex-requeue-glibc.patch version is at:
      http://sourceware.org/ml/libc-alpha/2005-03/txt00002.txt
      Older futex-wake_op-glibc.patch version is at:
      http://sourceware.org/ml/libc-alpha/2005-03/txt00003.txt
      I will post a new version (with just x86-64 fixes so that the patch
      applies against pthread_cond_signal.S) to the libc-hacker mailing list soon.
      
      Attached is the kernel FUTEX_WAKE_OP patch as well as a simple-minded
      testcase that will not test the atomicity of the operation, but at least
      check if the threads that should have been woken up are woken up and
      whether the arithmetic operation in the kernel gave the expected results.
      Acked-by: Ingo Molnar <mingo@redhat.com>
      Cc: Ulrich Drepper <drepper@redhat.com>
      Cc: Jamie Lokier <jamie@shareable.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  28. 01 May 2005 (1 commit)