1. 28 Feb, 2017 1 commit
    • ipc/sem: add hysteresis · 9de5ab8a
      Committed by Manfred Spraul
      SysV sem has two lock modes: one with per-semaphore locks, and one
      with a single global lock for the whole array.  When switching from
      the per-semaphore locks to the global lock, all per-semaphore locks
      must be scanned for ongoing operations.
      
      The patch adds a hysteresis for switching from the global lock to the
      per-semaphore locks.  This reduces how often the per-semaphore locks
      must be scanned.
      
      Compared to the initial patch, this is a simplified solution: Setting
      USE_GLOBAL_LOCK_HYSTERESIS to 1 restores the current behavior.
      
      In theory, a workload with exactly 10 simple sops and then one complex
      op now scales a bit worse, but this is pure theory: if there is
      concurrency, the pattern won't be exactly 10:1:10:1:10:1:...  If there
      is no concurrency, then there is no need for scalability.
      
      Link: http://lkml.kernel.org/r/1476851896-3590-3-git-send-email-manfred@colorfullife.com
      Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: <1vier1@web.de>
      Cc: kernel test robot <xiaolong.ye@intel.com>
      Cc: <felixh@informatik.uni-bremen.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9de5ab8a
  2. 12 Oct, 2016 1 commit
    • ipc/sem.c: fix complex_count vs. simple op race · 5864a2fd
      Committed by Manfred Spraul
      Commit 6d07b68c ("ipc/sem.c: optimize sem_lock()") introduced a
      race:
      
      sem_lock has a fast path that allows parallel simple operations.
      There are two reasons why a simple operation cannot run in parallel:
       - a non-simple operation is ongoing (sma->sem_perm.lock held)
       - a complex operation is sleeping (sma->complex_count != 0)
      
      As both facts are stored independently, a thread can bypass the current
      checks by sleeping in the right positions.  See below for more details
      (or kernel bugzilla 105651).
      
      The patch fixes that by creating one variable (complex_mode)
      that tracks both reasons why parallel operations are not possible.
      
      The patch also updates stale documentation regarding the locking.
      
      With regard to stable kernels:
      The patch is required for all kernels that include
      commit 6d07b68c ("ipc/sem.c: optimize sem_lock()") (3.10?)
      
      The alternative is to revert the patch that introduced the race.
      
      The patch is safe for backporting, i.e. it makes no assumptions
      about memory barriers in spin_unlock_wait().
      
      Background:
      Here is the race in the current implementation:
      
      Thread A: (simple op)
      - does the first "sma->complex_count == 0" test
      
      Thread B: (complex op)
      - does sem_lock(): This includes an array scan. But the scan can't
        find Thread A, because Thread A does not own sem->lock yet.
      - the thread does the operation, increases complex_count,
        drops sem_lock, sleeps
      
      Thread A:
      - spin_lock(&sem->lock), spin_is_locked(sma->sem_perm.lock)
      - sleeps before the complex_count test
      
      Thread C: (complex op)
      - does sem_lock (no array scan, complex_count==1)
      - wakes up Thread B.
      - decrements complex_count
      
      Thread A:
      - does the complex_count test
      
      Bug:
      Now both Thread A and Thread C operate on the same array, without
      any synchronization.
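      
      A minimal sketch of the resulting check sequence in C.  The
      complex_mode name comes from the commit text; the per-semaphore
      spinlock and the memory barriers are deliberately elided, so this is
      an illustration of the idea, not the kernel's actual code:
      
          #include <stdbool.h>
      
          struct sem_array_sketch {
                  bool complex_mode;      /* set while any complex op runs or sleeps */
          };
      
          /*
           * Simple-op fast path.  Because both reasons for excluding simple
           * ops (a complex op holding the global lock, or one sleeping with
           * complex_count != 0) now set the same flag, a pre-check plus a
           * recheck under the per-semaphore lock is sufficient: the window
           * that Thread A exploited above no longer exists.
           */
          static bool simple_op_fast_path_ok(struct sem_array_sketch *sma)
          {
                  if (sma->complex_mode)          /* optimistic pre-check */
                          return false;
                  /* ... spin_lock(&sem->lock) would happen here ... */
                  if (sma->complex_mode)          /* recheck under the lock */
                          return false;           /* fall back to the global lock */
                  return true;
          }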
      
      Fixes: 6d07b68c ("ipc/sem.c: optimize sem_lock()")
      Link: http://lkml.kernel.org/r/1469123695-5661-1-git-send-email-manfred@colorfullife.com
      Reported-by: <felixh@informatik.uni-bremen.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: <1vier1@web.de>
      Cc: <stable@vger.kernel.org>	[3.10+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5864a2fd
  3. 10 Jul, 2013 2 commits
  4. 13 Oct, 2012 1 commit
  5. 03 Nov, 2011 2 commits
  6. 27 Jul, 2011 1 commit
  7. 28 May, 2010 1 commit
  8. 16 Dec, 2009 1 commit
  9. 26 Jul, 2008 4 commits
  10. 20 Oct, 2007 1 commit
    • ipc: store ipcs into IDRs · 7ca7e564
      Committed by Nadia Derbey
      This patch introduces ipcs storage into IDRs. The main changes are:
        . The ipc_ids structure is changed: the entries array is replaced by a
          root idr structure.
        . The grow_ary() routine is removed: it is no longer needed when adding
          an ipc structure, since we are now using the IDR facility.
        . The ipc_rmid() routine interface is changed (see the sketch after
          this list):
             . there is no need for this routine to return the pointer passed in
               as an argument: it is now declared void
             . since the id is now part of the kern_ipc_perm structure, there is
               no need to pass it as an argument to the routine
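      
      A hedged sketch of the new scheme in kernel-style C.  The names ending
      in _sketch are illustrative, and idr_alloc()/idr_remove() are the
      modern IDR calls (the IDR interface of this 2007 era was the older
      idr_pre_get()/idr_get_new() pair), so this shows the idea rather than
      the patch itself:
      
          #include <linux/idr.h>
          #include <linux/gfp.h>
      
          struct kern_ipc_perm_sketch {
                  int id;                 /* the id now lives in the structure itself */
          };
      
          struct ipc_ids_sketch {
                  struct idr ipcs_idr;    /* replaces the hand-grown entries array */
          };
      
          /* Assumes idr_init(&ids->ipcs_idr) ran when the ipc_ids was set up. */
          static int ipc_addid_sketch(struct ipc_ids_sketch *ids,
                                      struct kern_ipc_perm_sketch *new)
          {
                  int id = idr_alloc(&ids->ipcs_idr, new, 0, 0, GFP_KERNEL);
      
                  if (id < 0)
                          return id;      /* no grow_ary(): the IDR grows itself */
                  new->id = id;
                  return id;
          }
      
          /*
           * ipc_rmid() needs neither an id argument nor a return value any
           * more: the id is read from kern_ipc_perm, and callers never used
           * the returned pointer.
           */
          static void ipc_rmid_sketch(struct ipc_ids_sketch *ids,
                                      struct kern_ipc_perm_sketch *ipcp)
          {
                  idr_remove(&ids->ipcs_idr, ipcp->id);
          }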
      Signed-off-by: Nadia Derbey <Nadia.Derbey@bull.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7ca7e564
  11. 25 Apr, 2006 1 commit
  12. 07 Nov, 2005 1 commit
  13. 08 Sep, 2005 1 commit
  14. 17 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4