1. 09 Sep, 2015 1 commit
  2. 16 Apr, 2015 3 commits
  3. 15 Apr, 2015 1 commit
  4. 07 Jun, 2014 1 commit
  5. 05 Jun, 2014 1 commit
  6. 08 Apr, 2014 1 commit
  7. 12 Sep, 2013 1 commit
  8. 25 Jun, 2012 1 commit
  9. 11 Jan, 2012 3 commits
    • mempool: fix first round failure behavior · 1ebb7044
      Authored by Tejun Heo
      mempool modifies gfp_mask so that the backing allocator doesn't try too
      hard or trigger warning messages when there's a pool to fall back on.  In
      addition, for the first try, it removes __GFP_WAIT and IO, so that it
      doesn't trigger reclaim or wait when the allocation can be fulfilled from
      the pool; however, when that allocation fails and the pool is empty too,
      it waits for the pool to be replenished before retrying.
      
      An allocation which could have succeeded after a bit of reclaim has to
      wait on the reserved items, and it's not as if mempool doesn't retry with
      __GFP_WAIT and IO.  It just does that *after* someone returns an element,
      pointlessly delaying things.
      
      Fix it by retrying immediately if the first round of allocation attempts
      without __GFP_WAIT and IO fails.
      
      [akpm@linux-foundation.org: shorten the lock hold time]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
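      A minimal user-space sketch of the attempt order described in 1ebb7044
      above, not the actual mm/mempool.c code: pool_model, backing_alloc() and
      take_from_reserve() are names invented for this illustration, and the
      may_block flag stands in for __GFP_WAIT and __GFP_IO.

      	#include <stdbool.h>
      	#include <stdlib.h>

      	struct pool_model {
      		void  **elements;	/* reserved elements */
      		size_t  curr_nr;	/* elements currently in the reserve */
      	};

      	/* Stand-in for pool->alloc(); may_block mimics __GFP_WAIT | __GFP_IO. */
      	static void *backing_alloc(bool may_block)
      	{
      		if (!may_block && (rand() % 4 == 0))
      			return NULL;	/* the cheap attempt sometimes fails */
      		return malloc(64);
      	}

      	static void *take_from_reserve(struct pool_model *pool)
      	{
      		return pool->curr_nr ? pool->elements[--pool->curr_nr] : NULL;
      	}

      	void *pool_alloc_model(struct pool_model *pool)
      	{
      		void *element;

      		/* First round: no blocking, so a ready reserve element wins. */
      		element = backing_alloc(false);
      		if (element)
      			return element;

      		element = take_from_reserve(pool);
      		if (element)
      			return element;

      		/*
      		 * The fix: retry the backing allocator immediately with
      		 * blocking allowed instead of sleeping until someone
      		 * returns an element to the pool.
      		 */
      		element = backing_alloc(true);
      		if (element)
      			return element;

      		/* Only here would the real code wait for the pool to refill. */
      		return NULL;
      	}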
    • mempool: drop unnecessary and incorrect BUG_ON() from mempool_destroy() · 0565d317
      Authored by Tejun Heo
      mempool_destroy() is a thin wrapper around free_pool().  The only thing it
      adds is BUG_ON(pool->curr_nr != pool->min_nr).  The intention seems to be
      to enforce that all allocated elements are freed; however, the BUG_ON()
      can't achieve that (it doesn't know anything about objects above min_nr)
      and is incorrect, as mempool_resize() is allowed to leave the pool
      extended but not filled.  Furthermore, panicking is far worse than any
      memory leak, and there are better debugging tools for tracking memory
      leaks.
      
      Drop the BUG_ON() from mempool_destroy() and, as that leaves the function
      identical to free_pool(), replace it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
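      A small user-space model of the point above, with an invented pool_model
      struct standing in for mempool_t: after a resize that raised min_nr but
      could not immediately fill the new slots, curr_nr != min_nr is a legal
      state, so asserting equality at destroy time would panic on a healthy
      pool.

      	#include <stdlib.h>

      	struct pool_model {
      		void **elements;
      		size_t curr_nr;	/* elements currently held in reserve */
      		size_t min_nr;	/* reserve size the pool aims for */
      	};

      	static void free_pool(struct pool_model *pool)
      	{
      		while (pool->curr_nr)
      			free(pool->elements[--pool->curr_nr]);
      		free(pool->elements);
      		free(pool);
      	}

      	void pool_destroy(struct pool_model *pool)
      	{
      		/*
      		 * The dropped check, assert(pool->curr_nr == pool->min_nr),
      		 * says nothing about elements handed out beyond min_nr and
      		 * trips after a partially filled resize; without it, destroy
      		 * is identical to free_pool().
      		 */
      		free_pool(pool);
      	}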
    • mempool: fix and document synchronization and memory barrier usage · 5b990546
      Authored by Tejun Heo
      mempool_alloc/free() use undocumented smp_mb()'s.  The code is slightly
      broken and misleading.
      
      The lockless part is in mempool_free().  It wants to determine whether the
      item being freed needs to be returned to the pool or backing allocator
      without grabbing pool->lock.  Two things need to be guaranteed for correct
      operation.
      
      1. pool->curr_nr + #allocated should never dip below pool->min_nr.
      2. Waiters shouldn't be left dangling.
      
      For #1, the only necessary condition is that the curr_nr visible at free
      is from after the allocation of the element being freed (details in the
      comment).  For most cases, this is true without any barrier, but there
      can be fringe cases where the allocated pointer is passed to the freeing
      task without going through memory barriers.  To cover this case, a wmb is
      necessary before returning from allocation and an rmb is necessary before
      reading curr_nr.  IOW,
      
      	ALLOCATING TASK			FREEING TASK
      
      	update pool state after alloc;
      	wmb();
      	pass pointer to freeing task;
      					read pointer;
      					rmb();
      					read pool state to free;
      
      The current code doesn't have a wmb after the pool update during
      allocation and may theoretically, on machines where unlock doesn't behave
      as a full wmb, lead to pool depletion and deadlock.  smp_wmb() needs to be
      added after successful allocation from the reserved elements, and the
      smp_mb() in mempool_free() can be replaced with smp_rmb().
      
      For #2, the waiter needs to add itself to the waitqueue and then check the
      wait condition, and the waker needs to update the wait condition and then
      wake up.  Because waitqueue operations always go through full spinlock
      synchronization, there is no need for extra memory barriers.
      
      Furthermore, mempool_alloc() is already holding pool->lock when it decides
      that it needs to wait.  There is no reason to unlock, add itself to the
      waitqueue, and then test the condition again.  It can simply add itself to
      the waitqueue while holding pool->lock and then unlock and sleep.
      
      This patch adds smp_wmb() after successful allocation from the reserved
      pool, replaces the smp_mb() in mempool_free() with smp_rmb(), and extends
      pool->lock over the waitqueue addition.  More importantly, it explains
      what the memory barriers do and why the lockless testing is correct.
      
      -v2: Oleg pointed out that unlock doesn't imply wmb.  Added explicit
           smp_wmb() after successful allocation from reserved pool and
           updated comments accordingly.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
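      A user-space model of the pairing in the diagram above, using C11 fences
      in place of the kernel's smp_wmb()/smp_rmb(); pool_model, shared_slot and
      both function names are invented for this sketch, and relaxed atomics
      stand in for the kernel's plain loads and stores.  The property being
      modelled is that the curr_nr the freeing side reads is from after the
      allocation of the element it received.

      	#include <stdatomic.h>
      	#include <stdbool.h>

      	struct pool_model {
      		atomic_int curr_nr;	/* reserved elements currently in the pool */
      		int        min_nr;	/* reserve size the pool aims for */
      	};

      	static struct pool_model pool = { .curr_nr = 2, .min_nr = 2 };
      	static void *_Atomic shared_slot;	/* how the pointer reaches the freer */

      	/* ALLOCATING TASK: take an element from the reserve, then publish it. */
      	void allocate_and_publish(void *element)
      	{
      		atomic_fetch_sub_explicit(&pool.curr_nr, 1,
      					  memory_order_relaxed);	/* update pool state */
      		atomic_thread_fence(memory_order_release);	/* kernel: smp_wmb() */
      		atomic_store_explicit(&shared_slot, element,
      				      memory_order_relaxed);	/* pass the pointer */
      	}

      	/* FREEING TASK: read the pointer, then look at the pool state. */
      	bool needs_return_to_pool(void)
      	{
      		void *element = atomic_load_explicit(&shared_slot,
      						     memory_order_relaxed);
      		if (!element)
      			return false;
      		atomic_thread_fence(memory_order_acquire);	/* kernel: smp_rmb() */
      		/* This curr_nr read is now guaranteed to be post-allocation. */
      		return atomic_load_explicit(&pool.curr_nr,
      					    memory_order_relaxed) < pool.min_nr;
      	}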
  10. 31 Oct, 2011 1 commit
  11. 22 Sep, 2009 1 commit
  12. 10 Aug, 2009 1 commit
  13. 20 Oct, 2007 1 commit
  14. 18 Jul, 2007 1 commit
  15. 17 Jul, 2007 1 commit
  16. 12 Feb, 2007 1 commit
  17. 02 Sep, 2006 1 commit
    • [PATCH] dm: work around mempool_alloc, bio_alloc_bioset deadlocks · 0b1d647a
      Authored by Pavel Mironchik
      This patch works around a complex dm-related deadlock/livelock down in the
      mempool allocator.
      
      Alasdair said:
      
        Several dm targets suffer from this.
      
        Mempools are not yet used correctly everywhere in device-mapper: they can
        get shared when devices are stacked, and some targets share them across
        multiple instances.  I made fixing this one of the prerequisites for this
        patch:
      
          md-dm-reduce-stack-usage-with-stacked-block-devices.patch
      
        which in some cases makes people more likely to hit the problem.
      
        There's been some progress on this recently with (unfinished) dm-crypt
        patches at:
      
          http://www.kernel.org/pub/linux/kernel/people/agk/patches/2.6/editing/
            (dm-crypt-move-io-to-workqueue.patch plus dependencies)
      
      and:
      
        I've no problems with a temporary workaround like that, but Milan Broz (a
        new Red Hat developer in the Czech Republic) has started reviewing all the
        mempool usage in device-mapper, so I'm expecting we'll soon have a proper
        fix for this and the associated problems.  [He's back from holiday at the
        start of next week.]
      
      For now, this sad-but-safe little patch will allow the machine to recover.
      
      [akpm@osdl.org: rewrote changelog]
      Cc: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  18. 27 Mar, 2006 4 commits
  19. 22 Mar, 2006 1 commit
  20. 28 Oct, 2005 1 commit
  21. 09 Oct, 2005 1 commit
  22. 08 Jul, 2005 1 commit
  23. 24 Jun, 2005 2 commits
  24. 01 May, 2005 3 commits
    • [PATCH] use smp_mb/wmb/rmb where possible · d59dd462
      Authored by akpm@osdl.org
      Replace a number of memory barriers with smp_ variants.  This means we won't
      take the unnecessary hit on UP machines.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mempool: simplify alloc · 20a77776
      Authored by Nick Piggin
      Mempool is pretty clever.  Looks too clever for its own good :) It
      shouldn't really know so much about page reclaim internals.
      
      - don't guess about what effective page reclaim might involve.
      
      - don't randomly flush out all dirty data if some unlikely thing
        happens (alloc returns NULL). page reclaim can (sort of :P) handle
        it.
      
      I think the main motivation is trying to avoid pool->lock at all costs.
      However, the first allocation is attempted with __GFP_WAIT cleared, so it
      will be 'can_try_harder' if it hits the page allocator.  So if the
      allocation still fails, we can probably afford to hit pool->lock - and
      what's the alternative?  Try page reclaim and hit zone->lru_lock?
      
      A nice upshot is that we don't need any fancy memory barriers or any
      (intentionally) racy accesses to pool-> fields outside the lock.
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mempool: NOMEMALLOC and NORETRY · b84a35be
      Authored by Nick Piggin
      Mempools have 2 problems.
      
      The first is that mempool_alloc can get stuck in __alloc_pages when it
      should opt to fail and take an element from its reserved pool.
      
      The second is that it will happily eat the emergency PF_MEMALLOC reserves
      instead of going to its reserved pool.
      
      Fix the first by passing __GFP_NORETRY in the allocation calls in
      mempool_alloc.  Fix the second by introducing a __GFP_MEMPOOL flag which
      directs the page allocator not to allocate from the reserve pool.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
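      An illustration of the two fixes described in b84a35be above, not kernel
      code: the flag values below are placeholders invented for this sketch
      (the no-reserve flag is the one the title calls NOMEMALLOC), and
      backing_alloc(), take_from_reserve() and pool_alloc_model() are made-up
      stand-ins that only show how the mask is adjusted before calling the
      backing allocator.

      	#include <stddef.h>

      	typedef unsigned int gfp_t;

      	/* Placeholder bit values, for illustration only. */
      	#define __GFP_NORETRY     0x1u	/* fail quickly instead of looping */
      	#define __GFP_NOMEMALLOC  0x2u	/* keep away from emergency reserves */

      	/* Stand-ins for the backing allocator and the reserved pool. */
      	static void *backing_alloc(gfp_t gfp_mask) { (void)gfp_mask; return NULL; }
      	static void *take_from_reserve(void) { return NULL; }

      	void *pool_alloc_model(gfp_t gfp_mask)
      	{
      		void *element;

      		/*
      		 * Fix 1: __GFP_NORETRY makes the page allocator fail promptly,
      		 * so we fall back on the reserved pool instead of getting stuck.
      		 * Fix 2: the no-reserve flag keeps this attempt away from the
      		 * emergency PF_MEMALLOC reserves the pool is meant to replace.
      		 */
      		gfp_mask |= __GFP_NORETRY | __GFP_NOMEMALLOC;

      		element = backing_alloc(gfp_mask);
      		if (element)
      			return element;

      		return take_from_reserve();
      	}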
  25. 17 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!