1. 05 Sep, 2012 (14 commits)
  2. 16 Aug, 2012 (3 commits)
  3. 01 Aug, 2012 (2 commits)
    • mm: slub: optimise the SLUB fast path to avoid pfmemalloc checks · 5091b74a
      By Christoph Lameter
      This patch removes the check for pfmemalloc from the alloc hotpath and
      puts the logic after the election of a new per cpu slab.  For a pfmemalloc
      page we do not use the fast path but force the use of the slow path which
      is also used for the debug case.
      
      This has the side-effect of weakening pfmemalloc processing in the
       following way:
      
      1. A process that is allocating for network swap calls __slab_alloc.
         pfmemalloc_match is true so the freelist is loaded and c->freelist is
         now pointing to a pfmemalloc page.
      
      2. A process that is attempting normal allocations calls slab_alloc,
         finds the pfmemalloc page on the freelist and uses it because it did
          not check pfmemalloc_match().
      
      The patch allows non-pfmemalloc allocations to use pfmemalloc pages with
       the kmalloc slabs being the most vulnerable caches on the grounds they
      are most likely to have a mix of pfmemalloc and !pfmemalloc requests. A
      later patch will still protect the system as processes will get throttled
      if the pfmemalloc reserves get depleted but performance will not degrade
      as smoothly.
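
       The resulting flow can be sketched in miniature. This is a hedged userspace model, not kernel code: the structs and function names (other than the idea of pfmemalloc_match()) are made up for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified model of the idea in this patch: the fast path never
 * inspects pfmemalloc; a pfmemalloc page is simply never installed as
 * the per-cpu slab for ordinary callers, so those allocations fall
 * through to the slow path (which also handles the debug case). */
struct page_model { bool pfmemalloc; void *freelist; };
struct cpu_slab   { struct page_model *page; void *freelist; };

static bool pfmemalloc_match(struct page_model *page, bool caller_has_reserves)
{
    return !page->pfmemalloc || caller_has_reserves;
}

/* Fast path: no pfmemalloc check at all. */
static void *fast_alloc(struct cpu_slab *c)
{
    return c->freelist;
}

/* Slow path: the check happens only when electing a new per-cpu slab. */
static void *slow_alloc(struct cpu_slab *c, struct page_model *candidate,
                        bool caller_has_reserves)
{
    if (!pfmemalloc_match(candidate, caller_has_reserves))
        return NULL;              /* reject; pick another slab instead */
    c->page = candidate;          /* elect new per-cpu slab */
    c->freelist = candidate->freelist;
    return c->freelist;
}
```

In this model, a normal allocation can never see a pfmemalloc freelist through fast_alloc(), which is exactly the side-effect the changelog describes.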
      
      [mgorman@suse.de: Expanded changelog]
       Signed-off-by: Christoph Lameter <cl@linux.com>
       Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: David Miller <davem@davemloft.net>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Mel Gorman <mgorman@suse.de>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5091b74a
    • mm: sl[au]b: add knowledge of PFMEMALLOC reserve pages · 072bb0aa
      By Mel Gorman
       When a user or administrator requires swap for their application, they
       create a swap partition or file, format it with mkswap and activate it
       with swapon.  Swap over the network is considered an option in diskless
       systems.  The two likely scenarios are blade servers used as part of a
       cluster, where the form factor or maintenance costs do not allow the
       use of disks, and thin clients.
      
      The Linux Terminal Server Project recommends the use of the Network Block
      Device (NBD) for swap according to the manual at
      https://sourceforge.net/projects/ltsp/files/Docs-Admin-Guide/LTSPManual.pdf/download
      There is also documentation and tutorials on how to setup swap over NBD at
      places like https://help.ubuntu.com/community/UbuntuLTSP/EnableNBDSWAP The
      nbd-client also documents the use of NBD as swap.  Despite this, the fact
      is that a machine using NBD for swap can deadlock within minutes if swap
      is used intensively.  This patch series addresses the problem.
      
      The core issue is that network block devices do not use mempools like
      normal block devices do.  As the host cannot control where they receive
      packets from, they cannot reliably work out in advance how much memory
      they might need.  Some years ago, Peter Zijlstra developed a series of
       patches that supported swap over NFS, which at least one distribution
       carries in its kernels.  This patch series borrows very heavily
      from Peter's work to support swapping over NBD as a pre-requisite to
      supporting swap-over-NFS.  The bulk of the complexity is concerned with
      preserving memory that is allocated from the PFMEMALLOC reserves for use
      by the network layer which is needed for both NBD and NFS.
      
      Patch 1 adds knowledge of the PFMEMALLOC reserves to SLAB and SLUB to
      	preserve access to pages allocated under low memory situations
      	to callers that are freeing memory.
      
      Patch 2 optimises the SLUB fast path to avoid pfmemalloc checks
      
      Patch 3 introduces __GFP_MEMALLOC to allow access to the PFMEMALLOC
      	reserves without setting PFMEMALLOC.
      
      Patch 4 opens the possibility for softirqs to use PFMEMALLOC reserves
      	for later use by network packet processing.
      
      Patch 5 only sets page->pfmemalloc when ALLOC_NO_WATERMARKS was required
      
      Patch 6 ignores memory policies when ALLOC_NO_WATERMARKS is set.
      
       Patches 7-12 allow network processing to use PFMEMALLOC reserves when
      	the socket has been marked as being used by the VM to clean pages. If
      	packets are received and stored in pages that were allocated under
      	low-memory situations and are unrelated to the VM, the packets
      	are dropped.
      
       	Patch 11 reintroduces __skb_alloc_page, which the networking
       	folk may object to but which is needed in some cases to propagate
       	pfmemalloc from a newly allocated page to an skb. If there is a
      	strong objection, this patch can be dropped with the impact being
      	that swap-over-network will be slower in some cases but it should
      	not fail.
      
      Patch 13 is a micro-optimisation to avoid a function call in the
      	common case.
      
      Patch 14 tags NBD sockets as being SOCK_MEMALLOC so they can use
      	PFMEMALLOC if necessary.
      
      Patch 15 notes that it is still possible for the PFMEMALLOC reserve
      	to be depleted. To prevent this, direct reclaimers get throttled on
      	a waitqueue if 50% of the PFMEMALLOC reserves are depleted.  It is
       	expected that kswapd and the direct reclaimers already running
       	will clean enough pages for the low watermark to be reached, at
       	which point the throttled processes are woken up.
      
      Patch 16 adds a statistic to track how often processes get throttled
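
       The throttling rule in Patch 15 can be sketched as a tiny predicate. This is a hypothetical model: the function name, the 50% threshold expression, and the kswapd exemption are illustrative, not the kernel's actual code.

```c
#include <assert.h>
#include <stdbool.h>

/* Direct reclaimers throttle on a waitqueue once half of the
 * pfmemalloc reserve is depleted; kswapd is never throttled because
 * it must keep making reclaim progress. */
static bool should_throttle(long reserve_pages, long free_reserve_pages,
                            bool is_kswapd)
{
    if (is_kswapd)
        return false;
    return free_reserve_pages <= reserve_pages / 2;
}
```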
      
      Some basic performance testing was run using kernel builds, netperf on
      loopback for UDP and TCP, hackbench (pipes and sockets), iozone and
       sysbench.  Each of them was expected to use the sl*b allocators
      reasonably heavily but there did not appear to be significant performance
      variances.
      
      For testing swap-over-NBD, a machine was booted with 2G of RAM with a
      swapfile backed by NBD.  8*NUM_CPU processes were started that create
      anonymous memory mappings and read them linearly in a loop.  The total
       size of the mappings was 4*PHYSICAL_MEMORY to use swap heavily under
      memory pressure.
      
       Without the patches and using SLUB, the machine locks up within
       minutes; with them applied, the test runs to completion.  With SLAB,
       the story is different, as an unpatched kernel also runs to
       completion.  However, the patched kernel completed the test 45% faster.
      
      MICRO
                                                3.5.0-rc2   3.5.0-rc2
                                                  vanilla     swapnbd
      Unrecognised test vmscan-anon-mmap-write
      MMTests Statistics: duration
      Sys Time Running Test (seconds)             197.80    173.07
      User+Sys Time Running Test (seconds)        206.96    182.03
      Total Elapsed Time (seconds)               3240.70   1762.09
      
      This patch: mm: sl[au]b: add knowledge of PFMEMALLOC reserve pages
      
      Allocations of pages below the min watermark run a risk of the machine
      hanging due to a lack of memory.  To prevent this, only callers who have
      PF_MEMALLOC or TIF_MEMDIE set and are not processing an interrupt are
      allowed to allocate with ALLOC_NO_WATERMARKS.  Once they are allocated to
      a slab though, nothing prevents other callers consuming free objects
      within those slabs.  This patch limits access to slab pages that were
       allocated from the PFMEMALLOC reserves.
      
      When this patch is applied, pages allocated from below the low watermark
      are returned with page->pfmemalloc set and it is up to the caller to
      determine how the page should be protected.  SLAB restricts access to any
       page with page->pfmemalloc set to callers which are known to be able to
      access the PFMEMALLOC reserve.  If one is not available, an attempt is
      made to allocate a new page rather than use a reserve.  SLUB is a bit more
      relaxed in that it only records if the current per-CPU page was allocated
      from PFMEMALLOC reserve and uses another partial slab if the caller does
      not have the necessary GFP or process flags.  This was found to be
      sufficient in tests to avoid hangs due to SLUB generally maintaining
      smaller lists than SLAB.
      
      In low-memory conditions it does mean that !PFMEMALLOC allocators can fail
      a slab allocation even though free objects are available because they are
      being preserved for callers that are freeing pages.
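
       The access rule described above can be modelled as a small predicate. This is a hedged sketch with a made-up name; the flag names echo the ones in the changelog (PF_MEMALLOC, TIF_MEMDIE, __GFP_MEMALLOC) but the function is not a kernel API.

```c
#include <assert.h>
#include <stdbool.h>

/* Objects on a pfmemalloc slab page are reserved for callers entitled
 * to the PFMEMALLOC reserves; everyone else must get a fresh slab. */
static bool may_use_pfmemalloc_slab(bool page_pfmemalloc,
                                    bool pf_memalloc,
                                    bool tif_memdie,
                                    bool gfp_memalloc)
{
    if (!page_pfmemalloc)
        return true;      /* ordinary page: any caller may use it */
    return pf_memalloc || tif_memdie || gfp_memalloc;
}
```

This also makes the failure mode explicit: a !PFMEMALLOC caller can be refused objects that are physically free, because they are being preserved for memory-freeing callers.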
      
      [a.p.zijlstra@chello.nl: Original implementation]
      [sebastian@breakpoint.cc: Correct order of page flag clearing]
       Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: David Miller <davem@davemloft.net>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Christoph Lameter <cl@linux.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      072bb0aa
  4. 11 Jul, 2012 (1 commit)
  5. 09 Jul, 2012 (5 commits)
  6. 20 Jun, 2012 (3 commits)
    • slub: refactoring unfreeze_partials() · 43d77867
      By Joonsoo Kim
       The current implementation of unfreeze_partials() is complicated, but
       the benefit from it is insignificant.  In addition, much of the code
       inside the do {} while loop increases the failure rate of
       cmpxchg_double_slab.  The current implementation, which tests the
       status of the cpu partial slab and acquires list_lock inside the
       do {} while loop, lets us skip taking list_lock when the front of the
       cpu partial slab is to be discarded, but that is a rare case.  And
       when add_partial has been performed and cmpxchg_double_slab then
       fails, remove_partial must be called to undo it, case by case.
       
       These are disadvantages of the current implementation, so this patch
       refactors unfreeze_partials().
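
       The shape of the change can be modelled in userspace. Everything here is an illustrative stub: the cmpxchg stub, the "frozen" bit, and the function names are invented to show why a minimal retry loop fails less often.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct slab_state { unsigned long counters; void *freelist; };

/* Stub of a compare-and-exchange over (counters, freelist): succeeds
 * only if neither field changed since it was read. */
static bool cmpxchg_double_stub(struct slab_state *s,
                                unsigned long old_c, void *old_f,
                                unsigned long new_c, void *new_f)
{
    if (s->counters != old_c || s->freelist != old_f)
        return false;
    s->counters = new_c;
    s->freelist = new_f;
    return true;
}

/* Refactored pattern: the do {} while loop contains only the read and
 * the cmpxchg, so the window for a concurrent modification is tiny;
 * list manipulation happens once, after the cmpxchg has succeeded. */
static void unfreeze_one(struct slab_state *s, int *list_ops)
{
    unsigned long old_c, new_c;
    void *old_f;
    do {
        old_c = s->counters;
        old_f = s->freelist;
        new_c = old_c & ~1UL;        /* e.g. clear a "frozen" bit */
    } while (!cmpxchg_double_stub(s, old_c, old_f, new_c, old_f));
    (*list_ops)++;                   /* list work outside the loop */
}
```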
      
       Minimizing the code in the do {} while loop reduces the failure rate
       of cmpxchg_double_slab.  Below is the output of 'slabinfo -r kmalloc-256'
       after './perf stat -r 33 hackbench 50 process 4000 > /dev/null' has run.
      
      ** before **
      Cmpxchg_double Looping
      ------------------------
      Locked Cmpxchg Double redos   182685
      Unlocked Cmpxchg Double redos 0
      
      ** after **
      Cmpxchg_double Looping
      ------------------------
      Locked Cmpxchg Double redos   177995
      Unlocked Cmpxchg Double redos 1
      
       We can see that the cmpxchg_double_slab failure rate improves slightly.
      
       Below is the output of './perf stat -r 30 hackbench 50 process 4000 > /dev/null'.
      
      ** before **
       Performance counter stats for './hackbench 50 process 4000' (30 runs):
      
           108517.190463 task-clock                #    7.926 CPUs utilized            ( +-  0.24% )
               2,919,550 context-switches          #    0.027 M/sec                    ( +-  3.07% )
                 100,774 CPU-migrations            #    0.929 K/sec                    ( +-  4.72% )
                 124,201 page-faults               #    0.001 M/sec                    ( +-  0.15% )
         401,500,234,387 cycles                    #    3.700 GHz                      ( +-  0.24% )
         <not supported> stalled-cycles-frontend
         <not supported> stalled-cycles-backend
         250,576,913,354 instructions              #    0.62  insns per cycle          ( +-  0.13% )
          45,934,956,860 branches                  #  423.297 M/sec                    ( +-  0.14% )
             188,219,787 branch-misses             #    0.41% of all branches          ( +-  0.56% )
      
            13.691837307 seconds time elapsed                                          ( +-  0.24% )
      
      ** after **
       Performance counter stats for './hackbench 50 process 4000' (30 runs):
      
           107784.479767 task-clock                #    7.928 CPUs utilized            ( +-  0.22% )
               2,834,781 context-switches          #    0.026 M/sec                    ( +-  2.33% )
                  93,083 CPU-migrations            #    0.864 K/sec                    ( +-  3.45% )
                 123,967 page-faults               #    0.001 M/sec                    ( +-  0.15% )
         398,781,421,836 cycles                    #    3.700 GHz                      ( +-  0.22% )
         <not supported> stalled-cycles-frontend
         <not supported> stalled-cycles-backend
         250,189,160,419 instructions              #    0.63  insns per cycle          ( +-  0.09% )
          45,855,370,128 branches                  #  425.436 M/sec                    ( +-  0.10% )
             169,881,248 branch-misses             #    0.37% of all branches          ( +-  0.43% )
      
            13.596272341 seconds time elapsed                                          ( +-  0.22% )
      
       No regression is found; rather, the result is slightly better.
       Acked-by: Christoph Lameter <cl@linux.com>
       Signed-off-by: Joonsoo Kim <js1304@gmail.com>
       Signed-off-by: Pekka Enberg <penberg@kernel.org>
      43d77867
    • slub: use __cmpxchg_double_slab() at interrupt disabled place · d24ac77f
      By Joonsoo Kim
       get_freelist() and unfreeze_partials() are only called with interrupts
       disabled, so __cmpxchg_double_slab() is suitable.
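
       The distinction can be modelled in userspace. These are illustrative stubs, not the kernel API: the counter stands in for the cost of the local_irq_save()/restore() pair that the plain variant performs and the __ variant omits.

```c
#include <assert.h>
#include <stdbool.h>

static int irq_toggle_count;   /* models irq save/restore overhead */

static bool do_cmpxchg(long *v, long old, long new)
{
    if (*v != old)
        return false;
    *v = new;
    return true;
}

/* Variant for callers that have NOT disabled interrupts themselves. */
static bool cmpxchg_double_slab_model(long *v, long old, long new)
{
    bool ok;
    irq_toggle_count++;        /* local_irq_save() */
    ok = do_cmpxchg(v, old, new);
    irq_toggle_count++;        /* local_irq_restore() */
    return ok;
}

/* Double-underscore variant: the caller guarantees interrupts are
 * already off, so the save/restore pair is skipped entirely. */
static bool cmpxchg_double_slab_irqs_off_model(long *v, long old, long new)
{
    return do_cmpxchg(v, old, new);
}
```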
       Acked-by: Christoph Lameter <cl@linux.com>
       Signed-off-by: Joonsoo Kim <js1304@gmail.com>
       Signed-off-by: Pekka Enberg <penberg@kernel.org>
      d24ac77f
    • slab/mempolicy: always use local policy from interrupt context · e7b691b0
      By Andi Kleen
      slab_node() could access current->mempolicy from interrupt context.
      However there's a race condition during exit where the mempolicy
      is first freed and then the pointer zeroed.
      
       Using this from interrupts seems bogus anyway.  The interrupt
       will interrupt a random process and therefore get a random
       mempolicy.  Many times, this will be idle's, which no one can change.
      
      Just disable this here and always use local for slab
      from interrupts. I also cleaned up the callers of slab_node a bit
      which always passed the same argument.
      
      I believe the original mempolicy code did that in fact,
      so it's likely a regression.
      
      v2: send version with correct logic
      v3: simplify. fix typo.
       Reported-by: Arun Sharma <asharma@fb.com>
      Cc: penberg@kernel.org
      Cc: cl@linux.com
       Signed-off-by: Andi Kleen <ak@linux.intel.com>
      [tdmackey@twitter.com: Rework control flow based on feedback from
      cl@linux.com, fix logic, and cleanup current task_struct reference]
       Acked-by: David Rientjes <rientjes@google.com>
       Acked-by: Christoph Lameter <cl@linux.com>
       Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
       Signed-off-by: David Mackey <tdmackey@twitter.com>
       Signed-off-by: Pekka Enberg <penberg@kernel.org>
      e7b691b0
  7. 14 Jun, 2012 (1 commit)
    • mm, sl[aou]b: Extract common fields from struct kmem_cache · 3b0efdfa
      By Christoph Lameter
      Define a struct that describes common fields used in all slab allocators.
      A slab allocator either uses the common definition (like SLOB) or is
      required to provide members of kmem_cache with the definition given.
      
      After that it will be possible to share code that
      only operates on those fields of kmem_cache.
      
       The patch basically takes the slob definition of kmem_cache and
       uses those field names for the other allocators.
      
      It also standardizes the names used for basic object lengths in
      allocators:
      
      object_size	Struct size specified at kmem_cache_create. Basically
      		the payload expected to be used by the subsystem.
      
       size		The size of memory allocated for each object.  This size
       		is larger than object_size and includes padding, alignment
       		and extra metadata for each object (e.g. for debugging
       		and rcu).
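
       The relationship between the two lengths can be illustrated in miniature. This is not the kernel's struct layout: the struct and the padded_size() helper are made up to show how size grows from object_size through metadata and alignment.

```c
#include <assert.h>

/* Illustrative common fields: the payload length requested at
 * kmem_cache_create time, and the full per-object footprint. */
struct kmem_cache_common {
    unsigned int object_size;  /* payload expected by the subsystem */
    unsigned int size;         /* object_size + metadata, rounded up */
};

/* Hypothetical helper: add per-object metadata, round up to align. */
static unsigned int padded_size(unsigned int object_size,
                                unsigned int align,
                                unsigned int metadata)
{
    unsigned int size = object_size + metadata;
    return (size + align - 1) / align * align;
}
```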
       Signed-off-by: Christoph Lameter <cl@linux.com>
       Signed-off-by: Pekka Enberg <penberg@kernel.org>
      3b0efdfa
  8. 01 Jun, 2012 (9 commits)
  9. 18 May, 2012 (2 commits)
    • slub: use __SetPageSlab function to set PG_slab flag · c03f94cc
      By Joonsoo Kim
       To set a page flag, using SetPageXXXX() and __SetPageXXXX() is more
       understandable and maintainable, so change it.
       Signed-off-by: Joonsoo Kim <js1304@gmail.com>
       Signed-off-by: Pekka Enberg <penberg@kernel.org>
      c03f94cc
    • slub: fix a memory leak in get_partial_node() · 02d7633f
      By Joonsoo Kim
       In the following case, a memory leak occurs:
       
       1. a slab is acquired for the cpu partial list
       2. an object is freed to it by a remote cpu
       3. page->freelist = t
       
       Change acquire_slab() not to zap the freelist when it is acquiring for
       the cpu partial list.  I think this is a sufficient solution for
       fixing the memory leak.
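
       The fix can be sketched as follows. This is a hedged userspace model, not the real acquire_slab(): the struct, the mode flag, and the function name are illustrative only.

```c
#include <assert.h>
#include <stddef.h>

struct page_model { void *freelist; };

/* Model of the fix: when taking every object (whole_mode), the page's
 * freelist is consumed and zapped; when acquiring for the cpu partial
 * list, the freelist is left on the page, so objects freed to it by a
 * remote cpu are not lost (the leak this commit fixes). */
static void *acquire_slab_model(struct page_model *page, int whole_mode)
{
    void *freelist = page->freelist;
    if (whole_mode)
        page->freelist = NULL;
    return freelist;
}
```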
      
       Below is the output of 'slabinfo -r kmalloc-256'
       after './perf stat -r 30 hackbench 50 process 4000 > /dev/null' has run.
      
      ***Vanilla***
      Sizes (bytes)     Slabs              Debug                Memory
      ------------------------------------------------------------------------
      Object :     256  Total  :     468   Sanity Checks : Off  Total: 3833856
      SlabObj:     256  Full   :     111   Redzoning     : Off  Used : 2004992
      SlabSiz:    8192  Partial:     302   Poisoning     : Off  Loss : 1828864
      Loss   :       0  CpuSlab:      55   Tracking      : Off  Lalig:       0
      Align  :       8  Objects:      32   Tracing       : Off  Lpadd:       0
      
      ***Patched***
      Sizes (bytes)     Slabs              Debug                Memory
      ------------------------------------------------------------------------
      Object :     256  Total  :     300   Sanity Checks : Off  Total: 2457600
      SlabObj:     256  Full   :     204   Redzoning     : Off  Used : 2348800
      SlabSiz:    8192  Partial:      33   Poisoning     : Off  Loss :  108800
      Loss   :       0  CpuSlab:      63   Tracking      : Off  Lalig:       0
      Align  :       8  Objects:      32   Tracing       : Off  Lpadd:       0
      
       The Total and Loss numbers show the impact of this patch.
      
      Cc: <stable@vger.kernel.org>
       Acked-by: Christoph Lameter <cl@linux.com>
       Signed-off-by: Joonsoo Kim <js1304@gmail.com>
       Signed-off-by: Pekka Enberg <penberg@kernel.org>
      02d7633f