1. 01 June 2012, 7 commits
  2. 18 May 2012, 1 commit
    • slub: missing test for partial pages flush work in flush_all() · 02e1a9cd
      Authored by majianpeng
      I found some kernel messages such as:
      
          SLUB raid5-md127: kmem_cache_destroy called for cache that still has objects.
          Pid: 6143, comm: mdadm Tainted: G           O 3.4.0-rc6+        #75
          Call Trace:
          kmem_cache_destroy+0x328/0x400
          free_conf+0x2d/0xf0 [raid456]
          stop+0x41/0x60 [raid456]
          md_stop+0x1a/0x60 [md_mod]
          do_md_stop+0x74/0x470 [md_mod]
          md_ioctl+0xff/0x11f0 [md_mod]
          blkdev_ioctl+0xd8/0x7a0
          block_ioctl+0x3b/0x40
          do_vfs_ioctl+0x96/0x560
          sys_ioctl+0x91/0xa0
          system_call_fastpath+0x16/0x1b
      
      Then using kmemleak I found these messages:
      
          unreferenced object 0xffff8800b6db7380 (size 112):
            comm "mdadm", pid 5783, jiffies 4294810749 (age 90.589s)
            hex dump (first 32 bytes):
              01 01 db b6 ad 4e ad de ff ff ff ff ff ff ff ff  .....N..........
              ff ff ff ff ff ff ff ff 98 40 4a 82 ff ff ff ff  .........@J.....
            backtrace:
              kmemleak_alloc+0x21/0x50
              kmem_cache_alloc+0xeb/0x1b0
              kmem_cache_open+0x2f1/0x430
              kmem_cache_create+0x158/0x320
              setup_conf+0x649/0x770 [raid456]
              run+0x68b/0x840 [raid456]
              md_run+0x529/0x940 [md_mod]
              do_md_run+0x18/0xc0 [md_mod]
              md_ioctl+0xba8/0x11f0 [md_mod]
              blkdev_ioctl+0xd8/0x7a0
              block_ioctl+0x3b/0x40
              do_vfs_ioctl+0x96/0x560
              sys_ioctl+0x91/0xa0
              system_call_fastpath+0x16/0x1b
      
      This bug was introduced by commit a8364d55 ("slub: only IPI CPUs that
      have per cpu obj to flush"), which did not include checks for per cpu
      partial pages being present on a cpu.
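
      The fix lives in the has_cpu_slab() predicate that flush_all() uses to
      decide which CPUs to IPI; a minimal sketch of the corrected check (not
      the verbatim diff) is:

          static bool has_cpu_slab(int cpu, void *info)
          {
                  struct kmem_cache *s = info;
                  struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);

                  /* flush when either a cpu slab or per cpu partial pages exist */
                  return c->page || c->partial;
          }
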
      Signed-off-by: majianpeng <majianpeng@gmail.com>
      Cc: Gilad Ben-Yossef <gilad@benyossef.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Tested-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      02e1a9cd
  3. 29 March 2012, 1 commit
    • slub: only IPI CPUs that have per cpu obj to flush · a8364d55
      Authored by Gilad Ben-Yossef
      flush_all() is called for each kmem_cache_destroy().  So every cache being
      destroyed dynamically ends up sending an IPI to each CPU in the system,
      regardless of whether the cache has ever been used there.
      
      For example, if you close the Infiniband ipath driver char device file, the
      close file ops calls kmem_cache_destroy().  So running some infiniband
      config tool on a single CPU dedicated to system tasks might interrupt the
      rest of the 127 CPUs dedicated to some CPU-intensive or latency-sensitive
      task.
      
      I suspect there is a good chance that every line in the output of "git
      grep kmem_cache_destroy linux/ | grep '\->'" has a similar scenario.
      
      This patch attempts to rectify the issue by sending the IPI that flushes
      per cpu objects back to the free lists only to CPUs that seem to have
      such objects.
      
      The check of which CPUs to IPI is racy, but we don't care, since asking a
      CPU without per cpu objects to flush does no damage, and as far as I can
      tell flush_all() by itself is racy against allocs on remote CPUs anyway;
      so if you required flush_all() to be deterministic, you had to arrange for
      locking regardless.
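
      In outline, the patch replaces the unconditional on_each_cpu() call in
      flush_all() with on_each_cpu_cond() and a per-cpu predicate; a rough
      sketch (details may differ slightly from the actual diff):

          static void flush_all(struct kmem_cache *s)
          {
                  /*
                   * has_cpu_slab(cpu, s) checks whether that cpu's kmem_cache_cpu
                   * currently holds a cpu slab; only those CPUs receive the IPI.
                   * wait=1 keeps the old synchronous behaviour, and GFP_ATOMIC is
                   * for the temporary cpumask needed when CPUMASK_OFFSTACK=y (on
                   * allocation failure the helper falls back to IPI-ing matching
                   * CPUs one at a time).
                   */
                  on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1, GFP_ATOMIC);
          }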
      
      Without this patch the following artificial test case:
      
      $ cd /sys/kernel/slab
      $ for DIR in *; do cat $DIR/alloc_calls > /dev/null; done
      
      produces 166 IPIs on a cpuset-isolated CPU. With it, it produces none.
      
      The memory-allocation-failure code path for the CPUMASK_OFFSTACK=y
      config was tested using the fault injection framework.
      Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Sasha Levin <levinsasha928@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Michal Nazarewicz <mina86@mina86.org>
      Cc: Kosaki Motohiro <kosaki.motohiro@gmail.com>
      Cc: Milton Miller <miltonm@bga.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a8364d55
  4. 22 March 2012, 1 commit
    • cpuset: mm: reduce large amounts of memory barrier related damage v3 · cc9a6c87
      Authored by Mel Gorman
      Commit c0ff7453 ("cpuset,mm: fix no node to alloc memory when
      changing cpuset's mems") wins a super prize for the largest number of
      memory barriers entered into fast paths for one commit.
      
      [get|put]_mems_allowed is incredibly heavy with pairs of full memory
      barriers inserted into a number of hot paths.  This was detected while
      investigating a large page allocator slowdown introduced some time
      after 2.6.32.  The largest portion of this overhead was shown by
      oprofile to be at an mfence introduced by this commit into the page
      allocator hot path.
      
      For extra style points, the commit introduced the use of yield() in an
      implementation of what looks like a spinning mutex.
      
      This patch replaces the full memory barriers on both read and write
      sides with a sequence counter with just read barriers on the fast path
      side.  This is much cheaper on some architectures, including x86.  The
      main bulk of the patch is the retry logic if the nodemask changes in a
      manner that can cause a false failure.
      
      While updating the nodemask, a check is made to see if a false failure
      is a risk.  If it is, the sequence number gets bumped and parallel
      allocators will briefly stall while the nodemask update takes place.
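
      The core pattern is a plain seqcount: the cpuset update bumps a per-task
      sequence counter around the nodemask write, and the allocator fast path
      only pays for a read of that counter, retrying a failed allocation if the
      mask changed underneath it.  A hedged sketch (try_to_allocate() is a
      hypothetical stand-in for the real allocation attempt):

          /* writer side (cpuset nodemask update), with the task lock held */
          write_seqcount_begin(&tsk->mems_allowed_seq);
          tsk->mems_allowed = *newmems;
          write_seqcount_end(&tsk->mems_allowed_seq);

          /* reader side (allocator fast path): no full memory barriers */
          do {
                  seq = read_seqcount_begin(&current->mems_allowed_seq);
                  page = try_to_allocate(gfp_mask, order);
          } while (!page && read_seqcount_retry(&current->mems_allowed_seq, seq));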
      
      In a page fault test microbenchmark, oprofile samples from
      __alloc_pages_nodemask went from 4.53% of all samples to 1.15%.  The
      actual results were
      
                                   3.3.0-rc3          3.3.0-rc3
                                   rc3-vanilla        nobarrier-v2r1
          Clients   1 UserTime       0.07 (  0.00%)   0.08 (-14.19%)
          Clients   2 UserTime       0.07 (  0.00%)   0.07 (  2.72%)
          Clients   4 UserTime       0.08 (  0.00%)   0.07 (  3.29%)
          Clients   1 SysTime        0.70 (  0.00%)   0.65 (  6.65%)
          Clients   2 SysTime        0.85 (  0.00%)   0.82 (  3.65%)
          Clients   4 SysTime        1.41 (  0.00%)   1.41 (  0.32%)
          Clients   1 WallTime       0.77 (  0.00%)   0.74 (  4.19%)
          Clients   2 WallTime       0.47 (  0.00%)   0.45 (  3.73%)
          Clients   4 WallTime       0.38 (  0.00%)   0.37 (  1.58%)
          Clients   1 Flt/sec/cpu  497620.28 (  0.00%) 520294.53 (  4.56%)
          Clients   2 Flt/sec/cpu  414639.05 (  0.00%) 429882.01 (  3.68%)
          Clients   4 Flt/sec/cpu  257959.16 (  0.00%) 258761.48 (  0.31%)
          Clients   1 Flt/sec      495161.39 (  0.00%) 517292.87 (  4.47%)
          Clients   2 Flt/sec      820325.95 (  0.00%) 850289.77 (  3.65%)
          Clients   4 Flt/sec      1020068.93 (  0.00%) 1022674.06 (  0.26%)
          MMTests Statistics: duration
          Sys Time Running Test (seconds)             135.68    132.17
          User+Sys Time Running Test (seconds)         164.2    160.13
          Total Elapsed Time (seconds)                123.46    120.87
      
      The overall improvement is small but the System CPU time is much
      improved and roughly in line with what oprofile reported (these
      performance figures are without profiling, so some skew is expected).  The
      actual number of page faults is noticeably improved.
      
      For benchmarks like kernel builds, the overall benefit is marginal but
      the system CPU time is slightly reduced.
      
      To test the actual bug the commit fixed I opened two terminals.  The
      first ran within a cpuset and continually ran a small program that
      faulted 100M of anonymous data.  In a second window, the nodemask of the
      cpuset was continually randomised in a loop.
      
      Without the commit, the program would fail every so often (usually
      within 10 seconds) and obviously with the commit everything worked fine.
      With this patch applied, it also worked fine so the fix should be
      functionally equivalent.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Miao Xie <miaox@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cc9a6c87
  5. 18 February 2012, 1 commit
  6. 10 February 2012, 1 commit
  7. 06 February 2012, 1 commit
  8. 25 January 2012, 1 commit
    • slub: prefetch next freelist pointer in slab_alloc() · 0ad9500e
      Authored by Eric Dumazet
      Recycling a page is a problem, since the freelist link chain is hot on the
      cpu(s) which freed the objects, and possibly very cold on the cpu currently
      owning the slab.
      
      Adding a prefetch of the cache line containing the pointer to the next
      object in slab_alloc() helps a lot in many workloads, in particular
      asymmetric ones (allocations done on one cpu, frees on other cpus). The
      added cost is only three machine instructions.
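
      The change itself is tiny; roughly (a sketch, not the exact diff):

          static void prefetch_freepointer(const struct kmem_cache *s, void *object)
          {
                  /* the freelist pointer is stored at offset s->offset in the object */
                  prefetch(object + s->offset);
          }

          /* in slab_alloc(), once the freelist head has been claimed: */
          prefetch_freepointer(s, next_object);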
      
      Examples on my dual socket quad core ht machine (Intel CPU E5540
      @2.53GHz) (16 logical cpus, 2 memory nodes), 64bit kernel.
      
      Before patch :
      
      # perf stat -r 32 hackbench 50 process 4000 >/dev/null
      
       Performance counter stats for 'hackbench 50 process 4000' (32 runs):
      
           327577,471718 task-clock                #   15,821 CPUs utilized            ( +-  0,64% )
              28 866 491 context-switches          #    0,088 M/sec                    ( +-  1,80% )
               1 506 929 CPU-migrations            #    0,005 M/sec                    ( +-  3,24% )
                 127 151 page-faults               #    0,000 M/sec                    ( +-  0,16% )
         829 399 813 448 cycles                    #    2,532 GHz                      ( +-  0,64% )
         580 664 691 740 stalled-cycles-frontend   #   70,01% frontend cycles idle     ( +-  0,71% )
         197 431 700 448 stalled-cycles-backend    #   23,80% backend  cycles idle     ( +-  1,03% )
         503 548 648 975 instructions              #    0,61  insns per cycle
                                                   #    1,15  stalled cycles per insn  ( +-  0,46% )
          95 780 068 471 branches                  #  292,389 M/sec                    ( +-  0,48% )
           1 426 407 916 branch-misses             #    1,49% of all branches          ( +-  1,35% )
      
            20,705679994 seconds time elapsed                                          ( +-  0,64% )
      
      After patch :
      
      # perf stat -r 32 hackbench 50 process 4000 >/dev/null
      
       Performance counter stats for 'hackbench 50 process 4000' (32 runs):
      
           286236,542804 task-clock                #   15,786 CPUs utilized            ( +-  1,32% )
              19 703 372 context-switches          #    0,069 M/sec                    ( +-  4,99% )
               1 658 249 CPU-migrations            #    0,006 M/sec                    ( +-  6,62% )
                 126 776 page-faults               #    0,000 M/sec                    ( +-  0,12% )
         724 636 593 213 cycles                    #    2,532 GHz                      ( +-  1,32% )
         499 320 714 837 stalled-cycles-frontend   #   68,91% frontend cycles idle     ( +-  1,47% )
         156 555 126 809 stalled-cycles-backend    #   21,60% backend  cycles idle     ( +-  2,22% )
         463 897 792 661 instructions              #    0,64  insns per cycle
                                                   #    1,08  stalled cycles per insn  ( +-  0,94% )
          87 717 352 563 branches                  #  306,451 M/sec                    ( +-  0,99% )
             941 738 280 branch-misses             #    1,07% of all branches          ( +-  3,35% )
      
            18,132070670 seconds time elapsed                                          ( +-  1,30% )
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      CC: Matt Mackall <mpm@selenic.com>
      CC: David Rientjes <rientjes@google.com>
      CC: "Alex,Shi" <alex.shi@intel.com>
      CC: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      0ad9500e
  9. 13 January 2012, 2 commits
  10. 11 January 2012, 2 commits
  11. 04 January 2012, 1 commit
    • x86: Fix and improve cmpxchg_double{,_local}() · cdcd6298
      Authored by Jan Beulich
      Just like the per-CPU ones, they had several
      problems/shortcomings:
      
      Only the first memory operand was mentioned in the asm()
      operands, and the 2x64-bit version didn't have a memory clobber
      while the 2x32-bit one did. The former allowed the compiler to
      not recognize the need to re-load the data in case it had it
      cached in some register, while the latter was overly
      destructive.
      
      The types of the local copies of the old and new values were
      incorrect (the types of the pointed-to variables should be used
      here, to make sure the respective old/new variable types are
      compatible).
      
      The __dummy/__junk variables were pointless, given that local
      copies of the inputs already existed (and can hence be used for
      discarded outputs).
      
      The 32-bit variant of cmpxchg_double_local() referenced
      cmpxchg16b_local().
      
      At once also:
      
       - change the return value type to what it really is: 'bool'
       - unify 32- and 64-bit variants
       - abstract out the common part of the 'normal' and 'local' variants
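
      For illustration, a typical call site looks roughly like this (the two
      words must be adjacent and naturally aligned to twice the word size;
      this is a hedged usage sketch, not code from the patch):

          struct pair {
                  void *lo;
                  unsigned long hi;
          } __attribute__((aligned(2 * sizeof(void *))));

          static bool update_pair(struct pair *p, void *old_lo, unsigned long old_hi,
                                  void *new_lo, unsigned long new_hi)
          {
                  /* true if both words matched and were replaced atomically */
                  return cmpxchg_double(&p->lo, &p->hi, old_lo, old_hi,
                                        new_lo, new_hi);
          }
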
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/4F01F12A020000780006A19B@nat28.tlf.novell.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cdcd6298
  12. 23 December 2011, 1 commit
  13. 14 December 2011, 4 commits
  14. 28 November 2011, 1 commit
    • slub: add missed accounting · 4c493a5a
      Authored by Shaohua Li
      With the per-cpu partial list, a slab is added to the partial list first and
      then moved to the node list. The __slab_free() code path for
      add/remove_partial is almost deprecated (except for slub debug). But we
      forgot to account for add/remove_partial when moving per-cpu partial pages
      to the node list, so the statistics for such events were always 0. Add the
      corresponding accounting.
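
      The accounting is added where unfreeze_partials() moves a page between
      lists; roughly (a sketch following the surrounding 3.2-era code, so names
      and the tail parameter may differ slightly from the verbatim diff):

          if (l == M_PARTIAL) {
                  remove_partial(n, page);
                  stat(s, FREE_REMOVE_PARTIAL);   /* previously not counted here */
          } else {
                  add_partial(n, page, DEACTIVATE_TO_TAIL);
                  stat(s, FREE_ADD_PARTIAL);      /* previously not counted here */
          }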
      
      This is against the patch "slub: use correct parameter to add a page to
      partial list tail"
      Acked-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      4c493a5a
  15. 24 November 2011, 2 commits
  16. 17 November 2011, 1 commit
  17. 16 November 2011, 2 commits
  18. 01 November 2011, 1 commit
  19. 28 September 2011, 3 commits
    • slub: Discard slab page when node partial > minimum partial number · dcc3be6a
      Authored by Alex Shi
      Discarding a slab should be done when node partial > min_partial.  Otherwise,
      node partial slabs may eat up all memory.
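
      The condition being corrected sits in unfreeze_partials(); the intended
      logic, sketched (not the verbatim diff):

          if (!new.inuse && n->nr_partial > s->min_partial)
                  m = M_FREE;      /* node already holds enough partials: discard the slab */
          else
                  m = M_PARTIAL;   /* otherwise keep it on the node partial list */
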
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      dcc3be6a
    • slub: correct comments error for per cpu partial · 9f264904
      Authored by Alex Shi
      Correct comment errors that mistake the cpu partial object count for a page
      count, which may mislead readers.
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Reviewed-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      9f264904
    • mm: restrict access to slab files under procfs and sysfs · ab067e99
      Authored by Vasiliy Kulikov
      Historically, /proc/slabinfo and files under /sys/kernel/slab/* have had
      world-readable permissions.  slabinfo contains rather private information
      related both to the kernel and to userspace tasks.  Depending on the
      situation, it might reveal either private information per se or information
      useful for mounting another targeted attack.  Some examples of what can be
      learned by reading/watching /proc/slabinfo entries:
      
      1) dentry (and various *inode*) counts might reveal other processes' fs
      activity.  The number of dentry "active objects" doesn't strictly show the
      number of files opened/touched by a process; however, there is a good
      correlation between them.  The patch "proc: force dcache drop on
      unauthorized access" relies on the privacy of the dentry count.
      
      2) different inode entries might reveal the same information as (1), but
      these are more fine-grained counters.  If a filesystem is mounted in a
      private mount point (or even a private namespace) and its fs type differs
      from other mounted fs types, fs activity in this mount point/namespace is
      revealed.  If there is a single ecryptfs mount point, the whole fs
      activity of a single user is revealed.  The number of files in an ecryptfs
      mount point is private information per se.
      
      3) fuse_* reveals the number of files / fs activity of a user in a user's
      private mount point.  It is of approximately the same severity as the
      ecryptfs infoleak in (2).
      
      4) sysfs_dir_cache, similar to (2), reveals device addition/removal,
      which can otherwise be hidden by "chmod 0700 /sys/".  With a 0444 slabinfo
      the precise number of sysfs files is known to the world.
      
      5) buffer_head might reveal some kernel activity.  With other
      information leaks an attacker might identify what specific kernel
      routines generate buffer_head activity.
      
      6) *kmalloc* infoleaks are very situational.  An attacker should watch for
      the specific kmalloc size entry and filter out the noise from unrelated
      kernel activity.  If an attacker has a relatively silent victim system, he
      might get rather precise counters.
      
      Additional information sources might significantly increase the value of
      the slabinfo infoleak.  E.g. if an attacker knows that process activity on
      the system is very low (only core daemons like syslog and cron), he may
      run setxid binaries / trigger local daemon activity / trigger network
      service activity / await sporadic cron job activity / etc. and get rather
      precise counters for the fs and network activity of these privileged
      tasks, which are otherwise unknown.
      
      Also, hiding slabinfo and /sys/kernel/slab/* is one step toward complicating
      exploitation of kernel heap overflows (and possibly other bugs).  The
      related discussion:
      
      http://thread.gmane.org/gmane.linux.kernel/1108378
      
      To keep compatibility with the old permission model, where a non-root
      monitoring daemon could watch for kernel memory leaks through slabinfo,
      one should do:
      
          groupadd slabinfo
          usermod -a -G slabinfo $MONITOR_USER
      
      And add the following commands to init scripts (to mountall.conf in
      Ubuntu's upstart case):
      
          chmod g+r /proc/slabinfo /sys/kernel/slab/*/*
          chgrp slabinfo /proc/slabinfo /sys/kernel/slab/*/*
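
      The patch itself only tightens mode bits on the affected files;
      illustratively (a sketch with representative names, not the exact diff):

          /* /proc/slabinfo: drop world readability, keep it for root */
          proc_create("slabinfo", S_IWUSR | S_IRUSR, NULL, &proc_slabinfo_operations);

          /* /sys/kernel/slab/<cache>/*: read-only attributes go from 0444 to 0400 */
          #define SLAB_ATTR_RO(_name) \
                  static struct slab_attribute _name##_attr = \
                  __ATTR(_name, 0400, _name##_show, NULL)
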
      Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
      Reviewed-by: Kees Cook <kees@ubuntu.com>
      Reviewed-by: Dave Hansen <dave@linux.vnet.ibm.com>
      Acked-by: Christoph Lameter <cl@gentwo.org>
      Acked-by: David Rientjes <rientjes@google.com>
      CC: Valdis.Kletnieks@vt.edu
      CC: Linus Torvalds <torvalds@linux-foundation.org>
      CC: Alan Cox <alan@linux.intel.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      ab067e99
  20. 14 September 2011, 1 commit
  21. 27 August 2011, 2 commits
  22. 20 August 2011, 3 commits
    • slub: per cpu cache for partial pages · 49e22585
      Authored by Christoph Lameter
      Allow filling out the rest of the kmem_cache_cpu cacheline with pointers to
      partial pages. The partial page list is used in slab_free() to avoid
      taking the per-node lock.
      
      In __slab_alloc() we can then take multiple partial pages off the per
      node partial list in one go, reducing node lock pressure.
      
      We can also use the per cpu partial list in slab_alloc() to avoid scanning
      partial lists for pages with free objects.
      
      The main effect of a per cpu partial list is that the per node list_lock
      is taken for batches of partial pages instead of individual ones.
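
      Concretely, kmem_cache_cpu grows a pointer to the head of the per cpu
      partial list; the resulting structure is roughly:

          struct kmem_cache_cpu {
                  void **freelist;        /* pointer to next available object */
                  unsigned long tid;      /* globally unique transaction id */
                  struct page *page;      /* slab from which we are allocating */
                  struct page *partial;   /* partially allocated, frozen slabs (new) */
          #ifdef CONFIG_SLUB_STATS
                  unsigned stat[NR_SLUB_STAT_ITEMS];
          #endif
          };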
      
      Potential future enhancements:
      
      1. The pickup from the partial list could perhaps be done without disabling
         interrupts, with some work. The free path already puts the page into the
         per cpu partial list without disabling interrupts.
      
      2. __slab_free() may have some code paths that could use optimization.
      
      Performance:
      
      				Before		After
      ./hackbench 100 process 200000
      				Time: 1953.047	1564.614
      ./hackbench 100 process 20000
      				Time: 207.176   156.940
      ./hackbench 100 process 20000
      				Time: 204.468	156.940
      ./hackbench 100 process 20000
      				Time: 204.879	158.772
      ./hackbench 10 process 20000
      				Time: 20.153	15.853
      ./hackbench 10 process 20000
      				Time: 20.153	15.986
      ./hackbench 10 process 20000
      				Time: 19.363	16.111
      ./hackbench 1 process 20000
      				Time: 2.518	2.307
      ./hackbench 1 process 20000
      				Time: 2.258	2.339
      ./hackbench 1 process 20000
      				Time: 2.864	2.163
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      49e22585
    • slub: return object pointer from get_partial() / new_slab(). · 497b66f2
      Authored by Christoph Lameter
      There is no longer any need to return the pointer to a slab page from get_partial(),
      since the page reference can be stored in the kmem_cache_cpu structure's "page" field.
      
      Return an object pointer instead.
      
      That in turn allows a simplification of the spaghetti code in __slab_alloc().
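
      Together with the companion change that passes the kmem_cache_cpu pointer
      down, get_partial() ends up with roughly this shape (a sketch of the
      resulting prototype, not the verbatim code):

          /*
           * Fills c->page as a side effect and returns the first free object,
           * or NULL if no suitable partial slab was found.
           */
          static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
                                   struct kmem_cache_cpu *c);
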
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      497b66f2
    • slub: pass kmem_cache_cpu pointer to get_partial() · acd19fd1
      Authored by Christoph Lameter
      Pass the kmem_cache_cpu pointer to get_partial(). That way
      we can avoid the this_cpu_write() statements.
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      acd19fd1