1. 11 December 2014, 1 commit
  2. 10 October 2014, 1 commit
  3. 07 August 2014, 4 commits
    • mm/vmalloc.c: clean up map_vm_area third argument · f6f8ed47
      Authored by WANG Chao
      Currently map_vm_area() takes (struct page *** pages) as its third
      argument, and after mapping it advances (*pages) to point to (*pages +
      nr_mapped_pages).
      
      This kind of increment is useless to its callers these days.  The
      callers don't care about the increment and in fact try to avoid it by
      passing a separate copy of the pointer to map_vm_area().
      
      The caller can always guarantee that all the pages can be mapped into
      the vm_area given as the first argument; it only cares about whether
      map_vm_area() fails or not.
      
      This patch cleans up the pointer movement in map_vm_area() and updates
      its callers accordingly.
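      
      A sketch of the interface change and of a typical caller after the
      cleanup (signatures abbreviated; exact call sites in the patch may
      differ):
      
      /* before: third argument is struct page ***, and *pages is advanced */
      int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page ***pages);
      
      /* after: callers hand over the page array and just check the result */
      int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages);
      
      /* typical caller, e.g. vmap()-style code, with no throwaway pointer copy */
      if (map_vm_area(area, prot, pages)) {
      	vunmap(area->addr);
      	return NULL;
      }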
      Signed-off-by: WANG Chao <chaowang@redhat.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, vmalloc: constify allocation mask · 930f036b
      Authored by David Rientjes
      tmp_mask in the __vmalloc_area_node() iteration never changes so it can
      be moved into function scope and marked with const.  This causes the
      movl and orl to only be done once per call rather than area->nr_pages
      times.
      
      nested_gfp can also be marked const.
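      
      A sketch of the resulting code in __vmalloc_area_node() (simplified;
      variable names follow the description above and may differ from the
      actual patch):
      
      const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
      const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN;	/* was tmp_mask inside the loop */
      
      for (i = 0; i < area->nr_pages; i++) {
      	struct page *page;
      
      	if (node == NUMA_NO_NODE)
      		page = alloc_page(alloc_mask);
      	else
      		page = alloc_pages_node(node, alloc_mask, 0);
      	if (unlikely(!page))
      		goto fail;
      	area->pages[i] = page;
      }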
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc.c: add a schedule point to vmalloc() · 660654f9
      Authored by Eric Dumazet
      It is not uncommon on busy servers to get stuck for hundreds of ms in
      vmalloc() calls (like file descriptor expansions).
      
      Add a cond_resched() to __vmalloc_area_node() to be gentle to
      other tasks.
      
      [akpm@linux-foundation.org: only do it for __GFP_WAIT, per David]
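      
      A sketch of the added reschedule point inside the page allocation loop
      of __vmalloc_area_node(), gated on __GFP_WAIT as noted above
      (simplified; the exact placement in the loop may differ):
      
      for (i = 0; i < area->nr_pages; i++) {
      	/* yield the CPU occasionally, but only if the caller may sleep */
      	if (gfp_mask & __GFP_WAIT)
      		cond_resched();
      
      	area->pages[i] = alloc_page(gfp_mask | __GFP_NOWARN);
      }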
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmalloc: use rcu list iterator to reduce vmap_area_lock contention · 474750ab
      Authored by Joonsoo Kim
      Richard Yao reported a month ago that his system had trouble with
      vmap_area_lock contention during performance analysis via /proc/meminfo.
      Andrew asked why the analysis reads /proc/meminfo so intensively, but
      Richard didn't answer.
      
        https://lkml.org/lkml/2014/4/10/416
      
      Although I'm not sure whether this is the right usage, there is a way to
      reduce vmap_area_lock contention with no side effects: just use the rcu
      list iterator in get_vmalloc_info().
      
      rcu can be used in this function because the full RCU protocol is
      already respected by writers, ever since Nick Piggin's commit db64fe02
      ("mm: rewrite vmap layer") back in linux-2.6.28.
      
      Specifically:
         insertions use list_add_rcu(),
         deletions use list_del_rcu() and kfree_rcu().
      
      Note that the rb tree is not used from the rcu reader side (it would not
      be safe); only the vmap_area_list has full RCU protection.
      
      Note that __purge_vmap_area_lazy() already uses this rcu protection:
      
              rcu_read_lock();
              list_for_each_entry_rcu(va, &vmap_area_list, list) {
                      if (va->flags & VM_LAZY_FREE) {
                              if (va->va_start < *start)
                                      *start = va->va_start;
                              if (va->va_end > *end)
                                      *end = va->va_end;
                              nr += (va->va_end - va->va_start) >> PAGE_SHIFT;
                              list_add_tail(&va->purge_list, &valist);
                              va->flags |= VM_LAZY_FREEING;
                              va->flags &= ~VM_LAZY_FREE;
                      }
              }
              rcu_read_unlock();
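      
      Switching get_vmalloc_info() over is then mostly a matter of replacing
      the vmap_area_lock critical section with the rcu iterator; a minimal
      sketch (the accounting is simplified here, and the real function also
      tracks the largest free chunk):
      
              rcu_read_lock();
              list_for_each_entry_rcu(va, &vmap_area_list, list) {
                      /* formerly guarded by spin_lock(&vmap_area_lock) */
                      vmi->used += (va->va_end - va->va_start);
              }
              rcu_read_unlock();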
      
      Peter:
      
      : While rcu list traversal over the vmap_area_list is safe, this may
      : arrive at different results than the spinlocked version. The rcu list
      : traversal version will not be a 'snapshot' of a single, valid instant
      : of the entire vmap_area_list, but rather a potential amalgam of
      : different list states.
      
      Joonsoo:
      
      : Yes, you are right, but I don't think that we should be strict here.
      : Meminfo is already not a 'snapshot' at a specific time.  While we try
      : to get certain stats, the other stats can change.  And although we may
      : arrive at different results than the spinlocked version, the difference
      : would not be large and would not cause serious side effects.
      
      [edumazet@google.com: add more commit description]
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Reported-by: Richard Yao <ryao@gentoo.org>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Cc: Zhang Yanfei <zhangyanfei.yes@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 05 June 2014, 3 commits
  5. 08 April 2014, 2 commits
    • mm/vmalloc.c: enhance vm_map_ram() comment · 36437638
      Authored by Gioh Kim
      vm_map_ram() has a fragmentation problem: it cannot purge a chunk (i.e.,
      a 4M address space) if there is a pinned object in that address space, so
      it can easily consume all of the VMALLOC address space.
      
      We could fix the fragmentation problem by using vmap() instead of
      vm_map_ram(), but vmap() is known to be slow compared to vm_map_ram().
      Minchan said vm_map_ram() is 5 times faster than vmap() in his tests.  So
      I thought we should fix the fragmentation problem of vm_map_ram(),
      because our proprietary GPU driver uses it heavily.
      
      On second thought, it's not easy, because we would have to reuse freed
      space to solve the problem, and that would mean more IPIs and bitmap
      operations to search for holes.  It would undermine the API's goal, which
      is very fast mapping.  And the fragmentation problem wouldn't even show
      up on 64-bit machines.
      
      Another option is for the user to separate long-lived and short-lived
      objects, using vmap() for the former and vm_map_ram() for the latter.  If
      we tell users about this characteristic of vm_map_ram(), they can choose
      according to the page lifetime.
      
      Let's add a notice message for users; a sketch of it follows.
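      
      A paraphrased sketch of the notice added to the vm_map_ram() kernel-doc
      (the exact wording in the patch may differ):
      
      /*
       * If you use this function for less than VMAP_MAX_ALLOC pages, it could
       * be faster than vmap() so it's good.  But if you mix long-lived and
       * short-lived objects with vm_map_ram(), it could consume lots of
       * address space through fragmentation (especially on a 32-bit machine).
       * You could see failures in the end.  Please use this function for
       * short-lived objects.
       */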
      
      [akpm@linux-foundation.org: tweak comment text]
      Signed-off-by: Gioh Kim <gioh.kim@lge.com>
      Reviewed-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use macros from compiler.h instead of __attribute__((...)) · 3b32123d
      Authored by Gideon Israel Dsouza
      To increase compiler portability there is <linux/compiler.h>, which
      provides convenience macros for various gcc constructs, e.g. __weak for
      __attribute__((weak)).  I've replaced all instances of gcc attributes
      with the right macros in the memory management (/mm) subsystem.
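      
      A few illustrative replacements of the kind this makes (the declarations
      below are examples, not the actual hunks from the patch):
      
      -static void *buf __attribute__((aligned(PAGE_SIZE)));
      +static void *buf __aligned(PAGE_SIZE);
      
      -struct record { char data[3]; } __attribute__((packed));
      +struct record { char data[3]; } __packed;
      
      -void __attribute__((weak)) arch_hook(void);
      +void __weak arch_hook(void);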
      
      [akpm@linux-foundation.org: while-we're-there consistency tweaks]
      Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 28 January 2014, 1 commit
    • Revert "mm/vmalloc: interchange the implementation of vmalloc_to_{pfn,page}" · add688fb
      Authored by malc
      Revert commit ece86e22, which was intended as a small performance
      improvement.
      
      Despite the claim that the patch doesn't introduce any functional
      changes, in fact it does.
      
      The "no page" path behaves differently now.  Originally, vmalloc_to_page()
      might return NULL under some conditions; with the new implementation it
      returns pfn_to_page(0), which is not the same as NULL.
      
      A simple test shows the difference.
      
      test.c
      
      #include <linux/kernel.h>
      #include <linux/module.h>
      #include <linux/vmalloc.h>
      #include <linux/mm.h>
      
      int __init myi(void)
      {
      	struct page *p;
      	void *v;
      
      	v = vmalloc(PAGE_SIZE);
      	/* trigger the "no page" path in vmalloc_to_page() */
      	vfree(v);
      
      	p = vmalloc_to_page(v);
      
      	pr_err("expected val = NULL, returned val = %p\n", p);
      
      	/* fail the load on purpose so the module never stays resident */
      	return -EBUSY;
      }
      
      void __exit mye(void)
      {
      }
      
      module_init(myi);
      module_exit(mye);
      MODULE_LICENSE("GPL");
      
      Before interchange:
      expected val = NULL, returned val =   (null)
      
      After interchange:
      expected val = NULL, returned val = c7ebe000
      Signed-off-by: Vladimir Murzin <murzin.v@gmail.com>
      Cc: Jianyu Zhan <nasa4836@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 22 January 2014, 1 commit
  8. 13 November 2013, 6 commits
  9. 12 September 2013, 3 commits
  10. 10 July 2013, 9 commits
  11. 04 July 2013, 6 commits
  12. 08 May 2013, 1 commit
  13. 30 April 2013, 2 commits