1. 14 Dec 2014, 3 commits
  2. 11 Dec 2014, 29 commits
  3. 20 Nov 2014, 1 commit
  4. 30 Oct 2014, 1 commit
    • mm: memcontrol: fix missed end-writeback page accounting · d7365e78
      Authored by Johannes Weiner
      Commit 0a31bc97 ("mm: memcontrol: rewrite uncharge API") changed
      page migration to uncharge the old page right away.  The page is locked,
      unmapped, truncated, and off the LRU, but it could race with writeback
      ending, which then doesn't unaccount the page properly:
      
      test_clear_page_writeback()              migration
                                                 wait_on_page_writeback()
        TestClearPageWriteback()
                                                 mem_cgroup_migrate()
                                                   clear PCG_USED
        mem_cgroup_update_page_stat()
          if (PageCgroupUsed(pc))
            decrease memcg pages under writeback
      
        release pc->mem_cgroup->move_lock
      
      The per-page statistics interface is heavily optimized to avoid a
      function call and a lookup_page_cgroup() in the file unmap fast path,
      which means it doesn't verify whether a page is still charged before
      clearing PageWriteback() and it has to do it in the stat update later.
      
      Rework it so that it looks up the page's memcg once at the beginning of
      the transaction and then uses it throughout.  The charge will be
      verified before clearing PageWriteback() and migration can't uncharge
      the page as long as that is still set.  The RCU lock will protect the
      memcg past uncharge.
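
      A rough sketch of the resulting pattern at a stat-updating site such as
      test_clear_page_writeback(); the helper names and the simplified
      signatures below are illustrative assumptions, not verbatim excerpts
      from the patch:

        struct mem_cgroup *memcg;

        /* Look up (and, via RCU, pin) the page's memcg once up front;
         * migration cannot uncharge the page while PageWriteback is set. */
        memcg = mem_cgroup_begin_page_stat(page);

        if (TestClearPageWriteback(page))
                mem_cgroup_dec_page_stat(memcg, MEM_CGROUP_STAT_WRITEBACK);

        /* Drop the RCU read lock / move_lock taken at the beginning. */
        mem_cgroup_end_page_stat(memcg);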
      
      As far as losing the optimization goes, the following test results are
      from a microbenchmark that maps, faults, and unmaps a 4GB sparse file
      three times in a nested fashion, so that there are two negative passes
      that don't account but still go through the new transaction overhead.
      There is no actual difference:
      
       old:     33.195102545 seconds time elapsed       ( +-  0.01% )
       new:     33.199231369 seconds time elapsed       ( +-  0.03% )
      
      The time spent in page_remove_rmap()'s callees still adds up to the
      same, but the time spent in the function itself seems reduced:
      
           # Children      Self  Command        Shared Object       Symbol
       old:     0.12%     0.11%  filemapstress  [kernel.kallsyms]   [k] page_remove_rmap
       new:     0.12%     0.08%  filemapstress  [kernel.kallsyms]   [k] page_remove_rmap
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Cc: <stable@vger.kernel.org>	[3.17.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 10 Oct 2014, 6 commits
    • memcg: zap memcg_can_account_kmem · cf2b8fbf
      Authored by Vladimir Davydov
      memcg_can_account_kmem() returns true iff
      
          !mem_cgroup_disabled() && !mem_cgroup_is_root(memcg) &&
                                         memcg_kmem_is_active(memcg);
      
      To begin with, the !mem_cgroup_is_root(memcg) check is useless, because one
      can't enable kmem accounting for the root cgroup (mem_cgroup_write()
      returns EINVAL on an attempt to set the limit on the root cgroup).
      
      Furthermore, the !mem_cgroup_disabled() check also seems to be redundant.
      The point is memcg_can_account_kmem() is called from three places:
      mem_cgroup_slabinfo_read(), __memcg_kmem_get_cache(), and
      __memcg_kmem_newpage_charge().  The latter two functions are only invoked
      if memcg_kmem_enabled() returns true, which implies that the memory cgroup
      subsystem is enabled.  And mem_cgroup_slabinfo_read() shows the output of
      memory.kmem.slabinfo, which won't exist if the memory cgroup is completely
      disabled.
      
      So let's substitute all the calls to memcg_can_account_kmem() with plain
      memcg_kmem_is_active(), and kill the former.
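
      For illustration, the change at each call site amounts to no more than
      the following (a simplified sketch, not a quote from the patch):

        /* before */
        if (!memcg_can_account_kmem(memcg))
                return cachep;

        /* after: the disabled and root checks are already implied at these
         * call sites, so only the active check remains */
        if (!memcg_kmem_is_active(memcg))
                return cachep;
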
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: memcontrol: fix transparent huge page allocations under pressure · b70a2a21
      Authored by Johannes Weiner
      In a memcg with even just moderate cache pressure, success rates for
      transparent huge page allocations drop to zero, wasting a lot of effort
      that the allocator puts into assembling these pages.
      
      The reason for this is that the memcg reclaim code was never designed for
      higher-order charges.  It reclaims in small batches until there is room
      for at least one page.  Huge page charges only succeed when these batches
      add up over a series of huge faults, which is unlikely under any
      significant load involving order-0 allocations in the group.
      
      Remove that loop on the memcg side in favor of passing the actual reclaim
      goal to direct reclaim, which is already set up and optimized to meet
      higher-order goals efficiently.
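
      Schematically, the charge path goes from retrying small reclaim batches
      until a single page of headroom appears to asking reclaim for the whole
      shortfall in one call; the snippet below is a simplified sketch, and the
      helper names and signatures are assumptions, not excerpts from the patch:

        /* old: reclaim in small batches until there is room for at least
         * one page, which rarely adds up to a 512-page THP charge */
        while (mem_cgroup_margin(memcg) < nr_pages)
                reclaim_one_small_batch(memcg, gfp_mask);

        /* new: pass the actual goal and let direct reclaim, which already
         * knows how to meet higher-order targets, free nr_pages at once */
        try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, may_swap);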
      
      This brings memcg's THP policy in line with the system policy: if the
      allocator painstakingly assembles a hugepage, memcg will at least make an
      honest effort to charge it.  As a result, transparent hugepage allocation
      rates amid cache activity are drastically improved:
      
                                            vanilla                 patched
      pgalloc                 4717530.80 (  +0.00%)   4451376.40 (  -5.64%)
      pgfault                  491370.60 (  +0.00%)    225477.40 ( -54.11%)
      pgmajfault                    2.00 (  +0.00%)         1.80 (  -6.67%)
      thp_fault_alloc               0.00 (  +0.00%)       531.60 (+100.00%)
      thp_fault_fallback          749.00 (  +0.00%)       217.40 ( -70.88%)
      
      [ Note: this may in turn increase memory consumption from internal
        fragmentation, which is an inherent risk of transparent hugepages.
        Some setups may have to adjust the memcg limits accordingly to
        accommodate this - or, if the machine is already packed to capacity,
        disable the transparent huge page feature. ]
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: memcontrol: simplify detecting when the memory+swap limit is hit · 3fbe7244
      Authored by Johannes Weiner
      When attempting to charge pages, we first charge the memory counter and
      then the memory+swap counter.  If one of the counters is at its limit, we
      enter reclaim, but if it's the memory+swap counter, reclaim shouldn't swap
      because that wouldn't change the situation.  However, if the counters have
      the same limits, we never get to the memory+swap limit.  To know whether
      reclaim should swap or not, there is a state flag that indicates whether
      the limits are equal and whether hitting the memory limit implies hitting
      the memory+swap limit.
      
      Just try the memory+swap counter first.
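
      A minimal res_counter-era sketch of the reordered attempt (variable
      names and error handling are simplified assumptions, not quotes from
      the patch):

        /* Charge memory+swap first: if this limit is hit, swapping cannot
         * lower memory+swap usage, so reclaim must not swap. */
        if (!res_counter_charge(&memcg->memsw, size, &fail_res)) {
                /* memory+swap fits; now try the plain memory counter */
                if (res_counter_charge(&memcg->res, size, &fail_res)) {
                        /* only the memory limit is hit: back out the memsw
                         * charge and reclaim with swapping allowed */
                        res_counter_uncharge(&memcg->memsw, size);
                        may_swap = true;
                }
        } else {
                may_swap = false;
        }
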
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Vladimir Davydov <vdavydov@parallels.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: move memcg_update_cache_size() to slab_common.c · 6f817f4c
      Authored by Vladimir Davydov
      While growing the per-memcg cache arrays, we jump between memcontrol.c and
      slab_common.c in a weird way:
      
        memcg_alloc_cache_id - memcontrol.c
          memcg_update_all_caches - slab_common.c
            memcg_update_cache_size - memcontrol.c
      
      There's absolutely no reason why memcg_update_cache_size can't live on the
      slab's side though.  So let's move it there and settle it comfortably amid
      per-memcg cache allocation functions.
      
      Besides, this patch cleans this function up a bit, removing all the
      useless comments from it, and renames it to memcg_update_cache_params to
      conform to memcg_alloc/free_cache_params, which we already have in
      slab_common.c.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: don't call memcg_update_all_caches if new cache id fits · f3bb3043
      Authored by Vladimir Davydov
      memcg_update_all_caches grows arrays of per-memcg caches, so we only need
      to call it when memcg_limited_groups_array_size is increased.  However,
      currently we invoke it each time a new kmem-active memory cgroup is
      created.  Then it just iterates over all slab_caches and does nothing
      (memcg_update_cache_size returns immediately).
      
      This patch fixes this insanity.  Along the way, it moves the code dealing
      with id allocations into separate functions, memcg_alloc_cache_id and
      memcg_free_cache_id.
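
      In rough outline, the new id allocation helper only triggers the
      expensive array update when the freshly allocated id does not fit into
      the current arrays; the sketch below assumes an ida-backed allocator
      named kmem_limited_groups and a cap MEMCG_CACHES_MAX_SIZE, and is not a
      verbatim excerpt from the patch:

        static int memcg_alloc_cache_id(void)
        {
                int id, err;

                id = ida_simple_get(&kmem_limited_groups, 0,
                                    MEMCG_CACHES_MAX_SIZE, GFP_KERNEL);
                if (id < 0)
                        return id;

                /* fast path: the id fits into the existing per-memcg arrays */
                if (id < memcg_limited_groups_array_size)
                        return id;

                /* slow path: grow the arrays of all slab caches, then
                 * publish the new size */
                err = memcg_update_all_caches(id + 1);
                if (err) {
                        ida_simple_remove(&kmem_limited_groups, id);
                        return err;
                }
                return id;
        }
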
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: move memcg_{alloc,free}_cache_params to slab_common.c · 33a690c4
      Authored by Vladimir Davydov
      The only reason why they live in memcontrol.c is that we get/put css
      reference to the owner memory cgroup in them.  However, we can do that in
      memcg_{un,}register_cache.  OTOH, there are several reasons to move them
      to slab_common.c.
      
      First, I think that the less public interface functions we have in
      memcontrol.h the better.  Since the functions I move don't depend on
      memcontrol, I think it's worth making them private to slab, especially
      taking into account that the arrays are defined on the slab's side too.
      
      Second, the way how per-memcg arrays are updated looks rather awkward: it
      proceeds from memcontrol.c (__memcg_activate_kmem) to slab_common.c
      (memcg_update_all_caches) and back to memcontrol.c again
      (memcg_update_array_size).  In the following patches I move the function
      relocating the arrays (memcg_update_array_size) to slab_common.c and
      therefore get rid of this circular call path.  I think we should have the
      cache allocation stuff in the same place as the relocation, because the
      code is easier to follow that way.  So I move the array alloc/free
      functions to slab_common.c too.
      
      The third point isn't obvious.  I'm going to make the list_lru structure
      per-memcg to allow targeted kmem reclaim.  That means we will have
      per-memcg arrays in list_lrus too.  It turns out that it's much easier to
      update these arrays in list_lru.c rather than in memcontrol.c, because all
      the stuff we need is defined there.  This patch makes the memcg cache
      arrays allocation path conform to that of the upcoming list_lru.
      
      So let's move these functions to slab_common.c and make them static.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>