1. 26 Jul 2017 (3 commits)
    • percpu: remove has_reserved from pcpu_chunk · 4af1e6fb
      Committed by Dennis Zhou (Facebook)
      Previously, this variable was used to manage statistics when the first
      chunk had a reserved region. The previous patch introduced start_offset
      to track the offset as a value rather than as a boolean, so has_reserved
      can be removed. (A small sketch follows this entry.)
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      4af1e6fb
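      A minimal, hypothetical sketch of the point above (the helper is
      illustrative, not the upstream diff): once the offset is carried as a
      value on struct pcpu_chunk, a separate boolean adds nothing.

          /* chunk->start_offset != 0 now answers what has_reserved used to */
          static inline bool chunk_hides_prefix(const struct pcpu_chunk *chunk)
          {
                  return chunk->start_offset != 0;
          }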
    • percpu: introduce start_offset to pcpu_chunk · e2266705
      Committed by Dennis Zhou (Facebook)
      The reserved chunk arithmetic uses a global variable
      pcpu_reserved_chunk_limit that is set in the first chunk init code to
      hide a portion of the area map. The bitmap allocator to come will
      eventually move the base_addr up and require both the reserved chunk
      and static chunk to maintain this offset. pcpu_reserved_chunk_limit is
      removed and start_offset is added.
      
      The first chunk in circulation, pcpu_first_chunk, serves the dynamic
      region, i.e. the region following the reserved region. The reserved
      chunk address check will temporarily use the first chunk to identify its
      address range. A following patch will increase the base_addr and remove
      this. If there is no reserved chunk, this will check the static region
      and return false because those values should never be passed into the
      allocator.
      
      Lastly, when linking in the first chunk, make sure to count the right
      free region for the number of empty populated pages.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      e2266705
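      A hedged sketch of the arithmetic described above, with illustrative
      struct and helper names (not the upstream code): start_offset records how
      many bytes at the front of the mapped area are hidden from the region the
      chunk serves, and the temporary address check simply tests against that
      window.

          struct chunk_sketch {
                  void         *base_addr;     /* mapped base of the chunk */
                  unsigned int  start_offset;  /* bytes hidden before the served region */
                  size_t        region_size;   /* size of the region this chunk serves */
          };

          /* does @addr fall inside the region this chunk serves? */
          static bool chunk_serves_addr(const struct chunk_sketch *c, void *addr)
          {
                  void *lo = (char *)c->base_addr + c->start_offset;
                  void *hi = (char *)lo + c->region_size;

                  return addr >= lo && addr < hi;
          }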
    • percpu: setup_first_chunk enforce dynamic region must exist · fb29a2cc
      Committed by Dennis Zhou (Facebook)
      The first chunk is handled as a special case as it is composed of the
      static, reserved, and dynamic regions. The code handles each case
      individually. The next several patches will merge these code paths and
      lay the foundation for the bitmap allocator.
      
      This patch modifies the logic to enforce that a dynamic region exists
      and changes the area map to account for it, bringing the logic closer
      to the dynamic chunk's init logic (a sketch of the check follows this
      entry).
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      fb29a2cc
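      A hedged sketch of the enforcement (the real check lives in
      pcpu_setup_first_chunk() against the pcpu_alloc_info it is given; the
      helper below only illustrates the shape):

          /* the first chunk must always carry a dynamic region; the area map
           * is then laid out as static | reserved | dynamic */
          static void verify_first_chunk(const struct pcpu_alloc_info *ai)
          {
                  BUG_ON(!ai->dyn_size);
          }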
  2. 17 Jul 2017 (2 commits)
  3. 22 Jun 2017 (1 commit)
  4. 21 Jun 2017 (4 commits)
  5. 11 May 2017 (1 commit)
  6. 26 Mar 2017 (1 commit)
  7. 16 Mar 2017 (1 commit)
    • locking/lockdep: Handle statically initialized PER_CPU locks properly · 383776fa
      Committed by Thomas Gleixner
      If a PER_CPU struct which contains a spin_lock is statically initialized
      via:
      
      DEFINE_PER_CPU(struct foo, bla) = {
      	.lock = __SPIN_LOCK_UNLOCKED(bla.lock)
      };
      
      then lockdep assigns a separate key to each lock because the logic for
      assigning a key to statically initialized locks is to use the address as
      the key. With per CPU locks the address is obviously different on each CPU.
      
      That's wrong, because all locks should have the same key.
      
      To solve this the following modifications are required:
      
       1) Extend the is_kernel/module_percpu_addr() functions to hand back the
          canonical address of the per CPU address, i.e. the per CPU address
          minus the per CPU offset.
      
       2) Check the lock address with these functions and if the per CPU check
          matches use the returned canonical address as the lock key, so all per
          CPU locks have the same key.
      
       3) Move the static_obj(key) check into look_up_lock_class() so this check
          can be avoided for statically initialized per CPU locks.  That's
          required because the canonical address fails the static_obj(key) check
          for obvious reasons.
      Reported-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      [ Merged Dan's fixups for !MODULES and !SMP into this patch. ]
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dan Murphy <dmurphy@ti.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20170227143736.pectaimkjkan5kow@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      383776fa
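      A hedged sketch of the canonicalization idea (illustrative; in the actual
      patch the canonical address is handed back by the extended
      is_kernel/module_percpu_addr() helpers rather than computed like this):

          /* every CPU's copy of a statically initialized per-CPU lock sits at
           * a different address; subtracting the per-CPU offset yields one
           * canonical address shared by all copies, usable as the lock key */
          static unsigned long canonical_lock_key(const void *lock, int cpu)
          {
                  return (unsigned long)lock - per_cpu_offset(cpu);
          }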
  8. 07 Mar 2017 (1 commit)
  9. 28 Feb 2017 (1 commit)
  10. 13 Dec 2016 (1 commit)
  11. 20 Oct 2016 (1 commit)
    • percpu: ensure the requested alignment is power of two · 3ca45a46
      Committed by zijun_hu
      The percpu allocator assumes that the requested alignment is a power of
      two but hasn't been verifying the input.  If the specified alignment
      isn't a power of two, the allocator can malfunction.  Add the sanity
      check (sketched after this entry).
      
      The following is detailed analysis of the effects of alignments which
      aren't power of two.
      
       The alignment must at least be even, since the LSB of a chunk->map
       element is used as the free/in-use flag of an area; moreover, the
       alignment must be a power of 2, since pcpu_fit_in_area() relies on
       ALIGN(), which only works correctly for power-of-2 alignments.  IOW,
       the current allocator only works well for power-of-2 aligned area
       allocations.
      
       See the counterexample below for why an odd alignment doesn't work.
       Assume area [16, 36) is free but the area before it is in-use, and we
       want to allocate an area with @size == 8 and @align == 7.  The larger
       area [16, 36) eventually gets split into three areas [16, 21), [21, 29)
       and [29, 36).  However, because of how a chunk->map element is used,
       the actual offset of the target area [21, 29) is 21 but it is recorded
       in the relevant element as 20; moreover, the residual tail free area
       [29, 36) is mistaken for in-use and is silently lost.
      
       Unlike the roundup() macro, ALIGN(x, a) doesn't work if @a isn't a
       power of 2.  For example, roundup(10, 6) == 12 but ALIGN(10, 6) == 10,
       and the latter result is obviously not the desired one.
      
      tj: Code style and patch description updates.
      Signed-off-by: zijun_hu <zijun_hu@htc.com>
      Suggested-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      3ca45a46
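      A hedged sketch of the sanity check (the real test sits at the top of
      pcpu_alloc() and may differ in its exact conditions and reporting;
      is_power_of_2() is the helper from <linux/log2.h>):

          /* reject requests the area-map allocator cannot represent: the LSB
           * of a chunk->map entry is the free/in-use flag, and ALIGN() only
           * behaves for power-of-2 alignments */
          static bool pcpu_request_is_sane(size_t size, size_t align)
          {
                  return size && is_power_of_2(align);
          }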
  12. 05 Oct 2016 (2 commits)
    • mm/percpu.c: fix potential memory leakage for pcpu_embed_first_chunk() · 9b739662
      Committed by zijun_hu
      To ensure the percpu group areas within a chunk aren't distributed too
      sparsely, pcpu_embed_first_chunk() takes the error handling path when a
      chunk spans more than 3/4 of the VMALLOC area.  However, during that
      error handling it forgets to free the memory allocated for the percpu
      groups, because it jumps to label @out_free instead of @out_free_areas.
      
      This leaks memory if that rare case actually happens.  To fix the issue,
      check the area spanned by the chunk immediately after completing the
      memory allocation for all percpu groups, and jump to label
      @out_free_areas to free the memory and return if the check fails (see
      the sketch after this entry).
      
      To verify the approach, all allocated memory was dumped, the jump was
      forced, and all freed memory was dumped; the result confirms that every
      allocation made in this function is freed.
      
      The approach was chosen after considering the scenarios below:
       - we don't jump to label @out_free directly, since that could free
         several allocated memory blocks twice
       - the jump taken after pcpu_setup_first_chunk() is there to skip
         freeing now-usable memory, not to handle an error; moreover, that
         function never returns an error code, it either panics due to
         BUG_ON() or returns 0.
      Signed-off-by: zijun_hu <zijun_hu@htc.com>
      Tested-by: zijun_hu <zijun_hu@htc.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      9b739662
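      A hedged sketch of the resulting control flow (the helpers are
      hypothetical stand-ins; only the label names and the placement of the
      span check follow the commit text):

          static int embed_first_chunk_sketch(void)
          {
                  int rc;

                  rc = alloc_group_areas();       /* hypothetical helper */
                  if (rc)
                          goto out_free;

                  /* check the span right after the group areas exist, so a
                   * failure can still release them */
                  if (groups_span() > VMALLOC_TOTAL * 3 / 4) {    /* hypothetical helper */
                          rc = -EINVAL;
                          goto out_free_areas;
                  }

                  setup_first_chunk();    /* BUG_ON()s on failure, never returns an error */
                  rc = 0;
                  goto out_free;          /* skip freeing the now-usable group areas */

          out_free_areas:
                  free_group_areas();     /* hypothetical helper */
          out_free:
                  free_bookkeeping();     /* hypothetical helper */
                  return rc;
          }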
    • mm/percpu.c: correct max_distance calculation for pcpu_embed_first_chunk() · 93c76b6b
      Committed by zijun_hu
      pcpu_embed_first_chunk() calculates the range a percpu chunk spans into
      @max_distance and uses it to ensure that a chunk is not too big compared
      to the total vmalloc area.  However, during the calculation it used an
      incorrect top address, adding a unit size to the highest group's base
      address.
      
      This can make the calculated max_distance slightly smaller than the actual
      distance although given the scale of values involved the error is very
      unlikely to have an actual impact.
      
      Fix this issue by adding the group's size instead of a unit size.
      
      The type of the variable max_distance is also changed from size_t to
      unsigned long, based on the considerations below:
       - unsigned long usually has the same width as the CPU core registers
         and fits well here
       - it makes the type of @max_distance consistent with the operands it is
         calculated against, such as @ai->groups[i].base_offset and the macro
         VMALLOC_TOTAL
       - unsigned long is more universal than size_t, which is usually defined
         as unsigned int or unsigned long depending on the architecture
      Signed-off-by: zijun_hu <zijun_hu@htc.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      93c76b6b
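      A hedged sketch of the corrected arithmetic (illustrative loop, not the
      upstream code):

          /* the top of the span is a group's base offset plus that group's
           * size (nr_units * unit_size), not plus a single unit size */
          static unsigned long span_of_groups(const struct pcpu_alloc_info *ai)
          {
                  unsigned long lo = ULONG_MAX, hi = 0;
                  int g;

                  for (g = 0; g < ai->nr_groups; g++) {
                          const struct pcpu_group_info *gi = &ai->groups[g];

                          lo = min(lo, gi->base_offset);
                          hi = max(hi, gi->base_offset +
                                       (unsigned long)gi->nr_units * ai->unit_size);
                  }
                  return hi - lo;
          }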
  13. 25 May 2016 (2 commits)
  14. 18 Mar 2016 (4 commits)
  15. 23 Jan 2016 (1 commit)
  16. 06 Nov 2015 (1 commit)
  17. 21 Jul 2015 (1 commit)
  18. 25 Jun 2015 (1 commit)
    • mm: kmemleak_alloc_percpu() should follow the gfp from per_alloc() · 8a8c35fa
      Committed by Larry Finger
      Beginning at commit d52d3997 ("ipv6: Create percpu rt6_info"), the
      following INFO splat is logged:
      
        ===============================
        [ INFO: suspicious RCU usage. ]
        4.1.0-rc7-next-20150612 #1 Not tainted
        -------------------------------
        kernel/sched/core.c:7318 Illegal context switch in RCU-bh read-side critical section!
        other info that might help us debug this:
        rcu_scheduler_active = 1, debug_locks = 0
         3 locks held by systemd/1:
         #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff815f0c8f>] rtnetlink_rcv+0x1f/0x40
         #1:  (rcu_read_lock_bh){......}, at: [<ffffffff816a34e2>] ipv6_add_addr+0x62/0x540
         #2:  (addrconf_hash_lock){+...+.}, at: [<ffffffff816a3604>] ipv6_add_addr+0x184/0x540
        stack backtrace:
        CPU: 0 PID: 1 Comm: systemd Not tainted 4.1.0-rc7-next-20150612 #1
        Hardware name: TOSHIBA TECRA A50-A/TECRA A50-A, BIOS Version 4.20   04/17/2014
        Call Trace:
          dump_stack+0x4c/0x6e
          lockdep_rcu_suspicious+0xe7/0x120
          ___might_sleep+0x1d5/0x1f0
          __might_sleep+0x4d/0x90
          kmem_cache_alloc+0x47/0x250
          create_object+0x39/0x2e0
          kmemleak_alloc_percpu+0x61/0xe0
          pcpu_alloc+0x370/0x630
      
      Additional backtrace lines are truncated.  In addition, the above splat
      is followed by several "BUG: sleeping function called from invalid
      context at mm/slub.c:1268" outputs.  As suggested by Martin KaFai Lau,
      these are the clue to the fix.  Routine kmemleak_alloc_percpu() always
      uses GFP_KERNEL for its allocations, whereas it should follow the gfp
      from its callers.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
      Cc: Martin KaFai Lau <kafai@fb.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: <stable@vger.kernel.org>	[3.18+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8a8c35fa
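      For reference, a hedged sketch of the interface change (reduced to the
      signature; the fix threads the caller's gfp down to kmemleak's own
      metadata allocation instead of hard-coding GFP_KERNEL there):

          /* pcpu_alloc() now passes its @gfp along when registering the
           * allocation with kmemleak */
          void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
                                     gfp_t gfp);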
  19. 25 Mar 2015 (1 commit)
  20. 14 Feb 2015 (1 commit)
  21. 29 Oct 2014 (1 commit)
  22. 09 Oct 2014 (1 commit)
    • percpu: fix how @gfp is interpreted by the percpu allocator · 6ae833c7
      Committed by Tejun Heo
      When @gfp is specified, the percpu allocator is interested in whether
      it contains all of GFP_KERNEL or not.  If it does, the normal
      allocation path is taken; otherwise, the atomic allocation path.
      Unfortunately, pcpu_alloc() was incorrectly testing for whether @gfp
      contains any part of GFP_KERNEL.
      
      Fix it by testing "(gfp & GFP_KERNEL) != GFP_KERNEL" instead of
      "!(gfp & GFP_KERNEL)" to decide whether the allocation should be
      atomic or not.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      6ae833c7
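      A hedged sketch of the corrected test (the real check lives inside
      pcpu_alloc()):

          /* atomic only when some part of GFP_KERNEL is missing from @gfp;
           * the old "!(gfp & GFP_KERNEL)" test got this wrong */
          static inline bool pcpu_gfp_is_atomic(gfp_t gfp)
          {
                  return (gfp & GFP_KERNEL) != GFP_KERNEL;
          }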
  23. 22 Sep 2014 (1 commit)
  24. 09 Sep 2014 (1 commit)
  25. 03 Sep 2014 (5 commits)
    • percpu: implement asynchronous chunk population · 1a4d7607
      Committed by Tejun Heo
      The percpu allocator now supports atomic allocations by only
      allocating from already populated areas, but the mechanism to ensure
      that there's an adequate amount of populated area was missing.
      
      This patch expands pcpu_balance_work so that in addition to freeing
      excess free chunks it also populates chunks to maintain an adequate
      level of populated areas.  pcpu_alloc() schedules pcpu_balance_work if
      the amount of free populated areas is too low or after an atomic
      allocation failure.
      
      * PERCPU_DYNAMIC_RESERVE is increased by two pages to account for
        PCPU_EMPTY_POP_PAGES_LOW.
      
      * pcpu_async_enabled is added to gate both async jobs -
        chunk->map_extend_work and pcpu_balance_work - so that we don't end
        up scheduling them while the needed subsystems aren't up yet.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      1a4d7607
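      A hedged sketch of the trigger described above (illustrative; the real
      logic sits in pcpu_alloc() and is additionally gated by
      pcpu_async_enabled):

          /* keep a cushion of empty populated pages around for atomic
           * allocations, and refill it after an atomic failure */
          static void maybe_schedule_balance(int nr_empty_pop_pages,
                                             bool atomic_alloc_failed)
          {
                  if (atomic_alloc_failed ||
                      nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
                          schedule_work(&pcpu_balance_work);
          }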
    • percpu: rename pcpu_reclaim_work to pcpu_balance_work · fe6bd8c3
      Committed by Tejun Heo
      pcpu_reclaim_work will also be used to populate chunks asynchronously.
      Rename it to pcpu_balance_work in preparation.  pcpu_reclaim() is
      renamed to pcpu_balance_workfn() and some of its local variables are
      renamed too.
      
      This is a pure rename.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      fe6bd8c3
    • percpu: implement pcpu_nr_empty_pop_pages and chunk->nr_populated · b539b87f
      Committed by Tejun Heo
      pcpu_nr_empty_pop_pages counts the number of empty populated pages
      across all chunks and chunk->nr_populated counts the number of
      populated pages in a chunk.  Both will be used to implement pre/async
      population for atomic allocations.
      
      pcpu_chunk_[de]populated() are added to update chunk->populated,
      chunk->nr_populated and pcpu_nr_empty_pop_pages together.  All
      successful chunk [de]populations should be followed by the
      corresponding pcpu_chunk_[de]populated() calls.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      b539b87f
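      A hedged, simplified sketch of the bookkeeping (the real helpers take a
      page range, also update the chunk->populated bitmap, and only count pages
      backing free space as empty populated pages):

          static void chunk_populated_sketch(struct pcpu_chunk *chunk, int nr)
          {
                  chunk->nr_populated += nr;
                  pcpu_nr_empty_pop_pages += nr;  /* global, across all chunks */
          }

          static void chunk_depopulated_sketch(struct pcpu_chunk *chunk, int nr)
          {
                  chunk->nr_populated -= nr;
                  pcpu_nr_empty_pop_pages -= nr;
          }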
    • percpu: make sure chunk->map array has available space · 9c824b6a
      Committed by Tejun Heo
      An allocation attempt may require extending the chunk->map array, which
      requires GFP_KERNEL context that isn't available for atomic
      allocations.  This patch ensures that the chunk->map array usually keeps
      some amount of available space by directly allocating buffer space
      during GFP_KERNEL allocations and scheduling async extension during
      atomic ones.  This should make atomic allocation failures from map
      space exhaustion rare.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      9c824b6a
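      A hedged sketch of the policy (helper names are hypothetical; the work
      item chunk->map_extend_work is the one referenced by the
      asynchronous-population commit above):

          static void ensure_map_headroom(struct pcpu_chunk *chunk, bool is_atomic)
          {
                  if (map_has_headroom(chunk))            /* hypothetical helper */
                          return;

                  if (is_atomic)
                          /* cannot block: grow the map later, in process context */
                          schedule_work(&chunk->map_extend_work);
                  else
                          /* GFP_KERNEL context: grow the map right away */
                          extend_map(chunk, GFP_KERNEL);  /* hypothetical helper */
          }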
    • percpu: implement [__]alloc_percpu_gfp() · 5835d96e
      Committed by Tejun Heo
      Now that pcpu_alloc_area() can allocate only from populated areas,
      it's easy to add atomic allocation support to [__]alloc_percpu().
      Update pcpu_alloc() so that it accepts @gfp and skips all the blocking
      operations and allocates only from the populated areas if @gfp doesn't
      contain GFP_KERNEL.  New interface functions [__]alloc_percpu_gfp()
      are added.
      
      While this means that atomic allocations are possible, this isn't
      complete yet, as there's no mechanism to ensure that a certain amount of
      populated area is kept available, and atomic allocations may keep
      failing under certain conditions.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      5835d96e
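      A hedged usage sketch of the new interface (the stats struct is made up
      for illustration):

          struct hit_stats {
                  u64 hits;
                  u64 misses;
          };

          static struct hit_stats __percpu *alloc_stats_atomic(void)
          {
                  /* GFP_NOWAIT lacks parts of GFP_KERNEL, so pcpu_alloc()
                   * takes the non-blocking path and may return NULL */
                  return alloc_percpu_gfp(struct hit_stats, GFP_NOWAIT);
          }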