1. 27 Jul 2017, 10 commits
    • percpu: update header to contain bitmap allocator explanation. · 5e81ee3e
      Authored by Dennis Zhou (Facebook)
      The other patches contain a lot of information, so this information is
      added in a separate patch. It adds my copyright and a brief
      explanation of how the bitmap allocator works. A minor typo in the
      prior explanation is fixed as well.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      5e81ee3e
    • percpu: update pcpu_find_block_fit to use an iterator · b4c2116c
      Authored by Dennis Zhou (Facebook)
      The simple, and expensive, way to find a free area is to iterate over
      the entire bitmap until an area is found that fits the allocation
      size and alignment. This patch makes use of an iterator that finds an
      area to check by using the block-level contig hints. It will only
      return an area that can fit the size and alignment request. If the
      request can fit inside a block, it returns the first_free bit to
      start checking from, to see if the request can be fulfilled prior to
      the contig hint. The pcpu_alloc_area check is bounded by an extra
      block size in case the hint is wrong.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      b4c2116c
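The skip logic this commit describes can be sketched roughly as below. This is an illustrative stand-in, not the kernel code: `block_md`, its fields, and `find_fit_block` are hypothetical simplifications of the patch's per-block metadata, and the real iterator also aggregates runs that cross block boundaries.

```c
#include <assert.h>

/* Hypothetical, simplified block metadata -- names mirror the patch's
 * intent but this is a sketch, not the kernel's pcpu_block_md. */
struct block_md {
    int contig_hint;   /* largest known free run in this block, in bits */
    int first_free;    /* first free bit within the block */
};

/* Return the index of the first block whose contig hint can hold the
 * request, or -1 if none can.  Blocks whose hint is too small are
 * skipped without touching their bitmaps. */
int find_fit_block(const struct block_md *blocks, int nr_blocks,
                   int alloc_bits)
{
    for (int i = 0; i < nr_blocks; i++)
        if (blocks[i].contig_hint >= alloc_bits)
            return i;
    return -1;
}
```

The point of the design is that the expensive bitmap scan only ever runs inside a block that the hint already says can fit the request.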
    • percpu: use metadata blocks to update the chunk contig hint · 525ca84d
      Authored by Dennis Zhou (Facebook)
      The largest free region will either be a block-level contig hint or an
      aggregate over the left_free and right_free areas of blocks. This is
      a much smaller set of free areas to check than a full traverse.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      525ca84d
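The aggregation the commit describes can be sketched as follows. The struct and function are illustrative assumptions (the kernel's metadata is richer, and a run spanning fully free middle blocks needs more bookkeeping); the sketch only shows why the candidate set is small: one per-block hint plus one boundary-spanning run per block pair.

```c
#include <assert.h>

/* Illustrative sketch: each block tracks its own largest free run
 * (contig_hint) plus the free space touching its left and right
 * edges (left_free / right_free). */
struct block_md {
    int contig_hint;
    int left_free;   /* free bits at the start of the block */
    int right_free;  /* free bits at the end of the block */
};

/* The chunk-level contig hint is the max over every per-block hint
 * and every run formed by one block's right_free joined to the next
 * block's left_free -- far fewer candidates than a full bitmap scan. */
int chunk_contig_hint(const struct block_md *blocks, int nr_blocks)
{
    int best = 0;
    for (int i = 0; i < nr_blocks; i++) {
        if (blocks[i].contig_hint > best)
            best = blocks[i].contig_hint;
        if (i + 1 < nr_blocks) {
            int span = blocks[i].right_free + blocks[i + 1].left_free;
            if (span > best)
                best = span;
        }
    }
    return best;
}
```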
    • percpu: update free path to take advantage of contig hints · b185cd0d
      Authored by Dennis Zhou (Facebook)
      The bitmap allocator must keep its metadata consistent. The easiest
      way is to scan after every allocation for each affected block and the
      entire chunk. This is rather expensive.
      
      The free path can take advantage of the current contig hints to avoid
      scanning within the start and end blocks. If a scan is needed, it can
      be done by scanning backwards from the start and forwards from the
      end to identify the entire free area the freed region can be combined
      with. The blocks can then be updated by some basic checks rather than
      complete block scans.
      
      A chunk scan happens when the freed area makes a page free, a block
      free, or spans across blocks. This is necessary as the contig hint at
      this point could span across blocks. The check uses the minimum of
      the page size and the block size to allow for variable-sized blocks.
      There is a tradeoff here with not updating after every free: a contig
      hint in one block may be mergeable with the contig hint in the next
      block, so the chunk's contig hint can be off by up to a page.
      However, if the chunk's contig hint is contained in one block, the
      contig hint will be accurate.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      b185cd0d
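The backwards/forwards merge scan on the free path can be illustrated like this. This is a sketch over a byte-per-bit map, not the kernel's packed bitmaps or metadata; it only demonstrates how the freed span is extended in both directions to find the whole free run it joins.

```c
#include <assert.h>

#define NBITS 64

/* Illustrative free-path merge: after clearing [start, end), scan
 * backwards from start and forwards from end to find the entire free
 * run the freed area can be combined with.  'used' is one byte per
 * allocation bit here, purely for clarity. */
void free_and_merge(unsigned char *used, int start, int end,
                    int *run_start, int *run_end)
{
    for (int i = start; i < end; i++)
        used[i] = 0;

    int s = start, e = end;
    while (s > 0 && !used[s - 1])   /* extend left over free bits */
        s--;
    while (e < NBITS && !used[e])   /* extend right over free bits */
        e++;
    *run_start = s;
    *run_end = e;
}
```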
    • percpu: update alloc path to only scan if contig hints are broken · fc304334
      Authored by Dennis Zhou (Facebook)
      Metadata is kept per block to keep track of where the contig hints
      are. Scanning can be avoided when the contig hints are not broken. In
      that case, the left and right contig hints have to be managed
      manually.
      
      This patch changes the allocation path's hint updating to only scan
      when contig hints are broken.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      fc304334
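The decision of whether a hint is "broken" can be reduced to an overlap test, sketched below. The field names are assumptions modeled on the patch's description, not the kernel's exact layout: a rescan is only required when the carved allocation intersects the run backing the current contig hint.

```c
#include <assert.h>

/* Illustrative sketch: a block remembers the run backing its contig
 * hint.  After an allocation at [off, off + bits), a full block rescan
 * is only needed if the allocation overlapped that run; otherwise the
 * hint still stands and only edge counters need manual adjustment. */
struct block_md {
    int contig_start;  /* offset of the run backing contig_hint */
    int contig_hint;   /* length of that run, in bits */
};

int alloc_needs_rescan(const struct block_md *b, int off, int bits)
{
    int c_end = b->contig_start + b->contig_hint;
    /* standard interval-overlap test */
    return off < c_end && off + bits > b->contig_start;
}
```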
    • percpu: keep track of the best offset for contig hints · 268625a6
      Authored by Dennis Zhou (Facebook)
      This patch makes the contig hint starting-offset optimization from
      the previous patch as honest as it can be: for both chunk and block
      starting offsets, it makes sure the starting offset with the best
      alignment is kept.
      
      The block skip optimization is added in a later patch, when the
      pcpu_find_block_fit iterator is swapped in.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      268625a6
    • percpu: skip chunks if the alloc does not fit in the contig hint · 13f96637
      Authored by Dennis Zhou (Facebook)
      This patch adds chunk->contig_bits_start to keep track of the contig
      hint's offset, and a check to skip the chunk if the allocation does
      not fit. If the chunk's contig hint starting offset cannot satisfy an
      allocation, the allocator assumes there is enough memory pressure in
      this chunk to either use a different chunk or create a new one. This
      accepts a less tight packing for a smoother latency curve.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      13f96637
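The fit check can be sketched as below. The function and its simplifications are illustrative (the kernel check is more involved): round the hint's start up to the requested power-of-2 alignment and see whether the request still fits inside the run.

```c
#include <assert.h>

/* Round x up to the next multiple of a; valid for power-of-2 a. */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Illustrative skip check: given the chunk's largest free run
 * (contig_bits) starting at contig_bits_start, test whether an
 * allocation of alloc_bits with a power-of-2 alignment fits once the
 * start is rounded up.  If not, the chunk is skipped entirely. */
int chunk_can_fit(int contig_bits_start, int contig_bits,
                  int alloc_bits, int align)
{
    int start = ALIGN_UP(contig_bits_start, align);
    int end = contig_bits_start + contig_bits;
    return start + alloc_bits <= end;
}
```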
    • percpu: add first_bit to keep track of the first free in the bitmap · 86b442fb
      Authored by Dennis Zhou (Facebook)
      This patch adds first_bit to keep track of the first free bit in the
      bitmap. This hint helps prevent scanning of fully allocated blocks.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      86b442fb
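The effect of the hint can be shown with a minimal sketch (byte-per-bit map, hypothetical function name): starting the scan at the remembered first free bit skips the fully allocated prefix entirely.

```c
#include <assert.h>

#define NBITS 32

/* Illustrative sketch of the first_bit hint: begin scanning at the
 * lowest bit known to be free instead of walking fully allocated
 * space from offset 0. */
int find_first_free(const unsigned char *used, int first_bit_hint)
{
    for (int i = first_bit_hint; i < NBITS; i++)
        if (!used[i])
            return i;
    return -1;
}
```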
    • percpu: introduce bitmap metadata blocks · ca460b3c
      Authored by Dennis Zhou (Facebook)
      This patch introduces the bitmap metadata blocks and adds the
      skeleton of the code that will be used to maintain these blocks.
      Each chunk's bitmap is made up of full metadata blocks. These blocks
      maintain basic metadata to help prevent scanning unnecessarily to
      update hints. Full scanning methods are used for the skeleton and
      will be replaced in the coming patches. A number of helper functions
      are added as well to convert pages to blocks and manage offsets.
      Comments will be updated as the final version of each function is
      added.
      
      There exists a relationship between PAGE_SIZE,
      PCPU_BITMAP_BLOCK_SIZE, the region size, and unit_size. Every
      chunk's region (including offsets) is page aligned at the beginning
      to preserve alignment. The end is aligned to
      LCM(PAGE_SIZE, PCPU_BITMAP_BLOCK_SIZE) to ensure that the end lines
      up with the populated page map, which is tracked per page, and that
      every metadata block is fully accounted for. The unit_size is
      already page aligned, but must also be aligned with
      PCPU_BITMAP_BLOCK_SIZE to ensure full metadata blocks.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      ca460b3c
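The offset-to-block conversions can be sketched with illustrative constants. The values below are assumptions for the example (a 4K page, 4-byte bitmap granularity, and a block covering exactly one page, which makes the required LCM alignment trivially a page); they mirror the patch's helpers in spirit only.

```c
#include <assert.h>

/* Illustrative constants, not the kernel's configuration. */
#define PAGE_SIZE              4096
#define PCPU_MIN_ALLOC_SIZE    4    /* one bitmap bit covers 4 bytes */
#define PCPU_BITMAP_BLOCK_SIZE PAGE_SIZE
#define PCPU_BITMAP_BLOCK_BITS (PCPU_BITMAP_BLOCK_SIZE / PCPU_MIN_ALLOC_SIZE)

/* byte offset within a chunk -> bit index in its allocation bitmap */
static int pcpu_off_to_bit(int off)
{
    return off / PCPU_MIN_ALLOC_SIZE;
}

/* bit index -> index of the metadata block covering it */
static int pcpu_bit_to_block(int bit)
{
    return bit / PCPU_BITMAP_BLOCK_BITS;
}
```

With these definitions every block maps to a whole number of bits and pages, which is exactly why the commit requires unit_size to be aligned to both PAGE_SIZE and PCPU_BITMAP_BLOCK_SIZE.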
    • percpu: replace area map allocator with bitmap · 40064aec
      Authored by Dennis Zhou (Facebook)
      The percpu memory allocator is experiencing scalability issues when
      allocating and freeing large numbers of counters as in BPF.
      Additionally, there is a corner case where iteration is triggered over
      all chunks if the contig_hint is the right size, but wrong alignment.
      
      This patch replaces the area map allocator with a basic bitmap allocator
      implementation. Each subsequent patch will introduce new features and
      replace full scanning functions with faster non-scanning options when
      possible.
      
      Implementation:
      This patchset removes the area map allocator in favor of a bitmap
      allocator backed by metadata blocks. The primary goal is to provide
      consistency in performance and memory footprint with a focus on small
      allocations (< 64 bytes). The bitmap removes the heavy memmove from the
      freeing critical path and provides a consistent memory footprint. The
      metadata blocks provide a bound on the amount of scanning required by
      maintaining a set of hints.
      
      In an effort to make freeing fast, the metadata is updated on the free
      path if the new free area makes a page free, a block free, or spans
      across blocks. This causes the chunk's contig hint to potentially be
      smaller than what it could allocate by up to the smaller of a page or a
      block. If the chunk's contig hint is contained within a block, a check
      occurs and the hint is kept accurate. Metadata is always kept accurate
      on allocation, so there will not be a situation where a chunk has a
      later contig hint than available.
      
      Evaluation:
      I have primarily tested against a simple workload: allocation of
      1 million (2^20) objects of varying size. Deallocation was done in
      order, alternating, and in reverse. These numbers were collected
      after rebasing on top of a80099a1. I present the worst-case numbers
      here:
      
        Area Map Allocator:
      
              Object Size | Alloc Time (ms) | Free Time (ms)
              ----------------------------------------------
                    4B    |        310      |     4770
                   16B    |        557      |     1325
                   64B    |        436      |      273
                  256B    |        776      |      131
                 1024B    |       3280      |      122
      
        Bitmap Allocator:
      
              Object Size | Alloc Time (ms) | Free Time (ms)
              ----------------------------------------------
                    4B    |        490      |       70
                   16B    |        515      |       75
                   64B    |        610      |       80
                  256B    |        950      |      100
                 1024B    |       3520      |      200
      
      This data demonstrates the inability of the area map allocator to
      handle less-than-ideal situations. In the best case of reverse
      deallocation, the area map allocator was able to perform within range
      of the bitmap allocator. In the worst case, freeing took nearly
      5 seconds for 1 million 4-byte objects. The bitmap allocator
      dramatically improves the consistency of the free path: the small
      allocations performed nearly identically regardless of the freeing
      pattern.
      
      While it does add to the allocation latency, the allocation scenario
      here is optimal for the area map allocator. The area map allocator runs
      into trouble when it is allocating in chunks where the latter half is
      full. It is difficult to replicate this, so I present a variant where
      the pages are second half filled. Freeing was done sequentially. Below
      are the numbers for this scenario:
      
        Area Map Allocator:
      
              Object Size | Alloc Time (ms) | Free Time (ms)
              ----------------------------------------------
                    4B    |       4118      |     4892
                   16B    |       1651      |     1163
                   64B    |        598      |      285
                  256B    |        771      |      158
                 1024B    |       3034      |      160
      
        Bitmap Allocator:
      
              Object Size | Alloc Time (ms) | Free Time (ms)
              ----------------------------------------------
                    4B    |        481      |       67
                   16B    |        506      |       69
                   64B    |        636      |       75
                  256B    |        892      |       90
                 1024B    |       3262      |      147
      
      The data shows a parabolic performance curve for the area map
      allocator. This is because the memmove operation is the dominant
      cost at lower object sizes, as more objects are packed in a chunk,
      while at higher object sizes the traversal of the chunk slots is the
      dominating cost. The bitmap allocator suffers this problem as well.
      The above data shows the area map allocator's inability to scale on
      the allocation path, and that the bitmap allocator demonstrates
      consistent performance in general.
      
      The second problem of additional scanning can result in the area map
      allocator completing in 52 minutes when trying to allocate 1 million
      4-byte objects with 8-byte alignment. The same workload takes
      approximately 16 seconds to complete for the bitmap allocator.
      
      V2:
      Fixed a bug in pcpu_alloc_first_chunk: end_offset was setting the
      bitmap using bytes instead of bits.
      
      Added a comment to pcpu_cnt_pop_pages to explain bitmap_weight.
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      40064aec
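The core operation the patch introduces can be illustrated with a minimal find-and-mark over a byte-per-bit map. This is a sketch of the principle only: the kernel works on packed bitmaps and layers the hint machinery from the later patches on top, but the essential contrast with the area map is that freeing is just clearing bits, with no memmove on the critical path.

```c
#include <assert.h>

#define MAP_BITS 64

/* Minimal sketch of a bitmap allocation: find the first run of
 * alloc_bits clear entries at an align_bits-aligned offset, mark it
 * used, and return the offset (or -1 if nothing fits). */
int bitmap_alloc_area(unsigned char *used, int alloc_bits, int align_bits)
{
    for (int off = 0; off + alloc_bits <= MAP_BITS; off += align_bits) {
        int ok = 1;
        for (int i = off; i < off + alloc_bits; i++)
            if (used[i]) { ok = 0; break; }
        if (ok) {
            for (int i = off; i < off + alloc_bits; i++)
                used[i] = 1;   /* freeing is simply clearing these */
            return off;
        }
    }
    return -1;
}
```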
  2. 26 Jul 2017, 13 commits
  3. 17 Jul 2017, 2 commits
  4. 22 Jun 2017, 1 commit
  5. 21 Jun 2017, 4 commits
  6. 11 May 2017, 1 commit
  7. 26 Mar 2017, 1 commit
  8. 16 Mar 2017, 1 commit
    • locking/lockdep: Handle statically initialized PER_CPU locks properly · 383776fa
      Authored by Thomas Gleixner
      If a PER_CPU struct which contains a spin_lock is statically initialized
      via:
      
      DEFINE_PER_CPU(struct foo, bla) = {
      	.lock = __SPIN_LOCK_UNLOCKED(bla.lock)
      };
      
      then lockdep assigns a separate key to each lock, because the logic
      for assigning a key to statically initialized locks is to use the
      address as the key. With per-CPU locks the address is obviously
      different on each CPU.
      
      That's wrong, because all locks should have the same key.
      
      To solve this the following modifications are required:
      
       1) Extend the is_kernel/module_percpu_addr() functions to hand back the
          canonical address of the per CPU address, i.e. the per CPU address
          minus the per CPU offset.
      
       2) Check the lock address with these functions and if the per CPU check
          matches use the returned canonical address as the lock key, so all per
          CPU locks have the same key.
      
       3) Move the static_obj(key) check into look_up_lock_class() so this check
          can be avoided for statically initialized per CPU locks.  That's
          required because the canonical address fails the static_obj(key) check
          for obvious reasons.
      Reported-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      [ Merged Dan's fixups for !MODULES and !SMP into this patch. ]
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dan Murphy <dmurphy@ti.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20170227143736.pectaimkjkan5kow@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      383776fa
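Step 1 of the fix can be modeled with a few lines of arithmetic. This is an illustrative model, not the kernel API: `per_cpu_offset` below is a stand-in for the real per-CPU offset lookup. Each CPU sees the lock at base + offset(cpu), so subtracting the CPU's own offset recovers one canonical address that all CPUs share, which then serves as the single lockdep key.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the per-CPU offset of a given CPU (made-up values). */
static uintptr_t per_cpu_offset(int cpu)
{
    return (uintptr_t)cpu * 0x10000;
}

/* Canonical address = per-CPU address minus that CPU's offset; every
 * CPU computes the same key for the same statically initialized lock. */
static uintptr_t canonical_lock_key(uintptr_t lock_addr, int cpu)
{
    return lock_addr - per_cpu_offset(cpu);
}
```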
  9. 07 Mar 2017, 1 commit
  10. 28 Feb 2017, 1 commit
  11. 13 Dec 2016, 1 commit
  12. 20 Oct 2016, 1 commit
    • percpu: ensure the requested alignment is power of two · 3ca45a46
      Authored by zijun_hu
      The percpu allocator expectedly assumes that the requested alignment
      is a power of two but hasn't been verifying the input. If the
      specified alignment isn't a power of two, the allocator can
      malfunction. Add the sanity check.
      
      The following is detailed analysis of the effects of alignments which
      aren't power of two.
      
       The alignment must be at least even, since the LSB of a chunk->map
       element is used as the free/in-use flag of an area; besides, the
       alignment must be a power of 2, since ALIGN() only works correctly
       for power-of-2 alignments yet is used by pcpu_fit_in_area(). IOW,
       the current allocator only works well for power-of-2 aligned area
       allocations.
      
       See the counterexample below for why an odd alignment doesn't work.
       Let's assume area [16, 36) is free but its previous one is in-use,
       and we want to allocate an area with @size == 8 and @align == 7.
       The larger area [16, 36) is eventually split into three areas
       [16, 21), [21, 29), [29, 36). However, due to the usage of a
       chunk->map element, the actual offset of the target area [21, 29)
       is 21 but is recorded in the relevant element as 20; moreover, the
       residual tail free area [29, 36) is mistaken as in-use and silently
       lost.
      
       Unlike the roundup() macro, ALIGN(x, a) doesn't work if @a isn't a
       power of 2. For example, roundup(10, 6) == 12 but
       ALIGN(10, 6) == 10, and the latter result is obviously not desired.
      
      tj: Code style and patch description updates.
      Signed-off-by: zijun_hu <zijun_hu@htc.com>
      Suggested-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      3ca45a46
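The ALIGN()-vs-roundup() difference the commit cites can be checked directly. The macros below reproduce the standard mask-based and division-based definitions for illustration: the mask trick silently gives a wrong answer when the alignment is not a power of two, which is exactly why the sanity check is needed.

```c
#include <assert.h>

/* Mask-based alignment: only correct for power-of-2 @a. */
#define ALIGN(x, a)   (((x) + (a) - 1) & ~((a) - 1))

/* Division-based rounding: correct for any positive @y. */
#define roundup(x, y) ((((x) + (y) - 1) / (y)) * (y))

/* The sanity check the commit adds, in essence. */
static int is_power_of_2(unsigned long n)
{
    return n != 0 && (n & (n - 1)) == 0;
}
```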
  13. 05 Oct 2016, 2 commits
    • mm/percpu.c: fix potential memory leakage for pcpu_embed_first_chunk() · 9b739662
      Authored by zijun_hu
      In order to ensure the percpu group areas within a chunk aren't
      distributed too sparsely, pcpu_embed_first_chunk() takes the error
      handling path when a chunk spans more than 3/4 of the VMALLOC area.
      However, during the error handling it forgets to free the memory
      allocated for all percpu groups, by jumping to label @out_free
      instead of @out_free_areas.
      
      This causes a memory leak if that rare scenario actually happens. To
      fix the issue, check the chunk's spanned area immediately after
      completing memory allocation for all percpu groups, and jump to
      label @out_free_areas to free the memory before returning if the
      check fails.
      
      To verify the approach, all memory allocated was dumped, the jump
      was forced, and all memory freed was dumped; the result confirms
      that everything allocated in this function is freed.
      
      BTW, the approach was chosen after considering the scenarios below:
       - we don't jump to label @out_free directly to fix this issue,
         since we might free several allocated memory blocks twice
       - the aim of jumping after pcpu_setup_first_chunk() is bypassing
         the freeing of usable memory rather than handling an error;
         moreover, the function does not return an error code in any case,
         it either panics due to BUG_ON() or returns 0.
      Signed-off-by: zijun_hu <zijun_hu@htc.com>
      Tested-by: zijun_hu <zijun_hu@htc.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      9b739662
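The shape of the fix can be sketched in miniature. This is not the kernel function, just an illustration of the goto-based cleanup pattern involved: the span check runs immediately after all group areas are allocated, and failure takes the label that frees every allocated area rather than one that skips them.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative sketch of the error-path structure: allocate one area
 * per group, validate the spanned size right after allocation, and on
 * any failure unwind through out_free_areas so nothing leaks. */
int setup_groups(size_t nr, void **areas, size_t span, size_t limit)
{
    size_t i = 0;
    for (; i < nr; i++) {
        areas[i] = malloc(16);
        if (!areas[i])
            goto out_free_areas;
    }
    if (span > limit)          /* check immediately after allocation ... */
        goto out_free_areas;   /* ... so every allocated area is freed */
    return 0;

out_free_areas:
    while (i--)                /* frees areas[0..i-1], skips the failed one */
        free(areas[i]);
    return -1;
}
```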
    • mm/percpu.c: correct max_distance calculation for pcpu_embed_first_chunk() · 93c76b6b
      Authored by zijun_hu
      pcpu_embed_first_chunk() calculates the range a percpu chunk spans
      into @max_distance and uses it to ensure that a chunk is not too big
      compared to the total vmalloc area. However, during the calculation
      it used an incorrect top address, adding a unit size to the highest
      group's base address.
      
      This can make the calculated max_distance slightly smaller than the
      actual distance, although given the scale of the values involved the
      error is very unlikely to have an actual impact.
      
      Fix this issue by adding the group's size instead of a unit size.
      
      BTW, the type of variable max_distance is changed from size_t to
      unsigned long too, based on the considerations below:
       - type unsigned long usually has the same width as IP core
         registers and fits well here
       - it makes @max_distance's type consistent with the operands it is
         calculated against, such as @ai->groups[i].base_offset and the
         macro VMALLOC_TOTAL
       - type unsigned long is more universal than size_t; size_t is
         typedef'd to unsigned int or unsigned long among various ARCHs
      Signed-off-by: zijun_hu <zijun_hu@htc.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      93c76b6b
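The arithmetic of the fix can be shown with made-up numbers. The helper below is purely illustrative: a group containing several units ends at base_offset plus the whole group's size (units × unit size), so the old code, which added only one unit size, undercounted the span whenever a group held more than one unit.

```c
#include <assert.h>

/* Illustrative: the top of a group is its base offset plus the whole
 * group size, i.e. nr_units * unit_size -- not base + one unit_size. */
static unsigned long group_end(unsigned long base_offset,
                               unsigned long nr_units,
                               unsigned long unit_size)
{
    return base_offset + nr_units * unit_size;
}
```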
  14. 25 May 2016, 1 commit