1. 02 Apr 2009, 2 commits
  2. 10 Mar 2009, 2 commits
    • percpu: generalize embedding first chunk setup helper · 66c3a757
      Committed by Tejun Heo
      Impact: code reorganization
      
      Separate out the embedding first chunk setup helper from the x86
      embedding first chunk allocator and put it in mm/percpu.c.  This will
      be used by the default percpu first chunk allocator and possibly by
      other archs (see the sketch below).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      66c3a757
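
      The core idea of the generalized helper, one contiguous allocation
      carved into equal per-CPU units, can be modeled in a few lines of plain
      C.  The sketch below is a hypothetical userspace model, not kernel
      code; NR_CPUS, STATIC_SIZE, DYN_SIZE and UNIT_ALIGN are invented
      values.

        /* Userspace model of embedding first chunk setup (illustrative only). */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define NR_CPUS      4
        #define STATIC_SIZE  0x4000   /* size of the static percpu template */
        #define DYN_SIZE     0x5000   /* room kept for dynamic percpu allocations */
        #define UNIT_ALIGN   0x1000   /* round each unit up to a page */

        static size_t round_up(size_t v, size_t a) { return (v + a - 1) & ~(a - 1); }

        int main(void)
        {
            size_t unit_size = round_up(STATIC_SIZE + DYN_SIZE, UNIT_ALIGN);
            char *base = malloc(unit_size * NR_CPUS);  /* one block for all CPUs */

            for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                char *unit = base + (size_t)cpu * unit_size;
                memset(unit, 0, unit_size);  /* stands in for copying the template */
                printf("cpu%d unit at offset %#zx, %#zx bytes\n",
                       cpu, (size_t)cpu * unit_size, unit_size);
            }
            free(base);
            return 0;
        }
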
    • percpu: more flexibility for @dyn_size of pcpu_setup_first_chunk() · 6074d5b0
      Committed by Tejun Heo
      Impact: cleanup, more flexibility for first chunk init
      
      A non-negative @dyn_size used to be allowed if and only if @unit_size
      wasn't auto.  This restriction stemmed from an implementation detail
      and made things a bit less intuitive.  This patch allows @dyn_size to
      be specified regardless of @unit_size and swaps the positions of
      @dyn_size and @unit_size so that the parameter order makes more sense
      (static, reserved and dyn sizes followed by the enclosing unit_size).
      
      While at it, add a @unit_size >= PCPU_MIN_UNIT_SIZE sanity check (see
      the sketch below).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      6074d5b0
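
      A rough illustration of the new argument order and the added sanity
      check, under the assumption that -1 still means "auto".
      pcpu_first_chunk_check() is a made-up helper; the real
      pcpu_setup_first_chunk() takes more arguments and does far more.

        #include <assert.h>
        #include <stdio.h>
        #include <sys/types.h>          /* ssize_t */

        #define PCPU_MIN_UNIT_SIZE (32 * 1024)   /* illustrative value */

        /* new order: static, reserved and dyn sizes, then the enclosing unit_size */
        static void pcpu_first_chunk_check(size_t static_size, size_t reserved_size,
                                           ssize_t dyn_size, ssize_t unit_size)
        {
            if (unit_size >= 0) {
                /* an explicit unit size must meet the minimum and hold everything */
                assert(unit_size >= PCPU_MIN_UNIT_SIZE);
                assert((size_t)unit_size >= static_size + reserved_size +
                       (size_t)(dyn_size < 0 ? 0 : dyn_size));
            }
            /* @dyn_size may now be given whether or not @unit_size is auto (-1) */
        }

        int main(void)
        {
            pcpu_first_chunk_check(0x4000, 0x2000, 0x5000, -1);   /* auto unit_size */
            pcpu_first_chunk_check(0x4000, 0x2000, -1, 0x10000);  /* auto dyn_size  */
            puts("both combinations accepted");
            return 0;
        }
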
  3. 06 Mar 2009, 4 commits
    • x86, percpu: setup reserved percpu area for x86_64 · 6b19b0c2
      Committed by Tejun Heo
      Impact: fix relocation overflow during module load
      
      x86_64 uses 32bit relocations for symbol access, so static percpu
      symbols, whether in the core kernel or in modules, must be within 2GB
      of the percpu segment base, which the dynamic percpu allocator doesn't
      guarantee.  This patch makes x86_64 reserve PERCPU_MODULE_RESERVE
      bytes in the
      This patch makes x86_64 reserve PERCPU_MODULE_RESERVE bytes in the
      first chunk so that module percpu areas are always allocated from the
      first chunk which is always inside the relocatable range.
      
      This problem exists for any percpu allocator but is easily triggered
      when using the embedding allocator because with it the second chunk is
      located beyond 2GB.
      
      This patch also changes the meaning of PERCPU_DYNAMIC_RESERVE such
      that it only indicates the size of the area to reserve for dynamic
      allocation, as the static and dynamic areas can now be separate.  The
      new PERCPU_DYNAMIC_RESERVE is increased by 4k for both 32 and 64 bit,
      as the reserved area separation eats away some allocatable space and
      having slightly more headroom (currently between 4 and 8k after a
      minimal boot, sans module area) makes sense for common case
      performance.
      
      x86_32 can address any location from anywhere and doesn't need the
      reservation (a layout sketch follows below).
      
      Mike Galbraith first reported the problem and bisected it to the
      embedding percpu allocator commit.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Mike Galbraith <efault@gmx.de>
      Reported-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
      6b19b0c2
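
      A layout sketch of the resulting x86_64 first chunk.  The sizes below
      are assumptions for illustration, not the kernel's
      PERCPU_MODULE_RESERVE / PERCPU_DYNAMIC_RESERVE values; the point is
      that module percpu areas are carved from the reserved slice and
      therefore always stay in chunk 0, within 2GB of the percpu segment
      base.

        #include <stdio.h>

        #define MODEL_STATIC_SIZE     (44 << 10)  /* core kernel static percpu area   */
        #define MODEL_MODULE_RESERVE  ( 8 << 10)  /* set aside for module percpu vars */
        #define MODEL_DYNAMIC_RESERVE (20 << 10)  /* early dynamic allocations        */

        int main(void)
        {
            size_t off = 0;

            printf("static  : [%#zx, %#zx)\n", off, off + MODEL_STATIC_SIZE);
            off += MODEL_STATIC_SIZE;
            /* module percpu symbols land here, so their relocations always fit */
            printf("reserved: [%#zx, %#zx)\n", off, off + MODEL_MODULE_RESERVE);
            off += MODEL_MODULE_RESERVE;
            printf("dynamic : [%#zx, %#zx)\n", off, off + MODEL_DYNAMIC_RESERVE);
            return 0;
        }
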
    • percpu, module: implement reserved allocation and use it for module percpu variables · edcb4639
      Committed by Tejun Heo
      Impact: add reserved allocation functionality and use it for module
      	percpu variables
      
      This patch implements reserved allocation from the first chunk.  When
      setting up the first chunk, an arch can ask to set aside a certain
      number of bytes right after the core static area, which are then
      available only through a separate reserved allocator.  This will be
      used primarily for module static percpu variables on architectures
      with a limited relocation range, to ensure that the module percpu
      symbols stay inside the relocatable range.
      
      If a reserved area is requested, the first chunk becomes reserved and
      isn't available for regular allocation.  If the first chunk also
      includes a piggy-back dynamic allocation area, a separate chunk
      mapping the same region is created to serve dynamic allocation.  The
      first one is called the static first chunk and the second the dynamic
      first chunk.  Although they share the page map, their different area
      map initializations guarantee that they serve disjoint areas according
      to their purposes.
      
      If the arch doesn't set up a reserved area, reserved allocation is
      handled like any other allocation (see the model below).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      edcb4639
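
      The "two chunks, one region" arrangement can be modeled with a pair of
      bump allocators that share a buffer but own disjoint slices of it.
      Everything below is an illustrative userspace model; the real chunk
      and area-map structures are considerably more involved.

        #include <stdio.h>
        #include <stdlib.h>

        struct model_chunk {
            char  *base;        /* shared first-chunk memory         */
            size_t start, end;  /* the slice this chunk may hand out */
            size_t cursor;      /* next free byte within the slice   */
        };

        static void *model_alloc(struct model_chunk *c, size_t size)
        {
            if (c->cursor + size > c->end)
                return NULL;
            void *p = c->base + c->cursor;
            c->cursor += size;
            return p;
        }

        int main(void)
        {
            size_t static_sz = 0x4000, reserved_sz = 0x2000, dyn_sz = 0x3000;
            char *region = malloc(static_sz + reserved_sz + dyn_sz);

            /* "static first chunk": serves only reserved (module) allocations */
            struct model_chunk rsv = { region, static_sz, static_sz + reserved_sz,
                                       static_sz };
            /* "dynamic first chunk": same region, serves only the dynamic tail */
            struct model_chunk dyn = { region, static_sz + reserved_sz,
                                       static_sz + reserved_sz + dyn_sz,
                                       static_sz + reserved_sz };

            printf("reserved alloc at offset %#zx\n",
                   (size_t)((char *)model_alloc(&rsv, 256) - region));
            printf("dynamic  alloc at offset %#zx\n",
                   (size_t)((char *)model_alloc(&dyn, 256) - region));
            free(region);
            return 0;
        }
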
    • x86: make embedding percpu allocator return excessive free space · 9a4f8a87
      Committed by Tejun Heo
      Impact: reduce unnecessary memory usage on certain configurations
      
      The embedding percpu allocator allocates unit_size *
      smp_num_possible_cpus() bytes consecutively and uses it for the first
      chunk.  However, if the static area is small, this can result in
      excessive preallocated free space in the first chunk due to the
      PCPU_MIN_UNIT_SIZE restriction.
      
      This patch makes the embedding percpu allocator preallocate only
      what's necessary as described by PERCPU_DYNAMIC_RESERVE and return the
      leftover to the bootmem allocator (see the sizing sketch below).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      9a4f8a87
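
      The sizing decision reduces to a small piece of arithmetic.  The
      constants below are stand-ins for the kernel's PCPU_MIN_UNIT_SIZE and
      PERCPU_DYNAMIC_RESERVE; only the "keep what's needed, hand the rest
      back" logic matters.

        #include <stdio.h>

        #define MODEL_PCPU_MIN_UNIT_SIZE     (32 << 10)
        #define MODEL_PERCPU_DYNAMIC_RESERVE (20 << 10)

        int main(void)
        {
            size_t static_size = 4 << 10;   /* a small static percpu area */
            size_t needed = static_size + MODEL_PERCPU_DYNAMIC_RESERVE;
            size_t unit_size = needed < MODEL_PCPU_MIN_UNIT_SIZE
                               ? MODEL_PCPU_MIN_UNIT_SIZE : needed;

            /* before: the whole unit stayed preallocated as free percpu space */
            /* after : only 'needed' is kept, the rest goes back to bootmem    */
            printf("unit %zuK, kept %zuK, returned %zuK per cpu\n",
                   unit_size >> 10, needed >> 10, (unit_size - needed) >> 10);
            return 0;
        }
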
    • percpu: use negative for auto for pcpu_setup_first_chunk() arguments · cafe8816
      Committed by Tejun Heo
      Impact: argument semantic cleanup
      
      In pcpu_setup_first_chunk(), zero @unit_size and @dyn_size meant
      auto-sizing.  That's okay for @unit_size, as 0 doesn't make sense
      there, but a dynamic reserve size of 0 is valid.  Also, if an arch's
      @dyn_size is calculated from other parameters, it might end up passing
      in 0 for @dyn_size and malfunction when the size is automatically
      adjusted.
      
      This patch makes both @unit_size and @dyn_size ssize_t and uses -1 for
      auto sizing (see the sketch below).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      cafe8816
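
      A minimal illustration of "-1 means auto" versus the old "0 means
      auto".  resolve_dyn_size() is a hypothetical helper used only for this
      example.

        #include <stdio.h>
        #include <sys/types.h>   /* ssize_t */

        #define MODEL_DEFAULT_DYN_SIZE (20 << 10)

        static size_t resolve_dyn_size(ssize_t dyn_size)
        {
            /* negative requests auto-sizing; 0 is now a legitimate explicit size */
            return dyn_size < 0 ? MODEL_DEFAULT_DYN_SIZE : (size_t)dyn_size;
        }

        int main(void)
        {
            printf("auto       -> %zu\n", resolve_dyn_size(-1));
            printf("explicit 0 -> %zu\n", resolve_dyn_size(0));  /* no longer auto */
            return 0;
        }
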
  4. 25 Feb 2009, 1 commit
  5. 24 Feb 2009, 5 commits
    • x86: add remapping percpu first chunk allocator · 8ac83757
      Committed by Tejun Heo
      Impact: add better first percpu allocation for NUMA
      
      On NUMA, the embedding allocator can't be used because the different
      units can't be made to fall on the correct NUMA nodes.  To use large
      page mappings, each unit needs to be remapped.  However, percpu areas
      are usually much smaller than the large page size and the unused space
      hurts a lot as the number of cpus grows.  This allocator remaps large
      pages for each chunk but gives the unused part back to the bootmem
      allocator, leaving the large pages mapped twice.
      
      This adds slightly to the TLB pressure but is much better than using
      4k mappings while still being NUMA-friendly (see the sketch below).
      
      Ingo suggested that this would be the correct approach for NUMA.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      8ac83757
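
      A back-of-the-envelope model of why remap-and-return matters, assuming
      2MB large pages and an invented per-cpu unit size.

        #include <stdio.h>

        int main(void)
        {
            size_t large_page = 2 << 20;    /* 2MB mapping remapped per cpu     */
            size_t unit_size  = 128 << 10;  /* actual percpu unit, much smaller */
            int    nr_cpus    = 64;

            size_t wasted_if_kept = (large_page - unit_size) * nr_cpus;
            printf("without returning the tail, %zu MB would sit unused\n",
                   wasted_if_kept >> 20);
            /* the remapping allocator hands that tail back to bootmem instead,
             * at the cost of those pages being reachable through two mappings */
            return 0;
        }
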
    • x86: add embedding percpu first chunk allocator · 89c92151
      Committed by Tejun Heo
      Impact: add better first percpu allocation for !NUMA
      
      On !NUMA, we can simply allocate contiguous memory and use it for the
      first chunk without mapping it into the vmalloc area.  As the memory
      area is covered by the large page physical memory mapping, it allows
      the dynamic percpu allocator to avoid any TLB overhead for the static
      percpu area and whatever falls into the first chunk, and the
      implementation is very simple too.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      89c92151
    • x86: separate out setup_pcpu_4k() from setup_per_cpu_areas() · 5f5d8405
      Committed by Tejun Heo
      Impact: modularize percpu first chunk allocation
      
      x86 is going to have a few different strategies for first chunk
      allocation.  Modularize it by separating out the current allocation
      mechanism into pcpu_alloc_bootmem() and setup_pcpu_4k().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      5f5d8405
    • percpu: give more latitude to arch specific first chunk initialization · 8d408b4b
      Committed by Tejun Heo
      Impact: more latitude for first percpu chunk allocation
      
      The first percpu chunk serves the kernel static percpu area and may or
      may not contain extra room for further dynamic allocation.
      Initialization of the first chunk needs to be done before normal
      memory allocation service is up, so it has its own init path -
      pcpu_setup_static().
      
      It seems archs need more latitude while initializing the first chunk,
      for example to take advantage of large page mappings.  This patch
      makes the following changes to allow this.
      
      * Define PERCPU_DYNAMIC_RESERVE to give the arch a hint about how much
        space to reserve in the first chunk for further dynamic allocation.
      
      * Rename pcpu_setup_static() to pcpu_setup_first_chunk().
      
      * Make pcpu_setup_first_chunk() much more flexible by fetching the
        page pointer through a callback and adding optional @unit_size,
        @free_size and @base_addr arguments which allow archs to selectively
        adapt parts of the chunk initialization to their liking (a callback
        sketch follows below).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      8d408b4b
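
      A sketch of the callback-driven page fetch described above.  The
      typedef and the helper are hypothetical stand-ins for the interface
      this commit introduces; only the shape (the arch supplies pages per
      cpu and per page index) is the point.

        #include <stdio.h>

        struct page { int id; };   /* stand-in; the real struct page is opaque here */

        /* arch-provided: return the page backing 'pageno' of 'cpu', or NULL */
        typedef struct page *(*model_get_page_fn_t)(unsigned int cpu, int pageno);

        static void model_populate_first_chunk(model_get_page_fn_t get_page,
                                               unsigned int nr_cpus,
                                               int pages_per_unit)
        {
            for (unsigned int cpu = 0; cpu < nr_cpus; cpu++)
                for (int i = 0; i < pages_per_unit; i++)
                    if (!get_page(cpu, i))
                        printf("cpu%u page %d left unpopulated\n", cpu, i);
        }

        static struct page demo_pages[4];

        static struct page *demo_get_page(unsigned int cpu, int pageno)
        {
            (void)cpu;
            return pageno < 4 ? &demo_pages[pageno] : NULL;  /* pretend 4 pages exist */
        }

        int main(void)
        {
            model_populate_first_chunk(demo_get_page, 2, 6);
            return 0;
        }
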
    • x86: update populate_extra_pte() and add populate_extra_pmd() · 458a3e64
      Committed by Tejun Heo
      Impact: minor change to populate_extra_pte() and addition of pmd flavor
      
      Update populate_extra_pte() to return a pointer to the pte_t for the
      specified address, and add populate_extra_pmd(), which populates only
      down to the pmd and returns a pointer to the pmd entry for the
      address.
      
      For 64bit, pud/pmd/pte fill functions are separated out from
      set_pte_vaddr[_pud]() and used for set_pte_vaddr[_pud]() and
      populate_extra_{pte|pmd}().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      458a3e64
  6. 20 Feb 2009, 1 commit
    • x86: convert to the new dynamic percpu allocator · 11124411
      Committed by Tejun Heo
      Impact: use new dynamic allocator, unified access to static/dynamic
              percpu memory
      
      Convert to the new dynamic percpu allocator (an address-translation
      sketch follows below).
      
      * implement populate_extra_pte() for both 32 and 64
      * update setup_per_cpu_areas() to use pcpu_setup_static()
      * define __addr_to_pcpu_ptr() and __pcpu_ptr_to_addr()
      * define config HAVE_DYNAMIC_PER_CPU_AREA
      Signed-off-by: Tejun Heo <tj@kernel.org>
      11124411
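
      A model of the address <-> percpu-pointer translation that
      __addr_to_pcpu_ptr() and __pcpu_ptr_to_addr() perform, using plain
      integers instead of the kernel's symbols.  The addresses are invented;
      only the offset arithmetic is the point.

        #include <stdio.h>

        int main(void)
        {
            unsigned long long per_cpu_start = 0x0;  /* static template start */
            unsigned long long pcpu_base = 0xffffe20000000000ULL; /* example chunk 0 base */

            /* a kernel virtual address inside cpu0's unit of the first chunk */
            unsigned long long addr = pcpu_base + 0x1234;

            /* the percpu "pointer" form is relative to the static template, so
             * the usual per_cpu_offset arithmetic works for dynamic allocations */
            unsigned long long pcpu_ptr = addr - pcpu_base + per_cpu_start;
            unsigned long long back     = pcpu_ptr - per_cpu_start + pcpu_base;

            printf("ptr=%#llx, round-trip ok=%d\n", pcpu_ptr, back == addr);
            return 0;
        }
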
  7. 10 Feb 2009, 1 commit
    • x86: implement x86_32 stack protector · 60a5317f
      Committed by Tejun Heo
      Impact: stack protector for x86_32
      
      Implement stack protector for x86_32.  GDT entry 28 is used for it.
      It's set to point to stack_canary-20 and has a length of 24 bytes (see
      the layout sketch below).  CONFIG_CC_STACKPROTECTOR turns off
      CONFIG_X86_32_LAZY_GS and sets %gs to the stack canary segment on
      entry.  As %gs is otherwise unused by the kernel, the canary can be
      anywhere.  It's defined as a percpu variable.
      
      x86_32 exception handlers take the register frame on the stack
      directly as struct pt_regs.  With -fstack-protector turned on, gcc
      copies the whole structure after the stack canary and (of course)
      doesn't copy it back on return, thus losing all changes.  For now,
      -fno-stack-protector is added to all files which contain those
      functions.  We definitely need something better.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      60a5317f
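
      A layout sketch of why the segment is "stack_canary minus 20, 24 bytes
      long": gcc's i386 stack protector fetches the canary from offset 20
      within the segment, so 20 bytes of padding precede a 4-byte canary
      word.  The struct mirrors the idea, not necessarily the exact kernel
      definition (unsigned int stands in for x86_32's 4-byte unsigned long).

        #include <assert.h>
        #include <stddef.h>
        #include <stdio.h>

        struct stack_canary_seg {
            char         pad[20];   /* offsets 0..19: unused filler       */
            unsigned int canary;    /* offset 20: what %gs:20 resolves to */
        };

        int main(void)
        {
            assert(offsetof(struct stack_canary_seg, canary) == 20);
            printf("segment length needed: %zu bytes\n",
                   offsetof(struct stack_canary_seg, canary) + sizeof(unsigned int));
            return 0;
        }
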
  8. 31 Jan 2009, 1 commit
  9. 27 Jan 2009, 13 commits
  10. 20 Jan 2009, 1 commit
    • x86: move stack_canary into irq_stack · 947e76cd
      Committed by Brian Gerst
      Impact: x86_64 percpu area layout change, irq_stack now at the beginning
      
      Now that the PDA is empty except for the stack canary, it can be
      removed.  The irq stack is moved to the start of the per-cpu section.
      If the stack protector is enabled, the canary overlaps the bottom 48
      bytes of the irq stack (see the union sketch below).
      
      tj: * updated subject
          * dropped asm relocation of irq_stack_ptr
          * updated comments a bit
          * rebased on top of stack canary changes
      Signed-off-by: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      947e76cd
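
      A sketch of the overlap, assuming the union looks roughly like the
      irq_stack_union of that era (field names are illustrative).  The
      canary has to end up at %gs:40, so 40 bytes of padding plus the 8-byte
      canary occupy the bottom 48 bytes of the irq stack.

        #include <assert.h>
        #include <stddef.h>
        #include <stdio.h>

        #define MODEL_IRQ_STACK_SIZE (16 * 1024)

        union model_irq_stack_union {
            char irq_stack[MODEL_IRQ_STACK_SIZE];
            struct {
                char               gs_base[40];    /* unused; keeps the offset */
                unsigned long long stack_canary;   /* lives at %gs:40          */
            };
        };

        int main(void)
        {
            assert(offsetof(union model_irq_stack_union, stack_canary) == 40);
            printf("canary occupies bytes 40..47 of the %d-byte irq stack\n",
                   MODEL_IRQ_STACK_SIZE);
            return 0;
        }
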
  11. 19 Jan 2009, 1 commit
    • x86: fix section mismatch warnings in kernel/setup_percpu.c · c7f8562a
      Committed by Leonardo Potenza
      The function setup_cpu_local_masks() has been marked __init in order
      to remove the following section mismatch messages (an annotated sketch
      of the fix follows the warnings):
      
      WARNING: vmlinux.o(.text+0x3c2c7): Section mismatch in reference from the function setup_cpu_local_masks() to the function .init.text:alloc_bootmem_cpumask_var()
      The function setup_cpu_local_masks() references
      the function __init alloc_bootmem_cpumask_var().
      This is often because setup_cpu_local_masks lacks a __init
      annotation or the annotation of alloc_bootmem_cpumask_var is wrong.
      
      WARNING: vmlinux.o(.text+0x3c2d3): Section mismatch in reference from the function setup_cpu_local_masks() to the function .init.text:alloc_bootmem_cpumask_var()
      The function setup_cpu_local_masks() references
      the function __init alloc_bootmem_cpumask_var().
      This is often because setup_cpu_local_masks lacks a __init
      annotation or the annotation of alloc_bootmem_cpumask_var is wrong.
      
      WARNING: vmlinux.o(.text+0x3c2df): Section mismatch in reference from the function setup_cpu_local_masks() to the function .init.text:alloc_bootmem_cpumask_var()
      The function setup_cpu_local_masks() references
      the function __init alloc_bootmem_cpumask_var().
      This is often because setup_cpu_local_masks lacks a __init
      annotation or the annotation of alloc_bootmem_cpumask_var is wrong.
      
      WARNING: vmlinux.o(.text+0x3c2eb): Section mismatch in reference from the function setup_cpu_local_masks() to the function .init.text:alloc_bootmem_cpumask_var()
      The function setup_cpu_local_masks() references
      the function __init alloc_bootmem_cpumask_var().
      This is often because setup_cpu_local_masks lacks a __init
      annotation or the annotation of alloc_bootmem_cpumask_var is wrong.
      Signed-off-by: Leonardo Potenza <lpotenza@inwind.it>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c7f8562a
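
      The fix itself is a one-word annotation.  A sketch of the annotated
      function follows, assuming it allocates the four CPU masks the
      warnings point at (kernel-context code, not a standalone program):

        /* Sketch: marking the function __init makes its calls to the __init
         * alloc_bootmem_cpumask_var() legitimate.  The mask names are
         * assumptions about what the function allocates. */
        void __init setup_cpu_local_masks(void)
        {
            alloc_bootmem_cpumask_var(&cpu_initialized_mask);
            alloc_bootmem_cpumask_var(&cpu_callin_mask);
            alloc_bootmem_cpumask_var(&cpu_callout_mask);
            alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
        }
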
  12. 18 Jan 2009, 3 commits
  13. 16 Jan 2009, 5 commits
    • x86: fix build bug introduced during merge · a338af2c
      Committed by Tejun Heo
      EXPORT_PER_CPU_SYMBOL() got misplaced during a merge, leading to a
      build failure.  Fix it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      a338af2c
    • x86: make pda a percpu variable · b12d8db8
      Committed by Tejun Heo
      [ Based on original patch from Christoph Lameter and Mike Travis. ]
      
      As the pda is now allocated in the percpu area, it can easily be made
      a proper percpu variable.  Make it so by defining the per-cpu symbol
      from the linker script and declaring it in C code for SMP, and by
      simply defining it for UP.  This change cleans up the code and brings
      SMP and UP a bit closer.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b12d8db8
    • x86: merge 64 and 32 SMP percpu handling · 9939ddaf
      Committed by Tejun Heo
      Now that the pda is allocated as part of percpu, percpu doesn't need
      to be accessed through the pda.  Unify x86_64 SMP percpu access with
      the x86_32 SMP one.  Other than the segment register, operand size and
      the base of the percpu symbols, they now behave identically (a sketch
      of the addressing follows below).
      
      This patch replaces the now unnecessary pda->data_offset with a dummy
      field, which is needed to keep stack_canary in place.  This patch also
      moves the per_cpu_offset initialization out of init_gdt() into
      setup_per_cpu_areas().  Note that this change also necessitates
      explicit per_cpu_offset initializations in voyager_smp.c.
      
      With this change, x86_OP_percpu()'s are as efficient on x86_64 as on
      x86_32, and x86_64 can also use the assembly PER_CPU macros.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9939ddaf
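
      A userspace model of the unified percpu addressing: a symbol's address
      for a given cpu is "symbol offset + per_cpu_offset[cpu]", and the
      running cpu reaches its own copy through a segment register holding
      that offset (%fs on x86_32, %gs on x86_64).  The values below are
      invented for the illustration.

        #include <stdio.h>

        #define MODEL_NR_CPUS 4
        #define MODEL_UNIT_SZ 0x10000UL

        static unsigned long per_cpu_offset_model[MODEL_NR_CPUS];

        int main(void)
        {
            unsigned long chunk_base = 0x100000UL;  /* pretend first-chunk base     */
            unsigned long sym        = 0x40UL;      /* offset of some percpu symbol */

            for (int cpu = 0; cpu < MODEL_NR_CPUS; cpu++) {
                /* what setup_per_cpu_areas() computes for each cpu */
                per_cpu_offset_model[cpu] = chunk_base + cpu * MODEL_UNIT_SZ;
                /* per_cpu(sym, cpu): the same arithmetic on 32 and 64 bit; only
                 * the segment register holding the offset differs */
                printf("cpu%d copy of sym at %#lx\n",
                       cpu, sym + per_cpu_offset_model[cpu]);
            }
            return 0;
        }
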
    • x86: fold pda into percpu area on SMP · 1a51e3a0
      Committed by Tejun Heo
      [ Based on original patch from Christoph Lameter and Mike Travis. ]
      
      Currently pdas and percpu areas are allocated separately.  %gs points
      to the local pda and the percpu area can be reached using
      pda->data_offset.  This patch folds the pda into the percpu area.
      
      Due to a strange gcc requirement, the pda needs to be at the beginning
      of the percpu area so that pda->stack_canary is at %gs:40.  To achieve
      this, a new percpu output section macro - PERCPU_VADDR_PREALLOC() - is
      added and used to reserve a pda-sized chunk at the start of the percpu
      area.
      
      After this change, for the boot cpu, %gs first points to the pda in
      the data.init area and later, during setup_per_cpu_areas(), gets
      updated to point to the actual pda.  This means that
      setup_per_cpu_areas() needs to reload %gs for CPU0 while clearing the
      pda area only for the other cpus, as cpu0 has already modified it by
      the time control reaches setup_per_cpu_areas().
      
      This patch also removes the now unnecessary get_local_pda() and its
      call sites.
      
      A lot of this patch is taken from Mike Travis' "x86_64: Fold pda into
      per cpu area" patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1a51e3a0
    • x86: use static _cpu_pda array · c8f3329a
      Committed by Tejun Heo
      The _cpu_pda array first uses statically allocated storage in
      data.init and then switches to bootmem-allocated storage to conserve
      space.  However, after folding the pda area into the percpu area, the
      _cpu_pda array will be removed completely.  Drop the reallocation part
      to simplify the code for the soon-to-follow changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c8f3329a