1. 10 March 2009, 1 commit
    • percpu: more flexibility for @dyn_size of pcpu_setup_first_chunk() · 6074d5b0
      Committed by Tejun Heo
      Impact: cleanup, more flexibility for first chunk init
      
      Non-negative @dyn_size used to be allowed iff @unit_size wasn't auto.
      This restriction stemmed from an implementation detail and made things
      a bit less intuitive.  This patch allows @dyn_size to be specified
      regardless of @unit_size and swaps the positions of @dyn_size and
      @unit_size so that the parameter order makes more sense (static,
      reserved and dynamic sizes followed by the enclosing unit_size).
      
      While at it, add a @unit_size >= PCPU_MIN_UNIT_SIZE sanity check.  A
      sketch of a call with the reordered arguments follows this entry.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      6074d5b0
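
      A minimal sketch of a caller with the reordered arguments.  Only the
      ordering (static, reserved and dynamic sizes, then unit size) and the
      -1-means-auto convention come from the commit message; the helper name
      example_get_page(), the NULL base address and PTE hook, and the
      surrounding function are illustrative assumptions, not code from the
      tree.

      /* illustrative caller -- argument order after this patch */
      static void __init example_setup_per_cpu_areas(void)
      {
              size_t static_size = __per_cpu_end - __per_cpu_start;

              pcpu_setup_first_chunk(example_get_page,      /* page lookup callback (hypothetical) */
                                     static_size,           /* core static percpu area */
                                     PERCPU_MODULE_RESERVE, /* reserved area for modules */
                                     -1,                    /* @dyn_size: auto */
                                     -1,                    /* @unit_size: auto, >= PCPU_MIN_UNIT_SIZE */
                                     NULL,                  /* no preallocated base address */
                                     NULL);                 /* no PTE populate hook */
      }
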
  2. 06 March 2009, 5 commits
    • x86, percpu: setup reserved percpu area for x86_64 · 6b19b0c2
      Committed by Tejun Heo
      Impact: fix relocation overflow during module load
      
      x86_64 uses 32-bit relocations for symbol access, so static percpu
      symbols, whether in the core kernel or in modules, must be within 2GB
      of the percpu segment base, which the dynamic percpu allocator doesn't
      guarantee.  This patch makes x86_64 reserve PERCPU_MODULE_RESERVE
      bytes in the first chunk so that module percpu areas are always
      allocated from the first chunk, which is always inside the relocatable
      range.
      
      This problem exists for any percpu allocator but is easily triggered
      when using the embedding allocator because the second chunk is located
      beyond 2GB on it.
      
      This patch also changes the meaning of PERCPU_DYNAMIC_RESERVE so that
      it only indicates the size of the area to reserve for dynamic
      allocation, as the static and dynamic areas can now be separate.  The
      new PERCPU_DYNAMIC_RESERVE is increased by 4k for both 32 and 64 bits,
      as the reserved area separation eats away some allocatable space and
      having slightly more headroom (currently between 4 and 8k after a
      minimal boot sans module area) makes sense for common-case
      performance.
      
      x86_32 can address any percpu symbol from anywhere and doesn't need a
      reserved area.  A sketch of the arch-side knob follows this entry.
      
      Mike Galbraith reported the problem first and bisected it to the
      embedding percpu allocator commit.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Mike Galbraith <efault@gmx.de>
      Reported-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
      6b19b0c2
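
      The arch-side change boils down to a single reserve-size knob; a rough
      reconstruction follows (not the verbatim diff, and the macro name is a
      guess at the x86 helper, so treat it as illustrative):

      /* arch/x86/kernel/setup_percpu.c -- shape of the change, not verbatim */
      #ifdef CONFIG_X86_64
      /*
       * Module percpu symbols are reached through 32-bit (signed, +/-2GB)
       * relocations, so their areas must come from the first chunk, right
       * next to the static percpu area.
       */
      #define PERCPU_FIRST_CHUNK_RESERVE      PERCPU_MODULE_RESERVE
      #else
      /* x86_32 can reach percpu symbols from anywhere; nothing to reserve. */
      #define PERCPU_FIRST_CHUNK_RESERVE      0
      #endif
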
    • percpu, module: implement reserved allocation and use it for module percpu variables · edcb4639
      Committed by Tejun Heo
      Impact: add reserved allocation functionality and use it for module
      	percpu variables
      
      This patch implements reserved allocation from the first chunk.  When
      setting up the first chunk, the arch can ask to set aside a certain
      number of bytes right after the core static area which are available
      only through a separate reserved allocator.  This will be used
      primarily for module static percpu variables on architectures with a
      limited relocation range, to ensure that module percpu symbols are
      inside the relocatable range.
      
      If a reserved area is requested, the first chunk becomes reserved and
      isn't available for regular allocation.  If the first chunk also
      includes a piggy-back dynamic allocation area, a separate chunk
      mapping the same region is created to serve dynamic allocation.  The
      first one is called the static first chunk and the second the dynamic
      first chunk.  Although they share the page map, their different area
      map initializations guarantee that they serve disjoint areas according
      to their purposes.
      
      If the arch doesn't set up a reserved area, reserved allocation is
      handled like any other allocation.  A sketch of how the module loader
      uses the reserved area follows this entry.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      edcb4639
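
      On the consumer side, the module loader hands a module's static percpu
      area to the new reserved allocator.  The function below is a trimmed
      sketch of that path (naming and error handling simplified), assuming
      the __alloc_reserved_percpu(size, align) interface this patch adds:

      static void *example_percpu_modalloc(unsigned long size,
                                           unsigned long align,
                                           const char *name)
      {
              void *ptr;

              /* alignment beyond a page can't be honoured; cap it */
              if (align > PAGE_SIZE) {
                      printk(KERN_WARNING "%s: per-cpu alignment %lu > %lu\n",
                             name, align, PAGE_SIZE);
                      align = PAGE_SIZE;
              }

              /* module static percpu variables come from the reserved area */
              ptr = __alloc_reserved_percpu(size, align);
              if (!ptr)
                      printk(KERN_WARNING
                             "Could not allocate %lu bytes percpu data\n", size);
              return ptr;
      }
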
    • percpu: use negative for auto for pcpu_setup_first_chunk() arguments · cafe8816
      Committed by Tejun Heo
      Impact: argument semantic cleanup
      
      In pcpu_setup_first_chunk(), zero @unit_size and @dyn_size meant
      auto-sizing.  That's okay for @unit_size, as a unit size of 0 doesn't
      make sense, but a dynamic reserve size of 0 is valid.  Also, if an
      arch calculates @dyn_size from other parameters, it might end up
      passing in a @dyn_size of 0 and malfunction when the size is then
      automatically adjusted.
      
      This patch makes both @unit_size and @dyn_size ssize_t and uses -1 for
      auto-sizing, as illustrated in the snippet after this entry.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      cafe8816
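
      A small illustration of the new convention: 0 is an ordinary size and
      only -1 requests auto-sizing.  The helper below is hypothetical arch
      code of the kind the commit message warns about, where a computed
      @dyn_size may legitimately end up 0:

      static ssize_t __init example_dyn_size(size_t chunk_size,
                                             size_t static_size,
                                             size_t reserved_size)
      {
              /* with the old "0 == auto" rule this 0 would have been resized */
              if (chunk_size <= static_size + reserved_size)
                      return 0;               /* valid: no dynamic area left */

              return chunk_size - static_size - reserved_size;
      }
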
    • percpu: cosmetic renames in pcpu_setup_first_chunk() · 2441d15c
      Committed by Tejun Heo
      Impact: cosmetic, preparation for future changes
      
      Make the following renames in pcpu_setup_first_chunk() in preparation
      for future changes.
      
      * s/free_size/dyn_size/
      * s/static_vm/first_vm/
      * s/static_chunk/schunk/
      Signed-off-by: Tejun Heo <tj@kernel.org>
      2441d15c
    • percpu: clean up percpu constants · 6a242909
      Committed by Tejun Heo
      Impact: cleanup
      
      Make the following cleanups.
      
      * There isn't much arch-specific about PERCPU_MODULE_RESERVE.  Always
        define it whether the arch overrides PERCPU_ENOUGH_ROOM or not.
      
      * blackfin overrides PERCPU_ENOUGH_ROOM to align the static area
        size.  Do it by default instead.
      
      * percpu allocation sizes don't have much to do with the page size.
        Don't use PAGE_SHIFT in their definition.  (The resulting default
        definitions are sketched after this entry.)
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Bryan Wu <cooloney@kernel.org>
      6a242909
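
      Roughly, the cleaned-up defaults look like the following.  The shape
      (page-aligned static area plus a module reserve) follows the commit
      message, but the exact values are quoted from memory of
      include/linux/percpu.h and should be treated as approximate:

      /* always defined, independent of any PERCPU_ENOUGH_ROOM override */
      #ifdef CONFIG_MODULES
      #define PERCPU_MODULE_RESERVE           (8 << 10)
      #else
      #define PERCPU_MODULE_RESERVE           0
      #endif

      /* default room: page-aligned static area plus the module reserve */
      #ifndef PERCPU_ENOUGH_ROOM
      #define PERCPU_ENOUGH_ROOM                                        \
              (ALIGN(__per_cpu_end - __per_cpu_start, PAGE_SIZE) +      \
               PERCPU_MODULE_RESERVE)
      #endif
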
  3. 26 February 2009, 1 commit
  4. 25 February 2009, 1 commit
    • alloc_percpu: fix UP build · d2b02615
      Committed by Ingo Molnar
      Impact: build fix
      
      the !SMP branch had a 'gfp' leftover:
      
       include/linux/percpu.h: In function '__alloc_percpu':
       include/linux/percpu.h:160: error: 'gfp' undeclared (first use in this function)
       include/linux/percpu.h:160: error: (Each undeclared identifier is reported only once
       include/linux/percpu.h:160: error: for each function it appears in.)
      
      Use GFP_KERNEL like the SMP version does; the resulting !SMP stub is
      sketched after this entry.
      
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d2b02615
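
      After the fix, the !SMP stub looks roughly like this (reconstructed,
      not a verbatim copy of include/linux/percpu.h):

      static inline void *__alloc_percpu(size_t size, size_t align)
      {
              /*
               * kmalloc can't guarantee alignments larger than this; big
               * alignments only matter for module percpu sections on SMP,
               * which never reach this UP path.
               */
              WARN_ON_ONCE(align > SMP_CACHE_BYTES);
              return kzalloc(size, GFP_KERNEL);   /* was: kzalloc(size, gfp) */
      }
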
  5. 24 February 2009, 1 commit
    • percpu: give more latitude to arch specific first chunk initialization · 8d408b4b
      Committed by Tejun Heo
      Impact: more latitude for first percpu chunk allocation
      
      The first percpu chunk serves the kernel static percpu area and may or
      may not contain extra room for further dynamic allocation.
      Initialization of the first chunk needs to be done before normal
      memory allocation service is up, so it has its own init path -
      pcpu_setup_static().
      
      It seems archs need more latitude while initializing the first chunk,
      for example to take advantage of large page mappings.  This patch
      makes the following changes to allow this.
      
      * Define PERCPU_DYNAMIC_RESERVE to give the arch a hint about how much
        space to reserve in the first chunk for further dynamic allocation.
      
      * Rename pcpu_setup_static() to pcpu_setup_first_chunk().
      
      * Make pcpu_setup_first_chunk() much more flexible by fetching page
        pointers through a callback and adding optional @unit_size,
        @free_size and @base_addr arguments, which allow archs to take over
        selected parts of chunk initialization to their liking (the callback
        idea is sketched after this entry).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      8d408b4b
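
      The page-fetch callback idea can be sketched as follows.  The
      (cpu, pageno) callback style follows the description above, but every
      name other than pcpu_setup_first_chunk() is illustrative and the exact
      prototype may differ from the tree:

      /* arch boot code owns the backing pages, however it obtained them */
      static struct page **example_pages;     /* nr_cpus * pages_per_unit entries */
      static int example_pages_per_unit;

      /*
       * Called back by pcpu_setup_first_chunk() for every (cpu, pageno)
       * slot; returning NULL would mark the slot as not populated.
       */
      static struct page * __init example_get_page(unsigned int cpu, int pageno)
      {
              return example_pages[cpu * example_pages_per_unit + pageno];
      }
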
  6. 20 February 2009, 4 commits
    • percpu: implement new dynamic percpu allocator · fbf59bc9
      Committed by Tejun Heo
      Impact: new scalable dynamic percpu allocator which allows dynamic
              percpu areas to be accessed the same way as static ones
      
      Implement a scalable dynamic percpu allocator which can be used for
      both static and dynamic percpu areas.  This will allow static and
      dynamic areas to share faster direct access methods.  This feature is
      optional and enabled only when CONFIG_HAVE_DYNAMIC_PER_CPU_AREA is
      defined by the arch.  Please read the comment at the top of
      mm/percpu.c for details.  A minimal usage sketch follows this entry.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      fbf59bc9
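
      From a user's point of view, the win is that dynamically allocated
      percpu memory is handled with the same accessors as static percpu
      data.  A minimal, hypothetical user:

      struct example_stats {
              unsigned long packets;
              unsigned long bytes;
      };

      static struct example_stats *stats;     /* dynamically allocated percpu data */

      static int __init example_init(void)
      {
              int cpu;

              stats = alloc_percpu(struct example_stats);
              if (!stats)
                      return -ENOMEM;

              /* same per_cpu_ptr() accessor as for static percpu variables */
              for_each_possible_cpu(cpu) {
                      struct example_stats *s = per_cpu_ptr(stats, cpu);

                      s->packets = 0;
                      s->bytes = 0;
              }
              return 0;
      }

      static void __exit example_exit(void)
      {
              free_percpu(stats);
      }
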
    • percpu: kill percpu_alloc() and friends · f2a8205c
      Committed by Tejun Heo
      Impact: kill unused functions
      
      percpu_alloc() and its friends never saw much action.  It was supposed
      to replace the cpu-mask unaware __alloc_percpu() but it never happened
      and in fact __percpu_alloc_mask() itself never really grew a proper
      up/down handling interface either (no exported interface for
      populate/depopulate).
      
      percpu allocation is about to go through major reimplementation and
      there's no reason to carry this unused interface around.  Replace it
      with __alloc_percpu() and free_percpu().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      f2a8205c
    • alloc_percpu: add align argument to __alloc_percpu. · 313e458f
      Committed by Rusty Russell
      This prepares for a real __alloc_percpu by adding an alignment
      argument.  Only one place uses __alloc_percpu directly, and that's for
      a string.  A call with the new argument is sketched after this entry.
      
      tj: af_inet also uses __alloc_percpu(), update it.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      313e458f
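
      With the extra argument, callers spell out the alignment explicitly.
      A hypothetical example:

      struct example_counter {
              unsigned long hits;
      };

      static struct example_counter *counters;

      static int example_counters_init(void)
      {
              /* size and alignment are now passed separately */
              counters = __alloc_percpu(sizeof(struct example_counter),
                                        __alignof__(struct example_counter));
              return counters ? 0 : -ENOMEM;
      }
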
    • alloc_percpu: change percpu_ptr to per_cpu_ptr · b36128c8
      Committed by Rusty Russell
      Impact: cleanup
      
      There are two allocated per-cpu accessor macros with almost identical
      spelling.  The original and far more popular is per_cpu_ptr (44
      files), so change over the other 4 files.
      
      tj: kill percpu_ptr() and update UP too
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: mingo@redhat.com
      Cc: lenb@kernel.org
      Cc: cpufreq@vger.kernel.org
      Signed-off-by: Tejun Heo <tj@kernel.org>
      b36128c8
  7. 09 February 2009, 1 commit
  8. 20 January 2009, 1 commit
  9. 27 July 2008, 1 commit
  10. 25 May 2008, 1 commit
    • percpu: introduce DEFINE_PER_CPU_PAGE_ALIGNED() macro · 63cc8c75
      Committed by Eric Dumazet
      While examining holes in the percpu section I found this:
      
      c05f5000 D per_cpu__current_task
      c05f5000 D __per_cpu_start
      c05f5004 D per_cpu__cpu_number
      c05f5008 D per_cpu__irq_regs
      c05f500c d per_cpu__cpu_devices
      c05f5040 D per_cpu__cyc2ns
      
      <Big Hole of about 4000 bytes>
      
      c05f6000 d per_cpu__cpuid4_info
      c05f6004 d per_cpu__cache_kobject
      c05f6008 d per_cpu__index_kobject
      
      <Big Hole of about 4000 bytes>
      
      c05f7000 D per_cpu__gdt_page
      
      This is because gdt_page is a percpu variable defined with page
      alignment, and the linker is doing its job, twice, because of .o
      nesting in the build process.
      
      I introduced a new macro DEFINE_PER_CPU_PAGE_ALIGNED() to avoid
      wasting this space.  All page-aligned variables (only one at this
      time) are put in a separate subsection, .data.percpu.page_aligned, at
      the very beginning of the percpu zone (usage is sketched after this
      entry).
      
      Before the patch, on an x86_32 machine:
      
      .data.percpu                30232   3227471872
      
      After the patch:
      
      .data.percpu                22168   3227471872
      
      That's 8064 bytes saved for each CPU.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      63cc8c75
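
      Usage sketch (the struct is hypothetical): page-aligned percpu
      variables go through the new macro so the linker groups them at the
      start of the percpu section, while everything else keeps using
      DEFINE_PER_CPU():

      /* a deliberately page-sized, page-aligned percpu object */
      struct example_page_buf {
              char buf[PAGE_SIZE];
      };

      DEFINE_PER_CPU_PAGE_ALIGNED(struct example_page_buf, example_buf);

      /* ordinary percpu variables are unaffected */
      DEFINE_PER_CPU(unsigned long, example_count);
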
  11. 15 May 2008, 1 commit
  12. 29 April 2008, 1 commit
  13. 07 February 2008, 1 commit
  14. 31 January 2008, 1 commit
  15. 30 January 2008, 1 commit
  16. 17 July 2007, 1 commit
  17. 03 May 2007, 1 commit
  18. 06 October 2006, 1 commit
  19. 30 September 2006, 1 commit
  20. 26 September 2006, 2 commits
  21. 09 January 2006, 2 commits
  22. 14 November 2005, 1 commit
  23. 17 April 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4