1. 19 May 2008, 1 commit
  2. 13 May 2008, 2 commits
  3. 29 April 2008, 2 commits
  4. 24 April 2008, 1 commit
  5. 15 April 2008, 3 commits
    • [LMB] Restructure allocation loops to avoid unsigned underflow · d9024df0
      Committed by Paul Mackerras
      There is a potential bug in __lmb_alloc_base where we subtract `size'
      from the base address of a reserved region without checking whether
      the subtraction could wrap around and produce a very large unsigned
      value.  In fact it probably isn't possible to hit the bug in practice
      since it would only occur in the situation where we can't satisfy the
      allocation request and there is a reserved region starting at 0.
      
      This fixes the potential bug by breaking out of the loop when we get
      to the point where the base of the reserved region is less than the
      size requested.  This also restructures the loop to be a bit easier to
      follow.
      
      The same logic got copied into lmb_alloc_nid_unreserved, so this makes
      a similar change there.  Here the bug is more likely to be hit because
      the outer loop  (in lmb_alloc_nid) goes through the memory regions in
      increasing order rather than decreasing order as __lmb_alloc_base
      does, and we are therefore more likely to hit the case where we are
      testing against a reserved region with a base address of 0.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
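      A minimal sketch of the restructured inner loop, in plain C with
      simplified region structures (find_base, align_down and struct region
      are illustrative names, not the lib/lmb.c code itself):

          #include <stdint.h>
          #include <stddef.h>

          struct region { uint64_t base, size; };

          /* Round addr down to align, which must be a power of two. */
          static uint64_t align_down(uint64_t addr, uint64_t align)
          {
                  return addr & ~(align - 1);
          }

          /*
           * Walk candidate bases downward past the reserved regions.  The
           * key change: once a reserved region's base is smaller than the
           * requested size, give up instead of computing "base - size",
           * which would wrap around in unsigned arithmetic.
           */
          static uint64_t find_base(uint64_t base, uint64_t size, uint64_t align,
                                    const struct region *resv, size_t nresv)
          {
                  while (base) {
                          size_t j;

                          for (j = 0; j < nresv; j++)
                                  if (base < resv[j].base + resv[j].size &&
                                      resv[j].base < base + size)
                                          break;
                          if (j == nresv)
                                  return base;    /* no overlap: take this base */
                          if (resv[j].base < size)
                                  break;          /* subtraction would underflow */
                          base = align_down(resv[j].base - size, align);
                  }
                  return 0;                       /* request cannot be satisfied */
          }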
    • [LMB] Fix some whitespace and other formatting issues, use pr_debug · 300613e5
      Committed by Paul Mackerras
      This makes no semantic changes.  It fixes the whitespace and formatting
      a bit, gets rid of a local DBG macro and uses the equivalent pr_debug
      instead, and restructures one while loop that had a function call and
      assignment in the condition to be a bit more readable.  Some comments
      about functions being called with relocation disabled were also removed
      as they would just be confusing to most readers now that the code is
      in lib/.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
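      The DBG-to-pr_debug part of the cleanup has roughly this shape (a
      hypothetical before/after; the DBG definition shown is the usual
      arch-code pattern, not the literal lines removed):

          /* before: a file-local macro, compiled out unless DEBUG is set */
          #ifdef DEBUG
          #define DBG(fmt...) udbg_printf(fmt)
          #else
          #define DBG(fmt...)
          #endif

                  DBG("lmb_dump_all: memory.cnt = 0x%lx\n", lmb.memory.cnt);

          /* after: the generic kernel helper, no private macro needed */
                  pr_debug("lmb_dump_all: memory.cnt = 0x%lx\n", lmb.memory.cnt);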
    • [LMB] Add lmb_alloc_nid() · c50f68c8
      Committed by David S. Miller
      A variant of lmb_alloc() that tries to allocate memory on a specified
      NUMA node 'nid' but falls back to normal lmb_alloc() if that fails.
      
      The caller provides a 'nid_range' function pointer which assists the
      allocator.  It is given args 'start', 'end', and pointer to integer
      'this_nid'.
      
      It places at 'this_nid' the NUMA node id that corresponds to 'start',
      and returns the end address within 'start' to 'end' at which memory
      associated with 'nid' ends.
      
      This callback allows a platform to use lmb_alloc_nid() in just
      about any context, even ones in which early_pfn_to_nid() might
      not be working yet.
      
      This function will be used by the NUMA setup code on sparc64, and also
      it can be used by powerpc, replacing its hand-crafted
      "careful_allocation()" function in arch/powerpc/mm/numa.c
      
      If x86 ever converts its NUMA support over to using the LMB helpers,
      it can use this too as it has something entirely similar.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
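      A sketch of the callback contract in plain C (nid_range_fn and
      toy_nid_range are illustrative names, and the 4 GiB node split is
      invented for the example):

          #include <stdint.h>

          /*
           * For the range [start, end): store the NUMA node id of 'start'
           * in *this_nid and return the address at which that node's
           * memory ends within the range.
           */
          typedef uint64_t (*nid_range_fn)(uint64_t start, uint64_t end,
                                           int *this_nid);

          /* Toy helper for a machine whose two nodes split at 4 GiB. */
          static uint64_t toy_nid_range(uint64_t start, uint64_t end,
                                        int *this_nid)
          {
                  const uint64_t split = 4ULL << 30;

                  *this_nid = (start < split) ? 0 : 1;
                  if (start < split && end > split)
                          return split;   /* node 0's memory ends here */
                  return end;             /* the whole range is on one node */
          }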
  6. 20 February 2008, 1 commit
  7. 14 February 2008, 4 commits
  8. 24 January 2008, 1 commit
    • [POWERPC] Fix handling of memreserve if the range lands in highmem · f98eeb4e
      Committed by Kumar Gala
      There were several issues if a memreserve range existed and happened
      to be in highmem:
      
      * The bootmem allocator is only aware of lowmem so calling
        reserve_bootmem with a highmem address would cause a BUG_ON
      * All highmem pages were provided to the buddy allocator
      
      Added an lmb_is_reserved() API that we now use to determine if a highmem
      page should continue to be PageReserved or provided to the buddy
      allocator.
      
      Also, we incorrectly reported the number of pages reserved since all
      highmem pages are initially marked reserved and we clear the
      PageReserved flag as we "free" up the highmem pages.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
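      The mem_init()-time loop this enables looks roughly like the sketch
      below (kernel-context code in the style of the powerpc mm code of the
      time; highmem_mapnr and the exact call sequence are assumptions, not
      the literal patch):

          /*
           * Hand each highmem page to the buddy allocator unless lmb says
           * its physical address is reserved, in which case the page keeps
           * PageReserved and is never counted as free.
           */
          for (pfn = highmem_mapnr; pfn < max_mapnr; ++pfn) {
                  struct page *page = pfn_to_page(pfn);

                  if (lmb_is_reserved(pfn << PAGE_SHIFT))
                          continue;
                  ClearPageReserved(page);
                  init_page_count(page);
                  __free_page(page);
                  totalhigh_pages++;
          }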
  9. 26 July 2007, 1 commit
  10. 08 March 2007, 1 commit
    • [POWERPC] Allow duplicate lmb_reserve() calls · eb6de286
      Committed by David Gibson
      At present calling lmb_reserve() (and hence lmb_add_region()) twice
      for exactly the same memory region will cause strange behaviour.
      
      This makes life difficult when booting from a flat device tree with
      a memory reserve map.  Which regions are automatically reserved by the
      kernel has changed over time, so it's quite possible a newer kernel
      could attempt to auto-reserve a region which is also explicitly listed
      in the device tree's reserve map, leading to trouble.
      
      This patch avoids the problem by making lmb_reserve() ignore a call to
      reserve a previously reserved region.  It also removes a now redundant
      test designed to avoid one specific case of the problem noted above.
      
      At present, this patch deals only with duplicate reservations of an
      identical region.  Attempting to reserve two different, but
      overlapping regions will still cause problems.  I might post another
      patch later dealing with this case, but I'm avoiding it now since it
      is substantially more complicated to deal with, less likely to occur
      and more likely to indicate a genuine bug elsewhere if it does occur.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
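      A sketch of the duplicate check this adds to the region-add path
      (simplified structures; already_reserved is an illustrative helper,
      whereas the kernel change places an equivalent comparison inside
      lmb_add_region() itself):

          #include <stdint.h>

          struct prop { uint64_t base, size; };
          struct region_list { unsigned long cnt; struct prop region[128]; };

          /*
           * Nonzero if [base, base + size) is already present verbatim, in
           * which case the reserve call can simply report success instead
           * of inserting a second, conflicting copy of the region.
           */
          static int already_reserved(const struct region_list *rgn,
                                      uint64_t base, uint64_t size)
          {
                  unsigned long i;

                  for (i = 0; i < rgn->cnt; i++)
                          if (rgn->region[i].base == base &&
                              rgn->region[i].size == size)
                                  return 1;
                  return 0;
          }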
  11. 25 July 2006, 1 commit
  12. 07 July 2006, 1 commit
  13. 01 July 2006, 1 commit
  14. 19 May 2006, 1 commit
    • [PATCH] powerpc: Unify mem= handling · 2babf5c2
      Committed by Michael Ellerman
      We currently do mem= handling in three separate places. And as benh pointed out
      I wrote two of them. Now that we parse command line parameters earlier we can
      clean this mess up.
      
      Moving the parsing out of prom_init means the device tree might be allocated
      above the memory limit. If that happens we'd have to move it. As it happens
      we already have logic to do that for kdump, so just genericise it.
      
      This also means we might have reserved regions above the memory limit; if we
      do, the bootmem allocator will blow up, so we have to modify
      lmb_enforce_memory_limit() to truncate the reserves as well.
      
      Tested on P5 LPAR, iSeries, F50, 44p. Tested moving device tree on P5 and
      44p and F50.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
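      The extra truncation of reserved regions is roughly the following
      (a sketch of the idea, not the literal lmb_enforce_memory_limit()
      body; types and field names follow the lmb code of that era):

          /* Clamp every reserved region to the limit as well, so the
           * bootmem allocator never sees a reservation above it. */
          for (i = 0; i < lmb.reserved.cnt; i++) {
                  unsigned long base = lmb.reserved.region[i].base;
                  unsigned long *size = &lmb.reserved.region[i].size;

                  if (base >= memory_limit)
                          *size = 0;                     /* entirely above the limit */
                  else if (base + *size > memory_limit)
                          *size = memory_limit - base;   /* straddles the limit */
          }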
  15. 17 March 2006, 1 commit
  16. 07 February 2006, 3 commits
  17. 16 November 2005, 1 commit
  18. 06 October 2005, 1 commit
    • powerpc: Merge lmb.c and make MM initialization use it. · 7c8c6b97
      Committed by Paul Mackerras
      This also creates merged versions of do_init_bootmem, paging_init
      and mem_init and moves them to arch/powerpc/mm/mem.c.  It gets rid
      of the mem_pieces stuff.
      
      I made memory_limit a parameter to lmb_enforce_memory_limit rather
      than a global referenced by that function.  This will require some
      small changes to ppc64 if we want to continue building ARCH=ppc64
      using the merged lmb.c.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  19. 29 August 2005, 4 commits
    • [PATCH] ppc64: Simplify some lmb functions · 71e1f55a
      Committed by Michael Ellerman
      lmb_phys_mem_size() can always return lmb.memory.size, as long as it's called
      after lmb_analyze(), which it is. There's no need to recalculate the size on
      every call.
      
      lmb_analyze() was calculating a few things we then threw away, so just don't
      calculate them to start with.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
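      The simplified accessor is essentially one line (a sketch, grounded
      in the description above):

          unsigned long lmb_phys_mem_size(void)
          {
                  /* lmb_analyze() has already summed the region sizes. */
                  return lmb.memory.size;
          }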
    • [PATCH] ppc64: Remove physbase from the lmb_property struct · 180379dc
      Committed by Michael Ellerman
      We no longer need the lmb code to know about abs and phys addresses, so
      remove the physbase variable from the lmb_property struct.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] ppc64: Remove redundant abs_to_phys() macro · e88bcd1b
      Committed by Michael Ellerman
      abs_to_phys() is a macro that turns out to do nothing, and also has the
      unfortunate property that it's not the inverse of phys_to_abs() on iSeries.
      
      The following is for my benefit as much as everyone else's.
      
      With CONFIG_MSCHUNKS enabled, the lmb code is changed such that it keeps
      a physbase variable for each lmb region. This is used to take the possibly
      discontiguous lmb regions and present them as a contiguous address space
      beginning from zero.
      
      In this context each lmb region's base address is its "absolute" base
      address, and its physbase is its "physical" address (from Linux's point of
      view). The abs_to_phys() macro does the mapping from "absolute" to "physical".
      
      Note: This is not related to the iSeries mapping of physical to absolute
      (ie. Hypervisor) addresses which is maintained with the msChunks structure.
      And the msChunks structure is not controlled via CONFIG_MSCHUNKS.
      
      Once upon a time you could compile for non-iSeries with CONFIG_MSCHUNKS
      enabled. But these days CONFIG_MSCHUNKS depends on CONFIG_PPC_ISERIES, so
      for non-iSeries code abs_to_phys() is a no-op.
      
      On iSeries we always have one lmb region which spans from 0 to
      systemcfg->physicalMemorySize (arch/ppc64/kernel/iSeries_setup.c line 383).
      This region has a base (ie. absolute) address of 0, and a physbase address
      of 0 (as calculated in lmb_analyze() (arch/ppc64/kernel/lmb.c line 144)).
      
      On iSeries, abs_to_phys(aa) is defined as lmb_abs_to_phys(aa), which finds
      the lmb region containing aa (and there's only one, ie. 0), and then does:
      
       return lmb.memory.region[0].physbase + (aa - lmb.memory.region[0].base)
      
      physbase == base == 0, so you're left with "return aa".
      
      So remove abs_to_phys(), and lmb_abs_to_phys() which is the implementation
      of abs_to_phys() for iSeries.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] ppc64: Remove redundant use of pointers in lmb code · a4a0f970
      Committed by Michael Ellerman
      The lmb code is all written to use a pointer to an lmb struct. But it's always
      the same lmb struct, called "lmb". So we take the address of lmb, call it
      _lmb and then start using _lmb->foo everywhere, which is silly.
      
      This patch removes the _lmb pointers and replaces them with direct references
      to the one "lmb" struct. We do the same for some _mem and _rsv pointers which
      point to lmb.memory and lmb.reserved respectively.
      
      This patch looks quite busy, but it's basically just:
      s/_lmb->/lmb./g
      s/_mem->/lmb.memory./g
      s/_rsv->/lmb.reserved./g
      s/_rsv/&lmb.reserved/g
      s/mem->/lmb.memory./g
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
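      The substitutions above amount to changes of this shape (hypothetical
      before/after lines, not the literal diff):

          /* before: take the address of the one global, then indirect */
          struct lmb *_lmb = &lmb;
          struct lmb_region *_mem = &lmb.memory;
          if (_lmb->debug)
                  udbg_printf("memory regions: %lx\n", _mem->cnt);

          /* after: reference the global directly */
          if (lmb.debug)
                  udbg_printf("memory regions: %lx\n", lmb.memory.cnt);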
  20. 17 April 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!