1. 20 Jun, 2009 1 commit
  2. 12 Jun, 2009 2 commits
  3. 01 Mar, 2009 1 commit
    • bootmem, x86: further fixes for arch-specific bootmem wrapping · d0c4f570
      Committed by Tejun Heo
      Impact: fix new breakages introduced by previous fix
      
      Commit c1329375 tried to clean up the
      bootmem arch wrapper, but it wasn't quite correct.  Before the commit,
      the following were broken.
      
      * Low level interface functions prefixed with __ ignored arch
        preference.
      
      * reserve_bootmem(...) can't be mapped into
        reserve_bootmem_node(NODE_DATA(0)->bdata, ...) because the node is
        not a mere preference there: the specified region MUST fall within
        the specified node; otherwise, it will panic.
      
      After the commit,
      
      * If allocation fails for the arch-preferred node, it should fall
        back to whatever is available.  Instead, it simply failed the
        allocation.
      
      There are too many internal details to allow generic wrapping while
      still keeping things simple for archs.  Plus, all an arch wants is a
      way to prefer one node over another.
      
      This patch drops the generic wrapping around alloc_bootmem_core() and
      adds an arch preference hook instead.  If necessary, an arch can
      define a bootmem_arch_preferred_node() macro or function which takes
      all the allocation information and returns the preferred node.  The
      generic bootmem code will always try the preferred node first and
      then fall back to other nodes as usual.
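
      As a hedged sketch of the scheme just described (the wrapper name
      try_arch_preferred_bootmem() and the hook's exact signature are
      assumptions drawn from this description, not verified against the tree):

      #ifdef CONFIG_HAVE_ARCH_BOOTMEM
      /* Supplied by the arch: sees the whole request and returns the
       * preferred node's bootmem data, or NULL for no preference. */
      bootmem_data_t *bootmem_arch_preferred_node(bootmem_data_t *bdata,
                      unsigned long size, unsigned long align,
                      unsigned long goal, unsigned long limit);
      #endif

      static void * __init try_arch_preferred_bootmem(bootmem_data_t *bdata,
                      unsigned long size, unsigned long align,
                      unsigned long goal, unsigned long limit)
      {
      #ifdef CONFIG_HAVE_ARCH_BOOTMEM
              bootmem_data_t *p_bdata;

              /* Try the arch-preferred node first ... */
              p_bdata = bootmem_arch_preferred_node(bdata, size, align,
                                                    goal, limit);
              if (p_bdata)
                      return alloc_bootmem_core(p_bdata, size, align,
                                                goal, limit);
      #endif
              return NULL;    /* ... caller falls back to the usual order */
      }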
      
      Breakages noted and changes reviewed by Johannes Weiner.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      d0c4f570
  4. 24 Feb, 2009 1 commit
    • bootmem: clean up arch-specific bootmem wrapping · c1329375
      Committed by Tejun Heo
      Impact: cleaner and consistent bootmem wrapping
      
      By setting CONFIG_HAVE_ARCH_BOOTMEM_NODE, archs can define
      arch-specific wrappers for bootmem allocation.  However, this is done
      a bit strangely in that only the high-level convenience macros can be
      changed while the lower-level, but still exported, interface functions
      can't be wrapped.  This is not only messy but also leads to the
      strange situation where alloc_bootmem() does what the arch wants it
      to do while the equivalent __alloc_bootmem() call doesn't, although
      the two should be interchangeable.
      
      This patch updates bootmem so that archs can override/wrap the
      backend function, alloc_bootmem_core(), instead of the high-level
      interface functions, which allows simpler and consistent wrapping.
      Also, HAVE_ARCH_BOOTMEM_NODE is renamed to HAVE_ARCH_BOOTMEM.
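
      Roughly, the difference looks like this (a simplified sketch with a
      hypothetical arch_alloc_bootmem(), not the actual kernel code):

      /* Before: only the convenience macro could be redirected ... */
      #ifdef CONFIG_HAVE_ARCH_BOOTMEM_NODE
      #define alloc_bootmem(x)        arch_alloc_bootmem(x)
      #endif
      /* ... so the exported __alloc_bootmem() still bypassed the arch. */

      /* After: the arch wraps the one backend that every variant --
       * alloc_bootmem(), __alloc_bootmem(), the _node/_low flavours --
       * ultimately funnels through. */
      void *alloc_bootmem_core(bootmem_data_t *bdata, unsigned long size,
                               unsigned long align, unsigned long goal,
                               unsigned long limit);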
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Johannes Weiner <hannes@saeurebad.de>
      c1329375
  5. 07 Jan, 2009 1 commit
  6. 17 Oct, 2008 1 commit
  7. 21 Aug, 2008 1 commit
  8. 15 Aug, 2008 1 commit
    • bootmem allocator: alloc_bootmem_core(): page-align the end offset · 627240aa
      Committed by Mikulas Patocka
      This is the minimal sequence that jams the allocator:
      
      void *p, *q, *r;
      p = alloc_bootmem(PAGE_SIZE);
      q = alloc_bootmem(64);
      free_bootmem(p, PAGE_SIZE);
      p = alloc_bootmem(PAGE_SIZE);
      r = alloc_bootmem(64);
      
      after this sequence (assuming that the allocator was empty or page-aligned
      before), pointer "q" will be equal to pointer "r".
      
      What's happening inside the allocator:
      p = alloc_bootmem(PAGE_SIZE);
      in allocator: last_end_off == PAGE_SIZE, bitmap contains bits 10000...
      q = alloc_bootmem(64);
      in allocator: last_end_off == PAGE_SIZE + 64, bitmap contains 11000...
      free_bootmem(p, PAGE_SIZE);
      in allocator: last_end_off == PAGE_SIZE + 64, bitmap contains 01000...
      p = alloc_bootmem(PAGE_SIZE);
      in allocator: last_end_off == PAGE_SIZE, bitmap contains 11000...
      r = alloc_bootmem(64);
      
      and now:
      
      it finds bit 2 as the place to allocate (sidx)
      
      and it hits the condition
      
      if (bdata->last_end_off && PFN_DOWN(bdata->last_end_off) + 1 == sidx)
              start_off = ALIGN(bdata->last_end_off, align);
      
      You can see that the condition is true, so it assigns start_off =
      ALIGN(bdata->last_end_off, align) (that is, PAGE_SIZE) and allocates
      over an already-allocated block.
      
      With the patch, the allocator tries to continue at the end of the
      previous allocation only if that allocation ended in the middle of a
      page.
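
      A sketch of the corrected check, per the description above (the
      page-alignment test is the key addition; the exact code may differ):

      /* Continue packing after the previous allocation only if it ended
       * mid-page; if last_end_off is page-aligned, the bitmap alone must
       * decide, or we would allocate over the freed-and-reused page. */
      if ((bdata->last_end_off & (PAGE_SIZE - 1)) &&
          PFN_DOWN(bdata->last_end_off) + 1 == sidx)
              start_off = ALIGN(bdata->last_end_off, align);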
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Acked-by: Johannes Weiner <hannes@saeurebad.de>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      627240aa
  9. 25 Jul, 2008 20 commits
  10. 22 Jun, 2008 1 commit
  11. 28 Apr, 2008 2 commits
    • memory hotplug: make alloc_bootmem_section() · e70260aa
      Committed by Yasunori Goto
      alloc_bootmem_section() can allocate the specified section's area.
      A later patch uses this to keep the usemap in the same section as its
      pgdat.
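
      A hedged sketch of how such a helper could constrain an allocation to
      a single section (the goal/limit arithmetic here is illustrative;
      section_nr_to_pfn() and PAGES_PER_SECTION are the usual sparsemem
      helpers):

      /* Illustrative: allocate 'size' bytes somewhere inside the physical
       * range covered by memory section 'section_nr'. */
      void * __init alloc_bootmem_section(unsigned long size,
                                          unsigned long section_nr)
      {
              unsigned long pfn = section_nr_to_pfn(section_nr);
              unsigned long goal = PFN_PHYS(pfn);
              unsigned long limit = PFN_PHYS(pfn + PAGES_PER_SECTION) - 1;
              pg_data_t *pgdat = NODE_DATA(early_pfn_to_nid(pfn));

              return alloc_bootmem_core(pgdat->bdata, size, SMP_CACHE_BYTES,
                                        goal, limit);
      }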
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e70260aa
    • memory hotplug: register section/node id to free · 04753278
      Committed by Yasunori Goto
      This patch set frees pages which were allocated by bootmem, for
      memory hot-remove.  Some memory management structures are allocated
      by bootmem, e.g. the memmap.
      
      To remove memory physically, some of them must be freed according to
      circumstance.  This patch set lays the basis for freeing those pages,
      and frees the memmaps.
      
      My basic idea is to use the remaining members of struct page to
      remember information about the user of a bootmem page (the section
      number or node id).  When a section is being removed, the kernel can
      check this information.  It solves several issues:
      
        1) When the memmap of the section being removed was allocated in
           another section by bootmem, it should/can be freed.
        2) When the memmap of the section being removed was allocated in
           the same section, it shouldn't be freed, because the section must
           already have been logically offlined and all its pages isolated
           from the page allocator.  If it were freed, the page allocator
           might hand out pages which will soon be removed physically.
        3) When the section being removed holds another section's memmap,
           the kernel will easily be able to show the user which section
           should be removed first.  (Not implemented yet.)
        4) In case 2) above, page isolation will be able to check for and
           skip the memmap's pages during logical memory offline
           (offline_pages()).  The current page isolation code fails in
           this case because these pages are just reserved pages and it
           can't tell whether they may be removed or not.  With this patch
           it will be able to.  (Not implemented yet.)
        5) Node information such as pgdat has similar issues, which can be
           solved the same way.  (Not implemented yet, but by remembering
           the node id in the pages.)
      
      Fortunately, the current bootmem allocator just keeps the
      PageReserved flag and doesn't use any other members of struct page;
      the users of bootmem don't use them either.
      
      This patch:
      
      This patch registers the node or section id in the pages, so the
      kernel can tell which node/section is using the pages allocated by
      bootmem.  This is the basis for hot-removing sections or nodes.
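
      A hedged sketch of the registration idea (the exact struct page
      fields and the type constants are assumptions drawn from the
      description above):

      /* Illustrative: stash "who owns this bootmem page" in struct page
       * members that bootmem pages otherwise leave unused. */
      enum bootmem_type { SECTION_INFO, NODE_INFO };  /* assumed names */

      static void register_page_bootmem_info(struct page *page,
                              unsigned long info,     /* section nr or node id */
                              enum bootmem_type type)
      {
              page->lru.next = (struct list_head *)type;  /* kind of owner */
              SetPagePrivate(page);
              set_page_private(page, info);               /* owner's id */
      }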
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      04753278
  12. 27 Apr, 2008 3 commits
  13. 25 Mar, 2008 1 commit
    • mm: fix boundary checking in free_bootmem_core · 5a982cbc
      Committed by Yinghai Lu
      With NUMA enabled, some callers could have a range of memory on one
      node but try to free it on another node.  This can cause some pages
      to be freed wrongly.
      
      For example: we may allocate 128G of boot RAM early for gart/swiotlb,
      and free that range later so that gart/swiotlb can get some range
      afterwards.
      
      With this patch, we don't need to care which node holds the range; we
      just loop over all online nodes and call free_bootmem_node() for each.
      
      The patch makes free_bootmem_core() more robust by trimming sidx and
      eidx according to the RAM range that the node actually has, as
      sketched below.
      
      With free_bootmem_core() handling the out-of-range case, we can use
      bdata_list to make sure the range gets freed for sure.  Then, next
      time, we won't need to loop over online nodes and can use
      free_bootmem() directly.
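
      A rough sketch of the trimming (variable roles follow the description
      above; the exact arithmetic in the patch may differ):

      /* Illustrative: clamp the range to the pfns this node really owns
       * before touching the bitmap, instead of trusting the caller. */
      unsigned long node_start = PFN_DOWN(bdata->node_boot_start);
      unsigned long node_end = bdata->node_low_pfn;

      if (end_pfn <= node_start || start_pfn >= node_end)
              return;         /* nothing of ours in this range */

      sidx = max(start_pfn, node_start) - node_start; /* trim the low end */
      eidx = min(end_pfn, node_end) - node_start;     /* trim the high end */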
      Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Tested-by: Ingo Molnar <mingo@elte.hu>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5a982cbc
  14. 08 Feb, 2008 1 commit
    • Introduce flags for reserve_bootmem() · 72a7fe39
      Committed by Bernhard Walle
      This patch set adds a flags argument to reserve_bootmem() and uses
      the BOOTMEM_EXCLUSIVE flag in the crashkernel reservation code to
      detect collisions between the crashkernel area and already-used
      memory.
      
      This patch:
      
      Change the reserve_bootmem() function to accept a new flag,
      BOOTMEM_EXCLUSIVE.  If that flag is set, the function returns -EBUSY
      if the memory has already been reserved in the past.  This avoids
      conflicts.
      
      Because that code runs before SMP initialisation, there's no race
      condition inside reserve_bootmem_core().
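
      A hedged usage sketch from the caller's side (the crashkernel
      variables are placeholders):

      /* Illustrative: reserve the crashkernel range exclusively so that a
       * collision with already-reserved memory is reported via -EBUSY
       * instead of being silently absorbed. */
      int ret = reserve_bootmem(crash_base, crash_size, BOOTMEM_EXCLUSIVE);
      if (ret < 0) {
              printk(KERN_WARNING "crashkernel reservation failed - "
                     "memory is in use\n");
              return;
      }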
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix powerpc build]
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Cc: <linux-arch@vger.kernel.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      72a7fe39
  15. 08 Dec, 2006 2 commits
  16. 26 Sep, 2006 1 commit