1. 05 Mar 2009, 1 commit
  2. 03 Mar 2009, 1 commit
  3. 22 Feb 2009, 1 commit
  4. 08 Jan 2009, 1 commit
  5. 13 Nov 2008, 1 commit
    • x86, hibernate: fix breakage on x86_32 with CONFIG_NUMA set · 97a70e54
      Committed by Rafael J. Wysocki
      Impact: fix crash during hibernation on 32-bit NUMA
      
      The NUMA code on x86_32 creates a special memory mapping that allows
      each node's pgdat to be located in that node's memory.  For this
      purpose it allocates a memory area at the end of each node's memory
      and maps this area so that it is accessible with virtual addresses
      belonging to low memory.  As a result, if there is high memory,
      these NUMA-allocated areas are physically located in high memory,
      although they are mapped to low memory addresses.
      
      The hibernation code does not take this into account, so hibernation
      fails on all x86_32 systems with CONFIG_NUMA=y and high memory
      present.  Fix this by adding a special mapping for the NUMA-allocated
      memory areas to the temporary page tables created during the last
      phase of resume.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      97a70e54
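      [ A hedged sketch of such a fix, not a verbatim copy of 97a70e54.
        It assumes the existing x86_32 NUMA bookkeeping arrays
        node_remap_start_vaddr, node_remap_start_pfn and node_remap_size,
        and a helper called while the temporary resume page tables are
        being built: ]

        /* Re-create the boot-time KVA remap in the resume page tables
         * so each node's high-memory pgdat stays reachable during the
         * final phase of resume. */
        void resume_map_numa_kva(pgd_t *pgd_base)
        {
                int node;

                for_each_online_node(node) {
                        unsigned long start_va, start_pfn, size, pfn;

                        start_va = (unsigned long)node_remap_start_vaddr[node];
                        start_pfn = node_remap_start_pfn[node];
                        size = node_remap_size[node];

                        for (pfn = 0; pfn < size; pfn += PTRS_PER_PTE) {
                                unsigned long vaddr = start_va + (pfn << PAGE_SHIFT);
                                pgd_t *pgd = pgd_base + pgd_index(vaddr);
                                pud_t *pud = pud_offset(pgd, vaddr);
                                pmd_t *pmd = pmd_offset(pud, vaddr);

                                /* large-page mapping, mirroring the
                                 * original boot-time remap */
                                set_pmd(pmd, pfn_pmd(start_pfn + pfn,
                                                     PAGE_KERNEL_LARGE_EXEC));
                        }
                }
        }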
  6. 13 Oct 2008, 1 commit
  7. 26 Jul 2008, 1 commit
  8. 25 Jul 2008, 1 commit
  9. 08 Jul 2008, 10 commits
  10. 10 Jun 2008, 2 commits
  11. 04 Jun 2008, 2 commits
  12. 03 Jun 2008, 10 commits
  13. 31 May 2008, 2 commits
  14. 20 May 2008, 2 commits
    • x86: cope with no remap space being allocated for a numa node · 84d6bd0e
      Committed by Andy Whitcroft
      When allocating the pgdats for NUMA nodes on x86_32 we attempt to
      place them in the numa remap space for that node.  However, if the
      node has no remap space allocated (for example because the remap
      location in the node contains non-RAM pages), we incorrectly place
      the pgdat at virtual address zero.  Check that remap space is
      available, and fall back to node 0 memory where it is not.
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      84d6bd0e
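      [ A hedged sketch of the check: allocate_pgdat() is the assumed
        name of the x86_32 helper that places a node's pgdat, with
        min_low_pfn acting as the node 0 bootmem cursor: ]

        static void __init allocate_pgdat(int nid)
        {
                /* Use the node's remap space only if it was actually
                 * allocated; it may be absent, e.g. when non-RAM pages
                 * sit at the remap location in the node. */
                if (nid && node_has_online_mem(nid) &&
                    node_remap_start_vaddr[nid])
                        NODE_DATA(nid) = (pg_data_t *)node_remap_start_vaddr[nid];
                else {
                        /* Fall back to node 0 low memory. */
                        NODE_DATA(nid) = (pg_data_t *)(pfn_to_kaddr(min_low_pfn));
                        min_low_pfn += PFN_UP(sizeof(pg_data_t));
                }
        }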
    • x86: reinstate numa remap for SPARSEMEM on x86 NUMA systems · b9ada428
      Committed by Andy Whitcroft
      Recent kernels have been panicking while trying to allocate memory
      early in boot, in __alloc_pages:
      
        BUG: unable to handle kernel paging request at 00001568
        IP: [<c10407b6>] __alloc_pages+0x33/0x2cc
        *pdpt = 00000000013a5001 *pde = 0000000000000000
        Oops: 0000 [#1] SMP
        Modules linked in:
      
        Pid: 1, comm: swapper Not tainted (2.6.25 #78)
        EIP: 0060:[<c10407b6>] EFLAGS: 00010246 CPU: 0
        EIP is at __alloc_pages+0x33/0x2cc
        EAX: 00001564 EBX: 000412d0 ECX: 00001564 EDX: 000005c3
        ESI: f78012a0 EDI: 00000001 EBP: 00001564 ESP: f7871e50
        DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
        Process swapper (pid: 1, ti=f7870000 task=f786f670 task.ti=f7870000)
        Stack: 00000000 f786f670 00000010 00000000 0000b700 000412d0 f78012a0 00000001
               00000000 c105b64d 00000000 000412d0 f78012a0 f7803120 00000000 c105c1c5
               00000010 f7803144 000412d0 00000001 f7803130 f7803120 f78012a0 00000001
        Call Trace:
         [<c105b64d>] kmem_getpages+0x94/0x129
         [<c105c1c5>] cache_grow+0x8f/0x123
         [<c105c689>] ____cache_alloc_node+0xb9/0xe4
         [<c105c999>] kmem_cache_alloc_node+0x92/0xd2
         [<c1018929>] build_sched_domains+0x536/0x70d
         [<c100b63c>] do_flush_tlb_all+0x0/0x3f
         [<c100b63c>] do_flush_tlb_all+0x0/0x3f
         [<c10572d6>] interleave_nodes+0x23/0x5a
         [<c105c44f>] alternate_node_alloc+0x43/0x5b
         [<c1018b47>] arch_init_sched_domains+0x46/0x51
         [<c136e85e>] kernel_init+0x0/0x82
         [<c137ac19>] sched_init_smp+0x10/0xbb
         [<c136e8a1>] kernel_init+0x43/0x82
         [<c10035cf>] kernel_thread_helper+0x7/0x10
      
      Debugging this showed that NODE_DATA() for nodes other than node 0
      was NULL.  Tracing this back showed that the NODE_DATA() pointers
      were being initialised to each node's remap space.  However, under
      SPARSEMEM remap is disabled, which leads to the pgdats being placed
      incorrectly at kernel virtual address 0, causing the panic when
      attempting to allocate memory from these nodes.
      
      NUMA remap was disabled in the commit below.  This occurred while
      fixing problems triggered when attempting to boot x86_32 NUMA
      SPARSEMEM kernels on non-NUMA hardware.
      
      	x86: make NUMA work on 32-bit
      	commit 1b000a5d
      
      The real problem is believed to be related to other alignment issues
      in the regions blocked out from the bootmem allocator for small
      memory systems, and has been fixed separately.  Therefore re-enable
      remap for SPARSEMEM, which fixes the pgdat allocation issues.
      Testing confirms that SPARSEMEM NUMA kernels boot correctly with
      this part of the change reverted.
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b9ada428
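      [ A hedged illustration of the mechanism behind the oops, not the
        literal diff: with remap compiled out under SPARSEMEM,
        node_remap_start_vaddr[nid] is never filled in, so the old
        unconditional placement amounted to: ]

        /* node_remap_start_vaddr[nid] stays 0 when remap is disabled,
         * so every non-zero node's pgdat "lives" at virtual address 0
         * and the first access faults at a small offset into the
         * struct -- plausibly the 00001568 seen in the oops above. */
        NODE_DATA(nid) = (pg_data_t *)node_remap_start_vaddr[nid];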
  15. 05 May 2008, 1 commit
    • x86: undo visws/numaq build changes · 48b83d24
      Committed by Thomas Gleixner
      arch/x86/pci/Makefile_32 has a nasty detail: the VISWS and NUMAQ
      builds override the generic pci-y rules.  This needs a proper
      cleanup, but that needs more thought.  Undo
      
      commit 895d3093
          x86: numaq fix
          do not override the existing pci-y rule when adding visws or
          numaq rules.
      
      There is also a stupid init function ordering problem vs. acpi.o
      
      Add comments to the Makefile to avoid tripping over this again.
      
      Remove the srat stub code in discontig_32.c to allow a proper NUMAQ
      build.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      48b83d24
  16. 20 Apr 2008, 1 commit
  17. 17 Apr 2008, 1 commit
  18. 27 Mar 2008, 1 commit
    • x86: fix trim mtrr not to setup_memory two times · 76c32418
      Committed by Yinghai Lu
      We can call find_max_pfn() directly, instead of setup_memory(), to
      get the max_pfn needed for MTRR trimming; otherwise setup_memory()
      is called twice and its work is duplicated.
      
      [ mingo@elte.hu: both Thomas and I simulated a double call to
        setup_bootmem_allocator() and can confirm that it is a real bug
        which can hang in certain configs.  It has not been reported yet,
        but that is probably due to the relatively scarce nature of
        MTRR-trimming systems. ]
      Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      76c32418
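      [ A hedged sketch of the resulting ordering in setup_arch() on
        x86_32; the exact call sites may differ from the commit: ]

        /* Compute max_pfn cheaply from the e820 map instead of doing
         * a full setup_memory() pass just to learn it. */
        find_max_pfn();

        /* Update e820 for memory not covered by WB MTRRs; if the map
         * changed, recompute max_pfn -- again without setup_memory(). */
        if (mtrr_trim_uncached_memory(max_pfn))
                find_max_pfn();

        /* Run the heavyweight bootmem setup exactly once. */
        max_low_pfn = setup_memory();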