1. 15 Jul 2011 (3 commits)
  2. 14 Jul 2011 (1 commit)
  3. 13 Jul 2011 (1 commit)
    • x86, numa: Implement pfn -> nid mapping granularity check · 1e01979c
      Committed by Tejun Heo
      SPARSEMEM w/o VMEMMAP and DISCONTIGMEM, both used only on 32bit, use
      sections array to map pfn to nid which is limited in granularity.  If
      NUMA nodes are laid out such that the mapping cannot be accurate, boot
      will fail triggering BUG_ON() in mminit_verify_page_links().
      
      On 32bit, the granularity is 512MiB w/ PAE and SPARSEMEM.  This seems to
      have been granular enough until commit 2706a0bf (x86, NUMA: Enable
      CONFIG_AMD_NUMA on 32bit too).  Apparently, there is a machine which
      aligns NUMA nodes to 128MiB and has only AMD NUMA but no SRAT.  This
      led to the following BUG_ON().
      
       On node 0 totalpages: 2096615
         DMA zone: 32 pages used for memmap
         DMA zone: 0 pages reserved
         DMA zone: 3927 pages, LIFO batch:0
         Normal zone: 1740 pages used for memmap
         Normal zone: 220978 pages, LIFO batch:31
         HighMem zone: 16405 pages used for memmap
         HighMem zone: 1853533 pages, LIFO batch:31
       BUG: Int 6: CR2   (null)
            EDI   (null)  ESI 00000002  EBP 00000002  ESP c1543ecc
            EBX f2400000  EDX 00000006  ECX   (null)  EAX 00000001
            err   (null)  EIP c16209aa   CS 00000060  flg 00010002
       Stack: f2400000 00220000 f7200800 c1620613 00220000 01000000 04400000 00238000
                (null) f7200000 00000002 f7200b58 f7200800 c1620929 000375fe   (null)
              f7200b80 c16395f0 00200a02 f7200a80   (null) 000375fe 00000002   (null)
       Pid: 0, comm: swapper Not tainted 2.6.39-rc5-00181-g2706a0bf #17
       Call Trace:
        [<c136b1e5>] ? early_fault+0x2e/0x2e
        [<c16209aa>] ? mminit_verify_page_links+0x12/0x42
        [<c1620613>] ? memmap_init_zone+0xaf/0x10c
        [<c1620929>] ? free_area_init_node+0x2b9/0x2e3
        [<c1607e99>] ? free_area_init_nodes+0x3f2/0x451
        [<c1601d80>] ? paging_init+0x112/0x118
        [<c15f578d>] ? setup_arch+0x791/0x82f
        [<c15f43d9>] ? start_kernel+0x6a/0x257
      
      This patch implements node_map_pfn_alignment(), which determines the
      maximum internode alignment, and updates numa_register_memblks() to
      reject the NUMA configuration if that alignment exceeds the pfn -> nid
      mapping granularity of the memory model as determined by PAGES_PER_SECTION.
      
      This makes the problematic machine boot w/ flatmem by rejecting the
      NUMA config and provides protection against crazy NUMA configurations.
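
      As a rough illustration (an editor's sketch, not the patch text; config
      guards and exact messages are simplified), the check in
      numa_register_memblks() could look something like this: if the maximum
      internode alignment reported by node_map_pfn_alignment() is finer than a
      section, the sections array cannot map pfn -> nid accurately, so the
      NUMA configuration is refused and boot falls back to flatmem.

      /* Sketch only: reject NUMA layouts the memory model cannot represent. */
      unsigned long pfn_align = node_map_pfn_alignment();

      if (pfn_align && pfn_align < PAGES_PER_SECTION) {
              printk(KERN_WARNING
                     "Node alignment %LuMB < min %LuMB, rejecting NUMA config\n",
                     (unsigned long long)PFN_PHYS(pfn_align) >> 20,
                     (unsigned long long)PFN_PHYS(PAGES_PER_SECTION) >> 20);
              return -EINVAL;         /* caller falls back to a non-NUMA setup */
      }
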
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Link: http://lkml.kernel.org/r/20110712074534.GB2872@htj.dyndns.org
      LKML-Reference: <20110628174613.GP478@escobedo.osrc.amd.com>
      Reported-and-Tested-by: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Cc: Conny Seidel <conny.seidel@amd.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  4. 02 May 2011 (10 commits)
    • x86, NUMA: Trim numa meminfo with max_pfn in a separate loop · e5a10c1b
      Committed by Yinghai Lu
      While testing tj's 32bit NUMA unification code, one system with more
      than 64GiB of memory failed to use NUMA.  It turned out we did not trim
      the numa meminfo correctly against max_pfn when the start address of a
      node is higher than 64GiB.  The bug fix has already made it to the tip
      tree.

      This patch moves the checking and trimming into a separate loop, so we
      don't need to compare low/high in the following merge loops.  It also
      makes the code more readable.

      In addition, it makes the node merge printouts less strange.  On a
      512GiB NUMA system booted with a 32bit kernel:
      
      before:
      > NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
      > NUMA: Node 0 [0,80000000) + [100000000,1080000000) -> [0,1000000000)
      
      after:
      > NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
      > NUMA: Node 0 [0,80000000) + [100000000,1000000000) -> [0,1000000000)
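
      A minimal sketch of the separate trimming pass described above
      (illustrative; the real patch differs in details): clamp every memblk
      against max_pfn up front so the later merge loops never have to reason
      about the limit.

      /* Sketch only: clamp all memblks to max_pfn before any merging. */
      u64 limit = PFN_PHYS(max_pfn);

      for (i = 0; i < mi->nr_blks; i++) {
              struct numa_memblk *bi = &mi->blk[i];

              bi->end = min(bi->end, limit);          /* trim the tail */
              if (bi->start >= bi->end)               /* nothing left below the limit */
                      numa_remove_memblk_from(i--, mi);
      }
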
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      [Updated patch description and comment slightly.]
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • x86, NUMA: Rename setup_node_bootmem() to setup_node_data() · a56bca80
      Committed by Yinghai Lu
      Since memblock replaced bootmem, this function now only sets up
      node_data.
      
      Change the name to reflect what it actually does.
      
      tj: Minor adjustment to the patch description.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • x86, NUMA: Make numa_init_array() static · 752d4f37
      Committed by Tejun Heo
      numa_init_array() no longer has users outside of numa.c.  Make it
      static.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
    • x86, NUMA: Make 32bit use common NUMA init path · bd6709a9
      Committed by Tejun Heo
      With both _numa_init() methods converted and the rest of the init code
      adjusted, numa_32.c can now switch from the 32bit-only init code to the
      common one in numa.c.
      
      * Shim get_memcfg_*()'s are dropped and initmem_init() calls
        x86_numa_init(), which is updated to handle NUMAQ.
      
      * All boilerplate operations, including node range limiting and pgdat
        alloc/init, are handled by numa_init().  The 32bit-only implementation
        is removed.
      
      * 32bit numa_add_memblk(), numa_set_distance() and
        memory_add_physaddr_to_nid() are removed, and the common versions in
        numa.c are enabled for 32bit.
      
      This change causes the following behavior changes.
      
      * NODE_DATA()->node_start_pfn/node_spanned_pages properly initialized
        for 32bit too.
      
      * Much more sanity checks and configuration cleanups.
      
      * Proper handling of node distances.
      
      * The same NUMA init messages as 64bit.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
    • x86, NUMA: Initialize and use remap allocator from setup_node_bootmem() · 7888e96b
      Committed by Tejun Heo
      setup_node_bootmem() is taken from 64bit and doesn't use the remap
      allocator.  It's about to be shared with 32bit, so add support for it.
      If NODE_DATA is remapped, this is noted in the debug message and the
      node locality check is skipped, as the __pa() of the remapped address
      doesn't reflect the actual physical address.

      On 64bit, the remap allocator is a noop and doesn't affect the
      behavior.
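
      A rough sketch of the behavior described above (illustrative only;
      early_alloc_node_data() is a hypothetical stand-in for the real fallback
      allocation): when the remap allocator hands out the NODE_DATA storage,
      the debug message notes it and the locality check is skipped, because
      __pa() of a remapped address is meaningless.

      /* Sketch only: prefer the 32bit remap allocator for node_data. */
      bool remapped = false;
      void *nd = alloc_remap(nid, nd_size);

      if (nd)
              remapped = true;        /* __pa(nd) is not a real physical address */
      else
              nd = early_alloc_node_data(nid, nd_size);  /* hypothetical fallback */

      printk(KERN_INFO "  NODE_DATA(%d) at %p%s\n", nid, nd,
             remapped ? " (remapped)" : "");

      /* The locality check only makes sense for a linearly mapped address. */
      if (!remapped && early_pfn_to_nid(__pa(nd) >> PAGE_SHIFT) != nid)
              printk(KERN_INFO "    NODE_DATA(%d) is on a remote node\n", nid);
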
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
    • x86, NUMA: Remove long 64bit assumption from numa.c · 38f3e1ca
      Committed by Tejun Heo
      Code moved from numa_64.c assumes in several places that long is 64bit.
      This patch removes the assumption by using {s|u}64 explicitly, using
      PFN_PHYS() for page number -> address conversions, and adjusting printf
      formats.
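
      For example (a sketch of the kind of change meant here, not a quote of
      the patch), page-number arithmetic switches from unsigned long to
      explicit 64bit types via PFN_PHYS():

      /* Sketch only: pfn -> physical address without assuming a 64bit long. */
      u64 start = PFN_PHYS(start_pfn);        /* (phys_addr_t)start_pfn << PAGE_SHIFT */
      u64 end   = PFN_PHYS(last_pfn);

      printk(KERN_INFO "Initmem setup node %d [%016Lx-%016Lx)\n",
             nid, (unsigned long long)start, (unsigned long long)end);
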
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
    • x86, NUMA: Enable build of generic NUMA init code on 32bit · 744baba0
      Committed by Tejun Heo
      Generic NUMA init code was moved to numa.c from numa_64.c but is still
      guarded by CONFIG_X86_64.  This patch removes the compile guard and
      enables compiling on 32bit.
      
      * numa_add_memblk() and numa_set_distance() clash with the shim
        implementation in numa_32.c and are left out.
      
      * memory_add_physaddr_to_nid() clashes with 32bit implementation and
        is left out.
      
      * MAX_DMA_PFN definition in dma.h moved out of !CONFIG_X86_32.
      
      * node_data definition in numa_32.c removed in favor of the one in
        numa.c.
      
      There are places where ulong is assumed to be 64bit.  The next patch
      will fix them up.  Note that although the code is now compiled, it isn't
      used yet, so this patch doesn't cause any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
    • x86, NUMA: Move NUMA init logic from numa_64.c to numa.c · a4106eae
      Committed by Tejun Heo
      Move the generic 64bit NUMA init machinery from numa_64.c to numa.c.
      
      * node_data[], numa_meminfo and numa_distance
      * numa_add_memblk[_to](), numa_remove_memblk[_from]()
      * numa_set_distance() and friends
      * numa_init() and all the numa_meminfo handling helpers called from it
      * dummy_numa_init()
      * memory_add_physaddr_to_nid()
      
      A new function x86_numa_init() is added and the content of
      numa_64.c::initmem_init() is moved into it.  initmem_init() now simply
      calls x86_numa_init().
      
      Constants and numa_off declaration are moved from numa_{32|64}.h to
      numa.h.
      
      This is code reorganization and doesn't involve any functional change.
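
      A sketch of the resulting shape (illustrative; config guards and error
      handling are simplified, and the exact code may differ): x86_numa_init()
      tries the platform _numa_init() methods in turn and falls back to
      dummy_numa_init(), while initmem_init() simply calls it.

      /* Sketch only: the unified entry point, simplified. */
      void __init x86_numa_init(void)
      {
      #ifdef CONFIG_ACPI_NUMA
              if (!numa_init(x86_acpi_numa_init))
                      return;
      #endif
      #ifdef CONFIG_AMD_NUMA
              if (!numa_init(amd_numa_init))
                      return;
      #endif
              numa_init(dummy_numa_init);
      }

      void __init initmem_init(void)
      {
              x86_numa_init();
      }
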
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
    • x86, NUMA: Move numa_nodes_parsed to numa.[hc] · e6df595b
      Committed by Tejun Heo
      Move numa_nodes_parsed from numa_64.[hc] to numa.[hc] to prepare for
      NUMA init path unification.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
    • x86, NUMA: Unify 32/64bit numa_cpu_node() implementation · 6bd26273
      Committed by Tejun Heo
      Currently, the only meaningful user of apic->x86_32_numa_cpu_node() is
      NUMAQ, which returns a valid mapping only after the CPU is initialized
      during SMP bringup.  Thanks to the previous patch setting apicid -> node
      in setup_local_APIC(), __apicid_to_node[] now always contains the
      correct mapping whether a custom apic->x86_32_numa_cpu_node() is used or
      not.

      So there is no reason to keep a separate 32bit implementation; we can
      always consult __apicid_to_node[].  Move the 64bit implementation from
      numa_64.c to numa.c and remove the 32bit one from numa_32.c.
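
      The unified helper described above plausibly reduces to something like
      this (a sketch; the actual code may differ slightly):

      /* Sketch only: one numa_cpu_node() for both 32 and 64bit. */
      int numa_cpu_node(int cpu)
      {
              int apicid = early_per_cpu(x86_cpu_to_apicid, cpu);

              if (apicid != BAD_APICID)
                      return __apicid_to_node[apicid];
              return NUMA_NO_NODE;
      }
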
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
  5. 21 Apr 2011 (1 commit)
  6. 14 Feb 2011 (1 commit)
    • x86, numa: Add error handling for bad cpu-to-node mappings · 14392fd3
      Committed by David Rientjes
      With CONFIG_DEBUG_PER_CPU_MAPS, early_cpu_to_node() may return
      NUMA_NO_NODE when a mapping hasn't been initialized.  In such a
      case it emits a warning and continues without issue, but callers
      may try to use the return value to index into an array.
      
      We can catch those errors and fail silently since a warning has
      already been emitted.  No current user of numa_add_cpu()
      requires this error checking to avoid a crash, but it's better
      to be proactive in case a future user happens to have a bug and
      a user tries to diagnose it with CONFIG_DEBUG_PER_CPU_MAPS.
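
      A sketch of the error handling being added (illustrative; the real patch
      touches the CONFIG_DEBUG_PER_CPU_MAPS helpers and details may differ):
      when the mapping isn't set up yet, return quietly instead of indexing
      node_to_cpumask_map[] with NUMA_NO_NODE.

      /* Sketch only: fail silently on an uninitialized cpu -> node mapping. */
      static void __cpuinit numa_set_cpumask(int cpu, int enable)
      {
              int node = early_cpu_to_node(cpu);

              if (node == NUMA_NO_NODE)
                      return;         /* early_cpu_to_node() already warned */

              if (enable)
                      cpumask_set_cpu(cpu, node_to_cpumask_map[node]);
              else
                      cpumask_clear_cpu(cpu, node_to_cpumask_map[node]);
      }
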
      Reported-by: Jesper Juhl <jj@chaosbits.net>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      LKML-Reference: <alpine.DEB.2.00.1102071407250.7812@chino.kir.corp.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  7. 28 Jan 2011 (4 commits)
    • x86: Unify NUMA initialization between 32 and 64bit · 8db78cc4
      Committed by Tejun Heo
      Now that everything else is unified, NUMA initialization can be
      unified too.
      
      * numa_init_array() and init_cpu_to_node() are moved from
        numa_64 to numa.
      
      * numa_32::initmem_init() is updated to call numa_init_array()
        and setup_arch() to call init_cpu_to_node() on 32bit too.
      
      * x86_cpu_to_node_map is now initialized to NUMA_NO_NODE on
        32bit too. This is safe now as numa_init_array() will initialize
        it early during boot.
      
      This makes the NUMA mapping fully initialized before
      setup_per_cpu_areas() on 32bit too, so the first percpu chunk, which
      contains all the static variables and some of the dynamic area, is
      allocated with its NUMA affinity correctly considered.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-17-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
    • x86: Unify node_to_cpumask_map handling between 32 and 64bit · de2d9445
      Committed by Tejun Heo
      x86_32 has been managing node_to_cpumask_map explicitly from
      map_cpu_to_node() and friends in a rather ugly way.  With
      previous changes, it's now possible to share the code with
      64bit.
      
      * When CONFIG_NUMA_EMU is disabled, numa_add/remove_cpu() are
        implemented in numa.c and shared by 32 and 64bit.  CONFIG_NUMA_EMU
        versions still live in numa_64.c.
      
        NUMA_EMU's dependency on 64bit is planned to be removed and the
        above should go away together.
      
      * identify_cpu() now calls numa_add_cpu() for 32bit too.  This
        makes the explicit mask management from map_cpu_to_node() unnecessary.
      
      * The whole x86_32 specific map_cpu_to_node() chunk is no longer
        necessary.  Dropped.
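
      The shared helpers mentioned in the first bullet can be sketched roughly
      as follows (illustrative, not the patch text):

      /* Sketch only: !CONFIG_DEBUG_PER_CPU_MAPS versions shared by 32 and 64bit. */
      void __cpuinit numa_add_cpu(int cpu)
      {
              cpumask_set_cpu(cpu, node_to_cpumask_map[early_cpu_to_node(cpu)]);
      }

      void __cpuinit numa_remove_cpu(int cpu)
      {
              cpumask_clear_cpu(cpu, node_to_cpumask_map[early_cpu_to_node(cpu)]);
      }
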
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-16-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
    • x86: Unify CPU -> NUMA node mapping between 32 and 64bit · 645a7919
      Committed by Tejun Heo
      Unlike 64bit, 32bit has been using its own cpu_to_node_map[] for
      CPU -> NUMA node mapping.  Replace it with early_percpu variable
      x86_cpu_to_node_map and share the mapping code with 64bit.
      
      * USE_PERCPU_NUMA_NODE_ID is now enabled for 32bit too.
      
      * x86_cpu_to_node_map and numa_set/clear_node() are moved from
        numa_64 to numa.  For now, on 32bit, x86_cpu_to_node_map is initialized
        with 0 instead of NUMA_NO_NODE.  This is to avoid introducing unexpected
        behavior change and will be updated once init path is unified.
      
      * srat_detect_node() is now enabled for x86_32 too.  It calls
        numa_set_node() and initializes the mapping making explicit
        cpu_to_node_map[] updates from map/unmap_cpu_to_node() unnecessary.
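
      A sketch of the shared setter (illustrative; the real code also has to
      handle the window before the percpu areas are set up):

      /* Sketch only: record the cpu -> node mapping in the shared percpu map. */
      void __cpuinit numa_set_node(int cpu, int node)
      {
              per_cpu(x86_cpu_to_node_map, cpu) = node;

              if (node != NUMA_NO_NODE)
                      set_cpu_numa_node(cpu, node);   /* USE_PERCPU_NUMA_NODE_ID */
      }

      void __cpuinit numa_clear_node(int cpu)
      {
              numa_set_node(cpu, NUMA_NO_NODE);
      }
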
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: penberg@kernel.org
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-15-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
    • x86: Unify cpu/apicid <-> NUMA node mapping between 32 and 64bit · bbc9e2f4
      Committed by Tejun Heo
      The mapping between cpu/apicid and node is done via
      apicid_to_node[] on 64bit and apicid_2_node[] +
      apic->x86_32_numa_cpu_node() on 32bit. This difference makes it
      difficult to further unify 32 and 64bit NUMA handling.
      
      This patch unifies it by replacing both apicid_to_node[] and
      apicid_2_node[] with __apicid_to_node[] array, which is accessed
      by two accessors - set_apicid_to_node() and numa_cpu_node().  On
      64bit, numa_cpu_node() always consults __apicid_to_node[]
      directly while 32bit goes through apic->numa_cpu_node() method
      to allow apic implementations to override it.
      
      srat_detect_node() for amd cpus contains workaround for broken
      NUMA configuration which assumes relationship between APIC ID,
      HT node ID and NUMA topology.  Leave it to access
      __apicid_to_node[] directly as mapping through CPU might result
      in undesirable behavior change.  The comment is reformatted and
      updated to note the ugliness.
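
      The new shared array and its setter can be pictured roughly as follows
      (a sketch, not the patch text):

      /* Sketch only: one apicid -> node array with a trivial accessor. */
      extern s16 __apicid_to_node[MAX_LOCAL_APIC];

      static inline void set_apicid_to_node(int apicid, s16 node)
      {
              __apicid_to_node[apicid] = node;
      }
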
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-14-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
  8. 19 Jan 2011 (1 commit)
  9. 31 May 2010 (2 commits)
  10. 28 May 2010 (1 commit)
  11. 13 Mar 2009 (4 commits)