1. 16 Aug 2011, 1 commit
  2. 11 Aug 2011, 1 commit
  3. 05 Aug 2011, 1 commit
  4. 27 Jul 2011, 1 commit
  5. 13 Jul 2011, 2 commits
    • x86, numa: Implement pfn -> nid mapping granularity check · 1e01979c
      Committed by Tejun Heo
      SPARSEMEM w/o VMEMMAP and DISCONTIGMEM, both used only on 32bit, use a
      sections array to map pfn to nid, which is limited in granularity.  If
      NUMA nodes are laid out such that the mapping cannot be accurate, boot
      will fail, triggering the BUG_ON() in mminit_verify_page_links().
      
      On 32bit, the granularity is 512MiB w/ PAE and SPARSEMEM.  This seems
      to have been granular enough until commit 2706a0bf (x86, NUMA: Enable
      CONFIG_AMD_NUMA on 32bit too).  Apparently, there is a machine which
      aligns NUMA nodes to 128MiB and has only AMD NUMA but no SRAT.  This
      led to the following BUG_ON().
      
       On node 0 totalpages: 2096615
         DMA zone: 32 pages used for memmap
         DMA zone: 0 pages reserved
         DMA zone: 3927 pages, LIFO batch:0
         Normal zone: 1740 pages used for memmap
         Normal zone: 220978 pages, LIFO batch:31
         HighMem zone: 16405 pages used for memmap
         HighMem zone: 1853533 pages, LIFO batch:31
       BUG: Int 6: CR2   (null)
            EDI   (null)  ESI 00000002  EBP 00000002  ESP c1543ecc
            EBX f2400000  EDX 00000006  ECX   (null)  EAX 00000001
            err   (null)  EIP c16209aa   CS 00000060  flg 00010002
       Stack: f2400000 00220000 f7200800 c1620613 00220000 01000000 04400000 00238000
                (null) f7200000 00000002 f7200b58 f7200800 c1620929 000375fe   (null)
              f7200b80 c16395f0 00200a02 f7200a80   (null) 000375fe 00000002   (null)
       Pid: 0, comm: swapper Not tainted 2.6.39-rc5-00181-g2706a0bf #17
       Call Trace:
        [<c136b1e5>] ? early_fault+0x2e/0x2e
        [<c16209aa>] ? mminit_verify_page_links+0x12/0x42
        [<c1620613>] ? memmap_init_zone+0xaf/0x10c
        [<c1620929>] ? free_area_init_node+0x2b9/0x2e3
        [<c1607e99>] ? free_area_init_nodes+0x3f2/0x451
        [<c1601d80>] ? paging_init+0x112/0x118
        [<c15f578d>] ? setup_arch+0x791/0x82f
        [<c15f43d9>] ? start_kernel+0x6a/0x257
      
      This patch implements node_map_pfn_alignment(), which determines the
      maximum internode alignment, and updates numa_register_memblks() to
      reject the NUMA configuration if that alignment exceeds the pfn -> nid
      mapping granularity of the memory model as determined by
      PAGES_PER_SECTION.
      
      This makes the problematic machine boot w/ flatmem by rejecting the
      NUMA config and provides protection against crazy NUMA configurations.
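      
      A standalone sketch of the idea (reconstructed from the description
      above; the real node_map_pfn_alignment() walks the kernel's early
      node map rather than this stand-in memblk array):
      
         #include <stdio.h>
         
         struct memblk { unsigned long start_pfn, end_pfn; int nid; };
         
         /* Find the largest power-of-2 pfn alignment that every internode
          * boundary honors.  If it exceeds PAGES_PER_SECTION, pfn -> nid
          * cannot be mapped accurately and the NUMA config is rejected. */
         static unsigned long node_map_pfn_alignment(const struct memblk *blk, int n)
         {
                 unsigned long accl_mask = 0, last_end = 0, mask;
                 int last_nid = -1, i;
         
                 for (i = 0; i < n; i++) {
                         unsigned long start = blk[i].start_pfn;
                         int nid = blk[i].nid;
         
                         if (!start || last_nid < 0 || last_nid == nid) {
                                 last_nid = nid;
                                 last_end = blk[i].end_pfn;
                                 continue;
                         }
                         /* start with a mask fine enough to pinpoint start,
                          * then coarsen it while it still separates this
                          * node from the previous one */
                         mask = ~((1UL << __builtin_ctzl(start)) - 1);
                         while (mask && last_end <= (start & (mask << 1)))
                                 mask <<= 1;
                         accl_mask |= mask;  /* accumulate internode masks */
                         last_nid = nid;
                         last_end = blk[i].end_pfn;
                 }
                 return ~accl_mask + 1;  /* convert mask to number of pages */
         }
         
         int main(void)
         {
                 /* two nodes meeting at a 128MiB boundary
                  * (0x8000 pfns w/ 4KiB pages) */
                 struct memblk map[] = { { 0, 0x8000, 0 },
                                         { 0x8000, 0x10000, 1 } };
         
                 printf("max internode alignment: %lu pfns\n",
                        node_map_pfn_alignment(map, 2));
                 return 0;
         }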
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Link: http://lkml.kernel.org/r/20110712074534.GB2872@htj.dyndns.org
      LKML-Reference: <20110628174613.GP478@escobedo.osrc.amd.com>
      Reported-and-Tested-by: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Cc: Conny Seidel <conny.seidel@amd.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      1e01979c
    • x86, mm: s/PAGES_PER_ELEMENT/PAGES_PER_SECTION/ · d0ead157
      Committed by Tejun Heo
      DISCONTIGMEM on x86-32 implements pfn -> nid mapping similarly to
      SPARSEMEM; however, it calls each mapping unit ELEMENT instead of
      SECTION.  This patch renames it to SECTION so that PAGES_PER_SECTION
      is valid for both DISCONTIGMEM and SPARSEMEM.  This will be used by
      the next patch to implement mapping granularity check.
      
      This patch is a trivial constant rename.
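      
      For context, a sketch of the 32bit pfn -> nid lookup that the renamed
      constant now governs (modeled on the DISCONTIGMEM code; the section
      size and array length below are illustrative):
      
         #define PAGES_PER_SECTION (1UL << 15)  /* e.g. 128MiB w/ 4KiB pages */
         
         static signed char physnode_map[256];  /* one nid per section */
         
         /* granularity of this mapping is exactly PAGES_PER_SECTION */
         static inline int pfn_to_nid(unsigned long pfn)
         {
                 return physnode_map[pfn / PAGES_PER_SECTION];
         }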
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Link: http://lkml.kernel.org/r/20110712074422.GA2872@htj.dyndns.org
      Cc: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      d0ead157
  6. 12 Jul 2011, 1 commit
  7. 01 Jul 2011, 1 commit
    • perf: Remove the nmi parameter from the swevent and overflow interface · a8b0ca17
      Committed by Peter Zijlstra
      The nmi parameter indicated whether we could do wakeups from the
      current context; if not, we would set some state and self-IPI and let
      the resulting interrupt do the wakeup.
      
      For the various event classes:
      
        - hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
          the PMI-tail (ARM etc.)
        - tracepoint: nmi=0; since tracepoint could be from NMI context.
        - software: nmi=[0,1]; some, like the schedule thing cannot
          perform wakeups, and hence need 0.
      
      As one can see, there is very little nmi=1 usage, and the down-side of
      not using it is that on some platforms some software events can have a
      jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
      
      The up-side however is that we can remove the nmi parameter and save a
      bunch of conditionals in fast paths.
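      
      The shape of the change at a call site, as implied above (a sketch;
      the exact signatures are an approximation of the perf API of the
      time):
      
         /* before: every caller threaded the nmi flag through */
         perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, 0 /* nmi */, regs, address);
         
         /* after: the flag is gone; contexts that cannot wake up directly
          * fall back to irq_work, at the cost of a possible jiffy delay on
          * platforms without arch_irq_work_raise() */
         perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);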
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Michael Cree <mcree@orcon.net.nz>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a8b0ca17
  8. 19 Jun 2011, 1 commit
  9. 15 Jun 2011, 1 commit
  10. 29 May 2011, 1 commit
  11. 26 May 2011, 1 commit
  12. 25 May 2011, 3 commits
  13. 22 May 2011, 1 commit
  14. 21 May 2011, 1 commit
    • sanitize <linux/prefetch.h> usage · 268bb0ce
      Committed by Linus Torvalds
      Commit e66eed65 ("list: remove prefetching from regular list
      iterators") removed the include of prefetch.h from list.h, which
      uncovered several cases that had apparently relied on that rather
      obscure header file dependency.
      
      So this fixes things up a bit, using
      
         grep -L linux/prefetch.h $(git grep -l '[^a-z_]prefetchw*(' -- '*.[ch]')
         grep -L 'prefetchw*(' $(git grep -l 'linux/prefetch.h' -- '*.[ch]')
      
      to guide us in finding files that either need <linux/prefetch.h>
      inclusion, or have it despite not needing it.
      
      There are more of them around (mostly network drivers), but this gets
      many core ones.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      268bb0ce
  15. 17 May 2011, 1 commit
  16. 13 May 2011, 2 commits
    • x86/mm: Fix section mismatch derived from native_pagetable_reserve() · 53f8023f
      Committed by Sedat Dilek
      With CONFIG_DEBUG_SECTION_MISMATCH=y I see these warnings in next-20110415:
      
        LD      vmlinux.o
        MODPOST vmlinux.o
      WARNING: vmlinux.o(.text+0x1ba48): Section mismatch in reference from the function native_pagetable_reserve() to the function .init.text:memblock_x86_reserve_range()
      The function native_pagetable_reserve() references
      the function __init memblock_x86_reserve_range().
      This is often because native_pagetable_reserve lacks a __init
      annotation or the annotation of memblock_x86_reserve_range is wrong.
      
      This patch fixes the issue.
      Thanks to pipacs from PaX project for help on IRC.
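      
      A sketch of the kind of fix the warning asks for, assuming the patch
      resolves it by annotating the caller (the function body matches the
      description in the x86,xen entry below):
      
         /* native_pagetable_reserve() runs only at boot and calls the
          * __init function memblock_x86_reserve_range(), so it must be
          * placed in .init.text itself to avoid the section mismatch */
         void __init native_pagetable_reserve(u64 start, u64 end)
         {
                 memblock_x86_reserve_range(start, end, "PGTABLE");
         }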
      Acked-by: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      53f8023f
    • x86,xen: introduce x86_init.mapping.pagetable_reserve · 279b706b
      Committed by Stefano Stabellini
      Introduce a new x86_init hook called pagetable_reserve that at the end
      of init_memory_mapping is used to reserve a range of memory addresses for
      the kernel pagetable pages we used and free the other ones.
      
      On native it just calls memblock_x86_reserve_range while on xen it also
      takes care of setting the spare memory previously allocated
      for kernel pagetable pages from RO to RW, so that it can be used for
      other purposes.
      
      A detailed explanation of the reason why this hook is needed follows.
      
      As a consequence of the commit:
      
      commit 4b239f45
      Author: Yinghai Lu <yinghai@kernel.org>
      Date:   Fri Dec 17 16:58:28 2010 -0800
      
          x86-64, mm: Put early page table high
      
      at some point init_memory_mapping is going to reach the pagetable pages
      area and map those pages too (mapping them as normal memory that falls
      in the range of addresses passed to init_memory_mapping as argument).
      Some of those pages are already pagetable pages (they are in the range
      pgt_buf_start-pgt_buf_end) therefore they are going to be mapped RO and
      everything is fine.
      Some of these pages are not pagetable pages yet (they fall in the range
      pgt_buf_end-pgt_buf_top; for example the page at pgt_buf_end) so they
      are going to be mapped RW.  When these pages become pagetable pages and
      are hooked into the pagetable, xen will find that the guest has already
      a RW mapping of them somewhere and fail the operation.
      The reason Xen requires pagetables to be RO is that the hypervisor needs
      to verify that the pagetables are valid before using them. The validation
      operations are called "pinning" (more details in arch/x86/xen/mmu.c).
      
      In order to fix the issue we mark all the pages in the entire range
      pgt_buf_start-pgt_buf_top as RO; however, when the pagetable allocation
      is completed, only the range pgt_buf_start-pgt_buf_end is reserved by
      init_memory_mapping.  Hence the kernel is going to crash as soon as one
      of the pages in the range pgt_buf_end-pgt_buf_top is reused (because
      those pages are still RO).
      
      For this reason we need a hook to reserve the kernel pagetable pages we
      used and free the other ones so that they can be reused for other
      purposes.
      On native it just means calling memblock_x86_reserve_range, on Xen it
      also means marking RW the pagetable pages that we allocated before but
      that haven't been used before.
      
      Another way to fix this without using the hook would be to add an 'if
      (xen_pv_domain())' check to the init_memory_mapping code and call the
      Xen counterpart directly, but that is just nasty.
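      
      A sketch of the hook as described (modeled on the x86_init pattern;
      the exact struct layout is an assumption):
      
         struct x86_init_mapping {
                 /* reserve the pagetable pages actually used, free the rest */
                 void (*pagetable_reserve)(u64 start, u64 end);
         };
         
         /* native default: just reserve the range that was used */
         void __init native_pagetable_reserve(u64 start, u64 end)
         {
                 memblock_x86_reserve_range(start, end, "PGTABLE");
         }
         
         /* at the end of init_memory_mapping():
          *   x86_init.mapping.pagetable_reserve(PFN_PHYS(pgt_buf_start),
          *                                      PFN_PHYS(pgt_buf_end));
          * Xen overrides the hook to also flip the unused RO pages in
          * pgt_buf_end-pgt_buf_top back to RW before freeing them */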
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Acked-by: Yinghai Lu <yinghai@kernel.org>
      Acked-by: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      279b706b
  17. 02 May 2011, 20 commits
    • x86, NUMA: Trim numa meminfo with max_pfn in a separate loop · e5a10c1b
      Committed by Yinghai Lu
      While testing tj's 32bit NUMA unification code, one system with more
      than 64g of memory failed to use NUMA.  It turned out we did not trim
      the numa meminfo correctly against max_pfn when the start address of a
      node is higher than 64GiB.  The bug fix made it to the tip tree.
      
      This patch moves the checking and trimming into a separate loop, so the
      following merge loops don't need to compare low/high.  It makes the
      code more readable.
      
      Also, it makes the node merge printouts less strange.  On a 512GiB NUMA
      system with 32bit:
      
      before:
      > NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
      > NUMA: Node 0 [0,80000000) + [100000000,1080000000) -> [0,1000000000)
      
      after:
      > NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
      > NUMA: Node 0 [0,80000000) + [100000000,1000000000) -> [0,1000000000)
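      
      A sketch of the separated trim pass (reconstructed from the
      description; the real loop lives in the numa meminfo cleanup path):
      
         const u64 low = 0;
         const u64 high = (u64)max_pfn << PAGE_SHIFT;
         int i;
         
         /* clamp every block against the usable range first, so the merge
          * loops below can assume all blocks are already in bounds */
         for (i = 0; i < mi->nr_blks; i++) {
                 struct numa_memblk *bi = &mi->blk[i];
         
                 bi->start = max(bi->start, low);
                 bi->end = min(bi->end, high);
         
                 /* and drop blocks that became empty */
                 if (bi->start >= bi->end)
                         numa_remove_memblk_from(i--, mi);
         }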
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      [Updated patch description and comment slightly.]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      e5a10c1b
    • x86, NUMA: Rename setup_node_bootmem() to setup_node_data() · a56bca80
      Committed by Yinghai Lu
      After memblock replaced bootmem, this function now only sets up
      node_data.
      
      Change the name to reflect what it actually does.
      
      tj: Minor adjustment to the patch description.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      a56bca80
    • x86, NUMA: Enable emulation on 32bit too · 1b7e03ef
      Committed by Tejun Heo
      Now that the NUMA init path is unified, NUMA emulation can be enabled
      on 32bit.  Make numa_emulation.c safe on 32bit by doing the following
      (a sketch follows the list):
      
      * Define MAX_DMA32_PFN on 32bit too.
      
      * Include bootmem.h for max_pfn declaration.
      
      * Use u64 explicitly and always use PFN_PHYS() when converting page
        number to address.
      
      * Avoid __udivdi3() generation on 32bit by doing number of pages
        calculation instead in split_nodes_interleave().
      
      And drop X86_64 dependency from Kconfig.
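      
      A sketch of the 64bit-safe patterns listed above (illustrative, not
      the literal patch):
      
         /* go through PFN_PHYS() with an explicit u64 so the shift cannot
          * truncate on 32bit, where unsigned long is 32 bits wide */
         u64 addr = PFN_PHYS(pfn);               /* not: pfn << PAGE_SHIFT */
         
         /* avoid u64 division (and thus __udivdi3() on 32bit) by dividing
          * page counts instead of byte addresses */
         unsigned long pages = (unsigned long)(size >> PAGE_SHIFT) / nr_nodes;
         u64 node_size = PFN_PHYS(pages);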
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      1b7e03ef
    • x86, NUMA: Enable CONFIG_AMD_NUMA on 32bit too · 2706a0bf
      Committed by Tejun Heo
      Now that the NUMA init path is unified, amdtopology can be enabled on
      32bit.  Make amdtopology.c safe on 32bit by explicitly using u64 and
      drop the X86_64 dependency from Kconfig.
      
      bootmem.h is included for the max_pfn declaration.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      2706a0bf
    • x86, NUMA: Rename amdtopology_64.c to amdtopology.c · c6f58878
      Committed by Tejun Heo
      amdtopology is going to be used by 32bit too, so drop the _64 suffix.
      This is a pure rename.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      c6f58878
    • x86, NUMA: Make numa_init_array() static · 752d4f37
      Committed by Tejun Heo
      numa_init_array() no longer has users outside of numa.c.  Make it
      static.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      752d4f37
    • x86, NUMA: Make 32bit use common NUMA init path · bd6709a9
      Committed by Tejun Heo
      With both _numa_init() methods converted and the rest of the init code
      adjusted, numa_32.c can now switch from the 32bit-only init code to
      the common one in numa.c.
      
      * Shim get_memcfg_*()'s are dropped and initmem_init() calls
        x86_numa_init(), which is updated to handle NUMAQ.
      
      * All boilerplate operations including node range limiting, pgdat
        alloc/init are handled by numa_init().  32bit only implementation is
        removed.
      
      * 32bit numa_add_memblk(), numa_set_distance() and
        memory_add_physaddr_to_nid() removed and the common versions in
        numa.c enabled for 32bit.
      
      This change causes the following behavior changes.
      
      * NODE_DATA()->node_start_pfn/node_spanned_pages properly initialized
        for 32bit too.
      
      * Many more sanity checks and configuration cleanups.
      
      * Proper handling of node distances.
      
      * The same NUMA init messages as 64bit.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      bd6709a9
    • x86, NUMA: Initialize and use remap allocator from setup_node_bootmem() · 7888e96b
      Committed by Tejun Heo
      setup_node_bootmem() is taken from 64bit and doesn't use the remap
      allocator.  It's about to be shared with 32bit, so add support for it.
      If NODE_DATA is remapped, that is noted in the debug message and the
      node locality check is skipped, as __pa() of the remapped address
      doesn't reflect the actual physical address.
      
      On 64bit, remap allocator becomes noop and doesn't affect the
      behavior.
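      
      A sketch of the flow added to setup_node_bootmem() (reconstructed
      from the description; the memblock fallback path is elided):
      
         nd = alloc_remap(nid, nd_size);         /* NULL / noop on 64bit */
         if (nd) {
                 /* __pa() of a remapped address is not the backing
                  * physical address, so the locality check is skipped */
                 nd_pa = __pa(nd);
                 remapped = true;
         } else {
                 /* ... fall back to a memblock allocation on @nid ... */
         }
         
         printk(KERN_INFO "  NODE_DATA [%016Lx - %016Lx]%s\n",
                nd_pa, nd_pa + nd_size - 1, remapped ? " (remapped)" : "");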
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      7888e96b
    • x86-32, NUMA: Add @start and @end to init_alloc_remap() · 99cca492
      Committed by Tejun Heo
      Instead of dereferencing node_start/end_pfn[] directly, make
      init_alloc_remap() take @start and @end and let the caller be
      responsible for making sure the range is sane.  This is to prepare for
      use from unified NUMA init code.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      99cca492
    • x86, NUMA: Remove long 64bit assumption from numa.c · 38f3e1ca
      Committed by Tejun Heo
      Code moved from numa_64.c assumed in several places that long is
      64bit.  This patch removes the assumption by using {s|u}64 explicitly,
      using PFN_PHYS() for page number -> address conversions, and adjusting
      printf formats.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      38f3e1ca
    • x86, NUMA: Enable build of generic NUMA init code on 32bit · 744baba0
      Committed by Tejun Heo
      Generic NUMA init code was moved to numa.c from numa_64.c but is still
      guarded by CONFIG_X86_64.  This patch removes the compile guard and
      enables compiling on 32bit.
      
      * numa_add_memblk() and numa_set_distance() clash with the shim
        implementation in numa_32.c and are left out.
      
      * memory_add_physaddr_to_nid() clashes with 32bit implementation and
        is left out.
      
      * MAX_DMA_PFN definition in dma.h moved out of !CONFIG_X86_32.
      
      * node_data definition in numa_32.c removed in favor of the one in
        numa.c.
      
      There are places where ulong is assumed to be 64bit.  The next patch
      will fix them up.  Note that although the code is compiled it isn't
      used yet and this patch doesn't cause any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      744baba0
    • x86, NUMA: Move NUMA init logic from numa_64.c to numa.c · a4106eae
      Committed by Tejun Heo
      Move the generic 64bit NUMA init machinery from numa_64.c to numa.c.
      
      * node_data[], numa_mem_info and numa_distance
      * numa_add_memblk[_to](), numa_remove_memblk[_from]()
      * numa_set_distance() and friends
      * numa_init() and all the numa_meminfo handling helpers called from it
      * dummy_numa_init()
      * memory_add_physaddr_to_nid()
      
      A new function x86_numa_init() is added and the content of
      numa_64.c::initmem_init() is moved into it.  initmem_init() now simply
      calls x86_numa_init().
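      
      A sketch of the new entry point (the ordering of init methods is
      reconstructed from this series; treat it as illustrative):
      
         void __init x86_numa_init(void)
         {
         #ifdef CONFIG_ACPI_NUMA
                 if (!numa_init(x86_acpi_numa_init))
                         return;
         #endif
         #ifdef CONFIG_AMD_NUMA
                 if (!numa_init(amd_numa_init))
                         return;
         #endif
                 numa_init(dummy_numa_init);     /* always succeeds */
         }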
      
      Constants and numa_off declaration are moved from numa_{32|64}.h to
      numa.h.
      
      This is code reorganization and doesn't involve any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      a4106eae
    • x86-32, NUMA: Update numaq to use new NUMA init protocol · 299a180a
      Committed by Tejun Heo
      Update numaq such that it calls numa_add_memblk() and sets
      numa_nodes_parsed instead of directly diddling with NUMA states.  The
      original get_memcfg_numaq() is renamed to numaq_numa_init() and new
      get_memcfg_numaq() is created in numa_32.c.
      
      The shim numa_add_memblk() implementation handles node_start/end_pfn[]
      and node_set_online() for nodes with memory.  The new
      get_memcfg_numaq() is exactly the same as get_memcfg_from_srat() other
      than calling the numaq init function.  The things get_memcfg_numaq()
      does are not strictly necessary for numaq but are added for
      consistency and to help unify NUMA init handling.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      299a180a
    • x86-32, NUMA: Replace srat_32.c with srat.c · 5acd91ab
      Committed by Tejun Heo
      SRAT support implementation in srat_32.c and srat.c are generally
      similar; however, there are some differences.
      
      First of all, 64bit implementation supports more types of SRAT
      entries.  64bit supports x2apic, affinity, memory and SLIT.  32bit
      only supports processor and memory.
      
      Most other differences stem from different initialization protocols
      employed by 64bit and 32bit NUMA init paths.
      
      On 64bit,
      
      * Mappings among PXM, node and apicid are directly done in each SRAT
        entry callback.
      
      * Memory affinity information is passed to numa_add_memblk() which
        takes care of all interfacing with NUMA init.
      
      * Doesn't directly initialize NUMA configurations.  All the
        information is recorded in numa_nodes_parsed and memblks.
      
      On 32bit,
      
      * Checks numa_off.
      
      * Things go through one more level of indirection via private tables
        but eventually end up initializing the same mappings.
      
      * node_start/end_pfn[] are initialized and
        memblock_x86_register_active_regions() is called for each memory
        chunk.
      
      * node_set_online() is called for each online node.
      
      * sort_node_map() is called.
      
      There are also other minor differences in sanity checking and messages,
      but taking the 64bit version should be good enough.
      
      This patch drops the 32bit specific implementation and makes the 64bit
      implementation common for both 32 and 64bit.
      
      The init protocol differences are dealt with in two places - the
      numa_add_memblk() shim added in the previous patch and new temporary
      numa_32.c:get_memcfg_from_srat() which wraps invocation of
      x86_acpi_numa_init().
      
      The shim numa_add_memblk() handles the following.
      
      * node_start/end_pfn[] initialization.
      
      * node_set_online() for memory nodes.
      
      * Invocation of memblock_x86_register_active_regions().
      
      The shim get_memcfg_from_srat() handles the following (see the sketch
      at the end of this description).
      
      * numa_off check.
      
      * node_set_online() for CPU nodes.
      
      * sort_node_map() invocation.
      
      * Clearing of numa_nodes_parsed and active_ranges on failure.
      
      The shims are temporary and will be removed as the generic NUMA init
      path on 32bit is replaced with the 64bit one.
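      
      A sketch of the temporary wrapper (reconstructed from the bullet
      lists above; the error handling details are assumptions):
      
         int __init get_memcfg_from_srat(void)
         {
                 int nid;
         
                 if (numa_off)
                         return 0;
         
                 if (x86_acpi_numa_init() < 0)
                         goto fail;
         
                 /* CPU-only nodes were recorded in numa_nodes_parsed */
                 for_each_node_mask(nid, numa_nodes_parsed)
                         node_set_online(nid);
                 sort_node_map();
                 return 1;
         fail:
                 /* undo partial parsing so the fallback starts clean */
                 nodes_clear(numa_nodes_parsed);
                 remove_all_active_ranges();
                 return 0;
         }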
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      5acd91ab
    • x86-32, NUMA: implement temporary NUMA init shims · b0d31080
      Committed by Tejun Heo
      To help the transition to common NUMA init, implement temporary 32bit
      shims for numa_add_memblk() and numa_set_distance().
      numa_add_memblk() registers the memblk and adjusts
      node_start/end_pfn[].  numa_set_distance() is a noop.  Both are
      sketched at the end of this description.
      
      These shims will allow using 64bit NUMA init functions on 32bit and
      gradual transition to common NUMA init path.
      
      For a detailed description, please read the descriptions of the
      commits which make use of the shim functions.
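      
      A sketch of the two shims (reconstructed from this and the previous
      descriptions; the bounds handling is an assumption):
      
         int __init numa_add_memblk(int nid, u64 start, u64 end)
         {
                 unsigned long start_pfn = start >> PAGE_SHIFT;
                 unsigned long end_pfn = end >> PAGE_SHIFT;
         
                 memblock_x86_register_active_regions(nid, start_pfn, end_pfn);
         
                 /* grow the node's pfn range to cover the new memblk */
                 node_start_pfn[nid] = min(node_start_pfn[nid], start_pfn);
                 node_end_pfn[nid] = max(node_end_pfn[nid], end_pfn);
                 return 0;
         }
         
         void __init numa_set_distance(int from, int to, int distance)
         {
                 /* noop on 32bit for now */
         }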
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      b0d31080
    • x86, NUMA: Move numa_nodes_parsed to numa.[hc] · e6df595b
      Committed by Tejun Heo
      Move numa_nodes_parsed from numa_64.[hc] to numa.[hc] to prepare for
      NUMA init path unification.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      e6df595b
    • x86-32, NUMA: Move get_memcfg_numa() into numa_32.c · daf4f480
      Committed by Tejun Heo
      There's no reason for get_memcfg_numa() to be implemented inline in
      mmzone_32.h.  Move it to numa_32.c and also make
      get_memcfg_numa_flag() static.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      daf4f480
    • x86, NUMA: make srat.c 32bit safe · eca9ad31
      Committed by Tejun Heo
      Make srat.c 32bit safe by removing the assumption that unsigned long
      is 64bit.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      eca9ad31
    • x86, NUMA: rename srat_64.c to srat.c · 7b2600f8
      Committed by Tejun Heo
      Rename srat_64.c to srat.c.  This is to prepare for unification of
      NUMA init paths between 32 and 64bit.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      7b2600f8
    • x86, NUMA: trivial cleanups · 1201e10a
      Committed by Tejun Heo
      * Kill no longer used struct bootnode.
      
      * Kill dangling declaration of pxm_to_nid() in numa_32.h.
      
      * Make setup_node_bootmem() static.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      1201e10a