1. 04 March 2011: 1 commit
    • x86-64, NUMA: Revert NUMA affine page table allocation · f8911250
      Authored by Tejun Heo
      This patch reverts NUMA affine page table allocation added by commit
      1411e0ec (x86-64, numa: Put pgtable to local node memory).
      
      The commit made an undocumented change where the kernel linear mapping
      strictly follows the intersection of the e820 memory map and the NUMA
      configuration.  If the physical memory configuration has holes or the
      NUMA nodes are not properly aligned, this results in an unnecessarily
      small mapping size and thus increased TLB pressure.  For details, see
      
        http://thread.gmane.org/gmane.linux.kernel/1104672
      
      Patches to fix the problem have been proposed, but the underlying code
      needs more cleanup and the approach itself seems a bit heavy-handed,
      so it has been decided to revert the feature for now and revisit it in
      the next development cycle.  See
      
        http://thread.gmane.org/gmane.linux.kernel/1105959
      
      As init_memory_mapping_high() callsites have been consolidated since
      the commit, reverting is done manually.  Also, the RED-PEN comment in
      arch/x86/mm/init.c is not restored as the problem no longer exists
      with memblock based top-down early memory allocation.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      f8911250
  2. 02 March 2011: 2 commits
    • x86-64, NUMA: Better explain numa_distance handling · eb8c1e2c
      Authored by Tejun Heo
      Handling of out-of-bounds distances and allocation failure can use
      better documentation.  Add it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Acked-by: David Rientjes <rientjes@google.com>
      eb8c1e2c
    • x86-64, NUMA: Fix distance table handling · ce003330
      Authored by Yinghai Lu
      NUMA distance table handling has the following problems.
      
      * numa_reset_distance() uses numa_distance * sizeof(numa_distance[0])
        as the table size when it should be using the square of
        numa_distance_cnt.
      
      * The same size miscalculation is made when allocating space for
        phys_dist in numa_emulation().
      
      * In numa_emulation(), phys_dist must be reserved; otherwise, the new
        emulated distance table may overlap it.
      
      Fix them and, while at it, take numa_distance_cnt resetting in
      numa_reset_distance() out of the if block to simplify the code a bit.
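
      A minimal standalone sketch (not the kernel code) of the size rule
      being fixed here: an N-node distance table holds N * N entries, so its
      byte size must be computed from the squared node count, never from the
      table pointer itself.

        #include <stddef.h>
        #include <stdio.h>

        /* Sketch only: an NxN distance table for cnt nodes occupies
         * cnt * cnt one-byte entries. */
        static size_t distance_table_size(int cnt)
        {
                return (size_t)cnt * cnt * sizeof(unsigned char);
        }

        int main(void)
        {
                printf("4-node table: %zu bytes\n", distance_table_size(4));
                return 0;
        }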
      
      David Rientjes reported incorrect handling of distance table during
      emulation.
      
      -tj: Edited out numa_alloc_distance() related changes which weren't
           necessary and rewrote patch description.
      
      -v2: Ingo was unhappy with 80-column limit induced linebreaks.  Let
           lines run over 80-column.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Reported-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Acked-by: David Rientjes <rientjes@google.com>
      ce003330
  3. 26 February 2011: 1 commit
  4. 25 February 2011: 1 commit
    • x86-64, NUMA: Fix size of numa_distance array · 1f565a89
      Authored by David Rientjes
      numa_distance should be sized like the SLIT, an NxN matrix where N is
      the highest node id + 1.  This patch fixes the calculation to avoid
      overflowing the array on the subsequent iteration.
      
      -tj: The original patch used last index to calculate size.  Yinghai
           pointed out it should be incremented so it is the number of
           elements instead of the last index to calculate the size of the
           table.  Updated accordingly.
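
      A tiny illustrative sketch (not the kernel code) of the off-by-one:
      with nodes 0..3 present, the table must be sized with N = highest node
      id + 1, otherwise the last row and column fall outside the allocation.

        #include <stdio.h>

        int main(void)
        {
                int highest_nid = 3;            /* nodes 0..3 present     */
                int too_small = highest_nid;    /* 3 -> 3x3 = 9 entries   */
                int correct = highest_nid + 1;  /* 4 -> 4x4 = 16 entries  */

                printf("too small: %d entries, correct: %d entries\n",
                       too_small * too_small, correct * correct);
                return 0;
        }
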
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      1f565a89
  5. 24 February 2011: 4 commits
    • x86: Rename e820_table_* to pgt_buf_* · d1b19426
      Authored by Yinghai Lu
      e820_table_{start|end|top}, which are used to buffer page table
      allocation during early boot, are now derived from memblock and don't
      have much to do with e820.  Change the names so that they reflect what
      they're used for.
      
      This patch doesn't introduce any behavior change.
      
      -v2: Ingo found that earlier patch "x86: Use early pre-allocated page
           table buffer top-down" caused crash on 32bit and needed to be
           dropped.  This patch was updated to reflect the change.
      
      -tj: Updated commit description.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      d1b19426
    • bootmem: Move __alloc_memory_core_early() to nobootmem.c · 8bc1f91e
      Authored by Yinghai Lu
      Now that bootmem.c and nobootmem.c are separate, there's no reason to
      define __alloc_memory_core_early(), which is used only by nobootmem,
      inside #ifdef in page_alloc.c.  Move it to nobootmem.c and make it
      static.
      
      This patch doesn't introduce any behavior change.
      
      -tj: Updated commit description.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      8bc1f91e
    • bootmem: Move contig_page_data definition to bootmem.c/nobootmem.c · e782ab42
      Authored by Yinghai Lu
      Now that bootmem.c and nobootmem.c are separate, it's cleaner to
      define contig_page_data in each file than in page_alloc.c with #ifdef.
      Move it.
      
      This patch doesn't introduce any behavior change.
      
      -v2: According to Andrew, fixed the struct layout.
      -tj: Updated commit description.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      e782ab42
    • bootmem: Separate out CONFIG_NO_BOOTMEM code into nobootmem.c · 09325873
      Authored by Yinghai Lu
      mm/bootmem.c contained code paths for both bootmem and no bootmem
      configurations.  They implement about the same set of APIs in
      different ways and, as a result, bootmem.c contains a massive amount of
      #ifdef CONFIG_NO_BOOTMEM.
      
      Separate out CONFIG_NO_BOOTMEM code into mm/nobootmem.c.  As the
      common part is relatively small, duplicate them in nobootmem.c instead
      of creating a common file or ifdef'ing in bootmem.c.
      
      The following are duplicated.
      
      * {min|max}_low_pfn, max_pfn, saved_max_pfn
      * free_bootmem_late()
      * ___alloc_bootmem()
      * __alloc_bootmem_low()
      
      The following are applicable only to nobootmem and are moved verbatim.
      
      * __free_pages_memory()
      * free_all_memory_core_early()
      
      The following are not applicable to nobootmem and are omitted in
      nobootmem.c.
      
      * reserve_bootmem_node()
      * reserve_bootmem()
      
      The remaining functions have their bodies split according to CONFIG_NO_BOOTMEM.
      
      Makefile is updated so that only either bootmem.c or nobootmem.c is
      built according to CONFIG_NO_BOOTMEM.
      
      This patch doesn't introduce any behavior change.
      
      -tj: Rewrote commit description.
      Suggested-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      09325873
  6. 22 February 2011: 4 commits
  7. 21 February 2011: 1 commit
  8. 17 February 2011: 23 commits
    • x86-64, NUMA: Put dummy_numa_init() in the init section · 6d496f9f
      Authored by Yinghai Lu
      dummy_numa_init() is used only during system boot.  Put it in .init
      like other NUMA init functions.
      
      - tj: Description update.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      6d496f9f
    • x86-64, NUMA: Don't call __pa() with invalid address in numa_reset_distance() · 2ca230ba
      Authored by Yinghai Lu
      Do not call __pa(numa_distance) if it was not allocated before.
      Calling it with an invalid address triggers VIRTUAL_BUG_ON() in
      __phys_addr() when CONFIG_DEBUG_VIRTUAL is enabled.
      
      Also reported by Ingo.
      
       http://thread.gmane.org/gmane.linux.kernel/1101306/focus=1101785
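
      A minimal user-space sketch of the guard pattern (the real code uses
      __pa() and memblock; fake_pa() and the allocator calls below are
      illustrative stand-ins):

        #include <stdint.h>
        #include <stdlib.h>

        static unsigned char *numa_distance;      /* NULL until allocated */

        /* Illustrative stand-in for __pa(): only meaningful for pointers
         * that were actually allocated. */
        static uintptr_t fake_pa(void *addr)
        {
                return (uintptr_t)addr;
        }

        /* Guarded reset: translate and free only if the table exists. */
        static void reset_distance_sketch(void)
        {
                if (numa_distance) {
                        (void)fake_pa(numa_distance);
                        free(numa_distance);
                        numa_distance = NULL;
                }
        }

        int main(void)
        {
                reset_distance_sketch();    /* safe even before allocation */
                numa_distance = malloc(16);
                reset_distance_sketch();
                return 0;
        }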
      
      - v2: Change to check existing path as tj requested.
      - tj: Description update.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Ingo Molnar <mingo@elte.hu>
      2ca230ba
    • x86-64, NUMA: Unify emulated distance mapping · e23bba60
      Authored by Tejun Heo
      NUMA emulation needs to update node distance information.  It did it
      by remapping apicid to PXM mapping, even when amdtopology is being
      used.  There is no reason to go through such convolution.  The generic
      code has all the information necessary to transform the distance table
      to the emulated nid space.
      
      Implement generic distance table transformation in numa_emulation()
      and drop private implementations in srat_64 and amdtopology_64.  This
      makes find_node_by_addr() and fake_physnodes() and related functions
      unnecessary, drop them.
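
      A small stand-alone sketch of the transformation described above
      (array contents are made up for illustration): the emulated distance
      between two fake nodes is simply the physical distance between the
      physical nodes backing them.

        #include <stdio.h>

        #define NR_PHYS 2
        #define NR_EMU  4

        static const int phys_dist[NR_PHYS][NR_PHYS] = {
                { 10, 20 },
                { 20, 10 },
        };
        static const int emu_nid_to_phys[NR_EMU] = { 0, 0, 1, 1 };

        /* Emulated distance = distance between the backing phys nodes. */
        static int emu_distance(int from, int to)
        {
                return phys_dist[emu_nid_to_phys[from]][emu_nid_to_phys[to]];
        }

        int main(void)
        {
                printf("dist(1, 2) = %d\n", emu_distance(1, 2));   /* 20 */
                printf("dist(2, 3) = %d\n", emu_distance(2, 3));   /* 10 */
                return 0;
        }
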
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      e23bba60
    • x86-64, NUMA: Unify emulated apicid -> node mapping transformation · 6b78cb54
      Authored by Tejun Heo
      NUMA emulation changes node mappings and thus apicid -> node mapping
      needs to be updated accordingly.  srat_64 and amdtopology_64 did this
      separately; however, all the necessary information is the mapping from
      emulated nodes to physical nodes which is available in
      emu_nid_to_phys[].
      
      Implement common __apicid_to_node[] transformation in numa_emulation()
      and drop duplicate implementations.
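
      A simplified stand-alone sketch of the remapping (the real kernel
      table is __apicid_to_node[]; the data and the pick-first policy here
      are illustrative only): each apicid that currently names a physical
      node is rewritten to an emulated node backed by that physical node.

        #include <stdio.h>

        #define NR_APICIDS 4
        #define NR_EMU     4

        static int apicid_to_node[NR_APICIDS] = { 0, 0, 1, 1 };
        static const int emu_nid_to_phys[NR_EMU] = { 0, 0, 1, 1 };

        static void remap_apicids(void)
        {
                for (int i = 0; i < NR_APICIDS; i++)
                        for (int j = 0; j < NR_EMU; j++)
                                if (emu_nid_to_phys[j] == apicid_to_node[i]) {
                                        apicid_to_node[i] = j;
                                        break;
                                }
        }

        int main(void)
        {
                remap_apicids();
                for (int i = 0; i < NR_APICIDS; i++)
                        printf("apicid %d -> emulated node %d\n",
                               i, apicid_to_node[i]);
                return 0;
        }
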
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      6b78cb54
    • x86-64, NUMA: Emulate directly from numa_meminfo · 1cca5340
      Authored by Tejun Heo
      NUMA emulation built a physnodes[] array, which could only represent
      configurations derived from the physical meminfo, and then laid out
      the emulated nodes from it.  There's no reason for this extra level of
      indirection.  Update emulation functions so that they operate directly
      on numa_meminfo.  This simplifies the code and makes emulation layout
      behave better with interleaved physical nodes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      1cca5340
    • x86-64, NUMA: Wrap node ID during emulation · 775ee85d
      Authored by Tejun Heo
      Both emulation layout functions - split_nodes[_size]_interleave() -
      didn't wrap the emulated nid while laying out the fake nodes and tried
      to avoid iterating past the specified number of nodes, which is fragile.
      
      Now that the emulation code generates numa_meminfo, the node memblks
      don't need to be consecutive and emulated node IDs can simply wrap.
      This makes the code more robust and is necessary for updates to better
      handle the cases where the physical nodes are interleaved.
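
      A trivial sketch of the wrapping behaviour (chunk counts are made up):
      once the requested number of fake nodes is reached, the nid simply
      wraps back to 0 instead of the layout trying to stop early.

        #include <stdio.h>

        int main(void)
        {
                int nr_fake_nodes = 3;
                int nid = 0;

                for (int chunk = 0; chunk < 8; chunk++) {
                        printf("chunk %d -> fake node %d\n", chunk, nid);
                        nid = (nid + 1) % nr_fake_nodes;   /* wrap around */
                }
                return 0;
        }
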
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      775ee85d
    • x86-64, NUMA: Make emulation code build numa_meminfo and share the registration path · c88aea7a
      Authored by Tejun Heo
      NUMA emulation code built nodes[] array and had its own registration
      path to set up the emulated nodes.  Update it such that it generates
      emulated numa_meminfo and returns control to initmem_init() and shares
      the same registration path with non-emulated cases.
      
      Because {acpi|amd}_fake_nodes() expect nodes[] parameter,
      fake_physnodes() now generates nodes[] from numa_meminfo.  This will
      go away with further updates.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      c88aea7a
    • x86-64, NUMA: Build and use direct emulated nid -> phys nid mapping · 9d073cae
      Authored by Tejun Heo
      NUMA emulation copied physical NUMA configuration into physnodes[] and
      used it to reverse-map emulated nodes to physical nodes, which is
      unnecessarily convoluted.  Build emu_nid_to_phys[] array to map
      emulated nids directly to the matching physical nids and use it in
      numa_add_cpu().
      
      physnodes[] will be removed with further patches.
      
      - v2: Fixed a build failure with CONFIG_DEBUG_PER_CPU_MAPS caused by a
        missing local variable definition.  Reported by Ingo.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      9d073cae
    • x86-64, NUMA: Trivial changes to prepare for emulation updates · d9c515ea
      Authored by Tejun Heo
      * Separate out numa_add_memblk_to() from numa_add_memblk() so that
        different numa_meminfo can be used.
      
      * Rename cmdline to emu_cmdline.
      
      * Drop @start/last_pfn from numa_emulation() and use max_pfn directly.
      
      This patch doesn't introduce any behavior change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      d9c515ea
    • x86-64, NUMA: Implement generic node distance handling · ac7136b6
      Authored by Tejun Heo
      Node distance either used direct node comparison, ACPI PXM comparison
      or ACPI SLIT table lookup.  This patch implements generic node
      distance handling.  NUMA init methods can call numa_set_distance() to
      set distance between nodes and the common __node_distance()
      implementation will report the set distance.
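
      A minimal sketch of the pair of operations described above (the table
      size, fallback values and helper names here are illustrative; the
      kernel interfaces named in the message are numa_set_distance() and
      __node_distance()):

        #include <stdio.h>

        #define MAX_NODES       8
        #define LOCAL_DISTANCE  10
        #define REMOTE_DISTANCE 20

        static int distance[MAX_NODES][MAX_NODES];

        /* Called by an init method to record a pairwise distance. */
        static void set_distance(int from, int to, int d)
        {
                if (from < MAX_NODES && to < MAX_NODES)
                        distance[from][to] = d;
        }

        /* Common lookup with a local/remote fallback when nothing is set. */
        static int node_distance(int from, int to)
        {
                if (from >= MAX_NODES || to >= MAX_NODES || !distance[from][to])
                        return from == to ? LOCAL_DISTANCE : REMOTE_DISTANCE;
                return distance[from][to];
        }

        int main(void)
        {
                set_distance(0, 1, 21);
                printf("%d %d %d\n", node_distance(0, 0),
                       node_distance(0, 1), node_distance(0, 5));
                return 0;
        }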
      
      Due to the way NUMA emulation is implemented, the generic node
      distance handling is used only when emulation is not used.  Later
      patches will update NUMA emulation to use the generic distance
      mechanism.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      ac7136b6
    • x86-64, NUMA: Kill mem_nodes_parsed · 4697bdcc
      Authored by Tejun Heo
      With all memory configuration information now carried in numa_meminfo,
      there's no need to keep mem_nodes_parsed separate.  Drop it and use
      numa_nodes_parsed for CPU / memory-less nodes.
      
      A new helper numa_nodemask_from_meminfo() is added to calculate
      memnode mask on the fly which is currently used to set
      node_possible_map.
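
      A small stand-alone sketch of the idea (struct and function names are
      illustrative, not the kernel's): walk the recorded memory blocks and
      set a bit for every node that owns at least one of them.

        #include <stdio.h>

        struct memblk { unsigned long start, end; int nid; };

        static unsigned long nodemask_from_memblks(const struct memblk *blk,
                                                   int n)
        {
                unsigned long mask = 0;

                for (int i = 0; i < n; i++)
                        mask |= 1UL << blk[i].nid;
                return mask;
        }

        int main(void)
        {
                struct memblk blks[] = {
                        { 0x00000000UL, 0x10000000UL, 0 },
                        { 0x10000000UL, 0x20000000UL, 1 },
                };

                printf("mask = 0x%lx\n", nodemask_from_memblks(blks, 2));
                return 0;
        }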
      
      This simplifies NUMA init methods a bit and removes a source of
      possible inconsistencies.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      4697bdcc
    • x86-64, NUMA: Rename cpu_nodes_parsed to numa_nodes_parsed · 92d4a437
      Authored by Tejun Heo
      It's no longer necessary to keep both cpu_nodes_parsed and
      mem_nodes_parsed.  In preparation for merge, rename cpu_nodes_parsed
      to numa_nodes_parsed.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      92d4a437
    • x86-64, NUMA: Kill numa_nodes[] · 91556237
      Authored by Tejun Heo
      numa_nodes[] doesn't carry any information which isn't present in
      numa_meminfo.  Each entry is simply min/max range of all the memblks
      for the node.  This is not only redundant but also inaccurate when
      memblks for different nodes interleave - for example,
      find_node_by_addr() can return the wrong nodeid.
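
      A worked sketch of the inaccuracy (addresses are made up): with node 0
      owning [0, 1G) and [2G, 3G) and node 1 owning [1G, 2G), node 0's
      min/max summary covers [0, 3G) and therefore also node 1's memory, so
      a range-based lookup can answer wrongly while a memblk-based lookup
      cannot.

        #include <stdio.h>

        struct memblk { unsigned long start, end; int nid; };

        /* Interleaved layout: node 0 = [0,1G) and [2G,3G), node 1 = [1G,2G). */
        static const struct memblk blks[] = {
                { 0x00000000UL, 0x40000000UL, 0 },
                { 0x40000000UL, 0x80000000UL, 1 },
                { 0x80000000UL, 0xc0000000UL, 0 },
        };

        /* Correct lookup: consult the memblk list itself. */
        static int node_by_memblk(unsigned long addr)
        {
                for (unsigned int i = 0; i < sizeof(blks) / sizeof(blks[0]); i++)
                        if (addr >= blks[i].start && addr < blks[i].end)
                                return blks[i].nid;
                return -1;
        }

        int main(void)
        {
                /* Node 0's min/max range [0,3G) would claim this address too. */
                printf("0x50000000 is on node %d\n",
                       node_by_memblk(0x50000000UL));   /* 1 */
                return 0;
        }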
      
      Kill numa_nodes[] and always use numa_meminfo instead.
      
      * nodes_cover_memory() is renamed to numa_meminfo_cover_memory() and
        now operates on numa_meminfo and returns bool.
      
      * setup_node_bootmem() needs min/max range.  Compute the range on the
        fly.  setup_node_bootmem() invocation is restructured to use outer
        loop instead of hardcoding the double invocations.
      
      * find_node_by_addr() now operates on numa_meminfo.
      
      * setup_physnodes() builds physnodes[] from memblks.  This will go
        away when emulation code is updated to use struct numa_meminfo.
      
      This patch also makes the following misc changes.
      
      * Clearing of nodes_add[] is converted to memset().
      
      * numa_add_memblk() in amd_numa_init() is moved down a bit for
        consistency.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      91556237
    • x86-64, NUMA: Add common find_node_by_addr() · a844ef46
      Authored by Tejun Heo
      srat_64.c and amdtopology_64.c had their own versions of
      find_node_by_addr() which were basically the same.  Add common one in
      numa_64.c and remove the duplicates.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      a844ef46
    • x86-64, NUMA: consolidate and improve memblk sanity checks · 56e827fb
      Authored by Tejun Heo
      memblk sanity check was scattered around and incomplete.  Consolidate
      and improve.
      
      * Conflict detection and cutoff_node() logic are moved to
        numa_cleanup_meminfo().
      
      * numa_cleanup_meminfo() clears the unused memblks before returning.
      
      * Check and warn about invalid input parameters in numa_add_memblk().
      
      * Check that the maximum number of memblks isn't exceeded in
        numa_add_memblk().
      
      * numa_cleanup_meminfo() is now called before numa_emulation() so that
        the emulation code also uses the cleaned up version.
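
      A compact sketch of the kind of checks listed above (the limits and
      struct layout are illustrative, not the kernel's):

        #include <stdio.h>

        #define MAX_MEMBLKS 8

        struct memblk { unsigned long start, end; int nid; };

        static struct memblk memblks[MAX_MEMBLKS];
        static int nr_memblks;

        /* Validate the input before recording a memory block. */
        static int add_memblk(int nid, unsigned long start, unsigned long end)
        {
                if (nid < 0 || start >= end) {
                        fprintf(stderr, "invalid memblk [%lx,%lx) nid %d\n",
                                start, end, nid);
                        return -1;
                }
                if (nr_memblks >= MAX_MEMBLKS) {
                        fprintf(stderr, "too many memblks\n");
                        return -1;
                }
                memblks[nr_memblks++] = (struct memblk){ start, end, nid };
                return 0;
        }

        int main(void)
        {
                add_memblk(0, 0x1000, 0x2000);   /* accepted */
                add_memblk(1, 0x2000, 0x2000);   /* rejected: empty range */
                return 0;
        }
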
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      56e827fb
    • x86-64, NUMA: make numa_cleanup_meminfo() prettier · 2e756be4
      Authored by Tejun Heo
      * Factor out numa_remove_memblk_from().
      
      * Hole detection doesn't need separate start/end.  Calculate start/end
        once.
      
      * Relocate comment.
      
      * Define iterators at the top and remove unnecessary prefix
        increments.
      
      This prepares for further improvements to the function.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      2e756be4
    • x86-64, NUMA: Separate out numa_cleanup_meminfo() · f9c60251
      Authored by Tejun Heo
      Separate out numa_cleanup_meminfo() from numa_register_memblks().
      node_possible_map initialization is moved to the top of the split
      numa_register_memblks().
      
      This patch doesn't cause behavior change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      f9c60251
    • x86-64, NUMA: Introduce struct numa_meminfo · 97e7b78d
      Authored by Tejun Heo
      Arrays for memblks and nodeids and their length lived in separate
      variables, making things unnecessarily cumbersome.  Introduce struct
      numa_meminfo which contains all memory configuration info.  This patch
      doesn't cause any behavior change.
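
      A minimal sketch of the shape of such a structure (field names and the
      block limit below are illustrative, not necessarily the kernel's exact
      layout): the block array and its count travel together.

        #include <stdint.h>

        #define NR_MEMBLKS 16                    /* illustrative limit */

        struct meminfo_blk {
                uint64_t start;
                uint64_t end;
                int      nid;
        };

        struct meminfo {
                int                nr_blks;
                struct meminfo_blk blk[NR_MEMBLKS];
        };

        int main(void)
        {
                struct meminfo mi = { 0 };

                /* Record one block; the count stays with the data. */
                mi.blk[mi.nr_blks++] = (struct meminfo_blk){ 0x0, 0x1000, 0 };
                return 0;
        }
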
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      97e7b78d
    • x86-64, NUMA: Remove %NULL @nodeids handling from compute_hash_shift() · 8968dab8
      Authored by Tejun Heo
      numa_emulation() called compute_hash_shift() with %NULL @nodeids which
      meant identity mapping between index and nodeid.  Make
      numa_emulation() build identity array and drop %NULL @nodeids handling
      from populate_memnodemap() and thus from compute_hash_shift().  This
      is to prepare for transition to using memblks instead.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      8968dab8
    • x86-64, NUMA: Kill {acpi|amd|dummy}_scan_nodes() · 5d371b08
      Authored by Tejun Heo
      They are empty now.  Kill them.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      5d371b08
    • x86-64, NUMA: Unify the rest of memblk registration · fd0435d8
      Authored by Tejun Heo
      Move the remaining memblk registration logic from acpi_scan_nodes() to
      numa_register_memblks() and initmem_init().
      
      This applies nodes_cover_memory() sanity check, memory node sorting
      and node_online() checking, which were only applied to acpi, to all
      init methods.
      
      As all memblk registration is moved to common code, active range
      clearing is moved to initmem_init() too and removed from bad_srat().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      fd0435d8
    • x86-64, NUMA: Unify use of memblk in all init methods · 43a662f0
      Authored by Tejun Heo
      Make both amd and dummy use numa_add_memblk() to describe the detected
      memory blocks.  This allows initmem_init() to call
      numa_register_memblks() regardless of the init method in use.  Drop the
      custom memory registration code from amd and dummy.
      
      After this change, memblk merge/cleanup in numa_register_memblks() is
      applied to all init methods.
      
      As this makes compute_hash_shift() and numa_register_memblks() used
      only inside numa_64.c, make them static.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      43a662f0
    • x86-64, NUMA: Factor out memblk handling into numa_{add|register}_memblk() · ef396ec9
      Authored by Tejun Heo
      Factor out memblk handling from srat_64.c into two functions in
      numa_64.c.  This patch doesn't introduce any behavior change.  The
      next patch will make all init methods use these functions.
      
      - v2: Fixed build failure on 32bit due to misplaced NR_NODE_MEMBLKS.
            Reported by Ingo.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      ef396ec9
  9. 16 February 2011: 3 commits
    • x86-64, NUMA: Kill {acpi|amd}_get_nodes() · 19095548
      Authored by Tejun Heo
      With common numa_nodes[], common code in numa_64.c can access it
      directly.  Copy directly and kill {acpi|amd}_get_nodes().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      19095548
    • x86-64, NUMA: Use common numa_nodes[] · 206e4208
      Authored by Tejun Heo
      ACPI and amd are using separate nodes[] arrays.  Add numa_nodes[] and
      use it in all NUMA init methods.  cutoff_node() cleanup is moved
      from srat_64.c to numa_64.c and applied in initmem_init() regardless
      of init methods.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      206e4208
    • x86-64, NUMA: Move apicid to numa mapping initialization from amd_scan_nodes() to amd_numa_init() · 45fe6c78
      Authored by Tejun Heo
      This brings amd initialization behavior closer to that of acpi.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      45fe6c78