    x86, numa: Fix cpu to node mapping for sparse node ids · a387e95a
    Committed by David Rientjes
    NUMA boot code assumes that physical node ids start at 0, but the DIMMs
    that the apic id represents may not be reachable.  If this is the case,
    node 0 is never online and cpus never end up getting appropriately
    assigned to a node.  This causes the cpumask of all online nodes to be
    empty and machines crash with kernel code assuming online nodes have
    valid cpus.
    
    The fix is to appropriately map all the address ranges for physical nodes
    and ensure the cpu to node mapping function checks all possible nodes (up
    to MAX_NUMNODES) instead of simply checking nodes 0-N, where N is the
    number of physical nodes, for valid address ranges.
    
    This requires no longer "compressing" the address ranges of nodes in the
    physical node map into slots 0-N, but rather leaving the indices in
    physnodes[] to represent the actual node id of each physical node.
    Accordingly, amd_get_nodes() and acpi_get_nodes() no longer need to
    return the number of nodes to iterate through; all such iterations now
    run to MAX_NUMNODES.
    
    This change also passes the end address of system RAM (which may differ
    from normal operation if mem= is specified on the command line) before
    the physnodes[] array is populated.  ACPI-parsed nodes are truncated to
    fit within the address range that respects the mem= boundary; some
    physical nodes may even become unreachable in such cases.
    
    When NUMA emulation does succeed, any apicid to node mappings that exist
    for unreachable nodes are given default values so that proximity domains
    can still be assigned.  This is important for node_distance() to
    function as desired.
    Signed-off-by: David Rientjes <rientjes@google.com>
    LKML-Reference: <alpine.DEB.2.00.1012221702090.3701@chino.kir.corp.google.com>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>