    x86, NUMA: Trim numa meminfo with max_pfn in a separate loop · e5a10c1b
Committed by Yinghai Lu
While testing tj's 32bit NUMA unification code, I found one system with more than 64GiB of memory that failed to use NUMA.  It turns out we do not trim numa meminfo correctly against max_pfn when the start address of a node is higher than 64GiB.  The bug fix has made it to the tip tree.
    
This patch moves the checking and trimming into a separate loop, so the
following merge loops no longer need to compare against low/high.  It
makes the code more readable.
    
The patch also makes the node merge printouts less strange.  On a 512GiB
NUMA system with a 32bit kernel, where max_pfn caps usable memory at
64GiB (0x1000000000):
    
    before:
    > NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
    > NUMA: Node 0 [0,80000000) + [100000000,1080000000) -> [0,1000000000)
    
    after:
    > NUMA: Node 0 [0,a0000) + [100000,80000000) -> [0,80000000)
    > NUMA: Node 0 [0,80000000) + [100000000,1000000000) -> [0,1000000000)

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
[Updated patch description and comment slightly.]
Signed-off-by: Tejun Heo <tj@kernel.org>