1. 15 Nov 2005 (3 commits)
    • [PATCH] x86_64: Optimize NUMA node hash function · 529a3404
      Authored by Eric Dumazet
      Compute the highest possible value for memnode_shift, in order to reduce
      the footprint of memnodemap[] to the minimum, thus making all of its users
      (phys_to_nid(), kfree()) more cache friendly. A small sketch of the shift
      computation follows at the end of this entry.
      
      Before the patch:
      
       Node 0 MemBase 0000000000000000 Limit 00000001ffffffff
       Node 1 MemBase 0000000200000000 Limit 00000003ffffffff
       Using 23 for the hash shift. Max adder is 3ffffffff
      
      After the patch:
      
       Node 0 MemBase 0000000000000000 Limit 00000001ffffffff
       Node 1 MemBase 0000000200000000 Limit 00000003ffffffff
       Using 33 for the hash shift.
      
      In this case, only 2 bytes of memnodemap[] are used, instead of 2048.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
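
      A minimal, self-contained sketch of the idea (a userspace model, not the
      kernel's actual code; the node layout is the one quoted above): since
      phys_to_nid() maps an address to memnodemap[addr >> memnode_shift], the
      highest usable shift is the number of trailing zero bits shared by all
      node boundaries.

        #include <stdio.h>

        struct node_range { unsigned long long start, end; /* end exclusive */ };

        /* Highest shift for which every node boundary is aligned to
         * (1ULL << shift); with aligned boundaries, each memnodemap[]
         * bucket falls entirely inside one node. */
        static int highest_node_shift(const struct node_range *nodes, int n)
        {
            unsigned long long mask = 0;
            for (int i = 0; i < n; i++)
                mask |= nodes[i].start | nodes[i].end;
            if (mask == 0)
                return 63;                    /* degenerate: single node at 0 */
            return __builtin_ctzll(mask);     /* common trailing zeros */
        }

        int main(void)
        {
            struct node_range nodes[] = {
                { 0x000000000ULL, 0x200000000ULL },  /* Node 0, as above */
                { 0x200000000ULL, 0x400000000ULL },  /* Node 1, as above */
            };
            /* Prints 33, matching "Using 33 for the hash shift" above;
             * memnodemap[] then needs only one entry per node. */
            printf("shift = %d\n", highest_node_shift(nodes, 2));
            return 0;
        }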
    • [PATCH] x86_64: Speed up numa_node_id by putting it directly into the PDA · 69d81fcd
      Authored by Andi Kleen
      Read the node number directly from the PDA instead of going from the CPU
      number through a mapping array. The node number is now often used in
      fast paths.
      
      This also adds a generic numa_node_id to all the topology includes. A
      small sketch of the difference follows at the end of this entry.
      
      Suggested by Eric Dumazet
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
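
      A minimal userspace sketch of the before/after difference (the struct
      and helpers here are illustrative stand-ins, not the real x86-64 PDA
      code):

        #include <stdio.h>

        #define NR_CPUS 4
        static int cpu_to_node[NR_CPUS] = { 0, 0, 1, 1 };

        struct pda {                   /* per-CPU data area, simplified */
            int cpunumber;
            int nodenumber;            /* cached once at CPU bring-up */
        };
        static struct pda cpu_pda[NR_CPUS];

        static int current_cpu = 2;    /* pretend we run on CPU 2 */
        static struct pda *read_pda(void) { return &cpu_pda[current_cpu]; }

        /* Before: go through the CPU number and a mapping array. */
        static int numa_node_id_via_table(void) { return cpu_to_node[current_cpu]; }

        /* After: one load straight from the per-CPU data area. */
        static int numa_node_id_via_pda(void) { return read_pda()->nodenumber; }

        int main(void)
        {
            for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                cpu_pda[cpu].cpunumber  = cpu;
                cpu_pda[cpu].nodenumber = cpu_to_node[cpu];
            }
            printf("table: node %d, pda: node %d\n",
                   numa_node_id_via_table(), numa_node_id_via_pda());
            return 0;
        }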
    • [PATCH] x86_64: Add 4GB DMA32 zone · a2f1b424
      Authored by Andi Kleen
      Add a new 4GB GFP_DMA32 zone between the GFP_DMA and GFP_NORMAL zones.
      
      As a bit of historical background: when the x86-64 port was originally
      designed, we discussed whether it should use a 16MB DMA zone like i386,
      a 4GB DMA zone like IA64, or both. Both were ruled out at the time
      because this was in early 2.4, when the VM was still quite shaky and
      had bad trouble dealing with even one DMA zone. We settled on the 16MB
      DMA zone mainly because we worried about older sound cards and the
      floppy.
      
      But this has caused problems ever since, because device drivers had
      trouble getting enough DMA-able memory. These days the VM works much
      better, and the wide use of NUMA has proven that it can deal with many
      zones successfully.
      
      So this patch adds both zones.
      
      This helps drivers that need a lot of memory below 4GB because their
      hardware cannot address more (graphics drivers, both proprietary and
      free, video frame buffer drivers, sound drivers, etc.). Previously they
      could only use the IOMMU or the 16MB GFP_DMA zone, which was not enough
      memory.
      
      Another common problem is hardware that can address the full range
      above 4GB for data, but not for some control structures in memory
      (like transmit rings or other metadata). Such drivers tended to
      allocate that memory from the 16MB GFP_DMA zone or the IOMMU/swiotlb
      via pci_alloc_consistent, but that can tie up a lot of precious 16MB
      GFP_DMA/IOMMU/swiotlb memory (even on AMD systems the IOMMU tends to be
      quite small), especially if you have many devices. With the new zone,
      pci_alloc_consistent can simply put this data in memory below 4GB,
      which works better.
      
      One remaining argument was whether the zone should be 4GB or 2GB. The
      main motivation for 2GB would be an unnamed but not-so-unpopular
      hardware RAID controller (mostly found in older machines from a
      particular four-letter company) that has a strange 2GB restriction in
      firmware. But that one works fine with the swiotlb/IOMMU anyway, so it
      doesn't really need GFP_DMA32. I chose 4GB to be compatible with IA64
      and because it seems to be the most common restriction.
      
      The new zone is so far added only for x86-64.
      
      For other architectures that don't set up this new zone, nothing
      changes. Architectures can set a compatibility define,
      CONFIG_DMA_IS_DMA32, in Kconfig that defines GFP_DMA32 as GFP_DMA.
      Otherwise it's a no-op, because on 32-bit architectures it is normally
      not needed: GFP_NORMAL (=0) is already DMA-able enough.
      
      One problem remains: GFP_DMA means different things on different
      architectures. E.g. some drivers used to do "#ifdef ia64, use GFP_DMA
      (trusting it to be 4GB); #elif __x86_64__, use other hacks like the
      swiotlb (because 16MB is not enough)". This was quite ugly and is now
      obsolete. These drivers should now be converted to use GFP_DMA32
      unconditionally; I haven't done this yet. Better still, they should
      only use pci_alloc_consistent/dma_alloc_coherent, which will use
      GFP_DMA32 transparently. A small sketch of the zone fallback follows
      at the end of this entry.
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
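
      A toy userspace model of the new zone ordering (the flag values and the
      fallback logic are illustrative; the kernel's zonelists are more
      involved):

        #include <stdio.h>

        enum zone { ZONE_DMA, ZONE_DMA32, ZONE_NORMAL };

        /* Stand-in flags, not the kernel's real GFP bit values. */
        #define MY_GFP_DMA   0x01u
        #define MY_GFP_DMA32 0x02u

        static enum zone pick_zone(unsigned int gfp_mask)
        {
            if (gfp_mask & MY_GFP_DMA)   return ZONE_DMA;    /* below 16MB */
            if (gfp_mask & MY_GFP_DMA32) return ZONE_DMA32;  /* below 4GB  */
            return ZONE_NORMAL;                              /* no limit   */
        }

        int main(void)
        {
            static const char *names[] = { "DMA (16MB)", "DMA32 (4GB)", "NORMAL" };
            printf("%s\n", names[pick_zone(0)]);             /* NORMAL      */
            printf("%s\n", names[pick_zone(MY_GFP_DMA32)]);  /* DMA32 (4GB) */
            printf("%s\n", names[pick_zone(MY_GFP_DMA)]);    /* DMA (16MB)  */
            return 0;
        }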
2. 01 Oct 2005 (2 commits)
    • [PATCH] x86_64 early numa init fix · 85cc5135
      Authored by Ravikiran G Thirumalai
      The tests Alok carried out on Petr's box confirmed that cpu_to_node[BP]
      is not set up early enough by numa_init_array() due to the x86_64
      changes in 2.6.14-rc*, and is unfortunately set wrongly by the
      workaround code in numa_init_array(). cpu_to_node[0] gets set to 1
      early and only later gets set properly to 0 during identify_cpu(), when
      all CPUs are brought up, confusing the NUMA slab in the process. A toy
      model of the failure mode follows at the end of this entry.
      
      Here is a quick fix for this. The right fix, obviously, is to have
      cpu_to_node[BP] set up early for numa_init_array(). The following patch
      fixes the problem for now, and the code can stay even once
      cpu_to_node[BP] gets fixed early and correctly.
      
      Thanks to Petr for access to his box.
      
      Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
      Signed-off-by: Alok N Kataria <alokk@calsoftinc.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
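
      A toy model of the failure mode described above (purely illustrative;
      the round-robin start value is an assumption, chosen so that the bug
      shows):

        #include <stdio.h>

        #define NR_CPUS   4
        #define NUM_NODES 2
        static int cpu_to_node[NR_CPUS];

        /* Early round-robin fallback: hands the boot CPU (CPU 0) node 1
         * here; the correct node 0 only arrives later via identify_cpu(). */
        static void numa_init_array_model(void)
        {
            int rr = 1;   /* assume the cursor doesn't start at node 0 */
            for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                cpu_to_node[cpu] = rr;
                rr = (rr + 1) % NUM_NODES;
            }
        }

        int main(void)
        {
            numa_init_array_model();
            /* Anything (like the NUMA slab) consulting cpu_to_node[0]
             * before identify_cpu() runs sees the wrong node; the fix is
             * to pin the BSP's entry early. */
            printf("cpu_to_node[0] = %d (expected 0)\n", cpu_to_node[0]);
            return 0;
        }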
    • [PATCH] x86_64: fix the BP node_to_cpumask · e6a045a5
      Authored by Ravikiran G Thirumalai
      Fix the BP node_to_cpumask. 2.6.14-rc* broke the boot CPU bit because
      cpu_to_node(0) is no longer set up early enough for numa_init_array().
      cpu_to_node[] is set up much later, in srat_detect_node(), on ACPI
      SRAT-based EM64T machines. This seems like a problem on AMD machines
      too, though it was only tested on EM64T.
      /sys/devices/system/node/node0/cpumap shows up sanely after this patch;
      a small sketch of how the mask is derived follows at the end of this
      entry.
      
      Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
      Signed-off-by: Shai Fultheim <shai@scalex86.org>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
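
      A small sketch of how node_to_cpumask[] is derived from cpu_to_node[]
      (illustrative userspace code; a plain unsigned long stands in for the
      kernel's cpumask_t):

        #include <stdio.h>

        #define NR_CPUS   4
        #define NUM_NODES 2

        int main(void)
        {
            /* With the BP's entry fixed, CPU 0 is correctly on node 0. */
            int cpu_to_node[NR_CPUS] = { 0, 0, 1, 1 };
            unsigned long node_to_cpumask[NUM_NODES] = { 0, 0 };

            /* Each CPU's bit lands in its node's mask; if cpu_to_node[0]
             * were wrong early, node0's cpumap would miss the boot CPU. */
            for (int cpu = 0; cpu < NR_CPUS; cpu++)
                node_to_cpumask[cpu_to_node[cpu]] |= 1UL << cpu;

            for (int node = 0; node < NUM_NODES; node++)
                printf("node%d cpumap: %lx\n", node, node_to_cpumask[node]);
            return 0;
        }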
3. 13 Sep 2005 (2 commits)
4. 08 Sep 2005 (1 commit)
5. 27 Aug 2005 (1 commit)
    • [PATCH] x86_64: Tell VM about holes in nodes · 485761bd
      Authored by Andi Kleen
      Some nodes can have large holes on x86-64.
      
      This fixes problems with the VM allowing too many dirty pages because
      it overestimates the amount of available RAM in a node. In extreme
      cases you can end up with all RAM filled with dirty pages, which can
      lead to deadlocks and other nasty behaviour.
      
      This patch just tells the VM about the known holes from e820. Reserved
      memory (like the kernel text or mem_map) is still not taken into
      account, but that should now be only a few percent of error. A small
      sketch of the hole accounting follows at the end of this entry.
      
      A small detail: the flat setup now uses the NUMA free_area_init_node()
      too, because it offers more flexibility.
      
      (akpm: lotsa thanks to Martin for working this problem out)
      
      Cc: Martin Bligh <mbligh@mbligh.org>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
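
      A small sketch of the accounting idea (illustrative; the memory layout
      is made up, and the kernel instead feeds per-zone hole sizes into
      free_area_init_node()): a node's present RAM is its spanned range minus
      the e820 holes inside it, and the VM's dirty-page limits should scale
      with the former, not the latter.

        #include <stdio.h>

        struct e820_entry { unsigned long long start, end; int is_ram; };

        /* Sum the RAM portions of [start, end) according to the map. */
        static unsigned long long ram_in_range(const struct e820_entry *map,
                                               int n,
                                               unsigned long long start,
                                               unsigned long long end)
        {
            unsigned long long ram = 0;
            for (int i = 0; i < n; i++) {
                if (!map[i].is_ram)
                    continue;
                unsigned long long lo = map[i].start > start ? map[i].start : start;
                unsigned long long hi = map[i].end   < end   ? map[i].end   : end;
                if (lo < hi)
                    ram += hi - lo;
            }
            return ram;
        }

        int main(void)
        {
            /* Node spanning 0..4GB with a 1GB hole at 3GB (made-up layout). */
            struct e820_entry map[] = {
                { 0x000000000ULL, 0x0C0000000ULL, 1 },   /* 3GB RAM  */
                { 0x0C0000000ULL, 0x100000000ULL, 0 },   /* 1GB hole */
            };
            unsigned long long spanned = 0x100000000ULL;
            unsigned long long present = ram_in_range(map, 2, 0, spanned);
            printf("spanned=%lluMB present=%lluMB\n",
                   spanned >> 20, present >> 20);
            return 0;
        }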
6. 29 Jul 2005 (1 commit)
7. 26 Jun 2005 (1 commit)
8. 24 Jun 2005 (1 commit)
9. 17 Apr 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!