1. 23 June 2006: 2 commits
  2. 28 March 2006: 1 commit
    • [PATCH] sched: new sched domain for representing multi-core · 1e9f28fa
      Committed by Siddha, Suresh B
      Add a new sched domain for representing multi-core processors with shared
      caches between cores.  Consider a dual-package system, each package
      containing two cores, with the last-level cache shared between the cores
      within a package.  If there are two runnable processes, with this patch
      those two processes will be scheduled on different packages.
      
      On such systems, with this patch we have observed an 8% performance
      improvement with the specJBB (2 warehouse) benchmark and a 35% improvement
      with CFP2000 rate (with 2 users).
      
      This new domain will come into play only on multi-core systems with shared
      caches.  On other systems, this sched domain will be removed by the domain
      degeneration code.  The new domain can also be used to implement a
      power-savings policy (see the OLS 2005 CMP kernel scheduler paper for more
      details; I will post another patch for the power-savings policy soon).
      
      Most of the arch/* file changes are for the cpu_coregroup_map() implementation.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
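      The patch itself is not shown here.  As a rough illustration of what a
      cpu_coregroup_map()-style helper computes on such a topology, the sketch
      below builds, for each CPU, the mask of CPUs in the same physical package;
      the phys_proc_id[] array, NR_CPUS value, and example topology are
      illustrative assumptions, not the kernel's actual implementation.

      /* Illustrative sketch: which CPUs share a package (and thus, on this
       * kind of system, a last-level cache) with a given CPU. */
      #include <stdio.h>

      #define NR_CPUS 4

      /* Assumed topology: two packages, two cores each. */
      static const int phys_proc_id[NR_CPUS] = { 0, 0, 1, 1 };

      static unsigned long cpu_coregroup_mask(int cpu)
      {
              unsigned long mask = 0;

              for (int i = 0; i < NR_CPUS; i++)
                      if (phys_proc_id[i] == phys_proc_id[cpu])
                              mask |= 1UL << i;
              return mask;
      }

      int main(void)
      {
              for (int cpu = 0; cpu < NR_CPUS; cpu++)
                      printf("cpu%d core-group mask: 0x%lx\n",
                             cpu, cpu_coregroup_mask(cpu));
              return 0;
      }

      With two runnable processes, the scheduler can use this grouping to place
      them in different packages so that each process gets a last-level cache to
      itself.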
  3. 23 March 2006: 3 commits
  4. 09 March 2006: 1 commit
  5. 25 February 2006: 1 commit
    • [PATCH] x86: fix broken SMP boot sequence · 2b932f6c
      Committed by James Bottomley
      Recent GDT changes broke the SMP boot sequence if the booting CPU is
      numbered anything other than zero.  There is also a subtle source of error
      in that the boot-time CPU now uses cpu_gdt_table (which is actually the GDT
      for booting CPUs in head.S).  This patch fixes both problems by allocating
      the GDT descriptors themselves from a per_cpu area and switching to them in
      cpu_init(), which means that cpu_gdt_table is once again used exclusively
      for booting CPUs.
      Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
      Cc: Zachary Amsden <zach@vmware.com>
      Cc: Matt Tolentino <metolent@snoqualmie.dp.intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
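      As a rough sketch of the idea behind the fix (not the actual patch), each
      CPU gets its own GDT descriptor in a per-CPU area and switches to it during
      cpu_init(), leaving cpu_gdt_table to the head.S boot path.  The struct
      layout, NR_CPUS bound, and function name below are illustrative
      assumptions, and lgdt of course only executes in kernel context.

      #define NR_CPUS 8

      /* i386 GDT pseudo-descriptor: 16-bit limit plus 32-bit linear address. */
      struct gdt_descr {
              unsigned short size;
              unsigned long  address;   /* 32-bit on i386 */
      } __attribute__((packed));

      /* One descriptor per CPU instead of every CPU reusing the boot GDT. */
      static struct gdt_descr cpu_gdt_descr[NR_CPUS];

      static void load_percpu_gdt(int cpu, void *gdt_table, unsigned int entries)
      {
              cpu_gdt_descr[cpu].size    = entries * 8 - 1;
              cpu_gdt_descr[cpu].address = (unsigned long)gdt_table;

              /* Switch this CPU away from the head.S boot GDT to its own copy. */
              __asm__ __volatile__("lgdt %0" : : "m" (cpu_gdt_descr[cpu]));
      }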
  6. 06 February 2006: 1 commit
  7. 12 January 2006: 1 commit
  8. 09 January 2006: 1 commit
  9. 07 January 2006: 2 commits
  10. 15 November 2005: 2 commits
  11. 07 November 2005: 1 commit
  12. 31 October 2005: 2 commits
  13. 11 September 2005: 1 commit
  14. 05 September 2005: 2 commits
    • [PATCH] i386: inline assembler: cleanup and encapsulate descriptor and task register management · 4d37e7e3
      Committed by Zachary Amsden
      i386 inline assembler cleanup.
      
      This change encapsulates descriptor and task register management.  It also
      makes it possible to improve assembler generation in two cases: savesegment
      may store the value in a register instead of a memory location, which allows
      GCC to optimize stack variables into registers, and MOV MEM, SEG is always a
      16-bit write to memory, making the cast in math-emu unnecessary.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
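      For illustration, encapsulated helpers in the spirit of this cleanup might
      look like the sketch below.  The "=rm" constraint lets GCC keep the saved
      selector in a register rather than forcing a stack slot; the exact macro
      bodies here are assumptions, not the kernel's definitions.

      /* Illustrative i386-style sketch, not the actual kernel macros. */
      #include <stdio.h>

      /* Copy a segment register into 'value'; with "=rm" the compiler may
       * pick a register destination, so the variable need not hit memory. */
      #define savesegment(seg, value) \
              __asm__ __volatile__("mov %%" #seg ", %0" : "=rm" (value))

      /* Store the task register selector; like MOV MEM, SEG this is a
       * 16-bit store, so a 16-bit destination type is the natural fit. */
      #define store_tr(value) \
              __asm__ __volatile__("str %0" : "=m" (value))

      int main(void)
      {
              unsigned short sel = 0;

              savesegment(ds, sel);   /* reading a segment register is unprivileged */
              printf("ds selector: 0x%hx\n", sel);
              return 0;
      }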
    • [PATCH] i386: inline asm cleanup · 4bb0d3ec
      Committed by Zachary Amsden
      i386 inline asm cleanup.  Use CR/DR accessor functions.
      
      Also, a potential bugfix: some CR accessors really should be volatile.
      Reads from CR0 (numeric state may change in an exception handler), writes
      to CR4 (flipping CR4.TSD), and reads from CR2 (page fault) must not be
      reordered with surrounding instructions.  I did not add a memory clobber to
      the CR3 / CR4 / CR0 updates, as it was not there to begin with, and in no
      case should kernel memory be clobbered, except when doing a TLB flush, which
      already has a memory clobber.
      
      I noticed that page invalidation does not have a memory clobber.  I can't
      find a bug resulting from that, but there is definitely the potential for
      one here:
      
      #define __flush_tlb_single(addr) \
      	__asm__ __volatile__("invlpg %0": :"m" (*(char *) addr))
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
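      As a minimal sketch of the two points above (volatile accessors, and a
      memory clobber where the TLB is actually flushed), the helpers below show
      the general shape; the names and bodies are illustrative assumptions that
      only run in kernel context, not the exact kernel code.

      /* volatile: CR2 changes on a page fault, so the read must not be
       * cached, eliminated, or reordered by the compiler. */
      static inline unsigned long read_cr2(void)
      {
              unsigned long val;

              __asm__ __volatile__("mov %%cr2, %0" : "=r" (val));
              return val;
      }

      /* volatile: writes to CR4 (e.g. flipping CR4.TSD) must stay in order. */
      static inline void write_cr4(unsigned long val)
      {
              __asm__ __volatile__("mov %0, %%cr4" : : "r" (val));
      }

      /* Single-page invalidation with an explicit "memory" clobber added,
       * closing the potential hole noted for __flush_tlb_single() above. */
      #define flush_tlb_one(addr) \
              __asm__ __volatile__("invlpg %0" : : "m" (*(char *)(addr)) : "memory")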
  15. 08 July 2005: 1 commit
  16. 26 June 2005: 3 commits
  17. 24 June 2005: 1 commit
  18. 21 May 2005: 1 commit
  19. 17 May 2005: 1 commit
  20. 17 April 2005: 3 commits