1. 19 May 2010, 1 commit
  2. 26 Mar 2010, 1 commit
  3. 20 Mar 2010, 1 commit
    • x86, amd: Restrict usage of c1e_idle() · 035a02c1
      Andreas Herrmann authored
      Currently c1e_idle returns true for all CPUs greater than or equal to
      family 0xf model 0x40. This covers too many CPUs.
      
      In the meantime, an erratum (#400) was filed for the underlying
      problem. This patch adds the logic to check whether erratum #400
      applies to a given CPU.
      Especially for CPUs where SMI/HW-triggered C1e is not supported,
      c1e_idle() doesn't need to be used. We can check this by looking at
      the respective OSVW bit for erratum #400.
      
      Cc: <stable@kernel.org> # .32.x .33.x
      Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      LKML-Reference: <20100319110922.GA19614@alberich.amd.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      035a02c1
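The OSVW lookup the commit describes can be sketched in plain C. This is an illustrative model, not the kernel's code: the logic (compare the erratum's OSVW id against the OSVW_ID_Length MSR, then test the matching bit in the status MSRs) follows AMD's OSVW scheme, but the function name and the assumption that erratum #400 maps to OSVW id 1 are ours.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed OSVW id for erratum #400; illustrative, not a kernel constant. */
#define OSVW_ID_ERRATUM_400 1u

/* Decide whether an erratum applies, given the OSVW_ID_Length MSR value
 * and the 64-bit status word already read from the OSVW status MSRs.
 * If the id is not covered by OSVW, conservatively assume affected. */
static bool osvw_erratum_applies(uint64_t osvw_len, uint64_t status_word,
                                 unsigned int osvw_id)
{
    if (osvw_id >= osvw_len)
        return true;   /* OSVW does not cover this id: assume affected */
    return (status_word >> (osvw_id % 64)) & 1;
}
```

With this check, CPUs whose OSVW status bit for #400 is clear can skip c1e_idle() instead of being caught by the broad family/model test.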
  4. 19 Mar 2010, 1 commit
  5. 17 Dec 2009, 1 commit
  6. 16 Dec 2009, 1 commit
  7. 10 Sep 2009, 1 commit
  8. 30 Jul 2009, 1 commit
  9. 10 Jul 2009, 1 commit
  10. 01 Jul 2009, 1 commit
  11. 29 May 2009, 1 commit
  12. 09 May 2009, 1 commit
  13. 24 Mar 2009, 2 commits
  14. 25 Feb 2009, 1 commit
  15. 22 Jan 2009, 1 commit
  16. 17 Dec 2008, 1 commit
  17. 23 Oct 2008, 2 commits
  18. 15 Oct 2008, 1 commit
  19. 10 Sep 2008, 1 commit
  20. 23 Jul 2008, 1 commit
    • x86: consolidate header guards · 77ef50a5
      Vegard Nossum authored
      This patch is the result of an automatic script that consolidates the
      format of all the headers in include/asm-x86/.
      
      The format:
      
      1. No leading underscore. Names with leading underscores are reserved.
      2. Pathname components are separated by two underscores. So we can
         distinguish between mm_types.h and mm/types.h.
      3. Everything except letters and numbers is turned into a single
         underscore.
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      77ef50a5
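The three naming rules above can be sketched as a small C helper. This is our own illustrative reimplementation, not the original script: take a path relative to include/, uppercase it, map `/` to a double underscore (rule 2) and every other non-alphanumeric character to a single underscore (rule 3); rule 1 is satisfied because no leading underscore is ever emitted.

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Build a header-guard name from an include-relative path.
 * The function name is ours, not the original script's. */
static void guard_name(const char *path, char *out, size_t outlen)
{
    size_t n = 0;
    for (const char *p = path; *p && n + 2 < outlen; p++) {
        if (isalnum((unsigned char)*p))
            out[n++] = (char)toupper((unsigned char)*p);
        else if (*p == '/') {           /* rule 2: two underscores */
            out[n++] = '_';
            out[n++] = '_';
        } else                          /* rule 3: single underscore */
            out[n++] = '_';
    }
    out[n] = '\0';
}
```

Under these rules, `asm-x86/mm_types.h` and a hypothetical `asm-x86/mm/types.h` yield distinct guards, which is exactly the ambiguity rule 2 is meant to resolve.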
  21. 10 Jun 2008, 1 commit
  22. 17 Apr 2008, 3 commits
    • x86: split large page mapping for AMD TSEG · 8346ea17
      Andi Kleen authored
      On AMD, SMM-protected memory is part of the address map but handled
      internally like an MTRR. That leads to large pages being split
      internally, which has some performance implications. Check for the
      AMD TSEG MSR and split the large-page mapping over that area
      explicitly if it is part of the direct mapping.
      
      There is also SMM ASEG, but it is in the first 1MB and already covered by
      the earlier split first page patch.
      
      Idea for this came from an earlier patch by Andreas Herrmann
      
      On a RevF dual Socket Opteron system kernbench shows a clear
      improvement from this:
      (together with the earlier patches in this series, especially the
      split first 2MB patch)
      
      [lower is better]
                    no split stddev         split  stddev    delta
      Elapsed Time   87.146 (0.727516)     84.296 (1.09098)  -3.2%
      User Time     274.537 (4.05226)     273.692 (3.34344)  -0.3%
      System Time    34.907 (0.42492)      34.508 (0.26832)  -1.1%
      Percent CPU   322.5   (38.3007)     326.5   (44.5128)  +1.2%
      
      => About 3.2% improvement in elapsed time for kernbench.
      
      With GB pages on AMD Fam10h the impact of splitting is much higher, of
      course, since it would split two full GB pages (together with the first
      1MB split patch) instead of two 2MB pages. I could not benchmark
      a clear difference in kernbench with gbpages, so I kept it disabled
      for that case.
      
      That was only limited benchmarking, of course, so if someone
      is interested in running more tests for the gbpages case,
      that could be revisited (contributions welcome).
      
      I didn't bother implementing this for 32-bit because it is very
      unlikely that the 32-bit lowmem mapping overlaps the TSEG near 4GB,
      and the 2MB low split is already handled for both.
      
      [ mingo@elte.hu: do it on gbpages kernels too, there's no clear reason
                       why it shouldn't help there. ]
      Signed-off-by: Andi Kleen <ak@suse.de>
      Acked-by: andreas.herrmann3@amd.com
      Cc: mingo@elte.hu
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8346ea17
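The gating condition the commit describes can be modelled as a small range check. This is a hedged sketch under our own names and a conventional 2 MB large-page size: split only when the large page containing the TSEG base actually falls inside the kernel's direct mapping; the real kernel reads the TSEG base from an MSR and calls its page-attribute code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PMD_SIZE (2ull << 20)   /* 2 MB large-page size */

/* Does the 2 MB page containing the TSEG base lie within the direct
 * mapping [0, direct_map_end)? Only then is a split needed. */
static bool tseg_needs_split(uint64_t tseg_base, uint64_t direct_map_end)
{
    return (tseg_base & ~(PMD_SIZE - 1)) < direct_map_end;
}
```

For example, a TSEG just below 4 GB on a machine whose direct mapping ends at 4 GB would be split, while the same TSEG on a machine with only 2 GB mapped would not.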
    • x86: PAT infrastructure patch · 2e5d9c85
      venkatesh.pallipadi@intel.com authored
      Sets up pat_init() infrastructure.
      
      The PAT MSR has the following settings:
      	PAT
      	|PCD
      	||PWT
      	|||
      	000 WB		_PAGE_CACHE_WB
      	001 WC		_PAGE_CACHE_WC
      	010 UC-		_PAGE_CACHE_UC_MINUS
      	011 UC		_PAGE_CACHE_UC
      
      We are effectively changing WT from the boot-time setting to WC.
      UC_MINUS is used to provide backward compatibility to existing /dev/mem
      users (X).
      
      reserve_memtype and free_memtype are new interfaces for maintaining an
      alias-free mapping. They are currently implemented in a simple,
      unoptimized way with a linked list. reserve and free track the effective
      memory type resulting from the PAT and MTRR settings, rather than what
      is actually requested in PAT.
      
      pat_init piggybacks on mtrr_init, as the rules for setting both PAT and
      MTRR are the same.
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2e5d9c85
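The (PAT, PCD, PWT) table above can be expressed as a tiny lookup: the three page-table bits form an index into the PAT MSR's entries, and this patch programs the four low entries as WB, WC, UC-, UC. The helper below is our own illustrative model of that table, not a kernel interface; entries 4-7 exist in hardware but are outside this sketch.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Map the (PAT, PCD, PWT) page-table bits to the memory type that the
 * four low PAT entries of this patch select. Returns NULL for the high
 * entries not modelled here. */
static const char *pat_type(unsigned pat, unsigned pcd, unsigned pwt)
{
    static const char *const types[4] = { "WB", "WC", "UC-", "UC" };
    unsigned idx = (pat << 2) | (pcd << 1) | pwt;
    return idx < 4 ? types[idx] : NULL;
}
```

This makes the stated change concrete: the index 001, which selected WT with the boot-time PAT contents, now selects WC.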
    • x86: add AMD Northbridge MSR definition · 12db648c
      Stephane Eranian authored
      Adds the AMD Northbridge config MSR definition.
      Signed-off-by: Stephane Eranian <eranian@gmail.com>
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      12db648c
  23. 30 Jan 2008, 2 commits
  24. 20 Oct 2007, 1 commit
  25. 11 Oct 2007, 1 commit
  26. 03 May 2007, 2 commits
    • [PATCH] i386: Enable support for fixed-range IORRs to keep RdMem & WrMem in sync · de938c51
      Bernhard Kaindl authored
      If our copy of the MTRRs of the BSP has RdMem or WrMem set, and
      we are running on an AMD64/K8 system, the boot CPU must have had
      MtrrFixDramEn and MtrrFixDramModEn set (otherwise our RDMSR would
      have copied these bits cleared), so we set them on this CPU as well.
      
      This allows us to keep the AMD64/K8 RdMem and WrMem bits in sync
      across the CPUs of SMP systems in order to fulfill the duty of
      system software to "initialize and maintain MTRR consistency
      across all processors," as written in the AMD and Intel manuals.
      
      If a WRMSR instruction fails because MtrrFixDramModEn is not
      set, I expect that the Intel-style MTRR bits are not updated either.
      
      AK: minor cleanup, moved MSR defines around
      Signed-off-by: Bernhard Kaindl <bk@suse.de>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      de938c51
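The detection step the commit relies on can be sketched as a bit test. Each fixed-range MTRR MSR packs eight per-range type bytes, and on AMD64/K8 bit 3 (WrMem) and bit 4 (RdMem) of each byte are the extended bits; if any byte of the BSP's copy has them set, the BSP must have had MtrrFixDramEn/MtrrFixDramModEn enabled. The bit positions follow AMD's documentation; the function name is our own.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Scan the eight type bytes of a fixed-range MTRR value for the AMD
 * extended RdMem (bit 4) and WrMem (bit 3) bits. 0x18 selects bits 3-4
 * of one byte, repeated across all eight bytes of the MSR. */
static bool k8_extended_bits_set(uint64_t fixed_mtrr)
{
    const uint64_t rdmem_wrmem_mask = 0x1818181818181818ull;
    return (fixed_mtrr & rdmem_wrmem_mask) != 0;
}
```

When this returns true for any fixed-range MSR copied from the BSP, the AP sets MtrrFixDramEn/MtrrFixDramModEn before writing the MSRs back, keeping the extended bits consistent across CPUs.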
    • [PATCH] x86: Clean up x86 control register and MSR macros (corrected) · 4bc5aa91
      H. Peter Anvin authored
      This patch is based on Rusty's recent cleanup of the EFLAGS-related
      macros; it extends the same kind of cleanup to control registers and
      MSRs.
      
      It also unifies these between i386 and x86-64; at least with regard
      to MSRs, the two had definitely gotten out of sync.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      4bc5aa91