1. 14 Oct 2011, 1 commit
  2. 16 Jul 2011, 1 commit
  3. 15 Jul 2011, 1 commit
    • x86, intel, power: Initialize MSR_IA32_ENERGY_PERF_BIAS · abe48b10
      Committed by Len Brown
      Since 2.6.36 (23016bf0), Linux prints the existence of "epb" in /proc/cpuinfo.
      Since 2.6.38 (d5532ee7), the x86_energy_perf_policy(8) utility has
      been available in-tree to update MSR_IA32_ENERGY_PERF_BIAS.
      
      However, the typical BIOS fails to initialize the MSR, presumably
      because this is handled by high-volume shrink-wrap operating systems...
      
      Linux distros, on the other hand, do not yet invoke x86_energy_perf_policy(8).
      As a result, WSM-EP, SNB, and later hardware from Intel will run in their
      default hardware power-on state (performance), which assumes that users
      care about performance at all costs and not about energy efficiency.
      While that is fine for performance benchmarks, the hardware's intended default
      operating point is "normal" mode...
      
      Initialize the MSR to the "normal" setting by default during kernel boot.
      
      x86_energy_perf_policy(8) is available to change the default after boot,
      should the user have a different preference.
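      
      A minimal sketch of such boot-time initialization, assuming the kernel's
      existing EPB constants and MSR helpers (the exact hook point in the Intel
      setup code is an assumption, not the literal patch):
      
          #define ENERGY_PERF_BIAS_PERFORMANCE  0
          #define ENERGY_PERF_BIAS_NORMAL       6
      
          static void init_energy_perf_bias(struct cpuinfo_x86 *c)
          {
                  u64 epb;
      
                  if (!cpu_has(c, X86_FEATURE_EPB))
                          return;
      
                  rdmsrl(MSR_IA32_ENERGY_PERF_BIAS, epb);
                  if ((epb & 0xf) != ENERGY_PERF_BIAS_PERFORMANCE)
                          return;  /* BIOS chose a policy; leave it alone */
      
                  /* power-on default => BIOS never initialized the MSR */
                  epb = (epb & ~0xfULL) | ENERGY_PERF_BIAS_NORMAL;
                  wrmsrl(MSR_IA32_ENERGY_PERF_BIAS, epb);
          }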
      Signed-off-by: Len Brown <len.brown@intel.com>
      Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1107140051020.18606@x980
      Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: <stable@kernel.org>
  4. 18 May 2011, 1 commit
  5. 17 May 2011, 1 commit
  6. 28 Jan 2011, 2 commits
    • x86: Unify CPU -> NUMA node mapping between 32 and 64bit · 645a7919
      Committed by Tejun Heo
      Unlike 64bit, 32bit has been using its own cpu_to_node_map[] for
      CPU -> NUMA node mapping.  Replace it with early_percpu variable
      x86_cpu_to_node_map and share the mapping code with 64bit.
      
      * USE_PERCPU_NUMA_NODE_ID is now enabled for 32bit too.
      
      * x86_cpu_to_node_map and numa_set/clear_node() are moved from
        numa_64 to numa.  For now, on 32bit, x86_cpu_to_node_map is initialized
        with 0 instead of NUMA_NO_NODE.  This is to avoid introducing unexpected
        behavior change and will be updated once init path is unified.
      
      * srat_detect_node() is now enabled for x86_32 too.  It calls
        numa_set_node() and initializes the mapping, making explicit
        cpu_to_node_map[] updates from map/unmap_cpu_to_node() unnecessary.
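      
      A sketch of the shared accessor after unification (the early_per_cpu
      machinery is the existing x86 one; the exact body is an approximation):
      
          void numa_set_node(int cpu, int node)
          {
                  int *cpu_to_node_map = early_per_cpu_ptr(x86_cpu_to_node_map);
      
                  /* early in boot, the percpu areas are not set up yet */
                  if (cpu_to_node_map) {
                          cpu_to_node_map[cpu] = node;
                          return;
                  }
      
                  per_cpu(x86_cpu_to_node_map, cpu) = node;
          }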
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: penberg@kernel.org
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-15-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
    • x86: Unify cpu/apicid <-> NUMA node mapping between 32 and 64bit · bbc9e2f4
      Committed by Tejun Heo
      The mapping between cpu/apicid and node is done via
      apicid_to_node[] on 64bit and apicid_2_node[] +
      apic->x86_32_numa_cpu_node() on 32bit. This difference makes it
      difficult to further unify 32 and 64bit NUMA handling.
      
      This patch unifies it by replacing both apicid_to_node[] and
      apicid_2_node[] with __apicid_to_node[] array, which is accessed
      by two accessors - set_apicid_to_node() and numa_cpu_node().  On
      64bit, numa_cpu_node() always consults __apicid_to_node[]
      directly while 32bit goes through apic->numa_cpu_node() method
      to allow apic implementations to override it.
      
      srat_detect_node() for AMD CPUs contains a workaround for broken
      NUMA configurations which assume a relationship between APIC ID,
      HT node ID and NUMA topology.  Leave it accessing
      __apicid_to_node[] directly, as mapping through the CPU might
      result in an undesirable behavior change.  The comment is
      reformatted and updated to note the ugliness.
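      
      Roughly, the unified array and its two accessors look like this (a
      sketch based on the description above):
      
          /* one array shared by 32 and 64bit */
          s16 __apicid_to_node[MAX_LOCAL_APIC] = {
                  [0 ... MAX_LOCAL_APIC-1] = NUMA_NO_NODE
          };
      
          static inline void set_apicid_to_node(int apicid, s16 node)
          {
                  __apicid_to_node[apicid] = node;
          }
      
          int numa_cpu_node(int cpu)
          {
                  int apicid = early_per_cpu(x86_cpu_to_apicid, cpu);
      
                  if (apicid != BAD_APICID)
                          return __apicid_to_node[apicid];
                  return NUMA_NO_NODE;
          }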
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-14-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
  7. 12 Oct 2010, 1 commit
    • x86, numa: Assign CPUs to nodes in round-robin manner on fake NUMA · 50f2d7f6
      Committed by Nikanth Karthikesan
      commit d9c2d5ac "x86, numa: Use near(er)
      online node instead of roundrobin for NUMA" changed NUMA initialization on
      Intel to choose the nearest online node or the first node.  Fake NUMA would
      be better off with round-robin initialization instead of putting all CPUs
      on the first node.  Change the choice of nearest/first node back to
      round-robin.
      
      For testing NUMA kernel behaviour without cpusets and NUMA-aware
      applications, it is better to have CPUs in different nodes rather
      than all in a single node.  Otherwise, cpuset-driven task migration
      scenarios cannot be tested.
      
      Having it round-robin should not affect the use cases that want all
      CPUs on the first node.
      
      The code comment at arch/x86/mm/numa_64.c:759 indicates that round-robin
      used to be the behaviour before commit d9c2d5ac changed it to the nearer
      or first node, and that commit's changelog gives no reason for the
      change.
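      
      Illustratively, round-robin assignment over the online nodes looks
      like this (a sketch of the idea, not the literal patch):
      
          static int pick_node_roundrobin(void)
          {
                  static int last = -1;  /* node handed out previously */
      
                  last = next_node(last, node_online_map);
                  if (last == MAX_NUMNODES)
                          last = first_node(node_online_map);
                  return last;
          }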
      Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  8. 29 Sep 2010, 1 commit
  9. 21 Sep 2010, 1 commit
  10. 13 Aug 2010, 1 commit
  11. 24 Apr 2010, 1 commit
    • x86: Disable large pages on CPUs with Atom erratum AAE44 · 7a0fc404
      Committed by H. Peter Anvin
      Atom erratum AAE44/AAF40/AAG38/AAH41:
      
      "If software clears the PS (page size) bit in a present PDE (page
      directory entry), that will cause linear addresses mapped through this
      PDE to use 4-KByte pages instead of using a large page after old TLB
      entries are invalidated. Due to this erratum, if a code fetch uses
      this PDE before the TLB entry for the large page is invalidated then
      it may fetch from a different physical address than specified by
      either the old large page translation or the new 4-KByte page
      translation. This erratum may also cause speculative code fetches from
      incorrect addresses."
      
      [http://download.intel.com/design/processor/specupdt/319536.pdf]
      
      While commit 211b3d03 seems to work around
      erratum AAH41 (mixed 4K TLBs), it only reduces the window of
      opportunity for the bug to occur and does not remove it entirely.  This
      patch disables mixed 4K/4MB page tables altogether, avoiding the page
      splitting and thus never tripping this processor issue.
      
      This is based on an original patch by Colin King.
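      
      The quirk amounts to hiding PSE on the affected models so that 4K/4MB
      mixing can never happen; a sketch (the model number 0x1c for this Atom
      generation and the exact placement are assumptions):
      
          /* in early Intel CPU setup */
          if (c->x86 == 6 && c->x86_model == 0x1c) {
                  /* Atom erratum AAE44: never mix 4K and large pages */
                  clear_cpu_cap(c, X86_FEATURE_PSE);
          }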
      Originally-by: Colin Ian King <colin.king@canonical.com>
      Cc: Colin Ian King <colin.king@canonical.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      LKML-Reference: <1269271251-19775-1-git-send-email-colin.king@canonical.com>
      Cc: <stable@kernel.org>
  12. 10 Apr 2010, 1 commit
  13. 26 Mar 2010, 1 commit
    • x86, perf, bts, mm: Delete the never used BTS-ptrace code · faa4602e
      Committed by Peter Zijlstra
      Support for the PMU's BTS features has been upstreamed in
      v2.6.32, but we still have the old and disabled ptrace-BTS,
      as Linus noticed not so long ago.
      
      It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
      regard for other uses (perf) and doesn't provide the flexibility
      needed for perf either.
      
      Its users are ptrace-block-step and ptrace-bts: ptrace-bts
      was never used, and ptrace-block-step can be implemented using a
      much simpler approach.
      
      So axe all 3000 lines of it. That includes the *locked_memory*()
      APIs in mm/mlock.c as well.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Markus Metzger <markus.t.metzger@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <20100325135413.938004390@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  14. 02 Mar 2010, 1 commit
  15. 18 Dec 2009, 1 commit
  16. 12 Dec 2009, 1 commit
    • x86: Limit the number of processor bootup messages · 2eaad1fd
      Committed by Mike Travis
      When there are a large number of processors in a system, there
      is an excessive amount of messages sent to the system console.
      It's estimated that with 4096 processors in a system, and the
      console baudrate set to 56K, the startup messages will take
      about 84 minutes to clear the serial port.
      
      This set of patches limits the number of repetitious messages
      which contain no additional information.  Much of this information
      is obtainable from /proc and /sysfs.  Some of the messages
      are also sent to the kernel log buffer as KERN_DEBUG messages so
      dmesg can be used to examine more closely any details specific to
      a problem.
      
      The new cpu bootup sequence for system_state == SYSTEM_BOOTING:
      
      Booting Node   0, Processors  #1 #2 #3 #4 #5 #6 #7 Ok.
      Booting Node   1, Processors  #8 #9 #10 #11 #12 #13 #14 #15 Ok.
      ...
      Booting Node   3, Processors  #56 #57 #58 #59 #60 #61 #62 #63 Ok.
      Brought up 64 CPUs
      
      After the system is running, a single-line boot message is displayed
      when CPUs are hotplugged:
      
          Booting Node %d Processor %d APIC 0x%x
      
      Status of the following lines:
      
          CPU: Physical Processor ID:		printed once (for boot cpu)
          CPU: Processor Core ID:		printed once (for boot cpu)
          CPU: Hyper-Threading is disabled	printed once (for boot cpu)
          CPU: Thermal monitoring enabled	printed once (for boot cpu)
          CPU %d/0x%x -> Node %d:		removed
          CPU %d is now offline:		only if system_state == RUNNING
          Initializing CPU#%d:		KERN_DEBUG
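      
      A sketch of the node-grouped announcement producing the sequence
      shown above (function name and details are an approximation of the
      patch):
      
          static void announce_cpu(int cpu, int apicid)
          {
                  static int current_node = -1;
                  int node = early_cpu_to_node(cpu);
      
                  if (system_state == SYSTEM_BOOTING) {
                          if (node != current_node) {
                                  if (current_node > -1)
                                          pr_cont(" Ok.\n");
                                  current_node = node;
                                  pr_info("Booting Node %3d, Processors", node);
                          }
                          pr_cont(" #%d", cpu);
                  } else {
                          pr_info("Booting Node %d Processor %d APIC 0x%x\n",
                                  node, cpu, apicid);
                  }
          }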
      Signed-off-by: Mike Travis <travis@sgi.com>
      LKML-Reference: <4B219E28.8080601@sgi.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  17. 23 Nov 2009, 1 commit
    • x86, numa: Use near(er) online node instead of roundrobin for NUMA · d9c2d5ac
      Committed by Yinghai Lu
      CPU to node mapping is set via the following sequence:
      
       1. numa_init_array(): set up a round-robin mapping from cpu to
          online node.
      
       2. init_cpu_to_node(): set the mapping according to apicid_to_node[]
          (from SRAT); this only handles nodes that are online, and leaves
          cpus on nodes without RAM (i.e. not online) with their
          round-robin assignment.
      
       3. srat_detect_node(), called later for Intel/AMD: uses the
          first_online node or a nearby node.
      
      The problem is that setup_per_cpu_areas() is not called between steps
      2 and 3, so the per_cpu area for a cpu on a node with RAM may end up
      on a different node, possibly two hops away.
      
      So optimize this: add find_near_online_node() and call it from
      init_cpu_to_node().
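      
      The helper simply picks the online node with the smallest SLIT
      distance; a sketch:
      
          static int find_near_online_node(int node)
          {
                  int n, val, best_node = -1;
                  int min_val = INT_MAX;
      
                  for_each_online_node(n) {
                          val = node_distance(node, n);
                          if (val < min_val) {
                                  min_val = val;
                                  best_node = n;
                          }
                  }
                  return best_node;
          }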
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4B07A739.3030104@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 15 Sep 2009, 1 commit
    • x86: Move APERF/MPERF into a X86_FEATURE · a8303aaf
      Committed by Peter Zijlstra
      Move the APERF/MPERF capability into an X86_FEATURE flag so that it
      can be used outside of the acpi-cpufreq driver.
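      
      The detection then becomes a one-time capability set instead of a
      driver-local CPUID check; a sketch (the exact guard is an assumption):
      
          /* APERF/MPERF is advertised by CPUID.06H:ECX[0] */
          if (c->cpuid_level >= 6 && (cpuid_ecx(6) & (1 << 0)))
                  set_cpu_cap(c, X86_FEATURE_APERFMPERF);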
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
      Cc: cpufreq@vger.kernel.org
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  19. 11 Jul 2009, 1 commit
  20. 15 Jun 2009, 1 commit
    • x86: add hooks for kmemcheck · f8561296
      Committed by Vegard Nossum
      The hooks that we modify are:
      - Page fault handler (to handle kmemcheck faults)
      - Debug exception handler (to hide pages after single-stepping
        the instruction that caused the page fault)
      
      Also redefine memset() to use the optimized version if kmemcheck is
      enabled.
      
      (Thanks to Pekka Enberg for minimizing the impact on the page fault
      handler.)
      
      As kmemcheck doesn't handle MMX/SSE instructions (yet), we also disable
      the optimized xor code, and rely instead on the generic C implementation
      in order to avoid false-positive warnings.
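      
      A sketch of the page fault hook, using the kmemcheck entry points
      described above (placement within do_page_fault() is an
      approximation):
      
          /* in do_page_fault(), before the usual handling */
          if (kmemcheck_active(regs))
                  kmemcheck_hide(regs);  /* re-hide pages after single-step */
      
          if (!user_mode_vm(regs) &&
              kmemcheck_fault(regs, address, error_code))
                  return;  /* the fault was a kmemcheck-tracked access */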
      Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
      
      [whitespace fixlet]
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      
      [rebased for mainline inclusion]
      Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
  21. 18 May 2009, 1 commit
  22. 13 Mar 2009, 1 commit
  23. 12 Mar 2009, 1 commit
  24. 08 Mar 2009, 1 commit
  25. 27 Feb 2009, 1 commit
    • x86: set X86_FEATURE_TSC_RELIABLE · 83ce4009
      Committed by Ingo Molnar
      If the TSC is constant and non-stop, also mark it as reliable.
      
      (We will turn this off in DMI quirks for multi-chassis systems)
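      
      A sketch of the resulting early-setup logic, building on the existing
      CONSTANT_TSC/NONSTOP_TSC detection (exact placement assumed):
      
          /* invariant TSC: CPUID 0x80000007 EDX[8], cached in x86_power */
          if (c->x86_power & (1 << 8)) {
                  set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
                  set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
                  set_cpu_cap(c, X86_FEATURE_TSC_RELIABLE);
                  sched_clock_stable = 1;
          }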
      
      The performance number on a 16-way Nehalem system running
      32 tasks that context-switch between each other is significant:
      
         sched_clock_stable=0		sched_clock_stable=1
         ....................         ....................
         22.456925 million/sec        24.306972 million/sec   [+8.2%]
      
      lmbench's "lat_ctx -s 0 2" goes from 0.63 microseconds to
      0.59 microseconds - a 6.7% increase in context-switching
      performance.
      
      Perfstat of 1 million pipe context switches between two tasks:
      
       Performance counter stats for './pipe-test-1m':
      
             [before]           [after]
         ............      ............
         37621.421089      36436.848378    task clock ticks     (msecs)
      
                    0                 0    CPU migrations       (events)
              2000274           2000189    context switches     (events)
                  194               193    pagefaults           (events)
           8433799643        8171016416    CPU cycles           (events) -3.21%
           8370133368        8180999694    instructions         (events) -2.31%
              4158565           3895941    cache references     (events) -6.74%
                44312             46264    cache misses         (events)
      
          2349.287976       2279.362465    wall-time            (msecs)  -3.06%
      
      The speedup comes straight from the reduction in the instruction
      count. sched_clock_cpu() got simpler and the whole workload thus
      executes faster.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  26. 20 Feb 2009, 1 commit
  27. 18 Feb 2009, 2 commits
  28. 09 Feb 2009, 1 commit
  29. 29 Jan 2009, 1 commit
  30. 27 Jan 2009, 1 commit
  31. 26 Jan 2009, 1 commit
    • x86: unmask CPUID levels on Intel CPUs, fix · 99fb4d34
      Committed by Ingo Molnar
      Impact: fix boot hang on pre-model-15 Intel CPUs
      
      rdmsrl_safe() does not work in very early bootup code yet, because we
      don't have the page fault handler installed yet, so the exception
      section does not get parsed.  rdmsrl_safe() will just crash and hang
      the bootup.
      
      So limit the MSR_IA32_MISC_ENABLE MSR read to those CPU types that
      support it.
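      
      A sketch of the model-gated read (bit 22 of MSR_IA32_MISC_ENABLE is
      the "Limit CPUID Maxval" control; the guard follows the text above):
      
          /* the MSR exists on family 0xf and family 6 model >= 15 */
          if (c->x86 > 6 || (c->x86 == 6 && c->x86_model >= 15)) {
                  u64 misc_enable;
      
                  rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
                  if (misc_enable & (1ULL << 22)) {
                          misc_enable &= ~(1ULL << 22);
                          wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
                          c->cpuid_level = cpuid_eax(0);  /* re-read unmasked */
                  }
          }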
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  32. 24 Jan 2009, 1 commit
    • x86: handle PAT more like other CPU features · 75a04811
      Committed by H. Peter Anvin
      Impact: Cleanup
      
      When PAT was originally introduced, it was handled specially for a few
      reasons:
      
      - PAT bugs are hard to track down, so we wanted to maintain a
        whitelist of CPUs.
      - The i386 and x86-64 CPUID code was not yet unified.
      
      Both of these are now obsolete, so handle PAT like any other feature,
      including ordinary feature blacklisting due to known bugs.
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  33. 22 Jan 2009, 1 commit
  34. 19 Dec 2008, 1 commit
  35. 17 Dec 2008, 1 commit
    • x86: support always running TSC on Intel CPUs · 40fb1715
      Committed by Venki Pallipadi
      Impact: reward non-stop TSCs with good TSC-based clocksources, etc.
      
      Add support for CPUID_0x80000007_Bit8 on Intel CPUs as well. This bit means
      that the TSC is invariant with C/P/T states and always runs at constant
      frequency.
      
      With Intel CPUs, we have 3 classes:
      * CPUs where the TSC runs at a constant rate and does not stop in C-states
      * CPUs where the TSC runs at a constant rate, but will stop in deep C-states
      * CPUs where the TSC rate varies based on P/T-states, and the TSC will stop
        in deep C-states.
      
      To cover these 3 classes, one feature bit (CONSTANT_TSC) is not enough.
      So add a second bit (NONSTOP_TSC).  CONSTANT_TSC indicates that the TSC
      runs at constant frequency irrespective of P/T-states, and NONSTOP_TSC
      indicates that the TSC does not stop in deep C-states.
      
      CPUID_0x80000007_Bit8 indicates that both these feature bits can be set.
      We still have CONSTANT_TSC set and NONSTOP_TSC not set on some older Intel
      CPUs, based on model checks.  We can use the TSC on such CPUs for time, as
      long as those CPUs do not support/enter deep C-states.
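      
      A sketch of the corresponding feature detection (the x86_power field
      caches CPUID 0x80000007 EDX):
      
          if (c->x86_power & (1 << 8)) {
                  /* rate fixed across P/T-states ... */
                  set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
                  /* ... and keeps counting in deep C-states */
                  set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
          }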
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  36. 12 Dec 2008, 1 commit
  37. 10 Nov 2008, 1 commit
  38. 16 Oct 2008, 1 commit