1. 13 Mar 2012 · 1 commit
    • sched: Cleanup cpu_active madness · 5fbd036b
      By Peter Zijlstra
      Stepan found:
      
      CPU0		CPUn
      
      _cpu_up()
        __cpu_up()
      
      		bootstrap()
      		  notify_cpu_starting()
      		  set_cpu_online()
      		  while (!cpu_active())
      		    cpu_relax()
      
      <PREEMPT-out>
      
      smp_call_function(.wait=1)
        /* we find cpu_online() is true */
        arch_send_call_function_ipi_mask()
      
        /* wait-forever-more */
      
      <PREEMPT-in>
      		  local_irq_enable()
      
        cpu_notify(CPU_ONLINE)
          sched_cpu_active()
            set_cpu_active()
      
      The purpose of cpu_active is mostly tied to bringing a cpu down: we
      mark it !active to keep the load-balancer from moving tasks to it
      while we tear the cpu down. This is required because we only update
      the sched_domain tree after the cpu is down. And it is needed so
      that some tasks can still run while we bring the cpu down; we just
      don't want new tasks to appear.
      
      On cpu-up, however, the sched_domain tree doesn't yet include the
      new cpu, so it's invisible to the load-balancer regardless of the
      active state. So instead of setting the active state after we boot
      the new cpu (and consequently having to wait for it before enabling
      interrupts), set the cpu active before we set it online and avoid
      the whole mess.
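
      The shape of the fix, as a sketch (not the verbatim patch): let the
      scheduler's hotplug notifier flip the active bit at CPU_STARTING
      time, which notify_cpu_starting() delivers on the new cpu before
      set_cpu_online(), so nobody can ever observe an online but inactive
      cpu:

        static int __cpuinit sched_cpu_active(struct notifier_block *nfb,
                                              unsigned long action, void *hcpu)
        {
                switch (action & ~CPU_TASKS_FROZEN) {
                case CPU_STARTING:      /* delivered before set_cpu_online() */
                case CPU_DOWN_FAILED:
                        set_cpu_active((long)hcpu, true);
                        return NOTIFY_OK;
                default:
                        return NOTIFY_DONE;
                }
        }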
      Reported-by: Stepan Moskovchenko <stepanm@codeaurora.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1323965362.18942.71.camel@twins
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  2. 23 Feb 2012 · 1 commit
  3. 24 Dec 2011 · 1 commit
  4. 06 Dec 2011 · 1 commit
    • x86: Reduce clock calibration time during slave cpu startup · b565201c
      By Jack Steiner
      Reduce the startup time for slave cpus.
      
      Add hooks for an arch-specific clock-calibration function; these
      hooks are used on x86.  If a newly started cpu has the same
      phys_proc_id as an already-active core, uses the TSC for the delay
      loop, and has CONSTANT_TSC, reuse the already-calculated value of
      loops_per_jiffy.
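
      A sketch of the x86 hook (simplified; the real version also checks
      that the TSC actually drives the delay loop):

        unsigned long __cpuinit calibrate_delay_is_known(void)
        {
                int i, cpu = smp_processor_id();

                if (!cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC))
                        return 0;

                /* Reuse the value computed by a core in the same package. */
                for_each_online_cpu(i)
                        if (cpu_data(i).phys_proc_id == cpu_data(cpu).phys_proc_id)
                                return cpu_data(i).loops_per_jiffy;

                return 0;       /* 0 = unknown, do the full calibration */
        }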
      
      This patch reduces the time required to start slave cpus on a
      4096 cpu system from 465 seconds to 62 seconds.
      
      This reduces boot time on a 4096p system by almost 7 minutes.
      Nice...
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: John Stultz <john.stultz@linaro.org>
      [fix CONFIG_SMP=n build]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  5. 05 Dec 2011 · 1 commit
    • x86, NMI: Add NMI IPI selftest · 99e8b9ca
      By Don Zickus
      The previous patch modified the stop cpus path to use NMI
      instead of IRQ as the way to communicate to the other cpus to
      shutdown.  There were some concerns that various machines may
      have problems with using an NMI IPI.
      
      This patch creates a selftest to check if NMI is working at
      boot. The idea is to help catch any issues before the machine
      panics and we learn the hard way.
      
      Loosely based on the locking-selftest.c file, this separate file
      runs a couple of simple tests and reports the results.  The
      output looks like:
      
      ...
      Brought up 4 CPUs
      ----------------
      | NMI testsuite:
      --------------------
        remote IPI:  ok  |
         local IPI:  ok  |
      --------------------
      Good, all   2 testcases passed! |
      ---------------------------------
      Total of 4 processors activated (21330.61 BogoMIPS).
      ...
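
      A sketch of what the remote-IPI test amounts to, assuming the
      register_nmi_handler() interface from the reworked NMI code (the
      helper names here are illustrative, not the file's actual ones):

        static atomic_t nmi_acks;

        static int test_nmi_handler(unsigned int type, struct pt_regs *regs)
        {
                atomic_inc(&nmi_acks);
                return NMI_HANDLED;
        }

        static void __init remote_ipi_test(void)
        {
                register_nmi_handler(NMI_LOCAL, test_nmi_handler, 0,
                                     "nmi_selftest");
                atomic_set(&nmi_acks, 0);

                apic->send_IPI_allbutself(NMI_VECTOR);  /* NMI the other cpus */
                mdelay(10);                             /* let them respond  */

                unregister_nmi_handler(NMI_LOCAL, "nmi_selftest");
                /* pass iff nmi_acks == num_online_cpus() - 1 */
        }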
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: seiji.aguchi@hds.com
      Cc: vgoyal@redhat.com
      Cc: mjg@redhat.com
      Cc: tony.luck@intel.com
      Cc: gong.chen@intel.com
      Cc: satoru.moriya@hds.com
      Cc: avi@redhat.com
      Cc: Andi Kleen <andi@firstfloor.org>
      Link: http://lkml.kernel.org/r/1318533267-18880-3-git-send-email-dzickus@redhat.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 21 Jul 2011 · 1 commit
  7. 08 Jun 2011 · 1 commit
    • x86: cpu-hotplug: Prevent softirq wakeup on wrong CPU · fd8a7de1
      By Thomas Gleixner
      After a newly plugged CPU sets the cpu_online bit it enables
      interrupts and goes idle. The cpu which brought up the new cpu waits
      for the cpu_online bit and when it observes it, it sets the cpu_active
      bit for this cpu. The cpu_active bit is the relevant one for the
      scheduler to consider the cpu as a viable target.
      
      With forced threaded interrupt handlers which imply forced threaded
      softirqs we observed the following race:
      
      cpu 0                         cpu 1
      
      bringup(cpu1);
                                    set_cpu_online(smp_processor_id(), true);
      		              local_irq_enable();
      while (!cpu_online(cpu1));
                                    timer_interrupt()
                                      -> wake_up(softirq_thread_cpu1);
                                           -> enqueue_on(softirq_thread_cpu1, cpu0);
      
                                                                              ^^^^
      
      cpu_notify(CPU_ONLINE, cpu1);
        -> sched_cpu_active(cpu1)
           -> set_cpu_active(cpu1, true);
      
      When an interrupt arrives before the bringup cpu has set the
      cpu_active bit, the scheduler refuses to enqueue the woken thread,
      which is bound to the newly onlined cpu, on that cpu, because its
      cpu_active bit is not yet set, and selects a fallback runqueue
      instead. Not really an expected and desirable behaviour.
      
      So far this has only been observed with forced hard/softirq threading,
      but in theory this could happen without forced threaded hard/softirqs
      as well. It's probably unobservable as it would take a massive
      interrupt storm on the newly onlined cpu which causes the softirq loop
      to wake up the softirq thread and an even longer delay of the cpu
      which waits for the cpu_online bit.
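
      The fix, sketched (this is the very wait loop that the "cpu_active
      madness" cleanup at the top of this log removes again): keep
      interrupts disabled on the new cpu until the bringup cpu has also
      marked it active:

        set_cpu_online(smp_processor_id(), true);

        /*
         * Wait until the cpu which brought us up has marked us active;
         * only then is it safe to take interrupts which may enqueue
         * cpu-bound threads here.
         */
        while (!cpumask_test_cpu(smp_processor_id(), cpu_active_mask))
                cpu_relax();

        local_irq_enable();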
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@kernel.org # 2.6.39
  8. 30 May 2011 · 1 commit
  9. 29 May 2011 · 1 commit
    • x86 idle: clarify AMD erratum 400 workaround · 02c68a02
      By Len Brown
      The workaround for AMD erratum 400 uses the term "c1e" falsely suggesting:
      1. Intel C1E is somehow involved
      2. All AMD processors with C1E are involved
      
      Use the string "amd_c1e" instead of simply "c1e" to clarify that
      this workaround is specific to AMD's version of C1E.
      Use the string "e400" to clarify that the workaround is specific
      to AMD processors with Erratum 400.
      
      This patch is text-substitution only, with no functional change.
      
      cc: x86@kernel.org
      Acked-by: Borislav Petkov <borislav.petkov@amd.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
  10. 21 Apr 2011 · 1 commit
  11. 16 Apr 2011 · 1 commit
    • x86, NUMA: Fix fakenuma boot failure · 7d6b4670
      By KOSAKI Motohiro
      Currently, the numa=fake boot parameter is broken: if it's used,
      the kernel may panic with a divide-by-zero error, depending on the
      CPU configuration.
      
      Call Trace:
       [<ffffffff8104ad4c>] find_busiest_group+0x38c/0xd30
       [<ffffffff81086aff>] ? local_clock+0x6f/0x80
       [<ffffffff81050533>] load_balance+0xa3/0x600
       [<ffffffff81050f53>] idle_balance+0xf3/0x180
       [<ffffffff81550092>] schedule+0x722/0x7d0
       [<ffffffff81550538>] ? wait_for_common+0x128/0x190
       [<ffffffff81550a65>] schedule_timeout+0x265/0x320
       [<ffffffff81095815>] ? lock_release_holdtime+0x35/0x1a0
       [<ffffffff81550538>] ? wait_for_common+0x128/0x190
       [<ffffffff8109bb6c>] ? __lock_release+0x9c/0x1d0
       [<ffffffff815534e0>] ? _raw_spin_unlock_irq+0x30/0x40
       [<ffffffff815534e0>] ? _raw_spin_unlock_irq+0x30/0x40
       [<ffffffff81550540>] wait_for_common+0x130/0x190
       [<ffffffff81051920>] ? try_to_wake_up+0x510/0x510
       [<ffffffff8155067d>] wait_for_completion+0x1d/0x20
       [<ffffffff8107f36c>] kthread_create_on_node+0xac/0x150
       [<ffffffff81077bb0>] ? process_scheduled_works+0x40/0x40
       [<ffffffff8155045f>] ? wait_for_common+0x4f/0x190
       [<ffffffff8107a283>] __alloc_workqueue_key+0x1a3/0x590
       [<ffffffff81e0cce2>] cpuset_init_smp+0x6b/0x7b
       [<ffffffff81df3d07>] kernel_init+0xc3/0x182
       [<ffffffff8155d5e4>] kernel_thread_helper+0x4/0x10
       [<ffffffff81553cd4>] ? retint_restore_args+0x13/0x13
       [<ffffffff81df3c44>] ? start_kernel+0x400/0x400
       [<ffffffff8155d5e0>] ? gs_change+0x13/0x13
      
      The divide by zero is caused by the following line, with
      group->cpu_power == 0:
      
       kernel/sched_fair.c::update_sg_lb_stats()
              /* Adjust by relative CPU power of the group */
              sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE) / group->cpu_power;
      
      This regression was caused by commit e23bba60 ("x86-64, NUMA: Unify
      emulated distance mapping") because it changes cpu -> node
      mapping in the process of dropping fake_physnodes().
      
        old) all cpus are assigned node 0
        now) cpus are assigned round robin
             (the logic is implemented by numa_init_array())
      
        Note: the change in behavior only happens if the system has
              neither an ACPI SRAT table nor AMD northbridge NUMA
              information.
      
      Round-robin assignment doesn't work because
      init_numa_sched_groups_power() assumes that all logical cpus in the
      same physical cpu share the same node (it then only accounts for
      group_first_cpu()), and the simple round robin breaks that
      assumption.

      Thus, this patch reassigns node-ids when buggy firmware or numa
      emulation produces a wrong cpu-to-node map, enforcing that all
      logical cpus in the same physical cpu share the same node.
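
      The idea, as a hypothetical sketch (the helper below is
      illustrative, not the patch's actual code): pin every logical cpu
      to the node already chosen for the first cpu of its physical
      package:

        /* Illustrative only: enforce one node per physical package. */
        static void __init fixup_cpu_node_map(void)     /* hypothetical */
        {
                int cpu, first;

                for_each_possible_cpu(cpu) {
                        first = first_cpu_of_package(cpu);  /* assumed helper */
                        if (first != cpu)
                                numa_set_node(cpu, early_cpu_to_node(first));
                }
        }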
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Link: http://lkml.kernel.org/r/20110415203928.1303.A69D9226@jp.fujitsu.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 29 Mar 2011 · 1 commit
    • x86: A fast way to check capabilities of the current cpu · 349c004e
      By Christoph Lameter
      Add this_cpu_has() which determines if the current cpu has a certain
      ability using a segment prefix and a bit test operation.
      
      For that we need to add bit operations to x86's percpu.h.
      
      Many uses of cpu_has use a pointer passed to a function to determine
      the current flags. That is no longer necessary after this patch.
      
      However, this patch only converts the straightforward cases where
      cpu_has is used with this_cpu_ptr. The rest is work for later.
      
      -tj: Rolled up patch to add x86_ prefix and use percpu_read() instead
           of percpu_read_stable().
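
      The resulting interface looks roughly like this (a sketch; the real
      macro also special-cases feature bits known at compile time):

        /*
         * Test a feature bit of the current cpu's cpu_info in place;
         * the segment prefix does the per-cpu addressing, so no
         * smp_processor_id()/cpu_data() pointer dance is needed.
         */
        #define this_cpu_has(bit)                               \
                x86_this_cpu_test_bit(bit,                      \
                        (unsigned long *)&cpu_info.x86_capability)

        /* usage: */
        if (this_cpu_has(X86_FEATURE_MWAIT))
                mwait_play_dead();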
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  13. 23 Feb 2011 · 1 commit
    • x86: Rework arch_disable_smp_support() for x86 · 7167d08e
      By Henrik Kretzschmar
      Currently arch_disable_smp_support() on x86 disables only the
      support for the IOAPIC, and it is compiled in even when SMP support
      is not.

      Therefore the function is renamed to disable_ioapic_support(),
      which matches its purpose, and it is only compiled into the kernel
      when IOAPIC support is enabled.

      A new arch_disable_smp_support() is created in smpboot.c, which
      calls disable_ioapic_support() and is only compiled into the kernel
      when SMP support is enabled (see the sketch below).
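
      After the rework, the smpboot.c side is essentially just (sketch):

        /* Built only when CONFIG_SMP is enabled. */
        void __init arch_disable_smp_support(void)
        {
                disable_ioapic_support();
        }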
      Signed-off-by: Henrik Kretzschmar <henne@nachtwindheim.de>
      LKML-Reference: <1298385487-4708-3-git-send-email-henne@nachtwindheim.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  14. 18 Feb 2011 · 1 commit
    • x86, trampoline: Common infrastructure for low memory trampolines · 4822b7fc
      By H. Peter Anvin
      Common infrastructure for low memory trampolines.  This code installs
      the trampolines permanently in low memory very early.  It also permits
      multiple pieces of code to be used for this purpose.
      
      This code also introduces a standard infrastructure for computing
      symbol addresses in the trampoline code.
      
      The only change to the actual SMP trampolines themselves is that the
      64-bit trampoline has been made reusable -- the previous version would
      overwrite the code with a status variable; this moves the status
      variable to a separate location.
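
      Computing a symbol address then boils down to relocating the linked
      symbol into the permanent low-memory copy; a sketch:

        extern unsigned char x86_trampoline_start[];    /* linked location */
        extern unsigned char *x86_trampoline_base;      /* low-memory copy */

        /* Address of trampoline symbol x within the low-memory copy. */
        #define TRAMPOLINE_SYM(x)                                       \
                ((void *)(x86_trampoline_base +                         \
                          ((const char *)(x) -                          \
                           (const char *)x86_trampoline_start)))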
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      LKML-Reference: <4D5DFBE4.7090104@intel.com>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Matthieu Castet <castet.matthieu@free.fr>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
  15. 10 Feb 2011 · 1 commit
    • x86: Fix section mismatch in LAPIC initialization · 2fb270f3
      By Jan Beulich
      Additionally, doing things conditionally upon smp_processor_id()
      being zero is generally a bad idea, as it means CPU 0 cannot be
      offlined and brought back online later again.
      
      While there may be other places where this is done, I think adding
      more of those should be avoided so that some day SMP can really
      become "symmetrical".
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      LKML-Reference: <4D525C7E0200007800030EE1@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  16. 05 Feb 2011 · 1 commit
  17. 28 Jan 2011 · 8 commits
    • x86: Unify node_to_cpumask_map handling between 32 and 64bit · de2d9445
      By Tejun Heo
      x86_32 has been managing node_to_cpumask_map explicitly from
      map_cpu_to_node() and friends in a rather ugly way.  With
      previous changes, it's now possible to share the code with
      64bit.
      
      * When CONFIG_NUMA_EMU is disabled, numa_add/remove_cpu() are
        implemented in numa.c and shared by 32 and 64bit (sketched
        below).  CONFIG_NUMA_EMU versions still live in numa_64.c.
      
        NUMA_EMU's dependency on 64bit is planned to be removed and the
        above should go away together.
      
      * identify_cpu() now calls numa_add_cpu() for 32bit too.  This
        makes the explicit mask management from map_cpu_to_node() unnecessary.
      
      * The whole x86_32 specific map_cpu_to_node() chunk is no longer
        necessary.  Dropped.
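
      A sketch of the now-shared (!CONFIG_NUMA_EMU) implementation
      referenced in the first item above:

        static void __cpuinit numa_set_cpumask(int cpu, int enable)
        {
                struct cpumask *mask;

                mask = node_to_cpumask_map[early_cpu_to_node(cpu)];
                if (enable)
                        cpumask_set_cpu(cpu, mask);
                else
                        cpumask_clear_cpu(cpu, mask);
        }

        void __cpuinit numa_add_cpu(int cpu)    { numa_set_cpumask(cpu, 1); }
        void __cpuinit numa_remove_cpu(int cpu) { numa_set_cpumask(cpu, 0); }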
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-16-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
    • x86: Unify CPU -> NUMA node mapping between 32 and 64bit · 645a7919
      By Tejun Heo
      Unlike 64bit, 32bit has been using its own cpu_to_node_map[] for
      CPU -> NUMA node mapping.  Replace it with early_percpu variable
      x86_cpu_to_node_map and share the mapping code with 64bit.
      
      * USE_PERCPU_NUMA_NODE_ID is now enabled for 32bit too.
      
      * x86_cpu_to_node_map and numa_set/clear_node() are moved from
        numa_64 to numa.  For now, on 32bit, x86_cpu_to_node_map is initialized
        with 0 instead of NUMA_NO_NODE.  This is to avoid introducing unexpected
        behavior change and will be updated once init path is unified.
      
      * srat_detect_node() is now enabled for x86_32 too.  It calls
        numa_set_node() and initializes the mapping making explicit
        cpu_to_node_map[] updates from map/unmap_cpu_to_node() unnecessary.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: penberg@kernel.org
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-15-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
    • x86: Unify cpu/apicid <-> NUMA node mapping between 32 and 64bit · bbc9e2f4
      By Tejun Heo
      The mapping between cpu/apicid and node is done via
      apicid_to_node[] on 64bit and apicid_2_node[] +
      apic->x86_32_numa_cpu_node() on 32bit. This difference makes it
      difficult to further unify 32 and 64bit NUMA handling.
      
      This patch unifies it by replacing both apicid_to_node[] and
      apicid_2_node[] with __apicid_to_node[] array, which is accessed
      by two accessors - set_apicid_to_node() and numa_cpu_node().  On
      64bit, numa_cpu_node() always consults __apicid_to_node[]
      directly while 32bit goes through apic->numa_cpu_node() method
      to allow apic implementations to override it.
      
      srat_detect_node() for amd cpus contains workaround for broken
      NUMA configuration which assumes relationship between APIC ID,
      HT node ID and NUMA topology.  Leave it to access
      __apicid_to_node[] directly as mapping through CPU might result
      in undesirable behavior change.  The comment is reformatted and
      updated to note the ugliness.
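
      The unified accessors, sketched (on 32bit, numa_cpu_node()
      additionally indirects through apic->x86_32_numa_cpu_node() as
      described above):

        extern s16 __apicid_to_node[MAX_LOCAL_APIC];

        static inline void set_apicid_to_node(int apicid, s16 node)
        {
                __apicid_to_node[apicid] = node;
        }

        int __cpuinit numa_cpu_node(int cpu)    /* 64bit flavour */
        {
                int apicid = early_per_cpu(x86_cpu_to_apicid, cpu);

                return apicid == BAD_APICID ? NUMA_NO_NODE
                                            : __apicid_to_node[apicid];
        }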
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-14-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: David Rientjes <rientjes@google.com>
    • x86: Replace apic->apicid_to_node() with ->x86_32_numa_cpu_node() · 89e5dc21
      By Tejun Heo
      apic->apicid_to_node() is 32bit specific apic operation which
      determines NUMA node for a CPU.  Depending on the APIC
      implementation, it can be easier to determine NUMA node from
      either physical or logical apicid.  Currently,
      ->apicid_to_node() takes @logical_apicid and calls
      hard_smp_processor_id() if the physical apicid is needed.
      
      This prevents NUMA mapping from being queried from a different
      CPU, which in turn makes it impossible to initialize NUMA
      mapping before SMP bringup.
      
      This patch replaces apic->apicid_to_node() with
      ->x86_32_numa_cpu_node() which takes @cpu, from which both
      logical and physical apicids can easily be determined.  While at
      it, drop duplicate implementations from bigsmp_32 and summit_32,
      and use the default one.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-13-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Always use x86_cpu_to_logical_apicid for cpu -> logical apic id · 6f802c4b
      By Tejun Heo
      Currently, cpu -> logical apic id translation is done by
      apic->cpu_to_logical_apicid() callback which may or may not use
      x86_cpu_to_logical_apicid.  This is unnecessary as it should
      always equal logical_smp_processor_id() which is known early
      during CPU bring up.
      
      Initialize x86_cpu_to_logical_apicid after apic->init_apic_ldr()
      in setup_local_APIC() and always use x86_cpu_to_logical_apicid
      for cpu -> logical apic id mapping.
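
      The relevant hunk in setup_local_APIC() amounts to something like
      this (sketch):

        apic->init_apic_ldr();

        #ifdef CONFIG_X86_32
        /*
         * The LDR is now programmed; cache our logical id so that
         * cpu -> logical apicid lookups never need a callback.
         */
        early_per_cpu(x86_cpu_to_logical_apicid, smp_processor_id()) =
                logical_smp_processor_id();
        #endif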
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: penberg@kernel.org
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-6-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Replace cpu_2_logical_apicid[] with early percpu variable · 4c321ff8
      By Tejun Heo
      Unlike x86_64, on x86_32, the mapping from cpu to logical apicid
      may vary depending on apic in use.  cpu_2_logical_apicid[] array
      is used for this mapping.  Replace it with early percpu variable
      x86_cpu_to_logical_apicid to make it better aligned with other
      mappings.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: penberg@kernel.org
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-5-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Drop x86_32 MAX_APICID · b78aa66b
      By Tejun Heo
      Commit 56d91f13 (x86, acpi: Add MAX_LOCAL_APIC for 32bit) added
      MAX_LOCAL_APIC for x86_32 but didn't replace MAX_APICID users
      with it. Convert MAX_APICID users to MAX_LOCAL_APIC and drop
      MAX_APICID.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Acked-by: Yinghai Lu <yinghai@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-3-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Kill unused static boot_cpu_logical_apicid in smpboot.c · bd22a2f1
      By Tejun Heo
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Acked-by: Yinghai Lu <yinghai@kernel.org>
      Cc: eric.dumazet@gmail.com
      Cc: yinghai@kernel.org
      Cc: brgerst@gmail.com
      Cc: gorcunov@gmail.com
      Cc: shaohui.zheng@intel.com
      Cc: rientjes@google.com
      LKML-Reference: <1295789862-25482-2-git-send-email-tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 26 Jan 2011 · 3 commits
    • x86: Don't copy per_cpu cpuinfo for BSP two times · 792363d2
      By Yinghai Lu
      smp_store_cpu_info(0) will do that.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      LKML-Reference: <4D3A16F2.5090902@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Move llc_shared_map out of cpu_info · b3d7336d
      By Yinghai Lu
      cpu_info is already per_cpu; we can take llc_shared_map out of
      cpu_info and declare it as a per_cpu variable directly.

      Later references then become simple and direct instead of having to
      dig out cpu_info first.

      This also makes smp_store_cpu_info() much simpler by avoiding the
      save-and-restore trick.
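
      Sketched, the map becomes a first-class per-cpu object with a
      trivial accessor:

        DECLARE_PER_CPU(cpumask_var_t, cpu_llc_shared_map);

        /* Direct lookup; no detour through cpu_data(cpu) any more. */
        static inline struct cpumask *cpu_llc_shared_mask(int cpu)
        {
                return per_cpu(cpu_llc_shared_map, cpu);
        }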
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Cc: Alok N Kataria <akataria@vmware.com>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: Hans J. Koch <hjk@linutronix.de>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <4D3A16E8.5020608@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, amd: Normalize compute unit IDs on multi-node processors · d518573d
      By Andreas Herrmann
      On multi-node CPUs we don't need the socket-wide compute unit ID
      but the node-wide compute unit ID. Thus we need to normalize the
      value. This is similar to what we do with cpu_core_id.
      
      A compute unit is then identified by physical_package_id,
      node_id, and compute_unit_id.
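
      In amd.c terms the normalization is roughly (a sketch; variable
      names may differ from the patch):

        /* These ids come in socket-wide; fold them down to per-node. */
        cores_per_node = c->x86_max_cores / nodes_per_socket;
        cus_per_node   = cores_per_node / cores_per_compute_unit;

        c->cpu_core_id     %= cores_per_node;   /* [0 .. cores_per_node-1] */
        c->compute_unit_id %= cus_per_node;     /* [0 .. cus_per_node-1]   */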
      Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      LKML-Reference: <1295881543-572552-2-git-send-email-hans.rosenfeld@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  19. 22 Jan 2011 · 1 commit
    • x86, hotplug: Fix powersavings with offlined cores on AMD · 93789b32
      By Borislav Petkov
      Commit ea530692 made a CPU use monitor/mwait when offline. This is
      not the optimal choice for AMD with respect to power savings; we'd
      prefer our cores to halt (i.e. enter C1) instead. For this, the
      same monitor/mwait selection has to be used as when we pick the
      idle routine for the machine.

      With this patch, offlining cores 1-5 on an X6 machine allows core 0
      to boost again.
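
      After the fix the offline path is shaped like this (sketch):
      mwait_play_dead() bails out under the same conditions as the
      idle-routine selection, and we fall back to halt:

        void native_play_dead(void)
        {
                play_dead_common();

                mwait_play_dead();      /* returns if mwait is unsuitable */
                hlt_play_dead();        /* halt -> C1; lets siblings boost */
        }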
      
      [ hpa: putting this in urgent since it is a (power) regression fix ]
      Reported-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      Cc: stable@kernel.org # 37.x
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      LKML-Reference: <1295534572-10730-1-git-send-email-bp@amd64.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  20. 09 Jan 2011 · 1 commit
  21. 30 Dec 2010 · 2 commits
  22. 14 Dec 2010 · 1 commit
  23. 26 Nov 2010 · 1 commit
  24. 18 Nov 2010 · 1 commit
    • x86, nmi_watchdog: Remove all stub function calls from old nmi_watchdog · 072b198a
      By Don Zickus
      Now that the bulk of the old nmi_watchdog is gone, remove all
      the stub variables and hooks associated with it.
      
      This touches lots of files mainly because of how the io_apic
      nmi_watchdog was implemented.  Now that the io_apic nmi_watchdog
      is forever gone, remove all its fingers.
      
      Most of this code was not being exercised by virtue of
      nmi_watchdog != NMI_IO_APIC, so there shouldn't be anything too
      risky here.
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: fweisbec@gmail.com
      Cc: gorcunov@openvz.org
      LKML-Reference: <1289578944-28564-3-git-send-email-dzickus@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  25. 27 Oct 2010 · 2 commits
  26. 21 Oct 2010 · 1 commit
  27. 12 Oct 2010 · 1 commit
  28. 02 Oct 2010 · 1 commit
  29. 21 Sep 2010 · 1 commit