1. 10 February 2013 (1 commit)
    • x86 idle: remove mwait_idle() and "idle=mwait" cmdline param · 69fb3676
      Authored by Len Brown
      mwait_idle() is a C1-only idle loop intended to be more efficient
      than HLT, starting on Pentium-4 HT-enabled processors.
      
      But mwait_idle() has been replaced by the more general
      mwait_idle_with_hints(), which handles both C1 and deeper C-states.
      ACPI processor_idle and intel_idle use only mwait_idle_with_hints(),
      and no longer use mwait_idle().
      
      Here we simplify the x86 native idle code by removing mwait_idle(),
      and the "idle=mwait" bootparam used to invoke it.
      
      Since Linux 3.0 there has been a boot-time warning when "idle=mwait"
      was invoked saying it would be removed in 2012.  This removal
      was also noted in the (now removed:-) feature-removal-schedule.txt.
      
      After this change, kernels configured with
      (CONFIG_ACPI=n && CONFIG_INTEL_IDLE=n) when run on hardware
      that supports MWAIT will simply use HLT.  If MWAIT is desired
      on those systems, cpuidle and the cpuidle drivers above
      can be enabled.
      Signed-off-by: Len Brown <len.brown@intel.com>
      Cc: x86@kernel.org
      69fb3676
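
      For context on the mechanism the commit above removes, here is a minimal sketch of an MWAIT-based C1 idle loop (not the deleted mwait_idle() itself; the monitored address and function name are illustrative). The __monitor()/__mwait() helpers come from <asm/mwait.h>, and the 0x00 hint in EAX requests C1, which is exactly the case mwait_idle_with_hints(0, 0) already covers.

      #include <asm/mwait.h>
      #include <linux/sched.h>
      #include <linux/smp.h>
      #include <linux/irqflags.h>

      /* Illustrative only: idle in C1 via MWAIT until the monitored cache
       * line is written (e.g. TIF_NEED_RESCHED is set) or an interrupt
       * arrives. */
      static void mwait_c1_idle_sketch(void)
      {
              if (!need_resched()) {
                      /* Arm the monitor on the current task's flags word. */
                      __monitor(&current_thread_info()->flags, 0, 0);
                      smp_mb();
                      if (!need_resched())
                              __mwait(0x00, 0);   /* EAX hint 0x00 => C1 */
              }
              local_irq_enable();
      }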
  2. 01 December 2012 (1 commit)
    • x86, fpu: Avoid FPU lazy restore after suspend · 644c1541
      Authored by Vincent Palatin
      When a cpu enters S3 state, the FPU state is lost.
      After resuming from S3, if we try to lazy restore the FPU for a process running
      on the same CPU, this will result in a corrupted FPU context.
      
      Ensure that "fpu_owner_task" is properly invalided when (re-)initializing a CPU,
      so nobody will try to lazy restore a state which doesn't exist in the hardware.
      
      Tested with a 64-bit kernel on a 4-core Ivybridge CPU with eagerfpu=off,
      by doing thousands of suspend/resume cycles with 4 processes doing FPU
      operations running. Without the patch, a process is killed after a
      few hundred cycles by a SIGFPE.
      
      Cc: Duncan Laurie <dlaurie@chromium.org>
      Cc: Olof Johansson <olofj@chromium.org>
      Cc: <stable@kernel.org> v3.4+ # for 3.4 need to replace this_cpu_write by percpu_write
      Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
      Link: http://lkml.kernel.org/r/1354306532-1014-1-git-send-email-vpalatin@chromium.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      644c1541
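
      To make the failure mode above concrete, a minimal sketch assuming the 3.x-era lazy-FPU bookkeeping (helper names are illustrative; fpu_owner_task is the per-cpu pointer named in the commit message). Lazy restore is only safe if this CPU's registers still hold the task's FPU state, which is no longer true after S3, so the owner pointer must be cleared when the CPU is (re-)initialized.

      #include <linux/sched.h>
      #include <linux/percpu.h>

      DECLARE_PER_CPU(struct task_struct *, fpu_owner_task);

      /* Lazy restore is legal only if @tsk's FPU state is still live in this
       * CPU's registers: it last used the FPU here and still "owns" the CPU. */
      static inline bool fpu_lazy_restore_ok_sketch(struct task_struct *tsk, int cpu)
      {
              return tsk->thread.fpu.last_cpu == cpu &&
                     this_cpu_read(fpu_owner_task) == tsk;
      }

      /* Called while (re-)initializing a CPU, e.g. on resume from S3: the
       * hardware state is gone, so nobody may pass the check above. */
      static inline void fpu_invalidate_owner_sketch(void)
      {
              this_cpu_write(fpu_owner_task, NULL);
      }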
  3. 15 November 2012 (2 commits)
  4. 14 November 2012 (1 commit)
  5. 23 August 2012 (1 commit)
    • x86/smp: Don't ever patch back to UP if we unplug cpus · 816afe4f
      Authored by Rusty Russell
      We still patch SMP instructions to UP variants if we boot with a
      single CPU, but not at any other time.  In particular, not if we
      unplug CPUs to return to a single cpu.
      
      Paul McKenney points out:
      
       mean offline overhead is 6251/48=130.2 milliseconds.
      
       If I remove the alternatives_smp_switch() from the offline
       path [...] the mean offline overhead is 550/42=13.1 milliseconds
      
      Basically, we're never going to get those 120ms back, and the
      code is pretty messy.
      
      We get rid of:
      
       1) The "smp-alt-once" boot option. It's actually "smp-alt-boot", the
          documentation is wrong. It's now the default.
      
       2) The skip_smp_alternatives flag used by suspend.
      
       3) arch_disable_nonboot_cpus_begin() and arch_disable_nonboot_cpus_end()
          which were only used to set this one flag.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Paul McKenney <paul.mckenney@us.ibm.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/87vcgwwive.fsf@rustcorp.com.au
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      816afe4f
  6. 14 June 2012 (1 commit)
    • x86: Add read_mostly declaration/definition to variables from smp.h · 0816b0f0
      Authored by Vlad Zolotarov
      Add "read-mostly" qualifier to the following variables in
      smp.h:
      
       - cpu_sibling_map
       - cpu_core_map
       - cpu_llc_shared_map
       - cpu_llc_id
       - cpu_number
       - x86_cpu_to_apicid
       - x86_bios_cpu_apicid
       - x86_cpu_to_logical_apicid
      
      Since all the variables above are only written during
      initialization, this change is meant to prevent false
      sharing. More specifically, on the vSMP Foundation platform
      x86_cpu_to_apicid shared the same internode_cache_line with
      frequently written lapic_events.
      
      An analysis of the first 33 of the 219 per_cpu variables (more
      precisely, of the memory they describe) shows that 8 are read-mostly
      in nature (tlb_vector_offset, cpu_loops_per_jiffy, xen_debug_irq, etc.)
      and 25 are frequently written (irq_stack_union, gdt_page,
      exception_stacks, idt_desc, etc.).
      
      Assuming that the spread among the rest of the per_cpu variables is
      similar, identifying the read-mostly memories makes more sense
      in terms of long-term code maintenance than identifying the
      frequently written ones.
      Signed-off-by: Vlad Zolotarov <vlad@scalemp.com>
      Acked-by: Shai Fultheim <shai@scalemp.com>
      Cc: Shai Fultheim (Shai@ScaleMP.com) <Shai@scalemp.com>
      Cc: ido@wizery.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1719258.EYKzE4Zbq5@vlad
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0816b0f0
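
      A short illustration of the annotation being applied (the variable here is hypothetical; the real change only touches the declarations listed above). DEFINE/DECLARE_PER_CPU_READ_MOSTLY place a per-cpu variable in the read-mostly per-cpu section, so it no longer shares a cache line (or, on vSMP, an internode cache line) with frequently written per-cpu data.

      #include <linux/percpu.h>
      #include <linux/types.h>

      /* Hypothetical per-cpu variable: written once at boot, read often. */
      DEFINE_PER_CPU_READ_MOSTLY(u16, example_llc_id);

      static void init_example_llc_id(unsigned int cpu, u16 id)
      {
              per_cpu(example_llc_id, cpu) = id;      /* init-time write only */
      }

      static u16 read_example_llc_id(void)
      {
              /* Hot-path read: the cache line stays clean and shared. */
              return this_cpu_read(example_llc_id);
      }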
  7. 13 June 2012 (1 commit)
    • x86/smp: Fix topology checks on AMD MCM CPUs · 161270fc
      Authored by Borislav Petkov
      The warning below triggers on AMD MCM packages because physical package
      IDs on the cores of a _physical_ socket are the same. I.e., this field
      says which CPUs belong to the same physical package.
      
      However, the same two CPUs belong to two different internal, i.e.
      "logical" nodes in the same physical socket which is reflected in the
      CPU-to-node map on x86 with NUMA.
      
      This makes the check wrong on the above topologies, so circumvent it.
      
      [    0.444413] Booting Node   0, Processors  #1 #2 #3 #4 #5 Ok.
      [    0.461388] ------------[ cut here ]------------
      [    0.465997] WARNING: at arch/x86/kernel/smpboot.c:310 topology_sane.clone.1+0x6e/0x81()
      [    0.473960] Hardware name: Dinar
      [    0.477170] sched: CPU #6's mc-sibling CPU #0 is not on the same node! [node: 1 != 0]. Ignoring dependency.
      [    0.486860] Booting Node   1, Processors  #6
      [    0.491104] Modules linked in:
      [    0.494141] Pid: 0, comm: swapper/6 Not tainted 3.4.0+ #1
      [    0.499510] Call Trace:
      [    0.501946]  [<ffffffff8144bf92>] ? topology_sane.clone.1+0x6e/0x81
      [    0.508185]  [<ffffffff8102f1fc>] warn_slowpath_common+0x85/0x9d
      [    0.514163]  [<ffffffff8102f2b7>] warn_slowpath_fmt+0x46/0x48
      [    0.519881]  [<ffffffff8144bf92>] topology_sane.clone.1+0x6e/0x81
      [    0.525943]  [<ffffffff8144c234>] set_cpu_sibling_map+0x251/0x371
      [    0.532004]  [<ffffffff8144c4ee>] start_secondary+0x19a/0x218
      [    0.537729] ---[ end trace 4eaa2a86a8e2da22 ]---
      [    0.628197]  #7 #8 #9 #10 #11 Ok.
      [    0.807108] Booting Node   3, Processors  #12 #13 #14 #15 #16 #17 Ok.
      [    0.897587] Booting Node   2, Processors  #18 #19 #20 #21 #22 #23 Ok.
      [    0.917443] Brought up 24 CPUs
      
      We ran a topology sanity check test we have here on it and
      it all looks ok... hopefully :).
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20120529135442.GE29157@aftab.osrc.amd.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      161270fc
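
      The shape of the resulting check, sketched (simplified from the description above; X86_FEATURE_AMD_DCM marks AMD multi-node, i.e. MCM, processors, and topology_sane_sketch() stands in for the file-local topology_sane() helper named in the warning). When two cpus share a physical package id on such a part, the node-sanity warning is skipped, because the cores legitimately sit on different internal nodes.

      #include <asm/processor.h>
      #include <asm/cpufeature.h>
      #include <linux/topology.h>

      /* Stand-in for the file-local topology_sane() helper. */
      static bool topology_sane_sketch(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o,
                                       const char *name)
      {
              return cpu_to_node(c->cpu_index) == cpu_to_node(o->cpu_index);
      }

      /* Sketch: are @c and @o siblings at the "multi-core" (package) level? */
      static bool match_mc_sketch(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
      {
              if (c->phys_proc_id != o->phys_proc_id)
                      return false;

              /* AMD multi-chip module: one package, several internal NUMA
               * nodes, so the "same node" sanity check does not apply. */
              if (cpu_has(c, X86_FEATURE_AMD_DCM))
                      return true;

              return topology_sane_sketch(c, o, "mc");
      }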
  8. 06 June 2012 (2 commits)
  9. 05 June 2012 (1 commit)
  10. 30 May 2012 (1 commit)
  11. 17 May 2012 (1 commit)
    • sched: Remove stale power aware scheduling remnants and dysfunctional knobs · 8e7fbcbc
      Authored by Peter Zijlstra
      It's been broken forever (i.e. it's not scheduling in a power
      aware fashion), as reported by Suresh and others sending
      patches, and nobody cares enough to fix it properly ...
      so remove it to make space free for something better.
      
      There's various problems with the code as it stands today, first
      and foremost the user interface which is bound to topology
      levels and has multiple values per level. This results in a
      state explosion which the administrator or distro needs to
      master and almost nobody does.
      
      Furthermore, large configuration state spaces aren't good: the
      thing doesn't just work right, because it's either under so
      many impossible-to-meet constraints, or, even if there is an
      achievable state, workloads have to be aware of it precisely
      and can never meet it when they are dynamic.
      
      So pushing this kind of decision to user-space was a bad idea
      even with a single knob - it's exponentially worse with knobs
      on every node of the topology.
      
      There is a proposal to replace the user interface with a single
      3 state knob:
      
       sched_balance_policy := { performance, power, auto }
      
      where 'auto' would be the preferred default which looks at things
      like Battery/AC mode and possible cpufreq state or whatever the hw
      exposes to show us power use expectations - but there's been no
      progress on it in the past many months.
      
      Aside from that, the actual implementation of the various knobs
      is known to be broken. There have been sporadic attempts at
      fixing things but these always stop short of reaching a mergeable
      state.
      
      Therefore this wholesale removal with the hopes of spurring
      people who care to come forward once again and work on a
      coherent replacement.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1326104915.2442.53.camel@twins
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8e7fbcbc
  12. 14 May 2012 (1 commit)
  13. 09 May 2012 (5 commits)
  14. 26 April 2012 (2 commits)
  15. 30 March 2012 (1 commit)
  16. 27 March 2012 (1 commit)
  17. 14 March 2012 (1 commit)
  18. 13 March 2012 (1 commit)
    • sched: Cleanup cpu_active madness · 5fbd036b
      Authored by Peter Zijlstra
      Stepan found:
      
      CPU0		CPUn
      
      _cpu_up()
        __cpu_up()
      
      		bootstrap()
      		  notify_cpu_starting()
      		  set_cpu_online()
      		  while (!cpu_active())
      		    cpu_relax()
      
      <PREEMPT-out>
      
      smp_call_function(.wait=1)
        /* we find cpu_online() is true */
        arch_send_call_function_ipi_mask()
      
        /* wait-forever-more */
      
      <PREEMPT-in>
      		  local_irq_enable()
      
        cpu_notify(CPU_ONLINE)
          sched_cpu_active()
            set_cpu_active()
      
      Now, the purpose of cpu_active is mostly about bringing down a cpu: we
      mark it !active to keep the load-balancer from moving tasks to it
      while we tear down the cpu. This is required because we only update the
      sched_domain tree after we have brought the cpu down, and that is needed
      so that some tasks can still run while we bring it down; we just don't
      want new tasks to appear there.
      
      On cpu-up, however, the sched_domain tree doesn't yet include the new cpu,
      so it's invisible to the load-balancer regardless of the active state.
      So instead of setting the active state after we boot the new cpu (and
      consequently having to wait for it before enabling interrupts), set the
      cpu active before we set it online and avoid the whole mess.
      Reported-by: Stepan Moskovchenko <stepanm@codeaurora.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1323965362.18942.71.camel@twins
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5fbd036b
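
      A minimal sketch of the reordering described above (the tail of a secondary cpu's bringup, heavily simplified; set_cpu_active(), set_cpu_online() and notify_cpu_starting() are the real generic helpers, the surrounding function is illustrative). Marking the cpu active before it is marked online means that by the time anyone can observe cpu_online() and send it work, the scheduler already treats it as a valid target, so the spin-until-active window, and the IPI deadlock it enabled, disappears.

      #include <linux/cpumask.h>
      #include <linux/cpu.h>
      #include <linux/irqflags.h>

      /* Sketch of the tail of a secondary cpu's bringup path. */
      static void secondary_bringup_tail_sketch(unsigned int cpu)
      {
              notify_cpu_starting(cpu);

              /* Order matters: active first, so an observer of cpu_online()
               * never sees an online-but-inactive cpu. */
              set_cpu_active(cpu, true);
              set_cpu_online(cpu, true);

              local_irq_enable();
      }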
  19. 05 March 2012 (1 commit)
    • x86: Introduce x86_cpuinit.early_percpu_clock_init hook · df156f90
      Authored by Igor Mammedov
      When a kvm guest uses kvmclock, it may hang on vcpu hot-plug.
      This is caused by an overflow in pvclock_get_nsec_offset,
      
          u64 delta = tsc - shadow->tsc_timestamp;
      
      which in turn is caused by undefined values from the percpu
      hv_clock that hasn't been initialized yet.
      The uninitialized clock on the cpu being booted is accessed from
         start_secondary
          -> smp_callin
            ->  smp_store_cpu_info
              -> identify_secondary_cpu
                -> mtrr_ap_init
                  -> mtrr_restore
                    -> stop_machine_from_inactive_cpu
                      -> queue_stop_cpus_work
                        ...
                          -> sched_clock
                            -> kvm_clock_read
      which is well before x86_cpuinit.setup_percpu_clockev call in
      start_secondary, where percpu clock is initialized.
      
      This patch introduces a hook that allows the per_cpu clock to be set
      up/initialized early, avoiding an overflow due to reading
        - undefined values
        - old values if the cpu was offlined and then onlined again
      
      Another possible early user of this clock source is ftrace that
      accesses it to get timestamps for ring buffer entries. So if
      mtrr_ap_init is moved from identify_secondary_cpu to past
      x86_cpuinit.setup_percpu_clockev in start_secondary, ftrace
      may cause the same overflow/hang on cpu hot-plug anyway.
      
      More complete description of the problem:
        https://lkml.org/lkml/2012/2/2/101
      
      Credits to Marcelo Tosatti <mtosatti@redhat.com> for hook idea.
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      df156f90
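
      A sketch of how the new hook slots into the bringup path (simplified; the kvm-guest side shown here is how a kvmclock user would typically wire it up, and kvm_setup_secondary_clock_sketch merely stands in for registering this cpu's pvclock page with the hypervisor). The key property is that the hook runs before smp_callin(), i.e. before anything on that path can reach sched_clock().

      #include <linux/init.h>
      #include <asm/x86_init.h>

      /* Guest side (sketch): run the per-cpu clock setup early on each AP. */
      static void kvm_setup_secondary_clock_sketch(void)
      {
              /* write MSR_KVM_SYSTEM_TIME_NEW with this cpu's hv_clock address */
      }

      static void __init kvmclock_init_sketch(void)
      {
              x86_cpuinit.early_percpu_clock_init = kvm_setup_secondary_clock_sketch;
      }

      /* Bringup side (sketch): the hook runs before smp_callin(), whose
       * mtrr_ap_init -> stop_machine path can end up in sched_clock() and
       * read an otherwise uninitialized per-cpu clock. */
      static void start_secondary_prologue_sketch(void)
      {
              x86_cpuinit.early_percpu_clock_init();
              /* smp_callin(); ... x86_cpuinit.setup_percpu_clockev(); ... */
      }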
  20. 23 February 2012 (1 commit)
  21. 13 February 2012 (1 commit)
  22. 24 December 2011 (1 commit)
  23. 06 December 2011 (1 commit)
    • x86: Reduce clock calibration time during slave cpu startup · b565201c
      Authored by Jack Steiner
      Reduce the startup time for slave cpus.
      
      Adds hooks for an arch-specific function for clock calibration.
      These hooks are used on x86.  If a newly started cpu has the
      same phys_proc_id as a core already active, uses the TSC for the
      delay loop and has a CONSTANT_TSC, use the already-calculated
      value of loops_per_jiffy.
      
      This patch reduces the time required to start slave cpus on a
      4096-cpu system from 465 seconds (old) to 62 seconds (new).
      
      This reduces boot time on a 4096p system by almost 7 minutes.
      Nice...
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: John Stultz <john.stultz@linaro.org>
      [fix CONFIG_SMP=n build]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b565201c
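
      The reuse condition described above, sketched (the helper name and structure are illustrative): if the newly started cpu shares a physical package with an already calibrated cpu and the TSC is constant, its delay loop runs at the same rate, so the expensive calibration can be skipped and the sibling's loops_per_jiffy reused.

      #include <linux/smp.h>
      #include <linux/cpumask.h>
      #include <asm/processor.h>
      #include <asm/cpufeature.h>

      /* Return a known loops_per_jiffy for @cpu, or 0 to calibrate normally. */
      static unsigned long reuse_loops_per_jiffy_sketch(int cpu)
      {
              int sibling;

              if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC) ||
                  !boot_cpu_has(X86_FEATURE_TSC))
                      return 0;

              for_each_online_cpu(sibling) {
                      /* Same physical package => same, already measured rate. */
                      if (cpu_data(sibling).phys_proc_id == cpu_data(cpu).phys_proc_id)
                              return cpu_data(sibling).loops_per_jiffy;
              }
              return 0;
      }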
  24. 05 December 2011 (1 commit)
    • x86, NMI: Add NMI IPI selftest · 99e8b9ca
      Authored by Don Zickus
      The previous patch modified the stop cpus path to use NMI
      instead of IRQ as the way to communicate to the other cpus to
      shutdown.  There were some concerns that various machines may
      have problems with using an NMI IPI.
      
      This patch creates a selftest to check if NMI is working at
      boot. The idea is to help catch any issues before the machine
      panics and we learn the hard way.
      
      Loosely based on the locking-selftest.c file, this separate file
      runs a couple of simple tests and reports the results.  The
      output looks like:
      
      ...
      Brought up 4 CPUs
      ----------------
      | NMI testsuite:
      --------------------
        remote IPI:  ok  |
         local IPI:  ok  |
      --------------------
      Good, all   2 testcases passed! |
      ---------------------------------
      Total of 4 processors activated (21330.61 BogoMIPS).
      ...
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: seiji.aguchi@hds.com
      Cc: vgoyal@redhat.com
      Cc: mjg@redhat.com
      Cc: tony.luck@intel.com
      Cc: gong.chen@intel.com
      Cc: satoru.moriya@hds.com
      Cc: avi@redhat.com
      Cc: Andi Kleen <andi@firstfloor.org>
      Link: http://lkml.kernel.org/r/1318533267-18880-3-git-send-email-dzickus@redhat.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      99e8b9ca
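
      The shape of such a test, sketched (names simplified; register_nmi_handler()/unregister_nmi_handler(), NMI_LOCAL and NMI_VECTOR are the kernel interfaces of that era, the rest is illustrative): register a temporary NMI handler, fire an NMI IPI at the target mask, and wait a bounded time for every targeted cpu to check in.

      #include <linux/smp.h>
      #include <linux/cpumask.h>
      #include <linux/jiffies.h>
      #include <asm/nmi.h>
      #include <asm/apic.h>

      static DECLARE_BITMAP(nmi_ipi_mask_sketch, NR_CPUS);

      static int test_nmi_handler_sketch(unsigned int type, struct pt_regs *regs)
      {
              /* Each targeted cpu clears its own bit to prove the NMI arrived. */
              if (cpumask_test_and_clear_cpu(smp_processor_id(),
                                             to_cpumask(nmi_ipi_mask_sketch)))
                      return NMI_HANDLED;
              return NMI_DONE;
      }

      static bool remote_ipi_test_sketch(void)
      {
              unsigned long timeout = jiffies + HZ;

              cpumask_copy(to_cpumask(nmi_ipi_mask_sketch), cpu_online_mask);
              cpumask_clear_cpu(smp_processor_id(), to_cpumask(nmi_ipi_mask_sketch));

              register_nmi_handler(NMI_LOCAL, test_nmi_handler_sketch, 0, "nmi_selftest");
              apic->send_IPI_mask(to_cpumask(nmi_ipi_mask_sketch), NMI_VECTOR);

              /* An empty mask before the timeout means the NMI IPI path works. */
              while (!cpumask_empty(to_cpumask(nmi_ipi_mask_sketch)) &&
                     time_before(jiffies, timeout))
                      cpu_relax();

              unregister_nmi_handler(NMI_LOCAL, "nmi_selftest");
              return cpumask_empty(to_cpumask(nmi_ipi_mask_sketch));
      }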
  25. 21 July 2011 (1 commit)
  26. 08 June 2011 (1 commit)
    • x86: cpu-hotplug: Prevent softirq wakeup on wrong CPU · fd8a7de1
      Authored by Thomas Gleixner
      After a newly plugged CPU sets the cpu_online bit it enables
      interrupts and goes idle. The cpu which brought up the new cpu waits
      for the cpu_online bit and when it observes it, it sets the cpu_active
      bit for this cpu. The cpu_active bit is the relevant one for the
      scheduler to consider the cpu as a viable target.
      
      With forced threaded interrupt handlers which imply forced threaded
      softirqs we observed the following race:
      
      cpu 0                         cpu 1
      
      bringup(cpu1);
                                    set_cpu_online(smp_processor_id(), true);
      		              local_irq_enable();
      while (!cpu_online(cpu1));
                                    timer_interrupt()
                                      -> wake_up(softirq_thread_cpu1);
                                           -> enqueue_on(softirq_thread_cpu1, cpu0);
      
                                                                              ^^^^
      
      cpu_notify(CPU_ONLINE, cpu1);
        -> sched_cpu_active(cpu1)
           -> set_cpu_active(cpu1, true);
      
      When an interrupt happens before the cpu_active bit is set by the cpu
      which brought up the newly onlined cpu, then the scheduler refuses to
      enqueue the woken thread which is bound to that newly onlined cpu on
      that newly onlined cpu due to the not yet set cpu_active bit and
      selects a fallback runqueue. Not really an expected and desirable
      behaviour.
      
      So far this has only been observed with forced hard/softirq threading,
      but in theory this could happen without forced threaded hard/softirqs
      as well. It's probably unobservable as it would take a massive
      interrupt storm on the newly onlined cpu which causes the softirq loop
      to wake up the softirq thread and an even longer delay of the cpu
      which waits for the cpu_online bit.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@kernel.org # 2.6.39
      fd8a7de1
  27. 30 May 2011 (1 commit)
  28. 29 May 2011 (1 commit)
    • x86 idle: clarify AMD erratum 400 workaround · 02c68a02
      Authored by Len Brown
      The workaround for AMD erratum 400 uses the term "c1e" falsely suggesting:
      1. Intel C1E is somehow involved
      2. All AMD processors with C1E are involved
      
      Use the string "amd_c1e" instead of simply "c1e" to clarify that
      this workaround is specific to AMD's version of C1E.
      Use the string "e400" to clarify that the workaround is specific
      to AMD processors with Erratum 400.
      
      This patch is text-substitution only, with no functional change.
      
      cc: x86@kernel.org
      Acked-by: Borislav Petkov <borislav.petkov@amd.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
      02c68a02
  29. 21 April 2011 (1 commit)
  30. 16 April 2011 (1 commit)
    • x86, NUMA: Fix fakenuma boot failure · 7d6b4670
      Authored by KOSAKI Motohiro
      Currently, the numa=fake boot parameter is broken. If it's used, the
      kernel may panic due to a divide-by-zero error, depending on the CPU
      configuration.
      
      Call Trace:
       [<ffffffff8104ad4c>] find_busiest_group+0x38c/0xd30
       [<ffffffff81086aff>] ? local_clock+0x6f/0x80
       [<ffffffff81050533>] load_balance+0xa3/0x600
       [<ffffffff81050f53>] idle_balance+0xf3/0x180
       [<ffffffff81550092>] schedule+0x722/0x7d0
       [<ffffffff81550538>] ? wait_for_common+0x128/0x190
       [<ffffffff81550a65>] schedule_timeout+0x265/0x320
       [<ffffffff81095815>] ? lock_release_holdtime+0x35/0x1a0
       [<ffffffff81550538>] ? wait_for_common+0x128/0x190
       [<ffffffff8109bb6c>] ? __lock_release+0x9c/0x1d0
       [<ffffffff815534e0>] ? _raw_spin_unlock_irq+0x30/0x40
       [<ffffffff815534e0>] ? _raw_spin_unlock_irq+0x30/0x40
       [<ffffffff81550540>] wait_for_common+0x130/0x190
       [<ffffffff81051920>] ? try_to_wake_up+0x510/0x510
       [<ffffffff8155067d>] wait_for_completion+0x1d/0x20
       [<ffffffff8107f36c>] kthread_create_on_node+0xac/0x150
       [<ffffffff81077bb0>] ? process_scheduled_works+0x40/0x40
       [<ffffffff8155045f>] ? wait_for_common+0x4f/0x190
       [<ffffffff8107a283>] __alloc_workqueue_key+0x1a3/0x590
       [<ffffffff81e0cce2>] cpuset_init_smp+0x6b/0x7b
       [<ffffffff81df3d07>] kernel_init+0xc3/0x182
       [<ffffffff8155d5e4>] kernel_thread_helper+0x4/0x10
       [<ffffffff81553cd4>] ? retint_restore_args+0x13/0x13
       [<ffffffff81df3c44>] ? start_kernel+0x400/0x400
       [<ffffffff8155d5e0>] ? gs_change+0x13/0x13
      
      The divide by zero is caused by the following line when
      group->cpu_power == 0:
      
       kernel/sched_fair.c::update_sg_lb_stats()
              /* Adjust by relative CPU power of the group */
              sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE) / group->cpu_power;
      
      This regression was caused by commit e23bba60 ("x86-64, NUMA: Unify
      emulated distance mapping") because it changes cpu -> node
      mapping in the process of dropping fake_physnodes().
      
        old) all cpus are assigned node 0
        now) cpus are assigned round robin
             (the logic is implemented by numa_init_array())
      
        Note: The change in behavior only happens if the system has
              neither an ACPI SRAT table nor AMD northbridge NUMA
              information.
      
      Round robin assignment doesn't work because init_numa_sched_groups_power()
      assumes all logical cpus in the same physical cpu share the same node
      (then it only accounts for group_first_cpu()), and the simple round robin
      breaks the above assumption.
      
      Thus, this patch implements a reassignment of node ids if buggy firmware
      or numa emulation produces a wrong cpu-to-node map. It enforces that all
      logical cpus in the same physical cpu share the same node.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Shaohui Zheng <shaohui.zheng@intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Link: http://lkml.kernel.org/r/20110415203928.1303.A69D9226@jp.fujitsu.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7d6b4670
  31. 29 March 2011 (1 commit)
    • x86: A fast way to check capabilities of the current cpu · 349c004e
      Authored by Christoph Lameter
      Add this_cpu_has() which determines if the current cpu has a certain
      ability using a segment prefix and a bit test operation.
      
      For that we need to add bit operations to x86's percpu.h.
      
      Many uses of cpu_has use a pointer passed to a function to determine
      the current flags. That is no longer necessary after this patch.
      
      However, this patch only converts the straightforward cases where
      cpu_has is used with this_cpu_ptr. The rest is work for later.
      
      -tj: Rolled up patch to add x86_ prefix and use percpu_read() instead
           of percpu_read_stable().
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      349c004e
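
      A usage sketch (the feature bit is just an example): this_cpu_has() tests a bit in the per-cpu cpu_info via a segment-prefixed access, so no struct cpuinfo_x86 pointer has to be computed and passed around the way cpu_has(c, bit) requires.

      #include <linux/types.h>
      #include <asm/processor.h>
      #include <asm/cpufeature.h>

      /* Does the cpu we are currently running on have an always-running
       * APIC timer? */
      static bool current_cpu_has_arat_sketch(void)
      {
              return this_cpu_has(X86_FEATURE_ARAT);
      }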
  32. 23 February 2011 (1 commit)
    • x86: Rework arch_disable_smp_support() for x86 · 7167d08e
      Authored by Henrik Kretzschmar
      Currently arch_disable_smp_support() on x86 only disables support
      for the IOAPIC, and it is compiled in even when SMP support is
      not.
      
      Therefore this function is renamed to disable_ioapic_support(),
      which matches its purpose and is only compiled into the kernel
      when IOAPIC support is enabled.
      
      A new arch_disable_smp_support() is created in smpboot.c,
      which calls disable_ioapic_support() and is only compiled
      into the kernel when SMP support is enabled.
      Signed-off-by: Henrik Kretzschmar <henne@nachtwindheim.de>
      LKML-Reference: <1298385487-4708-3-git-send-email-henne@nachtwindheim.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7167d08e
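
      The resulting split, sketched (function bodies elided): the IO-APIC-specific work keeps its own name in the APIC code and is built only with IO-APIC support, while the arch hook lives in smpboot.c, is built only with SMP, and simply forwards to it.

      #include <linux/init.h>

      /* io_apic.c (built when IO-APIC support is enabled): the actual work. */
      void __init disable_ioapic_support(void)
      {
              /* ... disable/mask IO-APIC usage; details elided ... */
      }

      /* smpboot.c (built when SMP support is enabled): the hook just forwards. */
      void __init arch_disable_smp_support(void)
      {
              disable_ioapic_support();
      }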
  33. 18 February 2011 (1 commit)
    • x86, trampoline: Common infrastructure for low memory trampolines · 4822b7fc
      Authored by H. Peter Anvin
      Common infrastructure for low memory trampolines.  This code installs
      the trampolines permanently in low memory very early.  It also permits
      multiple pieces of code to be used for this purpose.
      
      This code also introduces a standard infrastructure for computing
      symbol addresses in the trampoline code.
      
      The only change to the actual SMP trampolines themselves is that the
      64-bit trampoline has been made reusable -- the previous version would
      overwrite the code with a status variable; this moves the status
      variable to a separate location.
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      LKML-Reference: <4D5DFBE4.7090104@intel.com>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Matthieu Castet <castet.matthieu@free.fr>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      4822b7fc