1. 26 Jan, 2019 1 commit
  2. 29 Aug, 2017 1 commit
  3. 18 Aug, 2017 1 commit
  4. 24 Jan, 2017 1 commit
  5. 05 Jan, 2017 1 commit
    • x86/irq, trace: Add __irq_entry annotation to x86's platform IRQ handlers · c4158ff5
      Authored by Daniel Bristot de Oliveira
      This patch adds the __irq_entry annotation to the default x86
      platform IRQ handlers. ftrace's function_graph tracer uses the
      __irq_entry annotation to mark the entry and return of IRQ
      handlers.
      
      For example, before the patch:
        354549.667252 |   3)  d..1              |  default_idle_call() {
        354549.667252 |   3)  d..1              |    arch_cpu_idle() {
        354549.667253 |   3)  d..1              |      default_idle() {
        354549.696886 |   3)  d..1              |        smp_trace_reschedule_interrupt() {
        354549.696886 |   3)  d..1              |          irq_enter() {
        354549.696886 |   3)  d..1              |            rcu_irq_enter() {
      
      After the patch:
        366416.254476 |   3)  d..1              |    arch_cpu_idle() {
        366416.254476 |   3)  d..1              |      default_idle() {
        366416.261566 |   3)  d..1  ==========> |
        366416.261566 |   3)  d..1              |        smp_trace_reschedule_interrupt() {
        366416.261566 |   3)  d..1              |          irq_enter() {
        366416.261566 |   3)  d..1              |            rcu_irq_enter() {
      
      KASAN also uses this annotation. The smp_apic_timer_interrupt()
      was already annotated.
      Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Aaron Lu <aaron.lu@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Claudio Fontana <claudio.fontana@huawei.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
      Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicolai Stange <nicstange@gmail.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Cc: linux-edac@vger.kernel.org
      Link: http://lkml.kernel.org/r/059fdf437c2f0c09b13c18c8fe4e69999d3ffe69.1483528431.git.bristot@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c4158ff5
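      For reference, a minimal sketch of the annotation pattern on one such handler
      (illustrative only; the handler body is abbreviated and not quoted from the patch):
      
      	/*
      	 * __irq_entry (from <linux/interrupt.h>) places the function in the
      	 * .irqentry.text section, which the function_graph tracer and KASAN
      	 * treat as an IRQ entry point.
      	 */
      	__visible void __irq_entry smp_reschedule_interrupt(struct pt_regs *regs)
      	{
      		ack_APIC_irq();
      		/* ... existing reschedule handling ... */
      	}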
  6. 10 Dec, 2016 1 commit
  7. 23 Nov, 2016 2 commits
  8. 13 Apr, 2016 1 commit
  9. 26 Mar, 2016 1 commit
    • ACPI / processor: Request native thermal interrupt handling via _OSC · a2121167
      Authored by Srinivas Pandruvada
      There are several reports of freezes when the Intel P-states driver
      enables the HWP (Hardware P-states) feature on Skylake-based systems.
      The root cause has been identified as the HWP interrupts causing BIOS
      code to freeze.
      
      HWP interrupts use the thermal LVT which can be handled by Linux
      natively, but on the affected Skylake-based systems SMM will respond
      to it by default.  This is a problem for several reasons:
       - On the affected systems the SMM thermal LVT handler is broken (it
         will crash when invoked) and a BIOS update is necessary to fix it.
       - With thermal interrupt handled in SMM we lose all of the reporting
         features of the arch/x86/kernel/cpu/mcheck/therm_throt driver.
       - Some thermal drivers like x86-package-temp depend on the thermal
         threshold interrupts signaled via the thermal LVT.
       - The HWP interrupts are useful for debugging and tuning
         performance (if the kernel can handle them).
      The native handling of thermal interrupts needs to be enabled
      because of that.
      
      This requires some way to tell SMM that the OS can handle thermal
      interrupts.  That can be done by using _OSC/_PDC in processor
      scope very early during ACPI initialization.
      
      The meaning of _OSC/_PDC bit 12 in processor scope is whether or
      not the OS supports native handling of interrupts for Collaborative
      Processor Performance Control (CPPC) notifications.  Since on
      HWP-capable systems CPPC is a firmware interface to HWP, setting
      this bit effectively tells the firmware that the OS will handle
      thermal interrupts natively going forward.
      
      For details on _OSC/_PDC refer to:
      http://www.intel.com/content/www/us/en/standards/processor-vendor-specific-acpi-specification.html
      
      To implement the _OSC/_PDC handshake as described, introduce a new
      function, acpi_early_processor_osc(), that walks the ACPI
      namespace looking for ACPI processor objects and invokes _OSC for
      them with bit 12 in the capabilities buffer set and terminates the
      namespace walk on the first success.
      
      Also modify intel_thermal_interrupt() to clear HWP status bits in
      the HWP_STATUS MSR to acknowledge HWP interrupts (which prevents
      them from firing continuously).
      Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      [ rjw: Subject & changelog, function rename ]
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      a2121167
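      For reference, a rough sketch of the two pieces described above (a simplified
      illustration; the callback body and the capability-buffer handling are
      assumptions, not the patch's exact code):
      
      	static acpi_status acpi_processor_osc(acpi_handle handle, u32 lvl,
      					      void *context, void **ret)
      	{
      		/*
      		 * Evaluate _OSC on this processor object with bit 12 set in
      		 * the capabilities buffer (native CPPC/HWP interrupt handling);
      		 * building the buffer and calling acpi_run_osc() is omitted.
      		 * Return AE_CTRL_TERMINATE on success to stop the walk.
      		 */
      		return AE_OK;
      	}
      
      	void __init acpi_early_processor_osc(void)
      	{
      		acpi_walk_namespace(ACPI_TYPE_PROCESSOR, ACPI_ROOT_OBJECT,
      				    ACPI_UINT32_MAX, acpi_processor_osc,
      				    NULL, NULL, NULL);
      	}
      
      	/* In intel_thermal_interrupt(): acknowledge HWP so the interrupt
      	 * does not keep firing continuously. */
      	if (static_cpu_has(X86_FEATURE_HWP))
      		wrmsrl_safe(MSR_HWP_STATUS, 0);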
  10. 03 Feb, 2016 1 commit
  11. 21 Oct, 2015 1 commit
    • x86/mce: Fix thermal throttling reporting after kexec · 81ffdcdd
      Authored by Andi Kleen
      The per-CPU thermal vector init code checks whether the thermal
      vector is already installed, and complains and bails out if it
      is.
      
      This happens after kexec, as kernel shut down does not clear the
      thermal vector APIC register.
      
      This causes two problems:
      
      1. We never fully initialize thermal reporting after kexec. The
         CPU is still likely initialized, as the previous kernel should
         have done it, but we don't set up the software pointer to the
         thermal vector, so reporting may end up with an "unknown
         thermal interrupt" message.
      
      2. It also complains for every logical CPU, even though the
         value is actually derived from the BP only.
      
      The problem is that we end up with one message per CPU, so on
      larger systems it becomes very noisy and messes up the otherwise
      nicely formatted CPU bootup numbers in the kernel log.
      
      Just remove the check. I checked the code and there are no valid
      code paths where the thermal init code for a CPU could be called
      multiple times.
      
      Why the kernel does not clean up this value on shutdown:
      
      The thermal monitoring is controlled per logical CPU thread.
      Normal shutdown code runs on only one CPU. To disable it
      everywhere we would need a broadcast NMI to all CPUs at shutdown.
      That's overkill for this, so we just ignore it after kexec.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: linux-edac <linux-edac@vger.kernel.org>
      Link: http://lkml.kernel.org/r/1445246268-26285-9-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      81ffdcdd
  12. 19 Sep, 2014 1 commit
  13. 06 May, 2014 1 commit
  14. 20 Mar, 2014 2 commits
    • x86, therm_throt.c: Remove unused therm_cpu_lock · 7b7139d4
      Authored by Srivatsa S. Bhat
      After fixing the CPU hotplug callback registration code, the callbacks
      invoked for each online CPU during the initialization phase in
      thermal_throttle_init_device() can no longer race with the actual CPU
      hotplug notifier callbacks (in thermal_throttle_cpu_callback). Hence
      therm_cpu_lock is now unnecessary. Remove it.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      7b7139d4
    • x86, therm_throt.c: Fix CPU hotplug callback registration · 4e6192bb
      Authored by Srivatsa S. Bhat
      Subsystems that want to register CPU hotplug callbacks, as well as perform
      initialization for the CPUs that are already online, often do it as shown
      below:
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	put_online_cpus();
      
      This is wrong, since it is prone to ABBA deadlocks involving the
      cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
      with CPU hotplug operations).
      
      Instead, the correct and race-free way of performing the callback
      registration is:
      
      	cpu_notifier_register_begin();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	/* Note the use of the double underscored version of the API */
      	__register_cpu_notifier(&foobar_cpu_notifier);
      
      	cpu_notifier_register_done();
      
      Fix the thermal throttle code in x86 by using this latter form of callback
      registration.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      4e6192bb
  15. 15 Jul, 2013 1 commit
    • x86: delete __cpuinit usage from all x86 files · 148f9bb8
      Authored by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly w/o any corresponding additional change there.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      148f9bb8
  16. 21 Jun, 2013 2 commits
    • x86, trace: Add irq vector tracepoints · cf910e83
      Authored by Seiji Aguchi
      [Purpose of this patch]
      
      As Vaibhav explained in the thread below, tracepoints for irq vectors
      are useful.
      
      http://www.spinics.net/lists/mm-commits/msg85707.html
      
      <snip>
      The current interrupt traces from irq_handler_entry and irq_handler_exit
      provide when an interrupt is handled.  They provide good data about when
      the system has switched to kernel space and how it affects the currently
      running processes.
      
      There are some IRQ vectors which trigger the system into kernel space,
      which are not handled in generic IRQ handlers.  Tracing such events gives
      us the information about IRQ interaction with other system events.
      
      The trace also tells where the system is spending its time.  We want to
      know which cores are handling interrupts and how they are affecting other
      processes in the system.  Also, the trace provides information about when
      the cores are idle and which interrupts are changing that state.
      <snip>
      
      On the other hand, my use case is tracing just the local timer event and
      getting the value of the instruction pointer.
      
      I previously suggested adding an argument to the local timer event to get the
      instruction pointer, but it can also be obtained with an external module such
      as systemtap, so there is no need to add any argument to the irq vector
      tracepoints now.
      
      [Patch Description]
      
      Vaibhav's patch shared one tracepoint, irq_vector_entry/irq_vector_exit, across
      all events. But the use case above calls for tracing a specific irq vector
      rather than all events, and tracing everything raises concerns about overhead
      from unwanted events.
      
      So, add the following tracepoints instead of introducing irq_vector_entry/exit,
      so that they can be enabled independently:
         - local_timer_vector
         - reschedule_vector
         - call_function_vector
         - call_function_single_vector
         - irq_work_entry_vector
         - error_apic_vector
         - thermal_apic_vector
         - threshold_apic_vector
         - spurious_apic_vector
         - x86_platform_ipi_vector
      
      Also, introduce logic to switch the IDT at enable/disable time so that the time
      penalty is zero when the tracepoints are disabled. Detailed explanations are as
      follows.
       - Create trace irq handlers with entering_irq()/exiting_irq().
       - Create a new IDT, trace_idt_table, at boot time by adding logic to
         _set_gate(). It is just a copy of the original IDT.
       - Register the new handlers for the tracepoints in the new IDT by introducing
         macros to alloc_intr_gate() called at registration time of the irq_vector handlers.
       - Add a check of whether irq vector tracing is on or off to load_current_idt().
         This has to be done below the debug check for these reasons:
         - Switching to the debug IDT may be kicked while tracing is enabled.
         - On the other hand, switching to the trace IDT is kicked only when debugging
           is disabled.
      
      In addition, the new IDT is created only when CONFIG_TRACING is enabled to avoid being
      used for other purposes.
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C323ED.5050708@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      cf910e83
    • x86, trace: Introduce entering/exiting_irq() · eddc0e92
      Authored by Seiji Aguchi
      When implementing tracepoints in interrupt handlers, simply adding the
      tracepoints in the performance-sensitive path of the interrupt handlers
      may cause a performance problem due to the time penalty.
      
      To solve the problem, the idea is to prepare non-trace/trace irq handlers
      and switch between their IDTs at enable/disable time.
      
      So, let's introduce entering_irq()/exiting_irq() for pre/post-
      processing of each irq handler.
      
      A way to use them is as follows.
      
      Non-trace irq handler:
      smp_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	exiting_irq();		/* post-processing of this handler */
      
      }
      
      Trace irq_handler:
      smp_trace_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	trace_irq_entry();	/* tracepoint for irq entry */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	trace_irq_exit();	/* tracepoint for irq exit */
      	exiting_irq();		/* post-processing of this handler */
      
      }
      
      If the tracepoints could be placed outside entering_irq()/exiting_irq() as
      follows, it would look cleaner.
      
      smp_trace_irq_handler()
      {
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      }
      
      But that doesn't work.
      The problem is with irq_enter/exit() being called: they must be called before
      trace_irq_entry/exit(), because rcu_irq_enter() must be called before any
      tracepoints are used, as tracepoints use RCU to synchronize.
      
      As a possible alternative, we may be able to call irq_enter() first as follows,
      if irq_enter() can nest.
      
      smp_trace_irq_handler()
      {
      	irq_enter();
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      	irq_exit();
      }
      
      But that doesn't work either.
      If irq_enter() is nested, it incurs a time penalty because it has to check
      whether it was already called or not. That time penalty is not desirable in
      performance-sensitive paths, even if it is tiny.
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C3238D.9040706@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      eddc0e92
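      As a concrete illustration, the pattern above applied to the thermal vector
      handled in this file would look roughly like this (a sketch reconstructed from
      the conventions described here, not a quote of the actual code):
      
      	__visible void smp_trace_thermal_interrupt(struct pt_regs *regs)
      	{
      		entering_irq();		/* pre-processing of this handler */
      		trace_thermal_apic_entry(THERMAL_APIC_VECTOR);
      		__smp_thermal_interrupt();	/* logic shared with the
      						 * non-trace handler */
      		trace_thermal_apic_exit(THERMAL_APIC_VECTOR);
      		exiting_irq();		/* post-processing of this handler */
      	}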
  17. 15 Jun, 2013 2 commits
  18. 13 Jun, 2013 1 commit
  19. 29 Mar, 2012 1 commit
  20. 22 Dec, 2011 1 commit
    • cpu: convert 'cpu' and 'machinecheck' sysdev_class to a regular subsystem · 8a25a2fd
      Authored by Kay Sievers
      This moves the 'cpu sysdev_class' over to a regular 'cpu' subsystem
      and converts the devices to regular devices. The sysdev drivers are
      implemented as subsystem interfaces now.
      
      After all sysdev classes are ported to regular driver core entities, the
      sysdev implementation will be entirely removed from the kernel.
      
      Userspace relies on events and generic sysfs subsystem infrastructure
      from sysdev devices, which are made available with this conversion.
      
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Borislav Petkov <bp@amd64.org>
      Cc: Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Zhang Rui <rui.zhang@intel.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      8a25a2fd
  21. 15 Dec, 2011 1 commit
    • x86, mce, therm_throt: Don't report power limit and package level thermal throttle events in mcelog · 29e9bf18
      Authored by Fenghua Yu
      Thermal throttle and power limit events are not defined as MCE errors in the
      x86 architecture and should not generate MCE errors in mcelog.
      
      The current kernel generates fake, software-defined MCE errors for these
      events. This may confuse users because they may think the machine has real
      MCE errors when actually only thermal throttle or power limit events have
      happened.
      
      To make it worse, buggy firmware on some platforms may falsely generate
      the events. The kernel then reports MCE errors which users take as real
      hardware errors. Although the firmware bugs should be fixed, the kernel
      should not report MCE errors for them either.
      
      So mcelog is not a good mechanism to report these events. To report the
      events, we count them in respective counters (core_power_limit_count,
      package_power_limit_count, core_throttle_count, and package_throttle_count) in
      /sys/devices/system/cpu/cpu#/thermal_throttle/. Users can check the counters
      for each event on each CPU. Please note that all CPUs on one package report
      duplicate counters. It is the user application's responsibility to derive a
      package level counter for one package.
      
      This patch doesn't report package level power limit, core level power limit, or
      package level thermal throttle events in mcelog. When these events happen, they
      are only reported in their respective counters in sysfs.
      
      Since core level thermal throttling has been legacy code in the kernel for a
      while and users have accepted it as an MCE error in mcelog, core level thermal
      throttling is still reported in mcelog. In the meantime, the event is counted
      in a sysfs counter as well.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Acked-by: Borislav Petkov <bp@amd64.org>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Link: http://lkml.kernel.org/r/20111215001945.GA21009@linux-os.sc.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      29e9bf18
  22. 12 Dec, 2011 1 commit
    • x86: Call idle notifier after irq_enter() · 98ad1cc1
      Authored by Frederic Weisbecker
      Interrupts notify the idle exit state before calling irq_enter().
      But the notifier code calls rcu_read_lock() and this is not
      allowed while RCU is in an extended quiescent state. We need
      to wait for irq_enter() -> rcu_idle_exit() to be called before
      doing so, otherwise this results in a grumpy RCU:
      
      [    0.099991] WARNING: at include/linux/rcupdate.h:194 __atomic_notifier_call_chain+0xd2/0x110()
      [    0.099991] Hardware name: AMD690VM-FMH
      [    0.099991] Modules linked in:
      [    0.099991] Pid: 0, comm: swapper Not tainted 3.0.0-rc6+ #255
      [    0.099991] Call Trace:
      [    0.099991]  <IRQ>  [<ffffffff81051c8a>] warn_slowpath_common+0x7a/0xb0
      [    0.099991]  [<ffffffff81051cd5>] warn_slowpath_null+0x15/0x20
      [    0.099991]  [<ffffffff817d6fa2>] __atomic_notifier_call_chain+0xd2/0x110
      [    0.099991]  [<ffffffff817d6ff1>] atomic_notifier_call_chain+0x11/0x20
      [    0.099991]  [<ffffffff81001873>] exit_idle+0x43/0x50
      [    0.099991]  [<ffffffff81020439>] smp_apic_timer_interrupt+0x39/0xa0
      [    0.099991]  [<ffffffff817da253>] apic_timer_interrupt+0x13/0x20
      [    0.099991]  <EOI>  [<ffffffff8100ae67>] ? default_idle+0xa7/0x350
      [    0.099991]  [<ffffffff8100ae65>] ? default_idle+0xa5/0x350
      [    0.099991]  [<ffffffff8100b19b>] amd_e400_idle+0x8b/0x110
      [    0.099991]  [<ffffffff810cb01f>] ? rcu_enter_nohz+0x8f/0x160
      [    0.099991]  [<ffffffff810019a0>] cpu_idle+0xb0/0x110
      [    0.099991]  [<ffffffff817a7505>] rest_init+0xe5/0x140
      [    0.099991]  [<ffffffff817a7468>] ? rest_init+0x48/0x140
      [    0.099991]  [<ffffffff81cc5ca3>] start_kernel+0x3d1/0x3dc
      [    0.099991]  [<ffffffff81cc5321>] x86_64_start_reservations+0x131/0x135
      [    0.099991]  [<ffffffff81cc5412>] x86_64_start_kernel+0xed/0xf4
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Andy Henroid <andrew.d.henroid@intel.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      98ad1cc1
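      The resulting ordering in a handler such as smp_apic_timer_interrupt() looks
      roughly like this (a simplified sketch, not the exact diff):
      
      	irq_enter();			/* leaves RCU's extended quiescent
      					 * state (irq_enter() -> rcu_idle_exit()) */
      	exit_idle();			/* idle-exit notifier may now safely
      					 * take rcu_read_lock() */
      	local_apic_timer_interrupt();
      	irq_exit();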
  23. 01 Nov, 2011 1 commit
    • x86: Fix files explicitly requiring export.h for EXPORT_SYMBOL/THIS_MODULE · 69c60c88
      Authored by Paul Gortmaker
      These files were implicitly getting EXPORT_SYMBOL via device.h
      which was including module.h, but that will be fixed up shortly.
      
      By fixing these now, we can avoid seeing things like:
      
      arch/x86/kernel/rtc.c:29: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’
      arch/x86/kernel/pci-dma.c:20: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’
      arch/x86/kernel/e820.c:69: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’
      
      [ with input from Randy Dunlap <rdunlap@xenotime.net> and also
        from Stephen Rothwell <sfr@canb.auug.org.au> ]
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      69c60c88
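      The fix itself is just the explicit include in each affected file, along
      these lines:
      
      	#include <linux/export.h>	/* EXPORT_SYMBOL, EXPORT_SYMBOL_GPL, THIS_MODULE */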
  24. 16 May, 2011 1 commit
    • x86, apic: Fix spurious error interrupts triggering on all non-boot APs · e503f9e4
      Authored by Youquan Song
      This patch fixes a bug reported by a customer, who found
      many unreasonable error interrupts reported on all
      non-boot CPUs (APs) during the system boot stage.
      
      According to Chapter 10 of the Intel Software Developer Manual
      Volume 3A, the Local APIC may signal an illegal vector error when
      an LVT entry is set to an illegal vector value (0~15) under
      FIXED delivery mode (bits 8-11 are 0), regardless of whether
      the mask bit is set or an interrupt actually happens. These
      errors are seen as error interrupts.
      
      The initial value of thermal LVT entries on all APs always reads
      0x10000 because APs are woken up by BSP issuing INIT-SIPI-SIPI
      sequence to them and LVT registers are reset to 0s except for
      the mask bits which are set to 1s when APs receive INIT IPI.
      
      When the BIOS takes over the thermal throttling interrupt,
      the LVT thermal delivery mode should be SMI, and the kernel is
      required to keep the APs' LVT thermal monitoring registers
      programmed that way as well.
      
      This issue happens when the BIOS does not take over the thermal
      throttling interrupt: the APs' LVT thermal monitor registers are
      restored to 0x10000, which means vector 0 and fixed delivery mode,
      so all APs signal illegal vector error interrupts.
      
      This patch checks that the interrupt delivery mode is not fixed mode
      before restoring the APs' LVT thermal monitor registers.
      Signed-off-by: Youquan Song <youquan.song@intel.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Yong Wang <yong.y.wang@intel.com>
      Cc: hpa@linux.intel.com
      Cc: joe@perches.com
      Cc: jbaron@redhat.com
      Cc: trenn@suse.de
      Cc: kent.liu@intel.com
      Cc: chaohong.guo@intel.com
      Cc: <stable@kernel.org> # As far back as possible
      Link: http://lkml.kernel.org/r/1303402963-17738-1-git-send-email-youquan.song@intel.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e503f9e4
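      A sketch of that check in the thermal LVT initialization path (simplified;
      the mask macro and the variable holding the BSP-saved value are reconstructed
      from context rather than quoted from the patch):
      
      	h = lvtthmr_init;	/* LVT thermal value saved from the BSP */
      	/*
      	 * Restore the BSP value on this AP only if the BIOS programmed a
      	 * non-fixed (e.g. SMI) delivery mode; otherwise leave the reset value
      	 * alone instead of installing vector 0 in fixed delivery mode.
      	 */
      	if ((h & APIC_DM_FIXED_MASK) != APIC_DM_FIXED)
      		apic_write(APIC_LVTTHMR, lvtthmr_init);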
  25. 20 Apr, 2011 1 commit
  26. 29 Mar, 2011 1 commit
  27. 21 Jan, 2011 1 commit
  28. 04 Jan, 2011 1 commit
  29. 08 Oct, 2010 1 commit
  30. 06 Sep, 2010 1 commit
    • therm_throt.c: Trivial printk message fix for an unsuitable abbreviation of 'thermal' · 592091c0
      Authored by Jin Dongming
      In unexpected_thermal_interrupt(), "LVT TMR interrupt" is used
      in the error message.
      
      I don't think TMR is a suitable abbreviation for thermal:
        1. TMR is already used in the IA-32 Architectures Software Developer's
           Manual as the abbreviation for Trigger Mode Register.
        2. There is no standard abbreviation "TMR" defined for thermal
           in the IA-32 Architectures Software Developer's Manual.
        3. Though it could be understood as Thermal Monitor Register, it is
           also easily misread as a *TIMER* interrupt.
      
      This patch fixes it.
      Signed-off-by: Jin Dongming <jin.dongming@np.css.fujitsu.com>
      Reviewed-by: Jean Delvare <khali@linux-fr.org>
      Cc: Brown Len <len.brown@intel.com>
      Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      LKML-Reference: <4C7C492D.5020704@np.css.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      592091c0
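      The change amounts to rewording the message, roughly as follows (a sketch of
      the resulting line; the exact log level is assumed, not quoted):
      
      	printk(KERN_ERR "CPU%d: Unexpected LVT thermal interrupt!\n",
      	       smp_processor_id());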
  31. 21 Aug, 2010 1 commit
  32. 04 Aug, 2010 2 commits
    • x86, hwmon: Package Level Thermal/Power: power limit · 0199114c
      Authored by Fenghua Yu
      The power limit notification feature is documented in the Intel 64 and IA-32
      Architectures SDM Vol 3A, 14.5.6 Power Limit Notification.
      
      It is first implemented on the Intel Sandy Bridge platform.
      
      The patch handles the notification interrupt. The interrupt handler dumps power
      limit information to log_buf, logs the event in the MCE log, and increments the
      event counters (core_power_limit and package_power_limit). Upper-level
      applications can use the data to monitor system health or diagnose
      functionality/performance issues.
      
      In the future, the event could be handled in a more sophisticated way.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      LKML-Reference: <1280448826-12004-5-git-send-email-fenghua.yu@intel.com>
      Reviewed-by: Len Brown <len.brown@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      0199114c
    • x86, hwmon: Package Level Thermal/Power: thermal throttling handler · 55d435a2
      Authored by Fenghua Yu
      Add package level thermal throttle interrupt support. The interrupt handler
      increments the package level thermal throttle count. It also logs the event in
      the MCE log.
      
      The package level thermal throttle interrupt happens across threads in a
      package. Each thread handles the interrupt individually. A user level
      application is expected to derive the correct event count and log from the
      package/thread topology. The same applies to the core level interrupt handler.
      In the future, the interrupt may be reported only once per package or per core.
      
      core_throttle_count and package_throttle_count are used for the user interface.
      Previously only throttle_count was used for the core throttle count. If you
      think the new core_throttle_count name breaks the user interface, I can change
      this part.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      LKML-Reference: <1280448826-12004-4-git-send-email-fenghua.yu@intel.com>
      Reviewed-by: Len Brown <len.brown@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      55d435a2
  33. 28 May, 2010 1 commit
  34. 14 Dec, 2009 2 commits