1. 11 March 2014, 1 commit
  2. 09 February 2014, 3 commits
  3. 25 January 2014, 1 commit
  4. 16 January 2014, 2 commits
    • x86, apic: Make disabled_cpu_apicid static read_mostly, fix typos · 5b4d1dbc
      Committed by H. Peter Anvin
      Make disabled_cpu_apicid static and read_mostly, and fix a couple of
      typos.
      Reported-by: Ingo Molnar <mingo@kernel.org>
      Link: http://lkml.kernel.org/r/20140115182511.GA22737@gmail.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
    • x86, apic, kexec: Add disable_cpu_apicid kernel parameter · 151e0c7d
      Committed by HATAYAMA Daisuke
      Add the disable_cpu_apicid kernel parameter. To use it, specify the
      initial APIC ID of the CPU you want to disable.
      
      This is mostly used by the kdump 2nd kernel, which must disable the
      BSP in order to wake up multiple CPUs without causing a system reset
      or hang from an AP sending INIT to the BSP.
      
      Kdump users first figure out the initial APIC ID of the BSP (CPU0 in
      the 1st kernel), for example from /proc/cpuinfo, and then pass the
      obtained APIC ID to the 2nd kernel through this parameter.
      
      Doing this manually at each boot is awkward, however, so it should be
      automated by user-land service scripts, for example kexec-tools on
      Fedora/RHEL distributions.
      
      This design is more flexible than automatically disabling the BSP at
      kernel boot time: at boot time the kernel has no choice but to consult
      the ACPI/MP tables to obtain the initial APIC ID of the BSP, so that
      method is not applicable to systems without such BIOS tables.
      
      One assumption behind this design is that users obtain the initial
      APIC ID of the BSP while the system is still healthy, so the BSP is
      still uniquely CPU0. Thus only one initial APIC ID can be specified
      through the kernel parameter.
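
      A minimal sketch of how such a parameter could be parsed at early
      boot (the parameter name and disabled_cpu_apicid come from the patch;
      the exact placement, type and setup-function name are assumptions):

      	static int disabled_cpu_apicid __read_mostly = -1;	/* -1: nothing disabled */

      	static int __init apic_set_disabled_cpu_apicid(char *arg)
      	{
      		if (!arg || !get_option(&arg, &disabled_cpu_apicid))
      			return -EINVAL;
      		return 0;
      	}
      	early_param("disable_cpu_apicid", apic_set_disabled_cpu_apicid);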
      
      When comparing against disabled_cpu_apicid, we use read_apic_id(),
      not boot_cpu_physical_apicid, because on some platforms that variable
      is temporarily overwritten with the APIC ID the MP table reports for
      the BSP, and this code runs while the modified value is in place. As
      a result, comparing against boot_cpu_physical_apicid would make the
      disabled_cpu_apicid parameter misbehave for the APIC IDs of APs.
      
      Properly fixing the handling of boot_cpu_physical_apicid would
      require review and testing across a range of platforms and could take
      some time. The fix here is a workaround that keeps the focus on the
      main topic of this patch.
      Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Link: http://lkml.kernel.org/r/20140115064458.1545.38775.stgit@localhost6.localdomain6
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  5. 14 January 2014, 1 commit
    • x86/apic: Read Error Status Register correctly · 60283df7
      Committed by Richard Weinberger
      Currently we do a read, a dummy write and a final read to fetch the
      error code, and take the value from the final read. This is not the
      recommended procedure and leads to corrupted/lost ESR values.
      
      Intel(c) 64 and IA-32 Architectures Software Developer's Manual,
      Combined Volumes 1, 2ABC, 3ABC, Section 10.5.3 states:
      
        Before attempt to read from the ESR, software should first
        write to it. (The value written does not affect the values read
        subsequently; only zero may be written in x2APIC mode.) This
        write clears any previously logged errors and updates the ESR
        with any errors detected since the last write to the ESR.
        This write also rearms the APIC error interrupt triggering
        mechanism.
      
      This patch removes the first read so that we conform to the manual.
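
      The resulting access pattern is roughly the following (a sketch using
      the generic apic_write()/apic_read() accessors; surrounding context
      omitted):

      	apic_write(APIC_ESR, 0);	/* arm/clear first; only 0 may be written in x2APIC mode */
      	v = apic_read(APIC_ESR);	/* read the freshly latched error bits */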
      
      On my (very old) Pentium MMX SMP system this patch fixes the
      issue that APIC errors:
      
        a) are not always reported and
        b) are reported with false error numbers.
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Cc: seiji.aguchi@hds.com
      Cc: rientjes@google.com
      Cc: konrad.wilk@oracle.com
      Cc: bp@alien8.de
      Cc: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/1389685487-20872-1-git-send-email-richard@nod.at
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  6. 13 January 2014, 1 commit
  7. 12 January 2014, 1 commit
    • x86/irq: Fix do_IRQ() interrupt warning for cpu hotplug retriggered irqs · 9345005f
      Committed by Prarit Bhargava
      During heavy CPU-hotplug operations the following spurious kernel warnings
      can trigger:
      
        do_IRQ: No ... irq handler for vector (irq -1)
      
        [ See: https://bugzilla.kernel.org/show_bug.cgi?id=64831 ]
      
      When downing a cpu it is possible that there are unhandled irqs
      left in the APIC IRR register.  The following code path shows
      how the problem can occur:
      
       1. CPU 5 is to go down.

       2. cpu_disable() on CPU 5 executes with the interrupt flag cleared
          by local_irq_save() via stop_machine().

       3. IRQ 12 asserts on CPU 5, setting IRR but not ISR because the
          interrupt flag is cleared (the CPU is unable to handle the irq).

       4. IRQs are migrated off of CPU 5, and the vectors' irqs are set
          to -1.

       5. stop_machine() finishes cpu_disable().

       6. cpu_die() for CPU 5 executes in normal context.

       7. CPU 5 attempts to handle IRQ 12 because the IRR is set for
          IRQ 12.  The code attempts to find the vector's IRQ and cannot
          because it has been set to -1.

       8. do_IRQ() displays a warning about CPU 5 IRQ 12.
      
      I added a debug printk to output which CPU & vector was retriggered
      and discovered that we are getting bogus events.  I see a 100%
      correlation between this debug printk in fixup_irqs() and the
      do_IRQ() warning.
      
      This patchset resolves this by adding definitions for
      VECTOR_UNDEFINED (-1) and VECTOR_RETRIGGERED (-2) and modifying the
      code to use them, as sketched below.
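
      A hedged sketch of the new definitions and the do_IRQ() check they
      enable (only the two macro names come from the patch description;
      placement and the exact warning text are abbreviated):

      	/* arch/x86/include/asm/hw_irq.h (sketch) */
      	#define VECTOR_UNDEFINED	-1	/* vector never assigned or already freed */
      	#define VECTOR_RETRIGGERED	-2	/* cleared during hotplug, IRR replay expected */

      	/* do_IRQ(): warn only for vectors that are truly undefined */
      	if (!handle_irq(irq, regs)) {
      		ack_APIC_irq();
      		if (irq != VECTOR_RETRIGGERED)
      			pr_emerg_ratelimited("%s: No irq handler for vector (irq %d)\n",
      					     __func__, irq);
      	}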
      
      Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=64831
      Signed-off-by: Prarit Bhargava <prarit@redhat.com>
      Reviewed-by: Rui Wang <rui.y.wang@intel.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Cc: Yang Zhang <yang.z.zhang@Intel.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: janet.morgan@Intel.com
      Cc: tony.luck@Intel.com
      Cc: ruiv.wang@gmail.com
      Link: http://lkml.kernel.org/r/1388938252-16627-1-git-send-email-prarit@redhat.com
      [ Cleaned up the code a bit. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  8. 07 January 2014, 1 commit
  9. 07 December 2013, 1 commit
    • ACPI: Clean up inclusions of ACPI header files · 8b48463f
      Committed by Lv Zheng
      Replace direct inclusions of <acpi/acpi.h>, <acpi/acpi_bus.h> and
      <acpi/acpi_drivers.h>, which are incorrect, with <linux/acpi.h>
      inclusions and remove some inclusions of those files that aren't
      necessary.
      
      First of all, <acpi/acpi.h>, <acpi/acpi_bus.h> and <acpi/acpi_drivers.h>
      should not be included directly from any files that are built for
      CONFIG_ACPI unset, because that generally leads to build warnings about
      undefined symbols in !CONFIG_ACPI builds.  For CONFIG_ACPI set,
      <linux/acpi.h> includes those files and for CONFIG_ACPI unset it
      provides stub ACPI symbols to be used in that case.
      
      Second, there are ordering dependencies between those files that always
      have to be met.  Namely, it is required that <acpi/acpi_bus.h> be included
      prior to <acpi/acpi_drivers.h> so that the acpi_pci_root declarations the
      latter depends on are always there.  And <acpi/acpi.h> which provides
      basic ACPICA type declarations should always be included prior to any other
      ACPI headers in CONFIG_ACPI builds.  That also is taken care of by
      including <linux/acpi.h> as appropriate.
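
      The typical per-file change is mechanical; illustratively:

      	-#include <acpi/acpi.h>
      	-#include <acpi/acpi_bus.h>
      	-#include <acpi/acpi_drivers.h>
      	+#include <linux/acpi.h>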
      Signed-off-by: Lv Zheng <lv.zheng@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com> (drivers/pci stuff)
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> (Xen stuff)
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  10. 15 November 2013, 1 commit
  11. 15 October 2013, 1 commit
  12. 24 September 2013, 3 commits
    • x86/UV: Update UV support for external NMI signals · 0d12ef0c
      Committed by Mike Travis
      The current UV NMI handler has not been updated for the changes in
      the system NMI handler and the perf operations.  The UV NMI handler
      reads an MMR in the UV Hub to check whether the NMI event was caused
      by the external 'system NMI' that the operator can initiate on the
      System Mgmt Controller.
      
      The problem arises when the perf tools are running, causing
      millions of perf events per second on very large CPU count
      systems.  Previously this was okay because the perf NMI handler
      ran at a higher priority on the NMI call chain and if the NMI
      was a perf event, it would stop calling other NMI handlers
      remaining on the NMI call chain.
      
      Now the system NMI handler calls all the handlers on the NMI
      call chain including the UV NMI handler.  This causes the UV NMI
      handler to read the MMRs at the same millions per second rate.
      This can lead to significant performance loss and possible
      system failures.  It also can cause thousands of 'Dazed and
      Confused' messages being sent to the system console.  This
      effectively makes perf tools unusable on UV systems.
      
      To avoid this excessive overhead when perf tools are running,
      this code has been optimized to minimize reading of the MMRs as
      much as possible, by moving to the NMI_UNKNOWN notifier chain.
      This chain is called only when all the users on the standard
      NMI_LOCAL call chain have been called and none of them have
      claimed this NMI.
      
      There is an exception where the NMI_LOCAL notifier chain is used.
      When the perf tools are in use, it's possible that the UV NMI was
      captured by some other NMI handler and then either ignored or
      mistakenly processed as a perf event.  We set a per_cpu ('ping') flag
      for those CPUs that ignored the initial NMI, and then send them an
      IPI NMI signal.  The NMI_LOCAL handler on each cpu does not need to
      read the MMR, but instead checks the in-memory flag indicating it was
      pinged.  There are two module variables: 'ping_count', indicating how
      many requested NMI events occurred, and 'ping_misses', indicating how
      many stray NMI events occurred.  The misses are most likely perf
      events, so together the counts show the overhead of the perf NMI
      interrupts and how many MMR reads were avoided.
      
      This patch also minimizes the reads of the MMRs by having the
      first cpu entering the NMI handler on each node set a per HUB
      in-memory atomic value.  (Having a per HUB value avoids sending
      lock traffic over NumaLink.)  Both types of UV NMIs from the SMI
      layer are supported.
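
      Registration on the two chains might look roughly like this (a
      sketch; register_nmi_handler() is the standard x86 API, the handler
      names here are illustrative):

      	/* Consulted only after every NMI_LOCAL handler has declined the NMI. */
      	register_nmi_handler(NMI_UNKNOWN, uv_handle_nmi, NMI_FLAG_FIRST, "uv");
      	/* Catches pinged CPUs whose NMI was swallowed by another handler. */
      	register_nmi_handler(NMI_LOCAL, uv_handle_nmi_ping, NMI_FLAG_FIRST, "uvping");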
      Signed-off-by: Mike Travis <travis@sgi.com>
      Reviewed-by: Dimitri Sivanich <sivanich@sgi.com>
      Reviewed-by: Hedi Berriche <hedi@sgi.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Link: http://lkml.kernel.org/r/20130923212500.353547733@asylum.americas.sgi.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/UV: Move NMI support · 1e019421
      Committed by Mike Travis
      This patch moves the UV NMI support from the x2apic file to a
      new separate uv_nmi.c file in preparation for the next sequence
      of patches. It prevents upcoming bloat of the x2apic file, and
      has the added benefit of putting the upcoming /sys/module
      parameters under the name 'uv_nmi' instead of 'x2apic_uv_x',
      which was obscure.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Reviewed-by: Dimitri Sivanich <sivanich@sgi.com>
      Reviewed-by: Hedi Berriche <hedi@sgi.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Link: http://lkml.kernel.org/r/20130923212500.183295611@asylum.americas.sgi.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86 / ACPI: simplify _acpi_map_lsapic() · 7e1f85f9
      Committed by Jiang Liu
      acpi_register_lapic() generates a new logical cpu number and maps it
      to the local APIC id; this logical cpu number can be returned to
      simplify the _acpi_map_lsapic() implementation.
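
      The caller can then consume the return value directly; a hedged
      sketch of the simplified flow (error handling abbreviated, exact
      shape assumed):

      	cpu = acpi_register_lapic(physid, ACPI_MADT_ENABLED);
      	if (cpu < 0) {
      		pr_info("Unable to map lapic to logical cpu number\n");
      		return cpu;
      	}
      	*pcpu = cpu;
      	return 0;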
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  13. 26 August 2013, 1 commit
  14. 20 August 2013, 1 commit
    • x86/ioapic/kcrash: Prevent crash_kexec() from deadlocking on ioapic_lock · 17405453
      Committed by Yoshihiro YUNOMAE
      Prevent crash_kexec() from deadlocking on ioapic_lock. When
      crash_kexec() is executed on a CPU, the CPU will take ioapic_lock in
      disable_IO_APIC(). So if the cpu receives the NMI while ioapic_lock
      is already held, a deadlock will happen.
      
      In this patch, ioapic_lock is zapped/initialized before disable_IO_APIC().
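      
      Sketched at the crash-shutdown call site (the helper name
      ioapic_zap_locks() is an assumption; the patch's point is only that
      the lock is re-initialized before disable_IO_APIC()):

      	/* We may be in NMI context with ioapic_lock held by an
      	 * interrupted task, so forcibly re-initialize the lock. */
      	ioapic_zap_locks();
      	disable_IO_APIC();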
      
      You can reproduce this deadlock the following way:
      
      1. Add mdelay(1000) after raw_spin_lock_irqsave() in
         native_ioapic_set_affinity()@arch/x86/kernel/apic/io_apic.c
      
         Although the deadlock can occur without this modification, it
         increases the probability of hitting it.
      
      2. Build and install the kernel
      
       3. Set up the OS so that it runs panic() and kexec when an NMI is injected
          # echo "kernel.unknown_nmi_panic=1" >> /etc/sysctl.conf
          # vim /etc/default/grub
            add "nmi_watchdog=0 crashkernel=256M" in GRUB_CMDLINE_LINUX line
          # grub2-mkconfig
      
      4. Reboot the OS
      
       5. Run the following command for each vcpu on the guest
           # while true; do echo <CPU num> > /proc/irq/<IO-APIC-edge or IO-APIC-fasteoi>/smp_affinity; done;
          By running this command, cpus repeatedly take ioapic_lock to set affinity.
      
      6. Inject NMI (push a dump button or execute 'virsh inject-nmi <domain>' if you
         use VM). After injecting NMI, panic() is called in an nmi-handler context.
         Then, kexec will normally run in panic(), but the operation will be stopped
         by deadlock on ioapic_lock in crash_kexec()->machine_crash_shutdown()->
         native_machine_crash_shutdown()->disable_IO_APIC()->clear_IO_APIC()->
         clear_IO_APIC_pin()->ioapic_read_entry().
      Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/20130820070107.28245.83806.stgit@yunodevel
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  15. 07 August 2013, 1 commit
  16. 15 July 2013, 1 commit
    • x86: delete __cpuinit usage from all x86 files · 148f9bb8
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      but are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly w/o any corresponding additional change there.
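
      The per-file change is purely mechanical, e.g. (illustrative
      function; the annotation is simply dropped):

      	-void __cpuinit setup_local_APIC(void)
      	+void setup_local_APIC(void)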
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  17. 10 July 2013, 1 commit
  18. 21 June 2013, 3 commits
    • trace,x86: Move creation of irq tracepoints from apic.c to irq.c · 83ab8514
      Committed by Steven Rostedt (Red Hat)
      When CONFIG_X86_LOCAL_APIC is not set, apic.c is not compiled, so the
      irq tracepoints are not created via the CREATE_TRACE_POINTS macro and
      we get the following build error:
      
        LD      init/built-in.o
      arch/x86/built-in.o: In function `trace_x86_platform_ipi_entry':
      linux-test.git/arch/x86/include/asm/trace/irq_vectors.h:66: undefined reference to `__tracepoint_x86_platform_ipi_entry'
      arch/x86/built-in.o: In function `trace_x86_platform_ipi_exit':
      linux-test.git/arch/x86/include/asm/trace/irq_vectors.h:66: undefined reference to `__tracepoint_x86_platform_ipi_exit'
      arch/x86/built-in.o: In function `trace_irq_work_entry':
      linux-test.git/arch/x86/include/asm/trace/irq_vectors.h:72: undefined reference to `__tracepoint_irq_work_entry'
      arch/x86/built-in.o: In function `trace_irq_work_exit':
      linux-test.git/arch/x86/include/asm/trace/irq_vectors.h:72: undefined reference to `__tracepoint_irq_work_exit'
      arch/x86/built-in.o:(__jump_table+0x8): undefined reference to `__tracepoint_x86_platform_ipi_entry'
      arch/x86/built-in.o:(__jump_table+0x14): undefined reference to `__tracepoint_x86_platform_ipi_exit'
      arch/x86/built-in.o:(__jump_table+0x20): undefined reference to `__tracepoint_irq_work_entry'
      arch/x86/built-in.o:(__jump_table+0x2c): undefined reference to `__tracepoint_irq_work_exit'
      make[1]: *** [vmlinux] Error 1
      make: *** [sub-make] Error 2
      
      As irq.c is always compiled for x86, it is a more appropriate location
      to create the irq tracepoints.
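
      The move itself is small; roughly (per the commit description):

      	/* arch/x86/kernel/irq.c */
      	#define CREATE_TRACE_POINTS
      	#include <asm/trace/irq_vectors.h>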
      
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • x86, trace: Add irq vector tracepoints · cf910e83
      Committed by Seiji Aguchi
      [Purpose of this patch]
      
      As Vaibhav explained in the thread below, tracepoints for irq vectors
      are useful.
      
      http://www.spinics.net/lists/mm-commits/msg85707.html
      
      <snip>
      The current interrupt traces from irq_handler_entry and irq_handler_exit
      provide when an interrupt is handled.  They provide good data about when
      the system has switched to kernel space and how it affects the currently
      running processes.
      
      There are some IRQ vectors which trigger the system into kernel space,
      which are not handled in generic IRQ handlers.  Tracing such events gives
      us the information about IRQ interaction with other system events.
      
      The trace also tells where the system is spending its time.  We want to
      know which cores are handling interrupts and how they are affecting other
      processes in the system.  Also, the trace provides information about when
      the cores are idle and which interrupts are changing that state.
      <snip>
      
      On the other hand, my use case is tracing just the local timer event
      and getting the value of the instruction pointer. I previously
      suggested adding an argument to the local timer event to obtain the
      instruction pointer, but it can also be obtained with an external
      module like SystemTap, so I don't need to add any argument to the
      irq vector tracepoints now.
      
      [Patch Description]
      
      Vaibhav's patch shared one pair of tracepoints,
      irq_vector_entry/irq_vector_exit, across all events. But the use case
      above calls for tracing a specific irq vector rather than all events,
      and shared tracepoints would incur overhead from unwanted events.
      
      So, add the following tracepoints instead of introducing
      irq_vector_entry/exit, so that each can be enabled independently:
         - local_timer_vector
         - reschedule_vector
         - call_function_vector
         - call_function_single_vector
         - irq_work_entry_vector
         - error_apic_vector
         - thermal_apic_vector
         - threshold_apic_vector
         - spurious_apic_vector
         - x86_platform_ipi_vector
      
      Also, introduce logic that switches the IDT when tracepoints are
      enabled/disabled, so that the time penalty is zero while they are
      disabled. In detail:
       - Create trace irq handlers with entering_irq()/exiting_irq().
       - Create a new IDT, trace_idt_table, at boot time by adding a logic to
         _set_gate(). It is just a copy of the original idt table.
       - Register the new handlers for tracepoints to the new IDT by introducing
         macros to alloc_intr_gate() called at registering time of irq_vector handlers.
       - Add a check of whether irq vector tracing is on/off to load_current_idt().
         This has to be done below the debug check, for these reasons:
         - Switching to the debug IDT may be kicked while tracing is enabled.
         - On the other hand, switching to the trace IDT is kicked only when
           debugging is disabled.
      
      In addition, the new IDT is created only when CONFIG_TRACING is
      enabled, to avoid it being used for other purposes.
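
      The IDT selection described above might look roughly like this (a
      sketch; the helper names follow the commit's description, their exact
      definitions are assumed):

      	static inline void load_current_idt(void)
      	{
      		if (is_debug_idt_enabled())
      			load_debug_idt();	/* debug IDT takes priority */
      		else if (is_trace_idt_enabled())
      			load_trace_idt();
      		else
      			load_idt((const struct desc_ptr *)&idt_descr);
      	}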
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C323ED.5050708@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
    • x86, trace: Introduce entering/exiting_irq() · eddc0e92
      Committed by Seiji Aguchi
      When implementing tracepoints in interrupt handlers, simply adding
      them in the performance-sensitive path can cause performance problems
      due to the time penalty.
      
      To solve this, one idea is to prepare separate non-trace and trace
      irq handlers and switch between their IDTs at enable/disable time.
      
      So, let's introduce entering_irq()/exiting_irq() for pre/post-
      processing of each irq handler.
      
      A way to use them is as follows.
      
      Non-trace irq handler:
      smp_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	exiting_irq();		/* post-processing of this handler */
      
      }
      
      Trace irq_handler:
      smp_trace_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	trace_irq_entry();	/* tracepoint for irq entry */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	trace_irq_exit();	/* tracepoint for irq exit */
      	exiting_irq();		/* post-processing of this handler */
      
      }
      
      If the tracepoints could be placed outside entering_irq()/exiting_irq()
      as follows, it would look cleaner.
      
      smp_trace_irq_handler()
      {
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      }
      
      But it doesn't work. The problem is where irq_enter()/irq_exit() are
      called: they must be called before trace_irq_entry()/trace_irq_exit(),
      because rcu_irq_enter() must run before any tracepoint is used, as
      tracepoints use RCU to synchronize.
      
      As a possible alternative, we may be able to call irq_enter() first as follows
      if irq_enter() can nest.
      
      smp_trace_irq_handler()
      {
      	irq_enter();
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      	irq_exit();
      }
      
      But it doesn't work, either. If irq_enter() is nested, it incurs a
      time penalty because it has to check whether it was already called.
      That penalty is not desired in performance-sensitive paths, even if
      it is tiny.
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C3238D.9040706@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
  19. 20 June 2013, 1 commit
    • x86: Fix trigger_all_cpu_backtrace() implementation · b52e0a7c
      Committed by Michel Lespinasse
      The following change fixes the x86 implementation of
      trigger_all_cpu_backtrace(), which was previously (accidentally, as
      far as I can tell) disabled so that it always returned false, as on
      architectures that do not implement this function.
      
      trigger_all_cpu_backtrace(), as defined in include/linux/nmi.h,
      should call arch_trigger_all_cpu_backtrace() if available, or
      return false if the underlying arch doesn't implement this
      function.
      
      x86 did provide a suitable arch_trigger_all_cpu_backtrace()
      implementation, but it wasn't actually being used because it was
      declared in asm/nmi.h, which linux/nmi.h doesn't include. Also,
      linux/nmi.h couldn't easily be fixed by including asm/nmi.h,
      because that file is not available on all architectures.
      
      I am proposing to fix this by moving the x86 definition of
      arch_trigger_all_cpu_backtrace() to asm/irq.h.
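
      The fix boils down to declaring it where linux/nmi.h can see it; a
      sketch of the moved declaration (the macro self-definition is the
      usual way linux/nmi.h detects an arch override):

      	/* arch/x86/include/asm/irq.h */
      	extern void arch_trigger_all_cpu_backtrace(void);
      	#define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace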
      
      Tested via: echo l > /proc/sysrq-trigger
      
      Before the change, this used a fallback implementation which shows
      backtraces on active CPUs (using smp_call_function_interrupt()).
      
      After the change, this shows NMI backtraces on all CPUs.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1370518875-1346-1-git-send-email-walken@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  20. 31 May 2013, 1 commit
  21. 30 May 2013, 1 commit
  22. 20 February 2013, 1 commit
  23. 12 February 2013, 1 commit
  24. 11 February 2013, 1 commit
    • x86/apic: Work around boot failure on HP ProLiant DL980 G7 Server systems · cb214ede
      Committed by Stoney Wang
      When a HP ProLiant DL980 G7 Server boots a regular kernel,
      there will be intermittent lost interrupts which could
      result in a hang or (in extreme cases) data loss.
      
      The reason is that this system only supports x2apic physical
      mode, while the kernel boots with a logical-cluster default
      setting.
      
      This bug can be worked around by specifying the "x2apic_phys" or
      "nox2apic" boot option, but we want to handle this system
      without requiring manual workarounds.
      
      The BIOS sets ACPI_FADT_APIC_PHYSICAL in the FADT. As all APIC IDs
      are smaller than 255, the BIOS needs to hand control to the OS in
      xapic mode, according to the x2apic spec, chapter 2.9.
      
      The current code handles x2apic as follows when the BIOS hands over
      with xapic mode enabled and the user specifies x2apic_phys or the
      FADT indicates PHYSICAL:
      
      1. During the MADT OEM check, the apic driver is first set to the
         xapic logical or xapic phys driver.
      
      2. enable_IR_x2apic() will enable x2apic_mode.
      
      3. If the user specified x2apic_phys on the boot line,
         x2apic_phys_probe() will install the correct x2apic phys driver
         and use x2apic phys mode. Otherwise it will skip that driver and
         let x2apic_cluster_probe() take over and install the x2apic
         cluster driver (the wrong one), even though the FADT indicates
         PHYSICAL, because x2apic_phys_probe() does not check the FADT
         PHYSICAL flag.
      
      Add a check of x2apic_fadt_phys in x2apic_phys_probe() to fix the
      problem, as sketched below.
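
      A sketch of the fixed probe (x2apic_fadt_phys as a helper reading
      the FADT indication; the exact shape in
      arch/x86/kernel/apic/x2apic_phys.c may differ slightly):

      	static int x2apic_phys_probe(void)
      	{
      		/* Honor the FADT PHYSICAL indication, not just the boot option. */
      		if (x2apic_mode && (x2apic_phys || x2apic_fadt_phys()))
      			return 1;
      
      		return apic == &apic_x2apic_phys;
      	}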
      Signed-off-by: Stoney Wang <song-bo.wang@hp.com>
      [ updated the changelog and simplified the code ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: stable@kernel.org
      Link: http://lkml.kernel.org/r/1360263182-16226-1-git-send-email-yinghai@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  25. 28 January 2013, 9 commits