1. 04 Jun 2009, 11 commits
    • x86, mce: support action-optional machine checks · 9b1beaf2
      Andi Kleen committed
      Newer Intel CPUs support a new class of machine checks called recoverable
      action optional.
      
      Action Optional means that the CPU detected some form of corruption in
      the background and tells the OS about it using a machine check
      exception. The OS can then take appropriate action, like killing the
      process with the corrupted data or logging the event properly to disk.
      
      This is done by the new generic high level memory failure handler added
      in an earlier patch. The high level handler takes the address of the
      failed memory and does the appropriate action, like killing the process.
      
      In this version of the patch the high level handler is stubbed out
      with a weak function to not create a direct dependency on the hwpoison
      branch.
      
      The high level handler cannot be directly called from the machine check
      exception though, because it has to run in a defined process context to
      be able to sleep when taking VM locks (it is not expected to sleep for
      long, just in some exceptional cases like lock contention).
      
      Thus the MCE handler has to queue a work item for process context,
      trigger process context and then call the high level handler from there.
      
      This patch adds two paths to process context: through a per thread kernel
      exit notify_user() callback or through a high priority work item.
      The first runs when the process exits back to user space, the other when
      it goes to sleep and there is no higher priority process.
      
      The machine check handler will schedule both, and whichever runs first
      will grab the event. This is done because a quick reaction to this
      event is critical to avoid a potentially more fatal machine check
      when the corruption is consumed.
      
      There is a simple lockless ring buffer to queue the corrupted
      addresses between the exception handler and the process context handler.
      Then in process context it just calls the high level VM code with
      the corrupted PFNs.
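
      A minimal sketch of such a per-CPU single-producer/single-consumer
      ring buffer (the names mce_ring, MCE_RING_SIZE, mce_ring_add and
      mce_ring_get are illustrative, not necessarily those in the patch):

      #define MCE_RING_SIZE 16        /* illustrative size */

      struct mce_ring {
              unsigned short start;   /* first occupied slot */
              unsigned short end;     /* first free slot */
              unsigned long ring[MCE_RING_SIZE];      /* corrupted PFNs */
      };
      static DEFINE_PER_CPU(struct mce_ring, mce_ring);

      /* Producer, called from the #MC exception: never blocks, drops on overflow */
      static int mce_ring_add(unsigned long pfn)
      {
              struct mce_ring *r = &__get_cpu_var(mce_ring);
              unsigned next = (r->end + 1) % MCE_RING_SIZE;

              if (next == r->start)   /* ring full: lose the event */
                      return -1;
              r->ring[r->end] = pfn;
              wmb();                  /* publish the data before moving end */
              r->end = next;
              return 0;
      }

      /* Consumer, called in process context with preemption disabled */
      static int mce_ring_get(unsigned long *pfn)
      {
              struct mce_ring *r = &__get_cpu_var(mce_ring);

              if (r->start == r->end) /* ring empty */
                      return 0;
              *pfn = r->ring[r->start];
              r->start = (r->start + 1) % MCE_RING_SIZE;
              return 1;
      }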
      
      The patch adds the code required to extract the failed address from
      the CPU's machine check registers. It doesn't try to handle all
      possible cases -- the specification has six different ways to specify
      a memory address -- but only the linear address.
      
      Most of the required checking has already been done earlier in the
      mce_severity rule checking engine.  Following the Intel
      recommendations, Action Optional errors are only enabled for known
      situations (encoded in MCACODs). The errors are ignored otherwise,
      because they are action optional.
      
      v2: Improve comment, disable preemption while processing ring buffer
          (reported by Ying Huang)
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      9b1beaf2
    • x86, mce: define MCE_VECTOR · 8fa8dd9e
      Andi Kleen committed
      Add MCE_VECTOR for the #MC exception.
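
      The change itself is essentially a one-liner; #MC is exception
      vector 18 (0x12) on x86:

      #define MCE_VECTOR 0x12         /* #MC machine check exception */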
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      8fa8dd9e
    • x86, mce: rename mce_notify_user to mce_notify_irq · 9ff36ee9
      Andi Kleen committed
      Rename the mce_notify_user function to mce_notify_irq. The next
      patch will split the wakeup handling of interrupt context
      and of process context, and it's better to give the function a clearer
      name beforehand.
      
      Contains a fix from Ying Huang
      
      [ Impact: cleanup ]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      9ff36ee9
    • x86: fix panic with interrupts off (needed for MCE) · 4ef702c1
      Andi Kleen committed
      For some time each panic() called with interrupts disabled
      triggered the !irqs_disabled() WARN_ON in smp_call_function(),
      producing ugly backtraces and confusing users.
      
      This is a common situation with machine checks, for example, which
      tend to call panic with interrupts disabled, but it will also hit
      in other situations, e.g. a panic during early boot.  In fact it
      means that panic cannot be called in many circumstances, which
      would be bad.
      
      This all started with the new fancy queued smp_call_function,
      which is then used by the shutdown path to shut down the other
      CPUs.
      
      On closer examination it turned out that the fancy RCU
      smp_call_function() does lots of things not suitable in a panic
      situation anyway, like allocating memory and relying on complex
      system state.
      
      I originally tried to patch this over by checking for panic
      there, but it was quite complicated and the original patch
      was also not very popular.  This also didn't fix some of the
      underlying complexity problems.
      
      The new code in post 2.6.29 tries to patch around this by
      checking for oops_in_progress, but that is not enough to make
      this fully safe and I don't think that's a real solution
      because panic has to be reliable.
      
      So instead use a dedicated vector to reboot.  This makes the reboot
      code extremely straightforward, which is definitely a big plus
      in a panic situation where it is important to avoid relying on
      too much kernel state.  The new simple code is also safe to be
      called from an interrupts-off region because it is very simple.
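
      A sketch of the scheme (treat names and details as illustrative of
      the description above, not the exact patch):

      /* panicking CPU: raise the dedicated vector on all other CPUs */
      static void smp_send_reboot(void)
      {
              apic->send_IPI_allbutself(REBOOT_VECTOR);
      }

      /* runs on each other CPU as soon as it can take the interrupt */
      asmlinkage void smp_reboot_interrupt(void)
      {
              ack_APIC_irq();
              stop_this_cpu(NULL);    /* disable interrupts and halt */
      }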
      
      There can be situations where it is important that panic
      is reliable.  For example on a fatal machine check the panic
      is needed to get the system up again and running as quickly
      as possible.  So it's important that panic is reliable and that
      all the functions it calls are simple.
      
      This is why I came up with this simple vector scheme.
      It's very hard to beat in simplicity.  Vectors are not
      particularly precious anymore since all big systems are
      using per CPU vectors.
      
      Another possibility would have been to use an NMI similar
      to kdump, but there is still the problem that NMIs don't
      work reliably on some systems due to BIOS issues.  NMIs
      would have been able to stop CPUs running with interrupts
      off too.  For the sake of universal reliability I opted for
      using a non-NMI vector for now.
      
      I put the reboot vector into the highest priority bucket of
      the APIC vectors and moved the 64-bit UV_BAU message down
      into the next lower priority bucket instead.
      
      [ Impact: bug fix, fixes an old regression ]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      4ef702c1
    • x86, mce: implement new status bits · ed7290d0
      Andi Kleen committed
      The x86 architecture recently added some new machine check status bits:
      S(ignalled) and AR (Action-Required). Signalled allows checking
      whether a specific event caused an exception or was just logged through CMCI.
      AR allows the kernel to decide if an event needs immediate action
      or can be delayed or ignored.
      
      Implement support for these new status bits. mce_severity() uses
      the new bits to grade the machine check correctly and decide what
      to do. The exception handler uses AR to decide whether to kill or not.
      The S bit is used to separate events between the poll/CMCI handler
      and the exception handler.
      
      Classical UC always leads to panic. That was true before anyway,
      because the existing CPUs always passed a PCC with it.
      
      Also correct the rules for whether to kill in user or kernel context
      and how to handle a missing RIPV.
      
      The machine check handler largely uses the mce-severity grading
      engine now instead of making its own decisions. This means the logic
      is centralized in one place.  This is useful because it has to be
      evaluated multiple times.
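
      A much simplified sketch of what such a table-driven grading
      engine looks like (entries are illustrative; the real table has
      many more rules, including the S/AR combinations):

      static struct severity {
              u64 mask;       /* status bits to test */
              u64 result;     /* required value of those bits */
              int sev;        /* resulting severity grade */
              const char *msg;
      } severities[] = {
              /* first match wins: order encodes priority */
              { MCI_STATUS_VAL, 0, MCE_NO_SEVERITY, "Not valid" },
              { MCI_STATUS_VAL|MCI_STATUS_UC|MCI_STATUS_PCC,
                MCI_STATUS_VAL|MCI_STATUS_UC|MCI_STATUS_PCC,
                MCE_PANIC_SEVERITY, "Processor context corrupt" },
              /* ... S and AR combinations graded here ... */
              { 0, 0, MCE_SOME_SEVERITY, "Default" }  /* catch-all */
      };

      static int mce_severity(struct mce *m)
      {
              struct severity *s;

              for (s = severities;; s++)
                      if ((m->status & s->mask) == s->result)
                              return s->sev;
      }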
      
      v2: Some rule fixes; Add AO events
      Fix RIPV, RIPV|EIPV order (Ying Huang)
      Fix UCNA with AR=1 message (Ying Huang)
      Add comment about panicking in m_c_p.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      ed7290d0
    • x86, mce: implement bootstrapping for machine check wakeups · ccc3c319
      Andi Kleen committed
      Machine checks support waking up the mcelog daemon quickly.
      
      The original wake up code for this was pretty ugly, relying on
      an idle notifier and a special process flag. The reason it did
      it this way is that the machine check handler is not subject
      to normal interrupt locking rules so it's not safe
      to call wake_up().  Instead it set a process flag
      and then did the wakeup either on syscall return
      or in the idle notifier.
      
      This patch adds a new "bootstrapping" method as a replacement.
      
      The idea is that the handler checks if it's in a state where
      it is unsafe to call wake_up(). If it's safe it calls it directly.
      When it's not safe -- that is, it interrupted a critical
      section with interrupts disabled -- it uses a new "self IPI" to trigger
      an IPI to its own CPU. This can be done safely because IPI
      triggers are atomic with some care. The IPI is delivered
      once interrupts are re-enabled, and the handler can then safely
      call wake_up().
      
      When APICs are disabled the event is just queued and will be picked up
      eventually by the next polling timer. I think that's a reasonable
      compromise, since it should only happen quite rarely.
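
      In rough pseudo-C the decision looks like this (MCE_SELF_VECTOR is
      the new self-IPI vector; the exact guard conditions in the patch
      may differ):

      static void mce_report_event(struct pt_regs *regs)
      {
              /* interrupted context had interrupts on: wake_up() is safe */
              if (regs->flags & X86_EFLAGS_IF) {
                      mce_notify_irq();
                      return;
              }

              /*
               * Interrupts were off: raise a self-IPI instead.  It is
               * delivered once the interrupted code re-enables interrupts,
               * and its handler can call wake_up() safely.  Without an
               * APIC, fall through: the polling timer picks the event up.
               */
              if (cpu_has_apic)
                      apic->send_IPI_self(MCE_SELF_VECTOR);
      }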
      
      Contains fixes from Ying Huang.
      
      [ solve conflict on irqinit, make it work on 32bit (entry_arch.h) - HS ]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      ccc3c319
    • x86, mce: extend struct mce user interface with more information. · 8ee08347
      Andi Kleen committed
      Experience has shown that struct mce, which is used to pass a machine
      check to the user space daemon, currently has a few limitations.  Also
      some data which is useful to print at panic level is missing.
      
      This patch addresses most of them. The same information is also
      printed out together with the mce panic output.
      
      struct mce can be painlessly extended in a compatible way; the mcelog
      user space code just ignores additional fields with a warning.
      
      - It doesn't provide a wall time timestamp. There have been a few
        complaints about that. Fix that by adding a 64-bit time_t.
      
      - It doesn't provide the exact CPU identification. This makes
        it awkward for mcelog to decode the event correctly, especially
        when there are variations in the supported MCE codes on different
        CPU models or when mcelog is running on a different host after a panic.
        Previously the administrator had to specify the correct CPU
        when mcelog ran on a different host, but with the greater variation
        in machine checks now it's better to auto-detect that.
        It's also useful for more detailed analysis of CPU events.
        Pass CPUID 1.EAX and the cpu vendor (as encoded in processor.h) instead.
      
      - Socket ID and initial APIC ID are useful to report because they
        allow identifying the failing CPU in some (not all) cases.
        This is also especially useful for the panic situation.
        This addresses one of the complaints from Thomas Gleixner earlier.
      
      - The MCG capabilities MSR needs to be reported for some advanced
        error processing in mcelog.
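
      The additions amount to new fields appended to struct mce, roughly
      like this (a sketch; the field names are illustrative and may not
      match the patch exactly):

      struct mce {
              /* ... existing fields ... */
              __u64 time;             /* wall time_t when error was detected */
              __u32 cpuvendor;        /* CPU vendor, as encoded in processor.h */
              __u32 cpuid;            /* CPUID 1.EAX */
              __u32 socketid;         /* CPU socket ID */
              __u32 apicid;           /* CPU initial APIC ID */
              __u64 mcgcap;           /* MCG_CAP MSR: machine check capabilities */
      };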
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      8ee08347
    • x86, mce: support more than 256 CPUs in struct mce · d620c67f
      Andi Kleen committed
      The old struct mce was limited to 256 CPUs, but x86 Linux supports
      more than that now with x2apic. Add a new field extcpu to report the
      extended CPU number.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      d620c67f
    • x86, mce: store record length into memory struct mce anchor · f6fb0ac0
      Andi Kleen committed
      This makes it easier for tools that want to extract the mcelog out of
      crash images or memory dumps to adapt to a changing struct mce size.
      The length field replaces padding, so it's fully compatible.
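
      A sketch of the resulting layout (recordlen is how I read the
      description; it takes over an existing padding slot):

      struct mce_log {
              char signature[12];     /* "MACHINECHECK" */
              unsigned len;           /* = MCE_LOG_LEN */
              unsigned next;
              unsigned flags;
              unsigned recordlen;     /* sizeof(struct mce), was padding */
              struct mce entry[MCE_LOG_LEN];
      };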
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      f6fb0ac0
    • x86, mce: add MCE poll count to /proc/interrupts · ca84f696
      Andi Kleen committed
      Keep a count of the machine check polls (or CMCI events) in
      /proc/interrupts.
      
      Andi needs this for debugging, but it's also useful in general
      to see what the kernel is doing.
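
      The mechanism is just a per-CPU counter bumped on each poll plus a
      new row in the /proc/interrupts show function; a sketch (the "MCP"
      label and the names are illustrative):

      static DEFINE_PER_CPU(unsigned long, mce_poll_count);

      /* in machine_check_poll(): one increment per poll/CMCI sweep */
      __get_cpu_var(mce_poll_count)++;

      /* in the /proc/interrupts show function: one column per CPU */
      seq_printf(p, "%*s: ", prec, "MCP");
      for_each_online_cpu(j)
              seq_printf(p, "%10lu ", per_cpu(mce_poll_count, j));
      seq_printf(p, "  Machine check polls\n");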
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      ca84f696
    • x86, mce: add machine check exception count in /proc/interrupts · 01ca79f1
      Andi Kleen committed
      Useful for debugging, but it's also good general policy
      to have a counter for all special interrupts there. This makes it easier
      to diagnose where a CPU is spending its time.
      
      [ Impact: feature, debugging tool ]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      01ca79f1
  2. 29 May 2009, 10 commits
  3. 28 May 2009, 1 commit
  4. 18 May 2009, 2 commits
    • x86, irq: don't call mp_config_acpi_gsi() if update_mptable is not enabled · f1bdb523
      Yinghai Lu committed
      Len expressed concern that the update_mptable feature has
      side-effects on the ACPI code.
      
      Make sure explicitly that the code only ever gets called if
      the (default disabled) update_mptable boot quirk option is
      enabled.
      
      [ Impact: isolate the update_mptable feature from ACPI code more ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Len Brown <lenb@kernel.org>
      LKML-Reference: <4A0DC832.5090200@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f1bdb523
    • x86, apic: introduce io_apic_irq_attr · e5198075
      Yinghai Lu committed
      According to Ingo, the io_apic irq-setup related functions have too
      many parameters with a repetitive signature.
      
      So reduce these functions to fewer parameters by passing a pointer
      to a newly defined io_apic_irq_attr structure (sketched below).
      
      v2: io_apic_irq ==> irq_attr
          triggering ==> trigger
      
      v3: add set_io_apic_irq_attr
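
      The structure and its setter are small; a sketch consistent with
      the description:

      struct io_apic_irq_attr {
              int ioapic;
              int ioapic_pin;
              int trigger;
              int polarity;
      };

      static inline void set_io_apic_irq_attr(struct io_apic_irq_attr *irq_attr,
                                              int ioapic, int ioapic_pin,
                                              int trigger, int polarity)
      {
              irq_attr->ioapic     = ioapic;
              irq_attr->ioapic_pin = ioapic_pin;
              irq_attr->trigger    = trigger;
              irq_attr->polarity   = polarity;
      }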
      
      [ Impact: cleanup ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
      Cc: Len Brown <lenb@kernel.org>
      LKML-Reference: <4A08ACD3.2070401@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e5198075
  5. 16 May 2009, 1 commit
    • x86: Fix performance regression caused by paravirt_ops on native kernels · b4ecc126
      Jeremy Fitzhardinge committed
      Xiaohui Xin and some other folks at Intel have been looking into what's
      behind the performance hit of paravirt_ops when running native.
      
      It appears that the hit is entirely due to the paravirtualized
      spinlocks introduced by:
      
       | commit 8efcbab6
       | Date:   Mon Jul 7 12:07:51 2008 -0700
       |
       |     paravirt: introduce a "lock-byte" spinlock implementation
      
      The extra call/return in the spinlock path is somehow
      causing an increase in the cycles/instruction of somewhere around 2-7%
      (seems to vary quite a lot from test to test).  The working theory is
      that the CPU's pipeline is getting upset about the
      call->call->locked-op->return->return, and seems to be failing to
      speculate (though I haven't seen anything definitive about the precise
      reasons).  This doesn't entirely make sense, because the performance
      hit is also visible on unlock and other operations which don't involve
      locked instructions.  But spinlock operations clearly swamp all the
      other pvops operations, even though I can't imagine that they're
      nearly as common (there's only a .05% increase in instructions
      executed).
      
      If I disable just the pv-spinlock calls, my tests show that pvops is
      identical to non-pvops performance on native (my measurements show that
      it is actually about .1% faster, but Xiaohui shows a .05% slowdown).
      
      Summary of results, averaging 10 runs of the "mmperf" test, using a
      no-pvops build as baseline:
      
      		nopv		Pv-nospin	Pv-spin
      CPU cycles	100.00%		99.89%		102.18%
      instructions	100.00%		100.10%		100.15%
      CPI		100.00%		99.79%		102.03%
      cache ref	100.00%		100.84%		100.28%
      cache miss	100.00%		90.47%		88.56%
      cache miss rate	100.00%		89.72%		88.31%
      branches	100.00%		99.93%		100.04%
      branch miss	100.00%		103.66%		107.72%
      branch miss rt	100.00%		103.73%		107.67%
      wallclock	100.00%		99.90%		102.20%
      
      The clear effect here is that the 2% increase in CPI is
      directly reflected in the final wallclock time.
      
      (The other interesting effect is that the more ops are
      out of line calls via pvops, the lower the cache access
      and miss rates.  Not too surprising, but it suggests that
      the non-pvops kernel is over-inlined.  On the flipside,
      the branch misses go up correspondingly...)
      
      So, what's the fix?
      
      Paravirt patching turns all the pvops calls into direct calls, so
      _spin_lock etc do end up having direct calls.  For example, the compiler
      generated code for paravirtualized _spin_lock is:
      
      <_spin_lock+0>:		mov    %gs:0xb4c8,%rax
      <_spin_lock+9>:		incl   0xffffffffffffe044(%rax)
      <_spin_lock+15>:	callq  *0xffffffff805a5b30
      <_spin_lock+22>:	retq
      
      The indirect call will get patched to:
      <_spin_lock+0>:		mov    %gs:0xb4c8,%rax
      <_spin_lock+9>:		incl   0xffffffffffffe044(%rax)
      <_spin_lock+15>:	callq <__ticket_spin_lock>
      <_spin_lock+20>:	nop; nop		/* or whatever 2-byte nop */
      <_spin_lock+22>:	retq
      
      One possibility is to inline _spin_lock, etc, when building an
      optimised kernel (ie, when there's no spinlock/preempt
      instrumentation/debugging enabled).  That will remove the outer
      call/return pair, returning the instruction stream to a single
      call/return, which will presumably execute the same as the non-pvops
      case.  The downsides are: 1) it will replicate the
      preempt_disable/enable code at each lock/unlock callsite; this code is
      fairly small, but not nothing; and 2) the spinlock definitions are
      already a very heavily tangled mass of #ifdefs and other preprocessor
      magic, and making any changes will be non-trivial.
      
      The other obvious answer is to disable pv-spinlocks.  Making them a
      separate config option is fairly easy, and it would be trivial to
      enable them only when Xen is enabled (as the only non-default user).
      But it doesn't really address the common case of a distro build which
      is going to have Xen support enabled, and leaves the open question of
      whether the native performance cost of pv-spinlocks is worth the
      performance improvement on a loaded Xen system (10% saving of overall
      system CPU when guests block rather than spin).  Still it is a
      reasonable short-term workaround.
      
      [ Impact: fix pvops performance regression when running native ]
      Analysed-by: "Xin Xiaohui" <xiaohui.xin@intel.com>
      Analysed-by: "Li Xin" <xin.li@intel.com>
      Analysed-by: "Nakajima Jun" <jun.nakajima@intel.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Xen-devel <xen-devel@lists.xensource.com>
      LKML-Reference: <4A0B62F7.5030802@goop.org>
      [ fixed the help text ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b4ecc126
  6. 12 May 2009, 2 commits
    • x86: read apic ID in the !acpi_lapic case · 4797f6b0
      Yinghai Lu committed
      Ed found that on 32-bit, boot_cpu_physical_apicid is not read
      correctly when the mptable is broken.
      
      Interestingly, three paths actually use/set it:
      
       1. acpi: by that time it has already been read from the register
       2. mptable: only read from the mptable
       3. no MADT and no mptable: use the default apic id, 0 for 64-bit, -1 for 32-bit
      
      So we can read the apic id from the hardware for paths 2 and 3. We
      trust the hardware register more than we trust a BIOS data structure
      (the mptable).
      
      We can also avoid the double set_fixmap() when acpi_lapic
      is used; we also need to move the cpu_has_apic check earlier and
      call apic_disable().
      
      Also, when we need to update the apic id, we'd better read and
      set the apic version as well - so that quirks are applied precisely.
      
      v2: on 64-bit, make path 3 use -1 as the apic id too, so it can be read later.
      v3: fix whitespace problem pointed out by Ed Swierk
      v5: fix boot crash
      
      [ Impact: get correct apic id for bsp other than acpi path ]
      Reported-by: Ed Swierk <eswierk@aristanetworks.com>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
      LKML-Reference: <49FC85A9.2070702@kernel.org>
      [ v4: sanity-check in the ACPI case too ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4797f6b0
    • x86, 32-bit: fix kernel_trap_sp() · 7b6c6c77
      Masami Hiramatsu committed
      Use &regs->sp instead of regs for getting the top of stack in kernel mode.
      (On x86-64, regs->sp always points to the top of the stack.)
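
      The fix boils down to a per-arch helper, sketched here (the v2
      note below renames the API to kernel_stack_pointer()):

      static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
      {
      #ifdef CONFIG_X86_32
              /* kernel-mode traps do not save sp; the trap-time stack
                 pointer is the address of the sp slot itself */
              return (unsigned long)&regs->sp;
      #else
              /* on x86-64, regs->sp is always saved and valid */
              return regs->sp;
      #endif
      }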
      
      [ Impact: Oprofile decodes only stack for backtracing on i386 ]
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      [ v2: rename the API to kernel_stack_pointer(), move variable inside ]
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: systemtap@sources.redhat.com
      Cc: Harvey Harrison <harvey.harrison@gmail.com>
      Cc: Jan Blunck <jblunck@suse.de>
      Cc: Christoph Hellwig <hch@infradead.org>
      LKML-Reference: <20090511210300.17332.67549.stgit@localhost.localdomain>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7b6c6c77
  7. 11 May 2009, 6 commits
  8. 03 May 2009, 1 commit
  9. 28 Apr 2009, 1 commit
    • irq: change ACPI GSI APIs to also take a device argument · a2f809b0
      Yinghai Lu committed
      We want to use dev_to_node() later on, to be aware of the 'home node'
      of the GSI in question.
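
      The visible API change is the extra device argument; roughly:

      /* before */
      int acpi_register_gsi(u32 gsi, int trigger, int polarity);

      /* after: callers pass the device so the IRQ code can use dev_to_node() */
      int acpi_register_gsi(struct device *dev, u32 gsi,
                            int trigger, int polarity);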
      
      [ Impact: cleanup, prepare the IRQ code to be more NUMA aware ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Len Brown <lenb@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: linux-acpi@vger.kernel.org
      Cc: linux-ia64@vger.kernel.org
      LKML-Reference: <49F65560.20904@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a2f809b0
  10. 23 Apr 2009, 2 commits
  11. 22 Apr 2009, 2 commits
    • x86: x2apic, IR: remove reinit_intr_remapped_IO_APIC() · ff166cb5
      Suresh Siddha committed
      When interrupt-remapping is enabled, we are relying on
      setup_IO_APIC_irqs() to configure remapped entries in the
      IO-APIC, which comes a little bit later, after enabling
      interrupt-remapping.
      
      Meanwhile, restoring the old io-apic entries after enabling
      interrupt-remapping will not make the interrupts through the
      io-apic functional anyway.
      
      So remove the unnecessary reinit_intr_remapped_IO_APIC() step.
      
      The longer story:
      
      When interrupt-remapping is enabled, IO-APIC entries need to be
      setup in the re-mappable format (pointing to
      interrupt-remapping table entries setup by the OS). This
      remapping configuration is happening in the same place where we
      traditionally configure IO-APIC (i.e., in
      setup_IO_APIC_irqs()).
      
      So when we enable interrupt-remapping successfully, there is no
      need to restore old io-apic RTE entries before we actually do a
      complete configuration shortly in setup_IO_APIC_irqs(). Old
      IO-APIC RTEs may be in the traditional format (non re-mappable) or
      in the re-mappable format pointing to interrupt-remapping table
      entries set up by the BIOS. Restoring either of these will not make
      the IO-APIC functional. We have to rely on setup_IO_APIC_irqs() for
      proper configuration by the OS.
      
      So I am removing this unnecessary and broken step.
      
      [ Impact: remove unnecessary/broken IO-APIC setup step ]
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Weidong Han <weidong.han@intel.com>
      Cc: dwmw2@infradead.org
      LKML-Reference: <20090420200450.552359000@linux-os.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ff166cb5
    • FRV: Fix the section attribute on UP DECLARE_PER_CPU() · 9b8de747
      David Howells committed
      In non-SMP mode, the variable section attribute specified by DECLARE_PER_CPU()
      does not agree with that specified by DEFINE_PER_CPU().  This means that
      architectures that reference a small data section relative to a base
      register may throw up linkage errors due to too great a displacement between
      where the base register points and the per-CPU variable.
      
      On FRV, the .h declaration says that the variable is in the .sdata section, but
      the .c definition says it's actually in the .data section.  The linker throws
      up the following errors:
      
      kernel/built-in.o: In function `release_task':
      kernel/exit.c:78: relocation truncated to fit: R_FRV_GPREL12 against symbol `per_cpu__process_counts' defined in .data section in kernel/built-in.o
      kernel/exit.c:78: relocation truncated to fit: R_FRV_GPREL12 against symbol `per_cpu__process_counts' defined in .data section in kernel/built-in.o
      
      To fix this, DECLARE_PER_CPU() should simply apply the same section attribute
      as does DEFINE_PER_CPU().  However, this is made slightly more complex by
      virtue of the fact that there are several variants on DEFINE, so these need to
      be matched by variants on DECLARE.
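
      The shape of the fix, sketched: declaration and definition share a
      single section-attribute helper so they can never disagree (macro
      names follow the description; details may differ from the patch):

      #define __PCPU_ATTRS(sec)                                        \
              __attribute__((section(PER_CPU_BASE_SECTION sec)))       \
              PER_CPU_ATTRIBUTES

      #define DECLARE_PER_CPU_SECTION(type, name, sec)                 \
              extern __PCPU_ATTRS(sec) __typeof__(type) per_cpu__##name

      #define DEFINE_PER_CPU_SECTION(type, name, sec)                  \
              __PCPU_ATTRS(sec) __typeof__(type) per_cpu__##name

      #define DECLARE_PER_CPU(type, name)                              \
              DECLARE_PER_CPU_SECTION(type, name, "")

      #define DEFINE_PER_CPU(type, name)                               \
              DEFINE_PER_CPU_SECTION(type, name, "")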
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9b8de747
  12. 21 Apr 2009, 1 commit