  1. 22 Feb, 2012 (1 commit)
  2. 21 Feb, 2012 (4 commits)
    • i387: export 'fpu_owner_task' per-cpu variable · 27e74da9
      Linus Torvalds authored
      (And define it properly for x86-32, which had its 'current_task'
      declaration separate from x86-64)
      
      Bitten by my dislike for modules on the machines I use, and the fact
      that apparently nobody else actually wanted to test the patches I sent
      out.
      
      Snif. Nobody else cares.
      
      Anyway, we probably should uninline the 'kernel_fpu_begin()' function,
      which is what modules actually use and which references this, but this is
      the minimal fix for now.
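
      A minimal sketch of the export itself, using the standard per-cpu macros
      (the exact file placement is an assumption):

          /* sketch: one shared definition, exported because the inlined
             kernel_fpu_begin() that modules use references it */
          DEFINE_PER_CPU(struct task_struct *, fpu_owner_task);
          EXPORT_PER_CPU_SYMBOL(fpu_owner_task);
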
      Reported-by: Josh Boyer <jwboyer@gmail.com>
      Reported-and-tested-by: Jongman Heo <jongman.heo@samsung.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      27e74da9
    • i387: support lazy restore of FPU state · 7e16838d
      Linus Torvalds authored
      This makes us recognize when we try to restore FPU state that matches
      what we already have in the FPU on this CPU, and avoids the restore
      entirely if so.
      
      To do this, we add two new data fields:
      
       - a percpu 'fpu_owner_task' variable that gets written any time we
         update the "has_fpu" field, and thus acts as a kind of back-pointer
         to the task that owns the CPU.  The exception is when we save the FPU
         state as part of a context switch - if the save can keep the FPU
         state around, we leave the 'fpu_owner_task' variable pointing at the
         task whose FP state still remains on the CPU.
      
       - a per-thread 'last_cpu' field, that indicates which CPU that thread
         used its FPU on last.  We update this on every context switch
         (writing an invalid CPU number if the last context switch didn't
         leave the FPU in a lazily usable state), so we know that *that*
         thread has done nothing else with the FPU since.
      
      These two fields together can be used when next switching back to the
      task to see if the CPU still matches: if 'fpu_owner_task' matches the
      task we are switching to, we know that no other task (or kernel FPU
      usage) touched the FPU on this CPU in the meantime, and if the current
      CPU number matches the 'last_cpu' field, we know that this thread did no
      other FP work on any other CPU, so the FPU state on the CPU must match
      what was saved on last context switch.
      
      In that case, we can avoid the 'f[x]rstor' entirely, and just clear the
      CR0.TS bit.
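
      A minimal sketch of the resulting switch-in test, assuming a helper named
      fpu_lazy_restore() and the field placement described above:

          /* sketch: both new fields must still match for the lazy path */
          static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
          {
              return new == this_cpu_read(fpu_owner_task) &&
                     cpu == new->thread.fpu.last_cpu;
          }

      When this returns true, the switch-in path only needs to clear CR0.TS
      instead of doing the full f[x]rstor.
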
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7e16838d
    • i387: use 'restore_fpu_checking()' directly in task switching code · 80ab6f1e
      Linus Torvalds authored
      This inlines what is usually just a couple of instructions, but more
      importantly it also fixes the theoretical error case (can that FPU
      restore really ever fail? Maybe we should remove the checking).
      
      We can't start sending signals from within the scheduler, we're much too
      deep in the kernel and are holding the runqueue lock etc.  So don't
      bother even trying.
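
      A hedged sketch of the idea (names follow the preloading commit further
      down this page; the failure handling shown is an assumption):

          /* sketch: restore at switch-in; on failure, drop the state rather
             than trying to send a signal under the runqueue lock */
          static inline void switch_fpu_finish(struct task_struct *new, fpu_switch_t fpu)
          {
              if (fpu.preload && unlikely(restore_fpu_checking(new)))
                  __thread_fpu_end(new);
          }
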
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      80ab6f1e
    • i387: fix up some fpu_counter confusion · cea20ca3
      Linus Torvalds authored
      This makes sure we clear the FPU usage counter for newly created tasks,
      just so that we start off in a known state (for example, don't try to
      preload the FPU state on the first task switch etc).
      
      It also fixes a thinko in when we increment the fpu_counter at task
      switch time, introduced by commit 34ddc81a ("i387: re-introduce FPU
      state preloading at context switch time").  We should increment the
      *new* task fpu_counter, not the old task, and only if we decide to use
      that state (whether lazily or preloaded).
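
      A hedged sketch of both halves of the fix (the fork path is grossly
      simplified, and the switch-time fragment assumes the fpu_switch_t
      plumbing from commit 34ddc81a):

          /* sketch: new tasks start with a known, zeroed counter */
          int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
          {
              *dst = *src;              /* struct copy, as before */
              dst->fpu_counter = 0;     /* no inherited preload bias */
              return 0;
          }

          /* sketch: at context switch, credit the *new* task, and only when
             we actually decided to use its state:

                 if (fpu.preload)
                     new->fpu_counter++;
          */
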
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cea20ca3
  3. 19 Feb, 2012 (2 commits)
    • i387: re-introduce FPU state preloading at context switch time · 34ddc81a
      Linus Torvalds authored
      After all the FPU state cleanups and finally finding the problem that
      caused all our FPU save/restore problems, this re-introduces the
      preloading of FPU state that was removed in commit b3b0870e ("i387:
      do not preload FPU state at task switch time").
      
      However, instead of simply reverting the removal, this reimplements
      preloading with several fixes, most notably
      
       - properly abstracted as a true FPU state switch, rather than as
         open-coded save and restore with various hacks.
      
         In particular, implementing it as a proper FPU state switch allows us
         to optimize the CR0.TS flag accesses: there is no reason to set the
         TS bit only to then almost immediately clear it again.  CR0 accesses
         are quite slow and expensive, don't flip the bit back and forth for
         no good reason.
      
       - Make sure that the same model works for both x86-32 and x86-64, so
         that there are no gratuitous differences between the two due to the
         way they save and restore segment state differently due to
         architectural differences that really don't matter to the FPU state.
      
       - Avoid exposing the "preload" state to the context switch routines,
         and in particular allow the concept of lazy state restore: if nothing
         else has used the FPU in the meantime, and the process is still on
         the same CPU, we can avoid restoring state from memory entirely, just
         re-expose the state that is still in the FPU unit.
      
         That optimized lazy restore isn't actually implemented here, but the
         infrastructure is set up for it.  Of course, older CPU's that use
         'fnsave' to save the state cannot take advantage of this, since the
         state saving also trashes the state.
      
      In other words, there is now an actual _design_ to the FPU state saving,
      rather than just random historical baggage.  Hopefully it's easier to
      follow as a result.
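
      A hedged sketch of what the abstracted switch looks like at the
      __switch_to() call site (names per the description above, body heavily
      simplified):

          /* sketch: decide before the switch, finish after, touch CR0.TS once */
          struct task_struct *__switch_to(struct task_struct *prev_p, struct task_struct *next_p)
          {
              fpu_switch_t fpu = switch_fpu_prepare(prev_p, next_p); /* save prev, pick policy */

              /* ... stack, segment and TLS switching happens here ... */

              switch_fpu_finish(next_p, fpu);   /* restore/preload only if chosen */
              return prev_p;
          }
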
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      34ddc81a
    • i387: move TS_USEDFPU flag from thread_info to task_struct · f94edacf
      Linus Torvalds authored
      This moves the bit that indicates whether a thread has ownership of the
      FPU from the TS_USEDFPU bit in thread_info->status to a word of its own
      (called 'has_fpu') in task_struct->thread.has_fpu.
      
      This fixes two independent bugs at the same time:
      
       - changing 'thread_info->status' from the scheduler causes nasty
         problems for the other users of that variable, since it is defined to
         be thread-synchronous (that's what the "TS_" part of the naming was
         supposed to indicate).
      
         So perfectly valid code could (and did) do
      
      	ti->status |= TS_RESTORE_SIGMASK;
      
         and the compiler was free to do that as separate load, 'or' and store
         instructions (see the sketch after this list).  Which can cause problems with preemption, since a task
         switch could happen in between, and change the TS_USEDFPU bit. The
         change to TS_USEDFPU would be overwritten by the final store.
      
         In practice, this seldom happened, though, because the 'status' field
         was seldom used more than once, so gcc would generally tend to
         generate code that used a read-modify-write instruction and thus
         happened to avoid this problem - RMW instructions are naturally low
         fat and preemption-safe.
      
       - On x86-32, the current_thread_info() pointer would, during interrupts
         and softirqs, point to a *copy* of the real thread_info, because
         x86-32 uses %esp to calculate the thread_info address, and thus the
         separate irq (and softirq) stacks would cause these kinds of odd
         thread_info copy aliases.
      
         This is normally not a problem, since interrupts aren't supposed to
         look at thread information anyway (what thread is running at
         interrupt time really isn't very well-defined), but it confused the
         heck out of irq_fpu_usable() and the code that tried to squirrel
         away the FPU state.
      
         (It also caused untold confusion for us poor kernel developers).
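
      A minimal sketch of that race, expanding the quoted one-liner into the
      separate instructions the compiler is allowed to emit:

          /* sketch: ti->status |= TS_RESTORE_SIGMASK; as load / or / store */
          unsigned long tmp;

          tmp = ti->status;               /* load */
          tmp |= TS_RESTORE_SIGMASK;      /* or   */
          /* <-- preemption here: __switch_to() flips TS_USEDFPU ... */
          ti->status = tmp;               /* store: the TS_USEDFPU update is lost */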
      
      It also turns out that using 'task_struct' is actually much more natural
      for most of the call sites that care about the FPU state, since they
      tend to work with the task struct for other reasons anyway (ie
      scheduling).  And the FPU data that we are going to save/restore is
      found there too.
      
      Thanks to Arjan Van De Ven <arjan@linux.intel.com> for pointing us to
      the %esp issue.
      
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Reported-and-tested-by: Raphael Prevost <raphael@buro.asia>
      Acked-and-tested-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Tested-by: Peter Anvin <hpa@zytor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f94edacf
  4. 17 Feb, 2012 (5 commits)
    • i387: move AMD K7/K8 fpu fxsave/fxrstor workaround from save to restore · 4903062b
      Linus Torvalds authored
      The AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception is
      pending.  In order to not leak FIP state from one process to another, we
      need to do a floating point load after the fxsave of the old process,
      and before the fxrstor of the new FPU state.  That resets the state to
      the (uninteresting) kernel load, rather than some potentially sensitive
      user information.
      
      We used to do this directly after the FPU state save, but that is
      actually very inconvenient, since it
      
       (a) corrupts what is potentially perfectly good FPU state that we might
           want to lazily avoid restoring later and
      
       (b) on x86-64 it resulted in a very annoying ordering constraint, where
           "__unlazy_fpu()" in the task switch needs to be delayed until after
           the DS segment has been reloaded just to get the new DS value.
      
      Coupling it to the fxrstor instead of the fxsave automatically avoids
      both of these issues, and also ensures that we only do it when actually
      necessary (the FP state after a save may never actually get used).  It's
      simply a much more natural place for the leaked state cleanup.
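
      A hedged sketch of the workaround coupled to the restore side (helper
      names and the exact dummy address are assumptions; the feature flag is
      the usual X86_FEATURE_FXSAVE_LEAK erratum marker):

          /* sketch: scrub leaked FIP/FDP/FOP with a harmless FP load from a
             kernel address, immediately before fxrstor */
          static inline int restore_fpu_checking(struct task_struct *tsk)
          {
              if (static_cpu_has(X86_FEATURE_FXSAVE_LEAK)) {
                  static const int dummy;
                  asm volatile("emms\n\t"          /* clear FP stack tags */
                               "fildl %P[addr]"    /* overwrite FIP/FDP   */
                               : : [addr] "m" (dummy));
              }
              return fpu_restore_checking(&tsk->thread.fpu);
          }
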
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4903062b
    • i387: do not preload FPU state at task switch time · b3b0870e
      Linus Torvalds authored
      Yes, taking the trap to re-load the FPU/MMX state is expensive, but so
      is spending several days looking for a bug in the state save/restore
      code.  And the preload code has some rather subtle interactions with
      both paravirtualization support and segment state restore, so it's not
      nearly as simple as it should be.
      
      Also, now that we no longer necessarily depend on a single bit (ie
      TS_USEDFPU) for keeping track of the state of the FPU, we might be able
      to do better.  If we are really switching between two processes that
      keep touching the FP state, save/restore is inevitable, but in the case
      of having one process that does most of the FPU usage, we may actually
      be able to do much better than the preloading.
      
      In particular, we may be able to keep track of which CPU the process ran
      on last, and also per CPU keep track of which process' FP state that CPU
      has.  For modern CPU's that don't destroy the FPU contents on save time,
      that would allow us to do a lazy restore by just re-enabling the
      existing FPU state - with no restore cost at all!
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b3b0870e
    • i387: don't ever touch TS_USEDFPU directly, use helper functions · 6d59d7a9
      Linus Torvalds authored
      This creates three helper functions that do the TS_USEDFPU accesses, and
      makes everybody that used to do it by hand use those helpers instead.
      
      In addition, there's a couple of helper functions for the "change both
      CR0.TS and TS_USEDFPU at the same time" case, and the places that do
      that together have been changed to use those.  That means that we have
      fewer random places that open-code this situation.
      
      The intent is partly to clarify the code without actually changing any
      semantics yet (since we clearly still have some hard to reproduce bug in
      this area), but also to make it much easier to use another approach
      entirely to caching the CR0.TS bit for software accesses.
      
      Right now we use a bit in the thread-info 'status' variable (this patch
      does not change that), but we might want to make it a full field of its
      own or even make it a per-cpu variable.
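
      A hedged sketch of such helpers (the bit still lives in
      thread_info->status at this point; names per the commit title, and the
      combined CR0.TS variants are assumptions):

          static inline bool __thread_has_fpu(struct task_struct *tsk)
          {
              return task_thread_info(tsk)->status & TS_USEDFPU;
          }

          static inline void __thread_set_has_fpu(struct task_struct *tsk)
          {
              task_thread_info(tsk)->status |= TS_USEDFPU;
          }

          static inline void __thread_clear_has_fpu(struct task_struct *tsk)
          {
              task_thread_info(tsk)->status &= ~TS_USEDFPU;
          }

          /* "change both CR0.TS and TS_USEDFPU at the same time" helpers */
          static inline void __thread_fpu_begin(struct task_struct *tsk)
          {
              __thread_set_has_fpu(tsk);
              clts();                     /* clear CR0.TS */
          }

          static inline void __thread_fpu_end(struct task_struct *tsk)
          {
              __thread_clear_has_fpu(tsk);
              stts();                     /* set CR0.TS again */
          }
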
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6d59d7a9
    • i387: move TS_USEDFPU clearing out of __save_init_fpu and into callers · b6c66418
      Linus Torvalds authored
      Touching TS_USEDFPU without touching CR0.TS is confusing, so don't do
      it.  By moving it into the callers, we always do the TS_USEDFPU next to
      the CR0.TS accesses in the source code, and it's much easier to see how
      the two go hand in hand.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b6c66418
    • i387: fix x86-64 preemption-unsafe user stack save/restore · 15d8791c
      Linus Torvalds authored
      Commit 5b1cbac3 ("i387: make irq_fpu_usable() tests more robust")
      added a sanity check to the #NM handler to verify that we never cause
      the "Device Not Available" exception in kernel mode.
      
      However, that check actually pinpointed a (fundamental) race where we do
      cause that exception as part of the signal stack FPU state save/restore
      code.
      
      Because we use the floating point instructions themselves to save and
      restore state directly from user mode, we cannot do that atomically with
      testing the TS_USEDFPU bit: the user mode access itself may cause a page
      fault, which causes a task switch, which saves and restores the FP/MMX
      state from the kernel buffers.
      
      This kind of "recursive" FP state save is fine per se, but it means that
      when the signal stack save/restore gets restarted, it will now take the
      '#NM' exception we originally tried to avoid.  With preemption this can
      happen even without the page fault - but because of the user access, we
      cannot just disable preemption around the save/restore instruction.
      
      There are various ways to solve this, including using the
      "enable/disable_page_fault()" helpers to not allow page faults at all
      during the sequence, and fall back to copying things by hand without the
      use of the native FP state save/restore instructions.
      
      However, the simplest thing to do is to just allow the #NM from kernel
      space, but fix the race in setting and clearing CR0.TS that this all
      exposed: the TS bit changes and the TS_USEDFPU bit absolutely have to be
      atomic wrt scheduling, so while the actual state save/restore can be
      interrupted and restarted, the act of actually clearing/setting CR0.TS
      and the TS_USEDFPU bit together must not.
      
      Instead of just adding random "preempt_disable/enable()" calls to what
      is already excessively ugly code, this introduces some helper functions
      that mostly mirror the "kernel_fpu_begin/end()" functionality, just for
      the user state instead.
      
      Those helper functions should probably eventually replace the other
      ad-hoc CR0.TS and TS_USEDFPU tests too, but I'll need to think about it
      some more: the task switching functionality in particular needs to
      expose the difference between the 'prev' and 'next' threads, while the
      new helper functions intentionally were written to only work with
      'current'.
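
      A hedged sketch of what such helpers can look like, mirroring
      kernel_fpu_begin/end() for the current task's user state (names and
      details are assumptions; the point is that the CR0.TS + TS_USEDFPU
      transition happens with preemption disabled, while the user-memory
      access itself may still fault and be restarted):

          static inline void user_fpu_begin(void)
          {
              preempt_disable();
              if (!__thread_has_fpu(current))
                  math_state_restore();   /* re-own the FPU: clts() + TS_USEDFPU */
              preempt_enable();
          }

          static inline void user_fpu_end(void)
          {
              preempt_disable();
              __thread_fpu_end(current);  /* clear TS_USEDFPU + stts(), atomic
                                             with respect to scheduling */
              preempt_enable();
          }
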
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      15d8791c
  5. 16 Feb, 2012 (1 commit)
    • i387: fix sense of sanity check · c38e2345
      Linus Torvalds authored
      The check for save_init_fpu() (introduced in commit 5b1cbac3: "i387:
      make irq_fpu_usable() tests more robust") was the wrong way around, but
      I hadn't noticed, because my "tests" were bogus: the FPU exceptions are
      disabled by default, so even doing a divide by zero never actually
      triggers this code at all unless you do extra work to enable them.
      
      So if anybody did enable them, they'd get one spurious warning.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c38e2345
  6. 14 Feb, 2012 (2 commits)
    • i387: make irq_fpu_usable() tests more robust · 5b1cbac3
      Linus Torvalds authored
      Some code - especially the crypto layer - wants to use the x86
      FP/MMX/AVX register set in what may be interrupt (typically softirq)
      context.
      
      That *can* be ok, but the tests for when it was ok were somewhat
      suspect.  We cannot touch the thread-specific status bits either, so
      we'd better check that we're not going to try to save FP state or
      anything like that.
      
      Now, it may be that the TS bit is always cleared *before* we set the
      USEDFPU bit (and only set when we had already cleared the USEDFPU
      before), so the TS bit test may actually have been sufficient, but it
      certainly was not obviously so.
      
      So this explicitly verifies that we will not touch the TS_USEDFPU bit,
      and adds a few related sanity-checks.  Because it seems that somehow
      AES-NI is corrupting user FP state.  The cause is not clear, and this
      patch doesn't fix it, but while debugging it I really wanted the code to
      be more obviously correct and robust.
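
      A hedged sketch of the kind of test described (helper names assumed):
      interrupt-context FPU use is only allowed when it cannot stomp on state
      somebody else still owns:

          static inline bool interrupted_kernel_fpu_idle(void)
          {
              /* the interrupted kernel code neither owns the FPU (TS_USEDFPU
                 clear) nor sits in a CR0.TS-cleared window of its own */
              return !(current_thread_info()->status & TS_USEDFPU) &&
                     (read_cr0() & X86_CR0_TS);
          }

          static inline bool interrupted_user_mode(void)
          {
              struct pt_regs *regs = get_irq_regs();
              return regs && user_mode_vm(regs);
          }

          bool irq_fpu_usable(void)
          {
              return !in_interrupt() ||
                     interrupted_user_mode() ||
                     interrupted_kernel_fpu_idle();
          }
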
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b1cbac3
    • i387: math_state_restore() isn't called from asm · be98c2cd
      Linus Torvalds authored
      It was marked asmlinkage for some really old and stale legacy reasons.
      Fix that and the equally stale comment.
      
      Noticed when debugging the irq_fpu_usable() bugs.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      be98c2cd
  7. 07 Feb, 2012 (1 commit)
    • perf: Fix double start/stop in x86_pmu_start() · f39d47ff
      Stephane Eranian authored
      The following patch fixes a bug introduced by the following
      commit:
      
              e050e3f0 ("perf: Fix broken interrupt rate throttling")
      
      The patch caused the following warning to pop up depending on
      the sampling frequency adjustments:
      
        ------------[ cut here ]------------
        WARNING: at arch/x86/kernel/cpu/perf_event.c:995 x86_pmu_start+0x79/0xd4()
      
      It was caused by the following call sequence:
      
      perf_adjust_freq_unthr_context.part() {
           stop()
           if (delta > 0) {
                perf_adjust_period() {
                    if (period > 8*...) {
                        stop()
                        ...
                        start()
                    }
                }
            }
            start()
      }
      
      Which caused a double start and a double stop, thus triggering
      the assert in x86_pmu_start().
      
      The patch fixes the problem by avoiding the double calls. We
      pass a new argument to perf_adjust_period() to indicate whether
      or not the event is already stopped. We can't just remove the
      start/stop from that function because it's called from
      __perf_event_overflow where the event needs to be reloaded via a
      stop/start back-to-back call.
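
      A hedged sketch of the shape of the fix (the parameter name and the
      simplified body are assumptions):

          /* sketch: the caller says whether this function must do its own
             stop/start around the period change */
          static void perf_adjust_period(struct perf_event *event, u64 nsec,
                                         u64 count, bool disable)
          {
              struct hw_perf_event *hwc = &event->hw;
              u64 period = perf_calculate_period(event, nsec, count);

              hwc->sample_period = period;

              if (local64_read(&hwc->period_left) > 8 * period) {
                  if (disable)   /* caller did not stop the event itself */
                      event->pmu->stop(event, PERF_EF_UPDATE);

                  local64_set(&hwc->period_left, 0);

                  if (disable)
                      event->pmu->start(event, PERF_EF_RELOAD);
              }
          }

      The tick path (perf_adjust_freq_unthr_context) has already stopped the
      event, so it would pass false; __perf_event_overflow would pass true and
      keep its stop/start reload behaviour.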
      
      The patch reintroduces the assertion in x86_pmu_start() which
      was removed by commit:
      
      	84f2b9b2 ("perf: Remove deprecated WARN_ON_ONCE()")
      
      In this second version, we've added calls to disable/enable PMU
      during unthrottling or frequency adjustment based on bug report
      of spurious NMI interrupts from Eric Dumazet.
      Reported-and-tested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: markus@trippelsdorf.de
      Cc: paulus@samba.org
      Link: http://lkml.kernel.org/r/20120207133956.GA4932@quad
      [ Minor edits to the changelog and to the code ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f39d47ff
  8. 04 Feb, 2012 (2 commits)
    • xen pvhvm: do not remap pirqs onto evtchns if !xen_have_vector_callback · 207d543f
      Stefano Stabellini authored
      CC: stable@kernel.org #2.6.37 and onwards
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      207d543f
    • xen/smp: Fix CPU online/offline bug triggering a BUG: scheduling while atomic. · 41bd956d
      Konrad Rzeszutek Wilk authored
      When a user offlines a VCPU and then onlines it, we get:
      
      NMI watchdog disabled (cpu2): hardware events not enabled
      BUG: scheduling while atomic: swapper/2/0/0x00000002
      Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c radeon fbco
       ttm bitblit softcursor drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs [last unloaded:
      
      Pid: 0, comm: swapper/2 Tainted: G           O 3.2.0phase15.1-00003-gd6f7f5b-dirty #4
      Call Trace:
       [<ffffffff81070571>] __schedule_bug+0x61/0x70
       [<ffffffff8158eb78>] __schedule+0x798/0x850
       [<ffffffff8158ed6a>] schedule+0x3a/0x50
       [<ffffffff810349be>] cpu_idle+0xbe/0xe0
       [<ffffffff81583599>] cpu_bringup_and_idle+0xe/0x10
      
      The reason for this should be obvious from this call-chain:
      cpu_bringup_and_idle:
       \- cpu_bringup
        |   \-[preempt_disable]
        |
        |- cpu_idle
             \- play_dead [assuming the user offlined the VCPU]
             |     \
             |     +- (xen_play_dead)
             |          \- HYPERVISOR_VCPU_off [so VCPU is dead, once user
             |          |                       onlines it starts from here]
             |          \- cpu_bringup [preempt_disable]
             |
             +- preempt_enable_no_reschedule()
             +- schedule()
             \- preempt_enable()
      
      So we have two preempt_disable() calls and only one preempt_enable().
      Calling preempt_enable() after the cpu_bringup() in xen_play_dead()
      fixes the imbalance.
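
      A hedged sketch of the resulting xen_play_dead() (per the call chain
      above; treat the exact body as illustrative):

          static void xen_play_dead(void)
          {
              play_dead_common();
              HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
              cpu_bringup();        /* runs again once the user onlines the VCPU */
              preempt_enable();     /* balance cpu_bringup()'s preempt_disable() */
          }
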
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      41bd956d
  9. 03 Feb, 2012 (1 commit)
    • perf: Remove deprecated WARN_ON_ONCE() · 84f2b9b2
      Stephane Eranian authored
      With the new throttling/unthrottling code introduced with
      commit:
      
        e050e3f0 ("perf: Fix broken interrupt rate throttling")
      
      we occasionally hit two WARN_ON_ONCE() checks in:
      
        - intel_pmu_pebs_enable()
        - intel_pmu_lbr_enable()
        - x86_pmu_start()
      
      The assertions are no longer problematic. There is a valid
      path where they can trigger but it is harmless.
      
      The assertion can be triggered with:
      
        $ perf record -e instructions:pp ....
      
      Leading to paths:
      
        intel_pmu_pebs_enable
        intel_pmu_enable_event
        x86_perf_event_set_period
        x86_pmu_start
        perf_adjust_freq_unthr_context
        perf_event_task_tick
        scheduler_tick
      
      And:
      
        intel_pmu_lbr_enable
        intel_pmu_enable_event
        x86_perf_event_set_period
        x86_pmu_start
        perf_adjust_freq_unthr_context.
        perf_event_task_tick
        scheduler_tick
      
      cpuc->enabled is always on because when we get to
      perf_adjust_freq_unthr_context() the PMU is not totally
      disabled. Furthermore when we need to adjust a period,
      we only stop the event we need to change and not the
      entire PMU. Thus, when we re-enable, cpuc->enabled is
      already set. Note that when we stop the event, both
      pebs and lbr are stopped if necessary (and possible).
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: peterz@infradead.org
      Link: http://lkml.kernel.org/r/20120202110401.GA30911@quad
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      84f2b9b2
  10. 01 Feb, 2012 (3 commits)
  11. 30 Jan, 2012 (2 commits)
    • x86/reboot: Remove VersaLogic Menlow reboot quirk · e6d36a65
      Michael D Labriola authored
      This commit removes the reboot quirk originally added by commit
      e19e074b ("x86: Fix reboot problem on VersaLogic Menlow boards").
      
      Testing with a VersaLogic Ocelot (VL-EPMs-21a rev 1.00 w/ BIOS
      6.5.102) revealed the following regarding the reboot hang
      problem:
      
      - v2.6.37 reboot=bios was needed.
      
      - v2.6.38-rc1: behavior changed, reboot=acpi is needed,
        reboot=kbd and reboot=bios results in system hang.
      
      - v2.6.38: VersaLogic patch (e19e074b "x86: Fix reboot problem on
        VersaLogic Menlow boards") was applied prior to v2.6.38-rc7.  This
        patch sets a quirk for VersaLogic Menlow boards that forces the use
        of reboot=bios, which doesn't work anymore.
      
      - v3.2: It seems that commit 660e34ce ("x86: Reorder reboot method
        preferences") changed the default reboot method to acpi prior to
        v3.0-rc1, which means the default behavior is appropriate for the
        Ocelot.  No VersaLogic quirk is required.
      
      The Ocelot board used for testing can successfully reboot w/out
      having to pass any reboot= arguments for all 3 current versions
      of the BIOS.
      Signed-off-by: Michael D Labriola <michael.d.labriola@gmail.com>
      Cc: Matthew Garrett <mjg@redhat.com>
      Cc: Michael D Labriola <mlabriol@gdeb.com>
      Cc: Kushal Koolwal <kushalkoolwal@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/87vcnub9hu.fsf@gmail.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e6d36a65
    • x86/reboot: Skip DMI checks if reboot set by user · 5955633e
      Michael D Labriola authored
      Skip DMI checks for vendor specific reboot quirks if the user
      passed in a reboot= arg on the command line - we should never
      override user choices.
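
      A hedged sketch of the idea (flag name assumed): remember whether the
      user passed reboot=, and only consult the DMI quirk table when they
      did not:

          static int reboot_default = 1;

          static int __init reboot_setup(char *str)
          {
              reboot_default = 0;   /* user made an explicit choice */
              /* ... parse bios/acpi/kbd/... exactly as before ... */
              return 1;
          }
          __setup("reboot=", reboot_setup);

          static int __init reboot_init(void)
          {
              if (reboot_default)   /* never override the user's choice */
                  dmi_check_system(reboot_dmi_table);
              return 0;
          }
          core_initcall(reboot_init);
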
      Signed-off-by: Michael D Labriola <michael.d.labriola@gmail.com>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Michael D Labriola <mlabriol@gdeb.com>
      Cc: Matthew Garrett <mjg@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/87wr8ab9od.fsf@gmail.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5955633e
  12. 28 Jan, 2012 (1 commit)
  13. 27 Jan, 2012 (2 commits)
    • bugs, x86: Fix printk levels for panic, softlockups and stack dumps · b0f4c4b3
      Prarit Bhargava authored
      rsyslog will display KERN_EMERG messages on a connected
      terminal.  However, these messages are useless/undecipherable
      for a general user.
      
      For example, after a softlockup we get:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
       kernel:Stack:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
       kernel:Call Trace:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
       kernel:Code: ff ff a8 08 75 25 31 d2 48 8d 86 38 e0 ff ff 48 89
       d1 0f 01 c8 0f ae f0 48 8b 86 38 e0 ff ff a8 08 75 08 b1 01 4c 89 e0 0f 01 c9 <e8> ea 69 dd ff 4c 29 e8 48 89 c7 e8 0f bc da ff 49 89 c4 49 89
      
      This happens because the printk levels for these messages are
      incorrect. Only an informational message should be displayed on
      a terminal.
      
      I modified the printk levels for various messages in the kernel
      and tested the output by using the drivers/misc/lkdtm.c kernel
      modules (ie, softlockups, panics, hard lockups, etc.) and
      confirmed that the console output was still the same and that
      the output to the terminals was correct.
      
      For example, in the case of a softlockup we now see the much
      more informative:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 10:18:06 ...
       BUG: soft lockup - CPU4 stuck for 60s!
      
      instead of the above confusing messages.
      
      AFAICT, the messages no longer have to be KERN_EMERG.  In the
      most important case of a panic we set console_verbose().  As for
      the other less severe cases the correct data is output to the
      console and /var/log/messages.
      
      Successfully tested by me using the drivers/misc/lkdtm.c module.
      Signed-off-by: Prarit Bhargava <prarit@redhat.com>
      Cc: dzickus@redhat.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1327586134-11926-1-git-send-email-prarit@redhat.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b0f4c4b3
    • x86: Properly parenthesize cmpxchg() macro arguments · fc395b92
      Jan Beulich authored
      Quite oddly, all of the arguments passed through from the top
      level macros to the second level which didn't need parentheses
      had them, while the only expression (involving a parameter)
      needing them didn't.
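
      A simplified, self-contained illustration of the hazard (this is not the
      kernel macro itself, just the same two-level pattern): without
      parentheses around the parameter, a cast binds to only part of an
      expression argument:

          #include <stdio.h>

          #define BAD_CAST(ptr)  ((unsigned char *)ptr)    /* missing parens */
          #define GOOD_CAST(ptr) ((unsigned char *)(ptr))

          int main(void)
          {
              short array[8];
              int index = 3;

              /* BAD_CAST(array + index) expands to (unsigned char *)array + index:
                 the cast grabs 'array' alone, so we advance 3 bytes, not 3 shorts */
              printf("bad:  %p\n", (void *)BAD_CAST(array + index));
              printf("good: %p\n", (void *)GOOD_CAST(array + index));
              return 0;
          }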
      
      Very recently I got bitten by the lack thereof when using
      something like "array + index" for the first operand, with
      "array" being an array more narrow than int.
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/4F2183A9020000780006F3E6@nat28.tlf.novell.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fc395b92
  14. 26 Jan, 2012 (6 commits)
  15. 25 Jan, 2012 (1 commit)
    • x86: xen: size struct xen_spinlock to always fit in arch_spinlock_t · 7a7546b3
      David Vrabel authored
      If NR_CPUS < 256 then arch_spinlock_t is only 16 bits wide but struct
      xen_spinlock is 32 bits.  When a spin lock is contended and
      xl->spinners is modified the two bytes immediately after the spin lock
      would be corrupted.
      
      This is a regression caused by 84eb950d
      (x86, ticketlock: Clean up types and accessors) which reduced the size
      of arch_spinlock_t.
      
      Fix this by making xl->spinners a u8 if NR_CPUS < 256.  A
      BUILD_BUG_ON() is also added to check the sizes of the two structures
      are compatible.
      
      In many cases this was not noticeable as there would often be padding
      bytes after the lock (e.g., if any of CONFIG_GENERIC_LOCKBREAK,
      CONFIG_DEBUG_SPINLOCK, or CONFIG_DEBUG_LOCK_ALLOC were enabled).
      
      The bnx2 driver is affected. In struct bnx2, phy_lock and
      indirect_lock may have no padding after them.  Contention on phy_lock
      would corrupt indirect_lock making it appear locked and the driver
      would deadlock.
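
      A hedged sketch of the fix as described (field names and the location of
      the compile-time check are assumptions):

          /* keep struct xen_spinlock no wider than arch_spinlock_t */
          #if NR_CPUS < 256
          typedef u8  xen_spinners_t;
          #else
          typedef u16 xen_spinners_t;
          #endif

          struct xen_spinlock {
              unsigned char lock;        /* 0 -> free, 1 -> locked */
              xen_spinners_t spinners;   /* count of waiting CPUs  */
          };

          /* in the spinlock init path (placement assumed): */
          BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
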
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      CC: stable@kernel.org #only 3.2
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      7a7546b3
  16. 20 Jan, 2012 (1 commit)
  17. 19 Jan, 2012 (2 commits)
  18. 18 Jan, 2012 (3 commits)
    • x86-32: Fix build failure with AUDIT=y, AUDITSYSCALL=n · 6015ff10
      Al Viro authored
      JONGMAN HEO reports:
      
        With current linus git (commit a25a2b84), I got following build error,
      
        arch/x86/kernel/vm86_32.c: In function 'do_sys_vm86':
        arch/x86/kernel/vm86_32.c:340: error: implicit declaration of function '__audit_syscall_exit'
        make[3]: *** [arch/x86/kernel/vm86_32.o] Error 1
      
      OK, I can reproduce it (32bit allmodconfig with AUDIT=y, AUDITSYSCALL=n)
      
      It's due to commit d7e7528b: "Audit: push audit success and retcode
      into arch ptrace.h".
      Reported-by: JONGMAN HEO <jongman.heo@samsung.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6015ff10
    • x86, tsc: Fix SMI induced variation in quick_pit_calibrate() · 68f30fbe
      Linus Torvalds authored
      pit_expect_msb() wrongly returns success in the SMI scenario below:
      
      a. pit_verify_msb() has not yet seen the MSB transition.
      
      b. we are close to the MSB transition though and got an SMI immediately after
         returning from pit_verify_msb() which didn't see the MSB transition. PIT MSB
         transition has happened somewhere during SMI execution.
      
      c. returned from SMI and we noted down the 'tsc', saw the pit MSB change now and
         exited the loop to calculate 'deltatsc'. Instead of noting the TSC at the MSB
         transition, we are way off because of the SMI.  And as the SMI happened
         between the pit_verify_msb() and before the 'tsc' is recorded in the
         for loop, 'deltatsc' (d1/d2 in quick_pit_calibrate()) will be small and
         quick_pit_calibrate() will not notice this error.
      
      Depending on whether SMI disturbance happens while computing d1 or d2, we will
      see the TSC calibrated value smaller or bigger than the expected value. As a
      result, in a cluster we were seeing a variation of approximately +/- 20MHz in
      the calibrated values, resulting in NTP failures.
      
        [ As far as the SMI source is concerned, this is a periodic SMI that gets
          disabled after ACPI is enabled by the OS. But the TSC calibration happens
          before the ACPI is enabled. ]
      
      To address this, change pit_expect_msb() so that
      
       - the 'tsc' is the TSC in between the two reads that read the MSB
      change from the PIT (same as before)
      
       - the 'delta' is the difference in TSC from *before* the MSB changed
      to *after* the MSB changed.
      
      Now the delta is twice as big as before (it covers four PIT accesses,
      roughly 4us) and quick_pit_calibrate() will loop a bit longer to get
      the calibrated value within the 500ppm precision. As the delta (d1/d2)
      covers four PIT accesses, the actual calibrated result might be closer to
      250ppm precision.
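
      A hedged sketch of pit_expect_msb() after the change, following the two
      bullets above (loop bound and return condition simplified):

          static inline int pit_expect_msb(unsigned char val, u64 *tscp,
                                           unsigned long *deltap)
          {
              int count;
              u64 tsc = 0, prev_tsc = 0;

              for (count = 0; count < 50000; count++) {
                  if (!pit_verify_msb(val))
                      break;
                  prev_tsc = tsc;
                  tsc = get_cycles();
              }
              *deltap = get_cycles() - prev_tsc;   /* spans the MSB change    */
              *tscp = tsc;                         /* taken between the reads */

              /* the MSB must have stayed put for a sane number of reads */
              return count > 5;
          }

      An SMI shows up as an inflated delta, so that iteration simply fails the
      500ppm convergence test instead of silently skewing the result.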
      
      As the loop now takes longer to stabilize, double MAX_QUICK_PIT_MS to 50.
      
      SMI disturbance will show up as much larger deltas and the loop will take
      longer than usual for the result to be within the accepted precision. Or it
      will fall back to slow PIT calibration if it takes more than 50msec.
      
      Also while we are at this, remove the calibration correction that aims to
      get the result to the middle of the error bars. We really don't know which
      direction to correct into, so remove it.
      Reported-and-tested-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1326843337.5291.4.camel@sbsiddha-mobl2
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      68f30fbe
    • audit: inline audit_syscall_entry to reduce burden on archs · b05d8447
      Eric Paris authored
      Every arch calls:
      
      if (unlikely(current->audit_context))
      	audit_syscall_entry()
      
      which requires knowledge about audit (the existence of audit_context) in
      the arch code.  Just do it all in a static inline in audit.h so that arches
      can remain blissfully ignorant.
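
      A hedged sketch of the wrapper (the argument list follows the prototype
      of that era and may be abbreviated):

          /* include/linux/audit.h */
          static inline void audit_syscall_entry(int arch, int major,
                                                 unsigned long a0, unsigned long a1,
                                                 unsigned long a2, unsigned long a3)
          {
              if (unlikely(current->audit_context))
                  __audit_syscall_entry(arch, major, a0, a1, a2, a3);
          }
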
      Signed-off-by: Eric Paris <eparis@redhat.com>
      b05d8447