1. 29 Nov, 2016 1 commit
  2. 25 Nov, 2016 1 commit
  3. 23 Nov, 2016 3 commits
  4. 17 Nov, 2016 1 commit
  5. 15 Nov, 2016 1 commit
  6. 11 Nov, 2016 4 commits
  7. 01 Nov, 2016 1 commit
  8. 28 Oct, 2016 3 commits
  9. 24 Oct, 2016 1 commit
  10. 17 Oct, 2016 4 commits
    • s390/dumpstack: get rid of return_address again · dcddba96
      Heiko Carstens committed
      With commit ef6000b4 ("Disable the __builtin_return_address()
      warning globally after all") the kernel no longer warns if
      __builtin_return_address(n) is called with n > 0.
      
      Besides the fact that this was a false warning on s390 anyway, due to
      the always present backchain, we can now revert commit 56063306
      ("s390/dumpstack: implement and use return_address()") to simplify
      the code again.
      
      In hindsight, return_address() should not have been implemented at all
      just to work around this warning. So get rid of it again.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
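For context, here is a minimal userspace sketch of the builtin this commit chain revolves around (the function name is hypothetical, and this is not s390 kernel code): level 0 is always safe to use, while levels greater than 0 are what used to trigger the GCC warning, being unreliable on architectures without a backchain.

```c
#include <stddef.h>

/* Level 0 is always valid: the address this function will return to.
 * Levels > 0 walk the caller chain and triggered the (now disabled)
 * GCC warning, since they are unreliable on many architectures;
 * s390 is an exception thanks to its always-present backchain. */
__attribute__((noinline))
static void *callers_pc(void)
{
    return __builtin_return_address(0);
}
```

Compiled with GCC or Clang, calling `callers_pc()` yields a non-NULL code address inside the caller.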
    • s390/disassambler: use pr_cont where appropriate · 4d062487
      Heiko Carstens committed
      Just like for dumpstack, use pr_cont instead of simple printk calls to
      fix the output when disassembling a piece of code.
      
      Before:
      [    0.840627] Krnl Code: 000000000017d1c6: a77400f7            brc     7,17d3b4
      [    0.840630]
                                000000000017d1ca: 92015000            mvi     0(%r5),1
      [    0.840634]
                               #000000000017d1ce: a7f40001            brc     15,17d1d0
      
      After:
      [    0.831792] Krnl Code: 000000000017d13e: a77400f7            brc     7,17d32c
                                000000000017d142: 92015000            mvi     0(%r5),1
                               #000000000017d146: a7f40001            brc     15,17d148
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/dumpstack: use pr_cont where appropriate · a7906345
      Heiko Carstens committed
      Use pr_cont instead of simple printk calls when lines will be
      continued. This fixes the kernel output of various lines printed
      during e.g. a warning:
      
      Before:
      [    0.840604] Krnl PSW : 0404c00180000000 000000000017d1d2
      [    0.840606]  (try_to_wake_up+0x382/0x5e0)
      
      [    0.840610]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0
      [    0.840611]  RI:0 EA:3
      
      After:
      [    0.831772] Krnl PSW : 0404c00180000000 000000000017d14a (try_to_wake_up+0x382/0x5e0)
      [    0.831776]            R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
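The printk-vs-pr_cont behaviour both of the pr_cont patches above rely on can be modelled in userspace. This is a hedged toy model, not kernel API: kprintk/kpr_cont are hypothetical stand-ins. A plain printk starts a new log record (which dmesg stamps with its own timestamp), while pr_cont appends to the current record, keeping a logical line whole.

```c
#include <string.h>

/* Toy model of the kernel log buffer. */
static char logbuf[256];

/* Stand-in for printk(): every call starts a new record, i.e. a new
 * line that gets its own timestamp in the dmesg output. */
static void kprintk(const char *s)
{
    strncat(logbuf, "\n", sizeof(logbuf) - strlen(logbuf) - 1);
    strncat(logbuf, s, sizeof(logbuf) - strlen(logbuf) - 1);
}

/* Stand-in for pr_cont(): the text is appended to the current record,
 * so "Krnl PSW : ..." and its continuation end up on one line. */
static void kpr_cont(const char *s)
{
    strncat(logbuf, s, sizeof(logbuf) - strlen(logbuf) - 1);
}
```

With kprintk for the line start and kpr_cont for the continuation, the PSW line from the "After" output above stays on a single record instead of being split across timestamped fragments.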
    • s390/dumpstack: restore reliable indicator for call traces · d0208639
      Heiko Carstens committed
      Before all the different stack tracers were merged, the printed call
      traces had an indicator of whether an entry can be considered reliable
      or not: unreliable entries were put in parentheses, reliable ones were
      not. Currently all lines contain these extra parentheses.
      
      This patch restores the old behaviour by adding an extra "reliable"
      parameter to the callback functions. Only show_trace currently makes
      use of it.
      
      Before:
      [    0.804751] Call Trace:
      [    0.804753] ([<000000000017d0e0>] try_to_wake_up+0x318/0x5e0)
      [    0.804756] ([<0000000000161d64>] create_worker+0x174/0x1c0)
      
      After:
      [    0.804751] Call Trace:
      [    0.804753] ([<000000000017d0e0>] try_to_wake_up+0x318/0x5e0)
      [    0.804756]  [<0000000000161d64>] create_worker+0x174/0x1c0
      
      Fixes: 758d39eb ("s390/dumpstack: merge all four stack tracers")
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
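The bracketing shown in the before/after output can be sketched as a small formatter; the function name and signature are hypothetical, shown only to illustrate the role of the extra "reliable" parameter passed to the callbacks.

```c
#include <stdio.h>

/* Hypothetical printer for one call-trace entry: with the restored
 * behaviour, only unreliable entries are wrapped in parentheses. */
static void format_trace_entry(char *buf, size_t len,
                               unsigned long addr, int reliable)
{
    if (reliable)
        snprintf(buf, len, " [<%016lx>] ", addr);
    else
        snprintf(buf, len, "([<%016lx>])", addr);
}
```

Applied to the example above, try_to_wake_up (unreliable) keeps its parentheses while create_worker (reliable) loses them.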
  11. 08 Oct, 2016 2 commits
  12. 20 Sep, 2016 3 commits
  13. 08 Sep, 2016 1 commit
  14. 29 Aug, 2016 6 commits
    • s390/nmi: improve revalidation of fpu / vector registers · 8f149ea6
      Martin Schwidefsky committed
      The machine check handler will do one of two things if the floating-point
      control, a floating point register or a vector register cannot be
      revalidated:
      1) if the PSW indicates user mode, the process is terminated
      2) if the PSW indicates kernel mode, the system is stopped
      
      Unconditionally stopping the system in case 2) is incorrect.
      
      There are three possible outcomes if the floating-point control, a
      floating point register or a vector register cannot be revalidated:
      1) The kernel is inside a kernel_fpu_begin/kernel_fpu_end block and
         needs the register. The system is stopped.
      2) No active kernel_fpu_begin/kernel_fpu_end block and the CIF_FPU bit
         is not set. The user space process needs the register and is killed.
      3) No active kernel_fpu_begin/kernel_fpu_end block and the CIF_FPU bit
         is set. Neither the kernel nor the user space process needs the
         lost register. Just revalidate it and continue.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
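The three outcomes above condense into a small decision function. This is a sketch with hypothetical names, not the actual s390 machine check code: the inputs stand for "inside a kernel_fpu_begin/kernel_fpu_end block" and "CIF_FPU set".

```c
/* Possible outcomes when a floating-point/vector register cannot be
 * revalidated after a machine check. */
enum mck_action { STOP_SYSTEM, KILL_PROCESS, REVALIDATE_ONLY };

/* in_kernel_fpu: inside a kernel_fpu_begin/kernel_fpu_end block;
 * cif_fpu_set:   CIF_FPU is set, i.e. the user space registers were
 *                already saved and the live register is stale anyway. */
static enum mck_action fpu_mck_action(int in_kernel_fpu, int cif_fpu_set)
{
    if (in_kernel_fpu)
        return STOP_SYSTEM;     /* 1) the kernel needs the lost register */
    if (!cif_fpu_set)
        return KILL_PROCESS;    /* 2) user space owns the live register */
    return REVALIDATE_ONLY;     /* 3) nobody needs it, just revalidate */
}
```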
    • s390/fpu: improve kernel_fpu_[begin|end] · 7f79695c
      Martin Schwidefsky committed
      In case of nested use of the FPU or vector registers in the kernel
      the current code uses the mask of the FPU/vector registers of the
      previous contexts to decide which registers to save and restore.
      E.g. if the previous context used KERNEL_VXR_V0V7 and the next
      context wants to use KERNEL_VXR_V24V31, the first 8 vector registers
      are stored to the FPU state structure. But this is not necessary,
      as the next context does not use these registers.
      
      Rework the FPU/vector register save and restore code. The new code
      does a few things differently:
      1) A lowcore field is used instead of a per-cpu variable.
      2) The kernel_fpu_end function now has two parameters just like
         kernel_fpu_begin. The register flags are required by both
         functions to save / restore the minimal register set.
      3) The inline functions kernel_fpu_begin/kernel_fpu_end now do the
         update of the register masks. If the user space FPU registers
         have already been stored neither save_fpu_regs nor the
         __kernel_fpu_begin/__kernel_fpu_end functions have to be called
         for the first context. In this case kernel_fpu_begin adds 7
         instructions and kernel_fpu_end adds 4 instructions.
      4) The inline assemblies in __kernel_fpu_begin / __kernel_fpu_end
         to save / restore the vector registers are simplified a bit.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
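The save-set optimisation described above boils down to a mask intersection. In this sketch only the flag names mirror the KERNEL_VXR_* constants mentioned in the message; the bit values are illustrative assumptions, not the real kernel definitions.

```c
/* Illustrative bit values; the real definitions live in the s390
 * FPU headers and may differ. */
#define KERNEL_VXR_V0V7   0x01u
#define KERNEL_VXR_V8V15  0x02u
#define KERNEL_VXR_V16V23 0x04u
#define KERNEL_VXR_V24V31 0x08u

/* Only registers that an outer context actually uses AND that the
 * nested context is about to clobber need to be saved. */
static unsigned int vxrs_to_save(unsigned int outer_mask,
                                 unsigned int nested_mask)
{
    return outer_mask & nested_mask;
}
```

For the example in the message (outer context KERNEL_VXR_V0V7, nested context KERNEL_VXR_V24V31) the intersection is empty, so nothing has to be stored.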
    • s390/time: avoid races when updating tb_update_count · 67f03de5
      David Hildenbrand committed
      The increment might not be atomic and we're not holding the
      timekeeper_lock. Therefore we might lose an update to count, resulting
      in the VDSO being trapped in a loop. As other archs also simply update
      the values, and count doesn't seem to have an impact on reloading of
      these values in the VDSO code, let's just remove the update of
      tb_update_count.
      Suggested-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/time: fixup the clock comparator on all cpus · 0c00b1e0
      David Hildenbrand committed
      Until now, fixup_cc was left unset, so only the clock comparator of the
      cpu actually doing the sync was fixed up.
      Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/time: cleanup etr leftovers · ca64f639
      David Hildenbrand committed
      There are still some etr leftovers and wrong comments, let's clean that up.
      Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/time: simplify stp time syncs · 41ad0220
      David Hildenbrand committed
      The way we call do_adjtimex() today is broken. It has no effect, as
      ADJ_OFFSET_SINGLESHOT (0x0001) in the kernel maps to !ADJ_ADJTIME
      (in contrast to user space, where it maps to ADJ_OFFSET_SINGLESHOT |
      ADJ_ADJTIME = 0x8001). !ADJ_ADJTIME will silently ignore all adjustments
      unless STA_PLL is active. We could switch to ADJ_ADJTIME or turn
      STA_PLL on, but we would still run into some problems:
      
      - Even when switching to nanoseconds, we lose accuracy.
      - Successive calls to do_adjtimex() will simply overwrite any leftovers
        from the previous call (if not fully handled).
      - Anything that NTP does via the sysctl interface heavily interferes
        with our use.
      - !ADJ_ADJTIME will silently round offsets greater or smaller than
        0.5 seconds.
      
      Reusing do_adjtimex() here just feels wrong. Right now the whole STP
      synchronization only works *somehow*, because do_adjtimex() does nothing
      and our TOD clock jumps in time, although it shouldn't. This is
      especially bad as the clock could jump backwards in time. We will have
      to find another way to fix this up.
      
      As leap seconds are also not properly handled yet, let's just get rid of
      all this complex logic altogether and use the correct clock_delta for
      fixing up the clock comparator and keeping the sched_clock monotonic.
      
      This change should have no effect on the current STP mechanism. Once we
      know how to best handle sync events and leap second updates, we'll start
      with a fresh implementation.
      Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
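The modes mix-up described above can be checked against the actual uapi constants. The helper name below is hypothetical, but the constant values are the real ones from <linux/timex.h>.

```c
/* Real uapi values (include/uapi/linux/timex.h). */
#define ADJ_ADJTIME           0x8000  /* switch between adjtime/adjtimex modes */
#define ADJ_OFFSET_SINGLESHOT 0x8001  /* old-fashioned adjtime() */

/* The kernel takes the one-shot adjtime path only if ADJ_ADJTIME is
 * set. Passing the kernel-internal value 0x0001 lacks that bit, so
 * the call falls through to the adjtimex-style path, where the
 * adjustment is silently ignored without STA_PLL. */
static int takes_adjtime_path(unsigned int modes)
{
    return (modes & ADJ_ADJTIME) != 0;
}
```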
  15. 27 Aug, 2016 1 commit
  16. 24 Aug, 2016 1 commit
    • ftrace: Add return address pointer to ftrace_ret_stack · 9a7c348b
      Josh Poimboeuf committed
      Storing this value will help prevent unwinders from getting out of sync
      with the function graph tracer ret_stack.  Now instead of needing a
      stateful iterator, they can compare the return address pointer to find
      the right ret_stack entry.
      
      Note that an array of 50 ftrace_ret_stack structs is allocated for every
      task.  So when an arch implements this, it will add either 200 or 400
      bytes of memory usage per task (depending on whether it's a 32-bit or
      64-bit platform).
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/a95cfcc39e8f26b89a430c56926af0bb217bc0a1.1471607358.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
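The per-task figure quoted above is simply one extra pointer per entry across the 50-entry ret_stack array. A quick back-of-the-envelope check (RET_STACK_DEPTH is a stand-in name for the kernel's depth constant):

```c
#include <stddef.h>

#define RET_STACK_DEPTH 50  /* ftrace_ret_stack entries allocated per task */

/* One new return-address pointer per ftrace_ret_stack entry:
 * 50 * 4 = 200 bytes on 32-bit, 50 * 8 = 400 bytes on 64-bit. */
static size_t extra_bytes_per_task(size_t ptr_size)
{
    return RET_STACK_DEPTH * ptr_size;
}
```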
  17. 08 Aug, 2016 2 commits
  18. 31 Jul, 2016 4 commits