1. 16 September 2019, 2 commits
    • powerpc/tm: Remove msr_tm_active() · 052bc385
      Authored by Breno Leitao
      [ Upstream commit 5c784c8414fba11b62e12439f11e109fb5751f38 ]
      
      Currently msr_tm_active() is a wrapper around MSR_TM_ACTIVE() if
      CONFIG_PPC_TRANSACTIONAL_MEM is set, or it is just a function that
      returns false if CONFIG_PPC_TRANSACTIONAL_MEM is not set.
      
      This function is not necessary, since MSR_TM_ACTIVE() does the same thing
      and can be used directly, removing the dualism and simplifying the code.
      
      This patch removes every instance of msr_tm_active() and replaces it
      with MSR_TM_ACTIVE().
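      
      A rough sketch of the dualism being removed (simplified; the real
      definitions live in arch/powerpc/kernel/process.c and MSR_TM_ACTIVE()
      comes from asm/reg.h):
      
        /* Before: a wrapper whose only job is to mirror the macro */
        #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
        static inline bool msr_tm_active(unsigned long msr)
        {
                return MSR_TM_ACTIVE(msr);
        }
        #else
        static inline bool msr_tm_active(unsigned long msr) { return false; }
        #endif
      
        /* After: callers simply test the macro, e.g. */
        if (MSR_TM_ACTIVE(tsk->thread.regs->msr))
                check_if_tm_restore_required(tsk);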
      Signed-off-by: Breno Leitao <leitao@debian.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      052bc385
    • powerpc/tm: Fix FP/VMX unavailable exceptions inside a transaction · 47a0f70d
      Authored by Gustavo Romero
      commit 8205d5d98ef7f155de211f5e2eb6ca03d95a5a60 upstream.
      
      When we take an FP unavailable exception in a transaction we have to
      account for the hardware FP TM checkpointed registers being
      incorrect. In this case for this process we know the current and
      checkpointed FP registers must be the same (since FP wasn't used
      inside the transaction) hence in the thread_struct we copy the current
      FP registers to the checkpointed ones.
      
      This copy is done in tm_reclaim_thread(). We use thread->ckpt_regs.msr
      to determine if FP was on when in userspace. thread->ckpt_regs.msr
      represents the state of the MSR when exiting userspace. This is set up
      by check_if_tm_restore_required().
      
      Unfortunately there is an optimisation in giveup_all() which returns
      early if tsk->thread.regs->msr (via local variable `usermsr`) has
      FP=VEC=VSX=SPE=0. This optimisation means that
      check_if_tm_restore_required() is not called and hence
      thread->ckpt_regs.msr is not updated and will contain an old value.
      
      This can happen if, due to load_fp=255, we start a userspace process
      with MSR FP=1 and are then context switched out. In this case
      thread->ckpt_regs.msr will contain FP=1. If that same process is then
      context switched back in and load_fp overflows, the MSR will have FP=0.
      If that process now enters a transaction and executes an FP instruction,
      the FP unavailable exception will not update thread->ckpt_regs.msr (the
      bug) and the stale FP=1 will be retained in thread->ckpt_regs.msr.
      tm_reclaim_thread() will then not perform the required memcpy and the
      checkpointed FP regs in the thread struct will contain the wrong values.
      
      The code path for this happening is:
      
             Userspace:                      Kernel
                         Start userspace
                          with MSR FP/VEC/VSX/SPE=0 TM=1
                            < -----
             ...
             tbegin
             bne
             fp instruction
                         FP unavailable
                             ---- >
                                              fp_unavailable_tm()
                                                tm_reclaim_current()
                                                  tm_reclaim_thread()
                                                    giveup_all()
                                                      return early since FP/VMX/VSX=0
                                                      /* ckpt MSR not updated (Incorrect) */
                                                    tm_reclaim()
                                                      /* thread_struct ckpt FP regs contain junk (OK) */
                                                      /* Sees ckpt MSR FP=1 (Incorrect) */
                                                    no memcpy() performed
                                                      /* thread_struct ckpt FP regs not fixed (Incorrect) */
                                                tm_recheckpoint()
                                                  /* Put junk in hardware checkpoint FP regs */
                                              ....
                            < -----
                         Return to userspace
                           with MSR TM=1 FP=1
                           with junk in the FP TM checkpoint
             TM rollback
             reads FP junk
      
      This is a data integrity problem for the current process as the FP
      registers are corrupted. It's also a security problem as the FP
      registers from one process may be leaked to another.
      
      This patch moves up check_if_tm_restore_required() in giveup_all() to
      ensure thread->ckpt_regs.msr is updated correctly.
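      
      Roughly, the reordering looks like this (abridged from giveup_all() in
      arch/powerpc/kernel/process.c; unrelated details omitted):
      
        void giveup_all(struct task_struct *tsk)
        {
                unsigned long usermsr;
      
                if (!tsk->thread.regs)
                        return;
      
                check_if_tm_restore_required(tsk);  /* moved above the early return,
                                                     * so ckpt_regs.msr is always updated */
      
                usermsr = tsk->thread.regs->msr;
                if ((usermsr & msr_all_available) == 0)
                        return;                     /* FP=VEC=VSX=SPE=0: nothing to give up */
      
                /* ... give up FP/Altivec/VSX/SPE as before ... */
        }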
      
      A simple testcase to replicate this will be posted to
      tools/testing/selftests/powerpc/tm/tm-poison.c
      
      Similarly for VMX.
      
      This fixes CVE-2019-15030.
      
      Fixes: f48e91e8 ("powerpc/tm: Fix FP and VMX register corruption")
      Cc: stable@vger.kernel.org # 4.12+
      Signed-off-by: Gustavo Romero <gromero@linux.vnet.ibm.com>
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20190904045529.23002-1-gromero@linux.vnet.ibm.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      47a0f70d
  2. 24 March 2019, 1 commit
    • powerpc: Fix 32-bit KVM-PR lockup and host crash with MacOS guest · 344996a8
      Authored by Mark Cave-Ayland
      commit fe1ef6bcdb4fca33434256a802a3ed6aacf0bd2f upstream.
      
      Commit 8792468d "powerpc: Add the ability to save FPU without
      giving it up" unexpectedly removed the MSR_FE0 and MSR_FE1 bits from
      the bitmask used to update the MSR of the previous thread in
      __giveup_fpu(), causing a KVM-PR MacOS guest to lock up and panic the
      host kernel.
      
      Leaving FE0/1 enabled means unrelated processes might receive FPEs
      when they're not expecting them and crash. In particular, if this
      happens to init, the host will then panic.
      
      eg (transcribed):
        qemu-system-ppc[837]: unhandled signal 8 at 12cc9ce4 nip 12cc9ce4 lr 12cc9ca4 code 0
        systemd[1]: unhandled signal 8 at 202f02e0 nip 202f02e0 lr 001003d4 code 0
        Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
      
      Reinstate these bits to the MSR bitmask to enable MacOS guests to run
      under 32-bit KVM-PR once again without issue.
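      
      The fix is essentially one mask; simplified from __giveup_fpu():
      
        static void __giveup_fpu(struct task_struct *tsk)
        {
                unsigned long msr;
      
                save_fpu(tsk);
                msr = tsk->thread.regs->msr;
                /* was: msr &= ~MSR_FP;  -- FE0/FE1 must be cleared as well */
                msr &= ~(MSR_FP | MSR_FE0 | MSR_FE1);
                if (cpu_has_feature(CPU_FTR_VSX))
                        msr &= ~MSR_VSX;
                tsk->thread.regs->msr = msr;
        }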
      
      Fixes: 8792468d ("powerpc: Add the ability to save FPU without giving it up")
      Cc: stable@vger.kernel.org # v4.6+
      Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      344996a8
  3. 05 October 2018, 1 commit
    • powerpc: Don't print kernel instructions in show_user_instructions() · a932ed3b
      Authored by Michael Ellerman
      Recently we implemented show_user_instructions() which dumps the code
      around the NIP when a user space process dies with an unhandled
      signal. This was modelled on the x86 code, and we even went so far as
      to implement the exact same bug, namely that if the user process
      crashed with its NIP pointing into the kernel we will dump kernel text
      to dmesg. eg:
      
        bad-bctr[2996]: segfault (11) at c000000000010000 nip c000000000010000 lr 12d0b0894 code 1
        bad-bctr[2996]: code: fbe10068 7cbe2b78 7c7f1b78 fb610048 38a10028 38810020 fb810050 7f8802a6
        bad-bctr[2996]: code: 3860001c f8010080 48242371 60000000 <7c7b1b79> 4082002c e8010080 eb610048
      
      This was discovered on x86 by Jann Horn and fixed in commit
      342db04a ("x86/dumpstack: Don't dump kernel memory based on usermode RIP").
      
      Fix it by checking the adjusted NIP value (pc) and number of
      instructions against USER_DS, and bail if we fail the check, eg:
      
        bad-bctr[2969]: segfault (11) at c000000000010000 nip c000000000010000 lr 107930894 code 1
        bad-bctr[2969]: Bad NIP, not dumping instructions.
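      
      The guard amounts to something like the following (illustrative sketch;
      the constant name is a stand-in for whatever the patch actually uses):
      
        /* pc is the (adjusted) address we are about to dump from */
        if (!__access_ok(pc, NR_INSNS_TO_DUMP * sizeof(int), USER_DS)) {
                pr_info("%s[%d]: Bad NIP, not dumping instructions.\n",
                        current->comm, current->pid);
                return;
        }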
      
      Fixes: 88b0fe17 ("powerpc: Add show_user_instructions()")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      a932ed3b
  4. 07 August 2018, 1 commit
    • powerpc: Add show_user_instructions() · 88b0fe17
      Authored by Murilo Opsfelder Araujo
      show_user_instructions() is a slightly modified version of
      show_instructions() that dumps userspace instructions.
      
      This will be useful within show_signal_msg() to dump the userspace
      instructions at the faulting location.
      
      Here is a sample of what show_user_instructions() outputs:
      
        pandafault[10850]: code: 4bfffeec 4bfffee8 3c401002 38427f00 fbe1fff8 f821ffc1 7c3f0b78 3d22fffe
        pandafault[10850]: code: 392988d0 f93f0020 e93f0020 39400048 <99490000> 39200000 7d234b78 383f0040
      
      The current->comm and current->pid that are printed serve as glue,
      linking the instruction dump to its originator even when messages are
      interleaved in the logs.
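      
      An illustrative sketch of the dump loop (not the literal patch; the real
      code prints two rows of eight instructions around the NIP):
      
        unsigned long pc = regs->nip & ~0x1fUL;         /* illustrative window start */
        int i;
      
        pr_info("%s[%d]: code: ", current->comm, current->pid);
        for (i = 0; i < 8; i++, pc += 4) {
                unsigned int instr;
      
                if (__get_user(instr, (unsigned int __user *)pc))
                        break;
                /* the instruction at the NIP is wrapped in <...> */
                pr_cont(pc == regs->nip ? "<%08x> " : "%08x ", instr);
        }
        pr_cont("\n");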
      Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      88b0fe17
  5. 30 July 2018, 1 commit
  6. 24 July 2018, 2 commits
  7. 16 July 2018, 1 commit
  8. 03 June 2018, 5 commits
  9. 24 May 2018, 1 commit
  10. 25 April 2018, 1 commit
    • signal: Ensure every siginfo we send has all bits initialized · 3eb0f519
      Authored by Eric W. Biederman
      Call clear_siginfo to ensure every stack allocated siginfo is properly
      initialized before being passed to the signal sending functions.
      
      Note: It is not safe to depend on C initializers to initialize struct
      siginfo on the stack because C is allowed to skip holes when
      initializing a structure.
      
      The initialization of struct siginfo in tracehook_report_syscall_exit
      was moved from the helper user_single_step_siginfo into
      tracehook_report_syscall_exit itself, to make it clear that the local
      variable siginfo gets fully initialized.
      
      In a few cases the scope of struct siginfo has been reduced to make it
      clear that the siginfo is not used on other paths in the function in
      which it is declared.
      
      Instances of using memset to initialize siginfo have been replaced
      with calls to clear_siginfo() for clarity.
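      
      The resulting idiom, roughly (a generic example of the pattern rather
      than any one call site in the patch):
      
        struct siginfo info;
      
        clear_siginfo(&info);   /* zeroes the whole struct, padding holes included */
        info.si_signo = SIGTRAP;
        info.si_errno = 0;
        info.si_code  = TRAP_TRACE;
        info.si_addr  = (void __user *)regs->nip;
        force_sig_info(SIGTRAP, &info, current);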
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      3eb0f519
  11. 03 April 2018, 1 commit
    • powerpc: Don't write to DABR on >= Power8 if DAWR is disabled · 252988cb
      Authored by Nicholas Piggin
      flush_thread() calls __set_breakpoint() via set_debug_reg_defaults()
      without checking ppc_breakpoint_available(). On Power8 or later CPUs
      which have the DAWR feature disabled, that causes a write to the DABR,
      which is incorrect as those CPUs don't have a DABR.
      
      Fix it two ways, by checking ppc_breakpoint_available() in
      set_debug_reg_defaults(), and also by reworking __set_breakpoint() to
      only write to DABR on Power7 or earlier.
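      
      Condensed, the two-part fix looks roughly like this (helper and variable
      names as I recall them from that era's process.c; treat as illustrative,
      comments paraphrase the commit):
      
        static void set_debug_reg_defaults(struct thread_struct *thread)
        {
                thread->hw_brk.address = 0;
                thread->hw_brk.type = 0;
                if (ppc_breakpoint_available())
                        __set_breakpoint(&thread->hw_brk);  /* skipped when no DAWR/DABR */
        }
      
        void __set_breakpoint(struct arch_hw_breakpoint *brk)
        {
                memcpy(this_cpu_ptr(&current_brk), brk, sizeof(*brk));
      
                if (cpu_has_feature(CPU_FTR_DAWR))
                        set_dawr(brk);                  /* Power8 or later */
                else if (!cpu_has_feature(CPU_FTR_ARCH_207S))
                        set_dabr(brk);                  /* Power7 or earlier */
                else
                        WARN_ON_ONCE(1);                /* >= Power8 with DAWR disabled:
                                                         * never touch the (absent) DABR */
        }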
      
      Fixes: 96541531 ("powerpc: Disable DAWR in the base POWER9 CPU features")
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      [mpe: Rework the logic in __set_breakpoint()]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      252988cb
  12. 27 March 2018, 1 commit
  13. 13 March 2018, 1 commit
  14. 27 January 2018, 1 commit
  15. 23 January 2018, 2 commits
  16. 20 January 2018, 1 commit
  17. 19 January 2018, 2 commits
  18. 16 January 2018, 1 commit
  19. 19 December 2017, 1 commit
    • powerpc/kernel: Print actual address of regs when oopsing · 182dc9c7
      Authored by Michael Ellerman
      When we oops or otherwise call show_regs() we print the address of the
      regs structure. Being able to see the address is fairly useful,
      firstly to verify that the regs pointer is not completely bogus, and
      secondly it allows you to dump the regs and surrounding memory with a
      debugger if you have one.
      
      In the normal case the regs will be located somewhere on the stack, so
      printing their location discloses no further information than printing
      the stack pointer does already.
      
      So switch to %px and print the actual address, not the hashed value.
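      
      The change is only the format specifier; for contrast:
      
        pr_info("REGS: %p\n",  regs);   /* %p  prints a hashed value     */
        pr_info("REGS: %px\n", regs);   /* %px prints the actual address */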
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      182dc9c7
  20. 29 November 2017, 2 commits
    • powerpc: Do not assign thread.tidr if already assigned · 7e4d4233
      Authored by Vaibhav Jain
      If set_thread_tidr() is called twice for the same task_struct it will
      allocate a new tidr value, leaving the previous value dangling in the
      vas_thread_ida table.
      
      To fix this, the patch changes set_thread_tidr() to check whether a tidr
      value is already assigned to the task_struct and, if so, return zero.
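      
      The added check is a one-liner; abridged from set_thread_tidr():
      
        int set_thread_tidr(struct task_struct *t)
        {
                /* ... existing feature / current-task checks ... */
      
                if (t->thread.tidr)
                        return 0;       /* already assigned: keep it, allocate nothing */
      
                /* ... otherwise allocate a fresh id and write SPRN_TIDR ... */
        }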
      
      Fixes: ec233ede ("powerpc: Add support for setting SPRN_TIDR")
      Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
      Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
      [mpe: Modify to return 0 in the success case, not the TID value]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      7e4d4233
    • powerpc: Avoid signed to unsigned conversion in set_thread_tidr() · aca7573f
      Authored by Vaibhav Jain
      There is an unsafe signed to unsigned conversion in set_thread_tidr()
      that may cause an error value to be written to the SPRN_TIDR register
      and used as the thread id.
      
      The issue arises because assign_thread_tidr() returns an int while
      thread.tidr is an unsigned long, so a negative error code returned
      from assign_thread_tidr() slips past the error check and is stored in
      tidr as a large positive value.
      
      To fix this, the patch assigns the return value of assign_thread_tidr()
      to a temporary int and only assigns it to thread.tidr if it is > 0.
      
      The patch doesn't change the calling convention of set_thread_tidr():
      all negative return values are error codes and a return value of 0
      indicates success.
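      
      The fix routes the allocator's result through an int first, so an errno
      can never land in the unsigned long; roughly:
      
        int rc;
      
        rc = assign_thread_tidr();      /* a new id, or a negative errno */
        if (rc < 0)
                return rc;              /* propagated, never stored */
      
        t->thread.tidr = rc;            /* only a positive id reaches the
                                         * unsigned long thread.tidr */
        mtspr(SPRN_TIDR, t->thread.tidr);
        return 0;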
      
      Fixes: ec233ede ("powerpc: Add support for setting SPRN_TIDR")
      Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
      Reviewed-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      aca7573f
  21. 12 November 2017, 2 commits
  22. 06 November 2017, 4 commits
    • powerpc: Always save/restore checkpointed regs during treclaim/trecheckpoint · eb5c3f1c
      Authored by Cyril Bur
      Lazy save and restore of FP/Altivec means that a userspace process can
      be sent to userspace with FP or Altivec disabled and loaded only as
      required (by way of an FP/Altivec unavailable exception). Transactional
      Memory complicates this situation as a transaction could be started
      without FP/Altivec being loaded up. This causes the hardware to
      checkpoint incorrect registers. Handling FP/Altivec unavailable
      exceptions while a thread is transactional requires a reclaim and
      recheckpoint to ensure the CPU has correct state for both sets of
      registers.
      
      tm_reclaim() has optimisations to not always save the FP/Altivec
      registers to the checkpointed save area. This was originally done
      because the caller might have information that the checkpointed
      registers aren't valid due to lazy save and restore. We've also been a
      little vague as to how tm_reclaim() leaves the FP/Altivec state since it
      doesn't necessarily always save it to the thread struct. This has led
      to an (incorrect) assumption that it leaves the checkpointed state on
      the CPU.
      
      tm_recheckpoint() has similar optimisations in reverse. It may not
      always reload the checkpointed FP/Altivec registers from the thread
      struct before the trecheckpoint. It is therefore quite unclear where it
      expects to get the state from. This didn't help with the assumption
      made about tm_reclaim().
      
      These optimisations sit in what is by definition a slow path. If a
      process has to go through a reclaim/recheckpoint then its transaction
      will be doomed on returning to userspace. This means that the process
      will be unable to complete its transaction and will be forced into its
      failure handler. This is already an out-of-line case for userspace.
      Furthermore, the cost of copying 64 x 128 bits out of the registers is
      negligible[0] on modern processors. As such, these optimisations appear
      to have only increased code complexity and are unlikely to have had a
      measurable performance impact.
      
      Our transactional memory handling has been riddled with bugs. A cause
      of this has been difficulty in following the code flow, code complexity
      has not been our friend here. It makes sense to remove these
      optimisations in favour of a (hopefully) more stable implementation.
      
      This patch does mean that sometimes the assembly will needlessly save
      'junk' registers which will subsequently get overwritten with the
      correct value by the C code which calls the assembly function. This
      small inefficiency is far outweighed by the reduction in complexity for
      general TM code, context switching paths, and transactional facility
      unavailable exception handler.
      
      0: I tried to measure it once for other work and found that it was
      hiding in the noise of everything else I was working with. I find it
      exceedingly likely this will be the case here.
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      eb5c3f1c
    • powerpc: Force reload for recheckpoint during tm {fp, vec, vsx} unavailable exception · 91381b9c
      Authored by Cyril Bur
      Lazy save and restore of FP/Altivec means that a userspace process can
      be sent to userspace with FP or Altivec disabled and loaded only as
      required (by way of an FP/Altivec unavailable exception). Transactional
      Memory complicates this situation as a transaction could be started
      without FP/Altivec being loaded up. This causes the hardware to
      checkpoint incorrect registers. Handling FP/Altivec unavailable
      exceptions while a thread is transactional requires a reclaim and
      recheckpoint to ensure the CPU has correct state for both sets of
      registers.
      
      tm_reclaim() has optimisations to not always save the FP/Altivec
      registers to the checkpointed save area. This was originally done
      because the caller might have information that the checkpointed
      registers aren't valid due to lazy save and restore. We've also been a
      little vague as to how tm_reclaim() leaves the FP/Altivec state since it
      doesn't necessarily always save it to the thread struct. This has led
      to an (incorrect) assumption that it leaves the checkpointed state on
      the CPU.
      
      tm_recheckpoint() has similar optimisations in reverse. It may not
      always reload the checkpointed FP/Altivec registers from the thread
      struct before the trecheckpoint. It is therefore quite unclear where it
      expects to get the state from. This didn't help with the assumption
      made about tm_reclaim().
      
      This patch is a minimal fix for ease of backporting. A more correct fix
      which removes the msr parameter to tm_reclaim() and tm_recheckpoint()
      altogether has been upstreamed to apply on top of this patch.
      
      Fixes: dc310669 ("powerpc: tm: Always use fp_state and vr_state to store live registers")
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      91381b9c
    • powerpc: Don't enable FP/Altivec if not checkpointed · a7771176
      Authored by Cyril Bur
      Lazy save and restore of FP/Altivec means that a userspace process can
      be sent to userspace with FP or Altivec disabled and loaded only as
      required (by way of an FP/Altivec unavailable exception). Transactional
      Memory complicates this situation as a transaction could be started
      without FP/Altivec being loaded up. This causes the hardware to
      checkpoint incorrect registers. Handling FP/Altivec unavailable
      exceptions while a thread is transactional requires a reclaim and
      recheckpoint to ensure the CPU has correct state for both sets of
      registers.
      
      Lazy save and restore of FP/Altivec cannot be done if a process is
      transactional. If a facility was enabled it must remain enabled whenever
      a thread is transactional.
      
      Commit dc16b553 ("powerpc: Always restore FPU/VEC/VSX if hardware
      transactional memory in use") ensures that the facilities are always
      enabled if a thread is transactional. A bug in the introduced code may
      cause it to inadvertently enable a facility that was (and should remain)
      disabled. The problem with this extraneous enablement is that the
      registers for the erroneously enabled facility have not been correctly
      recheckpointed - the recheckpointing code assumed the facility would
      remain disabled.
      
      Further compounding the issue, the transactional {fp,altivec,vsx}
      unavailable code has been incorrectly using the MSR to enable
      facilities. The presence of the {FP,VEC,VSX} bits in regs->msr simply
      indicates whether the registers are live on the CPU, not whether the
      kernel should load them before returning to userspace. This has worked
      due to the bug
      mentioned above.
      
      This causes transactional threads which return to their failure handler
      to observe incorrect checkpointed registers. Perhaps an example will
      help illustrate the problem:
      
      A userspace process is running and uses both FP and Altivec registers.
      This process then continues to run for some time without touching
      either sets of registers. The kernel subsequently disables the
      facilities as part of lazy save and restore. The userspace process then
      performs a tbegin and the CPU checkpoints 'junk' FP and Altivec
      registers. The process then performs a floating point instruction
      triggering a fp unavailable exception in the kernel.
      
      The kernel then loads the FP registers - and only the FP registers.
      Since the thread is transactional it must perform a reclaim and
      recheckpoint to ensure both the checkpointed registers and the
      transactional registers are correct. It then (correctly) enables
      MSR[FP] for the process. Later (on exception exit) the kernel also
      (inadvertently) enables MSR[VEC]. The process is then returned to
      userspace.
      
      Since the act of loading the FP registers doomed the transaction, we know
      the CPU will fail the transaction, restore its checkpointed registers, and
      return the process to its failure handler. The problem is that we're
      now running with Altivec enabled and the 'junk' checkpointed registers
      are restored. The kernel had only recheckpointed FP.
      
      This patch solves this by only activating FP/Altivec if userspace was
      using them when it entered the kernel and not simply if the process is
      transactional.
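      
      Conceptually (an illustration of the idea only, not the literal diff;
      usermsr stands for the MSR value userspace had when it entered the
      kernel):
      
        /* Re-enable a facility on return to userspace only if userspace
         * actually had it live when it trapped into the kernel ... */
        if (usermsr & MSR_FP)
                regs->msr |= MSR_FP;
        if (usermsr & MSR_VEC)
                regs->msr |= MSR_VEC;
        /* ... never merely because the thread is transactional: being in a
         * transaction says nothing about which facilities were checkpointed. */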
      
      Fixes: dc16b553 ("powerpc: Always restore FPU/VEC/VSX if hardware transactional memory in use")
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      a7771176
    • powerpc/64s: Replace CONFIG_PPC_STD_MMU_64 with CONFIG_PPC_BOOK3S_64 · 4e003747
      Authored by Michael Ellerman
      CONFIG_PPC_STD_MMU_64 indicates support for the "standard" powerpc MMU
      on 64-bit CPUs. The "standard" MMU refers to the hash page table MMU
      found in "server" processors, from IBM mainly.
      
      Currently CONFIG_PPC_STD_MMU_64 is == CONFIG_PPC_BOOK3S_64. While it's
      annoying to have two symbols that always have the same value, it's not
      quite annoying enough to bother removing one.
      
      However with the arrival of Power9, we now have the situation where
      CONFIG_PPC_STD_MMU_64 is enabled, but the kernel is running using the
      Radix MMU - *not* the "standard" MMU. So it is now actively confusing
      to use it, because it implies that code is disabled or inactive when
      the Radix MMU is in use, but that is not necessarily true.
      
      So s/CONFIG_PPC_STD_MMU_64/CONFIG_PPC_BOOK3S_64/, and do some minor
      formatting updates of some of the affected lines.
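      
      The mechanical part of the change is a straight substitution, e.g.:
      
        /* before */
        #ifdef CONFIG_PPC_STD_MMU_64
                /* 64-bit Book3S only (hash *or* radix) */
        #endif
      
        /* after */
        #ifdef CONFIG_PPC_BOOK3S_64
                /* same code; the condition now says what it really guards */
        #endif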
      
      This will be a pain for backports, but c'est la vie.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      4e003747
  23. 21 October 2017, 2 commits
    • powerpc/tm: P9 disable transactionally suspended sigcontexts · 92fb8690
      Authored by Michael Neuling
      Unfortunately userspace can construct a sigcontext which enables
      suspend. Thus userspace can force Linux into a path where trechkpt is
      executed.
      
      This patch blocks this from happening on POWER9 by sanity checking
      sigcontexts passed in.
      
      ptrace doesn't have this problem as only MSR SE and BE can be changed
      via ptrace.
      
      This patch also adds a number of WARN_ON()s in case we ever enter
      suspend when we shouldn't. This should not happen, but if it does the
      symptoms are soft lockup warnings which are not obviously TM related,
      so the WARN_ON()s should make it obvious what's happening.
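      
      In outline, the sanity check amounts to the following (illustrative;
      tm_suspend_supported() is a stand-in name for the actual POWER9
      condition used by the patch):
      
        /* msr is the MSR image taken from the user-supplied sigcontext */
        if (MSR_TM_SUSPENDED(msr) && !tm_suspend_supported())  /* stand-in helper */
                return -EINVAL;         /* refuse to trecheckpoint into suspend */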
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      92fb8690
    • powerpc/powernv: Enable TM without suspend if possible · 54820530
      Authored by Michael Ellerman
      Some Power9 revisions can run in a mode where TM operates without
      suspended state. If we find ourselves on a CPU that might be in this
      mode, we query OPAL to check, and if so we reenable TM in CPU
      features, and enable a new user feature to signal to userspace that we
      are in this mode.
      
      We do not enable the "normal" user feature, PPC_FEATURE2_HTM, but we
      do enable PPC_FEATURE2_HTM_NOSC because that indicates to userspace
      that the kernel will abort transactions on syscall entry, which is
      true regardless of the suspend mode.
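      
      In terms of the user-visible feature bits, the effect is roughly this
      (a sketch; the real plumbing runs through the CPU-features/OPAL code,
      and the condition name is illustrative):
      
        if (tm_operates_without_suspend) {      /* illustrative name for the OPAL answer */
                cur_cpu_spec->cpu_user_features2 |= PPC_FEATURE2_HTM_NOSC;
                /* PPC_FEATURE2_HTM itself is deliberately left unset */
        }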
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      54820530
  24. 06 October 2017, 1 commit
  25. 28 August 2017, 2 commits
    • powerpc/oops: Line up NIP & MSR with other rows · a6036100
      Authored by Michael Ellerman
      This is purely cosmetic, but does look nicer IMHO:
      
      Before:
      
        task: c000000001453400 task.stack: c000000001c6c000
        NIP: c000000000a0fbfc LR: c000000000a0fbf4 CTR: c000000000ba6220
        REGS: c0000001fffef820 TRAP: 0300   Not tainted  (4.13.0-rc6-gcc-6.3.1-00234-g423af27f7d81)
        MSR: 8000000000009033 <SF,EE,ME,IR,DR,RI,LE>  CR: 88088242  XER: 00000000
        CFAR: c0000000000b3488 DAR: 0000000000000000 DSISR: 42000000 SOFTE: 0
      
      After:
        task: c000000001453400 task.stack: c000000001c6c000
        NIP:  c000000000a0fbfc LR: c000000000a0fbf4 CTR: c000000000ba6220
        REGS: c0000001fffef820 TRAP: 0300   Not tainted  (4.13.0-rc6-gcc-6.3.1-00234-g423af27f7d81-dirty)
        MSR:  8000000000009033 <SF,EE,ME,IR,DR,RI,LE>  CR: 88088242  XER: 00000000
        CFAR: c0000000000b34a4 DAR: 0000000000000000 DSISR: 42000000 SOFTE: 0
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      a6036100
    • powerpc/oops: Print CR/XER on same line as MSR · f6fc73fb
      Authored by Michael Ellerman
      Somehow we missed this when the pr_cont() changes went in. Fix CR/XER
      to go on the same line as MSR, as they have historically, eg:
      
        MSR: 8000000000009032 <SF,EE,ME,IR,DR,RI>  CR: 4804408a  XER: 20000000
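      
      Schematically, the fix just keeps CR/XER on the MSR line via pr_cont():
      
        printk("MSR:  %016lx ", regs->msr);
        /* ... MSR bit names printed with pr_cont() ... */
        pr_cont("CR: %08lx  XER: %08lx\n", regs->ccr, regs->xer);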
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f6fc73fb