1. 18 May 2020, 3 commits
  2. 15 May 2020, 2 commits
    • powerpc: Drop unneeded cast in task_pt_regs() · 24ac99e9
      Committed by Michael Ellerman
      There's no need to cast in task_pt_regs() as tsk->thread.regs should
      already be a struct pt_regs. If someone's using task_pt_regs() on
      something that's not a task but happens to have a thread.regs then
      we'll deal with them later.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20200428123152.73566-1-mpe@ellerman.id.au
      24ac99e9
    • powerpc/64: Don't initialise init_task->thread.regs · 7ffa8b7d
      Committed by Michael Ellerman
      Aneesh increased the size of struct pt_regs by 16 bytes and started
      seeing this WARN_ON:
      
        smp: Bringing up secondary CPUs ...
        ------------[ cut here ]------------
        WARNING: CPU: 0 PID: 0 at arch/powerpc/kernel/process.c:455 giveup_all+0xb4/0x110
        Modules linked in:
        CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.7.0-rc2-gcc-8.2.0-1.g8f6a41f-default+ #318
        NIP:  c00000000001a2b4 LR: c00000000001a29c CTR: c0000000031d0000
        REGS: c0000000026d3980 TRAP: 0700   Not tainted  (5.7.0-rc2-gcc-8.2.0-1.g8f6a41f-default+)
        MSR:  800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE>  CR: 48048224  XER: 00000000
        CFAR: c000000000019cc8 IRQMASK: 1
        GPR00: c00000000001a264 c0000000026d3c20 c0000000026d7200 800000000280b033
        GPR04: 0000000000000001 0000000000000000 0000000000000077 30206d7372203164
        GPR08: 0000000000002000 0000000002002000 800000000280b033 3230303030303030
        GPR12: 0000000000008800 c0000000031d0000 0000000000800050 0000000002000066
        GPR16: 000000000309a1a0 000000000309a4b0 000000000309a2d8 000000000309a890
        GPR20: 00000000030d0098 c00000000264da40 00000000fd620000 c0000000ff798080
        GPR24: c00000000264edf0 c0000001007469f0 00000000fd620000 c0000000020e5e90
        GPR28: c00000000264edf0 c00000000264d200 000000001db60000 c00000000264d200
        NIP [c00000000001a2b4] giveup_all+0xb4/0x110
        LR [c00000000001a29c] giveup_all+0x9c/0x110
        Call Trace:
        [c0000000026d3c20] [c00000000001a264] giveup_all+0x64/0x110 (unreliable)
        [c0000000026d3c90] [c00000000001ae34] __switch_to+0x104/0x480
        [c0000000026d3cf0] [c000000000e0b8a0] __schedule+0x320/0x970
        [c0000000026d3dd0] [c000000000e0c518] schedule_idle+0x38/0x70
        [c0000000026d3df0] [c00000000019c7c8] do_idle+0x248/0x3f0
        [c0000000026d3e70] [c00000000019cbb8] cpu_startup_entry+0x38/0x40
        [c0000000026d3ea0] [c000000000011bb0] rest_init+0xe0/0xf8
        [c0000000026d3ed0] [c000000002004820] start_kernel+0x990/0x9e0
        [c0000000026d3f90] [c00000000000c49c] start_here_common+0x1c/0x400
      
      Which was unexpected. The warning is checking the thread.regs->msr
      value of the task we are switching from:
      
        usermsr = tsk->thread.regs->msr;
        ...
        WARN_ON((usermsr & MSR_VSX) && !((usermsr & MSR_FP) && (usermsr & MSR_VEC)));
      
      ie. if MSR_VSX is set then both of MSR_FP and MSR_VEC are also set.
      
      Dumping tsk->thread.regs->msr we see that it's: 0x1db60000
      
      Which is not a normal looking MSR, in fact the only valid bit is
      MSR_VSX, all the other bits are reserved in the current definition of
      the MSR.
      
      We can see from the oops that it was swapper/0 that we were switching
      from when we hit the warning, ie. init_task. So its thread.regs points
      to the base (high addresses) in init_stack.
      
      Dumping the content of init_task->thread.regs, with the members of
      pt_regs annotated (the 16 bytes larger version), we see:
      
        0000000000000000 c000000002780080    gpr[0]     gpr[1]
        0000000000000000 c000000002666008    gpr[2]     gpr[3]
        c0000000026d3ed0 0000000000000078    gpr[4]     gpr[5]
        c000000000011b68 c000000002780080    gpr[6]     gpr[7]
        0000000000000000 0000000000000000    gpr[8]     gpr[9]
        c0000000026d3f90 0000800000002200    gpr[10]    gpr[11]
        c000000002004820 c0000000026d7200    gpr[12]    gpr[13]
        000000001db60000 c0000000010aabe8    gpr[14]    gpr[15]
        c0000000010aabe8 c0000000010aabe8    gpr[16]    gpr[17]
        c00000000294d598 0000000000000000    gpr[18]    gpr[19]
        0000000000000000 0000000000001ff8    gpr[20]    gpr[21]
        0000000000000000 c00000000206d608    gpr[22]    gpr[23]
        c00000000278e0cc 0000000000000000    gpr[24]    gpr[25]
        000000002fff0000 c000000000000000    gpr[26]    gpr[27]
        0000000002000000 0000000000000028    gpr[28]    gpr[29]
        000000001db60000 0000000004750000    gpr[30]    gpr[31]
        0000000002000000 000000001db60000    nip        msr
        0000000000000000 0000000000000000    orig_r3    ctr
        c00000000000c49c 0000000000000000    link       xer
        0000000000000000 0000000000000000    ccr        softe
        0000000000000000 0000000000000000    trap       dar
        0000000000000000 0000000000000000    dsisr      result
        0000000000000000 0000000000000000    ppr        kuap
        0000000000000000 0000000000000000    pad[2]     pad[3]
      
      This looks suspiciously like stack frames, not a pt_regs. If we look
      closely we can see return addresses from the stack trace above,
      c000000002004820 (start_kernel) and c00000000000c49c (start_here_common).
      
      init_task->thread.regs is set up at build time in processor.h:
      
        #define INIT_THREAD  { \
        	.ksp = INIT_SP, \
        	.regs = (struct pt_regs *)INIT_SP - 1, /* XXX bogus, I think */ \
      
      The early boot code where we setup the initial stack is:
      
        LOAD_REG_ADDR(r3,init_thread_union)
      
        /* set up a stack pointer */
        LOAD_REG_IMMEDIATE(r1,THREAD_SIZE)
        add	r1,r3,r1
        li	r0,0
        stdu	r0,-STACK_FRAME_OVERHEAD(r1)
      
      Which creates a stack frame of size 112 bytes (STACK_FRAME_OVERHEAD).
      Which is far too small to contain a pt_regs.
      
      So the result is init_task->thread.regs is pointing at some stack
      frames on the init stack, not at a pt_regs.
      
      We have gotten away with this for so long because with pt_regs at its
      current size the MSR happens to point into the first frame, at a
      location that is not written to by the early asm. With the 16 byte
      expansion the MSR falls into the second frame, which is used by the
      compiler, and collides with a saved register that tends to be
      non-zero.
      
      As far as I can see this has been wrong since the original merge of
      64-bit ppc support, back in 2002.
      
      Conceptually swapper should have no regs, it never entered from
      userspace, and in fact that's what we do on 32-bit. It's also
      presumably what the "bogus" comment is referring to.
      
      So I think the right fix is to just not-initialise regs at all. I'm
      slightly worried this will break some code that isn't prepared for a
      NULL regs, but we'll have to see.
      
      Remove the comment in head_64.S which refers to us setting up the
      regs (even though we never did), and is otherwise not really accurate
      any more.
      Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20200428123130.73078-1-mpe@ellerman.id.au
      7ffa8b7d
  3. 20 Apr 2020, 1 commit
  4. 18 Feb 2020, 1 commit
    • powerpc/32s: Fix DSI and ISI exceptions for CONFIG_VMAP_STACK · 232ca1ee
      Committed by Christophe Leroy
      hash_page() needs to read page tables from kernel memory. When entire
      kernel memory is mapped by BATs, which is normally the case when
      CONFIG_STRICT_KERNEL_RWX is not set, it works even if the page hosting
      the page table is not referenced in the MMU hash table.
      
      However, if the page where the page table resides is not covered by
      a BAT, a DSI fault can be encountered from hash_page(), and it loops
      forever. This can happen when CONFIG_STRICT_KERNEL_RWX is selected
      and the alignment of the different regions is too small to allow
      covering the entire memory with BATs. This also happens when
      CONFIG_DEBUG_PAGEALLOC is selected or when booting with 'nobats'
      flag.
      
      Also, if the page containing the kernel stack is not present in the
      MMU hash table, registers cannot be saved and a recursive DSI fault
      is encountered.
      
      To allow hash_page() to properly do its job at all times and load the
      MMU hash table whenever needed, it must run with the data MMU
      disabled. This means it must be called before re-enabling the data
      MMU. To allow this, registers clobbered by hash_page() and
      create_hpte() have to be saved in the thread struct together with
      SRR0, SRR1, DAR and DSISR.
      It is also necessary to ensure that DSI prolog doesn't overwrite
      regs saved by prolog of the current running exception. That means:
      - DSI can only use SPRN_SPRG_SCRATCH0
      - Exceptions must free SPRN_SPRG_SCRATCH0 before writing to the stack.
      
      This also fixes the Oops reported by Erhard when create_hpte() is
      called by add_hash_page().
      
      Due to prolog size increase, a few more exceptions had to get split
      in two parts.
      
      Fixes: cd08f109 ("powerpc/32s: Enable CONFIG_VMAP_STACK")
      Reported-by: Erhard F. <erhard_f@mailbox.org>
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Tested-by: Erhard F. <erhard_f@mailbox.org>
      Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=206501
      Link: https://lore.kernel.org/r/64a4aa44686e9fd4b01333401367029771d9b231.1581761633.git.christophe.leroy@c-s.fr
      232ca1ee
  5. 26 Jan 2020, 1 commit
  6. 16 Jan 2020, 1 commit
  7. 15 Jun 2019, 1 commit
  8. 31 May 2019, 1 commit
  9. 30 Apr 2019, 1 commit
    • powerpc/64s: Reimplement book3s idle code in C · 10d91611
      Committed by Nicholas Piggin
      Reimplement the Book3S idle code in C, moving the POWER7/8/9
      implementation-specific HV idle code to the powernv platform code.
      
      Book3S assembly stubs are kept in common code and used only to save
      the stack frame and non-volatile GPRs before executing architected
      idle instructions, and restoring the stack and reloading GPRs then
      returning to C after waking from idle.
      
      The complex logic dealing with threads and subcores, locking, SPRs,
      HMIs, timebase resync, etc., is all done in C which makes it more
      maintainable.
      
      This is not a strict translation to C; there are some
      significant differences:
      
      - Idle wakeup no longer uses the ->cpu_restore call to reinit SPRs,
        but saves and restores them itself.
      
      - The optimisation where EC=ESL=0 idle modes did not have to save GPRs
        or change MSR is restored, because it's now simple to do. ESL=1
        sleeps that do not lose GPRs can use this optimization too.
      
      - KVM secondary entry and cede is now more of a call/return style
        rather than branchy. nap_state_lost is not required because KVM
        always returns via NVGPR restoring path.
      
      - KVM secondary wakeup from offline sequence is moved entirely into
        the offline wakeup, which avoids a hwsync in the normal idle wakeup
        path.
      
      Performance, measured with context-switch ping-pong on different
      threads or cores, is possibly improved by a small amount, 1-3%
      depending on stop state and core vs thread test for shallow states.
      For deep states it's in the noise compared with other latencies.
      
      KVM improvements:
      
      - Idle sleepers now always return to caller rather than branch out
        to KVM first.
      
      - This allows optimisations like very fast return to caller when no
        state has been lost.
      
      - KVM no longer requires nap_state_lost because it controls NVGPR
        save/restore itself on the way in and out.
      
      - The heavy idle wakeup KVM request check can be moved out of the
        normal host idle code and into the not-performance-critical offline
        code.
      
      - KVM nap code now returns from where it is called, which makes the
        flow a bit easier to follow.
      Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      [mpe: Squash the KVM changes in]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      10d91611
  10. 21 Apr 2019, 1 commit
    • powerpc/32s: Implement Kernel Userspace Access Protection · a68c31fc
      Committed by Christophe Leroy
      This patch implements Kernel Userspace Access Protection for
      book3s/32.
      
      Due to limitations of the processor's page protection capabilities,
      the protection is only against writing. Read protection cannot be
      achieved using page protection.
      
      The previous patch modifies the page protection so that RW user
      pages are RW for Key 0 and RO for Key 1, and it sets Key 0 for
      both user and kernel.
      
      With this patch, userspace segment registers are set to Ku 0
      and Ks 1. When the kernel needs to write to RW pages, the associated
      segment register is changed to Ks 0 in order to allow write
      access for the kernel.
      
      In order to avoid having to read all segment registers when
      locking/unlocking the access, some data is kept in the thread_struct
      and saved on the stack on exceptions. The field identifies both the
      first unlocked segment and the first segment following the last
      unlocked one. When no segment is unlocked, it contains value 0.
      
      As the hash_page() function is not able to easily determine if a
      protfault is due to a bad kernel access to userspace, protfaults
      need to be handled by handle_page_fault when KUAP is set.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      [mpe: Drop allow_read/write_to/from_user() as they're now in kup.h,
            and adapt allow_user_access() to do nothing when to == NULL]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      a68c31fc
  11. 23 Feb 2019, 4 commits
  12. 21 Feb 2019, 1 commit
  13. 31 Oct 2018, 1 commit
  14. 14 Oct 2018, 2 commits
    • powerpc/64s/hash: Add a SLB preload cache · 5434ae74
      Committed by Nicholas Piggin
      When switching processes, currently all user SLBEs are cleared, and a
      few (exec_base, pc, and stack) are preloaded. In trivial testing with
      small apps, this tends to miss the heap and low 256MB segments, and it
      will also miss commonly accessed segments on large memory workloads.
      
      Add a simple round-robin preload cache that just inserts the last SLB
      miss into the head of the cache and preloads those at context switch
      time. Every 256 context switches, the oldest entry is removed from the
      cache to shrink the cache and require fewer slbmte if they are unused.
      
      Much more could go into this, including into the SLB entry reclaim
      side to track some LRU information etc, which would require a study of
      large memory workloads. But this is a simple thing we can do now that
      is an obvious win for common workloads.
      
      With the full series, process switching speed on the context_switch
        benchmark on POWER9/hash (with kernel speculation security measures
      disabled) increases from 140K/s to 178K/s (27%).
      
      POWER8 does not change much (within 1%), it's unclear why it does not
      see a big gain like POWER9.
      
      Booting to busybox init with 256MB segments has SLB misses go down
      from 945 to 69, and with 1T segments 900 to 21. These could almost all
      be eliminated by preloading a bit more carefully with ELF binary
      loading.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      5434ae74
    • powerpc/64: Interrupts save PPR on stack rather than thread_struct · 4c2de74c
      Committed by Nicholas Piggin
      PPR is the odd register out when it comes to interrupt handling, it is
      saved in current->thread.ppr while all others are saved on the stack.
      
      The difficulty with this is that accessing thread.ppr can cause an
      SLB fault, but the C implementation of the SLB fault handler had
      assumed that the normal exception entry handlers would not cause an
      SLB fault.
      
      Fix this by allocating room in the interrupt stack to save PPR.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      4c2de74c
  15. 03 Oct 2018, 1 commit
  16. 19 Sep 2018, 1 commit
    • powerpc/64s/hash: Add a SLB preload cache · 89ca4e12
      Committed by Nicholas Piggin
      When switching processes, currently all user SLBEs are cleared, and a
      few (exec_base, pc, and stack) are preloaded. In trivial testing with
      small apps, this tends to miss the heap and low 256MB segments, and it
      will also miss commonly accessed segments on large memory workloads.
      
      Add a simple round-robin preload cache that just inserts the last SLB
      miss into the head of the cache and preloads those at context switch
      time. Every 256 context switches, the oldest entry is removed from the
      cache to shrink the cache and require fewer slbmte if they are unused.
      
      Much more could go into this, including into the SLB entry reclaim
      side to track some LRU information etc, which would require a study of
      large memory workloads. But this is a simple thing we can do now that
      is an obvious win for common workloads.
      
      With the full series, process switching speed on the context_switch
        benchmark on POWER9/hash (with kernel speculation security measures
      disabled) increases from 140K/s to 178K/s (27%).
      
      POWER8 does not change much (within 1%), it's unclear why it does not
      see a big gain like POWER9.
      
      Booting to busybox init with 256MB segments has SLB misses go down
      from 945 to 69, and with 1T segments 900 to 21. These could almost all
      be eliminated by preloading a bit more carefully with ELF binary
      loading.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      89ca4e12
  17. 30 Jul 2018, 1 commit
  18. 03 Jun 2018, 2 commits
  19. 31 Mar 2018, 1 commit
  20. 30 Mar 2018, 2 commits
  21. 20 Jan 2018, 1 commit
  22. 12 Nov 2017, 2 commits
  23. 02 Jul 2017, 1 commit
  24. 29 Jun 2017, 1 commit
  25. 19 Jun 2017, 1 commit
  26. 08 Jun 2017, 1 commit
  27. 05 May 2017, 1 commit
    • powerpc/64e: Don't place the stack beyond TASK_SIZE · 61baf155
      Committed by Scott Wood
      Commit f4ea6dcb ("powerpc/mm: Enable mappings above 128TB") increased
      the task size on book3s, and introduced a mechanism to dynamically
      control whether a task uses these larger addresses.  While the change to
      the task size itself was ifdef-protected to only apply on book3s, the
      change to STACK_TOP_USER64 was not.  On book3e, this had the effect of
      trying to use addresses up to 128TiB for the stack despite a 64TiB task
      size limit -- which broke 64-bit userspace producing the following errors:
      
      Starting init: /sbin/init exists but couldn't execute it (error -14)
      Starting init: /bin/sh exists but couldn't execute it (error -14)
      Kernel panic - not syncing: No working init found.  Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst for guidance.
      
      Fixes: f4ea6dcb ("powerpc/mm: Enable mappings above 128TB")
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Scott Wood <oss@buserror.net>
      61baf155
  28. 01 Apr 2017, 1 commit
    • powerpc/mm: Enable mappings above 128TB · f4ea6dcb
      Committed by Aneesh Kumar K.V
      Not all user space applications are ready to handle wide addresses.
      It's known that at least some JIT compilers use higher bits in
      pointers to encode their information. This collides with valid
      pointers at 512TB addresses and leads to crashes.
      
      To mitigate this, we are not going to allocate virtual address space
      above 128TB by default.
      
      But userspace can ask for allocation from full address space by
      specifying hint address (with or without MAP_FIXED) above 128TB.
      
      If the hint address is set above 128TB but MAP_FIXED is not
      specified, we try to look for an unmapped area at the specified
      address. If it's already occupied, we look for an unmapped area in
      the *full* address space, rather than in the 128TB window.
      
      This approach makes it easy for an application's memory allocator to
      become aware of the large address space without manually tracking
      allocated virtual address space.
      
      This is going to be a per-mmap decision, ie. we can have some mmaps
      with larger addresses and others that do not.
      
      A sample memory layout looks like:
      
        10000000-10010000 r-xp 00000000 fc:00 9057045          /home/max_addr_512TB
        10010000-10020000 r--p 00000000 fc:00 9057045          /home/max_addr_512TB
        10020000-10030000 rw-p 00010000 fc:00 9057045          /home/max_addr_512TB
        10029630000-10029660000 rw-p 00000000 00:00 0          [heap]
        7fff834a0000-7fff834b0000 rw-p 00000000 00:00 0
        7fff834b0000-7fff83670000 r-xp 00000000 fc:00 9177190  /lib/powerpc64le-linux-gnu/libc-2.23.so
        7fff83670000-7fff83680000 r--p 001b0000 fc:00 9177190  /lib/powerpc64le-linux-gnu/libc-2.23.so
        7fff83680000-7fff83690000 rw-p 001c0000 fc:00 9177190  /lib/powerpc64le-linux-gnu/libc-2.23.so
        7fff83690000-7fff836a0000 rw-p 00000000 00:00 0
        7fff836a0000-7fff836c0000 r-xp 00000000 00:00 0        [vdso]
        7fff836c0000-7fff83700000 r-xp 00000000 fc:00 9177193  /lib/powerpc64le-linux-gnu/ld-2.23.so
        7fff83700000-7fff83710000 r--p 00030000 fc:00 9177193  /lib/powerpc64le-linux-gnu/ld-2.23.so
        7fff83710000-7fff83720000 rw-p 00040000 fc:00 9177193  /lib/powerpc64le-linux-gnu/ld-2.23.so
        7fffdccf0000-7fffdcd20000 rw-p 00000000 00:00 0        [stack]
        1000000000000-1000000010000 rw-p 00000000 00:00 0
        1ffff83710000-1ffff83720000 rw-p 00000000 00:00 0
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f4ea6dcb
  29. 31 Mar 2017, 1 commit
    • powerpc/mm/hash: Increase VA range to 128TB · f6eedbba
      Committed by Aneesh Kumar K.V
      We update the hash Linux page table layout such that we can support
      512TB, but we limit TASK_SIZE to 128TB. We can switch to 128TB by
      default unconditionally because that is the maximum virtual address
      supported by other architectures. We will later add a mechanism to
      increase the application's effective address range to 512TB on
      demand.
      
      Having the page table layout changed to accommodate 512TB makes
      testing large memory configurations easier, with fewer code changes
      to the kernel.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f6eedbba
  30. 31 Jan 2017, 1 commit
    • powernv: Pass PSSCR value and mask to power9_idle_stop · 09206b60
      Committed by Gautham R. Shenoy
      The power9_idle_stop method currently takes only the requested stop
      level as a parameter and picks up the rest of the PSSCR bits from a
      hand-coded macro. This is not a very flexible design, especially when
      the firmware has the capability to communicate the psscr value and the
      mask associated with a particular stop state via device tree.
      
      This patch modifies the power9_idle_stop API to take as parameters the
      PSSCR value and the PSSCR mask corresponding to the stop state that
      needs to be set. These PSSCR value and mask are respectively obtained
      by parsing the "ibm,cpu-idle-state-psscr" and
      "ibm,cpu-idle-state-psscr-mask" fields from the device tree.
      
      In addition to this, the patch adds support for handling stop states
      for which ESL and EC bits in the PSSCR are zero. As per the
      architecture, a wakeup from these stop states resumes execution from
      the subsequent instruction as opposed to waking up at the System
      Vector.
      
      The older firmware sets only the Requested Level (RL) field in the
      psscr and psscr-mask exposed in the device tree. For older firmware
      where psscr-mask=0xf, this patch will set sane default values for
      the remaining PSSCR fields (i.e. PSLL, MTL, ESL, EC, and TR). For
      the new firmware, the patch will validate that the invariants
      required by the ISA for the psscr values are maintained by the
      firmware.
      
      This skiboot patch that exports fully populated PSSCR values and the
      mask for all the stop states can be found here:
      https://lists.ozlabs.org/pipermail/skiboot/2016-September/004869.html
      
      [Optimize the number of instructions before entering STOP with
      ESL=EC=0, and validate that the PSSCR values provided by the
      firmware maintain the invariants required by the ISA, as suggested
      by Balbir Singh]
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      09206b60