1. 07 Aug, 2018 14 commits
  2. 30 Jul, 2018 8 commits
  3. 24 Jul, 2018 7 commits
    • powerpc/tm: Remove struct thread_info param from tm_reclaim_thread() · edd00b83
      Cyril Bur authored
      Since commit dc310669 ("powerpc: tm: Always use fp_state and
      vr_state to store live registers"), tm_reclaim_thread() no longer uses
      the parameter, yet both callers still have to bother obtaining it even
      though they have no other need for a struct thread_info.
      
      Just remove it and adjust the callers.
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
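      As a rough sketch of the resulting interface change (prototypes only,
      based on the description above; the surrounding code is elided):

        /* Before: callers had to look up a thread_info they did not
         * otherwise need. */
        static void tm_reclaim_thread(struct thread_struct *thr,
                                      struct thread_info *ti, uint8_t cause);

        /* After: the unused parameter is gone and callers pass only what
         * is actually used. */
        static void tm_reclaim_thread(struct thread_struct *thr, uint8_t cause);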
    • powerpc/tm: Update function prototype comment · a596a7e9
      Cyril Bur authored
      In commit eb5c3f1c ("powerpc: Always save/restore checkpointed regs
      during treclaim/trecheckpoint") __tm_recheckpoint was modified to no
      longer take the second parameter 'unsigned long orig_msr' as part of a
      TM rewrite to simplify the reclaiming/recheckpointing process.
      
      There is a comment in the asm file, where the function is declared,
      which still shows the old prototype with the 'orig_msr' parameter.
      
      This patch corrects the comment.
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
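      For illustration only (the exact wording in the asm file may differ),
      the comment fix amounts to:

        /* Stale comment above the asm entry point:
         *     void __tm_recheckpoint(struct thread_struct *thread,
         *                            unsigned long orig_msr)
         * corrected to match the current single-parameter prototype:
         *     void __tm_recheckpoint(struct thread_struct *thread)
         */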
    • powerpc/mm: Check memblock_add against MAX_PHYSMEM_BITS range · 6aba0c84
      Aneesh Kumar K.V authored
      With the SPARSEMEM config enabled, we make sure that we don't add
      sections beyond the MAX_PHYSMEM_BITS range, which means no vmemmap
      mapping is built for memory beyond that limit. But our memblock layer
      walks the device tree and creates mappings for the full memory range.
      Prevent this by checking against MAX_PHYSMEM_BITS when doing
      memblock_add.
      
      We don't do a similar check for memblock reserve ranges. If a reserved
      range is beyond MAX_PHYSMEM_BITS we expect it to be marked with 'nomap'.
      Any other reserved range should come from existing memblock ranges which
      we already filtered while adding.
      
      This avoids a crash like the one below when running on a system whose
      RAM is configured above MAX_PHYSMEM_BITS:
      
       Unable to handle kernel paging request for data at address 0xc00a001000000440
       Faulting instruction address: 0xc000000001034118
       cpu 0x0: Vector: 300 (Data Access) at [c00000000124fb30]
           pc: c000000001034118: __free_pages_bootmem+0xc0/0x1c0
           lr: c00000000103b258: free_all_bootmem+0x19c/0x22c
           sp: c00000000124fdb0
          msr: 9000000002001033
          dar: c00a001000000440
        dsisr: 40000000
         current = 0xc00000000120dd00
        paca    = 0xc000000001f60000   irqmask: 0x03   irq_happened: 0x01
           pid   = 0, comm = swapper
       [c00000000124fe20] c00000000103b258 free_all_bootmem+0x19c/0x22c
       [c00000000124fee0] c000000001010a68 mem_init+0x3c/0x5c
       [c00000000124ff00] c00000000100401c start_kernel+0x298/0x5e4
       [c00000000124ff90] c00000000000b57c start_here_common+0x1c/0x520
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
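      A minimal sketch of the kind of check described above, written as a
      hypothetical helper (the name and exact call site are assumptions, not
      copied from the patch):

        #include <linux/init.h>
        #include <linux/memblock.h>

        /* Clamp a device tree memory range so it never extends past what
         * SPARSEMEM/vmemmap can map.  MAX_PHYSMEM_BITS comes from the arch
         * sparsemem headers. */
        static int __init add_ram_range_clamped(u64 base, u64 size)
        {
                u64 max_mem = 1UL << MAX_PHYSMEM_BITS;

                /* Range starts entirely above the supported limit: skip it. */
                if (base >= max_mem)
                        return 0;

                /* Range crosses the limit: trim it rather than creating
                 * sections that will never get a vmemmap mapping. */
                if (base + size > max_mem)
                        size = max_mem - base;

                return memblock_add(base, size);
        }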
    • powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2 · 6d44acae
      Michael Ellerman authored
      When I added the spectre_v2 information in sysfs, I included the
      availability of the ori31 speculation barrier.
      
      Although the ori31 barrier can be used to mitigate v2, it's primarily
      intended as a spectre v1 mitigation. Spectre v2 is mitigated by
      hardware changes.
      
      So rework the sysfs files to show the ori31 information in the
      spectre_v1 file, rather than v2.
      
      Currently we display eg:
      
        $ grep . spectre_v*
        spectre_v1:Mitigation: __user pointer sanitization
        spectre_v2:Mitigation: Indirect branch cache disabled, ori31 speculation barrier enabled
      
      After:
      
        $ grep . spectre_v*
        spectre_v1:Mitigation: __user pointer sanitization, ori31 speculation barrier enabled
        spectre_v2:Mitigation: Indirect branch cache disabled
      
      Fixes: d6fbe1c5 ("powerpc/64s: Wire up cpu_show_spectre_v2()")
      Cc: stable@vger.kernel.org # v4.17+
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
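      A hedged sketch of the reworked spectre_v1 reporting; it follows the
      powerpc security-feature convention (security_ftr_enabled(),
      SEC_FTR_SPEC_BAR_ORI31) but is a simplified illustration, not the
      exact hunk:

        #include <linux/device.h>
        #include <linux/seq_buf.h>
        #include <asm/security_features.h>

        /* Simplified: always report __user pointer sanitization, and show
         * the ori31 barrier status here (spectre_v1) rather than under
         * spectre_v2. */
        ssize_t cpu_show_spectre_v1(struct device *dev,
                                    struct device_attribute *attr, char *buf)
        {
                struct seq_buf s;

                seq_buf_init(&s, buf, PAGE_SIZE);
                seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
                if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
                        seq_buf_printf(&s, ", ori31 speculation barrier enabled");
                seq_buf_printf(&s, "\n");

                return s.len;
        }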
    • powerpc: NMI IPI make NMI IPIs fully synchronous · 5b73151f
      Nicholas Piggin authored
      There is an asynchronous aspect to smp_send_nmi_ipi. The caller waits
      for all CPUs to call in to the handler, but it does not wait for
      completion of the handler. This is a needless complication, so remove
      it and always wait synchronously.
      
      The synchronous wait allows the caller to easily time out and clear
      the wait for completion (zero nmi_ipi_busy_count) in the case of badly
      behaved handlers. This would have prevented the recent smp_send_stop
      NMI IPI bug from causing the system to hang.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
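      The synchronous-wait pattern being described, sketched with assumed
      names (the real code keeps nmi_ipi_busy_count under a lock rather than
      in a bare atomic):

        #include <linux/atomic.h>
        #include <linux/delay.h>
        #include <linux/errno.h>
        #include <linux/types.h>

        /* Count of CPUs still inside the NMI handler (name is an assumption). */
        static atomic_t nmi_ipi_busy_count;

        /* Wait until every target CPU has finished the handler, not merely
         * entered it; on timeout, clear the wait so the caller can recover
         * from a badly behaved handler instead of hanging. */
        static int nmi_ipi_wait_for_completion(u64 timeout_us)
        {
                while (atomic_read(&nmi_ipi_busy_count) && timeout_us--)
                        udelay(1);

                if (atomic_read(&nmi_ipi_busy_count)) {
                        atomic_set(&nmi_ipi_busy_count, 0);
                        return -ETIMEDOUT;
                }
                return 0;
        }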
    • powerpc/64s: make PACA_IRQ_HARD_DIS track MSR[EE] closely · 9b81c021
      Nicholas Piggin authored
      When the masked interrupt handler clears MSR[EE] for an interrupt in
      the PACA_IRQ_MUST_HARD_MASK set, it does not also set
      PACA_IRQ_HARD_DIS, so the two get out of sync.
      
      With that fixed, it's only low level irq manipulation (and interrupt
      entry before reconcile) where they can be out of sync. This makes the
      code less surprising.
      
      It also allows the IRQ replay code to rely on the IRQ_HARD_DIS value
      and not have to mtmsrd again in this case (e.g., for an external
      interrupt that has been masked). The bigger benefit might just be
      that there is not such an element of surprise in these two bits of
      state.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
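      The invariant, sketched in C (the actual change is partly in the asm
      masked-interrupt handler; the helper name here is an assumption):

        #include <asm/hw_irq.h>
        #include <asm/paca.h>

        /* Whenever MSR[EE] is cleared, also record the fact in the paca so
         * the soft-mask bookkeeping never disagrees with the hardware state. */
        static inline void hard_disable_and_note(void)
        {
                __hard_irq_disable();                           /* clears MSR[EE]    */
                local_paca->irq_happened |= PACA_IRQ_HARD_DIS;  /* ...and records it */
        }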
    • powerpc/pkeys: Save the pkey registers before fork · c76662e8
      Ram Pai authored
      When a thread forks, the contents of the AMR, IAMR and UAMOR registers
      are not inherited by the newly forked thread.
      
      Save the registers before forking so that their contents are
      automatically copied into the new thread.
      
      Fixes: cf43d3b2 ("powerpc: Enable pkey subsystem")
      Cc: stable@vger.kernel.org # v4.16+
      Signed-off-by: Ram Pai <linuxram@us.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
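      A sketch of the snapshot step, assuming a hypothetical helper name (the
      thread_struct fields and SPRs are the ones used by the pkey subsystem):

        #include <asm/processor.h>
        #include <asm/reg.h>

        /* Capture the live pkey SPRs into the parent's thread_struct just
         * before it is duplicated, so the child inherits the same values. */
        static void pkey_regs_snapshot(struct thread_struct *t)
        {
                t->amr   = mfspr(SPRN_AMR);
                t->iamr  = mfspr(SPRN_IAMR);
                t->uamor = mfspr(SPRN_UAMOR);
        }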
  4. 20 Jul, 2018 1 commit
  5. 19 Jul, 2018 1 commit
  6. 16 Jul, 2018 1 commit
  7. 12 Jul, 2018 2 commits
    • powerpc/64s: Report SLB multi-hit rather than parity error · 54dbcfc2
      Michael Ellerman authored
      When we take an SLB multi-hit on bare metal, we see both the multi-hit
      and parity error bits set in DSISR. The user manual indicates this is
      expected to always happen on Power8, whereas for Power9 it says a
      multi-hit will "usually" also cause a parity error.
      
      We decide what to do based on the various error tables in mce_power.c,
      and because we process them in order and only report the first, we
      currently always report a parity error but not the multi-hit, eg:
      
        Severe Machine check interrupt [Recovered]
          Initiator: CPU
          Error type: SLB [Parity]
            Effective address: c000000ffffd4300
      
      Although this is correct, it leaves the user wondering why they got a
      parity error. It would be clearer instead if we reported the
      multi-hit because that is more likely to be simply a software bug,
      whereas a true parity error is possibly an indication of a bad core.
      
      We can do that simply by reordering the error tables so that multi-hit
      appears before parity. That doesn't affect the error recovery at all,
      because we flush the SLB either way.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
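      A schematic of the ordering change (structure and bit values are
      placeholders, not the real mce_power.c tables):

        /* The tables are scanned in order and only the first match is
         * reported, so listing multi-hit before parity changes what is
         * reported without changing recovery (the SLB is flushed either way). */
        struct slb_mce_entry {
                unsigned long dsisr_bit;   /* placeholder bit values       */
                const char *error_type;    /* what ends up in the report   */
        };

        static const struct slb_mce_entry slb_error_table[] = {
                { 0x2, "SLB Multihit" },   /* now checked first            */
                { 0x1, "SLB Parity"   },   /* previously matched first     */
        };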
    • powerpc: Remove Power8 DD1 from cputable · e11b64b1
      Joel Stanley authored
      This was added to support an early version of Power8 that did not have
      working doorbells. These machines were not publicly available, and all of
      the internal users have long since upgraded.
      Signed-off-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  8. 04 Jul, 2018 1 commit
  9. 02 Jul, 2018 2 commits
  10. 25 Jun, 2018 1 commit
  11. 23 Jun, 2018 1 commit
    • rseq: Avoid infinite recursion when delivering SIGSEGV · 784e0300
      Will Deacon authored
      When delivering a signal to a task that is using rseq, we call into
      __rseq_handle_notify_resume() so that the registers pushed in the
      sigframe are updated to reflect the state of the restartable sequence
      (for example, ensuring that the signal returns to the abort handler if
      necessary).
      
      However, if the rseq management fails due to an unrecoverable fault when
      accessing userspace or certain combinations of RSEQ_CS_* flags, then we
      will attempt to deliver a SIGSEGV. This has the potential for infinite
      recursion if the rseq code continuously fails on signal delivery.
      
      Avoid this problem by using force_sigsegv() instead of force_sig(), which
      is explicitly designed to reset the SEGV handler to SIG_DFL in the case
      of a recursive fault. In doing so, remove rseq_signal_deliver() from the
      internal rseq API and instead give rseq_handle_notify_resume() an
      optional struct ksignal * parameter.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: peterz@infradead.org
      Cc: paulmck@linux.vnet.ibm.com
      Cc: boqun.feng@gmail.com
      Link: https://lkml.kernel.org/r/1529664307-983-1-git-send-email-will.deacon@arm.com
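      A simplified sketch of the error path described above (signatures are
      abbreviated and version-dependent; rseq_ip_fixup() stands in for the
      internal fixup step):

        #include <linux/ptrace.h>
        #include <linux/sched/signal.h>

        /* If fixing up the rseq area fails during signal delivery, raise
         * SIGSEGV via force_sigsegv() so the handler is reset to SIG_DFL and
         * a broken rseq area cannot cause endless re-delivery.  The two
         * argument form of force_sigsegv() is the one used around v4.18;
         * later kernels drop the task argument. */
        void __rseq_handle_notify_resume(struct ksignal *ksig, struct pt_regs *regs)
        {
                if (!current->rseq)
                        return;
                if (rseq_ip_fixup(regs))        /* assumed: non-zero on failure */
                        force_sigsegv(ksig ? ksig->sig : 0, current);
        }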
  12. 19 Jun, 2018 1 commit