1. 19 Jan 2018 (2 commits)
  2. 18 Jan 2018 (1 commit)
    • powerpc/64s: Relax PACA address limitations · 1af19331
      Committed by Nicholas Piggin
      Book3S PACA memory allocation is restricted by the RMA limit and also
      must not take SLB faults when accessed in virtual mode. Currently a
      fixed 256MB limit is used for this, which is imprecise and sub-optimal.
      
      Update the paca allocation limits to use ppc64_rma_size for the RMA
      limit, and share the safe_stack_limit() that is currently used for stack
      allocations that must not take virtual mode faults.
      
      The safe_stack_limit() name is changed to ppc64_bolted_size() to match
      ppc64_rma_size and some comments are updated. We also need to use
      early_mmu_has_feature() because we are now calling this function prior
      to the jump label patching that enables mmu_has_feature().
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      [mpe: Change mmu_has_feature() to early_mmu_has_feature()]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      1af19331
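      A minimal C sketch of the scheme this message describes: the PACA
      limit becomes the smaller of the RMA size and the bolted segment
      size. SID_SHIFT/SID_SHIFT_1T and the min() call are assumptions
      based on the usual hash-MMU segment sizes; the real helper also
      handles Book3E and radix.

        /* Sketch only: the hash-MMU Book3S case */
        static __init u64 ppc64_bolted_size(void)
        {
                /* The first segment is bolted: 1T or 256M segments */
                if (early_mmu_has_feature(MMU_FTR_1T_SEGMENT))
                        return 1UL << SID_SHIFT_1T;
                return 1UL << SID_SHIFT;
        }

        /* The PACA must sit below both the RMA and the bolted region */
        u64 limit = min(ppc64_bolted_size(), ppc64_rma_size);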
  3. 11 Dec 2017 (1 commit)
  4. 10 Nov 2017 (2 commits)
  5. 31 Aug 2017 (2 commits)
  6. 13 Jul 2017 (2 commits)
  7. 23 Jun 2017 (1 commit)
    • powerpc/64: Initialise thread_info for emergency stacks · 34f19ff1
      Committed by Nicholas Piggin
      Emergency stacks have their thread_info mostly uninitialised, which in
      particular means garbage preempt_count values.
      
      Emergency stack code runs with interrupts disabled entirely, and is
      used very rarely, so this has gone unnoticed so far. It was found by a
      proposed new powerpc watchdog that takes a soft-NMI directly from the
      masked_interrupt handler and uses the emergency stack. That crashed
      at BUG_ON(in_nmi()) in nmi_enter(); the preempt_count()s were found
      to be garbage.
      
      To fix this, zero the entire THREAD_SIZE allocation, and initialize
      the thread_info.
      
      Cc: stable@vger.kernel.org
      Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      [mpe: Move it all into setup_64.c, use a function not a macro. Fix
            crashes on Cell by setting preempt_count to 0 not HARDIRQ_OFFSET]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      34f19ff1
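      As a rough illustration of the fix, assuming thread_info sits at the
      base of the stack allocation (as it did on powerpc at the time); the
      helper below is a sketch, not the exact patch:

        /*
         * Zero the whole THREAD_SIZE allocation so nothing, notably
         * preempt_count, starts as garbage, then set the fields that
         * interrupt entry relies on.
         */
        static void __init emerg_stack_init_thread_info(struct thread_info *ti,
                                                        int cpu)
        {
                memset(ti, 0, THREAD_SIZE);  /* ti sits at the stack base */
                ti->cpu = cpu;
                ti->preempt_count = 0;       /* 0, not HARDIRQ_OFFSET (Cell) */
        }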
  8. 06 Jun 2017 (1 commit)
    • powerpc/numa: Fix percpu allocations to be NUMA aware · ba4a648f
      Committed by Michael Ellerman
      In commit 8c272261 ("powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID"), we
      switched to the generic implementation of cpu_to_node(), which uses a percpu
      variable to hold the NUMA node for each CPU.
      
      Unfortunately we neglected to notice that we use cpu_to_node() in the allocation
      of our percpu areas, leading to a chicken and egg problem. In practice what
      happens is when we are setting up the percpu areas, cpu_to_node() reports that
      all CPUs are on node 0, so we allocate all percpu areas on node 0.
      
      This is visible in the dmesg output, with all pcpu allocs being in group 0:
      
        pcpu-alloc: [0] 00 01 02 03 [0] 04 05 06 07
        pcpu-alloc: [0] 08 09 10 11 [0] 12 13 14 15
        pcpu-alloc: [0] 16 17 18 19 [0] 20 21 22 23
        pcpu-alloc: [0] 24 25 26 27 [0] 28 29 30 31
        pcpu-alloc: [0] 32 33 34 35 [0] 36 37 38 39
        pcpu-alloc: [0] 40 41 42 43 [0] 44 45 46 47
      
      To fix it we need an early_cpu_to_node() which can run prior to percpu
      being set up. We already have the numa_cpu_lookup_table we can use, so
      just plumb it in. With the patch applied, dmesg shows two groups, 0 and 1:
      
        pcpu-alloc: [0] 00 01 02 03 [0] 04 05 06 07
        pcpu-alloc: [0] 08 09 10 11 [0] 12 13 14 15
        pcpu-alloc: [0] 16 17 18 19 [0] 20 21 22 23
        pcpu-alloc: [1] 24 25 26 27 [1] 28 29 30 31
        pcpu-alloc: [1] 32 33 34 35 [1] 36 37 38 39
        pcpu-alloc: [1] 40 41 42 43 [1] 44 45 46 47
      
      We can also check the data_offset in the paca of various CPUs; with
      the fix we see:
      
        CPU 0:  data_offset = 0x0ffe8b0000
        CPU 24: data_offset = 0x1ffe5b0000
      
      And we can see from dmesg that CPU 24 has an allocation on node 1:
      
        node   0: [mem 0x0000000000000000-0x0000000fffffffff]
        node   1: [mem 0x0000001000000000-0x0000001fffffffff]
      
      Cc: stable@vger.kernel.org # v3.16+
      Fixes: 8c272261 ("powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      ba4a648f
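      A sketch of such a helper, plumbing the existing numa_cpu_lookup_table
      in; the node-0 fallback for not-yet-filled entries is an assumption:

        static inline int early_cpu_to_node(int cpu)
        {
                /* Usable before percpu setup: reads the firmware-derived table */
                int nid = numa_cpu_lookup_table[get_hard_smp_processor_id(cpu)];

                /* Fall back to node 0 if the entry is still unset */
                return (nid < 0) ? 0 : nid;
        }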
  9. 09 May 2017 (1 commit)
  10. 28 Apr 2017 (1 commit)
    • powerpc/64s: Dedicated system reset interrupt stack · b1ee8a3d
      Committed by Nicholas Piggin
      The system reset interrupt is used for crash/debug situations, so it is
      desirable to have as little impact on the normal state of the system as
      possible.
      
      Currently it uses the current kernel stack to process the exception.
      That stores into a stack which may itself be involved in the crash,
      and the stack pointer may be corrupted or may have overflowed.
      
      Avoid or minimise these problems by creating a dedicated NMI stack for
      the system reset interrupt to use.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      b1ee8a3d
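      A minimal sketch of the idea, assuming a per-CPU paca field (called
      nmi_emergency_sp here) and the memblock allocator of that era; the
      allocation details are illustrative:

        for_each_possible_cpu(i) {
                /* A private stack the NMI path can always trust */
                void *sp = __va(memblock_alloc_base(THREAD_SIZE,
                                                    THREAD_SIZE, limit));
                /* Stacks grow down, so point at the top of the block */
                paca[i].nmi_emergency_sp = sp + THREAD_SIZE;
        }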
  11. 28 Mar 2017 (2 commits)
    • powerpc: Disable HFSCR[TM] if TM is not supported · 7ed23e1b
      Committed by Benjamin Herrenschmidt
      On Power8 & Power9 the early CPU initialisation in __init_HFSCR()
      turns on HFSCR[TM] (Hypervisor Facility Status and Control Register
      [Transactional Memory]), but that doesn't take into account that TM
      might be disabled by CPU features, or disabled by the kernel being built
      with CONFIG_PPC_TRANSACTIONAL_MEM=n.
      
      So later in boot, when we have set up the CPU features, clear HFSCR[TM]
      if the TM CPU feature has been disabled. We use CPU_FTR_TM_COMP to
      account for the CONFIG_PPC_TRANSACTIONAL_MEM=n case.
      
      Without this a KVM guest might try to use TM, even if told not to, and
      cause an oops in the host kernel. Typically the oops is seen in
      __kvmppc_vcore_entry() and may or may not be fatal to the host, but is
      always bad news.
      
      In practice all shipping CPU revisions do support TM, and all host
      kernels we are aware of build with TM support enabled, so no one should
      actually be able to hit this in the wild.
      
      Fixes: 2a3563b0 ("powerpc: Setup in HFSCR for POWER8")
      Cc: stable@vger.kernel.org # v3.10+
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Tested-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
      [mpe: Rewrite change log with input from Sam, add Fixes/stable]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      7ed23e1b
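      The shape of the fix, sketched; the HVMODE guard is an assumption
      (HFSCR exists only when running in hypervisor mode):

        /*
         * With features finalised, withdraw TM from guests when the TM
         * feature is off; CPU_FTR_TM_COMP also covers the
         * CONFIG_PPC_TRANSACTIONAL_MEM=n case, as the message notes.
         */
        if (cpu_has_feature(CPU_FTR_HVMODE) &&
            !cpu_has_feature(CPU_FTR_TM_COMP))
                mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);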
    • powerpc/64: Don't use early_cpu_has_feature() in cpu_ready_for_interrupts() · 5511a45f
      Committed by Michael Ellerman
      cpu_ready_for_interrupts() is called after feature patching, so there's
      no need to use early_cpu_has_feature().
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      5511a45f
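      Illustrative fragment only, with CPU_FTR_HVMODE standing in for the
      feature bit tested and setup_hv() as a hypothetical callee:

        /* Before: the early variant, safe prior to jump label init */
        if (early_cpu_has_feature(CPU_FTR_HVMODE))
                setup_hv();

        /* After: the static-key variant is fine here, because
         * cpu_ready_for_interrupts() runs after feature patching */
        if (cpu_has_feature(CPU_FTR_HVMODE))
                setup_hv();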
  12. 06 Mar 2017 (1 commit)
  13. 15 Feb 2017 (1 commit)
  14. 06 Feb 2017 (8 commits)
  15. 30 Nov 2016 (1 commit)
  16. 15 Nov 2016 (1 commit)
  17. 10 Aug 2016 (1 commit)
  18. 01 Aug 2016 (2 commits)
    • powerpc/mm: Convert early cpu/mmu feature check to use the new helpers · b8f1b4f8
      Committed by Aneesh Kumar K.V
      This switches early feature checks to use the non-static-key variants
      of the functions. In later patches we will switch cpu_has_feature()
      and mmu_has_feature() over to static keys, which can only be used
      after the static key / jump label machinery has been initialized. Any
      feature check before jump label init should use these new helpers.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      b8f1b4f8
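      A sketch of what these helpers look like: plain mask tests against
      cur_cpu_spec with no static key involved (field names assume the
      usual powerpc cpu_spec layout):

        static inline bool early_cpu_has_feature(unsigned long feature)
        {
                /* A direct mask test, valid before jump labels are patched */
                return !!(cur_cpu_spec->cpu_features & feature);
        }

        static inline bool early_mmu_has_feature(unsigned long feature)
        {
                return !!(cur_cpu_spec->mmu_features & feature);
        }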
    • powerpc/64: Do feature patching before MMU init · 9e8066f3
      Committed by Michael Ellerman
      Up until now we needed to do the MMU init before feature patching,
      because part of the MMU init was scanning the device tree and setting
      and/or clearing some MMU feature bits.
      
      Now that we have split that MMU feature modification out into routines
      called from early_init_devtree() (which runs earlier), we can do the
      feature patching before calling MMU init.
      
      The advantage of this is it means the remainder of the MMU init runs
      with the final set of features which will apply for the rest of the life
      of the system. This means we don't have to special case anything called
      from MMU init to deal with a changing set of feature bits.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      9e8066f3
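      Roughly the resulting boot ordering (function names as used in the
      message; the exact call site is an assumption):

        early_init_devtree(dt_ptr);   /* finalises CPU/MMU feature bits */
        apply_feature_fixups();       /* patch code for the final features */
        early_init_mmu();             /* MMU init sees stable features */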
  19. 21 Jul 2016 (9 commits)