1. 20 Apr 2019, 1 commit
  2. 23 Feb 2019, 1 commit
    • powerpc/64: Simplify __secondary_start paca->kstack handling · eafd825e
      Michael Ellerman authored
      In __secondary_start() we load the thread_info of the idle task of the
      secondary CPU from current_set[cpu], and then convert it into a stack
      pointer before storing that back to paca->kstack.
      
      As pointed out in commit f761622e ("powerpc: Initialise
      paca->kstack before early_setup_secondary") it's important that we
      initialise paca->kstack before calling the MMU setup code, in
      particular slb_initialize(), because it will bolt the SLB entry for
      the kstack into the SLB.
      
      However we have already setup paca->kstack in cpu_idle_thread_init(),
      since commit 3b575064 ("[POWERPC] Bolt in SLB entry for kernel
      stack on secondary cpus") (May 2008).
      
      It's also in cpu_idle_thread_init() that we initialise current_set[cpu]
      with the thread_info pointer, so there is no issue of the timing being
      different between the two.
      
      Therefore the initialisation of paca->kstack in __secondary_start() is
      completely redundant, so remove it.
      
      This has the added benefit of removing code that runs in real mode,
      and is therefore restricted by the RMO, and so opens the way for us to
      enable THREAD_INFO_IN_TASK.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  3. 30 Jul 2018, 1 commit
  4. 30 Mar 2018, 1 commit
  5. 19 Jan 2018, 2 commits
  6. 10 Nov 2017, 1 commit
  7. 31 Aug 2017, 1 commit
  8. 20 Mar 2017, 1 commit
  9. 30 Nov 2016, 1 commit
  10. 28 Nov 2016, 1 commit
    • powerpc/64e: Convert cmpi to cmpwi in head_64.S · f87f253b
      Nicholas Piggin authored
      From 80f23935 ("powerpc: Convert cmp to cmpd in idle enter sequence"):
      
        PowerPC's "cmp" instruction has four operands. Normally people write
        "cmpw" or "cmpd" for the second cmp operand 0 or 1. But, frequently
        people forget, and write "cmp" with just three operands.
      
        With older binutils this is silently accepted as if this was "cmpw",
        while often "cmpd" is wanted. With newer binutils GAS will complain
        about this for 64-bit code. For 32-bit code it still silently assumes
        "cmpw" is what is meant.
      
      In this case, cmpwi is called for, so this is just a build fix for
      new toolchains.
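      A sketch of the forms involved (cr0/r3/r4 are illustrative registers,
      not the ones in head_64.S):

      ```asm
      # Full form: cmp crfD, L, rA, rB -- the L field selects operand width
      cmp   cr0, 0, r3, r4   # L=0: 32-bit compare, equivalent to "cmpw cr0,r3,r4"
      cmp   cr0, 1, r3, r4   # L=1: 64-bit compare, equivalent to "cmpd cr0,r3,r4"
      cmp   cr0, r3, r4      # three operands: old gas silently assumes cmpw,
                             # newer gas rejects this in 64-bit code
      cmpwi cr0, r3, 0       # immediate 32-bit compare, the form this fix uses
      ```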
      
      Cc: stable@vger.kernel.org # v3.0+
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  11. 14 Nov 2016, 1 commit
  12. 04 Oct 2016, 2 commits
    • powerpc: Use gas sections for arranging exception vectors · 57f26649
      Nicholas Piggin authored
      Use assembler sections of fixed size and location to arrange the 64-bit
      Book3S exception vector code (64-bit Book3E also uses it in head_64.S
      for 0x0..0x100).
      
      This allows better flexibility in arranging exception code and hiding
      unimportant details behind macros.
      
      Gas sections can be a bit painful to use this way, mainly because the
      assembler does not know where they will be finally linked. Taking
      absolute addresses requires a bit of trickery for example, but it can
      be hidden behind macros for the most part.
      
      Generated code is mostly the same except locations, offsets, alignments.
      
      The "+ 0x2" is only required for the trap number / kvm exit number,
      which gets loaded as a constant into a register.
      
      Previously, code also used + 0x2 for label names, but we changed to
      using "H" to distinguish HV case for that. Remove the last vestiges
      of that.
      
      __after_prom_start is taking absolute address of a label in another
      fixed section. Newer toolchains seemed to compile this okay, but older
      ones do not. FIXED_SYMBOL_ABS_ADDR is more foolproof, it just takes an
      additional line to define.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64: Change the way relocation copy is calculated · 573819e3
      Nicholas Piggin authored
      With a subsequent patch to put text into different sections,
      (_end - _stext) can no longer be computed at link time to determine
      the end of the copy. Instead, calculate it at runtime with
      (copy_to_here - _stext) + (_end - copy_to_here).
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  13. 08 Aug 2016, 1 commit
  14. 21 Jul 2016, 1 commit
  15. 14 Jun 2016, 1 commit
    • powerpc: Define and use PPC64_ELF_ABI_v2/v1 · f55d9665
      Michael Ellerman authored
      We're approaching 20 locations where we need to check for ELF ABI v2.
      That's fine, except the logic is a bit awkward, because we have to check
      that _CALL_ELF is defined and then what its value is.
      
      So check it once in asm/types.h and define PPC64_ELF_ABI_v2 when ELF ABI
      v2 is detected.
      
      We also have a few places where what we're really trying to check is
      that we are using the 64-bit v1 ABI, ie. function descriptors. So also
      add a #define for that, which simplifies several checks.
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  16. 11 May 2016, 1 commit
  17. 16 Mar 2016, 1 commit
  18. 05 Mar 2016, 2 commits
  19. 28 Oct 2015, 3 commits
  20. 30 Jul 2014, 1 commit
    • powerpc/e6500: Add support for hardware threads · e16c8765
      Andy Fleming authored
      The general idea is that each core will release all of its
      threads into the secondary thread startup code, which will
      eventually wait in the secondary core holding area, for the
      appropriate bit in the PACA to be set. The kick_cpu function
      pointer will set that bit in the PACA, and thus "release"
      the core/thread to boot. We also need to do a few things that
      U-Boot normally does for CPUs (like enable branch prediction).
      Signed-off-by: Andy Fleming <afleming@freescale.com>
      [scottwood@freescale.com: various changes, including only enabling
       threads if Linux wants to kick them]
      Signed-off-by: Scott Wood <scottwood@freescale.com>
  21. 28 Jul 2014, 1 commit
  22. 23 Apr 2014, 5 commits
  23. 15 Jan 2014, 1 commit
  24. 30 Dec 2013, 2 commits
  25. 11 Oct 2013, 1 commit
  26. 14 Aug 2013, 1 commit
  27. 26 Apr 2013, 1 commit
    • powerpc: Add isync to copy_and_flush · 29ce3c50
      Michael Neuling authored
      In __after_prom_start we copy the kernel down to zero in two calls to
      copy_and_flush.  After the first call (copy from 0 to copy_to_here:)
      we jump to the newly copied code soon after.
      
      Unfortunately there's no isync between the copy of this code and the
      jump to it.  Hence it's possible that stale instructions could still be
      in the icache or pipeline before we branch to it.
      
      We've seen this on real machines and it results in no console output
      after:
        calling quiesce...
        returning from prom_init
      
      The change below adds an isync to ensure that the copy and flush have
      completed before any branching to the new instructions occurs.
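      The sequence can be sketched as follows (labels and registers are
      illustrative, not the exact head_64.S code):

      ```asm
      bl    copy_and_flush   # copy the first chunk of kernel text down to 0
      isync                  # discard prefetched/stale instructions before
                             # executing the freshly copied code
      mtctr r8               # r8 holds the entry point in the new copy
      bctr                   # branch to the copied code only after the isync
      ```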
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      CC: <stable@vger.kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  28. 10 Jan 2013, 2 commits
    • powerpc/kexec: Add kexec "hold" support for Book3e processors · 96f013fe
      Jimi Xenidis authored
      Motivation:
      IBM Blue Gene/Q comes with some very strange firmware that I'm trying to get out
      of using in the kernel.  So instead I spin all the threads in the boot wrapper
      (using the firmware) and have them enter the kexec stub, pre-translated at the
      virtual "linear" address, never touching firmware again.
      
      This strategy works wonderfully, but I need the following patch in the
      kexec stub. I believe it should not affect Book3S, and Book3E does not appear
      to be here yet, so I'd love to get any criticisms up front.
      
      This patch adds two items:
      
      1) Book3e requires that GPR4 survive the "hold" process, so we make
         sure that happens.
      2) Book3e has no real mode, and the hold code exploits this.  Since
         these processors are always translated, we arrange for the kexeced
         threads to enter the hold code using the normal kernel linear mapping.
      Signed-off-by: Jimi Xenidis <jimix@pobox.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Build kernel with -mcmodel=medium · 1fbe9cf2
      Anton Blanchard authored
      Finally remove the two level TOC and build with -mcmodel=medium.
      
      Unfortunately we can't build modules with -mcmodel=medium due to
      the tricks the kernel module loader plays with percpu data:
      
      # -mcmodel=medium breaks modules because it uses 32bit offsets from
      # the TOC pointer to create pointers where possible. Pointers into the
      # percpu data area are created by this method.
      #
      # The kernel module loader relocates the percpu data section from the
      # original location (starting with 0xd...) to somewhere in the base
      # kernel percpu data space (starting with 0xc...). We need a full
      # 64bit relocation for this to work, hence -mcmodel=large.
      
      On older kernels we fall back to the two level TOC (-mminimal-toc)
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  29. 15 Nov 2012, 1 commit
    • powerpc: Add relocation on exception vector handlers · c1fb6816
      Michael Neuling authored
      POWER8/v2.07 allows exceptions to be taken with the MMU still on.
      
      A new set of exception vectors is added at 0xc000_0000_0000_4xxx.  When the HW
      takes us here, MSR IR/DR will be set already and we no longer need a costly
      RFID to turn the MMU back on again.
      
      The original 0x0 based exception vectors remain for when the HW can't leave the
      MMU on.  Examples of this are when we can't trust the current MMU mappings,
      like when we are changing from guest to hypervisor (HV 0 -> 1) or when the MMU
      was off already.  In these cases the HW will take us to the original 0x0 based
      exception vectors with the MMU off as before.
      
      This uses the new macros added previously to implement these new exception
      vectors at 0xc000_0000_0000_4xxx.  We exit these exception vectors using
      mflr/blr (rather than mtspr SRR0/RFID), since we don't need the costly MMU
      switch anymore.
      
      This moves the __end_interrupts marker down past these new 0x4000 vectors since
      they will need to be copied down to 0x0 when the kernel is not at 0x0.
      Signed-off-by: Matt Evans <matt@ozlabs.org>
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>