1. 15 Jun 2019, 1 commit
    • C
      powerpc/64: mark start_here_multiplatform as __ref · 9c4e4c90
      Committed by Christophe Leroy
      Otherwise, the following warning is encountered:
      
      WARNING: vmlinux.o(.text+0x3dc6): Section mismatch in reference from the variable start_here_multiplatform to the function .init.text:.early_setup()
      The function start_here_multiplatform() references
      the function __init .early_setup().
      This is often because start_here_multiplatform lacks a __init
      annotation or the annotation of .early_setup is wrong.
      
      Fixes: 56c46bba ("powerpc/64: Fix booting large kernels with STRICT_KERNEL_RWX")
      Cc: Russell Currey <ruscur@russell.cc>
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      9c4e4c90
  2. 31 May 2019, 1 commit
  3. 20 Apr 2019, 1 commit
  4. 23 Feb 2019, 1 commit
    • M
      powerpc/64: Simplify __secondary_start paca->kstack handling · eafd825e
      Committed by Michael Ellerman
      In __secondary_start() we load the thread_info of the idle task of the
      secondary CPU from current_set[cpu], and then convert it into a stack
      pointer before storing that back to paca->kstack.
      
      As pointed out in commit f761622e ("powerpc: Initialise
      paca->kstack before early_setup_secondary") it's important that we
      initialise paca->kstack before calling the MMU setup code, in
      particular slb_initialize(), because it will bolt the SLB entry for
      the kstack into the SLB.
      
      However, we have already set up paca->kstack in cpu_idle_thread_init(),
      since commit 3b575064 ("[POWERPC] Bolt in SLB entry for kernel
      stack on secondary cpus") (May 2008).
      
      It's also in cpu_idle_thread_init() that we initialise current_set[cpu]
      with the thread_info pointer, so there is no issue of the timing being
      different between the two.
      
      Therefore the initialisation of paca->kstack in __secondary_start() is
      completely redundant, so remove it.
      
      This has the added benefit of removing code that runs in real mode,
      and is therefore restricted by the RMO, and so opens the way for us to
      enable THREAD_INFO_IN_TASK.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      eafd825e
  5. 30 Jul 2018, 1 commit
  6. 30 Mar 2018, 1 commit
  7. 19 Jan 2018, 2 commits
  8. 10 Nov 2017, 1 commit
  9. 31 Aug 2017, 1 commit
  10. 20 Mar 2017, 1 commit
  11. 30 Nov 2016, 1 commit
  12. 28 Nov 2016, 1 commit
    • N
      powerpc/64e: Convert cmpi to cmpwi in head_64.S · f87f253b
      Committed by Nicholas Piggin
      From 80f23935 ("powerpc: Convert cmp to cmpd in idle enter sequence"):
      
        PowerPC's "cmp" instruction has four operands. Normally people write
        "cmpw" or "cmpd" for the second cmp operand 0 or 1. But, frequently
        people forget, and write "cmp" with just three operands.
      
        With older binutils this is silently accepted as if this was "cmpw",
        while often "cmpd" is wanted. With newer binutils GAS will complain
        about this for 64-bit code. For 32-bit code it still silently assumes
        "cmpw" is what is meant.
      
      In this case, cmpwi is called for, so this is just a build fix for
      new toolchains.
      
      Cc: stable@vger.kernel.org # v3.0+
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f87f253b
  13. 14 Nov 2016, 1 commit
  14. 04 Oct 2016, 2 commits
    • N
      powerpc: Use gas sections for arranging exception vectors · 57f26649
      Committed by Nicholas Piggin
      Use assembler sections of fixed size and location to arrange the 64-bit
      Book3S exception vector code (64-bit Book3E also uses it in head_64.S
      for 0x0..0x100).
      
      This allows better flexibility in arranging exception code and hiding
      unimportant details behind macros.
      
      Gas sections can be a bit painful to use this way, mainly because the
      assembler does not know where they will be finally linked. Taking
      absolute addresses requires a bit of trickery for example, but it can
      be hidden behind macros for the most part.
      
      Generated code is mostly the same except locations, offsets, alignments.
      
      The "+ 0x2" is only required for the trap number / kvm exit number,
      which gets loaded as a constant into a register.
      
      Previously, code also used + 0x2 for label names, but we changed to
      using "H" to distinguish HV case for that. Remove the last vestiges
      of that.
      
      __after_prom_start is taking absolute address of a label in another
      fixed section. Newer toolchains seemed to compile this okay, but older
      ones do not. FIXED_SYMBOL_ABS_ADDR is more foolproof, it just takes an
      additional line to define.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      57f26649
    • N
      powerpc/64: Change the way relocation copy is calculated · 573819e3
      Committed by Nicholas Piggin
      With a subsequent patch to put text into different sections,
      (_end - _stext) can no longer be computed at link time to determine
      the end of the copy. Instead, calculate it at runtime with
      (copy_to_here - _stext) + (_end - copy_to_here).
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      573819e3
  15. 08 Aug 2016, 1 commit
  16. 21 Jul 2016, 1 commit
  17. 14 Jun 2016, 1 commit
    • M
      powerpc: Define and use PPC64_ELF_ABI_v2/v1 · f55d9665
      Committed by Michael Ellerman
      We're approaching 20 locations where we need to check for ELF ABI v2.
      That's fine, except the logic is a bit awkward, because we have to check
      that _CALL_ELF is defined and then what its value is.
      
      So check it once in asm/types.h and define PPC64_ELF_ABI_v2 when ELF ABI
      v2 is detected.
      
      We also have a few places where what we're really trying to check is
      that we are using the 64-bit v1 ABI, ie. function descriptors. So also
      add a #define for that, which simplifies several checks.
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f55d9665
  18. 11 May 2016, 1 commit
  19. 16 Mar 2016, 1 commit
  20. 05 Mar 2016, 2 commits
  21. 28 Oct 2015, 3 commits
  22. 30 Jul 2014, 1 commit
    • A
      powerpc/e6500: Add support for hardware threads · e16c8765
      Committed by Andy Fleming
      The general idea is that each core will release all of its
      threads into the secondary thread startup code, which will
      eventually wait in the secondary core holding area, for the
      appropriate bit in the PACA to be set. The kick_cpu function
      pointer will set that bit in the PACA, and thus "release"
      the core/thread to boot. We also need to do a few things that
      U-Boot normally does for CPUs (like enable branch prediction).
      Signed-off-by: Andy Fleming <afleming@freescale.com>
      [scottwood@freescale.com: various changes, including only enabling
       threads if Linux wants to kick them]
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      e16c8765
  23. 28 Jul 2014, 1 commit
  24. 23 Apr 2014, 5 commits
  25. 15 Jan 2014, 1 commit
  26. 30 Dec 2013, 2 commits
  27. 11 Oct 2013, 1 commit
  28. 14 Aug 2013, 1 commit
  29. 26 Apr 2013, 1 commit
    • M
      powerpc: Add isync to copy_and_flush · 29ce3c50
      Committed by Michael Neuling
      In __after_prom_start we copy the kernel down to zero in two calls to
      copy_and_flush.  After the first call (copy from 0 to copy_to_here:)
      we jump to the newly copied code soon after.
      
      Unfortunately there's no isync between the copy of this code and the
      jump to it.  Hence it's possible that stale instructions could still be
      in the icache or pipeline before we branch to it.
      
      We've seen this on real machines, and it results in no console output
      after:
        calling quiesce...
        returning from prom_init
      
      The below adds an isync to ensure that the copy and flushing has
      completed before any branching to the new instructions occurs.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      CC: <stable@vger.kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      29ce3c50
  30. 10 Jan 2013, 1 commit
    • J
      powerpc/kexec: Add kexec "hold" support for Book3e processors · 96f013fe
      Committed by Jimi Xenidis
      Motivation:
      IBM Blue Gene/Q comes with some very strange firmware that I'm trying to get out
      of using in the kernel.  So instead I spin all the threads in the boot wrapper
      (using the firmware) and have them enter the kexec stub, pre-translated at the
      virtual "linear" address, never touching firmware again.
      
      This strategy works wonderfully, but I need the following patch in the
      kexec stub. I believe it should not affect Book3S, and Book3E does not
      appear to be here yet, so I'd love to get any criticisms up front.
      
      This patch adds two items:
      
      1) Book3e requires that GPR4 survive the "hold" process, so we make
         sure that happens.
      2) Book3e has no real mode, and the hold code exploits this.  Since
         these processors are always translated, we arrange for the kexeced
         threads to enter the hold code using the normal kernel linear mapping.
      Signed-off-by: Jimi Xenidis <jimix@pobox.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      96f013fe