1. 02 March 2017, 3 commits
  2. 03 January 2017, 9 commits
    • MIPS: Move register dump routines out of ptrace code · 08c941bf
      Committed by Marcin Nowakowski
      Current register dump methods for MIPS are implemented inside ptrace
      methods, but there will be other uses in the kernel for them, so keep
      them separately in process.c and use those definitions for ptrace
      instead.
      Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/14587/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      08c941bf
    • MIPS: SMP: Use a completion event to signal CPU up · a00eeede
      Committed by Matt Redfearn
      If a secondary CPU failed to start, for any reason, the CPU requesting
      the secondary to start would get stuck in the loop waiting for the
      secondary to be present in the cpu_callin_map.
      
      Rather than that, use a completion event to signal that the secondary
      CPU has started and is waiting to synchronise counters.
      
      Since the CPU presence will no longer be marked in cpu_callin_map,
      remove the redundant test from arch_cpu_idle_dead().
      Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: Maciej W. Rozycki <macro@imgtec.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Qais Yousef <qsyousef@gmail.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/14502/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      a00eeede
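      The completion-based handshake described in the commit above maps onto the
      standard <linux/completion.h> API. The following is a minimal, hedged
      sketch of the pattern only; the function names are illustrative and not
      the actual arch/mips/kernel/smp.c symbols.

          #include <linux/completion.h>
          #include <linux/errno.h>
          #include <linux/jiffies.h>
          #include <linux/printk.h>

          static DECLARE_COMPLETION(cpu_running);

          /* Boot CPU: sleep until the secondary reports in, instead of
           * spinning on a cpu_callin_map bit. */
          static int wait_for_secondary(unsigned int cpu)
          {
              if (!wait_for_completion_timeout(&cpu_running,
                                               msecs_to_jiffies(1000))) {
                  pr_crit("CPU%u: failed to start\n", cpu);
                  return -EIO;
              }
              return 0;
          }

          /* Secondary CPU: signal that it is up and ready to sync counters. */
          static void signal_cpu_running(void)
          {
              complete(&cpu_running);
          }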
    • MIPS: Handle microMIPS jumps in the same way as MIPS32/MIPS64 jumps · 096a0de4
      Committed by Paul Burton
      is_jump_ins() checks for plain jump ("j") instructions since commit
      e7438c4b ("MIPS: Fix sibling call handling in get_frame_info") but
      that commit didn't make the same change to the microMIPS code, leaving
      it inconsistent with the MIPS32/MIPS64 code. Handle the microMIPS
      encoding of the jump instruction too such that it behaves consistently.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: e7438c4b ("MIPS: Fix sibling call handling in get_frame_info")
      Cc: Tony Wu <tung7970@gmail.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # v3.10+
      Patchwork: https://patchwork.linux-mips.org/patch/14533/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      096a0de4
    • MIPS: Calculate microMIPS ra properly when unwinding the stack · bb9bc468
      Committed by Paul Burton
      get_frame_info() calculates the offset of the return address within a
      stack frame simply by dividing the bottom 16 bits of the instruction,
      treated as a signed integer, by the size of a long. Whilst this works
      for MIPS32 & MIPS64 ISAs where the sw or sd instructions are used, it's
      incorrect for microMIPS where encodings differ. The result is that we
      typically completely fail to unwind the stack on microMIPS.
      
      Fix this by adjusting is_ra_save_ins() to calculate the return address
      offset, and take into account the various different encodings there in
      the same place as we consider whether an instruction is storing the
      ra/$31 register.
      
      With this we are now able to unwind the stack for kernels targeting the
      microMIPS ISA, for example we can produce:
      
          Call Trace:
          [<80109e1f>] show_stack+0x63/0x7c
          [<8011ea17>] __warn+0x9b/0xac
          [<8011ea45>] warn_slowpath_fmt+0x1d/0x20
          [<8013fe53>] register_console+0x43/0x314
          [<8067c58d>] of_setup_earlycon+0x1dd/0x1ec
          [<8067f63f>] early_init_dt_scan_chosen_stdout+0xe7/0xf8
          [<8066c115>] do_early_param+0x75/0xac
          [<801302f9>] parse_args+0x1dd/0x308
          [<8066c459>] parse_early_options+0x25/0x28
          [<8066c48b>] parse_early_param+0x2f/0x38
          [<8066e8cf>] setup_arch+0x113/0x488
          [<8066c4f3>] start_kernel+0x57/0x328
          ---[ end trace 0000000000000000 ]---
      
      Whereas previously we only produced:
      
          Call Trace:
          [<80109e1f>] show_stack+0x63/0x7c
          ---[ end trace 0000000000000000 ]---
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: 34c2f668 ("MIPS: microMIPS: Add unaligned access support.")
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # v3.10+
      Patchwork: https://patchwork.linux-mips.org/patch/14532/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      bb9bc468
    • MIPS: Fix is_jump_ins() handling of 16b microMIPS instructions · 67c75057
      Committed by Paul Burton
      is_jump_ins() checks 16b instruction fields without verifying that the
      instruction is indeed 16b, as is done by is_ra_save_ins() &
      is_sp_move_ins(). Add the appropriate check.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: 34c2f668 ("MIPS: microMIPS: Add unaligned access support.")
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # v3.10+
      Patchwork: https://patchwork.linux-mips.org/patch/14531/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      67c75057
    • MIPS: Fix get_frame_info() handling of microMIPS function size · b6c7a324
      Committed by Paul Burton
      get_frame_info() is meant to iterate over up to the first 128
      instructions within a function, but for microMIPS kernels it will not
      reach that many instructions unless the function is 512 bytes long, since
      we calculate the maximum number of instructions to check by dividing the
      function length by the 4 byte size of a union mips_instruction. In
      microMIPS kernels this won't do since instructions are variable length.
      
      Fix this by instead checking whether the pointer to the current
      instruction has reached the end of the function, and use max_insns as a
      simple constant to check the number of iterations against.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: 34c2f668 ("MIPS: microMIPS: Add unaligned access support.")
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # v3.10+
      Patchwork: https://patchwork.linux-mips.org/patch/14530/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      b6c7a324
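      A hedged sketch of the loop shape the fix above describes, written as
      self-contained C with a hypothetical insn_length() helper standing in for
      real microMIPS instruction decoding:

          #include <stddef.h>

          /* Hypothetical stand-in; a real version would decode the opcode. */
          static size_t insn_length(const unsigned char *ip)
          {
              return (ip[0] & 0x1) ? 2 : 4;   /* placeholder rule, not real decoding */
          }

          static void scan_function(const unsigned char *start, size_t func_size)
          {
              enum { MAX_INSNS_SCANNED = 128 };
              const unsigned char *ip = start;
              const unsigned char *end = start + func_size;
              unsigned int i;

              /* Stop at whichever comes first: the instruction cap or the end
               * of the function, rather than at func_size / 4 "instructions". */
              for (i = 0; i < MAX_INSNS_SCANNED && ip < end; i++)
                  ip += insn_length(ip);
          }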
    • MIPS: Prevent unaligned accesses during stack unwinding · a3552dac
      Committed by Paul Burton
      During stack unwinding we call a number of functions to determine what
      type of instruction we're looking at. The union mips_instruction pointer
      provided to them may be pointing at a 2 byte, but not 4 byte, aligned
      address & we thus cannot directly access the 4 byte wide members of the
      union mips_instruction. To avoid this is_ra_save_ins() copies the
      required half-words of the microMIPS instruction to a correctly aligned
      union mips_instruction on the stack, which it can then access safely.
      The is_jump_ins() & is_sp_move_ins() functions do not correctly perform
      this temporary copy, and instead attempt to directly dereference 4 byte
      fields which may be misaligned and lead to an address exception.
      
      Fix this by copying the instruction halfwords to a temporary union
      mips_instruction in get_frame_info() such that we can provide a 4 byte
      aligned union mips_instruction to the is_*_ins() functions and they do
      not need to deal with misalignment themselves.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: 34c2f668 ("MIPS: microMIPS: Add unaligned access support.")
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # v3.10+
      Patchwork: https://patchwork.linux-mips.org/patch/14529/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      a3552dac
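      The copy-to-aligned-storage technique described above can be shown with a
      small self-contained sketch. The union below is a simplified stand-in for
      the kernel's union mips_instruction; the real code additionally has to
      care about halfword ordering.

          #include <stdint.h>

          union insn {
              uint32_t word;
              uint16_t halfword[2];
          };

          /* addr may be only 2-byte aligned on microMIPS, so read the two
           * halfwords into a properly aligned copy before touching the
           * 32-bit fields of the union. */
          static union insn read_insn_aligned(const uint16_t *addr)
          {
              union insn ip;

              ip.halfword[0] = addr[0];
              ip.halfword[1] = addr[1];
              return ip;
          }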
    • MIPS: Clear ISA bit correctly in get_frame_info() · ccaf7caf
      Committed by Paul Burton
      get_frame_info() can be called in microMIPS kernels with the ISA bit
      already clear. For example this happens when unwind_stack_by_address()
      is called because we begin with a PC that has the ISA bit set & subtract
      the (odd) offset from the preceding symbol (which does not have the ISA
      bit set). Since get_frame_info() unconditionally subtracts 1 from the PC
      in microMIPS kernels it incorrectly misaligns the address it then
      attempts to access code at, leading to an address error exception.
      
      Fix this by using msk_isa16_mode() to clear the ISA bit, which allows
      get_frame_info() to function regardless of whether it is provided with a
      PC that has the ISA bit set or not.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: 34c2f668 ("MIPS: microMIPS: Add unaligned access support.")
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # v3.10+
      Patchwork: https://patchwork.linux-mips.org/patch/14528/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      ccaf7caf
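      A hedged sketch of the masking idea above: on microMIPS the ISA bit is
      bit 0 of the PC, so masking it is idempotent and works whether or not the
      caller already cleared it, unlike an unconditional subtraction. The
      macros below mirror the kernel's get_isa16_mode()/msk_isa16_mode()
      helpers but are spelled out here purely for illustration.

          #define GET_ISA16_MODE(pc)  ((pc) & 0x1UL)
          #define MSK_ISA16_MODE(pc)  ((pc) & ~0x1UL)

          static unsigned long normalise_pc(unsigned long pc)
          {
              /* Safe for both "PC with ISA bit" and "PC already masked". */
              return MSK_ISA16_MODE(pc);
          }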
    • MIPS: Stack unwinding while on IRQ stack · d42d8d10
      Committed by Matt Redfearn
      Within unwind stack, check if the stack pointer being unwound is within
      the CPU's irq_stack and if so use that page rather than the task's stack
      page.
      Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
      Acked-by: Jason A. Donenfeld <jason@zx2c4.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
      Cc: Maciej W. Rozycki <macro@imgtec.com>
      Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/14741/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      d42d8d10
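      A hedged sketch of the range check only (not the exact mainline code):
      when the stack pointer being unwound lies inside the per-CPU IRQ stack,
      the unwinder uses that page instead of the task's stack page. The
      irq_stack[] and IRQ_STACK_SIZE names follow the MIPS convention but are
      declared here only for illustration.

          #include <stdbool.h>

          #define IRQ_STACK_SIZE  0x4000          /* illustrative size */
          extern void *irq_stack[];               /* per-CPU IRQ stack bases */

          static inline bool on_irq_stack(int cpu, unsigned long sp)
          {
              unsigned long low = (unsigned long)irq_stack[cpu];
              unsigned long high = low + IRQ_STACK_SIZE;

              return sp >= low && sp < high;
          }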
  3. 25 December 2016, 1 commit
  4. 08 October 2016, 1 commit
    • nmi_backtrace: add more trigger_*_cpu_backtrace() methods · 9a01c3ed
      Committed by Chris Metcalf
      Patch series "improvements to the nmi_backtrace code" v9.
      
      This patch series modifies the trigger_xxx_backtrace() NMI-based remote
      backtracing code to make it more flexible, and makes a few small
      improvements along the way.
      
      The motivation comes from the task isolation code, where there are
      scenarios where we want to be able to diagnose a case where some cpu is
      about to interrupt a task-isolated cpu.  It can be helpful to see both
      where the interrupting cpu is, and also an approximation of where the
      cpu that is being interrupted is.  The nmi_backtrace framework allows us
      to discover the stack of the interrupted cpu.
      
      I've tested that the change works as desired on tile, and build-tested
      x86, arm, mips, and sparc64.  For x86 I confirmed that the generic
      cpuidle stuff as well as the architecture-specific routines are in the
      new cpuidle section.  For arm, mips, and sparc I just build-tested it
      and made sure the generic cpuidle routines were in the new cpuidle
      section, but I didn't attempt to figure out which the platform-specific
      idle routines might be.  That might be more usefully done by someone
      with platform experience in follow-up patches.
      
      This patch (of 4):
      
      Currently you can only request a backtrace of either all cpus, or all
      cpus but yourself.  It can also be helpful to request a remote backtrace
      of a single cpu, and since we want that, the logical extension is to
      support a cpumask as the underlying primitive.
      
      This change modifies the existing lib/nmi_backtrace.c code to take a
      cpumask as its basic primitive, and modifies the linux/nmi.h code to use
      the new "cpumask" method instead.
      
      The existing clients of nmi_backtrace (arm and x86) are converted to
      using the new cpumask approach in this change.
      
      The other users of the backtracing API (sparc64 and mips) are converted
      to use the cpumask approach rather than the all/allbutself approach.
      The mips code ignored the "include_self" boolean but with this change it
      will now also dump a local backtrace if requested.
      
      Link: http://lkml.kernel.org/r/1472487169-14923-2-git-send-email-cmetcalf@mellanox.com
      Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
      Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
      Reviewed-by: Aaron Tomlin <atomlin@redhat.com>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9a01c3ed
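      A hedged sketch of how the all/allbutself/single helpers can all collapse
      onto the one cpumask-based primitive this patch introduces. The wrapper
      names mirror linux/nmi.h, but this is an illustration of the shape, not a
      copy of the header.

          #include <linux/cpumask.h>

          void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
                                              bool exclude_self);

          static inline void trigger_all_cpu_backtrace(void)
          {
              arch_trigger_cpumask_backtrace(cpu_online_mask, false);
          }

          static inline void trigger_allbutself_cpu_backtrace(void)
          {
              arch_trigger_cpumask_backtrace(cpu_online_mask, true);
          }

          static inline void trigger_single_cpu_backtrace(int cpu)
          {
              arch_trigger_cpumask_backtrace(cpumask_of(cpu), false);
          }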
  5. 19 September 2016, 1 commit
  6. 02 August 2016, 1 commit
    • MIPS: Use per-mm page to execute branch delay slot instructions · 432c6bac
      Committed by Paul Burton
      In some cases the kernel needs to execute an instruction from the delay
      slot of an emulated branch instruction. These cases include:
      
        - Emulated floating point branch instructions (bc1[ft]l?) for systems
          which don't include an FPU, or upon which the kernel is run with the
          "nofpu" parameter.
      
        - MIPSr6 systems running binaries targeting older revisions of the
          architecture, which may include branch instructions whose encodings
          are no longer valid in MIPSr6.
      
      Executing instructions from such delay slots is done by writing the
      instruction to memory followed by a trap, as part of an "emuframe", and
      executing it. This avoids the requirement of an emulator for the entire
      MIPS instruction set. Prior to this patch such emuframes are written to
      the user stack and executed from there.
      
      This patch moves FP branch delay emuframes off of the user stack and
      into a per-mm page. Allocating a page per-mm leaves userland with access
      to only what it had access to previously, and compared to other
      solutions is relatively simple.
      
      When a thread requires a delay slot emulation, it is allocated a frame.
      A thread may only have one frame allocated at any one time, since it may
      only ever be executing one instruction at any one time. In order to
      ensure that we can free up allocated frame later, its index is recorded
      in struct thread_struct. In the typical case, after executing the delay
      slot instruction we'll execute a break instruction with the BRK_MEMU
      code. This traps back to the kernel & leads to a call to do_dsemulret
      which frees the allocated frame & moves the user PC back to the
      instruction that would have executed following the emulated branch.
      In some cases the delay slot instruction may be invalid, such as a
      branch, or may trigger an exception. In these cases the BRK_MEMU break
      instruction will not be hit. In order to ensure that frames are freed
      this patch introduces dsemul_thread_cleanup() and calls it to free any
      allocated frame upon thread exit. If the instruction generated an
      exception & leads to a signal being delivered to the thread, or indeed
      if a signal simply happens to be delivered to the thread whilst it is
      executing from the struct emuframe, then we need to take care to exit
      the frame appropriately. This is done by either rolling back the user PC
      to the branch or advancing it to the continuation PC prior to signal
      delivery, using dsemul_thread_rollback(). If this were not done then a
      sigreturn would return to the struct emuframe, and if that frame had
      meanwhile been used in response to an emulated branch instruction within
      the signal handler then we would execute the wrong user code.
      
      Whilst a user could theoretically place something like a compact branch
      to self in a delay slot and cause their thread to become stuck in an
      infinite loop with the frame never being deallocated, this would:
      
        - Only affect the user's single process.
      
        - Be architecturally invalid since there would be a branch in the
          delay slot, which is forbidden.
      
        - Be extremely unlikely to happen by mistake, and provide a program
          with no more ability to harm the system than a simple infinite loop
          would.
      
      If a thread requires a delay slot emulation & no frame is available to
      it (i.e. the process has enough other threads that all frames are
      currently in use) then the thread joins a waitqueue. It will sleep until
      a frame is freed by another thread in the process.
      
      Since we now know whether a thread has an allocated frame due to our
      tracking of its index, the cookie field of struct emuframe is removed as
      we can be more certain whether we have a valid frame. Since a thread may
      only ever have a single frame at any given time, the epc field of struct
      emuframe is also removed & the PC to continue from is instead stored in
      struct thread_struct. Together these changes simplify & shrink struct
      emuframe somewhat, allowing twice as many frames to fit into the page
      allocated for them.
      
      The primary benefit of this patch is that we are now free to mark the
      user stack non-executable where that is possible.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: Maciej Rozycki <maciej.rozycki@imgtec.com>
      Cc: Faraz Shahbazker <faraz.shahbazker@imgtec.com>
      Cc: Raghu Gandham <raghu.gandham@imgtec.com>
      Cc: Matthew Fortune <matthew.fortune@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13764/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      432c6bac
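      A hedged sketch of the allocation scheme only: one page per mm holds a
      fixed number of emuframes, a bitmap records which frames are in use, and
      a thread that finds them all busy sleeps on a waitqueue until another
      thread frees one. Structure, constant and function names here are
      illustrative, not the exact arch/mips implementation.

          #include <linux/bitmap.h>
          #include <linux/spinlock.h>
          #include <linux/wait.h>

          #define NUM_EMUFRAMES   32      /* illustrative: frames per per-mm page */

          struct mm_emu_state {
              unsigned long       in_use[BITS_TO_LONGS(NUM_EMUFRAMES)];
              wait_queue_head_t   wq;
              spinlock_t          lock;
          };

          static int alloc_emuframe(struct mm_emu_state *st)
          {
              int idx;

              spin_lock(&st->lock);
              while ((idx = find_first_zero_bit(st->in_use, NUM_EMUFRAMES)) >= NUM_EMUFRAMES) {
                  spin_unlock(&st->lock);
                  /* All frames busy: sleep until another thread frees one. */
                  wait_event(st->wq, !bitmap_full(st->in_use, NUM_EMUFRAMES));
                  spin_lock(&st->lock);
              }
              __set_bit(idx, st->in_use);
              spin_unlock(&st->lock);
              return idx;
          }

          static void free_emuframe(struct mm_emu_state *st, int idx)
          {
              spin_lock(&st->lock);
              __clear_bit(idx, st->in_use);
              spin_unlock(&st->lock);
              wake_up(&st->wq);           /* let any waiter retry the allocation */
          }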
  7. 28 May 2016, 1 commit
  8. 21 May 2016, 1 commit
    • exit_thread: remove empty bodies · 5f56a5df
      Committed by Jiri Slaby
      Define HAVE_EXIT_THREAD for archs which want to do something in
      exit_thread. For others, let's define exit_thread as an empty inline.
      
      This is a cleanup before we change the prototype of exit_thread to
      accept a task parameter.
      
      [akpm@linux-foundation.org: fix mips]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5f56a5df
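      A hedged sketch of the pattern this cleanup introduces (conceptually what
      linux/sched.h looks like after it): architectures that need to do work in
      exit_thread() select a config symbol; everyone else gets an empty inline
      and the call compiles away.

          #ifdef CONFIG_HAVE_EXIT_THREAD
          extern void exit_thread(void);
          #else
          static inline void exit_thread(void)
          {
          }
          #endif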
  9. 13 May 2016, 3 commits
    • MIPS: Force CPUs to lose FP context during mode switches · 6b832257
      Committed by Paul Burton
      Commit 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options
      for MIPS") added support for the PR_SET_FP_MODE prctl, which allows a
      userland program to modify its FP mode at runtime. This is most notably
      required if dynamic linking leads to the FP mode requirement changing at
      runtime from that indicated in the initial executable's ELF header. In
      order to avoid overhead in the general FP context restore code, it aimed
      to have threads in the process become unable to enable the FPU during a
      mode switch & have the thread calling the prctl syscall wait for all
      other threads in the process to be context switched at least once. Once
      that happens we can know that no thread in the process whose mode will
      be switched has live FP context, and it's safe to perform the mode
      switch. However, in the (rare) case of mode switches occurring in
      multithreaded programs this can lead to indeterminate delays for the
      thread invoking the prctl syscall, and the code monitoring for those
      context switches was woefully inadequate for all but the simplest cases.
      
      Fix this by broadcasting an IPI if other CPUs may have live FP context
      for an affected thread, with a handler causing those CPUs to relinquish
      their FPU ownership. Threads will then be allowed to continue running
      but will stall on the wait_on_atomic_t in enable_restore_fp_context if
      they attempt to use FP again whilst the mode switch is still in
      progress. The end result is less fragile poking at scheduler context
      switch counts & a more expedient completion of the mode switch.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
      Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
      Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: stable <stable@vger.kernel.org> # v4.0+
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13145/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      6b832257
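      A hedged sketch of the IPI idea only: broadcast a function call so that
      any CPU which may hold live FP context for the affected mm relinquishes
      FPU ownership, instead of waiting for every thread to be context
      switched. lose_fpu_for_mm() is a hypothetical helper standing in for the
      real FPU-disabling logic.

          #include <linux/smp.h>

          struct mm_struct;
          void lose_fpu_for_mm(struct mm_struct *mm);     /* hypothetical helper */

          static void fp_mode_switch_ipi(void *info)
          {
              /* Drop this CPU's FPU ownership if it holds context for the mm. */
              lose_fpu_for_mm(info);
          }

          static void broadcast_fp_mode_switch(struct mm_struct *mm)
          {
              /* Run the handler on all other CPUs and wait for them to finish. */
              smp_call_function(fp_mode_switch_ipi, mm, 1);
          }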
    • MIPS: Disable preemption during prctl(PR_SET_FP_MODE, ...) · bd239f1e
      Committed by Paul Burton
      Whilst a PR_SET_FP_MODE prctl is performed there are decisions made
      based upon whether the task is executing on the current CPU. This may
      change if we're preempted, so disable preemption to avoid such changes
      for the lifetime of the mode switch.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
      Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
      Tested-by: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: stable <stable@vger.kernel.org> # v4.0+
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13144/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      bd239f1e
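      A hedged sketch of the fix's shape: any decision based on "is this task
      currently running on this CPU?" must not race with preemption, so the
      mode-switch body runs between preempt_disable() and preempt_enable().
      do_fp_mode_switch() and set_fp_mode() are illustrative stand-ins, not the
      real mips_set_process_fp_mode() body.

          #include <linux/preempt.h>
          #include <linux/sched.h>

          int do_fp_mode_switch(struct task_struct *task, unsigned int value);  /* stand-in */

          int set_fp_mode(struct task_struct *task, unsigned int value)
          {
              int ret;

              preempt_disable();
              ret = do_fp_mode_switch(task, value);
              preempt_enable();

              return ret;
          }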
    • MIPS: Make flush_thread · 04cc89d1
      Committed by Ralf Baechle
      Avoids function calls to an empty function.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      04cc89d1
  10. 09 May 2016, 1 commit
    • MIPS: Don't unwind to user mode with EVA · a816b306
      Committed by James Hogan
      When unwinding through IRQs and exceptions, the unwinding only continues
      if the PC is a kernel text address, however since EVA it is possible for
      user and kernel address ranges to overlap, potentially allowing
      unwinding to continue to user mode if the user PC happens to be in the
      kernel text address range.
      
      Adjust the check to also ensure that the register state from before the
      exception is actually running in kernel mode, i.e. !user_mode(regs).
      
      I don't believe any harm can come of this problem, since the PC is only
      output, the stack pointer is checked to ensure it resides within the
      task's stack page before it is dereferenced in search of the return
      address, and the return address register is similarly only output (if
      the PC is in a leaf function or the beginning of a non-leaf function).
      
      However unwind_stack() is only meant for unwinding kernel code, so to be
      correct the unwind should stop there.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Reviewed-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # 3.15+
      Patchwork: https://patchwork.linux-mips.org/patch/11700/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      a816b306
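      A hedged sketch of the strengthened check (an illustrative wrapper, not
      the mainline code): a kernel-text PC is no longer sufficient once EVA
      allows user and kernel address ranges to overlap, so the saved register
      state must also be from kernel mode.

          #include <linux/kernel.h>       /* __kernel_text_address() */
          #include <asm/ptrace.h>         /* struct pt_regs, user_mode() */

          static bool may_unwind_through(struct pt_regs *regs, unsigned long pc)
          {
              /* Stop unwinding rather than walk into user state. */
              return __kernel_text_address(pc) && !user_mode(regs);
          }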
  11. 03 April 2016, 1 commit
  12. 02 February 2016, 1 commit
  13. 24 March 2015, 1 commit
  14. 05 March 2015, 1 commit
  15. 17 February 2015, 1 commit
  16. 12 February 2015, 1 commit
    • MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS · 9791554b
      Committed by Paul Burton
      Userland code may be built using an ABI which permits linking to objects
      that have more restrictive floating point requirements. For example,
      userland code may be built to target the O32 FPXX ABI. Such code may be
      linked with other FPXX code, or code built for either one of the more
      restrictive FP32 or FP64. When linking with more restrictive code, the
      overall requirement of the process becomes that of the more restrictive
      code. The kernel has no way to know in advance which mode the process
      will need to be executed in, and indeed it may need to change during
      execution. The dynamic loader is the only code which will know the
      overall required mode, and so it needs to have a means to instruct the
      kernel to switch the FP mode of the process.
      
      This patch introduces 2 new options to the prctl syscall which provide
      such a capability. The FP mode of the process is represented as a
      simple bitmask combining a number of mode bits mirroring those present
      in the hardware. Userland can either retrieve the current FP mode of
      the process:
      
        mode = prctl(PR_GET_FP_MODE);
      
      or modify the current FP mode of the process:
      
        err = prctl(PR_SET_FP_MODE, new_mode);
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Matthew Fortune <matthew.fortune@imgtec.com>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/8899/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      9791554b
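      A hedged userland usage sketch for the two new options. PR_GET_FP_MODE,
      PR_SET_FP_MODE and the PR_FP_MODE_FR/PR_FP_MODE_FRE mode bits come from
      linux/prctl.h; error handling is kept minimal, and the set request may
      fail on hardware that cannot support the requested mode.

          #include <stdio.h>
          #include <sys/prctl.h>          /* pulls in linux/prctl.h */

          int main(void)
          {
              int mode = prctl(PR_GET_FP_MODE, 0, 0, 0, 0);

              if (mode < 0) {
                  perror("PR_GET_FP_MODE");
                  return 1;
              }
              printf("FR=%d FRE=%d\n",
                     !!(mode & PR_FP_MODE_FR), !!(mode & PR_FP_MODE_FRE));

              /* Ask for FR=1 mode; may fail if the hardware cannot support it. */
              if (prctl(PR_SET_FP_MODE, PR_FP_MODE_FR, 0, 0, 0) < 0)
                  perror("PR_SET_FP_MODE");

              return 0;
          }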
  17. 31 January 2015, 1 commit
    • MIPS: fork: Fix MSA/FPU/DSP context duplication race · 39148e94
      Committed by James Hogan
      There is a race in the MIPS fork code which allows the child to get a
      stale copy of parent MSA/FPU/DSP state that is active in hardware
      registers when the fork() is called. This is because copy_thread() saves
      the live register state into the child context only if the hardware is
      currently in use, apparently on the assumption that the hardware state
      cannot have been saved and disabled since the initial duplication of the
      task_struct. However preemption is certainly possible during this
      window.
      
      An example sequence of events is as follows:
      
      1) The parent userland process puts important data into saved floating
         point registers ($f20-$f31), which are then dirty compared to the
         process' stored context.
      
      2) The parent process calls fork() which does a clone system call.
      
      3) In the kernel, do_fork() -> copy_process() -> dup_task_struct() ->
         arch_dup_task_struct() (which uses the weakly defined default
         implementation). This duplicates the parent process' task context,
         which includes a stale version of its FP context from when it was
         last saved, probably some time before (1).
      
      4) At some point before copy_process() calls copy_thread(), such as when
         duplicating the memory map, the process is descheduled. Perhaps it is
         preempted asynchronously, or perhaps it sleeps while blocked on a
         mutex. The dirty FP state in the FP registers is saved to the parent
         process' context and the FPU is disabled.
      
      5) When the process is rescheduled again it continues copying state
         until it gets to copy_thread(), which checks whether the FPU is in
         use, so that it can copy that dirty state to the child process' task
         context. Because of the deschedule however the FPU is not in use, so
         the child process' context is left with stale FP context from the
         last time the parent saved it (some time before (1)).
      
      6) When the new child process is scheduled it reads the important data
         from the saved floating point registers, and ends up doing a NULL
         pointer dereference as a result of the stale data.
      
      This use of saved floating point registers across function calls can be
      triggered fairly easily by explicitly using inline asm with a current
      (MIPS R2) compiler, but is far more likely to happen unintentionally
      with a MIPS R6 compiler where the FP registers are more likely to get
      used as scratch registers for storing non-fp data.
      
      It is easily fixed, in the same way that other architectures do it, by
      overriding the implementation of arch_dup_task_struct() to sync the
      dirty hardware state to the parent process' task context *prior* to
      duplicating it, rather than copying straight to the child process' task
      context in copy_thread(). Note, the FPU hardware is not disabled so the
      parent process may continue executing with the live register context,
      but now the child process is guaranteed to have an identical copy of it
      at that point.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Reported-by: Matthew Fortune <matthew.fortune@imgtec.com>
      Tested-by: Markos Chandras <markos.chandras@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/9075/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      39148e94
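      A hedged sketch of the fix's shape (the real code lives in
      arch/mips/kernel/process.c): flush any live FPU/MSA/DSP state into the
      parent's task_struct before the struct is duplicated, so the child can
      never inherit a stale snapshot regardless of when the parent is
      descheduled. save_live_fp_context() is an illustrative stand-in for the
      preempt-safe save sequence.

          #include <linux/sched.h>

          void save_live_fp_context(struct task_struct *tsk);    /* hypothetical */

          int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
          {
              /*
               * Save the live hardware state to src first; the parent keeps the
               * registers, but the copy taken below is now guaranteed fresh.
               */
              save_live_fp_context(src);

              *dst = *src;
              return 0;
          }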
  18. 24 November 2014, 2 commits
    • MIPS: Add arch_trigger_all_cpu_backtrace() function · 856839b7
      Committed by Eunbong Song
      Currently, arch_trigger_all_cpu_backtrace() is defined in only x86 and
      sparc which have an NMI.  But in case of softlockup, it could be possible
      to dump backtrace of all cpus. and this could be helpful for debugging.
      
      for example, if system has 2 cpus.
      
          CPU 0                          CPU 1
          acquire read_lock()
                                         try to do write_lock()
          ...
          missing read_unlock()
      
      In this case, a softlockup will occur because CPU 0 never calls
      read_unlock(), and dump_stack() prints a backtrace only for CPU 0. It
      would be very helpful if CPU 1's backtrace were printed as well.
      
      [ralf@linux-mips.org: Fixed whitespace and formatting issues.]
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/8200/
      856839b7
    • MIPS: Remove useless parentheses · 635c9907
      Committed by Ralf Baechle
      Based on the spatch
      
      @@
      expression e;
      @@
      - return (e);
      + return e;
      
      with heavy hand editing because some of the changes are either whitespace
      or indentation only or result in excessively long lines.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      635c9907
  19. 02 August 2014, 3 commits
  20. 24 May 2014, 1 commit
    • MIPS: MT: Remove SMTC support · b633648c
      Committed by Ralf Baechle
      Nobody is maintaining SMTC anymore and there also seems to be no userbase.
      Which is a pity - the SMTC technology primarily developed by Kevin D.
      Kissell <kevink@paralogos.com> is an ingenious demonstration for the MT
      ASE's power and elegance.
      
      Based on Markos Chandras <Markos.Chandras@imgtec.com> patch
      https://patchwork.linux-mips.org/patch/6719/ which while very similar did
      no longer apply cleanly when I tried to merge it plus some additional
      post-SMTC cleanup - SMTC was a feature as tricky to remove as it was to
      merge once upon a time.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      b633648c
  21. 27 March 2014, 2 commits
    • MIPS: Basic MSA context switching support · 1db1af84
      Committed by Paul Burton
      This patch adds support for context switching the MSA vector registers.
      These 128 bit vector registers are aliased with the FP registers - an
      FP register accesses the least significant bits of the vector register
      with which it is aliased (ie. the register with the same index). Due to
      both this & the requirement that the scalar FPU must be 64-bit (FR=1) if
      enabled at the same time as MSA the kernel will enable MSA & scalar FP
      at the same time for tasks which use MSA. If we restore the MSA vector
      context then we might as well enable the scalar FPU since the reason it
      was left disabled was to allow for lazy FP context restoring - but we
      just restored the FP context as it's a subset of the vector context. If
      we restore the FP context and have previously used MSA then we have to
      restore the whole vector context anyway (see comment in
      enable_restore_fp_context for details) so similarly we might as well
      enable MSA.
      
      Thus if a task does not use MSA then it will continue to behave as
      without this patch - the scalar FP context will be saved & restored as
      usual. But if a task executes an MSA instruction then it will save &
      restore the vector context forever more.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/6431/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      1db1af84
    • MIPS: Don't assume 64-bit FP registers for dump_{,task_}fpu · 6cec7c4a
      Committed by Paul Burton
      This code assumed that saved FP registers are 64 bits wide, an
      assumption which will no longer be true once MSA is introduced. This
      patch modifies the code to copy the lower 64 bits of each register in
      turn, which is safe for any FP register width >= 64 bits.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/6425/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      6cec7c4a
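      A hedged sketch of the copy loop described above: however wide the saved
      FP/vector registers are, only the low 64 bits of each register are
      exported, one register at a time. NUM_FPU_REGS is 32 on MIPS; the
      accessor is hypothetical.

          #include <stdint.h>

          #define NUM_FPU_REGS 32

          uint64_t read_fpr_low64(const void *saved_state, int idx);     /* hypothetical */

          static void dump_fp_regs(uint64_t *dst, const void *saved_state)
          {
              int i;

              /* Copy the lower 64 bits of each register, whatever its width. */
              for (i = 0; i < NUM_FPU_REGS; i++)
                  dst[i] = read_fpr_low64(saved_state, i);
          }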
  22. 25 January 2014, 1 commit
  23. 14 January 2014, 1 commit
    • MIPS: Support for 64-bit FP with O32 binaries · 597ce172
      Committed by Paul Burton
      CPUs implementing MIPS32 R2 may include a 64-bit FPU, just as MIPS64 CPUs
      do. In order to preserve backwards compatibility a 64-bit FPU will act
      like a 32-bit FPU (by accessing doubles from the least significant 32
      bits of an even-odd pair of FP registers) when the Status.FR bit is
      zero, again just like a mips64 CPU. The standard O32 ABI is defined
      expecting a 32-bit FPU, however recent toolchains support use of a
      64-bit FPU from an O32 MIPS32 executable. When an ELF executable is
      built to use a 64-bit FPU a new flag (EF_MIPS_FP64) is set in the ELF
      header.
      
      With this patch the kernel will check the EF_MIPS_FP64 flag when
      executing an O32 binary, and set Status.FR accordingly. The addition
      of O32 64-bit FP support lessens the opportunity for optimisation in
      the FPU emulator, so a CONFIG_MIPS_O32_FP64_SUPPORT Kconfig option is
      introduced to allow this support to be disabled for those that don't
      require it.
      
      Inspired by an earlier patch by Leonid Yegoshin, but implemented more
      cleanly & correctly.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: Paul Burton <paul.burton@imgtec.com>
      Patchwork: https://patchwork.linux-mips.org/patch/6154/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      597ce172
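      A hedged sketch of the header check only: the decision keys off a single
      ELF flag. EF_MIPS_FP64 (0x00000200) is the real flag; the function around
      it is illustrative rather than the kernel's actual SET_PERSONALITY /
      ELF-checking path.

          #include <elf.h>
          #include <stdbool.h>

          #ifndef EF_MIPS_FP64
          #define EF_MIPS_FP64 0x00000200
          #endif

          static bool o32_binary_wants_fp64(const Elf32_Ehdr *ehdr)
          {
              /* Status.FR is set for O32 executables built for a 64-bit FPU. */
              return (ehdr->e_flags & EF_MIPS_FP64) != 0;
          }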
  24. 01 July 2013, 1 commit