1. 24 March 2015, 1 commit
  2. 17 February 2015, 1 commit
  3. 12 February 2015, 1 commit
    • MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS · 9791554b
      Committed by Paul Burton
      Userland code may be built using an ABI which permits linking to objects
      that have more restrictive floating point requirements. For example,
      userland code may be built to target the O32 FPXX ABI. Such code may be
      linked with other FPXX code, or code built for either one of the more
      restrictive FP32 or FP64. When linking with more restrictive code, the
      overall requirement of the process becomes that of the more restrictive
      code. The kernel has no way to know in advance which mode the process
      will need to be executed in, and indeed it may need to change during
      execution. The dynamic loader is the only code which will know the
      overall required mode, and so it needs to have a means to instruct the
      kernel to switch the FP mode of the process.
      
      This patch introduces 2 new options to the prctl syscall which provide
      such a capability. The FP mode of the process is represented as a
      simple bitmask combining a number of mode bits mirroring those present
      in the hardware. Userland can either retrieve the current FP mode of
      the process:
      
        mode = prctl(PR_GET_FP_MODE);
      
      or modify the current FP mode of the process:
      
        err = prctl(PR_SET_FP_MODE, new_mode);
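      
      A minimal user-space sketch of the interface (error handling trimmed;
      PR_FP_MODE_FR and PR_FP_MODE_FRE are the mode bits that accompany
      these options):
      
        #include <stdio.h>
        #include <sys/prctl.h>
      
        int main(void)
        {
            /* query the current FP mode of the process */
            int mode = prctl(PR_GET_FP_MODE);
            if (mode < 0)
                return 1;   /* kernel without PR_GET_FP_MODE support */
      
            printf("FR=%d FRE=%d\n",
                   !!(mode & PR_FP_MODE_FR), !!(mode & PR_FP_MODE_FRE));
      
            /* request 64-bit FP register mode (Status.FR = 1) */
            if (prctl(PR_SET_FP_MODE, PR_FP_MODE_FR) != 0)
                return 1;   /* mode not supported by this CPU/kernel */
            return 0;
        }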
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Matthew Fortune <matthew.fortune@imgtec.com>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/8899/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  4. 31 January 2015, 1 commit
    • MIPS: fork: Fix MSA/FPU/DSP context duplication race · 39148e94
      Committed by James Hogan
      There is a race in the MIPS fork code which allows the child to get a
      stale copy of the parent's MSA/FPU/DSP state that is active in
      hardware registers when fork() is called. This is because
      copy_thread() saves the live register state into the child context
      only if the hardware is currently in use, apparently on the
      assumption that the hardware state cannot have been saved and
      disabled since the initial duplication of the task_struct. However,
      preemption is certainly possible during this window.
      
      An example sequence of events is as follows:
      
      1) The parent userland process puts important data into saved floating
         point registers ($f20-$f31), which are then dirty compared to the
         process' stored context.
      
      2) The parent process calls fork() which does a clone system call.
      
      3) In the kernel, do_fork() -> copy_process() -> dup_task_struct() ->
         arch_dup_task_struct() (which uses the weakly defined default
         implementation). This duplicates the parent process' task context,
         which includes a stale version of its FP context from when it was
         last saved, probably some time before (1).
      
      4) At some point before copy_process() calls copy_thread(), such as when
         duplicating the memory map, the process is descheduled. Perhaps it is
         preempted asynchronously, or perhaps it sleeps while blocked on a
         mutex. The dirty FP state in the FP registers is saved to the parent
         process' context and the FPU is disabled.
      
      5) When the process is rescheduled again it continues copying state
         until it gets to copy_thread(), which checks whether the FPU is in
         use, so that it can copy that dirty state to the child process' task
         context. Because of the deschedule, however, the FPU is not in use,
         so the child process' context is left with stale FP context from the
         last time the parent saved it (some time before (1)).
      
      6) When the new child process is scheduled it reads the important data
         from the saved floating point registers, and ends up doing a NULL
         pointer dereference as a result of the stale data.
      
      This use of saved floating point registers across function calls can be
      triggered fairly easily by explicitly using inline asm with a current
      (MIPS R2) compiler, but is far more likely to happen unintentionally
      with a MIPS R6 compiler where the FP registers are more likely to get
      used as scratch registers for storing non-fp data.
      
      It is easily fixed, in the same way that other architectures do it, by
      overriding the implementation of arch_dup_task_struct() to sync the
      dirty hardware state to the parent process' task context *prior* to
      duplicating it, rather than copying straight to the child process' task
      context in copy_thread(). Note, the FPU hardware is not disabled so the
      parent process may continue executing with the live register context,
      but now the child process is guaranteed to have an identical copy of it
      at that point.
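      
      A sketch of that approach (helper names follow the MIPS FPU/DSP code,
      but treat the details as illustrative):
      
        int arch_dup_task_struct(struct task_struct *dst,
                                 struct task_struct *src)
        {
            /* Sync any live FPU/MSA/DSP hardware state into the parent's
             * context *before* duplicating it; preemption stays disabled
             * so the state cannot be saved+disabled underneath us. */
            preempt_disable();
            if (is_msa_enabled())
                save_msa(current);
            else if (is_fpu_owner())
                _save_fp(current);
            save_dsp(current);
            preempt_enable();
      
            *dst = *src;    /* child copies the now-up-to-date context */
            return 0;
        }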
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Reported-by: Matthew Fortune <matthew.fortune@imgtec.com>
      Tested-by: Markos Chandras <markos.chandras@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/9075/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  5. 24 November 2014, 2 commits
    • MIPS: Add arch_trigger_all_cpu_backtrace() function · 856839b7
      Committed by Eunbong Song
      Currently, arch_trigger_all_cpu_backtrace() is defined only on x86 and
      sparc, which have an NMI. But in the case of a softlockup it is still
      possible to dump the backtraces of all CPUs, and this can be helpful
      for debugging.
      
      For example, suppose a system has 2 CPUs:
      
      	CPU 0				CPU 1
       acquire read_lock()
      
      				try to do write_lock()
      
       ...
       missing read_unlock()
      
      In this case a softlockup will occur because CPU 0 does not call
      read_unlock(), and dump_stack() prints a backtrace only for CPU 0.
      Printing CPU 1's backtrace as well would be very helpful.
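      
      On MIPS this can be implemented as little more than a cross-CPU
      dump_stack() call; a sketch (the include_self handling is elided):
      
        static void arch_dump_stack(void *info)
        {
            struct pt_regs *regs = get_irq_regs();
      
            if (regs)
                show_regs(regs);
            dump_stack();
        }
      
        void arch_trigger_all_cpu_backtrace(bool include_self)
        {
            /* run the dumper on the other CPUs, waiting for completion */
            smp_call_function(arch_dump_stack, NULL, 1);
        }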
      
      [ralf@linux-mips.org: Fixed whitespace and formatting issues.]
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/8200/
    • MIPS: Remove useless parentheses · 635c9907
      Committed by Ralf Baechle
      Based on the spatch
      
      @@
      expression e;
      @@
      - return (e);
      + return e;
      
      with heavy hand editing, because some of the changes are whitespace-
      or indentation-only, or would result in excessively long lines.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  6. 02 August 2014, 3 commits
  7. 24 May 2014, 1 commit
    • MIPS: MT: Remove SMTC support · b633648c
      Committed by Ralf Baechle
      Nobody is maintaining SMTC anymore, and there also seems to be no user
      base. Which is a pity - the SMTC technology, primarily developed by
      Kevin D. Kissell <kevink@paralogos.com>, is an ingenious demonstration
      of the MT ASE's power and elegance.
      
      Based on Markos Chandras <Markos.Chandras@imgtec.com>'s patch
      https://patchwork.linux-mips.org/patch/6719/ which, while very
      similar, no longer applied cleanly when I tried to merge it, plus some
      additional post-SMTC cleanup - SMTC was a feature as tricky to remove
      as it was to merge once upon a time.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  8. 27 March 2014, 2 commits
    • MIPS: Basic MSA context switching support · 1db1af84
      Committed by Paul Burton
      This patch adds support for context switching the MSA vector registers.
      These 128-bit vector registers are aliased with the FP registers - an
      FP register accesses the least significant bits of the vector register
      with which it is aliased (i.e. the register with the same index). Due
      to both this & the requirement that the scalar FPU must be 64-bit
      (FR=1) if enabled at the same time as MSA, the kernel will enable MSA
      & scalar FP at the same time for tasks which use MSA. If we restore
      the MSA vector context then we might as well enable the scalar FPU,
      since the reason it was left disabled was to allow for lazy FP context
      restoring - but we just restored the FP context, as it's a subset of
      the vector context. If we restore the FP context and have previously
      used MSA then we have to restore the whole vector context anyway (see
      the comment in enable_restore_fp_context for details), so similarly we
      might as well enable MSA.
      
      Thus if a task does not use MSA then it will continue to behave as
      without this patch - the scalar FP context will be saved & restored as
      usual. But if a task executes an MSA instruction then it will save &
      restore the vector context forever more.
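      
      In outline, the restore-time policy reads roughly as follows (a sketch
      only - the names are illustrative, and the real decision logic lives
      in enable_restore_fp_context):
      
        if (tsk_used_msa(current)) {
            /* the vector context is a superset of the scalar FP context:
             * restore it once, then run with both units enabled and the
             * scalar FPU in 64-bit (FR=1) mode */
            enable_msa_and_fp64();      /* illustrative helper */
            restore_msa(current);
        } else {
            /* task never used MSA: usual lazy scalar FP handling */
            own_fpu(1);                 /* enable & restore scalar FP */
        }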
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/6431/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Don't assume 64-bit FP registers for dump_{,task_}fpu · 6cec7c4a
      Committed by Paul Burton
      This code assumed that saved FP registers are 64 bits wide, an
      assumption which will no longer be true once MSA is introduced. This
      patch modifies the code to copy the lower 64 bits of each register in
      turn, which is safe for any FP register width >= 64 bits.
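      
      The pattern amounts to the following sketch (assuming a 64-bit
      elf_fpreg_t, as on MIPS):
      
        /* copy only the low 64 bits of each saved FP register, one
         * register at a time, regardless of the saved register width */
        for (i = 0; i < NUM_FPU_REGS; i++)
            memcpy(&r[i], &current->thread.fpu.fpr[i],
                   sizeof(elf_fpreg_t));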
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/6425/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  9. 25 January 2014, 1 commit
  10. 14 January 2014, 1 commit
    • MIPS: Support for 64-bit FP with O32 binaries · 597ce172
      Committed by Paul Burton
      CPUs implementing MIPS32 R2 may include a 64-bit FPU, just as MIPS64 CPUs
      do. In order to preserve backwards compatibility a 64-bit FPU will act
      like a 32-bit FPU (by accessing doubles from the least significant 32
      bits of an even-odd pair of FP registers) when the Status.FR bit is
      zero, again just like a MIPS64 CPU. The standard O32 ABI is defined
      expecting a 32-bit FPU, however recent toolchains support use of a
      64-bit FPU from an O32 MIPS32 executable. When an ELF executable is
      built to use a 64-bit FPU a new flag (EF_MIPS_FP64) is set in the ELF
      header.
      
      With this patch the kernel will check the EF_MIPS_FP64 flag when
      executing an O32 binary, and set Status.FR accordingly. The addition
      of O32 64-bit FP support lessens the opportunity for optimisation in
      the FPU emulator, so a CONFIG_MIPS_O32_FP64_SUPPORT Kconfig option is
      introduced to allow this support to be disabled for those that don't
      require it.
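      
      The check boils down to something like this sketch (TIF_32BIT_FPREGS
      being the thread flag used to track the 32-bit FP register view):
      
        /* on exec of an O32 binary: pick the FP mode from the ELF header */
        if (IS_ENABLED(CONFIG_MIPS_O32_FP64_SUPPORT) &&
            (ehdr->e_flags & EF_MIPS_FP64))
            clear_thread_flag(TIF_32BIT_FPREGS); /* 64-bit FPU, Status.FR=1 */
        else
            set_thread_flag(TIF_32BIT_FPREGS);   /* 32-bit FPU view, FR=0 */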
      
      Inspired by an earlier patch by Leonid Yegoshin, but implemented more
      cleanly & correctly.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: Paul Burton <paul.burton@imgtec.com>
      Patchwork: https://patchwork.linux-mips.org/patch/6154/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  11. 01 July 2013, 2 commits
  12. 22 May 2013, 1 commit
  13. 18 May 2013, 2 commits
    • MIPS: Extract schedule_mfi info from __schedule · 5000653e
      Committed by Tony Wu
      schedule_mfi is supposed to be extracted from schedule(), and is used
      in thread_saved_pc and get_wchan.
      
      But, after optimization, schedule() is reduced to a sibling call to
      __schedule(), and no real frame info can be extracted from it.
      
      One solution is to compile schedule() with -fno-omit-frame-pointer
      and -fno-optimize-sibling-calls, but that would incur a performance
      degradation.
      
      Another solution is to extract the info from the real scheduler,
      __schedule, and this is the approach adopted here.
      
      This patch obtains the address of __schedule either by following the
      'j' call in schedule() if KALLSYMS is disabled, or by using
      kallsyms_lookup_name() to look up __schedule if KALLSYMS is available;
      it then extracts schedule_mfi from __schedule's frame info.
      
      This patch also fixes the "Can't analyze schedule() prologue"
      warning at boot time.
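      
      In outline (a sketch; the 'j'-target decoding and frame-info helpers
      are hypothetical names):
      
        unsigned long addr;
      
        #ifdef CONFIG_KALLSYMS
            /* KALLSYMS available: resolve __schedule by name */
            addr = kallsyms_lookup_name("__schedule");
        #else
            /* no KALLSYMS: follow the 'j __schedule' in schedule() by
             * decoding the jump target (hypothetical helper) */
            addr = decode_j_target((unsigned long)schedule);
        #endif
            /* then run the usual prologue analysis on __schedule */
            get_frame_info_at(addr, &schedule_mfi);  /* hypothetical */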
      Signed-off-by: Tony Wu <tung7970@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/5237/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Fix sibling call handling in get_frame_info · e7438c4b
      Committed by Tony Wu
      Given a function, get_frame_info() analyzes its instructions
      to figure out frame size and return address. get_frame_info()
      works as follows:
      
      1. analyze up to 128 instructions if the function size is unknown
      2. search for 'addiu/daddiu sp,sp,-immed' for frame size
      3. search for 'sw ra,offset(sp)' for return address
      4. end search when it sees jr/jal/jalr
      
      This leads to an issue when the given function is a sibling call, as
      in the following example.
      
      801ca110 <schedule>:
      801ca110:       8f820000        lw      v0,0(gp)
      801ca114:       8c420000        lw      v0,0(v0)
      801ca118:       080726f0        j       801c9bc0 <__schedule>
      801ca11c:       00000000        nop
      
      801ca120 <io_schedule>:
      801ca120:       27bdffe8        addiu   sp,sp,-24
      801ca124:       3c028022        lui     v0,0x8022
      801ca128:       afbf0014        sw      ra,20(sp)
      
      In this case, get_frame_info() cannot properly detect schedule's
      frame info, and eventually returns io_schedule's instead.
      
      This patch adds 'j' to the end-of-search condition to work around
      sibling call cases.
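      
      A sketch of the resulting check, using the instruction-decoding types
      from asm/inst.h (close to, but not necessarily identical to, the
      patched code):
      
        static inline int is_jump_ins(union mips_instruction *ip)
        {
            /* sibling calls are emitted as plain 'j'; end the search on
             * it just like on jr/jal/jalr */
            if (ip->j_format.opcode == j_op)
                return 1;
            if (ip->j_format.opcode == jal_op)
                return 1;
            if (ip->r_format.opcode != spec_op)
                return 0;
            return ip->r_format.func == jalr_op ||
                   ip->r_format.func == jr_op;
        }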
      Signed-off-by: Tony Wu <tung7970@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/5236/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  14. 09 May 2013, 1 commit
  15. 08 April 2013, 1 commit
  16. 04 February 2013, 1 commit
  17. 01 February 2013, 1 commit
  18. 14 December 2012, 1 commit
  19. 29 November 2012, 1 commit
  20. 15 October 2012, 1 commit
  21. 29 March 2012, 1 commit
  22. 01 March 2012, 1 commit
  23. 12 December 2011, 3 commits
    • nohz: Remove tick_nohz_idle_enter_norcu() / tick_nohz_idle_exit_norcu() · 1268fbc7
      Committed by Frederic Weisbecker
      Those two APIs were provided to optimize the calls of
      tick_nohz_idle_enter() and rcu_idle_enter() into a single
      irq-disabled section, so that no interrupt happening in between would
      needlessly process any RCU job.
      
      But this is an optimization whose benefits have yet to be measured.
      Let's start simple and completely decouple the idle RCU and dyntick
      idle logics to simplify things.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • nohz: Allow rcu extended quiescent state handling separately from tick stop · 2bbb6817
      Committed by Frederic Weisbecker
      It is assumed that RCU won't be used once we switch to tickless mode
      and until we restart the tick. However this is not always true, as on
      x86-64 where we dereference the idle notifiers after the tick is
      stopped.
      
      To prepare for fixing this, add two new APIs:
      tick_nohz_idle_enter_norcu() and tick_nohz_idle_exit_norcu().
      
      If no use of RCU is made in the idle loop between the
      tick_nohz_idle_enter() and tick_nohz_idle_exit() calls, the arch
      must instead call the new *_norcu() versions, so that it doesn't
      need to call rcu_idle_enter() and rcu_idle_exit() itself.
      
      Otherwise the arch must call tick_nohz_idle_enter() and
      tick_nohz_idle_exit() and also call explicitly:
      
      - rcu_idle_enter() after its last use of RCU before the CPU is put
      to sleep.
      - rcu_idle_exit() before the first use of RCU after the CPU is woken
      up.
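      
      Side by side, the two patterns look like this in an arch idle loop (a
      sketch; cpu_do_idle() and use_rcu_protected_data() stand in for the
      arch's sleep primitive and its RCU-using code):
      
        /* idle loop that makes no use of RCU while tickless */
        tick_nohz_idle_enter_norcu();   /* implies rcu_idle_enter() */
        cpu_do_idle();
        tick_nohz_idle_exit_norcu();    /* implies rcu_idle_exit() */
      
        /* idle loop that still uses RCU after the tick is stopped */
        tick_nohz_idle_enter();
        use_rcu_protected_data();       /* e.g. x86-64 idle notifiers */
        rcu_idle_enter();               /* after the last use of RCU */
        cpu_do_idle();
        rcu_idle_exit();                /* before the first use of RCU */
        tick_nohz_idle_exit();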
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mike Frysinger <vapier@gentoo.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: David Miller <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • nohz: Separate out irq exit and idle loop dyntick logic · 280f0677
      Committed by Frederic Weisbecker
      The tick_nohz_stop_sched_tick() function, which tries to delay the
      next timer tick as long as possible, can be called from two places:
      
      - From the idle loop, to start the dyntick idle mode.
      - From interrupt exit, if we have interrupted the dyntick idle mode,
      so that we reprogram the next tick event in case the irq changed some
      internal state that requires this action.
      
      There are only a few minor differences between the two, handled by
      that function and driven by the ts->inidle cpu variable and the
      inidle parameter. The whole guarantees that we only update the
      dyntick mode on irq exit if we actually interrupted the dyntick idle
      mode, and that we enter the RCU extended quiescent state from idle
      loop entry only.
      
      Split this function into:
      
      - tick_nohz_idle_enter(), which sets ts->inidle to 1, enters
      dynticks idle mode unconditionally if it can, and enters into RCU
      extended quiescent state.
      
      - tick_nohz_irq_exit(), which only updates the dynticks idle mode
      when ts->inidle is set (i.e. if tick_nohz_idle_enter() has been called).
      
      To maintain symmetry, tick_nohz_restart_sched_tick() has been renamed
      into tick_nohz_idle_exit().
      
      This simplifies the code and micro-optimizes the irq exit path (no
      need for local_irq_save() there). It also prepares for the split
      between the dyntick and RCU extended quiescent state logics. We'll
      need this split to further fix illegal uses of RCU in extended
      quiescent states in the idle loop.
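      
      The resulting call sites, in sketch form (cpu_do_idle() is an
      illustrative stand-in for the arch's sleep primitive):
      
        /* arch idle loop */
        tick_nohz_idle_enter();     /* sets ts->inidle, stops the tick,
                                       enters the RCU extended QS */
        while (!need_resched())
            cpu_do_idle();
        tick_nohz_idle_exit();      /* restarts the tick */
      
        /* interrupt exit path (irq_exit()) */
        if (!in_interrupt() && idle_cpu(smp_processor_id()))
            tick_nohz_irq_exit();   /* no-op unless ts->inidle is set */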
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mike Frysinger <vapier@gentoo.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: David Miller <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
  24. 01 November 2011, 2 commits
  25. 26 July 2011, 1 commit
  26. 15 June 2011, 1 commit
  27. 31 March 2011, 1 commit
  28. 17 December 2010, 1 commit
    • MIPS: Don't stomp on caller's ->regs[2] in copy_thread() · a989ff89
      Committed by Al Viro
      We never needed that (->regs[2] is overwritten on return from the
      syscall paths with the return value of the syscall, so storing it
      there early made no sense) and with the new restart logic since
      d27240bf7e61d2656de18e158ec910a902030847 it has become really bad - we
      lose the original syscall number before the place where we decide that
      we might need a syscall restart.
      
      Note that for child we do need the assignment to regs[2] - it won't go
      through the normal return from syscall path.
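      
      The child-side assignment in question looks roughly like this in the
      MIPS copy_thread() (sketch):
      
        /* the child doesn't go through the normal syscall return path,
         * so its syscall return registers must be set up here */
        childregs->regs[2] = 0;   /* v0: fork() returns 0 in the child */
        childregs->regs[7] = 0;   /* a3: clear the syscall error flag */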
      
      [Ralf: Issue found and reported by Lluís; initial investigations by me;
      bug finally found and patch by Al; testing by me and Lluís.]
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Lluís Batlle i Rossell <viriketo@gmail.com>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  29. 13 April 2010, 1 commit
  30. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      percpu.h is included by sched.h and module.h, and thus ends up being
      included when building most .c files. percpu.h includes slab.h, which
      in turn includes gfp.h, making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed. Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming their availability. As
      this conversion needs to touch a large number of source files, the
      following script is used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there. I.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings. It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
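      
      For a typical file, the conversion amounts to the following
      (illustrative):
      
        /* before: kmalloc()/GFP_KERNEL compiled only because module.h
         * pulled in percpu.h, which pulled in slab.h and gfp.h */
        #include <linux/module.h>
      
        /* after: the file includes what it actually uses */
        #include <linux/module.h>
        #include <linux/slab.h>    /* kmalloc(), kfree() */
        #include <linux/gfp.h>     /* GFP_KERNEL */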
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked. Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate. This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests in
      step 7, I'm fairly confident about the coverage of this conversion
      patch. If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds
      of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  31. 04 August 2009, 1 commit
    • MIPS: Avoid clobbering struct pt_regs in kthreads · 484889fc
      Committed by David Daney
      The resume() implementation in octeon_switch.S examines the saved
      cp0_status register. We were clobbering the entire pt_regs structure
      in kernel threads, leading to random crashes.
      
      When switching away from a kernel thread, the saved cp0_status is
      examined, and if bit 30 is set, it is cleared and the CP2 state is
      saved into the pt_regs structure. Since the kernel thread stack
      overlaid the pt_regs structure, this resulted in a corrupt stack.
      When the kthread with the corrupt stack was resumed, it could crash
      if it used any of the data in the stack that had been clobbered.
      
      We fix it by moving the kernel thread stack down so it doesn't overlay
      pt_regs.
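      
      Conceptually (a sketch of the layout change only - the actual patch
      differs in detail):
      
        /* before: the kthread stack began at the very top of the stack
         * page and overlaid the pt_regs area that resume() examines
         *
         * after: keep pt_regs above the kthread's stack */
        struct pt_regs *childregs = task_pt_regs(p);
        unsigned long childksp = (unsigned long)childregs; /* grows down */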
      Signed-off-by: David Daney <ddaney@caviumnetworks.com>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>