1. 22 May 2012, 1 commit
  2. 07 May 2012, 1 commit
  3. 05 May 2012, 1 commit
  4. 27 Apr 2012, 1 commit
    • MIPS: Use set_current_blocked() and block_sigmask() · 8598f3cd
      Committed by Matt Fleming
      As described in e6fa16ab ("signal: sigprocmask() should do
      retarget_shared_pending()") the modification of current->blocked is
      incorrect as we need to check whether the signal we're about to block
      is pending in the shared queue.
      
      Also, use the new helper function introduced in commit 5e6292c0
      ("signal: add block_sigmask() for adding sigmask to current->blocked")
      which centralises the code for updating current->blocked after
      successfully delivering a signal and reduces the amount of duplicate
      code across architectures. In the past some architectures got this
      code wrong, so using this helper function should stop that from
      happening again.
      
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-kernel@vger.kernel.org
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: David Daney <ddaney@caviumnetworks.com>
      Cc: linux-mips@linux-mips.org
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Patchwork: https://patchwork.linux-mips.org/patch/3363/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      8598f3cd
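      The race described above has a userspace analog: a signal raised while
      blocked stays pending rather than being delivered, which is the kind of
      pending state a naive update of current->blocked could mishandle. A
      minimal sketch using the POSIX sigprocmask()/sigpending() API (the
      helpers named in the commit are kernel-internal; this only illustrates
      the pending-while-blocked behaviour):

      ```c
      #include <assert.h>
      #include <signal.h>
      #include <stdio.h>

      int main(void)
      {
          sigset_t block, pending;

          sigemptyset(&block);
          sigaddset(&block, SIGUSR1);

          /* Block SIGUSR1, then raise it: it stays queued as pending. */
          assert(sigprocmask(SIG_BLOCK, &block, NULL) == 0);
          raise(SIGUSR1);

          assert(sigpending(&pending) == 0);
          assert(sigismember(&pending, SIGUSR1) == 1);
          printf("SIGUSR1 pending while blocked\n");
          return 0;
      }
      ```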
  5. 26 Apr 2012, 2 commits
    • mips: Use generic idle thread allocation · 360014a3
      Committed by Thomas Gleixner
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Link: http://lkml.kernel.org/r/20120420124557.512158271@linutronix.de
      360014a3
    • smp: Add task_struct argument to __cpu_up() · 8239c25f
      Committed by Thomas Gleixner
      Preparatory patch to make the idle thread allocation for secondary
      cpus generic.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Mike Frysinger <vapier@gentoo.org>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: James E.J. Bottomley <jejb@parisc-linux.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: x86@kernel.org
      Link: http://lkml.kernel.org/r/20120420124556.964170564@linutronix.de
      8239c25f
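      A hedged sketch of the signature change this series introduces. The
      body here is a hypothetical stub (the real __cpu_up() is per-arch kernel
      code); only the prototype shape follows the commit:

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Simplified stand-in for the kernel's task_struct. */
      struct task_struct { const char *comm; };

      /* Before: each arch allocated its own idle thread internally.
       * After:  the generic core pre-allocates it and passes it down. */
      static int __cpu_up(unsigned int cpu, struct task_struct *tidle)
      {
          printf("bringing up cpu %u with idle task '%s'\n", cpu, tidle->comm);
          return 0;
      }

      int main(void)
      {
          struct task_struct idle = { "swapper/1" };
          assert(__cpu_up(1, &idle) == 0);
          return 0;
      }
      ```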
  6. 29 Mar 2012, 2 commits
    • remove references to cpu_*_map in arch/ · 0b5f9c00
      Committed by Rusty Russell
      This has been obsolescent for a while; time for the final push.
      
      In adjacent context, replaced old cpus_* with cpumask_*.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: David S. Miller <davem@davemloft.net> (arch/sparc)
      Acked-by: Chris Metcalf <cmetcalf@tilera.com> (arch/tile)
      Cc: user-mode-linux-devel@lists.sourceforge.net
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: linux-hexagon@vger.kernel.org
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Helge Deller <deller@gmx.de>
      Cc: sparclinux@vger.kernel.org
      0b5f9c00
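      The kernel-side change (cpus_* operations on raw cpu_*_map variables
      replaced with cpumask_* accessors) has a rough userspace counterpart in
      glibc's cpu_set_t macros, which likewise hide the underlying bitmap
      behind accessors. A small sketch (Linux/glibc only; assumes _GNU_SOURCE):

      ```c
      #define _GNU_SOURCE
      #include <assert.h>
      #include <sched.h>
      #include <stdio.h>

      int main(void)
      {
          cpu_set_t mask;

          /* Manipulate the mask only through accessor macros,
           * never by touching the bitmap storage directly. */
          CPU_ZERO(&mask);
          CPU_SET(0, &mask);
          CPU_SET(2, &mask);

          assert(CPU_COUNT(&mask) == 2);
          assert(CPU_ISSET(2, &mask));
          assert(!CPU_ISSET(1, &mask));
          printf("cpus in mask: %d\n", CPU_COUNT(&mask));
          return 0;
      }
      ```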
    • Disintegrate asm/system.h for MIPS · b81947c6
      Committed by David Howells
      Disintegrate asm/system.h for MIPS.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      cc: linux-mips@linux-mips.org
      b81947c6
  7. 24 Mar 2012, 1 commit
    • coredump: remove VM_ALWAYSDUMP flag · 909af768
      Committed by Jason Baron
      The motivation for this patchset was that I was looking at a way for a
      qemu-kvm process, to exclude the guest memory from its core dump, which
      can be quite large.  There are already a number of filter flags in
      /proc/<pid>/coredump_filter, however, these allow one to specify 'types'
      of kernel memory, not specific address ranges (which is needed in this
      case).
      
      Since there are no more vma flags available, the first patch eliminates
      the need for the 'VM_ALWAYSDUMP' flag.  The flag is used internally by
      the kernel to mark vdso and vsyscall pages.  However, it is simple
      enough to check if a vma covers a vdso or vsyscall page without the need
      for this flag.
      
      The second patch then replaces the 'VM_ALWAYSDUMP' flag with a new
      'VM_NODUMP' flag, which can be set by userspace using new madvise flags:
      'MADV_DONTDUMP', and unset via 'MADV_DODUMP'.  The core dump filters
      continue to work the same as before unless 'MADV_DONTDUMP' is set on the
      region.
      
      The qemu code which implements this feature is at:
      
        http://people.redhat.com/~jbaron/qemu-dump/qemu-dump.patch
      
      In my testing the qemu core dump shrank from 383MB to 13MB with this
      patch.
      
      I also believe that the 'MADV_DONTDUMP' flag might be useful for
      security sensitive apps, which might want to select which areas are
      dumped.
      
      This patch:
      
      The VM_ALWAYSDUMP flag is currently used by the coredump code to
      indicate that a vma is part of a vsyscall or vdso section.  However, we
      can determine if a vma is in one these sections by checking it against
      the gate_vma and checking for a non-NULL return value from
      arch_vma_name().  Thus, freeing a valuable vma bit.
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Acked-by: Roland McGrath <roland@hack.frob.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      909af768
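      The MADV_DONTDUMP / MADV_DODUMP flags the second patch introduces are
      reachable from userspace via madvise(). A minimal sketch (requires
      Linux 3.4+; guarded with #ifdef since older headers lack the flags):

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>

      int main(void)
      {
          size_t len = 4096;
          char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          assert(buf != MAP_FAILED);
          memset(buf, 0xAB, len);

      #ifdef MADV_DONTDUMP
          /* Exclude the region from core dumps, then include it again. */
          assert(madvise(buf, len, MADV_DONTDUMP) == 0);
          assert(madvise(buf, len, MADV_DODUMP) == 0);
          printf("MADV_DONTDUMP/MADV_DODUMP applied\n");
      #else
          printf("MADV_DONTDUMP not available on this system\n");
      #endif
          munmap(buf, len);
          return 0;
      }
      ```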
  8. 05 Mar 2012, 1 commit
  9. 01 Mar 2012, 1 commit
  10. 25 Feb 2012, 1 commit
    • irq_domain/mips: Allow irq_domain on MIPS · abd2363f
      Committed by Grant Likely
      This patch makes IRQ_DOMAIN usable on MIPS.  It uses an ugly workaround
      to preserve current behaviour so that MIPS has time to add irq_domain
      registration to the irq controller drivers.  The workaround will be
      removed in Linux v3.6.
      Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rob Herring <rob.herring@calxeda.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mips@linux-mips.org
      abd2363f
  11. 24 Feb 2012, 1 commit
  12. 21 Feb 2012, 3 commits
  13. 18 Jan 2012, 2 commits
    • audit: inline audit_syscall_entry to reduce burden on archs · b05d8447
      Committed by Eric Paris
      Every arch calls:
      
      if (unlikely(current->audit_context))
      	audit_syscall_entry()
      
      which requires knowledge about audit (the existence of audit_context) in
      the arch code.  Just do it all in a static inline in audit.h so that archs
      can remain blissfully ignorant.
      Signed-off-by: Eric Paris <eparis@redhat.com>
      b05d8447
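      A toy userspace model of the pattern this commit describes: the unlikely
      audit_context check moves into a static inline so callers need no audit
      knowledge. All types and names below are simplified stand-ins for
      illustration, not the kernel's definitions:

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical stand-ins for the kernel's audit state. */
      struct audit_context { int unused; };
      struct task { struct audit_context *audit_context; };

      static struct task current_task;         /* zero-initialized: no context */
      #define current (&current_task)

      static int entries;
      static void __audit_syscall_entry(void) { entries++; }

      /* The whole point: the branch lives in one static inline,
       * so arch code just calls audit_syscall_entry() unconditionally. */
      static inline void audit_syscall_entry(void)
      {
          if (__builtin_expect(current->audit_context != NULL, 0))
              __audit_syscall_entry();
      }

      int main(void)
      {
          audit_syscall_entry();               /* no context: nothing happens */
          assert(entries == 0);

          struct audit_context ctx;
          current->audit_context = &ctx;
          audit_syscall_entry();               /* context present: recorded */
          assert(entries == 1);
          printf("entries=%d\n", entries);
          return 0;
      }
      ```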
    • Audit: push audit success and retcode into arch ptrace.h · d7e7528b
      Committed by Eric Paris
      The audit system previously expected arches calling to audit_syscall_exit to
      supply as arguments if the syscall was a success and what the return code was.
      Audit also provides a helper AUDITSC_RESULT which was supposed to simplify things
      by converting from negative retcodes to an audit internal magic value stating
      success or failure.  This helper was wrong and could indicate that a valid
      pointer returned to userspace was a failed syscall.  The fix is to fix the
      layering foolishness.  We now pass audit_syscall_exit a struct pt_regs and it
      in turn calls back into arch code to collect the return value and to
      determine if the syscall was a success or failure.  We also define a generic
      is_syscall_success() macro which determines success/failure based on if the
      value is < -MAX_ERRNO.  This works for arches like x86 which do not use a
      separate mechanism to indicate syscall failure.
      
      We make both the is_syscall_success() and regs_return_value() static inlines
      instead of macros.  The reason is because the audit function must take a void*
      for the regs.  (uml calls theirs struct uml_pt_regs instead of just struct
      pt_regs so audit_syscall_exit can't take a struct pt_regs).  Since the audit
      function takes a void* we need to use static inlines to cast it back to the
      arch correct structure to dereference it.
      
      The other major change is that on some arches, like ia64, MIPS and ppc, we
      change regs_return_value() to give us the negative value on syscall failure.
      The only other user of this macro, kretprobe_example.c, won't notice and it
      makes the value signed consistently for the audit functions across all archs.
      
      In arch/sh/kernel/ptrace_64.c I see that we were using regs[9] in the old
      audit code as the return value.  But the ptrace_64.h code defined the macro
      regs_return_value() as regs[3].  I have no idea which one is correct, but this
      patch now uses the regs_return_value() function, so it now uses regs[3].
      
      For powerpc we previously used regs->result but now use the
      regs_return_value() function which uses regs->gprs[3].  regs->gprs[3] is
      always positive so the regs_return_value(), much like ia64 makes it negative
      before calling the audit code when appropriate.
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Acked-by: H. Peter Anvin <hpa@zytor.com> [for x86 portion]
      Acked-by: Tony Luck <tony.luck@intel.com> [for ia64]
      Acked-by: Richard Weinberger <richard@nod.at> [for uml]
      Acked-by: David S. Miller <davem@davemloft.net> [for sparc]
      Acked-by: Ralf Baechle <ralf@linux-mips.org> [for mips]
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [for ppc]
      d7e7528b
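      The is_syscall_success() test described above can be modeled in plain C:
      a return value is a failure only if it falls in [-MAX_ERRNO, -1], so a
      large positive pointer-like value is correctly treated as success (which
      the old AUDITSC_RESULT(ret < 0) style check would misclassify). A sketch,
      with MAX_ERRNO following the kernel's linux/err.h convention:

      ```c
      #include <assert.h>
      #include <errno.h>
      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_ERRNO 4095

      /* Failure iff the value, viewed as unsigned, lands in the
       * top MAX_ERRNO values, i.e. in [-MAX_ERRNO, -1] when signed. */
      static inline bool is_syscall_success(long regs_return_value)
      {
          return (unsigned long)regs_return_value < (unsigned long)-MAX_ERRNO;
      }

      int main(void)
      {
          assert(is_syscall_success(0));
          assert(is_syscall_success(42));
          /* A valid userspace-pointer-like value is a success. */
          assert(is_syscall_success(0x7fff0000L));
          assert(!is_syscall_success(-EINVAL));
          assert(!is_syscall_success(-MAX_ERRNO));
          printf("is_syscall_success ok\n");
          return 0;
      }
      ```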
  14. 13 Jan 2012, 2 commits
  15. 12 Dec 2011, 3 commits
    • nohz: Remove tick_nohz_idle_enter_norcu() / tick_nohz_idle_exit_norcu() · 1268fbc7
      Committed by Frederic Weisbecker
      Those two APIs were provided to optimize the calls of
      tick_nohz_idle_enter() and rcu_idle_enter() into a single
      irq disabled section. This way no interrupt happening in-between would
      needlessly process any RCU job.
      
      Now we are talking about an optimization for which benefits
      have yet to be measured. Let's start simple and completely decouple
      idle rcu and dyntick idle logics to simplify.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      1268fbc7
    • nohz: Allow rcu extended quiescent state handling separately from tick stop · 2bbb6817
      Committed by Frederic Weisbecker
      It is assumed that rcu won't be used once we switch to tickless
      mode and until we restart the tick. However this is not always
      true, as in x86-64 where we dereference the idle notifiers after
      the tick is stopped.
      
      To prepare for fixing this, add two new APIs:
      tick_nohz_idle_enter_norcu() and tick_nohz_idle_exit_norcu().
      
      If no use of RCU is made in the idle loop between
      tick_nohz_enter_idle() and tick_nohz_exit_idle() calls, the arch
      must instead call the new *_norcu() version such that the arch doesn't
      need to call rcu_idle_enter() and rcu_idle_exit().
      
      Otherwise the arch must call tick_nohz_enter_idle() and
      tick_nohz_exit_idle() and also call explicitly:
      
      - rcu_idle_enter() after its last use of RCU before the CPU is put
      to sleep.
      - rcu_idle_exit() before the first use of RCU after the CPU is woken
      up.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mike Frysinger <vapier@gentoo.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: David Miller <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      2bbb6817
    • nohz: Separate out irq exit and idle loop dyntick logic · 280f0677
      Committed by Frederic Weisbecker
      The tick_nohz_stop_sched_tick() function, which tries to delay
      the next timer tick as long as possible, can be called from two
      places:
      
      - From the idle loop to start the dyntick idle mode
      - From interrupt exit if we have interrupted the dyntick
      idle mode, so that we reprogram the next tick event in
      case the irq changed some internal state that requires this
      action.
      
      There are only few minor differences between both that
      are handled by that function, driven by the ts->inidle
      cpu variable and the inidle parameter. The whole guarantees
      that we only update the dyntick mode on irq exit if we actually
      interrupted the dyntick idle mode, and that we enter in RCU extended
      quiescent state from idle loop entry only.
      
      Split this function into:
      
      - tick_nohz_idle_enter(), which sets ts->inidle to 1, enters
      dynticks idle mode unconditionally if it can, and enters into RCU
      extended quiescent state.
      
      - tick_nohz_irq_exit() which only updates the dynticks idle mode
      when ts->inidle is set (ie: if tick_nohz_idle_enter() has been called).
      
      To maintain symmetry, tick_nohz_restart_sched_tick() has been renamed
      into tick_nohz_idle_exit().
      
      This simplifies the code and micro-optimizes the irq exit path (no need
      for local_irq_save there). This also prepares for the split between
      dynticks and rcu extended quiescent state logics. We'll need this split to
      further fix illegal uses of RCU in extended quiescent states in the idle
      loop.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mike Frysinger <vapier@gentoo.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: David Miller <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      280f0677
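      A toy state-machine model of the split described above:
      tick_nohz_idle_enter() sets an inidle flag and stops the tick,
      tick_nohz_irq_exit() only re-evaluates when that flag is set, and
      tick_nohz_idle_exit() restores everything. This is a simplified
      illustration, not the kernel's actual tick_sched handling:

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Toy per-"cpu" state mirroring ts->inidle from the commit. */
      struct tick_sched { bool inidle; bool tick_stopped; };

      static void stop_sched_tick(struct tick_sched *ts) { ts->tick_stopped = true; }

      static void tick_nohz_idle_enter(struct tick_sched *ts)
      {
          ts->inidle = true;      /* mark: we are in the idle loop */
          stop_sched_tick(ts);    /* enter dynticks mode unconditionally */
      }

      static void tick_nohz_irq_exit(struct tick_sched *ts)
      {
          if (!ts->inidle)        /* only update if idle was interrupted */
              return;
          stop_sched_tick(ts);
      }

      static void tick_nohz_idle_exit(struct tick_sched *ts)
      {
          ts->inidle = false;
          ts->tick_stopped = false;
      }

      int main(void)
      {
          struct tick_sched ts = { false, false };

          tick_nohz_irq_exit(&ts);            /* not in idle: no effect */
          assert(!ts.tick_stopped);

          tick_nohz_idle_enter(&ts);
          assert(ts.inidle && ts.tick_stopped);

          tick_nohz_irq_exit(&ts);            /* interrupted idle: re-stops tick */
          assert(ts.tick_stopped);

          tick_nohz_idle_exit(&ts);
          assert(!ts.inidle && !ts.tick_stopped);
          printf("nohz model ok\n");
          return 0;
      }
      ```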
  16. 09 Dec 2011, 1 commit
    • mips: Use HAVE_MEMBLOCK_NODE_MAP · 9d15ffc8
      Committed by Tejun Heo
      mips used early_node_map[] just to prime free_area_init_nodes().  Now
      memblock can be used for the same purpose and early_node_map[] is
      scheduled to be dropped.  Use memblock instead.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: linux-mips@linux-mips.org
      9d15ffc8
  17. 08 Dec 2011, 16 commits
    • MIPS: Netlogic: Add support for XLP 3XX cores · 2aa54b20
      Committed by Jayachandran C
      Add new processor ID to asm/cpu.h and kernel/cpu-probe.c.
      Update to new CPU frequency detection code which works on XLP 3XX
      and 8XX.
      Signed-off-by: Jayachandran C <jayachandranc@netlogicmicro.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2971/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      2aa54b20
    • MIPS: Netlogic: Add XLP makefiles and config · 1c773ea4
      Committed by Jayachandran C
      - Add CPU_XLP and NLM_XLR_BOARD to arch/mips/Kconfig for Netlogic XLP boards
      - Update mips Makefiles to add XLP
      Signed-off-by: Jayachandran C <jayachandranc@netlogicmicro.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2968/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      1c773ea4
    • MIPS: Netlogic: XLP CPU support. · a3d4fb2d
      Committed by Jayachandran C
      Add support for Netlogic's XLP MIPS SoC. This patch adds:
      * XLP processor ID in cpu_probe.c and asm/cpu.h
      * XLP case to asm/module.h
      * CPU_XLP case to mm/tlbex.c
      * minor change to r4k cache handling to ignore XLP secondary cache
      * XLP cpu overrides to mach-netlogic/cpu-feature-overrides.h
      Signed-off-by: Jayachandran C <jayachandranc@netlogicmicro.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2966/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      a3d4fb2d
    • MIPS: Netlogic: add r4k_wait as the cpu_wait · 11d48aac
      Committed by Jayachandran C
      Use r4k_wait as the CPU wait function for XLR/XLS processors.
      Signed-off-by: Jayachandran C <jayachandranc@netlogicmicro.com>
      To: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2728/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      11d48aac
    • MIPS/Perf-events: Cleanup event->destroy at event init · ff5d7265
      Committed by Deng-Cheng Zhu
      Simplify the code by changing the place of event->destroy().
      Signed-off-by: Deng-Cheng Zhu <dczhu@mips.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Eyal Barzilay <eyal@mips.com>
      Cc: Zenon Fortuna <zenon@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/3109/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      ff5d7265
    • MIPS/Perf-events: Remove pmu and event state checking in validate_event() · 266623b7
      Committed by Deng-Cheng Zhu
      Why removing pmu checking:
      Since 3.2-rc1, when arch level event init is called, the event is already
      connected to its PMU. Also, validate_event() is _only_ called by
      validate_group() in event init, so there is no need to check or
      temporarily assign the event's pmu during validate_group().
      
      Why removing event state checking:
      Events could be created in PERF_EVENT_STATE_OFF (attr->disabled == 1), when
      these events go through this checking, validate_group() does dummy work.
      But we do need to do group scheduling emulation for them in event init.
      Again, validate_event() is _only_ called by validate_group().
      
      Reference: http://www.spinics.net/lists/mips/msg42190.html
      Signed-off-by: Deng-Cheng Zhu <dczhu@mips.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Eyal Barzilay <eyal@mips.com>
      Cc: Zenon Fortuna <zenon@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/3108/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      266623b7
    • MIPS/Perf-events: Remove erroneous check on active_events · 74653ccf
      Committed by Deng-Cheng Zhu
      Port the following patch for ARM by Mark Rutland:
      
      - 57ce9bb3
          ARM: 6902/1: perf: Remove erroneous check on active_events
      
          When initialising a PMU, there is a check to protect against races with
          other CPUs filling all of the available event slots. Since armpmu_add
          checks that an event can be scheduled, we do not need to do this at
          initialisation time. Furthermore the current code is broken because it
          assumes that atomic_inc_not_zero will unconditionally increment
          active_counts and then tries to decrement it again on failure.
      
          This patch removes the broken, redundant code.
      Signed-off-by: Deng-Cheng Zhu <dczhu@mips.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Eyal Barzilay <eyal@mips.com>
      Cc: Zenon Fortuna <zenon@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/3106/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      74653ccf
    • MIPS/Perf-events: Don't do validation on raw events · 2c1b54d3
      Committed by Deng-Cheng Zhu
      MIPS licensees may want to modify performance counters to count extra
      events. Also, a user working with raw events is necessarily consulting
      the processor manual. And feeding unsupported events shouldn't cause
      hardware failure and the like.
      
      [ralf@linux-mips.org: performance events also being used in internal
      performance evaluation and have a tendency to change as the micro-
      architecture evolves, even for minor revisions that may not be
      distinguishable by PrID.  It's not very practicable to maintain a list
      of all events and there is no real benefit.]
      Signed-off-by: Deng-Cheng Zhu <dczhu@mips.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Eyal Barzilay <eyal@mips.com>
      Cc: Zenon Fortuna <zenon@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/3107/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      2c1b54d3
    • MIPS Kprobes: Support branch instructions probing · 6457a396
      Committed by Maneesh Soni
      This patch provides support for kprobes on branch instructions. The branch
      instruction at the probed address is actually emulated and not executed
      out-of-line like other normal instructions. Instead the delay-slot instruction
      is copied and single stepped out of line.
      
      At the time of probe hit, the original branch instruction is evaluated
      and the target cp0_epc is computed similar to compute_return_epc().  It
      is also checked if the delay slot instruction can be skipped, which is
      true if there is a NOP in delay slot or branch is taken in case of
      branch likely instructions. Once the delay slot instruction is single
      stepped, normal execution resumes with cp0_epc updated to the earlier
      computed value, as per the branch instruction.
      Signed-off-by: Maneesh Soni <manesoni@cisco.com>
      Signed-off-by: Victor Kamensky <kamensky@cisco.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: ananth@in.ibm.com
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2914/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      6457a396
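      The target-EPC computation mentioned above (compute_return_epc()-style)
      follows the MIPS branch encoding: the 16-bit signed offset is shifted
      left by two and added to the address of the delay slot. A small sketch
      for a plain conditional branch, taken case only (helper name is
      illustrative, not the kernel's):

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* target = address of delay slot (epc + 4) + sign-extended (offset << 2) */
      static uint32_t branch_target(uint32_t epc, int16_t offset)
      {
          return epc + 4 + ((int32_t)offset << 2);
      }

      int main(void)
      {
          /* Forward branch: 3 instructions past the delay slot. */
          assert(branch_target(0x80001000u, 3) == 0x80001010u);
          /* Offset -1 branches back to the branch instruction itself. */
          assert(branch_target(0x80001000u, -1) == 0x80001000u);
          printf("branch target ok\n");
          return 0;
      }
      ```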
    • MIPS Kprobes: Refactor branch emulation · d8d4e3ae
      Committed by Maneesh Soni
      This patch refactors MIPS branch emulation code so as to allow skipping
      delay slot instruction in case of branch likely instructions when branch is
      not taken. This is useful for keeping the code common for use cases like
      kprobes where one would like to handle the branch instructions keeping the
      delay slot instruction also in the picture for branch likely instructions.
      Also allow emulation when the instruction to be decoded is not at pt_regs->cp0_epc
      as in case of kprobes where pt_regs->cp0_epc points to the breakpoint
      instruction.
      
      The patch also exports the function for modules.
      Signed-off-by: Maneesh Soni <manesoni@cisco.com>
      Signed-off-by: Victor Kamensky <kamensky@cisco.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: ananth@in.ibm.com
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2913/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      d8d4e3ae
    • MIPS Kprobes: Deny probes on ll/sc instructions · 9233c1ee
      Committed by Maneesh Soni
      As ll/sc instructions are for atomic read-modify-write operations, allowing
      probes on top of these instructions is a bad idea.
      Signed-off-by: Victor Kamensky <kamensky@cisco.com>
      Signed-off-by: Maneesh Soni <manesoni@cisco.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: ananth@in.ibm.com
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2912/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      9233c1ee
    • MIPS Kprobes: Fix OOPS in arch_prepare_kprobe() · 41dde781
      Committed by Maneesh Soni
      This patch fixes arch_prepare_kprobe() on MIPS when it tries to find the
      instruction at the address preceding the probed address.  The oops happens
      when the probed address is the first address in a kernel module and there is
      no previous address. The patch uses probe_kernel_read() to safely read the
      previous instruction.
      
      CPU 3 Unable to handle kernel paging request at virtual address ffffffffc0211ffc, epc == ffffffff81113204, ra == ffffffff8111511c
      Oops[#1]:
      Cpu 3
      $ 0   : 0000000000000000 0000000000000001 ffffffffc0212000 0000000000000000
      $ 4   : ffffffffc0220030 0000000000000000 0000000000000adf ffffffff81a3f898
      $ 8   : ffffffffc0220030 ffffffffffffffff 000000000000ffff 0000000000004821
      $12   : 000000000000000a ffffffff81105ddc ffffffff812927d0 0000000000000000
      $16   : ffffffff81a40000 ffffffffc0220030 ffffffffc0220030 ffffffffc0212660
      $20   : 0000000000000000 0000000000000008 efffffffffffffff ffffffffc0220000
      $24   : 0000000000000002 ffffffff8139f5b0
      $28   : a800000072adc000 a800000072adfca0 ffffffffc0220000 ffffffff8111511c
      Hi    : 0000000000000000
      Lo    : 0000000000000000
      epc   : ffffffff81113204 arch_prepare_kprobe+0x1c/0xe8
          Tainted: P
      ra    : ffffffff8111511c register_kprobe+0x33c/0x730
      Status: 10008ce3    KX SX UX KERNEL EXL IE
      Cause : 00800008
      BadVA : ffffffffc0211ffc
      PrId  : 000d9008 (Cavium Octeon II)
      Modules linked in: bpa_mem crashinfo pds tun cpumem ipv6 exportfs nfsd OOBnd(P) OOBhal(P) cvmx_mdio cvmx_gpio aipcmod(P) mtsmod procfs(P) utaker_mod dplr_pci hello atomicm_foo [last unloaded: sysmgr_hb]
      Process stapio (pid: 5603, threadinfo=a800000072adc000, task=a8000000722e0438, tls=000000002b4bcda0)
      Stack : ffffffff81a40000 ffffffff81a40000 ffffffffc0220030 ffffffff8111511c
              ffffffffc0218008 0000000000000001 ffffffffc0218008 0000000000000001
              ffffffffc0220000 ffffffffc021efe8 1000000000000000 0000000000000008
              efffffffffffffff ffffffffc0220000 ffffffffc0220000 ffffffffc021d500
              0000000000000022 0000000000000002 1111000072be02b8 0000000000000000
              00000000000015e6 00000000000015e6 00000000007d0f00 a800000072be02b8
              0000000000000000 ffffffff811d16c8 a80000000382e3b0 ffffffff811d5ba0
              ffffffff81b0a270 ffffffff81b0a270 ffffffffc0212000 0000000000000013
              ffffffffc0220030 ffffffffc021ed00 a800000089114c80 000000007f90d590
              a800000072adfe38 a800000089114c80 0000000010020000 0000000010020000
              ...
      Call Trace:
      [<ffffffff81113204>] arch_prepare_kprobe+0x1c/0xe8
      [<ffffffff8111511c>] register_kprobe+0x33c/0x730
      [<ffffffffc021d500>] _stp_ctl_write_cmd+0x8e8/0xa88 [atomicm_foo]
      [<ffffffff812925cc>] vfs_write+0xb4/0x178
      [<ffffffff81292828>] SyS_write+0x58/0x148
      [<ffffffff81103844>] handle_sysn32+0x44/0x84
      
      Code: ffb20010  ffb00000  dc820028 <8c44fffc> 8c500000  0c4449e0  0004203c  14400029  3c048199
      Signed-off-by: Maneesh Soni <manesoni@cisco.com>
      Signed-off-by: Victor Kamensky <kamensky@cisco.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: ananth@in.ibm.com
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2915/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      41dde781
    • MIPS: irq: Remove IRQF_DISABLED · 8b5690f8
      Committed by Yong Zhang
      Since commit [e58aa3d2: genirq: Run irq handlers with interrupts disabled],
      we run all interrupt handlers with interrupts disabled and we even check
      and yell when an interrupt handler returns with interrupts enabled (see
      commit [b738a50a: genirq: Warn when handler enables interrupts]).
      
      So now this flag is a NOOP and can be removed.
      
      [ralf@linux-mips.org: Fixed up conflicts in
      arch/mips/alchemy/common/dbdma.c, arch/mips/cavium-octeon/smp.c and
      arch/mips/kernel/perf_event.c.]
      Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
      To: linux-kernel@vger.kernel.org
      Cc: tglx@linutronix.de
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2835/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      8b5690f8
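      A toy model of why the flag became a no-op: the genirq core now disables
      interrupts around every handler regardless of flags, so passing
      IRQF_DISABLED changes nothing. The dispatch function below is a
      simplified stand-in for the kernel's handler invocation, not real
      genirq code:

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      #define IRQF_DISABLED 0x00000020  /* historical value; now a no-op */

      static bool irqs_disabled_in_handler;
      static bool seen_disabled;

      /* Core dispatch: interrupts are disabled unconditionally,
       * and the flags argument is simply ignored. */
      static void handle_irq(void (*handler)(void), unsigned long flags)
      {
          (void)flags;
          irqs_disabled_in_handler = true;
          handler();
          irqs_disabled_in_handler = false;
      }

      static void my_handler(void) { seen_disabled = irqs_disabled_in_handler; }

      int main(void)
      {
          handle_irq(my_handler, 0);
          assert(seen_disabled);
          handle_irq(my_handler, IRQF_DISABLED);  /* identical behaviour */
          assert(seen_disabled);
          printf("handlers always run with irqs disabled\n");
          return 0;
      }
      ```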
    • MIPS: Handle initmem in systems with kernel not in add_memory_region() mem · 43064c0c
      Committed by David Daney
      This patch addresses a couple of related problems:
      
      1) The kernel may reside in physical memory outside of the ranges set
         by plat_mem_setup().  If this is the case, init mem cannot be
         reused as it resides outside of the range of pages that the kernel
         memory allocators control.
      
      2) initrd images might be loaded in physical memory outside of the
         ranges set by plat_mem_setup().  The memory likewise cannot be
         reused.  The patch doesn't handle this specific case, but the
         infrastructure is useful for future patches that do.
      
      The crux of the problem is that there are memory regions that need be
      memory_present(), but that cannot be free_bootmem() at the time of
      arch_mem_init().  We create a new type of memory (BOOT_MEM_INIT_RAM)
      for use with add_memory_region().  Then arch_mem_init() adds the init
      mem with this type if the init mem is not already covered by existing
      ranges.
      
      When memory is being freed into the bootmem allocator, we skip the
      BOOT_MEM_INIT_RAM ranges so they are not clobbered, but we do signal
      them as memory_present().  This way when they are later freed, the
      necessary memory manager structures have been initialized and the sparse
      allocator is prevented from crashing.
      
      The Octeon specific code that handled this case is removed, because
      the new general purpose code handles the case.
      Signed-off-by: David Daney <ddaney@caviumnetworks.com>
      To: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/1988/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      43064c0c
    • MIPS: BMIPS: Add SMP support code for BMIPS43xx/BMIPS5000 · df0ac8a4
      Committed by Kevin Cernekee
      Initial commit of BMIPS SMP support code.  Smoke-tested on a variety of
      BMIPS4350, BMIPS4380, and BMIPS5000 platforms.
      Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2977/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      df0ac8a4
    • MIPS: Add board_ebase_setup() · 6fb97eff
      Committed by Kevin Cernekee
      Some systems need to relocate the MIPS exception vector base during
      trap initialization.  Add a hook to make this possible.
      Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2959/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      6fb97eff