1. 29 Jul 2016 (2 commits)
    • MIPS: SMP: Update cpu_foreign_map on CPU disable · 826e99be
      By James Hogan
      When a CPU is disabled via CPU hotplug, cpu_foreign_map is not updated.
      This could result in cache management SMP calls being sent to offline
      CPUs instead of online siblings in the same core.
      
      Add a call to calculate_cpu_foreign_map() in the various MIPS cpu
      disable callbacks after set_cpu_online(). All cases are updated for
      consistency and to keep cpu_foreign_map strictly up to date, not just
      those which may support hardware multithreading.
      
      Fixes: cccf34e9 ("MIPS: c-r4k: Fix cache flushing for MT cores")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Kevin Cernekee <cernekee@gmail.com>
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: Hongliang Tao <taohl@lemote.com>
      Cc: Hua Yan <yanh@lemote.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13799/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: SMP: Clear ASID without confusing has_valid_asid() · a05c3920
      By James Hogan
      The SMP flush_tlb_*() functions may clear the memory map's ASIDs for
      other CPUs if the mm has only a single user (the current CPU) in order
      to avoid SMP calls. However this makes it appear to has_valid_asid(),
      which is used by various cache flush functions, as if the CPUs have
      never run in the mm, and therefore can't have cached any of its memory.
      
      For flush_tlb_mm() this doesn't sound unreasonable.
      
      flush_tlb_range() corresponds to flush_cache_range() which does do full
      indexed cache flushes, but only on the icache if the specified mapping
      is executable, otherwise it doesn't guarantee that there are no cache
      contents left for the mm.
      
      flush_tlb_page() corresponds to flush_cache_page(), which will perform
      address based cache ops on the specified page only, and also only
      touches the icache if the page is executable. It does not guarantee that
      there are no cache contents left for the mm.
      
      For example, this affects flush_cache_range() which uses the
      has_valid_asid() optimisation. It is required to flush the icache when
      mappings are made executable (e.g. using mprotect) so they are
      immediately usable. If some code is changed to non executable in order
      to be modified then it will not be flushed from the icache during that
      time, but the ASID on other CPUs may still be cleared for TLB flushing.
      When the code is changed back to executable, flush_cache_range() will
      assume the code hasn't run on those other CPUs due to the zero ASID, and
      won't invalidate the icache on them.
      
      This is fixed by clearing the other CPUs' ASIDs to 1 instead of 0 for the
      above two flush_tlb_*() functions when the corresponding cache flushes
      are likely to be incomplete (non executable range flush, or any page
      flush). This ASID appears valid to has_valid_asid(), but still triggers
      ASID regeneration due to the upper ASID version bits being 0, which is
      less than the minimum ASID version of 1 and so always treated as stale.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13795/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  2. 28 Jul 2016 (2 commits)
  3. 24 Jul 2016 (2 commits)
  4. 28 May 2016 (8 commits)
  5. 24 May 2016 (1 commit)
  6. 21 May 2016 (1 commit)
    • exit_thread: remove empty bodies · 5f56a5df
      By Jiri Slaby
      Define HAVE_EXIT_THREAD for archs which want to do something in
      exit_thread. For others, let's define exit_thread as an empty inline.
      
      This is a cleanup before we change the prototype of exit_thread to
      accept a task parameter.
      
      [akpm@linux-foundation.org: fix mips]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 17 May 2016 (4 commits)
    • MIPS: perf: Fix I6400 event numbers · fd716fca
      By James Hogan
      Fix perf hardware performance counter event numbers for I6400. This core
      does not follow the performance event numbering scheme of previous MIPS
      cores. All performance counters (both odd and even) are capable of
      counting any of the available events.
      
      Fixes: 4e88a862 ("MIPS: Add cases for CPU_I6400")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13259/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: MSA: Fix a link error on `_init_msa_upper' with older GCC · e49d3848
      By Maciej W. Rozycki
      Fix a build regression from commit c9017757 ("MIPS: init upper 64b
      of vector registers when MSA is first used"):
      
      arch/mips/built-in.o: In function `enable_restore_fp_context':
      traps.c:(.text+0xbb90): undefined reference to `_init_msa_upper'
      traps.c:(.text+0xbb90): relocation truncated to fit: R_MIPS_26 against `_init_msa_upper'
      traps.c:(.text+0xbef0): undefined reference to `_init_msa_upper'
      traps.c:(.text+0xbef0): relocation truncated to fit: R_MIPS_26 against `_init_msa_upper'
      
      to !CONFIG_CPU_HAS_MSA configurations with older GCC versions, which are
      unable to figure out that calls to `_init_msa_upper' are indeed dead.
      Of the many ways to tackle this failure choose the approach we have
      already taken in `thread_msa_context_live'.
      
      [ralf@linux-mips.org: Drop patch segment to junk file.]
      Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
      Cc: stable@vger.kernel.org # v3.16+
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13271/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • perf core: Add a 'nr' field to perf_event_callchain_context · 3b1fff08
      By Arnaldo Carvalho de Melo
      We will use it to count how many addresses are in the entry->ip[] array,
      excluding PERF_CONTEXT_{KERNEL,USER,etc} entries, so that we can really
      return the number of entries specified by the user via the relevant
      sysctl, kernel.perf_event_max_contexts, or via the per event
      perf_event_attr.sample_max_stack knob.
      
      This way we keep the perf_sample->ip_callchain->nr meaning, that is the
      number of entries, be it real addresses or PERF_CONTEXT_ entries, while
      honouring the max_stack knobs, i.e. the end result will be max_stack
      entries if we have at least that many entries in a given stack trace.
      
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/n/tip-s8teto51tdqvlfhefndtat9r@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf core: Pass max stack as a perf_callchain_entry context · cfbcf468
      By Arnaldo Carvalho de Melo
      This makes perf_callchain_{user,kernel}() receive the max stack
      as context for the perf_callchain_entry, instead of accessing
      the global sysctl_perf_event_max_stack.
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Wang Nan <wangnan0@huawei.com>
      Cc: Zefan Li <lizefan@huawei.com>
      Link: http://lkml.kernel.org/n/tip-kolmn1yo40p7jhswxwrc7rrd@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  8. 13 May 2016 (20 commits)
    • MIPS: Prevent "restoration" of MSA context in non-MSA kernels · 6533af4d
      By Paul Burton
      If a kernel doesn't support MSA context (ie. CONFIG_CPU_HAS_MSA=n) then
      it will only keep 64 bits per FP register in thread context, and the
      calls to set_fpr64 in restore_msa_extcontext will overrun the end of the
      FP register context into the FCSR & MSACSR values. GCC 6.x has become
      smart enough to detect this & complain like so:
      
          arch/mips/kernel/signal.c: In function 'protected_restore_fp_context':
          ./arch/mips/include/asm/processor.h:114:17: error: array subscript is above array bounds [-Werror=array-bounds]
            fpr->val##width[FPR_IDX(width, idx)] = val;   \
            ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
          ./arch/mips/include/asm/processor.h:118:1: note: in expansion of macro 'BUILD_FPR_ACCESS'
           BUILD_FPR_ACCESS(64)
      
      The only way to trigger this code to run would be for a program to set
      up an artificial extended MSA context structure following a sigframe &
      execute sigreturn. Whilst this doesn't allow a program to write to any
      state that it couldn't already, it makes little sense to allow this
      "restoration" of MSA context in a system that doesn't support MSA.
      
      Fix this by killing a program with SIGSYS if it tries something as crazy
      as "restoring" fake MSA context in this way, also fixing the build error
      & allowing for most of restore_msa_extcontext to be optimised out of
      kernels without support for MSA.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reported-by: Michal Toman <michal.toman@imgtec.com>
      Fixes: bf82cb30 ("MIPS: Save MSA extended context around signals")
      Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Michal Toman <michal.toman@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable <stable@vger.kernel.org> # v4.3+
      Patchwork: https://patchwork.linux-mips.org/patch/13164/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: cevt-r4k: Dynamically calculate min_delta_ns · 1fa40555
      By James Hogan
      Calculate the MIPS clockevent device's min_delta_ns dynamically based on
      the time it takes to perform the mips_next_event() sequence.
      
      Virtualisation in particular makes the current fixed min_delta of 0x300
      inappropriate under some circumstances, as the CP0_Count and CP0_Compare
      registers may be being emulated by the hypervisor, and the frequency may
      not correspond directly to the CPU frequency.
      
      We actually use twice the median of multiple 75th percentiles of
      multiple measurements of how long the mips_next_event() sequence takes,
      in order to fairly efficiently eliminate outliers due to unexpected
      hypervisor latency (which would need handling with retries when it
      occurs during normal operation anyway).
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13176/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Force CPUs to lose FP context during mode switches · 6b832257
      By Paul Burton
      Commit 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options
      for MIPS") added support for the PR_SET_FP_MODE prctl, which allows a
      userland program to modify its FP mode at runtime. This is most notably
      required if dynamic linking leads to the FP mode requirement changing at
      runtime from that indicated in the initial executable's ELF header. In
      order to avoid overhead in the general FP context restore code, it aimed
      to have threads in the process become unable to enable the FPU during a
      mode switch & have the thread calling the prctl syscall wait for all
      other threads in the process to be context switched at least once. Once
      that happens we can know that no thread in the process whose mode will
      be switched has live FP context, and it's safe to perform the mode
      switch. However in the (rare) case of modeswitches occurring in
      multithreaded programs this can lead to indeterminate delays for the
      thread invoking the prctl syscall, and the code monitoring for those
      context switches was woefully inadequate for all but the simplest cases.
      
      Fix this by broadcasting an IPI if other CPUs may have live FP context
      for an affected thread, with a handler causing those CPUs to relinquish
      their FPU ownership. Threads will then be allowed to continue running
      but will stall on the wait_on_atomic_t in enable_restore_fp_context if
      they attempt to use FP again whilst the mode switch is still in
      progress. The end result is less fragile poking at scheduler context
      switch counts & a more expedient completion of the mode switch.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
      Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
      Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: stable <stable@vger.kernel.org> # v4.0+
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13145/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Disable preemption during prctl(PR_SET_FP_MODE, ...) · bd239f1e
      By Paul Burton
      Whilst a PR_SET_FP_MODE prctl is performed there are decisions made
      based upon whether the task is executing on the current CPU. This may
      change if we're preempted, so disable preemption to avoid such changes
      for the lifetime of the mode switch.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
      Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
      Tested-by: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: stable <stable@vger.kernel.org> # v4.0+
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13144/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Allow emulation for unaligned [LS]DXC1 instructions · e70ac023
      By Paul Burton
      If an address error exception occurs for a LDXC1 or SDXC1 instruction,
      within the cop1x opcode space, allow it to be passed through to the FPU
      emulator rather than resulting in a SIGILL. This causes LDXC1 & SDXC1 to
      be handled in a manner consistent with the more common LDC1 & SDC1
      instructions.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Tested-by: Aurelien Jarno <aurelien@aurel32.net>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13143/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: ptrace: Prevent writes to read-only FCSR bits · abf378be
      By Maciej W. Rozycki
      Correct the cases missed with commit 9b26616c ("MIPS: Respect the
      ISA level in FCSR handling") and prevent writes to read-only FCSR bits
      there.
      
      This in particular applies to FP context initialisation where any IEEE
      754-2008 bits preset by `mips_set_personality_nan' are cleared before
      the relevant ptrace(2) call takes effect and the PTRACE_POKEUSR request
      addressing FPC_CSR where no masking of read-only FCSR bits is done.
      
      Remove the FCSR clearing from FP context initialisation then and unify
      PTRACE_POKEUSR/FPC_CSR and PTRACE_SETFPREGS handling, by factoring out
      code from `ptrace_setfpregs' and calling it from both places.
      
      This mostly matters to soft float configurations where the emulator can
      be switched this way to a mode which should not be accessible and cannot
      be set with the CTC1 instruction.  With hard float configurations any
      effect is transient anyway as read-only bits will retain their values at
      the time the FP context is restored.
      Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
      Cc: stable@vger.kernel.org # v4.0+
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13239/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: ptrace: Fix FP context restoration FCSR regression · 42495484
      By Maciej W. Rozycki
      Fix a floating-point context restoration regression introduced with
      commit 9b26616c ("MIPS: Respect the ISA level in FCSR handling")
      that causes a Floating Point exception and consequently a kernel oops
      with hard float configurations when one or more FCSR Enable and their
      corresponding Cause bits are set both at a time via a ptrace(2) call.
      
      To do so reinstate Cause bit masking originally introduced with commit
      b1442d39 ("MIPS: Prevent user from setting FCSR cause bits") to
      address this exact problem and then inadvertently removed from the
      PTRACE_SETFPREGS request with the commit referred above.
      Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
      Cc: stable@vger.kernel.org # v4.0+
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13238/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: ELF: Unify ABI classification macros · c9babb19
      By Maciej W. Rozycki
      Remove a duplicate o32 `elf_check_arch' implementation, move all macro
      variants to <asm/elf.h> and define them unconditionally under individual
      names, substituting alias `elf_check_arch' definitions in variant code.
      Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13245/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • 4a60ad51
    • MIPS: Add probing & defs for VZ & guest features · 6ad816e7
      By James Hogan
      Add a few new cpu-features.h definitions for VZ sub-features, namely the
      existence of the CP0_GuestCtl0Ext, CP0_GuestCtl1, and CP0_GuestCtl2
      registers, and support for GuestID to dealias TLB entries belonging to
      different guests.
      
      Also add certain features present in the guest, with the naming scheme
      cpu_guest_has_*. These are added separately to the main options bitfield
      since they generally parallel similar features in the root context. A
      few of these (FPU, MSA, watchpoints, perf counters, CP0_[X]ContextConfig
      registers, MAAR registers, and probably others in future) can be
      dynamically configured in the guest context, for which the
      cpu_guest_has_dyn_* macros are added.
      
      [ralf@linux-mips.org: Resolve merge conflict.]
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13231/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Avoid magic numbers probing kscratch_mask · 9e575f75
      By James Hogan
      The decode_config4() function reads kscratch_mask from
      CP0_Config4.KScrExist using a hard coded shift and mask. We already have
      a definition for the mask in mipsregs.h, so add a definition for the
      shift and make use of them.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13227/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Add perf counter feature · 30228c40
      By James Hogan
      Add CPU feature for standard MIPS r2 performance counters, as determined
      by the Config1.PC bit. Both perf_events and oprofile probe this bit, so
      let's combine the probing and change both to use cpu_has_perf.
      
      This will also be used for VZ support in KVM to know whether performance
      counters exist which can be exposed to guests.
      
      [ralf@linux-mips.org: resolve conflict.]
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Robert Richter <rric@kernel.org>
      Cc: linux-mips@linux-mips.org
      Cc: oprofile-list@lists.sf.net
      Patchwork: https://patchwork.linux-mips.org/patch/13226/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Add defs & probing of [X]ContextConfig · f18bdfa1
      By James Hogan
      The CP0_[X]ContextConfig registers are present if CP0_Config3.CTXTC or
      CP0_Config3.SM are set, and provide more control over which bits of
      CP0_[X]Context are set to the faulting virtual address on a TLB
      exception.
      
      KVM/VZ will need to be able to save and restore these registers in the
      guest context, so add the relevant definitions and probing of the
      ContextConfig feature in the root context first.
      
      [ralf@linux-mips.org: resolve merge conflict.]
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13225/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Add defs & probing of BadInstr[P] registers · e06a1548
      By James Hogan
      The optional CP0_BadInstr and CP0_BadInstrP registers are written with
      the encoding of the instruction that caused a synchronous exception to
      occur, and the prior branch instruction if in a delay slot.
      
      These will be useful for instruction emulation in KVM, and especially
      for VZ support where reading guest virtual memory is a bit more awkward.
      
      Add CPU option numbers and cpu_has_* definitions to indicate the
      presence of each registers, and add code to probe for them using bits in
      the CP0_Config3 register.
      
      [ralf@linux-mips.org: resolve merge conflict.]
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13224/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Add defs & probing of extended CP0_EBase · 37fb60f8
      By James Hogan
      The CP0_EBase register may optionally have a write gate (WG) bit to
      allow the upper bits to be written, i.e. bits 31:30 on MIPS32 since r3
      (to allow for an exception base outside of KSeg0/KSeg1 when segmentation
      control is in use) and bits 63:30 on MIPS64 (which also implies the
      extension of CP0_EBase to 64 bits long).
      
      The presence of this feature will need to be known about for VZ support
      in order to correctly save and restore all the bits of the guest
      CP0_EBase register, so add CPU feature definition and probing for this
      feature.
      
      Probing the WG bit on MIPS64 can be a bit fiddly, since 64-bit COP0
      register access instructions were UNDEFINED for 32-bit registers prior
      to MIPS r6, and it'd be nice to be able to probe without clobbering the
      existing state, so there are 3 potential paths:
      
      - If we do a 32-bit read of CP0_EBase and the WG bit is already set, the
        register must be 64-bit.
      
      - On MIPS r6 we can do a 64-bit read-modify-write to set CP0_EBase.WG,
        since the upper bits will read 0 and be ignored on write if the
        register is 32-bit.
      
      - On pre-r6 cores, we do a 32-bit read-modify-write of CP0_EBase. This
        avoids the potentially UNDEFINED behaviour, but will clobber the upper
        32-bits of CP0_EBase if it isn't a simple sign extension (which also
        requires us to ensure BEV=1 or modifying the exception base would be
        UNDEFINED too). It is hopefully unlikely a bootloader would set up
        CP0_EBase to a 64-bit segment and leave WG=0.
      
      [ralf@linux-mips.org: Resolved merge conflict.]
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Tested-by: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13223/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Octeon: detect and fix byte swapped initramfs · 8f4703aa
      By Aurelien Jarno
      Octeon machines support running in little endian mode. U-Boot usually
      runs in big endian-mode. Therefore the initramfs is loaded in big endian
      mode, and the kernel later tries to access it in little endian mode.
      
      This patch fixes that by detecting byte swapped initramfs using either the
      CPIO header or the header from standard compression methods, and
      byte swaps it if needed. It first checks that the header doesn't match
      in the native endianness to avoid false detections. It uses the kernel
      decompress library so that we don't have to maintain the list of magics
      if some decompression methods are added to the kernel.
      Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
      Acked-by: David Daney <david.daney@cavium.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13219/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: BMIPS: BMIPS4380 and BMIPS5000 support RIXI · b4720809
      By Florian Fainelli
      Make BMIPS4380 and BMIPS5000 advertise support for RIXI through
      cpu_probe_broadcom(). bmips_cpu_setup() needs to be called shortly after that,
      during prom_init() in order to enable the proper Broadcom-specific register to
      turn on RIXI and the "rotr" instruction.
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Cc: john@phrozen.org
      Cc: cernekee@gmail.com
      Cc: jon.fraser@broadcom.com
      Cc: pgynther@google.com
      Cc: paul.burton@imgtec.com
      Cc: ddaney.cavm@gmail.com
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/12507/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Move RIXI exception enabling after vendor-specific cpu_probe · 2e274768
      By Florian Fainelli
      Some processors may not have the RIXI bit advertised in the Config3 register,
      not being a MIPS32R2 or R6 core, yet, they might be supporting it through a
      different way, which is overridden during vendor-specific cpu_probe().
      
      Move the RIXI exceptions enabling after the vendor-specific cpu_probe()
      function has had a chance to run and override the current CPU's options with
      MIPS_CPU_RIXI.
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Cc: john@phrozen.org
      Cc: cernekee@gmail.com
      Cc: jon.fraser@broadcom.com
      Cc: pgynther@google.com
      Cc: paul.burton@imgtec.com
      Cc: ddaney.cavm@gmail.com
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/12506/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Remove redundant asm/pgtable-bits.h inclusions · 253f0d4a
      By Paul Burton
      asm/pgtable-bits.h is included in 2 assembly files and thus has to
      ifdef around C code, however nothing defined by the header is used
      in either of the assembly files that include it.
      
      Remove the redundant inclusions such that asm/pgtable-bits.h doesn't
      need to #ifdef around C code, for cleanliness and in preparation for
      later patches which will add more C.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: Jonas Gorski <jogo@openwrt.org>
      Cc: Alex Smith <alex.smith@imgtec.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13114/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Separate XPA CPU feature into LPA and MVH · 12822570
      By James Hogan
      XPA (eXtended Physical Addressing) should be detected as a combination
      of two architectural features:
      - Large Physical Address (as per Config3.LPA). With XPA this will be set
        on MIPS32r5 cores, but it may also be set for MIPS64r2 cores too.
      - MTHC0/MFHC0 instructions (as per Config5.MVH). With XPA this will be
        set, but it may also be set in VZ guest context even when Config3.LPA
        in the guest context has been cleared by the hypervisor.
      
      As such, XPA is only usable if both bits are set. Update CPU features to
      separate these two features, with cpu_has_xpa requiring both to be set.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Maciej W. Rozycki <macro@imgtec.com>
      Cc: Joshua Kinard <kumba@gentoo.org>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13112/
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>