13 May 2016, 40 commits
-
Committed by James Hogan

Calculate the MIPS clockevent device's min_delta_ns dynamically, based on the time it takes to perform the mips_next_event() sequence. Virtualisation in particular makes the current fixed min_delta of 0x300 inappropriate under some circumstances, as the CP0_Count and CP0_Compare registers may be emulated by the hypervisor and the frequency may not correspond directly to the CPU frequency. The value actually used is twice the median of several 75th-percentile estimates, each taken from a batch of measurements of how long the mips_next_event() sequence takes. This fairly efficiently eliminates outliers due to unexpected hypervisor latency (which would need handling with retries when it occurs during normal operation anyway).

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13176/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
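As an illustration of the calibration scheme described above (twice the median of several 75th-percentile batches), here is a user-space C sketch. It is not the kernel's code: the measurement helper is a made-up stand-in for timing one mips_next_event() sequence.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for timing one mips_next_event() sequence. */
static uint64_t time_one_next_event(void)
{
	return 0x300 + (uint64_t)(rand() % 0x40);	/* fake samples */
}

static int cmp_u64(const void *a, const void *b)
{
	uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

	return (x > y) - (x < y);
}

/*
 * Take 'runs' batches of 'samples' measurements each, keep the 75th
 * percentile of every batch, then return twice the median of those
 * percentiles.  Outliers caused by hypervisor preemption land above the
 * 75th percentile and so are discarded cheaply.
 */
static uint64_t estimate_min_delta(unsigned runs, unsigned samples)
{
	uint64_t batch[samples], p75[runs];

	for (unsigned r = 0; r < runs; r++) {
		for (unsigned s = 0; s < samples; s++)
			batch[s] = time_one_next_event();
		qsort(batch, samples, sizeof(batch[0]), cmp_u64);
		p75[r] = batch[(samples * 3) / 4];
	}
	qsort(p75, runs, sizeof(p75[0]), cmp_u64);
	return 2 * p75[runs / 2];
}
```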
-
Committed by James Hogan

When estimating the clock frequency based on the RTC, take seconds into account in case the Update In Progress (UIP) bit wasn't seen. This can happen in virtual machines (which may get pre-empted by the hypervisor at inopportune times) with QEMU emulating the RTC (and in fact not setting the UIP bit for very long), especially on slow hosts such as FPGA systems and hardware emulators. Several seconds may then actually have elapsed before the UIP bit is seen, instead of just one, exaggerating the estimated timer frequency.

While updating the comments, they're also fixed to match the code in that the rising edge of the update flag is detected first, not the falling edge. The rising edge gives a more precise point at which to read the counters in a virtualised system than the falling edge, resulting in a more accurate frequency. It does however mean that we also have to wait for the falling edge before reading the RTC seconds register, otherwise it seems to be possible in slow hardware emulation to stray into the interval when the RTC time is undefined during the update (at least 244us after the rising edge of the update flag). This can result in both seconds values reading the same and the computed interval wrapping to 60 seconds, vastly underestimating the frequency.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13174/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
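A minimal model of the corrected arithmetic, with hypothetical helper and parameter names (the real code samples the CP0/GIC counters around the CMOS RTC update flag): divide the counter delta by the number of RTC seconds that actually elapsed, rather than assuming exactly one.

```c
#include <stdint.h>

/*
 * count_start/count_end: counter values sampled at two RTC update (rising)
 * edges; sec_start/sec_end: RTC seconds read after the corresponding
 * falling edges.  Dividing by the seconds that really elapsed avoids
 * exaggerating the frequency when more than one second went by.
 */
static uint32_t estimate_counter_freq(uint32_t count_start, uint32_t count_end,
				      unsigned int sec_start, unsigned int sec_end)
{
	unsigned int elapsed = (sec_end + 60 - sec_start) % 60;

	if (!elapsed)
		elapsed = 1;	/* degenerate case; the patch avoids it by
				 * waiting for the falling edge before
				 * reading the seconds register */

	return (count_end - count_start) / elapsed;
}
```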
-
Committed by James Hogan

The sampling of the GIC counter on Malta after observing a rising edge of the RTC update flag differs slightly between the first and second sample, with the first sample also calling gic_start_count(). The two samples should really be taken as similarly as possible to get the most accurate figure, so move the gic_start_count() call to before the rising edge is detected.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13173/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

Commit 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS") added support for the PR_SET_FP_MODE prctl, which allows a userland program to modify its FP mode at runtime. This is most notably required if dynamic linking leads to the FP mode requirement changing at runtime from that indicated in the initial executable's ELF header.

In order to avoid overhead in the general FP context restore code, it aimed to make threads in the process unable to enable the FPU during a mode switch, and to have the thread calling the prctl syscall wait for all other threads in the process to be context switched at least once. Once that happens we know that no thread in the process whose mode will be switched has live FP context, and it is safe to perform the mode switch. However, in the (rare) case of mode switches occurring in multithreaded programs this can lead to indeterminate delays for the thread invoking the prctl syscall, and the code monitoring for those context switches was woefully inadequate for all but the simplest cases.

Fix this by broadcasting an IPI if other CPUs may have live FP context for an affected thread, with a handler causing those CPUs to relinquish their FPU ownership. Threads will then be allowed to continue running, but will stall on the wait_on_atomic_t in enable_restore_fp_context if they attempt to use FP again whilst the mode switch is still in progress. The end result is less fragile poking at scheduler context switch counts and a more expedient completion of the mode switch.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: stable <stable@vger.kernel.org> # v4.0+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13145/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

Whilst a PR_SET_FP_MODE prctl is performed, decisions are made based upon whether the task is executing on the current CPU. This may change if we're preempted, so disable preemption to avoid such changes for the lifetime of the mode switch.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: stable <stable@vger.kernel.org> # v4.0+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13144/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
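The shape of the fix is the standard preemption-disable bracket. The sketch below is illustrative only; `__fp_mode_switch` is a hypothetical helper standing in for the body of the mode switch, not a function from the patch.

```c
#include <linux/preempt.h>
#include <linux/sched.h>

/* Hypothetical helper: the real work of the mode switch. */
int __fp_mode_switch(struct task_struct *task, unsigned int value);

/* Sketch of the pattern, not the literal kernel diff. */
static int fp_mode_switch(struct task_struct *task, unsigned int value)
{
	int ret;

	preempt_disable();
	/* Checks such as "is 'task' running on this CPU?" stay valid here,
	 * since the calling thread can no longer be migrated mid-switch. */
	ret = __fp_mode_switch(task, value);
	preempt_enable();

	return ret;
}
```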
-
Committed by Paul Burton

If an address error exception occurs for an LDXC1 or SDXC1 instruction, within the cop1x opcode space, allow it to be passed through to the FPU emulator rather than resulting in a SIGILL. This causes LDXC1 & SDXC1 to be handled in a manner consistent with the more common LDC1 & SDC1 instructions.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13143/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Maciej W. Rozycki

Correct the cases missed with commit 9b26616c ("MIPS: Respect the ISA level in FCSR handling") and prevent writes to read-only FCSR bits there. This in particular applies to FP context initialisation, where any IEEE 754-2008 bits preset by `mips_set_personality_nan' are cleared before the relevant ptrace(2) call takes effect, and to the PTRACE_POKEUSR request addressing FPC_CSR, where no masking of read-only FCSR bits is done.

Remove the FCSR clearing from FP context initialisation then, and unify PTRACE_POKEUSR/FPC_CSR and PTRACE_SETFPREGS handling by factoring out code from `ptrace_setfpregs' and calling it from both places.

This mostly matters to soft-float configurations, where the emulator can be switched this way to a mode which should not be accessible and cannot be set with the CTC1 instruction. With hard-float configurations any effect is transient anyway, as read-only bits will retain their values at the time the FP context is restored.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: stable@vger.kernel.org # v4.0+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13239/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Maciej W. Rozycki

Fix a floating-point context restoration regression introduced with commit 9b26616c ("MIPS: Respect the ISA level in FCSR handling") that causes a Floating Point exception, and consequently a kernel oops, with hard-float configurations when one or more FCSR Enable bits and their corresponding Cause bits are set at the same time via a ptrace(2) call.

To do so, reinstate the Cause bit masking originally introduced with commit b1442d39 ("MIPS: Prevent user from setting FCSR cause bits") to address this exact problem, and then inadvertently removed from the PTRACE_SETFPREGS request by the commit referred to above.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: stable@vger.kernel.org # v4.0+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13238/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
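The invariant both of these FCSR fixes protect is plain bit arithmetic: values supplied from user space must not plant Cause bits (architecturally FCSR bits 17:12), otherwise restoring the context with CTC1 can immediately raise an enabled FP exception in kernel mode. A hedged illustration, not the kernel's actual helper:

```c
#include <stdint.h>

#define FCSR_CAUSE_MASK	0x0003f000u	/* FCSR Cause field, bits 17:12 */

/*
 * Strip Cause bits from a user-supplied FCSR value (e.g. from ptrace)
 * before it is installed into the saved FP context.  Read-only bits are
 * handled in the same spirit: keep the existing value, ignore the write.
 */
static uint32_t sanitize_user_fcsr(uint32_t user_value)
{
	return user_value & ~FCSR_CAUSE_MASK;
}
```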
-
Committed by Purna Chandra Mandal

- Clock node definitions are now merged with the core .dtsi file.
- Only one rootclk node is now part of the DT.
- Clock clients are also updated based on the new binding document.

Signed-off-by: Purna Chandra Mandal <purna.mandal@microchip.com>
Signed-off-by: Joshua Henderson <joshua.henderson@microchip.com>
Cc: Michael Turquette <mturquette@baylibre.com>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Ian Campbell <ijc+devicetree@hellion.org.uk>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Sandeep Sheriker <sandeepsheriker.mallikarjun@microchip.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-clk@vger.kernel.org
Cc: devicetree@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13248/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Purna Chandra Mandal

This clock driver implements the PIC32-specific clock tree. Clock-tree entities can only be configured through the device tree file (OF).

Signed-off-by: Purna Chandra Mandal <purna.mandal@microchip.com>
Cc: Michael Turquette <mturquette@baylibre.com>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-clk@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13247/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Purna Chandra Mandal

Document the devicetree bindings for the clock driver found on Microchip PIC32 class devices.

Signed-off-by: Purna Chandra Mandal <purna.mandal@microchip.com>
Acked-by: Rob Herring <robh@kernel.org>
Acked-by: Michael Turquette <mturquette@baylibre.com>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Purna Chandra Mandal <purna.mandal@microchip.com>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Ian Campbell <ijc+devicetree@hellion.org.uk>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-clk@vger.kernel.org
Cc: devicetree@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13246/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Maciej W. Rozycki

Remove a duplicate o32 `elf_check_arch' implementation, move all macro variants to <asm/elf.h> and define them unconditionally under individual names, substituting alias `elf_check_arch' definitions in variant code.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13245/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Maciej W. Rozycki

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13244/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Maciej W. Rozycki

Move the `mips_elf_abiflags_v0' structure and the FP ABI flag macros outside #ifndef ELF_ARCH. These are public interfaces.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13243/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

The GuestCtl1 CP0 register can contain the GuestID used for root TLB operations, which affects TLB matching. The other TLB registers are already dumped to the log on a machine check exception due to multiple matching TLB entries, so also dump the value of the GuestCtl1 register if GuestIDs are supported.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13232/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

The GuestID for root TLB operations (GuestCtl1.RID) is modified by TLB reads, so it needs preserving by dump_tlb() like the ASID field of EntryHi. Also dump the GuestID of each entry, if it exists, alongside the ASID, as it forms an important part of the TLB entry when VZ guests are used.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13230/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

Add a few new cpu-features.h definitions for VZ sub-features, namely the existence of the CP0_GuestCtl0Ext, CP0_GuestCtl1, and CP0_GuestCtl2 registers, and support for GuestIDs to dealias TLB entries belonging to different guests.

Also add certain features present in the guest, with the naming scheme cpu_guest_has_*. These are added separately to the main options bitfield since they generally parallel similar features in the root context. A few of these (FPU, MSA, watchpoints, perf counters, CP0_[X]ContextConfig registers, MAAR registers, and probably others in future) can be dynamically configured in the guest context, for which the cpu_guest_has_dyn_* macros are added.

[ralf@linux-mips.org: Resolve merge conflict.]

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13231/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

Add guest CP0 accessors and guest TLB operations along the same lines as the existing macros and functions for the root CP0.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13229/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

Add various register definitions to <asm/mipsregs.h> for the coprocessor 0 registers in the VZ ASE, namely CP0_GuestCtl0, CP0_GuestCtl0Ext, CP0_GuestCtl1, CP0_GuestCtl2, CP0_GuestCtl3, and CP0_GTOffset.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13228/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

The decode_config4() function reads kscratch_mask from CP0_Config4.KScrExist using a hard-coded shift and mask. We already have a definition for the mask in mipsregs.h, so add a definition for the shift and make use of them.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13227/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
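The probe then reduces to a field extraction with named constants. A minimal illustration; the KScrExist field is assumed here to occupy Config4 bits 23:16 (one bit per KScratch select), so treat the constant values as assumptions rather than the patch's definitions.

```c
#include <stdint.h>

#define MIPS_CONF4_KSCREXIST		(0xffu << 16)	/* assumed: Config4[23:16] */
#define MIPS_CONF4_KSCREXIST_SHIFT	16

/* One bit per implemented KScratch register select. */
static unsigned int decode_kscratch_mask(uint32_t config4)
{
	return (config4 & MIPS_CONF4_KSCREXIST) >> MIPS_CONF4_KSCREXIST_SHIFT;
}
```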
-
Committed by James Hogan

Add a CPU feature for standard MIPS r2 performance counters, as determined by the Config1.PC bit. Both perf_events and oprofile probe this bit, so let's combine the probing and change both to use cpu_has_perf. This will also be used for VZ support in KVM to know whether performance counters exist which can be exposed to guests.

[ralf@linux-mips.org: resolve conflict.]

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Robert Richter <rric@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: oprofile-list@lists.sf.net
Patchwork: https://patchwork.linux-mips.org/patch/13226/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
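The probe itself is a single-bit test on Config1. A minimal sketch; Config1.PC is taken to be bit 4 per the architecture, but treat the constant and the function name as assumptions, not the patch's code.

```c
#include <stdbool.h>
#include <stdint.h>

#define MIPS_CONF1_PC	(1u << 4)	/* Config1.PC: perf counters present */

/* What cpu_has_perf ultimately reports for an r2-style core. */
static bool config1_has_perf_counters(uint32_t config1)
{
	return config1 & MIPS_CONF1_PC;
}
```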
-
Committed by James Hogan

The CP0_[X]ContextConfig registers are present if CP0_Config3.CTXTC or CP0_Config3.SM are set, and provide more control over which bits of CP0_[X]Context are set to the faulting virtual address on a TLB exception. KVM/VZ will need to be able to save and restore these registers in the guest context, so add the relevant definitions and probing of the ContextConfig feature in the root context first.

[ralf@linux-mips.org: resolve merge conflict.]

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13225/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

The optional CP0_BadInstr and CP0_BadInstrP registers are written with the encoding of the instruction that caused a synchronous exception to occur, and with the prior branch instruction if in a delay slot. These will be useful for instruction emulation in KVM, and especially for VZ support, where reading guest virtual memory is a bit more awkward.

Add CPU option numbers and cpu_has_* definitions to indicate the presence of each register, and add code to probe for them using bits in the CP0_Config3 register.

[ralf@linux-mips.org: resolve merge conflict.]

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13224/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

The CP0_EBase register may optionally have a write gate (WG) bit to allow the upper bits to be written, i.e. bits 31:30 on MIPS32 since r3 (to allow for an exception base outside of KSeg0/KSeg1 when segmentation control is in use) and bits 63:30 on MIPS64 (which also implies the extension of CP0_EBase to 64 bits). The presence of this feature needs to be known about for VZ support in order to correctly save and restore all the bits of the guest CP0_EBase register, so add a CPU feature definition and probing for it.

Probing the WG bit on MIPS64 can be a bit fiddly, since 64-bit COP0 register access instructions were UNDEFINED for 32-bit registers prior to MIPS r6, and it'd be nice to be able to probe without clobbering the existing state, so there are 3 potential paths:

- If we do a 32-bit read of CP0_EBase and the WG bit is already set, the register must be 64-bit.
- On MIPS r6 we can do a 64-bit read-modify-write to set CP0_EBase.WG, since the upper bits will read 0 and be ignored on write if the register is 32-bit.
- On pre-r6 cores, we do a 32-bit read-modify-write of CP0_EBase. This avoids the potentially UNDEFINED behaviour, but will clobber the upper 32 bits of CP0_EBase if it isn't a simple sign extension (which also requires us to ensure BEV=1, or modifying the exception base would be UNDEFINED too).

It is hopefully unlikely a bootloader would set up CP0_EBase to a 64-bit segment and leave WG=0.

[ralf@linux-mips.org: Resolved merge conflict.]

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Tested-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13223/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
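The three-way decision above can be sketched in C. Everything in the sketch is illustrative: the CP0 accessors are stand-ins backed by a simulated register, and the WG bit position (bit 11) is an assumption rather than something taken from the patch.

```c
#include <stdbool.h>
#include <stdint.h>

#define EBASE_WG	(1u << 11)	/* assumed write-gate bit position */

/* Simulated CP0_EBase so the sketch is self-contained. */
static uint64_t fake_ebase;
static uint32_t read_ebase_32(void)     { return (uint32_t)fake_ebase; }
static void write_ebase_32(uint32_t v)  { fake_ebase = (fake_ebase & ~0xffffffffull) | v; }
static uint64_t read_ebase_64(void)     { return fake_ebase; }
static void write_ebase_64(uint64_t v)  { fake_ebase = v; }

static bool probe_ebase_wg(bool is_mips64, bool is_r6)
{
	if (!is_mips64) {
		/* MIPS32: a plain 32-bit read-modify-write is always defined. */
		write_ebase_32(read_ebase_32() | EBASE_WG);
		return read_ebase_32() & EBASE_WG;
	}

	/* Path 1: WG already reads back set, so EBase must be 64-bit. */
	if (read_ebase_32() & EBASE_WG)
		return true;

	if (is_r6) {
		/* Path 2: r6 defines 64-bit accesses to 32-bit registers,
		 * so a 64-bit read-modify-write cannot corrupt anything. */
		write_ebase_64(read_ebase_64() | EBASE_WG);
	} else {
		/* Path 3: pre-r6, stay with 32-bit accesses to remain
		 * defined, accepting possible loss of a non-sign-extended
		 * upper half (and requiring BEV=1 while doing so). */
		write_ebase_32(read_ebase_32() | EBASE_WG);
	}

	return read_ebase_32() & EBASE_WG;
}
```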
-
Committed by James Hogan

Add definitions for the bits and fields in the CP0_EBase register, and use them from a few different places in arch/mips which hardcoded these values.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Jayachandran C <jchandra@broadcom.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13222/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Stephen Boyd

This flag is a no-op now (see commit 47b0eeb3 "clk: Deprecate CLK_IS_ROOT", 2016-02-02), so remove it.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Manuel Lauss <manuel.lauss@gmail.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13134/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Stephen Boyd

This flag is a no-op now (see commit 47b0eeb3 "clk: Deprecate CLK_IS_ROOT", 2016-02-02), so remove it.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Antony Pavlov <antonynpavlov@gmail.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13133/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Aurelien Jarno

Octeon machines support running in little-endian mode, but U-Boot usually runs in big-endian mode. The initramfs is therefore loaded in big-endian mode, and the kernel later tries to access it in little-endian mode. Fix this by detecting a byte-swapped initramfs using either the CPIO header or the header from one of the standard compression methods, and byte-swapping it if needed. Before treating the image as swapped, the code first checks that the header doesn't match in the native endianness, to avoid false detections. It uses the kernel decompress library so that we don't have to maintain the list of magics if further decompression methods are added to the kernel.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13219/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
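The check order matters: the native-endian magic is tested first so a correct image is never swapped. Below is a simplified user-space illustration using only the gzip magic; the real code defers to the kernel's decompress library for its list of magics, and the swap granularity is a property of how the image was stored, treated here as a simple byte pair purely for the sketch.

```c
#include <stddef.h>
#include <stdint.h>

enum rd_order { RD_NATIVE, RD_SWAPPED, RD_UNKNOWN };

/* Decide whether an image looks native, byte-swapped, or unrecognised. */
static enum rd_order detect_initrd_order(const uint8_t *buf, size_t len)
{
	if (len < 2)
		return RD_UNKNOWN;

	/* Check the native form first to avoid false swap detections. */
	if (buf[0] == 0x1f && buf[1] == 0x8b)	/* gzip magic */
		return RD_NATIVE;
	if (buf[0] == 0x8b && buf[1] == 0x1f)	/* same magic, bytes swapped */
		return RD_SWAPPED;

	return RD_UNKNOWN;
}
```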
-
Committed by Florian Fainelli

Make BMIPS4380 and BMIPS5000 advertise support for RIXI through cpu_probe_broadcom(). bmips_cpu_setup() needs to be called shortly after that, during prom_init(), in order to program the proper Broadcom-specific register to turn on RIXI and the "rotr" instruction.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: john@phrozen.org
Cc: cernekee@gmail.com
Cc: jon.fraser@broadcom.com
Cc: pgynther@google.com
Cc: paul.burton@imgtec.com
Cc: ddaney.cavm@gmail.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12507/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Florian Fainelli

Some processors may not have the RIXI bit advertised in the Config3 register, not being MIPS32R2 or R6 cores, yet they might support it through a different mechanism, which is overridden during the vendor-specific cpu_probe(). Move the enabling of RIXI exceptions to after the vendor-specific cpu_probe() function has had a chance to run and override the current CPU's options with MIPS_CPU_RIXI.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: john@phrozen.org
Cc: cernekee@gmail.com
Cc: jon.fraser@broadcom.com
Cc: pgynther@google.com
Cc: paul.burton@imgtec.com
Cc: ddaney.cavm@gmail.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12506/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Florian Fainelli

Some processors, like Broadcom's BMIPS4380 and BMIPS5000, support RIXI and the "rotr" instruction, which can be used to get a slightly more efficient page table layout. Introduce CONFIG_CPU_HAS_RIXI so that those cores can benefit from this feature, and update the conditional checks where relevant.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: john@phrozen.org
Cc: cernekee@gmail.com
Cc: jon.fraser@broadcom.com
Cc: pgynther@google.com
Cc: paul.burton@imgtec.com
Cc: ddaney.cavm@gmail.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12505/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

XPA kernels hardcode for the presence of RIXI - the PTE format and its handling presume RI and XI bits. Make this dependence explicit by panicking if we run on a system that violates it.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13125/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

Performing an MTHC0 instruction without XPA being present will trigger a reserved instruction exception, therefore conditionalise the use of this instruction when building TLB handlers (build_update_entries()), and in __update_tlb(). This allows an XPA kernel to run on non-XPA hardware without that instruction implemented, just like it can run on XPA-capable hardware with XPA not in use (with the noxpa kernel argument) or with XPA not configured in hardware.

[paul.burton@imgtec.com:
 - Rebase atop other TLB work.
 - Add "mm" to subject.
 - Handle the __kmap_pgprot case.]

Fixes: c5b36783 ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13124/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

We can simplify build_update_entries by unifying the code for the 36-bit physical addressing with MIPS32 case with the general case, using pte_off_ variables in all cases and handling the trivial _PAGE_GLOBAL_SHIFT == 0 case in build_convert_pte_to_entrylo. This leaves XPA as the only special case.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13123/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

The XPA case in iPTE_SW ORs software mode bits into the pte_low value (which is what actually ends up in the high 32 bits of EntryLo...). It does this presuming that only bits in the upper 16 bits of the 32-bit pte_low value will be set. Make this assumption explicit with a BUG_ON. A similar assumption is made for the hardware mode bits, which are OR'd in with a single ori instruction. Make that assumption explicit with a BUG_ON too.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13122/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

Rather than hardcode a scratch register for the XPA case in iPTE_SW, pass one through from the work registers allocated by the caller. This allows the XPA path to function correctly regardless of the work registers in use. Without this there are cases (where KScratch registers are unavailable) in which iPTE_SW will incorrectly clobber $1 despite it already being in use for the PTE or PTE pointer.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13121/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

For XPA kernels build_update_entries() uses $1 (at) as a scratch register, but doesn't arrange for it to be preserved, so it will always be clobbered by the TLB refill exception. Although this register normally has a very short lifetime that doesn't cross memory accesses, TLB refills due to instruction fetches (either on a page boundary or after preemption) could clobber live data, and it's easy to reproduce the clobber with a little bit of assembler code. Note that the use of a hardware page table walker will partly mask the problem, as the TLB refill handler will not always be invoked.

This is fixed by avoiding the use of the extra scratch register. The pte_high parts (going into the lower half of the EntryLo registers) are loaded and manipulated separately so as to keep the PTE pointer around for the other halves (instead of storing it in the scratch register), and the pte_low parts (going into the high half of the EntryLo registers) are masked with 0x00ffffff using an ext instruction (instead of loading 0x00ffffff into the scratch register and ANDing).

[paul.burton@imgtec.com:
 - Rebase atop other TLB work.
 - Use ext instead of an sll, srl sequence.
 - Use cpu_has_xpa instead of #ifdefs.
 - Modify commit subject to include "mm".]

Fixes: c5b36783 ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13120/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
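In C terms, the ext-based masking is just a constant AND that needs no second register; a sketch of the equivalence (illustrative only, function names are made up):

```c
#include <stdint.h>

/* What "ext reg, reg, 0, 24" computes: keep the low 24 bits in place. */
static uint32_t mask_pte_low(uint32_t pte_low)
{
	return pte_low & 0x00ffffff;
}

/* The variant the patch removes: materialise the mask in a scratch
 * register first, then AND -- same result, one more live register. */
static uint32_t mask_pte_low_with_scratch(uint32_t pte_low)
{
	uint32_t scratch = 0x00ffffff;

	return pte_low & scratch;
}
```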
-
Committed by Paul Burton

There are two distinct cases in which a kernel for a MIPS32 CPU (CONFIG_CPU_MIPS32=y) may use 64-bit physical addresses (CONFIG_PHYS_ADDR_T_64BIT=y):

- 36-bit physical addressing as used by RMI Alchemy & Netlogic XLP/XLR CPUs.
- MIPS32r5 eXtended Physical Addressing (XPA).

These two cases are distinct in that they require different behaviour from the kernel - the EntryLo registers have different formats. Until Linux v4.1 we only supported the first case, with code conditional upon the two aforementioned Kconfig variables being set. Commit c5b36783 ("MIPS: Add support for XPA.") added support for the second case, but did so by modifying the code that existed for the first case rather than treating the two cases as distinct. Since the EntryLo registers have different formats this breaks the 36-bit Alchemy/XLP/XLR case. Fix this by splitting the two cases, with the XPA case now being conditional upon CONFIG_XPA and the non-XPA case matching the code as it existed prior to commit c5b36783 ("MIPS: Add support for XPA.").

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reported-by: Manuel Lauss <manuel.lauss@gmail.com>
Tested-by: Manuel Lauss <manuel.lauss@gmail.com>
Fixes: c5b36783 ("MIPS: Add support for XPA.")
Cc: James Hogan <james.hogan@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: stable@vger.kernel.org # v4.1+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13119/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

The same definition of pte_page is duplicated for the MIPS32 PHYS_ADDR_T_64BIT case and the generic case. Unify them by moving a single definition outside of the preprocessor conditionals.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13117/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

Ever since support for RI/XI was implemented by commit 6dd9344c ("MIPS: Implement Read Inhibit/eXecute Inhibit") we've had a mixture of _PAGE_READ and _PAGE_NO_READ bits. Rather than keep both around, switch away from using _PAGE_READ to determine page presence and instead invert the use to _PAGE_NO_READ. Wherever we formerly had no definition for _PAGE_NO_READ, change what was _PAGE_READ to _PAGE_NO_READ. The end result is that we consistently use _PAGE_NO_READ to determine whether a page is readable, regardless of whether RI/XI is implemented.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13116/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
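The inversion amounts to testing for the inhibit bit instead of a positive permission bit. A sketch of the two conventions; the bit positions and function names below are placeholders, not the kernel's macros.

```c
#include <stdbool.h>

/* Old convention: a software _PAGE_READ bit grants readability. */
static bool pte_readable_old(unsigned long pteval)
{
	const unsigned long _PAGE_READ = 1ul << 5;	/* placeholder bit */

	return pteval & _PAGE_READ;
}

/* New convention: readability is the absence of _PAGE_NO_READ, which
 * matches the sense of the hardware RI bit when RI/XI is implemented. */
static bool pte_readable_new(unsigned long pteval)
{
	const unsigned long _PAGE_NO_READ = 1ul << 5;	/* placeholder bit */

	return !(pteval & _PAGE_NO_READ);
}
```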
-