- 24 November 2014, 9 commits
-
-
Committed by Andrew Bresticker

Create a legacy IRQ domain for the 16 i8259 interrupts.

Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Tested-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Jeffrey Deans <jeffrey.deans@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: John Crispin <blogic@openwrt.org>
Cc: David Daney <ddaney.cavm@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7804/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
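A minimal sketch of registering such a legacy domain through the generic irqdomain API; the device-node pointer and the i8259A_ops name are illustrative stand-ins, not the patch's actual identifiers:

    static struct irq_domain *domain;

    /* Map the 16 i8259 lines 1:1 onto the pre-allocated Linux IRQ
     * range starting at I8259A_IRQ_BASE (hwirq 0 upwards). */
    domain = irq_domain_add_legacy(node, 16, I8259A_IRQ_BASE, 0,
                                   &i8259A_ops, NULL);
    if (!domain)
        panic("Failed to add i8259 IRQ domain");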
-
Committed by Andrew Bresticker

When mapping an interrupt in the CPU IRQ domain, set the vint handler for that interrupt if the CPU uses vectored interrupt handling.

Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Tested-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Jeffrey Deans <jeffrey.deans@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: John Crispin <blogic@openwrt.org>
Cc: David Daney <ddaney.cavm@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7802/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
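A hedged sketch of what such a map callback can look like; the chip/handler pairing shown is the common one for CPU interrupts, not necessarily the patch's exact choice:

    static int mips_cpu_intc_map(struct irq_domain *d, unsigned int irq,
                                 irq_hw_number_t hw)
    {
        /* On cores with vectored interrupts, install the vector
         * handler for this hwirq at map time. */
        if (cpu_has_vint)
            set_vi_handler(hw, plat_irq_dispatch);

        irq_set_chip_and_handler(irq, &mips_cpu_irq_controller,
                                 handle_percpu_irq);
        return 0;
    }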
-
Committed by Andrew Bresticker

For platforms which boot with device tree or have correctly chained all external interrupt controllers, a generic plat_irq_dispatch() can be used. Implement a plat_irq_dispatch() which simply handles all the pending interrupts as reported by C0_Cause.

Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Tested-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Jeffrey Deans <jeffrey.deans@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: John Crispin <blogic@openwrt.org>
Cc: David Daney <ddaney.cavm@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7801/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
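Such a dispatcher boils down to masking Cause against Status and handing each pending line to do_IRQ(); a sketch, assuming the standard CP0 accessors:

    asmlinkage void plat_irq_dispatch(void)
    {
        unsigned long pending = read_c0_cause() & read_c0_status() & ST0_IM;
        int irq;

        if (!pending) {
            spurious_interrupt();
            return;
        }

        /* the IP bits sit at 8..15 in both Cause and Status */
        pending >>= CAUSEB_IP;
        while (pending) {
            irq = fls(pending) - 1;   /* service the highest line first */
            do_IRQ(MIPS_CPU_IRQ_BASE + irq);
            pending &= ~BIT(irq);
        }
    }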
-
Committed by Andrew Bresticker

mips_cpu_intc_init() is used for DT-based initialization of the CPU IRQ domain. Give it a more appropriate name.

Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Tested-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Jeffrey Deans <jeffrey.deans@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: John Crispin <blogic@openwrt.org>
Cc: David Daney <ddaney.cavm@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7800/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Andrew Bresticker

Use an IRQ domain for the 8 CPU IRQs in both the DT and non-DT cases.

Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Tested-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Jeffrey Deans <jeffrey.deans@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: John Crispin <blogic@openwrt.org>
Cc: David Daney <ddaney.cavm@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7799/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Markos Chandras

Add a new 'noftlb' kernel command-line option to disable the FTLB. Since the kernel command line is not available when probing and enabling the CPU features in cpu_probe(), we let the kernel configure the FTLB during the Config4 decode operation and disable the FTLB later on, once the command line has become available to us. This should have no negative effects since the FTLB isn't used so early in the boot process.

The FTLB increases the effective TLB size, leading to fewer TLB misses. However, it is sometimes useful to be able to disable it when debugging memory-related core features or other hardware components.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/7586/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
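A sketch of the command-line wiring; set_ftlb_enable() stands in for the core-specific helper that flips the FTLB enable bit, and the tlbsize bookkeeping mirrors the description above:

    static int __init ftlb_disable(char *s)
    {
        /* cpu_probe() already enabled the FTLB; undo that now
         * that the command line has been parsed. */
        set_ftlb_enable(&current_cpu_data, 0);

        /* report only the VTLB entries from now on */
        current_cpu_data.tlbsize = current_cpu_data.tlbsizevtlb;
        return 1;
    }
    __setup("noftlb", ftlb_disable);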
-
Committed by Joe Perches

Use the much more common pr_warn instead of pr_warning, with the goal of eventually removing pr_warning. Other miscellanea:

 o Coalesce formats
 o Realign arguments

Signed-off-by: Joe Perches <joe@perches.com>
Cc: linux-mips <linux-mips@linux-mips.org>
Cc: LKML <linux-kernel@vger.kernel.org>
Patchwork: https://patchwork.linux-mips.org/patch/7935/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Eunbong Song

Currently arch_trigger_all_cpu_backtrace() is defined only on x86 and sparc, which have an NMI. But in case of a softlockup it is still possible to dump the backtraces of all CPUs, and this can be helpful for debugging. For example, if the system has 2 CPUs:

    CPU 0                      CPU 1
    acquire read_lock()
                               try to do write_lock()
    ...
    missing read_unlock()

In this case a softlockup will occur because CPU 0 does not call read_unlock(), and dump_stack() prints a backtrace only for CPU 0. Printing CPU 1's backtrace as well would be very helpful.

[ralf@linux-mips.org: Fixed whitespace and formatting issues.]

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/8200/
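On an architecture without an NMI the implementation can fall back to an SMP cross-call; a sketch of that shape (the arch_dump_stack helper name is illustrative):

    static void arch_dump_stack(void *info)
    {
        struct pt_regs *regs = get_irq_regs();

        if (regs)
            show_regs(regs);
        dump_stack();
    }

    void arch_trigger_all_cpu_backtrace(bool include_self)
    {
        /* IPI the other CPUs and let each dump its own stack */
        smp_call_function(arch_dump_stack, NULL, 1);
    }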
-
Committed by Ralf Baechle

Based on the spatch

    @@
    expression e;
    @@
    - return (e);
    + return e;

with heavy hand editing, because some of the changes are whitespace- or indentation-only, or would result in excessively long lines.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 20 November 2014, 4 commits
-
-
Committed by Maciej W. Rozycki

Implement the microMIPS encoding of the J instruction for the purpose of the static-keys feature, fixing a crash early on in bootstrap as the kernel is unhappy seeing the ISA bit set in jump-table entries. Make sure the ISA bit correctly reflects the instruction encoding chosen for the kernel: 0 for the standard MIPS and 1 for the microMIPS encoding. Also make sure the instruction to patch is a 32-bit NOP in the microMIPS mode, as by default the 16-bit short encoding is assumed.

Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8516/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Maciej W. Rozycki

Correct the check for the span of the 256MB segment addressable by the J instruction according to this instruction's semantics. The calculation of the jump target is applied to the address of the delay-slot instruction that immediately follows. Adjust the check accordingly by adding 4 to `e->code', which holds the address of the J instruction itself.

Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8515/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
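The corrected check then compares 256MB segments against the delay-slot address; a minimal sketch, with J_RANGE_MASK covering the low 28 bits as in the jump-label code:

    /* The J target is formed relative to the delay-slot
     * instruction, so the segment check must use e->code + 4. */
    BUG_ON((e->target & ~J_RANGE_MASK) !=
           ((e->code + 4) & ~J_RANGE_MASK));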
-
Committed by Huacai Chen

According to the CPU manual, Loongson-3 is MIPS64R2 compatible, but during tests we found that its EI/DI instructions have problems. So just set the ISA level to MIPS64R1.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/8320/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Huacai Chen

All Loongson-2/3 processors support _CACHE_UNCACHED_ACCELERATED, not only Loongson-3A.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/8319/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 07 November 2014, 1 commit
-
-
Committed by Manuel Lauss

Starting with version 2.24.51.20140728, MIPS binutils complain loudly about mixing soft-float and hard-float object files, leading to this build failure since GCC is invoked with "-msoft-float" on MIPS:

    {standard input}: Warning: .gnu_attribute 4,3 requires `softfloat'
      LD      arch/mips/alchemy/common/built-in.o
    mipsel-softfloat-linux-gnu-ld: Warning: arch/mips/alchemy/common/built-in.o
    uses -msoft-float (set by arch/mips/alchemy/common/prom.o),
    arch/mips/alchemy/common/sleeper.o uses -mhard-float

To fix this, we detect whether GAS is new enough to support the "-msoft-float" command option, and if it is, we let GCC pass it to GAS; but then we also need to sprinkle the files which make use of floating-point registers with the necessary ".set hardfloat" directives.

Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
Cc: Linux-MIPS <linux-mips@linux-mips.org>
Cc: Matthew Fortune <Matthew.Fortune@imgtec.com>
Cc: Markos Chandras <Markos.Chandras@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Patchwork: https://patchwork.linux-mips.org/patch/8355/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 29 October 2014, 1 commit
-
-
Even if CMA is disabled, the for_each_memblock macro expands to run reserve_bootmem once, so reserve_bootmem attempts to reserve location 0 with size 0. Add a check to avoid that. The issue was highlighted during testing with EVA enabled: reserve_bootmem used to exit gracefully when asked to reserve a 0-size location at 0 without EVA, but with EVA enabled the macros would point to different addresses and the code would trigger a BUG.

Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Tested-by: Markos Chandras <markos.chandras@imgtec.com>
Tested-by: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/8231/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
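The guard amounts to skipping empty regions in the memblock walk; a sketch in the bootmem-handover context described above:

    /* Hand memblock reservations over to bootmem, but skip the
     * zero-size entry the iterator yields when nothing was
     * reserved (e.g. CMA disabled). */
    for_each_memblock(reserved, reg)
        if (reg->size != 0)
            reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);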
-
- 27 October 2014, 1 commit
-
-
Committed by Ralf Baechle

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 24 October 2014, 1 commit
-
-
Committed by Markos Chandras

The __pastwait symbol was only used by the address_is_in_r4k_wait_irqoff function, but this is no longer used since the SMTC removal in commit b633648c ('MIPS: MT: Remove SMTC support'). That symbol also led to build failures under certain random configurations, due to the way the compiler compiled the r4k_wait_irqoff function. If that function was called multiple times, the __pastwait symbol was redefined, breaking the build like this:

      CHK     include/generated/compile.h
      CC      arch/mips/kernel/idle.o
    {standard input}: Assembler messages:
    {standard input}:527: Error: symbol `__pastwait' is already defined

Link: http://www.linux-mips.org/cgi-bin/mesg.cgi?a=linux-mips&i=1244879922.24479.30.camel%40falcon
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/7791/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 26 September 2014, 1 commit
-
-
Committed by Markos Chandras

Every mcount() call in the MIPS 32-bit kernel is done as follows:

    [...]
    move    at, ra
    jal     _mcount
    addiu   sp, sp, -8
    [...]

but upon returning from the mcount() function, the stack pointer is not adjusted properly. This is explained in detail in 58b69401 ("MIPS: Function tracer: Fix broken function tracing"). Commit ad8c3969 ("MIPS: Unbreak function tracer for 64-bit kernel.") fixed the stack manipulation for 64-bit but did not fix it completely for MIPS32.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7792/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 24 September 2014, 1 commit
-
-
Committed by Eric Paris

We have a function through which the arch can be queried: syscall_get_arch(). So rather than have every single piece of arch-specific code use and/or duplicate syscall_get_arch(), just have the audit code call syscall_get_arch() itself.

Based-on-patch-by: Richard Briggs <rgb@redhat.com>
Signed-off-by: Eric Paris <eparis@redhat.com>
Cc: linux-alpha@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-ia64@vger.kernel.org
Cc: microblaze-uclinux@itee.uq.edu.au
Cc: linux-mips@linux-mips.org
Cc: linux@lists.openrisc.net
Cc: linux-parisc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: user-mode-linux-devel@lists.sourceforge.net
Cc: linux-xtensa@linux-xtensa.org
Cc: x86@kernel.org
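Illustratively, the audit entry hook can then derive the arch itself instead of taking it from every caller; a rough sketch, not the exact upstream diff:

    void __audit_syscall_entry(int major, unsigned long a1,
                               unsigned long a2, unsigned long a3,
                               unsigned long a4)
    {
        struct audit_context *ctx = current->audit_context;

        if (!ctx)
            return;

        ctx->arch = syscall_get_arch();   /* derived here, not passed in */
        ctx->major = major;
        ctx->argv[0] = a1;
        ctx->argv[1] = a2;
        ctx->argv[2] = a3;
        ctx->argv[3] = a4;
    }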
-
- 22 September 2014, 2 commits
-
-
Committed by Markos Chandras

Different cores use different CCA values to achieve write-combining memory writes. For cores that do not support write-combining we set the default value to CCA 2 (uncached, non-coherent), which is the default value as set by the kernel.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7402/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
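A sketch of recording the per-core choice at probe time; the writecombine field follows the description above, and Loongson-3 serves only as an example of a core with an accelerated CCA:

    /* default: CCA 2, uncached and non-coherent */
    c->writecombine = _CACHE_UNCACHED;

    switch (current_cpu_type()) {
    case CPU_LOONGSON3:
        /* cores with an accelerated uncached CCA use it instead */
        c->writecombine = _CACHE_UNCACHED_ACCELERATED;
        break;
    }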
-
Adds CMA support to the MIPS architecture. CMA uses memblock; however, MIPS uses bootmem, so bootmem is informed about any regions reserved by memblock, and the DMA API is modified to use CMA-reserved memory regions when available. Tested using cma_test, a simple driver that assigns blocks of memory from CMA-reserved sections.

Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Cc: catalin.marinas@arm.com
Cc: will.deacon@arm.com
Cc: tglx@linutronix.de
Cc: mingo@redhat.com
Cc: hpa@zytor.com
Cc: arnd@arndb.de
Cc: gregkh@linuxfoundation.org
Cc: m.szyprowski@samsung.com
Cc: x86@kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-arch@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7360/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
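On the allocation side the flow described here reduces to trying the CMA area before the buddy allocator; a hedged sketch using the generic helper of that era:

    size_t count = size >> PAGE_SHIFT;
    struct page *page;

    /* prefer the CMA region when one was reserved at boot */
    page = dma_alloc_from_contiguous(dev, count, get_order(size));
    if (!page)
        page = alloc_pages(gfp, get_order(size));
    /* error handling and cache-attribute setup omitted */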
-
- 04 September 2014, 1 commit
-
-
Committed by Andy Lutomirski

The secure_computing function took a syscall-number parameter, but it only paid any attention to that parameter if seccomp mode 1 was enabled. Rather than coming up with a kludge to get the parameter to work in mode 2, just remove the parameter.

To avoid churn in arches that don't have seccomp filters (and may not even support syscall_get_nr right now), this leaves the parameter in secure_computing_strict, which is now a real function.

For ARM this is a bit ugly due to the fact that ARM conditionally supports seccomp filters. Fixing that would probably take only a couple of lines of code, but it should be coordinated with the audit maintainers.

This will be a slight slowdown on some arches. The right fix is to pass in all of seccomp_data instead of trying to make just the syscall-nr part fast. This is a prerequisite for making two-phase seccomp work cleanly.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: x86@kernel.org
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Kees Cook <keescook@chromium.org>
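The caller-side change is then mechanical; a before/after sketch for an arch syscall-entry path (generic, not MIPS-specific):

    /* before: each arch dug out the syscall number itself */
    if (secure_computing(syscall_get_nr(current, regs)) == -1)
        return -1;

    /* after: seccomp fetches the number (and registers) internally */
    if (secure_computing() == -1)
        return -1;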
-
- 27 August 2014, 1 commit
-
-
Committed by Christoph Lameter

__get_cpu_var() is used for multiple purposes in the kernel source. One of them is address calculation via the form &__get_cpu_var(x). This calculates the address of the instance of the percpu variable of the current processor based on an offset. Other use cases are storing and retrieving data from the current processor's percpu area. __get_cpu_var() can be used as an lvalue when writing data or on the right side of an assignment.

__get_cpu_var() is defined as:

    #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))

__get_cpu_var() always only does an address determination. However, store and retrieve operations could use a segment prefix (or a global register on other platforms) to avoid the address calculation.

this_cpu_write() and this_cpu_read() can directly take an offset into a percpu area and use optimized assembly code to read and write per-cpu variables. This patch converts __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that use the offset. Thereby address calculations are avoided and fewer registers are used when code is generated.

At the end of the patch set all uses of __get_cpu_var have been removed, so the macro is removed too. The patch set includes passes over all arches as well. Once these operations are used throughout, specialized macros can be defined in non-x86 arches as well in order to optimize per-cpu access, e.g. by using a global register that may be set to the per-cpu base.

Transformations done to __get_cpu_var():

1. Determine the address of the percpu instance of the current processor.

    DEFINE_PER_CPU(int, y);
    int *x = &__get_cpu_var(y);

   Converts to

    int *x = this_cpu_ptr(&y);

2. Same as #1 but this time an array structure is involved.

    DEFINE_PER_CPU(int, y[20]);
    int *x = __get_cpu_var(y);

   Converts to

    int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processor's instance of a per-cpu variable.

    DEFINE_PER_CPU(int, y);
    int x = __get_cpu_var(y);

   Converts to

    int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct.

    DEFINE_PER_CPU(struct mystruct, y);
    struct mystruct x = __get_cpu_var(y);

   Converts to

    memcpy(&x, this_cpu_ptr(&y), sizeof(x));

5. Assignment to a per-cpu variable.

    DEFINE_PER_CPU(int, y);
    __get_cpu_var(y) = x;

   Converts to

    __this_cpu_write(y, x);

6. Increment/decrement etc. of a per-cpu variable.

    DEFINE_PER_CPU(int, y);
    __get_cpu_var(y)++;

   Converts to

    __this_cpu_inc(y);

Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 26 August 2014, 4 commits
-
-
Committed by Ralf Baechle

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Markos Chandras

The CPS code does several memory loads when configuring the VPEs from secondary cores, so the segmentation control registers must be initialized in time, otherwise the kernel will crash with strange TLB exceptions.

Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: http://patchwork.linux-mips.org/patch/7424/
Signed-off-by: James Hogan <james.hogan@imgtec.com>
-
Committed by Markos Chandras

Commit 4c21b8fd ("MIPS: seccomp: Handle indirect system calls (o32)") added indirect syscall detection for O32 processes running on MIPS64, but it did not work as expected. The reason is that the scall64-o32 implementation differs from scall32-o32. In the former, the v0 (syscall number) register contains the absolute syscall number (4000 + X), whereas in the latter it contains the relative syscall number (X). Fix the code to avoid doing an extra addition, and load the v0 register directly into the first argument for syscall_trace_enter. Moreover, set the .reorder assembler option in order to have better control over this part of the assembly code.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: http://patchwork.linux-mips.org/patch/7481/
Cc: <stable@vger.kernel.org> # v3.15+
Signed-off-by: James Hogan <james.hogan@imgtec.com>
-
Committed by Yang Wei

In an RT kernel I ran into the following calltrace, so PMU interrupts cannot be threaded:

    in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
    INFO: lockdep is turned off.
    Call Trace:
    [<ffffffff8088595c>] dump_stack+0x1c/0x50
    [<ffffffff801a958c>] __might_sleep+0x13c/0x148
    [<ffffffff80891c54>] rt_spin_lock+0x3c/0xb0
    [<ffffffff801ad29c>] __wake_up+0x3c/0x80
    [<ffffffff80243ba4>] perf_event_wakeup+0x8c/0xf8
    [<ffffffff80243c50>] perf_pending_event+0x40/0x78
    [<ffffffff8023d88c>] irq_work_run+0x74/0xc0
    [<ffffffff80152640>] mipsxx_pmu_handle_shared_irq+0x110/0x228
    [<ffffffff8015276c>] mipsxx_pmu_handle_irq+0x14/0x30
    [<ffffffff801ffda4>] handle_irq_event_percpu+0xbc/0x470
    [<ffffffff80204478>] handle_percpu_irq+0x98/0xc8
    [<ffffffff801ff284>] generic_handle_irq+0x4c/0x68
    [<ffffffff8089748c>] do_IRQ+0x2c/0x48
    [<ffffffff80105864>] plat_irq_dispatch+0x64/0xd0

[ralf@linux-mips.org: I don't see why based on this register dump the handler should be marked IRQF_NO_THREAD - but the handler is manipulating per-CPU resources so we don't want it to be rescheduled to another CPU.]

Signed-off-by: Yang Wei <Wei.Yang@windriver.com>
Cc: a.p.zijlstra@chello.nl
Cc: paulus@samba.org
Cc: mingo@redhat.com
Cc: acme@kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7506/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 25 August 2014, 1 commit
-
-
Committed by Yang Wei

Since there is no indirection page for the crash type, the value of the head field of the kimage structure is not the address of an indirection page but IND_DONE, so we have to set the kexec_indirection_page variable to the address of the head field of the image structure.

[ralf@linux-mips.org: Don't add pointless empty line, fix trailing whitespace damage.]

Signed-off-by: Yang Wei <Wei.Yang@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7499/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
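A sketch of the fix as described, mirroring the setup of the kexec globals (IND_DONE and image->head are the generic kexec names):

    if (image->type == KEXEC_TYPE_DEFAULT)
        kexec_indirection_page =
            (unsigned long)phys_to_virt(image->head & PAGE_MASK);
    else
        /* crash images have no indirection chain: image->head
         * holds IND_DONE, so point at the field itself */
        kexec_indirection_page = (unsigned long)&image->head;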
-
- 20 August 2014, 2 commits
-
-
Committed by Markos Chandras

The CPS code does several memory loads when configuring the VPEs from secondary cores, so the segmentation control registers must be initialized in time, otherwise the kernel will crash with strange TLB exceptions.

Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: http://patchwork.linux-mips.org/patch/7424/
Signed-off-by: James Hogan <james.hogan@imgtec.com>
-
Committed by Markos Chandras

Commit 4c21b8fd ("MIPS: seccomp: Handle indirect system calls (o32)") added indirect syscall detection for O32 processes running on MIPS64, but it did not work as expected. The reason is that the scall64-o32 implementation differs from scall32-o32. In the former, the v0 (syscall number) register contains the absolute syscall number (4000 + X), whereas in the latter it contains the relative syscall number (X). Fix the code to avoid doing an extra addition, and load the v0 register directly into the first argument for syscall_trace_enter. Moreover, set the .reorder assembler option in order to have better control over this part of the assembly code.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: http://patchwork.linux-mips.org/patch/7481/
Cc: <stable@vger.kernel.org> # v3.15+
Signed-off-by: James Hogan <james.hogan@imgtec.com>
-
- 19 August 2014, 1 commit
-
-
Committed by Yang Wei

In an RT kernel I ran into the following calltrace, so PMU interrupts cannot be threaded:

    in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
    INFO: lockdep is turned off.
    Call Trace:
    [<ffffffff8088595c>] dump_stack+0x1c/0x50
    [<ffffffff801a958c>] __might_sleep+0x13c/0x148
    [<ffffffff80891c54>] rt_spin_lock+0x3c/0xb0
    [<ffffffff801ad29c>] __wake_up+0x3c/0x80
    [<ffffffff80243ba4>] perf_event_wakeup+0x8c/0xf8
    [<ffffffff80243c50>] perf_pending_event+0x40/0x78
    [<ffffffff8023d88c>] irq_work_run+0x74/0xc0
    [<ffffffff80152640>] mipsxx_pmu_handle_shared_irq+0x110/0x228
    [<ffffffff8015276c>] mipsxx_pmu_handle_irq+0x14/0x30
    [<ffffffff801ffda4>] handle_irq_event_percpu+0xbc/0x470
    [<ffffffff80204478>] handle_percpu_irq+0x98/0xc8
    [<ffffffff801ff284>] generic_handle_irq+0x4c/0x68
    [<ffffffff8089748c>] do_IRQ+0x2c/0x48
    [<ffffffff80105864>] plat_irq_dispatch+0x64/0xd0

[ralf@linux-mips.org: I don't see why based on this register dump the handler should be marked IRQF_NO_THREAD - but the handler is manipulating per-CPU resources so we don't want it to be rescheduled to another CPU.]

Signed-off-by: Yang Wei <Wei.Yang@windriver.com>
Cc: a.p.zijlstra@chello.nl
Cc: paulus@samba.org
Cc: mingo@redhat.com
Cc: acme@kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7506/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 06 August 2014, 2 commits
-
-
Committed by Richard Weinberger

Use sigsp() instead of the open-coded variant.

Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Richard Weinberger

Use the more generic functions get_signal() and signal_setup_done() for signal delivery.

Signed-off-by: Richard Weinberger <richard@nod.at>
-
- 02 August 2014, 7 commits
-
-
Committed by Paul Burton

Detect the presence of MAAR using the MRP bit in Config5, and record that presence using a CPU option bit. A cpu_has_maar macro will then allow code to conditionalise upon the presence of MAARs.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7330/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
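A minimal sketch of the probe and the feature macro; Config5.MRP is the architecturally defined bit, and the option flag follows the usual cpu_probe() pattern:

    /* in decode_config5(): record MAAR presence as a CPU option */
    if (config5 & MIPS_CONF5_MRP)
        c->options |= MIPS_CPU_MAAR;

    /* in cpu-features.h: let code test for it */
    #ifndef cpu_has_maar
    #define cpu_has_maar (cpu_data[0].options & MIPS_CPU_MAAR)
    #endif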
-
Committed by Paul Burton

The TIF_MSA_CTX_LIVE flag (indicating that a task has MSA context which needs to be preserved) was being cleared in start_thread, but the TIF_USEDMSA flag (indicating that a task has used MSA in this timeslice) was not. In copy_thread neither flag was cleared, but both need to be. Without clearing these flags the kernel will proceed to attempt to save MSA context when the task is context switched out, and if the task had not used MSA in the meantime then it will fail because MSA or the FPU are disabled. The end result is typically:

    do_cpu invoked from kernel context![#1]:
    CPU: 0 PID: 99 Comm: sh Not tainted 3.16.0-rc4-00025-g6dc9476-dirty #88
    task: 8f23dc60 ti: 8f1d8000 task.ti: 8f1d8000
    ...
    Call Trace:
    [<8010edbc>] resume+0x5c/0x280
    [<80481e0c>] __schedule+0x370/0x800
    [<80104838>] work_resched+0x8/0x2c

Fix by consistently clearing both flags in both functions.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7309/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
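The fix boils down to clearing both flags wherever a fresh context begins; a sketch for the copy_thread() side (start_thread() clears the same pair on current):

    /* the child starts with no live MSA context and no MSA use
     * in this timeslice, regardless of the parent's state */
    clear_tsk_thread_flag(p, TIF_USEDMSA);
    clear_tsk_thread_flag(p, TIF_MSA_CTX_LIVE);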
-
Committed by Paul Burton

Preemption must be disabled throughout the process of enabling the FPU, enabling MSA and initialising the vector registers. Without doing so it is possible to lose the FPU or MSA whilst initialising them, causing that initialisation to fail.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7307/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
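Sketched shape of the change, with own_fpu_inatomic()/enable_msa() as in the MIPS FPU/MSA headers and init_msa_upper() standing in for the vector-register initialisation:

    int err;

    /* no preemption from taking FPU ownership through vector
     * init, or the state being built up can be lost mid-way */
    preempt_disable();
    err = own_fpu_inatomic(1);
    if (!err) {
        enable_msa();
        init_msa_upper();   /* hypothetical init helper */
    }
    preempt_enable();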
-
Committed by Paul Burton

The kernel relies upon MSA being disabled when a task begins running, so that it can initialise or restore context in response to the resulting MSA-disabled exception. Previously the state of MSA following boot was left as it was before the kernel ran, where MSA could potentially have been enabled. Explicitly disable it during boot to prevent any problems. As a nice side effect the code reads a little better too.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7306/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

If a task does not execute scalar FP instructions prior to using MSA, then the flags indicating that the task has live MSA context were not being set. The upper 64 bits of each vector register would then be lost upon the task's first context switch after using MSA.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7500/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

When a task first makes use of MSA we need to ensure that the upper 64 bits of the vector registers are set to some value such that no information can be leaked to it from the previous task to use MSA context on the CPU. The architecture formerly specified that these bits would be cleared to 0 when a scalar FP instruction wrote to the aliased FP registers, which would have implicitly handled this as the kernel restored scalar FP context. However, more recent versions of the specification now state that the value of these bits in such cases is unpredictable. Initialise them explicitly to be sure, and set all the bits to 1 rather than 0 for consistency with the least significant 64 bits.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7497/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

Switching the vector context implicitly saves and restores the state of the aliased scalar FP data registers; however, the scalar FP control and status register is distinct from the MSA control and status register. In order to allow scalar FP to function correctly in programs using MSA, the scalar CSR needs to be saved and restored along with the MSA vector context.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7301/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-