- 21 July 2015: 3 commits
-
-
By Alex Bennée
This is a precursor for later patches which will need to do more to set up debug state before entering the hyp.S switch code. The existing functionality for setting mdcr_el2 has been moved out of hyp.S and now uses the value kept in vcpu->arch.mdcr_el2.

As the assembler previously masked and preserved MDCR_EL2.HPMN, I've had to add a mechanism to save the value of mdcr_el2 as a per-cpu variable during the initialisation code. The kernel never sets this number, so we assume the boot code has set up the correct value here.

This also moves the conditional setting of the TDA bit from the hyp code into the C code, which is currently used for the lazy debug register context switch code.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
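A minimal sketch of the per-cpu stash described above, assuming hypothetical names (mdcr_el2_host, read_mdcr_el2); the point is only that the boot-time MDCR_EL2.HPMN field is captured per CPU before the C code starts composing vcpu->arch.mdcr_el2:

```c
#include <linux/percpu.h>
#include <asm/kvm_arm.h>	/* MDCR_EL2_HPMN_MASK */

/* Hedged sketch only: variable and helper names are illustrative,
 * not the actual symbols introduced by the patch. */
static DEFINE_PER_CPU(u32, mdcr_el2_host);

static void stash_boot_mdcr_el2(void)
{
	/* Preserve the boot-code-programmed HPMN field; the kernel never
	 * writes it, so it must be saved during hyp initialisation. */
	__this_cpu_write(mdcr_el2_host,
			 read_mdcr_el2() & MDCR_EL2_HPMN_MASK); /* read_mdcr_el2() is hypothetical */
}
```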
-
By Alex Bennée
This commit adds a stub function to support the KVM_SET_GUEST_DEBUG ioctl. Any unsupported flag will return -EINVAL. For now, only KVM_GUESTDBG_ENABLE is supported, although it has no effect yet.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
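As a rough illustration of the stub's behaviour (not the exact kernel function), the control flags are masked against the single supported bit and anything else is rejected; the struct below is a stand-in for struct kvm_guest_debug, and KVM_GUESTDBG_ENABLE is assumed to be the UAPI value 0x1:

```c
#include <errno.h>
#include <stdint.h>

#define KVM_GUESTDBG_ENABLE	0x00000001	/* assumed UAPI value */
#define SUPPORTED_DEBUG_FLAGS	KVM_GUESTDBG_ENABLE

struct guest_debug_request {	/* stand-in for struct kvm_guest_debug */
	uint32_t control;
};

/* Mirrors the described semantics: any unsupported flag yields -EINVAL;
 * KVM_GUESTDBG_ENABLE is accepted but has no effect yet. */
static int set_guest_debug_stub(const struct guest_debug_request *dbg)
{
	if (dbg->control & ~SUPPORTED_DEBUG_FLAGS)
		return -EINVAL;
	return 0;
}
```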
-
By Alex Bennée
This commit defines the API headers for guest debugging. There are two architecture-specific debug structures:

- kvm_guest_debug_arch, which allows us to pass in HW debug registers
- kvm_debug_exit_arch, which signals the exception and a possible faulting address

The type of debugging being used is controlled by the architecture-specific control bits of the kvm_guest_debug->control flags in the ioctl structure.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
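For orientation, the two structures roughly take the following shape; the field names, types and slot count below are a hedged reconstruction from the description above, not copied from the header:

```c
#include <stdint.h>

#define MAX_HW_DBG_REGS 16	/* assumed number of breakpoint/watchpoint slots */

/* Passes the hardware debug registers in (sketch of kvm_guest_debug_arch). */
struct guest_debug_arch_sketch {
	uint64_t dbg_bcr[MAX_HW_DBG_REGS];	/* breakpoint control */
	uint64_t dbg_bvr[MAX_HW_DBG_REGS];	/* breakpoint value   */
	uint64_t dbg_wcr[MAX_HW_DBG_REGS];	/* watchpoint control */
	uint64_t dbg_wvr[MAX_HW_DBG_REGS];	/* watchpoint value   */
};

/* Signals a debug exception back to userspace (sketch of kvm_debug_exit_arch). */
struct debug_exit_arch_sketch {
	uint32_t syndrome;		/* exception syndrome */
	uint64_t fault_address;		/* possible faulting address */
};
```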
-
- 11 July 2015: 1 commit
-
-
By John David Anglin
The increased use of pdtlb/pitlb instructions seemed to increase the frequency of random segmentation faults when building packages. Further, we had a number of cases where TLB inserts would repeatedly fail and all forward progress would stop. The Haskell ghc package caused a lot of trouble in this area.

The final indication of a race in pte handling was this syslog entry on sibaris (C8000):

 swap_free: Unused swap offset entry 00000004
 BUG: Bad page map in process mysqld  pte:00000100 pmd:019bbec5
 addr:00000000ec464000 vm_flags:00100073 anon_vma:0000000221023828 mapping: (null) index:ec464
 CPU: 1 PID: 9176 Comm: mysqld Not tainted 4.0.0-2-parisc64-smp #1 Debian 4.0.5-1
 Backtrace:
  [<0000000040173eb0>] show_stack+0x20/0x38
  [<0000000040444424>] dump_stack+0x9c/0x110
  [<00000000402a0d38>] print_bad_pte+0x1a8/0x278
  [<00000000402a28b8>] unmap_single_vma+0x3d8/0x770
  [<00000000402a4090>] zap_page_range+0xf0/0x198
  [<00000000402ba2a4>] SyS_madvise+0x404/0x8c0

Note that the pte value is 0 except for the accessed bit 0x100. This bit shouldn't be set without the present bit. It should also be noted that the madvise system call is probably a trigger for many of the random segmentation faults.

In looking at the kernel code, I found the following problems:

1) The pte_clear define didn't take the TLB lock when clearing a pte.
2) We didn't test the pte present bit inside the lock in the exception support code.
3) The pte and tlb locks needed to be merged in order to ensure consistency between the page table and the TLB. This also has the effect of serializing TLB broadcasts on SMP systems.

The attached change implements the above and a few other tweaks to try to improve performance. Based on the timing code, TLB purges are very slow (e.g., ~209 cycles per page on rp3440). Thus, I think it is beneficial to test the split_tlb variable to avoid duplicate purges. Probably, all PA 2.0 machines have combined TLBs.

I dropped using __flush_tlb_range in flush_tlb_mm as I realized all applications and most threads have a stack size that is too large to make this useful. I added some comments to this effect.

Since implementing 1 through 3, I haven't had any random segmentation faults on mx3210 (rp3440) in about one week of building code and running as a Debian buildd.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # v3.18+
Signed-off-by: Helge Deller <deller@gmx.de>
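The central idea behind fixes 1) and 3) can be sketched in plain C (pthread-based and purely conceptual, not parisc kernel code): a single lock covers both the PTE update and the corresponding TLB purge, so a purge can never observe a half-updated entry, and purges are serialized across CPUs.

```c
#include <pthread.h>

/* One lock for both page-table updates and TLB purges (conceptual).
 * Initialise with pthread_spin_init() before first use. */
static pthread_spinlock_t pa_tlb_lock;

static void clear_pte_and_purge(volatile unsigned long *ptep,
				unsigned long addr,
				void (*purge_tlb_entry)(unsigned long))
{
	pthread_spin_lock(&pa_tlb_lock);
	*ptep = 0;			/* pte_clear now happens under the lock */
	purge_tlb_entry(addr);		/* pdtlb/pitlb-style purge, same lock */
	pthread_spin_unlock(&pa_tlb_lock);
}
```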
-
- 10 July 2015: 6 commits
-
-
By Mark Rutland
We currently set x27 in compat_sys_sigreturn_wrapper and compat_sys_rt_sigreturn_wrapper, similarly to what we do with r8/why on 32-bit ARM, in an attempt to prevent sigreturns from being restarted. However, on arm64 we have always used pt_regs::syscallno for syscall restarting (for both native and compat tasks), and x27 is never inspected again before being overwritten in kernel_exit. This patch removes the pointless register assignments.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Ralf Baechle
We were using the native syscall, and that results in subtle breakage. This is the same issue as fixed in 077d0e65 (MIPS: N32: Use compat getsockopt syscall), but that commit only fixed it for N32.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=100291
-
By Paul Burton
The L2 cache in the I6400 core has 16 ways, so extend the way_string array to take such caches into account.

[ralf@linux-mips.org: Other already supported CPUs are free to support more than 8 ways of cache as well.]

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10640/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
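The change amounts to growing the associativity lookup table so that an index of 16 is valid; a sketch of what the extended table could look like (entries beyond 8-way are the additions):

```c
/* Sketch of the extended lookup table, indexed by cache associativity. */
static const char *way_string[] = {
	NULL, "direct mapped", "2-way", "3-way", "4-way", "5-way", "6-way",
	"7-way", "8-way", "9-way", "10-way", "11-way", "12-way", "13-way",
	"14-way", "15-way", "16-way"
};
```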
-
By James Hogan
Implement the mips_cdmm_phys_base() platform callback to provide a default Common Device Memory Map (CDMM) physical base address for the Pistachio SoC. This allows the CDMM in each VPE to be configured and probed for devices, such as the Fast Debug Channel (FDC). The physical address chosen is just below the default CPC address, which appears to also be unallocated.

The FDC IRQ is also usable on Pistachio, and is routed through the GIC, so implement the get_c0_fdc_int() platform callback using gic_get_c0_fdc_int(), so the FDC driver doesn't have to fall back to polling.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: James Hartley <james.hartley@imgtec.com>
Cc: linux-mips@linux-mips.org
Reviewed-by: Andrew Bresticker <abrestic@chromium.org>
Patchwork: http://patchwork.linux-mips.org/patch/9749/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By James Hogan
Wider testing reveals that the Fast Debug Channel (FDC) interrupt is routed through the GIC just fine on the Pistachio SoC, even though it contains interAptiv cores. Clearly the FDC interrupt routing problems previously observed on interAptiv and proAptiv cores are specific to the Malta FPGA bitstreams.

Move the workaround for interAptiv and proAptiv out of gic_get_c0_fdc_int() in the GIC irqchip driver into Malta's get_c0_fdc_int() platform callback, to allow the Pistachio SoC to use the FDC interrupt.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: linux-mips@linux-mips.org
Reviewed-by: Andrew Bresticker <abrestic@chromium.org>
Cc: James Hartley <james.hartley@imgtec.com>
Patchwork: http://patchwork.linux-mips.org/patch/9748/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Markos Chandras
MT_SMP is not the only SMP option for MT cores. The MT_SMP option allows more than one VPE per core to appear as a secondary CPU in the system. Because of how the CM works, it propagates the address-based cache ops to the secondary cores but not the index-based ones. Because of that, the code does not use IPIs to flush the L1 caches on secondary cores, since the CM would have done that already. However, the CM functionality is independent of the type of SMP kernel, so even in non-MT kernels IPIs are not necessary. As a result, we change the conditional to depend on the presence of the CM. Moreover, since VPEs on the same core share the same L1 caches, there is no need to send an IPI to all of them, so we calculate a suitable cpumask with only one VPE per core.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10654/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
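A hedged, kernel-style sketch of the mask construction described above; the core-id accessor cpu_core_id() is a hypothetical stand-in for however the platform exposes the core number:

```c
#include <linux/cpumask.h>
#include <linux/bitops.h>

/* Build a mask containing one VPE per online core, so cache-maintenance
 * IPIs are sent to a single sibling of each core only. */
static void build_one_vpe_per_core_mask(struct cpumask *mask)
{
	unsigned long cores_seen = 0;	/* assumes fewer than BITS_PER_LONG cores */
	int cpu;

	cpumask_clear(mask);
	for_each_online_cpu(cpu) {
		int core = cpu_core_id(cpu);	/* hypothetical helper */

		if (!test_bit(core, &cores_seen)) {
			set_bit(core, &cores_seen);
			cpumask_set_cpu(cpu, mask);
		}
	}
}
```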
-
- 09 July 2015: 19 commits
-
-
By Markos Chandras
This reverts commit 6ca716f2. SMP/CPS is now supported on 64-bit cores.

Cc: <stable@vger.kernel.org> # 4.1
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10592/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Markos Chandras
Replace lw/sw and various arithmetic instructions with macros so the code can work on 64-bit kernels as well.

Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10591/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
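A sketch of the macro approach, modelled on the width-agnostic helpers in <asm/asm.h> (the exact macro names used by the patch may differ):

```c
/* Assembler-visible preprocessor macros: one spelling in the .S file,
 * the right instruction width chosen at build time. */
#ifdef CONFIG_64BIT
# define PTR_L		ld
# define PTR_S		sd
# define PTR_ADDU	daddu
#else
# define PTR_L		lw
# define PTR_S		sw
# define PTR_ADDU	addu
#endif
```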
-
By Markos Chandras
In preparation for 64-bit CPS support, we replace KSEG0 with CKSEG0 so 64-bit kernels can be supported.

Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10590/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Markos Chandras
The cps-vec code assumes the O32 ABI and uses t4-t7 in quite a few places. This breaks the build on 64-bit. Use the pseudo-registers ta0-ta3 instead, which makes the code compatible with 64-bit.

Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10589/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Markos Chandras
mips32r2 is a subset of mips64r2, so we replace mips32r2 with mips64r2 in preparation for 64-bit CPS support.

Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10588/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Markos Chandras
The PTR_LA macro will pick the correct "la" or "dla" macro to load an address into a register. This gets rid of the following warnings (and others) when building a 64-bit CPS kernel:

 arch/mips/kernel/cps-vec.S:63: Warning: la used to load 64-bit address
 arch/mips/kernel/cps-vec.S:159: Warning: la used to load 64-bit address
 arch/mips/kernel/cps-vec.S:220: Warning: la used to load 64-bit address
 arch/mips/kernel/cps-vec.S:240: Warning: la used to load 64-bit address
 [...]

Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10587/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
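PTR_LA itself follows the same width-selection pattern as the other pointer macros; roughly, and assuming the usual MIPS assembly-header convention:

```c
/* "la" truncates addresses on 64-bit targets; "dla" loads a full 64-bit address. */
#ifdef CONFIG_64BIT
# define PTR_LA		dla
#else
# define PTR_LA		la
#endif
```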
-
By Markos Chandras
Commit 1d8f1f5a ("MIPS: smp-cps: hotplug support") added hotplug support to the SMP/CPS implementation, but it introduced a few build problems on 64-bit kernels due to pointers being cast to and from 'int' C types. We fix this by using 'unsigned long' instead, which matches the size of pointers on both 32- and 64-bit kernels. Finally, we fix the comment, since the CM base address is loaded into v1 ($3) instead of v0.

Fixes the following build problems:

 arch/mips/kernel/smp-cps.c: In function 'wait_for_sibling_halt':
 arch/mips/kernel/smp-cps.c:366:17: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
 [...]
 arch/mips/kernel/smp-cps.c: In function 'cps_cpu_die':
 arch/mips/kernel/smp-cps.c:427:13: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
 cc1: all warnings being treated as errors

Fixes: 1d8f1f5a ("MIPS: smp-cps: hotplug support")
Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10586/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
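The fix boils down to round-tripping pointers through a type that is pointer-sized on both ABIs; a minimal before/after illustration (function and variable names invented):

```c
#include <stdint.h>

void *ptr_from_core_entry(unsigned long entry)
{
	/* Broken on 64-bit: 'int' truncates the upper 32 bits.
	 *   return (void *)(int)entry;
	 * Correct: 'unsigned long' has pointer width on both 32-bit and
	 * 64-bit MIPS kernels. */
	return (void *)entry;
}

unsigned long core_entry_from_ptr(void *p)
{
	return (unsigned long)p;	/* not (int)p */
}
```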
-
By Markos Chandras
Commit 5f9f41c4 ("MIPS: kernel: Prepare the JR instruction for emulation on MIPS R6") added support for emulating the JR instruction on MIPS R6 cores, but it introduced a bug which could be triggered when hitting a JALR opcode, because the code used the wrong field in the 'r_format' struct to determine the instruction opcode. This led to crashes because an emulated JALR instruction was treated as a JR one when the R6 emulator was turned off.

Fixes: 5f9f41c4 ("MIPS: kernel: Prepare the JR instruction for emulation on MIPS R6")
Cc: <stable@vger.kernel.org> # 4.0+
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10583/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
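For reference, JR and JALR share the SPECIAL opcode and differ only in the function field, so decoding has to look at 'func' rather than 'opcode'. A small standalone decoder illustrating this (not the kernel's emulation code):

```c
#include <stdint.h>

#define OP_SPECIAL	0x00
#define FUNC_JR		0x08
#define FUNC_JALR	0x09

/* R-format: opcode[31:26] rs rt rd shamt func[5:0]. */
static int insn_is_jr(uint32_t insn)
{
	return (insn >> 26) == OP_SPECIAL && (insn & 0x3f) == FUNC_JR;
}

static int insn_is_jalr(uint32_t insn)
{
	return (insn >> 26) == OP_SPECIAL && (insn & 0x3f) == FUNC_JALR;
}
```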
-
By Markos Chandras
Commits f1b44067 ("MIPS: Emulate the new MIPS R6 B{L,G}T{Z,}{AL,}C instructions") and a8ff66f5 ("MIPS: Emulate the new MIPS R6 B{L,G}E{Z,}{AL,}C instructions") added support for emulating various compact branch instructions. However, they missed the case of those which use the old BLEZL and BGTZL opcodes, leading to random crashes when the R6 emulator is disabled. We fix this by ensuring that the 'rt' field is non-zero, which is always the case for these compact branch instructions.

Fixes: f1b44067 ("MIPS: Emulate the new MIPS R6 B{L,G}T{Z,}{AL,}C instructions")
Fixes: a8ff66f5 ("MIPS: Emulate the new MIPS R6 B{L,G}E{Z,}{AL,}C instructions")
Cc: <stable@vger.kernel.org> # 4.0+
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/10582/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
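The distinguishing detail is the rt field (bits 20:16): the legacy BLEZL/BGTZL encodings require rt == 0, while the R6 compact branches that reuse these opcodes encode rt != 0. A standalone sketch of the check:

```c
#include <stdint.h>

#define OP_BLEZL	0x16
#define OP_BGTZL	0x17

/* Returns non-zero if the word is one of the R6 compact branches that
 * reuse the BLEZL/BGTZL opcodes (those always have a non-zero rt field). */
static int is_r6_compact_branch_reusing_blezl_bgtzl(uint32_t insn)
{
	uint32_t opcode = insn >> 26;
	uint32_t rt     = (insn >> 16) & 0x1f;

	return (opcode == OP_BLEZL || opcode == OP_BGTZL) && rt != 0;
}
```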
-
By Markos Chandras
Fix broken indentation caused by the SMTC removal commit b633648c ("MIPS: MT: Remove SMTC support").

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Fixes: b633648c ("MIPS: MT: Remove SMTC support")
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10581/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Alexander Sverdlin
Commit f9a7febd means that mapstart, and therefore the page bitmap for the bootmem allocator, immediately follows initrd_end. This doesn't always work well on Octeon, where there are holes in the PFN ranges (refer to 5b3b1688 and 4MB-aligned PFN allocation). Depending on the initrd location it could happen that mapstart ends up in an area not allocated by plat_mem_setup() in arch/mips/cavium-octeon/setup.c, but in the alignment hole between the initrd and the next PFN area. Later on, this memory is unconditionally made available to the buddy allocator at the end of free_all_bootmem_core() (mm/bootmem.c).

All of this results in Linux using memory not designated for Linux in Octeon's plat_mem_setup(), which in turn means corruption of the memory used by another OS/baremetal code on the same SoC.

This doesn't look like a problem of the Octeon platform code, but rather an inability of f9a7febd to deal correctly with fragmented memory mappings. The proposed fix moves the check for the initrd address into the same calculation loop in bootmem_init() (arch/mips/kernel/setup.c), which also accounts for the kernel code location. This should result in mapstart being located in the first PFN area after both the kernel code and the initrd.

Signed-off-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Cc: linux-mips@linux-mips.org
Cc: David Daney <david.daney@cavium.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: Joe Perches <joe@perches.com>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Yusuf Khan <yusuf.khan@nokia.com>
Cc: Michael Kreuzer <michael.kreuzer@nokia.com>
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Patchwork: https://patchwork.linux-mips.org/patch/10594/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Thomas Gleixner
ktime_get_ts() is the proper interface today.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
-
By Chris Metcalf
The tilegx and tilepro compilers use .coldtext for their unlikely executed text section name, so an __attribute__((cold)) function will (when compiled with higher optimization levels) land in the .coldtext section.

Modify modpost to add .coldtext to the set of OTHER_TEXT_SECTIONS so we don't get warnings about referencing such a section in an __ex_table block, and then also modify arch/tile/lib/memcpy_user_64.c so that it uses plain ".coldtext" instead of ".coldtext.memcpy". The latter naming is a relic of an earlier use of -ffunction-sections, which we no longer use by default.

Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
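In modpost terms the change is just another entry in the list of sections treated as legitimate text; schematically (the surrounding entries are illustrative, not an exact copy of scripts/mod/modpost.c):

```c
/* scripts/mod/modpost.c, schematic: sections that may legitimately be
 * referenced from an __ex_table entry. ".coldtext" is the new addition. */
#define OTHER_TEXT_SECTIONS ".init.text", ".meminit.text", ".exit.text", \
		".memexit.text", ".coldtext"
```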
-
By Kristina Martsenko
Add a DTS file for the MP2 Cortex-A53 Soft Macrocell Model implemented on a LogicTile Express 20MG (V2F-1XV7) daughterboard. This is based on the version that's currently available from the ARM DTS repository [1].

[1] git://linux-arm.org/arm-dts.git

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
-
By Sudeep Holla
The CCI device node was added to vexpress CA15_A7 (i.e. TC2) much before the CCI PMU support and binding was added. This patch adds the missing PMU node so that CCI PMUs can be used on TC2.

Cc: Liviu Dudau <liviu.dudau@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
-
By Mark Rutland
The dts for the CoreTile Express A15x2 A7x3 (TC2) only describes the PMUs of the Cortex-A15 CPUs, and not the Cortex-A7 CPUs. Now that we have a mechanism for describing disparate PMUs and their interrupts in device tree, this patch makes use of these to describe the PMUs for all CPUs in the system. For consistency, the existing A15 PMU interrupt-affinity property is reflowed across two lines.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Liviu Dudau <liviu.dudau@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
-
By Tirumalesh Chalamarla
The PCIe host controller uses MSIs provided by GICv3 ITS. Enable it on Thunder SoCs by adding an entry to DT.

Signed-off-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
-
By Y Vo
This patch adds a poweroff button device node to support the poweroff feature on the APM X-Gene Mustang platform.

Signed-off-by: Y Vo <yvo@apm.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
-
By Mark Rutland
Currently we enable debug exceptions before reading ESR_EL1 in both el0_inv and el1_inv. If a debug exception is taken before we read ESR_EL1, the value will have been corrupted.

As el*_inv is typically fatal, an intervening debug exception results in misleading debug information being logged to the console, but is not otherwise harmful.

As with the other entry paths, we can use the ESR_EL1 value stashed earlier in the exception entry (in x25 for el0_sync{,_compat}, and x1 for el1_sync), giving us better error reporting in this case.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 July 2015: 6 commits
-
-
By Sébastien Hinderer
Signed-off-by: Sébastien Hinderer <Sebastien.Hinderer@ens-lyon.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Samuel Thibault <Samuel.Thibault@ens-lyon.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
By Sukadev Bhattiprolu
The sysfs attributes for the 24x7 counters are dynamically allocated. Initialize the attributes using sysfs_attr_init() to fix the following warning, which occurs when CONFIG_DEBUG_LOCK_VMALLOC=y:

 [    0.346249] audit: initializing netlink subsys (disabled)
 [    0.346284] audit: type=2000 audit(1436295254.340:1): initialized
 [    0.346489] BUG: key c0000000efe90198 not in .data!
 [    0.346491] DEBUG_LOCKS_WARN_ON(1)
 [    0.346502] ------------[ cut here ]------------
 [    0.346504] WARNING: at ../kernel/locking/lockdep.c:3002
 [    0.346506] Modules linked in:

Reported-by: Gustavo Luiz Duarte <gustavold@linux.vnet.ibm.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Tested-by: Gustavo Luiz Duarte <gustavold@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
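The general pattern for dynamically allocated sysfs attributes is to call sysfs_attr_init() before the attribute is registered, so lockdep gets a valid class key. A hedged, generic sketch of that pattern (function and attribute names are invented, this is not the 24x7 code itself):

```c
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/sysfs.h>

static struct device_attribute *alloc_counter_attr(const char *name,
		ssize_t (*show)(struct device *, struct device_attribute *, char *))
{
	struct device_attribute *attr = kzalloc(sizeof(*attr), GFP_KERNEL);

	if (!attr)
		return NULL;

	sysfs_attr_init(&attr->attr);	/* required for dynamically allocated attributes */
	attr->attr.name = name;
	attr->attr.mode = 0444;
	attr->show = show;
	return attr;
}
```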
-
By Ralf Baechle
Commit 46e12c07 (MIPS: O32 / 32-bit: Always copy 4 stack arguments.) changed the O32 syscall handler to always load four arguments from the userspace stack, even for syscalls that require fewer or no arguments to be copied. This removed a large table from kernel space and the need to maintain it.

The chosen implementation appeared to be fine, but it requires 16 bytes of readable stack space above the user stack pointer. It turned out that a few threading implementations munmap the user stack before the thread exits, resulting in errors due to the unreadable stack.

We now treat any failed load as if the loaded value was zero and let the actual syscall deal with the situation.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Ralf Baechle
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Ralf Baechle
  CC      arch/mips/loongson64/lemote-2f/clock.o
 /home/ralf/src/linux/linux-mips/arch/mips/loongson64/lemote-2f/clock.c:18:40: fatal error: asm/mach-loongson/loongson.h: No such file or directory
  #include <asm/mach-loongson/loongson.h>
                                          ^
 compilation terminated.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
By Catalin Marinas
Move EDAC_SUPPORT to the right place.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 07 July 2015: 5 commits
-
-
By Al Stone
For those parts of the arm64 ACPI code that need to check GICC subtables in the MADT, use the new BAD_MADT_GICC_ENTRY macro instead of the previous BAD_MADT_ENTRY. The new macro takes into account differences in the size of the GICC subtable that the old macro did not; this caused failures even though the subtable entries are valid.

Fixes: aeb823bb ("ACPICA: ACPI 6.0: Add changes for FADT table.")
Signed-off-by: Al Stone <al.stone@linaro.org>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Al Stone
The BAD_MADT_ENTRY() macro is designed to work for all of the subtables of the MADT. In the ACPI 5.1 version of the spec, the struct for the GICC subtable (struct acpi_madt_generic_interrupt) is 76 bytes long; in ACPI 6.0, the struct is 80 bytes long. But there is only one definition in ACPICA for this struct -- and that is the 6.0 version. Hence, when BAD_MADT_ENTRY() compares the struct size to the length in the GICC subtable, it fails if 5.1 structs are in use, and there are systems in the wild that have them.

This patch adds BAD_MADT_GICC_ENTRY(), which checks the GICC subtable only, accounting for the difference in specification versions that are possible. BAD_MADT_ENTRY() will continue to work as is for all other MADT subtables.

This code is being added to an arm64 header file since that is currently the only architecture using the GICC subtable of the MADT. As a GIC is specific to ARM, it is also unlikely the subtable will be used elsewhere.

Fixes: aeb823bb ("ACPICA: ACPI 6.0: Add changes for FADT table.")
Signed-off-by: Al Stone <al.stone@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
[catalin.marinas@arm.com: extra brackets around macro arguments]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
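The essence of the new check is to validate the GICC subtable against the lengths the two spec revisions actually define (76 bytes for ACPI 5.1, 80 for 6.0) rather than against sizeof() the ACPICA 6.0 struct. A hedged sketch of that idea, not the literal macro from the arm64 header:

```c
/* Sizes taken from the commit description: ACPI 5.1 vs ACPI 6.0 GICC. */
#define ACPI_5_1_GICC_LENGTH	76
#define ACPI_6_0_GICC_LENGTH	80

/* A GICC subtable length is acceptable if it matches either revision. */
#define BAD_GICC_LENGTH(len) \
	((len) != ACPI_5_1_GICC_LENGTH && (len) != ACPI_6_0_GICC_LENGTH)
```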
-
By Thomas Gleixner
irq_data is protected by irq_desc->lock, so retrieving the irq chip from irq_data outside the lock is racy vs. a concurrent update. Move it into the lock-held region. While at it, add a comment explaining why the vector walk does not require vector_lock.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: xiao jin <jin.xiao@intel.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Link: http://lkml.kernel.org/r/20150705171102.331320612@linutronix.de
-
By Thomas Gleixner
It's unsafe to examine fields in the irq descriptor without holding the descriptor lock. Add proper locking. While at it, add a comment explaining why the vector check can run locklessly.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: xiao jin <jin.xiao@intel.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Link: http://lkml.kernel.org/r/20150705171102.236544164@linutronix.de
-
By Thomas Gleixner
Jin debugged a nasty cpu hotplug race which results in leaking an irq vector on the newly hotplugged cpu:

 cpu N                                cpu M
 native_cpu_up                        device_shutdown
   do_boot_cpu                          free_msi_irqs
   start_secondary                      arch_teardown_msi_irqs
     smp_callin                           default_teardown_msi_irqs
       setup_vector_irq                     arch_teardown_msi_irq
         __setup_vector_irq                   native_teardown_msi_irq
           lock(vector_lock)                    destroy_irq
           install vectors
           unlock(vector_lock)
                                                lock(vector_lock)
 --->                                           __clear_irq_vector
                                                unlock(vector_lock)
     lock(vector_lock)
     set_cpu_online
     unlock(vector_lock)

This leaves the irq vector(s) which are torn down on CPU M stale in the vector array of CPU N, because CPU M does not see CPU N online yet. There is a similar issue with concurrent newly setup interrupts.

The alloc/free protection of irq descriptors does not prevent the above race, because it merely prevents interrupt descriptors from going away or changing concurrently.

Prevent this by moving the call to setup_vector_irq() into the vector_lock held region which protects set_cpu_online():

 cpu N                                cpu M
 native_cpu_up                        device_shutdown
   do_boot_cpu                          free_msi_irqs
   start_secondary                      arch_teardown_msi_irqs
     smp_callin                           default_teardown_msi_irqs
       lock(vector_lock)                    arch_teardown_msi_irq
       setup_vector_irq()                     native_teardown_msi_irq
         __setup_vector_irq                     destroy_irq
         install vectors
       set_cpu_online
       unlock(vector_lock)
                                                lock(vector_lock)
                                                __clear_irq_vector
                                                unlock(vector_lock)

So cpu M either sees cpu N online before clearing the vector, or cpu N installs the vectors after cpu M has cleared it.

Reported-by: xiao jin <jin.xiao@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Link: http://lkml.kernel.org/r/20150705171102.141898931@linutronix.de
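Schematically, the fix makes the secondary CPU publish its vectors and its online bit inside one vector_lock critical section; a simplified sketch of the resulting shape of the AP bringup step (the wrapper function name is invented, not the literal smpboot.c hunk):

```c
/* Sketch of the reordered AP bringup step described above. */
static void ap_announce_online(int cpu)
{
	lock_vector_lock();
	setup_vector_irq(cpu);		/* copy/install vectors for this CPU */
	set_cpu_online(cpu, true);	/* now visible to __clear_irq_vector() */
	unlock_vector_lock();
}
```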
-