- 19 Aug 2017, 2 commits
-
-
Committed by Kees Cook

Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000 broke AddressSanitizer. This is a partial revert of:

eab09532 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
02445990 ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")

The AddressSanitizer tool has hard-coded expectations about where executable mappings are loaded. The motivation for changing the PIE base in the above commits was to avoid the Stack-Clash CVEs that allowed executable mappings to get too close to heap and stack. This was mainly a problem on 32-bit, but the 64-bit bases were moved too, in an effort to proactively protect those systems (proofs of concept do exist that show 64-bit collisions, but other recent changes to fix stack accounting and setuid behaviors will minimize the impact).

The new 32-bit PIE base is fine for ASan (since it matches the ET_EXEC base), so only the 64-bit PIE base needs to be reverted to let x86 and arm64 ASan binaries run again. Future changes to the 64-bit PIE base on these architectures can be made optional once a more dynamic method for dealing with AddressSanitizer is found (e.g. always loading PIE into the mmap region for marked binaries).

Link: http://lkml.kernel.org/r/20170807201542.GA21271@beast
Fixes: eab09532 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
Fixes: 02445990 ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")
Signed-off-by: Kees Cook <keescook@chromium.org>
Reported-by: Kostya Serebryany <kcc@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
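For reference, a hedged sketch of what the 64-bit part of the revert amounts to on x86 (the exact expression lives in arch/x86/include/asm/elf.h; treat the details here as approximate):

    /*
     * Sketch only: 32-bit PIE keeps the new low base that matches the
     * ET_EXEC base, while 64-bit PIE returns to the traditional
     * two-thirds-of-TASK_SIZE base (about 0x555555554000).
     */
    #define ELF_ET_DYN_BASE    (mmap_is_ia32() ? 0x000400000UL : \
                                                 (TASK_SIZE / 3 * 2))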
-
Committed by Nicholas Piggin

Commit 05a4a952 ("kernel/watchdog: split up config options") lost the perf-based hardlockup detector's dependency on PERF_EVENTS, which can result in broken builds with some powerpc configurations. Restore the dependency. Add it for x86 too: although x86 always selects PERF_EVENTS, it seems reasonable to make the dependency explicit.

Link: http://lkml.kernel.org/r/20170810114452.6673-1-npiggin@gmail.com
Fixes: 05a4a952 ("kernel/watchdog: split up config options")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Don Zickus <dzickus@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 18 Aug 2017, 2 commits
-
-
Committed by Thomas Gleixner

The hardlockup detector on x86 uses a performance counter based on unhalted CPU cycles and a periodic hrtimer. The hrtimer period is about 2/5 of the performance counter period, so the hrtimer should fire 2-3 times before the performance counter NMI fires. The NMI code checks whether the hrtimer fired since the last invocation. If not, it assumes a hard lockup.

The calculation of those periods is based on the nominal CPU frequency. Turbo modes increase the CPU clock frequency and therefore shorten the period of the perf/NMI watchdog. With extreme Turbo modes (3x nominal frequency) the perf/NMI period is shorter than the hrtimer period, which leads to false positives.

A simple fix would be to shorten the hrtimer period, but that comes with the side effect of more frequent hrtimer and softlockup thread wakeups, which is not desired. Implement a low-pass filter instead, which checks the perf/NMI period against kernel time. If the perf/NMI fires before 4/5 of the watchdog period has elapsed, the event is ignored and postponed to the next perf/NMI. That solves the problem and avoids the overhead of shorter hrtimer periods and more frequent softlockup thread wakeups.

Fixes: 58687acb ("lockup_detector: Combine nmi_watchdog and softlockup detector")
Reported-and-tested-by: Kan Liang <Kan.liang@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: dzickus@redhat.com
Cc: prarit@redhat.com
Cc: ak@linux.intel.com
Cc: babu.moger@oracle.com
Cc: peterz@infradead.org
Cc: eranian@google.com
Cc: acme@redhat.com
Cc: stable@vger.kernel.org
Cc: atomlin@redhat.com
Cc: akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1708150931310.1886@nanos
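The filter can be pictured as a small per-CPU timestamp check in the NMI path; this is a hedged sketch with names approximating those in kernel/watchdog_hld.c:

    static DEFINE_PER_CPU(u64, last_timestamp);
    /* precomputed elsewhere as 4/5 of the watchdog period, in ns */
    static u64 watchdog_hrtimer_sample_threshold __read_mostly;

    static bool watchdog_check_timestamp(void)
    {
        u64 now = ktime_get_mono_fast_ns();

        /* perf NMI fired early (Turbo): ignore it, wait for the next one */
        if (now - __this_cpu_read(last_timestamp) <
            watchdog_hrtimer_sample_threshold)
            return false;

        __this_cpu_write(last_timestamp, now);
        return true;
    }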
-
Committed by Gary Bisson

The previous value was a bad copy from the nitrogen6_max device tree.

Signed-off-by: Gary Bisson <gary.bisson@boundarydevices.com>
Fixes: 3faa1bb2 ("ARM: dts: imx: add Boundary Devices Nitrogen6_SOM2 support")
Cc: <stable@vger.kernel.org>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
-
- 16 Aug 2017, 1 commit
-
-
Committed by Benjamin Herrenschmidt

VSX uses a combination of the old vector registers, the old FP registers and the new "second halves" of the FP registers. Thus when we need to see the VSX state in the thread struct (flush_vsx_to_thread()) or when we are about to use VSX in the kernel (enable_kernel_vsx()), we need to ensure all three are flushed into the thread struct if any of them is individually enabled. Unfortunately we only tested whether VSX as a whole was enabled, not whether the parts were individually enabled.

Fixes: 72cd7b44 ("powerpc: Uncomment and make enable_kernel_vsx() routine available")
Cc: stable@vger.kernel.org # v4.3+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
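A hedged sketch of the corrected check, based on the description above (the real code lives in arch/powerpc/kernel/process.c; helper names are assumptions):

    void enable_kernel_vsx(void)
    {
        WARN_ON(preemptible());
        msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);

        /* before: tested only MSR_VSX; now: flush if any facility is live */
        if (current->thread.regs &&
            (current->thread.regs->msr & (MSR_VSX | MSR_VEC | MSR_FP))) {
            check_if_tm_restore_required(current);
            __giveup_vsx(current);
        }
    }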
-
- 14 Aug 2017, 1 commit
-
-
Committed by Icenowy Zheng

The pin controller of the H5 has three IRQs at the chip's GIC, which represent the three banks of pinctrl IRQs. However, the device tree was missing the third IRQ of the pin controller, which made the PG bank IRQ unusable. Add the missing IRQ to the pinctrl node.

Fixes: 4e36de17 ("arm64: allwinner: h5: add Allwinner H5 .dtsi")
Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
-
- 11 Aug 2017, 6 commits
-
-
Committed by Juergen Gross

A Xen HVM guest running with KASLR enabled will die rather soon today, because the shared info page mapping is using va() too early. This was introduced by commit a5d5f328 ("xen: allocate page for shared info page from low memory"). In order to fix this, use early_memremap() to get a temporary virtual address for shared info until va() can be used safely.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
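A hedged sketch of the early-mapping idea; early_memremap()/early_memunmap() are real kernel APIs, while the surrounding function and its name are illustrative:

    #include <asm/early_ioremap.h>

    static struct shared_info *sip;

    static void __init xen_hvm_map_shared_info_early(phys_addr_t pa)
    {
        /* temporary VA, valid before the direct map (and va()) is usable */
        sip = early_memremap(pa, PAGE_SIZE);
        /* ... register the page with the hypervisor ... */
        early_memunmap(sip, PAGE_SIZE);
        /* later, once va() is safe, switch to the permanent mapping */
    }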
-
Committed by Juergen Gross

Instead of calling xen_hvm_init_shared_info() on boot and resume, split it up into a boot-time function that searches for the pfn to use, and a mapping function that performs the hypervisor mapping call.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
-
Committed by Juergen Gross

Provide a hook in hypervisor_x86, called after setting up the initial memory mapping. This is needed e.g. by Xen HVM guests to map the hypervisor shared info page.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
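A hedged sketch of the hook, following the commit description (the actual field and wrapper live in arch/x86/include/asm/hypervisor.h; names here are assumptions):

    struct hypervisor_x86 {
        const char *name;
        /* ... existing callbacks ... */
        void (*init_mem_mapping)(void);  /* new: runs after init_mem_mapping() */
    };

    static inline void hypervisor_init_mem_mapping(void)
    {
        /* no-op unless the detected hypervisor installed a callback */
        if (x86_hyper && x86_hyper->init_mem_mapping)
            x86_hyper->init_mem_mapping();
    }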
-
Committed by Doug Smythies

According to the Intel 64 and IA-32 Architectures SDM, Volume 3, Chapter 14.2, "Software needs to exercise care to avoid delays between the two RDMSRs (for example interrupts)". So, disable interrupts while reading the MSRs IA32_APERF and IA32_MPERF. See also commit 4ab60c3f ("cpufreq: intel_pstate: Disable interrupts during MSRs reading").

Signed-off-by: Doug Smythies <dsmythies@telus.net>
Reviewed-by: Len Brown <len.brown@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
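A minimal sketch of the pattern, assuming the standard rdmsrl() accessor (the function name is illustrative):

    static void get_aperf_mperf(u64 *aperf, u64 *mperf)
    {
        unsigned long flags;

        /* keep the two reads back-to-back: no interrupt between them */
        local_irq_save(flags);
        rdmsrl(MSR_IA32_APERF, *aperf);
        rdmsrl(MSR_IA32_MPERF, *mperf);
        local_irq_restore(flags);
    }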
-
Committed by Minchan Kim

Nadav reported that parallel MADV_DONTNEED on the same range has a stale TLB problem, and Mel fixed it [1] and found the same problem in MADV_FREE [2]. Quote from Mel Gorman: "The race in question is CPU 0 running madv_free and updating some PTEs while CPU 1 is also running madv_free and looking at the same PTEs. CPU 1 may have writable TLB entries for a page but fail the pte_dirty check (because CPU 0 has updated it already) and potentially fail to flush. Hence, when madv_free on CPU 1 returns, there are still potentially writable TLB entries and the underlying PTE is still present so that a subsequent write does not necessarily propagate the dirty bit to the underlying PTE any more. Reclaim at some unknown time at the future may then see that the PTE is still clean and discard the page even though a write has happened in the meantime. I think this is possible but I could have missed some protection in madv_free that prevents it happening."

This patch aims to solve both problems at once, and also prepares for the related problems with KSM, MADV_FREE and the soft-dirty story [3]. The TLB batch API (tlb_[gather|finish]_mmu) uses [inc|dec]_tlb_flush_pending and mm_tlb_flush_pending so that when tlb_finish_mmu is called we can detect that parallel threads are operating on the same range. In that case, forcibly flush the TLB to prevent userspace from accessing memory via a stale TLB entry, even though the gather found no page table entries to flush.

I confirmed this patch works with the test program [4] that Nadav gave, so this patch supersedes "mm: Always flush VMA ranges affected by zap_page_range v2" in the current mmotm.

NOTE: This patch modifies the arch-specific TLB gathering interface (x86, ia64, s390, sh, um). Most architectures are straightforward, but s390 needs care because its tlb_flush_mmu works only if mm->context.flush_mm is set to non-zero, which happens only when a pte entry really is cleared by ptep_get_and_clear and friends. However, this problem never changes the pte entries, yet still needs to flush to prevent memory access through a stale tlb.

[1] http://lkml.kernel.org/r/20170725101230.5v7gvnjmcnkzzql3@techsingularity.net
[2] http://lkml.kernel.org/r/20170725100722.2dxnmgypmwnrfawp@suse.de
[3] http://lkml.kernel.org/r/BD3A0EBE-ECF4-41D4-87FA-C755EA9AB6BD@gmail.com
[4] https://patchwork.kernel.org/patch/9861621/

[minchan@kernel.org: decrease tlb flush pending count in tlb_finish_mmu]
Link: http://lkml.kernel.org/r/20170808080821.GA31730@bbox
Link: http://lkml.kernel.org/r/20170802000818.4760-7-namit@vmware.com
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Reported-by: Nadav Amit <namit@vmware.com>
Reported-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
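A hedged sketch of the resulting tlb_finish_mmu() logic, close to the generic mm code (names approximate):

    void tlb_finish_mmu(struct mmu_gather *tlb,
                        unsigned long start, unsigned long end)
    {
        /*
         * Another thread may be batching a flush on the same mm, so our
         * gather could have raced with already-cleared PTEs: flush even
         * if we gathered nothing, so no stale writable TLB entry survives.
         */
        bool force = mm_tlb_flush_nested(tlb->mm);

        arch_tlb_finish_mmu(tlb, start, end, force);
        dec_tlb_flush_pending(tlb->mm);
    }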
-
Committed by Minchan Kim

This is a preparatory patch for solving race problems caused by TLB batching. For that, we will increase/decrease the TLB flush pending count of mm_struct whenever tlb_[gather|finish]_mmu is called. To keep that simple, this patch separates out the architecture-specific part, renames it to arch_tlb_[gather|finish]_mmu, and has the generic part just call it. It shouldn't change any behavior.

Link: http://lkml.kernel.org/r/20170802000818.4760-5-namit@vmware.com
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 10 Aug 2017, 3 commits
-
-
Committed by Icenowy Zheng

The EMAC Ethernet controller was enabled, but an accompanying alias was not added. This results in unstable numbering if other Ethernet devices, such as a USB dongle, are present. Also, the bootloader uses the alias to assign a generated stable MAC address to the device node.

Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Fixes: 96219b00 ("arm64: allwinner: a64: add device tree for SoPine with baseboard")
[wens@csie.org: Rewrite commit log as fixing a previous patch with Fixes]
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
-
Committed by Icenowy Zheng

The EMAC Ethernet controller was enabled, but an accompanying alias was not added. This results in unstable numbering if other Ethernet devices, such as a USB dongle, are present. Also, the bootloader uses the alias to assign a generated stable MAC address to the device node.

Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Fixes: 97023943 ("arm64: allwinner: pine64: Enable dwmac-sun8i")
[wens@csie.org: Rewrite commit log as fixing a previous patch with Fixes]
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
-
Committed by Icenowy Zheng

The EMAC Ethernet controller was enabled, but an accompanying alias was not added. This results in unstable numbering if other Ethernet devices, such as a USB dongle, are present. Also, the bootloader uses the alias to assign a generated stable MAC address to the device node.

Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Fixes: e7295499 ("arm64: allwinner: bananapi-m64: Enable dwmac-sun8i")
[wens@csie.org: Rewrite commit log as fixing a previous patch with Fixes]
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
-
- 09 Aug 2017, 8 commits
-
-
Committed by Nicholas Piggin

When CPUs start and stop the watchdog, they manipulate shared data that is normally protected by the lock. Other CPUs can be running concurrently at this time, so it's a good idea to use locking here to be on the safe side. Remove the barrier, which is undocumented and didn't do anything.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nicholas Piggin

When the SMP detector finds other CPUs stuck, it iterates over them and marks them as stuck. This pulls them out of the pending mask and allows the detector to continue with the remaining good CPUs (if nmi_watchdog=panic is not enabled). The code to do that was buggy: when marking a CPU stuck, if the pending mask became empty, it was reset to keep the watchdog running. However the iterator would then continue to run over the new pending mask and mark the remaining good CPUs as stuck. Fix this by doing it with cpumask bitwise operations.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
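A hedged sketch of the fixed helper, close in spirit to the powerpc watchdog code (mask names assumed):

    static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
    {
        /* mark all given CPUs stuck and drop them from pending in one step */
        cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
        cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
        if (cpumask_empty(&wd_smp_cpus_pending)) {
            wd_smp_last_reset_tb = tb;
            /* restart tracking with every enabled CPU not known stuck */
            cpumask_andnot(&wd_smp_cpus_pending,
                           &wd_cpus_enabled, &wd_smp_cpus_stuck);
        }
    }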
-
Committed by Nicholas Piggin

When the watchdog decides to panic, it takes the lock and double-checks everything (to avoid races with the CPU being unstuck or panic()ed by something else). The exit label was misplaced and would result in an all-CPUs backtrace and watchdog panic even when the condition was found to be resolved.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nicholas Piggin

Some code can go into a tight loop calling touch_nmi_watchdog (e.g., the stop_machine CPU hotplug code). This can cause contention on the watchdog locks, particularly if all CPUs with the watchdog enabled are spinning in such loops. Avoid this storm of activity by running the watchdog timer callback from this path only if we have exceeded the timer period since it was last run.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
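A hedged sketch of the rate-limited path, mirroring arch/powerpc/kernel/watchdog.c (variable names approximate):

    void arch_touch_nmi_watchdog(void)
    {
        unsigned long ticks = tb_ticks_per_usec * wd_timer_period_ms * 1000;
        int cpu = smp_processor_id();

        /* only do the (lock-taking) timer work once per timer period */
        if (get_tb() - per_cpu(wd_timer_tb, cpu) >= ticks)
            watchdog_timer_interrupt(cpu);
    }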
-
Committed by Nicholas Piggin

- Hard-disable interrupts before taking the lock, which prevents soft-NMI re-entrancy and therefore can prevent deadlocks.
- Use raw_ variants of local_irq_disable to avoid irq debugging.
- When the lock is contended, spin at low SMT priority, using loads only, and with interrupts enabled (where possible).

Some stalls have been noticed at high loads that go away with improved locking. There should not be so much locking contention in the first place (which is addressed in a subsequent patch), but locking should still be improved.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
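A hedged sketch of the contended lock path described above (lock name assumed; spin_until_cond() spins at low priority on powerpc):

    static unsigned long __wd_smp_lock;

    static void wd_smp_lock(unsigned long *flags)
    {
        raw_local_irq_save(*flags);
        hard_irq_disable();  /* no soft-NMI re-entrancy while we hold it */
        while (unlikely(test_and_set_bit_lock(0, &__wd_smp_lock))) {
            raw_local_irq_restore(*flags);  /* wait with interrupts enabled */
            /* loads only while contended, at low SMT priority */
            spin_until_cond(!test_bit(0, &__wd_smp_lock));
            raw_local_irq_save(*flags);
            hard_irq_disable();
        }
    }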
-
Committed by Nicholas Piggin

When the NMI IPI lock is contended, spin at low SMT priority, using loads only, and with interrupts enabled (where possible). This improves behaviour under high contention (e.g., a system crash when a number of CPUs are trying to enter the debugger).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

In commit 05a4a952 ("kernel/watchdog: split up config options"), CONFIG_LOCKUP_DETECTOR was split into two separate config options, CONFIG_HARDLOCKUP_DETECTOR and CONFIG_SOFTLOCKUP_DETECTOR. Our defconfigs still have CONFIG_LOCKUP_DETECTOR=y, but that is no longer user-selectable, and we don't mention the new options, so we end up with none of them enabled. So update the defconfigs to turn on the new SOFT and HARD options; the end result is the same as what we had previously.

Fixes: 05a4a952 ("kernel/watchdog: split up config options")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
It was reported that the sha1 AVX2 function (sha1_transform_avx2) reads ahead beyond its intended data, causing a crash if the next block is beyond a page boundary: http://marc.info/?l=linux-crypto-vger&m=149373371023377

This patch makes sure that there is no overflow for any buffer length. It passes the tests written by Jan Stancek that revealed this problem: https://github.com/jstancek/sha1-avx2-crash

I have re-enabled sha1-avx2 by reverting commit b82ce244.

Cc: <stable@vger.kernel.org>
Fixes: b82ce244 ("crypto: sha1-ssse3 - Disable avx2")
Originally-by: Ilya Albrekht <ilya.albrekht@intel.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Reported-by: Jan Stancek <jstancek@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 08 Aug 2017, 5 commits
-
-
Committed by Paul Burton

When building a kernel for the microMIPS ISA, ensure that the ISA bit (i.e. bit 0) in the entry address is set. Otherwise we may include an entry address in images which bootloaders will jump to as MIPS32 code.

I originally tried using "objdump -f" to obtain the entry address, which works for microMIPS, but it always outputs a 32-bit address for a 32-bit ELF, whilst nm will sign-extend to 64 bits. That matters for systems where we might want to run a MIPS32 kernel on a MIPS64 CPU & load it with a MIPS64 bootloader, which would then jump to a non-canonical (non-sign-extended) address.

This works in all cases as it only changes the behaviour for microMIPS kernels, but isn't the prettiest solution. A possible alternative would be to write a custom tool to just extract, sign-extend & print the entry point of an ELF executable. I'm open to feedback if that would be preferred.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16950/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
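A small standalone C demonstration of the sign-extension concern, with a hypothetical entry address:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t entry = 0x80100000u;             /* as "objdump -f" reports it */
        uint64_t canon = (int64_t)(int32_t)entry; /* as nm reports it */

        printf("non-canonical: 0x%016llx\n", (unsigned long long)entry);
        printf("canonical:     0x%016llx\n", (unsigned long long)canon);
        /* a microMIPS kernel additionally needs bit 0 set: entry | 1 */
        return 0;
    }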
-
Committed by Paul Burton

We don't currently support the MT ASE for microMIPS kernels, and there are no CPUs currently in existence that use both. They can however both be enabled in Kconfig, resulting in build failures such as:

  AS arch/mips/kernel/cps-vec.o
  arch/mips/kernel/cps-vec.S: Assembler messages:
  arch/mips/kernel/cps-vec.S:242: Warning: the 32-bit microMIPS architecture does not support the `mt' extension
  arch/mips/kernel/cps-vec.S:276: Error: unrecognized opcode `mttc0 $13,$2,2'
  arch/mips/kernel/cps-vec.S:282: Error: unrecognized opcode `mttc0 $8,$1,2'
  arch/mips/kernel/cps-vec.S:285: Error: unrecognized opcode `mttc0 $0,$2,1'
  ...

Fix this by preventing MT from being enabled when targeting microMIPS.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16951/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Gautham R. Shenoy

Currently, we use the opal call opal_slw_set_reg() to inform the Sleep-Winkle Engine (SLW) to restore the contents of some of the Hypervisor state on wakeup from deep idle states that lose full hypervisor context (characterized by the flag OPAL_PM_LOSE_FULL_CONTEXT). However, the current code has a bug: if opal_slw_set_reg() fails, we don't disable the use of these deep states (winkle on POWER8, stop4 onwards on POWER9).

This patch fixes the bug by ensuring that if programming the sleep-winkle engine to restore the hypervisor states in pnv_save_sprs_for_deep_states() fails, we exclude such states by clearing the OPAL_PM_LOSE_FULL_CONTEXT flag from supported_cpuidle_states. As a result, POWER8 will be prevented from using winkle for CPU-Hotplug, and POWER9 will put offlined CPUs into the default stop state when available. Further, the initialization of the cpuidle-powernv driver is made to include only those states whose flags are present in supported_cpuidle_states, thereby skipping OPAL_PM_LOSE_FULL_CONTEXT states when they have been disabled due to a stop-api failure.

Fixes: 1e1601b3 ("powerpc/powernv/idle: Restore SPRs for deep idle states via stop API.")
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
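A hedged sketch of the error handling being added; the flag and the save function come from the commit text, while the wrapper itself is illustrative:

    static void __init pnv_check_deep_states(void)
    {
        int rc = pnv_save_sprs_for_deep_states();

        if (rc && (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT)) {
            /* stop-api failed: never enter full-context-loss states */
            supported_cpuidle_states &= ~OPAL_PM_LOSE_FULL_CONTEXT;
            pr_warn("cpuidle-powernv: disabling deep idle states, stop-api failed\n");
        }
    }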
-
Committed by Matt Redfearn

Commit 1c3c5eab ("sched/core: Enable might_sleep() and smp_processor_id() checks early") enables checks for might_sleep() and smp_processor_id() being used in preemptible code earlier in the boot than before. This results in a new BUG from pcibios_set_cache_line_size():

BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
caller is pcibios_set_cache_line_size+0x10/0x70
CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.13.0-rc1-00007-g3ce3e4ba4275 #615
Stack: 0000000000000000 ffffffff81189694 0000000000000000 ffffffff81822318
       000000000000004e 0000000000000001 800000000e20bd08 20c49ba5e3540000
       0000000000000000 0000000000000000 ffffffff818d0000 0000000000000000
       0000000000000000 ffffffff81189328 ffffffff818ce692 0000000000000000
       0000000000000000 ffffffff81189bc8 ffffffff818d0000 0000000000000000
       ffffffff81828907 ffffffff81769970 800000020ec78d80 ffffffff818c7b48
       0000000000000001 0000000000000001 ffffffff818652b0 ffffffff81896268
       ffffffff818c0000 800000020ec7fb40 800000020ec7fc58 ffffffff81684cac
       0000000000000000 ffffffff8118ab50 0000000000000030 ffffffff81769970
       0000000000000001 ffffffff81122a58 0000000000000000 0000000000000000
       ...
Call Trace:
[<ffffffff81122a58>] show_stack+0x90/0xb0
[<ffffffff81684cac>] dump_stack+0xac/0xf0
[<ffffffff813f7050>] check_preemption_disabled+0x120/0x128
[<ffffffff818855e8>] pcibios_set_cache_line_size+0x10/0x70
[<ffffffff81100578>] do_one_initcall+0x48/0x140
[<ffffffff81865dc4>] kernel_init_freeable+0x194/0x24c
[<ffffffff8169c534>] kernel_init+0x14/0x118
[<ffffffff8111ca84>] ret_from_kernel_thread+0x14/0x1c

Fix this by using the cpu_*cache_line_size() macros instead. These macros are the "proper" way to determine the CPU cache line sizes. This makes use of the newly added cpu_tcache_line_size.

Fixes: 1c3c5eab ("sched/core: Enable might_sleep() and smp_processor_id() checks early")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Suggested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Matt Redfearn

There exist macros to return the cache line size of the L1 dcache and L2 scache, but there is currently no macro for the L3 tcache. Add this macro, which will be used by the following patch "MIPS: PCI: Fix smp_processor_id() in preemptible".

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16871/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 07 Aug 2017, 10 commits
-
-
Committed by Maciej W. Rozycki

Fix a regression from commit 3021773c ("MIPS: DEC: Avoid la pseudo-instruction in delay slots") and remove assembly errors:

arch/mips/dec/int-handler.S: Assembler messages:
arch/mips/dec/int-handler.S:162: Error: Macro used $at after ".set noat"
arch/mips/dec/int-handler.S:163: Error: Macro used $at after ".set noat"
arch/mips/dec/int-handler.S:229: Error: Macro used $at after ".set noat"
arch/mips/dec/int-handler.S:230: Error: Macro used $at after ".set noat"

triggering with the CPU_DADDI_WORKAROUNDS option set and the DADDIU instruction. This is because with that option in place the instruction becomes a macro, which expands to an LI/DADDU (or actually ADDIU/DADDU) sequence that uses $at as a temporary register.

With CPU_DADDI_WORKAROUNDS we only support `-msym32' compilation though, and this is already enforced in arch/mips/Makefile, so choose the 32-bit expansion variant for the supported configurations and then replace the 64-bit variant with #error just in case.

Fixes: 3021773c ("MIPS: DEC: Avoid la pseudo-instruction in delay slots")
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 4.8+
Patchwork: https://patchwork.linux-mips.org/patch/16893/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Goran Ferenc

Extend the clobber lists to include all GP registers.

Fixes: 0b523a85 ("MIPS: VDSO: Add implementation of gettimeofday() fallback")
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Bo Hu <bohu@google.com>
Cc: Douglas Leung <douglas.leung@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jin Qian <jinqian@google.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Petar Jovanovic <petar.jovanovic@imgtec.com>
Cc: Raghu Gandham <raghu.gandham@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16879/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Michael Ellerman

This reverts commit bc4f65e4. As reported by Andreas, that commit is causing unrecoverable SLB misses in the system call exit path:

Unrecoverable exception 4100 at c00000000000a1ec
Oops: Unrecoverable exception, sig: 6 [#1]
SMP NR_CPUS=2 PowerMac
...
CPU: 0 PID: 18626 Comm: rm Not tainted 4.13.0-rc3 #1
task: c00000018335e080 task.stack: c000000139e50000
NIP: c00000000000a1ec LR: c00000000000a118 CTR: 0000000000000000
REGS: c000000139e53bb0 TRAP: 4100 Not tainted (4.13.0-rc3)
MSR: 9000000000001030 <SF,HV,ME,IR,DR> CR: 24000044 XER: 20000000
SOFTE: 1
GPR00: 0000000000000000 c000000139e53e30 c000000000abb500 fffffffffffffffe
GPR04: c0000001eb866298 0000000000000000 0000000000000000 c00000018335e080
GPR08: 900000000000d032 0000000000000000 0000000000000002 fffffffffffff001
GPR12: c000000139e50000 c00000000ffff000 00003fffa8c0dca0 00003fffa8c0dc88
GPR16: 0000000010000000 0000000000000001 00003fffa8c0eaa0 0000000000000000
GPR20: 00003fffa8c27528 00003fffa8c27b00 0000000000000000 0000000000000000
GPR24: 00003fffa8c0d918 00003ffff1b3efa0 00003fffa8c26d68 0000000000000000
GPR28: 00003fffa8c249e8 00003fffa8c263d0 00003fffa8c27550 00003ffff1b3ef10
NIP [c00000000000a1ec] system_call_exit+0xc0/0x21c
LR [c00000000000a118] system_call+0x58/0x6c
Call Trace:
[c000000139e53e30] [c00000000000a118] system_call+0x58/0x6c (unreliable)
Instruction dump:
64a51000 7c6300d0 f8a101a0 4bffff9c 3c000000 60000006 780007c6 64000000
60000000 7c004039 4082001c e8ed0170 <88070b78> 88c70b79 7c003214 2c200000

This is caused by us trying to load THREAD_LOAD_FP with MSR_RI=0, and taking an SLB miss on the thread struct.

Reported-by: Andreas Schwab <schwab@linux-m68k.org>
Diagnosed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Paul Burton

Commit 296e46db ("MIPS: Don't unnecessarily include kmalloc.h into <asm/cache.h>.") claimed that the inclusion of the machine's kmalloc.h from asm/cache.h is unnecessary, but this is not true. Without including kmalloc.h we don't get a definition for ARCH_DMA_MINALIGN, which means we no longer suitably align DMA. Further, the definition of ARCH_KMALLOC_MINALIGN provided by linux/slab.h ends up being set to the alignment of an unsigned long long value rather than to ARCH_DMA_MINALIGN, which means that buffers allocated using kmalloc may no longer be safely aligned for use with DMA. Fix this by re-adding the include of kmalloc.h in asm/cache.h.

This reverts commit 296e46db ("MIPS: Don't unnecessarily include kmalloc.h into <asm/cache.h>.").

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 296e46db ("MIPS: Don't unnecessarily include kmalloc.h into <asm/cache.h>.")
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org> # v4.12+
Patchwork: https://patchwork.linux-mips.org/patch/16895/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
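For context, linux/slab.h chooses the kmalloc alignment roughly like this (paraphrased, not verbatim), which is why a silently missing ARCH_DMA_MINALIGN weakens the alignment guarantee:

    /* paraphrase of the linux/slab.h fallback logic */
    #if defined(ARCH_DMA_MINALIGN) && ARCH_DMA_MINALIGN > 8
    #define ARCH_KMALLOC_MINALIGN  ARCH_DMA_MINALIGN              /* DMA-safe */
    #else
    #define ARCH_KMALLOC_MINALIGN  __alignof__(unsigned long long) /* 8 bytes */
    #endif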
-
Committed by Steven J. Hill

Fix a build error when CONFIG_SMP is turned off:

  CC [M] arch/mips/cavium-octeon/octeon-usb.o
  arch/mips/cavium-octeon/octeon-usb.c: In function 'dwc3_octeon_device_init':
  arch/mips/cavium-octeon/octeon-usb.c:540:4: error: implicit declaration of function 'devm_iounmap' [-Werror=implicit-function-declaration]
     devm_iounmap(&pdev->dev, base);

Signed-off-by: Steven J. Hill <steven.hill@cavium.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Tested-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16907/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Steven J. Hill

Commit "MIPS: Octeon: Remove unused L2C types and macros." broke the EDAC driver. Bring back the 'cvmx-l2d-defs.h' file and the missing L2C types.

Fixes: 15f68479 ("MIPS: Octeon: Remove unused L2C types and macros.")
Signed-off-by: Steven J. Hill <steven.hill@cavium.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.12+
Patchwork: https://patchwork.linux-mips.org/patch/16906/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Bartosz Golaszewski

Add ashldi3.c and bswapsi.c to the list of ignored files.

Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16905/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Matija Glavinic Pecotic

While testing cpu hotplug (cpu down and up in loops) on kernel 4.4, it was observed that the check for cpu online in kernel/cpu.c, _cpu_up, occasionally fails: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/kernel/cpu.c?h=v4.4.79#n485

518 /* Arch-specific enabling code. */
519 ret = __cpu_up(cpu, idle);
520
521 if (ret != 0)
522         goto out_notify;
523 BUG_ON(!cpu_online(cpu));

The reason is a race between start_secondary and _cpu_up: cpu_callin_map is set before cpu_online_mask. In __cpu_up, cpu_callin_map is waited for, but the cpu online mask is not, resulting in a race in which the secondary processor has started and set cpu_callin_map, but has not yet set the online mask, so the BUG above is hit.

Upstream differs in this area: the cpu_online check is in bringup_wait_for_ap, which runs after the cpu has reached AP_ONLINE_IDLE, where the secondary has passed its start function. Nonetheless, the fix makes start_secondary safe and independent of other locks throughout the code. It also protects against cpu_online checks put in between sometime in the future.

Fix this by moving the completion after all flags are set.

Signed-off-by: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
Cc: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16925/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
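A hedged, simplified sketch of the reordering in the MIPS start_secondary path (surrounding init elided; exact statements and ordering in arch/mips/kernel/smp.c may differ):

    asmlinkage void start_secondary(void)
    {
        unsigned int cpu = smp_processor_id();

        /* ... per-CPU initialization elided ... */
        notify_cpu_starting(cpu);
        cpumask_set_cpu(cpu, &cpu_callin_map);
        set_cpu_online(cpu, true);
        /*
         * __cpu_up() is released here and may immediately run
         * BUG_ON(!cpu_online(cpu)), so complete() must come after
         * every flag above is published.
         */
        complete(&cpu_running);
        cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
    }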
-
Committed by Thomas Petazzoni

Fix the following gcc 7.x build error:

arch/mips/mm/uasm-mips.c:51:26: error: duplicate 'const' declaration specifier [-Werror=duplicate-decl-specifier]
 static const struct insn const insn_table[insn_invalid] = {

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Fixes: ce807d5f ("MIPS: Optimize uasm insn lookup.")
Cc: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16926/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Kuninori Morimoto

The clock name "audio_clkout" is used by the Renesas sound driver. This duplicated naming breaks its clock registering/unregistering; in particular, on unbind/bind it can't handle clkout correctly. This patch renames "audio_clkout" to "audio-clkout" to avoid the naming conflict.

Fixes: 8a8f181d ("arm64: renesas: salvator-x: use CS2000 as AUDIO_CLK_B")
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
-
- 05 Aug 2017, 2 commits
-
-
Committed by Martin Kaiser

Add a ranges; line to the tscadc node. This creates a 1:1 mapping between the addresses used by tscadc and those in its child nodes (adc, tsc). Without such a mapping, the reg = ... lines in the tsc and adc nodes do not create a resource. Probing the fsl-imx25-tcq and fsl-imx25-tsadc drivers will then fail since there is no IORESOURCE_MEM.

Signed-off-by: Martin Kaiser <martin@kaiser.cx>
Fixes: 92f651f3 ("ARM: dts: imx25: Add TSC and ADC support")
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
-
Committed by David Daney

Inexplicably, commit f381bf6d ("MIPS: Add support for eBPF JIT.") lost a file somewhere on its path to Linus' tree. Add back the missing ebpf_jit.c so that we can build with CONFIG_BPF_JIT selected. This version of ebpf_jit.c is identical to the original except for two minor changes needed to resolve conflicts with changes merged from the BPF branch:

A) Set prog->jited_len = image_size;
B) Use BPF_TAIL_CALL instead of BPF_CALL | BPF_X

Fixes: f381bf6d ("MIPS: Add support for eBPF JIT.")
Signed-off-by: David Daney <david.daney@cavium.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-