- 07 October 2016, 6 commits
-
Submitted by Rik van Riel
With the removal of the lazy FPU code, this field is no longer used. Get rid of it. Signed-off-by: Rik van Riel <riel@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: pbonzini@redhat.com Link: http://lkml.kernel.org/r/1475627678-20788-7-git-send-email-riel@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Rik van Riel
With the lazy FPU code gone, we no longer use the counter field in struct fpu for anything. Get rid of it. Signed-off-by: Rik van Riel <riel@redhat.com> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: pbonzini@redhat.com Link: http://lkml.kernel.org/r/1475627678-20788-6-git-send-email-riel@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Andy Lutomirski
This removes all the obvious code paths that depend on lazy FPU mode. It shouldn't change the generated code at all. Signed-off-by: NAndy Lutomirski <luto@kernel.org> Signed-off-by: NRik van Riel <riel@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: pbonzini@redhat.com Link: http://lkml.kernel.org/r/1475627678-20788-5-git-send-email-riel@redhat.comSigned-off-by: NIngo Molnar <mingo@kernel.org>
-
Submitted by Andy Lutomirski
Now that lazy mode is gone, we don't need to distinguish which xfeatures require eager mode. Signed-off-by: NAndy Lutomirski <luto@kernel.org> Signed-off-by: NRik van Riel <riel@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: pbonzini@redhat.com Link: http://lkml.kernel.org/r/1475627678-20788-4-git-send-email-riel@redhat.comSigned-off-by: NIngo Molnar <mingo@kernel.org>
-
Submitted by Andy Lutomirski
Since commit: 58122bf1 ("x86/fpu: Default eagerfpu=on on all CPUs") ... in Linux 4.6, eager FPU mode has been the default on all x86 systems, and no one has reported any regressions. This patch removes the ability to enable lazy mode: use_eager_fpu() becomes "return true" and all of the FPU mode selection machinery is removed. Signed-off-by: NAndy Lutomirski <luto@kernel.org> Signed-off-by: NRik van Riel <riel@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: pbonzini@redhat.com Link: http://lkml.kernel.org/r/1475627678-20788-3-git-send-email-riel@redhat.comSigned-off-by: NIngo Molnar <mingo@kernel.org>
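For illustration, a minimal sketch of what the helper described above collapses to once lazy mode is removed (the surrounding mode-selection code that the patch deletes is not shown):

  /* Eager FPU switching is now unconditional. */
  static __always_inline bool use_eager_fpu(void)
  {
          return true;
  }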
-
Submitted by Andy Lutomirski
The crypto code was checking both use_eager_fpu() and defined(X86_FEATURE_EAGER_FPU). The latter was nonsensical, so remove it. This will avoid breakage when we remove X86_FEATURE_EAGER_FPU. Signed-off-by: NAndy Lutomirski <luto@kernel.org> Signed-off-by: NRik van Riel <riel@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: pbonzini@redhat.com Link: http://lkml.kernel.org/r/1475627678-20788-2-git-send-email-riel@redhat.comSigned-off-by: NIngo Molnar <mingo@kernel.org>
-
- 03 October 2016, 1 commit
-
Submitted by Srinivas Ramana
If the bootloader uses the long descriptor format and jumps to the kernel decompressor code, TTBCR may not be in the right state. Before enabling the MMU, it is required to clear the TTBCR.PD0 field to use TTBR0 for translation table walks. The commit dbece458 ("ARM: 7501/1: decompressor: reset ttbcr for VMSA ARMv7 cores") does reset TTBCR.N, but doesn't consider all the bits for the size of TTBCR.N. Clear the TTBCR.PD0 field and reset all three bits of TTBCR.N to indicate the use of TTBR0 and the correct base address width. Fixes: dbece458 ("ARM: 7501/1: decompressor: reset ttbcr for VMSA ARMv7 cores") Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Srinivas Ramana <sramana@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
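For illustration, a hedged sketch of the TTBCR fixup being described, written as a C helper (the actual fix lives in the assembly decompressor, and the helper name is hypothetical):

  static inline void reset_ttbcr(void)
  {
          unsigned long ttbcr;

          /* TTBCR is CP15 c2, c0, 2 */
          asm volatile("mrc p15, 0, %0, c2, c0, 2" : "=r" (ttbcr));
          ttbcr &= ~((1UL << 4) | 0x7UL);   /* clear PD0 (bit 4) and N[2:0] */
          asm volatile("mcr p15, 0, %0, c2, c0, 2" : : "r" (ttbcr));
          asm volatile("isb");
  }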
-
- 02 October 2016, 1 commit
-
Submitted by Paul Burton
When discovering the number of VPEs per core, smp_num_siblings will be incorrect for kernels built without support for the MIPS MultiThreading (MT) ASE running on systems which implement said ASE. This leads to accesses to VPEs in secondary cores being performed incorrectly since mips_cm_vp_id calculates the wrong ID to write to the local "other" registers. Fix this by examining the number of VPEs in the core as reported by the CM. This patch presumes that the number of VPEs will be the same in each core of the system. As this path only applies to systems with CM version 2.5 or lower, and this property is true of all such known systems, this is likely to be fine but is described in a comment for good measure. Signed-off-by: NPaul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14338/Signed-off-by: NRalf Baechle <ralf@linux-mips.org>
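A hedged sketch of the CM query being described; the accessor and field-macro names below are written from memory and may not match the kernel's exact identifiers:

  static unsigned int core_nvpes(void)
  {
          unsigned int cfg;

          if (!mips_cm_present())
                  return 1;

          /* Assumes every core reports the same VP(E) count, as noted above. */
          cfg = read_gcr_cl_config() & CM_GCR_Cx_CONFIG_PVPE_MSK;
          return (cfg >> CM_GCR_Cx_CONFIG_PVPE_SHF) + 1;
  }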
-
- 01 October 2016, 19 commits
-
Submitted by Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Vineet Gupta
Commit d9676fa1 ("ARCv2: Enable LOCKDEP") changed local_save_flags() to return not the raw STATUS32 value but one encoded in a form that can be fed directly to the CLRI/SETI instructions. However, the STATUS32.E[] field was not captured correctly, as it corresponds to bits [4:1] of the register and not [3:0]. Fixes: d9676fa1 ("ARCv2: Enable LOCKDEP") Cc: Evgeny Voevodin <evgeny.voevodin@intel.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Noam Camus
It seems the values were assigned as absolute numbers and not as shift values, i.e. the field should be 0 for one node (2^0) and 1 for two nodes (2^1). Signed-off-by: Noam Camus <noamca@mellanox.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Yuriy Kolerov
At the end of "arc_init_IRQ" the STATUS32.IE flag is meant to be updated by the "flag" instruction, but "flag" never touches the IE flag on ARCv2. So the "kflag" instruction must be used instead of "flag". Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com> Cc: stable@vger.kernel.org #4.2+ Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
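A hedged sketch of the change in arc_init_IRQ() (surrounding code omitted and register/mask names simplified to the ones I recall): the prepared STATUS32 value has to be written with "kflag", since "flag" on ARCv2 leaves the IE bit alone.

  unsigned int tmp;

  tmp = read_aux_reg(ARC_REG_STATUS32);
  tmp &= ~STATUS_IE_MASK;                    /* keep interrupts off for now */
  asm volatile("kflag %0" : : "r" (tmp));    /* "flag" would not update IE */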
-
Submitted by Vineet Gupta
We used to keep the .exit.* sections because the linker would fail in the final link due to references from .debug_frame, which itself could not be discarded due to the forced "write,alloc" attributes for it:

  | LD init/built-in.o
  | `.exit.text' referenced in section `.debug_frame' of arch/arc/built-in.o: defined in discarded section `.exit.text' of arch/arc/built-in.o
  | Makefile:949: recipe for target 'vmlinux' failed

With .debug_frame now retired, this hack is no longer needed. The kernel binary is now a little bit smaller as well. Closes STAR 9000549913 Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Vineet Gupta
This uses a new set of annotations, viz. ENTRY_CFI/END_CFI, to enable CFI ops generation. Note that we didn't change the normal ENTRY/EXIT, as we don't actually want unwind info in the trap/exception/interrupt handlers which use these: the unwinder then gets confused (it keeps recursing vs. stopping). Semantically these are leaf routines, and unwinding should stop when it hits those routines. Before ------ 28.52% 1.19% 9929 hackbench libuClibc-1.0.17.so [.] __write_nocancel | ---__write_nocancel |--8.95%--EV_Trap | --8.25%--sys_write | |--3.93%--sock_write_iter ... |--2.62%--memset <==== [LEAF entry as no unwind info] ^^^^^^ After ----- 29.46% 1.24% 13622 hackbench libuClibc-1.0.17.so [.] __write_nocancel | ---__write_nocancel |--9.31%--EV_Trap | --8.62%--sys_write | |--4.17%--sock_write_iter ... |--6.19%--sys_write | --6.19%--sock_write_iter | unix_stream_sendmsg | |--1.62%--sock_alloc_send_pskb | |--0.89%--sock_def_readable | |--0.88%--_raw_spin_unlock_irqrestore | |--0.69%--memset | | ^^^^^^ <==== [now in proper callframe] | | | --0.52%--skb_copy_datagram_from_iter Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Vineet Gupta
1. Detect whether binutils supports the CFI pseudo ops.
2. Define conditional macros to generate the ops.
3. Define new ENTRY_CFI/END_CFI to annotate hand-written asm code (a sketch of such macros follows this entry).
   - Needed because we don't want to emit dwarf info in the general ENTRY/END used by the lowest-level trap/exception/interrupt handlers, as the unwinder gets confused trying to unwind out of them. We want the unwinder to instead stop when it hits one of those routines.
   - These provide minimal start/end CFI ops, assuming the routine doesn't touch stack memory/regs.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
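A hedged sketch of what such annotations look like; the kernel's real macros in the ARC linkage header differ in detail, this only illustrates the ENTRY/END-plus-CFI idea:

  #ifdef __ASSEMBLY__

  #define ENTRY_CFI(name)          \
          .globl name;             \
          .align 4;                \
  name:;                           \
          .cfi_startproc

  #define END_CFI(name)            \
          .cfi_endproc;            \
          .size name, .-name

  #endif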
-
Submitted by Vineet Gupta
This essentially removes the ENTRY() assembler annotation for this symbol, since it didn't have a pairing END(). This is in preparation for introducing CFI pseudo ops in ENTRY/END, which expect paired cfi_startproc/cfi_endproc:

  | ../arch/arc/kernel/entry.S: Assembler messages:
  | ../arch/arc/kernel/entry.S:270: Error: previous CFI entry not closed (missing .cfi_endproc)
  | ../scripts/Makefile.build:326: recipe for target 'arch/arc/kernel/entry-arcv2.o' failed
  | make[4]: *** [arch/arc/kernel/entry-arcv2.o] Error 1

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Vineet Gupta
In the .debug_frame based unwinding regime, we used to force -gdwarf-2 since the kernel unwinder only claimed to handle DWARF 2. This changed with commit 6d0d5060 ("ARC: dw2 unwind: Don't bail for CIE.version != 1"), which added some support beyond DWARF 2, at least to handle CIE.version != 1. The ill effect of -gdwarf-2 is that it forces generation of .debug_* sections, which bloats loadable module .ko files. For the curious, this doesn't affect the vmlinux binary since the linker script discards .debug_*, but the same discard is not yet implemented for modules. So it seems we can drop the -gdwarf-2 toggle, which should not be needed anyway given that we now use .eh_frame based unwinding. I've verified using GNU 2016.09-engo10 that the actual unwind info is no different with or without this toggle, but the .debug_* sections are gone for good.

before
-----
arc-linux-readelf -S q_proc.ko-unwinding-1-eh_frame-switch | grep debug
[15] .debug_info PROGBITS 00000000 000300 00d08d 00 0 0 1
[16] .rela.debug_info RELA 00000000 0162a0 008844 0c I 29 15 4
[17] .debug_abbrev PROGBITS 00000000 00d38d 0005f8 00 0 0 1
[18] .debug_loc PROGBITS 00000000 00d985 000070 00 0 0 1
[19] .rela.debug_loc RELA 00000000 01eae4 0000c0 0c I 29 18 4
[20] .debug_aranges PROGBITS 00000000 00d9f5 000040 00 0 0 1
[21] .rela.debug_arang RELA 00000000 01eba4 000030 0c I 29 20 4
[22] .debug_ranges PROGBITS 00000000 00da35 000018 00 0 0 1
[23] .rela.debug_range RELA 00000000 01ebd4 000030 0c I 29 22 4
[24] .debug_line PROGBITS 00000000 00da4d 000b5b 00 0 0 1
[25] .rela.debug_line RELA 00000000 01ec04 0000cc 0c I 29 24 4
[26] .debug_str PROGBITS 00000000 00e5a8 007831 01 MS 0 0 1

after
----

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Vineet Gupta
So, after almost 8 years of dealing with .debug_frame, we are finally switching to .eh_frame. The reason is that stripped kernel binaries had a non-functional unwinder, as .debug_frame was gone. Also, in general, .eh_frame seems the more common way of doing unwinding. This also folds a revert of f52e126c ("ARC: unwind: ensure that .debug_frame is generated (vs. .eh_frame)") to ensure that we start getting .eh_frame. Reported-by: Daniel Mentz <danielmentz@google.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Vineet Gupta
This paves the way for switching to .eh_frame based unwinding. Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Alexey Brodkin
We used to live with PERF_COUNT_HW_CACHE_REFERENCES and PERF_COUNT_HW_CACHE_MISSES not specified on ARC. Those events are actually aliases to two cache events that we do support, so this change wires up the "cache-references" and "cache-misses" events in the same way as "L1-dcache-loads" and "L1-dcache-load-misses". While at it, add debug info for cache events and make a subtle fix in the HW events debug info: the config value is much better represented in hex, so we can see not only the event index but also any other control bits that are set. Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
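A hedged sketch of the aliasing being described; the array name follows the ARC perf code as I recall it, the sizing is simplified, and the quoted condition-name strings are placeholders rather than the real ARC PCT identifiers:

  static const char * const arc_pmu_ev_hw_map[PERF_COUNT_HW_MAX] = {
          /* generic cache events simply alias the L1 dcache load events */
          [PERF_COUNT_HW_CACHE_REFERENCES] = "l1d-access",  /* placeholder name */
          [PERF_COUNT_HW_CACHE_MISSES]     = "l1d-miss",    /* placeholder name */
  };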
-
Submitted by Noam Camus
Fix the build breakage caused by the last changes to the generic atomic operations: add a couple of missing macros which are now mandatory. Signed-off-by: Noam Camus <noamca@mellanox.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Vineet Gupta
The ARCv2 ISA provides 64-bit exclusive load/stores, so use them to implement the 64-bit atomics and elide the spinlock-based generic 64-bit atomics. Boot tested with the atomic64 self-test (and God bless the person who wrote it; I realized my inline assembly is sloppy as hell). Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
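A hedged sketch of the exclusive 64-bit load/store retry loop such an implementation builds on; the operand constraints are simplified and the in-tree code (generated via macros) differs, so treat instruction spellings here as assumptions:

  static inline void arc_atomic64_add(s64 a, atomic64_t *v)
  {
          s64 val;

          __asm__ __volatile__(
          "1:                             \n"
          "       llockd  %0, [%1]        \n"     /* 64-bit locked load */
          "       add.f   %L0, %L0, %L2   \n"     /* low word, sets carry */
          "       adc     %H0, %H0, %H2   \n"     /* high word plus carry */
          "       scondd  %0, [%1]        \n"     /* store if still exclusive */
          "       bnz     1b              \n"
          : "=&r" (val)
          : "r" (&v->counter), "r" (a)
          : "cc", "memory");
  }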
-
Submitted by Vineet Gupta
HS release 3.0 provides even more flexibility in specifying the volatile address space for mapping peripherals. With HS 2.1, @start was made flexible / programmable; with HS 3.0 even @end can be set up (vs. fixed to 0xFFFF_FFFF before). So add code to reflect that and, while at it, remove an unused struct definition. Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Vineet Gupta
The cool thing is that the same kernel image can run on:
- the nsim OSCI simulation platform
- SDPlite FPGA setups
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Submitted by Alexey Brodkin
As was discussed quite some time ago (see https://lkml.org/lkml/2015/11/5/862), it's good practice to add a "model" property in .dts files. Moreover, as per ePAPR, the "model" property is required and should look like "manufacturer,model", so do that here. Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Jonas Gorski <jonas.gorski@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Rob Herring <robh@kernel.org> Cc: Christian Ruppert <christian.ruppert@alitech.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 30 September 2016, 13 commits
-
Submitted by Wanpeng Li
This warning:

  WARNING: CPU: 0 PID: 3331 at arch/x86/entry/common.c:45 enter_from_user_mode+0x32/0x50
  CPU: 0 PID: 3331 Comm: ldt_gdt_64 Not tainted 4.8.0-rc7+ #13
  Call Trace:
   dump_stack+0x99/0xd0
   __warn+0xd1/0xf0
   warn_slowpath_null+0x1d/0x20
   enter_from_user_mode+0x32/0x50
   error_entry+0x6d/0xc0
   ? general_protection+0x12/0x30
   ? native_load_gs_index+0xd/0x20
   ? do_set_thread_area+0x19c/0x1f0
   SyS_set_thread_area+0x24/0x30
   do_int80_syscall_32+0x7c/0x220
   entry_INT80_compat+0x38/0x50

... can be reproduced by running the GS testcase of the ldt_gdt test unit in the x86 selftests. do_int80_syscall_32() will call enter_from_user_mode() to convert the context tracking state from user state to kernel state. The load_gs_index() call can fail with a user gsbase; if this happens, gsbase is fixed up and execution proceeds. However, enter_from_user_mode() will then be called again in the fixed-up path even though context tracking is already in kernel state. This patch fixes it by just fixing up gsbase and telling lockdep that IRQs are off once load_gs_index() has failed with a user gsbase. Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1475197266-3440-1-git-send-email-wanpeng.li@hotmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Andy Lutomirski
Otherwise arch_task_struct_size == 0 and we die. While we're at it, set X86_FEATURE_ALWAYS, too. Reported-by: NDavid Saggiorato <david@saggiorato.net> Tested-by: NDavid Saggiorato <david@saggiorato.net> Signed-off-by: NAndy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave@sr71.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Fixes: aaeb5c01c5b ("x86/fpu, sched: Introduce CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT and use it on x86") Link: http://lkml.kernel.org/r/8de723afbf0811071185039f9088733188b606c9.1475103911.git.luto@kernel.orgSigned-off-by: NIngo Molnar <mingo@kernel.org>
-
Submitted by Andy Lutomirski
We use __read_cr4() vs __read_cr4_safe() inconsistently. On CR4-less CPUs, all CR4 bits are effectively clear, so we can make the code simpler and more robust by making __read_cr4() always fix up faults on 32-bit kernels. This may fix some bugs on old 486-like CPUs, but I don't have any easy way to test that. Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Brian Gerst <brgerst@gmail.com> Cc: Borislav Petkov <bp@alien8.de> Cc: david@saggiorato.net Link: http://lkml.kernel.org/r/ea647033d357d9ce2ad2bbde5a631045f5052fb6.1475178370.git.luto@kernel.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
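A hedged sketch of the idea behind the fault-tolerant CR4 read (simplified; the kernel's native_read_cr4_safe() differs in detail, and the helper name here is made up): on a CPU without CR4 the mov faults, and an exception-table fixup lets the read come back as zero instead of killing the machine.

  static inline unsigned long cr4_read_or_zero(void)
  {
          unsigned long val = 0;

          asm volatile("1: mov %%cr4, %0\n"
                       "2:\n"
                       _ASM_EXTABLE(1b, 2b)
                       : "+r" (val));
          return val;
  }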
-
Submitted by Segher Boessenkool
We need to call GET_LE to read hdr->e_type. Fixes: 57f90c3d ("x86/vdso: Error out if the vDSO isn't a valid DSO") Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: linux-next@vger.kernel.org Link: http://lkml.kernel.org/r/20160929193442.GA16617@gate.crashing.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
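A hedged sketch of the shape of this fix in the vdso2c tool (surrounding context omitted; fail() is assumed to be the tool's own error helper):

  /* read the ELF header field through the endian-aware accessor */
  if (GET_LE(&hdr->e_type) != ET_DYN)
          fail("input is not a shared object\n");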
-
Submitted by Andy Lutomirski
The condition for reading CR4 was wrong: there are some CPUs with CPUID but not CR4. Rather than trying to make the condition exact, use __read_cr4_safe(). Fixes: 18bc7bd5 ("x86/boot: Synchronize trampoline_cr4_features and mmu_cr4_features directly") Reported-by: david@saggiorato.net Signed-off-by: Andy Lutomirski <luto@kernel.org> Reviewed-by: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Link: http://lkml.kernel.org/r/8c453a61c4f44ab6ff43c29780ba04835234d2e5.1475178369.git.luto@kernel.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Peter Zijlstra
Rename the ia64-only set_curr_task() function to free up the name. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Nikolay Borisov
cmpxchg contained definitions for unused (x)add_* operations, dating back to the original ticket spinlock implementation. Nowadays these are unused so remove them. Signed-off-by: NNikolay Borisov <n.borisov.lkml@gmail.com> Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: hpa@zytor.com Link: http://lkml.kernel.org/r/1474913478-17757-1-git-send-email-n.borisov.lkml@gmail.comSigned-off-by: NIngo Molnar <mingo@kernel.org>
-
Submitted by Peter Zijlstra
We've unconditionally used the queued spinlock for many releases now. It's time to remove the old ticket lock code. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Waiman Long <waiman.long@hpe.com> Cc: Waiman.Long@hpe.com Cc: david.vrabel@citrix.com Cc: dhowells@redhat.com Cc: pbonzini@redhat.com Cc: xen-devel@lists.xenproject.org Link: http://lkml.kernel.org/r/20160518184302.GO3193@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Tim Chen
Current code can call set_cpu_sibling_map() and invoke sched_set_topology() more than once (e.g. on CPU hot plug). When this happens after sched_init_smp() has been called, we lose the NUMA topology extension to sched_domain_topology in sched_init_numa(). This results in incorrect topology when the sched domain is rebuilt. This patch fixes the bug and issues warning if we call sched_set_topology() after sched_init_smp(). Signed-off-by: NTim Chen <tim.c.chen@linux.intel.com> Signed-off-by: NSrinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bp@suse.de Cc: jolsa@redhat.com Cc: rjw@rjwysocki.net Link: http://lkml.kernel.org/r/1474485552-141429-2-git-send-email-srinivas.pandruvada@linux.intel.comSigned-off-by: NIngo Molnar <mingo@kernel.org>
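A hedged sketch of the guard being described; the flag name is an assumption, and whether the real patch bails out or only warns is not something this sketch claims to settle:

  void sched_set_topology(struct sched_domain_topology_level *tl)
  {
          /*
           * Replacing the topology after sched_init_smp() would lose the
           * NUMA levels added by sched_init_numa(), so complain loudly.
           */
          WARN_ON_ONCE(sched_smp_initialized);    /* flag name assumed */

          sched_domain_topology = tl;
  }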
-
Submitted by Paul Gortmaker
This file was only including module.h for exception table related functions. We've now separated that content out into its own file "extable.h", so move over to that and avoid all the extra header content in module.h that we don't really need to compile this. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
-
Submitted by Andy Lutomirski
cr4_init_shadow() will panic on 486-like machines without CR4. Fix it using __read_cr4_safe(). Reported-by: david@saggiorato.net Signed-off-by: Andy Lutomirski <luto@kernel.org> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Fixes: 1e02ce4c ("x86: Store a per-cpu shadow copy of CR4") Link: http://lkml.kernel.org/r/43a20f81fb504013bf613913dc25574b45336a61.1475091074.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
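A hedged sketch of the resulting initializer (essentially the shape described above; the exact per-CPU variable spelling may differ):

  static inline void cr4_init_shadow(void)
  {
          /* tolerate 486-class CPUs where reading CR4 faults */
          this_cpu_write(cpu_tlbstate.cr4, __read_cr4_safe());
  }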
-
Submitted by Paul Burton
The paging_init() function contains code which detects that highmem is in use but unsupported due to dcache aliasing. However this code was ineffective because it was being run before the caches are probed, meaning that cpu_has_dc_aliases would always evaluate to false (unless a platform overrides it to a compile-time constant) and the detection of the unsupported case is never triggered. The kernel would then go on to attempt to use highmem & either hit coherency issues or trigger the BUG_ON in flush_kernel_dcache_page(). Fix this by running paging_init() later than cpu_cache_init(), such that the cpu_has_dc_aliases macro will evaluate correctly & the unsupported highmem case will be detected successfully. This then leads to a formerly hidden issue in that mem_init_free_highmem() will attempt to free all highmem pages, even though we're avoiding use of them & don't have valid page structs for them. This leads to an invalid pointer dereference & a TLB exception. Avoid this by skipping the loop in mem_init_free_highmem() if cpu_has_dc_aliases evaluates true. Signed-off-by: NPaul Burton <paul.burton@imgtec.com> Cc: Rabin Vincent <rabinv@axis.com> Cc: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Jerome Marchand <jmarchan@redhat.com> Cc: Alexander Sverdlin <alexander.sverdlin@gmail.com> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Jaedon Shin <jaedon.shin@gmail.com> Cc: Toshi Kani <toshi.kani@hpe.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Sergey Ryazanov <ryazanov.s.a@gmail.com> Cc: Jonas Gorski <jogo@openwrt.org> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14184/Signed-off-by: NRalf Baechle <ralf@linux-mips.org>
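A hedged sketch of the second half of the fix (simplified from the real MIPS code, which does more validation in the loop): skip freeing highmem pages that were never set up because dcache aliasing rules highmem out.

  static inline void mem_init_free_highmem(void)
  {
  #ifdef CONFIG_HIGHMEM
          unsigned long pfn;

          /* highmem was never set up when dcache aliasing rules it out */
          if (cpu_has_dc_aliases)
                  return;

          for (pfn = highstart_pfn; pfn < highend_pfn; pfn++)
                  free_highmem_page(pfn_to_page(pfn));
  #endif
  }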
-
Submitted by Paul Burton
Malta boards used with CPU emulators feature a switch to disable use of an IOCU. Software has to check this switch & ignore any present IOCU if the switch is closed. The read used to do this was unsafe for 64 bit kernels, as it simply casted the address 0xbf403000 to a pointer & dereferenced it. Whilst in a 32 bit kernel this would access kseg1, in a 64 bit kernel this attempts to access xuseg & results in an address error exception. Fix by accessing a correctly formed ckseg1 address generated using the CKSEG1ADDR macro. Whilst modifying this code, define the name of the register and the bit we care about within it, which indicates whether PCI DMA is routed to the IOCU or straight to DRAM. The code previously checked that bit 0 was also set, but the least significant 7 bits of the CONFIG_GEN0 register contain the value of the MReqInfo signal provided to the IOCU OCP bus, so singling out bit 0 makes little sense & that part of the check is dropped. Signed-off-by: NPaul Burton <paul.burton@imgtec.com> Fixes: b6d92b4a ("MIPS: Add option to disable software I/O coherency.") Cc: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Kees Cook <keescook@chromium.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14187/Signed-off-by: NRalf Baechle <ralf@linux-mips.org>
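A hedged sketch of the access pattern described above; the register offset matches the 0xbf403000 kseg1 address given in the text, but the macro names and the exact IOCU-routing bit position are assumptions:

  #define ROCIT_CONFIG_GEN0               0x1f403000
  #define ROCIT_CONFIG_GEN0_PCI_IOCU      BIT(7)   /* assumed bit position */

  static bool malta_pci_dma_uses_iocu(void)        /* hypothetical helper */
  {
          u32 cfg;

          /* form a proper ckseg1 pointer instead of casting 0xbf403000 */
          cfg = __raw_readl((void __iomem *)CKSEG1ADDR(ROCIT_CONFIG_GEN0));

          return cfg & ROCIT_CONFIG_GEN0_PCI_IOCU;
  }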
-