- 08 Oct 2016, 4 commits
-
-
Committed by Yisheng Xie
Arm64 has supported gigantic pages since commit 084bd298 ("ARM64: mm: HugeTLB support."); however, they can only be allocated at boot time and cannot be freed. This patch selects ARCH_HAS_GIGANTIC_PAGE so that gigantic pages can be allocated and freed at runtime on arm64.

Link: http://lkml.kernel.org/r/1475227569-63446-3-git-send-email-xieyisheng1@huawei.com
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
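With the option selected, runtime allocation can then be exercised from userspace through the hugetlb sysfs knobs. A minimal sketch, assuming a 1 GB gigantic page size (the sysfs path below is derived from that size and is illustrative only):

  #include <stdio.h>

  /* Sketch: ask for two 1 GB gigantic pages at runtime; writing 0 later
   * frees them again. The path assumes a 1048576 kB gigantic page size. */
  int main(void)
  {
          const char *knob =
                  "/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages";
          FILE *f = fopen(knob, "w");

          if (!f) {
                  perror("fopen");
                  return 1;
          }
          fprintf(f, "2\n");
          fclose(f);
          return 0;
  }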
-
Committed by Yisheng Xie
Avoid the ifdef getting unwieldy if many architectures support gigantic pages. No functional change with this patch.

Link: http://lkml.kernel.org/r/1475227569-63446-2-git-send-email-xieyisheng1@huawei.com
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
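The consolidation amounts to hiding the per-arch #if behind one predicate. A rough sketch of the shape (the helper name and exact config guards are assumptions, not a quote of the patch):

  /* Sketch: callers in mm/hugetlb.c test one helper instead of repeating
   * the per-arch #ifdef soup (guards shown are assumed for illustration). */
  #if defined(CONFIG_ARCH_HAS_GIGANTIC_PAGE) && defined(CONFIG_CMA)
  static inline bool gigantic_page_supported(void) { return true; }
  #else
  static inline bool gigantic_page_supported(void) { return false; }
  #endif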
-
Committed by Baoyou Xie
We get 1 warning when building the kernel with W=1:

  drivers/char/mem.c:220:12: warning: no previous prototype for 'phys_mem_access_prot_allowed' [-Wmissing-prototypes]
   int __weak phys_mem_access_prot_allowed(struct file *file,

In fact, its declaration is spread across several architecture-specific header files, but it needs to be declared in a common header file. So this patch moves the phys_mem_access_prot_allowed() declaration to pgtable.h.

Link: http://lkml.kernel.org/r/1473751597-12139-1-git-send-email-baoyou.xie@linaro.org
Signed-off-by: Baoyou Xie <baoyou.xie@linaro.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
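For reference, the declaration being centralized has roughly this shape (taken from its weak definition in drivers/char/mem.c; shown as an illustration rather than the exact hunk):

  /* Weak hook a platform can override to veto or adjust /dev/mem mappings. */
  int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
                                   unsigned long size, pgprot_t *vma_prot);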
-
Committed by Srikar Dronamraju
Currently a significant amount of memory is reserved only in a kernel booted to capture a kernel dump using the fadump method. Kernels compiled with CONFIG_DEFERRED_STRUCT_PAGE_INIT will initialize only a certain amount of memory per node, an amount that takes into account the dentry and inode cache sizes. Currently those cache sizes are calculated based on the total system memory, including the reserved memory. However, such a kernel, when booted as the fadump kernel, will not be able to allocate the memory required for the dentry and inode caches, which results in crashes like the one below.

Hence, only implement arch_reserved_kernel_pages() for CONFIG_FA_DUMP configurations. The amount reserved will be excluded while calculating the large caches, avoiding crashes like the below on large systems such as 32 TB systems.

  Dentry cache hash table entries: 536870912 (order: 16, 4294967296 bytes)
  vmalloc: allocation failure, allocated 4097114112 of 17179934720 bytes
  swapper/0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.6-master+ #3
  Call Trace:
    dump_stack+0xb0/0xf0 (unreliable)
    warn_alloc_failed+0x114/0x160
    __vmalloc_node_range+0x304/0x340
    __vmalloc+0x6c/0x90
    alloc_large_system_hash+0x1b8/0x2c0
    inode_init+0x94/0xe4
    vfs_caches_init+0x8c/0x13c
    start_kernel+0x50c/0x578
    start_here_common+0x20/0xa8

Link: http://lkml.kernel.org/r/1472476010-4709-4-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Suggested-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
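The hook itself is tiny. A sketch of the idea (the fadump-side body is an assumption based on the description above, not the exact patch):

  /* Generic weak default: nothing extra is reserved. */
  unsigned long __init __weak arch_reserved_kernel_pages(void)
  {
          return 0;
  }

  #ifdef CONFIG_FA_DUMP
  /* powerpc override (sketch): report the fadump reservation so the
   * dentry/inode hash sizing in alloc_large_system_hash() ignores it. */
  unsigned long __init arch_reserved_kernel_pages(void)
  {
          return memblock_reserved_size() / PAGE_SIZE;
  }
  #endif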
-
- 06 Oct 2016, 6 commits
-
-
Committed by Boris Ostrovsky
Early during boot, topology_update_package_map() computes logical_pkg_ids for all present processors. Later, when processors are brought up, identify_cpu() updates these values based on phys_pkg_id, which is a function of initial_apicid. On PV guests the latter may point to a non-existing node, causing logical_pkg_ids to be set to -1. Intel's RAPL uses logical_pkg_id (as topology_logical_package_id()) to index its arrays, and therefore in this case will point to index 65535 (since logical_pkg_id is a u16). This could lead to a crash, or may actually access a random memory location.

As a workaround, we recompute the topology during CPU bringup to reset logical_pkg_id to a valid value. (The reason initial_apicid is bogus is that it is the initial_apicid of the processor from which the guest is launched; this value is CPUID(1).EBX[31:24].)

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: stable@vger.kernel.org
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Committed by Russell King
Commit 215e362d ("ARM: 8306/1: loop_udelay: remove bogomips value limitation") tried to increase the bogomips limitation, but in doing so messed up udelay such that it always gives about a 5% error in the delay, even if we use a timer.

The calculation is:

  loops = UDELAY_MULT * us_delay * ticks_per_jiffy >> UDELAY_SHIFT

Originally, UDELAY_MULT was ((UL(2199023) * HZ) >> 11) and UDELAY_SHIFT 30. Assuming HZ=100, a us_delay of 1000 and a ticks_per_jiffy of 1660000 (e.g., a 166 MHz timer, 1 ms delay) this would calculate:

  ((UL(2199023) * HZ) >> 11) * 1000 * 1660000 >> 30 => 165999

With the new values of 2047 * HZ + 483648 * HZ / 1000000 and 31, we get:

  (2047 * HZ + 483648 * HZ / 1000000) * 1000 * 1660000 >> 31 => 158269

which is incorrect. This is due to a typo - correcting it gives:

  (2147 * HZ + 483648 * HZ / 1000000) * 1000 * 1660000 >> 31 => 165999

in other words, the original value.

Fixes: 215e362d ("ARM: 8306/1: loop_udelay: remove bogomips value limitation")
Cc: <stable@vger.kernel.org>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
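The three quoted results can be reproduced with a few lines of 64-bit arithmetic; a standalone sketch (not kernel code) using the constants from the commit message:

  #include <stdio.h>

  int main(void)
  {
          unsigned long long hz = 100, us = 1000, tpj = 1660000;

          /* original: UDELAY_MULT = (2199023 * HZ) >> 11, UDELAY_SHIFT = 30 */
          unsigned long long old = (((2199023ULL * hz) >> 11) * us * tpj) >> 30;
          /* typo'd:   UDELAY_MULT = 2047 * HZ + ...,       UDELAY_SHIFT = 31 */
          unsigned long long bad = ((2047 * hz + 483648 * hz / 1000000) * us * tpj) >> 31;
          /* fixed:    UDELAY_MULT = 2147 * HZ + ...,       UDELAY_SHIFT = 31 */
          unsigned long long fix = ((2147 * hz + 483648 * hz / 1000000) * us * tpj) >> 31;

          printf("%llu %llu %llu\n", old, bad, fix);  /* 165999 158269 165999 */
          return 0;
  }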
-
Committed by netmonk@netmonk.org
Good evening. Following the Linux coding style documentation, and with the help of Sam, this fixes several indentation issues in the code along with a few other cosmetic changes. Last, and I hope least, it fixes my name :)

Signed-off-by: Dominique Carrel <netmonk@netmonk.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by chris hyser
Enable relaxed ordering for memory writes in the IOMMU TSB entry from dma_4v_alloc_coherent(), dma_4v_map_page() and dma_4v_map_sg() when the dma_attrs flag DMA_ATTR_WEAK_ORDERING is set. This requires the PCI IOMMU I/O Translation Services version 2.0 API.

Many PCIe devices allow enabling relaxed ordering (memory writes bypassing other memory writes) for various DMA buffers. A notable exception is the Mellanox mlx4 IB adapter. Due to the nature of x86 HW this appears to have little performance impact there. On SPARC HW, however, it results in major performance degradation, getting only about 3 Gbps. Enabling RO in the IOMMU entries corresponding to mlx4 data buffers increases the throughput to about 13 Gbps.

Orabug: 19245907
Signed-off-by: Chris Hyser <chris.hyser@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
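From the driver side, the attribute is passed through the regular DMA API attrs argument; a minimal sketch, assuming the unsigned-long attrs form of the API from this kernel era (dev, buf and size are placeholders, error-path cleanup omitted):

  #include <linux/dma-mapping.h>

  /* Sketch: opt a coherent buffer and a streaming mapping into relaxed
   * ordering. */
  static int example_setup_dma(struct device *dev, void *buf, size_t size)
  {
          dma_addr_t coherent_handle, stream_handle;
          void *cpu_addr;

          cpu_addr = dma_alloc_attrs(dev, size, &coherent_handle, GFP_KERNEL,
                                     DMA_ATTR_WEAK_ORDERING);
          if (!cpu_addr)
                  return -ENOMEM;

          stream_handle = dma_map_single_attrs(dev, buf, size, DMA_TO_DEVICE,
                                               DMA_ATTR_WEAK_ORDERING);
          if (dma_mapping_error(dev, stream_handle))
                  return -ENOMEM;

          return 0;
  }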
-
Committed by chris hyser
Enable version 2 of the PCI IOMMU API, needed for advanced features such as PCI relaxed ordering and a greater-than-2 GB DMA address space per root complex.

Signed-off-by: Chris Hyser <chris.hyser@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Paul Gortmaker
These files were only including module.h for exception table related functions. We've now separated that content out into its own file, "extable.h", so move over to that and avoid all the extra header content in module.h that we don't really need to compile these files.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 Oct 2016, 2 commits
-
-
Committed by Boris Ostrovsky
xen_cpuhp_setup() calls mutex_lock() which, when CONFIG_DEBUG_MUTEXES is defined, ends up calling xen_save_fl(). That routine expects per_cpu(xen_vcpu, 0) to be already initialized.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Committed by Keith Busch
Move the driver source and Kconfig to the PCI host bridge drivers directory and move the config option to a more appropriate sub-menu instead of occupying the top-level location. Update the Kconfig option with the X86_64 dependency that was implicitly included from the previous location, and add information about the module name when built as a loadable module.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Jon Derrick <jonathan.derrick@intel.com>
-
- 03 Oct 2016, 1 commit
-
-
Committed by Srinivas Ramana
If the bootloader uses the long descriptor format and jumps to the kernel decompressor code, TTBCR may not be in the right state. Before enabling the MMU, it is required to clear the TTBCR.PD0 field to use TTBR0 for translation table walks. Commit dbece458 ("ARM: 7501/1: decompressor: reset ttbcr for VMSA ARMv7 cores") does reset TTBCR.N, but doesn't consider all the bits for the size of TTBCR.N.

Clear the TTBCR.PD0 field and reset all three bits of TTBCR.N to indicate the use of TTBR0 and the correct base address width.

Fixes: dbece458 ("ARM: 7501/1: decompressor: reset ttbcr for VMSA ARMv7 cores")
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 Oct 2016, 1 commit
-
-
Committed by Paul Burton
When discovering the number of VPEs per core, smp_num_siblings will be incorrect for kernels built without support for the MIPS MultiThreading (MT) ASE running on systems which implement said ASE. This leads to accesses to VPEs in secondary cores being performed incorrectly, since mips_cm_vp_id calculates the wrong ID to write to the local "other" registers. Fix this by examining the number of VPEs in the core as reported by the CM.

This patch presumes that the number of VPEs will be the same in each core of the system. As this path only applies to systems with CM version 2.5 or lower, and this property is true of all such known systems, this is likely to be fine, but it is described in a comment for good measure.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14338/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 01 Oct 2016, 19 commits
-
-
Committed by Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
Commit d9676fa1 ("ARCv2: Enable LOCKDEP") changed local_save_flags() to not return raw STATUS32, but a value encoded in the form that can be fed directly to the CLRI/SETI instructions. However the STATUS32.E[] field was not captured correctly, as it corresponds to bits [4:1] in the register and not [3:0].

Fixes: d9676fa1 ("ARCv2: Enable LOCKDEP")
Cc: Evgeny Voevodin <evgeny.voevodin@intel.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
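In other words, the field has to be shifted down before it is masked; a rough sketch of the intent (macro names are illustrative, not the exact ARC definitions):

  /* STATUS32.E[] occupies bits [4:1], so the value handed to CLRI/SETI
   * must be (status32 >> 1) & 0xF rather than status32 & 0xF. */
  #define STATUS32_E_SHIFT        1
  #define STATUS32_E_MASK         0xF

  static inline unsigned long status32_to_clri_bits(unsigned long status32)
  {
          return (status32 >> STATUS32_E_SHIFT) & STATUS32_E_MASK;
  }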
-
Committed by Noam Camus
It seems the values were assigned as absolute numbers and not as shift values, i.e. they should be 0 for one node (2^0) and 1 for a couple of nodes (2^1).

Signed-off-by: Noam Camus <noamca@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Yuriy Kolerov
At the end of arc_init_IRQ() the STATUS32.IE flag is going to be affected by the "flag" instruction, but "flag" never touches the IE flag on ARCv2. So the "kflag" instruction must be used instead of "flag".

Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Cc: stable@vger.kernel.org #4.2+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
We used to keep the .exit.* sections because the linker would otherwise fail in the final link due to references from .debug_frame, which itself could not be discarded because of the forced "write,alloc" attributes for it:

  | LD      init/built-in.o
  | `.exit.text' referenced in section `.debug_frame' of arch/arc/built-in.o: defined in discarded section `.exit.text' of arch/arc/built-in.o
  | Makefile:949: recipe for target 'vmlinux' failed

With .debug_frame now retired, this hack is no longer needed. The kernel binary is now a little bit smaller as well.

Closes STAR 9000549913
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
This uses a new set of annotations, viz. ENTRY_CFI/END_CFI, to enable cfi ops generation. Note that we didn't change the normal ENTRY/EXIT, as we don't actually want unwind info in the trap/exception/interrupt handlers which use these, because the unwinder then gets confused (it keeps recursing vs. stopping). Semantically these are leaf routines, and unwinding should stop when it hits those routines.

Before
------
  28.52%  1.19%   9929  hackbench  libuClibc-1.0.17.so  [.] __write_nocancel
          |
          ---__write_nocancel
             |--8.95%--EV_Trap
             |          --8.25%--sys_write
             |                    |--3.93%--sock_write_iter
             ...
             |--2.62%--memset   <==== [LEAF entry as no unwind info]

After
-----
  29.46%  1.24%  13622  hackbench  libuClibc-1.0.17.so  [.] __write_nocancel
          |
          ---__write_nocancel
             |--9.31%--EV_Trap
             |          --8.62%--sys_write
             |                    |--4.17%--sock_write_iter
             ...
             |--6.19%--sys_write
             |          --6.19%--sock_write_iter
             |                    unix_stream_sendmsg
             |                    |--1.62%--sock_alloc_send_pskb
             |                    |--0.89%--sock_def_readable
             |                    |--0.88%--_raw_spin_unlock_irqrestore
             |                    |--0.69%--memset   <==== [now in proper callframe]
             |                    |--0.52%--skb_copy_datagram_from_iter

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
1. Detect whether binutils supports the cfi pseudo-ops.
2. Define conditional macros to generate the ops.
3. Define new ENTRY_CFI/END_CFI to annotate hand-written asm code.
   - Needed because we don't want to emit dwarf info in the general ENTRY/END used by the lowest level trap/exception/interrupt handlers, as the unwinder gets confused trying to unwind out of them. We want the unwinder to instead stop when it hits those routines.
   - These provide minimal start/end cfi ops, assuming the routine doesn't touch stack memory/regs.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
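A rough sketch of what such conditional CFI annotations tend to look like (the config symbol, ASM_NL usage and macro bodies are assumptions for illustration, not the exact ARC implementation):

  #include <linux/linkage.h>      /* ASM_NL: arch-specific asm statement separator */

  #ifdef CONFIG_AS_CFI            /* assumed "assembler supports cfi" symbol */
  #define CFI_STARTPROC   .cfi_startproc
  #define CFI_ENDPROC     .cfi_endproc
  #else
  #define CFI_STARTPROC
  #define CFI_ENDPROC
  #endif

  /* Variants of ENTRY/END that open and close a CFI region for hand asm. */
  #define ENTRY_CFI(name)                 \
          .globl name ASM_NL              \
          .align 4 ASM_NL                 \
          CFI_STARTPROC ASM_NL            \
          name:

  #define END_CFI(name)                   \
          CFI_ENDPROC ASM_NL              \
          .size name, .-name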
-
Committed by Vineet Gupta
This essentially removes the ENTRY() assembler annotation for this symbol, since it didn't have a pairing END(). This is ahead of introducing cfi pseudo-ops in ENTRY/END, which expect paired cfi_startproc/cfi_endproc:

  | ../arch/arc/kernel/entry.S: Assembler messages:
  | ../arch/arc/kernel/entry.S:270: Error: previous CFI entry not closed (missing .cfi_endproc)
  | ../scripts/Makefile.build:326: recipe for target 'arch/arc/kernel/entry-arcv2.o' failed
  | make[4]: *** [arch/arc/kernel/entry-arcv2.o] Error 1

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
In the .debug_frame based unwinding regime, we used to force -gdwarf-2 since the kernel unwinder only claimed to handle dwarf 2. This changed with commit 6d0d5060 ("ARC: dw2 unwind: Don't bail for CIE.version != 1"), which added some support beyond dwarf 2, at least to handle CIE != 1.

The ill effect of -gdwarf-2 is that it forces generation of .debug_* sections, which bloats loadable module .ko files. For the curious, this doesn't affect the vmlinux binary, since the linker script discards .debug_*, but the same discard is not yet implemented for modules.

So it seems we can drop the -gdwarf-2 toggle, which should not be needed anyway given that we now use .eh_frame based unwinding. I've verified using GNU 2016.09-engo10 that the actual unwind info is not different with or without this toggle - but the .debug_* sections are gone for good.

before
------
  arc-linux-readelf -S q_proc.ko-unwinding-1-eh_frame-switch | grep debug
  [15] .debug_info        PROGBITS  00000000 000300 00d08d 00      0   0  1
  [16] .rela.debug_info   RELA      00000000 0162a0 008844 0c   I 29  15  4
  [17] .debug_abbrev      PROGBITS  00000000 00d38d 0005f8 00      0   0  1
  [18] .debug_loc         PROGBITS  00000000 00d985 000070 00      0   0  1
  [19] .rela.debug_loc    RELA      00000000 01eae4 0000c0 0c   I 29  18  4
  [20] .debug_aranges     PROGBITS  00000000 00d9f5 000040 00      0   0  1
  [21] .rela.debug_arang  RELA      00000000 01eba4 000030 0c   I 29  20  4
  [22] .debug_ranges      PROGBITS  00000000 00da35 000018 00      0   0  1
  [23] .rela.debug_range  RELA      00000000 01ebd4 000030 0c   I 29  22  4
  [24] .debug_line        PROGBITS  00000000 00da4d 000b5b 00      0   0  1
  [25] .rela.debug_line   RELA      00000000 01ec04 0000cc 0c   I 29  24  4
  [26] .debug_str         PROGBITS  00000000 00e5a8 007831 01  MS  0   0  1

after
-----
  (no .debug_* sections)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
So, finally, after almost 8 years of dealing with .debug_frame, we are switching to .eh_frame. The reason being that stripped kernel binaries had a non-functional unwinder, as .debug_frame was gone. Also, in general, .eh_frame seems the more common way of doing unwinding.

This also folds in a revert of f52e126c ("ARC: unwind: ensure that .debug_frame is generated (vs. .eh_frame)") to ensure that we start getting .eh_frame.

Reported-by: Daniel Mentz <danielmentz@google.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
This paves the way for switching to .eh_frame based unwinding.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Alexey Brodkin
We used to live with PERF_COUNT_HW_CACHE_REFERENCES and PERF_COUNT_HW_CACHE_MISSES not specified on ARC. Those events are actually aliases of two cache events that we do support, so this change sets up the "cache-references" and "cache-misses" events in the same way as "L1-dcache-loads" and "L1-dcache-load-misses".

While at it, this adds debug info for cache events, as well as a subtle fix in the HW events debug info - the config value is much better represented in hex, so we can see not only the event index but also any other control bits that are set.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
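For illustration only (generic perf API, nothing ARC-specific): with the aliases in place, opening the generic cache events from userspace is expected to count the same way as the corresponding L1 D-cache events.

  #include <linux/perf_event.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Open the generic "cache-references" counter for the calling thread. */
  static int open_cache_references(void)
  {
          struct perf_event_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.type = PERF_TYPE_HARDWARE;
          attr.size = sizeof(attr);
          attr.config = PERF_COUNT_HW_CACHE_REFERENCES;

          return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
  }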
-
Committed by Noam Camus
Fix build breakage caused by the latest changes to the generic atomic operations: add a couple of missing macros which are now mandatory.

Signed-off-by: Noam Camus <noamca@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
The ARCv2 ISA provides 64-bit exclusive load/stores, so use them to implement the 64-bit atomics and elide the spinlock-based generic 64-bit atomics. Boot tested with the atomic64 self-tests (and God bless the person who wrote them - I realized my inline assembly is sloppy as hell).

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
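For flavor, a sketch of the retry loop such primitives typically use on ARCv2, built on the 64-bit exclusive pair LLOCKD/SCONDD (constraints and naming are assumptions; the kernel's actual atomic64_add() may differ in detail):

  #include <linux/types.h>

  /* Sketch: 64-bit atomic add using ARCv2 LLOCKD/SCONDD.
   * %L/%H select the low/high half of a 64-bit register pair. */
  static inline void arc_atomic64_add_sketch(s64 a, s64 *counter)
  {
          s64 val;

          __asm__ __volatile__(
          "1:                             \n"
          "       llockd  %0, [%1]        \n"     /* 64-bit exclusive load  */
          "       add.f   %L0, %L0, %L2   \n"     /* low word, sets carry   */
          "       adc     %H0, %H0, %H2   \n"     /* high word with carry   */
          "       scondd  %0, [%1]        \n"     /* 64-bit exclusive store */
          "       bnz     1b              \n"     /* retry if it failed     */
          : "=&r"(val)
          : "r"(counter), "r"(a)
          : "cc");
  }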
-
Committed by Vineet Gupta
HS release 3.0 provides even more flexibility in specifying the volatile address space used for mapping peripherals. With HS 2.1 @start was made flexible / programmable - with HS 3.0 even @end can be set up (vs. being fixed to 0xFFFF_FFFF before). So add code to reflect that, and while at it remove an unused struct definition.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
The cool thing is that the same kernel image can run on:
 - the nsim OSCI simulation platform
 - SDPlite FPGA setups

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Alexey Brodkin
As was discussed quite some time ago (see https://lkml.org/lkml/2015/11/5/862), it's good practice to add a "model" property in .dts files. Moreover, per ePAPR the "model" property is required and should look like "manufacturer,model", so that's what we do here.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Jonas Gorski <jonas.gorski@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Rob Herring <robh@kernel.org>
Cc: Christian Ruppert <christian.ruppert@alitech.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 30 Sep 2016, 7 commits
-
-
Committed by Wanpeng Li
This warning:

  WARNING: CPU: 0 PID: 3331 at arch/x86/entry/common.c:45 enter_from_user_mode+0x32/0x50
  CPU: 0 PID: 3331 Comm: ldt_gdt_64 Not tainted 4.8.0-rc7+ #13
  Call Trace:
   dump_stack+0x99/0xd0
   __warn+0xd1/0xf0
   warn_slowpath_null+0x1d/0x20
   enter_from_user_mode+0x32/0x50
   error_entry+0x6d/0xc0
   ? general_protection+0x12/0x30
   ? native_load_gs_index+0xd/0x20
   ? do_set_thread_area+0x19c/0x1f0
   SyS_set_thread_area+0x24/0x30
   do_int80_syscall_32+0x7c/0x220
   entry_INT80_compat+0x38/0x50
   ...

can be reproduced by running the GS testcase of the ldt_gdt test unit in the x86 selftests.

do_int80_syscall_32() will call enter_from_user_mode() to convert the context tracking state from user state to kernel state. The load_gs_index() call can fail with a user gsbase; the gsbase will be fixed up and execution proceeds if this happens. However, enter_from_user_mode() will be called again on the fixed-up path even though the context tracking state is already "kernel".

This patch fixes it by just fixing up the gsbase and telling lockdep that IRQs are off once load_gs_index() fails with a user gsbase.

Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1475197266-3440-1-git-send-email-wanpeng.li@hotmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andy Lutomirski
Otherwise arch_task_struct_size == 0 and we die. While we're at it, set X86_FEATURE_ALWAYS, too.

Reported-by: David Saggiorato <david@saggiorato.net>
Tested-by: David Saggiorato <david@saggiorato.net>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Fixes: aaeb5c01c5b ("x86/fpu, sched: Introduce CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT and use it on x86")
Link: http://lkml.kernel.org/r/8de723afbf0811071185039f9088733188b606c9.1475103911.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by KarimAllah Ahmed
Ever since commit 254d1a3f ("xen/pv-on-hvm kexec: shutdown watches from old kernel"), using the INTx interrupt from the Xen PCI platform device for event channel notification would just lock up the guest during bootup. postcore_initcall now calls xs_reset_watches, which will eventually try to read a value from XenStore and will get stuck in read_reply at XenBus forever, since the platform driver is not probed yet and its INTx interrupt handler is not registered yet. That means the guest cannot be notified at this moment of any pending event channels, none of the per-event handlers will ever be invoked (including the XenStore one), and the reply will never be picked up by the kernel.

The exact stack where things get stuck during xenbus_init:

  xenbus_init
    xs_init
      xs_reset_watches
        xenbus_scanf
          xenbus_read
            xs_single
              xs_single
                xs_talkv

Vector callbacks have always been the favourite event notification mechanism since their introduction in commit 38e20b07 ("x86/xen: event channels delivery on HVM."), and the vector callback feature has been advertised by Xen for quite some time; that's why INTx has been broken for several years now without impacting anyone.

Luckily this also means that event channel notification through INTx is basically dead code which can be safely removed without impacting anybody, since it has been effectively disabled for more than 4 years with nobody complaining about it (at least as far as I'm aware of). This commit removes event channel notification through the Xen PCI platform device.

Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@citrix.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
Cc: xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Cc: Anthony Liguori <aliguori@amazon.com>
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Committed by Andy Lutomirski
We use __read_cr4() vs __read_cr4_safe() inconsistently. On CR4-less CPUs, all CR4 bits are effectively clear, so we can make the code simpler and more robust by making __read_cr4() always fix up faults on 32-bit kernels.

This may fix some bugs on old 486-like CPUs, but I don't have any easy way to test that.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: david@saggiorato.net
Link: http://lkml.kernel.org/r/ea647033d357d9ce2ad2bbde5a631045f5052fb6.1475178370.git.luto@kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
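The fault-tolerant read boils down to an exception-table fixup around the mov from %cr4; a rough sketch of the 32-bit flavor (not a verbatim copy of the kernel helper):

  #include <asm/asm.h>    /* _ASM_EXTABLE */

  /* Sketch: on CPUs without CR4 the mov faults; the extable fixup jumps
   * past it and the register still holds the initial 0, i.e. "all bits
   * clear", which is exactly what such CPUs should report. */
  static inline unsigned long read_cr4_fault_safe(void)
  {
          unsigned long val = 0;

          asm volatile("1: mov %%cr4, %0\n"
                       "2:\n"
                       _ASM_EXTABLE(1b, 2b)
                       : "+r" (val));
          return val;
  }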
-
Committed by Segher Boessenkool
We need to call GET_LE to read hdr->e_type.

Fixes: 57f90c3d ("x86/vdso: Error out if the vDSO isn't a valid DSO")
Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: linux-next@vger.kernel.org
Link: http://lkml.kernel.org/r/20160929193442.GA16617@gate.crashing.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
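The change is essentially one accessor swap in the vdso2c host tool; roughly (the error message text below is illustrative, not quoted from the source):

  /* Before: reads the on-disk little-endian field directly, which is
   * wrong when the build host is big-endian. */
  if (hdr->e_type != ET_DYN)
          fail("input is not a shared object\n");

  /* After: GET_LE byte-swaps as needed for the host. */
  if (GET_LE(&hdr->e_type) != ET_DYN)
          fail("input is not a shared object\n");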
-
Committed by Andy Lutomirski
The condition for reading CR4 was wrong: there are some CPUs with CPUID but not CR4. Rather than trying to make the condition exact, use __read_cr4_safe().

Fixes: 18bc7bd5 ("x86/boot: Synchronize trampoline_cr4_features and mmu_cr4_features directly")
Reported-by: david@saggiorato.net
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Link: http://lkml.kernel.org/r/8c453a61c4f44ab6ff43c29780ba04835234d2e5.1475178369.git.luto@kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Boris Ostrovsky
Switch to the new CPU hotplug infrastructure.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-