- 22 Aug 2019, 1 commit
-
-
By Masahiro Yamada

Add CONFIG_ASM_MODVERSIONS. This allows removing one level of if-conditional nesting in scripts/Makefile.build. scripts/Makefile.build is run every time Kbuild descends into a sub-directory, so I want to avoid $(wildcard ...) evaluation where possible, although computing $(wildcard ...) is so cheap that it may not make a measurable performance difference.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
-
- 21 Aug 2019, 1 commit
-
-
By Masahiro Yamada

Currently, the timestamps of module linker scripts are not checked. Add them to the dependencies of modules so that they are correctly rebuilt.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 11 Aug 2019, 1 commit
-
-
By Joe Perches

A compilation -Wimplicit-fallthrough warning was enabled by commit a035d552 ("Makefile: Globally enable fall-through warning"). Even though clang 10.0.0 does not currently support this warning without a patch, clang currently does not support a value for this option.

Link: https://bugs.llvm.org/show_bug.cgi?id=39382

The gcc default for this warning is 3, so removing the "=3" has no effect for gcc and enables the warning for patched versions of clang. Also remove the "=3" from an existing use in a parisc Makefile, arch/parisc/math-emu/Makefile.

Signed-off-by: Joe Perches <joe@perches.com>
Reviewed-and-tested-by: Nathan Chancellor <natechancellor@gmail.com>
Cc: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 10 Aug 2019, 6 commits
-
-
By Gustavo A. R. Silva

Mark switch cases where we are expecting to fall through. This fixes the following warning (building arm-ep93xx_defconfig for arm):

    arch/arm/mach-ep93xx/crunch.c: In function 'crunch_do':
    arch/arm/mach-ep93xx/crunch.c:46:3: warning: this statement may fall through [-Wimplicit-fallthrough=]
       memset(crunch_state, 0, sizeof(*crunch_state));
       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    arch/arm/mach-ep93xx/crunch.c:53:2: note: here
      case THREAD_NOTIFY_EXIT:
      ^~~~

Notice that, in this particular case, the code comment is modified in accordance with what GCC is expecting to find.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
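What this family of fall-through fixes looks like in practice: a comment matching one of GCC's recognized patterns is placed where one case is deliberately meant to run into the next, which satisfies -Wimplicit-fallthrough. A minimal, self-contained sketch with hypothetical case labels and state (not the crunch.c code itself):

    #include <stdio.h>
    #include <string.h>

    enum { THREAD_NOTIFY_FLUSH, THREAD_NOTIFY_EXIT };

    struct demo_state { char buf[16]; };

    static void notify(int cmd, struct demo_state *st)
    {
            switch (cmd) {
            case THREAD_NOTIFY_FLUSH:
                    /* FLUSH deliberately performs EXIT's cleanup too */
                    memset(st, 0, sizeof(*st));
                    /* fall through */
            case THREAD_NOTIFY_EXIT:
                    puts("state released");
                    break;
            default:
                    break;
            }
    }

    int main(void)
    {
            struct demo_state st;

            notify(THREAD_NOTIFY_FLUSH, &st);
            return 0;
    }

Compiled with gcc -Wimplicit-fallthrough, the "/* fall through */" comment suppresses the warning; deleting it brings the warning back.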
-
By Gustavo A. R. Silva

Mark switch cases where we are expecting to fall through. This patch fixes the following warning:

    arch/arm/kernel/signal.c: In function 'do_signal':
    arch/arm/kernel/signal.c:598:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
        restart -= 2;
        ~~~~~~~~^~~~
    arch/arm/kernel/signal.c:599:3: note: here
       case -ERESTARTNOHAND:
       ^~~~

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
-
By Gustavo A. R. Silva

Mark switch cases where we are expecting to fall through. This patch fixes the following warnings:

    arch/arm/plat-omap/dma.c: In function 'omap_set_dma_src_burst_mode':
    arch/arm/plat-omap/dma.c:384:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
       if (dma_omap2plus()) {
          ^
    arch/arm/plat-omap/dma.c:393:2: note: here
      case OMAP_DMA_DATA_BURST_16:
      ^~~~
    arch/arm/plat-omap/dma.c:394:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
       if (dma_omap2plus()) {
          ^
    arch/arm/plat-omap/dma.c:402:2: note: here
      default:
      ^~~~~~~
    arch/arm/plat-omap/dma.c: In function 'omap_set_dma_dest_burst_mode':
    arch/arm/plat-omap/dma.c:473:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
       if (dma_omap2plus()) {
          ^
    arch/arm/plat-omap/dma.c:481:2: note: here
      default:
      ^~~~~~~

Notice that, in this particular case, the code comment is modified in accordance with what GCC is expecting to find.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
-
By Gustavo A. R. Silva

Mark switch cases where we are expecting to fall through. This patch fixes the following warnings:

    arch/arm/mm/alignment.c: In function 'thumb2arm':
    arch/arm/mm/alignment.c:688:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
       if ((tinstr & (3 << 9)) == 0x0400) {
          ^
    arch/arm/mm/alignment.c:700:2: note: here
      default:
      ^~~~~~~
    arch/arm/mm/alignment.c: In function 'do_alignment_t32_to_handler':
    arch/arm/mm/alignment.c:753:15: warning: this statement may fall through [-Wimplicit-fallthrough=]
       poffset->un = (tinst2 & 0xff) << 2;
       ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~
    arch/arm/mm/alignment.c:754:2: note: here
      case 0xe940:
      ^~~~

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
-
By Gustavo A. R. Silva

Mark switch cases where we are expecting to fall through. This patch fixes the following warning:

    arch/arm/mach-tegra/reset.c: In function 'tegra_cpu_reset_handler_enable':
    arch/arm/mach-tegra/reset.c:72:3: warning: this statement may fall through [-Wimplicit-fallthrough=]
       tegra_cpu_reset_handler_set(reset_address);
       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    arch/arm/mach-tegra/reset.c:74:2: note: here
      case 0:
      ^~~~

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
-
By Gustavo A. R. Silva

Mark switch cases where we are expecting to fall through. This patch fixes the following warnings:

    arch/arm/kernel/hw_breakpoint.c: In function 'hw_breakpoint_arch_parse':
    arch/arm/kernel/hw_breakpoint.c:609:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
       if (hw->ctrl.len == ARM_BREAKPOINT_LEN_2)
          ^
    arch/arm/kernel/hw_breakpoint.c:611:2: note: here
      case 3:
      ^~~~
    arch/arm/kernel/hw_breakpoint.c:613:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
       if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1)
          ^
    arch/arm/kernel/hw_breakpoint.c:615:2: note: here
      default:
      ^~~~~~~
    arch/arm/kernel/hw_breakpoint.c: In function 'arch_build_bp_info':
    arch/arm/kernel/hw_breakpoint.c:544:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
       if ((hw->ctrl.type != ARM_BREAKPOINT_EXECUTE)
          ^
    arch/arm/kernel/hw_breakpoint.c:547:2: note: here
      default:
      ^~~~~~~
    In file included from include/linux/kernel.h:11,
                     from include/linux/list.h:9,
                     from include/linux/preempt.h:11,
                     from include/linux/hardirq.h:5,
                     from arch/arm/kernel/hw_breakpoint.c:16:
    arch/arm/kernel/hw_breakpoint.c: In function 'hw_breakpoint_pending':
    include/linux/compiler.h:78:22: warning: this statement may fall through [-Wimplicit-fallthrough=]
     # define unlikely(x) __builtin_expect(!!(x), 0)
                          ^~~~~~~~~~~~~~~~~~~~~~~~~~
    include/asm-generic/bug.h:136:2: note: in expansion of macro 'unlikely'
      unlikely(__ret_warn_on);     \
      ^~~~~~~~
    arch/arm/kernel/hw_breakpoint.c:863:3: note: in expansion of macro 'WARN'
       WARN(1, "Asynchronous watchpoint exception taken. Debugging results may be unreliable\n");
       ^~~~
    arch/arm/kernel/hw_breakpoint.c:864:2: note: here
      case ARM_ENTRY_SYNC_WATCHPOINT:
      ^~~~
    arch/arm/kernel/hw_breakpoint.c: In function 'core_has_os_save_restore':
    arch/arm/kernel/hw_breakpoint.c:910:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
       if (oslsr & ARM_OSLSR_OSLM0)
          ^
    arch/arm/kernel/hw_breakpoint.c:912:2: note: here
      default:
      ^~~~~~~

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
-
- 09 Aug 2019, 6 commits
-
-
By Heiko Carstens

s390 does not map the vdso for statically linked binaries, assuming that this doesn't make sense. See commit fc5243d9 ("[S390] arch_setup_additional_pages arguments"). However, with glibc commit d665367f596d ("linux: Enable vDSO for static linking as default (BZ#19767)") and commit 5e855c895401 ("s390: Enable VDSO for static linking"), the vdso is also used for statically linked binaries - if the kernel makes it available. Therefore map the vdso always, just like all other architectures.

Reported-by: Stefan Liebler <stli@linux.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
By Marc Zyngier

At the moment, the way we reset CP15 registers is mildly insane: we write junk to them, call the reset functions, and then check that we have something else in them. The "fun" thing is that this can happen while the guest is running (PSCI, for example). If anything in KVM has to evaluate the state of a CP15 register while junk is in there, bad things may happen.

Let's stop doing that. Instead, we track that we have called a reset function for that register, and assume that the reset function has done something.

In the end, the very need for this reset check is pretty dubious, as it doesn't check everything (a lot of the CP15 registers live outside the cp15_regs[] array). It may well be axed in the near future.

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
By Marc Zyngier

At the moment, the way we reset system registers is mildly insane: we write junk to them, call the reset functions, and then check that we have something else in them. The "fun" thing is that this can happen while the guest is running (PSCI, for example). If anything in KVM has to evaluate the state of a system register while junk is in there, bad things may happen.

Let's stop doing that. Instead, we track that we have called a reset function for that register, and assume that the reset function has done something. This requires fixing a couple of sysreg definitions in the trap table.

In the end, the very need for this reset check is pretty dubious, as it doesn't check everything (a lot of the sysregs live outside the sys_regs[] array). It may well be axed in the near future.

Tested-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
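A hedged sketch of the bookkeeping described above (the table layout and names are illustrative, not KVM's actual structures): each register description carries a reset hook, and a per-register flag records that the hook ran, replacing the write-junk-then-verify dance:

    #include <stdbool.h>
    #include <stdio.h>

    #define NR_REGS 2

    struct sys_reg_desc {
            const char *name;
            void (*reset)(unsigned long *val);      /* per-register reset hook */
    };

    static void reset_to_zero(unsigned long *val) { *val = 0; }
    static void reset_to_id(unsigned long *val)   { *val = 0x410fd034; }

    static const struct sys_reg_desc regs[NR_REGS] = {
            { "SCTLR", reset_to_zero },
            { "MIDR",  reset_to_id },
    };

    static void reset_vcpu_regs(unsigned long *state)
    {
            bool reset_done[NR_REGS] = { false };
            int i;

            for (i = 0; i < NR_REGS; i++) {
                    regs[i].reset(&state[i]);
                    reset_done[i] = true;   /* trust the hook; never inspect live state */
            }

            /* the check is now about coverage, not register contents */
            for (i = 0; i < NR_REGS; i++)
                    if (!reset_done[i])
                            printf("%s: no reset function ran\n", regs[i].name);
    }

    int main(void)
    {
            unsigned long state[NR_REGS];

            reset_vcpu_regs(state);
            return 0;
    }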
-
By Palmer Dabbelt

This should never have landed in the first place: it was added as part of 64-bit divide support for 32-bit systems, but the kernel doesn't allow this sort of division. I must have forgotten to remove it. This patch removes the support. Since this routine only worked on 64-bit platforms but was only built on 32-bit platforms, it's essentially just nonsense anyway.

Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Link: https://lore.kernel.org/linux-riscv/nycvar.YSQ.7.76.1908061413360.19480@knanqh.ubzr/T/#t
Reported-by: Eric Lin <tesheng@andestech.com>
Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
-
By Paul Walmsley

In preparation for removing __udivdi3() from the RISC-V architecture-specific files, convert its one user to use do_div(). This avoids breaking the RV32 build after __udivdi3() is removed. This second version removes the assignment of the remainder to an unused temporary variable. Thanks to Nicolas Pitre <nico@fluxnic.net> for the suggestion.

Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Nicolas Pitre <nico@fluxnic.net>
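For readers unfamiliar with the helper: do_div(n, base) divides a u64 in place and evaluates to the 32-bit remainder, which lets 32-bit kernels avoid libgcc's 64-bit division routines such as __udivdi3(). A userspace stand-in with the same shape (the real macro lives in asm/div64.h and uses optimized per-arch code rather than the plain / and % below, which would themselves pull in libgcc on a 32-bit target):

    #include <stdint.h>
    #include <stdio.h>

    /* same calling convention as the kernel macro: n is modified in place,
     * the remainder is the expression's value (GNU statement expression) */
    #define do_div(n, base) ({                              \
            uint32_t __rem = (uint32_t)((n) % (base));      \
            (n) /= (base);                                  \
            __rem;                                          \
    })

    int main(void)
    {
            uint64_t ns = 3123456789ULL;
            uint32_t rem = do_div(ns, 1000000000U); /* ns is now whole seconds */

            printf("%llu s + %u ns\n", (unsigned long long)ns, rem);
            return 0;
    }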
-
By Jia He

Without this patch, the MAP_SYNC test case will cause a print_bad_pte warning on arm64 as follows:

    [ 25.542693] BUG: Bad page map in process mapdax333 pte:2e8000448800f53 pmd:41ff5f003
    [ 25.546360] page:ffff7e0010220000 refcount:1 mapcount:-1 mapping:ffff8003e29c7440 index:0x0
    [ 25.550281] ext4_dax_aops
    [ 25.550282] name:"__aaabbbcccddd__"
    [ 25.551553] flags: 0x3ffff0000001002(referenced|reserved)
    [ 25.555802] raw: 03ffff0000001002 ffff8003dfffa908 0000000000000000 ffff8003e29c7440
    [ 25.559446] raw: 0000000000000000 0000000000000000 00000001fffffffe 0000000000000000
    [ 25.563075] page dumped because: bad pte
    [ 25.564938] addr:0000ffffbe05b000 vm_flags:208000fb anon_vma:0000000000000000 mapping:ffff8003e29c7440 index:0
    [ 25.574272] file:__aaabbbcccddd__ fault:ext4_dax_fault mmmmap:ext4_file_mmap readpage:0x0
    [ 25.578799] CPU: 1 PID: 1180 Comm: mapdax333 Not tainted 5.2.0+ #21
    [ 25.581702] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
    [ 25.585624] Call trace:
    [ 25.587008]  dump_backtrace+0x0/0x178
    [ 25.588799]  show_stack+0x24/0x30
    [ 25.590328]  dump_stack+0xa8/0xcc
    [ 25.591901]  print_bad_pte+0x18c/0x218
    [ 25.593628]  unmap_page_range+0x778/0xc00
    [ 25.595506]  unmap_single_vma+0x94/0xe8
    [ 25.597304]  unmap_vmas+0x90/0x108
    [ 25.598901]  unmap_region+0xc0/0x128
    [ 25.600566]  __do_munmap+0x284/0x3f0
    [ 25.602245]  __vm_munmap+0x78/0xe0
    [ 25.603820]  __arm64_sys_munmap+0x34/0x48
    [ 25.605709]  el0_svc_common.constprop.0+0x78/0x168
    [ 25.607956]  el0_svc_handler+0x34/0x90
    [ 25.609698]  el0_svc+0x8/0xc
    [...]

The root cause is in _vm_normal_page: without the PTE_SPECIAL bit, the return value will be incorrectly set to pfn_to_page(pfn) instead of NULL. Besides that, this patch also rewrites pmd_mkdevmap() to avoid setting PTE_SPECIAL for PMDs.

The MAP_SYNC test case is as follows (provided by Yibo Cai):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/file.h>
    #include <sys/mman.h>

    #ifndef MAP_SYNC
    #define MAP_SYNC 0x80000
    #endif

    /* mount -o dax /dev/pmem0 /mnt */
    #define F "/mnt/__aaabbbcccddd__"

    int main(void)
    {
            int fd;
            char buf[4096];
            void *addr;

            if ((fd = open(F, O_CREAT|O_TRUNC|O_RDWR, 0644)) < 0) {
                    perror("open1");
                    return 1;
            }
            if (write(fd, buf, 4096) != 4096) {
                    perror("write");
                    return 1;
            }
            addr = mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_SYNC, fd, 0);
            if (addr == MAP_FAILED) {
                    perror("mmap");
                    printf("did you mount with '-o dax'?\n");
                    return 1;
            }
            memset(addr, 0x55, 4096);
            if (munmap(addr, 4096) == -1) {
                    perror("munmap");
                    return 1;
            }
            close(fd);
            return 0;
    }

Fixes: 73b20c84 ("arm64: mm: implement pte_devmap support")
Reported-by: Yibo Cai <Yibo.Cai@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Robin Murphy <Robin.Murphy@arm.com>
Signed-off-by: Jia He <justin.he@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 Aug 2019, 4 commits
-
-
By Vasily Gorbik

The empty-.bss checks currently performed do not pay attention to "common objects" in object files, which end up in the .bss section eventually. The "size" tool is part of binutils and since version 2.18 provides the "--common" command line option, which allows accounting for the sizes of "common objects" in the .bss section size. Utilize "size --common" to perform an accurate check that the .bss section is unused. Besides that, the size tool handles object files without a .bss section gracefully and doesn't require an additional objdump run. The Linux kernel has required binutils 2.20 since 4.13. Kbuild exports OBJSIZE to reference the right size tool.

Link: http://lkml.kernel.org/r/patch-2.thread-2257a1.git-2257a1c53d4a.your-ad-here.call-01565088755-ext-5120@work.hours
Reported-and-tested-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
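Background on why "--common" matters, as a small demonstration (hypothetical file name): an uninitialized, non-static global is a tentative definition, reported as a common symbol rather than as .bss until the linker places it, so a plain "size" run on a .o undercounts what will eventually land in .bss:

    /* common_demo.c
     *
     * Build and inspect (with -fcommon, the pre-GCC-10 default behaviour):
     *   gcc -c -fcommon common_demo.c
     *   size common_demo.o            # common bytes missing from the bss column
     *   size --common common_demo.o   # common bytes counted into bss
     *   nm common_demo.o              # 'tentative' shows up with type 'C'
     */
    int tentative[64];          /* tentative definition -> common symbol */
    int zeroed = 0;             /* zero-initialized -> placed in .bss directly */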
-
By Valdis Klētnieks

When building with W=1, warnings about missing prototypes are emitted:

      CC      arch/x86/lib/cpu.o
    arch/x86/lib/cpu.c:5:14: warning: no previous prototype for 'x86_family' [-Wmissing-prototypes]
        5 | unsigned int x86_family(unsigned int sig)
          |              ^~~~~~~~~~
    arch/x86/lib/cpu.c:18:14: warning: no previous prototype for 'x86_model' [-Wmissing-prototypes]
       18 | unsigned int x86_model(unsigned int sig)
          |              ^~~~~~~~~
    arch/x86/lib/cpu.c:33:14: warning: no previous prototype for 'x86_stepping' [-Wmissing-prototypes]
       33 | unsigned int x86_stepping(unsigned int sig)
          |              ^~~~~~~~~~~~

Add the proper include file so the prototypes are there.

Signed-off-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/42513.1565234837@turing-police
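The general shape of this kind of fix, with hypothetical file and function names: include the header that declares the function in the file that defines it, so the prototype is in scope and the compiler can also cross-check the two:

    /* cpu_sig.h */
    unsigned int sig_family(unsigned int sig);

    /* cpu_sig.c */
    #include "cpu_sig.h"    /* prototype in scope: silences -Wmissing-prototypes
                             * and turns any signature drift into a hard error */

    unsigned int sig_family(unsigned int sig)
    {
            return (sig >> 8) & 0xf;
    }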
-
By Nick Desaulniers

KBUILD_CFLAGS is very carefully built up in the top-level Makefile, particularly when cross compiling or using different build tools. Resetting KBUILD_CFLAGS via := assignment is an antipattern. The comment above the reset mentions that -pg is problematic. Other Makefiles use `CFLAGS_REMOVE_file.o = $(CC_FLAGS_FTRACE)` when CONFIG_FUNCTION_TRACER is set. Prefer that pattern to wiping out all of the important KBUILD_CFLAGS and then manually having to re-add them. It also seems that __stack_chk_fail references are generated when using CONFIG_STACKPROTECTOR or CONFIG_STACKPROTECTOR_STRONG.

Fixes: 8fc5b4d4 ("purgatory: core purgatory functionality")
Reported-by: Vaibhav Rustagi <vaibhavrustagi@google.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Vaibhav Rustagi <vaibhavrustagi@google.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190807221539.94583-2-ndesaulniers@google.com
-
By Nick Desaulniers

Implementing memcpy and memset in terms of __builtin_memcpy and __builtin_memset is problematic. GCC at -O2 will replace calls to the builtins with calls to memcpy and memset (but will generate an inline implementation at -Os). Clang will replace the builtins with these calls regardless of optimization level.

    $ llvm-objdump -dr arch/x86/purgatory/string.o | tail
    0000000000000339 memcpy:
         339: 48 b8 00 00 00 00 00 00 00 00  movabsq $0, %rax
                    000000000000033b:  R_X86_64_64  memcpy
         343: ff e0                          jmpq    *%rax

    0000000000000345 memset:
         345: 48 b8 00 00 00 00 00 00 00 00  movabsq $0, %rax
                    0000000000000347:  R_X86_64_64  memset
         34f: ff e0

Such code results in infinite recursion at runtime. This is observed when doing kexec. Instead, reuse an implementation from arch/x86/boot/compressed/string.c. This requires implementing a stub function for warn(). Also, Clang may lower memcmp's that compare against 0 to bcmp's, so add a small definition, too. See also commit 5f074f3e ("lib/string.c: implement a basic bcmp").

Fixes: 8fc5b4d4 ("purgatory: core purgatory functionality")
Reported-by: Vaibhav Rustagi <vaibhavrustagi@google.com>
Debugged-by: Vaibhav Rustagi <vaibhavrustagi@google.com>
Debugged-by: Manoj Gupta <manojgupta@google.com>
Suggested-by: Alistair Delva <adelva@google.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Vaibhav Rustagi <vaibhavrustagi@google.com>
Cc: stable@vger.kernel.org
Link: https://bugs.chromium.org/p/chromium/issues/detail?id=984056
Link: https://lkml.kernel.org/r/20190807221539.94583-1-ndesaulniers@google.com
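The failure mode in miniature, as a hedged sketch (not the purgatory code itself): when memcpy() is written as a call to __builtin_memcpy(), the compiler is free to lower the builtin back into a call to memcpy(), producing the self-jump seen in the objdump above. The cure is an open-coded loop, compiled for a freestanding environment so loop-idiom recognition cannot reintroduce the call:

    #include <stddef.h>

    /* broken shape: the builtin may be lowered back to a memcpy() call,
     * so a real memcpy() defined this way can tail-call itself forever */
    void *broken_memcpy(void *dst, const void *src, size_t n)
    {
            return __builtin_memcpy(dst, src, n);
    }

    /* safe shape: explicit byte loop; compile with
     *   gcc -c -ffreestanding -fno-builtin string_sketch.c
     * so the loop is neither replaced by nor replaced with libc calls */
    void *memcpy(void *dst, const void *src, size_t n)
    {
            unsigned char *d = dst;
            const unsigned char *s = src;

            while (n--)
                    *d++ = *s++;
            return dst;
    }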
-
- 07 Aug 2019, 2 commits
-
-
By Gustavo A. R. Silva

Mark switch cases where we are expecting to fall through. This fixes the following warning (building i386_defconfig for i386):

    arch/x86/kernel/cpu/mtrr/cyrix.c:99:6: warning: this statement may fall through [-Wimplicit-fallthrough=]

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20190805201712.GA19927@embeddedor
-
By Gustavo A. R. Silva

Mark switch cases where we are expecting to fall through. This fixes the following warning (building allnoconfig for i386):

    arch/x86/kernel/ptrace.c:202:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
      if (unlikely(value == 0))
         ^
    arch/x86/kernel/ptrace.c:206:2: note: here
      default:
      ^~~~~~~

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20190805195654.GA17831@embeddedor
-
- 06 Aug 2019, 7 commits
-
-
By Vasily Gorbik

Perf relies on the _etext and _stext symbols being one of 't', 'T', 'v' or 'V'. Put them into the .text section to guarantee that. This also moves the padding to the page boundary inside .text, with the effect that .text is now padded with nops rather than 0's, which apparently was the initial intention of specifying the 0x0700 fill expression.

Reported-by: Thomas Richter <tmricht@linux.ibm.com>
Tested-by: Thomas Richter <tmricht@linux.ibm.com>
Suggested-by: Andreas Krebbel <krebbel@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
By Vasily Gorbik

Clean up labels in head64, some of which have been unused since the beginning of git-recorded history.

Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
By Vasily Gorbik

Remove the pointless "stack recursion on stack type ..." warning, which only confuses people. There is no way to make the backchain unwinder 100% reliable. When a task is interrupted in-between the stack-frame allocation and backchain write instructions, the new stack frame's backchain pointer is left uninitialized (there are also sometimes additional instructions in-between stack-frame allocation and the backchain write due to gcc shrink-wrapping). In an attempt to unwind such a stack, the unwinder would still try to use that invalid backchain value and perform all kinds of sanity checks on it to make sure we are not pointed out of the stack. In some cases that invalid backchain value would be 0, and we would falsely treat the next stack frame as pt_regs; gprs[15] in those pt_regs might again happen to point at some address within the task's stack.

Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
By Vasily Gorbik

After some investigation, it doesn't look like the init_mm fields start_code/end_code are used anywhere besides, potentially, dump_mm for debugging purposes. Originally the value of 0 for start_code reflected the presence of the lowcore and early boot code. But with kaslr in place, the start_code/end_code range should not span memory unoccupied by the code segment. So, adjust init_mm start_code to point at the beginning of the code segment, like other architectures do.

Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
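For reference, the cross-arch convention being matched here, as a hedged kernel-context sketch (the wrapper function is hypothetical; the assignments mirror what setup_arch() does on x86 and others, using the linker-provided section symbols):

    #include <linux/init.h>
    #include <linux/mm_types.h>     /* declares init_mm */
    #include <asm/sections.h>       /* _text, _etext, _edata, _end */

    static void __init point_init_mm_at_kernel(void)
    {
            init_mm.start_code = (unsigned long)_text;      /* not 0 anymore */
            init_mm.end_code   = (unsigned long)_etext;
            init_mm.end_data   = (unsigned long)_edata;
            init_mm.brk        = (unsigned long)_end;
    }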
-
By Vasily Gorbik

Since commit d1874a0c ("s390/mm: make the pxd_offset functions more robust"), the behaviour of p4d_offset, pud_offset and pmd_offset has been changed so that they cannot be used to iterate through the top-level page table, because the index for the top-level page table is now calculated in pgd_offset. To avoid dumping the very first region/segment top-level table entry 2048 times, simply iterate the entry pointer, as is already done in the other page-walking cases.

Fixes: d1874a0c ("s390/mm: make the pxd_offset functions more robust")
Reported-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
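The shape of that fix, as a hedged kernel-context sketch (pg_state and note_page stand in for the dumper's own helpers; the walk body is illustrative, not the literal patch): derive the entry pointer once from the starting address, then advance the pointer itself across the table instead of re-deriving it from an index that no longer changes:

    static void walk_pmd_level(struct pg_state *st, pud_t *pud, unsigned long addr)
    {
            pmd_t *pmd = pmd_offset(pud, addr);     /* index computed once */
            int i;

            for (i = 0; i < PTRS_PER_PMD; i++, pmd++, addr += PMD_SIZE) {
                    if (pmd_none(*pmd))
                            continue;
                    note_page(st, addr, *pmd);      /* each entry visited exactly once */
            }
    }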
-
By Vasily Gorbik

This reverts commit db9492ce ("s390/protvirt: add memory sharing for diag 308 set/store"), which, due to an ultravisor implementation change, is not needed after all.

Fixes: db9492ce ("s390/protvirt: add memory sharing for diag 308 set/store")
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
By Gustavo A. R. Silva

Mark switch cases where we are expecting to fall through. This patch fixes the following warning (building bcm63xx_defconfig for mips):

    arch/mips/pci/ops-bcm63xx.c: In function ‘bcm63xx_pcie_can_access’:
    arch/mips/pci/ops-bcm63xx.c:474:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
      if (PCI_SLOT(devfn) == 0)
         ^
    arch/mips/pci/ops-bcm63xx.c:477:2: note: here
      default:
      ^~~~~~~

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: bcm-kernel-feedback-list@broadcom.com
Cc: linux-mips@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
-
- 05 Aug 2019, 5 commits
-
-
By Paolo Bonzini

Most code in arch/x86/kernel/kvm.c is called through x86_hyper_kvm, and thus only runs if KVM has been detected. There is no need to check again for the CPUID base.

Cc: Sergio Lopez <slp@redhat.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Greg KH

When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Also, while doing this, change kvm_arch_create_vcpu_debugfs() to return void instead of an integer, as we should not care at all whether this function actually does anything or not.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <kvm@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
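What this convention looks like at a call site, sketched against the debugfs API (the directory name and counter here are made up): debugfs_create_dir() may return an error pointer, but the debugfs_create_*() helpers accept one and quietly do nothing, so no branch on the result is needed:

    #include <linux/debugfs.h>

    static u32 hits;

    static void example_init_debugfs(void)
    {
            struct dentry *dir;

            /* no error check: a failure yields an error pointer ... */
            dir = debugfs_create_dir("example", NULL);

            /* ... which this helper accepts and turns into a no-op */
            debugfs_create_u32("hits", 0444, dir, &hits);
    }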
-
By Paolo Bonzini

There is no need for this function, as all arches have to implement kvm_arch_create_vcpu_debugfs() no matter what. A #define symbol lets us actually simplify the code.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
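A hedged sketch of the #define idiom alluded to here (the symbol name follows KVM's __KVM_HAVE_* naming convention; treat the exact spelling as illustrative): an arch advertises its implementation with a preprocessor symbol, and common code compiles the call in only where it exists, with no probe function:

    /* arch-specific header, only on arches implementing the hook */
    #define __KVM_HAVE_ARCH_VCPU_DEBUGFS

    /* common code */
    #ifdef __KVM_HAVE_ARCH_VCPU_DEBUGFS
            kvm_arch_create_vcpu_debugfs(vcpu);
    #endif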
-
By Wanpeng Li

After commit d73eb57b ("KVM: Boost vCPUs that are delivering interrupts"), a five-year-old bug was exposed. Running the ebizzy benchmark in three 80-vCPU VMs on one 80-pCPU Skylake server produced a lot of rcu_sched stall warnings splatting in the VMs after stress testing:

    INFO: rcu_sched detected stalls on CPUs/tasks: { 4 41 57 62 77} (detected by 15, t=60004 jiffies, g=899, c=898, q=15073)
    Call Trace:
      flush_tlb_mm_range+0x68/0x140
      tlb_flush_mmu.part.75+0x37/0xe0
      tlb_finish_mmu+0x55/0x60
      zap_page_range+0x142/0x190
      SyS_madvise+0x3cd/0x9c0
      system_call_fastpath+0x1c/0x21

swait_active() remains true until finish_swait() is called in kvm_vcpu_block(), and voluntarily preempted vCPUs are taken into account by the kvm_vcpu_on_spin() loop, which greatly increases the probability that the condition kvm_arch_vcpu_runnable(vcpu) is checked and found true. When APICv is enabled, the yield-candidate vCPU's VMCS RVI field then leaks (via vmx_sync_pir_to_irr()) into the spinning-on-a-taken-lock vCPU's current VMCS. This patch fixes it by conservatively checking a subset of events.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Marc Zyngier <Marc.Zyngier@arm.com>
Cc: stable@vger.kernel.org
Fixes: 98f4a146 ("KVM: add kvm_arch_vcpu_runnable() test to kvm_vcpu_on_spin() loop")
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Wanpeng Li

kvm_set_pending_timer() already takes care of waking up a sleeping vCPU that has a pending timer, so there is no need to check this again in apic_timer_expired().

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 03 Aug 2019, 1 commit
-
-
By Arnd Bergmann

ARM64 randconfig builds regularly run into a build error, especially when NUMA_BALANCING and SPARSEMEM are enabled but SPARSEMEM_VMEMMAP is not:

    #error "KASAN: not enough bits in page flags for tag"

The last-cpupid bits are already conditional on the available space, so the result of the calculation is a bit random as to whether they were already left out or not. Adding the kasan tag bits before last-cpupid makes it much more likely to end up with a successful build here, and should be reliable for randconfig at least, as long as that does not randomize NR_CPUS or NODES_SHIFT but uses the defaults. In order for the modified check not to trigger in the x86 vdso32 code, where all constants are wrong (building with -m32), enclose all the definitions with an #ifdef.

[arnd@arndb.de: build fix]
Link: http://lkml.kernel.org/r/CAK8P3a3Mno1SWTcuAOT0Wa9VS15pdU6EfnkxLbDpyS55yO04+g@mail.gmail.com
Link: http://lkml.kernel.org/r/20190722115520.3743282-1-arnd@arndb.de
Link: https://lore.kernel.org/lkml/20190618095347.3850490-1-arnd@arndb.de/
Fixes: 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
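A hedged sketch of the bit-budget check involved (macro names follow include/linux/page-flags-layout.h, but the widths and ordering here are illustrative): all the optional fields packed into the upper part of page->flags must fit above the page-flag bits, and the #error fires when the KASAN tag no longer fits:

    /* sketch of the page->flags packing constraint */
    #ifdef CONFIG_KASAN_SW_TAGS
    #define KASAN_TAG_WIDTH 8
    #else
    #define KASAN_TAG_WIDTH 0
    #endif

    #if SECTIONS_WIDTH + NODES_WIDTH + ZONES_WIDTH + \
        LAST_CPUPID_WIDTH + KASAN_TAG_WIDTH > BITS_PER_LONG - NR_PAGEFLAGS
    #error "KASAN: not enough bits in page flags for tag"
    #endif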
-
- 02 Aug 2019, 2 commits
-
-
By Masami Hiramatsu

Make debug exceptions visible to RCU so that synchronize_rcu() correctly tracks the debug exception handler. This also introduces sanity checks for user-mode exceptions, like x86's ist_enter()/ist_exit().

A debug exception can interrupt the idle task. For example, the kernel warns if we put a kprobe on a function called from the idle task, as below. The warning message says that rcu_read_lock() caused this problem, but what it actually means is that RCU has lost the context, which is effectively in NMI/IRQ.

    /sys/kernel/debug/tracing # echo p default_idle_call >> kprobe_events
    /sys/kernel/debug/tracing # echo 1 > events/kprobes/enable
    /sys/kernel/debug/tracing #
    [  135.122237]
    [  135.125035] =============================
    [  135.125310] WARNING: suspicious RCU usage
    [  135.125581] 5.2.0-08445-g9187c508bdc7 #20 Not tainted
    [  135.125904] -----------------------------
    [  135.126205] include/linux/rcupdate.h:594 rcu_read_lock() used illegally while idle!
    [  135.126839]
    [  135.126839] other info that might help us debug this:
    [  135.126839]
    [  135.127410]
    [  135.127410] RCU used illegally from idle CPU!
    [  135.127410] rcu_scheduler_active = 2, debug_locks = 1
    [  135.128114] RCU used illegally from extended quiescent state!
    [  135.128555] 1 lock held by swapper/0/0:
    [  135.128944]  #0: (____ptrval____) (rcu_read_lock){....}, at: call_break_hook+0x0/0x178
    [  135.130499]
    [  135.130499] stack backtrace:
    [  135.131192] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.2.0-08445-g9187c508bdc7 #20
    [  135.131841] Hardware name: linux,dummy-virt (DT)
    [  135.132224] Call trace:
    [  135.132491]  dump_backtrace+0x0/0x140
    [  135.132806]  show_stack+0x24/0x30
    [  135.133133]  dump_stack+0xc4/0x10c
    [  135.133726]  lockdep_rcu_suspicious+0xf8/0x108
    [  135.134171]  call_break_hook+0x170/0x178
    [  135.134486]  brk_handler+0x28/0x68
    [  135.134792]  do_debug_exception+0x90/0x150
    [  135.135051]  el1_dbg+0x18/0x8c
    [  135.135260]  default_idle_call+0x0/0x44
    [  135.135516]  cpu_startup_entry+0x2c/0x30
    [  135.135815]  rest_init+0x1b0/0x280
    [  135.136044]  arch_call_rest_init+0x14/0x1c
    [  135.136305]  start_kernel+0x4d4/0x500
    [  135.136597]

So making the debug exception visible to RCU fixes this warning.

Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
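A hedged sketch of the bracketing this describes (the helper names are illustrative; the real patch wraps arm64's debug-exception entry and exit paths): a kernel-mode debug exception may fire while RCU considers the CPU idle, so the handler announces itself NMI-style before touching any RCU-protected data:

    /* kernel-context sketch, assuming the 5.2-era RCU API */
    static void debug_exception_enter(struct pt_regs *regs)
    {
            if (!user_mode(regs))
                    rcu_nmi_enter();        /* pull RCU out of the idle
                                             * extended quiescent state */
            preempt_disable();
    }

    static void debug_exception_exit(struct pt_regs *regs)
    {
            preempt_enable_no_resched();
            if (!user_mode(regs))
                    rcu_nmi_exit();
    }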
-
By Masami Hiramatsu

kprobes manipulates the interrupted PSTATE for single-step and doesn't restore it. Thus, if we put a kprobe where pstate.D (debug) is masked, the mask will be cleared after the kprobe hits. Moreover, in the most complicated case, this can lead to a kernel crash with the message below when a nested kprobe hits:

    [  152.118921] Unexpected kernel single-step exception at EL1

When the 1st kprobe hits, do_debug_exception() is called. At this point, the debug exception (= pstate.D) must be masked (= 1). But if another kprobe hits before the single-step of the first kprobe (e.g. inside a user pre_handler), it unmasks the debug exception (pstate.D = 0) and returns. Then, when the 1st kprobe sets up single-step, it saves the current DAIF, masks DAIF, enables single-step, and restores DAIF. However, since the "D" flag in DAIF was cleared by the 2nd kprobe, the single-step exception happens soon after restoring DAIF.

This was introduced by commit 7419333f ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler").

To solve this issue, store all DAIF bits and restore them after single-stepping.

Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Fixes: 7419333f ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler")
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 01 Aug 2019, 4 commits
-
-
By Qian Cai

When CONFIG_KASAN_SW_TAGS=n, set_tag() is compiled away. GCC throws a warning:

    mm/kasan/common.c: In function '__kasan_kmalloc':
    mm/kasan/common.c:464:5: warning: variable 'tag' set but not used [-Wunused-but-set-variable]
      u8 tag = 0xff;
         ^~~

Fix it by making __tag_set() a static inline function, the same as arch_kasan_set_tag() in mm/kasan/kasan.h, for consistency, because there is a macro in arch/arm64/include/asm/kasan.h:

    #define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)

However, when CONFIG_DEBUG_VIRTUAL=n and CONFIG_SPARSEMEM_VMEMMAP=y, page_to_virt() will call __tag_set() with an incorrect parameter type, so fix that as well. Also, still let page_to_virt() return "void *" instead of "const void *", so a similar cast will not be needed in lowmem_page_address().

Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Will Deacon <will@kernel.org>
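The underlying idiom, as a hedged stand-alone sketch (names are illustrative, not the kernel's): a function-like macro discards its arguments at preprocessing time, so a variable used only as its argument looks dead to GCC; a static inline function, even one compiled to nothing, still "uses" its arguments and type-checks them:

    #include <stdio.h>

    typedef unsigned char u8;

    /* macro version: 'tag' at the call site would be set but never used
     *   #define tag_set(addr, tag) (addr)
     */

    /* static inline version: consumes and type-checks both arguments */
    static inline void *tag_set(void *addr, u8 tag)
    {
            (void)tag;      /* no-op when tagging is configured out */
            return addr;
    }

    int main(void)
    {
            char buf[8];
            u8 tag = 0xff;  /* no -Wunused-but-set-variable here */

            printf("%p\n", tag_set(buf, tag));
            return 0;
    }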
-
By Qian Cai

GCC throws a warning:

    arch/arm64/mm/mmu.c: In function 'pud_free_pmd_page':
    arch/arm64/mm/mmu.c:1033:8: warning: variable 'pud' set but not used [-Wunused-but-set-variable]
      pud_t pud;
            ^~~

because pud_table() is a macro and is compiled away. Fix it by making it, and pud_sect() as well, a static inline function.

Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Will Deacon <will@kernel.org>
-
By Masami Hiramatsu

Remove rcu_read_lock()/rcu_read_unlock() from the debug exception handlers, since we are sure those are not preemptible and interrupts are off.

Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
By Masami Hiramatsu

Prohibit probing on return_address() and the subroutines it calls, since return_address() is invoked from trace_hardirqs_off(), which is also kprobe-blacklisted.

Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
-