- 03 October 2014, 4 commits
-
-
Submitted by Yalin Wang

This patch extends the start and end addresses of initrd to be page aligned, so that we can free all of its memory, including the non-page-aligned head or tail pages of initrd. If the start or end address of initrd is not page aligned, that page cannot be freed by the free_initrd_mem() function.

Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
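As a rough, standalone sketch of the rounding involved (PAGE_SIZE, the macros and the addresses below are made up for the example, not the kernel's own helpers):

    #include <stdio.h>

    #define PAGE_SIZE            4096UL
    #define PAGE_ALIGN_DOWN(x)   ((x) & ~(PAGE_SIZE - 1))
    #define PAGE_ALIGN_UP(x)     (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

    int main(void)
    {
        unsigned long initrd_start = 0x80001234UL;  /* hypothetical, not page aligned */
        unsigned long initrd_end   = 0x80201ff0UL;

        /* Widen the range outward so the partial head/tail pages can be freed too. */
        unsigned long free_start = PAGE_ALIGN_DOWN(initrd_start);
        unsigned long free_end   = PAGE_ALIGN_UP(initrd_end);

        printf("free [%#lx, %#lx) instead of [%#lx, %#lx)\n",
               free_start, free_end, initrd_start, initrd_end);
        return 0;
    }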
-
Submitted by Yalin Wang

This patch changes the __init_end address to a page-aligned address, so that free_initmem() can free the whole .init section. If the end address is not page aligned, it will be rounded down to a page-aligned address, and the unaligned tail page will then not be freed.

Signed-off-by: wang <yalin.wang2010@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Linus Walleij

When both 'cache-size' and 'cache-sets' are specified for an L2 cache controller node, parse those properties and set up the set size based on which type of L2 cache controller we are using. Update the L2 cache controller Device Tree binding with the optional 'cache-size', 'cache-sets', 'cache-block-size' and 'cache-line-size' properties. These come from the ePAPR specification.

Using the cache size, number of sets and cache line size we can calculate the desired associativity of the L2 cache. This is done by the calculation:

    set size = cache size / sets
    ways = set size / line size
    way size = cache size / ways = sets * line size
    associativity = cache size / way size

For example, a PB1176 DT node that looks like this:

    L2: l2-cache {
        compatible = "arm,l220-cache";
        (...)
        arm,override-auxreg;
        cache-size = <131072>; // 128kB
        cache-sets = <512>;
        cache-line-size = <32>;
    };

ends up like this:

    L2C OF: override cache size: 131072 bytes (128KB)
    L2C OF: override line size: 32 bytes
    L2C OF: override way size: 16384 bytes (16KB)
    L2C OF: override associativity: 8
    L2C: DT/platform modifies aux control register: 0x02020fff -> 0x02030fff
    L2C-220 cache controller enabled, 8 ways, 128 kB
    L2C-220: CACHE_ID 0x41000486, AUX_CTRL 0x06030fff

which is consistent with the value earlier hardcoded for the PB1176 platform. This patch is an extended version based on the initial patch by Florian Fainelli.

Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
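The arithmetic above can be checked with a small standalone program using the PB1176 numbers from the example (illustrative code only, not part of the patch):

    #include <stdio.h>

    int main(void)
    {
        unsigned int cache_size = 131072;   /* bytes, from cache-size       */
        unsigned int sets       = 512;      /* from cache-sets              */
        unsigned int line_size  = 32;       /* bytes, from cache-line-size  */

        unsigned int set_size      = cache_size / sets;       /* 256   */
        unsigned int ways          = set_size / line_size;    /* 8     */
        unsigned int way_size      = cache_size / ways;       /* 16384 */
        unsigned int associativity = cache_size / way_size;   /* 8     */

        printf("ways=%u way_size=%u associativity=%u\n",
               ways, way_size, associativity);
        return 0;
    }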
-
Submitted by Paolo Bonzini

This fixes the following OOPS:

    loaded kvm module (v3.17-rc1-168-gcec26bc3)
    BUG: unable to handle kernel paging request at fffffffffffffffe
    IP: [<ffffffff81168449>] put_page+0x9/0x30
    PGD 1e15067 PUD 1e17067 PMD 0
    Oops: 0000 [#1] PREEMPT SMP
    [<ffffffffa063271d>] ? kvm_vcpu_reload_apic_access_page+0x5d/0x70 [kvm]
    [<ffffffffa013b6db>] vmx_vcpu_reset+0x21b/0x470 [kvm_intel]
    [<ffffffffa0658816>] ? kvm_pmu_reset+0x76/0xb0 [kvm]
    [<ffffffffa064032a>] kvm_vcpu_reset+0x15a/0x1b0 [kvm]
    [<ffffffffa06403ac>] kvm_arch_vcpu_setup+0x2c/0x50 [kvm]
    [<ffffffffa062e540>] kvm_vm_ioctl+0x200/0x780 [kvm]
    [<ffffffff81212170>] do_vfs_ioctl+0x2d0/0x4b0
    [<ffffffff8108bd99>] ? __mmdrop+0x69/0xb0
    [<ffffffff812123d1>] SyS_ioctl+0x81/0xa0
    [<ffffffff8112a6f6>] ? __audit_syscall_exit+0x1f6/0x2a0
    [<ffffffff817229e9>] system_call_fastpath+0x16/0x1b
    Code: c6 78 ce a3 81 4c 89 e7 e8 d9 80 ff ff 0f 0b 4c 89 e7 e8 8f f6 ff ff e9 fa fe ff ff 66 2e 0f 1f 84 00 00 00 00 00 66 66 66 66 90 <48> f7 07 00 c0 00 00 55 48 89 e5 75 1e 8b 47 1c 85 c0 74 27 f0
    RIP [<ffffffff81193045>] put_page+0x5/0x50

which occurs when not using the in-kernel irqchip ("-machine kernel_irqchip=off" with QEMU). The fix is to make the same check in kvm_vcpu_reload_apic_access_page that we already have in vmx.c's vm_need_virtualize_apic_accesses().

Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
Tested-by: Jan Kiszka <jan.kiszka@siemens.com>
Fixes: 4256f43f
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
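A rough sketch of the kind of early-return guard described, mirroring the irqchip check used by vm_need_virtualize_apic_accesses(); the exact upstream condition may differ:

    /*
     * Sketch only: bail out early when userspace emulates the irqchip,
     * so there is no APIC access page to reload.
     */
    void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
    {
        if (!irqchip_in_kernel(vcpu->kvm))
            return;

        /* ... existing APIC access page reload logic ... */
    }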
-
- 01 October 2014, 2 commits
-
-
Submitted by David Hildenbrand

This patch introduces the halt_wakeup counter used by common code and uses it to count vcpu wakeups done in s390 arch-specific code.

Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Submitted by Christian Borntraeger

There is nothing to do for KVM to support TOD-CLOCK steering.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
-
- 30 September 2014, 3 commits
-
-
Submitted by Jon Medhurst

When compiling kprobes-test-arm.c the following error has been observed:

    /tmp/ccoT403o.s:21439: Error: bad immediate value for offset (4168)

This is caused by the compiler spilling its literal pool too far away from the site which is trying to reference it with a PC-relative load. This arises because the compiler underestimates the size of the inline assembler code present, which it apparently approximates as 4 bytes per line or instruction. We fix this problem by moving the operations which generate more than 4 bytes out of the text section, specifically by moving the .ascii directives to the .rodata section.

Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Nathan Lynch

Joachim Eastwood reports that commit fbfb872f "ARM: 8148/1: flush TLS and thumbee register state during exec" causes a boot-time crash on a Cortex-M4 nommu system:

    Freeing unused kernel memory: 68K (281e5000 - 281f6000)
    Unhandled exception: IPSR = 00000005 LR = fffffff1
    CPU: 0 PID: 1 Comm: swapper Not tainted 3.17.0-rc6-00313-gd2205fa30aa7 #191
    task: 29834000 ti: 29832000 task.ti: 29832000
    PC is at flush_thread+0x2e/0x40
    LR is at flush_thread+0x21/0x40
    pc : [<2800954a>]    lr : [<2800953d>]    psr: 4100000b
    sp : 29833d60  ip : 00000000  fp : 00000001
    r10: 00003cf8  r9 : 29b1f000  r8 : 00000000
    r7 : 29b0bc00  r6 : 29834000  r5 : 29832000  r4 : 29832000
    r3 : ffff0ff0  r2 : 29832000  r1 : 00000000  r0 : 282121f0
    xPSR: 4100000b
    CPU: 0 PID: 1 Comm: swapper Not tainted 3.17.0-rc6-00313-gd2205fa30aa7 #191
    [<2800afa5>] (unwind_backtrace) from [<2800a327>] (show_stack+0xb/0xc)
    [<2800a327>] (show_stack) from [<2800a963>] (__invalid_entry+0x4b/0x4c)

The problem is that set_tls is attempting to clear the TLS location in the kernel-user helper page, which isn't set up on V7M. Fix this by guarding the write to the kuser helper page with a CONFIG_KUSER_HELPERS ifdef.

Fixes: fbfb872f ARM: 8148/1: flush TLS and thumbee register state during exec
Reported-by: Joachim Eastwood <manabian@gmail.com>
Tested-by: Joachim Eastwood <manabian@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
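Conceptually, the guard looks something like the sketch below; this is a simplified illustration, not the actual diff (the real set_tls lives in ARM's asm/tls.h and does more than this), with 0xffff0ff0 being the TLS slot in the kuser helper page that V7M systems never map:

    static inline void set_tls_sketch(unsigned long val)
    {
        /* ... write val to the hardware TLS register ... */

    #ifdef CONFIG_KUSER_HELPERS
        /* Only touch the kuser helper page when it actually exists. */
        *((unsigned long *)0xffff0ff0) = val;
    #endif
    }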
-
Submitted by Krzysztof Kozlowski

This fixes build breakage of platsmp.c if ARMv6 was chosen for the compile-time options (e.g. by building allmodconfig):

    $ make allmodconfig
    $ make
      CC      arch/arm/mach-exynos/platsmp.o
    /tmp/ccdQM0Eg.s: Assembler messages:
    /tmp/ccdQM0Eg.s:432: Error: selected processor does not support ARM mode `isb '
    /tmp/ccdQM0Eg.s:437: Error: selected processor does not support ARM mode `isb '
    /tmp/ccdQM0Eg.s:438: Error: selected processor does not support ARM mode `dsb '
    make[1]: *** [arch/arm/mach-exynos/platsmp.o] Error 1

The error was introduced in commit "ARM: EXYNOS: Move code from hotplug.c to platsmp.c". Previously, code using the v7_exit_coherency_flush() macro was built with the '-march=armv7-a' flag, but this flag disappeared during the move. Fix this by annotating the v7_exit_coherency_flush() asm code with the armv7-a architecture.

Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reported-by: Mark Brown <broonie@kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
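One way to express such an annotation is an .arch directive at the top of the inline assembly, roughly as sketched below (illustrative function, not the exact upstream hunk):

    /*
     * Sketch: mark the inline assembly as ARMv7-A so the assembler accepts
     * isb/dsb even when the file is otherwise built for an older architecture.
     */
    static inline void exit_coherency_flush_sketch(void)
    {
        asm volatile(
            ".arch  armv7-a\n\t"
            "isb\n\t"
            /* ... disable the C bit, flush the cache, leave SMP mode ... */
            "dsb\n\t"
            : : : "memory");
    }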
-
- 29 September 2014, 1 commit
-
-
Submitted by Aneesh Kumar K.V

We use the CMA reserved area for creating the guest hash page table. Don't do the reservation in non-hypervisor mode; this avoids an unnecessary CMA reservation when booting with limited memory configurations such as fadump and kdump.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 26 September 2014, 14 commits
-
-
Submitted by Uwe Kleine-König

The warning was introduced in 2009 (commit 4bf1fa5a ([ARM] 5613/1: implement CALLER_ADDRESSx)). The only "problem" here is that CALLER_ADDRESSx for x > 1 returns NULL, which doesn't do much harm. The drawback of implementing a fix (i.e. using unwind tables to implement CALLER_ADDRESSx) is that much of the unwinder code would need to be marked as not traceable.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Uwe Kleine-König

Syntactically, FOOTBRIDGE and ARCH_FOOTBRIDGE are identical (the former is defined in an if ARCH_FOOTBRIDGE block and the latter selects the former). Semantically, FOOTBRIDGE means "we have a DC21285 (aka footbridge) device in the system" and ARCH_FOOTBRIDGE is the support for boards with a footbridge device, so ARCH_FOOTBRIDGE is the better symbol here.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Behan Webster

With compilers which follow the C99 standard (like modern versions of gcc and clang), "extern inline" does the wrong thing (it emits code for an externally linkable version of the inline function). In this case, using static inline and removing the NULL version of return_address in return_address.c does the right thing.

Signed-off-by: Behan Webster <behanw@converseincode.com>
Reviewed-by: Mark Charlebois <charlebm@gmail.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
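A standalone illustration of the semantics in question (the function name is made up; this is not the kernel's return_address code):

    #include <stdio.h>

    /*
     * With C99 semantics, "extern inline" makes the translation unit emit an
     * externally visible definition; if that lived in a shared header, every
     * file including it would emit one.  "static inline" keeps the helper
     * private to each translation unit, which is what headers normally want.
     */
    static inline unsigned long caller_address_0(void)
    {
        return (unsigned long)__builtin_return_address(0);
    }

    int main(void)
    {
        printf("immediate caller: %#lx\n", caller_address_0());
        return 0;
    }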
-
Submitted by Nathan Lynch

The sigpage is currently placed alongside shared libraries etc. in the address space. Similar to what x86_64 does for its VDSO, place the sigpage at a randomized offset above the stack, so that learning the base address of the sigpage doesn't help expose where shared libraries are loaded in the address space (and vice versa).

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Nathan Lynch

_install_special_mapping allows the VMA to be identified in /proc/pid/maps without the use of arch_vma_name, providing a slight net reduction in object size:

       text    data     bss     dec     hex filename
       2996      96     144    3236     ca4 arch/arm/kernel/process.o (before)
       2956     104     144    3204     c84 arch/arm/kernel/process.o (after)

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
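A sketch of how a named special mapping can be installed; the helper name, field values and flags below are illustrative, not the exact arch/arm/kernel/process.c change:

    static struct page *signal_pages[2];

    static const struct vm_special_mapping sigpage_mapping = {
        .name  = "[sigpage]",          /* appears directly in /proc/<pid>/maps */
        .pages = signal_pages,
    };

    static int map_sigpage(struct mm_struct *mm, unsigned long addr)
    {
        struct vm_area_struct *vma;

        vma = _install_special_mapping(mm, addr, PAGE_SIZE,
                                       VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC,
                                       &sigpage_mapping);
        return PTR_ERR_OR_ZERO(vma);
    }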
-
Submitted by Vincent Sanders

Enable gcov support for ARM based on original patches by David Singleton and George G. Davis.

Riku - updated the patch to the current mainline kernel. The patch has been submitted in 2010 and 2012; for symmetry, now in 2014 too.

https://lwn.net/Articles/390419/
http://marc.info/?l=linux-arm-kernel&m=133823081813044

v2: remove arch/arm/kernel from gcov disabled files

Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Naresh Kamboju <naresh.kamboju@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: Vincent Sanders <vincent.sanders@collabora.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Russell King

If we are not changing the control register value, avoid writing to it. Writes to the control register can be very expensive, taking around a hundred cycles or so.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
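The idea reduces to a read-compare-write pattern along these lines (a sketch using the get_cr()/set_cr() style accessors, not the exact patch):

    static inline void set_cr_if_changed(unsigned long new_cr)
    {
        /* Skip the ~100-cycle control register write when nothing changes. */
        if (get_cr() != new_cr)
            set_cr(new_cr);
    }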
-
Submitted by Joe Perches

Use the more common pr_warn.

Other miscellanea:
o Coalesce formats
o Realign arguments

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Christoffer Dall

When we catch something that's not a permission fault or a translation fault, we log the unsupported FSC in the kernel log, but we were masking off the bottom bits of the FSC, which was not very helpful. Also correctly report the FSC for data and instruction faults rather than telling people it was a DFCS, which doesn't exist in the ARM ARM.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Joel Schopp

The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits and not all the bits in the PA range. This is clearly a bug that manifests itself on systems that allocate memory in the higher address space range.

[ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT instead of a hard-coded value and to move the alignment check of the allocation to mmu.c. Also added a comment explaining why we hardcode the IPA range and changed the stage-2 pgd allocation to be based on the 40 bit IPA range instead of the maximum possible 48 bit PA range. - Christoffer ]

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joel Schopp <joel.schopp@amd.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Markos Chandras

Every mcount() call in the MIPS 32-bit kernel is done as follows:

    [...]
    move at, ra
    jal _mcount
    addiu sp, sp, -8
    [...]

but upon returning from the mcount() function, the stack pointer is not adjusted properly. This is explained in detail in 58b69401 (MIPS: Function tracer: Fix broken function tracing). Commit ad8c3969 ("MIPS: Unbreak function tracer for 64-bit kernel.") fixed the stack manipulation for 64-bit, but it didn't fix it completely for MIPS32.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7792/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Submitted by Paul Burton

Commit bbd426f5 "MIPS: Simplify FP context access" modified the SIFROMREG & SIFROMHREG macros such that they return unsigned rather than signed 32-bit integers. I had believed that to be fine, but inadvertently missed the MFC1 & MFHC1 cases which write to a struct pt_regs regs element. On MIPS32 this is fine, but on 64-bit those saved regs' fields are 64 bits wide. Using unsigned values caused the 32-bit value from the FP register to be zero extended rather than sign extended as the architecture specifies, causing incorrect emulation of the MFC1 & MFHC1 instructions. Fix by reintroducing the casts to signed integers, and therefore the sign extension.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: stable@vger.kernel.org # v3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7848/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
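The difference between the two widenings is easy to demonstrate in a standalone program (illustrative only):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t fpr32 = 0x80000001u;   /* 32-bit value read from an FP register */

        /* Widening an unsigned 32-bit value zero extends it... */
        uint64_t zero_ext = (uint64_t)fpr32;
        /* ...whereas casting through a signed 32-bit type sign extends,
         * which is what MFC1/MFHC1 emulation needs for 64-bit pt_regs. */
        uint64_t sign_ext = (uint64_t)(int64_t)(int32_t)fpr32;

        printf("zero extended: 0x%016llx\n", (unsigned long long)zero_ext);
        printf("sign extended: 0x%016llx\n", (unsigned long long)sign_ext);
        return 0;
    }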
-
Submitted by Richard Weinberger

The symbol is an orphan; get rid of it.

Signed-off-by: Richard Weinberger <richard@nod.at>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Submitted by Masanari Iida

This patch fixes spelling typos found in Kconfig.

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 25 September 2014, 13 commits
-
-
Submitted by Robin Murphy

The alignment fixup incorrectly decodes faulting ARM VLDn/VSTn instructions (where the optional alignment hint is given but incorrect) as LDR/STR, leading to register corruption. Detect these and correctly treat them as unhandled, so that userspace gets the fault it expects.

Reported-by: Simon Hosie <simon.hosie@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Will Deacon

SCTLR.HA (hardware access flag) is deprecated and not actually implemented by any CPUs. Furthermore, it can confuse the cr_alignment checks, where the whole value of SCTLR is compared against the value sitting in the hardware, since the bit is actually RAZ/WI and will not match the saved cr_alignment value.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Matt Fleming

If we're executing the 32-bit efi_char16_printk() code path (i.e. running on top of 32-bit firmware), we know that efi_early->text_output will be a 32-bit value, even though ->text_output has type u64. Unfortunately, we currently pass ->text_output directly to efi_early->call(), so for CONFIG_X86_32 the compiler will push a 64-bit value onto the stack, causing the other parameters to be misaligned. The way we handle this in the rest of the EFI boot stub is to pass pointers as arguments to efi_early->call(), which automatically does the right thing (pointers are 32-bit on CONFIG_X86_32, and we simply ignore the upper 32 bits of the argument register if running in 64-bit mode with 32-bit firmware). This fixes a corruption bug when printing strings from the 32-bit EFI boot stub.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=84241
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Submitted by Alexei Starovoitov

- fix BPF_LD|ABS|IND from negative offsets: make sure to sign extend the lower 32 bits in the 64-bit register before calling C helpers from JITed code; otherwise the 'int k' argument of the bpf_internal_load_pointer_neg_helper() function will be added as a large unsigned integer, causing the packet size check to trigger and abort the program. It's worth noting that JITed code for 'A = A op K' will affect the upper 32 bits differently depending on whether K is simm13 or not: small constants are sign extended, whereas large constants are stored in a temp register and zero extended. That is ok, and we don't have to pay the penalty of sign extension for every sethi, since all classic BPF instructions have 32-bit semantics and we only need to set the correct upper bits when transitioning from JITed code into C.

- though instructions 'A &= 0' and 'A *= 0' are odd, the JIT compiler should not optimize them out

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller

Commit df568d8e ("scsi: Use 'depends' with LIBFC instead of 'select'.") removed what happened to be the only instance of 'select NET'. Defconfigs that were relying on the select now lack networking support.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller

Commit df568d8e ("scsi: Use 'depends' with LIBFC instead of 'select'.") removed what happened to be the only instance of 'select NET'. Defconfigs that were relying on the select now lack networking support.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller

Commit df568d8e ("scsi: Use 'depends' with LIBFC instead of 'select'.") removed what happened to be the only instance of 'select NET'. Defconfigs that were relying on the select now lack networking support.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller

Commit df568d8e ("scsi: Use 'depends' with LIBFC instead of 'select'.") removed what happened to be the only instance of 'select NET'. Defconfigs that were relying on the select now lack networking support.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Marek

Commit 5d6be6a5 ("scsi_netlink : Make SCSI_NETLINK dependent on NET instead of selecting NET") removed what happened to be the only instance of 'select NET'. Defconfigs that were relying on the select now lack networking support.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Marek

Commit 5d6be6a5 ("scsi_netlink : Make SCSI_NETLINK dependent on NET instead of selecting NET") removed what happened to be the only instance of 'select NET'. Defconfigs that were relying on the select now lack networking support.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Marek

Commit 5d6be6a5 ("scsi_netlink : Make SCSI_NETLINK dependent on NET instead of selecting NET") removed what happened to be the only instance of 'select NET'. Defconfigs that were relying on the select now lack networking support.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Marek

Commit 5d6be6a5 ("scsi_netlink : Make SCSI_NETLINK dependent on NET instead of selecting NET") removed what happened to be the only instance of 'select NET'. Defconfigs that were relying on the select now lack networking support.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: linux-parisc@vger.kernel.org
Signed-off-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michal Marek

Commit 5d6be6a5 ("scsi_netlink : Make SCSI_NETLINK dependent on NET instead of selecting NET") removed what happened to be the only instance of 'select NET'. Defconfigs that were relying on the select now lack networking support.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: linux-mips@linux-mips.org
Signed-off-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 September 2014, 3 commits
-
-
Submitted by Mathias Krause

The "by8" implementation introduced in commit 22cddcc7 ("crypto: aes - AES CTR x86_64 "by8" AVX optimization") is failing crypto tests as it handles counter block overflows differently. It only accounts for the rightmost 32 bits as a counter -- not the whole block as all other implementations do. This makes it fail the cryptomgr test #4 that specifically tests this corner case. As we're quite late in the release cycle, just disable the "by8" variant for now.

Reported-by: Romain Francoise <romain@orebokech.com>
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Chandramouli Narayanan <mouli@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Submitted by Wanpeng Li

The following bug can be triggered by hot adding and removing a large number of xen domain0's vcpus repeatedly:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
    IP: [..] find_busiest_group
    PGD 5a9d5067 PUD 13067 PMD 0
    Oops: 0000 [#3] SMP
    [...]
    Call Trace:
     load_balance
     ? _raw_spin_unlock_irqrestore
     idle_balance
     __schedule
     schedule
     schedule_timeout
     ? lock_timer_base
     schedule_timeout_uninterruptible
     msleep
     lock_device_hotplug_sysfs
     online_store
     dev_attr_store
     sysfs_write_file
     vfs_write
     SyS_write
     system_call_fastpath

The last level cache shared mask is built during CPU up, and the build_sched_domain() routine takes advantage of it to set up the sched domain CPU topology. However, llc_shared_mask is not released during CPU disable, which leads to an invalid sched domain CPU topology. This patch fixes it by releasing the llc_shared_mask correctly during CPU disable.

Yasuaki also reported that this can happen on real hardware: https://lkml.org/lkml/2014/7/22/1018

His case is here:

==
Here is an example on my system. My system has 4 sockets and each socket has 15 cores and HT is enabled. In this case, each core of the sockets is numbered as follows:

             | CPU#
    Socket#0 | 0-14 , 60-74
    Socket#1 | 15-29, 75-89
    Socket#2 | 30-44, 90-104
    Socket#3 | 45-59, 105-119

Then llc_shared_mask of CPU#30 has 0x3fff80000001fffc0000000. It means that the last level cache of Socket#2 is shared with CPU#30-44 and 90-104.

When hot-removing socket#2 and #3, each core of the sockets is numbered as follows:

             | CPU#
    Socket#0 | 0-14 , 60-74
    Socket#1 | 15-29, 75-89

But llc_shared_mask is not cleared. So llc_shared_mask of CPU#30 remains having 0x3fff80000001fffc0000000.

After that, when hot-adding socket#2 and #3, each core of the sockets is numbered as follows:

             | CPU#
    Socket#0 | 0-14 , 60-74
    Socket#1 | 15-29, 75-89
    Socket#2 | 30-59
    Socket#3 | 90-119

Then llc_shared_mask of CPU#30 becomes 0x3fff8000fffffffc0000000. It means that the last level cache of Socket#2 is shared with CPU#30-59 and 90-104. So the mask has the wrong value.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Tested-by: Linn Crosetto <linn@hp.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Toshi Kani <toshi.kani@hp.com>
Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: <stable@vger.kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1411547885-48165-1-git-send-email-wanpeng.li@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
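A sketch of the kind of cleanup described, run from the CPU-offline path; the function name and exact call site are illustrative, not the upstream diff:

    static void remove_cpu_from_llc_masks(int cpu)
    {
        int sibling;

        /* Drop this CPU from the LLC mask of every CPU it used to share with... */
        for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
            cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));

        /* ...and clear its own mask so a later hot-add rebuilds it from scratch. */
        cpumask_clear(cpu_llc_shared_mask(cpu));
    }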
-
Submitted by Tang Chen

In order to make the APIC access page migratable, stop pinning it in memory. And because the APIC access page is no longer pinned in memory, we can remove kvm_arch->apic_access_page. When we need to write its physical address into the vmcs, we use gfn_to_page() to get its page struct, which is needed to call page_to_phys(); the page is then immediately unpinned.

Suggested-by: Gleb Natapov <gleb@kernel.org>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
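A sketch of the look-up, use, immediately-unpin pattern described above; the function name is made up and the VMX-side write is only illustrative of how the physical address gets consumed:

    static void reload_apic_access_page_sketch(struct kvm_vcpu *vcpu)
    {
        struct page *page;

        page = gfn_to_page(vcpu->kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
        if (is_error_page(page))
            return;

        /* Use the page just long enough to program the VMCS field... */
        vmcs_write64(APIC_ACCESS_ADDR, page_to_phys(page));

        /* ...then drop the reference right away so the page stays migratable. */
        put_page(page);
    }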
-