- 12 March 2018 (1 commit)
-
-
Submitted by Andrea Parri
The following two commits: 79d44246 ("locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB") and 472e8c55 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs") ended up adding unnecessary barriers to the _local() variants on Alpha, which the previous code took care to avoid. Fix them by adding the smp_mb() into the cmpxchg() macro rather than into the ____cmpxchg() variants. Reported-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrea Parri <parri.andrea@gmail.com> Cc: Alan Stern <stern@rowland.harvard.edu> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-alpha@vger.kernel.org Fixes: 472e8c55 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs") Fixes: 79d44246 ("locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB") Link: http://lkml.kernel.org/r/1519704058-13430-1-git-send-email-parri.andrea@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
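A minimal sketch of the barrier placement described above, assuming the upstream macro names; the bodies are illustrative rather than the actual Alpha header:

    /*
     * Illustrative sketch: full-barrier semantics live in the SMP
     * cmpxchg() wrapper, while the _local() variant expands
     * ____cmpxchg() directly and therefore stays barrier-free.
     */
    #define cmpxchg(ptr, o, n)                                      \
    ({                                                              \
            __typeof__(*(ptr)) __ret;                               \
            smp_mb();                                               \
            __ret = ____cmpxchg((ptr), (o), (n), sizeof(*(ptr)));   \
            smp_mb();                                               \
            __ret;                                                  \
    })

    #define cmpxchg_local(ptr, o, n)                                \
            ____cmpxchg((ptr), (o), (n), sizeof(*(ptr)))  /* no barriers */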
-
- 10 March 2018 (1 commit)
-
-
Submitted by Marc Zyngier
A recent update to the ARM SMCCC ARCH_WORKAROUND_1 specification allows firmware to return a non-zero, positive value to indicate that, although the mitigation is implemented at the higher exception level, the CPU on which the call is made is not affected. Let's relax the check on the return value from ARCH_WORKAROUND_1 so that we only error out if the returned value is negative. Fixes: b092201e ("arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support") Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
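A hedged sketch of the relaxed check; struct arm_smccc_res is the standard result holder, and the function name and surrounding code are assumptions:

    /* Returns 0 when the firmware workaround can be relied upon. */
    static int check_bp_hardening(struct arm_smccc_res *res)
    {
            int ret = (int)res->a0;     /* SMCCC result comes back in a0 */

            if (ret < 0)
                    return -1;          /* negative: workaround unavailable */

            /*
             * Zero or positive is fine; a positive value additionally
             * means this particular CPU is not affected.
             */
            return 0;
    }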
-
- 09 March 2018 (1 commit)
-
-
Submitted by Francis Deslauriers
Disable kprobe probing of the entry trampoline: .entry_trampoline is a code area used to ensure page table isolation between userspace and kernelspace. At the beginning of the trampoline's execution, we load the kernel's CR3 register, which enables translation of kernel virtual addresses to physical addresses. Before this happens, most kernel addresses cannot be translated because the running process' CR3 is still in use. If a kprobe is placed on the trampoline code before that change of the CR3 register happens, the kernel crashes because the int3 handling pages are not accessible. To fix this, add the .entry_trampoline section to the kprobe blacklist to prohibit probing of code before all the kernel pages are accessible. Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: mathieu.desnoyers@efficios.com Cc: mhiramat@kernel.org Link: http://lkml.kernel.org/r/1520565492-4637-2-git-send-email-francis.deslauriers@efficios.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
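A sketch of how such a blacklist check could look, assuming linker-provided start and end symbols for the section; the identifiers here are illustrative, not necessarily the exact upstream names:

    /* Reject any probe address that falls inside .entry_trampoline. */
    extern char __entry_trampoline_start[], __entry_trampoline_end[];

    bool arch_within_kprobe_blacklist(unsigned long addr)
    {
            return addr >= (unsigned long)__entry_trampoline_start &&
                   addr <  (unsigned long)__entry_trampoline_end;
    }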
-
- 08 March 2018 (12 commits)
-
-
Submitted by Seunghun Han
The check_interval file in the /sys/devices/system/machinecheck/machinecheck<cpu number> directory is a global timer value for MCE polling. If it is changed by one CPU, mce_restart() broadcasts the event to the other CPUs to delete and restart the MCE polling timer, and __mcheck_cpu_init_timer() reinitializes the mce_timer variable. If more than one CPU writes a specific value to the check_interval file concurrently, mce_timer is not protected from such concurrent accesses and all kinds of explosions happen. Since only root can write to those sysfs variables, the issue is not a big deal security-wise. However, concurrent writes to these configuration variables are void of reason, so the proper thing to do is to serialize the access with a mutex.

Boris:
- Make store_int_with_restart() use device_store_ulong() to filter out negative intervals
- Limit min interval to 1 second
- Correct locking
- Massage commit message

Signed-off-by: Seunghun Han <kkamagui@gmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Tony Luck <tony.luck@intel.com> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20180302202706.9434-1-kkamagui@gmail.com
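A hedged sketch of the serialized store path the Boris notes describe; the mutex name and the exact body are assumptions, not the upstream code:

    static DEFINE_MUTEX(mce_sysfs_mutex);   /* illustrative name */

    static ssize_t store_int_with_restart(struct device *s,
                                          struct device_attribute *attr,
                                          const char *buf, size_t size)
    {
            ssize_t ret;

            mutex_lock(&mce_sysfs_mutex);
            ret = device_store_ulong(s, attr, buf, size);   /* rejects negatives */
            mce_restart();          /* re-arm the polling timers exactly once */
            mutex_unlock(&mce_sysfs_mutex);

            return ret;
    }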
-
Submitted by Tony Luck
Updating microcode used to be relatively rare. Now that it has become more common, we should save the microcode version in a machine check record to make sure that people looking at the error have this important information bundled with the rest of the logged information. [ Borislav: Simplify a bit. ] Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Yazen Ghannam <yazen.ghannam@amd.com> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20180301233449.24311-1-tony.luck@intel.com
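A hedged sketch of stamping the running revision into the record; the struct mce field name is an assumption mirroring the cpuinfo field:

    /* Copy the running microcode revision into the MCE record. */
    static void mce_save_microcode(struct mce *m)
    {
            m->microcode = boot_cpu_data.microcode;   /* assumed field */
    }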
-
Submitted by Seunghun Han
s/visinble/visible/ Signed-off-by: Seunghun Han <kkamagui@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/1520397135-132809-1-git-send-email-kkamagui@gmail.com
-
Submitted by Ashok Raj
Original idea by Ashok, completely rewritten by Borislav. Before you read any further: the early loading method is still the preferred one and you should always do that. The following patch improves the late loading mechanism for long-running jobs and cloud use cases. Gather all cores and serialize the microcode update on them by doing it one-by-one to make the late update process as reliable as possible and avoid potential issues caused by the microcode update. [ Borislav: Rewrite completely. ] Co-developed-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Tested-by: Ashok Raj <ashok.raj@intel.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com> Link: https://lkml.kernel.org/r/20180228102846.13447-8-bp@alien8.de
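A hedged sketch of the rendezvous idea: every CPU enters a stop_machine() callback, waits until all CPUs have arrived, then the update is applied strictly one CPU at a time. The helper names are illustrative and the sketch assumes dense CPU numbering:

    static atomic_t cpus_in, turn;

    static int ucode_load_rendezvous(void *unused)
    {
            int cpu = smp_processor_id();

            atomic_inc(&cpus_in);
            while (atomic_read(&cpus_in) < num_online_cpus())
                    cpu_relax();            /* park until all CPUs arrive */

            while (atomic_read(&turn) != cpu)
                    cpu_relax();            /* strict one-by-one ordering */

            apply_microcode_local();        /* hypothetical helper */
            atomic_inc(&turn);
            return 0;
    }

    /* Invoked roughly as:
     *   stop_machine(ucode_load_rendezvous, NULL, cpu_online_mask);
     */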
-
Submitted by Borislav Petkov
... so that any newer version can land in the cache and can later be fished out by the application functions. Do that before grabbing the hotplug lock. Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Tested-by: Ashok Raj <ashok.raj@intel.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com> Link: https://lkml.kernel.org/r/20180228102846.13447-7-bp@alien8.de
-
Submitted by Borislav Petkov
The cache might contain a newer patch - look in there first. A follow-on change will make sure the newest patches are loaded into the cache of microcode patches. Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Tested-by: Ashok Raj <ashok.raj@intel.com> Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com> Link: https://lkml.kernel.org/r/20180228102846.13447-6-bp@alien8.de
-
Submitted by Ashok Raj
Avoid loading microcode if any of the CPUs are offline, and issue a warning. Having different microcode revisions on the system at any time is outright dangerous. [ Borislav: Massage changelog. ] Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Tested-by: Ashok Raj <ashok.raj@intel.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com> Link: http://lkml.kernel.org/r/1519352533-15992-4-git-send-email-ashok.raj@intel.com Link: https://lkml.kernel.org/r/20180228102846.13447-5-bp@alien8.de
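A minimal sketch of such a guard, close in spirit to what the changelog describes; the function name and message text are assumptions:

    /* Refuse a late microcode load unless every present CPU is online. */
    static int check_online_cpus(void)
    {
            if (num_online_cpus() == num_present_cpus())
                    return 0;

            pr_err("Not all CPUs online, aborting microcode update.\n");
            return -EINVAL;
    }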
-
Submitted by Ashok Raj
Updating microcode is less error prone when caches have been flushed, depending on what exactly the microcode is updating. For example, some of the issues around certain Broadwell parts can be addressed by doing a full cache flush. [ Borislav: Massage it and use native_wbinvd() in both cases. ] Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Tested-by: Ashok Raj <ashok.raj@intel.com> Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com> Link: http://lkml.kernel.org/r/1519352533-15992-3-git-send-email-ashok.raj@intel.com Link: https://lkml.kernel.org/r/20180228102846.13447-4-bp@alien8.de
-
Submitted by Ashok Raj
After updating microcode on one of the threads of a core, the sibling thread automatically gets the update because the microcode resources on a hyperthreaded core are shared between the two threads. Check the microcode revision on the CPU before performing a microcode update, and thus save us the WRMSR 0x79, which is a particularly expensive operation. [ Borislav: Massage changelog and coding style. ] Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Tested-by: Ashok Raj <ashok.raj@intel.com> Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com> Link: http://lkml.kernel.org/r/1519352533-15992-2-git-send-email-ashok.raj@intel.com Link: https://lkml.kernel.org/r/20180228102846.13447-3-bp@alien8.de
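A hedged sketch of the early-out; the helper and field names follow common microcode-driver patterns but are assumptions here:

    static int apply_microcode_if_needed(struct microcode_intel *mc)
    {
            u32 rev = intel_get_microcode_revision();   /* current rev */

            /* The sibling thread may have updated the shared core already. */
            if (rev >= mc->hdr.rev)
                    return 0;               /* skip the costly WRMSR 0x79 */

            native_wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc->bits);
            return intel_get_microcode_revision() == mc->hdr.rev ? 0 : -EIO;
    }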
-
Submitted by Borislav Petkov
It is a useless remnant from earlier times. Use the ucode_state enum directly. No functional change. Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Tested-by: Ashok Raj <ashok.raj@intel.com> Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com> Link: https://lkml.kernel.org/r/20180228102846.13447-2-bp@alien8.de
-
Submitted by Konrad Rzeszutek Wilk
As:

1) It's known that hypervisors lie about the environment anyhow (host mismatch)

2) Even if the hypervisor (Xen, KVM, VMware, etc.) provided a valid "correct" value, it all gets to be very murky when migration happens (do you provide the "new" microcode of the machine?).

And in reality the cloud vendors are the ones that should make sure that the microcode that is running is correct and we should just sing lalalala and trust them. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Cc: Wanpeng Li <kernellwp@gmail.com> Cc: kvm <kvm@vger.kernel.org> Cc: Krčmář <rkrcmar@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20180226213019.GE9497@char.us.oracle.com
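A sketch of the guest-side early exit this implies; the init function and helper shown are illustrative:

    /* Skip the microcode loader entirely when running as a guest. */
    static int __init microcode_init(void)
    {
            if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
                    return -ENODEV;     /* the host owns microcode updates */

            return setup_microcode_driver();    /* hypothetical helper */
    }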
-
Submitted by Andy Lutomirski
Since Linux v3.2, vsyscalls have been deprecated and slow. From v3.2 on, Linux has had three vsyscall modes: "native", "emulate", and "none"; "emulate" is the default. All known user programs work correctly in emulate mode, but vsyscalls turn into page faults and are emulated, which is very slow. In "native" mode, the vsyscall page is easily usable as an exploit gadget, but vsyscalls are a bit faster - they turn into normal syscalls. (This is in contrast to vDSO functions, which can be much faster than syscalls.) In "none" mode, there are no vsyscalls. For all practical purposes, "native" was really just a chicken bit in case something went wrong with the emulation. It's been over six years, and nothing has gone wrong. Delete it. Signed-off-by: Andy Lutomirski <luto@kernel.org> Acked-by: Kees Cook <keescook@chromium.org> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: Kernel Hardening <kernel-hardening@lists.openwall.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/519fee5268faea09ae550776ce969fa6e88668b0.1520449896.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 07 March 2018 (6 commits)
-
-
Submitted by Dominik Brodowski
As %rdi is never used except in the following push, there is no need to restore %rdi to its original value. Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: luto@amacapital.net Cc: viro@zeniv.linux.org.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Dominik Brodowski
With the CPU renaming registers on its own, and all the overhead of the syscall entry/exit, it is doubtful whether the compiled output of

    mov %r8, %rax
    mov %rcx, %r8
    mov %rax, %rcx
    jmpq sys_clone

is measurably slower than the hand-crafted version of

    xchg %r8, %rcx

So get rid of this special case. Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: luto@amacapital.net Cc: viro@zeniv.linux.org.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Dominik Brodowski
While at it, convert declarations of type "unsigned" to "unsigned int". Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: luto@amacapital.net Cc: viro@zeniv.linux.org.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Dominik Brodowski
Using SYSCALL_DEFINEx() is recommended, so use it here as well. Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: luto@amacapital.net Cc: viro@zeniv.linux.org.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
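For reference, a hedged example of the SYSCALL_DEFINEx() style; the syscall name and helper are invented for illustration:

    /* Three-argument syscall declared via the recommended macro family. */
    SYSCALL_DEFINE3(example, int, fd, const char __user *, buf, size_t, count)
    {
            return do_example(fd, buf, count);   /* hypothetical helper */
    }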
-
Submitted by Dominik Brodowski
sys32_vm86_warning() is long gone. Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: luto@amacapital.net Cc: viro@zeniv.linux.org.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Dominik Brodowski
If the compat entry point is equivalent to the native entry point, it does not need to be specified explicitly. Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: luto@amacapital.net Cc: viro@zeniv.linux.org.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 06 March 2018 (9 commits)
-
-
Submitted by David Hildenbrand
Even if we don't have extended SCA support, we can have more than 64 CPUs if we don't enable any HW features that might use the SCA entries. Now, this works just fine, but we missed a return, which is why we would actually store the SCA entries. If we have more than 64 CPUs, this means writing outside of the basic SCA - bad. Let's fix this. This allows > 64 CPUs when running nested (under vSIE) without random crashes. Fixes: a6940674 ("KVM: s390: allow 255 VCPUs when sca entries aren't used") Reported-by: Christian Borntraeger <borntraeger@de.ibm.com> Tested-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20180306132758.21034-1-david@redhat.com> Cc: stable@vger.kernel.org Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Submitted by Bharata B Rao
With ibm,dynamic-memory-v2 and ibm,drc-info coming around the same time, byte 22 in vector 5 of the ibm architecture vector table got set twice separately. The end result is that the guest kernel isn't advertising support for ibm,dynamic-memory-v2. Fix this by removing the duplicate assignment of byte 22. Fixes: 02ef6dd8 ("powerpc: Enable support for ibm,drc-info devtree property") Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Submitted by Christian Borntraeger
When a system call is interrupted, we might call the critical section cleanup handler that re-does some of the operations. When we are between .Lsysc_vtime and .Lsysc_do_svc, we might also redo the saving of the problem state registers r0-r7:

    .Lcleanup_system_call:
            [...]
    0:      # update accounting time stamp
            mvc     __LC_LAST_UPDATE_TIMER(8),__LC_SYNC_ENTER_TIMER
            # set up saved register r11
            lg      %r15,__LC_KERNEL_STACK
            la      %r9,STACK_FRAME_OVERHEAD(%r15)
            stg     %r9,24(%r11)            # r11 pt_regs pointer
            # fill pt_regs
            mvc     __PT_R8(64,%r9),__LC_SAVE_AREA_SYNC
    --->    stmg    %r0,%r7,__PT_R0(%r9)

The problem is that we might have already zeroed out r0. The fix is to move the zeroing of r0 after sysc_do_svc. Reported-by: Farhan Ali <alifm@linux.vnet.ibm.com> Fixes: 7041d281 ("s390: scrub registers on kernel entry and KVM exit") Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Eric W. Biederman
Due to an oversight when refactoring siginfo_t, si_pkey has been in the wrong position since 4.16-rc1. Add an explicit check of the offset of every user-space field in siginfo_t and compat_siginfo_t to make a mistake like this hard to make in the future. I have run this code on 4.15 and 4.16-rc1 with the position of si_pkey fixed, and all of the fields show up in the same location. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
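A hedged sketch of such a build-time offset pin; the macro name is invented and SI_PKEY_OFFSET is a placeholder for the ABI-documented value:

    /* Fail the build if a user-visible siginfo field ever moves. */
    #define CHECK_SI_OFFSET(field, expected)                        \
            BUILD_BUG_ON(offsetof(siginfo_t, field) != (expected))

    /* Usage (placeholder constant): CHECK_SI_OFFSET(si_pkey, SI_PKEY_OFFSET); */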
-
Submitted by Justin Chen
Commit a3e6c1ef ("MIPS: IRQ: Fix disable_irq on CPU IRQs") fixes an issue where disable_irq did not actually disable the irq. The bug caused our IPIs not to be disabled, which actually is the correct behavior. With the addition of commit a3e6c1ef ("MIPS: IRQ: Fix disable_irq on CPU IRQs"), the IPIs were getting disabled going into suspend, and schedule_ipi() was thus not being called. This caused deadlocks where schedulable tasks were not being scheduled and other CPUs were waiting for them to do something. Add the IRQF_NO_SUSPEND flag so an irq_disable will not be called on the IPIs during suspend. Signed-off-by: Justin Chen <justinpopo6@gmail.com> Fixes: a3e6c1ef ("MIPS: IRQ: Fix disabled_irq on CPU IRQs") Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/17385/ [jhogan@kernel.org: checkpatch: wrap long lines and fix commit refs] Signed-off-by: James Hogan <jhogan@kernel.org>
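A hedged sketch of registering an IPI line with IRQF_NO_SUSPEND; the handler, action, and irq names are illustrative:

    static irqreturn_t ipi_resched_interrupt(int irq, void *dev_id)
    {
            scheduler_ipi();        /* kick the scheduler on this CPU */
            return IRQ_HANDLED;
    }

    static struct irqaction irq_resched = {
            .handler = ipi_resched_interrupt,
            .flags   = IRQF_PERCPU | IRQF_NO_SUSPEND,  /* stays armed in suspend */
            .name    = "IPI resched",
    };

    /* Registered during SMP bring-up, e.g.:
     *   setup_irq(resched_irq, &irq_resched);
     */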
-
Submitted by Colin Ian King
Trivial fix to a spelling mistake in the debug message text. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Submitted by Davidlohr Bueso
At the point of the sysfs callback, the call to gup is done without mmap_sem (or any lock, for that matter). This is racy. As such, use the get_user_pages_fast() alternative and safely avoid taking the lock, if possible. Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Tony Luck <tony.luck@intel.com>
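A hedged sketch of the lockless pinning; the wrapper is invented, and the third argument uses the integer write flag from the get_user_pages_fast() signature of that era:

    /* Pin one user page without taking mmap_sem. */
    static int pin_user_page(unsigned long vaddr, struct page **page)
    {
            /* gup_fast walks the page tables locklessly. */
            if (get_user_pages_fast(vaddr, 1, 1 /* write */, page) != 1)
                    return -EINVAL;     /* could not pin the page */

            return 0;                   /* caller must put_page() later */
    }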
-
Submitted by Matthew Wilcox
While we've only seen inlining problems with atomic_sub_return(), the other atomic operations could have the same problem. Convert all remaining operations to use the same solution as atomic_sub_return(). Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Submitted by Corentin Labbe
Since my system uses python3 by default, arch/ia64/scripts/unwcheck.py no longer runs. This patch converts it to python3 syntax. I have run it with python2/python3 while printing the values of start/end/rlen_sum, which could be impacted by this change, and I see no difference. Fixes: 94a47083 ("scripts: change scripts to use system python instead of env") Signed-off-by: Corentin Labbe <clabbe@baylibre.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 05 March 2018 (2 commits)
-
-
Submitted by Huacai Chen
Commit 7a407aa5 ("MIPS: Push ARCH_MIGHT_HAVE_PC_SERIO down to platform level") moves the global MIPS ARCH_MIGHT_HAVE_PC_SERIO select down to various platforms, but doesn't add it to the Loongson64 platforms which need it, so add the selects to these platforms too. Fixes: 7a407aa5 ("MIPS: Push ARCH_MIGHT_HAVE_PC_SERIO down to platform level") Signed-off-by: Huacai Chen <chenhc@lemote.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/18704/ Signed-off-by: James Hogan <jhogan@kernel.org>
-
Submitted by Huacai Chen
Commit a211a082 ("MIPS: Push ARCH_MIGHT_HAVE_PC_PARPORT down to platform level") moves the global MIPS ARCH_MIGHT_HAVE_PC_PARPORT select down to various platforms, but doesn't add it to the Loongson64 platforms which need it, so add the selects to these platforms too. Fixes: a211a082 ("MIPS: Push ARCH_MIGHT_HAVE_PC_PARPORT down to platform level") Signed-off-by: Huacai Chen <chenhc@lemote.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/18703/ Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 04 March 2018 (1 commit)
-
-
Submitted by Kan Liang
There is no event extension (bit 21) for SKX UPI, so use 'event' instead of 'event_ext'. Reported-by: Stephane Eranian <eranian@google.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Fixes: cd34cd97 ("perf/x86/intel/uncore: Add Skylake server uncore support") Link: http://lkml.kernel.org/r/1520004150-4855-1-git-send-email-kan.liang@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 03 March 2018 (1 commit)
-
-
Submitted by Laurent Vivier
Since commit 8b24e69f ("KVM: PPC: Book3S HV: Close race with testing for signals on guest entry"), if CONFIG_VIRT_CPU_ACCOUNTING_GEN is set, the time spent in the guest is not accounted to guest time and user time, but instead to system time. This is because guest_enter()/guest_exit() are called while interrupts are disabled and the tick counter cannot be updated between them. To fix that, move guest_exit() after local_irq_enable(), and as guest_enter() is called with IRQs disabled, call guest_enter_irqoff() instead. Fixes: 8b24e69f ("KVM: PPC: Book3S HV: Close race with testing for signals on guest entry") Signed-off-by: Laurent Vivier <lvivier@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
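A hedged ordering sketch of the resulting run loop; run_guest() stands in for the real guest-entry call:

    static int vcpu_run_accounted(struct kvm_vcpu *vcpu)
    {
            int trap;

            guest_enter_irqoff();       /* entered with IRQs still disabled */
            trap = run_guest(vcpu);     /* hypothetical guest-entry call */
            local_irq_enable();         /* the tick can account time again */
            guest_exit();               /* guest time is now attributed correctly */

            return trap;
    }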
-
- 02 March 2018 (6 commits)
-
-
Submitted by Guenter Roeck
When running s390 images with 'compat' processes, the following BUG is seen repeatedly: BUG: non-zero pgtables_bytes on freeing mm: -16384 Bisect points to commit b4e98d9a ("mm: account pud page tables"). Analysis shows that init_new_context() is called with mm->context.asce_limit set to _REGION3_SIZE. In this situation, pgtables_bytes remains set to 0 and is not increased. The message is displayed when the affected process dies and mm_dec_nr_puds() is called. Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Fixes: b4e98d9a ("mm: account pud page tables") Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
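A hedged sketch of the kind of balancing the analysis points at; whether this matches the exact upstream fix is an assumption:

    /* In init_new_context(): if the mm starts out with a region-3
     * (3-level) table, account it so the counter balances the
     * mm_dec_nr_puds() call at teardown. */
    if (mm->context.asce_limit == _REGION3_SIZE)
            mm_inc_nr_puds(mm);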
-
Submitted by Helge Deller
When run under QEMU, calling mfctl(16) creates some overhead because the QEMU timer has to be scaled and moved into the register. This patch reduces the number of calls to mfctl(16) by moving the calls out of the loops. Additionally, increase the minimal time interval to 8000 cycles instead of 500 to compensate for possible QEMU delays when delivering interrupts. Signed-off-by: Helge Deller <deller@gmx.de> Cc: stable@vger.kernel.org # 4.14+
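A hedged sketch in the spirit of the PA-RISC timer code; the per-cpu state and structure are illustrative, and the key point is that cr16 is read once rather than on every loop iteration:

    static void itimer_rearm(void)
    {
            unsigned long now = mfctl(16);              /* read cr16 once */
            unsigned long next = __this_cpu_read(next_tick); /* hypothetical */

            while ((long)(next - now) <= 0)
                    next += clocktick;                  /* skip missed ticks */

            if ((long)(next - now) < 8000)
                    next = now + 8000;                  /* QEMU latency margin */

            mtctl(next, 16);                            /* re-arm the timer */
            __this_cpu_write(next_tick, next);
    }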
-
Submitted by Helge Deller
When running on QEMU we know that the (emulated) cr16 cpu-internal clocks are synchronized, so let's use them unconditionally on QEMU. Signed-off-by: Helge Deller <deller@gmx.de> Cc: stable@vger.kernel.org # 4.14+
-
Submitted by Helge Deller
The architecture specification says (for 64-bit systems): PDC is a per-processor resource, and operating system software must be prepared to manage separate pointers to PDCE_PROC for each processor. The address of PDCE_PROC for the monarch processor is stored in the Page Zero location MEM_PDC. The address of PDCE_PROC for each non-monarch processor is passed in gr26 when PDCE_RESET invokes OS_RENDEZ. Currently we still use one PDC for all CPUs, but in case we face a machine which follows the specification, let's warn about it. Signed-off-by: Helge Deller <deller@gmx.de>
-
Submitted by Helge Deller
For security reasons, do not expose the virtual kernel memory layout to userspace. Signed-off-by: Helge Deller <deller@gmx.de> Suggested-by: Kees Cook <keescook@chromium.org> Cc: stable@vger.kernel.org # 4.15 Reviewed-by: Kees Cook <keescook@chromium.org>
-
Submitted by John David Anglin
The change to flush_kernel_vmap_range() wasn't sufficient to avoid the SMP stalls. The problem is that some drivers call these routines with interrupts disabled, while interrupts need to be enabled for flush_tlb_all() and flush_cache_all() to work. This version adds checks to ensure interrupts are not disabled before calling routines that need IPI interrupts; when interrupts are disabled, we now drop into slower code. The attached change fixes the ordering of cache and TLB flushes in several cases. When we flush the cache using the existing PTE/TLB entries, we need to flush the TLB after doing the cache flush. We don't need to do this when we flush the entire instruction and data caches, as these flushes don't use the existing TLB entries. The same is true for tmpalias region flushes. The flush_kernel_vmap_range() and invalidate_kernel_vmap_range() routines have been updated. Secondly, we added a new purge_kernel_dcache_range_asm() routine to pacache.S and use it in invalidate_kernel_vmap_range(). Nominally, purges are faster than flushes as the cache lines don't have to be written back to memory. Hopefully, this is sufficient to resolve the remaining problems due to cache speculation. So far, testing indicates that this is the case. I did work up a patch using tmpalias flushes, but there is a performance hit because we need the physical address for each page, and we also need to sequence access to the tmpalias flush code, which increases the probability of stalls. Signed-off-by: John David Anglin <dave.anglin@bell.net> Cc: stable@vger.kernel.org # 4.9+ Signed-off-by: Helge Deller <deller@gmx.de>
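A hedged sketch of the IRQ-aware dispatch and the flush-then-invalidate ordering described above; the local-variant fallbacks and the exact body are assumptions:

    void flush_kernel_vmap_range(void *vaddr, int size)
    {
            unsigned long start = (unsigned long)vaddr;
            unsigned long end = start + size;

            if (irqs_disabled()) {
                    /* No IPIs possible: drop into the slower local path. */
                    flush_data_cache_local(NULL);   /* illustrative fallback */
                    flush_tlb_all_local(NULL);
                    return;
            }

            /* Flush through the existing translations first... */
            flush_kernel_dcache_range_asm(start, end);
            /* ...and only then drop the TLB entries that were used. */
            flush_tlb_kernel_range(start, end);
    }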
-