- 08 September 2018, 4 commits
-
-
Committed by Nadav Amit
When page-table entries are set, the compiler might optimize their assignment by using multiple instructions to set the PTE. This might turn into a security hazard if the user somehow manages to use the interim PTE. L1TF does not make our lives easier, making even an interim non-present PTE a security hazard.

Using WRITE_ONCE() to set PTEs and friends should prevent this potential security hazard.

I skimmed the differences in the binary with and without this patch. The differences are (obviously) greater when CONFIG_PARAVIRT=n as more code optimizations are possible. For better and worse, the impact on the binary with this patch is pretty small. Skimming the code did not cause anything to jump out as a security hazard, but it seems that at least move_soft_dirty_pte() caused set_pte_at() to use multiple writes.

Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180902181451.80520-1-namit@vmware.com
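For illustration only, a minimal sketch of the idea rather than the exact upstream diff (the helper name below is made up; the real patch touches x86's native PTE setters):

#include <linux/compiler.h>	/* WRITE_ONCE() */
#include <asm/pgtable_types.h>	/* pte_t on x86 */

static inline void example_set_pte(pte_t *ptep, pte_t pte)
{
	/* A single, non-tearable store: no interim PTE value can be observed
	 * by a concurrent hardware page-table walk or another CPU. */
	WRITE_ONCE(*ptep, pte);		/* previously: *ptep = pte; */
}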
-
Committed by Thomas Gleixner
activate_managed() returns EINVAL instead of -EINVAL in case of error. While this is unlikely to happen, the positive return value would cause further malfunction at the call site.

Fixes: 2db1f959 ("x86/vector: Handle managed interrupts proper")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
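For illustration only (a hypothetical helper, not the real activate_managed()), the negative-errno convention the fix restores:

#include <linux/errno.h>
#include <linux/types.h>

/* Callers typically check "if (ret < 0)", so a positive EINVAL (22) would be
 * treated as success and propagated further. */
static int example_activate(bool vector_found)
{
	if (!vector_found)
		return -EINVAL;		/* the bug returned EINVAL (positive) */
	return 0;
}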
-
Committed by Wanpeng Li

Dan Carpenter reported that the untrusted data returned from kvm_register_read() results in the following static checker warning:

arch/x86/kvm/lapic.c:576 kvm_pv_send_ipi() error: buffer underflow 'map->phys_map' 's32min-s32max'

A KVM guest can easily trigger this by executing the following assembly sequence in Ring0:

mov $10, %rax
mov $0xFFFFFFFF, %rbx
mov $0xFFFFFFFF, %rdx
mov $0, %rsi
vmcall

This causes KVM to execute the following code path:

vmx_handle_exit() -> handle_vmcall() -> kvm_emulate_hypercall() -> kvm_pv_send_ipi()

which reaches an out-of-bounds access. This patch fixes it by adding a check to kvm_pv_send_ipi() against map->max_apic_id, ignoring destinations that are not present and delivering the rest. We also check whether map->phys_map[min + i] is NULL: since max_apic_id is set to the maximum APIC ID, some phys_map entries may be NULL when APIC IDs are sparse, especially as KVM unconditionally sets max_apic_id to 255 to reserve enough space for any xAPIC ID.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Liran Alon <liran.alon@oracle.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
[Add second "if (min > map->max_apic_id)" to complete the fix. -Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
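For illustration, a hedged sketch of the clamping described above; structure and names are modeled on the commit text, not necessarily the exact code in arch/x86/kvm/lapic.c (assumes KVM-internal headers such as arch/x86/kvm/lapic.h):

/* Deliver one 64-bit chunk of the guest-supplied IPI bitmap, ignoring
 * destinations beyond max_apic_id or not present in phys_map. */
static int example_send_ipi_chunk(struct kvm_apic_map *map,
				  unsigned long ipi_bitmap, u32 min,
				  struct kvm_lapic_irq *irq)
{
	int i, count = 0;

	if (min > map->max_apic_id)	/* 'min' comes from an untrusted guest register */
		return 0;

	for_each_set_bit(i, &ipi_bitmap,
			 min_t(u32, BITS_PER_LONG, map->max_apic_id - min + 1)) {
		if (map->phys_map[min + i])	/* APIC IDs may be sparse */
			count += kvm_apic_set_irq(map->phys_map[min + i]->vcpu,
						  irq, NULL);
	}
	return count;
}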
-
Committed by Liran Alon

Consider the case where L1 had an IRQ/NMI event pending up to the point it executed VMLAUNCH/VMRESUME, but the event wasn't delivered because it was disallowed (e.g. interrupts disabled). When L1 executes VMLAUNCH/VMRESUME, L0 needs to evaluate whether this pending event should cause an exit from L2 to L1 or be delivered directly to L2 (e.g. in case L1 doesn't intercept EXTERNAL_INTERRUPT).

Usually this would be handled by L0 requesting an IRQ/NMI window by setting the VMCS accordingly. However, this setting was done on VMCS01 and now VMCS02 is active instead. Thus, when L1 executes VMLAUNCH/VMRESUME we force L0 to perform pending event evaluation by requesting a KVM_REQ_EVENT.

Note that the above scenario exists when L1 KVM is about to enter L2 but requests an "immediate-exit". In this case, L1 will disable interrupts and then send a self-IPI before entering L2.

Reviewed-by: Nikita Leshchenko <nikita.leshchenko@oracle.com>
Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
- 07 September 2018, 4 commits
-
-
Committed by Steven Price
The lock has never been used and the page tables are protected by mmu_lock in struct kvm.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
-
Committed by Marc Zyngier
kvm_unmap_hva is long gone, and we only have kvm_unmap_hva_range to deal with. Drop the now obsolete code.

Fixes: fb1522e0 ("KVM: update to new mmu_notifier semantic v2")
Cc: James Hogan <jhogan@kernel.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
-
Committed by Marc Zyngier
If trapping FPSIMD in the context of an AArch32 guest, it is critical to set FPEXC32_EL2.EN to 1 so that the trapping is taken to EL2 and not EL1. Conversely, it is just as critical *not* to set FPEXC32_EL2.EN to 1 if we're not going to trap FPSIMD, as we then corrupt the existing VFP state.

Moving the call to __activate_traps_fpsimd32 to the point where we know for sure that we are going to trap ensures that we don't set that bit spuriously.

Fixes: e6b673b7 ("KVM: arm64: Optimise FPSIMD handling to reduce guest/host thrashing")
Cc: stable@vger.kernel.org # v4.18
Cc: Dave Martin <dave.martin@arm.com>
Reported-by: Alexander Graf <agraf@suse.de>
Tested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
-
Committed by Mark Rutland
In pmd_free_pte_page() and pud_free_pmd_page() we try to warn if they hit a present non-table entry. In both cases we'll warn for non-present entries, as the VM_WARN_ON() only checks the entry is not a table entry.

This has been observed to result in warnings when booting a v4.19-rc2 kernel under qemu.

Fix this by bailing out earlier for non-present entries.

Fixes: ec28bb9c ("arm64: Implement page table free interfaces")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
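A hedged sketch of the "bail out earlier" shape (the function name is made up, pmd_table() is an arm64-specific check, and the real helpers also clear the entry, flush the TLB and free the page):

#include <linux/mm.h>
#include <asm/pgtable.h>

int example_pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
{
	pmd_t pmd = READ_ONCE(*pmdp);

	if (!pmd_present(pmd))
		return 1;			/* nothing mapped: nothing to do */

	VM_WARN_ON(!pmd_table(pmd));		/* only now is "not a table" suspicious */
	/* ... clear the PMD, flush the TLB and free the PTE page ... */
	return 1;
}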
-
- 06 September 2018, 2 commits
-
-
Committed by Jann Horn
When the kernel.print-fatal-signals sysctl has been enabled, a simple userspace crash will cause the kernel to write a crash dump that contains, among other things, the kernel gsbase into dmesg.

As suggested by Andy, limit output to pt_regs, FS_BASE and KERNEL_GS_BASE in this case.

This also moves the bitness-specific logic from show_regs() into process_{32,64}.c.

Fixes: 45807a1d ("vdso: print fatal signals")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180831194151.123586-1-jannh@google.com
-
Committed by Chuanhua Lei

Loops per jiffy is calculated by multiplying tsc_khz with 1e3 and then dividing it by HZ. Both tsc_khz and the temporary variable holding the multiplication result are of type unsigned long, so on 32-bit the result is truncated to the lower 32 bits.

Use u64 as the type for the temporary variable and cast tsc_khz to it before multiplying.

[ tglx: Massaged changelog and removed pointless braces ]

Fixes: cf7a63ef ("x86/tsc: Calibrate tsc only once")
Signed-off-by: Chuanhua Lei <chuanhua.lei@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: yixin.zhu@linux.intel.com
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Len Brown <len.brown@intel.com>
Cc: Pavel Tatashin <pasha.tatashin@microsoft.com>
Cc: Rajvi Jingar <rajvi.jingar@intel.com>
Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
Link: https://lkml.kernel.org/r/1536228203-18701-1-git-send-email-chuanhua.lei@linux.intel.com
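A minimal sketch of the pattern (function and variable names are illustrative): widen to u64 before multiplying so the intermediate product cannot be truncated on a 32-bit build.

#include <linux/types.h>
#include <asm/div64.h>		/* do_div(): 64-by-32 division usable on 32-bit kernels */

static unsigned long example_calc_lpj(unsigned long tsc_khz, unsigned int hz)
{
	u64 lpj = (u64)tsc_khz * 1000;	/* was: unsigned long lpj = tsc_khz * 1000; */

	do_div(lpj, hz);
	return (unsigned long)lpj;
}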
-
- 05 September 2018, 4 commits
-
-
Committed by Greentime Hu

This patch fixes the nds32 allmodconfig/allyesconfig build error. The GCOV kernel support embeds counters in the kernel for each line, and part of those end up in __exit text, so we need to keep EXIT_TEXT and EXIT_DATA if CONFIG_GCOV_KERNEL=y.

Link: https://lkml.org/lkml/2018/9/1/125
Signed-off-by: Greentime Hu <greentime@andestech.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
-
Committed by Eugeniy Paltsev

The __GFP_HIGHMEM flag is cleared by upper layer functions (in include/linux/dma-mapping.h), so we'll never see __GFP_HIGHMEM in the gfp argument of arch_dma_alloc. That's why alloc_pages will never return a highmem page here.

Get rid of the highmem page handling and clean up the arch_dma_alloc and arch_dma_free functions.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Eugeniy Paltsev
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Eugeniy Paltsev

So far the IOC treatment was global on ARC, being turned on (or off) for all devices in the system. With this patch, this can now be done per device using the "dma-coherent" DT property; IOW with this patch we can use both HW-coherent and regular DMA peripherals simultaneously.

The changes involved are numerous, so here is a summary:

1. Common code calls the ARC arch_setup_dma_ops() per device.

2. For coherent DMA (IOC) it plugs in the generic dma_direct_ops, which don't need any arch-specific backend: no explicit cache flushes or MMU mappings are required to provide uncached access - dma_(map|sync)_single* return early as the corresponding dma ops callbacks are NULL in generic code. So arch_sync_dma_*() -> dma_cache_*() need not handle the coherent DMA case; hence drop the ARC __dma_cache_*_ioc() routines, which were no-ops anyway.

3. For noncoherent DMA (non IOC) the generic dma_noncoherent_ops are used, which in turn call ARC-specific routines - arch_dma_alloc() no longer checks ioc_enable since it is called only for the !IOC case.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: rewrote changelog]
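A hedged sketch of the per-device ops selection summarized above (symbol and header names are the 4.19-era generic ones; the real ARC implementation differs in detail):

#include <linux/dma-mapping.h>
#include <linux/dma-direct.h>		/* dma_direct_ops, 4.19-era name */
#include <linux/dma-noncoherent.h>	/* dma_noncoherent_ops, 4.19-era name */

void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
			const struct iommu_ops *iommu, bool coherent)
{
	/* "dma-coherent" in the device's DT node arrives here as coherent == true:
	 * such a device goes through the IOC port and plain direct ops suffice;
	 * everything else gets the noncoherent ops, which call back into the
	 * arch cache maintenance routines. */
	set_dma_ops(dev, coherent ? &dma_direct_ops : &dma_noncoherent_ops);
}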
-
- 04 September 2018, 20 commits
-
-
Committed by Janosch Frank
We have to do down_write on the mm semaphore to set a bitfield in the mm context.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Fixes: a4499382 ("KVM: s390: Add huge page enablement control")
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
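A hedged sketch of the locking rule (the bitfield name follows the huge page enablement series and should be treated as illustrative):

#include <linux/kvm_host.h>
#include <linux/rwsem.h>

static void example_enable_hpages(struct kvm *kvm)
{
	/* Bitfield stores are read-modify-write on the containing word, so
	 * serialize against other writers via the mm semaphore. */
	down_write(&kvm->mm->mmap_sem);			/* mmap_sem in 4.19; mmap_lock today */
	kvm->mm->context.allow_gmap_hpage_1m = 1;	/* illustrative field name */
	up_write(&kvm->mm->mmap_sem);
}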
-
Committed by Pierre Morel
Copy the key mask to the right offset inside the shadow CRYCB.

Fixes: bbeaa58b ("KVM: s390: vsie: support aes dea wrapping keys")
Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Cc: stable@vger.kernel.org # v4.8+
Message-Id: <1535019956-23539-2-git-send-email-pmorel@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Committed by Janosch Frank

We should not return with a lock held. We also have to advance the address when we do the page clearing.

Fixes: bd096f64 ("KVM: s390: Add skey emulation fault handling")
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Message-Id: <20180830081355.59234-1-frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Committed by Greentime Hu

The trailing semicolon has to be removed from the ELF_DATA define; we must not put a semicolon there, because the macro is expanded inside an initializer list:

/kisskb/src/arch/nds32/include/asm/elf.h:126:29: error: expected '}' before ';' token
 #define ELF_DATA ELFDATA2LSB;
                             ^
/kisskb/src/fs/proc/kcore.c:318:17: note: in expansion of macro 'ELF_DATA'
   [EI_DATA] = ELF_DATA,
               ^~~~~~~~
/kisskb/src/fs/proc/kcore.c:312:15: note: to match this '{'
   .e_ident = {
              ^
/kisskb/src/scripts/Makefile.build:307: recipe for target 'fs/proc/kcore.o' failed

Signed-off-by: Greentime Hu <greentime@andestech.com>
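A minimal, self-contained illustration of why the semicolon breaks the build (plain userspace C, not the nds32 header; the macro name is made up): the macro is used as an initializer element, so it must expand to a bare value.

#include <elf.h>

#define EXAMPLE_ELF_DATA	ELFDATA2LSB	/* note: no trailing ';' */

static const unsigned char e_ident[EI_NIDENT] = {
	[EI_MAG0] = ELFMAG0,
	[EI_DATA] = EXAMPLE_ELF_DATA,
	/* With "#define EXAMPLE_ELF_DATA ELFDATA2LSB;" the line above would
	 * expand to "[EI_DATA] = ELFDATA2LSB;," and fail to compile, exactly
	 * as in the kcore.c error quoted in the commit message. */
};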
-
Committed by Greentime Hu

This fix makes sure that trace_hardirqs_off/trace_hardirqs_on get a correct return address, obtained via the frame pointer through __builtin_return_address().

Unable to handle kernel paging request at virtual address fffffffc
pgd = 3c42e9cf
[fffffffc] *pgd=02a9c000
Internal error: Oops: 1 [#1]
Modules linked in:
CPU: 0
PC is at trace_hardirqs_off+0x78/0xec
LP is at common_exception_handler+0xda/0xf4
pc : [<b23ea5a4>]    lp : [<b2352eba>]    Tainted: G        W
sp : ada60ab0  fp : efcaff48  gp : 3a020490
r25: efcb0000  r24: 00000000  r23: 00000000  r22: 00000000
r21: 00000000  r20: 000700c1  r19: 000700ca  r18: 3a21b018
r17: 00000001  r16: 00000002  r15: 00000001  r14: 0000002a
r13: 3a00a804  r12: ada60ab0  r11: 3a113af8  r10: 3a01c530
r9 : 3a124404  r8 : 00120f9c  r7 : b2352eba  r6 : 00000000
r5 : 3a126b58  r4 : 00000000  r3 : 3a1726a8  r2 : b2921000
r1 : 00000000  r0 : 00000000
IRQs off  Segment user
Process init (pid: 1, stack limit = 0x069d7f15)
Stack: (0xada60ab0 to 0xada61000)
Stack: 0aa0: 00000000 00000003 3a110000 0011f000
Stack: 0ac0: 00000005 00000000 00000000 00000000 ada60b10 3a01fe68 ada60b0c ada60b08
Stack: 0ae0: 00000000 ada60ab8 ada60b30 3a020550 00000000 00000001 3a11c2f8 3a01c6e8
Stack: 0b00: 3a01cb80 fffffba8 3a113af8 3a21b018 3a122c28 00003ec4 00000165 00000000
Stack: 0b20: 3a126aec 0000006c 00000000 00000001 3a01fe68 00000000 00000003 00000000
Stack: 0b40: 00000001 000003f8 3a020930 3a01c530 00000008 ada60c18 3a020490 3a003120
Stack: 0b60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0b80: 00000000 00000000 00000000 00000000 ffff8000 00000000 00000000 00000000
Stack: 0ba0: 00000000 00000001 3a020550 00000000 3a01d020 00000000 fffff000 fffff000
Stack: 0bc0: 00000000 00000000 00000000 00000000 ada60f2c 00000000 00000001 00000000
Stack: 0be0: 00000000 00000000 3a01fe68 fffffab0 00008034 00000008 3a0010cc 3a01fe68
Stack: 0c00: 00000000 00000000 00000001 ada60c88 3a020490 3a0139d4 0009dc6f 00000000
Stack: 0c20: 00000000 00000000 ada60fce fffff000 00000000 0000ebe0 3a020038 3a020550
Stack: 0c40: ada60f20 ada60c90 3a0007f0 3a0002a8 ada60c8c 00000000 00000000 ada60c88
Stack: 0c60: 3a020490 3a004570 00000000 00000000 ada60f20 3a0007f0 3a000000 00000000
Stack: 0c80: 3a020490 3a004850 00000000 3a013f24 3a000000 00000000 3a01ff44 00000000
Stack: 0ca0: 00000000 00000000 00000000 00000000 00000000 00000000 3a01ff84 3a01ff7c
Stack: 0cc0: 3a01ff4c 3a01ff5c 3a01ff64 3a01ff9c 3a01ffa4 3a01ffac 3a01ff6c 3a01ff74
Stack: 0ce0: 00000000 00000000 3a01ff44 00000000 00000000 00000000 00000000 00000000
Stack: 0d00: 3a01ff8c 00000000 00000000 3a01ff94 00000000 00000000 00000000 00000000
Stack: 0d20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0d40: 3a01ffbc 3a01ffb4 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0d60: 00000000 00000000 00000000 00000000 00000000 3a01ffc4 00000000 00000000
Stack: 0d80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0da0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0dc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 3a01ff54
Stack: 0de0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0e00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0e20: 00000000 00000004 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0e40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0e60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0e80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0ea0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Stack: 0ec0: 00000000 00000000 00000000 00000000 ffffffff 00000000 00000000 00000000
Stack: 0ee0: 00000000 00000000 00000000 00000000 ada60f20 00000000 00000000 00000000
Stack: 0f00: 00000000 00000000 00000000 00000000 00000000 00000000 3a020490 3a000b24
Stack: 0f20: 00000001 ada60fde 00000000 ada60fe4 ada60feb 00000000 00000021 3a038000
Stack: 0f40: 00000010 0009dc6f 00000006 00001000 00000011 00000064 00000003 00008034
Stack: 0f60: 00000004 00000020 00000005 00000008 00000007 3a000000 00000008 00000000
Stack: 0f80: 00000009 0000ebe0 0000000b 00000000 0000000c 00000000 0000000d 00000000
Stack: 0fa0: 0000000e 00000000 00000017 00000000 00000019 ada60fce 0000001f ada60ff6
Stack: 0fc0: 00000000 00000000 00000000 b5010000 fa839914 23b5dd89 a2aea540 692fc82e
Stack: 0fe0: 0074696e 454d4f48 54002f3d 3d4d5245 756e696c 692f0078 0074696e 00000000
CPU: 0 PID: 1 Comm: init Tainted: G        W 4.18.0-00015-g1888b64a2558-dirty #112
Hardware name: andestech,ae3xx (DT)
Call Trace:
[<b27a8e34>] dump_stack+0x2c/0x38
[<b2354874>] die+0x128/0x18c
[<b2356f4c>] do_page_fault+0x3b8/0x4e0
[<b2352ed4>] ret_from_exception+0x0/0x10
[<b2352eba>] common_exception_handler+0xda/0xf4

Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Greentime Hu

The dump may sometimes print too much information if the stack is corrupted or too big. This patch limits the debug information to one page of stack.

Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Zong Li

Use a macro to replace the magic number.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Zong Li

We are not using NDS32 ABI 2 for now, so just remove the __NDS32_ABI_2 preprocessor directives.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Zong Li

The function graph tracer modifies the return address on the stack to 'return_to_handler', and provides 'ftrace_graph_ret_addr' to recover the real return address.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
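A hedged sketch of how an arch stack walker typically consumes this generic API (the surrounding unwinder code is illustrative, not the nds32 implementation):

#include <linux/ftrace.h>
#include <linux/sched.h>
#include <linux/printk.h>

/* Illustrative unwinder step: if the saved return address was rewritten to
 * return_to_handler, ask ftrace for the address it replaced. */
static void example_report_frame(struct task_struct *tsk, unsigned long pc,
				 unsigned long *ret_slot, int *graph_idx)
{
	pc = ftrace_graph_ret_addr(tsk, graph_idx, pc, ret_slot);
	printk("  [<%px>] %pS\n", (void *)pc, (void *)pc);
}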
-
Committed by Zong Li

This patch contains the implementation of the dynamic function graph tracer.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Zong Li

This patch contains the implementation of the dynamic function tracer. The mcount call is composed of three instructions, so three nops are emitted as a placeholder of sufficient size.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Zong Li
Recognize NDS32 object files in recordmcount.pl.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Zong Li

This patch contains the implementation of the static function graph tracer.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Zong Li

This patch supports the static function tracer. In the nds32 ABI, we need to always push the return address onto the stack so that __builtin_return_address() works correctly; otherwise it would get the wrong value of $lp in leaf functions.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Zong Li
Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Zong Li

1. Adjust indentation.
2. Unify the argument names of each macro.
3. Add a space after commas in parameter lists.
4. Add a space after the 'if' keyword.
5. Replace spaces with tabs.
6. Change asm volatile to __asm__ __volatile__.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by Zong Li

The pointer argument of the macro needs to be evaluated once into a local first, and then the local pointer used in the macro body. In kernel/trace/trace.c, get_user(ch, ubuf++) causes an unexpected extra increment after the macro is expanded.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
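A minimal, self-contained illustration of the hazard in plain C (not the nds32 uaccess code; macro names are made up): when the macro body expands its pointer parameter more than once, an argument with side effects such as ubuf++ is evaluated more than once.

#include <stdio.h>

/* Broken shape: 'ptr' is expanded twice (once for a "check", once for the
 * load), so 'buf++' runs twice per call and the load reads the wrong byte. */
#define GET_BAD(x, ptr)		((void)(ptr), (x) = *(ptr))

/* Fixed shape: evaluate 'ptr' exactly once into a local, use only the local. */
#define GET_OK(x, ptr)				\
	do {					\
		__typeof__(ptr) _p = (ptr);	\
		(x) = *_p;			\
	} while (0)

int main(void)
{
	const char s[] = "abc";
	const char *buf = s;
	char c;

	GET_BAD(c, buf++);	/* buf advances by 2, c is 'b' instead of 'a' */
	printf("bad: c=%c, offset=%td\n", c, buf - s);

	buf = s;
	GET_OK(c, buf++);	/* buf advances by 1, c is 'a' as expected */
	printf("ok:  c=%c, offset=%td\n", c, buf - s);
	return 0;
}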
-
Committed by Zong Li

The compiler-predefined macro 'NDS32_ABI_2' has been removed; '__NDS32_ABI_2' should be used here instead.

Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
-
Committed by YueHaibing
Make sure of_device_id tables are NULL terminated.

Found by coccinelle spatch "misc/of_table.cocci"

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
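A hedged sketch of the required shape (the compatible string and variable name are made up): of_device_id tables are walked until an all-zero entry, so the empty terminator is mandatory.

#include <linux/of.h>

static const struct of_device_id example_of_match[] = {
	{ .compatible = "andestech,example-timer" },	/* hypothetical compatible */
	{ /* sentinel: stops the table walk in of_match_node() */ }
};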
-
Committed by Greentime Hu

This bug was reported by Dan Carpenter. We have to use ~loc_mask instead of !loc_mask because we need to AND (&) with the inverted bits of loc_mask.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: c9a4a8da ("nds32: Loadable modules")
Signed-off-by: Greentime Hu <greentime@andestech.com>
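A small illustration of the difference (function and variable names are made up): '!' collapses the mask to 0 or 1, while '~' inverts every bit, which is what the read-modify-write needs.

#include <stdint.h>

/* Patch the instruction bits selected by loc_mask with 'val'. */
static uint32_t patch_field(uint32_t insn, uint32_t val, uint32_t loc_mask)
{
	/* Correct: '~' preserves everything outside the field. The buggy form
	 * (insn & !loc_mask) evaluates to (insn & 0) for any non-zero mask,
	 * wiping out the bits that should have been kept. */
	return (insn & ~loc_mask) | (val & loc_mask);
}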
-
- 03 September 2018, 2 commits
-
-
Committed by Randy Dunlap
Fix kernel-doc warnings in arch/x86/include/asm/atomic.h that are caused by having a #define macro between the kernel-doc notation and the function name. Fixed by moving the #define macro to after the function implementation.

Make the same change for atomic64_{32,64}.h for consistency even though there were no kernel-doc warnings found in these header files, but there would be if they were used in generation of documentation.

Fixes these kernel-doc warnings:

../arch/x86/include/asm/atomic.h:84: warning: Excess function parameter 'i' description in 'arch_atomic_sub_and_test'
../arch/x86/include/asm/atomic.h:84: warning: Excess function parameter 'v' description in 'arch_atomic_sub_and_test'
../arch/x86/include/asm/atomic.h:96: warning: Excess function parameter 'v' description in 'arch_atomic_inc'
../arch/x86/include/asm/atomic.h:109: warning: Excess function parameter 'v' description in 'arch_atomic_dec'
../arch/x86/include/asm/atomic.h:124: warning: Excess function parameter 'v' description in 'arch_atomic_dec_and_test'
../arch/x86/include/asm/atomic.h:138: warning: Excess function parameter 'v' description in 'arch_atomic_inc_and_test'
../arch/x86/include/asm/atomic.h:153: warning: Excess function parameter 'i' description in 'arch_atomic_add_negative'
../arch/x86/include/asm/atomic.h:153: warning: Excess function parameter 'v' description in 'arch_atomic_add_negative'

Fixes: 18cc1814 ("atomics/treewide: Make test ops optional")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/0a1e678d-c8c5-b32c-2640-ed4e94d399d2@infradead.org
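A hedged sketch of the placement fix (the function body is elided; the point is only that the #define now sits after the implementation, so the kernel-doc comment binds directly to the function it documents):

/**
 * arch_atomic_inc - increment atomic variable
 * @v: pointer of type atomic_t
 *
 * Atomically increments @v by 1.
 */
static __always_inline void arch_atomic_inc(atomic_t *v)
{
	/* ... LOCK-prefixed increment of v->counter ... */
}
#define arch_atomic_inc arch_atomic_inc	/* moved below the function body */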
-
Committed by Mike Rapoport

The bootmem to memblock conversion introduced by commit 1008a115 ("m68k: switch to MEMBLOCK + NO_BOOTMEM") made the reservation of kernel code and data start from a wrong address. Fix it.

Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Tested-by: Angelo Dureghello <angelo@sysam.it>
Signed-off-by: Greg Ungerer <gerg@linux-m68k.org>
-
- 02 September 2018, 4 commits
-
-
Committed by Filippo Sironi
Handle the case where microcode gets loaded on the BSP's hyperthread sibling first and the boot_cpu_data's microcode revision doesn't get updated because of early exit due to the siblings sharing a microcode engine.

For that, simply write the updated revision on all CPUs unconditionally.

Signed-off-by: Filippo Sironi <sironi@amazon.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: prarit@redhat.com
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1533050970-14385-1-git-send-email-sironi@amazon.de
-
Committed by Prarit Bhargava

When preparing an MCE record for logging, boot_cpu_data.microcode is used to read out the microcode revision on the box.

However, on systems where a late microcode update has happened, the microcode revision output in an MCE log record is wrong because boot_cpu_data.microcode is not updated when the microcode gets updated.

But, the microcode revision saved in boot_cpu_data's microcode member should be kept up-to-date, regardless, for consistency. Make it so.

Fixes: fa94d0c6 ("x86/MCE: Save microcode revision in machine check records")
Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: sironi@amazon.de
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20180731112739.32338-1-prarit@redhat.com
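A hedged sketch of the idea (the surrounding update path and helper name are illustrative; field names follow x86's struct cpuinfo_x86): after a successful late load, mirror the new revision into boot_cpu_data as well, so later MCE records report the revision that is actually running.

#include <asm/processor.h>

static void example_note_new_revision(struct cpuinfo_x86 *c, u32 rev)
{
	c->microcode = rev;		/* per-CPU copy */

	if (c->cpu_index == boot_cpu_data.cpu_index)
		boot_cpu_data.microcode = rev;	/* keep the BSP's cached copy in sync */
}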
-
Committed by Randy Dunlap
Fix the section mismatch warning in arch/x86/mm/pti.c:

WARNING: vmlinux.o(.text+0x6972a): Section mismatch in reference from the function pti_clone_pgtable() to the function .init.text:pti_user_pagetable_walk_pte()
The function pti_clone_pgtable() references the function __init pti_user_pagetable_walk_pte().
This is often because pti_clone_pgtable lacks a __init annotation or the annotation of pti_user_pagetable_walk_pte is wrong.
FATAL: modpost: Section mismatches detected.

Fixes: 85900ea5 ("x86/pti: Map the vsyscall page if needed")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/43a6d6a3-d69d-5eda-da09-0b1c88215a2a@infradead.org
-
Committed by Christoph Hellwig
This keeps the historic default behavior for devices without a DMA mask, but removes the warning about a lacking DMA mask for doing DMA without a mask.

Reported-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Guenter Roeck <linux@roeck-us.net>
-