- 03 October 2020, 5 commits
-
-
Submitted by Atish Patra

Add RISC-V architecture-specific stub code that copies the kernel image to a valid address and jumps to it after boot services are terminated. Also enable the UEFI-related kernel configs for RISC-V.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Link: https://lore.kernel.org/r/20200421033336.9663-4-atish.patra@wdc.com
[ardb:
 - move hartid fetch into check_platform_features()
 - use image_size not reserve_size
 - select ISA_C
 - do not use dram_base]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Atish Patra

The Linux kernel Image can appear as an EFI application when the appropriate PE/COFF header fields are present at the beginning of the Image header. An EFI application loader can then load a kernel Image directly, and the EFI stub residing in the kernel can boot Linux. Add the necessary PE/COFF header.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Link: https://lore.kernel.org/r/20200421033336.9663-3-atish.patra@wdc.com
[ardb:
 - use C prefix for c.li to ensure the expected opcode is emitted
 - align all image sections according to PE/COFF section alignment]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Atish Patra

Currently, page table setup is done during setup_vm_final, where fixmap can be used to create temporary mappings and the physical frames are allocated from the memblock_alloc_* functions. However, this won't work if page table mappings need to be created for a different mm context (i.e. the efi mm) at a later point in time. Use the generic kernel page allocation functions and macros for any mapping created after setup_vm_final.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Atish Patra

UEFI uses early IO or memory mappings for runtime services before the normal ioremap() is usable. Add the necessary fixmap bindings and pmd mappings so that generic early ioremap support works.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Anup Patel

Currently, RISC-V reserves 1MB of fixmap memory for the device tree, but it maps only a single PMD (2MB) for the whole fixmap, which leaves less than 1MB for other kernel features that also need fixmap space, such as early ioremap. The fixmap size could be increased by another 2MB, but that adds complexity and changes the virtual memory layout; any further feature requiring fixmap space would force the layout to move again.

Technically, the DT doesn't need a fixmap at all, because the memory occupied by the DT is only used during boot. Instead, map the device tree in the early page table using two consecutive PGD mappings at lower addresses (< PAGE_OFFSET). This frees a lot of fixmap space, raises the maximum supported device tree size to PGDIR_SIZE, and lets the init memory section be used for the same purpose as well. This simplifies the fixmap implementation.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 16 September 2020, 16 commits
-
-
Submitted by Ard Biesheuvel

The way we use the base of DRAM in the EFI stub is problematic, as it is ill-defined what the base of DRAM actually means. There are some restrictions on the placement of the FDT and the initrd which are defined in terms of dram_base, but given that the placement of the kernel in memory is what defines these boundaries (as on ARM, this is where the linear region starts), it is better to use the image address in these cases and disregard dram_base altogether.

Reviewed-by: Maxim Uvarov <maxim.uvarov@linaro.org>
Tested-by: Maxim Uvarov <maxim.uvarov@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
-
Submitted by Tian Tao

asm/thread_info.h is included more than once; remove the duplicate include.

Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Pekka Enberg

If the page fault "cause" is EXC_INST_PAGE_FAULT, set the FAULT_FLAG_INSTRUCTION flag to let handle_mm_fault() and friends know about it. This has no functional effect because RISC-V uses the default arch_vma_access_permitted() implementation, which always returns true. However, dax_pmd_fault(), for example, has a tracepoint that uses FAULT_FLAG_INSTRUCTION, so we might as well set it.

Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
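A minimal sketch of the idea (not the verbatim upstream diff; fault_flags_from_cause() is a hypothetical helper used here only for illustration, the real change sets the flag inline in do_page_fault()):

/* arch/riscv/mm/fault.c (sketch): translate the trap cause into the
 * extra fault flag before calling handle_mm_fault(). */
static unsigned int fault_flags_from_cause(unsigned long cause,
					   unsigned int flags)
{
	if (cause == EXC_INST_PAGE_FAULT)	/* instruction fetch faulted */
		flags |= FAULT_FLAG_INSTRUCTION;
	return flags;
}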
-
Submitted by Pekka Enberg

The "inline" keyword is in the wrong place in the vmalloc_fault() declaration:

>> arch/riscv/mm/fault.c:56:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]
   56 | static void inline vmalloc_fault(struct pt_regs *regs, int code, unsigned long addr)
      | ^~~~~~

Fix that up.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
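The fix is simply to move the storage-class/inline specifiers ahead of the return type, so the declaration becomes:

static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long addr);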
-
Submitted by Zong Li

There are no standard CSRs that provide cache information on RISC-V, so the way to get this information is from the DT. Currently AT_L1I_X, AT_L1D_X and AT_L2_X are present in the glibc headers, and sysconf() can use them to get cache information through the AUX vector. The result of 'getconf -a' is as follows:

LEVEL1_ICACHE_SIZE      32768
LEVEL1_ICACHE_ASSOC     8
LEVEL1_ICACHE_LINESIZE  64
LEVEL1_DCACHE_SIZE      32768
LEVEL1_DCACHE_ASSOC     8
LEVEL1_DCACHE_LINESIZE  64
LEVEL2_CACHE_SIZE       2097152
LEVEL2_CACHE_ASSOC      32
LEVEL2_CACHE_LINESIZE   64

Signed-off-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
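Besides getconf, userspace can read the same entries directly with getauxval(); a small sketch, assuming a glibc new enough to define the AT_L1*_CACHESIZE / AT_L2_CACHESIZE tags (each call returns 0 when the kernel does not fill the entry in):

#include <elf.h>
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
	/* Each tag reads as 0 if the kernel did not provide the entry. */
	printf("L1I cache size: %lu\n", getauxval(AT_L1I_CACHESIZE));
	printf("L1D cache size: %lu\n", getauxval(AT_L1D_CACHESIZE));
	printf("L2  cache size: %lu\n", getauxval(AT_L2_CACHESIZE));
	return 0;
}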
-
Submitted by Zong Li

AT_VECTOR_SIZE_ARCH should be defined as the maximum number of NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined for RISC-V at all, even though ARCH_DLINFO contains one NEW_AUX_ENT for the vDSO address.

Signed-off-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
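A sketch of what this amounts to in the RISC-V auxvec header (the exact file contents are an assumption; the value of 1 follows from ARCH_DLINFO emitting a single arch-specific entry):

/* arch/riscv/include/uapi/asm/auxvec.h (sketch) */

/* vDSO location: the one arch-specific NEW_AUX_ENT emitted by ARCH_DLINFO */
#define AT_SYSINFO_EHDR		33

/* slots reserved in mm->saved_auxv for the arch-specific entries above */
#define AT_VECTOR_SIZE_ARCH	1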
-
Submitted by Zong Li

Set cacheinfo.{size,sets,line_size} for each cache node, so that this information can be exposed to userland through the auxiliary vector.

Signed-off-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Pekka Enberg

Move the access error check into an access_error() function to simplify the control flow in do_page_fault().

Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
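A rough sketch of the extracted predicate (paraphrased from the description, not necessarily the verbatim upstream code): each fault cause is checked against the matching VMA permission bit.

static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
{
	switch (cause) {
	case EXC_INST_PAGE_FAULT:
		return !(vma->vm_flags & VM_EXEC);	/* fetch from non-executable VMA */
	case EXC_LOAD_PAGE_FAULT:
		return !(vma->vm_flags & VM_READ);	/* load from non-readable VMA */
	case EXC_STORE_PAGE_FAULT:
		return !(vma->vm_flags & VM_WRITE);	/* store to non-writable VMA */
	default:
		panic("%s: unhandled cause %lu", __func__, cause);
	}
}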
-
Submitted by Pekka Enberg

Handle the translation of EXC_STORE_PAGE_FAULT to FAULT_FLAG_WRITE once, before looking up the VMA. This makes it easier to extract the access error logic in the next patch.

Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Pekka Enberg

Simplify the mm_fault_error() handling function by eliminating the unnecessary gotos.

Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Pekka Enberg

Move the fault error handling to an mm_fault_error() function and convert the gotos into calls to the new function.

Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Pekka Enberg

Move the fault error handling after the retry logic. This simplifies the code flow and makes it easier to move the fault error handling into its own function.

Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Pekka Enberg

Move the vmalloc fault handling in do_page_fault() to a vmalloc_fault() function and convert the gotos into calls to the new function.

Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Pekka Enberg

Move the bad area handling in do_page_fault() to a bad_area() function and convert the gotos into calls to the new function.

Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Pekka Enberg

Move the no-context handling in do_page_fault() to a no_context() function and convert the gotos into calls to the new function.

Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Pekka Enberg

Combine the two retry-logic if statements in do_page_fault() to simplify the code.

Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 22 August 2020, 3 commits
-
-
Submitted by Will Deacon

When an MMU notifier call results in unmapping a range that spans multiple PGDs, we end up calling into cond_resched_lock() when crossing a PGD boundary, since this avoids running into RCU stalls during VM teardown. Unfortunately, if the VM is destroyed as a result of OOM, then blocking is not permitted and the call into the scheduler triggers the following BUG():

| BUG: sleeping function called from invalid context at arch/arm64/kvm/mmu.c:394
| in_atomic(): 1, irqs_disabled(): 0, non_block: 1, pid: 36, name: oom_reaper
| INFO: lockdep is turned off.
| CPU: 3 PID: 36 Comm: oom_reaper Not tainted 5.8.0 #1
| Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
| Call trace:
|  dump_backtrace+0x0/0x284
|  show_stack+0x1c/0x28
|  dump_stack+0xf0/0x1a4
|  ___might_sleep+0x2bc/0x2cc
|  unmap_stage2_range+0x160/0x1ac
|  kvm_unmap_hva_range+0x1a0/0x1c8
|  kvm_mmu_notifier_invalidate_range_start+0x8c/0xf8
|  __mmu_notifier_invalidate_range_start+0x218/0x31c
|  mmu_notifier_invalidate_range_start_nonblock+0x78/0xb0
|  __oom_reap_task_mm+0x128/0x268
|  oom_reap_task+0xac/0x298
|  oom_reaper+0x178/0x17c
|  kthread+0x1e4/0x1fc
|  ret_from_fork+0x10/0x30

Use the new 'flags' argument to kvm_unmap_hva_range() to ensure that we only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is set in the notifier flags.

Cc: <stable@vger.kernel.org>
Fixes: 8b3405e3 ("kvm: arm/arm64: Fix locking for kvm_free_stage2_pgd")
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Message-Id: <20200811102725.7121-3-will@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Will Deacon

The 'flags' field of 'struct mmu_notifier_range' is used to indicate whether invalidate_range_{start,end}() are permitted to block. In the case of kvm_mmu_notifier_invalidate_range_start(), this field is not forwarded to the architecture-specific implementation of kvm_unmap_hva_range(), and therefore the backend cannot sensibly decide whether or not to block. Add an extra 'flags' parameter to kvm_unmap_hva_range() so that architectures know whether or not they are permitted to block.

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Message-Id: <20200811102725.7121-2-will@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
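A rough sketch of the resulting interface and how the arm64 backend can use it (paraphrased from the two commit messages above; the body is illustrative, not the exact diff):

/* Architectures now receive the MMU-notifier flags. */
int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start,
			unsigned long end, unsigned int flags);

/* ...so the arm64 stage-2 teardown can tell whether sleeping is allowed: */
bool may_block = flags & MMU_NOTIFIER_RANGE_BLOCKABLE;
/* only call cond_resched_lock() across PGD boundaries when may_block is true */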
-
Submitted by Stephen Boyd

Add the 32-bit vdso Makefile to the vdso_install rule so that 'make vdso_install' installs the 32-bit compat vDSO when it is compiled.

Fixes: a7f71a2c ("arm64: compat: Add vDSO")
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20200818014950.42492-1-swboyd@chromium.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 August 2020, 9 commits
-
-
Submitted by Sean Christopherson

KVM has an optimization to avoid expensive MSR reads/writes on VMENTER/VMEXIT. It caches the MSR values and restores them either when leaving the run loop, on preemption, or when going out to user space. The affected MSRs are not required for kernel context operations.

This changed with the recently introduced mechanism to handle FSGSBASE in the paranoid entry code, which has to retrieve the kernel GSBASE value by accessing per-CPU memory. The mechanism needs to retrieve the CPU number and uses either LSL or RDPID if the processor supports it.

Unfortunately RDPID uses MSR_TSC_AUX, which is in the list of cached and lazily restored MSRs, which means that between the point where the guest value is written and the point of restore, MSR_TSC_AUX contains a random number. If an NMI or any other exception which uses the paranoid entry path happens in such a context, then RDPID returns the random guest MSR_TSC_AUX value. As a consequence the kernel reads from the wrong memory location to retrieve the kernel GSBASE value. Kernel GS is used for all regular this_cpu_*() operations. If the GSBASE in the exception handler points to the per-CPU memory of a different CPU, then this has the obvious consequences of data corruption and crashes.

As the paranoid entry path is the only place which accesses MSR_TSC_AUX (via RDPID) and the fallback via LSL is not significantly slower, remove the RDPID alternative from the entry path and always use LSL. The alternative would be to write MSR_TSC_AUX on every VMENTER and VMEXIT, which would inflict massive overhead on that code path.

[ tglx: Rewrote changelog ]

Fixes: eaad9812 ("x86/entry/64: Introduce the FIND_PERCPU_BASE macro")
Reported-by: Tom Lendacky <thomas.lendacky@amd.com>
Debugged-by: Tom Lendacky <thomas.lendacky@amd.com>
Suggested-by: Andy Lutomirski <luto@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20200821105229.18938-1-pbonzini@redhat.com
-
Submitted by Kajol Jain

Commit 792f73f7 ("powerpc/hv-24x7: Add sysfs files inside hv-24x7 device to show cpumask") added the cpumask file as part of the hv-24x7 driver inside the interface folder. The cpumask file is supposed to be in the top folder of the PMU driver in order for hotplug to work. Fix this by creating a new group, 'cpumask_attr_group', for the cpumask file and making sure it is added in the top folder:

  # cat /sys/devices/hv_24x7/cpumask
  0

Fixes: 792f73f7 ("powerpc/hv-24x7: Add sysfs files inside hv-24x7 device to show cpumask")
Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200821080610.123997-1-kjain@linux.ibm.com
-
Submitted by Christophe Leroy

In is_module_segment(), when VMALLOC_END is above 0xf0000000, ALIGN(VMALLOC_END, SZ_256M) has value 0. In that case, addr >= ALIGN(VMALLOC_END, SZ_256M) is always true and is_module_segment() always returns false.

Use (ALIGN(VMALLOC_END, SZ_256M) - 1), which will have value 0xffffffff and is suitable for the comparison.

Fixes: c4964331 ("powerpc/32s: Only leave NX unset on segments used for modules")
Reported-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Tested-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/09fc73fe9c7423c6b4cf93f93df9bb0ed8eefab5.1597994047.git.christophe.leroy@csgroup.eu
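To see the wrap-around, here is a small standalone demonstration (the VMALLOC_END value is a made-up example above 0xf0000000; the ALIGN macro mirrors the kernel's power-of-two round-up in 32-bit arithmetic):

#include <stdio.h>
#include <stdint.h>

#define SZ_256M		0x10000000u
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))	/* round up to a power-of-two boundary */

int main(void)
{
	uint32_t vmalloc_end = 0xf8000000u;	/* example VMALLOC_END above 0xf0000000 */

	/* Rounding up overflows 32 bits and wraps to 0 ... */
	printf("ALIGN(VMALLOC_END, SZ_256M)     = 0x%08x\n",
	       (uint32_t)ALIGN(vmalloc_end, SZ_256M));
	/* ... whereas ALIGN() - 1 gives the usable upper bound 0xffffffff. */
	printf("ALIGN(VMALLOC_END, SZ_256M) - 1 = 0x%08x\n",
	       (uint32_t)(ALIGN(vmalloc_end, SZ_256M) - 1));
	return 0;
}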
-
Submitted by Rob Herring

If guests don't have certain CPU erratum workarounds implemented, then there is a possibility that a guest can deadlock the system. IOW, only trusted guests should be used on systems with the erratum. This is the case for Cortex-A57 erratum 832075.

Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: kvmarm@lists.cs.columbia.edu
Link: https://lore.kernel.org/r/20200803193127.3012242-2-robh@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Marc Zyngier

As we can now switch from a system that isn't affected by erratum 1418040 to a system that globally is affected, let's allow affected CPUs to come in at a later time.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200731173824.107480-3-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Marc Zyngier

Instead of dealing with erratum 1418040 on each entry and exit, let's move the handling to __switch_to() instead, which has several advantages:

- It can be applied when it matters (switching between 32-bit and 64-bit tasks).
- It is written in C (yay!)
- It can rely on static keys rather than alternatives

Signed-off-by: Marc Zyngier <maz@kernel.org>
Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200731173824.107480-2-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Bin Meng

Add the SiFive drivers to rv32_defconfig to keep it in sync with the 64-bit config. This is useful when testing a 32-bit kernel with the QEMU 'sifive_u' 32-bit machine.

Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Anup Patel

Right now the RISC-V timer driver is convoluted because it supports:

1. Linux RISC-V S-mode (with MMU), where it uses the TIME CSR for the clocksource and SBI timer calls for the clockevent device.
2. Linux RISC-V M-mode (without MMU), where it uses the CLINT MMIO counter register for the clocksource and the CLINT MMIO compare register for the clockevent device.

We now have a separate CLINT timer driver which also provides CLINT-based IPI operations, so remove the CLINT MMIO related code from the arch/riscv directory and from the RISC-V timer driver.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Tested-by: Emil Renner Berhing <kernel@esmil.dk>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Submitted by Anup Patel

Add a mechanism to set custom IPI operations so that the CLINT driver in the drivers directory can provide its own IPI operations.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Tested-by: Emil Renner Berhing <kernel@esmil.dk>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
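A rough sketch of what such a hook looks like (the struct/function names and the CLINT callbacks are illustrative assumptions, not necessarily the exact upstream identifiers):

/* arch/riscv: the IPI backend becomes a set of replaceable callbacks. */
struct riscv_ipi_ops {
	void (*ipi_inject)(const struct cpumask *target);	/* raise IPIs on target CPUs */
	void (*ipi_clear)(void);				/* ack/clear a pending IPI  */
};

void riscv_set_ipi_ops(struct riscv_ipi_ops *ops);

/* The CLINT timer driver can then register its MMIO-based implementation: */
static struct riscv_ipi_ops clint_ipi_ops = {
	.ipi_inject = clint_send_ipi_mask,	/* hypothetical helpers writing the */
	.ipi_clear  = clint_clear_ipi,		/* per-hart MSIP registers          */
};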
-
- 20 August 2020, 7 commits
-
-
Submitted by Vasant Hegde

As per PAPR, we have to look at both the EPOW sensor value and the event modifier to identify the type of event and take the appropriate action.

LoPAPR v1.1 section 10.2.2 includes table 136 "EPOW Action Codes":

  SYSTEM_SHUTDOWN 3
    The system must be shut down. An EPOW-aware OS logs the EPOW error
    log information, then schedules the system to be shut down to begin
    after an OS defined delay interval (default is 10 minutes).

Then in section 10.3.2.2.8 there is table 146 "Platform Event Log Format, Version 6, EPOW Section", which includes the "EPOW Event Modifier":

  For EPOW sensor value = 3:
    0x01 = Normal system shutdown with no additional delay
    0x02 = Loss of utility power, system is running on UPS/Battery
    0x03 = Loss of system critical functions, system should be shutdown
    0x04 = Ambient temperature too high
    All other values = reserved

We have a user space tool (rtas_errd) on the LPAR to monitor for EPOW_SHUTDOWN_ON_UPS. Once it gets the event it initiates a shutdown after a predefined time, and keeps monitoring for new EPOW events. If it receives a "Power restored" event before the predefined time it cancels the shutdown; otherwise it shuts the system down once the time expires.

Commit 79872e35 ("powerpc/pseries: All events of EPOW_SYSTEM_SHUTDOWN must initiate shutdown") changed our handling of the "on UPS/Battery" case to immediately shut down the system. This breaks existing setups that rely on the userspace tool to delay shutdown and let the system run on the UPS.

Fixes: 79872e35 ("powerpc/pseries: All events of EPOW_SYSTEM_SHUTDOWN must initiate shutdown")
Cc: stable@vger.kernel.org # v4.0+
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
[mpe: Massage change log and add PAPR references]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200820061844.306460-1-hegdevasant@linux.vnet.ibm.com
-
Submitted by Athira Rajeev

The performance monitor interrupt handler checks whether any counter has overflowed and calls record_and_restart() in core-book3s, which invokes perf_event_overflow() to record the sample information. Apart from creating the sample, perf_event_overflow() also does the interrupt and period checks via perf_event_account_interrupt().

Currently we record information only if the SIAR (Sampled Instruction Address Register) valid bit is set (using the siar_valid() check), so the interrupt check only happens in that case. But it is possible to sample events that do not generate a valid SIAR, and then there is no chance to disable the event if interrupts exceed max_samples_per_tick. This leads to a soft lockup.

Fix this by calling perf_event_account_interrupt() in the invalid-SIAR code path for a sampling event: if the SIAR is invalid, just do the interrupt check and don't record the sample information.

Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1596717992-7321-1-git-send-email-atrajeev@linux.vnet.ibm.com
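A condensed sketch of the shape of the fix in record_and_restart(), using its local record/period bookkeeping (paraphrased; the surrounding code is elided):

	if (record) {
		/* SIAR valid: build the sample and hand it to perf_event_overflow(),
		 * which also performs the interrupt/period accounting. */
	} else if (period) {
		/* SIAR invalid: no sample, but still account the interrupt so a
		 * runaway event gets throttled instead of soft-locking the CPU. */
		if (perf_event_account_interrupt(event))
			power_pmu_stop(event, 0);
	}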
-
Submitted by Ard Biesheuvel

Now that the old memmap code has been removed, some code that was left behind in arch/x86/platform/efi/efi.c is only used for 32-bit builds, which means it can live in efi_32.c as well. So move it over.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
-
Submitted by Arvind Sankar

When remapping the kernel rodata section read-only in the EFI pagetables, the protection flags that were used for the text section are being reused, but the rodata section should not be marked executable.

Cc: <stable@vger.kernel.org>
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Link: https://lore.kernel.org/r/20200717194526.3452089-1-nivedita@alum.mit.edu
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
-
Submitted by Randy Dunlap

Fix the following build error:

../arch/x86/pci/xen.c: In function ‘pci_xen_init’:
../arch/x86/pci/xen.c:410:2: error: implicit declaration of function ‘acpi_noirq_set’; did you mean ‘acpi_irq_get’? [-Werror=implicit-function-declaration]
  acpi_noirq_set();

Fixes: 88e9ca16 ("xen/pci: Use acpi_noirq_set() helper to avoid #ifdef")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xenproject.org
Cc: linux-pci@vger.kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
-
Submitted by Frederic Barrat

Fix a typo introduced during a recent code cleanup, which could lead to silently failing to free resources, or to an oops message (on PCI hotplug or CAPI reset). Only ioda2 is impacted; the code path for ioda1 is correct.

Fixes: 01e12629 ("powerpc/powernv/pci: Add explicit tracking of the DMA setup state")
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200819130741.16769-1-fbarrat@linux.ibm.com
-
Submitted by Arvind Sankar

Since commits

  c041b5ad ("x86, boot: Create a separate string.h file to provide standard string functions")
  fb4cac57 ("x86, boot: Move memcmp() into string.h and string.c")

the decompressor stub has been using the compiler's builtin memcpy, memset and memcmp functions, _except_ where it would likely have the largest impact: in the decompression code itself.

Remove the #undef's of memcpy and memset in misc.c so that the decompressor code also uses the compiler builtins.

The rationale given in the comment doesn't really apply: just because some functions use the out-of-line version is no reason not to use the builtin version in the rest. Replace the comment with an explanation of why memzero and memmove are being #define'd. Drop the suggestion to #undef in boot/string.h as well: the out-of-line versions are not really optimized versions, they're generic code that's good enough for the preboot environment. The compiler will likely generate better code for constant-size memcpy/memset/memcmp if it is allowed to.

Most decompressors' performance is unchanged, with the exception of LZ4 and 64-bit ZSTD.

          Before   After   ARCH
  LZ4      73ms    10ms     32
  LZ4     120ms    10ms     64
  ZSTD     90ms    74ms     64

Measurements on QEMU on a 2.2GHz Broadwell Xeon, using defconfig kernels.

Decompressor code size has small differences, the largest being that 64-bit ZSTD decreases by just over 2k. The largest code size increase was on 64-bit XZ, about 400 bytes.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Suggested-by: Nick Terrell <nickrterrell@gmail.com>
Tested-by: Nick Terrell <nickrterrell@gmail.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
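For orientation, the end state in arch/x86/boot/compressed/misc.c looks roughly like this (a sketch based on the description above, not the exact file contents):

/* The old "#undef memcpy / #undef memset" is gone, so constant-size
 * copies in the decompression code can be expanded inline by the
 * compiler. memzero()/memmove() are kept only because the included
 * decompressor sources expect those names. */
#define memzero(s, n)	memset((s), 0, (n))
#define memmove		memmove

/* Prototype for the out-of-line memmove() used by the decompressors. */
void *memmove(void *dest, const void *src, size_t n);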
-