- 08 September 2020, 23 commits
-
-
Committed by Joerg Roedel
Install an exception handler for the #VC exception that uses a GHCB. Also add the infrastructure for handling different exit-codes by decoding the instruction that caused the exception, along with error handling. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-24-joro@8bytes.org
-
Committed by Joerg Roedel
The functions are needed to map the GHCB for SEV-ES guests. The GHCB is used for communication with the hypervisor, so its content must not be encrypted. Once the GHCB is no longer needed, it must be mapped encrypted again so that the running kernel image can safely re-use the memory. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-23-joro@8bytes.org
-
Committed by Joerg Roedel
The function can fail to create an identity mapping; check for that and bail out if it happens. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-22-joro@8bytes.org
-
Committed by Joerg Roedel
Call set_sev_encryption_mask() while still in the stage 1 #VC handler, because the stage 2 handler needs the kernel's own page tables to be set up, and calling set_sev_encryption_mask() is a prerequisite for that. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-21-joro@8bytes.org
-
Committed by Joerg Roedel
Add the first handler for #VC exceptions. At stage 1 there is no GHCB yet because the kernel might still be running on the EFI page table. The stage 1 handler is limited to the MSR-based protocol to talk to the hypervisor and can only support CPUID exit-codes, but that is enough to get to stage 2. [ bp: Zap superfluous newlines after rd/wrmsr instruction mnemonics. ] Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-20-joro@8bytes.org
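For context, the MSR-based GHCB protocol mentioned here lets a guest ask the hypervisor for a single CPUID register without a shared GHCB page. Below is a hedged sketch of that request/response sequence, assuming the documented protocol values (GHCB MSR 0xC0010130, request code 0x4, response code 0x5); the function names are illustrative and this is not the kernel's actual stage 1 handler.

```c
/*
 * Hedged sketch of a CPUID request over the GHCB MSR protocol.
 * All names are illustrative; real code must also terminate the
 * guest on a protocol error instead of just returning.
 */
#define MSR_AMD64_SEV_ES_GHCB	0xc0010130
#define GHCB_CPUID_REQ		0x004ULL	/* GHCBInfo: CPUID request  */
#define GHCB_CPUID_RESP		0x005ULL	/* GHCBInfo: CPUID response */

static unsigned long long sev_es_rd_ghcb_msr(void)
{
	unsigned int lo, hi;

	asm volatile("rdmsr" : "=a" (lo), "=d" (hi) : "c" (MSR_AMD64_SEV_ES_GHCB));
	return ((unsigned long long)hi << 32) | lo;
}

static void sev_es_wr_ghcb_msr(unsigned long long val)
{
	asm volatile("wrmsr" : : "c" (MSR_AMD64_SEV_ES_GHCB),
		     "a" ((unsigned int)val), "d" ((unsigned int)(val >> 32)));
}

/* reg selects the result: 0 = EAX, 1 = EBX, 2 = ECX, 3 = EDX */
static int sev_es_msr_protocol_cpuid(unsigned int fn, unsigned int reg,
				     unsigned int *val)
{
	unsigned long long resp;

	sev_es_wr_ghcb_msr(GHCB_CPUID_REQ |
			   ((unsigned long long)reg << 30) |
			   ((unsigned long long)fn  << 32));
	asm volatile("rep; vmmcall");		/* VMGEXIT to the hypervisor */

	resp = sev_es_rd_ghcb_msr();
	if ((resp & 0xfffULL) != GHCB_CPUID_RESP)
		return -1;			/* protocol violation */

	*val = (unsigned int)(resp >> 32);
	return 0;
}
```

Calling this four times (once per register) is enough to emulate a full CPUID instruction, which is why the MSR protocol suffices for the stage 1 handler.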
-
Committed by Joerg Roedel
Changing the function to take start and end as parameters instead of start and size simplifies the callers, which don't need to calculate the size if they already have start and end. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200907131613.12703-19-joro@8bytes.org
-
Committed by Joerg Roedel
With the page-fault handler in place, the identity mapping can be built on-demand. So remove the code which manually creates the mappings, and unexport/remove the functions used for it. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200907131613.12703-18-joro@8bytes.org
-
Committed by Joerg Roedel
When booted through startup_64(), the kernel keeps running on the EFI page table until the KASLR code sets up its own page table. Without KASLR, the pre-decompression boot code never switches off the EFI page table. Change that by unconditionally switching to a kernel-controlled page table after relocation. This makes sure the kernel can make changes to the mapping when necessary, for example to map pages unencrypted in SEV and SEV-ES guests. Also, remove the debug_putstr() calls in initialize_identity_maps() because the function now runs before console_init() is called. [ bp: Massage commit message. ] Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200907131613.12703-17-joro@8bytes.org
-
Committed by Joerg Roedel
Install a page-fault handler to add an identity mapping to addresses not yet mapped. Also do some sanity checking of the error code. This makes non-SEV-ES machines use the exception handling infrastructure in the pre-decompression boot code too, making it less likely to break in the future. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200907131613.12703-16-joro@8bytes.org
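A minimal sketch of what such an on-demand identity-mapping fault handler can look like is shown below; the error-code bits and the add_identity_map() helper follow this patch series, but the exact checks and termination path in the real boot code differ.

```c
/* Hedged sketch - not the exact pre-decompression #PF handler. */
#define X86_PF_PROT	(1UL << 0)	/* protection violation       */
#define X86_PF_RSVD	(1UL << 3)	/* reserved bit set in a PTE  */

void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
{
	unsigned long address = native_read_cr2();

	/*
	 * A protection violation or reserved-bit fault cannot be fixed by
	 * adding a mapping - something is genuinely wrong, so give up.
	 */
	if (error_code & (X86_PF_PROT | X86_PF_RSVD))
		error("Unexpected page-fault in pre-decompression code");

	/* Map the faulting 2M region 1:1 so the access can be retried. */
	address &= PMD_MASK;
	add_identity_map(address, address + PMD_SIZE);
}
```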
-
Committed by Joerg Roedel
The file contains only code related to identity-mapped page tables. Rename the file and always compile it in. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200907131613.12703-15-joro@8bytes.org
-
Committed by Joerg Roedel
Add the code needed to set up an IDT in the early pre-decompression boot code. The IDT is loaded first in startup_64, which is after EfiExitBootServices() has been called, and later reloaded when the kernel image has been relocated to the end of the decompression area. This allows setting up different IDT handlers before and after the relocation. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-14-joro@8bytes.org
-
Committed by Joerg Roedel
The x86-64 ABI defines a red zone on the stack: the 128-byte area beyond the location pointed to by %rsp is considered to be reserved and shall not be modified by signal or interrupt handlers. Therefore, functions may use this area for temporary data that is not needed across function calls. In particular, leaf functions may use this area for their entire stack frame, rather than adjusting the stack pointer in the prologue and epilogue. This area is known as the red zone. This is not compatible with exception handling, because the IRET frame written by the hardware at the stack pointer, and the functions called to handle the exception, will overwrite the temporary variables of the interrupted function, causing undefined behavior. So disable red-zone usage for the pre-decompression boot code. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200907131613.12703-13-joro@8bytes.org
-
Committed by Joerg Roedel
Add a function to check whether an instruction has a REP prefix. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20200907131613.12703-12-joro@8bytes.org
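As a rough illustration of what such a check involves: REP/REPE is the legacy prefix byte 0xF3 and REPNE is 0xF2, and both appear among the prefix bytes in front of the opcode. The standalone sketch below walks raw bytes; the actual kernel helper operates on an already-decoded struct insn.

```c
/* Hedged, standalone sketch - not the kernel's insn-eval implementation. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool is_legacy_prefix(uint8_t b)
{
	switch (b) {
	case 0xf0: case 0xf2: case 0xf3:	/* LOCK, REPNE, REP/REPE */
	case 0x2e: case 0x36: case 0x3e:	/* segment overrides     */
	case 0x26: case 0x64: case 0x65:
	case 0x66: case 0x67:			/* operand/address size  */
		return true;
	default:
		return false;
	}
}

static bool insn_has_rep_prefix(const uint8_t *insn, size_t len)
{
	for (size_t i = 0; i < len && is_legacy_prefix(insn[i]); i++) {
		if (insn[i] == 0xf2 || insn[i] == 0xf3)
			return true;
	}
	return false;
}
```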
-
Committed by Joerg Roedel
Add a function to the instruction decoder which returns the pt_regs offset of the register specified in the reg field of the modrm byte. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20200907131613.12703-11-joro@8bytes.org
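The idea can be sketched as a lookup table indexed by the 3-bit ModRM.reg field, extended to four bits by REX.R on x86-64. The table below is illustrative, assumes the x86-64 struct pt_regs layout, and is not the exact code in insn-eval.c.

```c
/* Hedged sketch: ModRM.reg (plus REX.R) -> offset within struct pt_regs. */
#include <linux/stddef.h>	/* offsetof */
#include <asm/ptrace.h>		/* struct pt_regs */

static const int ptregs_reg_off[16] = {
	offsetof(struct pt_regs, ax),	/* 0: rax */
	offsetof(struct pt_regs, cx),	/* 1: rcx */
	offsetof(struct pt_regs, dx),	/* 2: rdx */
	offsetof(struct pt_regs, bx),	/* 3: rbx */
	offsetof(struct pt_regs, sp),	/* 4: rsp */
	offsetof(struct pt_regs, bp),	/* 5: rbp */
	offsetof(struct pt_regs, si),	/* 6: rsi */
	offsetof(struct pt_regs, di),	/* 7: rdi */
	offsetof(struct pt_regs, r8),	/* 8-15: r8..r15 */
	offsetof(struct pt_regs, r9),
	offsetof(struct pt_regs, r10),
	offsetof(struct pt_regs, r11),
	offsetof(struct pt_regs, r12),
	offsetof(struct pt_regs, r13),
	offsetof(struct pt_regs, r14),
	offsetof(struct pt_regs, r15),
};

/* ModRM bits 5:3 select the register; REX.R (bit 2 of REX) supplies bit 3. */
static int modrm_reg_to_ptregs_off(unsigned char modrm, unsigned char rex)
{
	unsigned int reg = ((modrm >> 3) & 0x7) | ((rex & 0x4) ? 0x8 : 0);

	return ptregs_reg_off[reg];
}
```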
-
Committed by Joerg Roedel
Factor out the code used to decode an instruction with the correct address and operand sizes into a helper function. No functional changes. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-10-joro@8bytes.org
-
Committed by Joerg Roedel
Factor out the code to fetch the instruction from user-space into a helper function. No functional changes. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-9-joro@8bytes.org
-
Committed by Joerg Roedel
The inat-tables.c file has some arrays in it that contain pointers to other arrays. These pointers need to be relocated when the kernel image is moved to a different location. The pre-decompression boot code has no support for applying ELF relocations, so initialize these arrays at runtime in the pre-decompression code to make sure all pointers are correctly initialized. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20200907131613.12703-8-joro@8bytes.org
-
Committed by Joerg Roedel
Move the definition of the x86 page-fault error code bits to a new header file, asm/trap_pf.h. This makes it easier to include them into pre-decompression boot code. No functional changes. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-7-joro@8bytes.org
-
Committed by Tom Lendacky
Add CPU feature detection for Secure Encrypted Virtualization with Encrypted State. This feature enhances SEV by also encrypting the guest register state, making it inaccessible to the hypervisor. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-6-joro@8bytes.org
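For reference, the AMD memory-encryption feature leaf is CPUID 0x8000001F, and SEV-ES is reported in EAX bit 3 (bit 0 is SME, bit 1 is SEV). Below is a hedged, userspace-style sketch of probing it; the kernel itself goes through its cpufeatures machinery instead.

```c
/* Hedged sketch using the compiler's cpuid.h helpers; illustrative only. */
#include <stdbool.h>
#include <cpuid.h>

static bool cpu_reports_sev_es(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(0x8000001f, 0, &eax, &ebx, &ecx, &edx))
		return false;			/* leaf not available */

	return eax & (1u << 3);			/* bit 3: SEV-ES */
}
```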
-
Committed by Borislav Petkov
Use the shorthand to make it more readable. No functional changes. Signed-off-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-5-joro@8bytes.org
-
Committed by Joerg Roedel
Building a correct GHCB for the hypervisor requires setting valid bits in the GHCB. Simplify that process by providing accessor functions to set values and to update the valid bitmap, and to check the valid bitmap in KVM. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-4-joro@8bytes.org
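The pattern is that every write into the GHCB save area also sets a bit in a valid bitmap, and the consumer tests the same bit before trusting the field. A hedged sketch of such accessors follows, assuming the 2020-era layout where struct ghcb embeds a struct vmcb_save_area plus a valid_bitmap; the macro names here differ from the kernel's actual DEFINE_GHCB_ACCESSORS.

```c
/* Hedged sketch of GHCB field accessors driven by a valid bitmap. */
#define GHCB_BITMAP_IDX(field) \
	(offsetof(struct vmcb_save_area, field) / sizeof(u64))

#define DEFINE_GHCB_SETTER(field)					\
	static inline void ghcb_set_##field(struct ghcb *ghcb, u64 value) \
	{								\
		__set_bit(GHCB_BITMAP_IDX(field),			\
			  (unsigned long *)&ghcb->save.valid_bitmap);	\
		ghcb->save.field = value;				\
	}								\
	static inline bool ghcb_##field##_is_valid(struct ghcb *ghcb)	\
	{								\
		return test_bit(GHCB_BITMAP_IDX(field),			\
				(unsigned long *)&ghcb->save.valid_bitmap); \
	}

DEFINE_GHCB_SETTER(rax)
DEFINE_GHCB_SETTER(rbx)
DEFINE_GHCB_SETTER(sw_exit_code)
```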
-
Committed by Tom Lendacky
Extend the vmcb_save_area with SEV-ES fields and add a new 'struct ghcb' which will be used for guest-hypervisor communication. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-3-joro@8bytes.org
-
Committed by Joerg Roedel
Do not allocate a vmcb_control_area and a vmcb_save_area on the stack, as these structures will become larger with future extensions of SVM, and thus the svm_set_nested_state() function would end up with too large a stack frame. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-2-joro@8bytes.org
-
- 04 September 2020, 5 commits
-
-
Committed by Thomas Gleixner
Andy reported that syscall tracing for the 32-bit fast syscall fails: # ./tools/testing/selftests/x86/ptrace_syscall_32 ... [RUN] SYSEMU [FAIL] Initial args are wrong (nr=224, args=10 11 12 13 14 4289172732) ... [RUN] SYSCALL [FAIL] Initial args are wrong (nr=29, args=0 0 0 0 0 4289172732) The reason is that the conversion to the generic entry code moved the retrieval of the sixth argument (EBP) after the point where the syscall entry work runs, i.e. ptrace, seccomp, audit... Unbreak it by providing a split-up version of syscall_enter_from_user_mode(): syscall_enter_from_user_mode_prepare() establishes state and enables interrupts, and syscall_enter_from_user_mode_work() runs the entry work. Replace the call to syscall_enter_from_user_mode() in the 32-bit fast syscall C entry with the split functions and stick the EBP retrieval between them. Fixes: 27d6b4d1 ("x86/entry: Use generic syscall entry function") Reported-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/87k0xdjbtt.fsf@nanos.tec.linutronix.de
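Roughly, the fixed 32-bit fast-syscall C entry looks like the sketch below (simplified; the real __do_fast_syscall_32() has extra instrumentation markers and error handling). The two split functions named in the changelog are the real generic-entry API; the remaining names and details are illustrative.

```c
/* Hedged sketch of the fixed 32-bit fast syscall entry path. */
static void fast_syscall_32_entry_sketch(struct pt_regs *regs)
{
	long nr = regs->orig_ax;		/* syscall number */

	/* Establish RCU/lockdep/context-tracking state, enable interrupts. */
	syscall_enter_from_user_mode_prepare(regs);

	/*
	 * Fetch the sixth argument (EBP) from the user stack *before* the
	 * entry work runs, so ptrace/seccomp/audit see all six arguments.
	 */
	if (get_user(*(u32 *)&regs->bp,
		     (u32 __user *)(unsigned long)(u32)regs->sp)) {
		force_sig(SIGSEGV);		/* unreadable user stack */
		return;
	}

	/* Now run the syscall entry work: ptrace, seccomp, audit, ... */
	nr = syscall_enter_from_user_mode_work(regs, nr);

	do_syscall_32_irqs_on(regs, nr);	/* dispatch the syscall */
	syscall_exit_to_user_mode(regs);
}
```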
-
Committed by Andy Lutomirski
Trying to clear DR7 around a #DB from usermode malfunctions if the task schedules when delivering SIGTRAP. Rather than trying to define a special no-recursion region, just allow a single level of recursion. The same mechanism is used for NMI, and it hasn't caused any problems yet. Fixes: 9f58fdde ("x86/db: Split out dr6/7 handling") Reported-by: Kyle Huey <me@kylehuey.com> Debugged-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Daniel Thompson <daniel.thompson@linaro.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/8b9bd05f187231df008d48cf818a6a311cbd5c98.1597882384.git.luto@kernel.org Link: https://lore.kernel.org/r/20200902133200.726584153@infradead.org
-
Committed by Peter Zijlstra
The WARN added in commit 3c73b81a ("x86/entry, selftests: Further improve user entry sanity checks") unconditionally triggers on an IVB machine because it does not support SMAP. For !SMAP hardware the CLAC/STAC instructions are patched out, and thus if userspace sets AC, it is still set after entry. Fixes: 3c73b81a ("x86/entry, selftests: Further improve user entry sanity checks") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Daniel Thompson <daniel.thompson@linaro.org> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200902133200.666781610@infradead.org
-
Committed by Vamshi K Sthambamkadi
On i386, the order of parameters passed in registers is eax, edx, and ecx (per the regparm(3) calling convention). Change the mapping in regs_get_kernel_argument() so that arg1=ax, arg2=dx, and arg3=cx. With this, the selftests testcase kprobes_args_use.tc passes. Fixes: 3c88ee19 ("x86: ptrace: Add function argument access API") Signed-off-by: Vamshi K Sthambamkadi <vamshi.k.sthambamkadi@gmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/20200828113242.GA1424@cosmos
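Concretely, the per-architecture table behind regs_get_kernel_argument() ends up looking roughly like this (a hedged sketch; guard macros and the surrounding helper code are omitted):

```c
/* Hedged sketch of the argument-register table used on x86. */
static const unsigned int argument_offs[] = {
#ifdef __i386__
	offsetof(struct pt_regs, ax),	/* arg1 - regparm(3) on i386 */
	offsetof(struct pt_regs, dx),	/* arg2 */
	offsetof(struct pt_regs, cx),	/* arg3 */
#define NR_REG_ARGUMENTS 3
#else
	offsetof(struct pt_regs, di),	/* arg1 - System V AMD64 ABI */
	offsetof(struct pt_regs, si),	/* arg2 */
	offsetof(struct pt_regs, dx),	/* arg3 */
	offsetof(struct pt_regs, cx),	/* arg4 */
	offsetof(struct pt_regs, r8),	/* arg5 */
	offsetof(struct pt_regs, r9),	/* arg6 */
#define NR_REG_ARGUMENTS 6
#endif
};
```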
-
Committed by Huang Ying
Commit: cc9aec03 ("x86/numa_emulation: Introduce uniform split capability") uses "-1" as the starting node ID, which causes the strange kernel log below when "numa=fake=32G" is added to the kernel command line: Faking node -1 at [mem 0x0000000000000000-0x0000000893ffffff] (35136MB) Faking node 0 at [mem 0x0000001840000000-0x000000203fffffff] (32768MB) Faking node 1 at [mem 0x0000000894000000-0x000000183fffffff] (64192MB) Faking node 2 at [mem 0x0000002040000000-0x000000283fffffff] (32768MB) Faking node 3 at [mem 0x0000002840000000-0x000000303fffffff] (32768MB) And finally the kernel crashes: BUG: Bad page state in process swapper pfn:00011 page:(____ptrval____) refcount:0 mapcount:1 mapping:(____ptrval____) index:0x55cd7e44b270 pfn:0x11 failed to read mapping contents, not a valid kernel address? flags: 0x5(locked|uptodate) raw: 0000000000000005 000055cd7e44af30 000055cd7e44af50 0000000100000006 raw: 000055cd7e44b270 000055cd7e44b290 0000000000000000 000055cd7e44b510 page dumped because: page still charged to cgroup page->mem_cgroup:000055cd7e44b510 Modules linked in: CPU: 0 PID: 0 Comm: swapper Not tainted 5.9.0-rc2 #1 Hardware name: Intel Corporation S2600WFT/S2600WFT, BIOS SE5C620.86B.02.01.0008.031920191559 03/19/2019 Call Trace: dump_stack+0x57/0x80 bad_page.cold+0x63/0x94 __free_pages_ok+0x33f/0x360 memblock_free_all+0x127/0x195 mem_init+0x23/0x1f5 start_kernel+0x219/0x4f5 secondary_startup_64+0xb6/0xc0 Fix this bug by using 0 as the starting node ID. This restores the original behavior before cc9aec03. [ mingo: Massaged the changelog. ] Fixes: cc9aec03 ("x86/numa_emulation: Introduce uniform split capability") Signed-off-by: "Huang, Ying" <ying.huang@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200904061047.612950-1-ying.huang@intel.com
-
- 03 September 2020, 4 commits
-
-
Committed by Thomas Bogendoerfer
On RM400 (a20r) machines, ISA and SCSI interrupts share the same interrupt line. Commit 49e6e07e ("MIPS: pass non-NULL dev_id on shared request_irq()") accidentally dropped the IRQF_SHARED bit, which breaks registering the SCSI interrupt. Put back IRQF_SHARED and add a dev_id for the ISA interrupt. Fixes: 49e6e07e ("MIPS: pass non-NULL dev_id on shared request_irq()") Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
-
Committed by Huang Pei
In cc97ab23 ("MIPS: Simplify FP context initialization"), init_fp_ctx just initializes the FP/MSA context, and own_fp_inatomic just restores FCSR and the 64-bit FP regs from it, but misses MSACSR and the upper MSA regs. So the MSACSR and upper MSA register values from the previous task on the current CPU can leak into the current task and cause unpredictable behavior when the MSA context is not initialized. Fixes: cc97ab23 ("MIPS: Simplify FP context initialization") Signed-off-by: Huang Pei <huangpei@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
-
Committed by Joerg Roedel
One can not simply remove vmalloc faulting on x86-32. Upstream commit: 7f0a002b ("x86/mm: remove vmalloc faulting") removed it on x86 altogether because previously the arch_sync_kernel_mappings() interface was introduced. This interface added synchronization of vmalloc/ioremap page-table updates to all page-tables in the system at creation time and was thought to make vmalloc faulting obsolete. But that assumption was incredibly naive. It turned out that there is a race window between the time the vmalloc or ioremap code establishes a mapping and the time it synchronizes this change to other page-tables in the system. During this race window another CPU or thread can establish a vmalloc mapping which uses the same intermediate page-table entries (e.g. PMD or PUD) and does no synchronization in the end, because it found all necessary mappings already present in the kernel reference page-table. But when these intermediate page-table entries are not yet synchronized, the other CPU or thread will continue with a vmalloc address that is not yet mapped in the page-table it currently uses, causing an unhandled page fault and an oops like the one below: BUG: unable to handle page fault for address: fe80c000 #PF: supervisor write access in kernel mode #PF: error_code(0x0002) - not-present page *pde = 33183067 *pte = a8648163 Oops: 0002 [#1] SMP CPU: 1 PID: 13514 Comm: cve-2017-17053 Tainted: G ... Call Trace: ldt_dup_context+0x66/0x80 dup_mm+0x2b3/0x480 copy_process+0x133b/0x15c0 _do_fork+0x94/0x3e0 __ia32_sys_clone+0x67/0x80 __do_fast_syscall_32+0x3f/0x70 do_fast_syscall_32+0x29/0x60 do_SYSENTER_32+0x15/0x20 entry_SYSENTER_32+0x9f/0xf2 EIP: 0xb7eef549 So the arch_sync_kernel_mappings() interface is racy, but removing it would mean re-introducing the vmalloc_sync_all() interface, which is even more awful. Keep arch_sync_kernel_mappings() in place and catch the race condition in the page-fault handler instead. Do a partial revert of the above commit to get vmalloc faulting on x86-32 back in place. Fixes: 7f0a002b ("x86/mm: remove vmalloc faulting") Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200902155904.17544-1-joro@8bytes.org
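For readers unfamiliar with the mechanism being restored: x86-32 vmalloc faulting fixes up a kernel-mode fault on a vmalloc/ioremap address by copying the missing intermediate entries from the init_mm reference page table into the page table that is currently live. A condensed, hedged sketch (PAE and error-path details omitted):

```c
/* Hedged sketch of the restored x86-32 vmalloc fault fixup. */
static noinline int vmalloc_fault(unsigned long address)
{
	unsigned long pgd_paddr;
	pmd_t *pmd_k;
	pte_t *pte_k;

	if (address < VMALLOC_START || address >= VMALLOC_END)
		return -1;			/* not a vmalloc address */

	/*
	 * Copy the PMD entry for this address from the reference page
	 * table (init_mm) into the page table the CPU is using now.
	 */
	pgd_paddr = read_cr3_pa();
	pmd_k = vmalloc_sync_one(__va(pgd_paddr), address);
	if (!pmd_k)
		return -1;

	/* The final PTE must already exist in the reference tables. */
	pte_k = pte_offset_kernel(pmd_k, address);
	if (!pte_present(*pte_k))
		return -1;

	return 0;
}
```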
-
Committed by Arvind Sankar
When CONFIG_RETPOLINE is disabled, Clang uses a jump table for the switch statement in cmdline_find_option (jump tables are disabled when CONFIG_RETPOLINE is enabled). This function is called very early in boot from sme_enable() if CONFIG_AMD_MEM_ENCRYPT is enabled. At this time, the kernel is still executing out of the identity mapping, but the jump table will contain virtual addresses. Fix this by disabling jump tables for cmdline.c when AMD_MEM_ENCRYPT is enabled. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200903023056.3914690-1-nivedita@alum.mit.edu
-
- 02 September 2020, 7 commits
-
-
Committed by Heiko Carstens
Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Committed by Eric Farman
Commit fa686453 ("sched/rt, s390: Use CONFIG_PREEMPTION") changed a bunch of uses of CONFIG_PREEMPT to CONFIG_PREEMPTION, except in the Kconfig, where it used two T's. That's the only place in the system where that spelling exists, so let's fix that. Fixes: fa686453 ("sched/rt, s390: Use CONFIG_PREEMPTION") Cc: <stable@vger.kernel.org> # 5.6 Signed-off-by: Eric Farman <farman@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Committed by Jessica Yu
In the arm64 module linker script, the section .text.ftrace_trampoline is specified unconditionally, regardless of whether CONFIG_DYNAMIC_FTRACE is enabled (this is simply due to the limitation that module linker scripts are not preprocessed like the vmlinux one). Normally, for .plt and .text.ftrace_trampoline, the section flags present in the module binary wouldn't matter, since module_frob_arch_sections() would assign them manually anyway. However, the arm64 module loader only sets the section flags for .text.ftrace_trampoline when CONFIG_DYNAMIC_FTRACE=y. That has only become problematic recently, due to a change in binutils-2.35, where the .text.ftrace_trampoline section (along with the .plt section) is now marked writable and executable (WAX). We no longer allow writable and executable sections to be loaded, due to commit 5c3a7db0 ("module: Harden STRICT_MODULE_RWX"), so this is causing all modules linked with binutils-2.35 to be rejected under arm64. Drop the IS_ENABLED(CONFIG_DYNAMIC_FTRACE) check in module_frob_arch_sections() so that the section flags for .text.ftrace_trampoline get properly set to SHF_EXECINSTR|SHF_ALLOC, without SHF_WRITE. Signed-off-by: Jessica Yu <jeyu@kernel.org> Acked-by: Will Deacon <will@kernel.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: http://lore.kernel.org/r/20200831094651.GA16385@linux-8ccs Link: https://lore.kernel.org/r/20200901160016.3646-1-jeyu@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Sudeep Holla
Commit eaecca9e ("arm64: Fix __cpu_logical_map undefined issue") exported cpu_logical_map in order to fix a tegra194-cpufreq module build failure. As this might potentially cause problems while supporting physical CPU hotplug, the tegra194-cpufreq module was reworked to avoid using cpu_logical_map() via commit 93d0c1ab ("cpufreq: replace cpu_logical_map() with read_cpuid_mpir()"). Since cpu_logical_map was exported only to fix the module build temporarily, remove the export again before it gains any new users. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Link: https://lore.kernel.org/r/20200901095229.56793-1-sudeep.holla@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Evgeniy Didin
The HSDK board has a Micrel KSZ9031 PHY; recent commit bcf3440c ("net: phy: micrel: add phy-mode support for the KSZ9031 PHY") caused a breakdown of Ethernet. Using 'phy-mode = "rgmii"' is not correct, because according to the RGMII specification it is necessary to have a delay on RX (PHY to MAC), which is not generated in the "rgmii" case. Using "rgmii-id" adds the necessary delay and solves the issue. Also add the name of the PHY placed on the HSDK board. Signed-off-by: Evgeniy Didin <Evgeniy.Didin@synopsys.com> Cc: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> Cc: Alexey Brodkin <abrodkin@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Mike Rapoport
Rework of the memory map initialization broke initialization of ARC systems with two memory banks. Before these changes, memblock was not aware of the nodes configuration and the memory map was always allocated from the "lowmem" bank. After the addition of node information to memblock, the core mm attempts to allocate the memory map for the "highmem" bank from its node. The access to this memory using __va() fails because it can only be accessed using kmap. Another problem that was uncovered is that {min,max}_high_pfn are calculated from the u64 high_mem_start variable, which prevents truncation to a 32-bit physical address, and the PFN values are above the node and zone boundaries. Use the phys_addr_t type for high_mem_start and high_mem_size to ensure correspondence between PFNs and highmem zone boundaries, and reserve the entire highmem bank until mem_init() to avoid accesses to it before highmem is enabled. To test this: 1. Enable HIGHMEM in the ARC config 2. Enable 2 memory banks in haps_hs.dts (uncomment the 2nd bank) Fixes: 51930df5 ("mm: free_area_init: allow defining max_zone_pfn in descending order") Cc: stable@vger.kernel.org [5.8] Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> [vgupta: added instructions to test highmem]
-
Committed by Randy Dunlap
Fix min_low_pfn/max_low_pfn build errors for arch/ia64/: (e.g.) ERROR: "max_low_pfn" [drivers/rpmsg/virtio_rpmsg_bus.ko] undefined! ERROR: "min_low_pfn" [drivers/rpmsg/virtio_rpmsg_bus.ko] undefined! ERROR: "min_low_pfn" [drivers/hwtracing/intel_th/intel_th_msu.ko] undefined! ERROR: "max_low_pfn" [drivers/hwtracing/intel_th/intel_th_msu.ko] undefined! ERROR: "min_low_pfn" [drivers/crypto/cavium/nitrox/n5pf.ko] undefined! ERROR: "max_low_pfn" [drivers/crypto/cavium/nitrox/n5pf.ko] undefined! ERROR: "max_low_pfn" [drivers/md/dm-integrity.ko] undefined! ERROR: "min_low_pfn" [drivers/md/dm-integrity.ko] undefined! ERROR: "max_low_pfn" [crypto/tcrypt.ko] undefined! ERROR: "min_low_pfn" [crypto/tcrypt.ko] undefined! ERROR: "min_low_pfn" [security/keys/encrypted-keys/encrypted-keys.ko] undefined! ERROR: "max_low_pfn" [security/keys/encrypted-keys/encrypted-keys.ko] undefined! ERROR: "min_low_pfn" [arch/ia64/kernel/mca_recovery.ko] undefined! ERROR: "max_low_pfn" [arch/ia64/kernel/mca_recovery.ko] undefined! David suggested just exporting min_low_pfn & max_low_pfn in mm/memblock.c: https://lore.kernel.org/lkml/alpine.DEB.2.22.394.2006291911220.1118534@chino.kir.corp.google.com/ Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Acked-by: David Rientjes <rientjes@google.com> Acked-by: Tony Luck <tony.luck@intel.com> Cc: linux-mm@kvack.org Cc: Andrew Morton <akpm@linux-foundation.org> Cc: David Rientjes <rientjes@google.com> Cc: Mike Rapoport <rppt@linux.ibm.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: linux-ia64@vger.kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
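The suggested fix amounts to making the two PFN limits visible to modules from the generic memblock code; a hedged sketch of what that looks like (whether the final patch uses plain EXPORT_SYMBOL and exactly where it places the exports is not asserted here):

```c
/* mm/memblock.c (sketch): let modules reference the bootmem PFN limits. */
unsigned long min_low_pfn;
EXPORT_SYMBOL(min_low_pfn);

unsigned long max_low_pfn;
EXPORT_SYMBOL(max_low_pfn);
```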
-
- 01 September 2020, 1 commit
-
-
Committed by Tiezhu Yang
According to chapter 8.2.1 of the user's manuals for the Loongson 3A2000 CPU [1] and 3A3000 CPU [2], we should treat some event IDs such as 274, 358, 359 and 360 as valid in the check condition; otherwise they are recognized as "not supported". Fix it. [1] http://www.loongson.cn/uploadfile/cpu/3A2000/Loongson3A2000_user2.pdf [2] http://www.loongson.cn/uploadfile/cpu/3A3000/Loongson3A3000_3B3000user2.pdf Fixes: e9dfbaae ("MIPS: perf: Add hardware perf events support for new Loongson-3") Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Acked-by: Huang Pei <huangpei@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
-