- 11 Aug 2022, 1 commit
-
-
By Xianting Tian

Modules always live before the kernel: MODULES_END is fixed, but MODULES_VADDR is not, since it depends on the kernel size. Add it to the virtual kernel memory layout dump. As MODULES is only defined for CONFIG_64BIT, dump it only when CONFIG_64BIT=y, e.g.:

    MODULES_VADDR - MODULES_END
    0xffffffff01133000 - 0xffffffff80000000

Reviewed-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Link: https://lore.kernel.org/r/20220811074150.3020189-5-xianting.tian@linux.alibaba.com
Cc: stable@vger.kernel.org
Fixes: 2bfc6cd8 ("riscv: Move kernel mapping outside of linear mapping")
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
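A hedged illustration of the dump described above (not the exact upstream hunk; init.c has its own layout-print helpers, so pr_notice() stands in for them here):

    #ifdef CONFIG_64BIT
    	pr_notice("%12s : 0x%016lx - 0x%016lx   (%4ld MB)\n", "modules",
    		  (unsigned long)MODULES_VADDR, (unsigned long)MODULES_END,
    		  (long)(((unsigned long)MODULES_END - (unsigned long)MODULES_VADDR) >> 20));
    #endif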
-
- 18 Jul 2022, 1 commit
-
-
By Anshuman Khandual

This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks up a private and static protection_map[] array. Subsequently all __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Link: https://lkml.kernel.org/r/20220711070600.2378316-17-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Brian Cain <bcain@quicinc.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
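A minimal sketch of the pattern described above, assuming riscv's usual PAGE_* protections; the entries shown are illustrative and the real riscv table covers all sixteen VM_* combinations:

    /* unlisted combinations default to 0 here; the real table is complete */
    static const pgprot_t protection_map[16] = {
    	[VM_NONE]			 = PAGE_NONE,
    	[VM_READ]			 = PAGE_READ,
    	[VM_WRITE | VM_READ]		 = PAGE_COPY,
    	[VM_EXEC | VM_READ]		 = PAGE_READ_EXEC,
    	[VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
    };
    DECLARE_VM_GET_PAGE_PROT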
-
- 03 Jun 2022, 1 commit
-
-
By Jisheng Zhang

These three functions are only used in init.c, so make them static. This fixes W=1 warnings like the one below:

    arch/riscv/mm/init.c:721:13: warning: no previous prototype for function 'pt_ops_set_early' [-Wmissing-prototypes]
    void __init pt_ops_set_early(void)
                ^

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Link: https://lore.kernel.org/r/20220516143204.2603-1-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
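A one-line sketch of the fix for each such function, assuming it is only referenced from init.c (the body is omitted):

    /* internal linkage silences -Wmissing-prototypes for file-local helpers */
    static void __init pt_ops_set_early(void)
    {
    	/* install the early page-table allocation/translation ops here */
    }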
-
- 02 Jun 2022, 1 commit
-
-
By Alexandre Ghiti

With the arrival of sv48 and its large address space, it would be cumbersome to statically define the unit used to print the different portions of the virtual memory layout: instead, determine it dynamically.

Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
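A hedged sketch of how such a dynamic unit choice could look; the helper name and format string are illustrative, not the upstream code:

    static void print_mem_range(const char *name, unsigned long start, unsigned long end)
    {
    	unsigned long size = end - start;

    	if (size >= SZ_1G)
    		pr_notice("%12s : 0x%016lx - 0x%016lx (%6lu GB)\n", name, start, end, size >> 30);
    	else if (size >= SZ_1M)
    		pr_notice("%12s : 0x%016lx - 0x%016lx (%6lu MB)\n", name, start, end, size >> 20);
    	else
    		pr_notice("%12s : 0x%016lx - 0x%016lx (%6lu KB)\n", name, start, end, size >> 10);
    }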
-
- 26 May 2022, 1 commit
-
-
By Palmer Dabbelt

A handful of unused functions were enabled during XIP builds, and they did not build correctly. Just disable them entirely.

Fixes: e8a62cc2 ("riscv: Implement sv48 support")
Reviewed-by: Guo Ren <guoren@kernel.org>
Link: https://lore.kernel.org/r/20220420184056.7886-5-palmer@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
- 25 May 2022, 1 commit
-
-
By Palmer Dabbelt

These trigger a handful of build warnings.

Reported-by: kernel test robot <lkp@intel.com>
Fixes: 677b9eb8 ("riscv: mm: Prepare pt_ops helper functions for sv57")
Link: https://lore.kernel.org/r/20220420184056.7886-2-palmer@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
- 12 May 2022, 1 commit
-
-
By Heiko Stuebner

Some current CPUs based on T-Head cores implement memory types in a way that differs substantially from the Svpbmt specification, even going so far as to use PTE bits marked as reserved. Add the T-Head vendor ID and the necessary errata code to replace the affected instructions.

Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Samuel Holland <samuel@sholland.org>
Link: https://lore.kernel.org/r/20220511192921.2223629-13-heiko@sntech.de
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
- 29 Apr 2022, 1 commit
-
-
By Nick Kossifidis

In case the DTB provided by the bootloader/BootROM is before the kernel image or outside /memory, we won't be able to access it through the linear mapping and will get a segfault in setup_arch(). Currently OpenSBI relocates the DTB, but that is not always the case (e.g. if FW_JUMP_FDT_ADDR is not specified), and it is also not the most portable approach, since the default FW_JUMP_FDT_ADDR of the generic platform relocates the DTB to a specific offset that may not be available. To avoid this situation, copy the DTB so that it is visible through the linear mapping.

Signed-off-by: Nick Kossifidis <mick@ics.forth.gr>
Link: https://lore.kernel.org/r/20220322132839.3653682-1-mick@ics.forth.gr
Tested-by: Conor Dooley <conor.dooley@microchip.com>
Fixes: f105aa94 ("riscv: add BUILTIN_DTB support for MMU-enabled targets")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
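A hedged sketch of such a copy (not the exact upstream fix; it assumes memblock is already usable and that dtb_early_va points at the firmware-provided DTB):

    unsigned int fdt_size = fdt_totalsize(dtb_early_va);
    phys_addr_t dtb_copy = memblock_phys_alloc(fdt_size, PAGE_SIZE);

    if (dtb_copy) {
    	/* the copy lives in RAM covered by the linear mapping, so __va() works later */
    	memcpy(__va(dtb_copy), dtb_early_va, fdt_size);
    	dtb_early_va = __va(dtb_copy);
    }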
-
- 22 Apr 2022, 1 commit
-
-
By Anup Patel

When Sv57 is not available, the satp.MODE test in set_satp_mode() will fail and lead to pgdir re-programming for Sv48. The pgdir re-programming will fail as well, due to the pre-existing pgdir entry used for Sv57, and as a result the kernel fails to boot on RISC-V platforms that do not have Sv57. To fix this issue, clear the pgdir memory in set_satp_mode() before re-programming.

Fixes: 011f09d1 ("riscv: mm: Set sv57 on defaultly")
Reported-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
- 18 Apr 2022, 1 commit
-
-
By Christoph Hellwig

Pass a boolean flag to indicate if swiotlb needs to be enabled based on the addressing needs, and replace the verbose argument with a set of flags, including one to force enable bounce buffering. Note that this patch removes the possibility to force xen-swiotlb use with the swiotlb=force parameter on the command line on x86 (arm and arm64 never supported that), but this interface will be restored shortly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
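A hedged example of a caller after this interface change; the flag name follows the new API, and the condition mirrors what a port such as riscv passes from its mem_init() (dma32_phys_limit being that file's local DMA limit variable):

    /* enable bounce buffering only if some RAM is above the 32-bit DMA limit */
    swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE);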
-
- 24 Mar 2022, 1 commit
-
-
By Jisheng Zhang

Replace the conditional compilation using "#ifdef CONFIG_KEXEC_CORE" by a check for "IS_ENABLED(CONFIG_KEXEC_CORE)", to simplify the code and increase compile coverage.

Link: https://lkml.kernel.org/r/20211206160514.2000-3-jszhang@kernel.org
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Acked-by: Baoquan He <bhe@redhat.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexandre Ghiti <alex@ghiti.fr>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
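An illustrative before/after of the conversion pattern (not the exact hunk; the call site is a stand-in):

    /* Before: the call is only compiled when CONFIG_KEXEC_CORE is set */
    #ifdef CONFIG_KEXEC_CORE
    	reserve_crashkernel();
    #endif

    /* After: always compiled and type-checked, optimized away when the option is off */
    	if (IS_ENABLED(CONFIG_KEXEC_CORE))
    		reserve_crashkernel();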
-
- 04 Mar 2022, 1 commit
-
-
By Alexandre Ghiti

high_memory used to be initialized in mem_init(), way after setup_bootmem(). But a call to dma_contiguous_reserve() made before mem_init() gives rise to the warning below, because high_memory is still 0 when it is used at the very beginning of cma_declare_contiguous_nid(). It went unnoticed since the move of the kasan region redefined KERN_VIRT_SIZE so that it no longer encompasses -1. Fix this by initializing high_memory in setup_bootmem().

    ------------[ cut here ]------------
    virt_to_phys used for non-linear address: ffffffffffffffff (0xffffffffffffffff)
    WARNING: CPU: 0 PID: 0 at arch/riscv/mm/physaddr.c:14 __virt_to_phys+0xac/0x1b8
    Modules linked in:
    CPU: 0 PID: 0 Comm: swapper Not tainted 5.17.0-rc1-00007-ga68b89289e26 #27
    Hardware name: riscv-virtio,qemu (DT)
    epc : __virt_to_phys+0xac/0x1b8
    ra : __virt_to_phys+0xac/0x1b8
    epc : ffffffff80014922 ra : ffffffff80014922 sp : ffffffff84a03c30
    gp : ffffffff85866c80 tp : ffffffff84a3f180 t0 : ffffffff86bce657
    t1 : fffffffef09406e8 t2 : 0000000000000000 s0 : ffffffff84a03c70
    s1 : ffffffffffffffff a0 : 000000000000004f a1 : 00000000000f0000
    a2 : 0000000000000002 a3 : ffffffff8011f408 a4 : 0000000000000000
    a5 : 0000000000000000 a6 : 0000000000f00000 a7 : ffffffff84a03747
    s2 : ffffffd800000000 s3 : ffffffff86ef4000 s4 : ffffffff8467f828
    s5 : fffffff800000000 s6 : 8000000000006800 s7 : 0000000000000000
    s8 : 0000000480000000 s9 : 0000000080038ea0 s10: 0000000000000000
    s11: ffffffffffffffff t3 : ffffffff84a035c0 t4 : fffffffef09406e8
    t5 : fffffffef09406e9 t6 : ffffffff84a03758
    status: 0000000000000100 badaddr: 0000000000000000 cause: 0000000000000003
    [<ffffffff8322ef4c>] cma_declare_contiguous_nid+0xf2/0x64a
    [<ffffffff83212a58>] dma_contiguous_reserve_area+0x46/0xb4
    [<ffffffff83212c3a>] dma_contiguous_reserve+0x174/0x18e
    [<ffffffff83208fc2>] paging_init+0x12c/0x35e
    [<ffffffff83206bd2>] setup_arch+0x120/0x74e
    [<ffffffff83201416>] start_kernel+0xce/0x68c
    irq event stamp: 0
    hardirqs last enabled at (0): [<0000000000000000>] 0x0
    hardirqs last disabled at (0): [<0000000000000000>] 0x0
    softirqs last enabled at (0): [<0000000000000000>] 0x0
    softirqs last disabled at (0): [<0000000000000000>] 0x0
    ---[ end trace 0000000000000000 ]---

Fixes: f7ae0233 ("riscv: Move KASAN mapping next to the kernel mapping")
Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
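A hedged sketch of the fix described above, assuming the standard globals (max_pfn, max_low_pfn, high_memory) and that setup_bootmem() already knows the end of RAM:

    phys_addr_t phys_ram_end = memblock_end_of_DRAM();

    max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end);
    /* initialize high_memory here instead of waiting for mem_init() */
    high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));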
-
- 15 Feb 2022, 4 commits
-
-
By Qinglin Pan

Set Sv57 by default when CONFIG_64BIT, and fall back to Sv48 at boot time if Sv57 is not supported by the hardware.

Signed-off-by: Qinglin Pan <panqinglin2020@iscas.ac.cn>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
By Qinglin Pan

Prepare the pt_ops helper functions that will be used to create Sv57 mappings during boot.

Signed-off-by: Qinglin Pan <panqinglin2020@iscas.ac.cn>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
By Qinglin Pan

To determine the page-table level at boot time, we cannot use the helper functions in include/asm-generic/pgtable-nop4d.h and must implement them ourselves. Use a pgtable_l5_enabled variable instead of including pgtable-nop4d.h to control p4d folding, and implement the corresponding helper functions.

Signed-off-by: Qinglin Pan <panqinglin2020@iscas.ac.cn>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
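A minimal sketch of one such runtime-folded helper, assuming the pgtable_l5_enabled flag; the real riscv helpers differ in detail:

    static inline int pgd_none(pgd_t pgd)
    {
    	/* with the p4d level folded into pgd, a pgd entry can never be "none" */
    	if (!pgtable_l5_enabled)
    		return 0;

    	return pgd_val(pgd) == 0;
    }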
-
By Jisheng Zhang

satp_mode is never modified after init, so it can be marked as __ro_after_init.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
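A one-line sketch of the annotation (the initial value shown is illustrative):

    /* written once during early boot, read-only afterwards */
    u64 satp_mode __ro_after_init = SATP_MODE_57;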
-
- 05 Feb 2022, 2 commits
-
-
By Palmer Dabbelt

This manifests as a crash early in boot on VexRiscv.

Signed-off-by: Myrtle Shah <gatecat@ds0.me>
[Palmer: split commit]
Fixes: 44c92257 ("RISC-V: enable XIP")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
By Palmer Dabbelt

This manifests as a crash early in boot on VexRiscv.

Signed-off-by: Myrtle Shah <gatecat@ds0.me>
[Palmer: split commit]
Fixes: 6d7f91d9 ("riscv: Get rid of CONFIG_PHYS_RAM_BASE in kernel physical address conversion")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
- 20 Jan 2022, 10 commits
-
-
By kernel test robot

    arch/riscv/mm/init.c:48:11-16: WARNING: conversion to bool not needed here

Remove unneeded conversion to bool.

Semantic patch information: relational and logical operators evaluate to bool; explicit conversion is overly verbose and unneeded.

Generated by: scripts/coccinelle/misc/boolconv.cocci

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: kernel test robot <lkp@intel.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
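A generic illustration of the pattern this coccinelle rule flags (the variable is a stand-in, not the actual line in init.c):

    /* Before: the && expression is already boolean, so the ternary is redundant */
    bool l4_wanted = (IS_ENABLED(CONFIG_64BIT) && !IS_ENABLED(CONFIG_XIP_KERNEL)) ? true : false;

    /* After */
    bool l4_wanted = IS_ENABLED(CONFIG_64BIT) && !IS_ENABLED(CONFIG_XIP_KERNEL);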
-
By Alexandre Ghiti

By adding a new, fourth level of page table, give 64-bit kernels the possibility to address 2^48 bytes of virtual address space: in practice, that offers 128TB of virtual address space to userspace and allows up to 64TB of physical memory. If the underlying hardware does not support sv48, we automatically fall back to a standard 3-level page table by folding the new PUD level into the PGDIR level. In order to detect hardware capabilities at runtime, we rely on the SATP behaviour of ignoring writes with an unsupported mode.

Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
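A hedged sketch of that runtime probe (identifiers are illustrative and this is not the exact set_satp_mode() implementation): write the desired mode into satp; hardware that does not implement it leaves satp unchanged, so reading it back reveals whether the mode took effect.

    unsigned long want = PFN_DOWN(__pa(early_pg_dir)) | SATP_MODE_48;

    csr_write(CSR_SATP, want);
    if (csr_read(CSR_SATP) != want)
    	pgtable_l4_enabled = false;	/* sv48 not supported: fold PUD and stay on sv39 */

    csr_write(CSR_SATP, 0);
    local_flush_tlb_all();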
-
By Alexandre Ghiti

This simply gathers the different pt_ops initializations into functions, with comments added to explain why the page table operations must be changed along the boot process.

Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
By Alexandre Ghiti

Now that the kasan shadow region is next to the kernel, for sv48 this region won't be aligned on PGDIR_SIZE, so when populating it we need to get down to lower levels of the page table. Instead of reimplementing the page table walk for the early population, take advantage of the existing functions used for the final population. Note that kasan swapper initialization must also be split, since memblock is not initialized at this point and, as the last PGD is shared with the kernel, we would need to allocate a PUD; so postpone the kasan final population until after the kernel population is done.

Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
By Alexandre Ghiti

Now that KASAN_SHADOW_OFFSET is defined at compile time as a config option, this value must remain constant whatever the size of the virtual address space, which is only possible by pushing this region to the end of the address space, next to the kernel mapping.

Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
By Jisheng Zhang

Currently, the #ifdef CONFIG_XIP_KERNEL usage can be divided into the following three types.

The first one is for functions/declarations only used in the XIP case.

The second one is for the XIP_FIXUP case, something like:

    foo_type foo;
    #ifdef CONFIG_XIP_KERNEL
    #define foo (*(foo_type *)XIP_FIXUP(&foo))
    #endif

Usually, it's better to let the foo macro sit together with the foo variable. But if various foos are defined adjacently, we can save some #ifdef CONFIG_XIP_KERNEL usage by grouping them together.

The third one is for different implementations for XIP, usually an #ifdef...#else...#endif case.

This patch moves the pt_ops macro next to its #ifdef CONFIG_XIP_KERNEL and groups the first-type usages into one block.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
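A hedged sketch of the grouped XIP_FIXUP style described above (the variables are illustrative examples, not the exact set touched by the patch):

    pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
    static struct pt_alloc_ops pt_ops __initdata;

    #ifdef CONFIG_XIP_KERNEL
    #define early_pg_dir	((pgd_t *)XIP_FIXUP(early_pg_dir))
    #define pt_ops		(*(struct pt_alloc_ops *)XIP_FIXUP(&pt_ops))
    #endif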
-
By Jisheng Zhang

Try our best to replace the conditional compilation using "#ifdef CONFIG_XIP_KERNEL" with "IS_ENABLED(CONFIG_XIP_KERNEL)", to simplify the code and to increase compile coverage.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
By Jisheng Zhang

Except for "pt_ops", the other globals used when CONFIG_XIP_KERNEL=y are defined as below:

    foo_type foo;
    #ifdef CONFIG_XIP_KERNEL
    #define foo (*(foo_type *)XIP_FIXUP(&foo))
    #endif

Follow the same pattern for pt_ops to unify the style and to simplify the code.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
By Jisheng Zhang

Try our best to replace the conditional compilation using "#ifdef CONFIG_64BIT" by a check for "IS_ENABLED(CONFIG_64BIT)", to simplify the code and to increase compile coverage. Now we can also remove the __maybe_unused used in the max_mapped_addr declaration. We also remove the BUG_ON check of mapping the last 4K bytes of the addressable memory, since this is always true for every kernel.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
By Jisheng Zhang

is_kdump_kernel() returns false in the !CONFIG_CRASH_DUMP case, so we don't need the #ifdef CONFIG_CRASH_DUMP around the is_kdump_kernel() check.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
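An illustrative before/after of the simplification (the surrounding logic is a stand-in, not the exact hunk):

    /* Before */
    #ifdef CONFIG_CRASH_DUMP
    	if (is_kdump_kernel())
    		return;
    #endif

    /* After: is_kdump_kernel() already evaluates to false when CONFIG_CRASH_DUMP=n */
    	if (is_kdump_kernel())
    		return;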
-
- 10 Jan 2022, 2 commits
-
-
By Jisheng Zhang

Currently, with 64BIT and !XIP_KERNEL, phys_ram_base is always 0, no matter what start of DRAM memblock actually reports.

Fixes: 6d7f91d9 ("riscv: Get rid of CONFIG_PHYS_RAM_BASE in kernel physical address conversion")
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
By Nick Kossifidis

When allocating the crash kernel region without explicitly specifying its base address/size, memblock_phys_alloc_range() will attempt to allocate memory top to bottom (memblock.bottom_up is false), so the crash kernel region will end up in high memory on 64-bit systems. This way swiotlb can't work in the crash kernel, since there won't be any 32-bit addressable memory available for the bounce buffers.

Try to allocate 32-bit addressable memory, if available, for the crash kernel by restricting the top search address to be less than SZ_4G. If that fails, fall back to the previous behavior.

I tested this on HiFive Unmatched, where the PCIe controller needs swiotlb to work; with this patch it's possible to access the PCIe controller in the crash kernel and mount the rootfs from the NVMe.

Signed-off-by: Nick Kossifidis <mick@ics.forth.gr>
Fixes: e53d2818 ("RISC-V: Add kdump support")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
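A hedged sketch of the allocation strategy described above (alignment and variable names are illustrative):

    crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE, 0, SZ_4G);
    if (!crash_base) {
    	/* no 32-bit addressable memory left: fall back to anywhere in RAM */
    	crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
    					       SZ_4G, memblock_end_of_DRAM());
    }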
-
- 06 Jan 2022, 1 commit
-
-
By Kefeng Wang

After commit 1355c31e ("asm-generic: pgalloc: provide generic pmd_alloc_one() and pmd_free_one()"), the main part needed to support the PMD split page table lock is in asm-generic/pgalloc.h. The only change left is to add pgtable_pmd_page_ctor() to alloc_pmd_late(); then we can enable ARCH_ENABLE_SPLIT_PMD_PTLOCK for RV64.

Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
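A hedged sketch of the change to the late PMD allocator (not the exact hunk; the allocation style is simplified):

    static phys_addr_t __init alloc_pmd_late(uintptr_t va)
    {
    	unsigned long vaddr = __get_free_page(GFP_KERNEL);

    	/* the constructor sets up the per-page lock needed for split PMD ptlocks */
    	BUG_ON(!vaddr || !pgtable_pmd_page_ctor(virt_to_page((void *)vaddr)));

    	return __pa(vaddr);
    }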
-
- 27 Oct 2021, 1 commit
-
-
By Vitaly Wool

Currently there's a limit of 8MB for the .text section of a RISC-V image in the XIP case. This breaks compilation of many automatic builds and is generally inconvenient. This patch removes that limitation and optimizes XIP image file size at the same time.

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 04 Sep 2021, 1 commit
-
-
By Mike Rapoport

There are a lot of uses of memblock_find_in_range() along with memblock_reserve() from the times when the memblock allocation APIs did not exist. memblock_find_in_range() is the very core of memblock allocations, so any future changes to its internal behaviour would mandate updates of all the users outside memblock.

Replace the calls to memblock_find_in_range() with equivalent calls to memblock_phys_alloc() and memblock_phys_alloc_range(), and make memblock_find_in_range() a private method of memblock. This simplifies the callers, ensures that (unlikely) errors in memblock_reserve() are handled, and improves the maintainability of memblock_find_in_range().

Link: https://lkml.kernel.org/r/20210816122622.30279-1-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Acked-by: Kirill A. Shutemov <kirill.shtuemov@linux.intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> [ACPI]
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Acked-by: Nick Kossifidis <mick@ics.forth.gr> [riscv]
Tested-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
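An illustrative before/after of the conversion (a generic call site, not a specific one from the series):

    /* Before: find and reserve as two steps, error handling left to the caller */
    addr = memblock_find_in_range(start, end, size, align);
    if (addr)
    	memblock_reserve(addr, size);

    /* After: one call that searches and reserves, returning 0 on failure */
    addr = memblock_phys_alloc_range(size, align, start, end);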
-
- 25 Aug 2021, 1 commit
-
-
By Geert Uytterhoeven

RISC-V uses platform-specific code to locate the elf core header in memory. However, this does not conform to the standard "linux,elfcorehdr" DT bindings, as it relies on a reserved memory node with the "linux,elfcorehdr" compatible value, instead of on a "linux,elfcorehdr" property under the "/chosen" node. The non-compliant code can just be removed, as the standard behavior is already implemented by platform-agnostic handling in the FDT core code.

Fixes: 56409750 ("RISC-V: Add crash kernel support")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/41c75d6ee3114ae6304f8afe0051895af91200ee.1628670468.git.geert+renesas@glider.be
-
- 14 Aug 2021, 2 commits
-
-
By Kefeng Wang

This patch adds support for allocating gigantic hugepages using CMA by specifying the hugetlb_cma= kernel parameter. This is only supported on RV64.

Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
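A hedged sketch of the boot-time hook this implies, mirroring what other 64-bit architectures do (the exact riscv call site may differ), with the CMA area sized by a command line such as hugetlb_cma=4G:

    #ifdef CONFIG_64BIT
    	/* reserve the CMA area used for gigantic (PUD-sized) hugepages */
    	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
    #endif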
-
By Kenneth Lee

RISC-V uses a global variable, pfn_base, for page/pfn translation. But this is a common name that is also used elsewhere as a local or parameter name (such as in vfio_pin_pages_remote); in those places the page/pfn macros that refer to this name end up binding to the local variable instead, which breaks the translation. This patch renames pfn_base to riscv_pfn_base to fix the problem.

Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
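A hedged illustration of the shadowing hazard (simplified; not the kernel code, and the function is hypothetical):

    extern unsigned long riscv_pfn_base;	/* was: pfn_base */
    #define ARCH_PFN_OFFSET		(riscv_pfn_base)

    static struct page *lookup(unsigned long pfn_base /* local name shadows the old global */)
    {
    	/*
    	 * With the old global name, pfn_to_page() -> ARCH_PFN_OFFSET expanded
    	 * here would bind to this parameter rather than the global symbol,
    	 * silently breaking the page/pfn translation.
    	 */
    	return pfn_to_page(pfn_base);
    }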
-
- 12 Aug 2021, 4 commits
-
-
By Alexandre Ghiti

The current comment states that we check, in create_kernel_page_table, whether the 64-bit kernel mapping overlaps with the last 4K of the address space that is reserved for error values; that is not the case, since the check is done in setup_vm. Anyway, remove the reference to any particular function and simply note that in 64-bit kernels the check should be done as soon as the kernel mapping base address is known.

Fixes: db6b84a3 ("riscv: Make sure the kernel mapping does not overlap with IS_ERR_VALUE")
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Alexandre Ghiti

The code that handles the early fdt mapping is hard to read and does not create the same mapping size depending on the kernel: for 64-bit, 2 PMD entries are used, which amounts to a 4MB mapping; for 32-bit, 2 PGDIR entries are used, which amounts to an 8MB mapping. So keep using 2 PMD entries for 64-bit and use only one PGD entry for 32-bit, which is enough to cover 4MB. Move that into a new function called create_fdt_early_page_table, which follows the same naming as create_kernel_page_table.

Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Alexandre Ghiti

__PAGETABLE_PMD_FOLDED defines a 2-level page table that is only used by 32-bit kernels, so there is no need to check for CONFIG_64BIT inside #ifndef __PAGETABLE_PMD_FOLDED, and vice versa.

Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Alexandre Ghiti

This simplifies the code and makes it more readable.

Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-