- 26 May 2021, 2 commits
-
-
By Kefeng Wang
Make setup_bootmem() static. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Jisheng Zhang
The empty_zero_page sits in the .bss..page_aligned section, so it is already cleared to zero when .bss is cleared; we don't need to clear it again. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
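For reference, the RISC-V declaration looks roughly like this; the __page_aligned_bss attribute is what places the array in .bss..page_aligned:

    /* lives in .bss..page_aligned, so it is zeroed together with the rest of .bss */
    unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
    EXPORT_SYMBOL(empty_zero_page);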
-
- 07 May 2021, 2 commits
-
-
By Geert Uytterhoeven
The various uses of protect_kernel_linear_mapping_text_rodata() are not consistent: its definition depends on "64BIT && !XIP_KERNEL", its forward declaration depends on MMU, and its single caller depends on "STRICT_KERNEL_RWX && 64BIT && MMU && !XIP_KERNEL". Fix this by settling on the dependencies of the caller, which can be simplified since STRICT_KERNEL_RWX already depends on "MMU && !XIP_KERNEL". Provide a dummy definition, as the caller is protected by "IS_ENABLED(CONFIG_STRICT_KERNEL_RWX)" instead of "#ifdef CONFIG_STRICT_KERNEL_RWX". Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Tested-by: Alexandre Ghiti <alex@ghiti.fr> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
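A minimal sketch of the dummy-definition pattern described above (the exact hunk may differ):

    #ifdef CONFIG_STRICT_KERNEL_RWX
    void protect_kernel_linear_mapping_text_rodata(void);
    #else
    static inline void protect_kernel_linear_mapping_text_rodata(void) {}
    #endif

With the stub in place, the caller can stay under IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) and still compile when the option is off.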
-
By Geert Uytterhoeven
When the kernel mapping was moved outside of the linear mapping, the kernel memory reservation was increased to take the mapping granularity into account. However, this is done unconditionally, regardless of whether the kernel memory is mapped read-only or not. If this extension is not needed, up to 2 MiB may be lost, which has a big impact on e.g. Canaan K210 (64-bit nommu) platforms with only 8 MiB of RAM. Reclaim the lost memory by only extending the reserved region when needed, i.e. depending on a simplified version of the conditional logic around the call to protect_kernel_linear_mapping_text_rodata(). Fixes: 2bfc6cd8 ("riscv: Move kernel mapping outside of linear mapping") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Tested-by: Alexandre Ghiti <alex@ghiti.fr> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 01 May 2021, 1 commit
-
-
By Kefeng Wang
mem_init_print_info() is called from mem_init() on each architecture, always with a NULL argument, so make it take no argument and move the call into mm_init(). Link: https://lkml.kernel.org/r/20210317015210.33641-1-wangkefeng.wang@huawei.com Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> [x86] Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> [powerpc] Acked-by: David Hildenbrand <david@redhat.com> Tested-by: Anatoly Pugachev <matorola@gmail.com> [sparc64] Acked-by: Russell King <rmk+kernel@armlinux.org.uk> [arm] Acked-by: Mike Rapoport <rppt@linux.ibm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Richard Henderson <rth@twiddle.net> Cc: Guo Ren <guoren@kernel.org> Cc: Yoshinori Sato <ysato@users.osdn.me> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "Peter Zijlstra" <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
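The shape of the interface change, roughly (shown for contrast; every architecture used to pass NULL):

    /* before: called from each architecture's mem_init() */
    void __init mem_init_print_info(const char *str);
    /* after: no argument, called once from mm_init() in init/main.c */
    void __init mem_init_print_info(void);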
-
- 26 April 2021, 7 commits
-
-
By Vitaly Wool
Introduce XIP (eXecute In Place) support for RISC-V platforms. It allows code to be executed directly from non-volatile storage that is directly addressable by the CPU, such as the QSPI NOR flash found on many RISC-V platforms. This makes way for a significant reduction in RAM footprint. The XIP kernel is not compressed since it has to run directly from flash, so it will occupy more space on the non-volatile storage. The physical flash address used to link the kernel object files and to store the image has to be known at compile time and is represented by a Kconfig option. XIP on RISC-V will, for the time being, only work on MMU-enabled kernels. Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.com> [Alex: Rebase on top of "Move kernel mapping outside the linear mapping"] Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> [Palmer: disable XIP for allyesconfig] Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Nick Kossifidis
This patch allows Linux to act as a crash kernel for use with kdump. Userspace will let the crash kernel know about the memory region it can use through the linux,usable-memory property on the /memory node (overriding its reg property), and about the memory region where the elf core header of the previous kernel is saved through a reserved-memory node with a compatible string of "linux,elfcorehdr". This approach is the least invasive and re-uses functionality already present. I tested this on riscv64 qemu and it works as expected; you may test it by retrieving the dmesg of the previous kernel through /proc/vmcore, using the vmcore-dmesg utility from kexec-tools. Signed-off-by: Nick Kossifidis <mick@ics.forth.gr> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Nick Kossifidis
This patch adds support for kdump: the kernel will reserve a region for the crash kernel and jump there on panic. In order for userspace tools (kexec-tools) to prepare the crash kernel kexec image, we also need to expose some information in /proc/iomem for the memory regions used by the kernel and for the region reserved for the crash kernel. Note that in userspace the device tree is used to determine the system's memory layout, so the "System RAM" entries in /proc/iomem are ignored. I tested this on riscv64 qemu and it works as expected; you may test it by triggering a crash through /proc/sysrq-trigger: echo c > /proc/sysrq-trigger Signed-off-by: Nick Kossifidis <mick@ics.forth.gr> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By zhouchuangao
BUG_ON() uses unlikely() in its if(), so the check can be optimized at compile time. Signed-off-by: zhouchuangao <zhouchuangao@vivo.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
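For reference, the generic definition in include/asm-generic/bug.h is roughly:

    #define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)

so converting an open-coded "if (cond) BUG();" into "BUG_ON(cond)" picks up the unlikely() branch hint for free.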
-
By Jisheng Zhang
All of these are never modified after init, so they can be marked __ro_after_init. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
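Illustrative sketch of the annotation (the variable name here is made up for the example, not one touched by this commit):

    /* written once during boot, then mapped read-only for the rest of runtime */
    static unsigned long boot_tuning_value __ro_after_init;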
-
By Jisheng Zhang
They are not needed after booting, so mark them as __init to move them to the __init section. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Alexandre Ghiti
This is a preparatory patch for relocatable kernel and sv48 support. The kernel used to be linked at the PAGE_OFFSET address, therefore we could use the linear mapping for the kernel mapping. But the relocated kernel base address will be different from PAGE_OFFSET, and since in the linear mapping two different virtual addresses cannot point to the same physical address, the kernel mapping needs to lie outside the linear mapping so that we don't have to copy it at the same physical offset. The kernel mapping is moved to the last 2GB of the address space, BPF is now always after the kernel, and modules use the 2GB memory range right before the kernel, so the BPF and modules regions do not overlap. The KASLR implementation will simply have to move the kernel within that last 2GB range and take care of leaving enough space for BPF. In addition, by moving the kernel to the end of the address space, both sv39 and sv48 kernels will be exactly the same without needing to be relocated at runtime. Suggested-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> [Palmer: Squash the STRICT_RWX fix, and a !MMU fix] Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
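A rough picture of the resulting virtual layout (illustrative only; exact addresses depend on sv39/sv48):

    /*
     * PAGE_OFFSET ........ linear mapping of all RAM
     * ...
     * modules ............ 2 GiB window just below the kernel
     * kernel image ....... mapped in the last 2 GiB of the address space
     * BPF ................ placed after the kernel, inside that last 2 GiB
     */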
-
- 10 March 2021, 1 commit
-
-
By Kefeng Wang
The riscv [rv32_]defconfig enables CONFIG_MEMTEST, but the memtest feature is not actually supported on RISC-V. Add an early_memtest() call to implement memtest support. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
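A sketch of the hook (the range arguments shown are an assumption; the generic helper takes a physical start and end and only does work when memtest= is passed on the kernel command line):

    /* run the generic early memory test over the RAM discovered by memblock */
    early_memtest(memblock_start_of_DRAM(), memblock_end_of_DRAM());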
-
- 27 February 2021, 1 commit
-
-
By Kefeng Wang
After the following patches: commit de043da0 ("RISC-V: Fix usage of memblock_enforce_memory_limit"), commit 1bd14a66 ("RISC-V: Remove any memblock representing unusable memory area") and commit b10d6bca ("arch, drivers: replace for_each_membock() with for_each_mem_range()"), some logic is useless; kill the mem_start/start/end variables and the unneeded code. Reviewed-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 23 February 2021, 1 commit
-
-
By Alexandre Ghiti
At the early boot stage, we have a whole PGDIR to map the kernel, so there is no need to restrict the early mapping size to 128MB. Removing this define also allows us to simplify some compile-time logic. This fixes large kernel mappings with a size greater than 128MB, as is the case for syzbot kernels whose size was just ~130MB. Note that on rv64, for now, we are then limited to PGDIR size for the early mapping as we can't use PGD mappings (see [1]). That should be enough given the relatively small size of syzbot kernels compared to PGDIR_SIZE, which is 1GB. [1] https://lore.kernel.org/lkml/20200603153608.30056-1-alex@ghiti.fr/ Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> Tested-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 19 February 2021, 2 commits
-
-
By Kefeng Wang
Convert to the generic reserve_initrd_mem() function. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Vitaly Wool
Sometimes, especially in a production system, we may not want to use a "smart bootloader" like u-boot to load the kernel, ramdisk and device tree from a filesystem on eMMC, but rather load the kernel from a NAND partition and run it as soon as we can; in this case it is convenient to have the device tree compiled into the kernel binary. Since this case is not limited to MMU-less systems, let's support it for those which have the MMU enabled too. While at it, provide __dtb_start as a parameter to setup_vm() in the BUILTIN_DTB case, so we don't have to duplicate BUILTIN_DTB-specific processing in the MMU-enabled and MMU-disabled versions of setup_vm(). Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.com> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 23 January 2021, 1 commit
-
-
By Guo Ren
max_mapnr is the number of PFNs, not an absolute PFN offset. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Fixes: d0d8aae6 ("RISC-V: Set maximum number of mapped pages correctly") Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
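A sketch of the distinction (assuming ARCH_PFN_OFFSET is the PFN of the first RAM page; the exact hunk may differ):

    /* wrong: passes an absolute end PFN where a page count is expected */
    set_max_mapnr(max_low_pfn);
    /* right: max_mapnr is the number of mapped pages */
    set_max_mapnr(max_low_pfn - ARCH_PFN_OFFSET);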
-
- 16 January 2021, 1 commit
-
-
By Atish Patra
Currently, the Linux kernel cannot use the last 4k bytes of addressable space because the IS_ERR_VALUE macro treats those addresses as errors. This is an issue for RV32, as the memblock allocator can potentially allocate a chunk of memory from the end of DRAM (2GB), leading to a bad-address error even though the address is technically valid. Fix this issue by limiting memblock if the available memory spans the entire address space. Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
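For reference, the macro in question (include/linux/err.h) is roughly:

    #define MAX_ERRNO	4095
    #define IS_ERR_VALUE(x) unlikely((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)

so any pointer landing in the last 4K of the address space is indistinguishable from an error code.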
-
- 15 January 2021, 2 commits
-
-
By Atish Patra
Use the generic NUMA implementation to add NUMA support for RISC-V. This is based on Greentime's patch [1], but modified to use the generic NUMA implementation and with a few more fixes. [1] https://lkml.org/lkml/2020/1/10/233 Co-developed-by: Greentime Hu <greentime.hu@sifive.com> Signed-off-by: Greentime Hu <greentime.hu@sifive.com> Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Atish Patra
Currently, we perform some memory init functions in paging init. But that will be an issue for NUMA support, where the DT needs to be flattened before NUMA initialization and memblock_present can only be called after NUMA initialization. Move the memory-initialization-related functions to a separate function. Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Greentime Hu <greentime.hu@sifive.com> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 08 January 2021, 1 commit
-
-
By Damien Le Moal
All SiPeed K210 MAIX boards have the exact same vendor, arch and implementation IDs, which prevents differentiating them to select the correct device tree through the SOC_BUILTIN_DTB_DECLARE() macro. This makes the macro useless and mandates changing the code of the sysctl driver to select the builtin device tree suitable for the target board. Fix this problem by removing the SOC_BUILTIN_DTB_DECLARE() macro, since it is only used for K210 support. The code searching the builtin DTBs using the vendor, arch and implementation IDs is also removed. Support for builtin DTBs falls back to the simpler and more traditional handling using the CONFIG_BUILTIN_DTB option, similarly to other architectures. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 22 December 2020, 1 commit
-
-
By Atish Patra
memblock_enforce_memory_limit() accepts the maximum memory size, not the maximum address that can be handled by the kernel. Fix the function invocation accordingly. Fixes: 1bd14a66 ("RISC-V: Remove any memblock representing unusable memory area") Cc: stable@vger.kernel.org Reported-by: Bin Meng <bin.meng@windriver.com> Tested-by: Bin Meng <bin.meng@windriver.com> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 26 November 2020, 1 commit
-
-
By Atish Patra
Currently, .init.text & .init.data are intermixed, which makes it impossible to apply different permissions to them. .init.data shouldn't need exec permissions, while .init.text shouldn't have write permission. Moreover, the strict permissions are only enforced after /init starts. This leaves the kernel vulnerable to possibly buggy built-in modules. Keep .init.text & .init.data in separate sections so that different permissions are applied to each section. Apply permissions to individual sections as early as possible. This improves the kernel protection under CONFIG_STRICT_KERNEL_RWX. We also need to restore the permissions for the entire __init section after it is freed so that those pages can be used for other purposes. Signed-off-by: Atish Patra <atish.patra@wdc.com> Tested-by: Greentime Hu <greentime.hu@sifive.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 21 November 2020, 1 commit
-
-
By Kefeng Wang
riscv has selected HAVE_DMA_CONTIGUOUS but doesn't call dma_contiguous_reserve(). Add the dma_contiguous_reserve() call, which enables CMA. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
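A minimal sketch of the hook (the dma32_phys_limit bound is an assumption about how the call is wired up; it caps how high in physical memory the contiguous area may sit):

    /* reserve the CMA area once memblock knows the memory layout */
    dma_contiguous_reserve(dma32_phys_limit);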
-
- 10 November 2020, 1 commit
-
-
By Nick Kossifidis
This patch (previously part of my kexec/kdump series) populates /proc/iomem with the various sections of the kernel image. We need this so that kexec-tools can prepare the crashkernel image for kdump to work. Since resource tree initialization is not related to memory initialization, I added the code to kernel/setup.c and removed the original code (derived from the arm64 tree) from mm/init.c. Signed-off-by: Nick Kossifidis <mick@ics.forth.gr> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 06 November 2020, 2 commits
-
-
By Anup Patel
Currently, we use PGD mappings for the early DTB mapping in early_pgd, but this breaks the Linux kernel on the SiFive Unleashed because PMP checks don't work correctly for PGD mappings there. To fix early DTB mappings on the SiFive Unleashed, use non-PGD mappings (i.e. PMD) for early DTB access. Fixes: 8f3a2b4a ("RISC-V: Move DT mapping outof fixmap") Signed-off-by: Anup Patel <anup.patel@wdc.com> Reviewed-by: Atish Patra <atish.patra@wdc.com> Tested-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Atish Patra
RISC-V limits the physical memory size to -PAGE_OFFSET. Any memory beyond that size from the start of DRAM is unusable. Just remove any memblock pointing to those memory regions, without worrying about computing the maximum size. Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 14 October 2020, 3 commits
-
-
By Mike Rapoport
for_each_memblock() is used to iterate over memblock.memory in a few places that use data from memblock_region rather than the memory ranges. Introduce separate for_each_mem_region() and for_each_reserved_mem_region() to improve encapsulation of memblock internals from its users. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Baoquan He <bhe@redhat.com> Acked-by: Ingo Molnar <mingo@kernel.org> [x86] Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> [MIPS] Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format] Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-18-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
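A sketch of how a caller looks with the new iterator (illustrative; individual call sites vary):

    struct memblock_region *reg;

    /* walk memblock.memory where the region itself (nid, flags, ...) is needed */
    for_each_mem_region(reg) {
            int nid = memblock_get_region_node(reg);
            /* ... use nid ... */
    }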
-
By Mike Rapoport
There are several occurrences of the following pattern: for_each_memblock(memory, reg) { start = __pfn_to_phys(memblock_region_memory_base_pfn(reg)); end = __pfn_to_phys(memblock_region_memory_end_pfn(reg)); /* do something with start and end */ } Using the for_each_mem_range() iterator is more appropriate in such cases and allows simpler and cleaner code. [akpm@linux-foundation.org: fix arch/arm/mm/pmsa-v7.c build] [rppt@linux.ibm.com: mips: fix cavium-octeon build caused by memblock refactoring] Link: http://lkml.kernel.org/r/20200827124549.GD167163@linux.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-13-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
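The same loop written with the new iterator looks roughly like this:

    phys_addr_t start, end;
    u64 i;

    for_each_mem_range(i, &start, &end) {
            /* do something with start and end */
    }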
-
By Mike Rapoport
RISC-V does not (yet) support NUMA, and for UMA architectures node 0 is used implicitly during early memory initialization. There is no need to call memblock_set_node(); remove this call and the surrounding code. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-7-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 October 2020, 1 commit
-
-
By Atish Patra
Currently, the memory containing the DT is not reserved. Thus, that region of memory can be reallocated or reused for other purposes. This may result in a corrupted DT for the nommu virt board in QEMU. We may not face any issue on Kendryte, as the DT is embedded in the kernel image there. Fixes: 6bd33e1e ("riscv: add nommu support") Cc: stable@vger.kernel.org Signed-off-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 03 October 2020, 4 commits
-
-
By Atish Patra
This patch adds EFI runtime service support for RISC-V. Signed-off-by: Atish Patra <atish.patra@wdc.com> [ardb: Remove the page check] Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Atish Patra
Currently, page table setup is done during setup_vm_final(), where the fixmap can be used to create the temporary mappings. The physical frames are allocated from the memblock_alloc_* functions. However, this won't work if a page table mapping needs to be created for a different mm context (i.e. efi mm) at a later point in time. Use the generic kernel page allocation functions & macros for any mapping created after setup_vm_final(). Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Anup Patel <anup@brainfault.org> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Atish Patra
UEFI uses early IO or memory mappings for runtime services before the normal ioremap() is usable. Add the necessary fixmap bindings and pmd mappings for generic ioremap support to work. Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
By Anup Patel
Currently, RISC-V reserves 1MB of fixmap memory for the device tree. However, it maps only a single PMD (2MB) for the fixmap, which leaves less than 1MB of space for other kernel features that also require fixmap, such as early ioremap. The fixmap size could be increased by another 2MB, but that brings additional complexity and changes the virtual memory layout as well; if some additional feature requires fixmap again, it would have to be moved again. Technically, the DT doesn't need a fixmap, as the memory occupied by the DT is only used during boot. That's why we map the device tree in the early page table using two consecutive PGD mappings at lower addresses (< PAGE_OFFSET). This frees a lot of space in the fixmap and also raises the maximum supported device tree size to PGDIR_SIZE. Thus, the init memory section can be used for the same purpose as well. This simplifies the fixmap implementation. Signed-off-by: Anup Patel <anup.patel@wdc.com> Signed-off-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 20 September 2020, 1 commit
-
-
By Greentime Hu
This invalidates the local TLB after modifying the page tables during early init, as it's too early to handle spurious faults the way we otherwise do. Fixes: f2c17aab ("RISC-V: Implement compile-time fixed mappings") Reported-by: Syven Wang <syven.wang@sifive.com> Signed-off-by: Syven Wang <syven.wang@sifive.com> Signed-off-by: Greentime Hu <greentime.hu@sifive.com> Reviewed-by: Anup Patel <anup@brainfault.org> [Palmer: Cleaned up the commit text] Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
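The shape of the fix, roughly (a sketch, not the exact hunk):

    /* after rewriting the early page tables, drop any stale translations
     * cached by this hart (expands to an sfence.vma) */
    local_flush_tlb_all();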
-
- 08 August 2020, 2 commits
-
-
By Mike Rapoport
After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP we have two equivalent functions that call memory_present() for each region in memblock.memory: sparse_memory_present_with_active_regions() and memblocks_present(). Moreover, all architectures have a call to either of these functions preceding the call to sparse_init(), and in most cases they are called one after the other. Mark the regions from memblock.memory as present during sparse_init() by making sparse_init() call memblocks_present(), make the memblocks_present() and memory_present() functions static, and remove the redundant sparse_memory_present_with_active_regions() function. Also remove the no longer required HAVE_MEMORY_PRESENT configuration option. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200712083130.22919-1-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Anshuman Khandual
Patch series "arm64: Enable vmemmap mapping from device memory", v4. This series enables vmemmap backing memory allocation from device memory ranges on arm64. But before that, it enables vmemmap_populate_basepages() and vmemmap_alloc_block_buf() to accommodate struct vmem_altmap based allocation requests. This patch (of 3): vmemmap_populate_basepages() is used across platforms to allocate backing memory for the vmemmap mapping. This is used as a standard default choice or as a fallback when the intended huge pages allocation fails. This just creates the entire vmemmap mapping with base pages (PAGE_SIZE). On arm64 platforms, vmemmap_populate_basepages() is called instead of the platform-specific vmemmap_populate() when ARM64_SWAPPER_USES_SECTION_MAPS is not enabled, as is the case for the ARM64_16K_PAGES and ARM64_64K_PAGES configs. At present vmemmap_populate_basepages() does not support allocating from a driver-defined struct vmem_altmap while trying to create the vmemmap mapping for a device memory range. It prevents the ARM64_16K_PAGES and ARM64_64K_PAGES configs on arm64 from supporting device memory with vmem_altmap requests. This enables vmem_altmap support in vmemmap_populate_basepages(), unlocking device memory allocation for the vmemmap mapping on arm64 platforms with 16K or 64K base page configs. Each architecture should evaluate and decide on subscribing to device memory based base page allocation through vmemmap_populate_basepages(). Hence let's keep it disabled on all archs in order to preserve the existing semantics. A subsequent patch enables it on arm64. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Jia He <justin.he@arm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Acked-by: Will Deacon <will@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mike Rapoport <rppt@linux.ibm.com> Cc: Michal Hocko <mhocko@suse.com> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Hsin-Yi Wang <hsinyi@chromium.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Steve Capper <steve.capper@arm.com> Cc: Yu Zhao <yuzhao@google.com> Link: http://lkml.kernel.org/r/1594004178-8861-1-git-send-email-anshuman.khandual@arm.com Link: http://lkml.kernel.org/r/1594004178-8861-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 31 July 2020, 1 commit
-
-
By Zong Li
Add the static keyword to resource_init(); this function is only used in this object file. Signed-off-by: Zong Li <zong.li@sifive.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-