- 27 October 2021, 1 commit
-
-
Committed by Vitaly Wool
Currently there is an 8MB limit on the .text section of a RISC-V image in the XIP case. This breaks compilation of many automated builds and is generally inconvenient. This patch removes that limitation and optimizes the XIP image file size at the same time.
Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 04 September 2021, 1 commit
-
-
Committed by Mike Rapoport
There are a lot of uses of memblock_find_in_range() together with memblock_reserve() dating from the times when the memblock allocation APIs did not exist. memblock_find_in_range() is the very core of memblock allocations, so any future change to its internal behaviour would mandate updates of all the users outside memblock. Replace the calls to memblock_find_in_range() with equivalent calls to memblock_phys_alloc() and memblock_phys_alloc_range(), and make memblock_find_in_range() a private method of memblock. This simplifies the callers, ensures that (unlikely) errors in memblock_reserve() are handled, and improves the maintainability of memblock_find_in_range().
Link: https://lkml.kernel.org/r/20210816122622.30279-1-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Acked-by: Kirill A. Shutemov <kirill.shtuemov@linux.intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> [ACPI]
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Acked-by: Nick Kossifidis <mick@ics.forth.gr> [riscv]
Tested-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
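To illustrate the conversion this commit describes, here is a minimal sketch of the before/after pattern; the helper name, range bounds, and size are hypothetical placeholders, not code from the patch itself.

```c
#include <linux/memblock.h>
#include <linux/printk.h>
#include <linux/sizes.h>

/* Hypothetical example: reserve "size" bytes somewhere below 4 GiB. */
static phys_addr_t __init reserve_low_region(phys_addr_t size)
{
	phys_addr_t addr;

	/*
	 * Old pattern: find a free range, then reserve it by hand.
	 *
	 *   addr = memblock_find_in_range(0, SZ_4G, size, PAGE_SIZE);
	 *   if (addr)
	 *           memblock_reserve(addr, size);
	 *
	 * The return value of memblock_reserve() is easy to forget.
	 */

	/* New pattern: a single call that both finds and reserves the range. */
	addr = memblock_phys_alloc_range(size, PAGE_SIZE, 0, SZ_4G);
	if (!addr)
		pr_warn("failed to reserve %pa bytes below 4G\n", &size);

	return addr;
}
```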
-
- 25 August 2021, 1 commit
-
-
Committed by Geert Uytterhoeven
RISC-V uses platform-specific code to locate the ELF core header in memory. However, this does not conform to the standard "linux,elfcorehdr" DT bindings, as it relies on a reserved memory node with the "linux,elfcorehdr" compatible value instead of on a "linux,elfcorehdr" property under the "/chosen" node. The non-compliant code can simply be removed, as the standard behavior is already implemented by the platform-agnostic handling in the FDT core code.
Fixes: 56409750 ("RISC-V: Add crash kernel support")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/41c75d6ee3114ae6304f8afe0051895af91200ee.1628670468.git.geert+renesas@glider.be
-
- 14 August 2021, 2 commits
-
-
Committed by Kefeng Wang
This patch adds support for allocating gigantic hugepages from CMA by specifying the hugetlb_cma= kernel parameter. This is only supported on RV64.
Reviewed-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
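Wiring this up generally amounts to calling the generic hugetlb CMA reservation helper from the arch's boot-time memory setup with the order of a gigantic page. The hook placement and the 64-bit guard below follow the pattern other architectures use and are assumptions, not the literal hunk from this commit.

```c
#include <linux/hugetlb.h>

static void __init setup_bootmem(void)
{
	/* ... existing memblock setup ... */

	/*
	 * Carve out the CMA area requested via hugetlb_cma= so that
	 * PUD-sized (1 GiB on RV64/Sv39) gigantic pages can later be
	 * allocated from it. Assumed to be limited to 64-bit kernels.
	 */
	if (IS_ENABLED(CONFIG_64BIT))
		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
}
```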
-
Committed by Kenneth Lee
RISC-V uses a global variable pfn_base for page/pfn translation. But this is a common name and is used elsewhere; in those places, the page/pfn macros that refer to this name end up resolving to the local or input variable instead (as happens in vfio_pin_pages_remote), which breaks the translation. This patch renames pfn_base to riscv_pfn_base to fix this problem.
Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
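The hazard here is ordinary C macro expansion: a macro that textually references a global name silently binds to whatever identifier is in scope at the expansion site. A minimal stand-alone illustration (not kernel code):

```c
#include <stdio.h>

/* A global used for translation, like the old pfn_base. */
static unsigned long pfn_base = 0x80000;

/* Macro that textually refers to the global by name. */
#define virt_to_pfn(off) ((off) + pfn_base)

static unsigned long pin_pages(unsigned long off, unsigned long pfn_base)
{
	/*
	 * Inside this function the macro expands to "(off) + pfn_base",
	 * which now binds to the *parameter*, not the global, so the
	 * result is silently wrong. Renaming the global to a unique name
	 * (riscv_pfn_base) removes the collision.
	 */
	return virt_to_pfn(off);
}

int main(void)
{
	printf("%lx\n", pin_pages(0x10, 0));	/* prints 10, not 80010 */
	return 0;
}
```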
-
- 12 August 2021, 6 commits
-
-
Committed by Alexandre Ghiti
The current comment states that we check whether the 64-bit kernel mapping overlaps with the last 4K of the address space (reserved for error values) in create_kernel_page_table, which is not the case since the check is done in setup_vm. In any case, remove the reference to any particular function and simply note that for 64-bit kernels the check must be done as soon as the kernel mapping base address is known.
Fixes: db6b84a3 ("riscv: Make sure the kernel mapping does not overlap with IS_ERR_VALUE")
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Alexandre Ghiti
The code that handles the early FDT mapping is hard to read and does not create the same mapping size depending on the kernel:
- for 64-bit, 2 PMD entries are used, which amounts to a 4MB mapping
- for 32-bit, 2 PGDIR entries are used, which amounts to an 8MB mapping
So keep using 2 PMD entries for 64-bit and use only one PGD entry for 32-bit, which is enough to cover 4MB. Move that logic into a new function called create_fdt_early_page_table, following the same naming as create_kernel_page_table.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Alexandre Ghiti
__PAGETABLE_PMD_FOLDED indicates a 2-level page table, which is only used by 32-bit kernels, so there is no need to additionally check for CONFIG_64BIT inside #ifndef __PAGETABLE_PMD_FOLDED, and vice versa.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Alexandre Ghiti
This simplifies the code and makes it more readable.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Alexandre Ghiti
The kernel must always be mapped using PMD_SIZE, and this is already the case; this change just simplifies create_kernel_page_table.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Alexandre Ghiti
va_kernel_pa_offset was only used for 64-bit, since for 32-bit kernels the kernel mapping lies within the linear mapping and only the offset between PAGE_OFFSET and the kernel load address is needed. But this distinction complicates the code with #ifdefs and, in particular, with a separate definition of the address conversion macros. Simplify the code by defining this variable for both 32-bit and 64-bit.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 07 August 2021, 1 commit
-
-
Committed by Alexandre Ghiti
Using CONFIG_PHYS_RAM_BASE for all kernel types was a mistake: this value is implementation-specific, and relying on it breaks the genericity of the RISC-V kernel. Fix this by introducing a new variable, phys_ram_base, that holds this value at runtime, and use it in the kernel physical address conversion macro. Since this value is only needed for XIP kernels, evaluate it only if CONFIG_XIP_KERNEL is set, which in addition optimizes this macro for standard kernels at compile time.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Tested-by: Emil Renner Berthing <kernel@esmil.dk>
Reviewed-by: Jisheng Zhang <jszhang@kernel.org>
Fixes: 44c92257 ("RISC-V: enable XIP")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
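The shape of the resulting conversion is roughly the following; the macro and variable names are illustrative, not the exact ones in arch/riscv/include/asm/page.h.

```c
/* Illustrative sketch only; names differ from the real header. */

extern unsigned long phys_ram_base;	/* RAM base, read at runtime */
extern unsigned long kernel_virt_addr;	/* VA the kernel runs at     */

#ifdef CONFIG_XIP_KERNEL
/*
 * XIP: writable data has been copied from flash into RAM, so translating
 * a kernel virtual address needs the RAM base discovered at runtime.
 */
#define kernel_va_to_pa(va) \
	(((unsigned long)(va) - kernel_virt_addr) + phys_ram_base)
#else
/*
 * Standard kernels: the conversion stays a simple constant-offset
 * subtraction, and phys_ram_base is never evaluated here.
 */
extern unsigned long va_kernel_pa_offset;
#define kernel_va_to_pa(va) \
	((unsigned long)(va) - va_kernel_pa_offset)
#endif
```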
-
- 23 July 2021, 3 commits
-
-
Committed by Alexandre Ghiti
The check done in setup_bootmem currently only works for 32-bit kernels, since the kernel mapping has been moved outside of the linear mapping for 64-bit kernels. So make sure that for 64-bit kernels the kernel mapping does not overlap with the last 4K of the addressable memory.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Fixes: 2bfc6cd8 ("riscv: Move kernel mapping outside of linear mapping")
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Alexandre Ghiti
For 64-bit kernels, the end of the address space is occupied by the kernel mapping, and currently the functions that populate the kernel page tables (i.e. create_p*d_mapping) do not override existing mappings, so we must make sure the linear mapping does not map memory into the kernel mapping region by clipping the memory above the memory limit.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Fixes: c9811e37 ("riscv: Add mem kernel parameter support")
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Alexandre Ghiti
As described in Documentation/riscv/vm-layout.rst, the end of the virtual address space for 64-bit kernels is occupied by the modules/BPF/kernel mappings, which actually reduces the amount of memory we are able to map, and therefore use, in the linear mapping. So make sure this limit is correctly set.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Fixes: 2bfc6cd8 ("riscv: Move kernel mapping outside of linear mapping")
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
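The underlying idea of these two fixes is that memblock must never hand out memory the linear mapping cannot reach. A conceptual sketch, treating the size of the linear VA window as a given constant; this is not the literal hunk from either commit.

```c
#include <linux/memblock.h>

/* Assumed: number of bytes the linear mapping window can cover. */
extern unsigned long linear_mapping_size;

static void __init cap_ram_to_linear_mapping(void)
{
	phys_addr_t ram_base = memblock_start_of_DRAM();
	phys_addr_t ram_end  = memblock_end_of_DRAM();

	/*
	 * Anything above ram_base + linear_mapping_size has no linear
	 * virtual address, so drop it from memblock before the linear
	 * mapping is created.
	 */
	if (ram_end - ram_base > linear_mapping_size)
		memblock_cap_memory_range(ram_base, linear_mapping_size);
}
```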
-
- 22 July 2021, 1 commit
-
-
Committed by Bin Meng
Commit dd2d082b ("riscv: Cleanup setup_bootmem()") adjusted the calling sequence in setup_bootmem(), which unfortunately invalidates the fix that commit de043da0 ("RISC-V: Fix usage of memblock_enforce_memory_limit") made for 32-bit RISC-V. As a result, 32-bit RISC-V no longer boots when testing on QEMU 'virt' with '-m 2G', which is exactly the case the original commit de043da0 ("RISC-V: Fix usage of memblock_enforce_memory_limit") tried to fix.
Fixes: dd2d082b ("riscv: Cleanup setup_bootmem()")
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 06 July 2021, 1 commit
-
-
Committed by Alexandre Ghiti
We have a lot of variables that hold kernel mapping addresses, offsets between physical and virtual mappings, and some others used for XIP kernels: they are all defined at different places in mm/init.c, so group them into a single structure and, for some of them, give them more explicit and concise names.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
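A sketch of the kind of grouping described; the field names below are illustrative assumptions, not the exact layout of the structure introduced by the patch.

```c
/* One place for everything describing where the kernel is mapped. */
struct kernel_mapping {
	unsigned long virt_addr;		/* VA of the kernel mapping     */
	phys_addr_t   phys_addr;		/* PA the kernel was loaded at  */
	unsigned long size;			/* size of the kernel mapping   */
	unsigned long va_pa_offset;		/* linear mapping VA - PA       */
	unsigned long va_kernel_pa_offset;	/* kernel mapping VA - PA       */
#ifdef CONFIG_XIP_KERNEL
	phys_addr_t   xiprom;			/* flash (ROM) base             */
	unsigned long xiprom_sz;		/* size of the XIP image        */
#endif
};

extern struct kernel_mapping kernel_map;
```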
-
- 01 July 2021, 1 commit
-
-
Committed by Alexandre Ghiti
For 64-bit kernels, we map the whole kernel with write and execute permissions and afterwards remove writability from text and executability from data. For 32-bit kernels, the kernel mapping resides in the linear mapping, so we map the whole linear mapping as writable and executable and afterwards remove those properties for unused memory and for the kernel mapping as described above. Change this behavior to directly map the kernel with the correct permissions and avoid walking the whole mapping afterwards to fix them up. At the same time, this fixes an issue introduced by commit 2bfc6cd8 ("riscv: Move kernel mapping outside of linear mapping"), as reported at https://github.com/starfive-tech/linux/issues/17.
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 15 June 2021, 1 commit
-
-
Committed by Kefeng Wang
memblock_enforce_memory_limit() can change the memblock range, so move the dram_end assignment after that call in bootmem_init(); this makes the mem= command line parameter work.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
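The ordering issue is simply that the end of DRAM must be sampled after the limit has been applied. A minimal sketch of the intended sequence (the variable names are assumptions):

```c
#include <linux/memblock.h>

static phys_addr_t memory_limit = PHYS_ADDR_MAX;	/* set from mem= */

static void __init bootmem_init(void)
{
	phys_addr_t dram_end;

	/* Trim memblock first so mem= actually takes effect ... */
	memblock_enforce_memory_limit(memory_limit);

	/* ... and only then read the (possibly reduced) end of DRAM. */
	dram_end = memblock_end_of_DRAM();

	max_low_pfn = max_pfn = PFN_DOWN(dram_end);
}
```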
-
- 12 June 2021, 1 commit
-
-
Committed by Kefeng Wang
The SWIOTLB buffer is not needed unless the physical address space extends beyond what DMA can address, so only initialize the SWIOTLB when swiotlb_force is set or when not all of system memory is DMA-able. Also move the swiotlb_init() call into mem_init().
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
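A sketch of the conditional initialization this describes, following the pattern used by other architectures of that era; the DMA limit variable is an assumption, not necessarily the exact symbol used in the patch.

```c
#include <linux/swiotlb.h>
#include <linux/memblock.h>

/* Assumed: highest physical address reachable by 32-bit DMA. */
extern phys_addr_t dma32_phys_limit;

void __init mem_init(void)
{
#ifdef CONFIG_SWIOTLB
	/* Only pay for the bounce buffer if some RAM is not DMA-able. */
	if (swiotlb_force == SWIOTLB_FORCE ||
	    max_pfn > PFN_DOWN(dma32_phys_limit))
		swiotlb_init(1);
	else
		swiotlb_force = SWIOTLB_NO_FORCE;
#endif
	memblock_free_all();
}
```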
-
- 09 June 2021, 1 commit
-
-
Committed by Vitaly Wool
Commit 01062356 introduced a typo in the spelling of "__initdata", which led to build breakage for XIP. Fix that.
Fixes: 01062356 ("riscv: mm: init: Consolidate vars, functions")
Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 02 June 2021, 1 commit
-
-
Committed by Jisheng Zhang
When the kernel mapping was moved to the last 2GB of the address space, (__va(PFN_PHYS(max_low_pfn))) became much smaller than the .data section start address, so the last set_memory_nx() in protect_kernel_text_data() fails and the .data section is left mapped W+X. This results in the W+X mapping warning below at boot. Fix it by passing the correct .data section page count to set_memory_nx().
[ 0.396516] ------------[ cut here ]------------
[ 0.396889] riscv/mm: Found insecure W+X mapping at address (____ptrval____)/0xffffffff80c00000
[ 0.398347] WARNING: CPU: 0 PID: 1 at arch/riscv/mm/ptdump.c:258 note_page+0x244/0x24a
[ 0.398964] Modules linked in:
[ 0.399459] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.13.0-rc1+ #14
[ 0.400003] Hardware name: riscv-virtio,qemu (DT)
[ 0.400591] epc : note_page+0x244/0x24a
[ 0.401368] ra : note_page+0x244/0x24a
[ 0.401772] epc : ffffffff80007c86 ra : ffffffff80007c86 sp : ffffffe000e7bc30
[ 0.402304] gp : ffffffff80caae88 tp : ffffffe000e70000 t0 : ffffffff80cb80cf
[ 0.402800] t1 : ffffffff80cb80c0 t2 : 0000000000000000 s0 : ffffffe000e7bc80
[ 0.403310] s1 : ffffffe000e7bde8 a0 : 0000000000000053 a1 : ffffffff80c83ff0
[ 0.403805] a2 : 0000000000000010 a3 : 0000000000000000 a4 : 6c7e7a5137233100
[ 0.404298] a5 : 6c7e7a5137233100 a6 : 0000000000000030 a7 : ffffffffffffffff
[ 0.404849] s2 : ffffffff80e00000 s3 : 0000000040000000 s4 : 0000000000000000
[ 0.405393] s5 : 0000000000000000 s6 : 0000000000000003 s7 : ffffffe000e7bd48
[ 0.405935] s8 : ffffffff81000000 s9 : ffffffffc0000000 s10: ffffffe000e7bd48
[ 0.406476] s11: 0000000000001000 t3 : 0000000000000072 t4 : ffffffffffffffff
[ 0.407016] t5 : 0000000000000002 t6 : ffffffe000e7b978
[ 0.407435] status: 0000000000000120 badaddr: 0000000000000000 cause: 0000000000000003
[ 0.408052] Call Trace:
[ 0.408343] [<ffffffff80007c86>] note_page+0x244/0x24a
[ 0.408855] [<ffffffff8010c5a6>] ptdump_hole+0x14/0x1e
[ 0.409263] [<ffffffff800f65c6>] walk_pgd_range+0x2a0/0x376
[ 0.409690] [<ffffffff800f6828>] walk_page_range_novma+0x4e/0x6e
[ 0.410146] [<ffffffff8010c5f8>] ptdump_walk_pgd+0x48/0x78
[ 0.410570] [<ffffffff80007d66>] ptdump_check_wx+0xb4/0xf8
[ 0.410990] [<ffffffff80006738>] mark_rodata_ro+0x26/0x2e
[ 0.411407] [<ffffffff8031961e>] kernel_init+0x44/0x108
[ 0.411814] [<ffffffff80002312>] ret_from_exception+0x0/0xc
[ 0.412309] ---[ end trace 7ec3459f2547ea83 ]---
[ 0.413141] Checked W+X mappings: failed, 512 W+X pages found
Fixes: 2bfc6cd8 ("riscv: Move kernel mapping outside of linear mapping")
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
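The shape of the fix is to size the set_memory_nx() call from the .data section's own bounds rather than from the end of low memory; the section symbols and helper below are a hedged sketch, not the exact diff.

```c
#include <linux/set_memory.h>
#include <asm/sections.h>

static void protect_kernel_data(void)
{
	unsigned long data_start = (unsigned long)_sdata;
	unsigned long data_end   = (unsigned long)_end;

	/*
	 * Compute the page count from the .data section itself; using
	 * __va(PFN_PHYS(max_low_pfn)) as the end is wrong once the kernel
	 * mapping lives in the top 2GB, far above the linear mapping.
	 */
	set_memory_nx(data_start, (data_end - data_start) >> PAGE_SHIFT);
}
```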
-
- 30 May 2021, 1 commit
-
-
Committed by Jisheng Zhang
Consolidate the following items in init.c:
- Staticize global vars as much as possible
- Add an __initdata mark if the global var isn't needed after init
- Add an __init mark if the func isn't needed after init
- Add __ro_after_init if the global var is read-only after init
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
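These annotations are standard kernel attributes; a generic illustration of how each one is applied (the variable and function names here are made up):

```c
#include <linux/init.h>
#include <linux/cache.h>

/* Only referenced from this file: make it static. */
static bool feature_enabled;

/* Only consumed while booting: freed along with .init data. */
static phys_addr_t early_reserve_base __initdata;

/* Written once during boot, read-only afterwards. */
static unsigned long va_offset __ro_after_init;

/* Only called during boot: placed in (and freed with) .init.text. */
static void __init parse_early_options(void)
{
	feature_enabled = true;
	early_reserve_base = 0;
	va_offset = 0;
}
```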
-
- 26 May 2021, 3 commits
-
-
Committed by Kefeng Wang
_sdata/_edata are already declared in sections.h, so drop the redundant declarations. Also move the _xiprom/_exiprom declarations to the beginning of the file, cleaning up one CONFIG_XIP_KERNEL #ifdef.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Kefeng Wang
Make setup_bootmem() static.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Jisheng Zhang
The empty_zero_page sits in the .bss..page_aligned section, so it is already cleared to zero when the BSS is cleared; we don't need to clear it again.
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 07 May 2021, 2 commits
-
-
Committed by Geert Uytterhoeven
The various uses of protect_kernel_linear_mapping_text_rodata() are not consistent:
- its definition depends on "64BIT && !XIP_KERNEL",
- its forward declaration depends on MMU,
- its single caller depends on "STRICT_KERNEL_RWX && 64BIT && MMU && !XIP_KERNEL".
Fix this by settling on the dependencies of the caller, which can be simplified because STRICT_KERNEL_RWX depends on "MMU && !XIP_KERNEL". Provide a dummy definition, as the caller is protected by "IS_ENABLED(CONFIG_STRICT_KERNEL_RWX)" rather than "#ifdef CONFIG_STRICT_KERNEL_RWX".
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Geert Uytterhoeven
When the kernel mapping was moved outside of the linear mapping, the kernel memory reservation was increased to take the mapping granularity into account. However, this is done unconditionally, regardless of whether the kernel memory is mapped read-only or not. If this extension is not needed, up to 2 MiB may be lost, which has a big impact on e.g. Canaan K210 (64-bit nommu) platforms with only 8 MiB of RAM. Reclaim the lost memory by only extending the reserved region when needed, i.e. depending on a simplified version of the conditional logic around the call to protect_kernel_linear_mapping_text_rodata().
Fixes: 2bfc6cd8 ("riscv: Move kernel mapping outside of linear mapping")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 01 May 2021, 1 commit
-
-
Committed by Kefeng Wang
mem_init_print_info() is called from mem_init() on each architecture and is always passed a NULL argument, so make it take void and move the call into mm_init().
Link: https://lkml.kernel.org/r/20210317015210.33641-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com> [x86]
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> [powerpc]
Acked-by: David Hildenbrand <david@redhat.com>
Tested-by: Anatoly Pugachev <matorola@gmail.com> [sparc64]
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> [arm]
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Guo Ren <guoren@kernel.org>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra" <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 26 April 2021, 7 commits
-
-
Committed by Vitaly Wool
Introduce XIP (eXecute In Place) support for RISC-V platforms. It allows code to be executed directly from non-volatile storage that is directly addressable by the CPU, such as QSPI NOR flash, which can be found on many RISC-V platforms. This makes way for significant optimization of the RAM footprint. The XIP kernel is not compressed, since it has to run directly from flash, so it will occupy more space on the non-volatile storage. The physical flash address used to link the kernel object files, and at which the kernel is stored, has to be known at compile time and is represented by a Kconfig option. XIP on RISC-V will, for the time being, only work on MMU-enabled kernels.
Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.com>
[Alex: Rebase on top of "Move kernel mapping outside the linear mapping"]
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
[Palmer: disable XIP for allyesconfig]
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Nick Kossifidis
This patch allows Linux to act as a crash kernel for use with kdump. Userspace will let the crash kernel know about the memory region it can use through a linux,usable-memory property on the /memory node (overriding its reg property), and about the memory region where the ELF core header of the previous kernel is saved through a reserved-memory node with a compatible string of "linux,elfcorehdr". This approach is the least invasive and re-uses functionality already present. I tested this on riscv64 QEMU and it works as expected; you may test it by retrieving the dmesg of the previous kernel through /proc/vmcore, using the vmcore-dmesg utility from kexec-tools.
Signed-off-by: Nick Kossifidis <mick@ics.forth.gr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Nick Kossifidis
This patch adds support for kdump: the kernel reserves a region for the crash kernel and jumps there on panic. In order for userspace tools (kexec-tools) to prepare the crash kernel kexec image, we also need to expose some information in /proc/iomem for the memory regions used by the kernel and for the region reserved for the crash kernel. Note that in userspace the device tree is used to determine the system's memory layout, so the "System RAM" entries in /proc/iomem are ignored. I tested this on riscv64 QEMU and it works as expected; you may test it by triggering a crash through /proc/sysrq-trigger:
echo c > /proc/sysrq-trigger
Signed-off-by: Nick Kossifidis <mick@ics.forth.gr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
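Reserving the crash kernel region typically combines the generic crashkernel= parser with a memblock allocation and the shared crashk_res resource that /proc/iomem reports. The sketch below shows that generic pattern under assumed placement limits and alignment; it is not the exact RISC-V implementation.

```c
#include <linux/crash_core.h>
#include <linux/kexec.h>
#include <linux/memblock.h>
#include <linux/sizes.h>

static void __init reserve_crashkernel(void)
{
	unsigned long long crash_base = 0;
	unsigned long long crash_size = 0;
	int ret;

	/* Parse crashkernel=size[@offset] from the boot command line. */
	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
				&crash_size, &crash_base);
	if (ret || !crash_size)
		return;

	crash_size = PAGE_ALIGN(crash_size);

	/* Assumed placement limit: keep the region in 32-bit addressable RAM. */
	crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE, 0, SZ_4G);
	if (!crash_base) {
		pr_warn("crashkernel: could not reserve %lluKB\n",
			crash_size >> 10);
		return;
	}

	/* Published via /proc/iomem as "Crash kernel" for kexec-tools. */
	crashk_res.start = crash_base;
	crashk_res.end   = crash_base + crash_size - 1;
}
```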
-
Committed by zhouchuangao
BUG_ON() uses unlikely() in its if(), so the check can be better optimized at compile time.
Signed-off-by: zhouchuangao <zhouchuangao@vivo.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
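For reference, the generic definition (from include/asm-generic/bug.h) is essentially the following, and an open-coded check converts to it directly:

```c
/* The unlikely() lets the compiler move the failure path out of line. */
#ifndef BUG_ON
#define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
#endif

/* Generic illustration of the conversion:
 *
 *   if (mem_map == NULL)
 *           BUG();
 *
 * becomes the more idiomatic:
 *
 *   BUG_ON(mem_map == NULL);
 */
```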
-
Committed by Jisheng Zhang
All of these are never modified after init, so they can be __ro_after_init.
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Jisheng Zhang
These are not needed after booting, so mark them as __init to move them to the __init section.
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Alexandre Ghiti
This is a preparatory patch for relocatable kernel and sv48 support. The kernel used to be linked at the PAGE_OFFSET address, so the linear mapping could double as the kernel mapping. But the relocated kernel base address will be different from PAGE_OFFSET, and since two different virtual addresses in the linear mapping cannot point to the same physical address, the kernel mapping needs to lie outside the linear mapping so that we don't have to copy the kernel to the same physical offset. The kernel mapping is moved to the last 2GB of the address space, BPF is now always after the kernel, and modules use the 2GB memory range right before the kernel, so the BPF and modules regions do not overlap. A KASLR implementation will simply have to move the kernel within this last 2GB range and take care of leaving enough space for BPF. In addition, by moving the kernel to the end of the address space, both sv39 and sv48 kernels will be exactly the same without needing to be relocated at runtime.
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
[Palmer: Squash the STRICT_RWX fix, and a !MMU fix]
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
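The resulting top-of-address-space layout can be expressed with a few constants; the names below mirror the idea rather than quote the exact header, so treat them as illustrative.

```c
#include <linux/sizes.h>

/* Top of the virtual address space. */
#define ADDRESS_SPACE_END	(~0UL)

/* The kernel (and BPF) own the last 2 GiB of virtual address space. */
#define KERNEL_LINK_ADDR	(ADDRESS_SPACE_END - SZ_2G + 1)

/* Modules get the 2 GiB window immediately below the kernel mapping. */
#define MODULES_END		KERNEL_LINK_ADDR
#define MODULES_VADDR		(MODULES_END - SZ_2G)
```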
-
- 10 March 2021, 1 commit
-
-
Committed by Kefeng Wang
The riscv [rv32_]defconfig enables CONFIG_MEMTEST, but the memtest feature is not actually supported on RISC-V. Add an early_memtest() call to support memtest.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
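Hooking the feature up is a single call into the generic tester once the usable memory range is known; the placement in setup_bootmem() and the bound variables are assumptions.

```c
#include <linux/memblock.h>

static void __init setup_bootmem(void)
{
	phys_addr_t dram_start = memblock_start_of_DRAM();
	phys_addr_t dram_end   = memblock_end_of_DRAM();

	/* ... memblock reservations first, so the test skips reserved ranges ... */

	/*
	 * Generic pattern-write/read-back test over free memory, gated by
	 * CONFIG_MEMTEST and the memtest= command line parameter.
	 */
	early_memtest(dram_start, dram_end);
}
```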
-
- 27 February 2021, 1 commit
-
-
Committed by Kefeng Wang
After the following patches,
commit de043da0 ("RISC-V: Fix usage of memblock_enforce_memory_limit")
commit 1bd14a66 ("RISC-V: Remove any memblock representing unusable memory area")
commit b10d6bca ("arch, drivers: replace for_each_membock() with for_each_mem_range()")
some logic is useless, so kill the mem_start/start/end variables and the unneeded code.
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 23 February 2021, 1 commit
-
-
Committed by Alexandre Ghiti
At the early boot stage we have a whole PGDIR to map the kernel, so there is no need to restrict the early mapping size to 128MB. Removing this define also allows us to simplify some compile-time logic. This fixes large kernel mappings with a size greater than 128MB, as is the case for syzbot kernels, whose size was just ~130MB. Note that on rv64 we are, for now, limited to PGDIR size for the early mapping, as we can't use PGD mappings (see [1]). That should be enough given the relatively small size of syzbot kernels compared to PGDIR_SIZE, which is 1GB.
[1] https://lore.kernel.org/lkml/20200603153608.30056-1-alex@ghiti.fr/
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Tested-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 19 February 2021, 1 commit
-
-
Committed by Kefeng Wang
Convert to the generic reserve_initrd_mem() function.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
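Using the generic helper removes the arch-local copy of the initrd reservation logic; a sketch of the call site, with the surrounding function assumed:

```c
#include <linux/initrd.h>
#include <linux/memblock.h>

static void __init setup_bootmem(void)
{
	/* ... memblock setup ... */

	/*
	 * Generic helper (CONFIG_BLK_DEV_INITRD): validates the initrd
	 * range passed by the bootloader, reserves it in memblock and
	 * sets initrd_start/initrd_end, replacing the arch-specific
	 * open-coded version.
	 */
	reserve_initrd_mem();
}
```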
-