- 20 Nov 2020, 2 commits
-
-
By Nicolas Saenz Julienne

crashkernel might reserve memory located in ZONE_DMA. We plan to delay ZONE_DMA's initialization until after unflattening the devicetree and ACPI's boot table initialization, so move the crashkernel reservation later in the boot process, specifically into bootmem_init(), since request_standard_resources() depends on it.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Link: https://lore.kernel.org/r/20201119175400.9995-2-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Catalin Marinas

Currently, the kernel assumes that if RAM starts above the 32-bit (or zone_bits) limit, there is still a ZONE_DMA/DMA32 at the bottom of the RAM and that such constrained devices have a hardwired DMA offset. In practice, we haven't noticed any such hardware, so let's assume that we can expand ZONE_DMA32 to cover the available memory if there is no RAM below 4GB. Similarly, ZONE_DMA is expanded to the 4GB limit if there is no RAM addressable by zone_bits.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20201118185809.1078362-1-catalin.marinas@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
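A hedged sketch of the zone-limit expansion described in the commit above; the helper name follows the commit's wording, but this is an illustration, not the exact arm64 hunk:

  #include <linux/dma-mapping.h>
  #include <linux/limits.h>
  #include <linux/memblock.h>
  #include <linux/minmax.h>

  static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
  {
          phys_addr_t zone_mask = DMA_BIT_MASK(zone_bits);
          phys_addr_t phys_start = memblock_start_of_DRAM();

          if (phys_start > U32_MAX)
                  zone_mask = PHYS_ADDR_MAX;      /* no RAM below 4GB: cover all memory */
          else if (phys_start > zone_mask)
                  zone_mask = U32_MAX;            /* no RAM below zone_bits: extend to 4GB */

          return min(zone_mask, memblock_end_of_DRAM() - 1) + 1;
  }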
-
- 15 Oct 2020, 1 commit
-
-
By Ard Biesheuvel

On arm64, the global variable memstart_addr represents the physical address of PAGE_OFFSET, and so physical-to-virtual translations (and vice versa) used to come down to simple additions or subtractions involving the values of PAGE_OFFSET and memstart_addr.

When support for 52-bit virtual addressing was introduced, we had to deal with PAGE_OFFSET potentially being outside of the region that can be covered by the virtual range (as the 52-bit VA capable build needs to be able to run on systems that are only 48-bit VA capable), and for this reason another translation was introduced, and recorded in the global variable physvirt_offset.

However, if we go back to the original definition of memstart_addr, i.e., the physical address of PAGE_OFFSET, it turns out that there is no need for two separate translations: instead, we can simply subtract the size of the unaddressable VA space from memstart_addr to make the available physical memory appear in the 48-bit addressable VA region.

This simplifies things, but also fixes a bug on KASLR builds, which may update memstart_addr later on in arm64_memblock_init(), but fail to update vmemmap and physvirt_offset accordingly.

Fixes: 5383cc6e ("arm64: mm: Introduce vabits_actual")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Link: https://lore.kernel.org/r/20201008153602.9467-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
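A conceptual sketch of the single-offset translation the commit above arrives at; these helpers are illustrative (PAGE_OFFSET and memstart_addr are assumed to come from the arm64 headers), not the literal arm64 macros:

  #include <linux/types.h>

  /* memstart_addr (PHYS_OFFSET) is the physical address of PAGE_OFFSET */
  static inline phys_addr_t lm_virt_to_phys(unsigned long addr)
  {
          return ((phys_addr_t)addr & ~PAGE_OFFSET) + memstart_addr;
  }

  static inline unsigned long lm_phys_to_virt(phys_addr_t addr)
  {
          return (unsigned long)(addr - memstart_addr) | PAGE_OFFSET;
  }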
-
- 14 Oct 2020, 1 commit
-
-
By Mike Rapoport

There are several occurrences of the following pattern:

  for_each_memblock(memory, reg) {
          start_pfn = memblock_region_memory_base_pfn(reg);
          end_pfn = memblock_region_memory_end_pfn(reg);

          /* do something with start_pfn and end_pfn */
  }

Rather than iterate over all memblock.memory regions and each time query for their start and end PFNs, use the for_each_mem_pfn_range() iterator to get simpler and clearer code.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-12-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
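The same loop rewritten with the iterator named in the commit above; a brief sketch rather than a quote of the actual diff:

  #include <linux/memblock.h>
  #include <linux/numa.h>

  static void __init walk_memory_pfns(void)
  {
          unsigned long start_pfn, end_pfn;
          int i;

          for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
                  /* do something with start_pfn and end_pfn */
          }
  }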
-
- 06 Oct 2020, 1 commit
-
-
By Christoph Hellwig

Merge dma-contiguous.h into dma-map-ops.h, after moving the comment describing the contiguous allocator into kernel/dma/contiguous.c.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 01 Sep 2020, 1 commit
-
-
By Barry Song

Right now, the SMMU uses dma_alloc_coherent() to get memory for its queues and tables. Typically, on an ARM64 server, there is a default CMA area located at node 0, which could be far away from node 2, node 3, etc. With this patch, the SMMU gets memory from the local NUMA node for its command queues and page tables, which shrinks dma_unmap latency considerably. Meanwhile, when iommu.passthrough is on, device drivers that call dma_alloc_coherent() will also get local memory and avoid the round trips between NUMA nodes.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 08 Aug 2020, 1 commit
-
-
By Mike Rapoport

After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP we have two equivalent functions that call memory_present() for each region in memblock.memory: sparse_memory_present_with_active_regions() and memblocks_present(). Moreover, all architectures have a call to either of these functions preceding the call to sparse_init(), and in most cases they are called one after the other.

Mark the regions from memblock.memory as present during sparse_init() by making sparse_init() call memblocks_present(), make the memblocks_present() and memory_present() functions static, and remove the now-redundant sparse_memory_present_with_active_regions() function. Also remove the no longer required HAVE_MEMORY_PRESENT configuration option.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20200712083130.22919-1-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 15 Jul 2020, 1 commit
-
-
By Anshuman Khandual

Currently the 'hugetlb_cma=' command line argument does not create a CMA area on ARM64_16K_PAGES and ARM64_64K_PAGES based platforms. Instead, it just ends up with the following warning message, because hugetlb_cma_reserve() never gets called for these huge page sizes:

  [   64.255669] hugetlb_cma: the option isn't supported by current arch

This enables CMA area reservation on ARM64_16K_PAGES and ARM64_64K_PAGES configs by defining a unified arm64_hugetlb_cma_reserve() that is wrapped in CONFIG_CMA. The call site for arm64_hugetlb_cma_reserve() is also protected, as <asm/hugetlb.h> is conditionally included and hence cannot contain a stub for the inverse config, i.e. !(CONFIG_HUGETLB_PAGE && CONFIG_CMA).

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Barry Song <song.bao.hua@hisilicon.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/r/1593578521-24672-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
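A simplified, hedged sketch of the CONFIG_CMA-wrapped helper described in the commit above; the order shown assumes a 4K page size, while the 16K/64K configs derive it from their contiguous-PMD size instead:

  #include <linux/hugetlb.h>
  #include <linux/init.h>
  #include <linux/mm.h>

  #ifdef CONFIG_CMA
  void __init arm64_hugetlb_cma_reserve(void)
  {
          /* order of a gigantic huge page, e.g. PUD-sized with 4K pages */
          int order = PUD_SHIFT - PAGE_SHIFT;

          hugetlb_cma_reserve(order);
  }
  #endif /* CONFIG_CMA */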
-
- 02 Jul 2020, 1 commit
-
-
By Anshuman Khandual

Currently there are three different registered panic notifier blocks. This unifies all of them into a single one, i.e. arm64_panic_block, hence reducing code duplication and the required calling sequence during panic. The existing dump sequence is preserved. While here, just use device_initcall() directly instead of __initcall(), which is a legacy alias for the former. This replacement is a pure cleanup with no functional implications.

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/r/1593405511-7625-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
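A minimal sketch of a single unified panic notifier registered via device_initcall(); the callback body and names are illustrative, not the actual arm64 dump code:

  #include <linux/init.h>
  #include <linux/kernel.h>
  #include <linux/notifier.h>

  static int arm64_panic_notify(struct notifier_block *nb,
                                unsigned long action, void *data)
  {
          /* dump the desired state here, preserving the existing sequence */
          return NOTIFY_DONE;
  }

  static struct notifier_block arm64_panic_block = {
          .notifier_call = arm64_panic_notify,
  };

  static int __init register_arm64_panic_block(void)
  {
          atomic_notifier_chain_register(&panic_notifier_list,
                                         &arm64_panic_block);
          return 0;
  }
  device_initcall(register_arm64_panic_block);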
-
- 18 Jun 2020, 1 commit
-
-
By Barry Song

hugetlb_cma_reserve() is called in the wrong place: numa_init() has not been done yet, so all the reserved memory will be located at node 0.

Fixes: cf11e85f ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Roman Gushchin <guro@fb.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200617215828.25296-1-song.bao.hua@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 04 Jun 2020, 2 commits
-
-
By Mike Rapoport

The free_area_init() function only requires the definition of the maximal PFN for each of the supported zones rather than the calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64]
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200412194859.12663-9-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
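A hedged sketch of the arm64-flavoured zone setup this enables; the DMA limit variables are assumed to have been computed earlier, and the function name is illustrative:

  #include <linux/memblock.h>
  #include <linux/mm.h>
  #include <linux/pfn.h>

  static void __init zone_sizes_init_example(unsigned long max_pfn_limit)
  {
          unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

  #ifdef CONFIG_ZONE_DMA
          max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
  #endif
  #ifdef CONFIG_ZONE_DMA32
          max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit);
  #endif
          max_zone_pfns[ZONE_NORMAL] = max_pfn_limit;

          free_area_init(max_zone_pfns);
  }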
-
By Mike Rapoport

free_area_init() has effectively become a wrapper for free_area_init_nodes() and there is no point in keeping it. Still, the free_area_init() name is shorter and more general, as it does not imply the necessity to initialize multiple nodes. Rename free_area_init_nodes() to free_area_init(), update the callers and drop the old version of free_area_init().

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64]
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200412194859.12663-6-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 01 May 2020, 1 commit
-
-
By Guixiong Wei
Replace the open-coded '__nr_to_section(pfn_to_section_nr(pfn))' in pfn_valid() with a more concise call to '__pfn_to_section(pfn)'. No functional change.

Signed-off-by: Guixiong Wei <guixiongwei@gmail.com>
Link: https://lore.kernel.org/r/20200430161858.11379-1-guixiongwei@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 11 Apr 2020, 1 commit
-
-
By Roman Gushchin

Commit 944d9fec ("hugetlb: add support for gigantic page allocation at runtime") has added the run-time allocation of gigantic pages. However it actually works only at the early stages of system loading, when the majority of memory is free. After some time the memory gets fragmented by non-movable pages, so the chances of finding a contiguous 1GB block get close to zero. Even dropping caches manually doesn't help a lot.

At large scale, rebooting servers in order to allocate gigantic hugepages is quite expensive and complex. At the same time, keeping some constant percentage of memory in reserved hugepages even if the workload isn't using it is a big waste: not all workloads can benefit from using 1GB pages.

The following solution can solve the problem:

1) At boot time a dedicated cma area* is reserved. The size is passed as a kernel argument.
2) Run-time allocations of gigantic hugepages are performed using the cma allocator and the dedicated cma area.

In this case gigantic hugepages can be allocated successfully with a high probability, and the memory isn't completely wasted if nobody is using 1GB hugepages: it can be used for pagecache, anon memory, THPs, etc.

* On a multi-node machine a per-node cma area is allocated on each node. Subsequent gigantic hugetlb allocations use the first available NUMA node if the mask isn't specified by a user.

Usage:

1) configure the kernel to allocate a cma area for hugetlb allocations: pass hugetlb_cma=10G as a kernel argument
2) allocate hugetlb pages as usual, e.g.
   echo 10 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

If the option isn't enabled or the allocation of the cma area failed, the current behavior of the system is preserved.

x86 and arm64 are covered by this patch; other architectures can be trivially added later.

The patch contains clean-ups and fixes proposed and implemented by Aslan Bakirov and Randy Dunlap. It also contains ideas and suggestions proposed by Rik van Riel, Michal Hocko and Mike Kravetz. Thanks!

Signed-off-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Andreas Schaufler <andreas.schaufler@gmx.de>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@kernel.org>
Cc: Aslan Bakirov <aslan@fb.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Link: http://lkml.kernel.org/r/20200407163840.92263-3-guro@fb.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 04 Dec 2019, 1 commit
-
-
By Will Deacon
John reports that the recently merged commit 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32") breaks the boot on his DB845C board:

  | Booting Linux on physical CPU 0x0000000000 [0x517f803c]
  | Linux version 5.4.0-mainline-10675-g957a03b9e38f
  | Machine model: Thundercomm Dragonboard 845c
  | [...]
  | Built 1 zonelists, mobility grouping on. Total pages: -188245
  | Kernel command line: earlycon
  | firmware_class.path=/vendor/firmware/ androidboot.hardware=db845c
  | init=/init androidboot.boot_devices=soc/1d84000.ufshc
  | printk.devkmsg=on buildvariant=userdebug root=/dev/sda2
  | androidboot.bootdevice=1d84000.ufshc androidboot.serialno=c4e1189c
  | androidboot.baseband=sda
  | msm_drm.dsi_display0=dsi_lt9611_1080_video_display:
  | androidboot.slot_suffix=_a skip_initramfs rootwait ro init=/init
  |
  | <hangs indefinitely here>

This is because, when CONFIG_NUMA=n, zone_sizes_init() fails to handle memblocks that fall entirely within the ZONE_DMA region and erroneously ends up trying to add a negatively-sized region into the following ZONE_DMA32, which is later interpreted as a large unsigned region by the core MM code.

Rework the non-NUMA implementation of zone_sizes_init() so that the start address of the memblock being processed is adjusted according to the end of the previous zone, which is then range-checked before updating the hole information of subsequent zones.

Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Link: https://lore.kernel.org/lkml/CALAqxLVVcsmFrDKLRGRq7GewcW405yTOxG=KR3csVzQ6bXutkA@mail.gmail.com
Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reported-by: John Stultz <john.stultz@linaro.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 07 Nov 2019, 1 commit
-
-
By Nicolas Saenz Julienne

With the introduction of ZONE_DMA in arm64 we moved the default CMA and crashkernel reservation into that area. This caused a regression on big machines that need big CMA and crashkernel reservations; note that ZONE_DMA is only 1GB in size. Restore the previous behavior, as the wide majority of devices are OK with reserving these in ZONE_DMA32. The ones that need them in ZONE_DMA will configure it explicitly.

Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 01 Nov 2019, 1 commit
-
-
By Nicolas Saenz Julienne
Some architectures, notably ARM, are interested in tweaking this depending on their runtime DMA addressing limitations.

Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 29 Oct 2019, 1 commit
-
-
By Catalin Marinas

This variable is only used in the arch/arm64/mm/init.c file for ZONE_DMA32 initialisation, so there is no need to expose it.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 16 Oct 2019, 3 commits
-
-
By Nathan Chancellor
When building arm64 allnoconfig, CONFIG_ZONE_DMA and CONFIG_ZONE_DMA32 get disabled so there is a warning about max_dma being unused.

  ../arch/arm64/mm/init.c:215:16: warning: unused variable 'max_dma' [-Wunused-variable]
          unsigned long max_dma = min;
                        ^
  1 warning generated.

Add __maybe_unused to make this clear to the compiler.

Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
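A small illustration of the __maybe_unused pattern from the commit above (names and values here are assumed, not the exact kernel hunk):

  #include <linux/compiler.h>
  #include <linux/printk.h>

  static void __init zone_limits_example(unsigned long min, unsigned long max)
  {
          /* only consumed when CONFIG_ZONE_DMA32 is enabled */
          unsigned long max_dma __maybe_unused = min;

  #ifdef CONFIG_ZONE_DMA32
          max_dma = max;  /* stand-in for the real DMA32 limit */
          pr_info("DMA32 limit pfn: %lu\n", max_dma);
  #endif
  }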
-
By Anshuman Khandual

The platform implementation of free_initmem() should poison the memory while freeing it up. Hence pass POISON_FREE_INITMEM when calling into free_reserved_area(). The same is already done in the generic fallback for free_initmem() and by some other platforms that override it.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
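A sketch of the pattern the commit above describes, close to but not quoted from the arm64 hunk (lm_alias() and the __init_begin/__init_end symbols come from the kernel and arm64 headers):

  #include <linux/mm.h>
  #include <linux/poison.h>
  #include <asm/sections.h>

  void __init free_initmem_example(void)
  {
          free_reserved_area(lm_alias(__init_begin), lm_alias(__init_end),
                             POISON_FREE_INITMEM, "unused kernel");
  }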
-
By Mike Rapoport
arm64 calls memblock_free() for the initrd area in its implementation of free_initrd_mem(), but this call has no actual effect that late in the boot process. By the time initrd is freed, all the reserved memory is managed by the page allocator and the memblock.reserved is unused, so the only purpose of the memblock_free() call is to keep track of initrd memory for debugging and accounting.

Without the memblock_free() call the only difference between arm64 and the generic versions of free_initrd_mem() is the memory poisoning.

Move memblock_free() call to the generic code, enable it there for the architectures that define ARCH_KEEP_MEMBLOCK and use the generic implementation of free_initrd_mem() on arm64.

Tested-by: Anshuman Khandual <anshuman.khandual@arm.com> #arm64
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 14 Oct 2019, 3 commits
-
-
By Nicolas Saenz Julienne

So far all arm64 devices have supported 32-bit DMA masks for their peripherals. This is not true anymore for the Raspberry Pi 4, as most of its peripherals can only address the first GB of memory out of a total of up to 4 GB. This goes against ZONE_DMA32's intent, as ZONE_DMA32 is expected to be addressable with a 32-bit mask. So it was decided to re-introduce ZONE_DMA in arm64.

ZONE_DMA will contain the lower 1 GB of memory, which is currently the memory area addressable by any peripheral on an arm64 device. ZONE_DMA32 will contain the rest of the 32-bit addressable memory.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Nicolas Saenz Julienne
Let the name indicate that they are used to calculate ZONE_DMA32's size as opposed to ZONE_DMA.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Nicolas Saenz Julienne

By the time we call zone_sizes_init(), arm64_dma_phys_limit already contains the result of max_zone_dma_phys(). We use the variable instead of calling the function directly to save some precious CPU time.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 Aug 2019, 4 commits
-
-
By Steve Capper

Most of the machinery is now in place to enable 52-bit kernel VAs that are detectable at boot time. This patch adds a Kconfig option for 52-bit user and kernel addresses and plumbs in the requisite CONFIG_ macros, as well as setting TCR.T1SZ, physvirt_offset and vmemmap at early boot. To simplify things, this patch also removes the 52-bit user/48-bit kernel kconfig option.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
By Steve Capper
vmemmap is a preprocessor definition that depends on a variable, memstart_addr. In a later patch we will need to expand the size of the VMEMMAP region and optionally modify vmemmap depending upon whether or not hardware support is available for 52-bit virtual addresses.

This patch changes vmemmap to be a variable. As the old definition depended on a variable load, this should not affect performance noticeably.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
By Steve Capper
In order to support 52-bit kernel addresses detectable at boot time, one needs to know the actual VA_BITS detected. A new variable vabits_actual is introduced in this commit and employed for the KVM hypervisor layout, KASAN, fault handling and phys-to/from-virt translation where there would normally be compile time constants.

In order to maintain performance in phys_to_virt, another variable physvirt_offset is introduced.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
By Steve Capper

In order to allow for a KASAN shadow that changes size at boot time, one must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the start address. Also, it is highly desirable to maintain the same function addresses in the kernel .text between VA sizes. Both of these requirements necessitate us to flip the kernel address space halves such that the direct linear map occupies the lower addresses.

This patch puts the direct linear map in the lower addresses of the kernel VA range and everything else in the higher ranges.

We need to adjust:
 *) KASAN shadow region placement logic,
 *) KASAN_SHADOW_OFFSET computation logic,
 *) virt_to_phys, phys_to_virt checks,
 *) page table dumper.

These are all small changes that need to take place atomically, so they are bundled into this commit.

As part of the re-arrangement, a guard region of 2MB (to preserve alignment for the fixed map) is added after the vmemmap. Otherwise the vmemmap could intersect with IS_ERR pointers.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 05 Aug 2019, 1 commit
-
-
By Junhua Huang

We should free the initrd reserved memblock in an aligned manner, because the initrd reserves the memblock in an aligned manner in arm64_memblock_init(). Otherwise there are some fragments left in the memblock_reserved regions after free_initrd_mem(), e.g.:

  /sys/kernel/debug/memblock # cat reserved
     0: 0x0000000080080000..0x00000000817fafff
     1: 0x0000000083400000..0x0000000083ffffff
     2: 0x0000000090000000..0x000000009000407f
     3: 0x00000000b0000000..0x00000000b000003f
     4: 0x00000000b26184ea..0x00000000b2618fff

The fragments like the ranges from b0000000 to b000003f and from b26184ea to b2618fff should be freed.

We can do free_reserved_area() after memblock_free(): free_reserved_area() calls __free_pages(), and once that is done the memory could be allocated somewhere else, yet memblock and iomem would still say it is reserved memory.

Fixes: 05c58752 ("arm64: To remove initrd reserved area entry from memblock")
Signed-off-by: Junhua Huang <huang.junhua@zte.com.cn>
Signed-off-by: Will Deacon <will@kernel.org>
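An illustrative version of the aligned free described above, not the exact hunk; __virt_to_phys() is the arm64 linear-map translation, and memblock_free() takes a physical base and size as it did in this era of the kernel:

  #include <linux/memblock.h>
  #include <linux/mm.h>

  static void __init free_initrd_mem_example(unsigned long start, unsigned long end)
  {
          unsigned long aligned_start = __virt_to_phys(start) & PAGE_MASK;
          unsigned long aligned_end = PAGE_ALIGN(__virt_to_phys(end));

          /* drop the whole page-aligned reservation, then release the pages */
          memblock_free(aligned_start, aligned_end - aligned_start);
          free_reserved_area((void *)start, (void *)end, 0, "initrd");
  }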
-
- 19 Jun 2019, 1 commit
-
-
By Thomas Gleixner
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation this program is distributed
  in the hope that it will be useful but without any warranty without
  even the implied warranty of merchantability or fitness for a
  particular purpose see the gnu general public license for more
  details you should have received a copy of the gnu general public
  license along with this program if not see http www gnu org licenses

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 04 Jun 2019, 1 commit
-
-
By Miles Chen

This change makes CONFIG_ZONE_DMA32 default to y and allows users to overwrite it only when CONFIG_EXPERT=y.

For the SoCs that do not need CONFIG_ZONE_DMA32, this is the first step towards managing all available memory with a single zone (the normal zone) to reduce the overhead of multiple zones.

The change also fixes a build error when CONFIG_NUMA=y and CONFIG_ZONE_DMA32=n:

  arch/arm64/mm/init.c:195:17: error: use of undeclared identifier 'ZONE_DMA32'
          max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());

Change since v1:
 1. only expose CONFIG_ZONE_DMA32 when CONFIG_EXPERT=y
 2. remove redundant IS_ENABLED(CONFIG_ZONE_DMA32)

Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 15 May 2019, 2 commits
-
-
By Masahiro Yamada

Since commit dccd2304 ("ARM: 7430/1: sizes.h: move from asm-generic to <linux/sizes.h>"), <asm/sizes.h> and <asm-generic/sizes.h> have just been wrappers of <linux/sizes.h>. This commit replaces all inclusions of <asm/sizes.h> and <asm-generic/sizes.h> with <linux/sizes.h> to prepare for the removal of the wrappers.

Link: http://lkml.kernel.org/r/1553267665-27228-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Christoph Hellwig

No need to handle disabling the freeing of initrd memory in arch code when we already have a core hook (and a different name for the option) for it.

Link: http://lkml.kernel.org/r/20190213174621.29297-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Cc: Steven Price <steven.price@arm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 23 Apr 2019, 1 commit
-
-
By Bjorn Andersson

In the event that the start address of the initrd is not aligned, but it has an aligned size, base + size will not cover the entire initrd image and there is a chance that the kernel will corrupt the tail of the image.

By aligning the end of the initrd to a page boundary and then subtracting the adjusted start address, the memblock reservation will cover all pages that contain the initrd.

Fixes: c756c592 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
Cc: stable@vger.kernel.org
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
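An illustrative reservation that covers every page containing the initrd, mirroring the idea in the commit above rather than its exact diff:

  #include <linux/kernel.h>
  #include <linux/memblock.h>
  #include <linux/mm.h>

  static void __init reserve_initrd_example(phys_addr_t phys_initrd_start,
                                            unsigned long phys_initrd_size)
  {
          u64 base = round_down(phys_initrd_start, PAGE_SIZE);
          u64 size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;

          memblock_reserve(base, size);
  }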
-
- 04 Apr 2019, 1 commit
-
-
By Will Deacon
If the initrd payload isn't completely accessible via the linear map, then we print a warning during boot and nobble the virtual address of the payload so that we ignore it later on.

Unfortunately, since commit c756c592 ("arm64: Utilize phys_initrd_start/phys_initrd_size"), the virtual address isn't initialised until later anyway, so we need to nobble the size of the payload to ensure that we don't try to use it later on.

Fixes: c756c592 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
Reported-by: Pierre Kuo <vichy.kuo@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 03 Apr 2019, 1 commit
-
-
By Miles Chen

When debugging with CONFIG_PAGE_OWNER, I noticed that min_low_pfn on arm64 is always zero and the page owner scanning has to start from zero. We have to loop for a while before we see the first valid pfn (see read_page_owner()). Set up min_low_pfn to save these loops.

Before setting min_low_pfn:

  [   21.265602] min_low_pfn=0, *ppos=0
  Page allocated via order 0, mask 0x100cca(GFP_HIGHUSER_MOVABLE)
  PFN 262144 type Movable Block 512 type Movable Flags 0x8001e (referenced|uptodate|dirty|lru|swapbacked)
   prep_new_page+0x13c/0x140
   get_page_from_freelist+0x254/0x1068
   __alloc_pages_nodemask+0xd4/0xcb8

After setting min_low_pfn:

  [   11.025787] min_low_pfn=262144, *ppos=0
  Page allocated via order 0, mask 0x100cca(GFP_HIGHUSER_MOVABLE)
  PFN 262144 type Movable Block 512 type Movable Flags 0x8001e (referenced|uptodate|dirty|lru|swapbacked)
   prep_new_page+0x13c/0x140
   get_page_from_freelist+0x254/0x1068
   __alloc_pages_nodemask+0xd4/0xcb8
   shmem_alloc_page+0x7c/0xa0
   shmem_alloc_and_acct_page+0x124/0x1e8
   shmem_getpage_gfp.isra.7+0x118/0x878
   shmem_write_begin+0x38/0x68

Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 Apr 2019, 1 commit
-
-
By Muchun Song
Although we don't actually make use of the 'max_mapnr' global variable, we do set it to a junk value for !CONFIG_FLATMEM configurations that leave mem_map uninitialised.

To avoid somebody tripping over this in future, set 'max_mapnr' using max_pfn, which is calculated directly from the memblock information.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Muchun Song <smuchun@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
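One plausible way to express the change in mem_init(), assuming the set_max_mapnr() helper and the arm64 PHYS_PFN_OFFSET; the exact in-tree expression may differ:

  #include <linux/memblock.h>     /* max_pfn */
  #include <linux/mm.h>           /* set_max_mapnr() */

  static void __init set_max_mapnr_example(void)
  {
          /* derive max_mapnr from memblock-based max_pfn, not from mem_map */
          set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
  }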
-
- 06 Mar 2019, 1 commit
-
-
By David Hildenbrand

The crashkernel is reserved via memblock_reserve(). memblock_free_all() will call free_low_memory_core_early(), which will go over all reserved memblocks, marking the pages as PG_reserved. So manually marking pages as PG_reserved is not necessary; they are already in the desired state (otherwise they would have been handed over to the buddy as free pages and bad things would happen).

Link: http://lkml.kernel.org/r/20190114125903.24845-8-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Matthias Brugger <mbrugger@suse.com>
Reviewed-by: Bhupesh Sharma <bhsharma@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Stefan Agner <stefan@agner.ch>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Greg Hackmann <ghackmann@android.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: CHANDAN VN <chandan.vn@samsung.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 22 Jan 2019, 1 commit
-
-
By Logan Gunthorpe

Clean up the arm64_memory_present() function, seeing as it's very similar to those of other arches. memblocks_present() is a direct replacement for arm64_memory_present().

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 04 Jan 2019, 1 commit
-
-
By Yueyi Li
When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), the top 4K of kernel virtual address space may be mapped to physical addresses despite being reserved for ERR_PTR values.

Fix the randomization of the linear region so that we avoid mapping the last page of the virtual address space.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: liyueyi <liyueyi@live.com>
[will: rewrote commit message; merged in suggestion from Ard]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-