- 03 June 2021, 1 commit

By Catalin Marinas

mainline inclusion
from mainline-5.11-rc4
commit d78050ee
category: bugfix
bugzilla: 50423
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d78050ee35440d7879ed94011c52994b8932e96e

-------------------------------------------------

With the introduction of a dynamic ZONE_DMA range based on DT or IORT information, there's no need for CMA allocations from the wider ZONE_DMA32 since on most platforms ZONE_DMA will cover the 32-bit addressable range.

Remove the arm64_dma32_phys_limit and set arm64_dma_phys_limit to cover the smallest DMA range required on the platform. CMA allocation and crashkernel reservation now go in the dynamically sized ZONE_DMA, allowing correct functionality on RPi4.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chen Zhou <chenzhou10@huawei.com>
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> # On RPi4B
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Conflicts:
	arch/arm64/mm/init.c
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

- 09 April 2021, 5 commits

By Anshuman Khandual

stable inclusion
from stable-5.10.24
commit 475a4307c14c8886ac10495d79bd19e88ee1aa83
bugzilla: 51348

--------------------------------

[ Upstream commit eeb0753b ]

pfn_valid() validates a pfn but basically it checks for a valid struct page backing for that pfn. It should always return positive for memory ranges backed with struct page mapping. But currently pfn_valid() fails for all ZONE_DEVICE based memory types even though they have struct page mapping.

pfn_valid() asserts that there is a memblock entry for a given pfn without the MEMBLOCK_NOMAP flag being set. The problem with ZONE_DEVICE based memory is that they do not have memblock entries. Hence memblock_is_map_memory() will invariably fail via memblock_search() for a ZONE_DEVICE based address. This eventually fails pfn_valid(), which is wrong. memblock_is_map_memory() needs to be skipped for such memory ranges. As ZONE_DEVICE memory gets hotplugged into the system via memremap_pages() called from a driver, their respective memory sections will not have SECTION_IS_EARLY set.

Normal hotplug memory will never have MEMBLOCK_NOMAP set in their memblock regions, because the flag MEMBLOCK_NOMAP was specifically designed and set for firmware reserved memory regions. memblock_is_map_memory() can just be skipped, as it's always going to be positive, and that will be an optimization for the normal hotplug memory. Like ZONE_DEVICE based memory, all normal hotplugged memory too will not have SECTION_IS_EARLY set for their sections.

Skipping memblock_is_map_memory() for all non-early memory sections fixes the pfn_valid() problem for ZONE_DEVICE based memory and also improves its performance for normal hotplug memory as well.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Acked-by: David Hildenbrand <david@redhat.com>
Fixes: 73b20c84 ("arm64: mm: implement pte_devmap support")
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1614921898-4099-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
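The resulting check, as described above, looks roughly like the following. This is a simplified sketch based on the upstream arm64 pfn_valid(); exact guards in this backport may differ.

```c
int pfn_valid(unsigned long pfn)
{
	phys_addr_t addr = pfn << PAGE_SHIFT;
	struct mem_section *ms;

	/* Reject pfns whose physical address would overflow. */
	if ((addr >> PAGE_SHIFT) != pfn)
		return 0;

	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
		return 0;

	ms = __pfn_to_section(pfn);
	if (!valid_section(ms))
		return 0;

	/*
	 * ZONE_DEVICE and hotplugged memory have no memblock entries, so
	 * only early (boot-time) sections are checked against memblock;
	 * everything else relies on the section information alone.
	 */
	if (!early_section(ms))
		return pfn_section_valid(ms, pfn);

	return memblock_is_map_memory(addr);
}
```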

By Ard Biesheuvel

mainline inclusion
from mainline-5.11-rc1
commit 2b865293
category: bugfix
bugzilla: 50423
CVE: NA

---------------------------------------------

We recently introduced a 1 GB sized ZONE_DMA to cater for platforms incorporating masters that can address less than 32 bits of DMA, in particular the Raspberry Pi 4, which has 4 or 8 GB of DRAM, but has peripherals that can only address up to 1 GB (and its PCIe host bridge can only access the bottom 3 GB).

Instructing the DMA layer about these limitations is straight-forward, even though we had to fix some issues regarding memory limits set in the IORT for named components, and regarding the handling of ACPI _DMA methods. However, the DMA layer also needs to be able to allocate memory that is guaranteed to meet those DMA constraints, for bounce buffering as well as allocating the backing for consistent mappings. This is why the 1 GB ZONE_DMA was introduced recently. Unfortunately, it turns out that having a 1 GB ZONE_DMA as well as a ZONE_DMA32 causes problems with kdump, and potentially in other places where allocations cannot cross zone boundaries. Therefore, we should avoid having two separate DMA zones when possible.

So let's do an early scan of the IORT, and only create the ZONE_DMA if we encounter any devices that need it. This puts the burden on the firmware to describe such limitations in the IORT, which may be redundant (and less precise) if _DMA methods are also being provided. However, it should be noted that this situation is highly unusual for arm64 ACPI machines. Also, the DMA subsystem still gives precedence to the _DMA method if implemented, and so we will not lose the ability to perform streaming DMA outside the ZONE_DMA if the _DMA method permits it.

[nsaenz: unified implementation with DT's counterpart]

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Hanjun Guo <guohanjun@huawei.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20201119175400.9995-7-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
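Together with the DT-based counterpart below, zone_sizes_init() ends up deriving zone_dma_bits from whatever the firmware describes. The sketch below shows that shape; the variable names follow mainline 5.11, and the exact guards in this backport may differ.

```c
static void __init zone_sizes_init(unsigned long min, unsigned long max)
{
	unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
	unsigned int __maybe_unused acpi_zone_dma_bits;
	unsigned int __maybe_unused dt_zone_dma_bits;

#ifdef CONFIG_ZONE_DMA
	/* Ask the ACPI IORT and the devicetree for the most constrained master. */
	acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
	dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
	zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
	arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
#endif
#ifdef CONFIG_ZONE_DMA32
	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_phys(32));
#endif
	max_zone_pfns[ZONE_NORMAL] = max;

	free_area_init(max_zone_pfns);
}
```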

By Nicolas Saenz Julienne

mainline inclusion
from mainline-5.11-rc1
commit 8424ecdd
category: bugfix
bugzilla: 50423
CVE: NA

---------------------------------------------

We recently introduced a 1 GB sized ZONE_DMA to cater for platforms incorporating masters that can address less than 32 bits of DMA, in particular the Raspberry Pi 4, which has 4 or 8 GB of DRAM, but has peripherals that can only address up to 1 GB (and its PCIe host bridge can only access the bottom 3 GB).

The DMA layer also needs to be able to allocate memory that is guaranteed to meet those DMA constraints, for bounce buffering as well as allocating the backing for consistent mappings. This is why the 1 GB ZONE_DMA was introduced recently. Unfortunately, it turns out that having a 1 GB ZONE_DMA as well as a ZONE_DMA32 causes problems with kdump, and potentially in other places where allocations cannot cross zone boundaries. Therefore, we should avoid having two separate DMA zones when possible.

So, with the help of of_dma_get_max_cpu_address(), get the topmost physical address accessible to all DMA masters in the system and use that information to fine-tune ZONE_DMA's size. In the absence of addressing-limited masters ZONE_DMA will span the whole 32-bit address space; otherwise, in the case of the Raspberry Pi 4, it'll only span the 30-bit address space, and have ZONE_DMA32 cover the rest of the 32-bit address space.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Link: https://lore.kernel.org/r/20201119175400.9995-6-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

By Nicolas Saenz Julienne

mainline inclusion
from mainline-5.11-rc1
commit 9804f8c6
category: bugfix
bugzilla: 50423
CVE: NA

---------------------------------------------

zone_dma_bits's initialization happens earlier than it's actually needed, in arm64_memblock_init(). So move it into the more suitable zone_sizes_init().

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Link: https://lore.kernel.org/r/20201119175400.9995-3-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

By Nicolas Saenz Julienne

mainline inclusion
from mainline-5.11-rc1
commit 0a30c535
category: bugfix
bugzilla: 50423
CVE: NA

---------------------------------------------

crashkernel might reserve memory located in ZONE_DMA. We plan to delay ZONE_DMA's initialization after unflattening the devicetree and ACPI's boot table initialization, so move it later in the boot process. Specifically into bootmem_init(), since request_standard_resources() depends on it.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Link: https://lore.kernel.org/r/20201119175400.9995-2-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

- 08 February 2021, 3 commits

By Chen Zhou

maillist inclusion
category: feature
bugzilla: 47954
Reference: https://lkml.org/lkml/2021/1/30/53

-------------------------------------------------

When reserving crashkernel in high memory, some low memory is reserved for crash dump kernel devices and never mapped by the first kernel. This memory range is advertised to the crash dump kernel via a DT property under /chosen,

	linux,usable-memory-range = <BASE1 SIZE1 [BASE2 SIZE2]>

We reused the DT property linux,usable-memory-range and made the low memory region the second range "BASE2 SIZE2", which keeps compatibility with existing user-space and older kdump kernels.

The crash dump kernel reads this property at boot time and calls memblock_add() to add the low memory region after memblock_cap_memory_range() has been called.

Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Tested-by: John Donnelly <John.p.donnelly@oracle.com>
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

By Chen Zhou

maillist inclusion
category: feature
bugzilla: 47954
Reference: https://lkml.org/lkml/2021/1/30/53

-------------------------------------------------

There are the following issues in arm64 kdump:

1. We use crashkernel=X to reserve crashkernel below 4G, which will fail when there is not enough low memory.
2. If we reserve crashkernel above 4G, the crash dump kernel will fail to boot because there is no low memory available for allocation.
3. Since commit 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32"), if the memory reserved for the crash dump kernel falls in ZONE_DMA32, devices in the crash dump kernel that need ZONE_DMA will fail to allocate.

To solve these issues, change the behavior of crashkernel=X and introduce crashkernel=X,[high,low]. crashkernel=X tries a low allocation in the DMA zone (or the DMA32 zone if CONFIG_ZONE_DMA is disabled), and falls back to a high allocation if that fails. We can also use "crashkernel=X,high" to select a region above the DMA zone, which also tries to allocate at least 256M in the DMA zone automatically (or the DMA32 zone if CONFIG_ZONE_DMA is disabled). "crashkernel=Y,low" can be used to allocate a specified amount of low memory.

Another minor change: there may now be two regions reserved for the crash dump kernel. In order to distinguish the low region from the high one, and to have no effect on the use of existing kexec-tools, rename the low region as "Crash kernel (low)".

Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Tested-by: John Donnelly <John.p.donnelly@oracle.com>
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
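A sketch of the low-then-high fallback described above. The CRASH_* macro names come from the companion patch below; the memblock calls and the omitted low-region helper are assumptions drawn from the description, not the verbatim series.

```c
static void __init reserve_crashkernel(void)
{
	unsigned long long crash_base, crash_size;
	int ret;

	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
				&crash_size, &crash_base);
	if (ret || !crash_size)
		return;

	crash_size = PAGE_ALIGN(crash_size);

	/* crashkernel=X: try a low allocation first (ZONE_DMA, or DMA32). */
	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
					       CRASH_ALIGN, CRASH_ADDR_LOW_MAX);
	if (!crash_base) {
		/* Fall back to a high reservation... */
		crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
						       CRASH_ALIGN,
						       CRASH_ADDR_HIGH_MAX);
		/*
		 * ...plus a separate small low reservation for DMA-capable
		 * memory ("Crash kernel (low)"), omitted in this sketch.
		 */
	}
	if (!crash_base) {
		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
			crash_size);
		return;
	}

	crashk_res.start = crash_base;
	crashk_res.end = crash_base + crash_size - 1;
}
```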

By Chen Zhou

maillist inclusion
category: feature
bugzilla: 47954
Reference: https://lkml.org/lkml/2021/1/30/53

-------------------------------------------------

Introduce the macro CRASH_ALIGN for alignment, CRASH_ADDR_LOW_MAX for the upper bound of low crash memory, and CRASH_ADDR_HIGH_MAX for the upper bound of high crash memory, and use these macros instead.

Besides, to keep consistent with x86, use CRASH_ALIGN as the lower bound of the crash kernel reservation.

Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Tested-by: John Donnelly <John.p.donnelly@oracle.com>
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>

- 27 January 2021, 1 commit

By Nicolas Saenz Julienne

stable inclusion
from stable-5.10.8
commit 41dcfc0cb955c445255a9cd2d6f391ebef8dfc2b
bugzilla: 47450

--------------------------------

commit 095507dc upstream.

Systems configured with CONFIG_ZONE_DMA32, CONFIG_ZONE_NORMAL and !CONFIG_ZONE_DMA will fail to properly setup ARCH_LOW_ADDRESS_LIMIT. The limit will default to ~0ULL, effectively spanning the whole memory, which is too high for a configuration that expects low memory to be capped at 4GB.

Fix ARCH_LOW_ADDRESS_LIMIT by falling back to arm64_dma32_phys_limit when arm64_dma_phys_limit isn't set. arm64_dma32_phys_limit will honour CONFIG_ZONE_DMA32, or span the entire memory when not enabled.

Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Link: https://lore.kernel.org/r/20201218163307.10150-1-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
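The fix plausibly reduces to a fallback in the limit macro itself; the line below is a reconstruction from the description, not the verbatim patch.

```c
/* arch/arm64/include/asm/processor.h (sketch): fall back to the DMA32 limit
 * when CONFIG_ZONE_DMA is disabled and arm64_dma_phys_limit stays 0. */
#define ARCH_LOW_ADDRESS_LIMIT	\
	(arm64_dma_phys_limit ? : arm64_dma32_phys_limit)
```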

- 15 October 2020, 1 commit

By Ard Biesheuvel

On arm64, the global variable memstart_addr represents the physical address of PAGE_OFFSET, and so physical to virtual translations or vice versa used to come down to simple additions or subtractions involving the values of PAGE_OFFSET and memstart_addr.

When support for 52-bit virtual addressing was introduced, we had to deal with PAGE_OFFSET potentially being outside of the region that can be covered by the virtual range (as the 52-bit VA capable build needs to be able to run on systems that are only 48-bit VA capable), and for this reason, another translation was introduced, and recorded in the global variable physvirt_offset.

However, if we go back to the original definition of memstart_addr, i.e., the physical address of PAGE_OFFSET, it turns out that there is no need for two separate translations: instead, we can simply subtract the size of the unaddressable VA space from memstart_addr to make the available physical memory appear in the 48-bit addressable VA region.

This simplifies things, but also fixes a bug on KASLR builds, which may update memstart_addr later on in arm64_memblock_init(), but fails to update vmemmap and physvirt_offset accordingly.

Fixes: 5383cc6e ("arm64: mm: Introduce vabits_actual")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Link: https://lore.kernel.org/r/20201008153602.9467-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
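With physvirt_offset gone, linear-map translations reduce to PAGE_OFFSET and memstart_addr again. A simplified sketch of the resulting helpers, assuming the post-patch mainline macros; treat it as illustrative rather than the exact definitions in this tree.

```c
/* PHYS_OFFSET is just memstart_addr, the physical address of PAGE_OFFSET. */
#define __phys_to_virt(x)	\
	((unsigned long)((x) - memstart_addr) | PAGE_OFFSET)

/* Linear-map virtual address back to physical. */
#define __lm_to_phys(addr)	\
	(((addr) & ~PAGE_OFFSET) + memstart_addr)
```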

- 14 October 2020, 1 commit

By Mike Rapoport

There are several occurrences of the following pattern:

	for_each_memblock(memory, reg) {
		start_pfn = memblock_region_memory_base_pfn(reg);
		end_pfn = memblock_region_memory_end_pfn(reg);

		/* do something with start_pfn and end_pfn */
	}

Rather than iterate over all memblock.memory regions and each time query for their start and end PFNs, use the for_each_mem_pfn_range() iterator to get simpler and clearer code.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-12-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
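For reference, the converted form of the quoted pattern looks roughly like this (illustrative; the nid argument can be MAX_NUMNODES to walk every node):

```c
	unsigned long start_pfn, end_pfn;
	int i;

	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
		/* do something with start_pfn and end_pfn */
	}
```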

- 06 October 2020, 1 commit

By Christoph Hellwig

Merge dma-contiguous.h into dma-map-ops.h, after moving the comment describing the contiguous allocator into kernel/dma/contiguous.c.

Signed-off-by: Christoph Hellwig <hch@lst.de>

- 01 September 2020, 1 commit

By Barry Song

Right now, smmu is using dma_alloc_coherent() to get memory to save queues and tables. Typically, on an ARM64 server, there is a default CMA located at node0, which could be far away from node2, node3 etc. With this patch, smmu will get memory from the local NUMA node to save command queues and page tables. That means dma_unmap latency will be reduced considerably.

Meanwhile, when iommu.passthrough is on, device drivers which call dma_alloc_coherent() will also get local memory and avoid the travel between NUMA nodes.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>

- 08 August 2020, 1 commit

By Mike Rapoport

After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP we have two equivalent functions that call memory_present() for each region in memblock.memory: sparse_memory_present_with_active_regions() and memblocks_present().

Moreover, all architectures have a call to either of these functions preceding the call to sparse_init() and in most cases they are called one after the other.

Mark the regions from memblock.memory as present during sparse_init() by making sparse_init() call memblocks_present(), make memblocks_present() and memory_present() static, and remove the redundant sparse_memory_present_with_active_regions() function.

Also remove the no longer required HAVE_MEMORY_PRESENT configuration option.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20200712083130.22919-1-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 15 July 2020, 1 commit

By Anshuman Khandual

Currently the 'hugetlb_cma=' command line argument does not create a CMA area on ARM64_16K_PAGES and ARM64_64K_PAGES based platforms. Instead, it just ends up with the following warning message, because hugetlb_cma_reserve() never gets called for these huge page sizes.

	[   64.255669] hugetlb_cma: the option isn't supported by current arch

This enables CMA area reservation on ARM64_16K_PAGES and ARM64_64K_PAGES configs by defining a unified arm64_hugetlb_cma_reserve() that is wrapped in CONFIG_CMA. The call site for arm64_hugetlb_cma_reserve() is also protected, as <asm/hugetlb.h> is conditionally included and hence cannot contain a stub for the inverse config, i.e. !(CONFIG_HUGETLB_PAGE && CONFIG_CMA).

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Barry Song <song.bao.hua@hisilicon.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/r/1593578521-24672-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
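A minimal sketch of the arrangement described above. The declaration/guard layout is an assumption based on the text; the real patch also computes the gigantic-page order per page-size config inside the wrapper, which is omitted here.

```c
/* sketch: prototype, only provided when CMA is available */
#ifdef CONFIG_CMA
void arm64_hugetlb_cma_reserve(void);
#endif

/*
 * sketch: call site in the arm64 init code. A preprocessor guard (rather
 * than IS_ENABLED()) is needed because the prototype itself only exists
 * when both options are enabled.
 */
#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
	arm64_hugetlb_cma_reserve();
#endif
```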

- 02 July 2020, 1 commit

By Anshuman Khandual

Currently there are three different registered panic notifier blocks. This unifies all of them into a single one, i.e. arm64_panic_block, hence reducing code duplication and the required calling sequence during panic. This preserves the existing dump sequence.

While here, just use device_initcall() directly instead of __initcall(), which is just a legacy alias for it. This replacement is a pure cleanup with no functional implications.

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/r/1593405511-7625-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
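The unified notifier boils down to one block that runs the existing dumpers in order. A sketch follows; the helper names (dump_kernel_offset(), dump_cpu_features(), dump_mem_limit()) are taken from mainline and may differ in this tree.

```c
static int arm64_panic_block_dump(struct notifier_block *self,
				  unsigned long v, void *p)
{
	/* Preserve the previous dump order: KASLR offset, CPU features, mem limit. */
	dump_kernel_offset();
	dump_cpu_features();
	dump_mem_limit();
	return 0;
}

static struct notifier_block arm64_panic_block = {
	.notifier_call = arm64_panic_block_dump
};

static int __init register_arm64_panic_block(void)
{
	atomic_notifier_chain_register(&panic_notifier_list,
				       &arm64_panic_block);
	return 0;
}
device_initcall(register_arm64_panic_block);
```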

- 18 June 2020, 1 commit

By Barry Song

hugetlb_cma_reserve() is called at the wrong place: numa_init() has not been done yet, so all reserved memory will be located at node0.

Fixes: cf11e85f ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Roman Gushchin <guro@fb.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200617215828.25296-1-song.bao.hua@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>

- 04 June 2020, 2 commits

By Mike Rapoport

The free_area_init() function only requires the definition of the maximal PFN for each of the supported zones rather than the calculation of actual zone sizes and the sizes of the holes between the zones.

After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP the free_area_init() is available to all architectures.

Using this function instead of free_area_init_node() simplifies the zone detection.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64]
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200412194859.12663-9-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By Mike Rapoport

free_area_init() has effectively become a wrapper for free_area_init_nodes() and there is no point in keeping it. Still, the free_area_init() name is shorter and more general, as it does not imply the necessity to initialize multiple nodes.

Rename free_area_init_nodes() to free_area_init(), update the callers and drop the old version of free_area_init().

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64]
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200412194859.12663-6-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 01 May 2020, 1 commit

By Guixiong Wei

Replace the open-coded '__nr_to_section(pfn_to_section_nr(pfn))' in pfn_valid() with a more concise call to '__pfn_to_section(pfn)'.

No functional change.

Signed-off-by: Guixiong Wei <guixiongwei@gmail.com>
Link: https://lore.kernel.org/r/20200430161858.11379-1-guixiongwei@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
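The change is a one-line simplification inside pfn_valid(), roughly:

```c
	/* before */
	if (!valid_section(__nr_to_section(pfn_to_section_nr(pfn))))
		return 0;

	/* after: __pfn_to_section() wraps the same two steps */
	if (!valid_section(__pfn_to_section(pfn)))
		return 0;
```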

- 11 April 2020, 1 commit

By Roman Gushchin

Commit 944d9fec ("hugetlb: add support for gigantic page allocation at runtime") has added the run-time allocation of gigantic pages. However it actually works only at early stages of the system loading, when the majority of memory is free. After some time the memory gets fragmented by non-movable pages, so the chances to find a contiguous 1GB block are getting close to zero. Even dropping caches manually doesn't help a lot.

At large scale, rebooting servers in order to allocate gigantic hugepages is quite expensive and complex. At the same time keeping some constant percentage of memory in reserved hugepages even if the workload isn't using it is a big waste: not all workloads can benefit from using 1 GB pages.

The following solution can solve the problem:
1) At boot time a dedicated cma area* is reserved. The size is passed as a kernel argument.
2) Run-time allocations of gigantic hugepages are performed using the cma allocator and the dedicated cma area.

In this case gigantic hugepages can be allocated successfully with a high probability, however the memory isn't completely wasted if nobody is using 1GB hugepages: it can be used for pagecache, anon memory, THPs, etc.

* On a multi-node machine a per-node cma area is allocated on each node. Subsequent gigantic hugetlb allocations use the first available NUMA node if the mask isn't specified by a user.

Usage:
1) configure the kernel to allocate a cma area for hugetlb allocations: pass hugetlb_cma=10G as a kernel argument
2) allocate hugetlb pages as usual, e.g.
   echo 10 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

If the option isn't enabled or the allocation of the cma area failed, the current behavior of the system is preserved.

x86 and arm-64 are covered by this patch, other architectures can be trivially added later.

The patch contains clean-ups and fixes proposed and implemented by Aslan Bakirov and Randy Dunlap. It also contains ideas and suggestions proposed by Rik van Riel, Michal Hocko and Mike Kravetz. Thanks!

Signed-off-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Andreas Schaufler <andreas.schaufler@gmx.de>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@kernel.org>
Cc: Aslan Bakirov <aslan@fb.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Link: http://lkml.kernel.org/r/20200407163840.92263-3-guro@fb.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 04 December 2019, 1 commit

By Will Deacon

John reports that the recently merged commit 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32") breaks the boot on his DB845C board:

	| Booting Linux on physical CPU 0x0000000000 [0x517f803c]
	| Linux version 5.4.0-mainline-10675-g957a03b9e38f
	| Machine model: Thundercomm Dragonboard 845c
	| [...]
	| Built 1 zonelists, mobility grouping on. Total pages: -188245
	| Kernel command line: earlycon
	| firmware_class.path=/vendor/firmware/ androidboot.hardware=db845c
	| init=/init androidboot.boot_devices=soc/1d84000.ufshc
	| printk.devkmsg=on buildvariant=userdebug root=/dev/sda2
	| androidboot.bootdevice=1d84000.ufshc androidboot.serialno=c4e1189c
	| androidboot.baseband=sda
	| msm_drm.dsi_display0=dsi_lt9611_1080_video_display:
	| androidboot.slot_suffix=_a skip_initramfs rootwait ro init=/init
	|
	| <hangs indefinitely here>

This is because, when CONFIG_NUMA=n, zone_sizes_init() fails to handle memblocks that fall entirely within the ZONE_DMA region and erroneously ends up trying to add a negatively-sized region into the following ZONE_DMA32, which is later interpreted as a large unsigned region by the core MM code.

Rework the non-NUMA implementation of zone_sizes_init() so that the start address of the memblock being processed is adjusted according to the end of the previous zone, which is then range-checked before updating the hole information of subsequent zones.

Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Link: https://lore.kernel.org/lkml/CALAqxLVVcsmFrDKLRGRq7GewcW405yTOxG=KR3csVzQ6bXutkA@mail.gmail.com
Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reported-by: John Stultz <john.stultz@linaro.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 07 November 2019, 1 commit

By Nicolas Saenz Julienne

With the introduction of ZONE_DMA in arm64 we moved the default CMA and crashkernel reservation into that area. This caused a regression on big machines that need big CMA and crashkernel reservations. Note that ZONE_DMA is only 1GB big.

Restore the previous behavior as the wide majority of devices are OK with reserving these in ZONE_DMA32. The ones that need them in ZONE_DMA will configure it explicitly.

Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 01 November 2019, 1 commit

By Nicolas Saenz Julienne

Some architectures, notably ARM, are interested in tweaking this depending on their runtime DMA addressing limitations.

Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
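In mainline this ends up as a plain variable in the DMA core that architecture code can lower at boot. A sketch, assuming the upstream placement in kernel/dma/direct.c; the 24-bit default mirrors the old ISA-era ARCH_ZONE_DMA_BITS.

```c
/* kernel/dma/direct.c (sketch) */
unsigned int zone_dma_bits __ro_after_init = 24;

/*
 * Architecture code (e.g. arm64 zone setup) can then lower the zone
 * boundary at boot, for instance to the first 1 GB:
 *
 *	zone_dma_bits = 30;
 */
```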

- 29 October 2019, 1 commit

By Catalin Marinas

This variable is only used in the arch/arm64/mm/init.c file for ZONE_DMA32 initialisation, no need to expose it.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 16 October 2019, 3 commits

By Nathan Chancellor

When building arm64 allnoconfig, CONFIG_ZONE_DMA and CONFIG_ZONE_DMA32 get disabled so there is a warning about max_dma being unused.

	../arch/arm64/mm/init.c:215:16: warning: unused variable 'max_dma' [-Wunused-variable]
	        unsigned long max_dma = min;
	                      ^
	1 warning generated.

Add __maybe_unused to make this clear to the compiler.

Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Anshuman Khandual

Platform implementation for free_initmem() should poison the memory while freeing it up. Hence pass across POISON_FREE_INITMEM while calling into free_reserved_area(). The same is being followed in the generic fallback for free_initmem() and some other platforms overriding it.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Mike Rapoport

arm64 calls memblock_free() for the initrd area in its implementation of free_initrd_mem(), but this call has no actual effect that late in the boot process. By the time initrd is freed, all the reserved memory is managed by the page allocator and the memblock.reserved is unused, so the only purpose of the memblock_free() call is to keep track of initrd memory for debugging and accounting.

Without the memblock_free() call the only difference between arm64 and the generic versions of free_initrd_mem() is the memory poisoning.

Move the memblock_free() call to the generic code, enable it there for the architectures that define ARCH_KEEP_MEMBLOCK and use the generic implementation of free_initrd_mem() on arm64.

Tested-by: Anshuman Khandual <anshuman.khandual@arm.com> #arm64
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 14 October 2019, 3 commits

By Nicolas Saenz Julienne

So far all arm64 devices have supported 32 bit DMA masks for their peripherals. This is not true anymore for the Raspberry Pi 4, as most of its peripherals can only address the first GB of memory on a total of up to 4 GB.

This goes against ZONE_DMA32's intent, as it's expected for ZONE_DMA32 to be addressable with a 32 bit mask. So it was decided to re-introduce ZONE_DMA in arm64.

ZONE_DMA will contain the lower 1G of memory, which is currently the memory area addressable by any peripheral on an arm64 device. ZONE_DMA32 will contain the rest of the 32 bit addressable memory.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Nicolas Saenz Julienne

Let the name indicate that they are used to calculate ZONE_DMA32's size as opposed to ZONE_DMA.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Nicolas Saenz Julienne

By the time we call zone_sizes_init(), arm64_dma_phys_limit already contains the result of max_zone_dma_phys(). We use the variable instead of calling the function directly to save some precious cpu time.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 09 August 2019, 4 commits

By Steve Capper

Most of the machinery is now in place to enable 52-bit kernel VAs that are detectable at boot time.

This patch adds a Kconfig option for 52-bit user and kernel addresses and plumbs in the requisite CONFIG_ macros as well as sets TCR.T1SZ, physvirt_offset and vmemmap at early boot.

To simplify things this patch also removes the 52-bit user/48-bit kernel kconfig option.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

By Steve Capper

vmemmap is a preprocessor definition that depends on a variable, memstart_addr. In a later patch we will need to expand the size of the VMEMMAP region and optionally modify vmemmap depending upon whether or not hardware support is available for 52-bit virtual addresses.

This patch changes vmemmap to be a variable. As the old definition depended on a variable load, this should not affect performance noticeably.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

By Steve Capper

In order to support 52-bit kernel addresses detectable at boot time, one needs to know the actual VA_BITS detected. A new variable vabits_actual is introduced in this commit and employed for the KVM hypervisor layout, KASAN, fault handling and phys-to/from-virt translation where there would normally be compile time constants.

In order to maintain performance in phys_to_virt, another variable physvirt_offset is introduced.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
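A sketch of how the new variable is declared and consumed; the PAGE_END usage is an assumption based on how the later patches in this series shape the memory layout macros, so treat the exact definitions as illustrative.

```c
/* runtime-detected VA size, filled in from ID_AA64MMFR2_EL1 during early boot */
u64 vabits_actual __ro_after_init = VA_BITS_MIN;
EXPORT_SYMBOL(vabits_actual);

/* e.g. the end of the linear map now depends on the detected VA size */
#define _PAGE_END(va)	(-(UL(1) << ((va) - 1)))
#define PAGE_END	(_PAGE_END(vabits_actual))
```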

By Steve Capper

In order to allow for a KASAN shadow that changes size at boot time, one must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the start address. Also, it is highly desirable to maintain the same function addresses in the kernel .text between VA sizes. Both of these requirements necessitate us to flip the kernel address space halves s.t. the direct linear map occupies the lower addresses.

This patch puts the direct linear map in the lower addresses of the kernel VA range and everything else in the higher ranges.

We need to adjust:
 *) KASAN shadow region placement logic,
 *) KASAN_SHADOW_OFFSET computation logic,
 *) virt_to_phys, phys_to_virt checks,
 *) page table dumper.

These are all small changes, that need to take place atomically, so they are bundled into this commit.

As part of the re-arrangement, a guard region of 2MB (to preserve alignment for fixed map) is added after the vmemmap. Otherwise the vmemmap could intersect with IS_ERR pointers.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

- 05 August 2019, 1 commit

By Junhua Huang

We should free the initrd reserved memblock in an aligned manner, because the initrd reserves the memblock in an aligned manner in arm64_memblock_init(). Otherwise there are some fragments in memblock_reserved regions after free_initrd_mem(), e.g.:

	/sys/kernel/debug/memblock # cat reserved
	0: 0x0000000080080000..0x00000000817fafff
	1: 0x0000000083400000..0x0000000083ffffff
	2: 0x0000000090000000..0x000000009000407f
	3: 0x00000000b0000000..0x00000000b000003f
	4: 0x00000000b26184ea..0x00000000b2618fff

The fragments like the ranges from b0000000 to b000003f and from b26184ea to b2618fff should be freed.

And we can do free_reserved_area() after memblock_free(), as free_reserved_area() calls __free_pages(); once we've done that the memory could be allocated somewhere else, but memblock and iomem still say this is reserved memory.

Fixes: 05c58752 ("arm64: To remove initrd reserved area entry from memblock")
Signed-off-by: Junhua Huang <huang.junhua@zte.com.cn>
Signed-off-by: Will Deacon <will@kernel.org>
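The resulting free_initrd_mem() frees the same page-aligned span that was reserved. The sketch below is close to the upstream fix, but treat it as illustrative of the idea rather than the exact code in this tree.

```c
void __init free_initrd_mem(unsigned long start, unsigned long end)
{
	unsigned long aligned_start, aligned_end;

	/* Drop the memblock reservation over the full aligned range... */
	aligned_start = __virt_to_phys(start) & PAGE_MASK;
	aligned_end = PAGE_ALIGN(__virt_to_phys(end));
	memblock_free(aligned_start, aligned_end - aligned_start);

	/* ...then hand the pages back to the buddy allocator. */
	free_reserved_area((void *)start, (void *)end, 0, "initrd");
}
```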

- 19 June 2019, 1 commit

By Thomas Gleixner

Based on 1 normalized pattern(s):

	this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses

extracted by the scancode license scanner the SPDX license identifier

	GPL-2.0-only

has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

- 04 June 2019, 1 commit

By Miles Chen

This change makes CONFIG_ZONE_DMA32 default y and allows users to overwrite it only when CONFIG_EXPERT=y.

For the SoCs that do not need CONFIG_ZONE_DMA32, this is the first step to manage all available memory by a single zone (normal zone) to reduce the overhead of multiple zones.

The change also fixes a build error when CONFIG_NUMA=y and CONFIG_ZONE_DMA32=n:

	arch/arm64/mm/init.c:195:17: error: use of undeclared identifier 'ZONE_DMA32'
	        max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());

Changes since v1:
1. only expose CONFIG_ZONE_DMA32 when CONFIG_EXPERT=y
2. remove redundant IS_ENABLED(CONFIG_ZONE_DMA32)

Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 15 May 2019, 1 commit

By Masahiro Yamada

Since commit dccd2304 ("ARM: 7430/1: sizes.h: move from asm-generic to <linux/sizes.h>"), <asm/sizes.h> and <asm-generic/sizes.h> are just wrappers of <linux/sizes.h>.

This commit replaces all <asm/sizes.h> and <asm-generic/sizes.h> to prepare for the removal.

Link: http://lkml.kernel.org/r/1553267665-27228-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>