1. 09 Mar, 2021 (1 commit)
    • arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory · eeb0753b
      By Anshuman Khandual
      pfn_valid() validates a pfn, but what it really checks is whether a valid
      struct page backs that pfn. It should therefore always return true for
      memory ranges backed by a struct page mapping. Currently, however,
      pfn_valid() fails for all ZONE_DEVICE based memory types even though they
      do have struct page mappings.

      pfn_valid() asserts that there is a memblock entry for a given pfn without
      the MEMBLOCK_NOMAP flag set. The problem with ZONE_DEVICE based memory is
      that it has no memblock entries at all, so memblock_is_map_memory() will
      invariably fail via memblock_search() for a ZONE_DEVICE based address.
      That in turn makes pfn_valid() fail, which is wrong; memblock_is_map_memory()
      needs to be skipped for such memory ranges. Because ZONE_DEVICE memory is
      hotplugged into the system via memremap_pages() called from a driver, its
      memory sections will not have SECTION_IS_EARLY set.
      
      Normal hotplug memory will never have MEMBLOCK_NOMAP set in its memblock
      regions, because that flag was specifically designed for, and is only set
      on, firmware-reserved memory regions. memblock_is_map_memory() can be
      skipped for such ranges as well, since it is always going to return true,
      which is also an optimization for normal hotplug memory. Like ZONE_DEVICE
      based memory, normal hotplugged memory will not have SECTION_IS_EARLY set
      for its sections.

      Skipping memblock_is_map_memory() for all non-early memory sections
      therefore fixes pfn_valid() for ZONE_DEVICE based memory and also improves
      its performance for normal hotplug memory (a sketch of the resulting check
      follows this entry).
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Acked-by: David Hildenbrand <david@redhat.com>
      Fixes: 73b20c84 ("arm64: mm: implement pte_devmap support")
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Link: https://lore.kernel.org/r/1614921898-4099-2-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      eeb0753b
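
      A minimal sketch of the check described above, based only on this commit
      description (the helpers follow the generic sparsemem API; treat it as
      illustrative rather than the verbatim arm64 diff):

      int pfn_valid(unsigned long pfn)
      {
          phys_addr_t addr = PFN_PHYS(pfn);
          struct mem_section *ms;

          /* Reject pfns whose physical address cannot be represented. */
          if (PHYS_PFN(addr) != pfn)
              return 0;

          if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
              return 0;

          ms = __pfn_to_section(pfn);
          if (!valid_section(ms))
              return 0;

          /*
           * ZONE_DEVICE memory has no memblock entries, and normal hotplugged
           * memory never carries MEMBLOCK_NOMAP, so the memblock lookup is
           * only meaningful (and only needed) for early sections.
           */
          if (!early_section(ms))
              return pfn_section_valid(ms, pfn);

          return memblock_is_map_memory(addr);
      }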
  2. 27 Feb, 2021 (4 commits)
  3. 25 Feb, 2021 (1 commit)
  4. 23 Feb, 2021 (1 commit)
  5. 09 Feb, 2021 (2 commits)
  6. 08 Feb, 2021 (2 commits)
  7. 06 Feb, 2021 (1 commit)
  8. 03 Feb, 2021 (2 commits)
  9. 27 Jan, 2021 (7 commits)
  10. 20 Jan, 2021 (1 commit)
  11. 19 Jan, 2021 (1 commit)
  12. 15 Jan, 2021 (2 commits)
  13. 13 Jan, 2021 (1 commit)
    • arm64: Remove arm64_dma32_phys_limit and its uses · d78050ee
      By Catalin Marinas
      With the introduction of a dynamic ZONE_DMA range based on DT or IORT
      information, there's no need for CMA allocations from the wider
      ZONE_DMA32 since on most platforms ZONE_DMA will cover the 32-bit
      addressable range. Remove the arm64_dma32_phys_limit and set
      arm64_dma_phys_limit to cover the smallest DMA range required on the
      platform. CMA allocations and the crashkernel reservation now go in the
      dynamically sized ZONE_DMA, allowing correct functionality on the RPi4
      (a simplified sketch of the limit selection follows this entry).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Zhou <chenzhou10@huawei.com>
      Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
      Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> # On RPi4B
      d78050ee
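
      A simplified, hypothetical sketch of the idea: the helper name below is
      invented for illustration, and the real change lives in the arm64 zone
      setup code rather than in a standalone function:

      /*
       * Keep a single arm64_dma_phys_limit covering the smallest DMA zone the
       * platform actually has; CMA and crashkernel reservations then only
       * ever consult this one limit instead of a separate 32-bit limit.
       */
      phys_addr_t arm64_dma_phys_limit;

      static void __init set_dma_phys_limit(phys_addr_t zone_dma_limit,
                                            phys_addr_t zone_dma32_limit,
                                            phys_addr_t memblock_end)
      {
          if (IS_ENABLED(CONFIG_ZONE_DMA))
              arm64_dma_phys_limit = zone_dma_limit;     /* DT/IORT derived */
          else if (IS_ENABLED(CONFIG_ZONE_DMA32))
              arm64_dma_phys_limit = zone_dma32_limit;   /* 32-bit boundary */
          else
              arm64_dma_phys_limit = memblock_end;       /* no DMA zones */
      }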
  14. 06 Jan, 2021 (1 commit)
  15. 04 Jan, 2021 (1 commit)
  16. 23 Dec, 2020 (9 commits)
  17. 16 Dec, 2020 (3 commits)
    • arch, mm: make kernel_page_present() always available · 32a0de88
      By Mike Rapoport
      For architectures that enable ARCH_HAS_SET_MEMORY, the ability to verify
      that a page is mapped in the kernel direct map can be useful regardless of
      hibernation.

      Add a RISC-V implementation of kernel_page_present(), update its forward
      declarations and stubs to be part of the set_memory API, and remove the
      ugly ifdefery around the current declarations of kernel_page_present() in
      include/linux/mm.h (a sketch of such a check follows this entry).
      
      Link: https://lkml.kernel.org/r/20201109192128.960-5-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      32a0de88
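
      A sketch of what such a kernel_page_present() check typically looks like:
      walk the kernel page tables for the page's direct-map address and report
      whether a mapping is present. Huge-page leaf handling is elided, so treat
      this as illustrative rather than the exact RISC-V code from the commit:

      #include <linux/mm.h>
      #include <linux/pgtable.h>

      bool kernel_page_present(struct page *page)
      {
          unsigned long addr = (unsigned long)page_address(page);
          pgd_t *pgd;
          p4d_t *p4d;
          pud_t *pud;
          pmd_t *pmd;
          pte_t *pte;

          /* Descend the kernel page tables level by level. */
          pgd = pgd_offset_k(addr);
          if (!pgd_present(*pgd))
              return false;

          p4d = p4d_offset(pgd, addr);
          if (!p4d_present(*p4d))
              return false;

          pud = pud_offset(p4d, addr);
          if (!pud_present(*pud))
              return false;

          pmd = pmd_offset(pud, addr);
          if (!pmd_present(*pmd))
              return false;

          pte = pte_offset_kernel(pmd, addr);
          return pte_present(*pte);
      }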
    • arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC · 5d6ad668
      By Mike Rapoport
      The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must
      never fail. With this assumption it wouldn't be safe to allow general
      usage of this function.

      Moreover, some architectures that implement __kernel_map_pages() have this
      function guarded by #ifdef DEBUG_PAGEALLOC, and some refuse to map/unmap
      pages when page allocation debugging is disabled at runtime.

      As all the users of __kernel_map_pages() have been converted to use
      debug_pagealloc_map_pages(), it is safe to make it available only when
      DEBUG_PAGEALLOC is set (a sketch of those wrappers follows this entry).
      
      Link: https://lkml.kernel.org/r/20201109192128.960-4-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5d6ad668
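
      A sketch of the wrapper pattern this commit relies on, close to what
      include/linux/mm.h provided at the time but shown here only as an
      illustration:

      /*
       * Callers use these wrappers instead of calling __kernel_map_pages()
       * directly; when DEBUG_PAGEALLOC is not configured they compile away
       * and __kernel_map_pages() need not exist at all.
       */
      #ifdef CONFIG_DEBUG_PAGEALLOC
      extern void __kernel_map_pages(struct page *page, int numpages, int enable);

      static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
      {
          if (debug_pagealloc_enabled_static())
              __kernel_map_pages(page, numpages, 1);
      }

      static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
      {
          if (debug_pagealloc_enabled_static())
              __kernel_map_pages(page, numpages, 0);
      }
      #else
      static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
      static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
      #endif /* CONFIG_DEBUG_PAGEALLOC */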
    • arm, arm64: move free_unused_memmap() to generic mm · 4f5b0c17
      By Mike Rapoport
      ARM and ARM64 free unused parts of the memory map just before the
      initialization of the page allocator. To allow holes in the memory map,
      both architectures overload pfn_valid() and define HAVE_ARCH_PFN_VALID.

      Allowing holes in the memory map for FLATMEM may be useful for small
      machines, such as ARC and m68k, and will enable those architectures to
      cease using DISCONTIGMEM while still supporting more than one memory bank.

      Move the functions that free the unused memory map to generic mm and
      enable them when HAVE_ARCH_PFN_VALID=y (a simplified sketch follows this
      entry).
      
      Link: https://lkml.kernel.org/r/20201101170454.9567-10-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Meelis Roos <mroos@linux.ee>
      Cc: Michael Schmitz <schmitzmic@gmail.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f5b0c17
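
      A simplified sketch of the generic free_unused_memmap(): walk the
      memblock-registered ranges and hand the struct pages backing the holes
      between them back to memblock. Alignment of the boundaries to pageblock
      and section granularity is elided, and free_memmap() stands in for the
      internal helper that actually releases a pfn range, so treat this as an
      outline of the idea rather than the exact mm/memblock.c code:

      static void __init free_unused_memmap(void)
      {
          unsigned long start, end, prev_end = 0;
          int i;

          /* Only useful when the arch can report hole pfns as invalid and
           * the memmap is not virtually mapped. */
          if (!IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID) ||
              IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
              return;

          for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, NULL) {
              /* If there is a hole between the previous bank and this one,
               * free the memmap that covers it. */
              if (prev_end && prev_end < start)
                  free_memmap(prev_end, start);
              prev_end = end;
          }
      }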