1. 03 Jun 2021, 1 commit
  2. 09 Apr 2021, 5 commits
  3. 08 Feb 2021, 3 commits
  4. 27 Jan 2021, 1 commit
  5. 15 Oct 2020, 1 commit
    • arm64: mm: use single quantity to represent the PA to VA translation · 7bc1a0f9
      Authored by Ard Biesheuvel
      On arm64, the global variable memstart_addr represents the physical
      address of PAGE_OFFSET, and so physical to virtual translations or
      vice versa used to come down to simple additions or subtractions
      involving the values of PAGE_OFFSET and memstart_addr.
      
      When support for 52-bit virtual addressing was introduced, we had to
      deal with PAGE_OFFSET potentially being outside of the region that
      can be covered by the virtual range (as the 52-bit VA capable build
      needs to be able to run on systems that are only 48-bit VA capable),
      and for this reason, another translation was introduced, and recorded
      in the global variable physvirt_offset.
      
      However, if we go back to the original definition of memstart_addr,
      i.e., the physical address of PAGE_OFFSET, it turns out that there is
      no need for two separate translations: instead, we can simply subtract
      the size of the unaddressable VA space from memstart_addr to make the
      available physical memory appear in the 48-bit addressable VA region.
      
      This simplifies things, but also fixes a bug on KASLR builds, which
      may update memstart_addr later on in arm64_memblock_init() but fail
      to update vmemmap and physvirt_offset accordingly.
      
      Fixes: 5383cc6e ("arm64: mm: Introduce vabits_actual")
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Steve Capper <steve.capper@arm.com>
      Link: https://lore.kernel.org/r/20201008153602.9467-2-ardb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
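      A minimal sketch of the single-offset model this patch restores. The
      names below match the kernel's, but the exact macros differ slightly;
      the point is that with memstart_addr biased down on 48-bit-only
      hardware, both directions become a single offset again:

      	extern s64 memstart_addr;	/* the PA of PAGE_OFFSET; may go "negative" */
      	#define PHYS_OFFSET		(memstart_addr)

      	/* PA <-> VA translation is one add/sub against one quantity: */
      	#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
      	#define __virt_to_phys(x)	(((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)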
  6. 14 Oct 2020, 1 commit
    • arch, mm: replace for_each_memblock() with for_each_mem_pfn_range() · c9118e6c
      Authored by Mike Rapoport
      There are several occurrences of the following pattern:
      
      	for_each_memblock(memory, reg) {
      		start_pfn = memblock_region_memory_base_pfn(reg);
      		end_pfn = memblock_region_memory_end_pfn(reg);
      
      		/* do something with start_pfn and end_pfn */
      	}
      
      Rather than iterating over all memblock.memory regions and querying
      their start and end PFNs each time, use the for_each_mem_pfn_range()
      iterator to get simpler and clearer code, as sketched after this entry.
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>	[.clang-format]
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: https://lkml.kernel.org/r/20200818151634.14343-12-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
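      A sketch of the replacement pattern (as used by the patch; passing
      MAX_NUMNODES as the node id walks the ranges on all nodes):

      	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
      		/* do something with start_pfn and end_pfn */
      	}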
  7. 06 Oct 2020, 1 commit
  8. 01 Sep 2020, 1 commit
  9. 08 Aug 2020, 1 commit
  10. 15 Jul 2020, 1 commit
    • arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs · abb7962a
      Authored by Anshuman Khandual
      Currently the 'hugetlb_cma=' command line argument does not create a CMA
      area on ARM64_16K_PAGES and ARM64_64K_PAGES based platforms. Instead, it
      just ends up with the following warning message, because
      hugetlb_cma_reserve() never gets called for these huge page sizes.
      
      [   64.255669] hugetlb_cma: the option isn't supported by current arch
      
      This enables CMA area reservation on ARM64_16K_PAGES and ARM64_64K_PAGES
      configs by defining a unified arm64_hugetlb_cma_reserve() that is wrapped
      in CONFIG_CMA. The call site for arm64_hugetlb_cma_reserve() is also
      protected (see the sketch after this entry), as <asm/hugetlb.h> is
      conditionally included and hence cannot contain a stub for the inverse
      config, i.e. !(CONFIG_HUGETLB_PAGE && CONFIG_CMA).
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Barry Song <song.bao.hua@hisilicon.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Link: https://lore.kernel.org/r/1593578521-24672-1-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
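      A sketch of the guarded call site described above, in
      arm64_memblock_init():

      	#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
      		arm64_hugetlb_cma_reserve();
      	#endif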
  11. 02 Jul 2020, 1 commit
  12. 18 Jun 2020, 1 commit
  13. 04 Jun 2020, 2 commits
    • arm64: simplify detection of memory zone boundaries for UMA configs · 584cb13d
      Authored by Mike Rapoport
      The free_area_init() function only requires the maximal PFN for each of
      the supported zones, rather than a calculation of the actual zone sizes
      and the sizes of the holes between the zones.
      
      After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is
      available to all architectures.
      
      Using this function instead of free_area_init_node() simplifies the zone
      detection.
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: Hoan Tran <hoan@os.amperecomputing.com>	[arm64]
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200412194859.12663-9-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
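      Roughly what the arm64 zone detection reduces to with this change (a
      sketch; arm64_dma_phys_limit and arm64_dma32_phys_limit stand in for
      the arch's actual zone limits):

      	static void __init zone_sizes_init(unsigned long min, unsigned long max)
      	{
      		unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};

      	#ifdef CONFIG_ZONE_DMA
      		max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
      	#endif
      	#ifdef CONFIG_ZONE_DMA32
      		max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit);
      	#endif
      		max_zone_pfns[ZONE_NORMAL] = max;

      		free_area_init(max_zone_pfns);
      	}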
    • mm: use free_area_init() instead of free_area_init_nodes() · 9691a071
      Authored by Mike Rapoport
      free_area_init() has effectively become a wrapper for
      free_area_init_nodes() and there is no point in keeping it. Still, the
      free_area_init() name is shorter and more general, as it does not imply
      the necessity of initializing multiple nodes.
      
      Rename free_area_init_nodes() to free_area_init(), update the callers and
      drop old version of free_area_init().
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: Hoan Tran <hoan@os.amperecomputing.com>	[arm64]
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200412194859.12663-6-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
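      The call-site change is mechanical, e.g.:

      	-	free_area_init_nodes(max_zone_pfns);
      	+	free_area_init(max_zone_pfns);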
  14. 01 May 2020, 1 commit
  15. 11 Apr 2020, 1 commit
    • mm: hugetlb: optionally allocate gigantic hugepages using cma · cf11e85f
      Authored by Roman Gushchin
      Commit 944d9fec ("hugetlb: add support for gigantic page allocation
      at runtime") added run-time allocation of gigantic pages.
      
      However, it actually works only in the early stages of system boot, when
      the majority of memory is free.  After some time the memory gets
      fragmented by non-movable pages, so the chances of finding a contiguous
      1GB block get close to zero.  Even dropping caches manually doesn't help
      much.
      
      At large scale, rebooting servers in order to allocate gigantic
      hugepages is quite expensive and complex.  At the same time, keeping
      some constant percentage of memory in reserved hugepages even if the
      workload isn't using it is a big waste: not all workloads can benefit
      from using 1 GB pages.
      
      The following solution can solve the problem:
      1) At boot time a dedicated CMA area* is reserved. The size is passed
         as a kernel argument.
      2) Run-time allocations of gigantic hugepages are performed using the
         CMA allocator and the dedicated CMA area.
      
      In this case gigantic hugepages can be allocated successfully with a
      high probability; however, the memory isn't completely wasted if nobody
      is using 1GB hugepages: it can be used for pagecache, anon memory, THPs,
      etc.
      
      * On a multi-node machine a per-node CMA area is allocated on each node.
        Subsequent gigantic hugetlb allocations use the first available NUMA
        node if no mask is specified by the user.
      
      Usage:
      1) configure the kernel to allocate a cma area for hugetlb allocations:
         pass hugetlb_cma=10G as a kernel argument
      
      2) allocate hugetlb pages as usual, e.g.
         echo 10 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
      
      If the option isn't enabled or the allocation of the CMA area fails,
      the current behavior of the system is preserved.
      
      x86 and arm64 are covered by this patch; other architectures can be
      trivially added later.
      
      The patch contains clean-ups and fixes proposed and implemented by Aslan
      Bakirov and Randy Dunlap.  It also contains ideas and suggestions
      proposed by Rik van Riel, Michal Hocko and Mike Kravetz.  Thanks!
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: Andreas Schaufler <andreas.schaufler@gmx.de>
      Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
      Acked-by: Michal Hocko <mhocko@kernel.org>
      Cc: Aslan Bakirov <aslan@fb.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Link: http://lkml.kernel.org/r/20200407163840.92263-3-guro@fb.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
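      A sketch of the arch hook this adds: early boot code reserves the CMA
      area at the gigantic-page order (PUD_SHIFT - PAGE_SHIFT is the 1GB
      order on x86-64 and on arm64 with 4K pages); the size itself comes
      from the hugetlb_cma= argument:

      	#ifdef CONFIG_CMA
      		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
      	#endif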
  16. 04 Dec 2019, 1 commit
    • arm64: mm: Fix initialisation of DMA zones on non-NUMA systems · 93b90414
      Authored by Will Deacon
      John reports that the recently merged commit 1a8e1cef ("arm64: use
      both ZONE_DMA and ZONE_DMA32") breaks the boot on his DB845C board:
      
        | Booting Linux on physical CPU 0x0000000000 [0x517f803c]
        | Linux version 5.4.0-mainline-10675-g957a03b9e38f
        | Machine model: Thundercomm Dragonboard 845c
        | [...]
        | Built 1 zonelists, mobility grouping on.  Total pages: -188245
        | Kernel command line: earlycon
        | firmware_class.path=/vendor/firmware/ androidboot.hardware=db845c
        | init=/init androidboot.boot_devices=soc/1d84000.ufshc
        | printk.devkmsg=on buildvariant=userdebug root=/dev/sda2
        | androidboot.bootdevice=1d84000.ufshc androidboot.serialno=c4e1189c
        | androidboot.baseband=sda
        | msm_drm.dsi_display0=dsi_lt9611_1080_video_display:
        | androidboot.slot_suffix=_a skip_initramfs rootwait ro init=/init
        |
        | <hangs indefinitely here>
      
      This is because, when CONFIG_NUMA=n, zone_sizes_init() fails to handle
      memblocks that fall entirely within the ZONE_DMA region and erroneously ends up
      trying to add a negatively-sized region into the following ZONE_DMA32, which is
      later interpreted as a large unsigned region by the core MM code.
      
      Rework the non-NUMA implementation of zone_sizes_init() so that the start
      address of the memblock being processed is adjusted according to the end of the
      previous zone, which is then range-checked before updating the hole information
      of subsequent zones.
      
      Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
      Link: https://lore.kernel.org/lkml/CALAqxLVVcsmFrDKLRGRq7GewcW405yTOxG=KR3csVzQ6bXutkA@mail.gmail.com
      Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
      Reported-by: John Stultz <john.stultz@linaro.org>
      Tested-by: John Stultz <john.stultz@linaro.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
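      A simplified sketch of the reworked loop: each memblock's start is
      clamped to the zone it begins in and then advanced past that zone, so
      a block sitting entirely inside ZONE_DMA can no longer produce a
      negatively-sized ZONE_DMA32 contribution (variable names here are
      illustrative):

      	unsigned long start = memblock_region_memory_base_pfn(reg);
      	unsigned long end = memblock_region_memory_end_pfn(reg);

      	if (start < max_dma_pfn) {
      		unsigned long dma_end = min(end, max_dma_pfn);
      		zhole_size[ZONE_DMA] -= dma_end - start;
      		start = dma_end;	/* range-checked before ZONE_DMA32 */
      	}
      	if (start >= max_dma_pfn && start < max_dma32_pfn) {
      		unsigned long dma32_end = min(end, max_dma32_pfn);
      		zhole_size[ZONE_DMA32] -= dma32_end - start;
      	}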
  17. 07 Nov 2019, 1 commit
  18. 01 Nov 2019, 1 commit
  19. 29 Oct 2019, 1 commit
  20. 16 Oct 2019, 3 commits
  21. 14 Oct 2019, 3 commits
  22. 09 Aug 2019, 4 commits
    • arm64: mm: Introduce 52-bit Kernel VAs · b6d00d47
      Authored by Steve Capper
      Most of the machinery is now in place to enable 52-bit kernel VAs that
      are detectable at boot time.
      
      This patch adds a Kconfig option for 52-bit user and kernel addresses,
      plumbs in the requisite CONFIG_ macros, and sets TCR.T1SZ,
      physvirt_offset and vmemmap at early boot.
      
      To simplify things this patch also removes the 52-bit user/48-bit kernel
      kconfig option.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
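      A sketch of the early-boot assignments mentioned above, made once the
      actual VA size is known (simplified; TCR.T1SZ itself is programmed in
      the assembly boot path):

      	vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
      	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;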
    • arm64: mm: Separate out vmemmap · c8b6d2cc
      Authored by Steve Capper
      vmemmap is a preprocessor definition that depends on a variable,
      memstart_addr. In a later patch we will need to expand the size of
      the VMEMMAP region and optionally modify vmemmap depending upon
      whether or not hardware support is available for 52-bit virtual
      addresses.
      
      This patch changes vmemmap to be a variable. As the old definition
      depended on a variable load, this should not affect performance
      noticeably.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
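      The shape of the change (a sketch):

      	/* before: a preprocessor definition that reads a variable */
      	#define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))

      	/* after: a real variable, assigned once during early boot */
      	struct page *vmemmap;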
    • arm64: mm: Introduce vabits_actual · 5383cc6e
      Authored by Steve Capper
      In order to support 52-bit kernel addresses detectable at boot time, one
      needs to know the actual VA_BITS detected. A new variable vabits_actual
      is introduced in this commit and employed for the KVM hypervisor layout,
      KASAN, fault handling and phys-to/from-virt translation where there
      would normally be compile time constants.
      
      In order to maintain performance in phys_to_virt, another variable
      physvirt_offset is introduced.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
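      A sketch of the two variables and how physvirt_offset keeps
      phys_to_virt() down to a single subtraction (the exact kernel macros
      differ slightly):

      	u64 vabits_actual;	/* VA bits detected at boot */
      	s64 physvirt_offset;	/* PHYS_OFFSET - PAGE_OFFSET */

      	#define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))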
    • arm64: mm: Flip kernel VA space · 14c127c9
      Authored by Steve Capper
      In order to allow for a KASAN shadow that changes size at boot time, one
      must fix KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
      start address. Also, it is highly desirable to maintain the same
      function addresses in the kernel .text between VA sizes. Both of these
      requirements necessitate flipping the kernel address space halves so
      that the direct linear map occupies the lower addresses.
      
      This patch puts the direct linear map in the lower addresses of the
      kernel VA range and everything else in the higher ranges.
      
      We need to adjust:
       *) KASAN shadow region placement logic,
       *) KASAN_SHADOW_OFFSET computation logic,
       *) virt_to_phys, phys_to_virt checks,
       *) page table dumper.
      
      These are all small changes that need to take place atomically, so they
      are bundled into this commit.
      
      As part of the re-arrangement, a 2MB guard region (to preserve alignment
      for the fixed map) is added after the vmemmap. Otherwise the vmemmap
      could intersect with IS_ERR pointers.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
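      A sketch of the flipped layout for a 48-bit VA configuration (the
      addresses illustrate the scheme rather than give a complete map):

      	/* 0xffff000000000000 .. 0xffff7fffffffffff : linear map (lower half) */
      	/* 0xffff800000000000 .. 0xffffffffffffffff : kasan shadow, vmemmap,
      	 *                                            vmalloc, modules, fixmap */
      	#define PAGE_OFFSET	(UL(0xffffffffffffffff) - (UL(1) << VA_BITS) + 1)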
  23. 05 Aug 2019, 1 commit
    • arm64: mm: free the initrd reserved memblock in an aligned manner · 13776f9d
      Authored by Junhua Huang
      We should free the initrd reserved memblock in an aligned manner,
      because the initrd reserves the memblock in an aligned manner
      in arm64_memblock_init().
      Otherwise there are some fragments left in memblock_reserved regions
      after free_initrd_mem(), e.g.:
      /sys/kernel/debug/memblock # cat reserved
         0: 0x0000000080080000..0x00000000817fafff
         1: 0x0000000083400000..0x0000000083ffffff
         2: 0x0000000090000000..0x000000009000407f
         3: 0x00000000b0000000..0x00000000b000003f
         4: 0x00000000b26184ea..0x00000000b2618fff
      Fragments like the ranges from b0000000 to b000003f and from b26184ea
      to b2618fff should be freed.
      
      We can call free_reserved_area() after memblock_free(), as
      free_reserved_area() calls __free_pages(); once that is done the memory
      could be allocated somewhere else, yet memblock and iomem would still
      say this is reserved memory.
      
      Fixes: 05c58752 ("arm64: To remove initrd reserved area entry from memblock")
      Signed-off-by: Junhua Huang <huang.junhua@zte.com.cn>
      Signed-off-by: Will Deacon <will@kernel.org>
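      A sketch of the aligned free described above (close to the patch; error
      handling elided):

      	void __init free_initrd_mem(unsigned long start, unsigned long end)
      	{
      		unsigned long aligned_start, aligned_end;

      		/* free exactly what arm64_memblock_init() reserved */
      		aligned_start = __virt_to_phys(start) & PAGE_MASK;
      		aligned_end = PAGE_ALIGN(__virt_to_phys(end));
      		memblock_free(aligned_start, aligned_end - aligned_start);
      		free_reserved_area((void *)start, (void *)end, 0, "initrd");
      	}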
  24. 19 Jun 2019, 1 commit
  25. 04 Jun 2019, 1 commit
    • arm64: mm: make CONFIG_ZONE_DMA32 configurable · 0c1f14ed
      Authored by Miles Chen
      This change makes CONFIG_ZONE_DMA32 default y and allows users
      to override it only when CONFIG_EXPERT=y.
      
      For the SoCs that do not need CONFIG_ZONE_DMA32, this is the
      first step towards managing all available memory with a single
      zone (the normal zone), to reduce the overhead of multiple zones.
      
      The change also fixes a build error when CONFIG_NUMA=y and
      CONFIG_ZONE_DMA32=n.
      
      arch/arm64/mm/init.c:195:17: error: use of undeclared identifier 'ZONE_DMA32'
                      max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());
      
      Changes since v1:
      1. only expose CONFIG_ZONE_DMA32 when CONFIG_EXPERT=y
      2. remove redundant IS_ENABLED(CONFIG_ZONE_DMA32)
      
      Cc: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Miles Chen <miles.chen@mediatek.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
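      A sketch of the guarded use that fixes the build error above (an
      IS_ENABLED() check alone cannot help, since the ZONE_DMA32 enumerator
      does not exist when the option is off):

      	#ifdef CONFIG_ZONE_DMA32
      		max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());
      	#endif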
  26. 15 May 2019, 1 commit