1. 09 Aug 2019, 2 commits
    • arm64: mm: Introduce vabits_actual · 5383cc6e
      Steve Capper authored
      In order to support 52-bit kernel addresses detectable at boot time, one
      needs to know the actual VA_BITS detected. A new variable vabits_actual
      is introduced in this commit and employed for the KVM hypervisor layout,
      KASAN, fault handling and phys-to/from-virt translation where there
      would normally be compile time constants.
      
      In order to maintain performance in phys_to_virt, another variable
      physvirt_offset is introduced.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
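      The translation trick is easy to model outside the kernel. The sketch
      below is a minimal userspace illustration, not the kernel's actual
      macros: PAGE_OFFSET and phys_to_virt become functions of a boot-detected
      vabits_actual, with physvirt_offset caching the single subtraction that
      phys_to_virt needs. All numeric values are invented.

          #include <inttypes.h>
          #include <stdint.h>
          #include <stdio.h>

          static uint64_t vabits_actual = 48;  /* detected at boot: 48 or 52 */
          static int64_t physvirt_offset;      /* PHYS_OFFSET - PAGE_OFFSET */

          static uint64_t page_offset(void)
          {
              /* lowest kernel VA for the detected VA size */
              return -(UINT64_C(1) << vabits_actual);
          }

          static uint64_t phys_to_virt(uint64_t pa)
          {
              /* one runtime subtraction instead of a compile-time constant */
              return pa - (uint64_t)physvirt_offset;
          }

          int main(void)
          {
              uint64_t phys_base = UINT64_C(0x80000000);  /* example DRAM base */

              physvirt_offset = (int64_t)(phys_base - page_offset());
              printf("PAGE_OFFSET = 0x%016" PRIx64 "\n", page_offset());
              printf("virt of 0x%" PRIx64 " = 0x%016" PRIx64 "\n",
                     phys_base, phys_to_virt(phys_base));
              return 0;
          }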
    • arm64: mm: Flip kernel VA space · 14c127c9
      Steve Capper authored
      In order to allow for a KASAN shadow that changes size at boot time, one
      must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
      start address. Also, it is highly desirable to maintain the same
      function addresses in the kernel .text between VA sizes. Both of these
      requirements necessitate flipping the kernel address space halves such that
      the direct linear map occupies the lower addresses.
      
      This patch puts the direct linear map in the lower addresses of the
      kernel VA range and everything else in the higher ranges.
      
      We need to adjust:
       *) KASAN shadow region placement logic,
       *) KASAN_SHADOW_OFFSET computation logic,
       *) virt_to_phys, phys_to_virt checks,
       *) page table dumper.
      
      These are all small changes, that need to take place atomically, so they
      are bundled into this commit.
      
      As part of the re-arrangement, a guard region of 2MB (to preserve
      alignment for fixed map) is added after the vmemmap. Otherwise the
      vmemmap could intersect with IS_ERR pointers.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
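      One payoff of the flip is easy to sketch: with the linear map in the
      lower half of the kernel VA range, a single bit test classifies a kernel
      VA. The helper below is illustrative only (the kernel's real check lives
      in its memory headers), and VA_BITS is fixed at 48 for the example.

          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          #define VA_BITS 48  /* compile-time here; boot-detected in the kernel */

          static bool is_linear_map_addr(uint64_t va)
          {
              /* lower-half kernel VAs (the linear map) have this bit clear */
              return (va & (UINT64_C(1) << (VA_BITS - 1))) == 0;
          }

          int main(void)
          {
              uint64_t page_offset = -(UINT64_C(1) << VA_BITS);  /* linear map base */
              uint64_t kimage_va = UINT64_C(0xffff800010000000); /* e.g. .text */

              printf("linear map?  %d\n", is_linear_map_addr(page_offset)); /* 1 */
              printf("kernel text? %d\n", !is_linear_map_addr(kimage_va));  /* 1 */
              return 0;
          }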
  2. 19 Jun 2019, 1 commit
  3. 04 Jun 2019, 1 commit
    • arm64: mm: make CONFIG_ZONE_DMA32 configurable · 0c1f14ed
      Miles Chen authored
      This change makes CONFIG_ZONE_DMA32 default to y and allows users
      to override it only when CONFIG_EXPERT=y.
      
      For the SoCs that do not need CONFIG_ZONE_DMA32, this is the
      first step towards managing all available memory in a single
      zone (the normal zone) to reduce the overhead of multiple zones.
      
      The change also fixes a build error when CONFIG_NUMA=y and
      CONFIG_ZONE_DMA32=n.
      
      arch/arm64/mm/init.c:195:17: error: use of undeclared identifier 'ZONE_DMA32'
                      max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());
      
      Changes since v1:
      1. only expose CONFIG_ZONE_DMA32 when CONFIG_EXPERT=y
      2. remove redundant IS_ENABLED(CONFIG_ZONE_DMA32)
      
      Cc: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Miles Chen <miles.chen@mediatek.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
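      The build error is worth a small model: when the option is off, the
      ZONE_DMA32 enumerator does not exist at all, so code touching it must be
      compiled out rather than merely skipped at runtime, which is why a plain
      IS_ENABLED() runtime check cannot cure it. A standalone sketch with
      invented PFN limits:

          #include <stdio.h>

          #define CONFIG_ZONE_DMA32 1  /* flip to 0: the DMA32 code vanishes */

          enum zone_type {
          #if CONFIG_ZONE_DMA32
              ZONE_DMA32,
          #endif
              ZONE_NORMAL,
              MAX_NR_ZONES
          };

          int main(void)
          {
              unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

          #if CONFIG_ZONE_DMA32
              max_zone_pfns[ZONE_DMA32] = 0x100000;  /* example 4 GiB limit */
          #endif
              max_zone_pfns[ZONE_NORMAL] = 0x200000; /* example top of RAM */
              printf("number of zones: %d\n", MAX_NR_ZONES);
              return 0;
          }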
  4. 15 May 2019, 2 commits
  5. 23 Apr 2019, 1 commit
  6. 04 Apr 2019, 1 commit
  7. 03 Apr 2019, 1 commit
    • arm64: setup min_low_pfn · 19d6242e
      Miles Chen authored
      When debugging with CONFIG_PAGE_OWNER, I noticed that min_low_pfn
      on arm64 is always zero, so page owner scanning has to start from PFN 0
      and loop for a while before it sees the first valid pfn
      (see read_page_owner()).
      
      Setup min_low_pfn to save some loops.
      
      Before setting min_low_pfn:
      
      [   21.265602] min_low_pfn=0, *ppos=0
      Page allocated via order 0, mask 0x100cca(GFP_HIGHUSER_MOVABLE)
      PFN 262144 type Movable Block 512 type Movable Flags 0x8001e(referenced|uptodate|dirty|lru|swapbacked)
      prep_new_page+0x13c/0x140
      get_page_from_freelist+0x254/0x1068
      __alloc_pages_nodemask+0xd4/0xcb8
      
      After setting min_low_pfn:
      
      [   11.025787] min_low_pfn=262144, *ppos=0
      Page allocated via order 0, mask 0x100cca(GFP_HIGHUSER_MOVABLE)
      PFN 262144 type Movable Block 512 type Movable Flags 0x8001e(referenced|uptodate|dirty|lru|swapbacked)
      prep_new_page+0x13c/0x140
      get_page_from_freelist+0x254/0x1068
      __alloc_pages_nodemask+0xd4/0xcb8
      shmem_alloc_page+0x7c/0xa0
      shmem_alloc_and_acct_page+0x124/0x1e8
      shmem_getpage_gfp.isra.7+0x118/0x878
      shmem_write_begin+0x38/0x68
      Signed-off-by: Miles Chen <miles.chen@mediatek.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
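      A toy model of the effect, not kernel code: scanners such as
      read_page_owner() start at min_low_pfn, so leaving it at zero wastes
      iterations below the first DRAM page. The PFN value mirrors the logs
      above.

          #include <stdio.h>

          static unsigned long min_low_pfn;                    /* 0 before the fix */
          static const unsigned long dram_start_pfn = 262144;  /* first valid PFN */

          static unsigned long wasted_iterations(void)
          {
              unsigned long pfn, n = 0;

              for (pfn = min_low_pfn; pfn < dram_start_pfn; pfn++)
                  n++;  /* scanning PFNs that can never have an owner */
              return n;
          }

          int main(void)
          {
              printf("wasted loops before fix: %lu\n", wasted_iterations());
              min_low_pfn = dram_start_pfn;  /* what the commit arranges at boot */
              printf("wasted loops after fix:  %lu\n", wasted_iterations());
              return 0;
          }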
  8. 01 Apr 2019, 1 commit
  9. 06 Mar 2019, 1 commit
  10. 22 Jan 2019, 1 commit
  11. 04 Jan 2019, 1 commit
  12. 15 Dec 2018, 1 commit
  13. 12 Dec 2018, 1 commit
    • arm64: Add memory hotplug support · 4ab21506
      Robin Murphy authored
      Wire up the basic support for hot-adding memory. Since memory hotplug
      is fairly tightly coupled to sparsemem, we tweak pfn_valid() to also
      cross-check the presence of a section in the manner of the generic
      implementation, before falling back to memblock to check for no-map
      regions within a present section as before. By having arch_add_memory()
      create the linear mapping first, this then makes everything work in the
      way that __add_section() expects.
      
      We expect hotplug to be ACPI-driven, so the swapper_pg_dir updates
      should be safe from races by virtue of the global device hotplug lock.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
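      The ordering described above is easy to sketch. Both helpers below are
      hypothetical stubs standing in for the sparsemem section lookup and the
      memblock nomap query; only the two-level shape of the check matches the
      commit.

          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          #define PAGE_SHIFT 12

          static bool sparse_section_present(unsigned long pfn)
          {
              return pfn >= 0x80000;  /* pretend DRAM starts here */
          }

          static bool memblock_is_map_memory(uint64_t phys)
          {
              return phys != UINT64_C(0x80000000);  /* pretend one nomap page */
          }

          static bool pfn_valid_sketch(unsigned long pfn)
          {
              if (!sparse_section_present(pfn))  /* covers hot(un)plugged ranges */
                  return false;
              return memblock_is_map_memory((uint64_t)pfn << PAGE_SHIFT);
          }

          int main(void)
          {
              printf("%d %d %d\n", pfn_valid_sketch(0x1000),
                     pfn_valid_sketch(0x80000), pfn_valid_sketch(0x90000));
              return 0;
          }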
  14. 11 Dec 2018, 1 commit
    • arm64: mm: Introduce DEFAULT_MAP_WINDOW · 363524d2
      Steve Capper authored
      We wish to introduce a 52-bit virtual address space for userspace but
      maintain compatibility with software that assumes the maximum VA space
      size is 48 bits.
      
      In order to achieve this, on 52-bit VA systems, we make mmap behave as
      if it were running on a 48-bit VA system (unless userspace explicitly
      requests a VA where addr[51:48] != 0).
      
      On a system running a 52-bit userspace we need TASK_SIZE to represent
      the 52-bit limit as it is used in various places to distinguish between
      kernelspace and userspace addresses.
      
      Thus we need a new limit for mmap, stack, ELF loader and EFI (which uses
      TTBR0) to represent the non-extended VA space.
      
      This patch introduces DEFAULT_MAP_WINDOW and DEFAULT_MAP_WINDOW_64 and
      switches the appropriate logic to use that instead of TASK_SIZE.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
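      A minimal sketch of the opt-in rule, loosely modelled on the
      arch_get_mmap_end() idea from this series; the constants and the helper
      name are illustrative, not the kernel's exact definitions.

          #include <inttypes.h>
          #include <stdint.h>
          #include <stdio.h>

          #define DEFAULT_MAP_WINDOW (UINT64_C(1) << 48)
          #define TASK_SIZE_52       (UINT64_C(1) << 52)

          static uint64_t mmap_search_limit(uint64_t hint)
          {
              /* a hint with addr[51:48] != 0 unlocks the full 52-bit space */
              return hint > DEFAULT_MAP_WINDOW ? TASK_SIZE_52 : DEFAULT_MAP_WINDOW;
          }

          int main(void)
          {
              printf("no hint:   0x%" PRIx64 "\n", mmap_search_limit(0));
              printf("high hint: 0x%" PRIx64 "\n",
                     mmap_search_limit(UINT64_C(1) << 50));
              return 0;
          }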
  15. 10 Dec 2018, 1 commit
    • arm64: move memstart_addr export inline · 03ef055f
      Mark Rutland authored
      Since we define memstart_addr in a C file, we can have the export
      immediately after the definition of the symbol, as we do elsewhere.
      
      As a step towards removing arm64ksyms.c, move the export of
      memstart_addr to init.c, where the symbol is defined.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  16. 27 Nov 2018, 2 commits
  17. 09 Nov 2018, 1 commit
  18. 31 Oct 2018, 3 commits
    • mm: remove include/linux/bootmem.h · 57c8a661
      Mike Rapoport authored
      Move remaining definitions and declarations from include/linux/bootmem.h
      into include/linux/memblock.h and remove the redundant header.
      
      The includes were replaced with the semantic patch below and then
      semi-automated removal of duplicated '#include <linux/memblock.h>'.
      
      @@
      @@
      - #include <linux/bootmem.h>
      + #include <linux/memblock.h>
      
      [sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
        Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
      [sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
        Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
      [sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
        Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
      Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memblock: rename free_all_bootmem to memblock_free_all · c6ffc5ca
      Mike Rapoport authored
      The conversion is done using
      
      sed -i 's@free_all_bootmem@memblock_free_all@' \
          $(git grep -l free_all_bootmem)
      
      Link: http://lkml.kernel.org/r/1536927045-23536-26-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memblock: replace free_bootmem{_node} with memblock_free · 2013288f
      Mike Rapoport authored
      The free_bootmem and free_bootmem_node are merely wrappers for
      memblock_free. Replace their usage with a call to memblock_free using the
      following semantic patch:
      
      @@
      expression e1, e2, e3;
      @@
      (
      - free_bootmem(e1, e2)
      + memblock_free(e1, e2)
      |
      - free_bootmem_node(e1, e2, e3)
      + memblock_free(e2, e3)
      )
      
      Link: http://lkml.kernel.org/r/1536927045-23536-24-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 21 Sep 2018, 1 commit
    • arm64: Kconfig: Remove ARCH_HAS_HOLES_MEMORYMODEL · 8a695a58
      James Morse authored
      include/linux/mmzone.h describes ARCH_HAS_HOLES_MEMORYMODEL as
      relevant when parts of the memmap have been free()d. This would
      happen on systems where memory is smaller than a sparsemem-section,
      and the extra struct pages are expensive. pfn_valid() on these
      systems returns true for the whole sparsemem-section, so an extra
      memmap_valid_within() check is needed.
      
      On arm64 we have nomap memory, so always provide pfn_valid() to test
      for nomap pages. This means ARCH_HAS_HOLES_MEMORYMODEL's extra checks
      are already rolled up into pfn_valid().
      
      Remove it.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  20. 17 Aug 2018, 1 commit
    • arm64: mm: check for upper PAGE_SHIFT bits in pfn_valid() · 5ad356ea
      Greg Hackmann authored
      ARM64's pfn_valid() shifts away the upper PAGE_SHIFT bits of the input
      before seeing if the PFN is valid.  This leads to false positives when
      some of the upper bits are set, but the lower bits match a valid PFN.
      
      For example, the following userspace code looks up a bogus entry in
      /proc/kpageflags:
      
          int pagemap = open("/proc/self/pagemap", O_RDONLY);
          int pageflags = open("/proc/kpageflags", O_RDONLY);
          uint64_t pfn, val;
      
          lseek64(pagemap, [...], SEEK_SET);
          read(pagemap, &pfn, sizeof(pfn));
          if (pfn & (1UL << 63)) {        /* valid PFN */
              pfn &= ((1UL << 55) - 1);   /* clear flag bits */
              pfn |= (1UL << 55);
              lseek64(pageflags, pfn * sizeof(uint64_t), SEEK_SET);
              read(pageflags, &val, sizeof(val));
          }
      
      On ARM64 this causes the userspace process to crash with SIGSEGV rather
      than reading (1 << KPF_NOPAGE).  kpageflags_read() treats the offset as
      valid, and stable_page_flags() will try to access an address between the
      user and kernel address ranges.
      
      Fixes: c1cc1552 ("arm64: MMU initialisation")
      Cc: stable@vger.kernel.org
      Signed-off-by: Greg Hackmann <ghackmann@google.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
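      The fix amounts to checking that the pfn-to-physical conversion is
      lossless before consulting memblock. A hedged standalone sketch, with
      memblock_is_map_memory() stubbed out:

          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          #define PAGE_SHIFT 12

          static bool memblock_is_map_memory(uint64_t phys)  /* stub */
          {
              (void)phys;
              return true;
          }

          static bool pfn_valid_sketch(uint64_t pfn)
          {
              uint64_t addr = pfn << PAGE_SHIFT;

              if ((addr >> PAGE_SHIFT) != pfn)  /* upper bits were shifted away */
                  return false;
              return memblock_is_map_memory(addr);
          }

          int main(void)
          {
              uint64_t bogus = (UINT64_C(1) << 55) | 262144;  /* flag bit still set */

              printf("%d %d\n", pfn_valid_sketch(262144), pfn_valid_sketch(bogus));
              return 0;
          }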
  21. 25 Jul 2018, 1 commit
    • arm64: fix vmemmap BUILD_BUG_ON() triggering on !vmemmap setups · 7b0eb6b4
      Johannes Weiner authored
      Arnd reports the following arm64 randconfig build error with the PSI
      patches that add another page flag:
      
        /git/arm-soc/arch/arm64/mm/init.c: In function 'mem_init':
        /git/arm-soc/include/linux/compiler.h:357:38: error: call to
        '__compiletime_assert_618' declared with attribute error: BUILD_BUG_ON
        failed: sizeof(struct page) > (1 << STRUCT_PAGE_MAX_SHIFT)
      
      The additional page flag causes other information stored in
      page->flags to get bumped into their own struct page member:
      
        #if SECTIONS_WIDTH+ZONES_WIDTH+NODES_SHIFT+LAST_CPUPID_SHIFT <=
        BITS_PER_LONG - NR_PAGEFLAGS
        #define LAST_CPUPID_WIDTH LAST_CPUPID_SHIFT
        #else
        #define LAST_CPUPID_WIDTH 0
        #endif
      
        #if defined(CONFIG_NUMA_BALANCING) && LAST_CPUPID_WIDTH == 0
        #define LAST_CPUPID_NOT_IN_PAGE_FLAGS
        #endif
      
      which in turn causes the struct page size to exceed the size set in
      STRUCT_PAGE_MAX_SHIFT. This value is an estimate used to size the
      VMEMMAP page array according to address space and struct page size.
      
      However, the check is performed - and triggers here - on a !VMEMMAP
      config, which consumes an additional 22 page bits for the sparse
      section id. When VMEMMAP is enabled, those bits are returned, cpupid
      doesn't need its own member, and the page passes the VMEMMAP check.
      
      Restrict that check to the situation it was meant to check: that we
      are sizing the VMEMMAP page array correctly.
      
      Says Arnd:
      
          Further experiments show that the build error already existed before,
          but was only triggered with larger values of CONFIG_NR_CPU and/or
          CONFIG_NODES_SHIFT that might be used in actual configurations but
          not in randconfig builds.
      
          With longer CPU and node masks, I could recreate the problem with
          kernels as old as linux-4.7 when arm64 NUMA support got added.
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Tested-by: Arnd Bergmann <arnd@arndb.de>
      Cc: stable@vger.kernel.org
      Fixes: 1a2db300 ("arm64, numa: Add NUMA support for arm64 platforms.")
      Fixes: 3e1907d5 ("arm64: mm: move vmemmap region right below the linear region")
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
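      The restricted check can be modelled compactly with C11 _Static_assert
      standing in for BUILD_BUG_ON; the config macro, the struct layout and
      the size budget are all illustrative.

          #include <stdio.h>

          #define CONFIG_SPARSEMEM_VMEMMAP 1  /* set to 0 to model !vmemmap */
          #define STRUCT_PAGE_MAX_SHIFT 6     /* 64-byte budget, illustrative */

          struct page { unsigned long flags; void *p[6]; };  /* 56 bytes on LP64 */

          #if CONFIG_SPARSEMEM_VMEMMAP
          /* only assert the size where the vmemmap array sizing depends on it */
          _Static_assert(sizeof(struct page) <= (1 << STRUCT_PAGE_MAX_SHIFT),
                         "struct page too big for vmemmap array sizing");
          #endif

          int main(void)
          {
              printf("sizeof(struct page) = %zu\n", sizeof(struct page));
              return 0;
          }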
  22. 15 Jun 2018, 1 commit
  23. 01 May 2018, 1 commit
  24. 27 Mar 2018, 1 commit
  25. 07 Mar 2018, 1 commit
    • arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size) · 1f85b42a
      Catalin Marinas authored
      Commit 97303480 ("arm64: Increase the max granular size") increased
      the cache line size to 128 to match Cavium ThunderX, apparently for some
      performance benefit which could not be confirmed. This change, however,
      has an impact on network packet allocation in certain
      circumstances, requiring slightly over a 4K page with a significant
      performance degradation.
      
      This patch reverts L1_CACHE_SHIFT back to 6 (64-byte cache line) while
      keeping ARCH_DMA_MINALIGN at 128. The cache_line_size() function was
      changed to default to ARCH_DMA_MINALIGN in the absence of a meaningful
      CTR_EL0.CWG bit field.
      
      In addition, if a system with ARCH_DMA_MINALIGN < CTR_EL0.CWG is
      detected, the kernel will force swiotlb bounce buffering for all
      non-coherent devices since DMA cache maintenance on sub-CWG ranges is
      not safe, leading to data corruption.
      
      Cc: Tirumalesh Chalamarla <tchalamarla@cavium.com>
      Cc: Timur Tabi <timur@codeaurora.org>
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
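      A sketch of the fallback rule, assuming CTR_EL0.CWG encodes the
      writeback granule as log2 of a number of 4-byte words; read_ctr_cwg()
      is a hypothetical stand-in for the register read.

          #include <stdio.h>

          #define ARCH_DMA_MINALIGN 128
          #define L1_CACHE_BYTES 64  /* L1_CACHE_SHIFT == 6 after the revert */

          static unsigned int read_ctr_cwg(void)  /* hypothetical register read */
          {
              return 0;  /* 0 means the CPU does not report a granule */
          }

          static unsigned int cache_line_size_sketch(void)
          {
              unsigned int cwg = read_ctr_cwg();

              /* no meaningful CWG: fall back to the conservative DMA alignment */
              return cwg ? 4U << cwg : ARCH_DMA_MINALIGN;
          }

          int main(void)
          {
              printf("L1_CACHE_BYTES  = %u\n", L1_CACHE_BYTES);
              printf("cache_line_size = %u\n", cache_line_size_sketch());
              return 0;
          }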
  26. 19 Jan 2018, 1 commit
  27. 16 Jan 2018, 1 commit
  28. 15 Jan 2018, 1 commit
  29. 12 Dec 2017, 1 commit
    • arm64: Initialise high_memory global variable earlier · f24e5834
      Steve Capper authored
      The high_memory global variable is used by
      cma_declare_contiguous(.) before it is defined.
      
      We don't notice this as we compute __pa(high_memory - 1), and it looks
      like we're processing a VA from the direct linear map.
      
      This problem becomes apparent when we flip the kernel virtual address
      space and the linear map is moved to the bottom of the kernel VA space.
      
      This patch moves the initialisation of high_memory before it is used.
      
      Cc: <stable@vger.kernel.org>
      Fixes: f7426b98 ("mm: cma: adjust address limit to avoid hitting low/high memory boundary")
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  30. 06 Apr 2017, 4 commits
  31. 17 Jan 2017, 1 commit
    • arm64: Fix swiotlb fallback allocation · 524dabe1
      Alexander Graf authored
      Commit b67a8b29 introduced logic to skip swiotlb allocation when all memory
      is DMA accessible anyway.
      
      While this is a great idea, __dma_alloc still calls the swiotlb code
      unconditionally to allocate memory when there is no CMA memory
      available; the swiotlb code is called to ensure that we at least try
      get_free_pages().
      
      Without initialization, swiotlb allocation code tries to access io_tlb_list
      which is NULL. That results in a stack trace like this:
      
        Unable to handle kernel NULL pointer dereference at virtual address 00000000
        [...]
        [<ffff00000845b908>] swiotlb_tbl_map_single+0xd0/0x2b0
        [<ffff00000845be94>] swiotlb_alloc_coherent+0x10c/0x198
        [<ffff000008099dc0>] __dma_alloc+0x68/0x1a8
        [<ffff000000a1b410>] drm_gem_cma_create+0x98/0x108 [drm]
        [<ffff000000abcaac>] drm_fbdev_cma_create_with_funcs+0xbc/0x368 [drm_kms_helper]
        [<ffff000000abcd84>] drm_fbdev_cma_create+0x2c/0x40 [drm_kms_helper]
        [<ffff000000abc040>] drm_fb_helper_initial_config+0x238/0x410 [drm_kms_helper]
        [<ffff000000abce88>] drm_fbdev_cma_init_with_funcs+0x98/0x160 [drm_kms_helper]
        [<ffff000000abcf90>] drm_fbdev_cma_init+0x40/0x58 [drm_kms_helper]
        [<ffff000000b47980>] vc4_kms_load+0x90/0xf0 [vc4]
        [<ffff000000b46a94>] vc4_drm_bind+0xec/0x168 [vc4]
        [...]
      
      Thankfully, the swiotlb code just learned how to skip allocations via
      the FORCE_NO option. This patch configures the swiotlb code to use that
      if we decide not to initialize the swiotlb framework.
      
      Fixes: b67a8b29 ("arm64: mm: only initialize swiotlb when necessary")
      Signed-off-by: Alexander Graf <agraf@suse.de>
      CC: Jisheng Zhang <jszhang@marvell.com>
      CC: Geert Uytterhoeven <geert+renesas@glider.be>
      CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
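      The shape of the fix can be modelled in a few lines. Names follow the
      commit's description rather than exact kernel symbols: the point is
      recording the decision not to initialise swiotlb so that later fallback
      paths fail cleanly instead of dereferencing uninitialised state.

          #include <stdio.h>

          enum swiotlb_mode { SWIOTLB_NORMAL, SWIOTLB_NO_FORCE };

          static enum swiotlb_mode swiotlb_force = SWIOTLB_NORMAL;
          static int swiotlb_initialised;

          static void mem_init_sketch(int all_memory_dma_accessible)
          {
              if (all_memory_dma_accessible)
                  swiotlb_force = SWIOTLB_NO_FORCE;  /* remember: never bounce */
              else
                  swiotlb_initialised = 1;           /* i.e. swiotlb_init() ran */
          }

          static int swiotlb_map_sketch(void)
          {
              if (swiotlb_force == SWIOTLB_NO_FORCE)
                  return -1;  /* fail cleanly instead of touching io_tlb_list */
              return swiotlb_initialised ? 0 : -1;
          }

          int main(void)
          {
              mem_init_sketch(1);
              printf("bounce map result: %d\n", swiotlb_map_sketch());
              return 0;
          }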
  32. 12 Jan 2017, 1 commit