1. 08 March 2019, 1 commit
    • arch: simplify several early memory allocations · b63a07d6
      Authored by Mike Rapoport
      There are several early memory allocations in arch/ code that use
      memblock_phys_alloc() to allocate memory, convert the returned physical
      address to the virtual address and then set the allocated memory to
      zero.
      
      Exactly the same behaviour can be achieved simply by calling
      memblock_alloc(): it allocates the memory in the same way as
      memblock_phys_alloc(), then it performs the phys_to_virt() conversion
      and clears the allocated memory.
      
      Replace the longer sequence with a simpler call to memblock_alloc().
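      
      As an illustration, a minimal sketch of the pattern being replaced
      (the helper name here is hypothetical):
      
        #include <linux/init.h>
        #include <linux/io.h>          /* phys_to_virt() */
        #include <linux/memblock.h>
        #include <linux/string.h>
      
        /* Before: allocate, convert, then clear by hand. */
        static void *__init early_zeroed_alloc_old(phys_addr_t size,
                                                   phys_addr_t align)
        {
                void *ptr = phys_to_virt(memblock_phys_alloc(size, align));
      
                memset(ptr, 0, size);
                return ptr;
        }
      
        /* After: memblock_alloc() allocates, performs the phys_to_virt()
         * conversion and zeroes the memory itself. */
        static void *__init early_zeroed_alloc_new(phys_addr_t size,
                                                   phys_addr_t align)
        {
                return memblock_alloc(size, align);
        }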
      
      Link: http://lkml.kernel.org/r/1546248566-14910-6-git-send-email-rppt@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Michal Simek <michal.simek@xilinx.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 31 October 2018, 1 commit
    • memblock: rename memblock_alloc{_nid,_try_nid} to memblock_phys_alloc* · 9a8dd708
      Authored by Mike Rapoport
      Make it explicit that the caller gets a physical address rather than a
      virtual one.
      
      This will also allow using the memblock_alloc prefix for memblock
      allocations returning a virtual address, which is done in the
      following patches.
      
      The conversion is done using the following semantic patch:
      
      @@
      expression e1, e2, e3;
      @@
      (
      - memblock_alloc(e1, e2)
      + memblock_phys_alloc(e1, e2)
      |
      - memblock_alloc_nid(e1, e2, e3)
      + memblock_phys_alloc_nid(e1, e2, e3)
      |
      - memblock_alloc_try_nid(e1, e2, e3)
      + memblock_phys_alloc_try_nid(e1, e2, e3)
      )
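      
      For callers, only the spelling changes; a hypothetical call site
      before and after the conversion:
      
        phys_addr_t pa;
      
        pa = memblock_alloc(PAGE_SIZE, PAGE_SIZE);       /* before */
        pa = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);  /* after  */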
      
      Link: http://lkml.kernel.org/r/1536927045-23536-7-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 30 June 2017, 1 commit
    • ARM: 8685/1: ensure memblock-limit is pmd-aligned · 9e25ebfe
      Authored by Doug Berger
      The pmd containing memblock_limit is cleared by prepare_page_table(),
      which creates the opportunity for early_alloc() to allocate unmapped
      memory if memblock_limit is not pmd-aligned, causing a boot-time hang.
      
      Commit 965278dc ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
      attempted to resolve this problem, but there is a path through the
      adjust_lowmem_bounds() routine where if all memory regions start and
      end on pmd-aligned addresses the memblock_limit will be set to
      arm_lowmem_limit.
      
      Since arm_lowmem_limit can be affected by the vmalloc early parameter,
      the value of arm_lowmem_limit may not be pmd-aligned. This commit
      corrects this oversight such that memblock_limit is always rounded
      down to pmd-alignment.
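      
      The essence of the fix, as a sketch (simplified from
      adjust_lowmem_bounds(); helper names as in the kernel):
      
        if (!memblock_limit)
                memblock_limit = arm_lowmem_limit;
      
        /* Round down to a PMD boundary so that early page table
         * allocations always come from section-mapped memory. */
        memblock_limit = round_down(memblock_limit, PMD_SIZE);
        memblock_set_current_limit(memblock_limit);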
      
      Fixes: 965278dc ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
      Signed-off-by: Doug Berger <opendmb@gmail.com>
      Suggested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  4. 20 April 2017, 1 commit
    • ARM: 8667/3: Fix memory attribute inconsistencies when using fixmap · b089c31c
      Authored by Jon Medhurst
      To cope with the variety in ARM architectures and configurations, the
      pagetable attributes for kernel memory are generated at runtime to match
      the system the kernel finds itself on. This calculated value is stored
      in pgprot_kernel.
      
      However, when early fixmap support was added for ARM (commit
      a5f4c561) the attributes used for mappings were hard coded because
      pgprot_kernel is not set up early enough. Unfortunately, when fixmap is
      used after early boot this means the memory being mapped can have
      different attributes to existing mappings, potentially leading to
      unpredictable behaviour. A specific problem also exists because the
      hard-coded values do not include the 'shareable' attribute, which means
      that on systems where this matters (e.g. those with multiple CPU clusters) the
      cache contents for a memory location can become inconsistent between
      CPUs.
      
      To resolve these issues we change fixmap to use the same memory
      attributes (from pgprot_kernel) that the rest of the kernel uses. To
      enable this we need to refactor the initialisation code so
      build_mem_type_table() is called early enough. Note that this relies on
      early param parsing for memory type overrides passed via the kernel
      command line, so we need to make sure this call still happens after
      parse_early_param().
      
      [ardb: keep early_fixmap_init() before param parsing, for earlycon]
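      
      A sketch of the resulting guard in __set_fixmap() (simplified; the
      exact macro names are quoted from memory and should be treated as
      illustrative):
      
        /* Until build_mem_type_table() has computed pgprot_kernel, only
         * device mappings (e.g. earlycon) may be established via fixmap. */
        if (WARN_ON(pgprot_val(prot) != pgprot_val(FIXMAP_PAGE_IO) &&
                    pgprot_val(pgprot_kernel) == 0))
                return;
      
        set_pte_at(NULL, vaddr, pte, pfn_pte(phys >> PAGE_SHIFT, prot));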
      
      Fixes: a5f4c561 ("ARM: 8415/1: early fixmap support for earlycon")
      Cc: <stable@vger.kernel.org> # v4.3+
      Tested-by: afzal mohammed <afzal.mohd.ma@gmail.com>
      Signed-off-by: Jon Medhurst <tixy@linaro.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  5. 09 April 2017, 1 commit
  6. 28 February 2017, 3 commits
  7. 12 September 2016, 1 commit
    • ARM: 8612/1: LPAE: initialize cache policy correctly · 6b3142b2
      Authored by Stefan Agner
      The cachepolicy variable gets initialized using a masked pmd
      value. So far, the pmd has been masked with flags valid for the
      2-level page table format, but the 3-level format requires a
      different mask. On LPAE, this led to a wrong assumption about which
      initial cache policy had been used. Later, a check forces the
      cache policy to writealloc and prints the following warning:
      Forcing write-allocate cache policy for SMP
      
      This patch introduces a new definition, PMD_SECT_CACHE_MASK, for
      both page table formats, which masks in all cache flags in both
      cases.
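      
      Roughly, the two definitions look like this (a sketch; the bit
      layout follows the respective page table formats):
      
        #ifdef CONFIG_ARM_LPAE
        /* 3-level format: the cache policy lives in the AttrIndx field. */
        #define PMD_SECT_CACHE_MASK     (_AT(pmdval_t, 7) << 2)
        #else
        /* 2-level format: TEX[0], C and B encode the cache policy. */
        #define PMD_SECT_CACHE_MASK     (PMD_SECT_TEX(1) | \
                                         PMD_SECT_BUFFERABLE | \
                                         PMD_SECT_CACHEABLE)
        #endif
      
        /* cachepolicy is then derived from pmd_val(pmd) & PMD_SECT_CACHE_MASK. */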
      Signed-off-by: Stefan Agner <stefan@agner.ch>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  8. 12 August 2016, 1 commit
  9. 10 August 2016, 2 commits
    • ARM: 8591/1: mm: use fully constructed struct pages for EFI pgd allocations · 61444cde
      Authored by Ard Biesheuvel
      The late_alloc() PTE allocation function used by create_mapping_late()
      does not call pgtable_page_ctor() on PTE pages it allocates, leaving
      the per-page spinlock uninitialized.
      
      Since generic page table manipulation code may assume that translation
      table pages that are not owned by init_mm are covered by fully
      constructed struct pages, the following crash may occur with the new
      UEFI memory attributes table code.
      
        efi: memattr: Processing EFI Memory Attributes table:
        efi: memattr:  0x0000ffa16000-0x0000ffa82fff [Runtime Code       |RUN|  |  |XP|  |  |  |   |  |  |  |  ]
        Unable to handle kernel NULL pointer dereference at virtual address 00000010
        pgd = c0204000
        [00000010] *pgd=00000000
        Internal error: Oops: 5 [#1] SMP ARM
        Modules linked in:
        CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4-00063-g3882aa7b340b #361
        Hardware name: Generic DT based system
        task: ed858000 ti: ed842000 task.ti: ed842000
        PC is at __lock_acquire+0xa0/0x19a8
        ...
        [<c038c830>] (__lock_acquire) from [<c038e4f8>] (lock_acquire+0x6c/0x88)
        [<c038e4f8>] (lock_acquire) from [<c0c06134>] (_raw_spin_lock+0x2c/0x3c)
        [<c0c06134>] (_raw_spin_lock) from [<c0410384>] (apply_to_page_range+0xe8/0x238)
        [<c0410384>] (apply_to_page_range) from [<c1205f34>] (efi_set_mapping_permissions+0x54/0x5c)
        [<c1205f34>] (efi_set_mapping_permissions) from [<c1247474>] (efi_memattr_apply_permissions+0x2b8/0x378)
        [<c1247474>] (efi_memattr_apply_permissions) from [<c1248258>] (arm_enable_runtime_services+0x1f0/0x22c)
        [<c1248258>] (arm_enable_runtime_services) from [<c0301f0c>] (do_one_initcall+0x44/0x174)
        [<c0301f0c>] (do_one_initcall) from [<c1200d10>] (kernel_init_freeable+0x90/0x1e8)
        [<c1200d10>] (kernel_init_freeable) from [<c0bff690>] (kernel_init+0x8/0x114)
        [<c0bff690>] (kernel_init) from [<c0307ed0>] (ret_from_fork+0x14/0x24)
      
      The crash is due to the fact that the UEFI page tables are not owned by
      init_mm, yet are also not covered by fully constructed struct pages,
      violating the assumption described above.
      
      Given that the UEFI subsystem is currently the only user of
      create_mapping_late(), add an unconditional call to pgtable_page_ctor() to
      late_alloc().
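      
      In essence (a sketch of the fixed allocator; PGALLOC_GFP is the
      arch's page table allocation flag at the time):
      
        static void *__init late_alloc(unsigned long sz)
        {
                void *ptr = (void *)__get_free_pages(PGALLOC_GFP,
                                                     get_order(sz));
      
                /* Construct the struct page, initialising the per-page
                 * ptl spinlock that apply_to_page_range() later takes. */
                if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
                        BUG();
      
                return ptr;
        }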
      
      Fixes: 9fc68b71 ("ARM/efi: Apply strict permissions for UEFI Runtime Services regions")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8590/1: sanity_check_meminfo(): avoid overflow on vmalloc_limit · b9a01989
      Authored by Nicolas Pitre
      To limit the amount of mapped low memory, we determine a physical address
      boundary based on the start of the vmalloc area using __pa().
      Strictly speaking, the vmalloc area location is arbitrary and does not
      necessarily correspond to a valid physical address. For example, if
      
      	PAGE_OFFSET = 0x80000000
      	PHYS_OFFSET = 0x90000000
      	vmalloc_min = 0xf0000000
      
      then __pa(vmalloc_min) overflows and returns a wrapped 0 when phys_addr_t
      is a 32-bit type. Then the code that follows determines that the entire
      physical memory is above that boundary and no low memory gets mapped at
      all:
      
      |[...]
      |Machine model: Freescale i.MX51 NA04 Board
      |Ignoring RAM at 0x90000000-0xb0000000 (!CONFIG_HIGHMEM)
      |Consider using a HIGHMEM enabled kernel.
      
      To avoid this problem let's make vmalloc_limit a 64-bit value all the
      time and determine that boundary explicitly without using __pa().
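      
      With the sample values above, 0xf0000000 - 0x80000000 + 0x90000000 =
      0x100000000, which wraps to 0 in 32 bits. A sketch of the
      overflow-safe computation:
      
        /* Keep the arithmetic in 64 bits rather than going through
         * __pa(), whose phys_addr_t result is 32 bits without LPAE. */
        u64 vmalloc_limit = (u64)(uintptr_t)vmalloc_min - PAGE_OFFSET +
                            PHYS_OFFSET;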
      Reported-by: Emil Renner Berthing <kernel@esmil.dk>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Emil Renner Berthing <kernel@esmil.dk>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  10. 18 March 2016, 1 commit
  11. 11 February 2016, 1 commit
    • ARM: 8518/1: Use correct symbols for XIP_KERNEL · 02afa9a8
      Authored by Chris Brandt
      For an XIP build, _etext does not represent the end of the
      binary image that needs to stay mapped into the MODULES_VADDR area.
      Years ago, data came before text in the memory map. However,
      now that the order is text/init/data, an XIP_KERNEL needs to map
      up to the data location in order to keep from cutting off
      parts of the kernel that are needed.
      We only map up to the beginning of data because data has already been
      copied, so there's no reason to keep it around anymore.
      A new symbol is created to make it clear what it is we are referring
      to.
      
      This fixes the bug where you might lose the end of your kernel area
      after page table setup is complete.
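      
      A sketch of the resulting mapping extent (the end-of-image symbol is
      the one the patch introduces; its name here is an assumption):
      
        #ifdef CONFIG_XIP_KERNEL
                /* Map past _etext up to the start of the already copied
                 * data section (symbol name assumed). */
                map.length = PAGE_ALIGN((unsigned long)_exiprom - map.virtual);
        #else
                map.length = PAGE_ALIGN((unsigned long)_etext - map.virtual);
        #endif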
      Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  12. 04 January 2016, 1 commit
  13. 14 December 2015, 6 commits
  14. 02 December 2015, 1 commit
    • ARM: make xscale iwmmxt code multiplatform aware · d33c43ac
      Authored by Arnd Bergmann
      In a multiplatform configuration, we may end up building a kernel for
      both Marvell PJ1 and an ARMv4 CPU implementation. In that case, the
      xscale-cp0 code is built with gcc -march=armv4{,t}, which results in a
      build error from the coprocessor instructions.
      
      Since we know this code will only have to run on an actual xscale
      processor, we can simply build the entire file for ARMv5TE.
      
      Related to this, we need to handle the iWMMXT initialization sequence
      differently during boot, to ensure we don't try to touch xscale
      specific registers on other CPUs from the xscale_cp0_init initcall.
      cpu_is_xscale() used to be hardcoded to '1' in any configuration that
      enables any XScale-compatible core, but this breaks once we can have a
      combined kernel with MMP1 and something else.
      
      In this patch, I replace the existing cpu_is_xscale() macro with a new
      cpu_is_xscale_family() macro that evaluates true for xscale, xsc3 and
      mohawk, which makes the behavior more deterministic.
      
      The two existing users of cpu_is_xscale() are modified accordingly,
      but slightly change behavior for kernels that enable CPU_MOHAWK without
      also enabling CPU_XSCALE or CPU_XSC3. Previously, these would leave leave
      PMD_BIT4 in the page tables untouched, now they clear it as we've always
      done for kernels that enable both MOHAWK and the support for the older
      CPU types.
      
      Since the previous behavior was inconsistent, I assume it was
      unintentional.
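      
      A sketch of the new predicate's semantics (illustrative; the real
      definition is a compile-time macro):
      
        /* True for any XScale-derived core the kernel is built to
         * support: XScale, XSC3 and Mohawk, instead of a blanket '1'. */
        #if defined(CONFIG_CPU_XSCALE) || defined(CONFIG_CPU_XSC3) || \
            defined(CONFIG_CPU_MOHAWK)
        #define cpu_is_xscale_family()  1
        #else
        #define cpu_is_xscale_family()  0
        #endif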
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  15. 20 October 2015, 1 commit
  16. 22 September 2015, 1 commit
  17. 21 August 2015, 1 commit
  18. 18 August 2015, 1 commit
    • ARM: 8415/1: early fixmap support for earlycon · a5f4c561
      Authored by Stefan Agner
      Add early fixmap support, initially to provide a permanent, fixed
      mapping for the early console. A temporary, early pte is
      created which is migrated to a permanent mapping in paging_init.
      This is also needed since the attributes may change as the memory
      types are initialized. The 3MiB range of fixmap spans two pte
      tables, but currently only one pte table is created for early fixmap
      support.
      
      Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c since
      the index for kmap does not start at zero anymore. This reverts
      4221e2e6 ("ARM: 8031/1: fixmap: remove FIX_KMAP_BEGIN and
      FIX_KMAP_END") to some extent.
      
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Rob Herring <robh@kernel.org>
      Signed-off-by: Stefan Agner <stefan@agner.ch>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  19. 29 June 2015, 2 commits
  20. 02 June 2015, 5 commits
  21. 14 May 2015, 1 commit
    • ARM: 8356/1: mm: handle non-pmd-aligned end of RAM · 965278dc
      Authored by Mark Rutland
      At boot time we round the memblock limit down to section size in an
      attempt to ensure that we will have mapped this RAM with section
      mappings prior to allocating from it. When mapping RAM we iterate over
      PMD-sized chunks, creating these section mappings.
      
      Section mappings are only created when the end of a chunk is aligned to
      section size. Unfortunately, with classic page tables (where PMD_SIZE is
      2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
      size the first 1M will not be mapped despite having been accounted for
      in the memblock limit. This has been observed to result in page tables
      being allocated from unmapped memory, causing boot-time hangs.
      
      This patch modifies the memblock limit rounding to always round down to
      PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
      will round the memblock limit down to a 2M boundary, matching the limits
      on section mappings, and preventing allocations from unmapped memory.
      For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
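      
      The interaction can be seen in the mapping loop (a sketch,
      simplified from alloc_init_pmd() in arch/arm/mm/mmu.c):
      
        /* A section mapping is only emitted when the chunk's start, end
         * and physical address are all section-aligned; anything else
         * falls back to PTEs, whose tables are allocated from memblock
         * and must therefore already be mapped. */
        if (type->prot_sect && ((addr | next | phys) & ~SECTION_MASK) == 0)
                __map_init_section(pmd, addr, next, phys, type);
        else
                alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys), type);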
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Stefan Agner <stefan@agner.ch>
      Tested-by: Stefan Agner <stefan@agner.ch>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Tested-by: Hans de Goede <hdegoede@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  22. 08 January 2015, 1 commit
    • ARM: 8253/1: mm: use phys_addr_t type in map_lowmem() for kernel mem region · ac084688
      Authored by Grygorii Strashko
      The local variables kernel_x_start and kernel_x_end are currently
      defined using the 'unsigned long' type, which is wrong because they
      represent a physical memory range and will be calculated incorrectly
      if LPAE is enabled. As a result, all following code in map_lowmem()
      will not work correctly.
      
      For example, Keystone 2 boot is broken because
       kernel_x_start == 0x0000 0000
       kernel_x_end   == 0x0080 0000
      
      instead of
       kernel_x_start == 0x0000 0008 0000 0000
       kernel_x_end   == 0x0000 0008 0080 0000
      and as a result the whole of low memory will be mapped with MT_MEMORY_RW
      permissions by this code path (start >= kernel_x_end):
      		} else if (start >= kernel_x_end) {
      			map.pfn = __phys_to_pfn(start);
      			map.virtual = __phys_to_virt(start);
      			map.length = end - start;
      			map.type = MT_MEMORY_RW;
      
      			create_mapping(&map);
      		}
      
      Hence, fix it by using phys_addr_t type for variables kernel_x_start
      and kernel_x_end.
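      
      The fix itself is a two-line type change (initializers sketched as
      in map_lowmem()):
      
        phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
        phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);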
      Tested-by: Murali Karicheri <m-karicheri2@ti.com>
      Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  23. 04 December 2014, 1 commit
  24. 03 December 2014, 1 commit
  25. 21 November 2014, 1 commit
  26. 17 October 2014, 2 commits