1. 10 August 2016, 2 commits
    • ARM: 8591/1: mm: use fully constructed struct pages for EFI pgd allocations · 61444cde
      Authored by Ard Biesheuvel
      The late_alloc() PTE allocation function used by create_mapping_late()
      does not call pgtable_page_ctor() on PTE pages it allocates, leaving
      the per-page spinlock uninitialized.
      
      Since generic page table manipulation code may assume that translation
      table pages that are not owned by init_mm are covered by fully
      constructed struct pages, the following crash may occur with the new
      UEFI memory attributes table code.
      
        efi: memattr: Processing EFI Memory Attributes table:
        efi: memattr:  0x0000ffa16000-0x0000ffa82fff [Runtime Code       |RUN|  |  |XP|  |  |  |   |  |  |  |  ]
        Unable to handle kernel NULL pointer dereference at virtual address 00000010
        pgd = c0204000
        [00000010] *pgd=00000000
        Internal error: Oops: 5 [#1] SMP ARM
        Modules linked in:
        CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4-00063-g3882aa7b340b #361
        Hardware name: Generic DT based system
        task: ed858000 ti: ed842000 task.ti: ed842000
        PC is at __lock_acquire+0xa0/0x19a8
        ...
        [<c038c830>] (__lock_acquire) from [<c038e4f8>] (lock_acquire+0x6c/0x88)
        [<c038e4f8>] (lock_acquire) from [<c0c06134>] (_raw_spin_lock+0x2c/0x3c)
        [<c0c06134>] (_raw_spin_lock) from [<c0410384>] (apply_to_page_range+0xe8/0x238)
        [<c0410384>] (apply_to_page_range) from [<c1205f34>] (efi_set_mapping_permissions+0x54/0x5c)
        [<c1205f34>] (efi_set_mapping_permissions) from [<c1247474>] (efi_memattr_apply_permissions+0x2b8/0x378)
        [<c1247474>] (efi_memattr_apply_permissions) from [<c1248258>] (arm_enable_runtime_services+0x1f0/0x22c)
        [<c1248258>] (arm_enable_runtime_services) from [<c0301f0c>] (do_one_initcall+0x44/0x174)
        [<c0301f0c>] (do_one_initcall) from [<c1200d10>] (kernel_init_freeable+0x90/0x1e8)
        [<c1200d10>] (kernel_init_freeable) from [<c0bff690>] (kernel_init+0x8/0x114)
        [<c0bff690>] (kernel_init) from [<c0307ed0>] (ret_from_fork+0x14/0x24)
      
      The crash occurs because the UEFI page tables are not owned by
      init_mm, and yet are not covered by fully constructed struct pages,
      violating the assumption described above.
      
      Given that the UEFI subsystem is currently the only user of
      create_mapping_late(), add an unconditional call to pgtable_page_ctor() to
      late_alloc().
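      
      A minimal sketch of that fix, following the existing late_alloc()
      shape in arch/arm/mm/mmu.c (a sketch, not a verbatim quote of the
      merged patch):
      
      	static void __init *late_alloc(unsigned long sz)
      	{
      		void *ptr = (void *)__get_free_pages(PGALLOC_GFP,
      						     get_order(sz));
      
      		/* Construct the struct page so the per-page ptl exists. */
      		if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
      			BUG();
      		return ptr;
      	}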
      
      Fixes: 9fc68b71 ("ARM/efi: Apply strict permissions for UEFI Runtime Services regions")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8590/1: sanity_check_meminfo(): avoid overflow on vmalloc_limit · b9a01989
      Authored by Nicolas Pitre
      To limit the amount of mapped low memory, we determine a physical address
      boundary based on the start of the vmalloc area using __pa().
      Strictly speaking, the vmalloc area location is arbitrary and does not
      necessarily correspond to a valid physical address. For example, if
      
      	PAGE_OFFSET = 0x80000000
      	PHYS_OFFSET = 0x90000000
      	vmalloc_min = 0xf0000000
      
      then __pa(vmalloc_min) overflows and wraps to 0 when phys_addr_t is
      a 32-bit type. The code that follows then concludes that all of
      physical memory lies above that boundary, and no low memory gets
      mapped at all:
      
      |[...]
      |Machine model: Freescale i.MX51 NA04 Board
      |Ignoring RAM at 0x90000000-0xb0000000 (!CONFIG_HIGHMEM)
      |Consider using a HIGHMEM enabled kernel.
      
      To avoid this problem, let's make vmalloc_limit a 64-bit value at
      all times and determine that boundary explicitly without using
      __pa().
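      
      A hedged sketch of that computation, following the shape of the
      surrounding sanity_check_meminfo() code (the exact merged expression
      may differ):
      
      	static u64 __initdata vmalloc_limit;
      
      	/* 64-bit arithmetic, so a PHYS_OFFSET near 4GiB cannot wrap it */
      	vmalloc_limit = (u64)(uintptr_t)vmalloc_min - PAGE_OFFSET +
      			PHYS_OFFSET;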
      Reported-by: Emil Renner Berthing <kernel@esmil.dk>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Emil Renner Berthing <kernel@esmil.dk>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  2. 18 March 2016, 1 commit
  3. 11 February 2016, 1 commit
    • ARM: 8518/1: Use correct symbols for XIP_KERNEL · 02afa9a8
      Authored by Chris Brandt
      For an XIP build, _etext does not represent the end of the
      binary image that needs to stay mapped into the MODULES_VADDR area.
      Years ago, data came before text in the memory map. However,
      now that the order is text/init/data, an XIP kernel needs to map
      up to the start of data in order to avoid cutting off parts of
      the kernel that are still needed.
      We map only up to the beginning of data because data has already
      been copied to RAM, so there is no reason to keep its ROM copy
      mapped anymore.
      A new symbol is created to make it clear what is being referred
      to.
      
      This fixes the bug where you might lose the end of your kernel area
      after page table setup is complete.
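      
      A hedged sketch of the resulting XIP mapping in devicemaps_init()
      (the new end-of-image symbol is assumed here to be _exiprom, and the
      exact rounding in the merged patch may differ):
      
      	map.pfn = __phys_to_pfn(CONFIG_XIP_PHYS_ADDR & SECTION_MASK);
      	map.virtual = MODULES_VADDR;
      	/* Map up to the start of data rather than only to _etext. */
      	map.length = ((unsigned long)_exiprom - map.virtual +
      		      ~SECTION_MASK) & SECTION_MASK;
      	map.type = MT_ROM;
      	create_mapping(&map);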
      Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  4. 04 January 2016, 1 commit
  5. 14 December 2015, 6 commits
  6. 02 December 2015, 1 commit
    • ARM: make xscale iwmmxt code multiplatform aware · d33c43ac
      Authored by Arnd Bergmann
      In a multiplatform configuration, we may end up building a kernel for
      both Marvell PJ1 and an ARMv4 CPU implementation. In that case, the
      xscale-cp0 code is built with gcc -march=armv4{,t}, which results in a
      build error from the coprocessor instructions.
      
      Since we know this code will only have to run on an actual xscale
      processor, we can simply build the entire file for ARMv5TE.
      
      Related to this, we need to handle the iWMMXT initialization sequence
      differently during boot, to ensure we don't try to touch xscale-specific
      registers on other CPUs from the xscale_cp0_init initcall.
      cpu_is_xscale() used to be hardcoded to '1' in any configuration that
      enables any XScale-compatible core, but this breaks once we can have a
      combined kernel with MMP1 and something else.
      
      In this patch, I replace the existing cpu_is_xscale() macro with a new
      cpu_is_xscale_family() macro that evaluates true for xscale, xsc3 and
      mohawk, which makes the behavior more deterministic.
      
      The two existing users of cpu_is_xscale() are modified accordingly,
      but this slightly changes behavior for kernels that enable CPU_MOHAWK
      without also enabling CPU_XSCALE or CPU_XSC3. Previously, these would
      leave PMD_BIT4 in the page tables untouched; now they clear it, as we
      have always done for kernels that enable both MOHAWK and the support
      for the older CPU types.
      
      Since the previous behavior was inconsistent, I assume it was
      unintentional.
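      
      An illustrative, hedged sketch of the caller-side change (the
      mem_types/PMD_BIT4 names follow build_mem_type_table() in
      arch/arm/mm/mmu.c; this is not the verbatim hunk):
      
      	/* before: only XScale proper took this path */
      	if (cpu_is_xscale())
      		mem_types[i].prot_sect &= ~PMD_BIT4;
      
      	/* after: xscale, xsc3 and mohawk all behave the same way */
      	if (cpu_is_xscale_family())
      		mem_types[i].prot_sect &= ~PMD_BIT4;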
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  7. 20 October 2015, 1 commit
  8. 22 September 2015, 1 commit
  9. 21 August 2015, 1 commit
  10. 18 August 2015, 1 commit
    • ARM: 8415/1: early fixmap support for earlycon · a5f4c561
      Authored by Stefan Agner
      Add early fixmap support, initially to provide a permanent, fixed
      mapping for the early console. A temporary early pte is created and
      then migrated to a permanent mapping in paging_init. This is also
      needed because the attributes may change as the memory types are
      initialized. The 3MiB range of fixmap spans two pte tables, but
      currently only one pte is created for early fixmap support.
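      
      For reference, a hedged sketch of how such a fixmap slot is consumed
      once it exists, using the generic fixmap helpers (the slot name
      FIX_EARLYCON_MEM_BASE, the uart_phys variable and the surrounding
      earlycon plumbing are assumptions here):
      
      	void __iomem *base;
      
      	/* Map the UART page at its fixed virtual slot, then use it. */
      	set_fixmap_io(FIX_EARLYCON_MEM_BASE, uart_phys & PAGE_MASK);
      	base = (void __iomem *)(fix_to_virt(FIX_EARLYCON_MEM_BASE) +
      				(uart_phys & ~PAGE_MASK));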
      
      Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c since
      the index for kmap does not start at zero anymore. This reverts
      4221e2e6 ("ARM: 8031/1: fixmap: remove FIX_KMAP_BEGIN and
      FIX_KMAP_END") to some extent.
      
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Rob Herring <robh@kernel.org>
      Signed-off-by: Stefan Agner <stefan@agner.ch>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  11. 29 June 2015, 2 commits
  12. 02 June 2015, 5 commits
  13. 14 May 2015, 1 commit
    • ARM: 8356/1: mm: handle non-pmd-aligned end of RAM · 965278dc
      Authored by Mark Rutland
      At boot time we round the memblock limit down to section size in an
      attempt to ensure that we will have mapped this RAM with section
      mappings prior to allocating from it. When mapping RAM we iterate over
      PMD-sized chunks, creating these section mappings.
      
      Section mappings are only created when the end of a chunk is aligned to
      section size. Unfortunately, with classic page tables (where PMD_SIZE is
      2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
      size the first 1M will not be mapped despite having been accounted for
      in the memblock limit. This has been observed to result in page tables
      being allocated from unmapped memory, causing boot-time hangs.
      
      This patch modifies the memblock limit rounding to always round down to
      PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
      will round the memblock limit down to a 2M boundary, matching the limits
      on section mappings, and preventing allocations from unmapped memory.
      For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
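      
      A minimal sketch of the adjusted rounding, matching the shape of the
      code at the end of sanity_check_meminfo() (a sketch, not the verbatim
      hunk):
      
      	/*
      	 * Round down to PMD_SIZE rather than SECTION_SIZE: with classic
      	 * page tables PMD_SIZE is 2 * SECTION_SIZE, matching the
      	 * granularity at which section mappings can actually be created.
      	 */
      	memblock_limit = round_down(memblock_limit, PMD_SIZE);
      	memblock_set_current_limit(memblock_limit);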
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Stefan Agner <stefan@agner.ch>
      Tested-by: Stefan Agner <stefan@agner.ch>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Tested-by: Hans de Goede <hdegoede@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  14. 08 January 2015, 1 commit
    • ARM: 8253/1: mm: use phys_addr_t type in map_lowmem() for kernel mem region · ac084688
      Authored by Grygorii Strashko
      The local variables kernel_x_start and kernel_x_end are currently
      defined using the 'unsigned long' type, which is wrong because they
      represent a physical memory range and will be calculated incorrectly
      when LPAE is enabled. As a result, the code that follows in
      map_lowmem() does not work correctly.
      
      For example, Keystone 2 boot is broken because
       kernel_x_start == 0x0000 0000
       kernel_x_end   == 0x0080 0000
      
      instead of
       kernel_x_start == 0x0000 0008 0000 0000
       kernel_x_end   == 0x0000 0008 0080 0000
      and as a result the whole of low memory is mapped with MT_MEMORY_RW
      permissions by the following code (since start >= kernel_x_end):
      		} else if (start >= kernel_x_end) {
      			map.pfn = __phys_to_pfn(start);
      			map.virtual = __phys_to_virt(start);
      			map.length = end - start;
      			map.type = MT_MEMORY_RW;
      
      			create_mapping(&map);
      		}
      
      Hence, fix it by using the phys_addr_t type for the kernel_x_start
      and kernel_x_end variables.
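      
      A sketch of the corrected declarations in map_lowmem(), assuming the
      rounding helpers used by the surrounding code:
      
      	phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
      	phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);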
      Tested-by: Murali Karicheri <m-karicheri2@ti.com>
      Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  15. 04 December 2014, 1 commit
  16. 03 December 2014, 1 commit
  17. 21 November 2014, 1 commit
  18. 17 October 2014, 3 commits
  19. 26 September 2014, 1 commit
  20. 02 August 2014, 1 commit
  21. 29 July 2014, 1 commit
  22. 02 June 2014, 5 commits
  23. 01 June 2014, 1 commit