1. 14 Dec 2015, 1 commit
  2. 20 Oct 2015, 1 commit
  3. 22 Sep 2015, 1 commit
  4. 21 Aug 2015, 1 commit
  5. 18 Aug 2015, 1 commit
    • ARM: 8415/1: early fixmap support for earlycon · a5f4c561
      Committed by Stefan Agner
      Add early fixmap support, initially to provide a permanent, fixed
      mapping for the early console. A temporary, early pte is created
      and later migrated to a permanent mapping in paging_init. This is
      also needed because the attributes may change as the memory types
      are initialized. The 3 MiB fixmap range spans two pte tables, but
      currently only one pte is created for early fixmap support.
      
      Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c, since
      the kmap indices no longer start at zero (see the sketch after this
      entry). This partially reverts 4221e2e6 ("ARM: 8031/1: fixmap:
      remove FIX_KMAP_BEGIN and FIX_KMAP_END").
      
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Rob Herring <robh@kernel.org>
      Signed-off-by: Stefan Agner <stefan@agner.ch>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      a5f4c561
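
      A minimal user-space sketch of the index arithmetic described above.
      The constants are modeled on arch/arm/include/asm/fixmap.h of this
      era and KM_TYPE_NR is illustrative; treat it as a model of the
      calculation, not the kernel code itself:

        #include <stdio.h>

        #define PAGE_SHIFT   12
        #define FIXADDR_END  0xfff00000UL
        #define FIXADDR_TOP  (FIXADDR_END - (1UL << PAGE_SHIFT))

        /* Permanent slots come first; kmap slots start after them. */
        enum fixed_addresses {
        	FIX_EARLYCON_MEM_BASE,              /* index 0: earlycon */
        	__end_of_permanent_fixed_addresses,
        	FIX_KMAP_BEGIN = __end_of_permanent_fixed_addresses,
        };

        #define KM_TYPE_NR 16       /* per-CPU kmap slots (illustrative) */

        /* asm-generic/fixmap.h style: slots count down from FIXADDR_TOP */
        static unsigned long fix_to_virt(unsigned int idx)
        {
        	return FIXADDR_TOP - ((unsigned long)idx << PAGE_SHIFT);
        }

        int main(void)
        {
        	unsigned int cpu = 0, type = 0;
        	unsigned int idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * cpu;

        	printf("earlycon slot: %#lx\n", fix_to_virt(FIX_EARLYCON_MEM_BASE));
        	printf("kmap slot:     %#lx\n", fix_to_virt(idx));
        	/* Without the FIX_KMAP_BEGIN offset, idx would be 0 here and
        	 * the kmap mapping would collide with the earlycon slot. */
        	return 0;
        }
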
  6. 29 Jun 2015, 2 commits
  7. 02 Jun 2015, 5 commits
  8. 14 May 2015, 1 commit
    • ARM: 8356/1: mm: handle non-pmd-aligned end of RAM · 965278dc
      Committed by Mark Rutland
      At boot time we round the memblock limit down to section size in an
      attempt to ensure that we will have mapped this RAM with section
      mappings prior to allocating from it. When mapping RAM we iterate over
      PMD-sized chunks, creating these section mappings.
      
      Section mappings are only created when the end of a chunk is aligned
      to section size. Unfortunately, with classic page tables (where
      PMD_SIZE is 2 * SECTION_SIZE) this means that if a chunk is between
      1 MiB and 2 MiB in size, the first 1 MiB will not be mapped despite
      having been accounted for in the memblock limit. This has been
      observed to result in page tables being allocated from unmapped
      memory, causing boot-time hangs.
      
      This patch modifies the memblock limit rounding to always round down
      to PMD_SIZE instead of SECTION_SIZE. For the classic MMU this means
      that we round the memblock limit down to a 2 MiB boundary, matching
      the limits on section mappings and preventing allocations from
      unmapped memory (see the sketch after this entry). For LPAE there is
      no change, as PMD_SIZE == SECTION_SIZE.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Stefan Agner <stefan@agner.ch>
      Tested-by: Stefan Agner <stefan@agner.ch>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Tested-by: Hans de Goede <hdegoede@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      965278dc
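
      A minimal stand-alone sketch of the rounding change; the end-of-RAM
      address is hypothetical, and round_down mirrors the kernel helper of
      the same name for power-of-two alignments:

        #include <stdio.h>

        #define SECTION_SIZE (1UL << 20)   /* 1 MiB, classic page tables */
        #define PMD_SIZE     (2UL << 20)   /* 2 MiB = 2 * SECTION_SIZE   */
        #define round_down(x, y) ((x) & ~((y) - 1))

        int main(void)
        {
        	/* Hypothetical end of RAM: 1 MiB-aligned, not 2 MiB-aligned. */
        	unsigned long end = 0x2300000;

        	printf("limit @ SECTION_SIZE: %#lx\n", round_down(end, SECTION_SIZE));
        	printf("limit @ PMD_SIZE:     %#lx\n", round_down(end, PMD_SIZE));
        	/* 0x2300000 vs 0x2200000: the old rounding leaves the final
        	 * 1 MiB inside the memblock limit even though the 2 MiB PMD
        	 * iteration in map_lowmem() never creates a mapping for it. */
        	return 0;
        }
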
  9. 08 Jan 2015, 1 commit
    • ARM: 8253/1: mm: use phys_addr_t type in map_lowmem() for kernel mem region · ac084688
      Committed by Grygorii Strashko
      The local variables kernel_x_start and kernel_x_end are currently
      defined as 'unsigned long', which is wrong: they describe a physical
      memory range and are calculated incorrectly when LPAE is enabled. As
      a result, the code that follows in map_lowmem() does not work
      correctly.
      
      For example, Keystone 2 boot is broken because
       kernel_x_start == 0x0000 0000
       kernel_x_end   == 0x0080 0000
      
      instead of
       kernel_x_start == 0x0000 0008 0000 0000
       kernel_x_end   == 0x0000 0008 0080 0000
      and as a result the whole of low memory is mapped with MT_MEMORY_RW
      permissions by this code (since start >= kernel_x_end):
      		} else if (start >= kernel_x_end) {
      			map.pfn = __phys_to_pfn(start);
      			map.virtual = __phys_to_virt(start);
      			map.length = end - start;
      			map.type = MT_MEMORY_RW;
      
      			create_mapping(&map);
      		}
      
      Hence, fix it by using the phys_addr_t type for kernel_x_start and
      kernel_x_end (a stand-alone illustration of the truncation follows
      this entry).
      Tested-by: Murali Karicheri <m-karicheri2@ti.com>
      Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ac084688
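
      A minimal stand-alone illustration of the truncation, using the
      Keystone 2 addresses quoted above. On 32-bit ARM 'unsigned long' is
      32 bits, while phys_addr_t becomes a 64-bit type when CONFIG_ARM_LPAE
      is enabled:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
        	uint64_t phys = 0x0000000800000000ULL;  /* LPAE physical address */

        	uint32_t truncated = (uint32_t)phys;    /* like 32-bit 'unsigned long' */
        	uint64_t preserved = phys;              /* like phys_addr_t under LPAE */

        	printf("unsigned long: 0x%08x\n", truncated);        /* 0x00000000 */
        	printf("phys_addr_t:   0x%016llx\n",
        	       (unsigned long long)preserved);  /* 0x0000000800000000 */
        	return 0;
        }
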
  10. 04 Dec 2014, 1 commit
  11. 03 Dec 2014, 1 commit
  12. 21 Nov 2014, 1 commit
  13. 17 Oct 2014, 3 commits
  14. 26 Sep 2014, 1 commit
  15. 02 Aug 2014, 1 commit
  16. 29 Jul 2014, 1 commit
  17. 02 Jun 2014, 5 commits
  18. 01 Jun 2014, 1 commit
  19. 26 May 2014, 1 commit
  20. 23 Apr 2014, 1 commit
  21. 10 Feb 2014, 2 commits
    • ARM: 7954/1: mm: remove remaining domain support from ARMv6 · b6ccb980
      Committed by Will Deacon
      CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is
      because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do
      not have hardware thread registers. The lack of these registers requires
      the kernel to update the vectors page at each context switch in order to
      write a new TLS pointer. This write must be done via the userspace
      mapping, since aliasing caches can lead to expensive flushing when using
      kmap. Finally, this requires the vectors page to be mapped r/w for
      kernel and r/o for user, which has implications for things like
      put_user: it must trigger CoW appropriately when targeting user
      pages.
      
      The upshot of all this is that a v6/v7 kernel makes use of domains to
      segregate kernel and user memory accesses. This has the nasty
      side-effect of making device mappings executable, which has been
      observed to cause subtle bugs on recent cores (e.g. Cortex-A15
      performing a speculative instruction fetch from the GIC and acking an
      interrupt in the process).
      
      This patch solves the problem by removing the remaining domain
      support from ARMv6. A new memory type is added specifically for the
      vectors page, which allows that page (and only that page) to be
      mapped as user r/o, kernel r/w; all other user r/o pages are also
      mapped as kernel r/o (see the sketch after this entry). Patch
      co-developed with Russell King.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      b6ccb980
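
      The resulting permission split, modeled below as a small stand-alone
      table; the row labels are descriptive only, not the kernel's MT_*
      memory-type names:

        #include <stdio.h>

        struct perm { const char *mapping; const char *user; const char *kernel; };

        int main(void)
        {
        	static const struct perm table[] = {
        		/* New memory type: only the vectors page gets this split,
        		 * so the privileged TLS write at context switch succeeds
        		 * through the user mapping while userspace sees it r/o. */
        		{ "vectors page",         "r/o", "r/w" },
        		{ "other user r/o pages", "r/o", "r/o" },
        	};

        	for (unsigned int i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        		printf("%-22s user=%-4s kernel=%s\n",
        		       table[i].mapping, table[i].user, table[i].kernel);
        	return 0;
        }
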
    • ARM: 7950/1: mm: Fix stage-2 device memory attributes · 4d9c5b89
      Committed by Christoffer Dall
      The stage-2 memory attributes are distinct from both the Hyp memory
      attributes and the stage-1 memory attributes. We were using the
      stage-1 memory attributes for stage-2 mappings, causing device
      mappings to be mapped as normal memory. Add the S2 equivalent
      defines for memory attributes, and fix the comments explaining the
      defines while at it.
      
      Add a prot_pte_s2 field to the mem_type struct and fill out the
      field for device mappings accordingly (see the sketch after this
      entry).
      
      Cc: <stable@vger.kernel.org>	[3.9+]
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4d9c5b89
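
      A stand-alone sketch of the stage-2 attribute encodings involved.
      MemAttr[3:0] occupies bits [5:2] of an LPAE stage-2 descriptor and
      the values below follow the ARMv7 VMSA; the S2_MT_* names are
      invented for this sketch, loosely modeled on the kernel's
      L_PTE_S2_MT_* style rather than copied from the tree:

        #include <stdio.h>
        #include <stdint.h>

        #define S2_MT_SHIFT 2                    /* MemAttr[3:0] -> bits [5:2] */
        #define S2_MT(x)    ((uint64_t)(x) << S2_MT_SHIFT)

        #define S2_MT_DEV_SHARED    S2_MT(0x1)   /* Device memory          */
        #define S2_MT_UNCACHED      S2_MT(0x5)   /* Normal, non-cacheable  */
        #define S2_MT_WRITETHROUGH  S2_MT(0xa)   /* Normal, write-through  */
        #define S2_MT_WRITEBACK     S2_MT(0xf)   /* Normal, write-back     */

        int main(void)
        {
        	/* Reusing the stage-1 attribute bits here (the bug being fixed)
        	 * would present a device region to stage 2 as normal memory,
        	 * letting the CPU cache or reorder accesses to the device. */
        	printf("device attr bits:    %#llx\n",
        	       (unsigned long long)S2_MT_DEV_SHARED);
        	printf("normal WB attr bits: %#llx\n",
        	       (unsigned long long)S2_MT_WRITEBACK);
        	return 0;
        }
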
  22. 11 Dec 2013, 4 commits
  23. 14 Nov 2013, 1 commit
  24. 11 Oct 2013, 1 commit
  25. 01 Aug 2013, 1 commit