1. 22 Jan 2015, 2 commits
  2. 14 Jan 2015, 1 commit
    • arm64: remove broken cachepolicy code · 26a945ca
      Committed by Mark Rutland
      The cachepolicy kernel parameter was intended to aid in the debugging of
      coherency issues, but it is fundamentally broken for several reasons:
      
        * On SMP platforms, only the boot CPU's tcr_el1 is altered. Secondary
          CPUs may therefore differ w.r.t. the attributes they apply to
          MT_NORMAL memory, resulting in a loss of coherency.
      
        * The cache maintenance using flush_dcache_all (based on Set/Way
          operations) is not guaranteed to empty a given CPU's cache hierarchy
          while said CPU has caches enabled; it cannot empty the caches of
          other coherent PEs, nor is it guaranteed to flush data to the PoC
          even when caches are disabled.
      
       * The TLBs are not invalidated around the modification of MAIR_EL1 and
         TCR_EL1, as required by the architecture (as both are permitted to be
         cached in a TLB). This may result in CPUs using attributes other than
         those expected for some memory accesses, resulting in a loss of
         coherency.
      
        * Exclusive accesses are not architecturally guaranteed to function as
          expected on memory marked as Write-Through or Non-Cacheable. Thus
          changing the attributes of MT_NORMAL away from the (architecturally
          safe) defaults may cause uses of these instructions (e.g. atomics) to
          behave erratically.
      
      Given this, the cachepolicy code cannot be used for debugging purposes
      as it alone is likely to cause coherency issues. This patch removes the
      broken cachepolicy code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
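      The third bullet above deserves a concrete illustration: whenever
      MAIR_EL1 (or TCR_EL1) is rewritten, the architecture requires any
      copies that may be cached in a TLB to be invalidated. A minimal
      sketch of that sequence, as a hypothetical helper in kernel-style
      inline assembly (not part of the removed cachepolicy code, and it
      only handles the local CPU, not the SMP problem from the first
      bullet):

          /* Hypothetical helper, for illustration only. */
          static inline void write_mair_el1(unsigned long mair)
          {
                  asm volatile(
                  "       msr     mair_el1, %0\n" /* install new attributes */
                  "       isb\n"                  /* make the write visible */
                  "       tlbi    vmalle1\n"      /* drop stale cached copies */
                  "       dsb     nsh\n"          /* wait for the invalidation */
                  "       isb\n"                  /* resynchronize the pipeline */
                  : : "r" (mair) : "memory");
          }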
  3. 13 Jan 2015, 1 commit
  4. 12 Jan 2015, 2 commits
  5. 25 Nov 2014, 1 commit
  6. 13 Nov 2014, 1 commit
  7. 07 Nov 2014, 1 commit
  8. 25 Oct 2014, 1 commit
  9. 16 Sep 2014, 1 commit
  10. 23 Jul 2014, 1 commit
    • arm64: mm: Implement 4 levels of translation tables · c79b954b
      Committed by Jungseok Lee
      This patch implements 4 levels of translation tables, since 3 levels
      of page tables with 4KB pages cannot support the 40-bit physical
      address space described in [1], due to the following issue.
      
      The kernel logical memory map with 4KB pages + 3 levels
      (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region
      from 544GB to 1024GB described in [1]. Specifically, the ARM64 kernel
      fails to create a mapping for this region in the map_mem function,
      since __phys_to_virt overflows for these addresses.
      
      If an SoC design follows [1], RAM beyond the first 32GB is placed
      from 544GB upwards; even a 64GB system is supposed to use the region
      from 544GB to 576GB for its second 32GB of RAM. The natural solution
      is to enable 4 levels of page tables, avoiding hacks to
      __virt_to_phys and __phys_to_virt.
      
      However, it is recommended that 4 levels of page tables be enabled
      only if the memory map is too sparse or there is about 512GB of RAM.
      
      References
      ----------
      [1]: Principles of ARM Memory Maps, White Paper, Issue C
      Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
      Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
      Acked-by: Kukjin Kim <kgene.kim@samsung.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Reviewed-by: Steve Capper <steve.capper@linaro.org>
      [catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
      [catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
      [catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
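      To make the overflow concrete, here is a small self-contained
      demonstration using the 39-bit VA constants quoted above (a 256GB
      linear map starting at 0xffffffc000000000 and a RAM base of 544GB).
      The helper mirrors the shape of the kernel's __phys_to_virt, but
      this is illustrative arithmetic, not kernel code:

          #include <stdint.h>
          #include <stdio.h>

          /* 39-bit VA layout from the text: a 256GB kernel linear map. */
          #define PAGE_OFFSET 0xffffffc000000000ULL
          /* RAM base at 544GB, as in the memory map of [1]. */
          #define PHYS_OFFSET (544ULL << 30)

          static uint64_t phys_to_virt(uint64_t phys)
          {
                  return phys - PHYS_OFFSET + PAGE_OFFSET;
          }

          int main(void)
          {
                  /* 544GB lands at the very start of the linear map: fine. */
                  printf("%#llx\n", (unsigned long long)phys_to_virt(544ULL << 30));
                  /* 900GB needs 356GB of linear map (> 256GB): the sum wraps
                   * past 2^64 and yields a bogus low address (0x1900000000). */
                  printf("%#llx\n", (unsigned long long)phys_to_virt(900ULL << 30));
                  return 0;
          }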
  11. 09 May 2014, 2 commits
    • arm64: mm: Create gigabyte kernel logical mappings where possible · 206a2a73
      Committed by Steve Capper
      We have the capability to map 1GB level 1 blocks when using a 4K
      granule.
      
      This patch adjusts the create_mapping logic such that, when mapping
      physical memory on boot, we attempt to use a 1GB block if both the VA
      and PA start and end are 1GB aligned. This reduces the number of
      lookup levels required to resolve a kernel logical address and also
      reduces TLB pressure on cores that support 1GB TLB entries. (A sketch
      of the alignment test follows this entry.)
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Tested-by: Jungseok Lee <jays.lee@samsung.com>
      [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
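      The alignment test implied above can be sketched with kernel-style
      names (illustrative, not the verbatim patch): with the 4K granule, a
      level 1 block is usable only when no bits below the 1GB boundary are
      set in the virtual range or the physical address.

          /* Illustrative sketch, not the verbatim patch. */
          if (PAGE_SHIFT == 12 && ((addr | next | phys) & ~PUD_MASK) == 0)
                  /* 1GB block: addr, next and phys are all 1GB aligned. */
                  set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC));
          else
                  /* Otherwise build the next-level (pmd) table as before. */
                  alloc_init_pmd(pud, addr, next, phys);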
    • arm64: Clean up the default pgprot setting · a501e324
      Committed by Catalin Marinas
      The primary aim of this patchset is to remove the pgprot_default and
      prot_sect_default global variables and rely strictly on predefined
      values. The original goal was to be able to run SMP kernels on UP
      hardware by not setting the Shareability bit. However, we are
      unlikely to see UP ARMv8 hardware, and even if we do, the
      Shareability bit is no longer assumed to disable cacheable accesses.
      
      A side effect is that the device mappings now have the Shareability
      attribute set. The hardware, however, should ignore it since Device
      accesses are always Outer Shareable.
      
      Following the removal of the two global variables, there is some PROT_*
      macro reshuffling and cleanup, including the __PAGE_* macros (replaced
      by PAGE_*).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
  12. 04 May 2014, 1 commit
  13. 01 May 2014, 1 commit
  14. 08 Apr 2014, 2 commits
  15. 05 Feb 2014, 1 commit
  16. 28 Aug 2013, 1 commit
  17. 14 Jun 2013, 1 commit
    • ARM64: mm: Restore memblock limit when map_mem finished. · f6bc87c3
      Committed by Steve Capper
      In paging_init the memblock limit is set to restrict any addresses
      returned by early_alloc to fit within the initial direct kernel
      mapping in swapper_pg_dir. This allows map_mem to allocate puds,
      pmds and ptes from the initial direct kernel mapping.
      
      The limit stays low after paging_init() though, meaning any
      bootmem allocations will be from a restricted subset of memory.
      Gigabyte huge pages, for instance, are normally allocated from
      bootmem as their order (18) is too large for the default buddy
      allocator (MAX_ORDER = 11).
      
      This patch restores the memblock limit when map_mem has finished,
      allowing gigabyte huge pages (and other objects) to be allocated
      from all of bootmem.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
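      The shape of the fix is easy to sketch (names as in the kernel, but
      this is an illustration of the flow, not the verbatim patch; the
      initial limit stands for whatever bound paging_init imposes):

          void __init paging_init(void)
          {
                  /* Restrict early allocations so the tables built by
                   * map_mem() come from the initial direct mapping. */
                  memblock_set_current_limit(MEMBLOCK_INITIAL_LIMIT);
                  map_mem();

                  /* The fix: lift the limit again so later bootmem
                   * allocations (e.g. order-18 gigantic pages) can come
                   * from all of memory. */
                  memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);

                  /* ... rest of paging_init() ... */
          }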
  18. 08 Jun 2013, 1 commit
  19. 30 Apr 2013, 1 commit
    • sparse-vmemmap: specify vmemmap population range in bytes · 0aad818b
      Committed by Johannes Weiner
      The sparse code, when asking the architecture to populate the vmemmap,
      specifies the section range as a starting page and a number of pages.
      
      This is an awkward interface, because none of the arch-specific code
      actually thinks of the range in terms of 'struct page' units and always
      translates it to bytes first.
      
      In addition, later patches mix huge page and regular page backing for
      the vmemmap.  For this, they need to call vmemmap_populate_basepages()
      on sub-section ranges with PAGE_SIZE and PMD_SIZE in mind.  But these
      are not necessarily multiples of the 'struct page' size and so this unit
      is too coarse.
      
      Just translate the section range into bytes once in the generic sparse
      code, then pass byte ranges down the stack.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Bernhard Schmidt <Bernhard.Schmidt@lrz.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Tested-by: David S. Miller <davem@davemloft.net>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
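      The resulting interface change can be summarized as follows
      (signatures shown to illustrate the before/after shape; treat the
      exact parameter names as indicative):

          /* Before: callers passed 'struct page' units, and each arch
           * immediately converted them to a byte range itself. */
          int vmemmap_populate(struct page *start_page,
                               unsigned long nr_pages, int node);

          /* After: the generic sparse code converts once, and byte
           * ranges flow down the stack, so sub-section ranges that are
           * not multiples of sizeof(struct page) work too. */
          int vmemmap_populate(unsigned long start, unsigned long end,
                               int node);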
  20. 26 Mar 2013, 1 commit
  21. 24 Feb 2013, 1 commit
  22. 23 Jan 2013, 1 commit
    • arm64: Add simple earlyprintk support · 2475ff9d
      Committed by Catalin Marinas
      This patch adds support for the "earlyprintk=" parameter on the
      kernel command line. The format is:
      
        earlyprintk=<name>[,<addr>][,<options>]
      
      where <name> is the name of the (UART) device, e.g. "pl011", and
      <addr> is its I/O address. The <options> are not currently used.
      
      The mapping of the earlyprintk device is done very early during kernel
      boot and there are restrictions on which functions it can call. A
      special early_io_map() function is added which creates the mapping from
      the pre-defined EARLY_IOBASE to the device I/O address passed via the
      kernel parameter. The pgd entry corresponding to EARLY_IOBASE is
      pre-populated in head.S during kernel boot.
      
      Only PL011 is currently supported and it is assumed that the interface
      is already initialised by the boot loader before the kernel is started.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
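      A hedged sketch of how such a parameter string might be parsed
      (illustrative, not the verbatim arch/arm64/kernel/early_printk.c;
      early_io_map and EARLY_IOBASE are the names from the text above,
      and early_base is a hypothetical variable holding the mapping):

          static void __iomem *early_base;

          static int __init setup_early_printk(char *buf)
          {
                  phys_addr_t paddr = 0;

                  /* Only PL011 is currently supported. */
                  if (!buf || !strstarts(buf, "pl011"))
                          return 0;

                  buf += strlen("pl011");
                  if (*buf == ',')                /* optional ",<addr>" */
                          paddr = memparse(buf + 1, NULL);

                  if (paddr)  /* map the UART at the pre-populated EARLY_IOBASE */
                          early_base = early_io_map(paddr, EARLY_IOBASE);
                  return 0;
          }
          early_param("earlyprintk", setup_early_printk);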
  23. 17 Sep 2012, 1 commit