1. 27 Jul 2015, 1 commit
  2. 01 Jul 2015, 1 commit
  3. 02 Jun 2015, 1 commit
  4. 15 Apr 2015, 1 commit
  5. 23 Mar 2015, 1 commit
  6. 19 Mar 2015, 1 commit
  7. 30 Jan 2015, 1 commit
  8. 28 Jan 2015, 3 commits
    • arm64: mm: use *_sect to check for section maps · a1c76574
      Authored by Mark Rutland
      The {pgd,pud,pmd}_bad family of macros have slightly fuzzy
      cross-architecture semantics, and seem to imply a populated entry that
      is not a next-level table, rather than a particular type of entry (e.g.
      a section map).
      
      In arm64 code, for those cases where we care about whether an entry is a
      section mapping, we can instead use the {pud,pmd}_sect macros to
      explicitly check for this case. This helps to document precisely what we
      care about, making the code easier to read, and allows for future
      relaxation of the *_bad macros to check for other "bad" entries.
      
      To that end this patch updates the table dumping and initial table setup
      to check for section mappings with {pud,pmd}_sect, and adds/restores
      BUG_ON(*_bad(*p)) checks after we've handled the *_sect and *_none
      cases so as to catch remaining "bad" cases.
      
      In the fault handling code, show_pte is left with *_bad checks as it
      only cares about whether it can walk the next level table, and this path
      is used for both kernel and userspace fault handling. The former case
      will be followed by a die() where we'll report the address that
      triggered the fault, which can be useful context for debugging.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a1c76574
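
      A minimal sketch of the check pattern described in the commit above. The
      pmd_none/pmd_sect/pmd_bad macros and pmd_page_vaddr are the real arm64
      helpers the commit refers to; the walk_pmd wrapper and the
      note_section_mapping/walk_pte callees are hypothetical, for illustration
      only.

        /* Illustrative walk step, not the kernel's actual code. */
        static void walk_pmd(pmd_t *pmd, unsigned long addr)
        {
            if (pmd_none(*pmd))
                return;                          /* nothing mapped here */

            if (pmd_sect(*pmd)) {                /* explicitly a section map */
                note_section_mapping(addr);
                return;
            }

            /* Anything else must be a next-level table; trap corrupt entries. */
            BUG_ON(pmd_bad(*pmd));
            walk_pte((pte_t *)pmd_page_vaddr(*pmd), addr);
        }
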
    • arm64: drop unnecessary cache+tlb maintenance · a3bba370
      Authored by Mark Rutland
      In paging_init, we call flush_cache_all, but this is backed by Set/Way
      operations which may not achieve anything in the presence of cache line
      migration and/or system caches. If the caches are already in an
      inconsistent state at this point, there is nothing we can do (short of
      flushing the entire physical address space by VA) to empty architected
      and system caches. As such, flush_cache_all only serves to mask other
      potential bugs. Hence, this patch removes the boot-time call to
      flush_cache_all.
      
      Immediately after the cache maintenance we flush the TLBs, but this is
      also unnecessary. Before enabling the MMU, the TLBs are invalidated, and
      thus are initially clean. When changing the contents of active tables
      (e.g. in fixup_executable() for DEBUG_RODATA) we perform the required
      TLB maintenance following the update, and therefore no additional
      maintenance is required to ensure the new table entries are in effect.
      Since the MMU was enabled we have not modified any system register
      fields that are permitted to be cached in a TLB, and therefore need no
      maintenance for cached system register fields. Hence, the TLB flush
      is unnecessary.
      
      Shortly after the unnecessary TLB flush, we update TTBR0 to point to an
      empty zero page rather than the idmap, and flush the TLBs. This
      maintenance is necessary to remove the global idmap entries from the
      TLBs (as they would conflict with userspace mappings), and is retained.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a3bba370
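
      The one piece of maintenance the commit above retains can be sketched as
      follows. cpu_set_reserved_ttbr0() and flush_tlb_all() are real arm64
      helpers; the wrapper function and comment are an illustrative paraphrase,
      not the exact kernel code.

        /* Sketch: what remains at the end of paging_init (illustrative). */
        static void __init retire_idmap(void)
        {
            /*
             * TTBR0 still points at the idmap, whose global entries would
             * conflict with userspace mappings. Point it at the reserved
             * zero page and invalidate the stale idmap entries - this is
             * the one flush that is kept.
             */
            cpu_set_reserved_ttbr0();
            flush_tlb_all();
        }
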
    • arm64:mm: free the useless initial page table · 523d6e9f
      Authored by zhichang.yuan
      On a 64K page system, after mapping a PMD section, the corresponding
      initial page table is no longer needed and that page can be freed.
      Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
      [catalin.marinas@arm.com: added BUG_ON() to catch late memblock freeing]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      523d6e9f
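
      A condensed sketch of the idea in the commit above, closely following the
      change it describes but not reproducing it verbatim: once a PMD entry has
      been rewritten as a section mapping, the PTE table it used to point at is
      dead weight and can be handed back to memblock. The prot_sect value here
      is a stand-in for the kernel's section attributes.

        pmd_t old_pmd = *pmd;

        set_pmd(pmd, __pmd(phys | prot_sect));      /* install the section map */

        if (!pmd_none(old_pmd)) {
            flush_tlb_all();                        /* drop stale translations */
            if (pmd_table(old_pmd))                 /* old entry was a PTE table */
                memblock_free(__pa(pte_offset_map(&old_pmd, 0)), PAGE_SIZE);
        }
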
  9. 22 Jan 2015, 2 commits
  10. 14 Jan 2015, 1 commit
    • arm64: remove broken cachepolicy code · 26a945ca
      Authored by Mark Rutland
      The cachepolicy kernel parameter was intended to aid in the debugging of
      coherency issues, but it is fundamentally broken for several reasons:
      
       * On SMP platforms, only the boot CPU's tcr_el1 is altered. Secondary
         CPUs may therefore differ w.r.t. the attributes they apply to
         MT_NORMAL memory, resulting in a loss of coherency.
      
       * The cache maintenance using flush_dcache_all (based on Set/Way
         operations) is not guaranteed to empty a given CPU's cache hierarchy
         while that CPU has its caches enabled; it cannot empty the caches of
         other coherent PEs, nor is it guaranteed to flush data to the PoC
         even when caches are disabled.
      
       * The TLBs are not invalidated around the modification of MAIR_EL1 and
         TCR_EL1, as required by the architecture (as both are permitted to be
         cached in a TLB). This may result in CPUs using attributes other than
         those expected for some memory accesses, resulting in a loss of
         coherency.
      
       * Exclusive accesses are not architecturally guaranteed to function as
         expected on memory marked as Write-Through or Non-Cacheable. Thus
         changing the attributes of MT_NORMAL away from the (architecturally
         safe) defaults may cause uses of these instructions (e.g. atomics) to
         behave erratically.
      
      Given this, the cachepolicy code cannot be used for debugging purposes
      as it alone is likely to cause coherency issues. This patch removes the
      broken cachepolicy code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      26a945ca
  11. 13 Jan 2015, 1 commit
  12. 12 Jan 2015, 2 commits
  13. 25 Nov 2014, 1 commit
  14. 13 Nov 2014, 1 commit
  15. 07 Nov 2014, 1 commit
  16. 25 Oct 2014, 1 commit
  17. 16 Sep 2014, 1 commit
  18. 23 Jul 2014, 1 commit
    • arm64: mm: Implement 4 levels of translation tables · c79b954b
      Authored by Jungseok Lee
      This patch implements 4 levels of translation tables, since 3 levels
      of page tables with 4KB pages cannot support the 40-bit physical address
      space described in [1], due to the following issue.
      
      The kernel logical memory map with 4KB pages and 3 levels
      (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from
      544GB to 1024GB described in [1]. Specifically, the ARM64 kernel fails to
      create a mapping for this region in the map_mem function, since
      __phys_to_virt for this region overflows the virtual address space.
      
      If an SoC design follows [1], RAM beyond the first 32GB is placed from
      544GB upwards. Even a 64GB system ends up using the region from 544GB to
      576GB for just 32GB of RAM. The natural fix is to enable 4 levels of
      page tables rather than hacking __virt_to_phys and __phys_to_virt.
      
      That said, it is recommended that 4 levels of page tables only be enabled
      when the memory map is very sparse or there is around 512GB of RAM.
      
      References
      ----------
      [1]: Principles of ARM Memory Maps, White Paper, Issue C
      Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
      Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
      Acked-by: Kukjin Kim <kgene.kim@samsung.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Reviewed-by: Steve Capper <steve.capper@linaro.org>
      [catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
      [catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
      [catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
      c79b954b
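
      The address overflow the commit above describes can be reproduced with a
      few lines of arithmetic. This is a standalone illustration, not kernel
      code: PAGE_OFFSET matches the 3-level/4KB linear map quoted in the
      commit, PHYS_OFFSET assumes RAM starts at 544GB per [1], and phys_to_virt
      mirrors the old __phys_to_virt calculation.

        #include <stdint.h>
        #include <stdio.h>
        #include <inttypes.h>

        #define PAGE_OFFSET 0xffffffc000000000ULL  /* 4KB + 3 levels: 256GB linear map */
        #define PHYS_OFFSET (544ULL << 30)         /* RAM begins at 544GB per [1] */

        static uint64_t phys_to_virt(uint64_t pa)
        {
            return pa - PHYS_OFFSET + PAGE_OFFSET; /* mirrors the old __phys_to_virt */
        }

        int main(void)
        {
            uint64_t last = (1024ULL << 30) - 1;   /* top of the 544GB-1024GB region */

            /* 480GB of RAM cannot fit in a 256GB linear map: the addition wraps
             * past 2^64 and yields a bogus, low virtual address. */
            printf("virt = 0x%" PRIx64 "\n", phys_to_virt(last));
            return 0;
        }
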
  19. 09 May 2014, 2 commits
    • arm64: mm: Create gigabyte kernel logical mappings where possible · 206a2a73
      Authored by Steve Capper
      We have the capability to map 1GB level 1 blocks when using a 4K
      granule.
      
      This patch adjusts the create_mapping logic such that, when mapping
      physical memory on boot, we attempt to use a 1GB block if both the VA and
      PA start and end are 1GB aligned. This reduces the number of lookup
      levels required to resolve a kernel logical address, and also reduces TLB
      pressure on cores that support 1GB TLB entries.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Tested-by: Jungseok Lee <jays.lee@samsung.com>
      [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      206a2a73
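
      The alignment test behind the commit above can be sketched as below. The
      macros and helpers (PAGE_SHIFT, PUD_MASK, set_pud, PROT_SECT_NORMAL_EXEC,
      alloc_init_pmd) are real kernel names; the surrounding structure is an
      illustrative condensation, not the exact create_mapping code.

        /* 1GB blocks are only possible with the 4K granule, and only when the
         * virtual range [addr, next) and its physical counterpart are all
         * 1GB aligned. */
        if (PAGE_SHIFT == 12 &&
            ((addr | next | phys) & ~PUD_MASK) == 0) {
            set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC)); /* one level-1 block */
        } else {
            alloc_init_pmd(pud, addr, next, phys);             /* map at a finer grain */
        }
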
    • arm64: Clean up the default pgprot setting · a501e324
      Authored by Catalin Marinas
      The primary aim of this patchset is to remove the pgprot_default and
      prot_sect_default global variables and rely strictly on predefined
      values. The original goal was to be able to run SMP kernels on UP
      hardware by not setting the Shareability bit. However, UP ARMv8 hardware
      is unlikely to appear and, even if it does, the Shareability bit is no
      longer assumed to disable cacheable accesses.
      
      A side effect is that the device mappings now have the Shareability
      attribute set. The hardware, however, should ignore it since Device
      accesses are always Outer Shareable.
      
      Following the removal of the two global variables, there is some PROT_*
      macro reshuffling and cleanup, including the __PAGE_* macros (replaced
      by PAGE_*).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      a501e324
  20. 04 May 2014, 1 commit
  21. 01 May 2014, 1 commit
  22. 08 Apr 2014, 2 commits
  23. 05 Feb 2014, 1 commit
  24. 28 Aug 2013, 1 commit
  25. 14 Jun 2013, 1 commit
    • ARM64: mm: Restore memblock limit when map_mem finished. · f6bc87c3
      Authored by Steve Capper
      In paging_init the memblock limit is set to restrict any addresses
      returned by early_alloc to fit within the initial direct kernel
      mapping in swapper_pg_dir. This allows map_mem to allocate puds,
      pmds and ptes from the initial direct kernel mapping.
      
      The limit stays low after paging_init() though, meaning any
      bootmem allocations will be from a restricted subset of memory.
      Gigabyte huge pages, for instance, are normally allocated from
      bootmem as their order (18) is too large for the default buddy
      allocator (MAX_ORDER = 11).
      
      This patch restores the memblock limit when map_mem has finished,
      allowing gigabyte huge pages (and other objects) to be allocated
      from all of bootmem.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      f6bc87c3
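
      A sketch of the pattern the commit above describes: constrain memblock
      while the initial tables are built, then lift the limit so later
      boot-time allocations (such as gigabyte huge pages) can come from
      anywhere. memblock_set_current_limit and MEMBLOCK_ALLOC_ANYWHERE are real
      APIs; the exact limit expression and elided body are illustrative.

        static void __init map_mem(void)
        {
            /* early_alloc must stay within the initial swapper_pg_dir mapping */
            memblock_set_current_limit((PHYS_OFFSET & PGDIR_MASK) + PGDIR_SIZE);

            /* ... create_mapping() for each memblock region ... */

            /* All of memory is mapped now; the restriction is no longer needed. */
            memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
        }
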
  26. 08 Jun 2013, 1 commit
  27. 30 Apr 2013, 1 commit
    • sparse-vmemmap: specify vmemmap population range in bytes · 0aad818b
      Authored by Johannes Weiner
      The sparse code, when asking the architecture to populate the vmemmap,
      specifies the section range as a starting page and a number of pages.
      
      This is an awkward interface, because none of the arch-specific code
      actually thinks of the range in terms of 'struct page' units and always
      translates it to bytes first.
      
      In addition, later patches mix huge page and regular page backing for
      the vmemmap.  For this, they need to call vmemmap_populate_basepages()
      on sub-section ranges with PAGE_SIZE and PMD_SIZE in mind.  But these
      are not necessarily multiples of the 'struct page' size and so this unit
      is too coarse.
      
      Just translate the section range into bytes once in the generic sparse
      code, then pass byte ranges down the stack.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Bernhard Schmidt <Bernhard.Schmidt@lrz.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Tested-by: David S. Miller <davem@davemloft.net>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0aad818b
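
      In terms of the arch hook, the interface change described above looks
      roughly as follows (signatures paraphrased from the commit's intent, not
      quoted from any specific tree):

        /* Before: the hook thinks in 'struct page' units and converts to bytes. */
        int vmemmap_populate(struct page *start_page, unsigned long size, int node);

        /* After: the generic sparse code converts once; byte ranges flow down. */
        int vmemmap_populate(unsigned long start, unsigned long end, int node);
        int vmemmap_populate_basepages(unsigned long start, unsigned long end, int node);
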
  28. 26 Mar 2013, 1 commit
  29. 24 Feb 2013, 1 commit
  30. 23 Jan 2013, 1 commit
    • arm64: Add simple earlyprintk support · 2475ff9d
      Authored by Catalin Marinas
      This patch adds support for "earlyprintk=" parameter on the kernel
      command line. The format is:
      
        earlyprintk=<name>[,<addr>][,<options>]
      
      where <name> is the name of the (UART) device, e.g. "pl011", <addr> is
      the I/O address. The <options> aren't currently used.
      
      The mapping of the earlyprintk device is done very early during kernel
      boot and there are restrictions on which functions it can call. A
      special early_io_map() function is added which creates the mapping from
      the pre-defined EARLY_IOBASE to the device I/O address passed via the
      kernel parameter. The pgd entry corresponding to EARLY_IOBASE is
      pre-populated in head.S during kernel boot.
      
      Only PL011 is currently supported and it is assumed that the interface
      is already initialised by the boot loader before the kernel is started.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      2475ff9d
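
      A concrete example of the parameter format above, assuming a PL011 UART;
      the I/O address shown is hypothetical and must be replaced with the
      actual base address of the UART on the target SoC:

        earlyprintk=pl011,0x1c090000
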
  31. 17 Sep 2012, 1 commit