1. 09 May 2014 (3 commits)
    • arm64: Clean up the default pgprot setting · a501e324
      Committed by Catalin Marinas
      The primary aim of this patchset is to remove the pgprot_default and
      prot_sect_default global variables and rely strictly on predefined
      values. The original goal was to be able to run SMP kernels on UP
      hardware by not setting the Shareability bit. However, we are unlikely
      to see UP ARMv8 hardware and, even if we do, the Shareability bit is no
      longer assumed to disable cacheable accesses.
      
      A side effect is that the device mappings now have the Shareability
      attribute set. The hardware, however, should ignore it since Device
      accesses are always Outer Shareable.
      
      Following the removal of the two global variables, there is some PROT_*
      macro reshuffling and cleanup, including the __PAGE_* macros (replaced
      by PAGE_*). (A conceptual sketch of the change follows this entry.)
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
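      Conceptually, the change moves from protection values partly assembled
      at boot via a global to fully predefined constants. The sketch below
      uses hypothetical EXAMPLE_* names, a simplified attribute set, and the
      PTE_* bit definitions from the arm64 pgtable headers to show the
      pattern; it is not the exact macro set the patch ends up with:

      /* Before (conceptually): base attributes folded in at runtime. */
      extern pgprot_t pgprot_default;            /* filled in during boot */
      #define EXAMPLE_PAGE_KERNEL_OLD \
              __pgprot(pgprot_val(pgprot_default) | PTE_DIRTY | PTE_WRITE)

      /* After (conceptually): everything is a compile-time constant. */
      #define EXAMPLE_PROT_DEFAULT   (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
      #define EXAMPLE_PROT_NORMAL    (EXAMPLE_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
      #define EXAMPLE_PAGE_KERNEL \
              __pgprot(EXAMPLE_PROT_NORMAL | PTE_DIRTY | PTE_WRITE)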
    • arm64: Introduce execute-only page access permissions · bc07c2c6
      Committed by Catalin Marinas
      The ARMv8 architecture allows execute-only user permissions by clearing
      the PTE_UXN and PTE_USER bits. The kernel, however, can still access
      such a page, so execute-only page permission does not protect against
      read(2)/write(2) and similar accesses. Systems requiring such protection
      must implement/enable features like SECCOMP.
      
      This patch changes the arm64 __P100 and __S100 protection_map[] macros
      to the new __PAGE_EXECONLY attributes. A side effect is that
      pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER
      isn't set. To work around this, the check is done on the PTE_NG bit via
      the pte_valid_ng() macro. VM_READ is also checked now for page faults.
      (An illustrative userspace sketch follows this entry.)
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
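      As a rough illustration only (not part of the commit), a userspace
      program can ask for an execute-only mapping by passing PROT_EXEC
      without PROT_READ to mprotect(2); on hardware and kernels that support
      this, the code still executes while data reads of the mapping fault.
      The machine code placed in the buffer is a hypothetical AArch64
      "mov w0, #42; ret" stub chosen just for the example:

      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
              long page = sysconf(_SC_PAGESIZE);

              /* Writable scratch page to assemble a tiny function into. */
              unsigned char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (buf == MAP_FAILED)
                      return 1;

              /* AArch64 encoding of "mov w0, #42; ret". */
              unsigned char insns[] = { 0x40, 0x05, 0x80, 0x52,
                                        0xc0, 0x03, 0x5f, 0xd6 };
              memcpy(buf, insns, sizeof(insns));
              __builtin___clear_cache((char *)buf, (char *)buf + sizeof(insns));

              /* Drop to execute-only: neither PROT_READ nor PROT_WRITE. */
              if (mprotect(buf, page, PROT_EXEC) != 0)
                      return 1;

              int (*fn)(void) = (int (*)(void))buf;
              printf("executed: %d\n", fn());   /* execution is permitted */
              /* Reading buf[0] here would be a data access and should fault. */
              return 0;
      }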
    • arm64: Provide read/write fault information in compat signal handlers · 9141300a
      Committed by Catalin Marinas
      For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault
      was caused by a write access, and applications like QEMU rely on this
      information being provided in sigcontext. This patch introduces ESR_EL1
      tracking for arm64 kernel faults and sets bit 11 accordingly in the
      compat sigcontext. (An illustrative compat-mode signal handler follows
      this entry.)
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
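      A minimal sketch of how a 32-bit (AArch32) application running on an
      arm64 kernel could consume this: the handler below assumes the usual
      32-bit ARM sigcontext layout, where uc_mcontext carries the fault
      status in its error_code field, and simply tests bit 11 (WnR). It is
      an illustration, not code from the patch:

      /* Build as a 32-bit ARM binary and run it under an arm64 kernel. */
      #include <signal.h>
      #include <stdio.h>
      #include <ucontext.h>
      #include <unistd.h>

      static void segv_handler(int sig, siginfo_t *info, void *ctx)
      {
              ucontext_t *uc = ctx;

              /* Bit 11 (WnR) of the reported fault status: set for writes. */
              unsigned long wnr = uc->uc_mcontext.error_code & (1UL << 11);

              fprintf(stderr, "fault at %p, %s access\n",
                      info->si_addr, wnr ? "write" : "read");
              _exit(1);
      }

      int main(void)
      {
              struct sigaction sa = { 0 };

              sa.sa_sigaction = segv_handler;
              sa.sa_flags = SA_SIGINFO;
              sigaction(SIGSEGV, &sa, NULL);

              *(volatile int *)0 = 1;           /* deliberate write fault */
              return 0;
      }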
  2. 04 May 2014 (3 commits)
  3. 08 Apr 2014 (3 commits)
  4. 05 Apr 2014 (1 commit)
  5. 03 Apr 2014 (1 commit)
  6. 24 Mar 2014 (3 commits)
  7. 13 Mar 2014 (2 commits)
  8. 04 Mar 2014 (1 commit)
    • arm64: remove unnecessary cache flush at boot · bff70595
      Committed by Mark Rutland
      Currently we flush the entire dcache at boot within __cpu_setup, but
      this is unnecessary as the booting protocol demands that the dcache is
      invalid and off upon entering the kernel. The presence of the cache
      flush only serves to hide bugs in bootloaders, and is not safe in the
      presence of SMP.
      
      In an SMP boot scenario the CPUs enter coherency outside of the kernel,
      and the primary CPU enables its caches before bringing up secondary
      CPUs. Therefore if any secondary CPU has an entry in its cache (in
      violation of the boot protocol), the primary CPU might snoop it even if
      the secondary CPU's cache is disabled. The boot-time cache flush only
      serves to hide a firmware bug, and slows down CPU boot unnecessarily.
      
      This patch removes the unnecessary boot-time cache flush.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: make __flush_dcache_all local only]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 28 Feb 2014 (1 commit)
  10. 27 Feb 2014 (2 commits)
  11. 26 Feb 2014 (1 commit)
  12. 05 Feb 2014 (3 commits)
  13. 27 Jan 2014 (1 commit)
  14. 23 Jan 2014 (2 commits)
  15. 20 Dec 2013 (2 commits)
  16. 17 Dec 2013 (1 commit)
    • arm64: kernel: suspend/resume registers save/restore · 6732bc65
      Committed by Lorenzo Pieralisi
      Power management software requires the kernel to save and restore
      CPU registers while going through suspend and resume operations
      triggered by kernel subsystems like CPU idle and suspend to RAM.
      
      This patch implements code that provides a save and restore mechanism
      for the ARMv8 implementation. Memory for the context is passed as a
      parameter to both the cpu_do_suspend and cpu_do_resume functions, which
      allows the callers to implement context allocation as they deem fit.

      The registers that are saved and restored correspond to the register
      set actually required by the kernel to be up and running, which
      represents a subset of the v8 ISA. (A sketch of the caller-side pattern
      follows this entry.)
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
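      The caller-side pattern described above (the caller owns the context
      memory and merely hands it to the low-level routines) can be sketched
      as follows. The context layout and the single-pointer prototypes are
      simplified assumptions for illustration only, not the kernel's actual
      cpu_suspend_ctx definition or the real cpu_do_suspend/cpu_do_resume
      signatures:

      #include <linux/slab.h>
      #include <linux/types.h>

      /* Illustrative, simplified context; the real layout is owned by the
       * architecture code, not by this sketch. */
      struct ctx_sketch {
              u64 sys_regs[12];       /* system registers needed to come back */
              u64 sp;                 /* stack pointer at suspend time */
      };

      /* Assumed prototypes, for illustration only. */
      extern void cpu_do_suspend(struct ctx_sketch *ctx);
      extern void cpu_do_resume(struct ctx_sketch *ctx);

      /* A caller (e.g. a cpuidle back end) decides how to allocate the
       * context - here a plain kmalloc - and wraps the power-down window. */
      static int example_enter_low_power(void)
      {
              struct ctx_sketch *ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);

              if (!ctx)
                      return -ENOMEM;

              cpu_do_suspend(ctx);    /* save the register subset into ctx  */
              /* ... platform code powers the CPU down and later wakes it ... */
              cpu_do_resume(ctx);     /* restore the saved registers        */

              kfree(ctx);
              return 0;
      }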
  17. 07 Dec 2013 (1 commit)
    • arm64: ensure completion of TLB invalidation · 3cea71bc
      Committed by Mark Rutland
      Currently there is no dsb between the tlbi in __cpu_setup and the write
      to SCTLR_EL1 which enables the MMU in __turn_mmu_on. This means that the
      TLB invalidation is not guaranteed to have completed at the point
      address translation is enabled, leading to a number of possible issues
      including incorrect translations and TLB conflict faults.
      
      This patch moves the tlbi in __cpu_setup above an existing dsb used to
      synchronise I-cache invalidation, ensuring that the TLBs have been
      invalidated at the point the MMU is enabled. (A minimal ordering sketch
      follows this entry.)
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
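      The required ordering can be shown with a minimal inline-assembly
      sketch (the actual __cpu_setup/__turn_mmu_on code lives in assembly
      and differs in detail; this only illustrates why the dsb must sit
      between the TLB invalidation and the SCTLR_EL1 write):

      /* Illustrative only: invalidate TLBs, wait for completion, then
       * enable the MMU by writing SCTLR_EL1. */
      static inline void sketch_enable_mmu(unsigned long sctlr)
      {
              asm volatile(
                      "tlbi   vmalle1\n"          /* invalidate EL1 TLB entries        */
                      "dsb    nsh\n"              /* wait until invalidation completes */
                      "isb\n"
                      "msr    sctlr_el1, %0\n"    /* only now enable translation       */
                      "isb\n"
                      :
                      : "r" (sctlr)
                      : "memory");
      }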
  18. 30 Oct 2013 (1 commit)
  19. 25 Oct 2013 (1 commit)
  20. 10 Oct 2013 (2 commits)
  21. 25 Sep 2013 (1 commit)
  22. 20 Sep 2013 (1 commit)
  23. 13 Sep 2013 (2 commits)
  24. 12 Sep 2013 (1 commit)
    • mm: migrate: check movability of hugepage in unmap_and_move_huge_page() · 83467efb
      Committed by Naoya Horiguchi
      Currently hugepage migration works well only for pmd-based hugepages
      (mainly due to lack of testing), so we had better not enable migration
      of other levels of hugepages until we are ready for it.

      Some users of hugepage migration (mbind, move_pages, and migrate_pages)
      do a page table walk and check pud/pmd_huge() there, so they are safe.
      But the other users (soft offline and memory hot-remove) don't do this,
      so without this patch they can try to migrate unexpected types of
      hugepages.

      To prevent this, we introduce hugepage_migration_support() as an
      architecture-dependent check of whether hugepages are implemented on a
      pmd basis or not. On some architectures multiple sizes of hugepages are
      available, so hugepage_migration_support() also checks the hugepage
      size. (An illustrative sketch of such a check follows this entry.)
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
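      A minimal sketch of what such an architecture-dependent check could
      look like, assuming the generic code passes in the hstate describing
      the hugepage size (the helper and caller names are made up for the
      illustration; they are not the exact code this patch adds):

      #include <linux/hugetlb.h>
      #include <linux/mm.h>
      #include <asm/pgtable.h>

      /* Only pmd-sized hugepages are treated as migratable; other sizes
       * (e.g. pud-based gigantic pages) are rejected until supported. */
      static inline int example_hugepage_migration_support(struct hstate *h)
      {
              return huge_page_shift(h) == PMD_SHIFT;
      }

      /* Illustrative caller, mirroring what unmap_and_move_huge_page()
       * needs: bail out early instead of migrating an unsupported size. */
      static int example_try_migrate_hugepage(struct page *hpage)
      {
              struct hstate *h = page_hstate(hpage);

              if (!example_hugepage_migration_support(h))
                      return -ENOSYS;

              /* ... proceed with the actual hugepage migration ... */
              return 0;
      }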