1. 09 May 2014 (1 commit)
  2. 04 May 2014 (3 commits)
  3. 08 Apr 2014 (3 commits)
  4. 05 Apr 2014 (1 commit)
  5. 03 Apr 2014 (1 commit)
  6. 24 Mar 2014 (3 commits)
  7. 13 Mar 2014 (2 commits)
  8. 04 Mar 2014 (1 commit)
    • arm64: remove unnecessary cache flush at boot · bff70595
      Committed by Mark Rutland
      Currently we flush the entire dcache at boot within __cpu_setup, but
      this is unnecessary as the booting protocol demands that the dcache is
      invalid and off upon entering the kernel. The presence of the cache
      flush only serves to hide bugs in bootloaders, and is not safe in the
      presence of SMP.
      
      In an SMP boot scenario the CPUs enter coherency outside of the kernel,
      and the primary CPU enables its caches before bringing up secondary
      CPUs. Therefore if any secondary CPU has an entry in its cache (in
      violation of the boot protocol), the primary CPU might snoop it even if
      the secondary CPU's cache is disabled. The boot-time cache flush only
      serves to hide a firmware bug, and slows down a cpu boot unnecessarily.
      
      This patch removes the unnecessary boot-time cache flush.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: make __flush_dcache_all local only]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 28 Feb 2014 (1 commit)
  10. 27 Feb 2014 (2 commits)
  11. 26 Feb 2014 (1 commit)
  12. 05 Feb 2014 (3 commits)
  13. 27 Jan 2014 (1 commit)
  14. 23 Jan 2014 (2 commits)
  15. 20 Dec 2013 (2 commits)
  16. 17 Dec 2013 (1 commit)
    • arm64: kernel: suspend/resume registers save/restore · 6732bc65
      Committed by Lorenzo Pieralisi
      Power management software requires the kernel to save and restore
      CPU registers while going through suspend and resume operations
      triggered by kernel subsystems like CPU idle and suspend to RAM.
      
      This patch implements code that provides a save and restore mechanism
      for the ARMv8 implementation. Memory for the context is passed as a
      parameter to both the cpu_do_suspend and cpu_do_resume functions, which
      allows the callers to implement context allocation as they deem fit.
      
      The registers that are saved and restored correspond to the register
      set actually required by the kernel to be up and running, which
      represents a subset of the v8 ISA.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
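      As a rough illustration of the calling pattern described above
      (caller-allocated context memory handed to both the save and the
      restore step), here is a minimal C sketch. The context layout and the
      two stand-in functions are assumptions made for illustration; the real
      entry points are cpu_do_suspend()/cpu_do_resume(), implemented in
      assembly, and their exact prototypes are not reproduced here.

      #include <linux/errno.h>
      #include <linux/slab.h>
      #include <linux/types.h>

      /* Placeholder context: the real layout holds the register subset the
       * kernel needs to be up and running; field names and size are assumed. */
      struct ctx_sketch {
              u64 regs[12];
              u64 sp;
      };

      /* Stand-ins for cpu_do_suspend()/cpu_do_resume(); the real routines are
       * written in assembly and take the context memory as a parameter. */
      static void save_cpu_ctx(struct ctx_sketch *ctx)    { (void)ctx; }
      static void restore_cpu_ctx(struct ctx_sketch *ctx) { (void)ctx; }

      static int cpu_suspend_sketch(void)
      {
              struct ctx_sketch *ctx;

              /* Context allocation policy is left entirely to the caller. */
              ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
              if (!ctx)
                      return -ENOMEM;

              save_cpu_ctx(ctx);      /* save registers into the buffer */
              /* ... enter the low-power state; the resume path runs on wakeup ... */
              restore_cpu_ctx(ctx);   /* restore registers from the buffer */

              kfree(ctx);
              return 0;
      }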
  17. 07 Dec 2013 (1 commit)
    • arm64: ensure completion of TLB invalidation · 3cea71bc
      Committed by Mark Rutland
      Currently there is no dsb between the tlbi in __cpu_setup and the write
      to SCTLR_EL1 which enables the MMU in __turn_mmu_on. This means that the
      TLB invalidation is not guaranteed to have completed at the point
      address translation is enabled, leading to a number of possible issues
      including incorrect translations and TLB conflict faults.
      
      This patch moves the tlbi in __cpu_setup above an existing dsb used to
      synchronise I-cache invalidation, ensuring that the TLBs have been
      invalidated at the point the MMU is enabled.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
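      For reference, a minimal inline-assembly sketch (in C) of the ordering
      the patch enforces: a DSB must sit between the TLB invalidate and the
      SCTLR_EL1 write that enables the MMU. The specific operands used below
      (vmalle1is, dsb sy) and the helper name are assumptions based on typical
      arm64 early boot code, not quotes from the patch.

      /* arm64-only sketch: enable the MMU only after the TLB invalidation is
       * guaranteed to have completed. */
      static inline void sketch_enable_mmu(unsigned long sctlr_el1)
      {
              asm volatile(
                      "tlbi   vmalle1is\n"     /* invalidate stage-1 TLB entries      */
                      "dsb    sy\n"            /* wait for the invalidation to finish */
                      "isb\n"                  /* synchronise the instruction stream  */
                      "msr    sctlr_el1, %0\n" /* write SCTLR_EL1 (caller sets M bit) */
                      "isb\n"
                      :
                      : "r" (sctlr_el1)
                      : "memory");
      }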
  18. 30 Oct 2013 (1 commit)
  19. 25 Oct 2013 (1 commit)
  20. 10 Oct 2013 (2 commits)
  21. 25 Sep 2013 (1 commit)
  22. 20 Sep 2013 (1 commit)
  23. 13 Sep 2013 (2 commits)
  24. 12 Sep 2013 (1 commit)
    • mm: migrate: check movability of hugepage in unmap_and_move_huge_page() · 83467efb
      Committed by Naoya Horiguchi
      Currently hugepage migration works well only for pmd-based hugepages
      (mainly due to lack of testing), so we had better not enable migration
      of other levels of hugepages until we are ready for it.

      Some users of hugepage migration (mbind, move_pages, and migrate_pages)
      do a page table walk and check pud/pmd_huge() there, so they are safe.
      But the other users (soft offline and memory hot-remove) don't do this,
      so without this patch they can try to migrate unexpected types of
      hugepages.

      To prevent this, we introduce hugepage_migration_support() as an
      architecture-dependent check of whether hugepages are implemented on a
      pmd basis or not. On some architectures multiple sizes of hugepages are
      available, so hugepage_migration_support() also checks the hugepage size.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
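      A sketch of the shape of the check described above: a per-hstate
      pmd-size predicate, consulted on the migration path so that unsupported
      hugepage sizes are refused. The helper and function names below are
      placeholders rather than the exact code added by the patch;
      huge_page_shift(), page_hstate() and PMD_SHIFT are existing kernel
      symbols.

      #include <linux/errno.h>
      #include <linux/hugetlb.h>
      #include <linux/mm.h>

      /* Placeholder predicate: migratable only if the hugepage is pmd-based,
       * i.e. its size equals the pmd mapping size on this architecture. */
      static inline bool hugepage_is_pmd_based(struct hstate *h)
      {
              return huge_page_shift(h) == PMD_SHIFT;
      }

      /* How a migration path such as unmap_and_move_huge_page() would use it:
       * bail out early for hugepage sizes the architecture cannot migrate. */
      static int sketch_migrate_hugepage(struct page *hpage)
      {
              if (!hugepage_is_pmd_based(page_hstate(hpage)))
                      return -ENOSYS;  /* unsupported size: skip migration */

              /* ... proceed with the usual unmap-and-move sequence ... */
              return 0;
      }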
  25. 03 Sep 2013 (1 commit)
  26. 02 Sep 2013 (1 commit)