1. 11 Dec 2013: 2 commits
  2. 15 Nov 2013: 1 commit
    • mm: rename USE_SPLIT_PTLOCKS to USE_SPLIT_PTE_PTLOCKS · 57c1ffce
      Kirill A. Shutemov authored
      We're going to introduce a split page table lock for the PMD level.
      Let's rename the existing split ptlock for the PTE level to avoid
      confusion.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Tested-by: Alex Thorlton <athorlton@sgi.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: "Eric W . Biederman" <ebiederm@xmission.com>
      Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Robin Holt <robinmholt@gmail.com>
      Cc: Sedat Dilek <sedat.dilek@gmail.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      57c1ffce
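      For context, the rename amounts to relabelling the PTE-level switch so
      a PMD-level one can be added next to it. A minimal sketch; the define
      matches the mainline spelling, but the pte_lockptr() context around it
      is assumed here, not quoted from the patch:

          /* include/linux/mm_types.h (sketch): the PTE-level split-lock
           * switch, renamed so USE_SPLIT_PMD_PTLOCKS can sit alongside it. */
          #define USE_SPLIT_PTE_PTLOCKS	(NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS)

          #if USE_SPLIT_PTE_PTLOCKS
          /* Each PTE page carries its own lock, so PTE updates in different
           * page-table pages no longer contend on one mm-wide lock. */
          #define pte_lockptr(mm, pmd)	(&pmd_page(*(pmd))->ptl)
          #else
          /* Single-lock fallback: everything serializes on the mm. */
          #define pte_lockptr(mm, pmd)	(&(mm)->page_table_lock)
          #endif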
  3. 14 Nov 2013: 3 commits
  4. 07 Nov 2013: 2 commits
  5. 31 Oct 2013: 2 commits
    • ARM: 7805/1: mm: change max*pfn to include the physical offset of memory · 26ba47b1
      Santosh Shilimkar authored
      Most of the kernel code assumes that max*pfn holds the maximum PFN,
      because the physical start of memory is expected to be PFN 0. Since
      this assumption does not hold on ARM, max*pfn has instead meant the
      number of memory pages. This was done to keep drivers happy that use
      these variables to calculate the DMA bounce limit from dma_mask.
      
      Now that we have an architecture override possibility for the maximum
      DMAable PFN, let's make max*pfn mean the maximum PFN on ARM as well.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      26ba47b1
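      As an illustration of the driver pattern the message describes, here
      is a hedged sketch; example_bounce_limit() is a hypothetical helper,
      not code from the patch:

          #include <linux/bootmem.h>	/* max_low_pfn */
          #include <linux/dma-mapping.h>

          /* Hypothetical helper: with max_low_pfn now an absolute PFN on
           * ARM too, this arithmetic yields a real physical byte address
           * even when RAM starts above PFN 0. */
          static u64 example_bounce_limit(struct device *dev)
          {
          	u64 mask = dev->dma_mask ? *dev->dma_mask : DMA_BIT_MASK(32);
          	u64 mem_top = (u64)max_low_pfn << PAGE_SHIFT; /* end of lowmem */

          	/* Bounce anything above the mask or above lowmem. */
          	return min(mask, mem_top - 1);
          }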
    • ARM: DMA-API: better handling of DMA masks for coherent allocations · 4dcfa600
      Russell King authored
      We need to start treating DMA masks as something which is specific to
      the bus that the device resides on, otherwise we're going to hit all
      sorts of nasty issues with LPAE and 32-bit DMA controllers in >32-bit
      systems, where memory is offset from PFN 0.
      
      In order to start doing this, we convert the DMA mask to a PFN using
      the device specific dma_to_pfn() macro.  This is the reverse of the
      pfn_to_dma() macro which is used to get the DMA address for the device.
      
      This gives us a PFN mask, which we can then check against the PFN
      limit of the DMA zone.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4dcfa600
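      A sketch of the described check; example_dma_mask_ok() is a
      hypothetical name, and arm_dma_pfn_limit is assumed to be the DMA
      zone's PFN limit introduced in the same series:

          #include <asm/dma-mapping.h>	/* dma_to_pfn() */

          /* Translate the device's DMA mask into the highest PFN it can
           * address; dma_to_pfn() is the inverse of pfn_to_dma(). */
          static int example_dma_mask_ok(struct device *dev, u64 mask)
          {
          	unsigned long mask_pfn = dma_to_pfn(dev, mask);

          	/* The whole DMA zone must fit at or below the mask's PFN. */
          	return mask_pfn >= arm_dma_pfn_limit;
          }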
  6. 29 Oct 2013: 1 commit
  7. 24 Oct 2013: 1 commit
  8. 20 Oct 2013: 3 commits
  9. 15 Oct 2013: 1 commit
  10. 11 Oct 2013: 3 commits
    • ARM: mm: Recreate kernel mappings in early_paging_init() · a77e0c7b
      Santosh Shilimkar authored
      This patch adds a step in the init sequence to recreate the kernel
      code/data page table mappings prior to full paging initialization.
      This is necessary on LPAE systems whose physical memory lies outside
      the 4G limit. For these systems, this implementation provides a
      machine descriptor hook that allows PHYS_OFFSET to be overridden in
      a machine-specific fashion.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: R Sricharan <r.sricharan@ti.com>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      a77e0c7b
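      A sketch of how a platform might use such a hook, assuming the
      machine-descriptor callback is named init_meminfo as in the mainline
      version of this series; the board name and the address are
      illustrative only:

          #include <asm/mach/arch.h>

          /* Hypothetical: re-home the kernel's view of PHYS_OFFSET to the
           * board's >4G physical alias before early_paging_init() rebuilds
           * the kernel mappings. */
          static void __init example_init_meminfo(void)
          {
          	/* e.g. relocate PHYS_OFFSET from 0x80000000 to 0x800000000 */
          }

          DT_MACHINE_START(EXAMPLE, "Example LPAE board")
          	/* hook name assumed per mainline naming for this series */
          	.init_meminfo	= example_init_meminfo,
          MACHINE_END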
    • ARM: mm: Move the idmap print to appropriate place in the code · c1a5f4f6
      Santosh Shilimkar authored
      Commit 9e9a367c ("ARM: Section based HYP idmap") moved the address
      conversion inside identity_mapping_add() but left behind the print
      which carries useful idmap information.

      Move the print inside identity_mapping_add() as well to fix this.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      c1a5f4f6
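      After the move, the conversion and the print sit together; a rough
      sketch of the resulting shape (the exact signature is assumed):

          static void identity_mapping_add(pgd_t *pgd, const char *text_start,
          				 const char *text_end, unsigned long prot)
          {
          	unsigned long addr = virt_to_idmap(text_start);
          	unsigned long end = virt_to_idmap(text_end);

          	/* The print now accompanies the conversion it reports on. */
          	pr_info("Setting up static identity map for 0x%lx - 0x%lx\n",
          		addr, end);

          	/* ... create the identity mapping for [addr, end) ... */
          }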
    • ARM: mm: Introduce virt_to_idmap() with an arch hook · 4dc9a817
      Santosh Shilimkar authored
      On some PAE systems (e.g. TI Keystone), memory is above the
      32-bit addressable limit, and the interconnect provides an
      aliased view of parts of physical memory in the 32-bit addressable
      space.  This alias is strictly for boot time usage, and is not
      otherwise usable because of coherency limitations. On such systems,
      the idmap mechanism needs to take this aliased mapping into account.
      
      This patch introduces virt_to_idmap() and an arch function pointer
      which can be populated by any platform that needs it. It also
      converts the relevant idmap call sites to the now-available
      virt_to_idmap(). An #ifdef approach was avoided to stay compatible
      with multi-platform builds.

      Most platforms won't touch the pointer, in which case virt_to_idmap()
      falls back to the existing virt_to_phys() macro.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      4dc9a817
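      The fallback behaviour described above reduces to the following
      sketch (mainline spells the inline with leading underscores;
      simplified here):

          /* NULL unless a platform (e.g. Keystone) installs its converter. */
          unsigned long (*arch_virt_to_idmap)(unsigned long x);

          static inline unsigned long virt_to_idmap(unsigned long x)
          {
          	if (arch_virt_to_idmap)
          		return arch_virt_to_idmap(x);	/* boot-time alias view */

          	return virt_to_phys((void *)x);		/* default behaviour */
          }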
  11. 10 Oct 2013: 2 commits
  12. 02 Oct 2013: 1 commit
  13. 13 Sep 2013: 2 commits
  14. 12 Sep 2013: 1 commit
    • mm: migrate: check movability of hugepage in unmap_and_move_huge_page() · 83467efb
      Naoya Horiguchi authored
      Currently hugepage migration works well only for pmd-based hugepages
      (mainly due to lack of testing), so we had better not enable migration
      of other levels of hugepages until we are ready for it.
      
      Some users of hugepage migration (mbind, move_pages, and migrate_pages)
      do a page table walk and check pud/pmd_huge() there, so they are safe.
      But the other users (soft offline and memory hot-remove) don't do this,
      so without this patch they can try to migrate unexpected types of
      hugepages.
      
      To prevent this, we introduce hugepage_migration_support() as an
      architecture-dependent check of whether hugepages are implemented on
      a pmd basis. On some architectures multiple hugepage sizes are
      available, so hugepage_migration_support() also checks the hugepage
      size.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      83467efb
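      A sketch of the check and its call site, assuming a pmd_huge_support()
      per-arch hook as in the mainline version of this change:

          #include <linux/hugetlb.h>

          /* Arch-dependent gate: only PMD-sized hugepages may migrate, and
           * only on arches that implement hugepages at the PMD level. */
          static inline int hugepage_migration_support(struct hstate *h)
          {
          	return pmd_huge_support() && huge_page_shift(h) == PMD_SHIFT;
          }

          /* In unmap_and_move_huge_page(): refuse unexpected hugepage types
           * early, before any migration work is attempted. */
          if (!hugepage_migration_support(page_hstate(hpage)))
          	return -ENOSYS;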
  15. 02 Sep 2013: 1 commit
  16. 27 Aug 2013: 1 commit
  17. 20 Aug 2013: 4 commits
  18. 14 Aug 2013: 1 commit
  19. 12 Aug 2013: 4 commits
  20. 01 Aug 2013: 3 commits
  21. 31 Jul 2013: 1 commit