1. 02 Mar, 2010 (3 commits)
  2. 26 Feb, 2010 (1 commit)
  3. 23 Feb, 2010 (1 commit)
    • sh: wire up SET/GET_UNALIGN_CTL. · 94ea5e44
      Committed by Paul Mundt
      This hooks up the SET/GET_UNALIGN_CTL knobs cribbing the bulk of it from
      the PPC and ia64 implementations. The thread flags happen to be the
      logical inverse of what the global fault mode is set to, so this works
      out pretty cleanly. By default the global fault mode is used, with tasks
      now being able to override their own settings via prctl().
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      94ea5e44
  4. 22 Feb, 2010 (2 commits)
  5. 21 Feb, 2010 (1 commit)
    • MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself · 4b3073e1
      Committed by Russell King
      On VIVT ARM, when we have multiple shared mappings of the same file
      in the same MM, we need to ensure that we have coherency across all
      copies.  We do this via make_coherent() by making the pages
      uncacheable.
      
      This used to work fine, until we allowed highmem with highpte - we
      now have a page table which is mapped as required, and is not available
      for modification via update_mmu_cache().
      
      Ralf Baechle suggested getting rid of the PTE value passed to
      update_mmu_cache():
      
        On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
        to construct a pointer to the pte again.  Passing a pte_t * is much
        more elegant.  Maybe we might even replace the pte argument with the
        pte_t?
      
      Ben Herrenschmidt would also like the pte pointer for PowerPC:
      
        Passing the ptep in there is exactly what I want.  I want that
        -instead- of the PTE value, because I have issue on some ppc cases,
        for I$/D$ coherency, where set_pte_at() may decide to mask out the
        _PAGE_EXEC.
      
      So, pass in the mapped page table pointer into update_mmu_cache(), and
      remove the PTE value, updating all implementations and call sites to
      suit.
      
      Includes a fix from Stephen Rothwell:
      
        sparc: fix fallout from update_mmu_cache API change
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4b3073e1
  6. 18 Feb, 2010 (2 commits)
    • sh: Merge legacy and dynamic PMB modes. · d01447b3
      Committed by Paul Mundt
      This implements a bit of rework for the PMB code, which permits us to
      kill off the legacy PMB mode completely. Rather than trusting the boot
      loader to do the right thing, we do a quick verification of the PMB
      contents to determine whether to have the kernel setup the initial
      mappings or whether it needs to mangle them later on instead.
      
      If we're booting from legacy mappings, the kernel will now take control
      of them and make them match the kernel's initial mapping configuration.
      This is accomplished by breaking the initialization phase out into
      multiple steps: synchronization, merging, and resizing. With the recent
      rework, the synchronization code establishes page links for compound
      mappings already, so we build on top of this for promoting mappings and
      reclaiming unused slots.
      
      At the same time, the changes introduced for the uncached helpers also
      permit us to dynamically resize the uncached mapping without any
      particular headaches. The smallest page size is more than sufficient for
      mapping all of kernel text, and as we're careful not to jump to any far
      off locations in the setup code the mapping can safely be resized
      regardless of whether we are executing from it or not.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      d01447b3
    • sh: Provide uncached I/O helpers. · b8f7918f
      Committed by Paul Mundt
      There are lots of registers that can only be updated from the uncached
      mapping, so we add some helpers for those cases in order to make it
      easier to ensure that we only make the jump when it's absolutely
      necessary.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      b8f7918f
  7. 17 Feb, 2010 (6 commits)
  8. 16 Feb, 2010 (2 commits)
  9. 12 Feb, 2010 (2 commits)
  10. 08 Feb, 2010 (3 commits)
  11. 06 Feb, 2010 (1 commit)
  12. 03 Feb, 2010 (1 commit)
  13. 02 Feb, 2010 (1 commit)
  14. 01 Feb, 2010 (4 commits)
  15. 30 Jan, 2010 (1 commit)
  16. 29 Jan, 2010 (2 commits)
  17. 28 Jan, 2010 (2 commits)
  18. 27 Jan, 2010 (1 commit)
  19. 26 Jan, 2010 (2 commits)
  20. 25 Jan, 2010 (1 commit)
  21. 21 Jan, 2010 (1 commit)