1. 26 Feb 2010, 1 commit
  2. 23 Feb 2010, 1 commit
    • sh: wire up SET/GET_UNALIGN_CTL. · 94ea5e44
      Committed by Paul Mundt
      This hooks up the SET/GET_UNALIGN_CTL knobs cribbing the bulk of it from
      the PPC and ia64 implementations. The thread flags happen to be the
      logical inverse of what the global fault mode is set to, so this works
      out pretty cleanly. By default the global fault mode is used, with tasks
      now being able to override their own settings via prctl().
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  3. 22 Feb 2010, 2 commits
  4. 18 Feb 2010, 2 commits
    • sh: Merge legacy and dynamic PMB modes. · d01447b3
      Committed by Paul Mundt
      This implements a bit of rework for the PMB code, which permits us to
      kill off the legacy PMB mode completely. Rather than trusting the boot
      loader to do the right thing, we do a quick verification of the PMB
      contents to determine whether to have the kernel setup the initial
      mappings or whether it needs to mangle them later on instead.
      
      If we're booting from legacy mappings, the kernel will now take control
      of them and make them match the kernel's initial mapping configuration.
      This is accomplished by breaking the initialization phase out into
      multiple steps: synchronization, merging, and resizing. With the recent
      rework, the synchronization code establishes page links for compound
      mappings already, so we build on top of this for promoting mappings and
      reclaiming unused slots.
      
      At the same time, the changes introduced for the uncached helpers also
      permit us to dynamically resize the uncached mapping without any
      particular headaches. The smallest page size is more than sufficient for
      mapping all of kernel text, and as we're careful not to jump to any far
      off locations in the setup code the mapping can safely be resized
      regardless of whether we are executing from it or not.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Provide uncached I/O helpers. · b8f7918f
      Committed by Paul Mundt
      There are lots of registers that can only be updated from the uncached
      mapping, so we add some helpers for those cases in order to make it
      easier to ensure that we only make the jump when it's absolutely
      necessary.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  5. 17 Feb 2010, 5 commits
    • sh: PMB locking overhaul. · d53a0d33
      Committed by Paul Mundt
      This implements some locking for the PMB code. A high-level rwlock is
      added for dealing with rw accesses on the entry map, while a per-entry
      data structure spinlock is added to deal with the PMB entry changing out
      from underneath us.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Build PMB entry links for existing contiguous multi-page mappings. · d7813bc9
      Committed by Paul Mundt
      This plugs in entry sizing support for existing mappings and then builds
      on top of that for linking together entries that are mapping contiguous
      areas. This will ultimately permit us to coalesce mappings and promote
      head pages while reclaiming PMB slots for dynamic remapping.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: uncached mapping helpers. · 9edef286
      Committed by Paul Mundt
      This adds some helper routines for uncached mapping support. These
      simplify some of the cases where we need to check the uncached mapping
      boundaries, in addition to giving us a centralized location to build
      more complex manipulations on top of.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: PMB tidying. · 51becfd9
      Committed by Paul Mundt
      Some overdue cleanup of the PMB code, killing off unused functionality
      and duplication sprinkled about the tree.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Fix up more 64-bit pgprot truncation on SH-X2 TLB. · 7bdda620
      Committed by Paul Mundt
      Both the store queue API and the PMB remapping take unsigned long for
      their pgprot flags, which cuts off the extended protection bits. In the
      case of the PMB this isn't really a problem since the cache attribute
      bits that we care about are all in the lower 32-bits, but we do it just
      to be safe. The store queue remapping on the other hand depends on the
      extended prot bits for enabling userspace access to the mappings.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  6. 16 Feb 2010, 2 commits
  7. 12 Feb 2010, 2 commits
  8. 08 Feb 2010, 3 commits
  9. 06 Feb 2010, 1 commit
  10. 03 Feb 2010, 1 commit
  11. 01 Feb 2010, 4 commits
  12. 30 Jan 2010, 1 commit
  13. 29 Jan 2010, 2 commits
  14. 28 Jan 2010, 2 commits
  15. 27 Jan 2010, 1 commit
  16. 26 Jan 2010, 2 commits
  17. 21 Jan 2010, 3 commits
  18. 20 Jan 2010, 3 commits
  19. 19 Jan 2010, 2 commits