1. 04 Mar 2010 (1 commit)
    • sh: fix up MMU reset with variable PMB mapping sizes. · 281983d6
      Authored by Paul Mundt
      Presently we run into issues with the MMU resetting the CPU when
      variable-sized mappings are employed. This takes a slightly more
      aggressive approach to keeping the TLB and cache state sane before
      establishing the mappings in order to cut down on races observed on
      SMP configurations.
      
      At the same time, we bump the VMA range up to the 0xb000...0xc000 range,
      as there still seems to be some undocumented behaviour in setting up
      variable mappings in the 0xa000...0xb000 range, resulting in a reset by
      the TLB.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
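
      In outline, the approach looks something like the sketch below. This is
      not the commit's verbatim code; flush_cache_all(), local_flush_tlb_all()
      and the function name are stand-ins for whatever the PMB setup path
      actually uses:

      ```c
      #include <linux/irqflags.h>
      #include <asm/cacheflush.h>
      #include <asm/tlbflush.h>

      /* Hedged sketch: quiesce cache and TLB state before rewriting PMB
       * entries, so no stale line or translation points at a mapping that
       * is about to change underneath it. */
      static void pmb_apply_mappings(void)
      {
              unsigned long flags;

              local_irq_save(flags);

              flush_cache_all();      /* write back and invalidate dirty lines */
              local_flush_tlb_all();  /* drop stale translations on this CPU */

              /* ... program the new, variable-sized PMB entries here ... */

              local_irq_restore(flags);
      }
      ```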
  2. 03 Mar 2010 (1 commit)
  3. 02 Mar 2010 (3 commits)
  4. 01 Mar 2010 (1 commit)
  5. 18 Feb 2010 (2 commits)
    • sh: Merge legacy and dynamic PMB modes. · d01447b3
      Authored by Paul Mundt
      This implements a bit of rework for the PMB code, which permits us to
      kill off the legacy PMB mode completely. Rather than trusting the boot
      loader to do the right thing, we do a quick verification of the PMB
      contents to determine whether to have the kernel setup the initial
      mappings or whether it needs to mangle them later on instead.
      
      If we're booting from legacy mappings, the kernel will now take control
      of them and make them match the kernel's initial mapping configuration.
      This is accomplished by breaking the initialization phase out into
      multiple steps: synchronization, merging, and resizing. With the recent
      rework, the synchronization code establishes page links for compound
      mappings already, so we build on top of this for promoting mappings and
      reclaiming unused slots.
      
      At the same time, the changes introduced for the uncached helpers also
      permit us to dynamically resize the uncached mapping without any
      particular headaches. The smallest page size is more than sufficient for
      mapping all of kernel text, and as we're careful not to jump to any
      far-off locations in the setup code, the mapping can safely be resized
      regardless of whether we are executing from it or not.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
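
      A minimal sketch of the reworked bring-up flow described above (the
      step names follow the commit message; the exact function names are
      illustrative, not necessarily the commit's):

      ```c
      #include <linux/init.h>

      /* Hedged sketch of the multi-step PMB initialization: */
      void __init pmb_init(void)
      {
              /* 1. Synchronization: mirror whatever entries the boot loader
               *    left in hardware into the kernel's software PMB table,
               *    establishing page links for compound mappings. */
              pmb_synchronize();

              /* 2. Merging: promote runs of contiguous small entries into a
               *    single larger mapping and reclaim the unused slots. */
              pmb_merge();

              /* 3. Resizing: shrink the uncached mapping down to the smallest
               *    page size, which is sufficient to cover kernel text; the
               *    setup code avoids far jumps, so this is safe even while
               *    executing from the mapping being resized. */
              pmb_resize();
      }
      ```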
    • sh: Use uncached I/O helpers in PMB setup. · 2e450643
      Authored by Paul Mundt
      The PMB code is an example of something that spends an absurd amount of
      time running uncached when only a couple of operations really need to be.
      This switches over to the shiny new uncached helpers, permitting us to
      spend far more time running cached.
      
      Additionally, MMUCR twiddling is perfectly safe from cached space given
      that it's paired with a control register barrier, so fix that up, too.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
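
      The helper pattern being adopted looks roughly like this (a sketch
      modelled on arch/sh's uncached I/O helpers; MMUCR stands for the MMU
      control register address and ctrl is an assumed local):

      ```c
      /* Sketch: hop to the uncached shadow only for the single store that
       * needs it, rather than running whole functions uncached. */
      #define writel_uncached(v, a)                   \
              do {                                    \
                      jump_to_uncached();             \
                      __raw_writel((v), (a));         \
                      back_to_cached();               \
              } while (0)

      /* MMUCR twiddling, by contrast, is safe from cached space as long
       * as the write is paired with a control register barrier: */
      __raw_writel(ctrl, MMUCR);
      ctrl_barrier();
      ```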
  6. 17 Feb 2010 (5 commits)
  7. 16 Feb 2010 (2 commits)
  8. 26 Jan 2010 (1 commit)
    • sh: Mass ctrl_in/outX to __raw_read/writeX conversion. · 9d56dd3b
      Authored by Paul Mundt
      The old ctrl in/out routines are non-portable and unsuitable for
      cross-platform use. While drivers/sh has already been sanitized, there
      is still quite a lot of code that is not. This converts the arch/sh/ bits
      over, which permits us to flag the routines as deprecated whilst still
      building with -Werror for the architecture code, and to ensure that
      future users are not added.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
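
      The conversion itself is mechanical. For example, with PMB_ADDR
      standing in for an arch register address and PMB_V for a valid bit:

      ```c
      /* Before: the deprecated, SH-only accessors */
      unsigned long data = ctrl_inl(PMB_ADDR);
      ctrl_outl(data | PMB_V, PMB_ADDR);

      /* After: the portable raw MMIO accessors */
      unsigned long data = __raw_readl(PMB_ADDR);
      __raw_writel(data | PMB_V, PMB_ADDR);
      ```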
  9. 21 Jan 2010 (1 commit)
  10. 20 Jan 2010 (1 commit)
  11. 18 Jan 2010 (1 commit)
    • sh: Setup early PMB mappings. · 3d467676
      Authored by Matt Fleming
      More and more boards are going to start shipping that boot with the MMU
      in 32BIT mode by default. Previously we relied on the bootloader to
      setup PMB mappings for use by the kernel but we also need to cater for
      boards whose bootloaders don't set them up.
      
      If CONFIG_PMB_LEGACY is not enabled we have full control over our PMB
      mappings and can compress our address space. Usually the distance
      between the cached and uncached mappings of RAM is 512MB; however, we
      can compress that distance down to the amount of RAM present on the
      board.
      
      pmb_init() now becomes much simpler. It no longer has to calculate any
      mappings, it just has to synchronise the software PMB table with the
      hardware.
      
      Tested on SDK7786 and SH7785LCR.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
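
      The address-space compression amounts to a one-line change in where the
      uncached mapping starts (a sketch with illustrative variable names;
      SZ_512M is the kernel's size constant):

      ```c
      /* Old layout: the uncached shadow always sat at a fixed +512MB
       * offset above the cached mapping of RAM. */
      uncached_start = cached_start + SZ_512M;

      /* Compressed layout: start the uncached mapping right after the RAM
       * that is actually present (e.g. +128MB on a 128MB board), shrinking
       * the PMB footprint accordingly. */
      uncached_start = cached_start + memory_size;
      ```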
  12. 13 Jan 2010 (1 commit)
    • sh: fixed PMB mode refactoring. · a0ab3668
      Authored by Paul Mundt
      This introduces some much overdue chainsawing of the fixed PMB support.
      Fixed PMB was introduced initially to work around the fact that dynamic
      PMB mode was relatively broken, though the two were never intended to
      converge. The main areas where there are differences are whether the
      system is booted in 29-bit mode or 32-bit mode, and whether legacy
      mappings are to be preserved. Any system booting in true 32-bit mode will
      not care about legacy mappings, so these are roughly decoupled.
      
      Regardless of the entry point, PMB and 32BIT are directly related as far
      as the kernel is concerned, so we also switch back to having one select
      the other.
      
      With legacy mappings iterated through and applied in the initialization
      path, it's now possible to finally merge the two implementations and
      permit dynamic remapping on top of the remaining entries regardless of
      whether boot mappings are crafted by hand or inherited from the boot
      loader.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  13. 10 Oct 2009 (6 commits)
  14. 09 Oct 2009 (2 commits)
    • sh: Don't allocate smaller sized mappings on every iteration · a2767cfb
      Authored by Matt Fleming
      Currently we've got the less-than-ideal situation where, if we need to
      allocate a 256MB mapping, we'll allocate four entries like so:
      
      	 entry 1: 128MB
      	 entry 2:  64MB
      	 entry 3:  16MB
      	 entry 4:  16MB
      
      This is because as we execute the loop in pmb_remap() we will
      progressively try mapping the remaining address space with smaller and
      smaller sizes. This isn't good because the size we use on one iteration
      may be the perfect size to use on the next iteration, for instance when
      the initial size is divisible by one of the PMB mapping sizes.
      
      With this patch, we now only need two entries in the PMB to map 256MB
      of address space:
      
      	  entry 1: 128MB
      	  entry 2: 128MB
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
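
      A sketch of the corrected sizing logic, simplified from the pmb_remap()
      loop (pmb_sizes[] mirrors the four mapping sizes the PMB hardware
      supports; the helper name is illustrative):

      ```c
      #include <linux/kernel.h>
      #include <linux/sizes.h>

      static const unsigned long pmb_sizes[] = {
              SZ_512M, SZ_128M, SZ_64M, SZ_16M,
      };

      /* Rescan from the largest size on every iteration, so a size that
       * fit once can be reused: 256MB now maps as 128MB + 128MB rather
       * than a strictly descending run of entries. */
      static unsigned long pmb_size_for(unsigned long remaining)
      {
              unsigned int i;

              for (i = 0; i < ARRAY_SIZE(pmb_sizes); i++)
                      if (pmb_sizes[i] <= remaining)
                              return pmb_sizes[i];

              /* A tail smaller than 16MB still takes the smallest entry. */
              return pmb_sizes[ARRAY_SIZE(pmb_sizes) - 1];
      }
      ```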
    • sh: Plug PMB alloc memory leak · fc2bdefd
      Authored by Matt Fleming
      If we fail to allocate a PMB entry in pmb_remap() we must remember to
      clear and free any PMB entries that we may have previously allocated,
      e.g. if we were allocating a multi-entry mapping.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
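
      In outline, the error path now unwinds whatever part of a multi-entry
      mapping was already built before bailing out (a sketch; pmb_clear_entry()
      and the link field follow the shape of the PMB code but are assumptions,
      not verbatim):

      ```c
      pmbe = pmb_alloc(vaddr, phys, pmb_flags);
      if (!pmbe) {
              struct pmb_entry *p = pmbp;     /* entries built so far */

              /* Clear each previously programmed entry from the hardware
               * and hand its slot back to the allocator. */
              while (p) {
                      struct pmb_entry *next = p->link;

                      pmb_clear_entry(p);
                      pmb_free(p);
                      p = next;
              }
              return -ENOMEM;
      }
      ```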
  15. 16 Mar 2009 (1 commit)
  16. 20 Oct 2008 (1 commit)
  17. 28 Jul 2008 (1 commit)
  18. 27 Jul 2008 (1 commit)
  19. 19 Apr 2008 (1 commit)
  20. 28 Jan 2008 (3 commits)
  21. 17 Oct 2007 (1 commit)
  22. 21 Sep 2007 (1 commit)
  23. 20 Jul 2007 (1 commit)
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Authored by Paul Mundt
      Slab destructors were no longer supported after Christoph's c59def9f
      change. They've been BUGs for both slab and slub, and slob never
      supported them either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
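
      At a typical call site the fix-up is simply dropping the final argument.
      An illustrative caller (the pmb cache is used here only as an example):

      ```c
      /* Before: kmem_cache_create() took a constructor and a destructor. */
      pmb_cache = kmem_cache_create("pmb", sizeof(struct pmb_entry), 0,
                                    SLAB_PANIC, pmb_cache_ctor, pmb_cache_dtor);

      /* After: the dtor parameter is gone from the interface entirely. */
      pmb_cache = kmem_cache_create("pmb", sizeof(struct pmb_entry), 0,
                                    SLAB_PANIC, pmb_cache_ctor);
      ```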
  24. 14 May 2007 (1 commit)
    • sh: Kill off pmb slab cache destructor. · 38c425f6
      Authored by Paul Mundt
      This is the last remaining slab destructor in the kernel, which
      we kill off and move the resultant list tracking logic up to
      the pmb_alloc()/pmb_free() paths.
      
      As Christoph Lameter pointed out, it's potentially unsafe to be
      taking the list lock in the destructor anyway, so this is also
      more fundamentally correct.
      
      With this in place, we're all set for killing off slab destructors
      from the kernel entirely.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
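
      With the destructor gone, the bookkeeping lands directly in the
      allocation paths, roughly like the sketch below (lock and list helper
      names are assumptions, not verbatim kernel code):

      ```c
      #include <linux/slab.h>
      #include <linux/spinlock.h>

      static struct pmb_entry *pmb_alloc(unsigned long vpn, unsigned long ppn,
                                         unsigned long flags)
      {
              struct pmb_entry *pmbe;

              pmbe = kmem_cache_alloc(pmb_cache, GFP_KERNEL);
              if (!pmbe)
                      return NULL;

              pmbe->vpn   = vpn;
              pmbe->ppn   = ppn;
              pmbe->flags = flags;

              /* List tracking that previously lived in the slab dtor: */
              spin_lock_irq(&pmb_list_lock);
              pmb_list_add(pmbe);
              spin_unlock_irq(&pmb_list_lock);

              return pmbe;
      }

      static void pmb_free(struct pmb_entry *pmbe)
      {
              spin_lock_irq(&pmb_list_lock);
              pmb_list_del(pmbe);
              spin_unlock_irq(&pmb_list_lock);

              kmem_cache_free(pmb_cache, pmbe);
      }
      ```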