1. 19 Dec 2007, 6 commits
  2. 12 Oct 2007, 1 commit
    • [POWERPC] Use 1TB segments · 1189be65
      Committed by Paul Mackerras
      This makes the kernel use 1TB segments for all kernel mappings and for
      user addresses of 1TB and above, on machines which support them
      (currently POWER5+, POWER6 and PA6T).
      
      We detect that the machine supports 1TB segments by looking at the
      ibm,processor-segment-sizes property in the device tree.
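
      As a minimal sketch (mine, not the patch's code), probing that
      property with the current of_* helpers could look roughly like
      this; for_each_node_by_type() and of_get_property() are real
      kernel APIs, but the helper name and the reading of each cell as
      a segment-size shift (28 = 256MB, 40 = 1TB) are assumptions:

      #include <linux/of.h>
      #include <linux/types.h>

      static bool cpu_supports_1tb_segments(void)
      {
              struct device_node *cpu;
              const __be32 *prop;
              int len, i;

              for_each_node_by_type(cpu, "cpu") {
                      prop = of_get_property(cpu,
                                      "ibm,processor-segment-sizes", &len);
                      if (!prop)
                              continue;
                      /* Each cell holds one supported segment size as a
                         power-of-two shift. */
                      for (i = 0; i < len / 4; i++) {
                              if (be32_to_cpu(prop[i]) == 40) { /* 1TB */
                                      of_node_put(cpu);
                                      return true;
                              }
                      }
              }
              return false;
      }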
      
      We don't currently use 1TB segments for user addresses < 1T, since
      that would effectively prevent 32-bit processes from using huge pages
      unless we also had a way to revert to using 256MB segments.  That
      would be possible but would involve extra complications (such as
      keeping track of which segment size was used when HPTEs were inserted)
      and is not addressed here.
      
      Parts of this patch were originally written by Ben Herrenschmidt.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  3. 19 Sep 2007, 1 commit
  4. 11 Sep 2007, 1 commit
  5. 10 Aug 2007, 1 commit
  6. 21 Jul 2007, 9 commits
  7. 18 Jul 2007, 1 commit
  8. 03 Jul 2007, 2 commits
  9. 28 Jun 2007, 1 commit
  10. 09 May 2007, 1 commit
    • [POWERPC] Introduce address space "slices" · d0f13e3c
      Committed by Benjamin Herrenschmidt
      The basic issue is to be able to do what hugetlbfs does but with
      different page sizes for some other special filesystems; more
      specifically, my needs are:
      
       - Huge pages
      
       - SPE local store mappings using 64K pages on a 4K base page size
      kernel on Cell
      
       - Some special 4K segments in 64K-page kernels for mapping a dodgy
      type of powerpc-specific infiniband hardware that requires 4K MMU
      mappings for various reasons I won't explain here.
      
      The main issues are:
      
       - To maintain/keep track of the page size per "segment" (we can
      only have one page size per segment on powerpc; segments are 256MB
      divisions of the address space).
      
       - To make sure special mappings stay within their allotted
      "segments" (including MAP_FIXED crap)
      
       - To make sure everybody else doesn't mmap/brk/grow_stack into a
      "segment" that is used for a special mapping
      
      Some of the necessary mechanisms to handle that were present in the
      hugetlbfs code, but mostly in ways not suitable for anything else.
      
      The patch relies on some changes to the generic get_unmapped_area()
      that just got merged.  It still hijacks hugetlb callbacks here and
      there, as the generic code hasn't been entirely cleaned up yet, but
      that shouldn't be a problem.
      
      So what is a slice?  Well, I re-used the mechanism formerly used by our
      hugetlbfs implementation, which divides the address space into
      "meta-segments" that I called "slices".  The division is done using
      256MB slices below 4G, and 1T slices above.  Thus the address space is
      currently divided into 16 "low" slices and 16 "high" slices.  (Special
      case: high slice 0 is the area between 4G and 1T.)

      Doing so significantly simplifies the tracking of segments and avoids
      having to keep track of all the 256MB segments in the address space.
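
      As a rough illustration of that division (my sketch; the real
      logic lives in arch/powerpc/mm/slice.c and differs in detail),
      computing a slice index from an address could look like:

      #include <stdint.h>

      #define SLICE_LOW_SHIFT   28    /* 256MB low slices          */
      #define SLICE_HIGH_SHIFT  40    /* 1TB high slices           */
      #define SLICE_NUM_LOW     16    /* low slices cover 0 .. 4GB */

      /* Return 0-15 for a low slice, 16-31 for a high one. */
      static unsigned int addr_to_slice(uint64_t addr)
      {
              if (addr < ((uint64_t)SLICE_NUM_LOW << SLICE_LOW_SHIFT))
                      return addr >> SLICE_LOW_SHIFT;
              /* Everything from 4GB up to 1TB lands in high slice 0. */
              return SLICE_NUM_LOW + (addr >> SLICE_HIGH_SHIFT);
      }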
      
      While I used the "concepts" of hugetlbfs, I mostly re-implemented
      everything in a more generic way and "ported" hugetlbfs to it.
      
      Slices can have an associated page size, which is encoded in the mmu
      context and used by the SLB miss handler to set the segment sizes.  The
      hash code currently doesn't care; it has a specific check for hugepages,
      though I might add a mechanism to provide per-slice hash mapping
      functions in the future.
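
      One plausible encoding, sketched here under the assumption of a
      4-bit page-size index per slice packed into one 64-bit word per
      slice group (the names are illustrative, not the patch's):

      #include <stdint.h>

      /* 16 slices x 4 bits of page-size index fit in one word. */
      struct slice_psizes {
              uint64_t low;   /* 256MB slices below 4GB */
              uint64_t high;  /* 1TB slices above       */
      };

      static unsigned int get_slice_psize(uint64_t word, unsigned int slice)
      {
              return (word >> (slice * 4)) & 0xf;
      }

      static void set_slice_psize(uint64_t *word, unsigned int slice,
                                  unsigned int psize)
      {
              *word &= ~(0xfULL << (slice * 4));
              *word |= (uint64_t)psize << (slice * 4);
      }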
      
      The slice code provides a pair of "generic" get_unmapped_area()
      functions (bottomup and topdown) that should work with any slice size.
      There is some trickiness here, so I would appreciate it if people had
      a look at the implementation of these and let me know if I got
      something wrong.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  11. 30 Apr 2007, 1 commit
  12. 24 Apr 2007, 4 commits
  13. 10 Mar 2007, 1 commit
    • [POWERPC] Fix spu SLB invalidations · 94b2a439
      Committed by Benjamin Herrenschmidt
      The SPU code doesn't properly invalidate SPU SLBs when necessary,
      for example when changing a segment size from the hugetlbfs code. In
      addition, it saves and restores the SLB content on context switches,
      which makes it harder to properly handle those invalidations.
      
      This patch removes the saving & restoring for now; something more
      efficient might be found later on. It also adds a spu_flush_all_slbs(mm)
      that can be used by the core mm code to flush the SLBs of all SPEs that
      are running a given mm at the time of the flush.

      In order to do that, it adds a spinlock to the list of all SPEs and moves
      some bits & pieces from spufs to spu_base.c.
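
      A hedged sketch of what such a flush helper could look like (the
      list and lock names are assumptions, and spu_invalidate_slbs()
      stands in for the per-SPU MMIO invalidation):

      #include <linux/list.h>
      #include <linux/spinlock.h>
      #include <linux/mm_types.h>
      #include <asm/spu.h>

      static LIST_HEAD(spu_full_list);
      static DEFINE_SPINLOCK(spu_full_list_lock);

      void spu_flush_all_slbs(struct mm_struct *mm)
      {
              struct spu *spu;
              unsigned long flags;

              spin_lock_irqsave(&spu_full_list_lock, flags);
              list_for_each_entry(spu, &spu_full_list, full_list)
                      /* Only SPEs currently running this mm need it. */
                      if (spu->mm == mm)
                              spu_invalidate_slbs(spu);
              spin_unlock_irqrestore(&spu_full_list_lock, flags);
      }
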
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  14. 24 Jan 2007, 1 commit
  15. 04 Dec 2006, 4 commits
  16. 10 Nov 2006, 1 commit
  17. 25 Oct 2006, 4 commits
    • [POWERPC] add support for stopping spus from xmon · ff8a8f25
      Committed by Michael Ellerman
      This patch adds support for stopping and restarting spus
      from xmon. We use the spu master runcntl bit to stop execution;
      this is apparently the "right" way to control spu execution, and
      spufs will be changed in the future to use this bit.

      Testing has shown that to restart execution we have to turn the
      master runcntl bit on and also rewrite the spu runcntl bit, even
      if it is already set to 1 (running).
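
      Sketched below is roughly what that sequence could look like;
      spu_mfc_sr1_get()/spu_mfc_sr1_set() and the problem-state
      runcntl register are real kernel interfaces, but the bit mask
      value and the function names here are my illustration:

      #include <asm/io.h>
      #include <asm/spu.h>
      #include <asm/spu_priv1.h>

      #define SR1_MASTER_RUN_CONTROL  0x10ULL  /* assumed bit position */

      static void xmon_stop_spu(struct spu *spu)
      {
              /* Clearing the master run control bit halts the SPU. */
              u64 sr1 = spu_mfc_sr1_get(spu);

              spu_mfc_sr1_set(spu, sr1 & ~SR1_MASTER_RUN_CONTROL);
      }

      static void xmon_restart_spu(struct spu *spu, u32 saved_runcntl)
      {
              u64 sr1 = spu_mfc_sr1_get(spu);

              spu_mfc_sr1_set(spu, sr1 | SR1_MASTER_RUN_CONTROL);
              /* Rewrite runcntl even if it already reads 1 (running). */
              out_be32(&spu->problem->spu_runcntl_RW, saved_runcntl);
      }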
      
      Stopping spus is triggered by the xmon command 'ss' - "spus stop"
      perhaps. Restarting them is triggered via 'sr'. Restart doesn't
      start execution on spus unless they were running prior to being
      stopped by xmon.
      
      Walking spu->full_list in xmon after a panic would mean that
      corruption of any spu struct could make all the others
      inaccessible. To avoid this, and also to make the next patch
      easier, we cache pointers to all spus during boot.
      
      We attempt to catch and recover from errors while stopping and
      restarting the spus, but as with most xmon functionality there are
      no guarantees that performing these operations won't crash xmon
      itself.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [POWERPC] cell: add support for registering sysfs attributes to spus · e570beb6
      Committed by Christian Krafft
      In order to add sysfs attributes to all spus, there is a
      need for a list of all available spus. Adding the device_node
      also makes sense, as it is needed for proper register access.
      This patch also adds two functions to create and remove sysfs
      attributes and attribute_groups on all spus.
      That allows grouping spu attributes in a subdirectory like:
      /sys/devices/system/spu/spuX/group_name/what_ever
      This will be used by cbe_thermal to group all attributes dealing with
      thermal support in one directory.
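
      A rough sketch of the "add a group to every spu" half
      (sysfs_create_group() is the real sysfs API; the spu list and
      the sysdev embedding are assumptions of this sketch):

      #include <linux/list.h>
      #include <linux/sysfs.h>
      #include <asm/spu.h>

      extern struct list_head spu_full_list;  /* assumed global list */

      int spu_add_sysdev_attr_group(struct attribute_group *attrs)
      {
              struct spu *spu;
              int ret;

              /* Creates /sys/devices/system/spu/spuX/<attrs->name>/ */
              list_for_each_entry(spu, &spu_full_list, full_list) {
                      ret = sysfs_create_group(&spu->sysdev.kobj, attrs);
                      if (ret)
                              return ret;
              }
              return 0;
      }
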
      Signed-off-by: Christian Krafft <krafft@de.ibm.com>
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [POWERPC] spufs: allow isolated mode apps by starting the SPE loader · 0afacde3
      Committed by arnd@arndb.de
      This patch adds general support for isolated mode SPE apps.
      
      Isolated apps are started indirectly, by a dedicated loader "kernel".
      This patch starts the loader when spe_create is invoked with the
      ISOLATE flag. We do this at spe_create time to allow libspe to pass the
      isolated app in before calling spe_run.
      
      The loader is read from the device tree, at the location
      "/spu-isolation/loader". If the loader is not present, an attempt to
      start an isolated SPE binary will fail with -ENODEV.
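
      A hedged sketch of that lookup with today's device-tree helpers
      (of_find_node_by_path() and of_get_property() are real APIs; the
      helper name and the kmalloc'ed copy, per the alignment update
      noted below, are my paraphrase):

      #include <linux/err.h>
      #include <linux/errno.h>
      #include <linux/of.h>
      #include <linux/slab.h>

      static void *spu_fetch_isolated_loader(int *lenp)
      {
              struct device_node *dn;
              const void *loader;
              void *buf;
              int len;

              dn = of_find_node_by_path("/spu-isolation");
              if (!dn)
                      return ERR_PTR(-ENODEV);
              loader = of_get_property(dn, "loader", &len);
              of_node_put(dn);
              if (!loader)
                      return ERR_PTR(-ENODEV);

              /* The property data may not be suitably aligned, so hand
                 back a kmalloc'ed copy instead of the data in place. */
              buf = kmemdup(loader, len, GFP_KERNEL);
              if (!buf)
                      return ERR_PTR(-ENOMEM);
              *lenp = len;
              return buf;
      }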
      
      Update: loader needs to be correctly aligned - copy to a kmalloced buf.
      Update: remove workaround for systemsim/spurom 'L-bit' bug, which has
              been fixed.
      Update: don't write to runcntl on spu_run_init: SPU is already running.
      Update: do spu_setup_isolated earlier
      
      Tested on systemsim.
      Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [POWERPC] cell: remove unused struct spu variable · cc21a66d
      Committed by Geoff Levand
      Remove the mostly unused variable isrc from struct spu and a forgotten
      function declaration.
      Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>