1. 16 June 2008, 1 commit
  2. 05 May 2008, 1 commit
  3. 21 December 2007, 2 commits
    • [POWERPC] spufs: rework class 0 and 1 interrupt handling · d6ad39bc
      Jeremy Kerr authored
      Based on original patches from
       Arnd Bergmann <arnd.bergman@de.ibm.com>; and
       Luke Browning <lukebr@linux.vnet.ibm.com>
      
      Currently, spu contexts need to be loaded to the SPU in order to take
      class 0 and class 1 exceptions.
      
      This change makes the actual interrupt handlers much simpler (i.e., they
      just set the exception information in the context save area), and defers
      the handling to the spufs_handle_class[01] functions, called from
      spufs_run_spu.
      
      This should improve the concurrency of the spu scheduling, leading to
      greater SPU utilization when SPUs are overcommitted.
      Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
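      Below is a minimal illustrative sketch (not the actual kernel source) of
      the deferred-handling pattern this commit describes: the hard interrupt
      handler only records the exception state in the context save area, and
      the fault is resolved later from process context by the code called from
      spufs_run_spu. All struct and function names here are assumptions made
      for illustration.

        /* names and layout are assumptions made for this sketch */
        struct csa_sketch {
                unsigned long class_1_dar;      /* faulting effective address */
                unsigned long class_1_dsisr;    /* fault reason bits */
        };

        struct ctx_sketch {
                struct csa_sketch csa;
                int fault_pending;
        };

        /* hard-IRQ path: record only, never touch the mm from here */
        static void class1_irq_record(struct ctx_sketch *ctx,
                                      unsigned long dar, unsigned long dsisr)
        {
                ctx->csa.class_1_dar = dar;
                ctx->csa.class_1_dsisr = dsisr;
                ctx->fault_pending = 1;         /* checked from the run loop */
        }

        /* process-context path, called from the spufs_run_spu loop */
        static int class1_resolve(struct ctx_sketch *ctx)
        {
                if (!ctx->fault_pending)
                        return 0;
                ctx->fault_pending = 0;
                /* the real code calls spufs_handle_class1() here, which may
                 * fault pages in and may sleep */
                return 0;
        }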
    • [POWERPC] spufs: move fault, lscsa_alloc and switch code to spufs module · 7cd58e43
      Jeremy Kerr authored
      Currently, part of the spufs code (switch.o, lscsa_alloc.o and fault.o)
      is compiled directly into the kernel.
      
      This change moves these components of spufs into the spufs module.
      
      The lscsa and switch objects are fairly straightforward to move in.
      
      For the fault.o module, we split the fault-handling code into two
      parts: a/p/p/c/spu_fault.c and a/p/p/c/spufs/fault.c. The former is for
      the in-kernel spu_handle_mm_fault function, and we move the rest of the
      fault-handling code into spufs.
      Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
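      A hedged sketch of the split this commit describes: a small in-kernel
      file keeps and exports spu_handle_mm_fault, while the rest of the fault
      handling moves into the spufs module and simply calls the exported
      helper. The signature and the helper on the module side are assumptions
      based on the commit text.

        /* built into the kernel (the spu_fault.c side) -- signature assumed */
        #include <linux/module.h>

        struct mm_struct;

        int spu_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
                                unsigned long dsisr, unsigned *flt)
        {
                /* resolve the fault against the owning mm ... */
                return 0;
        }
        EXPORT_SYMBOL_GPL(spu_handle_mm_fault);

        /* part of the spufs module (the spufs/fault.c side) -- sketch */
        static int spufs_resolve_fault(struct mm_struct *mm, unsigned long ea,
                                       unsigned long dsisr)
        {
                unsigned flt = 0;

                /* module code now only calls the exported in-kernel helper */
                return spu_handle_mm_fault(mm, ea, dsisr, &flt);
        }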
  4. 20 December 2007, 1 commit
  5. 21 July 2007, 1 commit
  6. 09 May 2007, 1 commit
  7. 24 April 2007, 1 commit
    • [POWERPC] spufs: make spu page faults not block scheduling · 57dace23
      Arnd Bergmann authored
      Until now, we have always entered the spu page fault handler
      with a mutex for the spu context held. This has multiple
      bad side-effects:
      - it becomes impossible to suspend the context during
        page faults
      - if an spu program attempts to access its own mmio
        areas through DMA, we get an immediate livelock when
        the nopage function tries to acquire the same mutex
      
      This patch makes the page fault logic operate on a
      struct spu_context instead of a struct spu, and moves it
      from spu_base.c to a new file fault.c inside of spufs.
      
      We now also need to copy the dar and dsisr contents
      of the last fault into the saved context to keep them
      accessible in case we schedule out the context before
      activating the page fault handler.
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
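      For illustration, a minimal sketch (names and layout assumed, not the
      actual kernel source) of the behaviour the last paragraph describes: the
      fault registers are copied into the saved context, so the fault can
      still be resolved after the context has been scheduled out.

        struct fault_regs_sketch {
                unsigned long dar;      /* faulting effective address */
                unsigned long dsisr;    /* fault reason */
        };

        struct spu_context_sketch {
                struct fault_regs_sketch csa;   /* saved copy, valid when not loaded */
                struct fault_regs_sketch *hw;   /* live registers, valid when loaded */
                int loaded;
        };

        /* called before the context may be scheduled out */
        static void save_fault_regs(struct spu_context_sketch *ctx)
        {
                if (ctx->loaded) {
                        ctx->csa.dar = ctx->hw->dar;
                        ctx->csa.dsisr = ctx->hw->dsisr;
                }
        }

        /* the fault handler now only needs the context, not the physical spu */
        static unsigned long fault_address(struct spu_context_sketch *ctx)
        {
                return ctx->loaded ? ctx->hw->dar : ctx->csa.dar;
        }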
  8. 10 March 2007, 1 commit
    • [POWERPC] Fix spu SLB invalidations · 94b2a439
      Benjamin Herrenschmidt authored
      The SPU code doesn't properly invalidate the SPUs' SLBs when necessary,
      for example when changing a segment size from the hugetlbfs code. In
      addition, it saves and restores the SLB content on context switches,
      which makes it harder to properly handle those invalidations.
      
      This patch removes the saving & restoring for now; something more
      efficient might be found later on. It also adds a spu_flush_all_slbs(mm)
      that can be used by the core mm code to flush the SLBs of all SPEs that
      are running a given mm at the time of the flush.
      
      In order to do that, it adds a spinlock to the list of all SPEs and moves
      some bits & pieces from spufs to spu_base.c.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
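      A rough sketch of what a spu_flush_all_slbs(mm)-style helper can look
      like, using standard kernel list and spinlock primitives: walk a
      lock-protected list of all SPEs and invalidate the SLB of each one
      currently running the given mm. The list, lock, and field names below
      are assumptions, not the actual kernel source.

        #include <linux/list.h>
        #include <linux/spinlock.h>

        struct mm_struct;

        /* sketch type: the real struct spu carries much more state */
        struct spu_sketch {
                struct list_head full_list;     /* entry on the global SPE list */
                struct mm_struct *mm;           /* mm currently running on this SPE */
        };

        static LIST_HEAD(spe_list);
        static DEFINE_SPINLOCK(spe_list_lock);

        /* flush the SLBs of every SPE that is running @mm right now */
        static void flush_all_slbs_sketch(struct mm_struct *mm)
        {
                struct spu_sketch *spu;
                unsigned long flags;

                spin_lock_irqsave(&spe_list_lock, flags);
                list_for_each_entry(spu, &spe_list, full_list) {
                        if (spu->mm == mm) {
                                /* the real code invalidates this SPE's SLB here */
                        }
                }
                spin_unlock_irqrestore(&spe_list_lock, flags);
        }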
  9. 25 October 2006, 1 commit
  10. 21 June 2006, 1 commit
    • [POWERPC] spufs: one more fix for 64k pages · 37950718
      arnd@arndb.de authored
      The SPU context save/restore code is currently built
      for a 4k page size and we provide a _shipped version
      of it since most people don't have the spu toolchain
      that is needed to rebuild that code.
      
      This patch hardcodes the data structures to a 64k
      page alignment, which also guarantees 4k alignment
      but unfortunately wastes 60k of memory per SPU
      context that is created in the running system.
      
      We will follow up on this with another patch to
      reduce that overhead, or maybe redo the context
      save/restore logic to do this part entirely differently,
      but for now it should make experimental systems
      work with either page size.
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
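      A small sketch of what "hardcoding the data structures to a 64k page
      alignment" can look like: the save/restore area is forced to a 64KiB
      boundary, which also satisfies 4k alignment at the cost of up to 60k of
      padding per context, as the commit notes. The struct name and members
      are assumptions.

        #define SPU_64K_PAGE    0x10000

        /* local store image plus register save areas, forced to a 64k
         * boundary so the shipped SPU-side code works with either base
         * page size */
        struct lscsa_sketch {
                unsigned char ls[256 * 1024];   /* local store image */
                unsigned int  gprs[128][4];     /* 128 x 128-bit registers */
        } __attribute__((aligned(SPU_64K_PAGE)));

        /* C11 check; in-kernel code would use BUILD_BUG_ON() instead */
        _Static_assert((sizeof(struct lscsa_sketch) % SPU_64K_PAGE) == 0,
                       "save area must occupy whole 64k pages");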
  11. 09 January 2006, 4 commits
    • [PATCH] powerpc: sanitize header files for user space includes · 88ced031
      Arnd Bergmann authored
      include/asm-ppc/ had #ifdef __KERNEL__ in all header files that
      are not meant for use by user space; include/asm-powerpc does
      not have this yet.
      
      This patch gets us a lot closer to that goal. There are a few cases
      where I was not sure, so I left them out. I have verified
      that no CONFIG_* symbols are used outside of __KERNEL__
      any more and that there are no obvious compile errors when
      including any of the headers in user space libraries.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
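      The pattern the commit applies is simply wrapping kernel-only
      definitions in __KERNEL__ guards so the exported headers are safe to
      include from user space. A generic, made-up example of the idiom (not a
      specific header from the patch):

        #ifndef _ASM_POWERPC_EXAMPLE_H
        #define _ASM_POWERPC_EXAMPLE_H

        /* definitions shared with user space stay outside the guard */
        #define EXAMPLE_USER_VISIBLE_FLAG       0x1

        #ifdef __KERNEL__
        /* kernel-only parts: CONFIG_* usage, internal structs, helpers */
        #ifdef CONFIG_PPC64
        extern void example_kernel_only_setup(void);
        #endif
        #endif /* __KERNEL__ */

        #endif /* _ASM_POWERPC_EXAMPLE_H */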
    • [PATCH] spufs: cooperative scheduler support · 8b3d6663
      Arnd Bergmann authored
      This adds a scheduler for SPUs to make it possible to use
      more logical SPUs than there are physical ones present in the
      system.
      
      Currently, there is no support for preempting a running
      SPU thread; it has to leave the SPU either by triggering
      an event on the SPU that causes it to return to the
      owning thread or by being sent a signal.
      
      This patch also adds operations that enable accessing an SPU
      in either runnable or saved state. We use an RW semaphore
      to protect the state of the SPU from changing underneath
      us while we hold it for reading. In order to change
      the state, it is acquired for writing and a context save
      or restore is executed before downgrading the semaphore
      to read-only.
      
      From: Mark Nutter <mnutter@us.ibm.com>,
            Uli Weigand <Ulrich.Weigand@de.ibm.com>
      Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
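      A condensed sketch of the locking scheme described above, using the
      kernel rw_semaphore API: readers hold the semaphore while the SPU state
      must not change; a state change takes it for writing, performs the save
      or restore, and then downgrades back to read mode. The structure and
      function names are assumptions.

        #include <linux/rwsem.h>

        struct spu_ctx_sketch {
                struct rw_semaphore state_sema;
                int state;                      /* e.g. runnable vs. saved */
        };

        /* callers that only access the SPU take the semaphore for reading */
        static void access_spu(struct spu_ctx_sketch *ctx)
        {
                down_read(&ctx->state_sema);
                /* ... touch the registers or the saved image; the state
                 * cannot change while we hold the read side ... */
                up_read(&ctx->state_sema);
        }

        /* a state change takes the semaphore for writing, then downgrades */
        static void change_state(struct spu_ctx_sketch *ctx, int new_state)
        {
                down_write(&ctx->state_sema);
                /* ... perform the context save or restore ... */
                ctx->state = new_state;
                downgrade_write(&ctx->state_sema);      /* now held for reading */
                /* ... continue with read-side work ... */
                up_read(&ctx->state_sema);
        }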
    • [PATCH] kernel-side context switch code for spufs · 7c038749
      Mark Nutter authored
      This adds the code needed to perform a context switch from
      spufs, following the recommended 76-step sequence.
      Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] spufs: switchable spu contexts · 5473af04
      Mark Nutter authored
      Add some infrastructure for saving and restoring the context of an
      SPE. This patch creates a new structure that can hold the whole
      state of a physical SPE in memory. It also contains code that
      avoids races during the context switch and the binary code that
      is loaded to the SPU in order to access its registers.
      
      The actual PPE- and SPE-side context switch code is in two separate
      patches.
      Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
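      A rough sketch of the kind of structure this commit introduces: one
      in-memory container for everything that defines a physical SPE's state,
      so it can be saved, kept aside, and later restored. Member names and
      sizes are assumptions; the real structure holds considerably more.

        /* sketch of an in-memory container for the full state of one SPE */
        struct spe_state_sketch {
                unsigned char  local_store[256 * 1024]; /* LS image */
                unsigned int   gprs[128][4];            /* 128 x 128-bit registers */
                unsigned int   spu_status;              /* run status at save time */
                unsigned long  mfc_queue[16];           /* pending DMA commands */
                unsigned long  decrementer;
                unsigned long  srr0;                    /* restart address */
        };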