1. 11 Mar, 2009 1 commit
  2. 09 Jul, 2008 1 commit
  3. 30 Apr, 2008 3 commits
  4. 11 Mar, 2008 1 commit
    • [POWERPC] spufs: fix rescheduling of non-runnable contexts · c368392a
      Jeremy Kerr authored
      At present, we can hit the BUG_ON in __spu_update_sched_info by reading
      the regs file of a context between two calls to spu_run. The
      spu_release_saved() called by spufs_regs_read() places the (now
      non-runnable) context back on the run queue, so the next call to
      spu_run trips the BUG_ON.
      
      This change uses the SPU_SCHED_SPU_RUN flag to only reschedule a context
      if it's still in spu_run().
      Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
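      As a rough illustration (a minimal sketch based on the commit
      description, not necessarily the verbatim kernel code),
      spu_release_saved() now reschedules a context only when
      SPU_SCHED_SPU_RUN shows it is still inside spu_run():

          /* Sketch: only put the context back on the run queue if it was
           * active AND the owner is still inside spu_run(); a context
           * touched via the regs file between runs is left alone. */
          void spu_release_saved(struct spu_context *ctx)
          {
                  BUG_ON(ctx->state != SPU_STATE_SAVED);

                  if (test_and_clear_bit(SPU_SCHED_WAS_ACTIVE, &ctx->sched_flags) &&
                      test_bit(SPU_SCHED_SPU_RUN, &ctx->sched_flags))
                          spu_activate(ctx, 0);

                  spu_release(ctx);
          }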
  5. 28 Feb, 2008 1 commit
    • [POWERPC] spufs: fix invalid scheduling of forgotten contexts · 0111a701
      Jeremy Kerr authored
      At present, we have a situation where a context with no owner is
      re-scheduled by spu_forget:
      
      	Thread 1: reading regs file	Thread 2: context owner
      
      					spu_forget()
      						- ctx->owner = NULL
      						- set SPU_SCHED_WAS_ACTIVE
      
      	spu_acquire_saved()
      	- context is in saved state
      
      	spu_release_saved()
      	- SPU_SCHED_WAS_ACTIVE is set,
      	  so spu_activate() the context,
      	  which now has no owner
      
      In spu_forget(), we shouldn't be requesting a re-schedule by setting
      SPU_SCHED_WAS_ACTIVE. This change removes the set_bit() in
      spu_forget(), so that spu_release_saved() doesn't reinsert this
      destroyed context onto the run queue.
      Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
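      A sketch of spu_forget() after the change (simplified from the commit
      description; helper and field names follow the spufs code but may
      differ in detail). No SPU_SCHED_WAS_ACTIVE bit is set, so a later
      spu_release_saved() has nothing to reactivate:

          void spu_forget(struct spu_context *ctx)
          {
                  struct mm_struct *mm;

                  /* Open-coded spu_acquire_saved(), minus the
                   * set_bit(SPU_SCHED_WAS_ACTIVE) that used to request a
                   * reschedule of this soon-to-be-ownerless context. */
                  mutex_lock(&ctx->state_mutex);
                  if (ctx->state != SPU_STATE_SAVED)
                          spu_deactivate(ctx);

                  mm = ctx->owner;
                  ctx->owner = NULL;       /* context no longer has an owner */
                  mmput(mm);
                  spu_unmap_mappings(ctx);
                  spu_release(ctx);
          }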
  6. 28 Dec, 2007 1 commit
  7. 21 Dec, 2007 3 commits
  8. 26 Jul, 2007 1 commit
  9. 21 Jul, 2007 4 commits
  10. 03 Jul, 2007 6 commits
  11. 07 Jun, 2007 1 commit
  12. 09 May, 2007 1 commit
  13. 24 Apr, 2007 4 commits
  14. 14 Feb, 2007 8 commits
  15. 13 Feb, 2007 1 commit
    • [POWERPC] spufs: Fix bitrot of the SPU mmap facility · 17e0e270
      Benjamin Herrenschmidt authored
      It looks like we've had some serious bitrot there, mostly due to the
      tracking of the address_spaces of mmap'ed files getting out of sync
      with the actual mmap code. The mfc, mss and psmap were not tracked
      properly and thus not invalidated on context switches (oops!).
      
      I also removed the various file->f_mapping = inode->i_mapping;
      assignments that were done in the other open() routines since that
      is already done for us by __dentry_open.
      
      One improvement we might want to make later is to assign the various
      ctx-> fields at mmap time instead of file open/close time, so that we
      don't call unmap_mapping_range() on things that have not been mmap'ed.
      
      Finally, I added some smp_wmb's after assigning the ctx-> fields to
      make sure they are visible to other CPUs. I don't think this is
      strictly necessary, as I suspect locking in the fs layer will make
      that happen anyway, but better safe than sorry.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
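      The restored pattern looks roughly like this in each affected open()
      routine (a sketch for the mfc file; the mss and psmap handlers follow
      the same shape), so that unmap_mapping_range() on a context switch
      actually finds the mapping:

          static int spufs_mfc_open(struct inode *inode, struct file *file)
          {
                  struct spufs_inode_info *i = SPUFS_I(inode);
                  struct spu_context *ctx = i->i_ctx;

                  file->private_data = ctx;
                  ctx->mfc = inode->i_mapping;  /* track for invalidation on
                                                   context switch */
                  smp_wmb();                    /* make the new mapping visible
                                                   to other CPUs */
                  return nonseekable_open(inode, file);
          }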
  16. 04 Dec, 2006 1 commit
  17. 25 Oct, 2006 1 commit
    • [POWERPC] spufs: Add isolated-mode SPE recycling support · 099814bb
      Jeremy Kerr authored
      When in isolated mode, SPEs have access to an area of persistent
      storage, which is per-SPE. In order for isolated-mode apps to
      communicate arbitrary data through this storage, we need to ensure that
      isolated physical SPEs can be reused for subsequent applications.
      
      Add a file ("recycle") to each SPE thread directory to enable
      isolated-mode recycling. Writing to this file makes the kernel reload
      the isolated-mode loader, allowing a new app to be run on the same
      physical SPE.
      
      This requires the spu_acquire_exclusive function to enforce exclusive
      access to the SPE while the loader is initialised.
      Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
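      A sketch of how the write handler for such a "recycle" file could look
      (spufs_recycle_write and spu_recycle_isolated are hypothetical names
      chosen for illustration; only the behaviour is taken from the commit
      message):

          /* Any write triggers a reload of the isolated-mode loader so a
           * new app can run on the same physical SPE. The reload itself
           * would run under spu_acquire_exclusive(). */
          static ssize_t spufs_recycle_write(struct file *file,
                          const char __user *buffer, size_t size, loff_t *pos)
          {
                  struct spu_context *ctx = file->private_data;
                  int ret;

                  ret = spu_recycle_isolated(ctx);  /* hypothetical helper */
                  if (ret)
                          return ret;
                  return size;
          }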
  18. 05 Oct, 2006 1 commit
    • [POWERPC] spufs: Add infrastructure needed for gang scheduling · 6263203e
      Arnd Bergmann authored
      Add the concept of a gang to spufs as a new type of object. So far,
      this has no impact whatsoever on scheduling, but it makes it possible
      to add that later.
      
      The new type of object is the spu_gang. It is created with the
      spu_create system call, with the flags argument set to
      SPU_CREATE_GANG (0x2). Inside a spu_gang, it is then possible to
      create spu_context objects, which until now was only possible at the
      root of spufs.
      
      There is a new member in struct spu_context pointing to
      the spu_gang it belongs to, if any. The spu_gang maintains
      a list of spu_context structures that are its children.
      This information can then be used in the scheduler in the
      future.
      
      There is still a bug to be resolved in this basic infrastructure
      regarding the order in which objects are removed: when the spu_gang
      file descriptor is closed before the spu_context descriptors, we leak
      the dentry and inode for the gang. Any ideas on how to solve this
      cleanly are appreciated.
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
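      The kernel-side shape implied by the description might look like the
      following sketch (field set inferred from the commit message; the real
      structures may differ):

          struct spu_gang {
                  struct list_head list;        /* child spu_context list */
                  struct mutex mutex;           /* protects the list */
                  struct kref kref;             /* gang lifetime */
                  int contexts;                 /* number of children */
          };

          struct spu_context {
                  /* ... existing fields ... */
                  struct spu_gang *gang;        /* gang we belong to, or NULL */
                  struct list_head gang_list;   /* link in gang->list */
          };

      From userspace, a gang directory is created first and contexts are
      then created inside it, e.g. spu_create("/spu/mygang",
      SPU_CREATE_GANG, 0700) followed by spu_create("/spu/mygang/ctx", 0,
      0700) (paths assume spufs mounted at /spu).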