- 03 Jul 2007, 6 commits
Committed by Christoph Hellwig
Export per-context statistics in spufs. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Christoph Hellwig
Provide load average information for spu contexts. The format is identical to /proc/loadavg, from which a lot of code and concepts are also borrowed. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
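As a quick illustration, a userspace reader for this file might look like the sketch below; the path /proc/spu_loadavg and the exact field layout are assumptions based on the /proc/loadavg format described above, not confirmed by this log.

```c
#include <stdio.h>

/* Minimal reader for the spu load average file. Path and field
 * layout are assumed to match /proc/loadavg, per the commit text. */
int main(void)
{
	double avg1, avg5, avg15;
	int running, total, last_pid;
	FILE *f = fopen("/proc/spu_loadavg", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%lf %lf %lf %d/%d %d",
		   &avg1, &avg5, &avg15, &running, &total, &last_pid) == 6)
		printf("1m %.2f, 5m %.2f, 15m %.2f, %d/%d contexts, last pid %d\n",
		       avg1, avg5, avg15, running, total, last_pid);
	fclose(f);
	return 0;
}
```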
-
Committed by Christoph Hellwig
Add a cpus_allowed field to struct spu_context so that we always use the cpu mask of the owning thread instead of that of whichever thread happens to call into the scheduler. Also use this information in grab_runnable_context to avoid spurious wakeups. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
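A minimal sketch of the idea, with illustrative names rather than the real spufs symbols: the owning thread's cpu mask is captured into the context, and the scheduler tests it before waking a context.

```c
#include <linux/cpumask.h>
#include <linux/sched.h>

/* Hypothetical stand-in for the relevant part of spu_context. */
struct spu_ctx_sketch {
	cpumask_t cpus_allowed;
};

/* Capture the owning thread's affinity, e.g. at spu_run time. */
static void ctx_set_cpus_allowed(struct spu_ctx_sketch *ctx)
{
	ctx->cpus_allowed = current->cpus_allowed;
}

/* Only wake a context that may actually run on this cpu. */
static int ctx_may_run_here(struct spu_ctx_sketch *ctx, int cpu)
{
	return cpu_isset(cpu, ctx->cpus_allowed);
}
```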
-
Committed by Christoph Hellwig
Update scheduling information on every spu_run to allow for setting threads to realtime priority just before running them. This requires some slightly ugly code in spufs_run_spu, because we can just update the information unlocked if the spu is not runnable, but we need to acquire the active_mutex when it is runnable to protect against find_victim. This locking scheme requires open-coding spu_acquire_runnable in spufs_run_spu, which is actually a nice cleanup all by itself. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Christoph Hellwig
Enable preemptive scheduling for non-RT contexts. We use the same algorithms as the CPU scheduler to calculate the time slice length, and for now we also use the same timeslice length as the CPU scheduler. This might not be enough for good performance and can be changed after some benchmarking. Note that currently we do not boost the priority for contexts waiting on the runqueue for a long time, so contexts with a higher nice value could be starved by higher-priority ones. This could easily be fixed once the rework of the spu lists that Luke and I discussed is done. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
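The timeslice calculation borrowed from the CPU scheduler scales the slice linearly with priority; a standalone sketch with placeholder constants (not the actual spufs values):

```c
/* Illustrative timeslice calculation in the style of the 2.6 O(1)
 * CPU scheduler: a lower prio value (higher priority) earns a longer
 * slice. All constants here are placeholders. */
#define MAX_PRIO	140
#define MAX_USER_PRIO	40
#define DEF_SLICE	100	/* ticks, placeholder */
#define MIN_SLICE	5	/* ticks, placeholder */

static unsigned int ctx_time_slice(int prio)
{
	unsigned int slice = DEF_SLICE * (MAX_PRIO - prio) / (MAX_USER_PRIO / 2);

	return slice > MIN_SLICE ? slice : MIN_SLICE;
}
```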
-
Committed by Christoph Hellwig
Replace the scheduler workqueues, which complicated things a lot, with a dedicated spu scheduler thread that gets woken by a traditional scheduler tick. By default this scheduler tick runs at a tenth of the cpu tick rate, i.e. one spu scheduler tick for every 10 cpu ticks. Currently the tick is not disabled when we have fewer contexts than available spus, but I will implement this later. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
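A sketch of such a scheduler thread, assuming illustrative names (SPUSCHED_TICK and the tick body) rather than the real spufs code:

```c
#include <linux/kthread.h>
#include <linux/sched.h>

#define SPUSCHED_TICK	10	/* jiffies: one spu tick per 10 cpu ticks */

/* Started with kthread_run(); sleeps a tick's worth of jiffies, then
 * checks for expired time slices. */
static int spusched_thread(void *unused)
{
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule_timeout(SPUSCHED_TICK);
		/* walk the run lists and preempt contexts whose
		 * time slice has expired */
	}
	return 0;
}
```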
-
- 07 Jun 2007, 1 commit
Committed by Christoph Hellwig
Make sure the mapping_lock also protects access to the various address_space pointers used for tearing down the ptes on a spu context switch. Because unmap_mapping_range can sleep, we need to turn mapping_lock from a spinlock into a sleeping mutex. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
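The shape of the change, as a sketch with illustrative names: a mutex can be held across the sleeping unmap_mapping_range() call, which a spinlock cannot.

```c
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/mutex.h>

/* Illustrative stand-in for the context's cached mappings. */
struct ctx_mappings {
	struct mutex mapping_lock;	/* was a spinlock */
	struct address_space *local_store;
};

static void ctx_unmap_local_store(struct ctx_mappings *m, loff_t len)
{
	mutex_lock(&m->mapping_lock);
	if (m->local_store)
		/* may sleep: only legal under a mutex */
		unmap_mapping_range(m->local_store, 0, len, 1);
	mutex_unlock(&m->mapping_lock);
}
```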
-
- 09 May 2007, 1 commit
Committed by Benjamin Herrenschmidt
This adds an option to spufs, when the kernel is configured for 4K pages, to give it the ability to use 64K pages for SPE local store mappings. Currently, we are optimistic and try order-4 allocations when creating contexts. If that fails, the code will fall back to 4K automatically. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
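A sketch of the optimistic allocation strategy (illustrative, not the spufs code): try a contiguous order-4 block (16 x 4K = 64K) and fall back to single pages.

```c
#include <linux/gfp.h>
#include <linux/mm.h>

/* Try a 64K physically contiguous allocation first; on failure
 * fall back to a single 4K page and report the order used. */
static struct page *alloc_ls_pages(int *order)
{
	struct page *page;

	page = alloc_pages(GFP_KERNEL, 4);	/* 64K: order 4 */
	if (page) {
		*order = 4;
		return page;
	}
	*order = 0;
	return alloc_pages(GFP_KERNEL, 0);	/* fall back to 4K */
}
```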
-
- 24 Apr 2007, 4 commits
Committed by Christoph Hellwig
There is no reason for run_sema to be a struct semaphore. Change it to a mutex and rename it accordingly. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig
For quite a while now, spu state has been protected by a simple mutex instead of the old rw_semaphore, and this means we can simplify the locking around spu_setup_isolated a lot. Instead of doing an spu_release before entering spu_setup_isolated and then calling the complicated spu_acquire_exclusive, we can now simply enter the function locked and in a guaranteed runnable state, so that the only bit of spu_acquire_exclusive that's left is the call to spu_unmap_mappings. Similarly, there's no more need to unlock and reacquire the state_mutex when spu_setup_isolated is done; we can always return with the lock held and only drop it in spu_run_init in the failure case. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig
A single context should only be woken once, and we should not have more wakeups for a given priority than the number of contexts on that runqueue position. Also add some asserts to trap future problems in this area more easily. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig
Make sure the pointers to the various mappings are cleared once the last user has stopped using them. This avoids accessing freed memory when tearing down the gang directory, and lets us optimize away pte invalidations when no one uses these mappings. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
- 14 Feb 2007, 8 commits
Committed by Christoph Hellwig
For SCHED_RR tasks we can do some really trivial timeslicing. Basically we fire up a timer for every scheduler tick that searches for a thread of the same or higher priority on the runqueue and, if there is one, context switches to it. Because we can't lock spus from timer context, we actually run this from a delayed workqueue instead of a timer. A nice optimization would be to skip the actual priority bitmap search when there are fewer contexts than physical spus available. To implement this I need a so-far-unpublished patch from Andre; it will be added after we have that patch in. Note that right now we only do the time slicing for SCHED_RR tasks. The code would work for SCHED_OTHER tasks as well, but their prio value is derived from the one the PPU thread has at the time of spu_run, and using this for spu scheduling decisions would make the code very unfair. SCHED_OTHER support will be enabled once the spu scheduler knows how to calculate spu_context.prio (very soon). Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
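A sketch of the delayed-work-driven slice check, with illustrative names: the work item re-arms itself each tick, since a raw timer cannot take the spu locks.

```c
#include <linux/workqueue.h>

static void spusched_timeslice(struct work_struct *work);

/* Illustrative name; scheduled once at init, then self-re-arming. */
static DECLARE_DELAYED_WORK(spusched_work, spusched_timeslice);

static void spusched_timeslice(struct work_struct *work)
{
	/* search the runqueue for a context of the same or higher
	 * priority and switch to it if one is found -- safe here,
	 * because work items run in process context and may sleep */
	schedule_delayed_work(&spusched_work, 1);	/* re-arm */
}
```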
-
Committed by Christoph Hellwig
If we start a spu context with realtime priority, we want it to run immediately and not wait until some other lower-priority thread has finished. Try to find a suitable victim and use its spu in this case. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig
There is no need to directly wake up contexts in spu_activate when called from spu_run, so add a flag to suppress this wakeup. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig
It doesn't make any sense to have a priority field in the physical spu structure. Move it into the spu context instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig
Various cleanups in code surrounding the state semaphore:
- inline spu_acquire/spu_release
- clean up spu_acquire_* and add kerneldoc comments to these functions
- remove spu_release_exclusive and replace it with spu_release
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
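A sketch of what the inlined pair looks like, assuming the state lock is the mutex introduced in this series; the struct shown is a minimal stand-in, not the real spu_context.

```c
#include <linux/mutex.h>

/* Minimal stand-in for the real spu_context. */
struct spu_context {
	struct mutex state_mutex;
};

/* must be held before touching any context state */
static inline void spu_acquire(struct spu_context *ctx)
{
	mutex_lock(&ctx->state_mutex);
}

/* releases the lock taken by spu_acquire */
static inline void spu_release(struct spu_context *ctx)
{
	mutex_unlock(&ctx->state_mutex);
}
```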
-
Committed by Christoph Hellwig
The r/w semaphore to lock the spus was overkill and can be replaced with a mutex to make it faster, simpler and easier to debug. It also helps to allow making most spufs operations interruptible in future patches. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig
Only bind_context/unbind_context change the spu context state. Thus we can move all assignments of SPU_STATE_RUNNABLE into bind_context, which parallels the unbind side as well. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig
unbind_context already sets the context state to SPU_STATE_SAVED, thus the spu_deactivate callers don't need to do it again. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
- 13 Feb 2007, 1 commit
Committed by Benjamin Herrenschmidt
It looks like we've had some serious bitrot there, mostly due to the tracking of the address_spaces of mmap'ed files getting out of sync with the actual mmap code. The mfc, mss and psmap were not tracked properly and thus not invalidated on context switches (oops!). I also removed the various file->f_mapping = inode->i_mapping; assignments that were done in the other open() routines, since that is already done for us by __dentry_open. One improvement we might want to make later is to assign the various ctx-> fields at mmap time instead of file open/close time, so that we don't call unmap_mapping_range() on things that have not been mmap'ed. Finally, I added some smp_wmb's after assigning the ctx-> fields to make sure they are visible to other CPUs. I don't think this is really necessary, as I suspect locking in the fs layer will make that happen anyway, but better safe than sorry. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 04 Dec 2006, 1 commit
Committed by Arnd Bergmann
When the user changes the runcontrol register, an SPU might be running without a process being attached to it and waiting for events. In order to prevent this, make sure we always disable the priv1 master control when we're not inside of spu_run. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 25 Oct 2006, 1 commit
Committed by Jeremy Kerr
When in isolated mode, SPEs have access to an area of persistent storage, which is per-SPE. In order for isolated-mode apps to communicate arbitrary data through this storage, we need to ensure that isolated physical SPEs can be reused for subsequent applications. Add a file ("recycle") in a spethread dir to enable isolated-mode recycling. When this file is written to, the kernel reloads the isolated-mode loader, allowing a new app to be run on the same physical SPE. This requires the spu_acquire_exclusive function to enforce exclusive access to the SPE while the loader is initialised. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
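From userspace, recycling is then just a write to that file; the mount point and context path below are examples, not fixed names.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Trigger isolated-mode recycling by writing to the context's
 * "recycle" file. The path is an example; spethread directories
 * are created by the application via spu_create(). */
int main(void)
{
	int fd = open("/spu/my_spe/recycle", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "1", 1) < 0)
		perror("write");
	close(fd);
	return 0;
}
```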
-
- 05 Oct 2006, 1 commit
Committed by Arnd Bergmann
Add the concept of a gang to spufs as a new type of object. So far, this has no impact whatsoever on scheduling, but makes it possible to add that later. A new type of object in spufs is now a spu_gang. It is created with the spu_create system call with the flags argument set to SPU_CREATE_GANG (0x2). Inside of a spu_gang, it is then possible to create spu_context objects, which until now was only possible at the root of spufs. There is a new member in struct spu_context pointing to the spu_gang it belongs to, if any. The spu_gang maintains a list of spu_context structures that are its children. This information can then be used in the scheduler in the future. There is still a bug that needs to be resolved in this basic infrastructure regarding the order in which objects are removed. When the spu_gang file descriptor is closed before the spu_context descriptors, we leak the dentry and inode for the gang. Any ideas on how to solve this cleanly are appreciated. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
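From userspace the new flag is passed straight to spu_create; a powerpc-only sketch using the raw syscall (the paths are examples, and libspe would normally wrap this):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SPU_CREATE_GANG	0x2	/* flag value from the commit text */

int main(void)
{
	int gang_fd, ctx_fd;

	/* __NR_spu_create only exists on powerpc kernels with spufs */
	gang_fd = syscall(__NR_spu_create, "/spu/mygang",
			  SPU_CREATE_GANG, 0755);
	if (gang_fd < 0) {
		perror("spu_create gang");
		return 1;
	}
	/* contexts can now be created inside the gang directory */
	ctx_fd = syscall(__NR_spu_create, "/spu/mygang/ctx", 0, 0755);
	if (ctx_fd < 0)
		perror("spu_create context");
	return 0;
}
```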
-
- 29 Apr 2006, 1 commit
Committed by Jeremy Kerr
Use kzalloc when allocating a new spu context, rather than kmalloc plus zeroing. Booted & tested on cell. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
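The change in miniature, with a stand-in struct rather than the real spu_context:

```c
#include <linux/slab.h>

/* Stand-in struct; the real change applies to spu_context. */
struct example_ctx {
	int state;
	void *owner;
};

static struct example_ctx *alloc_ctx(void)
{
	/* before: ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
	 *         memset(ctx, 0, sizeof(*ctx));
	 * after: one call that returns zeroed memory */
	return kzalloc(sizeof(struct example_ctx), GFP_KERNEL);
}
```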
-
- 27 Mar 2006, 3 commits
Committed by Dirk Herrendoerfer
The mfc member of a new context was not initialized to zero, which could potentially lead to wild memory accesses. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Mark Nutter
This patch is layered on top of CONFIG_SPARSEMEM and is patterned after the direct mapping of LS. This patch allows mmap() of the following regions: "mfc", which represents the area from [0x3000 - 0x3fff]; "cntl", which represents the area from [0x4000 - 0x4fff]; "signal1", which begins at offset 0x14000; and "signal2", which begins at offset 0x1c000. The signal1 and signal2 files may be mmap()'d by regular user processes. The cntl and mfc files, on the other hand, may only be accessed if the owning process has CAP_SYS_RAWIO, because they have the potential to confuse the kernel with regard to parallel access to the same files with regular file operations: the kernel always holds a spinlock when accessing registers in these areas to serialize them, which cannot be guaranteed with user mmaps. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
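A userspace sketch of mapping one of the unprivileged signal files; the spufs path and the one-page mapping size are assumptions:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* mmap the signal1 notification register of a context. Unlike mfc
 * and cntl, signal1/signal2 require no special capability. */
int main(void)
{
	int fd = open("/spu/my_spe/signal1", O_RDWR);
	uint32_t *sig;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	sig = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (sig == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	*sig = 1;	/* raise a signal notification on the SPE */
	munmap(sig, 4096);
	close(fd);
	return 0;
}
```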
-
Committed by Arnd Bergmann
This patch adds a new file called 'mfc' to each spufs directory. The file accepts DMA commands that are a subset of what would be legal DMA commands for problem state register access. Upon reading the file, a bitmask is returned with the completed tag groups set. The file is meant to be used from an abstraction in libspe that is added by a different patch. From the kernel perspective, this means a process can now offload a memory copy from or into an SPE local store without having to run code on the SPE itself. The transfer will only be performed while the SPE is owned by one thread that is waiting in the spu_run system call, and the data will be transferred into that thread's address space, independent of which thread started the transfer. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
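A hedged userspace sketch of the write-command/read-tag-mask protocol; the struct mirrors what the commit describes and should be checked against the spufs headers (the layout, the path, and the 0x40 'get' opcode are assumptions):

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Assumed layout of a DMA command as written to the mfc file. */
struct mfc_dma_command {
	int32_t pad;
	uint32_t lsa;	/* local store address */
	uint64_t ea;	/* effective address in the thread's space */
	uint16_t size;	/* bytes, multiple of 16 assumed */
	uint16_t tag;	/* tag group, 0-15 */
	uint16_t class;
	uint16_t cmd;	/* 0x40 = get: main memory -> local store */
};

int main(void)
{
	static char buf[16384] __attribute__((aligned(128)));
	struct mfc_dma_command cmd = {
		.lsa = 0x0, .ea = (uint64_t)(uintptr_t)buf,
		.size = sizeof(buf), .tag = 1, .cmd = 0x40,
	};
	uint32_t done_tags;
	int fd = open("/spu/my_spe/mfc", O_RDWR);

	if (fd < 0 || write(fd, &cmd, sizeof(cmd)) != sizeof(cmd))
		return 1;
	/* read returns a bitmask with the completed tag groups set */
	read(fd, &done_tags, sizeof(done_tags));
	return !(done_tags & (1 << 1));
}
```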
-
- 09 Jan 2006, 8 commits
Committed by Arnd Bergmann
When spu_activate fails in spu_acquire_runnable, the state must still be SPU_STATE_SAVED; we were incorrectly setting it to SPU_STATE_RUNNABLE. Signed-off-by: Arnd Bergmann <arndb@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann
During an earlier cleanup we lost the serialization of multiple spu_run calls performed on the same spu_context. In order to get this back, introduce a mutex in the spu_context that is held inside of spu_run. Noticed by Al Viro. Signed-off-by: Arnd Bergmann <arndb@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
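The shape of the fix, as a sketch with illustrative names: one lock per context held across the whole run path, so concurrent spu_run calls on the same context serialize.

```c
#include <linux/mutex.h>

/* Stand-in for the relevant part of spu_context. */
struct run_ctx {
	struct mutex run_mutex;
};

static long do_spu_run(struct run_ctx *ctx)
{
	long ret;

	mutex_lock(&ctx->run_mutex);
	ret = 0;	/* ... the actual run loop goes here ... */
	mutex_unlock(&ctx->run_mutex);
	return ret;
}
```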
-
Committed by Arnd Bergmann
We need to check the validity of the owner under down_write; down_read is not enough. Noticed by Al Viro. Signed-off-by: Arnd Bergmann <arndb@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann
This patch reduces the lock complexity of the SPU scheduler, particularly for involuntary preemptive switches. As a result, the new code does a better job of mapping the highest priority tasks to SPUs. Lock complexity is reduced by using the system default workqueue to perform involuntary saves. In this way we avoid nasty lock ordering problems that the previous code had. A "minimum timeslice" for SPU contexts is also introduced. The intent here is to avoid thrashing. While the new scheduler does a better job at prioritization, it still does nothing for fairness. From: Mark Nutter <mnutter@us.ibm.com> Signed-off-by: Arnd Bergmann <arndb@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann
This patch makes it easier to preempt an SPU context by having the scheduler hold ctx->state_sema for much shorter periods of time. As part of this restructuring, the control logic for the "run" operation is moved from arch/ppc64/kernel/spu_base.c to fs/spufs/file.c. Of course the base retains "bottom half" handlers for class{0,1} irqs. The new run loop will re-acquire an SPU if preempted. From: Mark Nutter <mnutter@us.ibm.com> Signed-off-by: Arnd Bergmann <arndb@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann
This adds a scheduler for SPUs to make it possible to use more logical SPUs than physical ones are present in the system. Currently, there is no support for preempting a running SPU thread; it has to leave the SPU either by triggering an event on the SPU that causes it to return to the owning thread or by being sent a signal. This patch also adds operations that enable accessing an SPU in either runnable or saved state. We use an RW semaphore to protect the state of the SPU from changing underneath us while we are holding it readable. In order to change the state, it is acquired writeable and a context save or restore is executed before downgrading the semaphore to read-only. From: Mark Nutter <mnutter@us.ibm.com>, Uli Weigand <Ulrich.Weigand@de.ibm.com> Signed-off-by: Arnd Bergmann <arndb@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
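A sketch of that locking scheme (illustrative names, not the spufs code): readers hold the semaphore while using the SPU in its current state; a state change takes it for writing and downgrades afterwards.

```c
#include <linux/rwsem.h>

/* Stand-in for the relevant part of the context. */
struct state_ctx {
	struct rw_semaphore state_sema;
};

/* use the SPU without allowing its state to change underneath us */
static void ctx_use(struct state_ctx *ctx)
{
	down_read(&ctx->state_sema);
	/* ... access the SPU in its current state ... */
	up_read(&ctx->state_sema);
}

/* perform a save or restore, then continue as a reader */
static void ctx_change_state(struct state_ctx *ctx)
{
	down_write(&ctx->state_sema);
	/* ... context save or restore ... */
	downgrade_write(&ctx->state_sema);
	up_read(&ctx->state_sema);
}
```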
-
Committed by Mark Nutter
Add some infrastructure for saving and restoring the context of an SPE. This patch creates a new structure that can hold the whole state of a physical SPE in memory. It also contains code that avoids races during the context switch, and the binary code that is loaded to the SPU in order to access its registers. The actual PPE- and SPE-side context switch code are two separate patches. Signed-off-by: Arnd Bergmann <arndb@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann
This is the current version of the spu file system, used for driving SPEs on the Cell Broadband Engine. This release is almost identical to the version for the 2.6.14 kernel posted earlier, which is available as part of the Cell BE Linux distribution from http://www.bsc.es/projects/deepcomputing/linuxoncell/. The first patch provides all the interfaces for running spu applications, but does not have any support for debugging SPU tasks or for scheduling. Both of these functionalities are added in the subsequent patches. See Documentation/filesystems/spufs.txt for how to use spufs. Signed-off-by: Arnd Bergmann <arndb@de.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-