- 19 Sep 2007, 3 commits
-
-
Committed by Michael Ellerman

Because spufs might be built as a module, we can't have other parts of the kernel calling directly into it; we need stub routines that first check whether the module is loaded. Currently we have two structures which hold callbacks for these stubs: the syscalls are in spufs_calls and the coredump calls are in spufs_coredump_calls. In both cases the logic for registering/unregistering is essentially the same, so we can simplify things by combining the two.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
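The combined table follows the usual "optional module" pattern. A minimal sketch, assuming syscall and coredump callbacks plus a module owner (the field list is illustrative, not the exact kernel structure):

    /* one calls table for both syscalls and coredump hooks */
    struct spufs_calls {
        long (*create_thread)(const char __user *name, unsigned int flags,
                              mode_t mode, struct file *neighbor);
        long (*spu_run)(struct file *filp, __u32 __user *unpc,
                        __u32 __user *ustatus);
        int (*coredump_extra_notes_size)(void);
        struct module *owner;
    };

    /* spufs fills this in on module load and clears it on unload */
    int register_spu_syscalls(struct spufs_calls *calls);
    void unregister_spu_syscalls(struct spufs_calls *calls);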
-
Committed by Michael Ellerman

The SPUFS attribute get routines take a void * because the generic attribute code doesn't know what sort of data it's passing around. However, our internal __spufs_get_foo() routines can take a spu_context * directly, which saves plonking it in and out of a void * again.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
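Illustratively, the void * stays at the generic attribute boundary and the real type is recovered exactly once (the decr names follow the __spufs_get_foo() pattern above but are otherwise hypothetical):

    /* generic attribute callback: signature dictated by the attribute API */
    static u64 spufs_decr_get(void *data)
    {
        struct spu_context *ctx = data;  /* recover the type once */
        return __spufs_decr_get(ctx);    /* typed internal routine */
    }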
-
Committed by Jeremy Kerr

At present, a built-in spufs will not use the spufs_calls callbacks, but directly calls sys_spu_create. This saves us an indirect branch, but means we have duplicated functions - one for CONFIG_SPU_FS=y and one for =m. This change unifies the spufs syscall path and provides access to the spufs_calls structure through a get/put pair. At present the only user of the spufs_calls structure is spu_syscalls.c, but this will facilitate adding the coredump calls later.

Everyone likes numbers, right? Here's a before/after comparison with CONFIG_SPU_FS=y, doing spu_create(); close(); 64k times.

Before:
    [jk@cell ~]$ time ./spu_create
    performing 65536 spu_create calls
    real    0m24.075s
    user    0m0.146s
    sys     0m23.925s

After:
    [jk@cell ~]$ time ./spu_create
    performing 65536 spu_create calls
    real    0m24.777s
    user    0m0.141s
    sys     0m24.631s

So we're adding around 11us per syscall, at the benefit of having only one syscall path.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
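A get/put pair around a callbacks table usually pins the providing module; a minimal sketch under that assumption (not the exact kernel code):

    static struct spufs_calls *spufs_calls;

    static struct spufs_calls *spufs_calls_get(void)
    {
        struct spufs_calls *calls = spufs_calls;

        /* pin spufs so it cannot unload mid-syscall */
        if (calls && !try_module_get(calls->owner))
            calls = NULL;
        return calls;
    }

    static void spufs_calls_put(struct spufs_calls *calls)
    {
        module_put(calls->owner);
    }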
-
- 25 Jul 2007, 1 commit
-
-
Committed by Christoph Hellwig

spufs.h now has two enums for the sched_flags, leading to identical values for SPU_SCHED_WAS_ACTIVE and SPU_SCHED_NOTIFY_ACTIVE. Merge them into a single enum, as they were in the IBM development tree.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 21 Jul 2007, 8 commits
-
-
Committed by Bob Nelson

From: Maynard Johnson <mpjohn@us.ibm.com>

This patch updates the existing arch/powerpc/oprofile/op_model_cell.c to add in the SPU profiling capabilities. In addition, a 'cell' subdirectory was added to arch/powerpc/oprofile to hold Cell-specific SPU profiling code. Exports spu_set_profile_private_kref and spu_get_profile_private_kref, which are used by OProfile to store private profile information in spufs data structures.

Also incorporated several fixes from other patches (rrn): check the pointer returned from kzalloc; eliminated an unnecessary cast; better error handling and cleanup in the related area; a 64-bit unsigned long parameter was being demoted to a 32-bit unsigned int and eventually promoted back to unsigned long.

Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
-
Committed by Bob Nelson

From: Maynard Johnson <mpjohn@us.ibm.com>

This patch extends spu_switch_event_register so that the caller is also notified of currently active SPU tasks. Exports spu_switch_event_register and spu_switch_event_unregister so that OProfile can get access to the notifications provided.

Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

This patch provides the spu affinity placement logic for the spufs scheduler. Each time a gang is going to be scheduled, the placement of a reference context is defined. The placement of all other contexts with affinity from the gang is defined based on this reference context's location and on a precomputed displacement offset.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
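As a sketch, the placement step might look like the following; only the reference-plus-offset idea comes from the text above, all names are hypothetical:

    /* place every affinity context of a gang relative to one reference */
    static void place_gang(struct spu_gang *gang)
    {
        struct spu *ref = pick_reference_location(gang);
        struct spu_context *ctx;

        list_for_each_entry(ctx, &gang->aff_list_head, aff_list)
            ctx->spu_hint = spu_at_offset(ref, ctx->aff_offset);
    }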
-
Committed by Arnd Bergmann

This patch adds support for additional flags at spu_create, which relate to the establishment of affinity between contexts, and of contexts to memory. A fourth, optional parameter is supported. This parameter represents an affinity neighbor of the context being created, and is used when defining SPU-SPU affinity. Affinity is represented as a doubly linked list of spu_contexts.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Jeremy Kerr

From: Sebastian Siewior <cbe-oss-dev@ml.breakpoint.cc>

The 'file' argument is unused in spufs_run_spu(). This change removes it.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

Currently a process is removed from the physical spu when spu_acquire_saved is called, but never put back. This patch adds a new spu_release_saved, to be paired with spu_acquire_saved, which puts the process back if it was in RUNNABLE state before. Neither Jeremy nor I are entirely happy about this exact patch, because it adds another spu_activate call outside of the owner thread, but I feel this is the best short-term fix we can come up with.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Andre Detsch

This patch exports per-context statistics in spufs, as well as spu statistics in sysfs. It was formed by merging:

"spufs: add spu stats in sysfs" From: Christoph Hellwig
"spufs: add stat file to spufs" From: Christoph Hellwig
"spufs: fix libassist accounting" From: Jeremy Kerr
"spusched: fix spu utilization statistics" From: Luke Browning

And some adjustments by myself, after suggestions on cbe-oss-dev. Having separate patches was making the review process harder than it should be, as we ended up integrating spu and ctx statistics accounting much more tightly than in the first implementation.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Sebastian Siewior

    WARNING: arch/powerpc/platforms/cell/spufs/spufs.o(.init.text+0x158):
    Section mismatch: reference to .exit.text:.spu_sched_exit
    (between '.init_module' and '.spu_sched_init')

was introduced by c99c1994. This patch removes the warning.

Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
- 03 Jul 2007, 9 commits
-
-
Committed by Christoph Hellwig

Export spu statistics in sysfs.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Christoph Hellwig

Export per-context statistics in spufs.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Christoph Hellwig

Provide load average information for spu contexts. The format is identical to /proc/loadavg, which is also where a lot of the code and concepts are borrowed from.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
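For reference, the /proc/loadavg scheme is a fixed-point, exponentially-decaying average; the classic kernel macro, shown here as context with constants from the stock loadavg code, is:

    #define FSHIFT  11             /* bits of fixed-point precision */
    #define FIXED_1 (1 << FSHIFT)  /* 1.0 in fixed-point */
    #define EXP_1   1884           /* fixed-point 1/exp(5sec/1min) */

    /* decay `load' toward `n', the current number of runnable contexts */
    #define CALC_LOAD(load, exp, n)      \
        load *= exp;                     \
        load += n * (FIXED_1 - exp);     \
        load >>= FSHIFT;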
-
Committed by Christoph Hellwig

The new tid file contains the ID of the thread currently running the context, if any. This is used so that the new spu-top and spu-ps tools can find the thread in /proc.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Jeremy Kerr

Remove redundant whitespace in arch/powerpc/platforms/cell/spufs/.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Christoph Hellwig

Add a cpus_allowed field to struct spu_context so that we always use the cpu mask of the owning thread instead of that of whichever thread happens to call into the scheduler. Also use this information in grab_runnable_context to avoid spurious wakeups.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Christoph Hellwig

Update scheduling information on every spu_run to allow for setting threads to realtime priority just before running them. This requires some slightly ugly code in spufs_run_spu, because we can just update the information unlocked if the spu is not runnable, but we need to acquire the active_mutex when it is runnable to protect against find_victim. This locking scheme requires open-coding spu_acquire_runnable in spufs_run_spu, which is actually a nice cleanup all by itself.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Christoph Hellwig

Enable preemptive scheduling for non-RT contexts. We use the same algorithms as the CPU scheduler to calculate the length of the time slice, and for now we also use the same timeslice length as the CPU scheduler. This might not be enough for good performance and can be changed after some benchmarking.

Note that currently we do not boost the priority of contexts waiting on the runqueue for a long time, so contexts with a higher nice value could starve ones with less priority. This could easily be fixed once the rework of the spu lists that Luke and I discussed is done.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
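A sketch of priority-scaled timeslices in this style; the SCALE_PRIO shape mirrors the CPU scheduler of that era, but treat the constants as illustrative:

    #define SPUSCHED_TICK      10   /* jiffies between spu scheduler ticks */
    #define MIN_SPU_TIMESLICE  max(5 * HZ / (1000 * SPUSCHED_TICK), 1)
    #define DEF_SPU_TIMESLICE  (100 * HZ / (1000 * SPUSCHED_TICK))

    #define SCALE_PRIO(x, prio) \
        max((x) * (MAX_PRIO - (prio)) / (MAX_USER_PRIO / 2), MIN_SPU_TIMESLICE)

    void spu_set_timeslice(struct spu_context *ctx)
    {
        /* higher-priority (lower prio value) contexts get longer slices */
        if (ctx->prio < NORMAL_PRIO)
            ctx->time_slice = SCALE_PRIO(DEF_SPU_TIMESLICE * 4, ctx->prio);
        else
            ctx->time_slice = SCALE_PRIO(DEF_SPU_TIMESLICE, ctx->prio);
    }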
-
Committed by Christoph Hellwig

Replace the scheduler workqueues, which complicated things a lot, with a dedicated spu scheduler thread that gets woken by a traditional scheduler tick. By default this scheduler tick runs at HZ / 10, i.e. one spu scheduler tick for every 10 cpu ticks. Currently the tick is not disabled when we have fewer contexts than available spus, but I will implement this later.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
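A minimal sketch of such a dedicated thread, reusing the SPUSCHED_TICK period from the sketch above; the tick body is a hypothetical helper:

    static int spusched_thread(void *unused)
    {
        while (!kthread_should_stop()) {
            set_current_state(TASK_INTERRUPTIBLE);
            schedule_timeout(SPUSCHED_TICK);
            /* time-slice every context whose slice has expired */
            spusched_tick_all();
        }
        return 0;
    }

    /* started once at init:
     *   kthread_run(spusched_thread, NULL, "spusched");
     */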
-
- 07 Jun 2007, 1 commit
-
-
Committed by Christoph Hellwig

Make sure the mapping_lock also protects access to the various address_space pointers used for tearing down the ptes on a spu context switch. Because unmap_mapping_range can sleep, we need to turn mapping_lock from a spinlock into a sleeping mutex.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
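The constraint is the usual may-sleep rule; a sketch of the resulting teardown path, with signal1 standing in for any of the tracked address_space pointers:

    /* unmap_mapping_range() can sleep, so mapping_lock must be a mutex */
    mutex_lock(&ctx->mapping_lock);
    if (ctx->signal1)
        unmap_mapping_range(ctx->signal1, 0, PAGE_SIZE, 1);
    mutex_unlock(&ctx->mapping_lock);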
-
- 24 Apr 2007, 6 commits
-
-
Committed by Jeremy Kerr

Change the loop in spu_wait to be a little more straightforward.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

There is no reason for run_sema to be a struct semaphore. Change it to a mutex and rename it accordingly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

Until now, we have always entered the spu page fault handler with a mutex for the spu context held. This has multiple bad side-effects:

- it becomes impossible to suspend the context during page faults
- if an spu program attempts to access its own mmio areas through DMA, we get an immediate livelock when the nopage function tries to acquire the same mutex

This patch makes the page fault logic operate on a struct spu_context instead of a struct spu, and moves it from spu_base.c to a new file fault.c inside of spufs. We now also need to copy the dar and dsisr contents of the last fault into the saved context, to have them accessible in case we schedule the context out before activating the page fault handler.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
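The fault-state copy mentioned above amounts to something like this sketch, with csa being the saved context area (field names illustrative):

    /* record the fault so it survives a schedule-out of the context */
    ctx->csa.dar = ea;        /* faulting effective address */
    ctx->csa.dsisr = dsisr;   /* fault reason bits */

    /* later, fault.c handles it against the context, not the spu: */
    int spufs_handle_class1(struct spu_context *ctx);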
-
Committed by Christoph Hellwig

For quite a while now, spu state has been protected by a simple mutex instead of the old rw_semaphore, and this means we can simplify the locking around spu_setup_isolated a lot.

Instead of doing an spu_release before entering spu_setup_isolated and then calling the complicated spu_acquire_exclusive, we can now simply enter the function locked and in a guaranteed runnable state, so that the only bit of spu_acquire_exclusive that's left is the call to spu_unmap_mappings. Similarly, there's no more need to unlock and reacquire the state_mutex when spu_setup_isolated is done; we can always return with the lock held and only drop it in spu_run_init in the failure case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

Make sure the pointers to the various mappings are cleared once the last user has stopped using them. This avoids accessing freed memory when tearing down the gang directory, as well as optimizing away pte invalidations if no one uses these.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

The scheduler workqueue may rearm itself and deadlock when we try to stop it. Put a flag in place to skip the work if we're tearing down the context.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
- 10 Mar 2007, 1 commit
-
-
Committed by Christoph Hellwig

This optimization was added recently but is still buggy, so back it out for now.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
- 14 Feb 2007, 8 commits
-
-
Committed by Christoph Hellwig

For SCHED_RR tasks we can do some really trivial timeslicing. Basically, we fire up a timer on every scheduler tick that searches for a higher- or same-priority thread on the runqueue, and if there is one, context switches to it. Because we can't lock spus from timer context, we actually run this from a delayed runqueue instead of a timer.

A nice optimization would be to skip the actual priority bitmap search when there are fewer contexts than physical spus available. To implement this I need a so-far-unpublished patch from Andre, and it will be added after we have that patch in.

Note that right now we only do the time slicing for SCHED_RR tasks. The code would work for SCHED_OTHER tasks as well, but their prio value is derived from the one the PPU thread has at the time of spu_run, and using this for spu scheduling decisions would make the code very unfair. SCHED_OTHER support will be enabled once the spu scheduler knows how to calculate spu_context.prio (very soon).

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
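A sketch of the delayed-work variant (spu locks can't be taken from timer context, hence the workqueue; helper names hypothetical):

    static void spusched_tick(struct work_struct *work)
    {
        struct delayed_work *dwork =
            container_of(work, struct delayed_work, work);
        struct spu_context *ctx =
            container_of(dwork, struct spu_context, sched_work);

        /* preempt only if a same-or-higher priority context is queued */
        if (ctx->policy == SCHED_RR && runqueue_has_contender(ctx->prio))
            spu_yield(ctx);
        else
            schedule_delayed_work(&ctx->sched_work, SPU_TIMESLICE);
    }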
-
Committed by Christoph Hellwig

If we start a spu context with realtime priority, we want it to run immediately and not wait until some other lower-priority thread has finished. Try to find a suitable victim and use its spu in this case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

There is no need to directly wake up contexts in spu_activate when called from spu_run, so add a flag to suppress this wakeup.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

This is the biggest patch in this series, and it reworks the guts of the spu scheduler runqueue mechanism:

- instead of embedding a waitqueue in the runqueue, there is now a simple doubly-linked list; the actual wakeups happen by reusing the stop_wq in the spu context (maybe we should rename it one day)
- spu_free and spu_prio_wakeup are merged into a single spu_reschedule function
- various functionality is split out into small helpers, and kerneldoc comments are added in various places to document what's going on
- spu_activate is rewritten into a tight loop by removing tests for various impossible conditions and using the infrastructure in this patch

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

It doesn't make any sense to have a priority field in the physical spu structure. Move it into the spu context instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

Various cleanups in the code surrounding the state semaphore:

- inline spu_acquire/spu_release
- clean up spu_acquire_* and add kerneldoc comments to these functions
- remove spu_release_exclusive and replace it with spu_release

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

The r/w semaphore used to lock the spus was overkill; it can be replaced with a mutex to make it faster, simpler, and easier to debug. It also helps to allow making most spufs operations interruptible in future patches.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

Remove the SPU_CONTEXT_PREEMPT define. It's unused and won't be used in this form after the scheduler rework.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
- 13 Feb 2007, 2 commits
-
-
Committed by Benjamin Herrenschmidt

It looks like we've had some serious bitrot there, mostly due to tracking of the address_space's of mmap'ed files getting out of sync with the actual mmap code. The mfc, mss and psmap were not tracked properly and thus not invalidated on context switches (oops!).

I also removed the various file->f_mapping = inode->i_mapping; assignments that were done in the other open() routines, since that is already done for us by __dentry_open.

One improvement we might want to make later is to assign the various ctx-> fields at mmap time instead of file open/close time, so that we don't call unmap_mapping_range() on things that have not been mmap'ed.

Finally, I added some smp_wmb's after assigning the ctx-> fields to make sure they are visible to other CPUs. I don't think this is really necessary, as I suspect locking in the fs layer will make that happen anyway, but better safe than sorry.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
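An illustrative open() following that pattern — no manual f_mapping assignment, a tracked mapping pointer, then the barrier (a sketch, not the exact patch):

    static int spufs_signal1_open(struct inode *inode, struct file *file)
    {
        struct spu_context *ctx = SPUFS_I(inode)->i_ctx;

        file->private_data = ctx;
        ctx->signal1 = inode->i_mapping;  /* tracked for invalidation */
        smp_wmb();  /* publish before other cpus can observe the ctx */
        return nonseekable_open(inode, file);
    }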
-
Committed by Arjan van de Ven

Many struct file_operations in the kernel can be "const". Marking them const moves these to the .rodata section, which avoids false sharing with potentially dirty data. In addition, it'll catch accidental writes to these shared resources at compile time.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
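For example, a representative spufs instance (entries vary per file):

    static const struct file_operations spufs_mbox_fops = {
        .open = spufs_pipe_open,
        .read = spufs_mbox_read,
    };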
-
- 04 Dec 2006, 1 commit
-
-
Committed by Dwayne Grant McConnell

This patch adds SPU elf notes to the coredump. It creates a separate note for each of /regs, /fpcr, /lslr, /decr, /decr_status, /mem, /signal1, /signal1_type, /signal2, /signal2_type, /event_mask, /event_status, /mbox_info, /ibox_info, /wbox_info, /dma_info, /proxydma_info, /object-id.

A new macro, ARCH_HAVE_EXTRA_NOTES, was created for architectures to specify that they have extra elf core notes. A new macro, ELF_CORE_EXTRA_NOTES_SIZE, was created so the size of the additional notes could be calculated and added to the notes phdr entry. A new macro, ELF_CORE_WRITE_EXTRA_NOTES, was created so the new notes would be written after the existing notes.

The SPU coredump code resides in spufs. Stub functions are provided in the kernel which are hooked into the spufs code, which does the actual work, via register_arch_coredump_calls(). A new set of __spufs_<file>_read/get() functions was provided to allow the coredump code to read from the spufs files without having to lock the SPU context for each file read from.

Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Dwayne Grant McConnell <decimal@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
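The hook points read roughly like this sketch (macro names as given in the text above; the extern prototypes are assumptions):

    #ifdef ARCH_HAVE_EXTRA_NOTES
    extern int arch_notes_size(void);
    extern void arch_write_notes(struct file *file);

    #define ELF_CORE_EXTRA_NOTES_SIZE   arch_notes_size()
    #define ELF_CORE_WRITE_EXTRA_NOTES  arch_write_notes(file)
    #endif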
-