- 14 Feb 2007, 8 commits
-
-
Committed by Christoph Hellwig

For SCHED_RR tasks we can do some really trivial timeslicing. Basically we fire up a timer for every scheduler tick that searches for a thread of equal or higher priority on the runqueue and, if there is one, context-switches to it. Because we can't lock spus from timer context, we actually run this from a delayed workqueue instead of a timer.

A nice optimization would be to skip the actual priority-bitmap search when there are fewer contexts than physical spus available. To implement this I need a so far unpublished patch from Andre, and it will be added after we have that patch in.

Note that right now we only do the time slicing for SCHED_RR tasks. The code would work for SCHED_OTHER tasks as well, but their prio value is derived from the one the PPU thread has at the time of spu_run, and using this for spu scheduling decisions would make the code very unfair. SCHED_OTHER support will be enabled once the spu scheduler knows how to calculate spu_context.prio (very soon).

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
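A minimal sketch of that tick mechanism, assuming invented example_* names, an illustrative timeslice value, and the stock delayed-workqueue API; this is not the actual spufs code:

```c
#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/types.h>

/* One reschedule check per timeslice; the interval is illustrative. */
#define EXAMPLE_SPU_TIMESLICE	(HZ / 100)

static struct delayed_work example_tick_work;

static bool example_higher_prio_waiting(void)
{
	return false;	/* stub: search the runqueue priority bitmap here */
}

static void example_yield_spu(void)
{
	/* stub: context-switch the SPU to the waiting context */
}

/* Runs from the workqueue because SPUs cannot be locked in timer context. */
static void example_spu_tick(struct work_struct *work)
{
	if (example_higher_prio_waiting())
		example_yield_spu();
	schedule_delayed_work(&example_tick_work, EXAMPLE_SPU_TIMESLICE);
}

static void example_start_timeslicing(void)
{
	INIT_DELAYED_WORK(&example_tick_work, example_spu_tick);
	schedule_delayed_work(&example_tick_work, EXAMPLE_SPU_TIMESLICE);
}
```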
-
Committed by Christoph Hellwig

If we start a spu context with realtime priority, we want it to run immediately and not wait until some other lower-priority thread has finished. Try to find a suitable victim in that case and use its spu.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
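A sketch of the victim-search idea under stated assumptions: a fixed array of eight SPUs, lower prio value means higher priority (as in the kernel), and no locking shown. All names are invented for illustration:

```c
#include <stddef.h>

struct example_spu {
	struct example_ctx *ctx;	/* context currently on this SPU */
};

struct example_ctx {
	int prio;	/* lower value = higher priority */
};

static struct example_spu example_spus[8];

/* Find the lowest-priority running context to preempt for rt_ctx. */
static struct example_spu *example_find_victim(struct example_ctx *rt_ctx)
{
	struct example_spu *victim = NULL;
	int i;

	for (i = 0; i < 8; i++) {
		struct example_ctx *ctx = example_spus[i].ctx;

		if (ctx && ctx->prio > rt_ctx->prio &&
		    (!victim || ctx->prio > victim->ctx->prio))
			victim = &example_spus[i];
	}
	return victim;	/* NULL if everyone running is higher priority */
}
```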
-
Committed by Christoph Hellwig

There is no need to directly wake up contexts in spu_activate when called from spu_run, so add a flag to suppress this wakeup.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

This is the biggest patch in this series, and it reworks the guts of the spu scheduler runqueue mechanism:

- Instead of embedding a waitqueue in the runqueue there is now a simple doubly-linked list; the actual wakeups happen by reusing the stop_wq in the spu context (maybe we should rename it one day).
- spu_free and spu_prio_wakeup are merged into a single spu_reschedule function.
- Various functionality is split out into small helpers, and kerneldoc comments are added in various places to document what's going on.
- spu_activate is rewritten into a tight loop by removing tests for various impossible conditions and using the infrastructure in this patch.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
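A compact sketch of such a priority runqueue, with invented example_* names and an assumed MAX_PRIO-sized table; the real spufs structures differ in detail:

```c
#include <linux/list.h>
#include <linux/bitops.h>
#include <linux/bitmap.h>

#define EXAMPLE_MAX_PRIO	140	/* assumption, mirrors MAX_PRIO */

/* One list per priority plus a bitmap for O(1) "best waiter" lookup. */
struct example_prio_array {
	DECLARE_BITMAP(bitmap, EXAMPLE_MAX_PRIO);
	struct list_head runq[EXAMPLE_MAX_PRIO];
};

static void example_runq_init(struct example_prio_array *a)
{
	int i;

	bitmap_zero(a->bitmap, EXAMPLE_MAX_PRIO);
	for (i = 0; i < EXAMPLE_MAX_PRIO; i++)
		INIT_LIST_HEAD(&a->runq[i]);
}

static void example_runq_add(struct example_prio_array *a,
			     struct list_head *entry, int prio)
{
	list_add_tail(entry, &a->runq[prio]);
	__set_bit(prio, a->bitmap);
}

static void example_runq_del(struct example_prio_array *a,
			     struct list_head *entry, int prio)
{
	list_del_init(entry);
	if (list_empty(&a->runq[prio]))
		__clear_bit(prio, a->bitmap);
}

/* Lowest set bit == best waiting priority (lower value wins). */
static int example_runq_best(struct example_prio_array *a)
{
	return find_first_bit(a->bitmap, EXAMPLE_MAX_PRIO);
}
```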
-
Committed by Christoph Hellwig

It doesn't make any sense to have a priority field in the physical spu structure. Move it into the spu context instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

Various cleanups in the code surrounding the state semaphore:

- inline spu_acquire/spu_release
- clean up spu_acquire_* and add kerneldoc comments to these functions
- remove spu_release_exclusive and replace it with spu_release

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

The r/w semaphore to lock the spus was overkill and can be replaced with a mutex to make it faster, simpler, and easier to debug. It also helps to allow making most spufs operations interruptible in future patches.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
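A sketch of what mutex-based acquire/release helpers look like in this scheme, with invented names; the interruptible variant illustrates why a plain mutex makes future interruptible sleeps easy:

```c
#include <linux/mutex.h>

struct example_ctx {
	struct mutex state_mutex;	/* guards the context state */
	int state;			/* e.g. runnable vs. saved */
};

static void example_ctx_init(struct example_ctx *ctx)
{
	mutex_init(&ctx->state_mutex);
}

static inline void example_acquire(struct example_ctx *ctx)
{
	mutex_lock(&ctx->state_mutex);
}

static inline int example_acquire_interruptible(struct example_ctx *ctx)
{
	/* returns -EINTR if a signal arrives while waiting */
	return mutex_lock_interruptible(&ctx->state_mutex);
}

static inline void example_release(struct example_ctx *ctx)
{
	mutex_unlock(&ctx->state_mutex);
}
```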
-
Committed by Christoph Hellwig

Remove the SPU_CONTEXT_PREEMPT define. It's unused and won't be used in this form after the scheduler rework.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
- 13 Feb 2007, 2 commits
-
-
Committed by Benjamin Herrenschmidt

It looks like we've had some serious bitrot there, mostly due to tracking of the address_spaces of mmap'ed files getting out of sync with the actual mmap code. The mfc, mss and psmap areas were not tracked properly and thus not invalidated on context switches (oops!).

I also removed the various file->f_mapping = inode->i_mapping; assignments that were done in the other open() routines, since that is already done for us by __dentry_open.

One improvement we might want to do later is to assign the various ctx-> fields at mmap time instead of file open/close time, so that we don't call unmap_mapping_range() on things that have not been mmap'ed.

Finally, I added some smp_wmb's after assigning the ctx-> fields to make sure they are visible to other CPUs. I don't think this is really necessary, as I suspect locking in the fs layer will make that happen anyway, but better safe than sorry.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
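A minimal sketch of the publish-then-invalidate pattern described above; the field names and area sizes are illustrative stand-ins for the spufs context fields, not the actual code:

```c
#include <linux/mm.h>		/* unmap_mapping_range() */
#include <linux/fs.h>		/* struct address_space, struct file */
#include <asm/barrier.h>	/* smp_wmb() */

/* Illustrative context: one address_space pointer per mmap'able file. */
struct example_spu_ctx {
	struct address_space *mfc, *mss, *psmap;
};

/* At mmap time, publish the mapping before anyone can fault on it. */
static void example_publish_mfc(struct example_spu_ctx *ctx,
				struct file *file)
{
	ctx->mfc = file->f_mapping;
	smp_wmb();	/* make the pointer visible to other CPUs */
}

/* When the context loses its SPU, tear down all user PTEs so the next
 * fault re-resolves against the new (or no) physical SPU. */
static void example_invalidate(struct example_spu_ctx *ctx)
{
	if (ctx->mfc)
		unmap_mapping_range(ctx->mfc, 0, 0x1000, 1);
	if (ctx->mss)
		unmap_mapping_range(ctx->mss, 0, 0x1000, 1);
	if (ctx->psmap)
		unmap_mapping_range(ctx->psmap, 0, 0x20000, 1);
}
```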
-
Committed by Arjan van de Ven

Many struct file_operations in the kernel can be "const". Marking them const moves these to the .rodata section, which avoids false sharing with potential dirty data. In addition, it'll catch accidental writes to these shared resources at compile time.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
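The pattern in question, as a tiny self-contained sketch (the handler name and body are placeholders):

```c
#include <linux/fs.h>
#include <linux/module.h>

static ssize_t example_read(struct file *file, char __user *buf,
			    size_t len, loff_t *pos)
{
	return 0;	/* placeholder handler */
}

/* const puts the ops table in .rodata; any accidental write to it
 * now fails at compile time instead of dirtying a shared cacheline. */
static const struct file_operations example_fops = {
	.owner = THIS_MODULE,
	.read  = example_read,
};
```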
-
- 04 Dec 2006, 5 commits
-
-
Committed by Dwayne Grant McConnell

This patch adds SPU elf notes to the coredump. It creates a separate note for each of /regs, /fpcr, /lslr, /decr, /decr_status, /mem, /signal1, /signal1_type, /signal2, /signal2_type, /event_mask, /event_status, /mbox_info, /ibox_info, /wbox_info, /dma_info, /proxydma_info, /object-id.

A new macro, ARCH_HAVE_EXTRA_NOTES, was created for architectures to specify that they have extra elf core notes. A new macro, ELF_CORE_EXTRA_NOTES_SIZE, was created so the size of the additional notes could be calculated and added to the notes phdr entry. A new macro, ELF_CORE_WRITE_EXTRA_NOTES, was created so the new notes would be written after the existing notes.

The SPU coredump code resides in spufs. Stub functions are provided in the kernel which are hooked into the spufs code that does the actual work via register_arch_coredump_calls(). A new set of __spufs_<file>_read/get() functions was provided to allow the coredump code to read from the spufs files without having to lock the SPU context for each file read.

Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Dwayne Grant McConnell <decimal@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Jeremy Kerr

In order to fit with the "don't-run-spus-outside-of-spu_run" model, this patch starts the isolated-mode loader in spu_run rather than spu_create. If spu_run is passed an isolated-mode context that isn't yet in the isolated state, it will run the loader.

This fixes potential races with the isolated SPE app doing a stop-and-signal before the PPE has called spu_run: bugzilla #29111. Also (in conjunction with a mambo patch), this addresses #28565, as we always set the runcntrl register when entering spu_run.

It is up to libspe to ensure that isolated-mode apps are cleaned up after running to completion, ie, put the app through the "ISOLATE EXIT" state (see Ch11 of the CBEA).

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Jeremy Kerr

This change adds a read accessor for the SPE problem-state run control register. This is required for applying (userspace) changes made to the run control register while the SPE is stopped; simply asserting the master run control bit is not sufficient. My next patch for isolated-mode setup requires this.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

When the user changes the runcontrol register, an SPU might be running without a process being attached to it and waiting for events. In order to prevent this, make sure we always disable the priv1 master control when we're not inside of spu_run.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Dwayne Grant McConnell

The /lslr file gives read access to the SPU_LSLR register in hex (for example, 0x3fff). The /dma_info file provides read access to the SPU Command Queue in a binary format. The /proxydma_info file provides read access to the Proxy Command Queue in a binary format. The spu_info.h file provides data structures for interpreting the binary format of /dma_info and /proxydma_info.

Signed-off-by: Dwayne Grant McConnell <decimal@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
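A userspace sketch of reading one of these files; the /spu mountpoint and the context name are assumptions chosen for the example:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[32];
	ssize_t n;
	int fd = open("/spu/myctx/lslr", O_RDONLY);

	if (fd < 0)
		return 1;
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("LSLR: %s\n", buf);	/* e.g. "0x3fff" */
	}
	close(fd);
	return 0;
}
```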
-
- 25 Oct 2006, 2 commits
-
-
Committed by Jeremy Kerr

When in isolated mode, SPEs have access to an area of persistent storage, which is per-SPE. In order for isolated-mode apps to communicate arbitrary data through this storage, we need to ensure that isolated physical SPEs can be reused for subsequent applications.

Add a file ("recycle") in a spethread dir to enable isolated-mode recycling. Writing to this file makes the kernel reload the isolated-mode loader, allowing a new app to be run on the same physical SPE. This requires the spu_acquire_exclusive function to enforce exclusive access to the SPE while the loader is initialised.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Mark Nutter

This adds two new flags to spu_create:

- SPU_CREATE_NONSCHED: create a context that is never moved away from an SPE once it has started running. This flag can only be used by tasks with the CAP_SYS_NICE capability.
- SPU_CREATE_ISOLATED: create a nonschedulable context that enters isolation mode upon first run. This requires the SPU_CREATE_NONSCHED flag.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
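A userspace usage sketch. There is no libc wrapper for spu_create, so it is invoked via syscall(); the flag values below are assumptions extrapolated from the 0x2 quoted for SPU_CREATE_GANG in the gang commit, and __NR_spu_create being available from the headers is a powerpc-specific assumption:

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

#define SPU_CREATE_NONSCHED	0x0004	/* assumed value */
#define SPU_CREATE_ISOLATED	0x0008	/* assumed value */

int main(void)
{
	/* isolated implies (and requires) nonsched; the caller also
	 * needs CAP_SYS_NICE for the nonsched flag */
	int fd = syscall(__NR_spu_create, "/spu/isolated-app",
			 SPU_CREATE_NONSCHED | SPU_CREATE_ISOLATED, 0700);

	return fd < 0;
}
```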
-
- 05 Oct 2006, 3 commits
-
-
Committed by Arnd Bergmann

This adds an 'object-id' file that the spe library can use to store a pointer to its ELF object. This was originally meant for use by oprofile, but is now also used by the GNU debugger, if available.

In order for oprofile to find the location in an spu-elf binary where an event counter triggered, we need a way to identify the binary in the first place. Unfortunately, that binary itself can be embedded in a powerpc ELF binary. Since we can assume it is mapped into the effective address space of the running process, have that process write the pointer value into a new spufs file. When a context switch occurs, pass the user value to the profiler so that it can look at the mapped file (with some care).

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

Add the concept of a gang to spufs as a new type of object. So far this has no impact whatsoever on scheduling, but it makes it possible to add that later.

The new type of object in spufs is a spu_gang. It is created with the spu_create system call with the flags argument set to SPU_CREATE_GANG (0x2). Inside of a spu_gang it is then possible to create spu_context objects, which until now was only possible at the root of spufs.

There is a new member in struct spu_context pointing to the spu_gang it belongs to, if any. The spu_gang maintains a list of spu_context structures that are its children. This information can then be used in the scheduler in the future.

There is still a bug that needs to be resolved in this basic infrastructure regarding the order in which objects are removed. When the spu_gang file descriptor is closed before the spu_context descriptors, we leak the dentry and inode for the gang. Any ideas how to cleanly solve this are appreciated.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
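A userspace sketch of the nesting described above: first create the gang, then create contexts by path inside it. The 0x2 flag value is taken from the text; the raw syscall() invocation and path names are assumptions as in the earlier sketch:

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

#define SPU_CREATE_GANG	0x0002	/* value quoted in the commit text */

int main(void)
{
	/* create the gang object... */
	int gang = syscall(__NR_spu_create, "/spu/mygang",
			   SPU_CREATE_GANG, 0700);
	/* ...then create a context inside it */
	int ctx = syscall(__NR_spu_create, "/spu/mygang/ctx0", 0, 0700);

	return gang < 0 || ctx < 0;
}
```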
-
Committed by Arnd Bergmann

This tries to fix spufs so we have an interface closer to what is specified in the man page for events returned in the third argument of spu_run. Fortunately, libspe has never been using the returned contents of that register, as they were the same as the return code of spu_run (duh!).

Unlike the specification that we never implemented correctly, we now require a SPU_CREATE_EVENTS_ENABLED flag passed to spu_create in order to get the new behavior. When this flag is not passed, spu_run will simply ignore the third argument.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
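A sketch of the new contract from userspace: only contexts created with SPU_CREATE_EVENTS_ENABLED get event bits stored through the third spu_run argument. The flag value, syscall numbers, and paths are assumptions, as in the earlier sketches:

```c
#define _GNU_SOURCE
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SPU_CREATE_EVENTS_ENABLED	0x0001	/* assumed value */

int main(void)
{
	uint32_t npc = 0, event = 0;
	int fd = syscall(__NR_spu_create, "/spu/evctx",
			 SPU_CREATE_EVENTS_ENABLED, 0700);

	if (fd < 0)
		return 1;
	/* return value is the SPU stop status; with the flag set,
	 * 'event' now carries the documented event bits instead of
	 * duplicating that return value */
	syscall(__NR_spu_run, fd, &npc, &event);
	return 0;
}
```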
-
- 27 Mar 2006, 2 commits
-
-
Committed by Mark Nutter

This patch is layered on top of CONFIG_SPARSEMEM and is patterned after the direct mapping of LS. It allows mmap() of the following regions: "mfc", which represents the area from [0x3000 - 0x3fff]; "cntl", which represents the area from [0x4000 - 0x4fff]; "signal1", which begins at offset 0x14000; and "signal2", which begins at offset 0x1c000.

The signal1 & signal2 files may be mmap()'d by regular user processes. The cntl and mfc files, on the other hand, may only be accessed if the owning process has CAP_SYS_RAWIO, because they have the potential to confuse the kernel with regard to parallel access to the same files with regular file operations: the kernel always holds a spinlock when accessing registers in these areas to serialize them, which cannot be guaranteed with user mmaps.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
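A userspace sketch of mapping the signal1 notification area of a context. The context path is an example, and mapping the per-file area from offset 0 is an assumption (the 0x14000/0x1c000 offsets quoted above describe positions within the overall problem-state layout):

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	volatile uint32_t *sig;
	void *p;
	int fd = open("/spu/myctx/signal1", O_RDWR);

	if (fd < 0)
		return 1;
	p = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;
	sig = p;
	*sig = 1;	/* write a signal notification to the SPE */
	munmap(p, 0x1000);
	close(fd);
	return 0;
}
```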
-
Committed by Arnd Bergmann

This patch adds a new file called 'mfc' to each spufs directory. The file accepts DMA commands that are a subset of what would be legal DMA commands for problem state register access. Upon reading the file, a bitmask is returned with the completed tag groups set.

The file is meant to be used from an abstraction in libspe that is added by a different patch. From the kernel perspective, this means a process can now offload a memory copy from or into an SPE local store without having to run code on the SPE itself. The transfer will only be performed while the SPE is owned by one thread that is waiting in the spu_run system call, and the data will be transferred into that thread's address space, independent of which thread started the transfer.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
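A userspace sketch of queuing a DMA through the mfc file and polling the tag status by reading it back. The struct layout is declared locally as an assumption (modeled on the command format the file accepts), the 0x40 GET opcode is the standard CBEA value, and the context path is an example:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Assumed layout of one queued MFC command. */
struct mfc_dma_command {
	int32_t  pad;	/* reserved */
	uint32_t lsa;	/* local store address */
	uint64_t ea;	/* effective (process) address */
	uint16_t size;	/* bytes, multiple of 16 */
	uint16_t tag;	/* tag group */
	uint16_t class;	/* class ID */
	uint16_t cmd;	/* 0x40 = MFC GET (EA -> LS), CBEA opcode */
};

int main(void)
{
	static char buf[4096] __attribute__((aligned(128)));
	struct mfc_dma_command cmd = {
		.lsa  = 0x0,
		.ea   = (uint64_t)(uintptr_t)buf,
		.size = sizeof(buf),
		.tag  = 1,
		.cmd  = 0x40,
	};
	uint32_t tagstatus;
	int fd = open("/spu/myctx/mfc", O_RDWR);

	if (fd < 0 || write(fd, &cmd, sizeof(cmd)) != sizeof(cmd))
		return 1;
	/* blocks until some tag group completes; returns a bitmask */
	read(fd, &tagstatus, sizeof(tagstatus));
	printf("completed tag groups: 0x%x\n", tagstatus);
	return 0;
}
```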
-
- 09 Jan 2006, 11 commits
-
-
Committed by Arnd Bergmann

One local variable is missing an __iomem modifier; in another place, we pass a completely unused argument with a missing __user modifier.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

The logic for sys_spu_run keeps growing, and it does not really belong in file.c any more since we moved away from using regular file operations to our own syscall. No functional change in here.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

Checking bits manually might not be synchronized with the use of set_bit/clear_bit. Make sure we always use the correct bitops by removing the unnecessary identifiers.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
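The pattern the patch enforces, as a small sketch with invented flag names: a flags word touched by set_bit/clear_bit must also be queried through the bitops helpers, never with open-coded masks:

```c
#include <linux/bitops.h>
#include <linux/types.h>

#define EXAMPLE_FLAG_NEWLY_CREATED	0	/* bit number, illustrative */

static unsigned long example_flags;

static void example_mark_created(void)
{
	set_bit(EXAMPLE_FLAG_NEWLY_CREATED, &example_flags);
}

static bool example_is_created(void)
{
	/* test_bit(), not "example_flags & mask" */
	return test_bit(EXAMPLE_FLAG_NEWLY_CREATED, &example_flags);
}
```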
-
Committed by Arnd Bergmann

During an earlier cleanup, we lost the serialization of multiple spu_run calls performed on the same spu_context. In order to get this back, introduce a mutex in the spu_context that is held inside of spu_run. Noticed by Al Viro.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

Only checking for SPUFS_MAGIC is not reliable, because it might not be unique in theory. Worse than that, we accidentally allow spu_run to be performed on any file in spufs, not just those returned from spu_create as intended. Noticed by Al Viro.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

Handling mailbox interrupts was broken in multiple respects, the combination of which was hiding the bugs most of the time:

- The ibox interrupt mask was open initially even though there are no waiters on a newly created SPU.
- Acknowledging the mailbox interrupt did not work because it is level triggered and the mailbox data is never retrieved from inside the interrupt handler.
- The interrupt handler delivered interrupts with a disabled mask if another interrupt was triggered for the same class but a different mask.
- The poll function did not enable the interrupt if it had not been enabled, so we might run into the poll timeout if none of the other bugs saved us and no signal was delivered.

We probably still have a similar problem with blocking read/write on mailbox files, but that will result in an extra wakeup in the worst case, not in incorrect behaviour.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

This patch reduces the lock complexity of the SPU scheduler, particularly for involuntary preemptive switches. As a result, the new code does a better job of mapping the highest-priority tasks to SPUs.

Lock complexity is reduced by using the system default workqueue to perform involuntary saves. In this way we avoid nasty lock-ordering problems that the previous code had. A "minimum timeslice" for SPU contexts is also introduced; the intent here is to avoid thrashing.

While the new scheduler does a better job at prioritization, it still does nothing for fairness.

From: Mark Nutter <mnutter@us.ibm.com>
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

This patch makes it easier to preempt an SPU context by having the scheduler hold ctx->state_sema for much shorter periods of time.

As part of this restructuring, the control logic for the "run" operation is moved from arch/ppc64/kernel/spu_base.c to fs/spufs/file.c. Of course the base retains "bottom half" handlers for class{0,1} irqs. The new run loop will re-acquire an SPU if preempted.

From: Mark Nutter <mnutter@us.ibm.com>
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

This adds a scheduler for SPUs to make it possible to use more logical SPUs than physical ones are present in the system.

Currently there is no support for preempting a running SPU thread; it has to leave the SPU either by triggering an event on the SPU that causes it to return to the owning thread, or by being sent a signal.

This patch also adds operations that enable accessing an SPU in either runnable or saved state. We use an RW semaphore to protect the state of the SPU from changing underneath us while we are holding it readable. In order to change the state, it is acquired writeable and a context save or restore is executed before downgrading the semaphore to read-only.

From: Mark Nutter <mnutter@us.ibm.com>, Uli Weigand <Ulrich.Weigand@de.ibm.com>
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
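A sketch of that locking scheme with invented names: readers hold the SPU state steady, while a state change takes the semaphore for writing, performs the save or restore, and then downgrades so the caller continues as a reader:

```c
#include <linux/rwsem.h>

struct example_ctx {
	struct rw_semaphore state_sema;
	int state;	/* runnable vs. saved, illustrative */
};

static void example_ctx_init(struct example_ctx *ctx)
{
	init_rwsem(&ctx->state_sema);
	ctx->state = 0;
}

static void example_use_state(struct example_ctx *ctx)
{
	down_read(&ctx->state_sema);
	/* ... access the SPU in whatever state it is in ... */
	up_read(&ctx->state_sema);
}

static void example_change_state(struct example_ctx *ctx, int new_state)
{
	down_write(&ctx->state_sema);
	/* ... perform the context save or restore here ... */
	ctx->state = new_state;
	downgrade_write(&ctx->state_sema);	/* keep read access */
	/* ... continue as a reader without a lock gap ... */
	up_read(&ctx->state_sema);
}
```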
-
Committed by Mark Nutter

Add some infrastructure for saving and restoring the context of an SPE. This patch creates a new structure that can hold the whole state of a physical SPE in memory. It also contains code that avoids races during the context switch, and the binary code that is loaded to the SPU in order to access its registers. The actual PPE- and SPE-side context switch code are two separate patches.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Arnd Bergmann

This is the current version of the spu file system, used for driving SPEs on the Cell Broadband Engine. This release is almost identical to the version for the 2.6.14 kernel posted earlier, which is available as part of the Cell BE Linux distribution from http://www.bsc.es/projects/deepcomputing/linuxoncell/.

This first patch provides all the interfaces for running spu applications, but does not have any support for debugging SPU tasks or for scheduling. Both of these functionalities are added in the subsequent patches.

See Documentation/filesystems/spufs.txt on how to use spufs.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-