- 21 Jun, 2006: 3 commits
-
-
By Jeremy Kerr
Clean up create_spu() a little by using kzalloc instead of kmalloc + assignments.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
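A minimal sketch of the pattern, assuming struct spu comes from <asm/spu.h>; this is not the actual create_spu() body:

#include <linux/slab.h>
#include <asm/spu.h>	/* struct spu */

/* Sketch only: kzalloc() returns zeroed memory, so fields whose initial
 * value should be zero need no explicit assignment after allocation. */
static struct spu *example_alloc_spu(void)
{
	struct spu *spu;

	spu = kzalloc(sizeof(*spu), GFP_KERNEL);
	if (!spu)
		return NULL;

	return spu;
}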
-
By arnd@arndb.de
spufs currently knows only 4k pages and 16M hugetlb pages. Make it use the regular methods for deciding on the SLB bits.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Jeremy Kerr
SPUs are registered as system devices, exposing attributes through sysfs. Since the sysdev includes a kref, we can remove the one in struct spu (it isn't used at the moment anyway). Currently only the interrupt source and numa node attributes are added.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
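As a hedged sketch of what such an attribute looks like with the 2.6-era sysdev API; the isrc and sysdev members of struct spu and the 0400 mode are assumptions for illustration:

#include <linux/kernel.h>
#include <linux/sysdev.h>
#include <asm/spu.h>

/* show() callback for a per-SPU sysfs attribute (member names assumed). */
static ssize_t spu_show_isrc(struct sys_device *sysdev, char *buf)
{
	struct spu *spu = container_of(sysdev, struct spu, sysdev);

	return sprintf(buf, "%d\n", spu->isrc);
}

static SYSDEV_ATTR(isrc, 0400, spu_show_isrc, NULL);

/* Called while registering an SPU; assumes spu->sysdev.cls and .id have
 * already been filled in. */
static int example_register_spu(struct spu *spu)
{
	int ret;

	ret = sysdev_register(&spu->sysdev);
	if (ret)
		return ret;

	return sysdev_create_file(&spu->sysdev, &attr_isrc);
}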
-
- 02 May, 2006: 2 commits
-
-
By Jeremy Kerr
Add an nid member to the spu structure, and store the numa id of the spu there on creation.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By Joel H Schopp
Based on an older patch from Mike Kravetz <kravetz@us.ibm.com>.

We need to have a mem_map for high addresses in order to make fops->no_page work on spufs mem and register files. So far, we have used the memory_present() function during early bootup, but that did not work when CONFIG_NUMA was enabled. We now use the __add_pages() function to add the mem_map when loading the spufs module, which is a lot nicer.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 29 Apr, 2006: 1 commit
-
-
By Arnd Bergmann
This patch disables and saves local interrupts during hash_page processing for SPE contexts. We have to do it explicitly in the spu_irq_class_1_bottom function. For the interrupt handlers, we get the behaviour implicitly by using SA_INTERRUPT to disable interrupts while in the handler.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
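The pattern being described, sketched with placeholder arguments; 0x300 is the data-storage trap number on powerpc, and the exact access flags are elided here:

#include <linux/interrupt.h>	/* local_irq_save()/local_irq_restore() */

/* hash_page() is the powerpc hashing routine; declared here only to keep
 * the sketch self-contained. */
extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap);

static int example_spe_hash_fault(unsigned long ea, unsigned long access)
{
	unsigned long flags;
	int ret;

	/* disable and save local interrupts explicitly around hash_page(),
	 * since this runs from a bottom half rather than from a handler
	 * registered with SA_INTERRUPT */
	local_irq_save(flags);
	ret = hash_page(ea, access, 0x300);
	local_irq_restore(flags);

	return ret;
}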
-
- 27 Mar, 2006: 4 commits
-
-
By Arnd Bergmann
If an SPE attempts a DMA put to a local store after already doing a get, the kernel must update the HW PTE to allow the write access. This case was not being handled correctly.

From: Mike Kistler <mkistler@us.ibm.com>
Signed-off-by: Mike Kistler <mkistler@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Mark Nutter
This patch is layered on top of CONFIG_SPARSEMEM and is patterned after the direct mapping of LS (the local store). It allows mmap() of the following regions: "mfc", which represents the area from [0x3000 - 0x3fff]; "cntl", which represents the area from [0x4000 - 0x4fff]; "signal1", which begins at offset 0x14000; and "signal2", which begins at offset 0x1c000.

The signal1 and signal2 files may be mmap()'d by regular user processes. The cntl and mfc files, on the other hand, may only be accessed if the owning process has CAP_SYS_RAWIO, because they have the potential to confuse the kernel with regard to parallel access to the same files with regular file operations: the kernel always holds a spinlock when accessing registers in these areas to serialize them, which cannot be guaranteed with user mmaps.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Arnd Bergmann
This patch adds a new file called 'mfc' to each spufs directory. The file accepts DMA commands that are a subset of what would be legal DMA commands for problem state register access. Upon reading the file, a bitmask is returned with the completed tag groups set.

The file is meant to be used from an abstraction in libspe that is added by a different patch. From the kernel perspective, this means a process can now offload a memory copy from or into an SPE local store without having to run code on the SPE itself. The transfer will only be performed while the SPE is owned by one thread that is waiting in the spu_run system call, and the data will be transferred into that thread's address space, independent of which thread started the transfer.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
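A hedged userspace sketch of the resulting flow: only the open/write/read sequence reflects the description above; the descriptor layout, the path, and the 32-bit width of the completion mask are illustrative assumptions (the real layout comes from the kernel's SPU headers):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative DMA descriptor; the actual structure and field layout are
 * defined by the kernel headers, not by this sketch. */
struct example_mfc_cmd {
	uint32_t lsa;	/* local store address */
	uint64_t ea;	/* effective address in the thread's address space */
	uint16_t size;
	uint16_t tag;	/* tag group used for completion tracking */
	uint16_t rclass;
	uint16_t cmd;	/* a "put" or "get" command code */
};

int main(void)
{
	/* Path is an example; each SPU context has its own spufs directory. */
	int fd = open("/spu/myctx/mfc", O_RDWR);
	struct example_mfc_cmd cmd = { 0 };
	uint32_t done = 0;

	if (fd < 0)
		return 1;

	/* Queue a transfer by writing the descriptor... */
	write(fd, &cmd, sizeof(cmd));

	/* ...and read back a bitmask of completed tag groups. */
	read(fd, &done, sizeof(done));
	printf("completed tag groups: 0x%08x\n", done);

	close(fd);
	return 0;
}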
-
By Ingo Molnar
Semaphore to mutex conversion. The conversion was generated via scripts, and the result was validated automatically via a script as well.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Dave Jones <davej@codemonkey.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jens Axboe <axboe@suse.de>
Cc: Neil Brown <neilb@cse.unsw.edu.au>
Acked-by: Alasdair G Kergon <agk@redhat.com>
Cc: Greg KH <greg@kroah.com>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Adam Belay <ambx1@neo.rr.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
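The mechanical shape of such a conversion, shown on an illustrative lock rather than a specific spufs one:

#include <linux/mutex.h>

/*
 * Before (semaphore used as a mutex):
 *
 *	static DECLARE_MUTEX(example_sem);	// struct semaphore
 *	down(&example_sem);
 *	...
 *	up(&example_sem);
 */

/* After (a real mutex): */
static DEFINE_MUTEX(example_lock);

static void example_critical_section(void)
{
	mutex_lock(&example_lock);
	/* ... protected work ... */
	mutex_unlock(&example_lock);
}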
-
- 09 Jan, 2006: 12 commits
-
-
By Arnd Bergmann
So far, all SPU-triggered interrupts always end up on the first SMT thread, which is a bad solution. This patch implements setting the affinity to the CPU that was running last when entering execution on an SPU. This should result in a significant reduction in IPI calls and better cache locality for SPE thread specific data.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Arnd Bergmann
One local variable is missing an __iomem modifier; in another place, we pass a completely unused argument with a missing __user modifier.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
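The kind of sparse annotations in question, in sketch form; function and variable names are placeholders:

#include <linux/fs.h>
#include <asm/io.h>

/* Pointers to memory-mapped registers carry __iomem so sparse can flag
 * plain dereferences and mismatched assignments. */
static u32 example_read_reg(void __iomem *regs)
{
	return in_be32(regs);
}

/* Userspace buffer pointers in file operations carry __user. */
static ssize_t example_read(struct file *file, char __user *buf,
			    size_t len, loff_t *pos)
{
	return 0;	/* placeholder body */
}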
-
By Arnd Bergmann
In a hypervisor based setup, direct access to the first privileged register space can typically not be allowed to the kernel and has to be implemented through hypervisor calls. As suggested by Masato Noguchi, let's abstract the register access through a number of function calls. Since there is currently no public specification of actual hypervisor calls to implement this, I only provide a place that makes it easier to hook into.

Cc: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Cc: Geoff Levand <geoff.levand@am.sony.com>
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
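A sketch of the abstraction idea only: every name below (the register-block type, the field, the accessor) is an assumption for illustration, not the interface the patch actually defines.

#include <linux/types.h>
#include <asm/io.h>

/* Illustrative privileged register block. */
struct example_priv1 {
	u64 int_mask_RW;
};

/* Bare-metal backend: a direct MMIO store ... */
static inline void example_int_mask_set(struct example_priv1 __iomem *priv1,
					u64 mask)
{
	out_be64(&priv1->int_mask_RW, mask);
}

/* ... which a hypervisor-based setup could later reimplement as an hcall
 * while callers keep using the same function. */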
-
By Arnd Bergmann
Checking bits manually might not be synchronized with the use of set_bit/clear_bit. Make sure we always use the correct bitops by removing the unnecessary identifiers.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
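The pattern in sketch form, with an illustrative flag name: once a word is updated with set_bit()/clear_bit(), reads of it should go through test_bit() as well, rather than open-coded mask checks.

#include <linux/bitops.h>

#define EXAMPLE_SWITCH_PENDING	0	/* illustrative flag bit */

static unsigned long example_flags;

static void example_mark_pending(void)
{
	set_bit(EXAMPLE_SWITCH_PENDING, &example_flags);
}

static int example_is_pending(void)
{
	/* use test_bit() rather than "example_flags & 0x1" so the access
	 * pairs correctly with set_bit()/clear_bit() above */
	return test_bit(EXAMPLE_SWITCH_PENDING, &example_flags);
}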
-
By Arnd Bergmann
Because the spu class 1 interrupt handler always clears DSISR, the kernel may lose a Class1[Mf] interrupt.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Geoff Levand <geoff.levand@am.sony.com>
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Arnd Bergmann
Handling mailbox interrupts was broken in multiple respects, the combination of which was hiding the bugs most of the time.

- The ibox interrupt mask was open initially, even though there are no waiters on a newly created SPU.
- Acknowledging the mailbox interrupt did not work because it is level triggered and the mailbox data is never retrieved from inside the interrupt handler.
- The interrupt handler delivered interrupts with a disabled mask if another interrupt was triggered for the same class but a different mask.
- The poll function did not enable the interrupt if it had not been enabled, so we might run into the poll timeout if none of the other bugs saved us and no signal was delivered.

We probably still have a similar problem with blocking read/write on mailbox files, but that will result in an extra wakeup in the worst case, not in incorrect behaviour.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Arnd Bergmann
This patch makes it easier to preempt an SPU context by having the scheduler hold ctx->state_sema for much shorter periods of time.

As part of this restructuring, the control logic for the "run" operation is moved from arch/ppc64/kernel/spu_base.c to fs/spufs/file.c. Of course the base retains "bottom half" handlers for class{0,1} irqs. The new run loop will re-acquire an SPU if preempted.

From: Mark Nutter <mnutter@us.ibm.com>
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Arnd Bergmann
spufs is rather noisy when debugging is enabled; this turns off the messages for production use.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
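One common way this is done, sketched here (whether spufs uses pr_debug() or its own wrapper is not stated above): pr_debug() compiles to nothing unless DEBUG is defined for the translation unit, so production builds drop the messages at compile time.

/* Define DEBUG (before the includes, or via -DDEBUG in the Makefile) to
 * get the verbose output back during development. */
/* #define DEBUG */

#include <linux/kernel.h>

static void example_event(int npc)
{
	/* compiled out entirely when DEBUG was not defined above */
	pr_debug("%s: npc=0x%x\n", __func__, npc);
}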
-
By Arnd Bergmann
This changes all exported symbols of spufs to EXPORT_SYMBOL_GPL. The spu_ibox_read/spu_wbox_write symbols are not exported any more when the scheduler patch is applied.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
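In sketch form the change is a one-line substitution per symbol; spu_alloc is used purely as an example name here, next to where the function is defined:

#include <linux/module.h>
#include <asm/spu.h>

/* Was: EXPORT_SYMBOL(spu_alloc);
 * Now only GPL-compatible modules may resolve the symbol. */
EXPORT_SYMBOL_GPL(spu_alloc);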
-
By Arnd Bergmann
This adds a scheduler for SPUs to make it possible to use more logical SPUs than physical ones are present in the system.

Currently, there is no support for preempting a running SPU thread: threads have to leave the SPU by either triggering an event on the SPU that causes it to return to the owning thread or by sending a signal to it.

This patch also adds operations that enable accessing an SPU in either runnable or saved state. We use an RW semaphore to protect the state of the SPU from changing underneath us while we are holding it readable. In order to change the state, it is acquired writeable and a context save or restore is executed before downgrading the semaphore to read-only.

From: Mark Nutter <mnutter@us.ibm.com>, Uli Weigand <Ulrich.Weigand@de.ibm.com>
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
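A sketch of the locking pattern described; only the state_sema name comes from the text above, everything else is illustrative:

#include <linux/rwsem.h>

struct example_ctx {
	struct rw_semaphore state_sema;	/* init_rwsem() at creation assumed */
	int state;			/* e.g. runnable vs. saved */
};

static void example_save_context(struct example_ctx *ctx)
{
	/* acquire writeable to change the context state ... */
	down_write(&ctx->state_sema);
	/* ... perform the context save here, then record the new state */
	ctx->state = 1;	/* "saved"; the value is illustrative */

	/* ... and downgrade to read-only so other readers can proceed
	 * while we keep using the saved context */
	downgrade_write(&ctx->state_sema);

	up_read(&ctx->state_sema);
}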
-
By Mark Nutter
Add some infrastructure for saving and restoring the context of an SPE. This patch creates a new structure that can hold the whole state of a physical SPE in memory. It also contains code that avoids races during the context switch, as well as the binary code that is loaded to the SPU in order to access its registers. The actual PPE- and SPE-side context switch code are two separate patches.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Arnd Bergmann
This is the current version of the SPU file system, used for driving SPEs on the Cell Broadband Engine. This release is almost identical to the version for the 2.6.14 kernel posted earlier, which is available as part of the Cell BE Linux distribution from http://www.bsc.es/projects/deepcomputing/linuxoncell/.

The first patch provides all the interfaces for running SPU applications, but does not have any support for debugging SPU tasks or for scheduling. Both of these functionalities are added in the subsequent patches. See Documentation/filesystems/spufs.txt on how to use spufs.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-