- 29 Feb 2008, 2 commits
-
-
Committed by Arnd Bergmann

There is a potential race between flushes of the entire SLB in the MFC and the point where new entries are being established. The problem is that we might put an ESID entry into the MFC SLB when the VSID entry has just been cleared by the global flush. This can be circumvented by holding the register_lock throughout both the flushing and the creation of SLB entries.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
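A minimal sketch of the locking pattern described above, assuming the MFC SLB registers are reached through the usual struct spu_priv2 fields (slb_index_W, slb_vsid_RW, slb_esid_RW, slb_invalidate_all_W); the real helpers in spu_base.c may be structured differently:

```c
#include <linux/spinlock.h>
#include <asm/io.h>
#include <asm/spu.h>

/* Global flush: taken under register_lock so it cannot interleave
 * with a half-written ESID/VSID pair. */
static void spu_flush_slbs_locked(struct spu *spu)
{
	unsigned long flags;

	spin_lock_irqsave(&spu->register_lock, flags);
	out_be64(&spu->priv2->slb_invalidate_all_W, 0UL);
	spin_unlock_irqrestore(&spu->register_lock, flags);
}

/* New entry: VSID and ESID are written under the same lock, so a
 * concurrent flush can no longer clear the VSID in between. */
static void spu_load_slb_locked(struct spu *spu, int slot, u64 esid, u64 vsid)
{
	unsigned long flags;

	spin_lock_irqsave(&spu->register_lock, flags);
	out_be64(&spu->priv2->slb_index_W, slot);
	out_be64(&spu->priv2->slb_vsid_RW, vsid);
	out_be64(&spu->priv2->slb_esid_RW, esid);
	spin_unlock_irqrestore(&spu->register_lock, flags);
}
```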
-
Committed by Arnd Bergmann

When we replace an SLB entry in the MFC after using up all the available entries, there is a short window in which an incorrect entry is marked as valid.

The problem is that the 'valid' bit is stored in the ESID, which is always written after the VSID. Overwriting the VSID first will make the original ESID entry point to the new VSID, which means that any concurrent DMA accessing the old ESID ends up being redirected to the new virtual address. A few cycles later, we write the new ESID and everything is fine again.

That race can be closed by writing a zero entry to the ESID first, which makes sure that the VSID is not accessed until we write the new ESID. Note that we don't actually need to invalidate the SLB entry using the invalidation register, which would also flush any ERAT entries for that segment, because the segment translation does not become invalid but is only removed from the SLB cache.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
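A sketch of the corrected write ordering; the field names follow struct spu_priv2, but treat the helper itself as illustrative rather than the exact code added by the patch:

```c
#include <asm/io.h>
#include <asm/spu.h>

static void spu_replace_slb_slot(struct spu *spu, int slot, u64 esid, u64 vsid)
{
	struct spu_priv2 __iomem *priv2 = spu->priv2;

	out_be64(&priv2->slb_index_W, slot);
	out_be64(&priv2->slb_esid_RW, 0);	/* hide the slot first */
	out_be64(&priv2->slb_vsid_RW, vsid);	/* slot is invalid, safe to change */
	out_be64(&priv2->slb_esid_RW, esid);	/* publish the new entry */
}
```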
-
- 20 Feb 2008, 1 commit
-
-
Committed by Andre Detsch

At present, the __spu_trap_data_map and __spu_trap_data_seg functions exit if spu->flags has the SPU_CONTEXT_SWITCH_ACTIVE bit set. This was resulting in spurious returns from these functions, as they may be legitimately called when we have this bit set. We only use it in these two sanity checks, so this change removes the flag completely. This fixes hangs in the page-fault path of SPE apps.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
- 25 Jan 2008, 1 commit
-
-
Committed by Kay Sievers

All kobjects require a dynamically allocated name now. We no longer need to keep track of whether the name is statically assigned; we can just unconditionally free() all kobject names on cleanup.

Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 21 Dec 2007, 2 commits
-
-
Committed by Jeremy Kerr

Based on original patches from Arnd Bergmann <arnd.bergman@de.ibm.com> and Luke Browning <lukebr@linux.vnet.ibm.com>.

Currently, spu contexts need to be loaded to the SPU in order to take class 0 and class 1 exceptions. This change makes the actual interrupt handlers much simpler (i.e., they just set the exception information in the context save area), and defers the handling code to the spufs_handle_class[01] functions, called from spufs_run_spu. This should improve the concurrency of the spu scheduling, leading to greater SPU utilization when SPUs are overcommitted.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
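As a rough illustration of the deferred model (field and callback names here are approximations, not necessarily those used by the patch), the class 1 hard interrupt handler only records the fault and wakes the controlling thread; spufs_run_spu() then handles it later in process context:

```c
#include <asm/spu.h>

/* Interrupt context: just capture the fault and stop the SPU. */
static int spu_irq_class_1_record(struct spu *spu, u64 dar, u64 dsisr)
{
	spu->dar = dar;			/* fault address */
	spu->dsisr = dsisr;		/* fault reason */
	spu->stop_callback(spu);	/* wakes up spufs_run_spu() */
	return 0;
}
```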
-
Committed by Jeremy Kerr

Add a few #defines for the class 0, 1 and 2 interrupt status bits, and use them instead of magic numbers when we're setting or checking for these interrupts. Also, add a #define for the class 2 mailbox threshold interrupt mask.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
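The gist of the change, with illustrative macro names and values (the real definitions live in the spu headers and may differ):

```c
#include <linux/types.h>

/* Illustrative only: symbolic names replace raw masks like 0x1/0x2. */
#define CLASS1_SEGMENT_FAULT_INTR	0x1ull
#define CLASS1_STORAGE_FAULT_INTR	0x2ull
#define CLASS2_MAILBOX_THRESHOLD_INTR	0x20ull

static inline int class1_storage_fault(u64 stat)
{
	return (stat & CLASS1_STORAGE_FAULT_INTR) != 0;
}
```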
-
- 19 Dec 2007, 6 commits
-
-
Committed by Jeremy Kerr

We're currently getting a warning from not checking the result of sysfs_create_group, which is declared as __must_check. This change introduces appropriate error handling for spu_add_sysdev_attr_group().

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
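A sketch of the added error path; the list and lock names (spu_full_list, spu_full_list_mutex) are assumptions, and the real spu_add_sysdev_attr_group() may be structured differently. The idea is that a failure rolls back the groups already created and propagates the error instead of ignoring it:

```c
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/sysdev.h>
#include <asm/spu.h>

int spu_add_sysdev_attr_group(struct attribute_group *attrs)
{
	struct spu *spu;
	int rc = 0;

	mutex_lock(&spu_full_list_mutex);
	list_for_each_entry(spu, &spu_full_list, full_list) {
		rc = sysfs_create_group(&spu->sysdev.kobj, attrs);
		if (rc)
			break;
	}
	if (rc) {
		/* Undo the groups created before the failure. */
		list_for_each_entry_continue_reverse(spu, &spu_full_list,
						     full_list)
			sysfs_remove_group(&spu->sysdev.kobj, attrs);
	}
	mutex_unlock(&spu_full_list_mutex);
	return rc;
}
```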
-
Committed by Jeremy Kerr

Currently, there is a possibility that the SLBs set up during context switch don't cover the entirety of the necessary lscsa and code regions, if these regions cross a segment boundary. This change checks the start and end of each region, and inserts an SLB entry for each, if unique. We also remove the assumption that spu_save_code and spu_restore_code reside in the same segment, by using the specific code array for save and restore.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
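A sketch of the boundary check for one region, assuming a helper that appends a kernel SLB entry for the segment containing a given address (the helper name is made up for illustration):

```c
#include <asm/mmu.h>	/* ESID_MASK */
#include <asm/spu.h>

/* Hypothetical helper: fills slbs[*nr] for the segment containing ea. */
void add_kernel_slb_entry(unsigned long ea, struct spu_slb *slbs, int *nr);

static void setup_region_slbs(void *start, int size,
			      struct spu_slb *slbs, int *nr)
{
	unsigned long start_ea = (unsigned long)start;
	unsigned long end_ea = start_ea + size - 1;

	add_kernel_slb_entry(start_ea, slbs, nr);
	/* Second entry only when the region crosses a 256MB segment. */
	if ((end_ea & ESID_MASK) != (start_ea & ESID_MASK))
		add_kernel_slb_entry(end_ea, slbs, nr);
}
```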
-
Committed by Jeremy Kerr

Add a function spu_64k_pages_available(), so that we can abstract the explicit use of mmu_psize_defs() in lscsa_alloc.c.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
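A minimal sketch of what such a helper could look like, assuming it simply reports whether the MMU tables define a 64K page size (the in-tree implementation may differ in detail):

```c
#include <asm/mmu.h>

int spu_64k_pages_available(void)
{
	return mmu_psize_defs[MMU_PAGE_64K].shift != 0;
}
```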
-
Committed by Jeremy Kerr

Now that we have a helper function to set up an SPU SLB, use it for __spu_trap_data_seg.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Jeremy Kerr

Currently, the SPU context switch code (spufs/switch.c) sets up the SPU's SLBs directly, which requires some low-level mm code. This change moves the kernel SLB setup to spu_base.c, by exposing a function spu_setup_kernel_slbs() to do this setup. This allows us to remove the low-level mm code from switch.c, making it possible to later move switch.c to the spufs module.

Also, add a struct spu_slb for the cases where we need to deal with SLB entries.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
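The shape of the new interface, as a sketch; the exact signature is an assumption based on the description above:

```c
#include <linux/types.h>

struct spu;
struct spu_lscsa;

/* One SLB entry, as written into the SPU's MFC SLB. */
struct spu_slb {
	u64 esid;
	u64 vsid;
};

/* Maps the local store save area and the save/restore code for the
 * kernel, so switch.c no longer needs low-level mm knowledge. */
void spu_setup_kernel_slbs(struct spu *spu, struct spu_lscsa *lscsa,
			   void *code, int code_size);
```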
-
Committed by Jeremy Kerr

Export force_sig_info to allow signals to be sent from a modular spufs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 12 Oct 2007, 1 commit
-
-
Committed by Paul Mackerras

This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T). We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree.

We don't currently use 1TB segments for user addresses below 1TB, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here.

Parts of this patch were originally written by Ben Herrenschmidt.

Signed-off-by: Paul Mackerras <paulus@samba.org>
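Detection works roughly as sketched below; the property is assumed to hold a list of supported segment-size encodings, and the flag name here is hypothetical:

```c
#include <linux/of.h>

static bool cpu_supports_1tb_segments;	/* hypothetical feature flag */

static void __init check_1tb_segments(struct device_node *cpu)
{
	const u32 *prop;
	int len, i;

	prop = of_get_property(cpu, "ibm,processor-segment-sizes", &len);
	if (!prop)
		return;
	for (i = 0; i < len / 4; i++)
		if (prop[i] == 0x28)	/* encoding for 2^40 = 1TB segments */
			cpu_supports_1tb_segments = true;
}
```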
-
- 19 Sep 2007, 1 commit
-
-
Committed by Sebastian Siewior

There are a few symbols used only in one file within spufs; this change makes them static where suitable.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 11 Sep 2007, 1 commit
-
-
Committed by Masato Noguchi

The Cell BE Architecture spec states that the SPU MFC Class 0 interrupt is edge-triggered. The current spu interrupt handler assumes this behavior and does not clear the interrupt status.

The PS3 hypervisor virtualizes all SPU interrupts as level, and on return from the interrupt handler the hypervisor will deliver a new virtual interrupt for any unmasked interrupts for which the status has not been cleared. This fix clears the interrupt status in the interrupt handler.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
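A sketch of the fix using the priv1 accessor style (treat the exact calls and fields as assumptions): the handler reads the pending class 0 status and writes it back to the status register, which satisfies both the edge-triggered hardware and the level-style PS3 virtual interrupt.

```c
#include <linux/interrupt.h>
#include <asm/spu.h>
#include <asm/spu_priv1.h>

static irqreturn_t spu_irq_class_0_sketch(int irq, void *data)
{
	struct spu *spu = data;
	unsigned long stat;

	stat = spu_int_stat_get(spu, 0) & spu_int_mask_get(spu, 0);
	spu->class_0_pending |= stat;

	/* The fix: acknowledge the status so the PS3 hypervisor does
	 * not immediately re-deliver the same virtual interrupt. */
	spu_int_stat_clear(spu, 0, stat);

	spu->stop_callback(spu);
	return IRQ_HANDLED;
}
```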
-
- 10 Aug 2007, 1 commit
-
-
Committed by Andre Detsch

This patch moves the affinity initialization code from spu_base.c to a new spu_management_of_ops function (init_affinity), which is empty in the case of PS3. This fixes a linking problem that was happening when compiling for PS3. Also, some small code style changes were made.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 21 Jul 2007, 9 commits
-
-
Committed by Christoph Hellwig

This sorts out the various lists and related locks in the spu code. In detail:

- the per-node free_spus and active_list are gone. Instead struct spu gained an alloc_state member telling whether the spu is free or not
- the per-node spus array is now locked by a per-node mutex, which takes over from the global spu_lock and the per-node active_mutex
- the spu_alloc* and spu_free functions are gone, as the state change is now done inline in the spufs code. This allows some more sharing of code for the affinity vs normal case and more efficient locking
- some little refactoring in the affinity code for this locking scheme

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

Sort out the locking mess in spu_base and document the current rules. As an added benefit, spu_alloc* and spu_free don't block anymore.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

This patch links spus according to their physical position, using information provided by the firmware through a special "vicinity" device-tree property. This property is present in the current version of the Malta firmware.

Example of vicinity properties for a node in Malta:

	Node		Vicinity property contains phandles of:
	spe@0		[ spe@100000, mic-tm@50a000 ]
	spe@100000	[ spe@0, spe@200000 ]
	spe@200000	[ spe@100000, spe@300000 ]
	spe@300000	[ spe@200000, bif0@512000 ]
	spe@80000	[ spe@180000, mic-tm@50a000 ]
	spe@180000	[ spe@80000, spe@280000 ]
	spe@280000	[ spe@180000, spe@380000 ]
	spe@380000	[ spe@280000, bif0@512000 ]

Only spe@* nodes have a vicinity property (e.g., bif0@512000 and mic-tm@50a000 do not have it).

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

This patch makes the scheduler honor affinity information for each context being scheduled. If the context has no affinity information, behaviour is unchanged. If there is affinity information, the context is scheduled to run on the exact spu recommended by the affinity placement algorithm.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

This patch allows the use of spu affinity on QS20, whose original firmware does not provide affinity information. This is done through two hardcoded arrays, and by reading the reg property from each spu.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

This patch adds affinity data to each spu instance. A doubly linked list is created, meant to connect the spus in the physical order they are placed in the BE. SPUs near to memory should be marked as having memory affinity. Adjustments of the fields according to FW properties are done in separate patches, one for CPBW, one for Malta (the patch for Malta is under testing).

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

Addition of a spufs-global "cbe_info" array. Each entry contains information about one Cell/B.E. node, namely:

* list of spus (both free and busy spus are in this list);
* list of free spus (replacing the static spu_list from spu_base.c);
* number of spus;
* number of reserved (non-schedulable) spus.

The SPE affinity implementation actually requires only access to one spu per BE node (since it implements its own pointer to walk through the other spus of the ring) and the number of schedulable spus (n_spus - non_sched_spus). However, having this more general structure can be useful for other functionality, concentrating per-cbe statistics / data.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
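A sketch of the per-node bookkeeping structure; the field names mirror the list above, and the in-tree struct cbe_spu_info may differ in detail:

```c
#include <linux/list.h>
#include <linux/nodemask.h>	/* MAX_NUMNODES */

struct cbe_spu_info_sketch {
	struct list_head spus;		/* every spu on this BE node */
	struct list_head free_spus;	/* spus not currently allocated */
	int n_spus;			/* total spus on the node */
	int nr_spus_non_sched;		/* reserved, not schedulable */
};

struct cbe_spu_info_sketch cbe_spu_info[MAX_NUMNODES];
```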
-
Committed by Andre Detsch

This patch exports per-context statistics in spufs, as well as spu statistics in sysfs. It was formed by merging:

"spufs: add spu stats in sysfs" From: Christoph Hellwig
"spufs: add stat file to spufs" From: Christoph Hellwig
"spufs: fix libassist accounting" From: Jeremy Kerr
"spusched: fix spu utilization statistics" From: Luke Browning

And some adjustments by myself, after suggestions on cbe-oss-dev. Having separate patches was making the review process harder than it should be, as we ended up integrating spu and ctx statistics accounting much more than in the first implementation.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Andre Detsch

This patch adds support for investigating spu information after a kernel crash event, through the kdump vmcore file. The implementation is based on xmon code, but the new functionality was kept independent from xmon.

Signed-off-by: Lucio Jose Herculano Correia <luciojhc@br.ibm.com>
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
- 18 Jul 2007, 1 commit
-
-
Committed by Geert Uytterhoeven

Let spu_management_ops.enumerate_spus() return the number of found SPEs and use that information to draw some little helper penguin logos.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: James Simmons <jsimmons@infradead.org>
Cc: "Antonino A. Daplas" <adaplas@pol.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 Jul 2007, 2 commits
-
-
Committed by Christoph Hellwig

Export spu statistics in sysfs.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Christoph Hellwig

Export per-context statistics in spufs.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 28 Jun 2007, 1 commit
-
-
Committed by Geoff Levand

Add a shutdown method to spu_sysdev_class to allow proper spu resource cleanup on system shutdown. This is needed to support kexec on the PS3 platform.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 09 May 2007, 1 commit
-
-
Committed by Benjamin Herrenschmidt

The basic issue is to be able to do what hugetlbfs does but with different page sizes for some other special filesystems; more specifically, my needs are:

- Huge pages
- SPE local store mappings using 64K pages on a 4K base page size kernel on Cell
- Some special 4K segments in 64K-page kernels for mapping a dodgy type of powerpc-specific infiniband hardware that requires 4K MMU mappings for various reasons I won't explain here.

The main issues are:

- To maintain/keep track of the page size per "segment" (as we can only have one page size per segment on powerpc, which are 256MB divisions of the address space).
- To make sure special mappings stay within their allotted "segments" (including MAP_FIXED crap)
- To make sure everybody else doesn't mmap/brk/grow_stack into a "segment" that is used for a special mapping

Some of the necessary mechanisms to handle that were present in the hugetlbfs code, but mostly in ways not suitable for anything else. The patch relies on some changes to the generic get_unmapped_area() that just got merged. It still hijacks hugetlb callbacks here or there as the generic code hasn't been entirely cleaned up yet, but that shouldn't be a problem.

So what is a slice? Well, I re-used the mechanism used formerly by our hugetlbfs implementation which divides the address space into "meta-segments" which I called "slices". The division is done using 256MB slices below 4G, and 1T slices above. Thus the address space is divided currently into 16 "low" slices and 16 "high" slices. (Special case: high slice 0 is the area between 4G and 1T.)

Doing so simplifies significantly the tracking of segments and avoids having to keep track of all the 256MB segments in the address space. While I used the "concepts" of hugetlbfs, I mostly re-implemented everything in a more generic way and "ported" hugetlbfs to it.

Slices can have an associated page size, which is encoded in the mmu context and used by the SLB miss handler to set the segment sizes. The hash code currently doesn't care, it has a specific check for hugepages, though I might add a mechanism to provide per-slice hash mapping functions in the future.

The slice code provides a pair of "generic" get_unmapped_area() (bottomup and topdown) functions that should work with any slice size. There is some trickiness here, so I would appreciate people having a look at the implementation of these and letting me know if I got something wrong.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
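For a feel of the split, here is a small sketch of how an address maps to a slice index under the 256MB/1TB scheme described above (constants are named for illustration; the real slice code tracks more state, such as a per-slice page-size array in the mm context):

```c
#define SLICE_LOW_SHIFT		28		/* 256MB low slices */
#define SLICE_HIGH_SHIFT	40		/* 1TB high slices */
#define SLICE_LOW_TOP		0x100000000ul	/* 4GB boundary */

/* Returns a low-slice index (0..15) for addresses below 4GB, or a
 * high-slice index above (high slice 0 covers the 4GB-1TB range). */
static inline unsigned int addr_to_slice_index(unsigned long addr, int *high)
{
	if (addr < SLICE_LOW_TOP) {
		*high = 0;
		return addr >> SLICE_LOW_SHIFT;
	}
	*high = 1;
	return addr >> SLICE_HIGH_SHIFT;
}
```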
-
- 30 Apr 2007, 1 commit
-
-
Committed by Thomas Gleixner

Use DEFINE_SPINLOCK instead of initializing spinlocks to SPIN_LOCK_UNLOCKED, since DEFINE_SPINLOCK is better for lockdep.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
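For example, a static lock would now be declared as below (the lock name here is just illustrative). SPIN_LOCK_UNLOCKED gives every lock initialized that way the same static lock class, which defeats lockdep's per-lock tracking, whereas DEFINE_SPINLOCK gives each lock its own key:

```c
#include <linux/spinlock.h>

/* Old style, problematic for lockdep:
 *	static spinlock_t spu_lock = SPIN_LOCK_UNLOCKED;
 * New style: */
static DEFINE_SPINLOCK(spu_lock);
```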
-
- 24 Apr 2007, 4 commits
-
-
Committed by Jeremy Kerr

This change fixes the case where spu_base and spufs are initialised on a system with no SPEs: unconditionally create the spu_lists so spu_alloc doesn't explode, and check for spu_management ops before starting spufs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>

 arch/powerpc/platforms/cell/spu_base.c    | 7 ++++---
 arch/powerpc/platforms/cell/spufs/inode.c | 5 +++++
 2 files changed, 9 insertions(+), 3 deletions(-)
-
Committed by Christoph Hellwig

spu_base.c is always built into the kernel image, so there is no need for a cleanup function. And some of the things it does are in the way of my following patches, so I'd rather get rid of it ASAP.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

Until now, we have always entered the spu page fault handler with a mutex for the spu context held. This has multiple bad side-effects:

- it becomes impossible to suspend the context during page faults
- if an spu program attempts to access its own mmio areas through DMA, we get an immediate livelock when the nopage function tries to acquire the same mutex

This patch makes the page fault logic operate on a struct spu_context instead of a struct spu, and moves it from spu_base.c to a new file fault.c inside of spufs. We now also need to copy the dar and dsisr contents of the last fault into the saved context, to have them accessible in case we schedule out the context before activating the page fault handler.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
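A sketch of the hand-off described above (field names approximate): when the context is scheduled out with a fault pending, the last dar/dsisr are copied into the saved state so spufs' fault.c can still resolve the fault after the physical spu has been given away.

```c
#include <asm/spu.h>
#include <asm/spu_csa.h>

/* Called on the context-save path, while the spu is still ours. */
static void save_pending_fault(struct spu_state *csa, struct spu *spu)
{
	csa->dar = spu->dar;
	csa->dsisr = spu->dsisr;
}
```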
-
Committed by Christoph Hellwig

There is no reason to execute spu_init_channels under spu_mutex after the spu has been taken off the freelist; it's ours.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
- 10 Mar 2007, 1 commit
-
-
Committed by Benjamin Herrenschmidt

The SPU code doesn't properly invalidate SPUs' SLBs when necessary, for example when changing a segment size from the hugetlbfs code. In addition, it saves and restores the SLB content on context switches, which makes it harder to properly handle those invalidations.

This patch removes the saving & restoring for now; something more efficient might be found later on. It also adds a spu_flush_all_slbs(mm) that can be used by the core mm code to flush the SLBs of all SPEs that are running a given mm at the time of the flush. In order to do that, it adds a spinlock to the list of all SPEs and moves some bits & pieces from spufs to spu_base.c.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
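A sketch of spu_flush_all_slbs() along the lines described (the list and lock names are assumptions): walk every spu in the system and invalidate its SLB if it is currently running the given mm.

```c
#include <linux/list.h>
#include <linux/spinlock.h>
#include <asm/spu.h>

void spu_flush_all_slbs(struct mm_struct *mm)
{
	struct spu *spu;
	unsigned long flags;

	spin_lock_irqsave(&spu_full_list_lock, flags);
	list_for_each_entry(spu, &spu_full_list, full_list) {
		if (spu->mm == mm)
			spu_invalidate_slbs(spu);
	}
	spin_unlock_irqrestore(&spu_full_list_lock, flags);
}
```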
-
- 24 Jan 2007, 1 commit
-
-
Committed by Ishizaki Kou

spu->register_lock should be held before accessing registers.

Signed-off-by: Kou Ishizaki <kou.ishizaki@toshiba.co.jp>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 04 Dec 2006, 3 commits
-
-
Committed by Stephen Rothwell

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Geoff Levand

This adds a platform-specific spu management abstraction and the corresponding routines to support the IBM Cell Blade. It also removes the hypervisor-only resources that were included in struct spu.

Three new platform-specific routines are introduced: spu_enumerate_spus(), spu_create_spu() and spu_destroy_spu(). The underlying design uses a new type, struct spu_management_ops, to hold function pointers that the platform setup code is expected to initialize to instances appropriate to that platform.

For the IBM Cell Blade support, I put the hypervisor-only resources that were in struct spu into a platform-specific data structure, struct spu_pdata.

Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
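The ops table looks roughly like this (member names follow the routines listed above; treat the exact signatures as an approximation). Each platform fills in its own table, and spu_base only ever calls through these pointers:

```c
#include <asm/spu.h>

struct spu_management_ops_sketch {
	int (*enumerate_spus)(int (*fn)(void *data));	/* walk device tree / HV */
	int (*create_spu)(struct spu *spu, void *data);	/* platform setup */
	int (*destroy_spu)(struct spu *spu);		/* platform teardown */
};
```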
-
Committed by Arnd Bergmann

When we attempt an MFC DMA to an unmapped address, the event returned from spu_run should be SPE_EVENT_SPE_DATA_STORAGE, not SPE_EVENT_INVALID_DMA.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-