- 05 December 2014, 1 commit
-
-
Committed by Aneesh Kumar K.V

updatepp can get called for a no-HPTE fault when we find from the Linux page table that the translation was hashed before. In that case we are sure that there is no existing translation, hence we can avoid doing tlbie. We could possibly race with a parallel fault filling the TLB, but that should be ok because updatepp is only ever relaxing permissions. We also look at the Linux PTE permission bits when filling hash PTE permission bits, and we hold the Linux PTE busy bits while inserting/updating a hash PTE entry, hence a parallel update of the Linux PTE is not possible. On the other hand, mprotect involves ptep_modify_prot_start, which causes an HPTE invalidate and not an updatepp.

Performance numbers: we use random_access_bench written by Anton. Kernel with THP disabled and a smaller hash page table size:

    86.60%  random_access_b  [kernel.kallsyms]    [k] .native_hpte_updatepp
     2.10%  random_access_b  random_access_bench  [.] doit
     1.99%  random_access_b  [kernel.kallsyms]    [k] .do_raw_spin_lock
     1.85%  random_access_b  [kernel.kallsyms]    [k] .native_hpte_insert
     1.26%  random_access_b  [kernel.kallsyms]    [k] .native_flush_hash_range
     1.18%  random_access_b  [kernel.kallsyms]    [k] .__delay
     0.69%  random_access_b  [kernel.kallsyms]    [k] .native_hpte_remove
     0.37%  random_access_b  [kernel.kallsyms]    [k] .clear_user_page
     0.34%  random_access_b  [kernel.kallsyms]    [k] .__hash_page_64K
     0.32%  random_access_b  [kernel.kallsyms]    [k] fast_exception_return
     0.30%  random_access_b  [kernel.kallsyms]    [k] .hash_page_mm

With the fix:

    27.54%  random_access_b  random_access_bench  [.] doit
    22.90%  random_access_b  [kernel.kallsyms]    [k] .native_hpte_insert
     5.76%  random_access_b  [kernel.kallsyms]    [k] .native_hpte_remove
     5.20%  random_access_b  [kernel.kallsyms]    [k] fast_exception_return
     5.12%  random_access_b  [kernel.kallsyms]    [k] .__hash_page_64K
     4.80%  random_access_b  [kernel.kallsyms]    [k] .hash_page_mm
     3.31%  random_access_b  [kernel.kallsyms]    [k] data_access_common
     1.84%  random_access_b  [kernel.kallsyms]    [k] .trace_hardirqs_on_caller

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
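The idea in outline — a minimal sketch only; HPTE_NOHPTE_UPDATE and update_hpte_permissions are assumed names for illustration, not necessarily the upstream interface:

    /* Sketch: skip the broadcast tlbie when the fault path knows the
     * translation was never hashed, so no stale TLB entry can exist. */
    static long hpte_updatepp_sketch(unsigned long slot, unsigned long newpp,
                                     unsigned long vpn, int psize, int ssize,
                                     unsigned long flags)
    {
            update_hpte_permissions(slot, newpp);   /* assumed helper */

            /* A racing fault may fill the TLB meanwhile, but updatepp
             * only ever relaxes permissions, so that is harmless. */
            if (!(flags & HPTE_NOHPTE_UPDATE))      /* assumed flag */
                    tlbie(vpn, psize, psize, ssize, 0);
            return 0;
    }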
-
- 08 October 2014, 1 commit
-
-
Committed by Ian Munsie

__spu_trap_data_seg() currently contains code to determine the VSID and ESID required for a particular EA and mm struct. This code is generically useful for other co-processors. This moves the code out of the cell platform so it can be used by other powerpc code. It also adds 1TB segment handling, which Cell didn't support. The new function is called copro_calculate_slb(). This also moves the internal struct spu_slb to a generic struct copro_slb, which is now used in the Cell and copro code. We use this new struct instead of passing around esid and vsid parameters.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
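The resulting generic interface looks roughly like this (a sketch based on the description above; the exact layout is assumed):

    /* Generic replacement for the cell-private struct spu_slb. */
    struct copro_slb {
            u64 esid;
            u64 vsid;
    };

    /* Compute the SLB entry (ESID/VSID pair) needed to access an EA in
     * the given mm, including 1TB segment handling; callers pass the
     * struct around instead of loose esid/vsid parameters. */
    int copro_calculate_slb(struct mm_struct *mm, u64 ea,
                            struct copro_slb *slb);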
-
- 24 July 2014, 1 commit
-
-
Committed by Thomas Gleixner

Replace the ever recurring:

    ktime_get_ts(&ts);
    ns = timespec_to_ns(&ts);

with:

    ns = ktime_get_ns();

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: John Stultz <john.stultz@linaro.org>
-
- 06 May 2013, 1 commit
-
-
Committed by Benjamin Herrenschmidt

We are registering the attribute with permission 0644, but it doesn't have a store callback, which causes WARN_ONs during boot. Fix the permission.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 22 December 2011, 1 commit
-
-
Committed by Kay Sievers

This moves the 'cpu sysdev_class' over to a regular 'cpu' subsystem and converts the devices to regular devices. The sysdev drivers are implemented as subsystem interfaces now. After all sysdev classes are ported to regular driver core entities, the sysdev implementation will be entirely removed from the kernel. Userspace relies on events and generic sysfs subsystem infrastructure from sysdev devices, which are made available with this conversion.

Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@amd64.org>
Cc: Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
Cc: Len Brown <lenb@kernel.org>
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 08 November 2011, 1 commit
-
-
Committed by Yong Zhang

Since commit [e58aa3d2: genirq: Run irq handlers with interrupts disabled], we run all interrupt handlers with interrupts disabled, and we even check and yell when an interrupt handler returns with interrupts enabled (see commit [b738a50a: genirq: Warn when handler enables interrupts]). So now this flag is a NOOP and can be removed.

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
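The flag in question is, judging from the genirq commits cited, IRQF_DISABLED, and the conversion is mechanical (illustrative sketch; irq, handler and the name string are placeholders):

    /* Before: asked for the handler to run with IRQs disabled. */
    ret = request_irq(irq, handler, IRQF_DISABLED, "spu_class_0", spu);

    /* After: handlers always run with IRQs disabled, so pass 0. */
    ret = request_irq(irq, handler, 0, "spu_class_0", spu);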
-
- 12 May 2011, 1 commit
-
-
Committed by Rafael J. Wysocki

Make some of the PowerPC architecture code use struct syscore_ops objects for power management instead of sysdev classes and sysdevs. This simplifies the code and reduces the kernel's memory footprint. It is also necessary for removing sysdevs from the kernel entirely in the future.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
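For reference, the syscore interface is just a few global callbacks registered once at init time (a sketch; the cell_* names are placeholders):

    #include <linux/syscore_ops.h>

    static int cell_pm_suspend(void)
    {
            /* quiesce the hardware; 0 means success */
            return 0;
    }

    static void cell_pm_resume(void)
    {
            /* restore the state saved in suspend */
    }

    static struct syscore_ops cell_syscore_ops = {
            .suspend = cell_pm_suspend,
            .resume  = cell_pm_resume,
    };

    static int __init cell_pm_init(void)
    {
            /* No sysdev class or device objects needed. */
            register_syscore_ops(&cell_syscore_ops);
            return 0;
    }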
-
- 21 January 2011, 1 commit
-
-
Committed by Anton Blanchard

Use the crash handler hooks to run the SPU stop code, just like we do for the ehea and cell RAS code. While I'm here I noticed "CPUSs reliabally", so fix the spelling mistakes.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 17 June 2009, 1 commit
-
-
Committed by Geert Uytterhoeven

Now we have __initconst, we can finally move the external declarations for the various Linux logo structures to <linux/linux_logo.h>. James' ack dates back to the previous submission (way too long ago), when the logos were still __initdata, which caused failures on some platforms with some toolchain versions.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: James Simmons <jsimmons@infradead.org>
Cc: Krzysztof Helt <krzysztof.h1@poczta.fm>
Cc: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 March 2009, 1 commit
-
-
Committed by Rusty Russell

Makes the code future-proof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
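The pointer-based accessors look like this (illustrative; flush_cpu is a placeholder callee):

    /* Old style: pokes at the struct member directly, and breaks if
     * cpu_vm_mask ever changes representation. */
    if (cpu_isset(cpu, mm->cpu_vm_mask))
            flush_cpu(cpu);

    /* New style: mm_cpumask() returns a struct cpumask pointer and
     * hides the representation. */
    if (cpumask_test_cpu(cpu, mm_cpumask(mm)))
            flush_cpu(cpu);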
-
- 13 January 2009, 1 commit
-
-
Committed by Ingo Molnar

Convert arch/powerpc/ over to long long based u64:

    -#ifdef __powerpc64__
    -# include <asm-generic/int-l64.h>
    -#else
    -# include <asm-generic/int-ll64.h>
    -#endif
    +#include <asm-generic/int-ll64.h>

This will avoid recurring spurious warnings in core kernel code that come up when people test on their own hardware (i.e. x86 in ~98% of the cases). This is what x86 uses, and it generally helps keep 64-bit code 32-bit clean too. [Adjusted to not impact user mode (from paulus) - sfr]

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
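The practical payoff is in printk format strings: with u64 always unsigned long long, %llu/%llx match on both 32- and 64-bit builds without per-arch casts (illustrative):

    u64 addr = 0xdeadbeef;

    /* Correct on every build once u64 == unsigned long long; under the
     * old int-l64.h scheme this line warned on 64-bit powerpc, where
     * u64 was plain unsigned long. */
    printk(KERN_DEBUG "hashing ea=0x%llx\n", addr);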
-
- 22 July 2008, 1 commit
-
-
Committed by Andi Kleen

This allows attributes to be generated dynamically, and show/store functions to be shared between attributes. Right now most attributes are generated by special macros with lots of duplicated code. With the attribute passed in, it's instead possible to attach some data to the attribute and then use that in shared low-level functions to do different things. I need this for the dynamically generated bank attributes in the x86 machine check code, but it'll allow some further cleanups. I converted all users in tree to the new show/store prototype. It's a single huge patch to avoid unbisectable sections. Runtime tested: x86-32, x86-64. Compiled only: ia64, powerpc. Not compile tested/only grep converted: sh, arm, avr32.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
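The prototype change, roughly (a sketch inferred from the description above):

    /* Old: ssize_t show(struct sys_device *dev, char *buf);
     * There was no way to tell which attribute triggered the call. */

    /* New: the attribute is passed through, so one function can serve
     * many dynamically generated attributes, e.g. by embedding
     * sysdev_attribute in a larger struct and using container_of(). */
    ssize_t show(struct sys_device *dev, struct sysdev_attribute *attr,
                 char *buf);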
-
- 16 June 2008, 2 commits
-
-
Committed by Luke Browning

Time slicing can occur at the same time as spu exception handling, resulting in the wakeup of the wrong thread. This change uses the spu's register_lock to enforce synchronization between bind/unbind and spu exception handling, so that they are mutually exclusive.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
Committed by Luke Browning

According to the CBEA, the SPU dsisr is not updated for class 0 exceptions. spu_stopped() is testing the dsisr that was passed to it from the class 0 exception handler, so we return a false positive here. This patch cleans up the interrupt handler and the erroneous tests in spu_stopped(). It also removes the fields from the csa, since they are not needed to process class 0 events.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
- 05 May 2008, 2 commits
-
-
Committed by Luke Browning

Currently, page fault handlers don't issue an MFC restart if the context switch pending flag is set, which can leave us with a hanging DMA after a context restore. This patch introduces a fault pending flag that is set by the fault handler and read by the context switch code, so that the latter can add the restart bit at the right spot, after it has successfully saved the state of the MFC control register.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
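In outline, with entirely hypothetical names — a sketch of the handshake, not the upstream code:

    /* Fault handler: a restart is owed, but the switch code is busy
     * saving state, so just record the fact. */
    set_bit(SPU_FAULT_PENDING, &ctx->flags);    /* hypothetical flag */

    /* Context save path, only after MFC_CNTL has been safely saved: */
    if (test_and_clear_bit(SPU_FAULT_PENDING, &ctx->flags))
            csa->priv2.mfc_control_RW |= MFC_CNTL_RESTART_DMA_COMMAND;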
-
Committed by Luke Browning

SPU class 0 & 1 exceptions may occur in parallel, so we may end up overwriting csa.dsisr. This change adds dedicated fields for each class to the spu and the spu context, so that fault data is not overwritten.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
- 01 April 2008, 1 commit
-
-
Committed by Harvey Harrison

__FUNCTION__ is gcc-specific; use __func__ instead.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
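The substitution is one-for-one (illustrative):

    /* Before: gcc extension */
    pr_debug("%s: invalid access during switch!\n", __FUNCTION__);

    /* After: standard C99 identifier, same output */
    pr_debug("%s: invalid access during switch!\n", __func__);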
-
- 29 February 2008, 2 commits
-
-
Committed by Arnd Bergmann

There is a potential race between flushes of the entire SLB in the MFC and the point where new entries are being established. The problem is that we might put an ESID entry into the MFC SLB when the VSID entry has just been cleared by the global flush. This can be circumvented by holding the register_lock throughout both the flushing and the creation of SLB entries.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
Committed by Arnd Bergmann

When we replace an SLB entry in the MFC after using up all the available entries, there is a short window in which an incorrect entry is marked as valid. The problem is that the 'valid' bit is stored in the ESID, which is always written after the VSID. Overwriting the VSID first will make the original ESID entry point to the new VSID, which means that any concurrent DMA accessing the old ESID ends up being redirected to the new virtual address. A few cycles later, we write the new ESID and everything is fine again. That race can be closed by writing a zero entry to the ESID first, which makes sure that the VSID is not accessed until we write the new ESID. Note that we don't actually need to invalidate the SLB entry using the invalidation register, which would also flush any ERAT entries for that segment, because the segment translation does not become invalid but is only removed from the SLB cache.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
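The safe write ordering, as a sketch (register names as in struct spu_priv2; barriers elided for brevity):

    /* 1. Clear the ESID first: with the valid bit gone, concurrent DMA
     *    can no longer reach this slot's VSID. */
    out_be64(&priv2->slb_index_W, slot);
    out_be64(&priv2->slb_esid_RW, 0);

    /* 2. Now the VSID can be rewritten without redirecting live DMA. */
    out_be64(&priv2->slb_vsid_RW, new_vsid);

    /* 3. Writing the new ESID (valid bit set) publishes the entry. */
    out_be64(&priv2->slb_esid_RW, new_esid);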
-
- 20 February 2008, 1 commit
-
-
Committed by Andre Detsch

At present, the __spu_trap_data_map and __spu_trap_data_seg functions exit if spu->flags has SPU_CONTEXT_SWITCH_ACTIVE set. This was resulting in spurious returns from these functions, as they may be legitimately called while we have this bit set. We only use it in these two sanity checks, so this change removes the flag completely. This fixes hangs in the page-fault path of SPE apps.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
- 25 January 2008, 1 commit
-
-
Committed by Kay Sievers

All kobjects require a dynamically allocated name now. We no longer need to keep track of whether the name was statically assigned; we can just unconditionally free() all kobject names on cleanup.

Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 21 December 2007, 2 commits
-
-
Committed by Jeremy Kerr

Based on original patches from Arnd Bergmann <arnd.bergman@de.ibm.com> and Luke Browning <lukebr@linux.vnet.ibm.com>. Currently, spu contexts need to be loaded to the SPU in order to take class 0 and class 1 exceptions. This change makes the actual interrupt handlers much simpler (i.e., they just set the exception information in the context save area), and defers the handling code to the spufs_handle_class[01] functions, called from spufs_run_spu. This should improve the concurrency of spu scheduling, leading to greater SPU utilization when SPUs are overcommitted.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Jeremy Kerr

Add a few #defines for the class 0, 1 and 2 interrupt status bits, and use them instead of magic numbers when we're setting or checking for these interrupts. Also, add a #define for the class 2 mailbox threshold interrupt mask.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
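The class 2 bits, for instance, end up looking something like this (names and values are a sketch matching the CBEA bit layout, not necessarily the exact upstream identifiers):

    #define CLASS2_MAILBOX_INTR                     0x1
    #define CLASS2_SPU_STOP_INTR                    0x2
    #define CLASS2_SPU_HALT_INTR                    0x4
    #define CLASS2_SPU_DMA_TAG_GROUP_COMPLETE_INTR  0x8
    #define CLASS2_MAILBOX_THRESHOLD_INTR           0x10

    /* ...so a check reads as intent, not magic: */
    if (stat & CLASS2_MAILBOX_THRESHOLD_INTR)
            spu->ibox_callback(spu);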
-
- 19 December 2007, 6 commits
-
-
Committed by Jeremy Kerr

We're currently getting a warning from not checking the result of sysfs_create_group, which is declared __must_check. This change introduces appropriate error handling for spu_add_sysdev_attr_group().

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Jeremy Kerr

Currently, there is a possibility that the SLBs set up during context switch don't cover the entirety of the necessary lscsa and code regions, if these regions cross a segment boundary. This change checks the start and end of each region, and inserts an SLB entry for each, if unique. We also remove the assumption that spu_save_code and spu_restore_code reside in the same segment, by using the specific code array for save and restore.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
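The check amounts to taking the ESID of each region's first and last byte and adding an entry per ESID not already present (a sketch; setup_one_slb is a placeholder helper):

    static void add_region_slbs(struct spu_slb *slbs, int *nr_slbs,
                                void *start, void *end)
    {
            u64 esids[2] = {
                    (u64)start & ESID_MASK,      /* segment of first byte */
                    ((u64)end - 1) & ESID_MASK,  /* segment of last byte  */
            };
            int i, j;

            for (i = 0; i < 2; i++) {
                    for (j = 0; j < *nr_slbs; j++)
                            if ((slbs[j].esid & ESID_MASK) == esids[i])
                                    break;
                    if (j == *nr_slbs)           /* not covered yet */
                            setup_one_slb(&slbs[(*nr_slbs)++], esids[i]);
            }
    }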
-
Committed by Jeremy Kerr

Add a function spu_64k_pages_available(), so that we can abstract the explicit use of mmu_psize_defs in lscsa_alloc.c.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Jeremy Kerr

Now that we have a helper function to set up an SPU SLB, use it for __spu_trap_data_seg.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Jeremy Kerr

Currently, the SPU context switch code (spufs/switch.c) sets up the SPU's SLBs directly, which requires some low-level mm stuff. This change moves the kernel SLB setup to spu_base.c, by exposing a function spu_setup_kernel_slbs() to do this setup. This allows us to remove the low-level mm code from switch.c, making it possible to later move switch.c to the spufs module. Also, add a struct spu_slb for the cases where we need to deal with SLB entries.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Jeremy Kerr

Export force_sig_info to allow signals to be sent from a modular spufs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 12 October 2007, 1 commit
-
-
Committed by Paul Mackerras

This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T). We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree. We don't currently use 1TB segments for user addresses below 1TB, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here. Parts of this patch were originally written by Ben Herrenschmidt.

Signed-off-by: Paul Mackerras <paulus@samba.org>
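The arithmetic behind the two segment sizes (illustrative; SID_SHIFT and SID_SHIFT_1T are the usual powerpc hash MMU constants):

    #define SID_SHIFT      28   /* 256MB segments: ESID = EA >> 28 */
    #define SID_SHIFT_1T   40   /* 1TB segments:   ESID = EA >> 40 */

    /* Per the policy above: 256MB segments below 1TB (so 32-bit
     * processes keep huge pages), 1TB segments at and above 1TB. */
    u64 esid = (ea < (1ULL << SID_SHIFT_1T)) ? (ea >> SID_SHIFT)
                                             : (ea >> SID_SHIFT_1T);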
-
- 19 September 2007, 1 commit
-
-
Committed by Sebastian Siewior

There are a few symbols used only in one file within spufs; this change makes them static where suitable.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 11 September 2007, 1 commit
-
-
Committed by Masato Noguchi

The Cell BE Architecture spec states that the SPU MFC Class 0 interrupt is edge-triggered. The current spu interrupt handler assumes this behavior and does not clear the interrupt status. The PS3 hypervisor virtualizes all SPU interrupts as level-triggered, and on return from the interrupt handler the hypervisor will deliver a new virtual interrupt for any unmasked interrupts for which the status has not been cleared. This fix clears the interrupt status in the interrupt handler.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 10 August 2007, 1 commit
-
-
Committed by Andre Detsch

This patch moves the affinity initialization code from spu_base.c to a new spu_management_of_ops function (init_affinity), which is empty in the case of PS3. This fixes a linking problem that was happening when compiling for PS3. Also, some small code style changes were made.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 21 July 2007, 7 commits
-
-
Committed by Christoph Hellwig

This sorts out the various lists and related locks in the spu code. In detail:

- the per-node free_spus and active_list are gone. Instead struct spu gained an alloc_state member telling whether the spu is free or not
- the per-node spus array is now locked by a per-node mutex, which takes over from the global spu_lock and the per-node active_mutex
- the spu_alloc* and spu_free functions are gone, as the state change is now done inline in the spufs code. This allows some more sharing of code for the affinity vs normal case and more efficient locking
- some little refactoring in the affinity code for this locking scheme

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Christoph Hellwig

Sort out the locking mess in spu_base and document the current rules. As an added benefit, spu_alloc* and spu_free don't block anymore.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

This patch links spus according to their physical position, using information provided by the firmware through a special vicinity device-tree property. This property is present in the current version of the Malta firmware. Example of vicinity properties for a node in Malta:

    Node:        Vicinity property contains phandles of:
    spe@0        [ spe@100000, mic-tm@50a000 ]
    spe@100000   [ spe@0,      spe@200000 ]
    spe@200000   [ spe@100000, spe@300000 ]
    spe@300000   [ spe@200000, bif0@512000 ]
    spe@80000    [ spe@180000, mic-tm@50a000 ]
    spe@180000   [ spe@80000,  spe@280000 ]
    spe@280000   [ spe@180000, spe@380000 ]
    spe@380000   [ spe@280000, bif0@512000 ]

Only the spe@* nodes have a vicinity property (e.g., bif0@512000 and mic-tm@50a000 do not have it).

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

This patch makes the scheduler honor the affinity information of each context being scheduled. If a context has no affinity information, behaviour is unchanged. If there is affinity information, the context is scheduled to run on the exact spu recommended by the affinity placement algorithm.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

This patch allows the use of spu affinity on QS20, whose original firmware does not provide affinity information. This is done through two hardcoded arrays, and by reading the reg property from each spu.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

This patch adds affinity data to each spu instance. A doubly linked list is created, meant to connect the spus in the physical order they are placed in the BE. SPUs near to memory should be marked as having memory affinity. Adjustment of the fields according to FW properties is done in separate patches, one for CPBW and one for Malta (the patch for Malta is under testing).

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
Committed by Arnd Bergmann

Addition of a spufs-global "cbe_info" array. Each entry contains information about one Cell/B.E. node, namely:

- list of spus (both free and busy spus are in this list);
- list of free spus (replacing the static spu_list from spu_base.c);
- number of spus;
- number of reserved (non-schedulable) spus.

The SPE affinity implementation actually requires only access to one spu per BE node (since it implements its own pointer to walk through the other spus of the ring) and the number of schedulable spus (n_spus - non_sched_spus). However, having this more general structure can be useful for other functionality, concentrating per-cbe statistics and data.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
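A sketch of the per-node entry, with field names assumed from the description above:

    struct cbe_spu_info {
            struct list_head spus;        /* every spu on this node     */
            struct list_head free_spus;   /* replaces the old spu_list  */
            int n_spus;                   /* total spus                 */
            int nr_reserved_spus;         /* non-schedulable spus       */
    };

    /* One entry per Cell/B.E. node: */
    extern struct cbe_spu_info cbe_spu_info[MAX_NUMNODES];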
-