- 15 October 2008, 1 commit
-
-
Authored by Benjamin Herrenschmidt
When the powerpc PCI layer is not configured to re-assign everything, it currently fails to detect that a PCI-to-PCI bridge has been left unassigned by the firmware and tries to allocate resources for the default window values in the bridge (0...X) (with the notable exception of a hack we have in there that detects some Apple firmware unassigned bridge resources). This results in resource allocation failures, which are generally fixed up later on, but they cause scary warnings in the logs and we have seen the fixup code fall over in some circumstances (a different issue to fix as well).

This code improves on that by providing a more complete and useful function to intuit that a bridge was left unassigned by the firmware, and thus force a full re-allocation by the PCI code without trying to allocate the existing useless resources first. The algorithm basically considers a window unassigned if it starts at 0 (PCI address) and the corresponding address space enable bit is not set. In addition, for memory space, it also considers such a resource unassigned if the host bridge isn't configured to forward cycles to address 0 (i.e. the resource basically overlaps main memory).

This fixes a range of problems with things like bare-metal support on pSeries machines, or attempts to use a partial firmware PCI setup.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
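A minimal sketch of the heuristic described above, using the standard PCI config accessors; the helper names and the surrounding structure are illustrative assumptions, not the function the patch actually adds:

    #include <linux/pci.h>

    /* Hypothetical helper: decide whether firmware left a bridge window
     * unassigned, following the heuristic described in the commit message. */
    static int bridge_window_looks_unassigned(struct pci_dev *bridge,
                                              struct resource *res)
    {
        u16 cmd;

        if (res->start != 0)
            return 0;   /* a non-zero start means firmware assigned it */

        pci_read_config_word(bridge, PCI_COMMAND, &cmd);

        if (res->flags & IORESOURCE_IO)
            return !(cmd & PCI_COMMAND_IO);

        if (res->flags & IORESOURCE_MEM) {
            /* Unassigned if memory decode is off, or if the host bridge
             * does not forward cycles to PCI address 0 (the window would
             * overlap main memory). The second check is host-specific. */
            return !(cmd & PCI_COMMAND_MEMORY) ||
                   !host_forwards_pci_addr0(bridge);  /* hypothetical helper */
        }

        return 0;
    }

When such a window is detected, its resource can simply be zeroed out so the generic PCI code re-allocates it from scratch instead of first trying the bogus 0-based range.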
-
- 14 October 2008, 4 commits
-
-
The "phys" argument to machine_init() isn't used and isn't likely to ever be, so let's remove it.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Authored by Benjamin Herrenschmidt
After Becky's work we can almost have different DMA offsets between on-chip devices and PCI. Almost, because there's a problem with the non-coherent DMA code, which basically ignores the programmed offset and uses the global one for everything. This fixes it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Authored by Milton Miller
Commit b38fd42f added false dependencies to order the loads of the upper and lower halves of the PTE, but only adjusted whitespace instead of deleting the old load in the I-side handler, letting the hardware see the non-dependent load. This patch removes the extra load.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Authored by John Rigby
The class of the MPC5121 PCI host bridge is PCI_CLASS_BRIDGE_OTHER, while other Freescale host bridges have their class set to PCI_CLASS_PROCESSOR_POWERPC. This patch makes fixup_hide_host_resource_fsl match PCI_CLASS_BRIDGE_OTHER in addition to PCI_CLASS_PROCESSOR_POWERPC.

Signed-off-by: John Rigby <jrigby@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
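The shape of the fix is roughly the following; this is a sketch based on the description above rather than a copy of the actual fixup, and the extra conditions are assumptions:

    static void fixup_hide_host_resource_fsl(struct pci_dev *dev)
    {
        int i, class = dev->class >> 8;

        /* Match both host-bridge classes used by Freescale parts. */
        if ((class == PCI_CLASS_PROCESSOR_POWERPC ||
             class == PCI_CLASS_BRIDGE_OTHER) &&
            (dev->hdr_type == PCI_HEADER_TYPE_NORMAL) &&
            (dev->bus->parent == NULL)) {
            for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
                /* Hide the host bridge's own resources from the PCI core. */
                dev->resource[i].start = 0;
                dev->resource[i].end   = 0;
                dev->resource[i].flags = 0;
            }
        }
    }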
-
- 13 October 2008, 1 commit
-
-
Authored by Milton Miller
The comment in the code was asking "Do we have to do this?", and according to x86 and s390 the answer is no: the scheduler will do it before calling the arch hook.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 10 October 2008, 2 commits
-
-
Authored by Paul Mackerras
Commit 9b09c6d9 ("powerpc: Change the default link address for pSeries zImage kernels") changed the real-base value in the CHRP note added by the addnote program from 12MB to 32MB to give more space for Open Firmware to load the zImage. (The real-base value says where we want OF to position itself in memory.) However, this change was ineffective on most pSeries machines, because the RPA note added by addnote has the "ignore me" flag set to 1. This was intended to tell OF to ignore just the RPA note, but has the side effect of also making OF ignore the CHRP note (at least on most pSeries machines).

To solve this we have to set the "ignore me" flag to 0 in the RPA note. (We can't just omit the RPA note because that is equivalent to having an RPA note with default values, and the default values are not what we want.) However, then we have to make sure the values in the zImage's RPA note match up with the values that the kernel supplies later in prom_init.c with either the ibm,client-architecture-support call or the process-elf-header call in prom_send_capabilities().

So this sets the "ignore me" flag in the RPA note in addnote to 0, and adjusts the RPA note values in addnote.c and in prom_init.c to be consistent with each other and with the values in ibm_architecture_vec. However, since the wrapper is independent of the kernel, this doesn't ensure that the notes will stay consistent. To ensure that, this adds code to addnote.c so that it can extract the kernel's RPA note from the kernel binary and put that in the zImage. To that end, we put the kernel's fake ELF header (which contains the kernel's RPA note) into its own section, and arrange for the wrapper to pull out that section with objcopy and pass it to addnote, which then extracts the RPA note from it and transfers it to the zImage.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Authored by Josh Poimboeuf
The powerpc 32-bit and 64-bit kernel_thread functions don't properly propagate errors returned by the clone syscall. (In the error case, the syscall exit code returns a positive errno in r3 and sets the CR0[SO] bit.) This patch fixes that by negating r3 if CR0[SO] is set after the syscall.

Signed-off-by: Josh Poimboeuf <jpoimboe@us.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 07 October 2008, 3 commits
-
-
Authored by Benjamin Herrenschmidt
When manipulating 64-bit PCI addresses, the code would lose the top 32 bits in a couple of places when shifting a pfn, due to a missing cast from the 32-bit pfn to a 64-bit resource type before the shift. This breaks using newer X servers, for example on 440 machines with the PCI bus above 32 bits.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
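The failure mode is the classic C integer-promotion pitfall; a before/after sketch (the variable names and values are purely illustrative):

    unsigned long pfn = 0x100000;   /* frame that lands above 4GB once shifted */

    /* Broken on 32-bit: the shift is performed in 32 bits, the top bits are
     * lost, and only then is the result widened to 64 bits. */
    resource_size_t bad  = pfn << PAGE_SHIFT;

    /* Fixed: widen first, then shift, so the full 64-bit address survives. */
    resource_size_t good = (resource_size_t)pfn << PAGE_SHIFT;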
-
Authored by Johannes Berg
A bug in my initial 64-bit hibernation code breaks it when using page sizes other than 4K.

Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Authored by Sebastien Dugue
Add a .gitignore in arch/powerpc/kernel to ignore the generated vmlinux.lds.

Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 25 September 2008, 6 commits
-
-
Authored by Becky Bruce
This rearranges a bit of code and adds support for 36-bit physical addressing for configs that use a hashed page table. The 36-bit physical support is not enabled by default on any config - it must be explicitly enabled via the config system. This patch *only* expands the page table code to accommodate large physical addresses on 32-bit systems and enables the PHYS_64BIT config option for 86xx. It does *not* allow you to boot a board with more than about 3.5GB of RAM - for that, SWIOTLB support is also required (and coming soon).

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Authored by Kumar Gala
Introduce a new set of low-level TLB invalidate functions that do not broadcast invalidates on the bus:

- _tlbil_all - invalidate all
- _tlbil_pid - invalidate based on process id (or mm context)
- _tlbil_va - invalidate based on virtual address (ea + pid)

On non-SMP configs _tlbil_all should be functionally equivalent to _tlbia and _tlbil_va should be functionally equivalent to _tlbie. The intent of this change is to handle SMP-based invalidates via IPIs instead of broadcasts, as that mechanism scales better for larger numbers of cores. On e500 (FSL BookE MMU) based cores, move to using MMUCSR for invalidate-all and tlbsx/tlbwe for invalidate-by-virtual-address.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
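Hedged prototypes matching the description above; the exact argument types are an assumption:

    /* Local (non-broadcast) TLB invalidates. On SMP the expectation is that
     * IPIs run these on each core instead of relying on bus broadcasts. */
    extern void _tlbil_all(void);                     /* invalidate everything */
    extern void _tlbil_pid(unsigned int pid);         /* by PID / mm context */
    extern void _tlbil_va(unsigned long address,
                          unsigned int pid);          /* by EA + PID */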
-
Authored by Becky Bruce
We essentially adopt the 64-bit DMA code, with some changes to support 32-bit systems, including HIGHMEM. DMA functions on 32-bit are now invoked via accessor functions which call the correct op for a device based on archdata dma_ops. If there is no archdata dma_ops, this defaults to dma_direct_ops.

In addition, the dma_map/unmap_page functions are added to dma_ops because we can't just fall back on map/unmap_single when HIGHMEM is enabled. In the case of dma_direct_*, we stop using map/unmap_single and just use the page version - this saves a lot of ugly ifdeffing. We leave map/unmap_single in the dma_ops definition, though, because they are needed by the iommu code, which does not implement map/unmap_page. Ideally, going forward, we will completely eliminate map/unmap_single and just have map/unmap_page, if that's workable for 64-bit.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
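The accessor pattern described above looks roughly like this; a sketch only, with the structure and field names taken from the description rather than verified against that kernel:

    static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
    {
        /* Per-device ops (e.g. an IOMMU) win; otherwise fall back to the
         * direct ops, which just apply the device's DMA offset. */
        if (dev && dev->archdata.dma_ops)
            return dev->archdata.dma_ops;
        return &dma_direct_ops;
    }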
-
Authored by Becky Bruce
Use the struct device's numa_node instead; use accessor functions to get/set numa_node.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
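In practice that means using the generic accessors rather than an archdata field, along these lines (a sketch; the surrounding function is hypothetical):

    #include <linux/device.h>

    static void example_set_and_get_node(struct device *dev, int node)
    {
        int assigned;

        /* Store the NUMA node in the generic struct device field... */
        set_dev_node(dev, node);

        /* ...and read it back wherever the DMA/IOMMU code needs it. */
        assigned = dev_to_node(dev);
        (void)assigned;
    }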
-
Authored by Becky Bruce
32-bit platforms are about to start using dma.c; move the iommu dma ops into their own file to make this a bit cleaner.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Authored by Becky Bruce
This is in preparation for the merge of the 32-bit and 64-bit DMA code in arch/powerpc.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 20 September 2008, 1 commit
-
-
Authored by Kumar Gala
We need to create a false data dependency to ensure the loads of the PTE are done in the right order.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 19 September 2008, 1 commit
-
-
Authored by Kumar Gala
Fix the following warnings:

arch/powerpc/kernel/sysfs.c:197:7: warning: "CONFIG_6xx" is not defined
arch/powerpc/kernel/sysfs.c:141: warning: 'run_on_cpu' defined but not used

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 16 September 2008, 9 commits
-
-
Authored by Martin Langer
Some 74xx cores by Freescale use the configuration field instead of the major revision field for their revision number. This corrects the wrong behaviour for those PPC cores, including mine. There is a reference document at Freescale describing the PVR register; this is based on that PDF. You can find the document at:
http://www.freescale.com/files/archives/doc/support_info/PPCPVR.pdf

Signed-off-by: Martin Langer <martin-langer@gmx.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Sebastien Dugue
The radix trees used by interrupt controllers for their irq reverse mapping (currently only the XICS found on pSeries) have a complex locking scheme dating back to before the advent of the lockless radix tree.

This takes advantage of the lockless radix tree and of the fact that the items of the tree are pointers to elements of a static array (irq_map) which can never go away under us, to simplify the locking. Concurrency between readers and writers is handled by the intrinsic properties of the lockless radix tree. Concurrency between writers is handled with a global mutex.

Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Sebastien Dugue
irq_radix_revmap() currently serves two purposes: irq mapping lookup and insertion, which happen in interrupt and process context respectively. Separate the function into its two components, one for lookup only and one for insertion only, and fix the only user of the revmap tree (XICS) to use the new functions.

Also, move the insertion into the radix tree of those irqs that were requested before the tree was initialized to said tree initialization. Mutual exclusion between the tree initialization and readers/writers is handled via a state variable (revmap_trees_allocated), set to 1 when the tree has been initialized and set to 2 after the already requested irqs have been inserted in the tree by the init path. This state is checked before any reader or writer access, just like we used to check for tree.gfp_mask != 0 before.

Finally, now that we're no longer inserting nodes into the radix tree in interrupt context, turn the GFP_ATOMIC allocations into GFP_KERNEL ones.

Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
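A sketch of the split, with lookup kept lock-free and insertion serialized by a mutex; the names follow the description above, but the exact signatures and data-structure layout are assumptions:

    /* Interrupt context: lock-free lookup, relying on the lockless radix
     * tree and on irq_map entries never disappearing under us. */
    unsigned int irq_radix_revmap_lookup(struct irq_host *host,
                                         irq_hw_number_t hwirq)
    {
        struct irq_map_entry *ptr;

        ptr = radix_tree_lookup(&host->revmap_data.tree, hwirq);
        return ptr ? ptr - irq_map : NO_IRQ;
    }

    /* Process context: insertion only, writers serialized by a mutex, so
     * GFP_KERNEL allocations inside the radix tree code are fine. */
    void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
                                 irq_hw_number_t hwirq)
    {
        mutex_lock(&revmap_trees_mutex);
        radix_tree_insert(&host->revmap_data.tree, hwirq, &irq_map[virq]);
        mutex_unlock(&revmap_trees_mutex);
    }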
-
Authored by Christoph Hellwig
sys32_pause is a useless copy of the generic sys_pause.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Paul Mackerras
This implements CONFIG_RELOCATABLE for 64-bit by making the kernel a position-independent executable (PIE) when it is set. This involves processing the dynamic relocations in the image in the early stages of booting, even if the kernel is being run at the address it is linked at, since the linker does not necessarily fill in words in the image for which there are dynamic relocations. (In fact the linker does fill in such words for 64-bit executables, though not for 32-bit executables, so in principle we could avoid calling relocate() entirely when we're running a 64-bit kernel at the linked address.)

The dynamic relocations are processed by a new function relocate(addr), where the addr parameter is the virtual address where the image will be run. In fact we call it twice: once before calling prom_init, and again when starting the main kernel. This means that reloc_offset() returns 0 in prom_init (since it has been relocated to the address it is running at), which necessitated a few adjustments.

This also changes __va and __pa to use an equivalent definition that is simpler. With the relocatable kernel, PAGE_OFFSET and MEMORY_START are constants (for 64-bit) whereas PHYSICAL_START is a variable (and KERNELBASE ideally should be too, but isn't yet).

With this, relocatable kernels still copy themselves down to physical address 0 and run there.

Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Paul Mackerras
Using LOAD_REG_IMMEDIATE to get the address of kernel symbols generates 5 instructions where LOAD_REG_ADDR can do it in one, and will generate R_PPC64_ADDR16_* relocations in the output when we get to making the kernel a position-independent executable, which we'd rather not have to handle.

This changes various bits of assembly code to use LOAD_REG_ADDR when we need to get the address of a symbol, or to use suitable position-independent code for cases where we can't access the TOC for various reasons, or if we're not running at the address we were linked at.

It also cleans up a few minor things: there's no reason to save and restore SRR0/1 around RTAS calls, __mmu_off can get the return address from LR more conveniently than the caller can supply it in R4 (and we already assume elsewhere that EA == RA if the MMU is on in early boot), and enable_64b_mode was using 5 instructions where 2 would do.

Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Paul Mackerras
This changes the way that the exception prologs transfer control to the handlers in 64-bit kernels, with the aim of making it possible to have the prologs separate from the main body of the kernel. Now, instead of computing the address of the handler by taking the top 32 bits of the paca address (to get the 0xc0000000........ part) and ORing in something in the bottom 16 bits, we get the base address of the kernel by doing a load from the paca and add an offset.

This also replaces an mfmsr and an ori to compute the MSR value for the handler with a load from the paca. That makes it unnecessary to have a separate version of EXCEPTION_PROLOG_PSERIES that forces 64-bit mode.

We can no longer use direct branches in the exception prolog code, which means that the SLB miss handlers can't branch directly to .slb_miss_realmode any more. Instead we have to compute the address and do an indirect branch. This is conditional on CONFIG_RELOCATABLE; for non-relocatable kernels we use a direct branch as before. (A later change will allow CONFIG_RELOCATABLE to be set on 64-bit powerpc.)

Since the secondary CPUs on pSeries start execution in the first 0x100 bytes of real memory and then have to get to wherever the kernel is, we can't use a direct branch to get there. Instead this changes __secondary_hold_spinloop from a flag to a function pointer. When it is set to a non-NULL value, the secondary CPUs jump to the function pointed to by that value.

Finally this eliminates one code difference between 32-bit and 64-bit by making __secondary_hold be the text address of the secondary CPU spinloop rather than a function descriptor for it.

Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Paul Mackerras
This rearranges head_64.S so that we have all the first-level exception prologs together starting at 0x100, followed by all the second-level handlers that are invoked from the first-level prologs, followed by other code. This doesn't make any functional change but will make the following changes for relocatable kernel support easier.

Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Chandru
The kdump kernel needs to use only those memory regions that it is allowed to use (crashkernel, rtas, tce, etc.). Each of these regions has its own size and is currently added under the 'linux,usable-memory' property under each memory@xxx node of the device tree.

The ibm,dynamic-memory property of the ibm,dynamic-reconfiguration-memory node (on POWER6) now stores the representation for most of the logical memory blocks, with the size of each memory block being a constant (lmb_size). If one or more of the above mentioned regions, or part of one, lies under one of the LMBs from the ibm,dynamic-memory property, there is a need to identify those regions within the given LMB.

This makes the kernel recognize a new 'linux,drconf-usable-memory' property added by kexec-tools. Each entry in this property is of the form of a count followed by that many (base, size) pairs for the above mentioned regions. The number of cells in the count value is given by the #size-cells property of the root node.

Signed-off-by: Chandru Siddalingappa <chandru@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
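The property can be walked roughly as follows; this is an illustrative sketch only (the helper used, the cell-width variables and the surrounding loop are assumptions, not the actual parser):

    const __be32 *usm = prop;   /* 'linux,drconf-usable-memory' data for one LMB */
    u32 ranges, i;
    u64 base, size;

    /* Count of (base, size) pairs; cell width given by the root #size-cells. */
    ranges = of_read_number(usm, dt_root_size_cells);
    usm += dt_root_size_cells;

    for (i = 0; i < ranges; i++) {
        base = of_read_number(usm, dt_root_addr_cells);
        usm += dt_root_addr_cells;
        size = of_read_number(usm, dt_root_size_cells);
        usm += dt_root_size_cells;
        /* register [base, base + size) as usable memory for the kdump kernel */
    }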
-
- 10 September 2008, 1 commit
-
-
Authored by James Bottomley
dereference_function_descriptor() was introduced by "vsprintf: add support for '%pS' and '%pF' pointer formats" in commit 0fe1ef24. However, the way it's currently coded doesn't work on parisc64, for two reasons: 1) parisc isn't in the #ifdef and 2) parisc has a different format for function descriptors.

Make dereference_function_descriptor() more accommodating by allowing architecture overrides. I put the three overrides (for parisc64, ppc64 and ia64) in arch/kernel/module.c because that's where the kernel's internal linker, which knows how to deal with function descriptors, sits.

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
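On powerpc64 the override ends up looking roughly like the sketch below; this is based on the description (a descriptor whose first word is the real entry point), and the structure and field names are assumptions:

    #include <asm/elf.h>
    #include <linux/uaccess.h>

    void *dereference_function_descriptor(void *ptr)
    {
        struct ppc64_opd_entry *desc = ptr;
        void *p;

        /* A ppc64 function "pointer" is really a descriptor; the first
         * word of it is the actual code address. */
        if (!probe_kernel_address(&desc->funcaddr, p))
            ptr = p;
        return ptr;
    }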
-
- 03 September 2008, 4 commits
-
-
Authored by Kumar Gala
The calculation to get TI_CPU based off of SPRG3 was just plain wrong, meaning that we were getting garbage for the CPU number on 6xx/G3/G4-based SMP boxes in this code. Just offset off the stack pointer (to get to thread_info) like all the other references to TI_CPU do. This was pointed out by Chen Gong <G.Chen@freescale.com>.

[paulus@samba.org - use rlwinm r12,r11,... instead of rlwinm r12,r1,...; tophys()]

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Tony Breeds
This bug is causing random crashes (http://bugzilla.kernel.org/show_bug.cgi?id=11414). -fno-omit-frame-pointer is only needed on powerpc when -pg is also supplied, and there is a gcc bug that causes incorrect code generation on 32-bit powerpc when -fno-omit-frame-pointer is used: it uses stack locations below the stack pointer, which is not allowed by the ABI because those locations can, and sometimes do, get corrupted by an interrupt.

This ensures that CONFIG_FRAME_POINTER is only selected by ftrace. When CONFIG_FTRACE is enabled we also pass -mno-sched-epilog to work around the gcc codegen bug.

Patch based on work by:
Andreas Schwab <schwab@suse.de>
Segher Boessenkool <segher@kernel.crashing.org>

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Stephen Rothwell
This makes core_kernel_text() (and therefore kernel_text_address()) return the correct result. Currently all the __devinit routines (at least) will not be considered to be kernel text. This is just a quick fix for 2.6.27 - hopefully we will be able to fix this better in 2.6.28.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Michael Neuling
This fixes an uninitialised variable in the VSX alignment code. It can cause warnings from GCC (noticed with gcc-4.1.1). GCC is actually correct in this instance, and this bug could cause the alignment interrupt handler to send a SIGSEGV to the process on a legitimate access.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 20 August 2008, 7 commits
-
-
Authored by Benjamin Herrenschmidt
The file arch/powerpc/kernel/sysfs.c is currently only compiled for 64-bit kernels. It contains code to register CPU sysdevs in sysfs and add various properties such as cache topology and raw access by root to performance monitor counters (PMCs). A lot of that can be re-used as-is on 32-bit.

This makes the file be built for both, with appropriate ifdef'ing for the few bits that are really 64-bit specific, and adds some support for the raw PMCs for 75x and 74xx processors.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Nathan Lynch
When removing a directory, the sysfs core takes care of removing files in the directory (see sysfs_remove_dir()). So when we are about to delete a kobject (and thus cause its sysfs directory to be removed), we don't have to explicitly remove the files attached to it, although it's harmless to do so.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Harvey Harrison
__FUNCTION__ is gcc-specific; use __func__ instead.

[akpm@linux-foundation.org: coding-style fixes]

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
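The change is mechanical; a representative before/after (the pr_debug call is just an example site):

    /* before: gcc-specific spelling */
    pr_debug("%s: entering\n", __FUNCTION__);

    /* after: C99 standard spelling, identical output */
    pr_debug("%s: entering\n", __func__);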
-
Authored by Harvey Harrison
[akpm@linux-foundation.org: exclude prom_init.c]

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Michael Ellerman
There is a small passage of code in ret_from_except_lite which is only required on iSeries. For a multi-platform kernel running on non-iSeries machines this means we end up executing ~15 nops in ret_from_except_lite. It would be nicer if non-iSeries could skip the code entirely, and on iSeries we can jump out of line to execute the code. I have no performance numbers to justify this, other than the assertion that executing 15 nops takes longer than executing 0.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Brian King
When CMO is enabled and the kernel is booted on a non-CMO system and the VIO device's probe function fails, an oops can result, since vio_cmo_bus_remove is called when it should not be. This fixes it by avoiding the vio_cmo_bus_remove call on platforms that don't implement CMO.

cpu 0x0: Vector: 300 (Data Access) at [c00000000e13b3d0]
    pc: c000000000020d34: .vio_cmo_bus_remove+0xc0/0x1f4
    lr: c000000000020ca4: .vio_cmo_bus_remove+0x30/0x1f4
    sp: c00000000e13b650
   msr: 8000000000009032
   dar: 0
 dsisr: 40000000
  current = 0xc00000000e0566c0
  paca    = 0xc0000000006f9b80
    pid   = 2428, comm = modprobe
enter ? for help
[c00000000e13b6e0] c000000000021d94 .vio_bus_probe+0x2f8/0x33c
[c00000000e13b7a0] c00000000029fc88 .driver_probe_device+0x13c/0x200
[c00000000e13b830] c00000000029fdac .__driver_attach+0x60/0xa4
[c00000000e13b8c0] c00000000029f050 .bus_for_each_dev+0x80/0xd8
[c00000000e13b980] c00000000029f9ec .driver_attach+0x28/0x40
[c00000000e13ba00] c00000000029f630 .bus_add_driver+0xd4/0x284
[c00000000e13baa0] c0000000002a01bc .driver_register+0xc4/0x198
[c00000000e13bb50] c00000000002168c .vio_register_driver+0x40/0x5c
[c00000000e13bbe0] d0000000003b3f1c .ibmvfc_module_init+0x70/0x109c [ibmvfc]
[c00000000e13bc70] c0000000000acf08 .sys_init_module+0x184c/0x1a10
[c00000000e13be30] c000000000008748 syscall_exit+0x0/0x40

Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
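The guard amounts to checking the firmware feature before touching any CMO accounting; a minimal sketch, with the wrapper function name being hypothetical:

    #include <asm/firmware.h>

    static void vio_bus_cleanup_after_probe_failure(struct vio_dev *viodev)
    {
        /* Only platforms that actually negotiated CMO have CMO state to
         * undo; on everything else, skip the call entirely. */
        if (firmware_has_feature(FW_FEATURE_CMO))
            vio_cmo_bus_remove(viodev);
    }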
-
Authored by Joachim Fenkes
Recent of_platform changes made of_bus_type_init() overwrite the bus type's .dev_attrs list, meaning that the "name" attribute that ibmebus devices previously had is no longer present. This is a user-visible regression which breaks the userspace eHCA support, since the eHCA userspace driver relies on the name attribute to check for valid adapters.

This fixes it by providing the "name" attribute in the generic OF device code instead. Tested on POWER.

Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
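A sketch of exposing the node name as a generic OF device attribute; the attribute plumbing below uses the of_device types of that era, and the exact structure and field names are assumptions:

    static ssize_t name_show(struct device *dev,
                             struct device_attribute *attr, char *buf)
    {
        struct of_device *ofdev = to_of_device(dev);

        /* Report the device tree node name, as ibmebus used to do. */
        return sprintf(buf, "%s\n", ofdev->node->name);
    }

    static struct device_attribute of_platform_device_attrs[] = {
        __ATTR_RO(name),
        __ATTR_NULL
    };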
-