- 02 June 2015, 15 commits
-
-
Committed by Jiang Liu
Use irq_desc_get_xxx() to avoid a redundant lookup of irq_desc when we already have a pointer to the corresponding irq_desc.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
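As an illustration of the pattern (a sketch with a made-up handler name, not the actual diff), the change replaces irq-number based lookups with the irq_desc accessors when the descriptor is already in hand:

    /* Sketch only: handler name and usage are illustrative. */
    static void example_cascade_handler(unsigned int irq, struct irq_desc *desc)
    {
        /* Before: resolves the irq number back to its descriptor internally. */
        struct irq_chip *chip = irq_get_chip(irq);

        /* After: reads straight from the descriptor we already hold. */
        struct irq_chip *chip_direct = irq_desc_get_chip(desc);
        void *handler_data = irq_desc_get_handler_data(desc);

        /* ... chain into the child handler, ack/eoi via the chip ... */
    }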
-
Committed by Anton Blanchard
We need to use a trampoline when using LOAD_HANDLER(), because the destination needs to be in the first 64kB. An absolute branch has no such limitation, so just jump there directly.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anton Blanchard
We had some code to restore the LR in the relocatable system call path, back when we used the LR to do an indirect branch. Commit 6a404806 ("powerpc: Avoid link stack corruption in MMU on syscall entry path") changed this to use the CTR, which is volatile across system calls and so does not need restoring. Remove the stale comment and the restore of the LR.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anton Blanchard
When we take a PMU exception or a software event we call perf_read_regs(). This overloads regs->result with a boolean that describes whether we should use the sampled instruction address register (SIAR) or the regs.

If the exception is in the kernel, we start with the kernel regs and backtrace through the kernel stack. Once the kernel backtrace is done, we switch to the userspace regs and backtrace the user stack with perf_callchain_user(). Unfortunately these regs have not had the perf_read_regs() treatment, so regs->result could be anything. If it is non-zero, perf_instruction_pointer() decides to use the SIAR, and we get issues like this:

    0.11%  qemu-system-ppc  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
           |
           ---_raw_spin_lock_irqsave
              |
              |--52.35%-- 0
              |          |
              |          |--46.39%-- __hrtimer_start_range_ns
              |          |          kvmppc_run_core
              |          |          kvmppc_vcpu_run_hv
              |          |          kvmppc_vcpu_run
              |          |          kvm_arch_vcpu_ioctl_run
              |          |          kvm_vcpu_ioctl
              |          |          do_vfs_ioctl
              |          |          sys_ioctl
              |          |          system_call
              |          |          |
              |          |          |--67.08%-- _raw_spin_lock_irqsave  <--- hi mum
              |          |          |           |
              |          |          |           --100.00%-- 0x7e714
              |          |          |                       0x7e714

Notice the bogus _raw_spin_lock_irqsave when we transition from kernel (system_call) to userspace (0x7e714). We inserted what was in the SIAR.

Add a check in regs_use_siar() to verify that the regs in question are from a PMU exception. With this fix the backtrace makes sense:

    0.47%  qemu-system-ppc  [kernel.vmlinux]  [k] _raw_spin_lock_irqsave
           |
           ---_raw_spin_lock_irqsave
              |
              |--53.83%-- 0
              |          |
              |          |--44.73%-- hrtimer_try_to_cancel
              |          |          kvmppc_start_thread
              |          |          kvmppc_run_core
              |          |          kvmppc_vcpu_run_hv
              |          |          kvmppc_vcpu_run
              |          |          kvm_arch_vcpu_ioctl_run
              |          |          kvm_vcpu_ioctl
              |          |          do_vfs_ioctl
              |          |          sys_ioctl
              |          |          system_call
              |          |          __ioctl
              |          |          0x7e714
              |          |          0x7e714

Cc: stable@vger.kernel.org
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
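The shape of the fix, as a hedged sketch (the real check lives in the powerpc perf code; 0xf00 is the performance monitor exception vector on these CPUs):

    /* Sketch: only trust regs->result when the regs came from a PMU exception. */
    static bool regs_use_siar(struct pt_regs *regs)
    {
        /*
         * perf_read_regs() overloads regs->result for PMU exceptions (trap
         * 0xf00). For any other regs, e.g. a syscall frame picked up while
         * walking into userspace, regs->result is stale and must be ignored.
         */
        return TRAP(regs) == 0xf00 && regs->result;
    }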
-
Committed by Michael Ellerman
If both STRICT_MM_TYPECHECKS and DEBUG_PAGEALLOC are enabled, the code in kernel_map_linear_page() is built, and so we fail with:

    arch/powerpc/mm/hash_utils_64.c:1478:2: error: incompatible type for argument 1 of 'htab_convert_pte_flags'

Fix it by using pgprot_val().
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Previously, dma_set_mask() on powernv was convoluted:

    0) Call dma_set_mask() (a/p/kernel/dma.c)
    1) In dma_set_mask(), ppc_md.dma_set_mask() exists, so call it.
    2) On powernv, that function pointer is pnv_dma_set_mask(). In pnv_dma_set_mask(), the device is PCI, so call pnv_pci_dma_set_mask().
    3) In pnv_pci_dma_set_mask(), call pnv_phb->set_dma_mask() if it exists.
    4) It only exists in the ioda case, where it points to pnv_pci_ioda_dma_set_mask(), which is the final function.

So the call chain is:

    dma_set_mask() -> pnv_dma_set_mask() -> pnv_pci_dma_set_mask() -> pnv_pci_ioda_dma_set_mask()

Both the ppc_md and pnv_phb function pointers are used. Rip out the ppc_md call, pnv_dma_set_mask() and pnv_pci_dma_set_mask(). Instead:

    0) Call dma_set_mask() (a/p/kernel/dma.c)
    1) In dma_set_mask(), the device is PCI and pci_controller_ops.dma_set_mask() exists, so call pci_controller_ops.dma_set_mask().
    2) In the ioda case, that points to pnv_pci_ioda_dma_set_mask().

The new call chain is:

    dma_set_mask() -> pnv_pci_ioda_dma_set_mask()

Now only the pci_controller_ops function pointer is used. The fallback paths for p5ioc2 are the same: previously, pnv_pci_dma_set_mask() would find no pnv_phb->set_dma_mask() function, so it would call __set_dma_mask(); now, dma_set_mask() finds no ppc_md call or pci_controller_ops call, so it calls __set_dma_mask().
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
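A hedged sketch of the resulting dispatch in dma_set_mask() (simplified; the helper names follow the commit text rather than a verified listing, and other platforms may still hook in via ppc_md):

    int dma_set_mask(struct device *dev, u64 dma_mask)
    {
        /* Platforms that still provide a ppc_md hook keep working. */
        if (ppc_md.dma_set_mask)
            return ppc_md.dma_set_mask(dev, dma_mask);

        /* For PCI devices, prefer the per-controller hook (ioda on powernv). */
        if (dev_is_pci(dev)) {
            struct pci_dev *pdev = to_pci_dev(dev);
            struct pci_controller *phb = pci_bus_to_host(pdev->bus);

            if (phb->controller_ops.dma_set_mask)
                return phb->controller_ops.dma_set_mask(pdev, dma_mask);
        }

        /* Fallback, e.g. p5ioc2: no hook found, use the generic path. */
        return __set_dma_mask(dev, dma_mask);
    }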
-
Committed by Daniel Axtens
Some systems only need to deal with DMA masks for PCI devices. For these systems, we can avoid the need for a platform hook and instead use a PCI controller based hook.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Remove the powernv generic PCI controller operations. Replace them with controller ops for each of the two supported PHBs. As an added bonus, make the two new structs const, which will help guard against bugs such as the one introduced in 65ebf4b6 ("powerpc/powernv: Move controller ops from ppc_md to controller_ops").
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Remove unneeded ppc_md functions, and patch callsites to use the pci_controller_ops functions exclusively.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Move the u3 MPIC MSI subsystem to use the pci_controller_ops structure rather than ppc_md for MSI related PCI controller operations. As with fsl_msi, operations are plugged in at the subsys level, after controller creation. Again, we iterate over all controllers and populate them with the MSI ops.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Move the PaSemi MPIC MSI subsystem to use the pci_controller_ops structure rather than ppc_md for MSI related PCI controller operations. As with fsl_msi, operations are plugged in at the subsys level, after controller creation. Again, we iterate over all controllers and populate them with the MSI ops.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Move the ppc4xx HSTA MSI subsystem to use the pci_controller_ops structure rather than ppc_md for MSI related PCI controller operations. As with fsl_msi, operations are plugged in at the subsys level, after controller creation. Again, we iterate over all controllers and populate them with the MSI ops.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Move the ppc4xx MSI subsystem to use the pci_controller_ops structure rather than ppc_md for MSI related PCI controller operations. As with fsl_msi, operations are plugged in at the subsys level, after controller creation. Again, we iterate over all controllers and populate them with the MSI ops.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Move the fsl_msi subsystem to use the pci_controller_ops structure rather than ppc_md for MSI related PCI controller operations. Previously, MSI ops were added to ppc_md at the subsys level. However, in fsl_pci.c, PCI controllers are created at the arch level, so, unlike in e.g. PowerNV/pSeries/Cell, we can't simply populate a platform-level controller ops structure and have it copied into the controllers when they are created. Instead, walk every PHB and attempt to populate it with the MSI ops.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
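A sketch of the walk described above, assuming simplified names (the wrapper function here is made up; the real code lives in the fsl_msi probe path and also checks which controllers the MSI block actually serves):

    /* Sketch: attach the MSI ops to every PHB at probe time. */
    static void example_fsl_msi_setup_phbs(void)
    {
        struct pci_controller *hose;

        list_for_each_entry(hose, &hose_list, list_node) {
            hose->controller_ops.setup_msi_irqs = fsl_setup_msi_irqs;
            hose->controller_ops.teardown_msi_irqs = fsl_teardown_msi_irqs;
        }
    }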
-
Committed by Daniel Axtens
Move the pseries platform to use the pci_controller_ops structure rather than ppc_md for MSI related PCI controller operations. We need to iterate over all PHBs because the MSI setup happens later than find_and_init_phbs() - mpe.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 22 May 2015, 13 commits
-
-
Committed by Daniel Axtens
Move the Cell platform to use the pci_controller_ops structure rather than ppc_md for MSI related PCI controller operations. We can be confident that the functions will be added to the platform's ops struct before any PCI controller's ops struct is populated because:

    1) These ops are added to the struct in a subsys initcall. We populate the ops in axon_msi_probe, which is the probe call for the axon-msi driver. However the driver is registered in axon_msi_init, which is a subsys initcall, so this will happen at the subsys level.
    2) The controller receives the struct later, in a device initcall. Cell populates the controller in cell_setup_phb, which is hooked up to ppc_md.pci_setup_phb. ppc_md.pci_setup_phb is only ever called in of_platform.c, as part of the OpenFirmware PCI driver's probe routine. That driver is registered in a device initcall, so this will occur *after* the struct is properly populated.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Move the PowerNV/BML platform to use the pci_controller_ops structure rather than ppc_md for MSI related PCI controller operations.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Add MSI setup and teardown functions to pci_controller_ops, and patch the callsites (arch_{setup,teardown}_msi_irqs) to prefer the controller ops version if it's available.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
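A hedged sketch of the callsite change (simplified; the real arch_setup_msi_irqs() carries additional quirk checks and error reporting):

    int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
    {
        struct pci_controller *phb = pci_bus_to_host(dev->bus);

        /* Prefer the per-controller hook when the platform provides one. */
        if (phb->controller_ops.setup_msi_irqs)
            return phb->controller_ops.setup_msi_irqs(dev, nvec, type);

        /* Otherwise fall back to the platform-wide ppc_md hook. */
        if (ppc_md.setup_msi_irqs)
            return ppc_md.setup_msi_irqs(dev, nvec, type);

        pr_debug("msi: platform support for MSI is missing.\n");
        return -ENOSYS;
    }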
-
Committed by Alistair Popple
All users of the old OPAL events notifier have been converted over to the irq domain, so remove the event notifier functions.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Alistair Popple
Convert the OPAL dump driver to the new OPAL irq domain.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Alistair Popple
This patch converts the elog code to use the OPAL irq domain instead of notifier events.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Alistair Popple
This patch converts the OPAL message event code to use the new OPAL irq domain.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Alistair Popple
The EEH code currently uses the old notifier method to get EEH events from OPAL. It also contains some logic to filter OPAL events, which has been moved into the virtual irqchip. This patch converts the EEH code to the new event interface, which simplifies event handling.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Alistair Popple
Whenever an interrupt is received for OPAL, the Linux kernel gets a bitfield indicating certain events that have occurred and need handling by the various device drivers. Currently this is handled using a notifier interface where we call every device driver that has registered to receive OPAL events. This approach has several drawbacks. For example, each driver has to do its own checking to see whether the event is relevant, as well as its own event masking. There is also no easy method of recording the number of times we receive particular events.

This patch solves these issues by exposing OPAL events via the standard interrupt APIs, by adding a new interrupt chip and domain. Drivers can then register for the appropriate events using standard kernel calls such as irq_of_parse_and_map().
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
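A hypothetical consumer, sketched to show the standard-API flavour the commit describes (the driver, handler, and function names here are made up; only irq_of_parse_and_map() and request_irq() are real kernel APIs):

    /* Sketch of an OPAL event consumer using the generic IRQ APIs. */
    static irqreturn_t example_opal_event_handler(int virq, void *data)
    {
        /* The irqchip has already demultiplexed the event bit for us. */
        return IRQ_HANDLED;
    }

    static int example_opal_event_init(struct device_node *np)
    {
        unsigned int virq = irq_of_parse_and_map(np, 0);

        if (!virq)
            return -ENODEV;

        return request_irq(virq, example_opal_event_handler, 0,
                           "example-opal-event", NULL);
    }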
-
Committed by Alistair Popple
Most of the OPAL subsystems are always compiled in for PowerNV, and many of them need to be initialised before or after other OPAL subsystems. Rather than trying to control this ordering through machine initcalls, it is clearer and easier to control the initialisation order with explicit calls in opal_init().
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Cc: Mahesh Jagannath Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Shreyas B. Prabhu
Fastsleep is one of the idle states which the cpuidle subsystem currently uses on power8 machines. In this state the L2 cache is brought down to a threshold voltage, so when the core is in fastsleep the communication between L2 and L3 needs to be fenced. But there is a bug in the current power8 chips surrounding this fencing.

OPAL provides a workaround which precludes the possibility of hitting this bug. But running with this workaround applied causes a checkstop if any correctable error in the L2 cache directory is detected. Hence OPAL also provides a way to undo the workaround.

In the existing implementation, the workaround is applied by the last thread of the core entering fastsleep and undone by the first thread waking up. But this has a performance cost: these OPAL calls account for roughly 4000 cycles every time the core has to enter or wake up from fastsleep.

This patch introduces a sysfs attribute (fastsleep_workaround_applyonce) to choose the behaviour of this workaround.

By default, fastsleep_workaround_applyonce = 0. In this case, the workaround is applied/undone every time the core enters/exits fastsleep.

With fastsleep_workaround_applyonce = 1, the workaround is applied once on all the cores and never undone. This can be triggered by

    echo 1 > /sys/devices/system/cpu/fastsleep_workaround_applyonce

For simplicity this attribute can be modified only once; that is, once fastsleep_workaround_applyonce is changed to 1, it cannot be reverted to the default state.
Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
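A hedged sketch of the one-shot semantics of the store hook (names simplified and the global flag assumed; the real attribute also has to call into OPAL to apply the workaround on every core):

    /* Sketch: accept only "1", and only latch it once. */
    static ssize_t store_fastsleep_workaround_applyonce(struct device *dev,
                    struct device_attribute *attr, const char *buf, size_t count)
    {
        u32 val;

        if (kstrtou32(buf, 0, &val) || val != 1)
            return -EINVAL;

        if (fastsleep_workaround_applyonce == 1)
            return count;   /* already applied once; cannot be reverted */

        /* ... ask OPAL to apply the workaround on all cores here ... */

        fastsleep_workaround_applyonce = 1;
        return count;
    }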
-
Committed by Shreyas B. Prabhu
This is a cleanup patch; it doesn't change any functionality. It moves all cpuidle related code from setup.c to a new file.
Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
[mpe: Fix the SMP=n build by including asm/smp.h in idle.c]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Shreyas B. Prabhu
Currently, cpu_online_cores_map returns a mask which, for every core with at least one online thread, has the bit for thread 0 of the core set to 1 and the bits for all other threads of the core set to 0. But thread 0 of the core itself may not always be online. In such cases, if the returned mask is used for IPIs, then IPIs will be skipped on cores where the first thread is offline, because the IPI code refuses to send IPIs to offline threads.

Fix this by setting the bit of the first online thread in the core. This is done by fixing the underlying function cpu_thread_mask_to_cores. The result has the property that for all cores with online threads, there is one bit set in the returned map, and further, all bits that are set in the returned map correspond to online threads.
Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
[ Changelog from Michael Ellerman <mpe@ellerman.id.au> ]
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
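A sketch of the fixed helper, assuming the powerpc cpumask globals (threads_per_core, threads_core_mask) behave as in mainline; the real version lives in the cputhreads header:

    /* Sketch: one bit per core, always on an *online* thread of that core. */
    static inline cpumask_t cpu_thread_mask_to_cores(const struct cpumask *threads)
    {
        cpumask_t tmp, res;
        int i, cpu;

        cpumask_clear(&res);
        for (i = 0; i < NR_CPUS; i += threads_per_core) {
            cpumask_shift_left(&tmp, &threads_core_mask, i);
            if (cpumask_intersects(threads, &tmp)) {
                /* Pick the first online thread of this core, not thread 0. */
                cpu = cpumask_next_and(-1, &tmp, cpu_online_mask);
                if (cpu < nr_cpu_ids)
                    cpumask_set_cpu(cpu, &res);
            }
        }
        return res;
    }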
-
- 20 May 2015, 1 commit
-
-
Committed by Laurent Dufour
Commit 8170a83f ("powerpc: Wireup the kcmp syscall to sys_ni") disabled the kcmp syscall for powerpc. This was done because of its unsigned long parameters, which may require a dedicated wrapper to handle a 32-bit process running on top of a 64-bit kernel. However, in the kcmp() case the two unsigned long parameters are currently only used to carry file descriptors from user space to the kernel. Since such a parameter is passed in a register, and a file descriptor doesn't need to be extended, there is, today, no need for a wrapper. If there is ever a need to pass an address into or out of this system call, then a wrapper could be required, and it will be time to take care of it then. As that is not the case today, it is safe to enable kcmp() on powerpc.

Tested (by Laurent) on 64-bit, 32-bit, and 32-bit userspace on 64-bit kernel using tools/testing/selftests/kcmp [mpe].
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
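For illustration, a minimal userspace check of the now-enabled syscall (assumes headers that expose SYS_kcmp and linux/kcmp.h, as the kcmp selftests do):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/kcmp.h>

    int main(void)
    {
        pid_t pid = getpid();

        /* 0 means fd 1 and fd 2 refer to the same open file description. */
        long ret = syscall(SYS_kcmp, pid, pid, KCMP_FILE, 1, 2);

        printf("stdout and stderr %s the same open file\n",
               ret == 0 ? "share" : "do not share");
        return 0;
    }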
-
- 18 May 2015, 1 commit
-
-
Committed by Michael Ellerman
The only little endian configuration we support is ppc64le; all other configurations are big endian. So we should only offer a choice of endianness if we're building for 64-bit Book3S, i.e. PPC_BOOK3S_64.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 13 May 2015, 4 commits
-
-
Committed by Wei Yang
Currently, the macro IS_BRIDGE is not used anywhere. This patch just removes it.
Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Wei Yang
As the comment indicates, powernv_eeh_get_state() will inform the EEH core to delay 1 second, meaning the delay has not yet happened when powernv_eeh_get_state() returns. This patch moves the delay subtraction to just before the msleep(), which is the same logic as in pseries_eeh_wait_state().
Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
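A hedged sketch of the loop shape after the change (the wrapper name is made up and the error handling of the real powernv wait-state helper is omitted):

    /* Sketch: sleep for the firmware-requested delay, counting it against the timeout. */
    static int example_wait_state(struct eeh_pe *pe, int max_wait)
    {
        int mwait, ret;

        while (max_wait > 0) {
            mwait = 0;
            ret = powernv_eeh_get_state(pe, &mwait);

            /* Anything other than "temporarily unavailable" is a final answer. */
            if (ret != EEH_STATE_UNAVAILABLE)
                return ret;

            /* Subtract the requested delay before sleeping, as pseries does. */
            max_wait -= mwait;
            msleep(mwait);
        }

        return EEH_STATE_NOT_SUPPORT;
    }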
-
Committed by Wei Yang
To retrieve the PCI slot state, the EEH driver sets a timeout for the operation, but the current comments are not aligned with what the code does. This patch fixes those comments to match the code.
Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Wei Yang
struct pci_io_addr_range{} stores information about PCI resources. It would be better for these related fields to have the same types as in struct resource{}. This patch changes the start/end/flags types in struct pci_io_addr_range{} to match those in struct resource{}.
Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
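For reference, the field types being aligned to (trimmed from include/linux/ioport.h; only the fields relevant here are shown):

    struct resource {
        resource_size_t start;
        resource_size_t end;
        const char *name;
        unsigned long flags;
        /* ... parent/sibling/child pointers ... */
    };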
-
- 12 May 2015, 2 commits
-
-
Committed by Gavin Shan
The patch defines PCI error types and functions in uapi/asm/eeh.h and exports the function eeh_pe_inject_err(), which will be called by the VFIO driver to inject the specified PCI error into the indicated PE for testing purposes.
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Gavin Shan
There are two equivalent sets of PE state constants, defined in arch/powerpc/include/asm/eeh.h and include/uapi/linux/vfio.h. Though the names are different, their corresponding values are exactly the same. The former is used by the EEH core and the latter is used by userspace.

The patch moves those constants from arch/powerpc/include/asm/eeh.h to arch/powerpc/include/uapi/asm/eeh.h, which is where userspace is expected to get them from now on. We can't delete the constants in vfio.h, as it's uncertain whether they have been, or will be, used by userspace.
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 11 May 2015, 4 commits
-
-
Committed by Joel Stanley
OpenPower BMC machines do not place any sysparams in the device tree, so at every boot we get a warning:

    [ 0.437176] SYSPARAM: Opal sysparam node not found

Remove the warning, and reorder the init so we don't perform allocations when there is no sysparam node in the device tree.
Signed-off-by: Joel Stanley <joel@jms.id.au>
Acked-by: Neelesh Gupta <neelegup@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman
The only little endian configuration we support is ppc64le. As such, if we're building little endian we don't need a 32-bit VDSO, because there is no 32-bit userspace. This patch is a fairly ugly mess of #ifdefs, but is the minimal logic required to disable the 32-bit VDSO. We can hopefully clean up the result in future with some further refactoring.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman
In vdso_fixup_features() we have start64/start32 and size64/size32, but they have the same types, i.e. void * and unsigned long. They're only used to save the return value from find_sectionXX() for the subsequent call to do_feature_fixups(), so there's no overlap in their usage either. So we can just consolidate them into start/size and avoid the duplication.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman
It's in the git history if we ever need it back.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-