- 28 Jan 2015, 3 commits
-
-
By Gavin Shan

On the PowerNV platform, the OPAL interrupts are exported by firmware through the device-node property (/ibm,opal::opal-interrupts). Under some extreme circumstances (e.g. a simulator), this property is missing from the device tree. In that case we shouldn't allocate the interrupt map; otherwise, slab complains about allocating a zero-sized memory chunk.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
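A hedged sketch of the guard being described (variable names such as opal_node and opal_irqs follow the surrounding opal.c code, but the exact shape of the real patch may differ):

    #include <linux/of.h>
    #include <linux/slab.h>

    static struct device_node *opal_node;   /* /ibm,opal, found earlier */
    static unsigned int *opal_irqs;
    static int opal_irq_count;

    static void __init opal_irq_init(void)  /* name taken from the next entry */
    {
        const __be32 *irqs;
        int irqlen = 0;

        irqs = of_get_property(opal_node, "opal-interrupts", &irqlen);
        if (!irqs || !irqlen)
            return;     /* no property (e.g. simulator): don't allocate a 0-byte map */

        opal_irq_count = irqlen / 4;
        opal_irqs = kcalloc(opal_irq_count, sizeof(*opal_irqs), GFP_KERNEL);
        if (!opal_irqs)
            return;

        /* ... map and request each exported interrupt ... */
    }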
-
By Gavin Shan

The patch moves the OPAL interrupt setup logic in opal_init() into a separate function, opal_irq_init(), to make the code easier to maintain. The patch doesn't introduce logic changes except to:
  * Rename some variables.
  * Release the virtual IRQ upon error from request_irq().
  * Not cache the virtual IRQ in opal_irqs[] upon error from request_irq().

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Michael Ellerman

In commit c8742f85 "powerpc/powernv: Expose OPAL firmware symbol map" I added pr_fmt() to opal.c. This left some existing pr_xxx()s with duplicate "opal" prefixes, eg:

  opal: opal: Found 0 interrupts reserved for OPAL

Fix them all up. Also make the "Not found" message a bit more verbose.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 27 Jan 2015, 1 commit
-
-
By Pranith Kumar

When CONFIG_PRINTK=n, log_buf_addr_get() returns NULL and log_buf_len_get() returns 0. Check for these return values and skip registering the dump buffer.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Reviewed-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
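A minimal sketch of the described check (log_buf_addr_get() and log_buf_len_get() are the printk helpers named above; the function name and the actual firmware registration step are illustrative):

    #include <linux/printk.h>

    static int __init opal_register_log_buffer(void)    /* illustrative name */
    {
        char *buf = log_buf_addr_get();
        u32 len = log_buf_len_get();

        /* With CONFIG_PRINTK=n there is no log buffer, so skip registration. */
        if (!buf || !len)
            return -1;

        /* ... describe buf/len to firmware as a dump region ... */
        return 0;
    }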
-
- 23 Jan 2015, 4 commits
-
-
By Gavin Shan

The callback (ppc_md.pci_probe_mode()) is used to determine whether the child PCI devices of the indicated PCI bus should be probed from the device tree or from hardware. On the PowerNV platform, we always expect PCI devices to be probed from hardware, which is the PowerPC PCI core's default behaviour. Also, the callback implemented a delay based on the PHB's device node property "reset-clear-timestamp", which isn't exported by skiboot. So we don't need this function and it's safe to remove it.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan

PE#0 should be regarded as valid for P7IOC, while it's invalid for PHB3. The patch adds the flag EEH_VALID_PE_ZERO to differentiate those two cases. Without the patch, we may see the frozen PE#0 state cleared without EEH recovery being taken on P7IOC, as the following kernel logs indicate:

  [root@ltcfbl8eb ~]# dmesg
  :
  pci 0000:00 : [PE# 000] Secondary bus 0 associated with PE#0
  pci 0000:01 : [PE# 001] Secondary bus 1 associated with PE#1
  pci 0001:00 : [PE# 000] Secondary bus 0 associated with PE#0
  pci 0001:01 : [PE# 001] Secondary bus 1 associated with PE#1
  pci 0002:00 : [PE# 000] Secondary bus 0 associated with PE#0
  pci 0002:01 : [PE# 001] Secondary bus 1 associated with PE#1
  pci 0003:00 : [PE# 000] Secondary bus 0 associated with PE#0
  pci 0003:01 : [PE# 001] Secondary bus 1 associated with PE#1
  pci 0003:20 : [PE# 002] Secondary bus 32..63 associated with PE#2
  :
  EEH: Clear non-existing PHB#3-PE#0
  EEH: PHB location: U78AE.001.WZS00M9-P1-002

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Thadeu Lima de Souza Cascardo

When IOMMU bypass is enabled, a PCI device can read and write memory that was not mapped by the driver without causing an EEH, which might cause memory corruption, for example. When we disable bypass, DMA reads and writes to addresses not mapped by the IOMMU will cause an EEH, allowing us to debug such issues.

Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Wei Yang

The M64 range information is missing from dmesg, and it would be helpful for debugging. This patch prints the M64 range information in the same format as M32.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 22 Jan 2015, 1 commit
-
-
By Ryan Grimm

Turning snoops on is the last step in CAPP recovery. Sapphire is expected to have reinitialized the PHB and done the previous recovery steps. Add a mode argument to the OPAL call to do this. The driver can turn snoops off, although it does not currently.

Signed-off-by: Ryan Grimm <grimm@linux.vnet.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 18 Dec 2014, 1 commit
-
-
By Greg Kurz

Starting with POWER8, the subcore logic relies on all threads of a core being booted so that they can participate in split-mode switches. So on those machines we ignore the smt_enabled_at_boot setting (smt-enabled on the kernel command line).

Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
[mpe: Update comment and change log to be more precise]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
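A rough sketch of the idea only, assuming the POWER8 test uses the CPU_FTR_ARCH_207S feature bit; the real patch's placement and exact condition may differ:

    /* POWER8 subcore switching needs every thread of the core online at
     * boot, so only honour smt-enabled= on older (pre-POWER8) machines. */
    int threads = threads_per_core;

    if (!cpu_has_feature(CPU_FTR_ARCH_207S) &&
        smt_enabled_at_boot < threads)
        threads = smt_enabled_at_boot;

    /* ... bring up 'threads' threads per core ... */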
-
- 15 Dec 2014, 4 commits
-
-
By Benjamin Herrenschmidt

Newer versions of OPAL will provide this, so let's expose it to user space so that tools like perf can use it to properly decode samples in firmware space.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Shreyas B. Prabhu

Winkle is a deep idle state supported on POWER8 chips. A core enters winkle when all the threads of the core enter winkle. In this state, power supply to the entire chiplet, i.e. the core, private L2 and private L3, is turned off. As a result it gives higher power savings compared to sleep. But entering winkle results in a total hypervisor state loss. Hence the hypervisor context has to be preserved before entering winkle and restored upon wake-up.

The Power-on Reset Engine (PORE) is a dedicated engine which is responsible for powering on the chiplet during wake-up. It can be programmed to restore the contents of a few specific registers. This patch uses PORE to restore register state wherever possible and uses the stack to save and restore the rest of the necessary registers.

With hypervisor state restore, things fall into three categories: per-core state, per-subcore state and per-thread state. To manage this, extend the infrastructure introduced for sleep. Mainly we add a paca variable subcore_sibling_mask. Using this and the core_idle_state we can distinguish the first thread in the core and subcore.

Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Shreyas B. Prabhu

Deep idle states like sleep and winkle are per-core idle states. A core enters these states only when all the threads enter either the particular idle state or a deeper one. There are tasks, like the fastsleep hardware bug workaround and hypervisor core state save, which have to be done only by the last thread of the core entering a deep idle state, and similarly tasks like timebase resync and hypervisor core register restore that have to be done only by the first thread waking up from these states.

The current idle state management does not have a way to distinguish the first/last thread of the core waking/entering idle states. Tasks like timebase resync are done by all the threads. This is not only suboptimal, but can cause functionality issues when subcores and KVM are involved.

This patch adds the necessary infrastructure to track the idle states of threads in a per-core structure. It uses this info to perform tasks like the fastsleep workaround and timebase resync only once per core.

Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Originally-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: linux-pm@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
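Purely illustrative sketch of such per-core bookkeeping (the function name and layout are invented for the example; the real implementation keeps this state alongside the paca and updates it atomically from assembly):

    /* One shared word per core: one bit per thread that is in a deep idle
     * state. The last thread to enter applies the fastsleep workaround;
     * the first thread to wake resyncs the timebase. */
    static bool note_thread_entering_idle(unsigned long *core_idle_state,
                                          int thread, int threads_per_core)
    {
        unsigned long mask = (1UL << threads_per_core) - 1;
        unsigned long state;

        /* The real code updates this word with lwarx/stwcx. for atomicity;
         * a plain OR is shown here only to illustrate the bookkeeping. */
        state = (*core_idle_state |= 1UL << thread);

        return (state & mask) == mask;  /* true for the last thread in */
    }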
-
By Shreyas B. Prabhu

The secondary threads should enter deep idle states so as to gain maximum power savings when the entire core is offline. To do so, the offline path must be made aware of the deepest available idle state. Hence, probe the device tree for the possible idle states in the powernv core code and expose the deepest idle state through flags.

Since the device tree is probed by the cpuidle driver as well, move the parameters required to discover the idle states into a common place appropriate for both the driver and the powernv core code.

Another point is that the fastsleep idle state may require workarounds in the kernel to function properly. This workaround is introduced in the subsequent patches. However, neither the cpuidle driver nor the hotplug path need be bothered about this workaround; it will be taken care of by the core powernv code.

Originally-by: Srivatsa S. Bhat <srivatsa@mit.edu>
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: linux-pm@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 14 Dec 2014, 1 commit
-
-
By Neelesh Gupta

The patch exposes the available i2c busses on the PowerNV platform to the kernel and implements the bus driver to support i2c and smbus commands. The driver uses the platform device infrastructure to probe the busses on the platform and registers them with the i2c driver framework.

Signed-off-by: Neelesh Gupta <neelegup@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Wolfram Sang <wsa@the-dreams.de> (I2C part, excluding the bindings)
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
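The registration pattern described is the usual platform-driver one; a condensed, hedged sketch (the compatible string and the adapter algorithm wiring are assumptions, not taken from the actual driver):

    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>
    #include <linux/i2c.h>

    static int opal_i2c_probe(struct platform_device *pdev)
    {
        struct i2c_adapter *adap;

        adap = devm_kzalloc(&pdev->dev, sizeof(*adap), GFP_KERNEL);
        if (!adap)
            return -ENOMEM;

        adap->dev.parent = &pdev->dev;
        adap->dev.of_node = of_node_get(pdev->dev.of_node);
        /* adap->algo would point at ops that issue the OPAL i2c/smbus requests */

        return i2c_add_adapter(adap);
    }

    static const struct of_device_id opal_i2c_match[] = {
        { .compatible = "ibm,opal-i2c" },   /* assumed binding name */
        { }
    };

    static struct platform_driver opal_i2c_driver = {
        .probe  = opal_i2c_probe,
        .driver = {
            .name           = "i2c-opal",
            .of_match_table = opal_i2c_match,
        },
    };
    module_platform_driver(opal_i2c_driver);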
-
- 08 Dec 2014, 1 commit
-
-
By Paul Mackerras

When a secondary hardware thread has finished running a KVM guest, we currently put that thread into nap mode using a nap instruction in the KVM code. This changes the code so that instead of executing a nap instruction directly, we instead cause the call to power7_nap() that put the thread into nap mode to return. The reason for doing this is to avoid the KVM code having to know what low-power mode to put the thread into.

In the case of a secondary thread used to run a KVM guest, the thread will be offline from the point of view of the host kernel, and the relevant power7_nap() call is the one in pnv_smp_cpu_disable(). In this case we don't want to clear pending IPIs in the offline loop in that function, since that might cause us to miss the wakeup for the next time the thread needs to run a guest. To tell whether or not to clear the interrupt, we use the SRR1 value returned from power7_nap() and check whether it indicates an external interrupt. We arrange for the return from power7_nap() to be 0 when we have finished running a guest, so pending interrupts don't get flushed in that case.

Note that it is important that a secondary thread that has finished executing in the guest, or that didn't have a guest to run, should not return to power7_nap's caller while the kvm_hstate.hwthread_req flag in the PACA is non-zero, because the return from power7_nap will re-enable the MMU, and the MMU might still be in guest context. In this situation we spin at low priority in real mode waiting for hwthread_req to become zero.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
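A rough sketch of the wake-reason test described, using the SRR1 wake-reason constants from asm/reg.h; the surrounding offline loop and the IPI-flush helper here are simplified assumptions, not the actual code:

    unsigned long srr1;

    srr1 = power7_nap(1);

    /* A zero return means the wakeup belongs to KVM (there is a guest to
     * run), so leave any pending IPI alone; only flush when the host
     * itself was woken by an external interrupt. */
    if (srr1 && (srr1 & SRR1_WAKEMASK) == SRR1_WAKEEE)
        clear_pending_host_ipi();   /* placeholder for the real flush */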
-
- 02 Dec 2014, 3 commits
-
-
By Mahesh Salgaonkar

Clean up the OpalMCE_* definitions/declarations and other related code which is not used anymore.

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Gavin Shan

On the PowerNV platform, PHB diag-data is dumped after stopping the device drivers. In the case of recursive EEH errors, the kernel usually crashes before dumping the PHB diag-data for the second EEH error, and it's hard to locate the root cause of the second error without it. The patch adds one more EEH option, "eeh=early_log", which dumps PHB diag-data immediately once a frozen PE is detected, in order to capture the PHB diag-data for the second EEH error.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Gavin Shan

The patch introduces the additional flag EEH_PE_RESET to indicate that the corresponding PE is under reset. In turn, the PE state retrieval backend on the PowerNV platform can return an unfrozen state so that the EEH core can move forward. The flag EEH_PE_CFG_BLOCKED isn't the correct one for this purpose.

In the PCI passthrough case, the problem is worse: the guest doesn't recover from the 6th EEH error. The PE is left in the isolated (frozen) and config-blocked state on Broadcom adapters, and we can't retrieve the PE's state correctly any more, even from the host side via sysfs /sys/bus/pci/devices/xxx/eeh_pe_state.

Reported-by: Rajeshkumar Subramanian <rajeshkumars@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 01 Dec 2014, 1 commit
-
-
By Neelesh Gupta

The current driver probe() function assumes the sensor device is always present and gets executed every time the driver is loaded, even though the appropriate hardware may not be present. So, move the platform device creation into the platform init code and use the 'id_table' to check whether the device is present or not.

Signed-off-by: Neelesh Gupta <neelegup@linux.vnet.ibm.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
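A hedged sketch of the id_table-based binding this describes (the device name string and function names are assumptions for illustration):

    #include <linux/module.h>
    #include <linux/platform_device.h>

    static int opal_sensor_probe(struct platform_device *pdev)
    {
        /* Only reached if the platform init code actually created the
         * "opal-sensor" device, i.e. the hardware is present. */
        return 0;
    }

    static const struct platform_device_id opal_sensor_ids[] = {
        { .name = "opal-sensor" },  /* assumed device name */
        { }
    };
    MODULE_DEVICE_TABLE(platform, opal_sensor_ids);

    static struct platform_driver opal_sensor_driver = {
        .probe    = opal_sensor_probe,
        .id_table = opal_sensor_ids,
        .driver   = { .name = "opal-sensor" },
    };
    module_platform_driver(opal_sensor_driver);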
-
- 27 Nov 2014, 2 commits
-
-
By Gavin Shan

The flag passed to ioda_eeh_phb_reset() should be EEH_RESET_DEACTIVATE, which is translated to OPAL_DEASSERT_RESET or something else by the EEH backend accordingly. The patch replaces OPAL_DEASSERT_RESET with EEH_RESET_DEACTIVATE for ioda_eeh_phb_reset().

Cc: stable@vger.kernel.org
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Mahesh Salgaonkar

The current HMI event structure is an ABI and carries a version field to accommodate future changes without affecting/rearranging current structure members that are valid for previous versions. The current version check "if (hmi_evt->version != OpalHMIEvt_V1)" doesn't accommodate the fact that the version number may change in the future. If firmware starts returning an HMI event with version > 1, this check will fail and no HMI information will be printed on older kernels. This patch fixes that issue.

Cc: stable@vger.kernel.org # 3.17+
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
[mpe: Reword changelog]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
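In effect the fix is to treat the version as a minimum rather than requiring an exact match; a minimal sketch (the warning text is illustrative):

    /* Accept any event version we know how to decode; fields defined for
     * V1 remain valid in later versions of the ABI. */
    if (hmi_evt->version < OpalHMIEvt_V1) {
        pr_err("HMI Interrupt: unsupported event version %d\n",
               hmi_evt->version);
        return;
    }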
-
- 24 Nov 2014, 1 commit
-
-
By Benjamin Herrenschmidt

Instead of the arch-specific quirk, which we are deprecating and which drivers don't understand.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: <stable@vger.kernel.org>
-
- 23 Nov 2014, 1 commit
-
-
By Jiang Liu

Rename write_msi_msg() to pci_write_msi_msg() to mark it as PCI-specific.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Yingjoe Chen <yingjoe.chen@mediatek.com>
Cc: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 19 Nov 2014, 1 commit
-
-
By Michael Ellerman

Although we are now selecting NO_BOOTMEM, we still have some traces of bootmem lying around. That is because even with NO_BOOTMEM there is still a shim that converts bootmem calls into memblock calls, but ultimately we want to remove all traces of bootmem.

Most of the patch is conversions from alloc_bootmem() to memblock_virt_alloc(). In general a call such as:

  p = (struct foo *)alloc_bootmem(x);

becomes:

  p = memblock_virt_alloc(x, 0);

We don't need the cast because memblock_virt_alloc() returns a void *. The alignment value of zero tells memblock to use the default alignment, which is SMP_CACHE_BYTES, the same value alloc_bootmem() uses.

We remove a number of NULL checks on the result of memblock_virt_alloc(). That is because memblock_virt_alloc() will panic if it can't allocate, in exactly the same way as alloc_bootmem(), so the NULL checks are and always have been redundant.

The memory returned by memblock_virt_alloc() is already zeroed, so we remove several memsets of the result of memblock_virt_alloc().

Finally we convert a few uses of __alloc_bootmem(x, y, MAX_DMA_ADDRESS) to just plain memblock_virt_alloc(). We don't use memblock_alloc_base() because MAX_DMA_ADDRESS is ~0ul on powerpc, so limiting the allocation to that is pointless; 16XB ought to be enough for anyone.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
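Putting the pieces described above together, a typical conversion looks roughly like this (the struct and variable names are illustrative):

    /* Before: bootmem shim, explicit cast, redundant NULL check and memset */
    p = (struct foo *)alloc_bootmem(sizeof(*p));
    if (!p)
        panic("out of memory");
    memset(p, 0, sizeof(*p));

    /* After: memblock_virt_alloc() panics on failure and returns zeroed
     * memory; alignment 0 means the default SMP_CACHE_BYTES. */
    p = memblock_virt_alloc(sizeof(*p), 0);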
-
- 17 Nov 2014, 1 commit
-
-
By Neelesh Gupta

The patch implements the OPAL rtc driver that binds with the rtc driver subsystem. The driver uses the platform device infrastructure to probe the rtc device and register it with the rtc class framework. 'wakeup' is supported depending on whether the property 'has-tpo' is present in the OF node. It provides a way to load the generic rtc driver in the absence of an OPAL driver.

The patch also moves the existing OPAL rtc get/set time interfaces to the new driver and exposes the necessary OPAL calls using EXPORT_SYMBOL_GPL.

Test results:
-------------
Host:
  [root@tul169p1 ~]# ls -l /sys/class/rtc/
  total 0
  lrwxrwxrwx 1 root root 0 Oct 14 03:07 rtc0 -> ../../devices/opal-rtc/rtc/rtc0
  [root@tul169p1 ~]# cat /sys/devices/opal-rtc/rtc/rtc0/time
  08:10:07
  [root@tul169p1 ~]# echo `date '+%s' -d '+ 2 minutes'` > /sys/class/rtc/rtc0/wakealarm
  [root@tul169p1 ~]# cat /sys/class/rtc/rtc0/wakealarm
  1413274345
  [root@tul169p1 ~]#
FSP:
  $ smgr mfgState
  standby
  $ rtim timeofday
  System time is valid: 2014/10/14 08:12:04.225115
  $ smgr mfgState
  ipling
  $

CC: devicetree@vger.kernel.org
CC: tglx@linutronix.de
CC: rtc-linux@googlegroups.com
CC: a.zummo@towertech.it
Signed-off-by: Neelesh Gupta <neelegup@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 14 Nov 2014, 8 commits
-
-
By Gavin Shan

If there are no PHBs under the P5IOC2 HUB device tree node, we should bail early to avoid a zero divisor and allocating TCE tables.

Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan

When freezing compound PEs in pnv_ioda_freeze_pe(), we should bail on an illegal master PE. We needn't freeze the slave PEs because they should already have been put into the frozen state by hardware.

Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan

Nested if statements are always bad, and the patch avoids one by checking the PHB type and bailing out early if necessary.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan

Commit 262af557 ("powerpc/powernv: Enable M64 aperatus for PHB3") introduced compound PEs in order to support the M64 aperatus on PHB3. However, we never configured the PELTV for compound PEs. The patch fixes that: the parent PE can freeze all child compound PEs, and any compound PE affects the whole group.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan

The patch initializes the PE instance when reserving the PE number, to keep things consistent with what we did before. Also, it replaces the iteration over the bridge's windows with the preferred way.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan

The patch renames alloc_m64_pe() to reserve_m64_pe() to reflect its real usage: we reserve PE numbers for M64 segments in advance and then pick up the reserved PE numbers when building the mapping between PE numbers and M64 segments.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Gavin Shan

The M64 resource should be removed if we don't have a hook to initialize it, or (not and) if we fail to do so.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
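In code terms the fix is an '||' where an '&&' would leave a half-set-up window in place; a hedged sketch (phb->init_m64 is the hook in question, the resource handling is simplified):

    /* Drop the M64 window if there is no init hook, or if the hook fails. */
    if (!phb->init_m64 || phb->init_m64(phb))
        res->flags = 0;     /* mark the M64 resource unusable */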
-
By Gavin Shan

The patch checks the PHB type a bit earlier to save a few cycles on P7, because we don't support M64 for P7IOC no matter what OPAL firmware we have.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 12 Nov 2014, 1 commit
-
-
By Jeremy Kerr

Recent OPAL firmware adds a couple of functions to send and receive IPMI messages:

  https://github.com/open-power/skiboot/commit/b2a374da

This change updates the token list and wrappers to suit, and adds the platform devices for any IPMI interfaces.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 10 Nov 2014, 2 commits
-
-
By Anton Blanchard

Commit d4fe0965 ("powerpc/jump_label: use HAVE_JUMP_LABEL?") missed a few conversions. Change the remaining uses of CONFIG_JUMP_LABEL to HAVE_JUMP_LABEL.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Anton Blanchard

Lots of places included bootmem.h even when not using bootmem.

Signed-off-by: Anton Blanchard <anton@samba.org>
Tested-by: Emil Medve <Emilian.Medve@Freescale.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 03 Nov 2014, 2 commits
-
-
By Alexander Graf

The generic Linux framework to power off the machine is a function pointer called pm_power_off. The trick about this pointer is that device drivers can potentially implement it rather than board files.

Today on powerpc we set pm_power_off to invoke our generic full-machine power-off logic, which then calls ppc_md.power_off to invoke machine-specific power off. However, when we want to add a power-off GPIO via the "gpio-poweroff" driver, this house of cards falls apart. That driver only registers itself if pm_power_off is NULL, to ensure it doesn't override board-specific logic. However, since we always set pm_power_off to the generic power-off logic (which will just not power off the machine if no ppc_md.power_off call is implemented), we can't implement power off via the generic GPIO power-off driver.

To fix this up, let's get rid of the ppc_md.power_off logic and just always use pm_power_off as was intended. Then individual drivers such as the GPIO power-off driver can implement power-off logic via that function pointer.

With this patch set applied and a few patches on top of QEMU that implement a power-off GPIO on the virt e500 machine, I can successfully turn off my virtual machine after halt.

Signed-off-by: Alexander Graf <agraf@suse.de>
[mpe: Squash into one patch and update changelog based on cover letter]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
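The end state described is the standard pattern in which platform code or a driver installs its own handler directly into pm_power_off; a minimal sketch (the board-specific names are hypothetical):

    #include <linux/init.h>
    #include <linux/pm.h>

    static void my_board_power_off(void)
    {
        /* poke the board-specific register/GPIO that cuts power */
    }

    static int __init my_board_poweroff_init(void)
    {
        /* Leave the hook alone if another driver (e.g. gpio-poweroff)
         * has already claimed it. */
        if (!pm_power_off)
            pm_power_off = my_board_power_off;
        return 0;
    }
    device_initcall(my_board_poweroff_init);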
-
By Christoph Lameter

This still has not been merged and now powerpc is the only arch that does not have this change. Sorry about missing linuxppc-dev before.

V2->V2
- Fix up to work against 3.18-rc1

__get_cpu_var() is used for multiple purposes in the kernel source. One of them is address calculation via the form &__get_cpu_var(x). This calculates the address for the instance of the percpu variable of the current processor based on an offset.

Other use cases are for storing and retrieving data from the current processor's percpu area. __get_cpu_var() can be used as an lvalue when writing data or on the right side of an assignment.

__get_cpu_var() is defined as:

    #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))

__get_cpu_var() always only does an address determination. However, store and retrieve operations could use a segment prefix (or global register on other platforms) to avoid the address calculation.

this_cpu_write() and this_cpu_read() can directly take an offset into a percpu area and use optimized assembly code to read and write per cpu variables.

This patch converts __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that use the offset. Thereby address calculations are avoided and fewer registers are used when code is generated.

At the end of the patch set all uses of __get_cpu_var have been removed, so the macro is removed too.

The patch set includes passes over all arches as well. Once these operations are used throughout, then specialized macros can be defined in non-x86 arches as well in order to optimize per cpu access by f.e. using a global register that may be set to the per cpu base.

Transformations done to __get_cpu_var():

1. Determine the address of the percpu instance of the current processor.

    DEFINE_PER_CPU(int, y);
    int *x = &__get_cpu_var(y);

  Converts to

    int *x = this_cpu_ptr(&y);

2. Same as #1 but this time an array structure is involved.

    DEFINE_PER_CPU(int, y[20]);
    int *x = __get_cpu_var(y);

  Converts to

    int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processor's instance of a per cpu variable.

    DEFINE_PER_CPU(int, y);
    int x = __get_cpu_var(y)

  Converts to

    int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct

    DEFINE_PER_CPU(struct mystruct, y);
    struct mystruct x = __get_cpu_var(y);

  Converts to

    memcpy(&x, this_cpu_ptr(&y), sizeof(x));

5. Assignment to a per cpu variable

    DEFINE_PER_CPU(int, y)
    __get_cpu_var(y) = x;

  Converts to

    __this_cpu_write(y, x);

6. Increment/Decrement etc of a per cpu variable

    DEFINE_PER_CPU(int, y);
    __get_cpu_var(y)++

  Converts to

    __this_cpu_inc(y)

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: Paul Mackerras <paulus@samba.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
[mpe: Fix build errors caused by set/or_softirq_pending(), and rework assignment in __set_breakpoint() to use memcpy().]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 31 Oct 2014, 1 commit
-
-
By Benjamin Herrenschmidt

Endian is hard, especially when I designed a stupid FW interface, and I should know better... oh well, this is attempt #2 at fixing this properly. This time it seems to work with all access sizes and I can run my flashing tool (which exercises all sorts of access sizes and types to access the SPI controller in the BMC) just fine.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: stable@vger.kernel.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-