- 24 January 2017, 1 commit
Submitted by Gavin Shan

In __eeh_clear_pe_frozen_state(), we should pass the flag's value rather than its address to eeh_unfreeze_pe(). The isolated flag is only cleared when __eeh_clear_pe_frozen_state() returns no error, and we have never observed an error from that function, so the isolated flag has always been cleared and the misused @flag caused no real problem. Fix the code by passing the value of @flag to eeh_unfreeze_pe().

Fixes: 5cfb20b9 ("powerpc/eeh: Emulate EEH recovery for VFIO devices")
Cc: stable@vger.kernel.org # v3.18+
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
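The bug class is easy to reproduce in plain C. Below is a minimal userspace sketch, not the kernel code: the two functions are only stand-ins for eeh_unfreeze_pe() and __eeh_clear_pe_frozen_state(). Any valid address converts to a true boolean, so the callee always sees the flag as set.

```c
#include <stdbool.h>
#include <stdio.h>

/* stand-in for eeh_unfreeze_pe(pe, clear_sw_state) */
static void unfreeze(bool clear_sw_state)
{
    printf("clear_sw_state = %s\n", clear_sw_state ? "set" : "clear");
}

/* stand-in for __eeh_clear_pe_frozen_state(pe, flag); @flag arrives as void * */
static void clear_frozen_state(void *flag)
{
    unfreeze(flag);          /* buggy: passes the pointer, always "set" */
    unfreeze(*(bool *)flag); /* fixed: passes the value, here "clear"   */
}

int main(void)
{
    bool clear_sw_state = false;

    clear_frozen_state(&clear_sw_state);
    return 0;
}
```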
- 01 December 2016, 1 commit
Submitted by Andrew Donnellan

In eeh_reset_device(), we take the pci_rescan_remove_lock immediately after we call eeh_reset_pe() to reset the PCI controller. We then call eeh_clear_pe_frozen_state(), which can return an error. In this case, we bail out of eeh_reset_device() without calling pci_unlock_rescan_remove(). Add a call to pci_unlock_rescan_remove() in the eeh_clear_pe_frozen_state() error path so that we don't cause a deadlock later on.

Reported-by: Pradipta Ghosh <pradghos@in.ibm.com>
Fixes: 78954700 ("powerpc/eeh: Avoid I/O access during PE reset")
Cc: stable@vger.kernel.org # v3.16+
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
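A simplified sketch of the fixed error path, with the surrounding recovery steps trimmed; the helper names and signatures follow the commit message rather than the exact kernel source. The point is that every exit taken after pci_lock_rescan_remove() must drop the lock again.

```c
static int eeh_reset_device_sketch(struct eeh_pe *pe)
{
    int rc;

    rc = eeh_reset_pe(pe);              /* reset the PCI controller      */
    if (rc)
        return rc;

    pci_lock_rescan_remove();           /* serialise against PCI hotplug */

    rc = eeh_clear_pe_frozen_state(pe, false);
    if (rc) {
        pci_unlock_rescan_remove();     /* the call this fix adds        */
        return rc;
    }

    /* ... restore BARs, re-add devices ... */

    pci_unlock_rescan_remove();
    return 0;
}
```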
- 22 November 2016, 2 commits
Submitted by Russell Currey

eeh_pe_reset() and eeh_reset_pe() are two different functions in the same file which do mostly the same thing. Not only is this confusing, but it potentially causes discrepancies in functionality, notably in eeh_reset_pe(), which does not check return values for failure. Refactor this into the following:

 - eeh_pe_reset(): stays as is, performs a single operation, exported
 - eeh_pe_reset_full(): new, full reset process that calls eeh_pe_reset()
 - eeh_reset_pe(): removed and replaced by eeh_pe_reset_full()
 - eeh_reset_pe_once(): removed

Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
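The shape of the new wrapper can be sketched as follows. This is an illustrative outline only, assuming eeh_pe_reset(pe, option) and the existing EEH_RESET_HOT/EEH_RESET_DEACTIVATE options; the real eeh_pe_reset_full() also waits for the PE state and handles more reset types.

```c
static int eeh_pe_reset_full_sketch(struct eeh_pe *pe)
{
    int i, ret = -EIO;

    for (i = 0; i < 3; i++) {           /* bounded number of attempts */
        ret = eeh_pe_reset(pe, EEH_RESET_HOT);
        if (ret)
            continue;                   /* assert failed, try again   */

        ret = eeh_pe_reset(pe, EEH_RESET_DEACTIVATE);
        if (ret)
            continue;                   /* deassert failed, try again */

        break;                          /* both steps succeeded       */
    }

    return ret;                         /* every failure is reported  */
}
```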
Submitted by Russell Currey

PHB, PE (and by association MVE) numbers are printed as a mix of decimal and hexadecimal throughout the kernel. This can be misleading, so make them all hexadecimal. Standardising on hex instead of dec because:

 - PHB numbers are presented in hex in sysfs/debugfs (and lspci, etc)
 - PE numbers are presented as hex in sysfs and parsed in hex in debugfs

The only place I think this could cause confusion is the messages during boot, i.e.

    pci 000a:01 : [PE# 000] Secondary bus 1 associated with PE#0

which can be a quick way to check PE numbers. pe_level_printk() will only print two characters instead of three, so the above would be

    pci 000a:01 : [PE# 00] Secondary bus 1 associated with PE#0

which gives a hint it's in hex.

Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
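For illustration, a tiny standalone C program showing the width difference the boot messages above rely on; the exact format strings used by pe_level_printk() are an assumption here.

```c
#include <stdio.h>

int main(void)
{
    int pe_no = 10;

    printf("[PE# %03d]\n", pe_no);  /* decimal, three digits: "[PE# 010]" */
    printf("[PE# %02x]\n", pe_no);  /* hex, two digits:       "[PE# 0a]"  */
    return 0;                       /* the narrower field hints at hex    */
}
```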
- 23 September 2016, 2 commits
Submitted by Russell Currey

In eeh_handle_special_event(), eeh_pe_bus_get() is called before eeh_report_failure() is called on every device under a PE. If a PE was missing a bus for some reason, the error would occur before the failure was reported, even though eeh_report_failure() doesn't require a bus. Fix this by moving the bus retrieval and error check after the eeh_report_failure() calls.

Signed-off-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Submitted by Russell Currey

eeh_pe_bus_get() can return NULL if a PCI bus isn't found for a given PE. Some callers don't check this and can cause a NULL pointer dereference under certain circumstances. Fix this by checking for NULL everywhere eeh_pe_bus_get() is called.

Fixes: 8a6b1bc7 ("powerpc/eeh: EEH core to handle special event")
Cc: stable@vger.kernel.org # v3.11+
Signed-off-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
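The pattern applied at each call site looks roughly like the kernel-context sketch below; the message text and the enclosing function are illustrative, not copied from the kernel.

```c
static void handle_pe_sketch(struct eeh_pe *pe)
{
    struct pci_bus *bus = eeh_pe_bus_get(pe);

    if (!bus) {
        pr_err("EEH: no PCI bus found for PE, giving up\n");
        return;         /* bail out instead of dereferencing NULL */
    }

    /* safe to use bus->number, walk bus->devices, etc. from here on */
}
```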
- 28 June 2016, 1 commit
Submitted by Gavin Shan

When calling eeh_rmv_device() in eeh_reset_device() for the partial hotplug case, @rmv_data, not its address, is the proper argument. Otherwise, the stack frame is corrupted when writing to @rmv_data (actually its address) in eeh_rmv_device(), which results in the kernel crash that was observed. Fix the issue by passing @rmv_data, not its address, to eeh_rmv_device() in eeh_reset_device().

Fixes: 67086e32 ("powerpc/eeh: powerpc/eeh: Support error recovery for VF PE")
Reported-by: Pridhiviraj Paidipeddi <ppaidipe@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 17 June 2016, 1 commit
Submitted by Gavin Shan

The PE primary bus cannot be obtained from its child devices when a full hotplug is done in error recovery, so the primary bus is cached, as done in commit 05ba75f8 ("powerpc/eeh: Fix stale cached primary bus"). In eeh_reset_device(), the flag (EEH_PE_PRI_BUS) is cleared before the PCI hot remove. eeh_pe_bus_get() then returns NULL as the PE primary bus in pnv_eeh_reset() and eventually crashes the kernel. Fix the issue by clearing the flag (EEH_PE_PRI_BUS) before the PCI hot add instead. With that, the PowerNV EEH reset backend (pnv_eeh_reset()) can get a valid PE primary bus through eeh_pe_bus_get().

Fixes: 67086e32 ("powerpc/eeh: powerpc/eeh: Support error recovery for VF PE")
Reported-by: Pridhiviraj Paidipeddi <ppaiddipe@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 14 June 2016, 1 commit
Submitted by Michael Ellerman

Signed-off-by: Andrea Gelmini <andrea.gelmini@gelma.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 12 May 2016, 3 commits
Submitted by Gavin Shan

The function eeh_pe_reset_and_recover() is used to recover from EEH errors when a passthrough device is transferred to a guest and back, meaning the device's driver is vfio-pci or none. In both cases, the handlers triggered by eeh_report_reset() and eeh_report_resume() shouldn't be called, so skip the error handlers from eeh_report_reset() and eeh_report_resume() in this path.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Submitted by Gavin Shan

The function eeh_pe_reset_and_recover() is used to recover from EEH errors when a passthrough device is transferred to a guest and back. The content of the device's config space will be lost on the PE reset issued in the middle of the recovery, so the function saves/restores it before/after the reset. However, config access to some adapters, like the Broadcom BCM5719, at this point causes a fenced PHB: config space is blocked, so we save 0xFF's that are then restored at a later point. The memory BARs are totally corrupted, causing another EEH error upon access to one of them. Resolve this by restoring the config space on adapters like the BCM5719 from the content saved to the EEH device when it was populated.

Fixes: 5cfb20b9 ("powerpc/eeh: Emulate EEH recovery for VFIO devices")
Cc: stable@vger.kernel.org #v3.18+
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
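The idea can be illustrated with a hypothetical helper (not the kernel's actual save/restore path): write back a copy of config space that was cached while the device was healthy, rather than the 0xFF's that were read while the PHB was fenced.

```c
/* restore 256 bytes of type-0 config space from a known-good cached copy */
static void restore_cached_config(struct pci_dev *pdev, const u32 *saved)
{
    int i;

    for (i = 0; i < 64; i++)
        pci_write_config_dword(pdev, i * 4, saved[i]);
}
```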
Submitted by Gavin Shan

The function eeh_pe_reset_and_recover() is used to recover from EEH errors when a passthrough device is transferred to a guest and back, meaning the device's driver is vfio-pci or none. When the driver is vfio-pci, which provides an error_detected() handler only, the handler simply stops the guest, which is not the expected behaviour. On the other hand, no error handlers will be called if we don't have a bound driver. Skip the error handler in eeh_pe_reset_and_recover() that reports the error to the device driver, to avoid the exceptional behaviour.

Fixes: 5cfb20b9 ("powerpc/eeh: Emulate EEH recovery for VFIO devices")
Cc: stable@vger.kernel.org #v3.18+
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 11 May 2016, 1 commit
Submitted by Gavin Shan

This renames pcibios_{add,remove}_pci_devices() to avoid conflicts with the names of the weak functions in the PCI subsystem, which have the prefix "pcibios". No logical changes introduced.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 09 March 2016, 3 commits
Submitted by Gavin Shan

When we have a partial hotplug as part of the error recovery on a PF, the VFs that are bound to the vfio-pci driver would experience hotplug too. That's not allowed. Check whether the VF PE has been passed through; if it has, leave the VF in place without removing it.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Submitted by Gavin Shan

When an EEH error happens on the parent PE of PEs that have been passed through to a guest, the error is propagated to the guest domain and the VFIO driver's error handlers are called. That's not correct, as an error in the host domain shouldn't be propagated to guests and affect them. Add one more limitation when calling EEH error handlers: if the PE has been passed through to a guest, the error handlers won't be called.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Submitted by Wei Yang

PFs are enumerated on the PCI bus, while VFs are created by the PF's driver. EEH recovery has two cases:

1. The device and driver are EEH aware: the error handlers are called.
2. The device and driver are not EEH aware: un-plug the device and plug it again by enumerating it.

The special thing happens in the second case. For a PF, we can use the ordinary PCI core to re-enumerate the bus, while for VFs we need to record which VFs were un-plugged and then plug them again. The patch also caches the VF index in pci_dn, which can be used to calculate the VF's bus, device and function number. This information helps to locate the VF's PCI device instance when doing hotplug during EEH recovery if necessary.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
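How a cached VF index maps back to a bus/device/function follows the SR-IOV routing-ID rule (PF routing ID + first-VF offset + index × stride). The helper below only illustrates that arithmetic; the field names actually kept in pci_dn are not shown here.

```c
#include <linux/pci.h>      /* PCI_SLOT(), PCI_FUNC() */

/* routing ID of the vf_index'th VF, given the PF's bus/devfn and the
 * SR-IOV "first VF offset" and "VF stride" read from the PF           */
static u16 vf_routing_id(u8 pf_bus, u8 pf_devfn, u16 offset, u16 stride,
                         int vf_index)
{
    u16 pf_rid = (pf_bus << 8) | pf_devfn;

    return pf_rid + offset + vf_index * stride;
}

/* bus = rid >> 8, devfn = rid & 0xff, then PCI_SLOT()/PCI_FUNC() as usual */
```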
- 22 February 2016, 1 commit
Submitted by Gavin Shan

During error recovery, a device can be removed as part of the partial hotplug. The criterion used to decide on partial hotplug is: if the device driver provides the error_detected(), slot_reset() and resume() callbacks, it's immune from hotplug; otherwise, it's going to experience partial hotplug during EEH recovery. But that criterion isn't correct enough: the mlx4_core driver for Mellanox adapters provides the error_detected() and slot_reset() callbacks, but no resume(), yet those Mellanox adapters shouldn't be subjected to the partial hotplug. Fix the criterion to a practical one: an adapter whose driver provides error_detected() and slot_reset() is immune from partial hotplug; resume() isn't mandatory.

Fixes: f2da4ccf ("powerpc/eeh: More relaxed hotplug criterion")
Cc: stable@vger.kernel.org #v4.4+
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
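The corrected criterion can be expressed as a small predicate over struct pci_error_handlers; the helper name here is illustrative, not the kernel's.

```c
/* true if the bound driver can handle EEH recovery without a hotplug */
static bool driver_is_eeh_aware(const struct pci_driver *drv)
{
    return drv && drv->err_handler &&
           drv->err_handler->error_detected &&
           drv->err_handler->slot_reset;
    /* note: err_handler->resume is intentionally not required */
}
```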
- 15 February 2016, 1 commit
Submitted by Gavin Shan

When a PE is created, its primary bus is cached in pe->bus, and at a later point the cached primary bus is returned from eeh_pe_bus_get(). However, we can end up with a stale cached primary bus and run into a kernel crash in one case: a full hotplug as part of fenced-PHB error recovery releases all PCI buses under the PHB at unplugging time and recreates them at plugging time, so pe->bus still points at a PCI bus that has been released. Add another PE flag (EEH_PE_PRI_BUS) to represent the validity of pe->bus: pe->bus is updated when its first child EEH device comes online and the flag is set; before unplugging in a full hotplug for error recovery, the flag is cleared.

Fixes: 8cdb2833 ("powerpc/eeh: Trace PCI bus from PE")
Cc: stable@vger.kernel.org #v3.11+
Reported-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reported-by: Pradipta Ghosh <pradghos@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Tested-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
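A pseudocode sketch of the flag protocol described above; the flag and field names follow the commit message, while the surrounding statements are illustrative rather than the actual kernel code.

```c
/* when the PE's first child device comes online */
pe->bus = pdev->bus;            /* cache the primary bus           */
pe->state |= EEH_PE_PRI_BUS;    /* the cached pointer is now valid */

/* just before the full hotplug tears the buses down during recovery */
pe->state &= ~EEH_PE_PRI_BUS;   /* the cached pointer may go stale */

/* readers such as eeh_pe_bus_get() trust pe->bus only while the flag is set */
```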
- 11 December 2015, 1 commit
Submitted by Bjorn Helgaas

Bit 7 of the "Header Type" register indicates a multi-function device when set. Bits 0-6 contain encoded values, where 0x1 indicates a PCI-PCI bridge. It is incorrect to test this as though it were a mask. For example, while the PCI 3.0 spec only defines the values 0x0, 0x1 and 0x2, it's conceivable that a future spec could define 0x3 to mean something else; then a test for "(hdr_type & 0x7f) & PCI_HEADER_TYPE_BRIDGE" would incorrectly succeed for this new 0x3 header type. Test bits 0-6 of the Header Type for equality with PCI_HEADER_TYPE_BRIDGE.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
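The difference is easy to see in a standalone snippet, using the hypothetical future encoding 0x3 mentioned above (PCI_HEADER_TYPE_BRIDGE is the encoded value 0x01):

```c
#include <stdio.h>

#define PCI_HEADER_TYPE_BRIDGE 0x01     /* an encoded value, not a bit mask */

int main(void)
{
    unsigned char hdr_type = 0x03;      /* hypothetical future header type  */

    /* wrong: treats the encoding as a mask, so 0x3 also "matches" */
    printf("mask test:     %d\n",
           ((hdr_type & 0x7f) & PCI_HEADER_TYPE_BRIDGE) != 0);

    /* right: bits 0-6 must equal the bridge encoding exactly */
    printf("equality test: %d\n",
           (hdr_type & 0x7f) == PCI_HEADER_TYPE_BRIDGE);

    return 0;
}
```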
- 09 December 2015, 1 commit
Submitted by Andrew Donnellan

This reverts commit 527d10ef.

The reverted commit breaks cxlflash devices following an EEH reset (and possibly other cxl devices, however this has not been tested). The reverted commit changed the behaviour of eeh_reset_device() so that PHB PEs are not unfrozen following the completion of the reset. This should not be problematic, as no device resources should have been associated with the PHB PE.

However, when attempting to load the cxlflash driver after a reset, the driver attempts to read Vital Product Data through a call to pci_read_vpd() (which is called on the physical cxl device, not on the virtual AFU device). pci_read_vpd() in turn attempts to read from the cxl device's config space. This fails, as the PE it's trying to read from is still frozen. In turn, the driver gets an -ENODEV and fails to initialise.

It appears this issue only affects some parts of the VPD area, as "lspci -vvv", which only reads a subset of the VPD bytes, is not broken by the original patch.

At this stage, we don't fully understand why we're trying to read a frozen PE, and we don't know how this affects other cxl devices. It is possible that there is an underlying bug in the cxl driver or the powerpc CAPI support code, or alternatively a bug in the PCI resource allocation/mapping code that is incorrectly mapping resources to PE#0. As such, this fix is incomplete; however, it is necessary to prevent a serious regression in CAPI support. In the meantime, revert the commit, especially as it was intended to be a non-functional change.

Cc: Gavin Shan <gwshan@linux.vnet.ibm.com>
Cc: Ian Munsie <imunsie@au1.ibm.com>
Cc: Daniel Axtens <dja@axtens.net>
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 21 October 2015, 3 commits
Submitted by Gavin Shan

On a fenced PHB, the error handlers in the drivers of its subordinate devices could return PCI_ERS_RESULT_CAN_RECOVER, indicating no reset will be issued during the recovery. That conflicts with the fact that a fenced PHB can't be recovered without a reset. Limit the return value from the error handlers in the drivers of the fenced PHB's subordinate devices to PCI_ERS_RESULT_NONE or PCI_ERS_RESULT_NEED_RESET, to ensure a reset will be issued during recovery.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
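The clamping amounts to something like the helper below (the function name is illustrative): anything other than "no answer" is coerced to "needs reset", so the fenced PHB is always reset.

```c
static pci_ers_result_t clamp_for_fenced_phb(pci_ers_result_t rc)
{
    if (rc == PCI_ERS_RESULT_NONE || rc == PCI_ERS_RESULT_NEED_RESET)
        return rc;                      /* already an allowed answer */

    return PCI_ERS_RESULT_NEED_RESET;   /* e.g. CAN_RECOVER -> reset */
}
```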
Submitted by Gavin Shan

Currently, we rely on the existence of struct pci_driver::err_handler to decide if the corresponding PCI device should be unplugged during EEH recovery (the partial hotplug case). However, that check is not sufficient: some device drivers implement only some of the EEH error handlers, just to collect diag-data, which means the driver still expects a hotplug to recover from the EEH error. Make the hotplug criterion more relaxed: if the device driver doesn't provide all the necessary EEH error handlers, it will experience hotplug during EEH recovery.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
[mpe: Minor change log rewording]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Submitted by Gavin Shan

On the PowerNV platform, a PE is kept in the frozen state until the PE reset completes, to avoid recursive EEH errors caused by MMIO access during the EEH reset. The PE's frozen state is cleared after the BARs of the PCI devices included in the PE are restored and enabled. However, we needn't clear the frozen state for a PHB PE explicitly at this point, as there is no real PE for a PHB PE. Since the PHB PE is always bound to PE#0, we actually clear PE#0, which is wrong, though it doesn't cause any problem. Check whether the PE is a PHB PE and don't clear the frozen state if it is.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 13 May 2015, 1 commit
Submitted by Wei Yang

To retrieve the PCI slot state, the EEH driver sets a timeout for it, but the current comments are not aligned with what the code does. Fix those comments so they match the code.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 24 March 2015, 1 commit
Submitted by Gavin Shan

The patch removes struct eeh_dev::dn and the corresponding helper functions eeh_dev_to_of_node() and of_node_to_eeh_dev(). Instead, eeh_dev_to_pdn() and pdn_to_eeh_dev() should be used to get the pdn, which might contain a device_node on the PowerNV platform.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
- 23 January 2015, 2 commits
Submitted by Gavin Shan

When a PE's frozen count hits the maximal allowed number of frozen times, which is currently 5, it is forced offline permanently. Once the PE is removed permanently, rebooting the machine is required to bring it back, which isn't convenient when testing EEH functionality. The patch exports the maximal allowed frozen times through a debugfs entry (/sys/kernel/debug/powerpc/eeh_max_freezes).

Requested-by: Ryan Grimm <grimm@linux.vnet.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
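Exporting such a tunable through debugfs can be sketched as below, assuming the limit is kept in a u32; the mode, parent dentry and exact wiring in the kernel may differ.

```c
#include <linux/debugfs.h>

static u32 eeh_max_freezes = 5;         /* default maximal frozen times */

static void eeh_debugfs_init_sketch(struct dentry *powerpc_root)
{
    /* shows up as /sys/kernel/debug/powerpc/eeh_max_freezes */
    debugfs_create_u32("eeh_max_freezes", 0600, powerpc_root,
                       &eeh_max_freezes);
}
```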
Submitted by Gavin Shan

When a specific PE's frozen count exceeds the maximal allowed number of frozen times (EEH_MAX_ALLOWED_FREEZES) while it is in the isolated or recovering state, the PE has implicitly been removed permanently. The patch introduces the flag EEH_PE_REMOVED to indicate that explicitly, so that we don't depend on the fixed maximal allowed times, which is made variable in a subsequent patch. The flag EEH_PE_REMOVED is expected to be marked on a PE whose frozen count exceeds the maximal allowed times, or which just failed recovery.

Requested-by: Ryan Grimm <grimm@linux.vnet.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 02 December 2014, 1 commit
Submitted by Gavin Shan

The patch introduces the additional flag EEH_PE_RESET to indicate that the corresponding PE is under reset. In turn, the PE retrieval backend on the PowerNV platform can return an unfrozen state so the EEH core keeps moving forward. The flag EEH_PE_CFG_BLOCKED isn't the correct one for this purpose. In the PCI passthrough case the problem is worse: the guest doesn't recover from the 6th EEH error, and the PE is left in the isolated (frozen) and config-blocked state on Broadcom adapters. We can't retrieve the PE's state correctly any more, even from the host side via sysfs /sys/bus/pci/devices/xxx/eeh_pe_state.

Reported-by: Rajeshkumar Subramanian <rajeshkumars@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
- 15 October 2014, 1 commit
Submitted by Gavin Shan

The flag EEH_PE_RESET indicates that the PE's config space is blocked during reset time. We potentially need to block a PE's config space at times other than reset, so it's reasonable to rename it to EEH_PE_CFG_BLOCKED to indicate its usage. There are no substantial code or logic changes in this patch.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 30 September 2014, 2 commits
Submitted by Gavin Shan

When enabling EEH functionality on passed-through devices (a PE) with VFIO, the devices in the PE are removed permanently from the guest side, and the PE remains in the frozen state. When returning the PE to the host, or restarting the guest again, we have a mechanism for unfreezing the PE by clearing the PESTA/B frozen bits. However, that's not enough for some adapters, such as those in the "lspci" output below, which require a hot reset on the parent bus to bring their firmware back to a workable state. Otherwise, those adapters won't be operative and the host (in the returning case) or the guest will fail to load the drivers for them.

0000:01:00.0 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (be3) (rev 02)
0000:01:00.0 0200: 19a2:0710 (rev 02)
0001:03:00.0 Ethernet controller: Emulex Corporation OneConnect NIC (Lancer) (rev 10)
0001:03:00.0 0200: 10df:e220 (rev 10)

The patch adds a mechanism to emulate EEH recovery (a hot reset on the parent PCI bus) at 3 gates to fix the issue: opening and releasing one adapter of the PE, and enabling EEH functionality on one adapter of the PE.

Reported-by: Murilo Fossa Vicentini <muvic@br.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Submitted by Gavin Shan

The patch uses eeh_unfreeze_pe() to replace the logic clearing frozen IO and DMA, in order to simplify the code.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
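What such a helper centralises is roughly the following; this is a simplified sketch using the existing EEH_OPT_THAW_* options, with error handling trimmed and the function name chosen for illustration.

```c
static int unfreeze_pe_sketch(struct eeh_pe *pe)
{
    int rc;

    rc = eeh_pci_enable(pe, EEH_OPT_THAW_MMIO);     /* clear frozen IO  */
    if (rc)
        return rc;

    return eeh_pci_enable(pe, EEH_OPT_THAW_DMA);    /* clear frozen DMA */
}
```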
- 05 August 2014, 1 commit
Submitted by Gavin Shan

pr_warn() is equivalent to pr_warning(), but the former is a bit more formal according to commit fc62f2f1 ("kernel.h: add pr_warn for symmetry to dev_warn, netdev_warn"). The patch replaces pr_warning() with pr_warn().

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
- 11 June 2014, 2 commits
Submitted by Gavin Shan

On the PowerNV platform, EEH errors are reported by IO accessors or by a poller driven by an interrupt. After a PE is isolated, we don't produce another EEH event for it. The current implementation can lose EEH events in this way: the interrupt handler queues one "special" event, which drives the poller; the EEH thread hasn't picked up the special event yet; an IO accessor kicks in, the frozen PE is marked as "isolated" and an EEH event is queued to the list; the EEH thread then runs because of the special event and purges all existing EEH events. However, we never produce another EEH event for the frozen PE, so the PE is left marked "isolated" with no EEH event to recover it. The patch fixes the issue by keeping EEH events for PEs that have been marked as "isolated", with the help of an additional "force" argument to eeh_remove_event().

Reported-by: Rolf Brudeseth <rolfb@us.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Submitted by Gavin Shan

Since commit cb523e09 ("powerpc/eeh: Avoid I/O access during PE reset"), the PE is kept in the frozen state at the hardware level until the PE reset is done completely, after which we explicitly clear the frozen state of the affected PE. However, there might be frozen child PEs of the affected PE, and we need to clear their frozen state as well; otherwise, the recovery is going to fail.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
- 28 April 2014, 5 commits
Submitted by Gavin Shan

When PCI_ERS_RESULT_CAN_RECOVER is returned from the device drivers, the EEH core should enable I/O and DMA for the affected PE. However, enabling DMA was missed in eeh_handle_normal_event(). Besides, the frozen state of the affected PE should be cleared after successful recovery, but it wasn't. The patch fixes both of these issues.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Submitted by Gavin Shan

The issue was detected in a somewhat complicated test case where we have multiple hierarchical PEs, as shown in the following figure:

+-----------------+
| PE#3      p2p#0 |
|           p2p#1 |
+-----------------+
         |
+-----------------+
| PE#4     pdev#0 |
|          pdev#1 |
+-----------------+

PE#4 (with 2 PCI devices) is the child of PE#3, which has 2 p2p bridges. We accidentally hit a less-known scenario: PE#4 was removed permanently from the system because of a permanent failure (e.g. exceeding the max allowed failure times in the last hour), then we detected EEH errors on PE#3 and tried to recover it. However, the eeh_dev instances for pdev#0/1 were not detached from PE#4, which was still connected to PE#3. All of that is because we rely on the count-based pcibios_release_device(), which isn't reliable enough. When doing recovery for PE#3, we still applied hotplug to PE#4 and pdev#0/1, which are not valid any more, and eventually ran into a kernel crash.

The patch fixes the above issue from two aspects. For unplug, we simply skip those permanently removed PEs, whose state is (EEH_PE_STATE_ISOLATED && !EEH_PE_STATE_RECOVERING) and whose frozen count is greater than EEH_MAX_ALLOWED_FREEZES. For plug, we mark all permanently removed EEH devices with EEH_DEV_REMOVED and return 0xFF's on reads of their PCI config space so that the PCI core will omit them.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
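The config-space side of this can be sketched as a small guard in the config-read path; the helper name and return value are illustrative, while the flag is the EEH_DEV_REMOVED introduced by this change.

```c
/* called from the platform config-read path before touching hardware */
static bool eeh_dev_hidden_sketch(struct eeh_dev *edev, u32 *val)
{
    if (edev && (edev->mode & EEH_DEV_REMOVED)) {
        *val = ~0U;         /* the device reads as all 0xFF's   */
        return true;        /* caller skips the real PCI access */
    }

    return false;
}
```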
Submitted by Gavin Shan

We have suffered from recursive frozen PEs a lot, caused by IO accesses during the PE reset. Ben came up with the good idea of keeping the PE frozen until the recovery (BAR restore) is done. With that, IO accesses during the PE reset are dropped by hardware and no longer incur recursive frozen PEs. The patch implements the idea: we don't clear the frozen state until the PE reset is done completely. During that period, the EEH core expects an unfrozen state from the backend to keep going, so we reuse the EEH_PE_RESET flag, which is set during the PE reset, to return a normal state from the backend. The side effect is that we clear the frozen state twice (by the PE reset and then explicitly), but that's harmless. We have some limitations on pHyp: pHyp doesn't allow enabling IO or DMA for an unfrozen PE, so we don't enable them for unfrozen PEs in eeh_pci_enable(), and we have to enable IO before grabbing logs on pHyp, otherwise 0xFF's are always returned from PCI config space. Also, eeh_pci_enable() returned the wrong value for the EEH_OPT_THAW_DMA case; the patch fixes that too.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Submitted by Gavin Shan

We've observed multiple PE reset failures caused by PCI-CFG access during that period. Potentially, some device drivers don't support EEH very well and can't put the device into a quiescent state before the PE reset, so those drivers might issue PCI-CFG accesses during the PE reset. We could also have PCI-CFG access from user space (e.g. "lspci"). Since access to a frozen PE should return 0xFF's, we can block PCI-CFG access during the period of the PE reset so that we won't get recursive EEH errors. The patch adds the flag EEH_PE_RESET, which is set for the duration of the PE reset; the PowerNV/pSeries PCI-CFG accessors use the flag to block PCI-CFG access accordingly.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Submitted by Gavin Shan

The PE state (for an eeh_pe instance) EEH_PE_PHB_DEAD duplicates EEH_PE_ISOLATED. Originally, those PHBs (PHB PEs) with EEH_PE_PHB_DEAD would be removed from the system; however, it's safe to replace that with EEH_PE_ISOLATED. The patch also clears EEH_PE_RECOVERING after a fenced PHB has been handled, whether it failed or succeeded. This makes the PHB PE state consistent with:

PHB functions normally              NONE
PHB has been removed                EEH_PE_ISOLATED
PHB fenced, recovery in progress    EEH_PE_ISOLATED | RECOVERING

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
- 05 March 2014, 1 commit
Submitted by Thomas Gleixner

Commit b8a9a11b ("powerpc: eeh: Kill another abuse of irq_desc") is missing some brackets ..... It's not a good idea to write patches in grumpy mode and then forget to at least compile test them, or to rely on the few eyeballs discussing that patch to spot it.....

Reported-by: fengguang.wu@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Gavin Shan <shangw@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: ppc <linuxppc-dev@lists.ozlabs.org>