- 08 July 2016, 5 commits
-
-
Committed by Ian Munsie
If a kernel context is initialised and does not have any AFU interrupts allocated it will cause a NULL pointer dereference when the context is detached since the irq_names list will not have been initialised. Move the initialisation of the irq_names list into the cxl_context_init routine so that it will be valid for the entire lifetime of the context and will not cause a NULL pointer dereference. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
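A minimal sketch of the idea, assuming illustrative struct and function names (only irq_names matches the commit text): initialise the list during context init so detach can always walk it safely, even when no AFU interrupts were ever allocated.

```c
#include <linux/list.h>

/* Field and function names other than irq_names are illustrative. */
struct cxl_ctx_irq_sketch {
	struct list_head irq_names;	/* names of allocated AFU IRQs */
};

static void ctx_init_sketch(struct cxl_ctx_irq_sketch *ctx)
{
	/* Done at init time, so the list is valid for the context's whole lifetime. */
	INIT_LIST_HEAD(&ctx->irq_names);
}

static void ctx_detach_sketch(struct cxl_ctx_irq_sketch *ctx)
{
	/* Safe even when no AFU interrupts were allocated: the list is simply empty. */
	while (!list_empty(&ctx->irq_names))
		list_del(ctx->irq_names.next);	/* free each name entry here */
}
```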
-
Committed by Ian Munsie
An issue was noted in our debug logs where the XSL would leave the RA bit asserted after an AFU reset operation, which would effectively prevent further AFU reset operations from working. Work around the issue by clearing the RA bit with an MMIO write if it is still asserted after any AFU control operation. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ian Munsie
The AFU disable operation has a bug where it will not clear the enable bit and therefore will have no effect. To date this has likely been masked by the fact that we perform an AFU reset before the disable, which also has the effect of clearing the enable bit, making the following disable operation effectively a noop on most hardware. This patch modifies the afu_control function to take a parameter of bits to clear from the AFU control register so that the disable operation can clear the appropriate bit. This bug was uncovered on the Mellanox CX4, which uses an XSL rather than a PSL. On the XSL the reset operation will not complete while the AFU is enabled, meaning the enable bit was still set at the start of the disable, and as a result this bug was hit and the disable also timed out. Because of this difference in behaviour between the PSL and XSL, this patch now makes the reset dependent on the card using a PSL to avoid waiting for a timeout on the XSL. It is entirely possible that we may be able to drop the reset altogether if it turns out we only ever needed it due to this bug - however I am not willing to drop it without further regression testing and have added comments to the code explaining the background. This also fixes a small issue where the AFU_Cntl register was read outside of the lock that protects it. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
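A rough sketch of the new afu_control shape, assuming a made-up register bit and a plain in-memory stand-in for the MMIO AFU_Cntl register; the real cxl code differs, but the point is the separate set/clear masks and the read-modify-write under the lock.

```c
#include <linux/mutex.h>
#include <linux/types.h>

#define AFU_CNTL_ENABLE_SKETCH	0x1ULL	/* illustrative bit, not the CAIA layout */

struct afu_sketch {
	struct mutex afu_cntl_lock;
	u64 afu_cntl;			/* stand-in for the MMIO AFU_Cntl register */
};

/*
 * Take both a "set" and a "clear" mask so the disable path can really
 * clear the enable bit; the old code could only set command bits.
 */
static void afu_control_sketch(struct afu_sketch *afu, u64 set, u64 clear)
{
	mutex_lock(&afu->afu_cntl_lock);	/* register is read and written under the lock */
	afu->afu_cntl = (afu->afu_cntl | set) & ~clear;
	mutex_unlock(&afu->afu_cntl_lock);
}

/* Disable then becomes: afu_control_sketch(afu, 0, AFU_CNTL_ENABLE_SKETCH); */
```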
-
Committed by Ian Munsie
The Scheduled Process Area is allocated dynamically with enough pages to fit at least as many processes as the AFU descriptor indicated. Since the calculation is non-trivial, it does this by calculating how many processes could fit in an allocation of a given order, and increasing that order until it can fit enough processes or hits the maximum supported size. Currently, it will start this search using a SPA of 2 pages instead of 1. This can waste a page of memory if the AFU's maximum number of supported processes was small enough to fit in one page. Fix the algorithm to start the search at 1 page. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
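A standalone illustration of the order search (plain userspace C, not the driver code), assuming a made-up bytes-per-process value and order cap; the real driver derives the sizing from the AFU descriptor. The point is simply that the loop now starts at order 0, i.e. one page.

```c
#include <stdio.h>

#define PAGE_SIZE_SKETCH	4096
#define SPA_MAX_ORDER_SKETCH	4	/* illustrative cap, not the driver's limit */

/* Start from a 1-page SPA (order 0) and grow until enough processes fit. */
static int spa_order_for(int max_procs, int bytes_per_proc)
{
	int order;

	for (order = 0; order <= SPA_MAX_ORDER_SKETCH; order++) {
		int procs = (PAGE_SIZE_SKETCH << order) / bytes_per_proc;

		if (procs >= max_procs)
			return order;
	}
	return SPA_MAX_ORDER_SKETCH;	/* cap: fit as many as possible */
}

int main(void)
{
	/* A small AFU now gets a single page (order 0) instead of two. */
	printf("order for 8 processes:   %d\n", spa_order_for(8, 128));
	printf("order for 500 processes: %d\n", spa_order_for(500, 128));
	return 0;
}
```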
-
Committed by Ian Munsie
If the AFU descriptor of an AFU directed AFU indicates that it supports 0 maximum processes, we will accept that value and attempt to use it. The SPA will still be allocated (with 2 pages due to another minor bug and room for 958 processes), and when a context is allocated we will pass the value of 0 to idr_alloc as the maximum. However, idr_alloc will treat that as meaning no maximum and will allocate a context number and we return a valid context. Conceivably, this could lead to a buffer overflow of the SPA if more than 958 contexts were allocated, however this is mitigated by the fact that there are no known AFUs in the wild with a bogus AFU descriptor like this, and that only the root user is allowed to flash an AFU image to a card. Add a check when validating the AFU descriptor to reject any with 0 maximum processes. We do still allow a dedicated process only AFU to indicate that it supports 0 contexts even though that is forbidden in the architecture, as in that case we ignore the value and use 1 instead. This is just on the off-chance that such a dedicated process AFU may exist (not that I am aware of any), since their developers are less likely to have cared about this value at all. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 29 June 2016, 2 commits
-
-
Committed by Suraj Jitindar Singh
Implement a new character device driver to allow access from user space to the operator panel display present on IBM Power Systems machines with FSPs. This will allow status information to be presented on the display which is visible to a user. The driver implements a character buffer which a user can read/write by accessing the device (/dev/op_panel). This buffer is then displayed on the operator panel display. Any attempt to write past the last character position will have no effect and attempts to write more characters than the size of the display will be truncated. The device may only be accessed by a single process at a time. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Suraj Jitindar Singh
An opal_msg of type OPAL_MSG_ASYNC_COMP contains the return code in the params[1] struct member. However this isn't intuitive or obvious when reading the code and requires that a user look at the skiboot documentation or opal-api.h to verify this. Add an inline function to get the return code from an opal_msg and update call sites accordingly. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
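A sketch of such a helper, assuming a trimmed-down message layout and an illustrative helper name; the real opal_msg definition and the added inline live in the powerpc headers.

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Trimmed to the fields the helper needs; illustrative, not the full opal_msg. */
struct opal_msg_sketch {
	__be32 msg_type;
	__be32 reserved;
	__be64 params[8];
};

/* For OPAL_MSG_ASYNC_COMP messages the return code lives in params[1]. */
static inline s64 opal_async_rc_sketch(const struct opal_msg_sketch *msg)
{
	return be64_to_cpu(msg->params[1]);
}
```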
-
- 28 June 2016, 2 commits
-
-
Committed by Michael Neuling
This provides AFU drivers a means to associate private data with a cxl context. This is particularly intended to make the new callbacks for driver specific events easier for AFU drivers to use, as they can easily get back to any private data structures they may use. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Philippe Bergheaud <felix@linux.vnet.ibm.com> Reviewed-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
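The idea in sketch form, with illustrative names; the real cxl API exposes equivalent set/get helpers for the context's private pointer.

```c
/* Illustrative context carrying an AFU-driver-owned, cxl-opaque pointer. */
struct cxl_ctx_priv_sketch {
	void *priv;
};

static void cxl_set_priv_sketch(struct cxl_ctx_priv_sketch *ctx, void *priv)
{
	ctx->priv = priv;	/* e.g. the AFU driver's per-context state */
}

static void *cxl_get_priv_sketch(struct cxl_ctx_priv_sketch *ctx)
{
	return ctx->priv;	/* callbacks use this to get back to driver state */
}
```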
-
Committed by Philippe Bergheaud
This adds an afu_driver_ops structure with fetch_event() and event_delivered() callbacks. An AFU driver such as cxlflash can fill this out and associate it with a context to enable passing custom AFU specific events to userspace. This also adds a new kernel API function cxl_context_pending_events(), that the AFU driver can use to notify the cxl driver that new specific events are ready to be delivered, and wake up anyone waiting on the context wait queue. The current count of AFU driver specific events is stored in the field afu_driver_events of the context structure. The cxl driver checks the afu_driver_events count during poll, select, read, etc. calls to check if an AFU driver specific event is pending, and calls fetch_event() to obtain and deliver that event. This way, the cxl driver takes care of all the usual locking semantics around these calls and handles all the generic cxl events, so that the AFU driver only needs to worry about its own events. fetch_event() returns a struct cxl_event_afu_driver_reserved, allocated by the AFU driver, and filled in with the specific event information and size. Total event size (header + data) should not be greater than CXL_READ_MIN_SIZE (4K). The cxl driver prepends an appropriate cxl event header, copies the event to userspace, and finally calls event_delivered() to return the status of the operation to the AFU driver. The event is identified by the context and cxl_event_afu_driver_reserved pointers. Since AFU drivers provide their own means for userspace to obtain the AFU file descriptor (i.e. cxlflash uses an ioctl on its scsi file descriptor to obtain the AFU file descriptor) and the generic cxl driver will never use this event, the ABI of the event is up to each individual AFU driver. Signed-off-by: Philippe Bergheaud <felix@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
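The shape of the callback interface described above, sketched with placeholder types; the real structure, event type and context type are defined in the cxl headers and may differ in detail.

```c
#include <linux/types.h>

/* Placeholder for the AFU-driver-allocated event described in the text. */
struct afu_event_sketch {
	size_t size;	/* header + data must stay under CXL_READ_MIN_SIZE (4K) */
	u8 data[];
};

struct afu_driver_ops_sketch {
	/* Called by cxl when poll/read sees pending AFU driver events. */
	struct afu_event_sketch *(*fetch_event)(void *ctx);
	/* Called once the event has been copied to userspace (or failed), with the rc. */
	void (*event_delivered)(void *ctx, struct afu_event_sketch *event, int rc);
};
```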
-
- 21 June 2016, 2 commits
-
-
Committed by Gavin Shan
This adds a standalone driver to support PCI hotplug for the PowerPC PowerNV platform that runs on top of skiboot firmware. The firmware identifies hotpluggable slots and marks their device tree nodes with the proper "ibm,slot-pluggable" and "ibm,reset-by-firmware" properties. The driver scans device tree nodes to create/register PCI hotplug slots accordingly. The PCI slots are organized in a tree fashion, which means one PCI slot might have a parent PCI slot, and a parent PCI slot possibly contains multiple child PCI slots. At plugging time, the parent PCI slot is populated before its children. The child PCI slots are removed before their parent PCI slot can be removed from the system. If the skiboot firmware doesn't support slot status retrieval, the PCI slot device node shouldn't have the property "ibm,reset-by-firmware". In that case, none of the valid PCI slots will be detected from the device tree. The skiboot firmware doesn't export the capability to access attention LEDs yet; that is still TBD. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Gavin Shan
Currently, the PowerPC PowerNV platform utilizes ppc_md.pcibios_fixup(), which is called once after PCI probing and resource assignment are completed, to allocate platform-required resources for PCI devices: PE#, IO and MMIO mapping, DMA address translation (TCE) table etc. Obviously, it's not hotplug friendly. This adds a weak function pcibios_setup_bridge(), which is called by pci_setup_bridge(). The PowerPC PowerNV platform will reuse the function to assign the above platform-required resources to newly plugged PCI devices during PCI hotplug in subsequent patches. Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
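A sketch of the weak-hook pattern being added; the signature follows the commit text but should be treated as illustrative, and the strong platform override (which would live in PowerNV code, in its own file) is only described in the comment.

```c
#include <linux/pci.h>

/*
 * Core PCI code carries an empty __weak default that pci_setup_bridge()
 * calls; a platform overrides it with a strong definition that assigns
 * PE#, IO/MMIO windows, DMA/TCE tables, etc. to newly hot-added devices.
 */
void __weak pcibios_setup_bridge(struct pci_bus *bus, unsigned long type)
{
	/* Default: no platform-specific bridge setup. */
}
```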
-
- 16 June 2016, 5 commits
-
-
Committed by Frederic Barrat
On bare-metal, when a device is attached to the cxl card, lsvpd shows a location code such as (with cxlflash): # lsvpd -l sg22 ... *YL U78CB.001.WZS0073-P1-C33-B0-T0-L0 which makes it hard to easily identify the cxl adapter owning the flash device, since in this example C33 refers to a P8 processor. lsvpd looks in the parent devices until it finds a location code, so the device node for the vPHB ends up being used. By reusing the device node of the adapter for the vPHB, lsvpd shows: # lsvpd -l sg16 ... *YL U78C9.001.WZS09XA-P1-C7-B1-T0-L3 where C7 is the PCI slot of the cxl adapter. On PowerVM, the vPHB was already using the adapter device node, so there's no change there. Tested by cxlflash on bare-metal and PowerVM. Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Reviewed-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com> Acked-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ian Munsie
This adds support for using CAPP DMA mode, which is required for XSL based cards such as the Mellanox CX4 to function. This is currently an RFC as it depends on the corresponding support to be merged into skiboot first, which was submitted here: http://patchwork.ozlabs.org/patch/625582/ In the event that the skiboot on the system does not have the above support, it will indicate as such in the kernel log and abort the init process. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Frederic Barrat
The XSL (Translation Service Layer) is a stripped down version of the PSL (Power Service Layer) used in some cards such as the Mellanox CX4. Like the PSL, it implements the CAIA architecture, but has a number of differences, mostly in its implementation-dependent registers. This adds an ops structure to abstract these differences to bring initial support for XSL CAPI devices. The XSL does not implement the optional architected SERR register; while it treats it as a reserved register and should work with no special treatment, attempting to access it will cause the XSL_FEC (First Error Capture) register to be filled out, preventing it from capturing any subsequent errors. Therefore, this patch also prevents the kernel from trying to set up the SERR register so that the FEC register may still be useful, and to save one interrupt. The XSL also uses a special DMA cxl mode, which uses a slightly different init sequence for the CAPP and PHB. The kernel support for this will be in a future patch once the corresponding support has been merged into skiboot. Co-authored-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ian Munsie
In the kernel API, it is possible to attempt to allocate AFU interrupts after already starting a context. Since the process element structure used by the hardware is only filled out at the time the context is started, it will not be updated with the interrupt numbers that have just been allocated and therefore AFU interrupts will not work unless they were allocated prior to starting the context. This can present some difficulties as each CAPI enabled PCI device in the kernel API has a default context, which may need to be started very early to enable translations, potentially before interrupts can easily be set up. This patch makes the API more flexible to allow interrupts to be allocated after a context has already been started and takes care of updating the PE structure used by the hardware and notifying it to discard any cached copy it may have. The update is currently performed via a terminate/remove/add sequence. This is necessary on some hardware such as the XSL that does not properly support the update LLCMD. Note that this is only supported on powernv at present - attempting to perform this ordering on PowerVM will raise a warning. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Andrew Donnellan
Make a couple more variables static. Found by sparse. Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Reviewed-by: fbarrat@linux.vnet.ibm.com Reviewed-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com> Acked-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 11 June 2016, 2 commits
-
-
Committed by Andy Lutomirski
The uvc compat ioctl implementation seems to have copied user data for no good reason. Remove a bunch of copies. Signed-off-by: Andy Lutomirski <luto@kernel.org>
-
Committed by Andy Lutomirski
The current code goes through a lot of indirection just to call a known handler. Simplify it: just call the handlers directly. Cc: stable@vger.kernel.org Signed-off-by: Andy Lutomirski <luto@kernel.org>
-
- 10 June 2016, 22 commits
-
-
Committed by Shrikrishna Khare
The device emulation may send segCnt of 1 for LRO packets. Signed-off-by: Shrikrishna Khare <skhare@vmware.com> Signed-off-by: Jin Heo <heoj@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ben Dooks
The dwmac4_set_umac_addr() takes a struct mac_device_info as the first parameter, but is being passed an ioaddr instead from dwmac4_set_filter(). Fix the warning/bug by changing the first parameter. drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c:159:46: warning: incorrect type in argument 1 (different address spaces) drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c:159:46: expected struct mac_device_info *hw drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c:159:46: got void [noderef] <asn:2>*ioaddr Note: this is only compile tested, as I do not have any hardware with it. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eli Cohen
Blue flame is a latency enhancement feature that allows the driver to write the packet data directly to the NIC's registers thus making the read of the packet data from host memory redundant. We maintain a quota for the blue flame which is reloaded whenever we identify that the hardware is processing send requests and processes them fast enough so by the time we post the next send request it was able to process all the pending ones. This indicates that the hardware is capable of processing more blue flame requests efficiently. The blue flame quota is decremented whenever we send using blue flame. The current code erroneously clears the budget if we did not use blue flame for the current post send operation and we fix it here. Fixes: 88a85f99 ('net/mlx5e: TX latency optimization to save DMA reads') Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eran Ben Elisha
The current implementation copies the flow of ndo_stop instead of calling it explicitly; fix that. Fixes: 5fc7197d ("net/mlx5: Add pci shutdown callback") Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Mohamad Haj Yahia
Set the mc_promisc flag also in the case of adding a new mc address to an existing allmulti vport. Fixes: a35f71f2 ('net/mlx5: E-Switch, Implement promiscuous rx modes vf request handling') Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Noa Osherovich
In RoCE, the RDMA-CM needs the node guid to establish connections between nodes. Today, the node guid exposed to mlx5 Ethernet VFs is zero; therefore RDMA-CM on the VF is broken. Whenever the administrator sets a MAC for a VF, derive the node guid from it and set it as well in the following way: MAC: e4:1d:2d:b3:f4:01 -> node_guid: e4:1d:2d:ff:fe:b3:f4:01 Fixes: 77256579 ('net/mlx5: E-Switch, Introduce Vport...') Signed-off-by: Noa Osherovich <noaos@mellanox.com> Signed-off-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
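A standalone illustration of the derivation (plain userspace C, not the driver code): insert ff:fe between the OUI and the NIC-specific half of the MAC, as in the example above.

```c
#include <stdint.h>
#include <stdio.h>

/* e4:1d:2d:b3:f4:01 -> e4:1d:2d:ff:fe:b3:f4:01 */
static uint64_t node_guid_from_mac(const uint8_t mac[6])
{
	return ((uint64_t)mac[0] << 56) | ((uint64_t)mac[1] << 48) |
	       ((uint64_t)mac[2] << 40) | (0xffULL << 32) | (0xfeULL << 24) |
	       ((uint64_t)mac[3] << 16) | ((uint64_t)mac[4] << 8) | mac[5];
}

int main(void)
{
	const uint8_t mac[6] = { 0xe4, 0x1d, 0x2d, 0xb3, 0xf4, 0x01 };

	/* Prints e41d2dfffeb3f401, matching the commit's example. */
	printf("node_guid: %016llx\n",
	       (unsigned long long)node_guid_from_mac(mac));
	return 0;
}
```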
-
Committed by Mohamad Haj Yahia
Reorder the vport enable flow to mark the vport as enabled before calling the vport change handler, which was modified to handle the case where the vport is not enabled. This fixes the case where the PF netdev is open before sriov is enabled: once sriov was enabled at esw_enable_vport, esw_vport_change_handle_locked didn't read the PF context since it thought the PF vport was not enabled. When we enable the vport, arming for events is not required anymore, since it's done in the vport change handler. Fixes: 586cfa7f ('net/mlx5: E-Switch, Use vport event handler for vport cleanup') Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Or Gerlitz
The mlx5 flow-steering API (mlx5_create_flow_table/group/rule) never returns a null pointer on error. Even if it did, checking for IS_ERR_OR_NULL(p) and then returning PTR_ERR(p) would have caused bugs, since PTR_ERR(NULL) --> success, crash. To make things more robust and protect against related future bugs, convert all IS_ERR_OR_NULL checks on returned values to IS_ERR. Fixes: 5742df0f ('net/mlx5: E-Switch, Introduce VST vport ingress/egress ACLs') Fixes: 86d722ad ('net/mlx5: Use flow steering infrastructure for mlx5_en') Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Reported-by: Ilya Lesokhin <ilyal@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
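A standalone illustration of why the IS_ERR_OR_NULL/PTR_ERR combination is dangerous, using simplified userspace re-statements of the kernel helpers (for demonstration only, not the kernel's exact definitions).

```c
#include <stdio.h>

#define MAX_ERRNO	4095

/* Simplified userspace re-statements of the kernel helpers. */
#define IS_ERR_VALUE(x)   ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)
#define IS_ERR(p)         IS_ERR_VALUE((unsigned long)(p))
#define IS_ERR_OR_NULL(p) (!(p) || IS_ERR_VALUE((unsigned long)(p)))
#define PTR_ERR(p)        ((long)(p))

int main(void)
{
	void *p = NULL;	/* what a hypothetical buggy path might hand back */

	/*
	 * The anti-pattern: the NULL check passes, but PTR_ERR(NULL) is 0,
	 * so the caller would report "success" and later dereference NULL.
	 */
	if (IS_ERR_OR_NULL(p))
		printf("IS_ERR_OR_NULL: rc = %ld (looks like success!)\n", PTR_ERR(p));

	/* The fixed pattern treats only genuine ERR_PTR values as errors. */
	printf("IS_ERR(NULL) = %d\n", IS_ERR(p));
	return 0;
}
```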
-
Committed by Or Gerlitz
We must use kvfree() for something that could have been allocated with vzalloc(); do that. Fixes: 5742df0f ('net/mlx5: E-Switch, Introduce VST vport ingress/egress ACLs') Fixes: 86d722ad ('net/mlx5: Use flow steering infrastructure for mlx5_en') Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Reported-by: Ilya Lesokhin <ilyal@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
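A minimal sketch of the rule being enforced, with an illustrative function and parameter name (not the mlx5 code):

```c
#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Buffers that may come from vzalloc() (or any allocator that can fall
 * back between kmalloc and vmalloc) must be freed with kvfree(), which
 * handles both; plain kfree() on vmalloc'ed memory is a bug.
 */
static void free_flow_group_in_sketch(void *flow_group_in)
{
	kvfree(flow_group_in);
}
```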
-
Committed by Maor Gottlieb
Add missing capabilities check for E-Switch FDB and ACLs flow tables before creating their namespace in flow steering. Fixes: efdc810b ('net/mlx5: Flow steering, Add vport ACL support') Signed-off-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Maor Gottlieb
The flow steering infrastructure is currently used only on the Ethernet link layer; therefore the driver should initialize the flow steering when the device link layer is Ethernet. In addition, add a missing capability check before initializing the namespace of NIC RX flow tables. Fixes: 25302363 ('net/mlx5_core: Flow steering tree initialization') Signed-off-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Maor Gottlieb
When we destroy the last flow table we need to update the root_ft to NULL. This fixes an issue where, when the last flow table is destroyed and recreated again, the root_ft pointer would not be updated, and as a result traffic would be dropped. Fixes: 2cc43b49 ('net/mlx5_core: Managing root flow table') Signed-off-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
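The shape of the fix in sketch form; struct and field names are stand-ins, not the mlx5 flow steering code. The point is that destroying the table currently acting as root must also clear root_ft, so a later create starts from a clean state instead of leaving a stale pointer that drops traffic.

```c
struct fs_table_sketch;

struct fs_root_sketch {
	struct fs_table_sketch *root_ft;	/* table the HW currently uses as root */
};

static void destroy_flow_table_sketch(struct fs_root_sketch *root,
				      struct fs_table_sketch *ft)
{
	if (root->root_ft == ft)
		root->root_ft = NULL;	/* no root until a new table is created */
	/* ...then free the table itself... */
}
```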
-
Committed by Majd Dibbiny
Mask the reserved bits when reading the number of newly created XRCD. Fixes: e126ba97 ('mlx5: Add driver for Mellanox Connect-IB adapters') Signed-off-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dave Airlie
This just fixes a warning when you disable powerplay. Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Lukasz Gemborowski
of_match_table was not filled, which prevents the device from being instantiated from a device tree node. Signed-off-by: Lukasz Gemborowski <lukasz.gemborowski@nokia.com> Reviewed-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
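What "fill of_match_table" amounts to for an I2C driver, sketched with a placeholder compatible string and driver name (not the real binding of the driver in question):

```c
#include <linux/i2c.h>
#include <linux/module.h>
#include <linux/of.h>

/* Without this table (and MODULE_DEVICE_TABLE), DT nodes never match the driver. */
static const struct of_device_id example_of_match[] = {
	{ .compatible = "vendor,example-device" },	/* placeholder compatible */
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, example_of_match);

static struct i2c_driver example_driver = {
	.driver = {
		.name		= "example-device",
		.of_match_table	= example_of_match,
	},
	/* .probe / .remove omitted in this sketch */
};
```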
-
Committed by Johannes Thumshirn
The NVMe driver only requests the PCIe device's memory regions but releases all possible regions (including eventual I/O regions). This leads to a stale warning entry in dmesg about freeing non-existent resources. Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Jens Axboe <axboe@fb.com>
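A sketch of the symmetry the fix restores, using real PCI core helpers around illustrative driver functions: request only the memory BARs and release exactly the same set (not every possible region).

```c
#include <linux/pci.h>

static int example_request_mem_bars(struct pci_dev *pdev)
{
	int bars = pci_select_bars(pdev, IORESOURCE_MEM);	/* memory BARs only */

	return pci_request_selected_regions(pdev, bars, "example");
}

static void example_release_mem_bars(struct pci_dev *pdev)
{
	/* Release exactly what was requested. */
	pci_release_selected_regions(pdev, pci_select_bars(pdev, IORESOURCE_MEM));
}
```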
-
Committed by Jan Glauber
Remove the warning about a too-long SMBus message, because the ipmi_ssif driver triggers this warning so frequently that it spams the message log. Signed-off-by: Jan Glauber <jglauber@cavium.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
-
Committed by Jan Glauber
During receive the controller requires the AAK flag for all bytes but the final one. This was wrong in the case of I2C_M_RECV_LEN, where the decision whether the final byte is to be transmitted happened before adding the additional received length byte. Set the AAK flag if additional bytes are to be received. Signed-off-by: Jan Glauber <jglauber@cavium.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
-
Committed by Mika Westerberg
On many Intel systems the BIOS declares a SystemIO OpRegion below the SMBus PCI device, as can be seen in the ACPI DSDT table from a Lenovo Yoga 900: Device (SBUS) { OperationRegion (SMBI, SystemIO, (SBAR << 0x05), 0x10) Field (SMBI, ByteAcc, NoLock, Preserve) { HSTS, 8, Offset (0x02), HCON, 8, HCOM, 8, TXSA, 8, DAT0, 8, DAT1, 8, HBDR, 8, PECR, 8, RXSA, 8, SDAT, 16 } There are also a bunch of AML methods that the BIOS can use to access these fields. On most of the systems in question the AML methods accessing the SMBI OpRegion are never used. Now, because of this SMBI OpRegion many systems fail to load the SMBus driver with an error looking like the one below: ACPI Warning: SystemIO range 0x0000000000003040-0x000000000000305F conflicts with OpRegion 0x0000000000003040-0x000000000000304F (\_SB.PCI0.SBUS.SMBI) (20160108/utaddress-255) ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver The reason is that this SMBI OpRegion conflicts with the PCI BAR used by the SMBus driver. It turns out that we can install a custom SystemIO address space handler for the SMBus device to intercept all accesses through that OpRegion. This allows us to share the PCI BAR with the AML code if it for some reason is using it. We do not expect that this OpRegion handler will ever be called but if it is we print a warning and prevent all access from the SMBus driver itself. Link: https://bugzilla.kernel.org/show_bug.cgi?id=110041 Reported-by: Andy Lutomirski <luto@kernel.org> Reported-by: Pali Rohár <pali.rohar@gmail.com> Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Jean Delvare <jdelvare@suse.de> Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Tested-by: Pali Rohár <pali.rohar@gmail.com> Tested-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Cc: stable@vger.kernel.org
-
Committed by Gavin Shan
The function unflattens a device sub-tree blob if the @dad passed to the function is valid. Currently, this functionality is used by the PPC PowerNV PCI hotplug driver only. There are possibly multiple nodes in the first level of depth, but fdt_next_node() bails immediately when @depth becomes negative, before the second device node can be probed successfully. This means that device nodes other than the first one won't be unflattened successfully. This fixes the issue by setting the initial depth (@initial_depth) to 1 when the function is called to unflatten a device sub-tree blob. No logic changes when this function is used to unflatten a non-sub-tree blob. Cc: Rhyland Klein <rklein@nvidia.com> Fixes: 78c44d91 ("drivers/of: Fix depth when unflattening devicetree") Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Tested-by: Rhyland Klein <rklein@nvidia.com> Tested-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Rob Herring <robh@kernel.org>
-
Committed by Ido Schimmel
When rtnl_fill_ifinfo() is called for a certain netdevice it queries its various parameters, such as switch id and physical port name. The function might get called in an atomic context, which means the underlying driver must not sleep during the query operation. Don't query the device and sleep during ndo_get_phys_port_name(), but instead store the needed parameters at port creation time. Fixes: 2bf9a586 ("mlxsw: spectrum: Add support for physical port names") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
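A sketch of the approach, with illustrative struct and field names (not mlxsw's): compute the name once at port creation, where sleeping is allowed, and have the atomic-safe ndo callback only copy the cached string.

```c
#include <linux/netdevice.h>

struct example_port {
	char phys_port_name[IFNAMSIZ];	/* filled once at port creation time */
};

static int example_get_phys_port_name(struct net_device *dev,
				      char *name, size_t len)
{
	struct example_port *port = netdev_priv(dev);

	/* No device query, no sleeping: just copy the cached name. */
	if (snprintf(name, len, "%s", port->phys_port_name) >= (int)len)
		return -EINVAL;
	return 0;
}
```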
-
Committed by Ido Schimmel
When a port is created following a split / unsplit we need to map it to the correct module and lane, enable it and then continue to initialize its various parameters such as MTU and VLAN filters. Under certain conditions, such as trying to split ports at the bottom row of the front panel by four, we get firmware errors. After evaluating this with the firmware team it was decided to alter the split / unsplit flow, so that first all the affected ports are mapped, then enabled and finally each is initialized separately. Fix the split / unsplit flow by first mapping and enabling all the affected ports. Newer firmware versions will support both flows. Fixes: 18f1e70c ("mlxsw: spectrum: Introduce port splitting") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-