- 13 February 2019, 3 commits

Committed by Julian Wiedmann:
When sending cmds via qeth_send_control_data(), qeth puts the request on the IO channel and then blocks on the reply object until the response has been received. If the IO completes with error, there will never be a response and we block until the reply-wait hits its timeout. For this case, connect the request buffer to its reply object, so that we can immediately cancel the wait.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

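To illustrate the request/reply coupling described above, here is a minimal sketch built around a completion. The demo_* names and the simplified reply object are hypothetical; the real struct qeth_reply carries more state (refcount, callback pointer, reply data).

```c
#include <linux/completion.h>
#include <linux/errno.h>

/* hypothetical, simplified reply object */
struct demo_reply {
	struct completion received;
	int rc;
};

static void demo_init_reply(struct demo_reply *reply)
{
	init_completion(&reply->received);
	reply->rc = 0;
}

/* IRQ handler side: the write IO for the request failed, so no response
 * will ever arrive - wake the waiter right away instead of letting it
 * run into the timeout. */
static void demo_cancel_reply(struct demo_reply *reply, int rc)
{
	reply->rc = rc;
	complete(&reply->received);
}

/* normal completion path: the response has been received and processed */
static void demo_complete_reply(struct demo_reply *reply)
{
	complete(&reply->received);
}

/* caller side: blocks until response, cancellation or timeout */
static int demo_wait_for_reply(struct demo_reply *reply, unsigned long timeout)
{
	if (!wait_for_completion_timeout(&reply->received, timeout))
		return -ETIMEDOUT;
	return reply->rc;
}
```

The key point is that the IRQ handler can reach the reply object through the request buffer and complete it early, so the waiter never sits out the full timeout on a dead request.
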
Committed by Julian Wiedmann:
The code to fill the IPA length fields is duplicated three times across the driver:
1. qeth_send_ipa_cmd() sets IPA_CMD_LENGTH, which matches the defaults in the IPA_PDU_HEADER template.
2. For OSN, qeth_osn_send_ipa_cmd() bypasses this logic and inserts the length passed by the caller.
3. SNMP commands (which can outgrow IPA_CMD_LENGTH) have their own way of setting the length fields, via qeth_send_ipa_snmp_cmd().
Consolidate this into qeth_prepare_ipa_cmd(), which all originators of IPA cmds already call during setup of their cmd. Let qeth_send_ipa_cmd() pull the length from the cmd instead of hard-coding IPA_CMD_LENGTH. For now, the SNMP code still needs to fix up its length fields manually.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
qeth_l3_query_arp_cache_info() indicates a data length that's much larger than the actual length of its request (i.e. the value passed to qeth_get_setassparms_cmd()). The confusion presumably comes from the fact that the cmd _response_ can be quite large - but that's no concern for the initial request IO. Fixing this up allows us to use the generic qeth_send_ipa_cmd() infrastructure.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 05 February 2019, 2 commits

Committed by Julian Wiedmann:
Work for Bridgeport events is currently placed on a driver-wide workqueue. If the card is removed and freed while any such work is still active, this causes a use-after-free. So put the events on a per-card queue, where we can control their lifetime. As we also don't want stale events to last beyond an offline & online cycle, flush this queue when setting the card offline.

Fixes: b4d72c08 ("qeth: bridgeport support - basic control")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

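A rough sketch of the per-card workqueue lifecycle this entry describes; the demo_* names and the minimal card struct are illustrative, not the driver's actual symbols.

```c
#include <linux/errno.h>
#include <linux/workqueue.h>

/* illustrative card struct; the real struct qeth_card has many more members */
struct demo_card {
	struct workqueue_struct *event_wq;
};

/* setup: give each card its own queue, so pending events can be flushed
 * (and the queue destroyed) before the card memory goes away */
static int demo_card_init(struct demo_card *card)
{
	card->event_wq = alloc_ordered_workqueue("card_events", 0);
	if (!card->event_wq)
		return -ENOMEM;
	return 0;
}

/* event producer: queue Bridgeport events per card, not driver-wide */
static void demo_queue_event(struct demo_card *card, struct work_struct *work)
{
	queue_work(card->event_wq, work);
}

/* set_offline: don't let stale events survive an offline/online cycle */
static void demo_card_offline(struct demo_card *card)
{
	flush_workqueue(card->event_wq);
}

/* remove: all queued work must have finished before the card is freed */
static void demo_card_remove(struct demo_card *card)
{
	destroy_workqueue(card->event_wq);
}
```
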
Committed by Julian Wiedmann:
A card's close_dev work is scheduled on a driver-wide workqueue. If the card is removed and freed while the work is still active, this causes a use-after-free. So make sure that the work is completed before freeing the card.

Fixes: 0f54761d ("qeth: Support VEPA mode")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 26 January 2019, 3 commits

Committed by Julian Wiedmann:
For recovery purposes, qeth keeps track of all registered VIDs. Replace this by using the infrastructure introduced in commit 9daae9bd ("net: Call add/kill vid ndo on vlan filter feature toggling"). By managing NETIF_F_HW_VLAN_CTAG_FILTER as a hw_feature, netdev_update_features() will select it from dev->wanted_features and replay all of the netdevice's VIDs to its ndo_vlan_rx_add_vid() callback. z/VM NICs strictly require VLAN registration, so don't expose it as a hw_feature there, but add a little hack in qeth_enable_hw_features() to make things work regardless.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

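The replay mechanism can be sketched like this, with placeholder ndo callbacks (the real driver issues IPA commands in them) and a hypothetical is_zvm_nic flag standing in for the device-type check.

```c
#include <linux/if_vlan.h>
#include <linux/netdevice.h>

/* placeholder callbacks; on a feature toggle, netdev_update_features()
 * will call ndo_vlan_rx_add_vid() once for every VID the stack knows */
static int demo_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid)
{
	/* register 'vid' with the HW */
	return 0;
}

static int demo_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
{
	/* unregister 'vid' from the HW */
	return 0;
}

static const struct net_device_ops demo_netdev_ops = {
	.ndo_vlan_rx_add_vid	= demo_vlan_rx_add_vid,
	.ndo_vlan_rx_kill_vid	= demo_vlan_rx_kill_vid,
};

static void demo_setup_vlan_filter(struct net_device *dev, bool is_zvm_nic)
{
	dev->netdev_ops = &demo_netdev_ops;

	/* advertise the filter as toggleable only where the HW allows it;
	 * z/VM NICs keep it forced on and are handled separately */
	if (!is_zvm_nic)
		dev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;
	dev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
}
```
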
Committed by Julian Wiedmann:
When a qeth card is offline, it has no connection to the HW. So none of our control callbacks can run IO against it, and we can only cache the input (e.g. a new MAC address) without providing proper feedback to the caller. In this context, it seems much more reasonable to simply detach the netdevice and let the kernel reject any interaction with it. This also makes all sorts of internal state checks and locking obsolete.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

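A minimal sketch of the detach/attach idea, assuming the offline/online hooks simply toggle the device's presence; the demo_* wrappers are illustrative.

```c
#include <linux/netdevice.h>

/* set_offline: mark the netdevice as absent. Core code and callbacks
 * that check netif_device_present() will then refuse to interact with
 * it, and the TX queues are stopped. */
static void demo_set_offline(struct net_device *dev)
{
	netif_device_detach(dev);
}

/* set_online: re-attach once the connection to the HW is back */
static void demo_set_online(struct net_device *dev)
{
	netif_device_attach(dev);
}
```
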
Committed by Julian Wiedmann:
The L2 and L3 code for these ops is almost identical, we only need to provide a custom ndo_validate_addr() for L2 that checks whether programming the MAC address succeeded.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 09 November 2018, 4 commits

Committed by Julian Wiedmann:
Re-implement the card-by-RDEV lookup by using device model concepts, and remove the now redundant list of all qeth card instances in the system.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
Since commit 82bf5c08 ("s390/qeth: add support for IPv6 TSO"), qeth_xmit() also knows how to build TSO packets and is practically identical to qeth_l3_xmit(). Convert qeth_l3_xmit() into a thin wrapper that merely strips the L2 header off a packet, and calls qeth_xmit() for the actual TX processing.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

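A heavily simplified sketch of such a thin wrapper. demo_xmit() merely stands in for the shared qeth_xmit() path, and the real qeth_l3_xmit() handles additional device-specific cases; this only shows the "strip the L2 header, then hand off" shape.

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* stand-in for the common TX path (HW header, buffer mapping, TSO, ...) */
static netdev_tx_t demo_xmit(struct sk_buff *skb, struct net_device *dev)
{
	return NETDEV_TX_OK;
}

/* thin L3 wrapper: the L2 header built by the stack is not needed on the
 * wire for L3 devices, so drop it before entering the common path */
static netdev_tx_t demo_l3_xmit(struct sk_buff *skb, struct net_device *dev)
{
	if (skb_network_offset(skb) > 0)
		skb_pull(skb, skb_network_offset(skb));
	return demo_xmit(skb, dev);
}
```
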
Committed by Julian Wiedmann:
Filling the HW header from one single function will make it easier to rip out all the duplicated transmit code in qeth_l3_xmit(). On top, this saves one conditional branch in the TSO path.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
By default, READ MAC on a Layer2 OSD device returns the adapter's burnt-in MAC address. Given the default scenario of many virtual devices on the same adapter, qeth can't make any use of this address and therefore skips the READ MAC call for this device type. But in some configurations, the READ MAC command for a Layer2 OSD device actually returns a pre-provisioned, virtual MAC address. So enable the READ MAC code to detect this situation, and let the L2 subdriver call READ MAC for OSD devices. This also removes the QETH_LAYER2_MAC_READ flag, which protects L2 devices against calling READ MAC multiple times. Instead protect the whole call to qeth_l2_request_initial_mac().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 04 November 2018, 4 commits

Committed by Julian Wiedmann:
The ARP_{ADD,REMOVE}_ENTRY cmd structs contain reserved fields. Introduce a common helper that doesn't raw-copy the user-provided data into the cmd, but only sets those fields that are strictly needed for the command. This also sets the correct command length for ARP_REMOVE_ENTRY.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
Setting the carrier 'on' for an unregistered netdevice doesn't update its operstate. Fix this by delaying the update until the netdevice has been registered.

Fixes: 91cc98f5 ("s390/qeth: remove duplicated carrier state tracking")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

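The deferred update can be sketched as follows; demo_register_and_set_carrier() is a hypothetical helper, not the driver's actual code.

```c
#include <linux/netdevice.h>

/* defer the carrier update until the netdevice is registered, so that
 * netif_carrier_on()/off() also sets up dev->operstate as a side effect */
static int demo_register_and_set_carrier(struct net_device *dev, bool link_up)
{
	int rc = register_netdev(dev);

	if (rc)
		return rc;

	/* before registration, this call would not have touched operstate */
	if (link_up)
		netif_carrier_on(dev);
	else
		netif_carrier_off(dev);
	return 0;
}
```
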
Committed by Julian Wiedmann:
qeth only registers its netdevice when the qeth device is first set online. Thus a device that has never been set online will trigger a WARN ("network todo 'hsi%d' but state 0") in unregister_netdev() when removed. Fix this by protecting the unregister step, just like we already protect against repeated registering of the netdevice.

Fixes: d3d1b205 ("s390/qeth: allocate netdevice early")
Reported-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

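A minimal sketch of such a guard, keyed off the netdevice's registration state; the wrapper name is illustrative.

```c
#include <linux/netdevice.h>

/* only unregister what was actually registered; a device that never went
 * online still has reg_state == NETREG_UNINITIALIZED */
static void demo_unregister_netdev(struct net_device *dev)
{
	if (dev->reg_state == NETREG_REGISTERED)
		unregister_netdev(dev);
}
```
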
Committed by Julian Wiedmann:
As Documentation/s390/s390dbf.txt states quite clearly, using any pointer in sprintf-formatted s390dbf debug entries is dangerous. The pointers are dereferenced whenever the trace file is read from. So if the referenced data has a shorter lifetime than the trace file, any read operation can result in a use-after-free. So rip out all hazardous use of indirect data, and replace any usage of dev_name() and such by the Bus ID number.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

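A sketch of the hazard and of the safer pattern, using the s390 debug facility's sprintf view; the demo_* helpers and the %04x bus-ID formatting are illustrative assumptions.

```c
#include <asm/debug.h>
#include <linux/device.h>

/* BAD: debug_sprintf_event() stores the format arguments and expands
 * them only when the trace file is read. If the device (and thus the
 * string behind dev_name()) is gone by then, reading the trace is a
 * use-after-free. */
static void demo_trace_bad(debug_info_t *dbf, struct device *dev)
{
	debug_sprintf_event(dbf, 2, "error on %s\n", dev_name(dev));
}

/* BETTER: record only plain values, e.g. the bus ID digits, so nothing
 * needs to be dereferenced when the trace is read later. */
static void demo_trace_good(debug_info_t *dbf, u16 devno)
{
	debug_sprintf_event(dbf, 2, "error on device %04x\n", devno);
}
```
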
- 13 October 2018, 2 commits

Committed by Julian Wiedmann:
Except for the new HW header id, this works just like TSO6 on L3 devices and reuses all the existing data path support in qeth_xmit().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
This adds TSO6 support for L3 qeth devices. Just like for standard IPv6 traffic, TSO6 doesn't use IP offload and thus runs over the normal qeth_xmit() path.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 27 September 2018, 3 commits

Committed by Julian Wiedmann:
The netdevice is always available, so apply any carrier state changes to it without caching them. On a STARTLAN event (i.e. carrier-up), defer updating the state to qeth_core_hardsetup_card() in the subsequent recovery action. Also remove the carrier-state checks from the xmit routines. Stopping transmission on carrier-down is the responsibility of upper-level code (e.g. see dev_direct_xmit()).

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
This allows us to remove the CARD_FROM_CDEV calls in the iob callbacks.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
While the raw values are fixed due to their use in a sysfs attribute, we can still use the proper QETH_DISCIPLINE_* enum within the driver. Also move the initialization into qeth_set_initial_options(), along with all other user-configurable fields.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

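A sketch of the idea, with the raw values assumed from the sysfs 'layer2' semantics (0 for layer 3, 1 for layer 2); the DEMO_* names and the parsing helper are illustrative.

```c
#include <linux/errno.h>

/* symbolic names inside the driver, raw values pinned by the sysfs ABI */
enum demo_discipline_id {
	DEMO_DISCIPLINE_UNDETERMINED	= -1,
	DEMO_DISCIPLINE_LAYER3		= 0,
	DEMO_DISCIPLINE_LAYER2		= 1,
};

/* map the raw value written to the sysfs attribute onto the enum */
static int demo_parse_layer2(int raw)
{
	switch (raw) {
	case 0:
		return DEMO_DISCIPLINE_LAYER3;
	case 1:
		return DEMO_DISCIPLINE_LAYER2;
	default:
		return -EINVAL;
	}
}
```
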
- 18 September 2018, 7 commits

Committed by Julian Wiedmann:
For quite a lot of code paths it's obvious that they will never run in IRQ context. So replace their spin_lock_irqsave() calls with spin_lock_irq(). While at it, get rid of the redundant card pointer in struct qeth_reply that was used by qeth_send_control_data() to access the card's lock.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

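A generic illustration of the two variants; nothing here is qeth-specific.

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

/* process context with interrupts known to be enabled: no need to save
 * and restore the flags, spin_lock_irq() is sufficient */
static void demo_process_context(void)
{
	spin_lock_irq(&demo_lock);
	/* ... touch state shared with the IRQ handler ... */
	spin_unlock_irq(&demo_lock);
}

/* code that may run with interrupts already disabled (or from IRQ
 * context) must keep the irqsave variant */
static void demo_any_context(void)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	/* ... */
	spin_unlock_irqrestore(&demo_lock, flags);
}
```
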
Committed by Julian Wiedmann:
Restructure the OSN xmit path to handle misaligned HW headers properly, without shifting the packet data around.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
Switch TSO over to the faster transmit path, and remove all the unused old TSO code.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
Add all the necessary TSO plumbing to the copy-less transmit path. This includes calculating the right length of required protocol headers, and always building a separate buffer element for the TSO headers. A follow-up patch will then switch TSO traffic over to this path.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
Combined L3+L4 csum offload is only required for some L3 HW. So for L2 devices, don't offload the IP header csum calculation.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reference-ID: JUP 394553
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
Convert the last remaining user of qeth_get_elements_no() to qeth_count_elements(), so this helper can be removed.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
We need the exact same transmit path for non-offload-eligible traffic on L3 OSAs. So make it accessible from both sub-drivers.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 10 August 2018, 3 commits

Committed by Julian Wiedmann:
Allocating the main qeth_card struct with GFP_DMA blocks us from moving it into netdev_priv(). But the only reason we need DMA memory is the ccw1 structs embedded into each ccw channel. So extract those into separate allocations, like we already do for the cmd buffers.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

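A sketch of splitting out the DMA-capable piece; the demo_channel layout is an assumption, only struct ccw1 and the GFP_DMA flag are the real ingredients.

```c
#include <asm/cio.h>
#include <linux/errno.h>
#include <linux/slab.h>

/* illustrative channel struct; the real struct qeth_channel holds more */
struct demo_channel {
	struct ccw1 *ccw;	/* must live in DMA-capable memory */
};

/* Only the ccw1 needs GFP_DMA. Allocating it separately means the
 * surrounding card structure can be ordinary memory - and can later be
 * embedded into netdev_priv(). */
static int demo_setup_channel(struct demo_channel *channel)
{
	channel->ccw = kmalloc(sizeof(*channel->ccw), GFP_KERNEL | GFP_DMA);
	if (!channel->ccw)
		return -ENOMEM;
	return 0;
}

static void demo_clean_channel(struct demo_channel *channel)
{
	kfree(channel->ccw);
	channel->ccw = NULL;
}
```
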
Committed by Julian Wiedmann:
Re-work the helper a little bit, so that it can be used for all CCWs that qeth issues.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
Just a little code deduplication.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 22 July 2018, 7 commits

Committed by Julian Wiedmann:
Some transmit modes require that the HW header is located in the same page as the initial protocol headers in skb->data. Let callers specify the size of this contiguous header range, and enforce it when building the HW header. While at it, apply some gentle renaming to the relevant L2 code so that it matches the L3 code.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
When checking whether an skb needs to be linearized to fit into an IO buffer, it's desirable to consider the skb's final size and layout (i.e. after the HW header was added). But a subsequent linearization can then cause the re-positioned HW header to violate its alignment restrictions. Dealing with this situation in two different code paths is quite tricky. This patch integrates a) the linearize check and b) the HW header construction into one three-step sequence:
1. Evaluate how the HW header needs to be added (to identify whether it takes up an additional buffer element), then
2. check if the required buffer elements exceed the device's limit. Linearize when necessary and re-evaluate the HW header placement.
3. Add the HW header in the best possible way:
   a) push, without taking up an additional buffer element,
   b) push, but consume another buffer element, or
   c) allocate a header object from the cache.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

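Condensed into a C sketch, the sequence could look as follows. All demo_* names, sizes and the crude element-count stand-in are assumptions; the real code additionally checks page boundaries and alignment when distinguishing cases a) and b).

```c
#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/slab.h>

#define DEMO_HDR_LEN		32	/* assumed HW-header size */
#define DEMO_MAX_ELEMENTS	16	/* assumed per-buffer element limit */

/* created with kmem_cache_create() during module init (not shown) */
static struct kmem_cache *demo_hdr_cache;

/* crude stand-in: the real helper counts page-crossings, not fragments */
static unsigned int demo_count_elements(struct sk_buff *skb,
					unsigned int extra_elems)
{
	return 1 + skb_shinfo(skb)->nr_frags + extra_elems;
}

static int demo_add_hw_header(struct sk_buff *skb, void **hdr,
			      unsigned int *elements)
{
	/* step 1: does the header fit into the skb, or need its own element? */
	unsigned int extra = skb_headroom(skb) >= DEMO_HDR_LEN ? 0 : 1;

	/* step 2: check against the device limit, linearize if needed and
	 * then re-evaluate the header placement */
	*elements = demo_count_elements(skb, extra);
	if (*elements > DEMO_MAX_ELEMENTS) {
		if (skb_linearize(skb))
			return -ENOMEM;
		extra = skb_headroom(skb) >= DEMO_HDR_LEN ? 0 : 1;
		*elements = demo_count_elements(skb, extra);
		if (*elements > DEMO_MAX_ELEMENTS)
			return -E2BIG;
	}

	/* step 3, cases a) and b): build the header in the skb's headroom */
	if (skb_headroom(skb) >= DEMO_HDR_LEN) {
		*hdr = skb_push(skb, DEMO_HDR_LEN);
		return 0;
	}

	/* step 3, case c): separate header object, costs one extra element */
	*hdr = kmem_cache_alloc(demo_hdr_cache, GFP_ATOMIC);
	if (!*hdr)
		return -ENOMEM;
	return 0;
}
```
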
Committed by Julian Wiedmann:
Nowadays an skb fragment typically spans over multiple pages. So replace the obsolete, SG-only 'fragments' counter with one that tracks the consumed buffer elements. This is what actually matters for performance.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

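A sketch of counting buffer elements by page-crossings rather than by fragments; the helper names are illustrative.

```c
#include <linux/pfn.h>
#include <linux/skbuff.h>

/* number of buffer elements a linear range occupies: one element per
 * page that [start, start + len) touches */
static unsigned int demo_elements_for_range(unsigned long start,
					    unsigned long len)
{
	return PFN_UP(start + len) - PFN_DOWN(start);
}

/* count what matters for the TX path: how many buffer elements the skb
 * consumes, not how many page fragments it carries */
static unsigned int demo_count_elements(struct sk_buff *skb)
{
	unsigned int elements;
	int i;

	elements = demo_elements_for_range((unsigned long)skb->data,
					   skb_headlen(skb));
	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		/* a single fragment may span several pages */
		elements += demo_elements_for_range(
				(unsigned long)skb_frag_address(frag),
				skb_frag_size(frag));
	}
	return elements;
}
```
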
Committed by Julian Wiedmann:
qeth's ndo_change_mtu() only applies some trivial bounds checking. Set up dev->min_mtu properly, so that dev_set_mtu() can do this for us.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

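A minimal sketch of the bounds setup; the concrete values are assumptions, the driver derives its real maximum from the HW (see the next entry).

```c
#include <linux/etherdevice.h>
#include <linux/netdevice.h>

/* once min_mtu/max_mtu are filled in, dev_set_mtu() rejects out-of-range
 * values itself and a custom ndo_change_mtu() check becomes unnecessary */
static void demo_setup_mtu_limits(struct net_device *dev,
				  unsigned int hw_max_mtu)
{
	dev->min_mtu = ETH_MIN_MTU;	/* 68 */
	dev->max_mtu = hw_max_mtu;
}
```
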
Committed by Julian Wiedmann:
When the MPC initialization code discovers the HW-specific max MTU, apply the resulting changes straight to the netdevice. If this is the device's first initialization, also set its MTU (HiperSockets: the max MTU; else: a layer-specific default value). Then cap the current MTU by the new max MTU.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
The netdevice is always available now, so get the portno from there.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
Allocation of the netdevice is currently delayed until a qeth card first goes online. This complicates matters in several places, where we need to cache values instead of applying them straight to the netdevice. Improve on this by moving the allocation up to where the qeth card itself is created. This is also one step in the direction of eventually placing the qeth card into netdev_priv(). In all subsequent code, remove the now redundant checks whether card->dev is valid.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 13 July 2018, 2 commits

Committed by Julian Wiedmann:
Remove some redundant EXPORTs. While at it, also move some L2-only prototypes into the proper header file.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Committed by Julian Wiedmann:
Consolidate duplicated code, fix the misuse of RTN_UNSPEC and simplify the handling of non-unicast traffic on IQD devices.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>