- 19 March 2021, 2 commits
-
Submitted by Julian Wiedmann

Pending TX buffers are completed from the same NAPI code as normal TX buffers. Pass the NAPI budget to qeth_tx_complete_buf() so that the freeing of the completed skbs can be deferred.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
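A minimal sketch (not the actual qeth code; the helper name is invented) of why a TX completion helper wants the NAPI budget: napi_consume_skb() can batch and defer the freeing when called from NAPI context with a non-zero budget, and frees the skb immediately otherwise.

```c
#include <linux/skbuff.h>

/* Sketch only: a completion helper that lets NAPI defer the skb free.
 * A budget of 0 signals "not running in NAPI context", in which case
 * napi_consume_skb() frees the skb right away.
 */
static void example_tx_complete_skb(struct sk_buff *skb, int budget)
{
	napi_consume_skb(skb, budget);
}
```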
-
Submitted by Julian Wiedmann

qeth_init_qdio_out_buf() is typically called during initialization, and the GFP_ATOMIC is only needed for a very specific & rare case during TX completion. Allow callers to specify a gfp mask, so that the initialization path can select GFP_KERNEL. While at it, also clarify the function name.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
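A hedged sketch of the pattern described above, with invented example_* names: the allocation helper takes a gfp_t from the caller, so the initialization path can sleep with GFP_KERNEL while the rare TX-completion path passes GFP_ATOMIC.

```c
#include <linux/gfp.h>
#include <linux/slab.h>

struct example_out_buf {
	void *data;
};

/* The caller conveys the allocation context via the gfp mask. */
static struct example_out_buf *example_alloc_out_buf(gfp_t gfp)
{
	return kzalloc(sizeof(struct example_out_buf), gfp);
}

/* init path:      example_alloc_out_buf(GFP_KERNEL);
 * TX completion:  example_alloc_out_buf(GFP_ATOMIC);
 */
```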
-
- 10 March 2021, 4 commits
-
Submitted by Julian Wiedmann

The cited commit reworked the state machine for pending TX buffers. In qeth_iqd_tx_complete() it turned PENDING into a transient state, and uses NEED_QAOB for buffers that get parked while waiting for their QAOB completion. But it missed adjusting the check in qeth_tx_complete_buf(). So if qeth_tx_complete_pending_bufs() is called during teardown to drain the parked TX buffers, we no longer raise a notification for af_iucv.

Instead of updating the checked state, just move this code into qeth_tx_complete_pending_bufs() itself. This also gets rid of the special case in the common TX completion path.

Fixes: 8908f36d ("s390/qeth: fix af_iucv notification race")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

When a QAOB notifies us that a pending TX buffer has been delivered, the actual TX completion processing by qeth_tx_complete_pending_bufs() is done within the context of a TX NAPI instance. We shouldn't rely on this instance being scheduled by some other TX event, but just do it ourselves. qeth_qdio_handle_aob() is called from qeth_poll(), i.e. our main NAPI instance. To avoid touching the TX queue's NAPI instance before/after it is (un-)registered, reorder the code in qeth_open() and qeth_stop() accordingly.

Fixes: 0da9581d ("qeth: exploit asynchronous delivery of storage blocks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

The current design attaches a pending TX buffer to a custom single-linked list, which is anchored at the buffer's slot on the TX ring. The buffer is then checked for final completion whenever this slot is processed during a subsequent TX NAPI poll cycle. But if there's insufficient traffic on the ring, we might never make enough progress to get back to this ring slot and discover the pending buffer's final TX completion. This is particularly bad if the missing TX completion blocks the application from sending further traffic.

So convert the custom single-linked list code to a per-queue list_head, and scan this list on every TX NAPI cycle.

Fixes: 0da9581d ("qeth: exploit asynchronous delivery of storage blocks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
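An illustrative sketch of the data-structure change described above; all names are invented, this is not the driver's actual code. Pending buffers are anchored on a per-queue list_head and the whole list is walked on every TX NAPI poll, instead of waiting for the original ring slot to come around again.

```c
#include <linux/list.h>

struct example_buf {
	struct list_head list_entry;
};

struct example_queue {
	struct list_head pending_bufs;	/* anchored per queue */
};

static void example_queue_init(struct example_queue *q)
{
	INIT_LIST_HEAD(&q->pending_bufs);
}

static void example_park_pending(struct example_queue *q, struct example_buf *b)
{
	list_add_tail(&b->list_entry, &q->pending_bufs);
}

/* Called from every TX NAPI poll cycle. */
static void example_scan_pending(struct example_queue *q,
				 bool (*is_done)(struct example_buf *b))
{
	struct example_buf *b, *tmp;

	list_for_each_entry_safe(b, tmp, &q->pending_bufs, list_entry) {
		if (is_done(b))
			list_del_init(&b->list_entry);
	}
}
```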
-
Submitted by Julian Wiedmann

When qeth_alloc_qdio_queues() fails to allocate one of the buffers that back an Output Queue, the 'out_freeoutqbufs' path will free all previously allocated buffers for this queue. But it neglects to free the half-finished queue struct itself. Move the buffer allocation into qeth_alloc_output_queue(), and deal with such errors internally.

Fixes: 0da9581d ("qeth: exploit asynchronous delivery of storage blocks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 February 2021, 2 commits
-
Submitted by Julian Wiedmann

For non-QEBSM devices, get_buf_states() merges PENDING and EMPTY buffers into a single group of finished buffers. To allow the upper-layer driver to differentiate between the two states, qdio_check_pending() looks at each buffer's state again and sets the sbal_state flag to QDIO_OUTBUF_STATE_FLAG_PENDING accordingly.

So effectively we're spending overhead on _every_ Output Queue inspection, just to avoid some additional TX completion calls in case a group of buffers has completed with mixed EMPTY / PENDING state. Given that PENDING buffers should rarely occur, this is a bad trade-off. It is especially so since the additional checks in get_buf_states() affect _all_ device types (even those that don't use the PENDING state).

Rip it all out, and just report the PENDING completions separately, as we already do for QEBSM devices.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Submitted by Julian Wiedmann

For QEBSM devices the 'merge_pending' mechanism in get_buf_states() doesn't apply, and we can actually get SLSB_P_OUTPUT_PENDING returned. So for this case, propagating the PENDING state to the driver via the queue's sbal_state doesn't make sense and creates unnecessary overhead. Instead introduce a new QDIO_ERROR_* flag that gets passed to the driver, and triggers the same processing as if the buffers were flagged as QDIO_OUTBUF_STATE_FLAG_PENDING.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 29 January 2021, 6 commits
-
Submitted by Julian Wiedmann

Stop maintaining the skb_send_q list for TRANS_HIPER sockets. Not only is it extra overhead, but keeping around a list of skb clones means that we later also have to match the ->sk_txnotify() calls against these clones and free them accordingly. The current matching logic (comparing the skbs' shinfo location) is frustratingly fragile, and breaks if the skb's head is mangled in any sort of way while passing from dev_queue_xmit() to the device's HW queue.

Also adjust the interface for ->sk_txnotify(), to make clear that we don't actually care about any skb internals.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
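A loose sketch of the kind of interface adjustment described here, with simplified, invented names (example_*; the real callback and enum live in the af_iucv code): the notification callback is keyed on the socket and a notification code, so the caller never has to hand back or track skb clones.

```c
#include <net/sock.h>

enum example_tx_notify {
	EXAMPLE_TX_NOTIFY_OK,
	EXAMPLE_TX_NOTIFY_PENDING,
	EXAMPLE_TX_NOTIFY_DELAYED_OK,
};

/* The callback receives the socket, not a cloned skb. */
struct example_tx_notifier {
	void (*sk_txnotify)(struct sock *sk, enum example_tx_notify code);
};
```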
-
Submitted by Julian Wiedmann

When do_qdio() returns with an unexpected error, qeth_flush_buffers() kicks off a recovery action. In such a case there's no point in starting TX completion processing, as the device gets torn down anyway. So take a closer look at do_qdio()'s return value, and skip the TX completion processing accordingly.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

As part of the TX queue selection for af_iucv skbs, qeth_l3_get_cast_type_rcu() ends up calling qeth_get_ether_cast_type(). This is rather fragile, since such skbs don't have a proper ETH header and we rely on it being zeroed out in the right places. Add a separate case for ETH_P_AF_IUCV instead that does the right thing.

When later building the HW header for such skbs, don't hard-code the cast type but follow the same path as for other protocol types. Here the cast type should naturally come from the skb's queue mapping.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

qeth_l3_hard_start_xmit() already determined the skb's proto. Avoid doing so a second time when it calls qeth_l3_get_cast_type().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

Replace our home-grown helper with the more robust vlan_get_protocol(). This is pretty much a 1:1 replacement, we just need to pass around a proper ETH_P_* everywhere and convert the old value range. For readability, also convert the protocol checks in qeth_l3_hard_start_xmit() to a switch statement.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
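A minimal sketch of the pattern mentioned above (not the driver's actual code; the helper name is invented): derive the ethertype with vlan_get_protocol(), which looks past a VLAN tag, and branch on ETH_P_* values in a switch statement.

```c
#include <linux/if_ether.h>
#include <linux/if_vlan.h>
#include <linux/skbuff.h>

/* Returns the IP version of the skb's payload, or 0 if it is neither
 * IPv4 nor IPv6.
 */
static u8 example_ip_version(struct sk_buff *skb)
{
	switch (vlan_get_protocol(skb)) {
	case htons(ETH_P_IP):
		return 4;
	case htons(ETH_P_IPV6):
		return 6;
	default:
		return 0;
	}
}
```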
-
Submitted by Julian Wiedmann

We have two usage patterns:
1. get & ->setup() a new discipline, or
2. ->remove() & put the currently loaded one.
Add corresponding helpers that hide the internals & error handling.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 08 January 2021, 3 commits
-
Submitted by Julian Wiedmann

ip_finish_output_gso() may call .ndo_features_check() even before the skb has an L2 header. This conflicts with qeth_get_ip_version()'s attempt to inspect the L2 header via vlan_eth_hdr(). Switch to vlan_get_protocol(), as already used further down in the common qeth_features_check() path.

Fixes: f13ade19 ("s390/qeth: run non-offload L3 traffic over common xmit path")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

Due to insufficient locking, qeth_core_set_online() and qeth_dev_layer2_store() can run in parallel, both attempting to load & set up the discipline (and stepping on each other's toes along the way). A similar race can also occur between qeth_core_remove_device() and qeth_dev_layer2_store().

Access to .discipline is meant to be protected by the discipline_mutex, so add/expand the locking in qeth_core_remove_device() and qeth_core_set_online(). Adjust the locking in qeth_l*_remove_device() accordingly, as it's now handled by the callers in a consistent manner.

Based on an initial patch by Ursula Braun.

Fixes: 9dc48ccc ("qeth: serialize sysfs-triggered device configurations")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

When qeth_dev_layer2_store() - holding the discipline_mutex - waits inside qeth_l*_remove_device() for a qeth_do_reset() thread to complete, we can hit a deadlock if qeth_do_reset() concurrently calls qeth_set_online() and thus tries to acquire the discipline_mutex.

Move the discipline_mutex locking outside of qeth_set_online() and qeth_set_offline(), and turn the discipline into a parameter so that callers understand the dependency.

To fix the deadlock, we can now relax the locking: as already established, qeth_l*_remove_device() waits for qeth_do_reset() to complete. So qeth_do_reset() itself is under no risk of having card->discipline ripped out while it's running, and thus doesn't need to take the discipline_mutex.

Fixes: 9dc48ccc ("qeth: serialize sysfs-triggered device configurations")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
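A minimal sketch of the locking split described above, using invented example_* names rather than the real qeth symbols: sysfs/core callers hold the mutex and pass the discipline explicitly, while a recovery path can call the unlocked helper directly because its caller already guarantees the discipline stays alive.

```c
#include <linux/mutex.h>

struct example_card;

struct example_discipline {
	int (*set_online)(struct example_card *card);
};

struct example_card {
	const struct example_discipline *discipline;
};

static DEFINE_MUTEX(example_discipline_mutex);

/* Unlocked helper: 'disc' is pinned by the caller, so no mutex here.
 * This is what a recovery thread can call without deadlocking.
 */
static int example_set_online(struct example_card *card,
			      const struct example_discipline *disc)
{
	return disc->set_online(card);
}

/* sysfs / core path: serialize discipline changes via the mutex. */
static int example_sysfs_set_online(struct example_card *card)
{
	int rc;

	mutex_lock(&example_discipline_mutex);
	rc = example_set_online(card, card->discipline);
	mutex_unlock(&example_discipline_mutex);
	return rc;
}
```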
-
- 07 December 2020, 5 commits
-
Submitted by Julian Wiedmann

When qeth_qdio_handle_aob() frees dangling allocations in the notified TX buffer, there are rare tear-down cases where qeth_drain_output_queue() would later call qeth_clear_output_buffer() for the same buffer - and thus end up walking the buffer a second time to check for dangling kmem_cache allocations.

Luckily the current code scrubs such a buffer beforehand, so qeth_clear_output_buffer() would find buf->buffer->element[i].addr as NULL and not do anything. But this is fragile, and we can easily improve it by consistently clearing the ->is_header flag after freeing the allocation.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Reuse the QETH_QDIO_BUF_EMPTY state to indicate that a TX buffer has been completed with a QAOB notification, and may be cleaned up by qeth_cleanup_handled_pending().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

For TX buffers that require an additional async notification via QAOB, the TX completion code can now manage all the necessary processing if the notification has already occurred (or is occurring concurrently). In such cases we can avoid replacing the metadata that is associated with the buffer's slot on the ring, and just keep using the current one.

As qeth_clear_output_buffer() will also handle any kmem cache-allocated memory that was mapped into the TX buffer, qeth_qdio_handle_aob() doesn't need to worry about it.

While at it, also remove the unneeded forward declaration for qeth_init_qdio_out_buf().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

All qeth devices have a minimum set of sysfs attributes, and non-OSN devices share a group of additional attributes. Depending on whether the device is forced to use a specific discipline, the device_type then specifies further attributes.

Shift the common attributes into dev->groups, so that the device_type only contains the discipline-specific attributes. This avoids exposing the common attributes to the disciplines, and nicely cleans up our sysfs code.

While replacing the qeth_l*_*_device_attributes() helpers, switch from sysfs_*_groups() to the more generic device_*_groups().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
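A hedged sketch of the split described above, with invented attribute and type names (not the qeth attribute set): common attributes are attached via dev->groups before the device is registered, while the device_type only carries the discipline-specific group.

```c
#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t example_common_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "common\n");
}
static DEVICE_ATTR_RO(example_common);

static struct attribute *example_common_attrs[] = {
	&dev_attr_example_common.attr,
	NULL,
};
ATTRIBUTE_GROUPS(example_common);

/* Discipline-specific attributes stay on the device_type: */
static const struct device_type example_l2_devtype = {
	.name	= "example_l2",
	/* .groups = example_l2_groups,  -- discipline-only attributes */
};

/* At probe time, before the device is registered:
 *	dev->groups = example_common_groups;
 *	dev->type   = &example_l2_devtype;
 */
```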
-
Submitted by Julian Wiedmann

INIT_LIST_HEAD() only needs to be called on actual list heads. While at it, clarify the naming of the field.

Suggested-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 December 2020, 6 commits
-
gfp_type() uses in_interrupt() to figure out the correct GFP mask. The usage of in_interrupt() in drivers is being phased out, and Linus clearly requested that code which changes behaviour depending on context should either be separated, or the context be conveyed in an argument passed by the caller, which usually knows the context.

ctcmpc_tx() is used as net_device_ops::ndo_start_xmit, and this callback is invoked with bottom halves disabled. Use GFP_ATOMIC for the memory allocation in ctcmpc_tx(). Remove gfp_type(), since the last user is gone.

Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
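A hedged sketch of the change pattern shared by these gfp-related commits; the xmit function below is invented and ctcmpc_tx()'s real body differs. The point is that the context-guessing gfp_type() helper is replaced by hard-coding the mask that matches the known calling context.

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/slab.h>

/* .ndo_start_xmit runs with bottom halves disabled, so the allocation
 * must not sleep - use GFP_ATOMIC unconditionally instead of deriving
 * the mask from in_interrupt().
 */
static netdev_tx_t example_start_xmit(struct sk_buff *skb,
				      struct net_device *dev)
{
	void *scratch = kmalloc(64, GFP_ATOMIC);

	if (!scratch) {
		dev_kfree_skb_any(skb);	/* drop the frame in this toy example */
		return NETDEV_TX_OK;
	}

	/* ... build and queue the frame using 'scratch' ... */
	kfree(scratch);
	dev_kfree_skb_any(skb);
	return NETDEV_TX_OK;
}
```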
-
gfp_type() uses in_interrupt() to figure out the correct GFP mask. The usage of in_interrupt() in drivers is being phased out, and Linus clearly requested that code which changes behaviour depending on context should either be separated, or the context be conveyed in an argument passed by the caller, which usually knows the context.

The memory allocation of `ch' a few lines above uses GFP_KERNEL, and an allocation a few lines later also uses GFP_KERNEL. Use GFP_KERNEL for this memory allocation as well.

Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
gfp_type() uses in_interrupt() to figure out the correct GFP mask. The usage of in_interrupt() in drivers is being phased out, and Linus clearly requested that code which changes behaviour depending on context should either be separated, or the context be conveyed in an argument passed by the caller, which usually knows the context.

The call chain of ctcmpc_unpack_skb() is: ctcmpc_bh() -> ctcmpc_unpack_skb(). ctcmpc_bh() is a tasklet handler, so GFP_ATOMIC is needed. Use GFP_ATOMIC as the allocation type in ctcmpc_unpack_skb().

Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
The size of struct pdu is 8 bytes. The memory is allocated, initialized, used and deallocated a few lines later. It is more efficient to avoid the allocation/free dance and assign the values directly to the skb's data part instead of using memcpy() for it.

Avoid the allocation of struct pdu and use the resulting skb pointer instead.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[jwi: Fix up the pdu_offset, adjust skb->len for the pushed length. Reflow the commit msg.]
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
The size of struct qllc is 2 bytes. The memory for it is allocated, initialized, used and deallocated a few lines later. It is more efficient to avoid the allocation/free dance and assign the values directly to the skb's data part instead of using memcpy() for it.

Avoid the allocation of struct qllc and use the resulting skb pointer instead.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[jwi: remove a newline, reflow the commit msg]
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
The size of struct th_header is 8 bytes and the size of struct th_sweep is 16 bytes. The memory for them is allocated, initialized, used and deallocated a few lines later. It is more efficient to avoid the allocation/free dance and assign the values directly to the skb's data part instead of using memcpy() for it.

Avoid the allocation of struct th_sweep/th_header and use the resulting skb pointer instead.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[jwi: use skb_put_zero(), instead of skb_put() + memset to 0]
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
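A minimal sketch of the technique used in these three patches, with an invented header struct standing in for pdu/qllc/th_header: build the small header in place in the skb's data area via skb_put_zero() (or skb_push()), instead of kmalloc()ing a temporary struct and memcpy()ing it in.

```c
#include <linux/skbuff.h>
#include <linux/types.h>

struct example_hdr {		/* stand-in for pdu/qllc/th_header */
	u8	type;
	u8	flags;
	__be16	len;
	__be32	token;
};

static void example_add_hdr(struct sk_buff *skb)
{
	struct example_hdr *hdr;

	/* extends the data area and zero-fills the new bytes */
	hdr = skb_put_zero(skb, sizeof(*hdr));
	hdr->type = 0x01;
	hdr->len  = htons(skb->len);
}
```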
-
- 21 November 2020, 4 commits
-
Submitted by Julian Wiedmann

When qeth_iqd_tx_complete() detects that a TX buffer requires additional async completion via QAOB, it might fail to replace the queue entry's metadata (and end up triggering recovery). Assume now that the device gets torn down, overruling the recovery. If the QAOB notification then arrives before the tear down has sufficiently progressed, the buffer state is changed to QETH_QDIO_BUF_HANDLED_DELAYED by qeth_qdio_handle_aob().

The tear down code calls qeth_drain_output_queue(), where qeth_cleanup_handled_pending() will then attempt to replace such a buffer _again_. If it succeeds this time, the buffer ends up dangling in its replacement's ->next_pending list ... where it will never be freed, since there's no further call to qeth_cleanup_handled_pending().

But the second attempt isn't actually needed, we can simply leave the buffer on the queue and re-use it after a potential recovery has completed. The qeth_clear_output_buffer() in qeth_drain_output_queue() will ensure that it's in a clean state again.

Fixes: 72861ae7 ("qeth: recovery through asynchronous delivery")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

The two expected notification sequences are:
1. TX_NOTIFY_PENDING with a subsequent TX_NOTIFY_DELAYED_*, when our TX completion code first observed the pending TX and the QAOB then completes at a later time; or
2. TX_NOTIFY_OK, when qeth_qdio_handle_aob() picked up the QAOB completion before our TX completion code even noticed that the TX was pending.

But as qeth_iqd_tx_complete() and qeth_qdio_handle_aob() can run concurrently, we may end up with a race that results in a sequence of TX_NOTIFY_DELAYED_* followed by TX_NOTIFY_PENDING. This would confuse the af_iucv code in its tracking of pending transmits.

Rework the notification code, so that qeth_qdio_handle_aob() defers its notification if the TX completion code is still active.

Fixes: b3332930 ("qeth: add support for af_iucv HiperSockets transport")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

Calling into socket code is ugly already, so at least check whether we are dealing with the expected sk_family. Only looking at skb->protocol is bound to cause trouble (consider e.g. af_packet).

Fixes: b3332930 ("qeth: add support for af_iucv HiperSockets transport")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
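A minimal sketch of the guard described above (the helper name is invented): verify sk->sk_family before treating the socket as an af_iucv one, rather than trusting skb->protocol alone.

```c
#include <linux/skbuff.h>
#include <linux/socket.h>
#include <net/sock.h>

static bool example_skb_is_af_iucv(struct sk_buff *skb)
{
	struct sock *sk = skb->sk;

	/* skb->protocol alone is not a reliable indicator (think
	 * af_packet); check the owning socket's family as well.
	 */
	return sk && sk->sk_family == AF_IUCV;
}
```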
-
Submitted by Alexandra Winter

Remove the workaround that supported early hardware implementations of PNSO OC3. Rely on the 'enarf' feature bit instead.

Fixes: fa115adf ("s390/qeth: Detect PNSO OC3 capability")
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
[jwi: use logical instead of bit-wise AND]
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 19 November 2020, 8 commits
-
Submitted by Julian Wiedmann

The link mode is a combination of port speed and port mode. But we currently only consider the speed, and then typically select the corresponding TP-based link mode. For 1G and 10G Fibre links this means we display the wrong link modes.

Move the SPEED_* switch statements inside the PORT_* cases, and only consider valid combinations where we can select the corresponding link mode. Add the relevant link modes (1000baseX, 10000baseSR and 10000baseLR) that were introduced back with commit 5711a982 ("net: ethtool: add support for 1000BaseX and missing 10G link modes").

To differentiate between 10000baseSR and 10000baseLR, use the detailed media_type information that QUERY OAT provides.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
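A minimal sketch of the nesting described above (not qeth's actual code; the function name is invented): switch on the port type first, and only then map the speed to a specific link-mode bit, so Fibre links get a 1000baseX/10000baseSR mode instead of a TP-based one.

```c
#include <linux/ethtool.h>

static void example_set_link_modes(struct ethtool_link_ksettings *cmd,
				   u8 port, u32 speed)
{
	ethtool_link_ksettings_zero_link_mode(cmd, supported);

	switch (port) {
	case PORT_TP:
		if (speed == SPEED_1000)
			ethtool_link_ksettings_add_link_mode(cmd, supported,
							     1000baseT_Full);
		break;
	case PORT_FIBRE:
		if (speed == SPEED_1000)
			ethtool_link_ksettings_add_link_mode(cmd, supported,
							     1000baseX_Full);
		else if (speed == SPEED_10000)
			ethtool_link_ksettings_add_link_mode(cmd, supported,
							     10000baseSR_Full);
		break;
	}
}
```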
-
Submitted by Julian Wiedmann

Improve the initial link info with data obtained from QUERY OAT. Doing so _only_ at initialization time avoids 1. dealing with multi-part replies, and 2. sifting through all the data that may get returned at runtime.

This allows us to determine the correct port type for the 1000BT variant of recent OSA adapter generations (where the .card_type field in QUERY CARD INFO is no longer sufficient).

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

Remove the default case for PORT_* and SPEED_* in our ethtool code. The only time these could be hit is if qeth_init_link_info() was unable to determine the port type from an OSA adapter's link_type. We already throw a message in this case, so reduce the noise and don't report bad data (i.e. it's much more likely that any future link_type will represent a PORT_FIBRE link ...).

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

Hard-code the minimal link info at initialization time, after we have obtained the link_type. qeth_get_link_ksettings() can still override this with more accurate data from QUERY CARD INFO later on. Don't set arbitrary defaults for unknown OSA link types; they certainly won't match any future type.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

Move all the HW reply data parsing into qeth_query_card_info_cb(), and use common ethtool enums for transporting the information back to the caller.

Also only look at the .port_speed field when we couldn't determine the speed from the .card_type field, and introduce some 'default' cases for SPEED_UNKNOWN, PORT_OTHER and DUPLEX_UNKNOWN.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

By the time that our .get_link_ksettings() code issues a QUERY CARD INFO cmd to get link-related information, we have already set up a good amount of static link data. Return this data when the cmd fails, same as when the cmd is not supported.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Kaixu Xia

Fix the following coccinelle report:

./drivers/s390/net/qeth_l3_main.c:107:2-4: WARNING: possible condition with no effect (if == else)

Both branches are the same since commit ab29c480 ("s390/qeth: replace deprecated simple_strtoul()"), so remove them.

Reported-by: Tosk Robot <tencent_os_robot@tencent.com>
Signed-off-by: Kaixu Xia <kaixuxia@tencent.com>
[jwi: point to the commit that introduced this]
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann

call_switchdev_notifiers() doesn't require holding the RTNL lock since commit ff5cf100 ("net: switchdev: Change notifier chain to be atomic"). We still need it for the "lost event" slow path, to avoid racing against a concurrent .ndo_bridge_setlink().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
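A rough sketch of the locking split mentioned above, with invented function names (not the qeth bridgeport code): the event fast path calls the (atomic) switchdev notifier chain without RTNL, while only the lost-event resync path takes rtnl_lock() so it cannot race with a concurrent .ndo_bridge_setlink().

```c
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <net/switchdev.h>

/* Fast path: fire the notifier for a learned address, no RTNL needed. */
static void example_fdb_event(struct net_device *dev,
			      struct switchdev_notifier_fdb_info *fdb_info)
{
	call_switchdev_notifiers(SWITCHDEV_FDB_ADD_TO_BRIDGE, dev,
				 &fdb_info->info, NULL);
}

/* Slow path: events were lost, resync under RTNL so we don't race with
 * a concurrent .ndo_bridge_setlink().
 */
static void example_lost_events(struct net_device *dev)
{
	rtnl_lock();
	/* ... re-read and re-announce the full address table ... */
	rtnl_unlock();
}
```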
-