- 25 December 2019, 3 commits
-
-
Submitted by Julian Wiedmann

I stumbled over an old OSA model that claims to support DIAG_ASSIST, but then rejects the cmd to query its DIAG capabilities. In the old code this was ok, as the returned raw error code was > 0. Now that we translate the raw codes to errnos, the "rc < 0" check causes us to fail the initialization of the device.

The fix is trivial: don't bail out when the DIAG query fails. Such an error is not critical, we can still use the device (with a slightly reduced set of features).

Fixes: 742d4d40 ("s390/qeth: convert remaining legacy cmd callbacks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
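A hedged sketch of the resulting error handling; the helper and message below are illustrative stand-ins, not the actual qeth code:

    /* Query the DIAG capabilities; a failure is no longer fatal. */
    rc = qeth_query_diag_support(card);    /* hypothetical helper name */
    if (rc)
        dev_info(&card->gdev->dev,
                 "DIAG query failed (%d), continuing with reduced features\n", rc);
    /* fall through: rc is not propagated, initialization continues */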
-
Submitted by Julian Wiedmann

qeth_l3_dev_hsuid_store() initially checks the card state, but doesn't take the conf_mutex to ensure that the card stays in this state while being reconfigured. Rework the code to take this lock, and drop a redundant state check in a helper function.

Fixes: b3332930 ("qeth: add support for af_iucv HiperSockets transport")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
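The general shape of such a fix, sketched with illustrative names rather than the real qeth internals:

    mutex_lock(&card->conf_mutex);
    if (card->state != CARD_STATE_DOWN) {    /* state check now held stable by the lock */
        rc = -EPERM;
        goto out;
    }
    rc = qeth_l3_modify_hsuid(card, buf);    /* hypothetical reconfiguration step */
out:
    mutex_unlock(&card->conf_mutex);
    return rc ? rc : count;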
-
Submitted by Julian Wiedmann

qeth_l?_set_online() goes through a number of initialization steps, and on any error uses qeth_l?_stop_card() to tear down the residual state. The first initialization step is qeth_core_hardsetup_card(). When this fails after having established a QDIO context on the device (ie. somewhere after qeth_mpc_initialize()), qeth_l?_stop_card() doesn't shut down this QDIO context again (since the card state hasn't progressed from DOWN at this stage).

Even worse, we then call qdio_free() as the final teardown step to free the QDIO data structures - while some of them are still hooked into wider QDIO infrastructure such as the IRQ list. This is inevitably followed by use-after-frees and other nastiness.

Fix this by unconditionally calling qeth_qdio_clear_card() to shut down the QDIO context, and also to halt/clear any pending activity on the various IO channels. Remove the naive attempt at handling the teardown in qeth_mpc_initialize(), it clearly doesn't suffice and we're handling it properly now in the wider teardown code.

Fixes: 4a71df50 ("qeth: new qeth device driver")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 December 2019, 1 commit
-
-
Submitted by Julian Wiedmann

Along with z/VM NICs, there are additional device types that only support a specific transport mode (eg. external-bridged IQD). Identify the corresponding error code, and raise a fitting error message so that the user knows to adjust their device configuration.

On top of that, also fix the subsequent error path so that the rejected cmd doesn't need to wait for a timeout but gets cancelled straight away.

Fixes: 4a71df50 ("qeth: new qeth device driver")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 December 2019, 1 commit
-
-
Submitted by Pankaj Bharadiya

Replace all the occurrences of FIELD_SIZEOF() with sizeof_field() except at places where these are defined. Later patches will remove the unused definition of FIELD_SIZEOF().

This patch is generated using the following script:

    EXCLUDE_FILES="include/linux/stddef.h|include/linux/kernel.h"
    git grep -l -e "\bFIELD_SIZEOF\b" | while read file;
    do
        if [[ "$file" =~ $EXCLUDE_FILES ]]; then
            continue
        fi
        sed -i -e 's/\bFIELD_SIZEOF\b/sizeof_field/g' $file;
    done

Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Link: https://lore.kernel.org/r/20190924105839.110713-3-pankaj.laxminarayan.bharadiya@intel.com
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: David Miller <davem@davemloft.net> # for net
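For context, sizeof_field() is a drop-in replacement for FIELD_SIZEOF(). A small self-contained example of the macro (same definition as in include/linux/stddef.h) and its use; the struct here is made up for illustration:

    #include <stdio.h>

    #define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))

    struct example_hdr {            /* illustrative struct, not a kernel one */
        unsigned char  dest[6];
        unsigned short proto;
    };

    int main(void)
    {
        /* prints "6 2" on common ABIs */
        printf("%zu %zu\n",
               sizeof_field(struct example_hdr, dest),
               sizeof_field(struct example_hdr, proto));
        return 0;
    }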
-
- 06 December 2019, 3 commits
-
-
Submitted by Julian Wiedmann

The cio layer's intparm logic does not align itself well with how qeth manages cmd IOs. When an active IO gets terminated via halt/clear, the corresponding IRQ's intparm does not reflect the cmd buffer but rather the intparm that was passed to ccw_device_halt() / ccw_device_clear(). This behaviour was recently clarified in commit b91d9e67 ("s390/cio: fix intparm documentation").

As a result, qeth_irq() currently doesn't cancel a cmd that was terminated via halt/clear. This primarily causes us to leak card->read_cmd after the qeth device is removed, since our IO path still holds a refcount for this cmd.

For qeth this means that we need to keep track of which IO is pending on a device ('active_cmd'), and use this as the intparm when calling halt/clear. Otherwise qeth_irq() can't match the subsequent IRQ to its cmd buffer.

Since we now keep track of the _expected_ intparm, we can also detect any mismatch; this would constitute a bug somewhere in the lower layers. In this case cancel the active cmd - we effectively "lost" the IRQ and should not expect any further notification for this IO.

Fixes: 40554895 ("s390/qeth: add support for dynamically allocated cmds")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
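Conceptually the bookkeeping works roughly as below; the field and helper names are assumptions for illustration, not necessarily the real qeth identifiers:

    /* issue: remember which cmd is in flight on this channel */
    rc = ccw_device_start(channel->ccwdev, ccw, (unsigned long)iob, 0, 0);
    if (!rc)
        channel->active_cmd = iob;

    /* cancel: pass the same cookie, so the IRQ can be matched later */
    rc = ccw_device_clear(channel->ccwdev, (unsigned long)channel->active_cmd);

    /* qeth_irq(): an unexpected intparm means we lost track of this IO */
    if (intparm != (unsigned long)channel->active_cmd)
        qeth_cancel_cmd(channel->active_cmd, -EIO);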
-
Submitted by Julian Wiedmann

When the RX path builds non-linear skbs, the packet headers can currently spill over into page fragments. Depending on the packet type and what fields we need to access in the headers, this could cause us to go past the end of skb->data.

So for non-linear packets, copy precisely the length of the necessary headers ('linear_len') into skb->data. And don't copy more; upper-level protocols will peel whatever additional packet headers they need.

Fixes: 4a71df50 ("qeth: new qeth device driver")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Depending on a packet's type, the RX path needs to access fields in the packet headers and thus requires a minimum packet length. Enforce this length when building the skb.

On the other hand, a single runt packet is no reason to drop the whole RX buffer. So just skip it, and continue processing on the next packet.

Fixes: 4a71df50 ("qeth: new qeth device driver")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
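As a rough illustration of the skip-instead-of-drop pattern (the names here are assumptions, not the actual qeth RX loop):

    /* inside the per-buffer RX loop */
    if (pkt_len < linear_len) {
        /* runt packet: too short for the headers its type requires */
        QETH_CARD_STAT_INC(card, rx_length_errors);    /* hypothetical counter */
        continue;    /* skip just this packet, keep processing the buffer */
    }
    skb = qeth_build_skb(card, element, pkt_len, linear_len);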
-
- 21 November 2019, 1 commit
-
-
Submitted by Julian Wiedmann

When propagating IO errors back to userspace, one error path in qeth_irq() currently returns '1' instead of a proper errno.

Fixes: 54daaca7 ("s390/qeth: cancel cmd on early error")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 November 2019, 5 commits
-
-
Submitted by Julian Wiedmann

qeth_core_free_card() is meant to be the counterpart of qeth_alloc_card() - but unfortunately it was also picked as the place to free the QDIO queues. This gets messy when qeth_core_probe_device() fails during qeth_add_dbf_entry(). At this point card->qdio.state is not initialized yet, so qeth_free_qdio_queues() ends up operating on uninitialized data.

Luckily for now, the whole qeth_card struct is zero-allocated and the value of the QETH_QDIO_UNINITIALIZED enum is 0 as well. So there's no real impact from this bug at the moment, it's just really fragile.

Clean this up by moving the qeth_free_qdio_queues() call up one level in the hierarchy. This way it doesn't get called from the error path.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

When the current code fails to allocate an skb in the RX path, it drops the whole RX buffer. Considering the large number of packets that a single RX buffer might contain, this is quite drastic.

Skip over the packet instead, and try to extract the next packet from the RX buffer.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Packets with an unexpected HW format are currently first extracted from the RX buffer, passed upwards to the layer-specific driver and only then finally dropped.

Enhance the RX path so that we can drop such packets before even allocating an skb. For this, add some additional logic so that when a packet is meant to be dropped, we can still walk along the packet's data chunks in the RX buffer. This allows us to extract the following packet(s) from the buffer.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Each RX buffer may contain up to 64KB worth of data. In case the device needs to discard a packet _after_ already having reserved space for it in the buffer, the whole buffer gets set to ERROR state. As the buffer might contain any number of good packets, this can result in collateral packet loss.

qeth can provide relief by enabling per-frame invalidation. The RX buffer is then presented as usual, we just need to spot & drop any individual packet that was flagged as invalid.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Where available, use the fine-grained counters in rtnl_link_stats64 to indicate different RX error causes. For drop reasons, use driver-private ethtool counters.

In particular this patch allows us to keep track of driver-side drops due to an unknown/unsupported HW descriptor format.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
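A hedged sketch of how an .ndo_get_stats64 implementation can map driver counters onto the fine-grained rtnl fields; the driver-side counter names are assumptions:

    static void example_get_stats64(struct net_device *dev,
                                    struct rtnl_link_stats64 *stats)
    {
        struct example_priv *priv = netdev_priv(dev);

        stats->rx_packets       = priv->rx_packets;
        stats->rx_bytes         = priv->rx_bytes;
        stats->rx_errors        = priv->rx_errors;
        stats->rx_length_errors = priv->rx_length_errors;  /* eg. runt frames */
        stats->rx_frame_errors  = priv->rx_frame_errors;   /* eg. bad HW descriptor format */
        stats->rx_dropped       = priv->rx_dropped;        /* drop reasons stay in ethtool stats */
    }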
-
- 01 November 2019, 5 commits
-
-
Submitted by Julian Wiedmann

Any change to the card state should only be driven by qeth_l?_set_online() and qeth_l?_stop_card(). qeth_qdio_clear_card() currently also gets called from:

(a) qeth_core_shutdown(), where we haven't walked through the whole teardown sequence. So changing the state to DOWN is not accurate.
(b) qeth_core_hardsetup_card(), which is only called while the card is still in DOWN state. No change in behaviour here.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

When setting a device online, both subdrivers have the same code to program the HW trap and the Isolation mode. Move that code into a single place.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

When napi_complete_done() returns false, the NAPI instance is still active and we can keep the IRQ disabled a little longer.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
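The generic shape of this NAPI pattern (a sketch; the RX-work and IRQ re-enable helpers are hypothetical):

    static int example_napi_poll(struct napi_struct *napi, int budget)
    {
        int work_done = example_process_rx(napi, budget);    /* hypothetical RX work */

        if (work_done < budget) {
            /* napi_complete_done() returns false when polling should continue,
             * so only re-arm the interrupt when NAPI really went idle */
            if (napi_complete_done(napi, work_done))
                example_enable_rx_irq(napi);                 /* hypothetical helper */
        }
        return work_done;
    }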
-
Submitted by Julian Wiedmann

qdio.h recently gained a new helper macro that handles wrap-around on a QDIO queue; consistently use it across all of qeth.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
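For illustration, such a wrap-around helper boils down to masking the buffer index against the power-of-two queue size; this sketch mirrors the idea without claiming to be the exact qdio.h definition:

    #define QDIO_MAX_BUFFERS_PER_Q    128    /* a QDIO queue has 128 buffers */

    /* wrap a buffer index onto the ring of QDIO_MAX_BUFFERS_PER_Q entries */
    #define QDIO_BUFNR(num)           ((num) & (QDIO_MAX_BUFFERS_PER_Q - 1))

    static unsigned int next_buf(unsigned int index)
    {
        /* advance a cursor without an explicit modulo or wrap check */
        return QDIO_BUFNR(index + 1);
    }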
-
Submitted by Julian Wiedmann

For IQD devices with Multi-Write support, we can defer the queue-flush further and transmit multiple IO buffers with a single TX doorbell. The same-target restriction still applies.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 October 2019, 1 commit
-
-
Submitted by Julian Wiedmann

The QIB parm area is 128 bytes long. Current code consistently misuses an _entirely unrelated_ QDIO constant, merely because it has the same value. Stop doing so.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Reviewed-by: Jens Remus <jremus@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 25 August 2019, 6 commits
-
-
Submitted by Julian Wiedmann

IQD devices offer limited support for bulking: all frames in a TX buffer need to have the same target. qeth_iqd_may_bulk() implements this constraint, and allows us to defer the TX doorbell until (a) the buffer is full (since each buffer needs its own doorbell), or (b) the entire TX queue is full, or (c) we reached the BQL limit.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
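A hedged sketch of what such a bulking decision can look like; the struct and helper below are assumptions for illustration, not the actual qeth_iqd_may_bulk():

    struct tx_buffer {                /* illustrative, not qeth's struct */
        unsigned int frames;          /* frames already queued in this buffer */
        unsigned int elements;        /* buffer elements used so far */
        unsigned int max_elements;    /* capacity of one buffer */
        u64 target;                   /* destination cookie of the queued frames */
    };

    /* May this frame be appended to the currently open TX buffer? */
    static bool iqd_may_bulk(const struct tx_buffer *buf, u64 target,
                             unsigned int needed_elements)
    {
        return buf->frames &&                                      /* buffer already open */
               buf->elements + needed_elements <= buf->max_elements &&
               buf->target == target;                              /* IQD same-target rule */
    }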
-
Submitted by Julian Wiedmann

Each TX buffer may contain multiple skbs. So just accumulate the sent byte count in the buffer struct, and later use the same count when completing the buffer.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

This allows the stack to bulk-free our TX-completed skbs.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
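Bulk-freeing from TX-completion context is typically done via napi_consume_skb(); a sketch of the usual pattern (the buffer struct and skb list here are assumptions):

    /* TX completion, called from NAPI context with the poll budget */
    static void tx_complete_buffer(struct tx_buffer *buf, int budget)
    {
        struct sk_buff *skb;

        while ((skb = skb_dequeue(&buf->skb_list)))
            /* with budget > 0 the stack may batch the frees;
             * budget == 0 falls back to a plain consume */
            napi_consume_skb(skb, budget);
    }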
-
Submitted by Julian Wiedmann

Due to their large MTU and potentially low utilization of TX buffers, IQD devices in particular require fast TX recycling. This makes them a prime candidate for a TX NAPI path in qeth.

qeth_tx_poll() uses the recently introduced qdio_inspect_queue() helper to poll the TX queue for completed buffers. To avoid hogging the CPU for too long, we yield to the stack after completing an entire queue's worth of buffers.

While IQD is expected to transfer its buffers synchronously (and thus doesn't support TX interrupts), a timer covers for the odd case where a TX buffer doesn't complete synchronously. Currently this timer should only ever fire for (1) the mcast queue, or (2) the occasional race where the NAPI poll code observes an update to queue->used_buffers while the TX doorbell hasn't been issued yet.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

This consolidates the SW statistics code, and improves it to:
(1) account for the header overhead of each segment on a TSO skb,
(2) count dangling packets as in-error (during eg. shutdown), and
(3) only count offloads when the skb was successfully transmitted.

We also count each segment of a TSO skb as one packet - except for tx_dropped, to be consistent with dev->tx_dropped.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
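The per-skb accounting for TSO roughly follows this sketch (a hedged illustration; proto_header_len and the counter names are assumptions):

    unsigned int packets, bytes;

    if (skb_is_gso(skb)) {
        packets = skb_shinfo(skb)->gso_segs;
        /* each segment carries its own copy of the protocol headers */
        bytes = skb->len + (packets - 1) * proto_header_len;
    } else {
        packets = 1;
        bytes = skb->len;
    }
    queue->stats.tx_packets += packets;
    queue->stats.tx_bytes   += bytes;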
-
Submitted by Julian Wiedmann

Commit d4c08afa ("s390/qeth: streamline SNMP cmd code") removed the bounds checking for req_len, under the assumption that the check in qeth_alloc_cmd() would suffice. But that code path isn't sufficiently robust to handle a user-provided data_length, which could overflow (when adding the cmd header overhead) before being checked against QETH_BUFSIZE.

We end up allocating just a tiny iob, and the subsequent copy_from_user() writes past the end of that iob. Special-case this path and add a coarse bounds check, to protect against malicious requests. This lets the subsequent code flow do its normal job and precise checking, without risk of overflow.

Fixes: d4c08afa ("s390/qeth: streamline SNMP cmd code")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
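The essence of such a coarse pre-check is to reject oversized user lengths before any header overhead can be added and wrap; a sketch (QETH_BUFSIZE is the limit named above, the surrounding code is illustrative):

    /* coarse sanity check on the user-supplied length, done before any
     * cmd header overhead is added - so the sum cannot overflow */
    if (req_len > QETH_BUFSIZE)
        return -EINVAL;

    /* the later, precise checks against the actual cmd layout still apply */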
-
- 21 August 2019, 6 commits
-
-
Submitted by Julian Wiedmann

We have logic to determine the desired promisc mode in _each_ code path. Change things around so that there is a clean split between (a) high-level code that selects the new mode, and (b) implementations of the various mechanisms to program this mode.

This also keeps qeth_promisc_to_bridge() from polluting the debug logs on each RX modeset.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Except for card->read_cmd, every cmd we issue now passes through qeth_send_control_data() and allocates a qeth_reply struct. The way we use this struct requires additional refcounting and pointer tracking.

Clean things up by moving most of qeth_reply's content into the main cmd struct. This keeps things in one place, saves us the additional refcounting and simplifies the overall code flow.

A nice little benefit is that we can now match incoming replies against the pending requests themselves, without caching the requests' seqnos.

The qeth_reply struct stays around for a little bit longer in a shrunk form, to avoid touching every single callback.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Current code releases the cmd struct after its initial IO has completed. Any reply processing is done independently, using a separate qeth_reply struct.

In preparation for merging the cmd and reply structs together, take an additional reference on the cmd object so that it stays around all the way until qeth_send_control_data() returns.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

qeth_snmp_command_cb() is the only cmd callback that pulls the reply's data length from a low-level transport header field. This requires additional complexity (ie. reply->offset) to make the header accessible to what is supposed to be a pure IPA cmd callback.

Adapter cmds have a length field in their sub-cmd header; get the data length from there instead.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

When a cmd IO completes in qeth_irq(), calculate how much data was processed by the device and pass this value to the cmd's callback.

This allows cmds that retrieve data from the device to check whether sufficient data was received, so we do that in qeth_read_conf_data_cb().

Suggested-by: Jens Remus <jremus@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Rather than fumbling with hard-coded offsets, use the proper struct to access the retrieved RCD information.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 August 2019, 1 commit
-
-
Submitted by Julian Wiedmann

Callbacks for a cmd reply run outside the protection of card->lock, to allow for additional cmds to be issued & enqueued in parallel.

When qeth_send_control_data() bails out for a cmd without having received a reply (eg. due to timeout), its callback may concurrently be processing a reply that just arrived. In this case, the callback potentially accesses a stale reply->reply_param area that eg. was on-stack and has already been released.

To avoid this race, add some locking so that qeth_send_control_data() can (1) wait for a concurrently running callback, and (2) zap any pending callback that still wants to run.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
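Conceptually the synchronization looks like the sketch below; the lock placement and field names are made up for illustration and do not claim to match the actual patch:

    /* reply-processing side */
    spin_lock_irqsave(&cmd->lock, flags);
    if (cmd->callback)                         /* still wanted? */
        cmd->callback(card, cmd, data_length);
    spin_unlock_irqrestore(&cmd->lock, flags);

    /* qeth_send_control_data(), on timeout/error: waits for a running
     * callback to finish and zaps any callback that has not run yet */
    spin_lock_irqsave(&cmd->lock, flags);
    cmd->callback = NULL;
    spin_unlock_irqrestore(&cmd->lock, flags);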
-
- 23 July 2019, 1 commit
-
-
Submitted by Matthew Wilcox (Oracle)

In preparation for unifying the skb_frag and bio_vec, use the fine accessors which already exist and use skb_frag_t instead of struct skb_frag_struct.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
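For reference, the accessor-based style this converts to looks roughly like the following (a generic sketch, not the qeth-specific hunks):

    skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

    /* prefer the accessors over poking at the struct members directly */
    void *addr        = skb_frag_address(frag);
    unsigned int len  = skb_frag_size(frag);
    struct page *page = skb_frag_page(frag);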
-
- 28 June 2019, 6 commits
-
-
Submitted by Julian Wiedmann

The cast type currently gets selected in .ndo_start_xmit, and is then piped through several layers until it's stored into the HW header.

Push the selection down into qeth_l?_fill_header() to (1) reduce the number of xmit-wide parameters, and (2) merge the two route validation checks into just one.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

De-duplicate the pm callback implementations from the two sub-drivers, replacing them with core helpers that delegate to the .set_online and .set_offline callbacks.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Apply some cleanups to qeth_snmp_command() and its callback:
1. When accessing the user data, use the proper struct instead of hard-coded offsets. Also copy the request data straight into the allocated cmd, skipping the extra memdup_user() to a tmp buffer.
2. Capping the request length is no longer needed, the same check gets applied at a base level in qeth_alloc_cmd().
3. Clean up some duplicated (and misindented) trace statements.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Now that all cmds are dynamically allocated, the code for static cmd buffers can go away entirely. This results in a nice reduction of code/data size & complexity, while removing the risk that qeth_clear_cmd_buffers() releases cmds that are still in-flight.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

The base MPC cmds are the last remaining user of the static cmd buffers. Port them over to use dynamic allocation, and stop backing the write channel's cmd buffers with pages.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann

Add a new wrapper that allocates DIAG cmds of the right size and fills in the common fields.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-