- 25 October 2021, 6 commits
-
-
Submitted by Julian Wiedmann
qeth_add_hw_header() is missing documentation for some of its parameters; fix that up.
Reported-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
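For reference, a minimal kernel-doc sketch of the shape such fixes take - every parameter gets its own @name: line, and kernel-doc warns about any that is left undocumented. The function below is purely illustrative, not the actual qeth_add_hw_header() signature:

    #include <linux/skbuff.h>

    /**
     * example_add_hw_header() - hypothetical helper used to illustrate kernel-doc
     * @skb:     packet that needs a hardware header prepended
     * @hdr_len: length of the header to prepend, in bytes
     *
     * Return: 0 on success, or a negative errno.
     */
    static int example_add_hw_header(struct sk_buff *skb, unsigned int hdr_len)
    {
            return skb_cow_head(skb, hdr_len);
    }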
-
Submitted by Heiko Carstens
Fix kernel doc comments and remove incorrect kernel doc indicators.
Acked-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
The only actual user of qdio.no_input_queues is qeth_qdio_establish(), and there we already have full awareness of the current Input Queue configuration (1 RX queue, plus potentially 1 TX Completion queue). So avoid this state tracking, and the ambiguity it brings with it.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
None of the callers are at risk of running in HW IRQ context or with IRQs disabled, so we always end up in consume_skb(). But the two occurrences in the RX path should really report the dropped packet to dropmon, so have them use kfree_skb() instead. That's also consistent with what napi_free_frags() does internally.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
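A minimal sketch of the distinction, with illustrative function names: the drop path uses kfree_skb() so the packet shows up in drop monitoring, while the normal release path uses consume_skb():

    #include <linux/skbuff.h>

    /* Illustrative RX error path: the packet is dropped, so report it as a
     * drop (visible to dropmon/perf) rather than a normal consume.
     */
    static void example_rx_drop(struct sk_buff *skb)
    {
            kfree_skb(skb);         /* counted as a packet drop */
    }

    /* Illustrative success path: the skb served its purpose. */
    static void example_rx_done(struct sk_buff *skb)
    {
            consume_skb(skb);       /* normal, non-drop release */
    }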
-
Submitted by Julian Wiedmann
qdio.ko no longer needs to care about how the QAOBs are allocated; from its perspective they are merely another parameter to do_QDIO(). So for a start, shift the cache into the only qdio driver that uses QAOBs (ie. qeth). Here there's further opportunity to optimize its usage in the future - eg. make it per-{device, TX queue}, or only compile it when the driver is built with CQ/QAOB support.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
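A hedged sketch of what a driver-owned cache for QAOB-sized objects could look like; the struct, its size and the 256-byte alignment are assumptions for illustration only, not the actual qeth code:

    #include <linux/errno.h>
    #include <linux/slab.h>

    /* Hypothetical stand-in for the real QAOB definition. */
    struct example_qaob {
            u8 data[256];
    };

    static struct kmem_cache *example_qaob_cache;

    static int example_qaob_cache_init(void)
    {
            /* The 256-byte alignment here is an assumption for illustration. */
            example_qaob_cache = kmem_cache_create("example_qaob",
                                                   sizeof(struct example_qaob),
                                                   256, 0, NULL);
            return example_qaob_cache ? 0 : -ENOMEM;
    }

    static void example_qaob_cache_exit(void)
    {
            kmem_cache_destroy(example_qaob_cache);
    }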
-
Submitted by Julian Wiedmann
With commit 18787eee ("qeth: use ndo_siocdevprivate") this callback is now actually used to handle transport mode-specific _private_ ioctls. We only have such ioctls for L3 devices. So wire up an L3-specific .ndo_siocdevprivate() callback that handles those ioctls, and defers to the core qeth_siocdevprivate() for all other private ioctls. This takes the discipline one step closer to its original purpose of providing an internal extension for the qeth_core_ccwgroup_driver.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
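A sketch of the delegation pattern described above, assuming the ndo_siocdevprivate() prototype introduced by the cited rework (dev, ifreq, user data pointer, cmd); the ioctl number and function names are placeholders, not the actual qeth symbols:

    #include <linux/netdevice.h>
    #include <linux/sockios.h>

    /* Placeholder for an L3-only private ioctl number. */
    #define EXAMPLE_L3_IOC  (SIOCDEVPRIVATE + 1)

    static int example_core_siocdevprivate(struct net_device *dev, struct ifreq *rq,
                                           void __user *data, int cmd)
    {
            return -EOPNOTSUPP;     /* common private ioctls would be handled here */
    }

    static int example_l3_siocdevprivate(struct net_device *dev, struct ifreq *rq,
                                         void __user *data, int cmd)
    {
            switch (cmd) {
            case EXAMPLE_L3_IOC:
                    return 0;       /* L3-specific handling */
            default:
                    /* everything else falls through to the common handler */
                    return example_core_siocdevprivate(dev, rq, data, cmd);
            }
    }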
-
- 23 October 2021, 1 commit
-
-
Submitted by Jakub Kicinski
Commit 406f42fa ("net-next: When a bond have a massive amount of VLANs...") introduced an rbtree for faster Ethernet address look-up. To maintain netdev->dev_addr in this tree we need to make all writes to it go through appropriate helpers. Make sure local references to netdev->dev_addr are constant.
Acked-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
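A minimal sketch of the resulting pattern, assuming the eth_hw_addr_set()/ether_addr_copy() helpers: writes go through a helper so the address tree stays in sync, and local reads stay const:

    #include <linux/etherdevice.h>

    static void example_set_mac(struct net_device *dev, const u8 *new_mac)
    {
            /* Write through the helper so the rbtree bookkeeping stays in sync. */
            eth_hw_addr_set(dev, new_mac);
    }

    static void example_copy_mac(struct net_device *dev, u8 *copy)
    {
            /* Local references to dev->dev_addr stay const. */
            const u8 *mac = dev->dev_addr;

            ether_addr_copy(copy, mac);
    }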
-
- 22 September 2021, 3 commits
-
-
Submitted by Alexandra Winter
Commit 0b9902c1 ("s390/qeth: fix deadlock during recovery") removed taking discipline_mutex inside qeth_do_reset(), fixing potential deadlocks. One error path was missed, though: it still takes discipline_mutex and thus retains the original deadlock potential. Intermittent deadlocks were seen when a qeth channel path is configured offline, causing a race between qeth_do_reset() and ccwgroup_remove().
Call qeth_set_offline() directly in the qeth_do_reset() error case, and then call a new variant of ccwgroup_set_offline() that does not take discipline_mutex.
Fixes: b41b554c ("s390/qeth: fix locking for discipline setup / removal")
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Alexandra Winter
Problem: qeth_close_dev_handler is a worker that tries to acquire card->discipline_mutex via drv->set_offline() in ccwgroup_set_offline(). Since commit b41b554c ("s390/qeth: fix locking for discipline setup / removal") qeth_remove_discipline() is called under card->discipline_mutex; it cancels the work and waits for it to finish.
STOPLAN reception with reason code IPA_RC_VEPA_TO_VEB_TRANSITION is the only situation that schedules close_dev_work. In that situation, scheduling qeth recovery will also result in an offline interface when resetting the isolation mode fails because the external switch is still set to VEB. And since commit 0b9902c1 ("s390/qeth: fix deadlock during recovery") qeth recovery no longer acquires card->discipline_mutex. So we accept the longer path length of qeth_schedule_recovery() in this error situation and reuse the existing function.
As a side benefit this changes the hwtrap to behave like it does during recovery, instead of like during a user-triggered set_offline.
Fixes: b41b554c ("s390/qeth: fix locking for discipline setup / removal")
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Acked-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann
When qeth_set_online() calls qeth_clear_working_pool_list() to roll back after an error exit from qeth_hardsetup_card(), we are at risk of accessing card->qdio.in_q before it was allocated by qeth_alloc_qdio_queues() via qeth_mpc_initialize(). qeth_clear_working_pool_list() then dereferences NULL, and by writing to queue->bufs[i].pool_entry scribbles all over the CPU's lowcore, resulting in a crash when those lowcore areas are used next (eg. on the next machine-check interrupt).
Such a scenario would typically happen when the device is first set online and its queues aren't allocated yet. An early IO error or certain misconfigs (eg. mismatched transport mode, bad portno) then cause us to error out from qeth_hardsetup_card() with card->qdio.in_q still being NULL.
Fix it by checking the pointer for NULL before accessing it.
Note that we also have (rare) paths inside qeth_mpc_initialize() where a configuration change can cause us to free the existing queues, expecting that subsequent code will allocate them again. If we then error out before that re-allocation happens, the same bug occurs.
Fixes: eff73e16 ("s390/qeth: tolerate pre-filled RX buffer")
Reported-by: Stefan Raspl <raspl@linux.ibm.com>
Root-caused-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
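The defensive shape of the fix, sketched with placeholder structures (the real function walks the queue's buffers; only the NULL guard matters here):

    struct example_queue {
            int nr_bufs;
            /* ... buffer array ... */
    };

    struct example_card {
            struct example_queue *in_q;     /* may still be NULL on early error paths */
    };

    static void example_clear_working_pool_list(struct example_card *card)
    {
            struct example_queue *queue = card->in_q;

            /* The queues may not have been allocated yet. */
            if (!queue)
                    return;

            /* ... walk the queue's buffers and detach their pool entries ... */
    }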
-
- 28 July 2021, 1 commit
-
-
Submitted by Arnd Bergmann
qeth has both standard MII ioctls and custom SIOCDEVPRIVATE ones, all of which work correctly with compat user space. Move the private ones over to the new ndo_siocdevprivate callback.
Cc: Julian Wiedmann <jwi@linux.ibm.com>
Cc: Karsten Graul <kgraul@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 July 2021, 1 commit
-
-
Submitted by Julian Wiedmann
Now that all drivers use qdio_inspect_queue() and qdio's internal queue tasklets are gone, the driver-specified queue handlers are only called for async error reporting (eg. for an error condition in the QEBSM code). So take a moment to clean up the Output Queue handlers (they are _always_ called with qdio_error != 0), and clarify which error types can be reported through what interface.
As Benjamin already suggested a while ago, we should turn these into distinct enums at some point.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
- 20 July 2021, 3 commits
-
-
Submitted by Julian Wiedmann
qeth uses three device_type structs - a generic one, and one for each sub-driver (which is used for fixed-layer devices only). Instead of exporting these device_types back and forth between the driver's modules, make all the logic self-contained within the sub-drivers. On disc->setup() they either install their own device_type, or add the sysfs attributes that are missing in the generic device_type. Later on disc->remove() these attributes are removed again from any device that has the generic device_type.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
The QETH_PROT_* naming is shared among two unrelated areas - one is the MPC-level protocol identifiers, the other is the qeth_prot_version enum. Rename the MPC definitions to use QETH_MPC_PROT_*.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
Commit fb64de1b ("s390/qeth: phase out OSN support") spelled out why the OSN support in qeth is in a bad shape, and put any remaining interested parties on notice to speak up before it gets ripped out. It's 2021 now, so make good on that promise and remove all the OSN-specific parts from qeth. This also means that we no longer need to export various parts of the cmd & data path internals to the L2 driver.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 June 2021, 7 commits
-
-
Submitted by Julian Wiedmann
Convert the large boolean array into a bitmap; this substantially reduces the struct's size. While at it, also clarify the naming.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
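A sketch of the conversion with illustrative names: a bitmap of N flags needs N bits instead of N bools, and is accessed through the standard bitops:

    #include <linux/bitmap.h>

    #define EXAMPLE_NR_FLAGS        64      /* illustrative flag count */

    struct example_state {
            /* was: bool enabled[EXAMPLE_NR_FLAGS];  -> 64 bytes */
            DECLARE_BITMAP(enabled_mask, EXAMPLE_NR_FLAGS); /* -> 8 bytes */
    };

    static void example_enable(struct example_state *s, unsigned int nr)
    {
            set_bit(nr, s->enabled_mask);
    }

    static bool example_is_enabled(struct example_state *s, unsigned int nr)
    {
            return test_bit(nr, s->enabled_mask);
    }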
-
Submitted by Julian Wiedmann
qeth_tx_complete_buf() is the only remaining user of buf->q, and the callers can easily provide this as a parameter instead.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
Maintaining a pointer inside the aob's user-definable area is fragile and unnecessary. At this stage we only need it to overload the buffer's state field, and to access the buffer's TX queue.
The first part is easily solved by tracking the aob's state within the aob itself. This also feels much cleaner and self-contained. For enabling access to the associated TX queue, we can store the queue's index in the aob.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
With commit 396c1004 ("s390/qdio: let driver manage the QAOB") a pending TX buffer now has access to its associated QAOB during TX completion processing. We can thus reduce the amount of work & state propagation that needs to be done by qeth_qdio_handle_aob(); move all this logic into the respective TX completion paths. Doing so even allows us to determine more precise TX_NOTIFY_* values via qeth_compute_cq_notification(aob->aorc, ...).
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
We have one field to track _whether_ a cmd is active on a ccw device ('irq_pending'), and one to track _which_ cmd it is ('active_cmd'). Get rid of the irq_pending field by testing active_cmd for NULL.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
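The state collapse, sketched with placeholder types: testing the command pointer replaces the separate boolean flag:

    #include <linux/types.h>

    struct example_cmd;

    struct example_channel {
            /* was: bool irq_pending; plus a separate active_cmd pointer */
            struct example_cmd *active_cmd;
    };

    static bool example_cmd_is_pending(const struct example_channel *ch)
    {
            /* A non-NULL active command implies an IRQ is still outstanding. */
            return ch->active_cmd != NULL;
    }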
-
Submitted by Julian Wiedmann
Set scan_threshold = 0 to opt out of the qdio layer's internal tasklet & timer mechanism for TX completions, and replace it with the TX NAPI infrastructure that qeth already uses for IQD devices. This avoids the fragile logic in qdio_check_output_queue(), enables tighter integration and gives us more tuning options via ethtool in the future.
For now we continue to apply the same policy as the qdio layer: scan for completions if 32 TX buffers are in use, or after 1 sec. A re-scan is done after 10 sec, but only if no TX interrupt is pending.
With scan_threshold = 0 we no longer get TX completion scans from within qdio_get_next_buffers(). So trigger these manually in qeth_poll(), and in the RX path switch to the equivalent qdio_inspect_queue().
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
While the qdio layer already tracks the number of HW interrupts for a device, there's value in understanding how many of them have been raised due to our TX completion logic.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 March 2021, 1 commit
-
-
Submitted by Julian Wiedmann
We are spending way too much effort on qdio-internal bookkeeping for QAOB management & caching, and it's still not robust. Once qdio's TX path has detached the QAOB from a PENDING buffer, we lose all track of it until it shows up in a CQ notification again. So if the device is torn down before that notification arrives, we leak the QAOB.
Just have the driver take care of it, and simply pass down a QAOB if it wants a TX with async-completion capability. For a buffer in PENDING state that requires the QAOB for final completion, qeth can now also try to recycle the buffer's QAOB rather than unconditionally freeing it.
This also eliminates the qdio_outbuf_state array, which was only needed to transfer the aob->user1 tag from the driver to the qdio layer.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
- 19 March 2021, 2 commits
-
-
Submitted by Julian Wiedmann
Pending TX buffers are completed from the same NAPI code as normal TX buffers. Pass the NAPI budget to qeth_tx_complete_buf() so that the freeing of the completed skbs can be deferred.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
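A sketch of the budget-aware freeing this enables: napi_consume_skb() can defer and bulk-free skbs when called from NAPI context with a non-zero budget, and falls back to an immediate free when the budget is 0:

    #include <linux/skbuff.h>

    /* Illustrative TX-completion helper: the caller passes its NAPI budget
     * down so completed skbs can be bulk-freed from softirq context.
     */
    static void example_tx_complete_skb(struct sk_buff *skb, int budget)
    {
            napi_consume_skb(skb, budget);  /* budget == 0 means "free immediately" */
    }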
-
Submitted by Julian Wiedmann
qeth_init_qdio_out_buf() is typically called during initialization, and the GFP_ATOMIC is only needed for a very specific & rare case during TX completion. Allow callers to specify a gfp mask, so that the initialization path can select GFP_KERNEL. While at it, also clarify the function name.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
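The shape of the change, sketched with placeholder names: the caller picks the allocation context instead of the function hard-coding GFP_ATOMIC:

    #include <linux/slab.h>

    struct example_out_buf {
            void *sbal;
            /* ... */
    };

    static struct example_out_buf *example_alloc_out_buf(gfp_t gfp)
    {
            /* The init path passes GFP_KERNEL; the rare TX-completion path
             * passes GFP_ATOMIC.
             */
            return kzalloc(sizeof(struct example_out_buf), gfp);
    }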
-
- 10 March 2021, 4 commits
-
-
Submitted by Julian Wiedmann
The cited commit reworked the state machine for pending TX buffers. In qeth_iqd_tx_complete() it turned PENDING into a transient state, and uses NEED_QAOB for buffers that get parked while waiting for their QAOB completion. But it missed adjusting the check in qeth_tx_complete_buf(). So if qeth_tx_complete_pending_bufs() is called during teardown to drain the parked TX buffers, we no longer raise a notification for af_iucv.
Instead of updating the checked state, just move this code into qeth_tx_complete_pending_bufs() itself. This also gets rid of the special case in the common TX completion path.
Fixes: 8908f36d ("s390/qeth: fix af_iucv notification race")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
When a QAOB notifies us that a pending TX buffer has been delivered, the actual TX completion processing by qeth_tx_complete_pending_bufs() is done within the context of a TX NAPI instance. We shouldn't rely on this instance being scheduled by some other TX event, but just do it ourselves.
qeth_qdio_handle_aob() is called from qeth_poll(), ie. our main NAPI instance. To avoid touching the TX queue's NAPI instance before/after it is (un-)registered, reorder the code in qeth_open() and qeth_stop() accordingly.
Fixes: 0da9581d ("qeth: exploit asynchronous delivery of storage blocks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
The current design attaches a pending TX buffer to a custom single-linked list, which is anchored at the buffer's slot on the TX ring. The buffer is then checked for final completion whenever this slot is processed during a subsequent TX NAPI poll cycle.
But if there's insufficient traffic on the ring, we might never make enough progress to get back to this ring slot and discover the pending buffer's final TX completion - in particular when this missing TX completion blocks the application from sending further traffic.
So convert the custom single-linked list code to a per-queue list_head, and scan this list on every TX NAPI cycle.
Fixes: 0da9581d ("qeth: exploit asynchronous delivery of storage blocks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
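A sketch of the per-queue list_head pattern with placeholder names: pending buffers are appended when they get parked, and the whole list is scanned on every TX NAPI cycle, independent of ring position:

    #include <linux/list.h>

    struct example_tx_buf {
            struct list_head list_entry;
            /* ... buffer state ... */
    };

    struct example_tx_queue {
            struct list_head pending_bufs;  /* anchored per queue, not per ring slot */
    };

    static void example_queue_init(struct example_tx_queue *q)
    {
            INIT_LIST_HEAD(&q->pending_bufs);
    }

    static void example_park_pending(struct example_tx_queue *q,
                                     struct example_tx_buf *buf)
    {
            list_add_tail(&buf->list_entry, &q->pending_bufs);
    }

    static void example_scan_pending(struct example_tx_queue *q)
    {
            struct example_tx_buf *buf, *tmp;

            list_for_each_entry_safe(buf, tmp, &q->pending_bufs, list_entry) {
                    /* check for final completion; once done: */
                    list_del(&buf->list_entry);
            }
    }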
-
Submitted by Julian Wiedmann
When qeth_alloc_qdio_queues() fails to allocate one of the buffers that back an Output Queue, the 'out_freeoutqbufs' path will free all previously allocated buffers for this queue. But it misses freeing the half-finished queue struct itself.
Move the buffer allocation into qeth_alloc_output_queue(), and deal with such errors internally.
Fixes: 0da9581d ("qeth: exploit asynchronous delivery of storage blocks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
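A sketch of the self-contained error handling, with placeholder names and sizes: if a buffer allocation fails, the partially built queue - including the queue struct itself - is unwound before returning NULL:

    #include <linux/slab.h>

    #define EXAMPLE_NR_BUFS 128     /* illustrative ring size */

    struct example_queue {
            void *bufs[EXAMPLE_NR_BUFS];
    };

    static struct example_queue *example_alloc_output_queue(void)
    {
            struct example_queue *q;
            int i;

            q = kzalloc(sizeof(*q), GFP_KERNEL);
            if (!q)
                    return NULL;

            for (i = 0; i < EXAMPLE_NR_BUFS; i++) {
                    q->bufs[i] = kzalloc(64, GFP_KERNEL);
                    if (!q->bufs[i])
                            goto err_bufs;
            }
            return q;

    err_bufs:
            while (i-- > 0)
                    kfree(q->bufs[i]);
            kfree(q);       /* don't leak the half-finished queue struct */
            return NULL;
    }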
-
- 14 February 2021, 2 commits
-
-
Submitted by Julian Wiedmann
For non-QEBSM devices, get_buf_states() merges PENDING and EMPTY buffers into a single group of finished buffers. To allow the upper-layer driver to differentiate between the two states, qdio_check_pending() looks at each buffer's state again and sets the sbal_state flag to QDIO_OUTBUF_STATE_FLAG_PENDING accordingly.
So effectively we're spending overhead on _every_ Output Queue inspection, just to avoid some additional TX completion calls in case a group of buffers has completed with mixed EMPTY / PENDING state. Given that PENDING buffers should rarely occur, this is a bad trade-off - all the more so as the additional checks in get_buf_states() affect _all_ device types (even those that don't use the PENDING state).
Rip it all out, and just report the PENDING completions separately as we already do for QEBSM devices.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Submitted by Julian Wiedmann
For QEBSM devices the 'merge_pending' mechanism in get_buf_states() doesn't apply, and we can actually get SLSB_P_OUTPUT_PENDING returned. So for this case propagating the PENDING state to the driver via the queue's sbal_state doesn't make sense and creates unnecessary overhead.
Instead introduce a new QDIO_ERROR_* flag that gets passed to the driver, and triggers the same processing as if the buffers were flagged as QDIO_OUTBUF_STATE_FLAG_PENDING.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 29 January 2021, 4 commits
-
-
Submitted by Julian Wiedmann
Stop maintaining the skb_send_q list for TRANS_HIPER sockets. Not only is it extra overhead, but keeping around a list of skb clones means that we later also have to match the ->sk_txnotify() calls against these clones and free them accordingly.
The current matching logic (comparing the skbs' shinfo location) is frustratingly fragile, and breaks if the skb's head is mangled in any sort of way while passing from dev_queue_xmit() to the device's HW queue.
Also adjust the interface for ->sk_txnotify(), to make clear that we don't actually care about any skb internals.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann
When do_qdio() returns with an unexpected error, qeth_flush_buffers() kicks off a recovery action. In such a case there's no point in starting TX completion processing; the device gets torn down anyway. So take a closer look at do_qdio()'s return value, and skip the TX completion processing accordingly.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann
Replace our home-grown helper with the more robust vlan_get_protocol(). This is pretty much a 1:1 replacement; we just need to pass around a proper ETH_P_* everywhere and convert the old value range. For readability, also convert the protocol checks in qeth_l3_hard_start_xmit() to a switch statement.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
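A sketch of the replacement pattern: vlan_get_protocol() returns the encapsulated protocol for VLAN-tagged frames, and the transmit path can then switch on the ETH_P_* value (the function and return values here are illustrative):

    #include <linux/if_vlan.h>

    static int example_classify_xmit(struct sk_buff *skb)
    {
            __be16 proto = vlan_get_protocol(skb);

            switch (ntohs(proto)) {
            case ETH_P_IP:
                    return 4;       /* IPv4 handling */
            case ETH_P_IPV6:
                    return 6;       /* IPv6 handling */
            default:
                    return 0;       /* non-IP traffic */
            }
    }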
-
Submitted by Julian Wiedmann
We have two usage patterns: 1. get & ->setup() a new discipline, or 2. ->remove() & put the currently loaded one. Add corresponding helpers that hide the internals & error handling.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 8 January 2021, 2 commits
-
-
Submitted by Julian Wiedmann
Due to insufficient locking, qeth_core_set_online() and qeth_dev_layer2_store() can run in parallel, both attempting to load & set up the discipline (and stepping on each other's toes along the way). A similar race can also occur between qeth_core_remove_device() and qeth_dev_layer2_store().
Access to .discipline is meant to be protected by the discipline_mutex, so add/expand the locking in qeth_core_remove_device() and qeth_core_set_online(). Adjust the locking in qeth_l*_remove_device() accordingly, as it's now handled by the callers in a consistent manner.
Based on an initial patch by Ursula Braun.
Fixes: 9dc48ccc ("qeth: serialize sysfs-triggered device configurations")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Julian Wiedmann
When qeth_dev_layer2_store() - holding the discipline_mutex - waits inside qeth_l*_remove_device() for a qeth_do_reset() thread to complete, we can hit a deadlock if qeth_do_reset() concurrently calls qeth_set_online() and thus tries to acquire the discipline_mutex.
Move the discipline_mutex locking outside of qeth_set_online() and qeth_set_offline(), and turn the discipline into a parameter so that callers understand the dependency.
To fix the deadlock, we can now relax the locking: as already established, qeth_l*_remove_device() waits for qeth_do_reset() to complete. So qeth_do_reset() itself is under no risk of having card->discipline ripped out while it's running, and thus doesn't need to take the discipline_mutex.
Fixes: 9dc48ccc ("qeth: serialize sysfs-triggered device configurations")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
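A hedged sketch of the resulting locking shape, with placeholder names: sysfs/driver-core callers take the mutex and pass the discipline explicitly, while the recovery thread calls the unlocked helper directly:

    #include <linux/mutex.h>

    struct example_discipline;

    struct example_card {
            struct mutex discipline_mutex;
            struct example_discipline *discipline;
    };

    /* Lock-free core helper: the callers decide whether the mutex is needed. */
    static int example_set_online(struct example_card *card,
                                  struct example_discipline *disc)
    {
            /* ... bring the device online using @disc ... */
            return 0;
    }

    /* sysfs / driver-core path: serialize against discipline changes. */
    static int example_set_online_locked(struct example_card *card)
    {
            int rc;

            mutex_lock(&card->discipline_mutex);
            rc = example_set_online(card, card->discipline);
            mutex_unlock(&card->discipline_mutex);
            return rc;
    }

    /* Recovery path: the discipline cannot change underneath us, so no mutex. */
    static int example_recover(struct example_card *card)
    {
            return example_set_online(card, card->discipline);
    }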
-
- 7 December 2020, 3 commits
-
-
Submitted by Julian Wiedmann
When qeth_qdio_handle_aob() frees dangling allocations in the notified TX buffer, there are rare tear-down cases where qeth_drain_output_queue() would later call qeth_clear_output_buffer() for the same buffer - and thus end up walking the buffer a second time to check for dangling kmem_cache allocations.
Luckily the current code scrubs such a buffer beforehand, so qeth_clear_output_buffer() would find buf->buffer->element[i].addr to be NULL and not do anything. But this is fragile, and we can easily improve it by consistently clearing the ->is_header flag after freeing the allocation.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
Reuse the QETH_QDIO_BUF_EMPTY state to indicate that a TX buffer has been completed with a QAOB notification, and may be cleaned up by qeth_cleanup_handled_pending().
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Julian Wiedmann
For TX buffers that require an additional async notification via QAOB, the TX completion code can now manage all the necessary processing if the notification has already occurred (or is occurring concurrently). In such cases we can avoid replacing the metadata that is associated with the buffer's slot on the ring, and just keep using the current one.
As qeth_clear_output_buffer() will also handle any kmem cache-allocated memory that was mapped into the TX buffer, qeth_qdio_handle_aob() doesn't need to worry about it.
While at it, also remove the unneeded forward declaration for qeth_init_qdio_out_buf().
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-