- 28 June 2019, 7 commits
-
-
Committed by Julian Wiedmann

Now that all cmds are dynamically allocated, the code for static cmd buffers can go away entirely. This results in a nice reduction of code/data size & complexity, while removing the risk that qeth_clear_cmd_buffers() releases cmds that are still in-flight.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

The base MPC cmds are the last remaining user of the static cmd buffers. Port them over to use dynamic allocation, and stop backing the write channel's cmd buffers with pages.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

The VNICC code is somewhat quirky in that it defers the whole cmd setup to a common helper qeth_l2_vnicc_request(). Some of the cmd specifics are then passed in via parameter, while others are simply hard-coded. Split the whole machinery up into the usual format: one helper allocates the cmd & fills in the common fields, while the cmd originators take care of their sub-cmd type specific work. This makes it much easier to calculate the cmd's precise length, and reduces code complexity.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

Add a new wrapper that allocates DIAG cmds of the right size, and fills in the common fields.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

This patch converts the adapter, assist and bridgeport cmd paths to dynamic allocation. Most of the work is about re-organizing the cmd headers, calculating the correct cmd length, and filling in the right value in the sub-cmd's length field. Since we now also set the correct length for cmds that are not reflected by a fixed struct (i.e. SNMP), we can remove the work-around from qeth_snmp_command().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

For code that uses qeth_send_simple_setassparms_prot(), we currently can't differentiate whether the cmd should contain (1) no parameter, or (2) a 4-byte parameter with value 0. At the moment this doesn't cause any trouble. But when using dynamically allocated cmds, we need to know whether to allocate & transmit an additional 4 bytes of zeroes. So instead of the raw parameter value, pass a parameter pointer (or NULL) to qeth_send_simple_setassparms_prot().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
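A minimal sketch of the resulting calling convention, using simplified and partly hypothetical names (send_setassparms, alloc_setassparms_cmd, cmd_data, send_ipa_cmd) rather than the driver's real prototypes: the pointer tells the helper whether any parameter bytes need to be allocated and transmitted at all.

    /* NULL means "no parameter"; a non-NULL pointer means "transmit these 4 bytes",
     * even when their value is 0. All names below are illustrative. */
    static int send_setassparms(struct qeth_card *card, u16 cmd_code,
                                const u32 *param)
    {
        unsigned int data_len = param ? sizeof(*param) : 0;
        struct qeth_cmd_buffer *iob;

        iob = alloc_setassparms_cmd(card, cmd_code, data_len);
        if (!iob)
            return -ENOMEM;
        if (param)
            memcpy(cmd_data(iob), param, sizeof(*param));
        return send_ipa_cmd(card, iob);
    }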
-
Committed by Julian Wiedmann

This patch reduces the usage of the write channel's static cmd buffers, by dynamically allocating all simple IPA cmds (e.g. STARTLAN, SETVMAC). It also converts the OSN path. Doing so requires some changes to how we calculate the cmd length. Currently when building IPA cmds, we're quite generous in how much data we send down to the device (basically the size of the biggest cmd we know). This is no real concern at the moment, since the static cmd buffers are backed with zeroed pages. But for dynamic allocations, the exact length matters. So this patch also adds the needed length calculations to each cmd path. Commands that have multiple subtypes (e.g. SETADP) of differing length will be converted with follow-up patches.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 June 2019, 13 commits
-
-
Committed by Julian Wiedmann

We statically allocate 8 cmd buffers on the read channel, when the only IO left that's still using them is the long-running READ. Replace this with a single allocated cmd that gets restarted whenever the READ completes. This introduces refcounting for allocated cmds, so that the READ cmd can survive the IO completion.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
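A rough sketch of the refcounting pattern, with illustrative struct and helper names (not the driver's exact code): the long-running READ holds an extra reference, so the IO completion can drop its reference without freeing the buffer that is about to be restarted.

    struct cmd_buf {
        refcount_t ref_count;       /* set to 1 on allocation */
        /* ... channel, data, callback ... */
    };

    static void cmd_buf_get(struct cmd_buf *iob)
    {
        refcount_inc(&iob->ref_count);
    }

    static void cmd_buf_put(struct cmd_buf *iob)
    {
        if (refcount_dec_and_test(&iob->ref_count))
            kfree(iob);
    }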
-
Committed by Julian Wiedmann

The current IDX sequence first sends one WRITE cmd to activate the device, and then sends a second cmd that READs the response. Using qeth_alloc_cmd(), we can combine this into a single IO with two command-chained CCWs.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
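Roughly, the command chaining looks like the sketch below; the opcode names and the location of the CCWs are assumptions, only the CCW_FLAG_CC ("command chaining") flag and the struct ccw1 layout come from the s390 CIO interface. The channel executes the second (READ) CCW immediately after the first (WRITE) CCW completes, so a single IO request covers the whole exchange.

    struct ccw1 *ccw = iob->ccws;               /* illustrative: CCWs kept in the cmd buffer */

    ccw[0].cmd_code = WRITE_CMD_CODE;           /* illustrative opcode names */
    ccw[0].flags    = CCW_FLAG_CC;              /* chain to the next CCW */
    ccw[0].count    = write_len;
    ccw[0].cda      = (__u32)(unsigned long) write_data;

    ccw[1].cmd_code = READ_CMD_CODE;
    ccw[1].flags    = 0;
    ccw[1].count    = read_len;
    ccw[1].cda      = (__u32)(unsigned long) read_data;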
-
Committed by Julian Wiedmann

The RCD code is the last remaining IO path that doesn't use the qeth_send_control_data() infrastructure. Converting it allows us to remove all sorts of custom state machinery and logic in the IRQ handler. Instead of introducing statically allocated cmd buffers for this single IO on the data channel, use the new qeth_alloc_cmd() helper.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

qeth currently uses a fixed set of statically allocated cmd buffers for the read and write IO channels. This (1) doesn't play well with the single RCD cmd we need to issue on the data channel, (2) doesn't provide the necessary flexibility for certain IDX improvements, and (3) is also rather wasteful since the buffers are idle most of the time. Add a new type of cmd buffer that is dynamically allocated, and keeps its CCW chain in the DMA data area. Since this touches most callers of qeth_setup_ccw(), also add a new CCW flags parameter for future usage.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

Each cmd buffer maintains a pointer to the IO channel that it was/will be issued on. So when dealing with cmd buffers, we don't need to pass around a separate channel pointer.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

The vast majority of SETUP-classified trace entries can be moved to their device-specific trace file. This reduces pollution of the global SETUP file, and provides a consistent trace view of all activity on the device.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

OSN currently provides a custom code path to submit IPA cmds, without waiting for the cmd response. Replace it with qeth_send_ipa_cmd(), which uses the common qeth_send_control_data() IO infrastructure. By setting a custom iob->callback, we can now provide feedback to the caller about whether the cmd has been successfully submitted to HW. Since the callback then immediately wakes up the reply-waiter object, we maintain the old behaviour of returning early without waiting for the response.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

The basic MPC initialization sequence is strictly sequential, and waiting for an available cmd buffer should never be necessary. So this change only affects the OSN path, where dangling waiters on an unbounded wait_event() are not desirable. Switch to qeth_get_buffers(), and let OSN callers deal with -ENOMEM.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

When called from qeth_core_probe_device(), qeth_determine_capabilities() initializes the device's BLKT defaults. From all other callers, the ccw_device has already been set online and the BLKT setting is skipped. Clean this up by extracting the BLKT setting into a separate helper that gets called from the right place.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

The completion of a pending READ cmd is processed via qeth_issue_next_read_cb(). Let this callback also start the next READ cmd, instead of hardcoding that step into the IRQ handler. While at it, remove the check of the channel state; __qeth_issue_next_read() already does this.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

When the tear-down sequence in qeth_l?_stop_card() has finished, the card is guaranteed to be in DOWN state and we don't have to check for it again. With this insight we can also remove the redundant setting of card->state in qeth_l?_set_online()'s error path.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

Slightly reduce the complexity of the core xmit path, by replacing some open-coded logic with the corresponding helpers.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

Current code suppresses debug entries when a TX buffer completes in ERROR state with no error indication set in SBALF15. This was introduced back with commit 58490f18 ("qeth: HiperSockets SIGA retry support on CC=2."). But qeth no longer retries after CC=2, and this sort of suppression makes no sense anymore. Remove it.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 June 2019, 4 commits
-
-
Committed by Julian Wiedmann

netif_set_real_num_tx_queues() can return an error, deal with it.

Fixes: 73dc2daf ("s390/qeth: add TX multiqueue support for OSA devices")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
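A minimal sketch of the kind of handling meant here; netif_set_real_num_tx_queues() is the real kernel API (it returns 0 or a negative errno), while the surrounding variable names are assumptions.

    /* Propagate the error instead of silently ignoring the return value. */
    rc = netif_set_real_num_tx_queues(card->dev, count);
    if (rc)
        return rc;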
-
Committed by Alexandra Winter

Enabling the sysfs attribute bridge_hostnotify triggers a series of udev events for the MAC addresses of all currently connected peers. In case no VLAN is set for a peer, the device reports the corresponding MAC addresses with VLAN ID 4096. This currently results in attribute VLAN=4096 for all non-VLAN interfaces in the initial series of events after host-notify is enabled. Instead, no VLAN attribute should be reported in the udev event for non-VLAN interfaces.

Only the initial events face this issue. For dynamic changes that are reported later, the device uses a validity flag.

This also changes the code so that it now sets the VLAN attribute for MAC addresses with VID 0. On Linux, no qeth interface will ever be registered with VID 0: the Linux kernel registers VID 0 on all network interfaces initially, but qeth will drop .ndo_vlan_rx_add_vid for VID 0. Peers with other OSs could register MACs with VID 0.

Fixes: 9f48b9db ("qeth: bridgeport support - address notifications")
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

While qeth_l3 uses netif_keep_dst() to hold onto the dst, a skb's dst may still have been obsoleted (via dst_dev_put()) by the time that we end up using it. The dst then points to the loopback interface, which means the neighbour lookup in qeth_l3_get_cast_type() determines a bogus cast type of RTN_BROADCAST. For IQD interfaces this causes us to place such skbs on the wrong HW queue, resulting in TX errors. Fix up the various call sites to first validate the dst entry with dst_check(), and fall back accordingly.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
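The general pattern, as a simplified sketch: re-validate the cached dst before trusting it, and fall back to the information in the packet header when it has been obsoleted. dst_check() and skb_dst() are real kernel helpers; the helper name and the cookie value of 0 are simplifications here.

    static struct dst_entry *get_valid_dst(struct sk_buff *skb)
    {
        struct dst_entry *dst = skb_dst(skb);

        /* Returns the dst if it is still valid, or NULL if it was obsoleted. */
        return dst ? dst_check(dst, 0) : NULL;
    }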
-
Committed by Julian Wiedmann

When selecting the cast type of a neighbourless IPv4 skb (e.g. on a raw socket), qeth_l3 falls back to the packet's destination IP address. For this case we should classify traffic sent to 255.255.255.255 as broadcast. This fixes DHCP requests, which were misclassified as unicast (and for IQD interfaces thus ended up on the wrong HW queue).

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
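Conceptually, the fallback classification looks like this sketch (the helper name is an assumption; the predicates and RTN_* constants are standard kernel definitions):

    static int ipv4_cast_type(struct sk_buff *skb)
    {
        __be32 daddr = ip_hdr(skb)->daddr;

        if (ipv4_is_lbcast(daddr))      /* 255.255.255.255, e.g. DHCP requests */
            return RTN_BROADCAST;
        if (ipv4_is_multicast(daddr))
            return RTN_MULTICAST;
        return RTN_UNICAST;
    }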
-
- 29 April 2019, 2 commits
-
-
Committed by Sebastian Ott

ISM devices are special in how they access PCI memory space. Provide wrappers for handling commands to the device. No functional change.

Signed-off-by: Sebastian Ott <sebott@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Sebastian Ott

This is a preparation patch for usage of new PCI instructions. No functional change.

Signed-off-by: Sebastian Ott <sebott@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 26 April 2019, 8 commits
-
-
Committed by Julian Wiedmann

When building the L3 HW header for non-IP packets, trust the cast type that was passed as parameter. qeth_l3_get_cast_type() has most likely also used h_dest to determine the cast type, so we get the same result, and can remove that duplicated code. In the unlikely case that we would get a _different_ cast type, then that's based off a route lookup and should be considered authoritative.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

This de-duplicates the L2 and L3 cast-type code, and makes the L2 code a bit more robust by removing the fragile assumption that skb->data always points to the Ethernet header. That assumption would break in code paths where we have pushed the HW header onto the skb.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
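The robustness point, sketched with an assumed helper name: derive the cast type from the MAC header via eth_hdr() (which uses skb_mac_header()), so the result stays correct even after the driver has pushed its own HW header in front of skb->data.

    static int l2_cast_type(struct sk_buff *skb)
    {
        const struct ethhdr *eth = eth_hdr(skb);    /* MAC header, not skb->data */

        if (is_broadcast_ether_addr(eth->h_dest))
            return RTN_BROADCAST;
        if (is_multicast_ether_addr(eth->h_dest))
            return RTN_MULTICAST;
        return RTN_UNICAST;
    }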
-
Committed by Julian Wiedmann

The QETH_MAX_BUFFER_ELEMENTS() macro effectively returns a constant value. To avoid some redundant pointer chasing and computations in the xmit hot path, cache this value in the queue struct. Take this as an opportunity to shrink some of the queue struct's fields to their appropriate value range, slightly reducing its total size.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

On the first initialization of a queue, its Output Buffers are in a clean state with no attached resources. On every subsequent initialization, qeth_l?_stop_card() has previously put them in a clean state via qeth_drain_output_queues(). So the call to qeth_clear_output_buffer() is redundant and can be removed. While at it, move the initialization of the queue's card pointer into the queue allocation. It never changes afterwards.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

We have helper macros for all possible device types; replace all remaining open-coded accesses to the type fields.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

We don't keep track of Input Buffer states, so remove the comments that make it sound like the qeth_qdio_buffer_states enum applies to Input Buffers.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

It's unclear what exact purpose this seqno may have served in the past. But it's certainly not used anymore, as the following napi_gro_receive() will straight away clear this part of the cb again.

Suggested-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Arnd Bergmann

clang produces a harmless warning for each use of the qeth_adp_supported macro:

    drivers/s390/net/qeth_l2_main.c:559:31: warning: implicit conversion from enumeration type 'enum qeth_ipa_setadp_cmd' to different enumeration type 'enum qeth_ipa_funcs' [-Wenum-conversion]
            if (qeth_adp_supported(card, IPA_SETADP_SET_PROMISC_MODE))
                ~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~
    drivers/s390/net/qeth_core.h:179:41: note: expanded from macro 'qeth_adp_supported'
            qeth_is_ipa_supported(&c->options.adp, f)
            ~~~~~~~~~~~~~~~~~~~~~                  ^

Add a version of this macro that uses the correct types, and remove the unused qeth_adp_enabled() macro that has the same problem.

Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
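The shape of the fix, as a sketch with assumed struct members: giving both arguments a concrete type via a static inline helper removes the implicit conversion between unrelated enums that clang warns about.

    /* Illustrative only - the member names are assumptions. */
    static inline bool qeth_adp_supported(struct qeth_card *card,
                                          enum qeth_ipa_setadp_cmd func)
    {
        return card->options.adp.supported_funcs & func;
    }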
-
- 18 April 2019, 6 commits
-
-
Committed by Arnd Bergmann

clang points out that the return code from this function is undefined for one of the error paths:

    ../drivers/s390/net/ctcm_main.c:1595:7: warning: variable 'result' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
            if (priv->channel[direction] == NULL) {
                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../drivers/s390/net/ctcm_main.c:1638:9: note: uninitialized use occurs here
            return result;
                   ^~~~~~
    ../drivers/s390/net/ctcm_main.c:1595:3: note: remove the 'if' if its condition is always false
            if (priv->channel[direction] == NULL) {
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../drivers/s390/net/ctcm_main.c:1539:12: note: initialize the variable 'result' to silence this warning
            int result;
                ^

Make it return -ENODEV here, as in the related failure cases. gcc has a known bug in underreporting some of these warnings when it has already eliminated the assignment of the return code based on some earlier optimization step.

Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
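A sketch of the change described above; the surrounding function is abbreviated and the cleanup label is illustrative.

    if (priv->channel[direction] == NULL) {
        result = -ENODEV;       /* previously left uninitialized on this path */
        goto out;
    }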
-
Committed by Julian Wiedmann

Current xmit code only stops the txq after attempting to fill an IO buffer that hasn't been TX-completed yet. In many-connection scenarios, this can result in frequent rejected TX attempts, requeuing of skbs with NETDEV_TX_BUSY and extra overhead.

Now that we have a proper 1-to-1 relation between stack-side txqs and our HW Queues, overhaul the stop/wake logic so that the xmit code stops the txq as needed. Given that we might map multiple skbs into a single buffer, it's crucial to ensure that the queue always provides an _entirely_ empty IO buffer. Otherwise large skbs (e.g. TSO) might not fit into the last available buffer. So whenever qeth_do_send_packet() first utilizes an _empty_ buffer, it updates & checks the used_buffers count.

This now ensures that an skb passed to qeth_xmit() can always be mapped into an IO buffer, so remove all of the -EBUSY roll-back handling in the TX path. We preserve the minimal safety-checks ("Is this IO buffer really available?"), just in case some nasty future bug ever attempts to corrupt an in-use buffer.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
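The core of the new stop/wake logic, as a simplified sketch; the buffer-accounting details and the queue_has_empty_buffer() helper are assumptions, while the netif_tx_*() helpers and QDIO_MAX_BUFFERS_PER_Q are standard kernel/qdio definitions.

    /* xmit path: after filling a previously empty buffer, make sure at least one
     * completely empty buffer remains; otherwise stop the stack's txq. */
    if (atomic_read(&queue->used_buffers) >= QDIO_MAX_BUFFERS_PER_Q - 1)
        netif_tx_stop_queue(txq);

    /* TX completion path: once buffers have been released, wake the txq again. */
    if (netif_tx_queue_stopped(txq) && queue_has_empty_buffer(queue))
        netif_tx_wake_queue(txq);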
-
Committed by Julian Wiedmann

qeth_get_priority_queue() is no longer used for IQD devices, so remove the special-casing of their mcast queue. This effectively reverts commit 70deb016 ("qeth: omit outbound queue 3 for unicast packets in Priority Queueing on HiperSockets").

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

This adds trivial support for multiple TX queues on OSA-style devices (both real HW and z/VM NICs). For now we expose the driver's existing QoS mechanism via .ndo_select_queue, and adjust the number of available TX queues when qeth_update_from_chp_desc() detects that the HW configuration has changed.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julian Wiedmann

qeth has been supporting multiple HW Output Queues for a long time. But rather than exposing those queues to the stack, it uses its own queue selection logic in .ndo_start_xmit, with all the drawbacks that entails. Start off by switching IQD devices over to a proper mqs net_device, and converting all the netdev_queue management code.

One oddity with IQD devices is the requirement to place all mcast traffic on the _highest_ established HW queue. Doing so via .ndo_select_queue seems straight-forward - but that won't work if only some of the HW queues are active (i.e. when dev->real_num_tx_queues < dev->num_tx_queues), since netdev_cap_txqueue() will not allow us to put skbs on the higher queues.

To make this work, we
1. let .ndo_select_queue() map all mcast traffic to netdev_queue 0, and
2. later re-map the netdev_queue and HW queue indices in .ndo_start_xmit and the TX completion handler.

With this patch we default to a fixed set of 1 ucast and 1 mcast queue. Support for dynamic reconfiguration is added at a later time.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
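One way to picture the re-mapping step, purely as an illustration (the convention that txq 0 carries mcast and the helper name are assumptions): the stack's txq 0 and the highest established HW queue simply swap places, so mcast traffic always ends up on the highest queue.

    static u16 iqd_translate_txq(struct net_device *dev, u16 txq)
    {
        if (txq == 0)                               /* mcast txq by convention */
            return dev->real_num_tx_queues - 1;     /* highest established HW queue */
        if (txq == dev->real_num_tx_queues - 1)
            return 0;
        return txq;
    }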
-
Committed by Julian Wiedmann

struct netdev_queue contains a counter for TX timeouts, which gets updated by dev_watchdog(). So let's not attempt to maintain our own statistics, in particular not by overloading the skb-error counter.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-