- 10 Aug 2018, 3 commits

Submitted by Julian Wiedmann

Allocating the main qeth_card struct with GFP_DMA blocks us from moving it into netdev_priv(). But the only reason we need DMA memory is the ccw1 structs embedded into each ccw channel. So extract those into separate allocations, like we already do for the cmd buffers.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

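A minimal sketch of the idea in C, assuming a qeth_channel that previously embedded its ccw1 directly (field layout and error handling simplified):

    /* Keep only a pointer in the channel, and give the ccw1 its own
     * GFP_DMA allocation - the enclosing card struct then no longer
     * needs to live in DMA memory. */
    struct qeth_channel {
            struct ccw_device *ccwdev;
            struct ccw1 *ccw;       /* allocated separately, in DMA memory */
            /* ... */
    };

    static int qeth_setup_channel(struct qeth_channel *channel)
    {
            channel->ccw = kmalloc(sizeof(struct ccw1),
                                   GFP_KERNEL | GFP_DMA);
            return channel->ccw ? 0 : -ENOMEM;
    }
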
Submitted by Julian Wiedmann

Where possible, use accessor macros and local pointers to access the ccw channels. This makes it less likely to miss a spot.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

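For illustration, the accessor style looks roughly like this (definitions modeled on qeth_core.h; treat the exact shape as an assumption):

    #define CARD_RDEV(card) card->read.ccwdev      /* READ channel */
    #define CARD_WDEV(card) card->write.ccwdev     /* WRITE channel */
    #define CARD_DDEV(card) card->data.ccwdev      /* DATA channel */

    /* ...and a local pointer avoids repeating the full path: */
    struct qeth_channel *channel = &card->write;
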
Submitted by Julian Wiedmann

Just a little code deduplication.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 22 Jul 2018, 8 commits

Submitted by Julian Wiedmann

Modify the L2 OSA xmit path so that it also supports L2 IQD devices (in particular, their HW header requirements). This allows IQD devices to advertise NETIF_F_SG support, and eliminates the allocation overhead for the HW header.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

Some transmit modes require that the HW header is located in the same page as the initial protocol headers in skb->data. Let callers specify the size of this contiguous header range, and enforce it when building the HW header.

While at it, apply some gentle renaming to the relevant L2 code so that it matches the L3 code.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

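A sketch of the enforcement, assuming the contiguous range (HW header plus initial protocol headers) must fit into a single page; the helper name and exact check are illustrative:

    static bool qeth_hdr_fits_one_page(struct sk_buff *skb,
                                       unsigned int hdr_len,
                                       unsigned int range)
    {
            /* start of the pushed HW header */
            unsigned long start = (unsigned long)skb->data - hdr_len;

            /* [start, start + range) must not cross a page boundary */
            return (start & PAGE_MASK) ==
                   ((start + range - 1) & PAGE_MASK);
    }
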
Submitted by Julian Wiedmann

When checking whether an skb needs to be linearized to fit into an IO buffer, it's desirable to consider the skb's final size and layout (ie. after the HW header was added). But a subsequent linearization can then cause the re-positioned HW header to violate its alignment restrictions. Dealing with this situation in two different code paths is quite tricky.

This patch integrates a) the linearize check and b) the HW header construction into one 3-step sequence (sketched in code below):
1. Evaluate how the HW header needs to be added (to identify if it takes up an additional buffer element), then
2. check if the required buffer elements exceed the device's limit. Linearize when necessary, and re-evaluate the HW header placement.
3. Add the HW header in the best-possible way:
   a) push, without taking up an additional buffer element,
   b) push, but consume another buffer element, or
   c) allocate a header object from the cache.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

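Expressed as hedged C pseudocode (helper names hypothetical, error paths trimmed), the sequence reads roughly as:

    /* Step 1: evaluate the header placement; extra_elem reports whether
     * the HW header would occupy a buffer element of its own. */
    push_len = qeth_eval_hdr_placement(skb, &extra_elem);

    /* Step 2: enforce the device's element limit, linearizing if needed. */
    if (qeth_count_elements(skb) + extra_elem > max_elements) {
            if (skb_linearize(skb))
                    goto tx_drop;
            /* linearizing moved skb->data, so re-evaluate the placement */
            push_len = qeth_eval_hdr_placement(skb, &extra_elem);
    }

    /* Step 3: add the header - push into existing headroom if possible
     * (3a/3b), else fall back to a header object from the cache (3c). */
    hdr = push_len ? skb_push(skb, push_len) :
                     kmem_cache_alloc(qeth_core_header_cache, GFP_ATOMIC);
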
Submitted by Julian Wiedmann

Nowadays an skb fragment typically spans multiple pages. So replace the obsolete, SG-only 'fragments' counter with one that tracks the consumed buffer elements. This is what actually matters for performance.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

qeth's ndo_change_mtu() only applies some trivial bounds checking. Set up dev->min_mtu properly, so that dev_set_mtu() can do this for us.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

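The mechanism, sketched (the concrete bounds are device-specific; these values are only illustrative):

    /* With min_mtu/max_mtu populated, dev_set_mtu() performs the range
     * check itself and the trivial ndo_change_mtu() can go away. */
    card->dev->min_mtu = 576;          /* illustrative lower bound */
    card->dev->max_mtu = ETH_MAX_MTU;  /* refined once HW data is known */
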
Submitted by Julian Wiedmann

When the MPC initialization code discovers the HW-specific max MTU, apply the resulting changes straight to the netdevice. If this is the device's first initialization, also set its MTU (HiperSockets: the max MTU; else: a layer-specific default value). Then cap the current MTU by the new max MTU.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

Allocation of the netdevice is currently delayed until a qeth card first goes online. This complicates matters in several places, where we need to cache values instead of applying them straight to the netdevice.

Improve on this by moving the allocation up to where the qeth card itself is created. This is also one step in the direction of eventually placing the qeth card into netdev_priv(). In all subsequent code, remove the now-redundant checks whether card->dev is valid.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

netif_carrier_off() does its own checking.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

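The redundancy this removes, sketched:

    /* before: */
    if (netif_carrier_ok(card->dev))
            netif_carrier_off(card->dev);

    /* after - netif_carrier_off() already tests the carrier flag
     * internally (test_and_set_bit), so the guard adds nothing: */
    netif_carrier_off(card->dev);
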
- 13 Jul 2018, 3 commits

Submitted by Julian Wiedmann

Remove some redundant EXPORTs. While at it, also move some L2-only prototypes into the proper header file.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

Consolidate duplicated code, fix the misuse of RTN_UNSPEC and simplify the handling of non-unicast traffic on IQD devices.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

Changing a device's address lists (or its promisc mode) already triggers an RX modeset; there's no need to do it manually from the L2 driver's ndo_vlan_rx_kill_vid() hook.

Also, when setting a device online, dev_open() already calls dev_set_rx_mode(). So a manual modeset is only necessary from the recovery path.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 30 Jun 2018, 3 commits

Submitted by Julian Wiedmann

commit e830baa9 ("qeth: restore device features after recovery") and commit ce344356 ("s390/qeth: rely on kernel for feature recovery") made sure that the HW functions for device features get re-programmed after recovery.

But we missed that the same handling is also required when a card is first set offline (destroying all HW context) and then online again. Fix this by moving the re-enable action out of the recovery-only path.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Vasily Gorbik

The *ether_addr*_64bits functions were introduced to optimize performance-critical paths, which access the 6-byte ethernet address as a u64 value to get "nice" assembly. A harmless hack that works nicely on ethernet addresses shoved into a structure or a larger buffer, until busted by KASAN on something like a plain (u8 *)[6].

qeth_l2_set_mac_address calls qeth_l2_remove_mac passing u8 old_addr[ETH_ALEN] as an argument. Adding/removing MACs for an ethernet adapter is not that performance-critical. Moreover, is_multicast_ether_addr_64bits itself is not faster than is_multicast_ether_addr on s390:

    is_multicast_ether_addr(%r2) -> %r2
            llc     %r2,0(%r2)
            risbg   %r2,%r2,63,191,0

    is_multicast_ether_addr_64bits(%r2) -> %r2
            llgc    %r2,0(%r2)
            risbg   %r2,%r2,63,191,0

So, let's just use is_multicast_ether_addr instead of is_multicast_ether_addr_64bits.

Fixes: bcacfcbc ("s390/qeth: fix MAC address update sequence")
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

When qeth_l2_set_mac_address() finds the card in a non-reachable state, it merely copies the new MAC address into dev->dev_addr, so that __qeth_l2_set_online() can later register it with the HW.

But __qeth_l2_set_online() may very well be running concurrently, so we can't trust the card state without appropriate locking: if the online sequence is past the point where it registers dev->dev_addr (but not yet in SOFTSETUP state), any address change needs to be properly programmed into the HW. Otherwise the netdevice ends up with a different MAC address than what's set in the HW, and inbound traffic is not forwarded as expected.

This is most likely to occur for OSD in LPAR, where commit 21b1702a ("s390/qeth: improve fallback to random MAC address") now triggers eg. systemd to immediately change the MAC when the netdevice is registered with a NET_ADDR_RANDOM address.

Fixes: bcacfcbc ("s390/qeth: fix MAC address update sequence")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

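One way to close such a race, sketched under the assumption that the driver owns a config mutex shared with the online path and a reachability helper (names assumed, not the patch's literal code):

    static int qeth_l2_set_mac_address(struct net_device *dev, void *p)
    {
            struct qeth_card *card = dev->ml_priv;  /* layout assumed */
            struct sockaddr *addr = p;
            int rc = 0;

            mutex_lock(&card->conf_mutex);  /* serializes vs. set_online */
            if (!qeth_card_hw_is_reachable(card))
                    /* the online path registers dev_addr later */
                    ether_addr_copy(dev->dev_addr, addr->sa_data);
            else
                    rc = qeth_l2_update_mac(card, addr->sa_data); /* hypothetical */
            mutex_unlock(&card->conf_mutex);
            return rc;
    }
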
- 28 Apr 2018, 7 commits

Submitted by Julian Wiedmann

If READ MAC fails to fetch a valid MAC address, allow some more device types (IQD and z/VM OSD) to fall back to a random address. Also use eth_hw_addr_random(), to indicate to userspace that the address type is NET_ADDR_RANDOM.

Note that while z/VM has various protection schemes to prohibit custom addresses on its NICs, they are all optional. So we should at least give it a try.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

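The fallback in sketch form (the READ MAC wrapper's name is an assumption; the two helpers around it are standard netdev API):

    if (qeth_read_mac(card) || !is_valid_ether_addr(card->dev->dev_addr))
            /* also marks the address as NET_ADDR_RANDOM for userspace */
            eth_hw_addr_random(card->dev);
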
Submitted by Kittipon Meesompop

Check if a qeth device supports IPv6 RX checksum offload, and hook it up into the existing NETIF_F_RXCSUM support. As NETIF_F_RXCSUM is now backed by a combination of HW Assists, we need to be a little smarter when dealing with errors during a configuration change:
- switching on NETIF_F_RXCSUM only makes sense if at least one HW Assist was enabled successfully.
- for switching off NETIF_F_RXCSUM, all available HW Assists need to be deactivated.

Signed-off-by: Kittipon Meesompop <kmeesomp@linux.vnet.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

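Those rules could look roughly like this (the assist helper is an assumption; QETH_PROT_* mirrors the driver's protocol constants):

    /* enabling: at least one assist must come up */
    rc_v4 = qeth_set_rx_csum(card, QETH_PROT_IPV4, true);
    rc_v6 = qeth_set_rx_csum(card, QETH_PROT_IPV6, true);
    if (rc_v4 && rc_v6)
            return -EIO;            /* keep NETIF_F_RXCSUM cleared */

    /* disabling: every available assist must go down */
    rc_v4 = qeth_set_rx_csum(card, QETH_PROT_IPV4, false);
    rc_v6 = qeth_set_rx_csum(card, QETH_PROT_IPV6, false);
    if (rc_v4 || rc_v6)
            return -EIO;
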
Submitted by Kittipon Meesompop

Check if a qeth device supports IPv6 TX checksum offload, and advertise NETIF_F_IPV6_CSUM accordingly. Add support for setting the relevant bits in IPv6 packet descriptors.

Currently this has only limited use (ie. UDP, or Jumbo Frames). For any TCP traffic with a standard MSS, the TCP checksum gets calculated as part of the linear GSO segmentation.

Signed-off-by: Kittipon Meesompop <kmeesomp@linux.vnet.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

This consolidates the checksum offload code that was duplicated over the two qeth subdrivers.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Ursula Braun

struct net_device contains a dev_port field. Store the OSA port number in this field.

Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

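In practice this is a one-liner (the qeth-side field name is assumed); the value then shows up in sysfs as /sys/class/net/<dev>/dev_port:

    card->dev->dev_port = card->info.portno;
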
Submitted by Julian Wiedmann

As the vid_list is only accessed from process context, there's no need to protect it with a spinlock.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

Both qeth subdrivers use the same QDIO queue handlers, so there's no need to expose them via the driver's discipline. No functional change.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 23 Apr 2018, 2 commits

Submitted by Julian Wiedmann

Submitting a cmd IO request (usually on the WRITE device, but for IDX also on the READ device) is currently done with ccw_device_start() and a manual timeout in the caller. On timeout, the caller cleans up the related resources (eg. the IO buffer). But 1) the IO might still be active and utilizing those resources, and 2) when the IO completes, qeth_irq() will attempt to clean up the same resources again.

Instead of introducing additional resource locking, switch to ccw_device_start_timeout() to ensure IO termination after the timeout, and let the IRQ handler alone deal with cleaning up after a request.

This also removes a stray write->irq_pending reset from clear_ipacmd_list(). The routine doesn't terminate any pending IO on the WRITE device, so this should be handled properly via the IO timeout in the IRQ handler.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

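The call itself, sketched with illustrative arguments; apart from the trailing 'expires' parameter (in jiffies), the signature matches ccw_device_start(), and the timeout constant is assumed:

    /* the CIO layer terminates the IO for us if it doesn't complete
     * within QETH_TIMEOUT: */
    rc = ccw_device_start_timeout(channel->ccwdev, ccw,
                                  (unsigned long)iob,  /* intparm */
                                  0, 0,                /* lpm, flags */
                                  QETH_TIMEOUT);
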
Submitted by Julian Wiedmann

When changing the MAC address on an L2 qeth device, the current code first unregisters the old address, then registers the new one. If the HW rejects the new address (or the IO fails), the device ends up with no operable address at all.

Re-order the code flow so that the old address only gets dropped if the new address was registered successfully. While at it, add logic to catch some corner cases.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

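The re-ordered flow in sketch form (qeth_l2_remove_mac is referenced elsewhere in this log; the write helper's name is an assumption):

    /* register the new address first... */
    rc = qeth_l2_write_mac(card, addr->sa_data);
    if (rc)
            return rc;      /* on failure the old address stays intact */

    /* ...and only then retire the old one */
    qeth_l2_remove_mac(card, old_addr);
    ether_addr_copy(dev->dev_addr, addr->sa_data);
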
- 16 Apr 2018, 1 commit

Submitted by Vasily Gorbik

Remove a couple of duplicate includes, found by "make includecheck". That leaves one duplicate include in arch/s390/kernel/entry.S, which is there for a reason (it includes the generated asm/syscall_table.h twice).

Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

- 22 Mar 2018, 1 commit

Submitted by Julian Wiedmann

On removal, a qeth card's netdevice is currently not properly freed, because the call chain looks as follows:

    qeth_core_remove_device(card)
            lx_remove_device(card)
                    unregister_netdev(card->dev)
                    card->dev = NULL            !!!
    qeth_core_free_card(card)
            if (card->dev)                      !!!
                    free_netdev(card->dev)

Fix it by freeing the netdev straight after unregistering. This also fixes the sysfs-driven layer switch case (qeth_dev_layer2_store()), where the need to free the current netdevice was not considered at all.

Note that free_netdev() takes care of the netif_napi_del() for us too.

Fixes: 4a71df50 ("qeth: new qeth device driver")
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Reviewed-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

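The fixed ordering, per the description above (sketch):

    unregister_netdev(card->dev);
    free_netdev(card->dev);   /* also runs netif_napi_del() */
    card->dev = NULL;
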
- 10 Mar 2018, 6 commits

Submitted by Julian Wiedmann

init_qdio_queues() resets the Input Queue's overall QDIO state, and positions the buffer cursor back to 0. So this is the obvious place to also reset the queue's NAPI context (in contrast to doing it rather randomly in the middle of the big set_online() path). No functional change.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

Newly-allocated skbs default to PACKET_HOST, and eth_type_trans() is smart enough to determine any other packet type from the frame's destination address. So except for the IQD sniffer case, there is no need to set up skb->pkt_type manually.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

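For reference, the receive-path pattern that makes the manual assignment redundant:

    /* eth_type_trans() inspects the destination MAC and sets
     * skb->pkt_type to PACKET_BROADCAST / PACKET_MULTICAST /
     * PACKET_OTHERHOST by itself where needed: */
    skb->protocol = eth_type_trans(skb, card->dev);
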
Submitted by Julian Wiedmann

napi_alloc_skb() doesn't need to disable IRQs during the allocation, and thus may save us a few cycles. Doing so requires a small fix-up in the HiperTransport path, which currently assumes a fixed NET_SKB_PAD headroom padding. napi_alloc_skb() adds an additional NET_IP_ALIGN padding, so use the proper helper for setting up the mac_header offset.

Use this opportunity to convert the non-NAPI path to netdev_alloc_skb(), which means that skb->dev is now always set up during allocation and doesn't need to be assigned manually.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

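Sketch of the two allocation paths (the length variable is illustrative):

    /* NAPI context: no IRQ-save allocation; headroom is
     * NET_SKB_PAD + NET_IP_ALIGN, not just NET_SKB_PAD */
    skb = napi_alloc_skb(&card->napi, linear_len);

    /* non-NAPI context: netdev_alloc_skb() reserves NET_SKB_PAD and
     * sets skb->dev during allocation */
    skb = netdev_alloc_skb(card->dev, linear_len);
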
Submitted by Julian Wiedmann

qeth implements HW-based unicast filtering (via SETVMAC) on L2 devices. Tell the stack, so it knows that receiving traffic for secondary addresses doesn't require full-blown promiscuous mode.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

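The signal to the core is a single priv flag (that this is where qeth sets it is an assumption; the flag itself is the stack's standard mechanism):

    /* secondary unicast addresses can be HW-filtered, no need for
     * the core to fall back to promiscuous mode: */
    card->dev->priv_flags |= IFF_UNICAST_FLT;
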
Submitted by Julian Wiedmann

NETIF_F_SG support is currently limited to OSA (and for L2, even OSD) devices. Advertise it for some more device types (OSM, L2 OSX, z/VM OSA) that share the same code paths. For now, keep it switched off by default on these devices.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

"s390/qeth: fix SETIP command handling" introduced a new helper; apply it driver-wide.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 21 Dec 2017, 4 commits

Submitted by Julian Wiedmann

Once all of qeth_l3_set_rx_mode()'s single-use helpers are folded back in, the two implementations actually look quite similar. So improve the readability by converting both set_rx_mode() routines to a common format. This also allows us to walk the ip_mc_htable just once, instead of three times.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

Be a little more self-documenting, and get rid of OSA_ADDR_LEN.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

For adding/removing a MAC address, use just one helper each that handles both unicast and multicast. This saves one level of indirection for multicast addresses, while improving the error reporting for unicast addresses.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

Instead of tracking the uc/mc state in each MAC address object, just check the multicast bit in the address itself.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 03 Dec 2017, 2 commits

Submitted by Julian Wiedmann

The current GSO skb size limit was copy&pasted over from the L3 path, where it is needed due to a TSO limitation. As L2 devices don't offer TSO support (and thus all GSO skbs are segmented before they reach the driver), there's no reason to restrict the stack in how large it may build the GSO skbs.

Fixes: d52aec97 ("qeth: enable scatter/gather in layer 2 mode")
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Submitted by Julian Wiedmann

Using GSO with small MTUs currently results in a substantial throughput regression, which is caused by how qeth needs to map non-linear skbs into its IO buffer elements: compared to a linear skb, each GSO-segmented skb effectively consumes twice as many buffer elements (ie. two instead of one) due to the additional header-only part. This causes the Output Queue to be congested with low-utilized IO buffers.

Fix this as follows: if the MSS is low enough so that a non-SG GSO segmentation produces order-0 skbs (currently ~3500 bytes), opt out of NETIF_F_SG. This is where we anticipate the biggest savings, since an SG-enabled GSO segmentation produces skbs that always consume at least two buffer elements.

Larger MSS values continue to get an SG-enabled GSO segmentation, since 1) the relative overhead of the additional header-only buffer element becomes less noticeable, and 2) the linearization overhead increases.

With the throughput regression fixed, re-enable NETIF_F_SG by default to reap the significant CPU savings of GSO.

Fixes: 5722963a ("qeth: do not turn on SG per default")
Reported-by: Nils Hoppmann <niho@de.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

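Illustrative arithmetic behind the ~3500 byte figure (not the driver's literal code): a non-SG GSO segment is one linear allocation, and staying order-0 means that headroom, data and the trailing struct skb_shared_info must fit into PAGE_SIZE:

    static bool qeth_sg_worthwhile(unsigned int mss, unsigned int hdr_len)
    {
            unsigned int linear = NET_SKB_PAD + hdr_len + mss;

            /* advertise NETIF_F_SG only if a non-SG segment would
             * outgrow an order-0 page anyway */
            return SKB_DATA_ALIGN(linear) +
                   SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) >
                   PAGE_SIZE;
    }
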