1. 25 Dec 2019, 1 commit
    • s390/qeth: fix qdio teardown after early init error · 8b5026bc
      Authored by Julian Wiedmann
      qeth_l?_set_online() goes through a number of initialization steps, and
      on any error uses qeth_l?_stop_card() to tear down the residual state.
      
      The first initialization step is qeth_core_hardsetup_card(). When this
      fails after having established a QDIO context on the device
      (i.e. somewhere after qeth_mpc_initialize()), qeth_l?_stop_card() doesn't
      shut down this QDIO context again (since the card state hasn't
      progressed from DOWN at this stage).
      
      Even worse, we then call qdio_free() as the final teardown step to free
      the QDIO data structures, while some of them are still hooked into wider
      QDIO infrastructure such as the IRQ list. This is inevitably followed by
      use-after-frees and other nastiness.
      
      Fix this by unconditionally calling qeth_qdio_clear_card() to shut down
      the QDIO context, and also to halt/clear any pending activity on the
      various IO channels.
      Remove the naive attempt at handling the teardown in
      qeth_mpc_initialize(); it clearly doesn't suffice, and the wider
      teardown code now handles this properly.
      
      Fixes: 4a71df50 ("qeth: new qeth device driver")
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
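      To illustrate the teardown ordering described in the commit above, here
      is a minimal userspace sketch, not the driver code itself; all names
      (fake_card, clear_qdio_context, free_qdio_structures, stop_card) are
      hypothetical and only mirror the roles of qeth_qdio_clear_card() and
      qdio_free(): once a QDIO context exists, teardown must clear it before
      the backing structures are freed, regardless of the card state.

        /* Minimal userspace model of the fixed teardown ordering. */
        #include <stdbool.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct fake_card {
            bool qdio_established;  /* set once the QDIO context is up */
            int *qdio_buffers;      /* stands in for the QDIO data structures */
        };

        static void clear_qdio_context(struct fake_card *card)
        {
            /* The fix: called unconditionally, regardless of card state. */
            card->qdio_established = false;
            printf("QDIO context cleared\n");
        }

        static void free_qdio_structures(struct fake_card *card)
        {
            /* Only safe after the context has been cleared. */
            if (card->qdio_established) {
                fprintf(stderr, "BUG: freeing structures still in use\n");
                exit(1);
            }
            free(card->qdio_buffers);
            card->qdio_buffers = NULL;
        }

        static void stop_card(struct fake_card *card)
        {
            clear_qdio_context(card);   /* no card-state check here */
            free_qdio_structures(card);
        }

        int main(void)
        {
            struct fake_card card = {
                .qdio_established = true,
                .qdio_buffers = malloc(16 * sizeof(int)),
            };

            /* Pretend hardsetup failed after the QDIO context was set up;
             * teardown must still clear that context before freeing. */
            stop_card(&card);
            return 0;
        }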
  2. 21 Dec 2019, 1 commit
    • s390/qeth: fix promiscuous mode after reset · 0f399305
      Authored by Julian Wiedmann
      When managing the promiscuous mode during an RX modeset, qeth caches the
      current HW state to avoid repeated programming of the same state on each
      modeset.
      
      But while tearing down a device, we forget to clear the cached state. So
      when the device is later set online again, the initial RX modeset
      doesn't program the promiscuous mode since we believe it is already
      enabled.
      Fix this by clearing the cached state in the teardown path.
      
      Note that for the SBP variant of promiscuous mode, this accidentally
      works right now because we unconditionally restore the SBP role while
      re-initializing.
      
      Fixes: 4a71df50 ("qeth: new qeth device driver")
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
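      For illustration only, a small userspace model of the cached-state
      problem fixed in the commit above (hypothetical names, not the driver
      code): if the cached HW state is not reset on teardown, the next RX
      modeset wrongly skips programming promiscuous mode.

        #include <stdbool.h>
        #include <stdio.h>

        struct fake_card {
            bool hw_promisc;    /* what the driver believes the HW is set to */
            bool want_promisc;  /* what the stack currently requests */
        };

        static void rx_modeset(struct fake_card *card)
        {
            /* Only program the HW when the cached state differs. */
            if (card->want_promisc == card->hw_promisc)
                return;
            printf("programming HW promisc = %d\n", card->want_promisc);
            card->hw_promisc = card->want_promisc;
        }

        static void set_offline(struct fake_card *card)
        {
            /* The fix: forget the cached HW state on teardown; without this
             * reset, the modeset after the next set_online would be skipped. */
            card->hw_promisc = false;
        }

        int main(void)
        {
            struct fake_card card = { .want_promisc = true };

            rx_modeset(&card);   /* programs promiscuous mode */
            set_offline(&card);  /* HW config is gone; cache must be cleared */
            rx_modeset(&card);   /* now correctly programs it again */
            return 0;
        }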
  3. 06 Dec 2019, 1 commit
    • s390/qeth: fix dangling IO buffers after halt/clear · f9e50b02
      Authored by Julian Wiedmann
      The cio layer's intparm logic does not align itself well with how qeth
      manages cmd IOs. When an active IO gets terminated via halt/clear, the
      corresponding IRQ's intparm does not reflect the cmd buffer but rather
      the intparm that was passed to ccw_device_halt() / ccw_device_clear().
      This behaviour was recently clarified in
      commit b91d9e67 ("s390/cio: fix intparm documentation").
      
      As a result, qeth_irq() currently doesn't cancel a cmd that was
      terminated via halt/clear. This primarily causes us to leak
      card->read_cmd after the qeth device is removed, since our IO path still
      holds a refcount for this cmd.
      
      For qeth this means that we need to keep track of which IO is pending on
      a device ('active_cmd'), and use this as the intparm when calling
      halt/clear. Otherwise qeth_irq() can't match the subsequent IRQ to its
      cmd buffer.
      Since we now keep track of the _expected_ intparm, we can also detect
      any mismatch; this would constitute a bug somewhere in the lower layers.
      In this case, cancel the active cmd, since we effectively "lost" the IRQ
      and should not expect any further notification for this IO.
      
      Fixes: 40554895 ("s390/qeth: add support for dynamically allocated cmds")
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
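      A minimal userspace sketch of the matching scheme described in the
      commit above; names such as fake_channel and halt_io are hypothetical,
      and ccw_device_halt()/ccw_device_clear() are only mimicked by a stub.
      The active cmd is remembered, passed as the intparm when terminating
      the IO, and compared against the intparm reported by the interrupt.

        #include <stdio.h>

        struct fake_cmd { int id; };

        struct fake_channel {
            struct fake_cmd *active_cmd;  /* IO currently pending on the device */
        };

        /* Stand-in for halt/clear: the intparm handed in here is what the
         * subsequent interrupt will report. */
        static unsigned long halt_io(struct fake_channel *ch)
        {
            return (unsigned long)ch->active_cmd;
        }

        static void irq_handler(struct fake_channel *ch, unsigned long intparm)
        {
            if (!ch->active_cmd)
                return;  /* no IO pending, nothing to match */
            if ((unsigned long)ch->active_cmd != intparm) {
                /* We effectively "lost" the IRQ for the active cmd; cancel it
                 * instead of waiting for a notification that won't come. */
                printf("intparm mismatch, cancelling cmd %d\n",
                       ch->active_cmd->id);
            } else {
                printf("completing cmd %d\n", ch->active_cmd->id);
            }
            ch->active_cmd = NULL;
        }

        int main(void)
        {
            struct fake_cmd cmd = { .id = 1 };
            struct fake_channel ch = { .active_cmd = &cmd };

            /* Matched IRQ after halt/clear terminates the active IO. */
            unsigned long intparm = halt_io(&ch);
            irq_handler(&ch, intparm);

            /* Unexpected intparm: cancel the active cmd instead of leaking it. */
            ch.active_cmd = &cmd;
            irq_handler(&ch, 0);
            return 0;
        }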
  4. 15 Nov 2019, 7 commits
  5. 01 Nov 2019, 3 commits
  6. 25 Aug 2019, 1 commit
  7. 21 Aug 2019, 1 commit
  8. 28 Jun 2019, 9 commits
  9. 14 Jun 2019, 3 commits
  10. 06 Jun 2019, 2 commits
    • s390/qeth: check dst entry before use · 0cd6783d
      Authored by Julian Wiedmann
      While qeth_l3 uses netif_keep_dst() to hold onto the dst, a skb's dst
      may still have been obsoleted (via dst_dev_put()) by the time that we
      end up using it. The dst then points to the loopback interface, which
      means the neighbour lookup in qeth_l3_get_cast_type() determines a bogus
      cast type of RTN_BROADCAST.
      For IQD interfaces this causes us to place such skbs on the wrong
      HW queue, resulting in TX errors.
      
      Fix up the various call sites to first validate the dst entry with
      dst_check(), and fall back accordingly.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
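      As a sketch of the validate-before-use pattern from the commit above,
      here is a small userspace model; fake_dst and check_dst are hypothetical
      names, with check_dst only mirroring the role of dst_check(). A possibly
      obsoleted entry is validated first, and the cast type falls back to an
      address-based decision otherwise.

        #include <stdbool.h>
        #include <stdio.h>

        enum cast_type { CAST_UNICAST, CAST_BROADCAST };

        struct fake_dst {
            bool obsolete;              /* set when the route was invalidated */
            enum cast_type neigh_cast;  /* what the neighbour entry says */
        };

        /* Mirrors the role of dst_check(): return the entry only while valid. */
        static struct fake_dst *check_dst(struct fake_dst *dst)
        {
            return (dst && !dst->obsolete) ? dst : NULL;
        }

        static enum cast_type get_cast_type(struct fake_dst *dst,
                                            unsigned int daddr)
        {
            dst = check_dst(dst);
            if (dst)
                return dst->neigh_cast;
            /* Fallback: classify by destination address instead of trusting
             * a stale entry that now points at the loopback device. */
            return daddr == 0xffffffffU ? CAST_BROADCAST : CAST_UNICAST;
        }

        int main(void)
        {
            struct fake_dst dst = { .obsolete = true,
                                    .neigh_cast = CAST_BROADCAST };

            /* A stale dst must not drive the cast type decision. */
            printf("cast type: %d\n", get_cast_type(&dst, 0x0a000001));
            return 0;
        }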
    • s390/qeth: handle limited IPv4 broadcast in L3 TX path · 72c87976
      Authored by Julian Wiedmann
      When selecting the cast type of a neighbourless IPv4 skb (e.g. on a raw
      socket), qeth_l3 falls back to the packet's destination IP address.
      For this case we should classify traffic sent to 255.255.255.255 as
      broadcast.
      This fixes DHCP requests, which were misclassified as unicast
      (and for IQD interfaces thus ended up on the wrong HW queue).
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
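      A minimal sketch of the address-based classification described above,
      using a hypothetical helper ipv4_cast_type() rather than the driver's
      own function: 255.255.255.255 counts as broadcast and 224.0.0.0/4 as
      multicast.

        #include <arpa/inet.h>
        #include <stdint.h>
        #include <stdio.h>

        enum cast_type { CAST_UNICAST, CAST_MULTICAST, CAST_BROADCAST };

        /* daddr is the destination address in network byte order, as taken
         * from the IPv4 header of a neighbourless skb. */
        static enum cast_type ipv4_cast_type(uint32_t daddr)
        {
            uint32_t a = ntohl(daddr);

            if (a == 0xffffffffU)                  /* limited broadcast */
                return CAST_BROADCAST;
            if ((a & 0xf0000000U) == 0xe0000000U)  /* 224.0.0.0/4 multicast */
                return CAST_MULTICAST;
            return CAST_UNICAST;
        }

        int main(void)
        {
            /* A DHCP request goes to 255.255.255.255 and must be broadcast. */
            printf("%d\n", ipv4_cast_type(htonl(0xffffffffU)));  /* 2 */
            printf("%d\n", ipv4_cast_type(htonl(0xef010203U)));  /* 1: 239.1.2.3 */
            printf("%d\n", ipv4_cast_type(htonl(0xc0000201U)));  /* 0: 192.0.2.1 */
            return 0;
        }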
  11. 26 Apr 2019, 3 commits
  12. 18 Apr 2019, 4 commits
    • s390/qeth: stop/wake TX queues based on their fill level · 54a50941
      Authored by Julian Wiedmann
      The current xmit code only stops the txq after attempting to fill an
      IO buffer that hasn't been TX-completed yet. In many-connection
      scenarios, this can result in frequent rejected TX attempts, requeuing
      of skbs with NETDEV_TX_BUSY, and extra overhead.
      
      Now that we have a proper 1-to-1 relation between stack-side txqs and
      our HW Queues, overhaul the stop/wake logic so that the xmit code
      stops the txq as needed.
      Given that we might map multiple skbs into a single buffer, it's crucial
      to ensure that the queue always provides an _entirely_ empty IO buffer.
      Otherwise large skbs (e.g. TSO) might not fit into the last available
      buffer. So whenever qeth_do_send_packet() first utilizes an _empty_
      buffer, it updates and checks the used_buffers count.
      
      This now ensures that an skb passed to qeth_xmit() can always be mapped
      into an IO buffer, so remove all of the -EBUSY roll-back handling in the
      TX path. We preserve the minimal safety-checks ("Is this IO buffer
      really available?"), just in case some nasty future bug ever attempts to
      corrupt an in-use buffer.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
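      A small userspace model of the fill-level based stop/wake logic from the
      commit above; the names (fake_txq, QUEUE_BUFFERS) and thresholds are
      hypothetical and chosen for illustration, not taken from the driver.
      The queue is stopped once it can no longer guarantee a completely empty
      IO buffer, and woken again on TX completion.

        #include <stdbool.h>
        #include <stdio.h>

        #define QUEUE_BUFFERS 16

        struct fake_txq {
            int used_buffers;  /* buffers handed to HW, not yet completed */
            bool stopped;
        };

        static void xmit_one(struct fake_txq *q)
        {
            q->used_buffers++;
            /* Stop as soon as no entirely empty buffer is left; a large
             * (e.g. TSO) skb might not fit into a partially used one. */
            if (q->used_buffers >= QUEUE_BUFFERS) {
                q->stopped = true;
                printf("txq stopped at fill level %d\n", q->used_buffers);
            }
        }

        static void tx_complete(struct fake_txq *q, int completed)
        {
            q->used_buffers -= completed;
            if (q->stopped && q->used_buffers < QUEUE_BUFFERS) {
                q->stopped = false;
                printf("txq woken at fill level %d\n", q->used_buffers);
            }
        }

        int main(void)
        {
            struct fake_txq q = { 0 };

            for (int i = 0; i < QUEUE_BUFFERS; i++)
                xmit_one(&q);   /* fills the queue and stops it */
            tx_complete(&q, 4); /* completion wakes it again */
            return 0;
        }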
    • s390/qeth: add TX multiqueue support for OSA devices · 73dc2daf
      Authored by Julian Wiedmann
      This adds trivial support for multiple TX queues on OSA-style devices
      (both real HW and z/VM NICs). For now we expose the driver's existing
      QoS mechanism via .ndo_select_queue, and adjust the number of available
      TX queues when qeth_update_from_chp_desc() detects that the
      HW configuration has changed.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • s390/qeth: add TX multiqueue support for IQD devices · 3a18d754
      Authored by Julian Wiedmann
      qeth has been supporting multiple HW Output Queues for a long time. But
      rather than exposing those queues to the stack, it uses its own queue
      selection logic in .ndo_start_xmit... with all the drawbacks that
      entails.
      Start off by switching IQD devices over to a proper mqs net_device,
      and converting all the netdev_queue management code.
      
      One oddity with IQD devices is the requirement to place all mcast
      traffic on the _highest_ established HW queue. Doing so via
      .ndo_select_queue seems straightforward, but that won't work if only
      some of the HW queues are active
      (i.e. when dev->real_num_tx_queues < dev->num_tx_queues), since
      netdev_cap_txqueue() will not allow us to put skbs on the higher queues.
      
      To make this work, we
      1. let .ndo_select_queue() map all mcast traffic to netdev_queue 0, and
      2. later re-map the netdev_queue and HW queue indices in
         .ndo_start_xmit and the TX completion handler.
      
      With this patch we default to a fixed set of 1 ucast and 1 mcast queue.
      Support for dynamic reconfiguration will be added at a later time.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
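      A minimal sketch of the two-step mapping described in the commit above,
      with hypothetical names and a purely illustrative queue layout (the
      driver's actual index translation may differ): select_queue places all
      mcast traffic on netdev_queue 0, and the xmit path re-maps netdev queue
      indices to HW queues so that mcast ends up on the highest established
      HW queue.

        #include <stdbool.h>
        #include <stdio.h>

        #define HW_QUEUES 4  /* established HW Output Queues on the device */

        /* Step 1: stack-facing queue selection (.ndo_select_queue role). */
        static int select_queue(bool is_mcast, int hash)
        {
            if (is_mcast)
                return 0;                          /* mcast -> netdev_queue 0 */
            return 1 + hash % (HW_QUEUES - 1);     /* ucast -> queues 1..N-1 */
        }

        /* Step 2: xmit-side re-mapping from netdev queue to HW queue. */
        static int netdev_to_hw_queue(int txq)
        {
            if (txq == 0)
                return HW_QUEUES - 1;  /* highest HW queue carries mcast */
            return txq - 1;            /* ucast queues shift down by one */
        }

        int main(void)
        {
            int mcast_txq = select_queue(true, 0);
            int ucast_txq = select_queue(false, 7);

            printf("mcast: netdev %d -> HW %d\n", mcast_txq,
                   netdev_to_hw_queue(mcast_txq));
            printf("ucast: netdev %d -> HW %d\n", ucast_txq,
                   netdev_to_hw_queue(ucast_txq));
            return 0;
        }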
    • s390/qeth: clarify naming for some QDIO helpers · 41c47da3
      Authored by Julian Wiedmann
      The naming of several QDIO helpers doesn't match their actual
      functionality or the structures they operate on. Clean this up.
      
      s/qeth_alloc_qdio_buffers/qeth_alloc_qdio_queues
      s/qeth_free_qdio_buffers/qeth_free_qdio_queues
      s/qeth_alloc_qdio_out_buf/qeth_alloc_output_queue
      s/qeth_clear_outq_buffers/qeth_drain_output_queue
      s/qeth_clear_qdio_buffers/qeth_drain_output_queues
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 29 Mar 2019, 4 commits