1. 21 Nov 2020, 3 commits
    • s390/qeth: fix tear down of async TX buffers · 7ed10e16
      Julian Wiedmann authored
      When qeth_iqd_tx_complete() detects that a TX buffer requires additional
      async completion via QAOB, it might fail to replace the queue entry's
      metadata (and ends up triggering recovery).
      
      Assume now that the device gets torn down, overruling the recovery.
      If the QAOB notification then arrives before the tear down has
      sufficiently progressed, the buffer state is changed to
      QETH_QDIO_BUF_HANDLED_DELAYED by qeth_qdio_handle_aob().
      
      The tear down code calls qeth_drain_output_queue(), where
      qeth_cleanup_handled_pending() will then attempt to replace such a
      buffer _again_. If it succeeds this time, the buffer ends up dangling in
      its replacement's ->next_pending list ... where it will never be freed,
      since there's no further call to qeth_cleanup_handled_pending().
      
      But the second attempt isn't actually needed; we can simply leave the
      buffer on the queue and re-use it after a potential recovery has
      completed. The qeth_clear_output_buffer() in qeth_drain_output_queue()
      will ensure that it's in a clean state again.
      
      Fixes: 72861ae7 ("qeth: recovery through asynchronous delivery")
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
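      A minimal C sketch of the drain logic described above; every sketch_*
      name is hypothetical, only the QETH_QDIO_BUF_HANDLED_DELAYED role is
      taken from the message, and the real qeth_drain_output_queue() /
      qeth_clear_output_buffer() do considerably more.

        enum sketch_buf_state {
                SKETCH_BUF_EMPTY,
                SKETCH_BUF_PENDING,             /* QAOB completion outstanding */
                SKETCH_BUF_HANDLED_DELAYED,     /* QAOB arrived during tear down */
        };

        struct sketch_out_buffer {
                enum sketch_buf_state state;
        };

        /* Drain path after the fix: no second attempt to replace the queue
         * entry. Whatever state the QAOB race left the buffer in, it stays
         * on the queue and is merely reset, so it can never dangle on a
         * replacement's ->next_pending list and a later recovery can
         * re-use it. */
        static void sketch_drain_entry(struct sketch_out_buffer *buf)
        {
                /* mirrors the effect of qeth_clear_output_buffer() */
                buf->state = SKETCH_BUF_EMPTY;
        }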
    • s390/qeth: fix af_iucv notification race · 8908f36d
      Julian Wiedmann authored
      The two expected notification sequences are
      1. TX_NOTIFY_PENDING with a subsequent TX_NOTIFY_DELAYED_*, when
         our TX completion code first observed the pending TX and the QAOB
         then completes at a later time; or
      2. TX_NOTIFY_OK, when qeth_qdio_handle_aob() picked up the QAOB
         completion before our TX completion code even noticed that the TX
         was pending.
      
      But as qeth_iqd_tx_complete() and qeth_qdio_handle_aob() can run
      concurrently, we may end up with a race that results in a sequence of
      TX_NOTIFY_DELAYED_* followed by TX_NOTIFY_PENDING, which would confuse
      the af_iucv code in its tracking of pending transmits.
      
      Rework the notification code, so that qeth_qdio_handle_aob() defers its
      notification if the TX completion code is still active.
      
      Fixes: b3332930 ("qeth: add support for af_iucv HiperSockets transport")
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
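      The shape of that fix can be sketched as follows; the flag and helper
      names are hypothetical, and real code needs stronger memory ordering
      than this illustration shows. The point is only the handover: the QAOB
      handler parks its result while the TX completion path is active, and
      the TX path delivers it afterwards, so af_iucv never sees
      TX_NOTIFY_DELAYED_* before TX_NOTIFY_PENDING.

        #include <linux/atomic.h>

        enum sketch_notify {
                SKETCH_NOTIFY_NONE,
                SKETCH_NOTIFY_OK,
                SKETCH_NOTIFY_DELAYED_OK,
        };

        struct sketch_tx_buf {
                atomic_t tx_active;             /* TX completion path running? */
                enum sketch_notify deferred;    /* result parked by QAOB handler */
        };

        static void sketch_notify_af_iucv(struct sketch_tx_buf *buf,
                                          enum sketch_notify n)
        {
                /* would call into the af_iucv socket's notification hook */
        }

        /* QAOB handler: defer the notification if TX completion is active. */
        static void sketch_qaob_done(struct sketch_tx_buf *buf)
        {
                if (atomic_read(&buf->tx_active)) {
                        buf->deferred = SKETCH_NOTIFY_DELAYED_OK;
                        return;
                }
                sketch_notify_af_iucv(buf, SKETCH_NOTIFY_OK);
        }

        /* TX completion path: emit TX_NOTIFY_PENDING first, then flush any
         * result that the QAOB handler parked in the meantime. */
        static void sketch_tx_complete(struct sketch_tx_buf *buf)
        {
                atomic_set(&buf->tx_active, 1);
                /* ... observe the pending TX, emit TX_NOTIFY_PENDING ... */
                atomic_set(&buf->tx_active, 0);
                if (buf->deferred != SKETCH_NOTIFY_NONE)
                        sketch_notify_af_iucv(buf, buf->deferred);
        }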
    • s390/qeth: make af_iucv TX notification call more robust · 34c7f50f
      Julian Wiedmann authored
      Calling into socket code is ugly already; at least check whether we are
      dealing with the expected sk_family. Only looking at skb->protocol is
      bound to cause trouble (consider e.g. af_packet).
      
      Fixes: b3332930 ("qeth: add support for af_iucv HiperSockets transport")
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
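      A hedged sketch of the hardened check; sketch_sk_txnotify() is a
      hypothetical stand-in for the socket's notification hook. Checking
      sk_family guards against frames, e.g. from an af_packet socket, whose
      protocol field alone would have passed the old test.

        #include <linux/skbuff.h>
        #include <net/sock.h>           /* struct sock, AF_IUCV */

        static void sketch_sk_txnotify(struct sk_buff *skb, int notification)
        {
                /* would dispatch to the af_iucv socket's notification handler */
        }

        static void sketch_notify_skb(struct sk_buff *skb, int notification)
        {
                /* only call into socket code for a genuine AF_IUCV socket */
                if (skb->sk && skb->sk->sk_family == AF_IUCV)
                        sketch_sk_txnotify(skb, notification);
        }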
  2. 03 Oct 2020, 5 commits
  3. 24 Sep 2020, 3 commits
  4. 16 Sep 2020, 1 commit
    • s390/qeth: Detect PNSO OC3 capability · fa115adf
      Alexandra Winter authored
      This patch detects whether device-to-bridge-notification, provided
      by the Perform Network Subchannel Operation (PNSO) operation code
      ADDR_INFO (OC3), is supported by this card. A following patch will
      map this to the learning_sync bridgeport flag, so we store it in
      priv->brport_hw_features in bridgeport flag format.
      
      Only IQD cards provide PNSO.
      There is a feature bit to indicate whether the machine provides OC3;
      unfortunately, it is not set on old machines, so PNSO is called to
      find out. As this call disables notification and is exclusive with
      bridgeport_notification, it must be done during card initialisation,
      before previous settings are restored.
      
      PNSO functionality requires some configuration values, which are added
      to the qeth_card.info structure. Helper functions are defined to fill
      them in when the card is brought online, and some other places that
      can also benefit from these fields are adapted.
      Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
      Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
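      The probe order might look roughly like this sketch; all sketch_*
      names are invented for illustration, and PNSO error handling is
      elided.

        #include <linux/types.h>

        struct sketch_card {
                bool feature_bit_oc3;   /* only set by newer machines */
        };

        /* hypothetical stand-in: would issue PNSO with operation code
         * ADDR_INFO (OC3) and return 0 if the card supports it */
        static int sketch_pnso_addr_info(struct sketch_card *card)
        {
                return 0;
        }

        /* Runs early during card initialisation: the probe itself disables
         * notification, so it must happen before previous bridgeport
         * settings are restored. */
        static bool sketch_detect_oc3(struct sketch_card *card)
        {
                if (card->feature_bit_oc3)
                        return true;                     /* new machines say so */
                return sketch_pnso_addr_info(card) == 0; /* old machines: just try */
        }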
  5. 27 Aug 2020, 2 commits
    • s390/qeth: make queue lock a proper spinlock · a1668474
      Julian Wiedmann authored
      queue->state is a ternary spinlock in disguise, used by
      OSA's TX completion path to lock the Output Queue and flush any pending
      packets on it to the device. If the Queue is already locked by our TX
      code, setting the lock word to QETH_OUT_Q_LOCKED_FLUSH lets the TX
      completion code move on - the TX path will later take care of things
      when it unlocks the Queue.
      
      This sort of DIY locking is a non-starter, of course; just let the
      TX completion path block on the spinlock when necessary. If that ends up
      causing additional latency due to lock contention, then converting
      the OSA path to use xmit_more is the right way to go forward.
      
      Also slightly expand the locked section and capture all of
      qeth_do_send_packet(), so that the update for the 'bufs_pack' statistics
      is done race-free.
      
      While reworking the TX completion path's code, remove a barrier() that
      doesn't make any sense.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
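      For contrast, a sketch of the two schemes with hypothetical names;
      only the SKETCH_Q_LOCKED_FLUSH role echoes the QETH_OUT_Q_LOCKED_FLUSH
      lock word described above.

        #include <linux/atomic.h>
        #include <linux/spinlock.h>

        enum { SKETCH_Q_UNLOCKED, SKETCH_Q_LOCKED, SKETCH_Q_LOCKED_FLUSH };

        /* Before: DIY ternary lock. If the TX path holds the queue, the
         * completion path flips the lock word and leaves the flushing to
         * the TX path's unlock; the hand-rolled handover is the problem. */
        static void sketch_old_completion(atomic_t *state)
        {
                if (atomic_cmpxchg(state, SKETCH_Q_UNLOCKED,
                                   SKETCH_Q_LOCKED) != SKETCH_Q_UNLOCKED) {
                        atomic_set(state, SKETCH_Q_LOCKED_FLUSH);
                        return;
                }
                /* ... flush pending packets to the device ... */
                atomic_set(state, SKETCH_Q_UNLOCKED);
        }

        /* After: a plain spinlock; the completion path simply blocks while
         * the TX path (all of qeth_do_send_packet()) holds the queue. */
        static void sketch_new_completion(spinlock_t *lock)
        {
                spin_lock(lock);
                /* ... flush pending packets, update bufs_pack race-free ... */
                spin_unlock(lock);
        }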
    • s390/qeth: use to_delayed_work() · beaadcc6
      Julian Wiedmann authored
      Avoid poking around in the delayed_work struct's internals.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
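      to_delayed_work() is the stock helper from <linux/workqueue.h>; a
      minimal sketch of the pattern, with a hypothetical sketch_ctx
      container:

        #include <linux/workqueue.h>

        struct sketch_ctx {
                struct delayed_work dwork;
        };

        static void sketch_worker(struct work_struct *work)
        {
                /* to_delayed_work() hides the container_of() from the work
                 * member to its enclosing delayed_work, instead of
                 * open-coding knowledge of delayed_work's layout here */
                struct delayed_work *dwork = to_delayed_work(work);
                struct sketch_ctx *ctx =
                        container_of(dwork, struct sketch_ctx, dwork);

                /* ... perform the deferred work on ctx ... */
                (void)ctx;
        }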
  6. 24 Aug 2020, 1 commit
  7. 01 Aug 2020, 3 commits
  8. 15 Jul 2020, 10 commits
  9. 19 Jun 2020, 2 commits
  10. 20 May 2020, 1 commit
  11. 07 May 2020, 8 commits
  12. 05 May 2020, 1 commit