1. 25 Aug 2019 (4 commits)
  2. 21 Aug 2019 (5 commits)
  3. 14 Aug 2019 (1 commit)
    • s390/qeth: serialize cmd reply with concurrent timeout · 072f7940
      Julian Wiedmann authored
      Callbacks for a cmd reply run outside the protection of card->lock, to
      allow for additional cmds to be issued & enqueued in parallel.
      
      When qeth_send_control_data() bails out for a cmd without having
      received a reply (e.g. due to timeout), its callback may concurrently be
      processing a reply that just arrived. In this case, the callback
      potentially accesses a stale reply->reply_param area that e.g. was
      on-stack and has already been released.
      
      To avoid this race, add some locking so that qeth_send_control_data()
      can (1) wait for a concurrently running callback, and (2) zap any
      pending callback that still wants to run.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      072f7940
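
      The sketch below illustrates the serialization pattern this commit describes,
      in kernel-style C. The struct and function names (cmd_reply,
      cmd_reply_run_callback, cmd_reply_zap_callback) are illustrative placeholders,
      not qeth's actual internals:

      #include <linux/spinlock.h>

      /* Illustrative reply object; the lock is assumed to be set up with
       * spin_lock_init() when the cmd is issued.
       */
      struct cmd_reply {
              spinlock_t lock;        /* serializes callback vs. timeout path */
              void (*callback)(struct cmd_reply *reply, void *data);
              void *reply_param;      /* may point into the issuer's stack */
      };

      /* Reply path: run the callback only under the lock, and only if it has
       * not been zapped by a timed-out issuer.
       */
      static void cmd_reply_run_callback(struct cmd_reply *reply, void *data)
      {
              unsigned long flags;

              spin_lock_irqsave(&reply->lock, flags);
              if (reply->callback)
                      reply->callback(reply, data);
              spin_unlock_irqrestore(&reply->lock, flags);
      }

      /* Timeout path: taking the lock waits for a concurrently running
       * callback to finish; clearing the pointer zaps any pending callback,
       * so no later reply can touch the soon-to-be-stale reply_param area.
       */
      static void cmd_reply_zap_callback(struct cmd_reply *reply)
      {
              unsigned long flags;

              spin_lock_irqsave(&reply->lock, flags);
              reply->callback = NULL;
              spin_unlock_irqrestore(&reply->lock, flags);
      }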
  4. 28 Jun 2019 (8 commits)
  5. 14 Jun 2019 (7 commits)
  6. 26 Apr 2019 (6 commits)
  7. 18 Apr 2019 (5 commits)
    • s390/qeth: stop/wake TX queues based on their fill level · 54a50941
      Julian Wiedmann authored
      The current xmit code only stops the txq after attempting to fill an
      IO buffer that hasn't been TX-completed yet. In many-connection
      scenarios, this can result in frequent rejected TX attempts, requeuing
      of skbs with NETDEV_TX_BUSY and extra overhead.
      
      Now that we have a proper 1-to-1 relation between stack-side txqs and
      our HW Queues, overhaul the stop/wake logic so that the xmit code
      stops the txq as needed.
      Given that we might map multiple skbs into a single buffer, it's crucial
      to ensure that the queue always provides an _entirely_ empty IO buffer.
      Otherwise large skbs (e.g. TSO) might not fit into the last available
      buffer. So whenever qeth_do_send_packet() first utilizes an _empty_
      buffer, it updates & checks the used_buffers count.
      
      This now ensures that an skb passed to qeth_xmit() can always be mapped
      into an IO buffer, so remove all of the -EBUSY roll-back handling in the
      TX path. We preserve the minimal safety-checks ("Is this IO buffer
      really available?"), just in case some nasty future bug ever attempts to
      corrupt an in-use buffer.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      54a50941
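
      A rough sketch of the fill-level based stop/wake flow described above; the
      queue depth (SKETCH_QDIO_BUFFERS), the sketch_out_q structure and the helper
      names are assumptions for illustration, not the driver's real identifiers:

      #include <linux/atomic.h>
      #include <linux/netdevice.h>

      #define SKETCH_QDIO_BUFFERS 128         /* assumed HW queue depth */

      struct sketch_out_q {
              atomic_t used_buffers;  /* buffers handed to HW, not yet completed */
      };

      /* xmit path: when an skb starts using a previously empty IO buffer,
       * account for it and stop the txq if no entirely empty buffer remains,
       * so that even a large (e.g. TSO) skb is guaranteed to fit next time.
       */
      static void sketch_account_tx(struct sketch_out_q *q,
                                    struct netdev_queue *txq,
                                    bool used_new_buffer)
      {
              if (used_new_buffer &&
                  atomic_inc_return(&q->used_buffers) >= SKETCH_QDIO_BUFFERS)
                      netif_tx_stop_queue(txq);
      }

      /* TX completion path: release the finished buffers and wake the txq
       * once an empty buffer is available again.
       */
      static void sketch_tx_complete(struct sketch_out_q *q,
                                     struct netdev_queue *txq,
                                     int freed_buffers)
      {
              if (atomic_sub_return(freed_buffers, &q->used_buffers) <
                  SKETCH_QDIO_BUFFERS && netif_tx_queue_stopped(txq))
                      netif_tx_wake_queue(txq);
      }

      A real driver additionally has to guard the wake-up against racing with a
      concurrent xmit; the sketch ignores that detail.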
    • s390/qeth: add TX multiqueue support for OSA devices · 73dc2daf
      Julian Wiedmann authored
      This adds trivial support for multiple TX queues on OSA-style devices
      (both real HW and z/VM NICs). For now we expose the driver's existing
      QoS mechanism via .ndo_select_queue, and adjust the number of available
      TX queues when qeth_update_from_chp_desc() detects that the
      HW configuration has changed.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      73dc2daf
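
      A minimal sketch of exposing an existing per-skb priority decision via
      .ndo_select_queue and of adjusting the usable queue count when the
      configuration changes; sketch_get_priority() and the other names are
      hypothetical stand-ins, not the driver's real code:

      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      /* Hypothetical stand-in for the driver's existing QoS decision. */
      static u16 sketch_get_priority(struct sk_buff *skb)
      {
              return skb->priority < 4 ? skb->priority : 3;
      }

      /* Expose that QoS mechanism through .ndo_select_queue. */
      static u16 sketch_osa_select_queue(struct net_device *dev,
                                         struct sk_buff *skb,
                                         struct net_device *sb_dev)
      {
              return sketch_get_priority(skb) % dev->real_num_tx_queues;
      }

      /* On a detected configuration change, tell the stack how many TX
       * queues are actually usable (for a registered device this must be
       * called with the rtnl lock held).
       */
      static int sketch_update_txq_count(struct net_device *dev,
                                         unsigned int hw_queues)
      {
              return netif_set_real_num_tx_queues(dev, hw_queues);
      }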
    • s390/qeth: add TX multiqueue support for IQD devices · 3a18d754
      Julian Wiedmann authored
      qeth has supported multiple HW Output Queues for a long time. But
      rather than exposing those queues to the stack, it uses its own queue
      selection logic in .ndo_start_xmit... with all the drawbacks that
      entails.
      Start off by switching IQD devices over to a proper mqs net_device,
      and converting all the netdev_queue management code.
      
      One oddity with IQD devices is the requirement to place all mcast
      traffic on the _highest_ established HW queue. Doing so via
      .ndo_select_queue seems straightforward - but that won't work if only
      some of the HW queues are active
      (i.e. when dev->real_num_tx_queues < dev->num_tx_queues), since
      netdev_cap_txqueue() will not allow us to put skbs on the higher queues.
      
      To make this work, we
      1. let .ndo_select_queue() map all mcast traffic to netdev_queue 0, and
      2. later re-map the netdev_queue and HW queue indices in
         .ndo_start_xmit and the TX completion handler.
      
      With this patch we default to a fixed set of 1 ucast and 1 mcast queue.
      Support for dynamic reconfiguration is added at a later time.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3a18d754
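
      A simplified sketch of the two-step mcast re-mapping, assuming the default
      setup of 1 ucast plus 1 mcast queue; the constants and helpers below are
      illustrative, not qeth's actual implementation:

      #include <linux/etherdevice.h>
      #include <linux/if_ether.h>
      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      #define SKETCH_MCAST_TXQ 0      /* netdev_queue carrying mcast */
      #define SKETCH_UCAST_TXQ 1      /* single ucast queue in the default setup */

      /* Step 1: keep mcast on netdev_queue 0, so netdev_cap_txqueue() never
       * has to clamp it even when real_num_tx_queues < num_tx_queues.
       */
      static u16 sketch_iqd_select_queue(struct net_device *dev,
                                         struct sk_buff *skb,
                                         struct net_device *sb_dev)
      {
              if (is_multicast_ether_addr(eth_hdr(skb)->h_dest))
                      return SKETCH_MCAST_TXQ;
              return SKETCH_UCAST_TXQ;
      }

      /* Step 2: swap index 0 with the highest usable index, so mcast lands on
       * the highest established HW queue. The swap is symmetric, so the same
       * helper re-maps in .ndo_start_xmit (netdev_queue -> HW queue) and in
       * the TX completion handler (HW queue -> netdev_queue).
       */
      static u16 sketch_iqd_translate_txq(struct net_device *dev, u16 idx)
      {
              u16 last = dev->real_num_tx_queues - 1;

              if (idx == SKETCH_MCAST_TXQ)
                      return last;
              if (idx == last)
                      return SKETCH_MCAST_TXQ;
              return idx;
      }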
    • s390/qeth: don't keep statistics for tx timeout · 333ef9d1
      Julian Wiedmann authored
      struct netdev_queue contains a counter for tx timeouts, which gets
      updated by dev_watchdog(). So let's not attempt to maintain our own
      statistics, in particular not by overloading the skb-error counter.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      333ef9d1
    • s390/qeth: clarify naming for some QDIO helpers · 41c47da3
      Julian Wiedmann authored
      The naming of several QDIO helpers doesn't match their actual
      functionality, or the structures they operate on. Clean this up.
      
      s/qeth_alloc_qdio_buffers/qeth_alloc_qdio_queues
      s/qeth_free_qdio_buffers/qeth_free_qdio_queues
      s/qeth_alloc_qdio_out_buf/qeth_alloc_output_queue
      s/qeth_clear_outq_buffers/qeth_drain_output_queue
      s/qeth_clear_qdio_buffers/qeth_drain_output_queues
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      41c47da3
  8. 29 Mar 2019 (4 commits)