1. 20 Jul 2020, 3 commits
  2. 18 Jun 2020, 2 commits
  3. 16 Jun 2020, 2 commits
    • s390/qdio: reduce SLSB writes during Input Queue processing · a87ee116
      Authored by Julian Wiedmann
      Streamline the processing of QDIO Input Queues, and remove some
      intermediate SLSB updates (no deleting of old ACKs, no redundant
      transitions through NOT_INIT).
      
      Rather than counting ACKs, we now keep track of the whole batch of
      SBALs that were completed during the current polling cycle.
      Most completed SBALs stay in their initial state (i.e. PRIMED or ERROR),
      except that the most recent SBAL in each sub-run is ACKed for
      IRQ reduction (see the sketch below this entry).
      
      The only logic changes happen in inbound_handle_work(), the other
      delta is just a renaming of the variables that track the SBAL batch.
      
      Note that in particular we don't need to flip the _oldest_ SBAL to
      an idle state (e.g. NOT_INIT or ACKed) as a guard against catching our
      own tail. Since get_inbound_buffer_frontier() will never scan more than
      the remaining nr_buf_used SBALs, this scenario just doesn't occur.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      a87ee116
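      A minimal C sketch of this batching scheme, under the assumption of
      hypothetical names (inbound_queue, extend_batch); this illustrates the
      idea, not the actual qdio code:

        #define QDIO_MAX_BUFFERS_PER_Q 128

        enum slsb_state { SLSB_NOT_INIT, SLSB_PRIMED, SLSB_ERROR, SLSB_ACKED };

        struct inbound_queue {
                enum slsb_state slsb[QDIO_MAX_BUFFERS_PER_Q];
                unsigned int batch_start;  /* first SBAL of this poll cycle's batch */
                unsigned int batch_count;  /* SBALs completed in this cycle so far */
        };

        /* Fold a sub-run of 'count' completed SBALs (starting at 'start') into
         * the batch.  All SBALs keep their initial state (PRIMED or ERROR);
         * only the most recent one is ACKed, for IRQ reduction. */
        static void extend_batch(struct inbound_queue *q, unsigned int start,
                                 unsigned int count)
        {
                unsigned int last = (start + count - 1) % QDIO_MAX_BUFFERS_PER_Q;

                if (!q->batch_count)
                        q->batch_start = start;
                q->batch_count += count;
                q->slsb[last] = SLSB_ACKED;  /* one SLSB write per sub-run */
        }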
    • s390/qdio: fine-tune SLSB update · c119a8a3
      Authored by Julian Wiedmann
      xchg() for a single-byte location assembles to a 4-byte Compare&Swap,
      wrapped into a non-trivial amount of retry code that deals with
      concurrent modifications to the unaffected bytes.
      
      Change it to a simple byte-store, but preserve the memory ordering
      semantics that the CS provided.
      This simplifies the generated code for a hot path, and in theory also
      allows us to amortize the memory barriers over multiple SLSB updates
      (see the sketch below this entry).
      
      CC: Andreas Krebbel <krebbel@linux.ibm.com>
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      c119a8a3
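      The before/after can be modelled with C11 atomics, assuming a seq_cst
      exchange stands in for the s390 Compare&Swap loop and explicit fences
      around a plain byte store preserve its ordering; this is a sketch, not
      the kernel's actual primitives:

        #include <stdatomic.h>

        static _Atomic unsigned char slsb[128];

        /* Before: a 1-byte exchange.  On s390 this must be emulated with a
         * 4-byte Compare&Swap loop that retries whenever the three
         * neighbouring bytes are modified concurrently. */
        static void set_state_xchg(int nr, unsigned char state)
        {
                atomic_exchange(&slsb[nr], state);  /* implies a full barrier */
        }

        /* After: a simple byte store, bracketed by fences so the ordering
         * semantics of the exchange are preserved.  With multiple updates,
         * the fences could be hoisted out and amortized over the batch. */
        static void set_state_store(int nr, unsigned char state)
        {
                atomic_thread_fence(memory_order_seq_cst);
                atomic_store_explicit(&slsb[nr], state, memory_order_relaxed);
                atomic_thread_fence(memory_order_seq_cst);
        }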
  4. 28 May 2020, 3 commits
    • s390/cio, s390/qeth: cleanup PNSO CHSC · a0138f59
      Authored by Alexandra Winter
      CHSC3D (PNSO - perform network subchannel operation) is used for
      OC0 (Store-network-bridging-information) as well as for
      OC3 (Store-network-address-information). So common fields are renamed
      from *brinfo* to *pnso*.
      Likewise, *_bridge_host_* is changed to *_addr_change_* (e.g.
      qeth_bridge_host_event becomes qeth_addr_change_event), for the
      same reasons.
      The keywords in the card traces are changed accordingly.
      
      Remove unused L3 types, as PNSO will only return Layer2 entries.
      
      Make PNSO CHSC implementation more consistent with existing API usage:
      Add new function ccw_device_pnso() to drivers/s390/cio/device_ops.c and
      the function declaration to arch/s390/include/asm/ccwdev.h, which takes
      a struct ccw_device * as parameter instead of schid and calls
      chsc_pnso() (see the sketch below this entry).
      
      PNSO CHSC has no strict relationship to qdio, so move the calling
      function from qdio to qeth_l2, and move the necessary structures to a
      new file, arch/s390/include/asm/chsc.h.
      
      Do response code evaluation only in chsc_error_from_response(), and
      use the return code in all other places. qeth_anset_makerc() was meant
      to evaluate the PNSO response code, but never did, because pnso_rc was
      already non-zero.
      
      Indentation was corrected in some places.
      Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
      Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com>
      Reviewed-by: Vineeth Vijayan <vneethv@linux.ibm.com>
      Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      a0138f59
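      The wrapper pattern described above might look like the following
      self-contained sketch; all types here are simplified stand-ins for the
      cio structures, and the parameter list is illustrative rather than the
      merged signature:

        struct subchannel_id { unsigned int ssid, sch_no; };
        struct ccw_device { struct subchannel_id schid; };
        struct chsc_pnso_area;                    /* opaque response buffer */
        struct chsc_pnso_resume_token { unsigned long long token; };

        /* Stand-in for the existing CHSC helper that operates on a raw
         * subchannel id. */
        static int chsc_pnso(struct subchannel_id schid,
                             struct chsc_pnso_area *area,
                             struct chsc_pnso_resume_token resume_token, int cnc)
        {
                (void)schid; (void)area; (void)resume_token; (void)cnc;
                return 0;
        }

        /* The new API shape: take a struct ccw_device *, resolve the
         * subchannel id internally, and forward to chsc_pnso().  Callers
         * such as qeth_l2 no longer need to deal with schids directly. */
        static int ccw_device_pnso(struct ccw_device *cdev,
                                   struct chsc_pnso_area *area,
                                   struct chsc_pnso_resume_token resume_token,
                                   int cnc)
        {
                return chsc_pnso(cdev->schid, area, resume_token, cnc);
        }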
    • s390/qdio: remove q->first_to_kick · cafebf86
      Authored by Julian Wiedmann
      q->first_to_kick is obsolete, and can be replaced by q->first_to_check.
      
      Both cursors start off at 0. Out of the three code paths that update
      first_to_check, the qdio_inspect_queue() path is irrelevant as it
      doesn't even touch first_to_kick anymore.
      This leaves us with the two tasklet-driven code paths. Here any update
      to first_to_check is followed by a call to qdio_kick_handler(), which
      advances first_to_kick by the same amount.
      
      So the two cursors will differ only for a tiny moment. Drivers have no
      way of deterministically observing this difference, and thus it doesn't
      matter which of the cursors we use for reporting an error to q->handler
      (see the sketch below this entry).
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      cafebf86
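      A toy model of the invariant this relies on; every name here is a
      stand-in, not the qdio internals:

        #define QDIO_MAX_BUFFERS_PER_Q 128

        struct queue {
                unsigned int first_to_check;
                unsigned int first_to_kick;  /* redundant, removed by this patch */
        };

        static void tasklet_path(struct queue *q, unsigned int completed)
        {
                q->first_to_check = (q->first_to_check + completed) %
                                    QDIO_MAX_BUFFERS_PER_Q;
                /* The cursors diverge only between these two statements;
                 * the subsequent kick immediately re-converges them: */
                q->first_to_kick = q->first_to_check;
        }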
    • s390/qdio: fix up qdio_start_irq() kerneldoc · 0623b7dd
      Authored by Julian Wiedmann
      Document the actual semantics, correcting an old copy & paste mistake.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      0623b7dd
  5. 20 May 2020, 3 commits
  6. 28 Apr 2020, 8 commits
  7. 06 Apr 2020, 3 commits
  8. 27 Mar 2020, 1 commit
  9. 26 Mar 2020, 1 commit
    • s390/qdio: extend polling support to multiple queues · 0a6e6345
      Authored by Julian Wiedmann
      When the support for polling drivers was initially added, it only
      considered Input Queue 0. But as QDIO interrupts are actually for the
      full device and not a single queue, this doesn't really fit for
      configurations where multiple Input Queues are used.
      
      Rework the qdio code so that interrupts for a polling driver are not
      split up into actions for each queue. Instead deliver the interrupt as
      a single event, and let the driver decide which queue needs what action.
      
      When re-enabling the QDIO interrupt via qdio_start_irq(), this means
      that the qdio code needs to
      (1) put _all_ eligible queues back into a state where they raise IRQs,
      (2) and afterwards check _all_ eligible queues for new work to bridge
          the race window (see the sketch below this entry).
      
      On the qeth side of things (as the only qdio polling driver), we can now
      add CQ polling support to the main NAPI poll routine. It doesn't consume
      NAPI budget, and to avoid hogging the CPU we yield control after
      completing one full queue's worth of buffers.
      The subsequent qdio_start_irq() will check for any additional work, and
      have us re-schedule the NAPI instance accordingly.
      Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0a6e6345
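      A minimal sketch of the "re-arm all, then re-scan all" pattern from
      steps (1) and (2); every name here is a hypothetical stand-in for the
      qdio_start_irq() internals:

        struct in_queue { int irq_armed; };

        struct qdio_dev {
                unsigned int nr_in_queues;
                struct in_queue queues[4];
        };

        static void arm_queue_irq(struct in_queue *q)     { q->irq_armed = 1; }
        static int queue_has_new_work(struct in_queue *q) { (void)q; return 0; }

        static int start_irq(struct qdio_dev *dev)
        {
                unsigned int i;
                int found = 0;

                /* (1) Put _all_ eligible queues back into an IRQ-raising state. */
                for (i = 0; i < dev->nr_in_queues; i++)
                        arm_queue_irq(&dev->queues[i]);

                /* (2) Re-scan _all_ queues: work that arrived before the re-arm
                 * raises no interrupt and would otherwise go unnoticed until
                 * the next unrelated event. */
                for (i = 0; i < dev->nr_in_queues; i++)
                        found |= queue_has_new_work(&dev->queues[i]);

                return found;  /* non-zero: caller keeps polling */
        }

      A driver like qeth would re-schedule its NAPI instance when this
      returns non-zero, matching the behaviour the commit describes.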
  10. 20 Feb 2020, 1 commit
  11. 10 Feb 2020, 1 commit
  12. 01 Nov 2019, 5 commits
  13. 25 Aug 2019, 2 commits
  14. 23 Jul 2019, 2 commits
  15. 07 Jun 2019, 1 commit
  16. 08 May 2019, 2 commits