1. 09 Aug 2021, 1 commit
    • net/smc: fix wait on already cleared link · 8f3d65c1
      Karsten Graul authored
      There can be a race between the waiters for a tx work request buffer
      and the link-down processing that finally clears the link. Although
      all waiters are woken up before the link is cleared, there may be
      waiters that have not yet regained control and are still waiting.
      This results in an access to a cleared wait queue head.
      
      Fix this by introducing atomic reference counting around the wait
      calls, and let the link-clear processing wait until all waiters have
      finished (see the sketch after this commit entry). Move the work
      request layer related calls into smc_wr.c, and set the link state to
      INACTIVE before calling smcr_link_clear() in smc_llc_srv_add_link().
      
      Fixes: 15e1b99a ("net/smc: no WR buffer wait for terminating link group")
      Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
      Signed-off-by: Guvenc Gulce <guvenc@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
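      A minimal sketch of this refcount pattern in kernel C. The field and
      helper names (wr_tx_refcnt, wr_tx_wait, smc_wr_tx_link_hold/_put) are
      illustrative assumptions; the real identifiers live in smc_wr.c and
      may differ:

         /* Illustrative sketch: each waiter holds a reference on the link,
          * and the clear path waits until the count drops back to zero.
          */
         static inline bool smc_wr_tx_link_hold(struct smc_link *link)
         {
                 if (link->state == SMC_LNK_INACTIVE)
                         return false;   /* link is already going away */
                 atomic_inc(&link->wr_tx_refcnt);
                 return true;
         }

         static inline void smc_wr_tx_link_put(struct smc_link *link)
         {
                 /* the last waiter wakes the clear path below */
                 if (atomic_dec_and_test(&link->wr_tx_refcnt))
                         wake_up_all(&link->wr_tx_wait);
         }

         /* called from link-clear processing before freeing the link */
         static void smc_wr_tx_wait_no_waiters(struct smc_link *link)
         {
                 wait_event(link->wr_tx_wait,
                            !atomic_read(&link->wr_tx_refcnt));
         }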
  2. 02 Apr 2021, 1 commit
  3. 02 Dec 2020, 7 commits
  4. 01 Nov 2020, 1 commit
    • net/smc: improve return codes for SMC-Dv2 · 3752404a
      Karsten Graul authored
      To allow better problem diagnosis, this patch improves the return
      codes for SMC-Dv2. A few more CLC DECLINE codes are defined and sent
      to the peer when an SMC connection cannot be established.
      The client now offers multiple SMC variants, and the server may fail
      to initialize any of them. Because only one diagnosis code can be
      sent back to the client, the first code encountered is the one sent.
      Since the server tries the variants in order of importance (SMC-Dv2,
      SMC-D, SMC-R), this ensures that the diagnosis code of the most
      important variant is reported (a sketch of this first-error-wins
      logic follows after this commit entry).
      
      v2: initialize rc in smc_listen_v2_check().
      Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
      Link: https://lore.kernel.org/r/20201031181938.69903-1-kgraul@linux.ibm.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
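      A hedged sketch of the first-error-wins decision described above.
      The helper names (smc_find_ism_v2_device() and friends) are
      assumptions, not necessarily the functions used in af_smc.c:

         /* Illustrative: try the variants in order of importance and
          * remember the decline code of the first (most important)
          * failure, since only one code fits into the CLC decline. */
         static int smc_listen_find_device_sketch(struct smc_init_info *ini)
         {
                 int first_rc = 0, rc;

                 rc = smc_find_ism_v2_device(ini);  /* SMC-Dv2, assumed name */
                 if (!rc)
                         return 0;
                 first_rc = rc;          /* keep the most important code */

                 rc = smc_find_ism_v1_device(ini);  /* SMC-D, assumed name */
                 if (!rc)
                         return 0;

                 rc = smc_find_rdma_device(ini);    /* SMC-R, assumed name */
                 if (!rc)
                         return 0;

                 return first_rc;        /* decline with the first code */
         }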
  5. 29 Sep 2020, 5 commits
  6. 11 Sep 2020, 3 commits
    • net/smc: use separate work queues for different worker types · 22ef473d
      Karsten Graul authored
      There are six types of workers per smc connection: three are used for
      listen and handshake processing, two are used for close and abort
      processing, and one is the tx worker that moves calls to sleeping
      functions into a worker.
      To prevent flooding of the system work queue when many connections
      are opened or closed at the same time (a pattern that uperf
      implements), move those workers to one of three smc-specific work
      queues. Two work queues are module-global and used for handshake and
      close workers. The third work queue is defined per link group and
      used by the tx workers that may sleep waiting for resources of this
      link group.
      In smc_llc_enqueue(), queue the llc_event_work work to the system
      prio work queue, because it is critical that this work is started
      fast (see the sketch after this commit entry).
      Reviewed-by: Ursula Braun <ubraun@linux.ibm.com>
      Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
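      A sketch of the work queue setup using the standard workqueue API
      (alloc_workqueue(), queue_work(), system_highpri_wq). The queue and
      field names mirror the description above but are assumptions:

         #include <linux/workqueue.h>

         static struct workqueue_struct *smc_hs_wq;    /* handshake workers */
         static struct workqueue_struct *smc_close_wq; /* close/abort workers */

         /* module init: two global queues for handshake and close work */
         static int smc_init_wqs(void)
         {
                 smc_hs_wq = alloc_workqueue("smc_hs_wq", 0, 0);
                 if (!smc_hs_wq)
                         return -ENOMEM;
                 smc_close_wq = alloc_workqueue("smc_close_wq", 0, 0);
                 if (!smc_close_wq) {
                         destroy_workqueue(smc_hs_wq);
                         return -ENOMEM;
                 }
                 return 0;
         }

         /* per link group: a queue for tx workers that may sleep waiting
          * for resources of this link group */
         static int smc_lgr_wq_init(struct smc_link_group *lgr)
         {
                 lgr->tx_wq = alloc_workqueue("smc_tx_wq-%*phN", 0, 0,
                                              SMC_LGR_ID_SIZE, &lgr->id);
                 return lgr->tx_wq ? 0 : -ENOMEM;
         }

         /* in smc_llc_enqueue(): start time-critical llc work fast */
         static void smc_llc_event_enqueue_sketch(struct smc_link_group *lgr)
         {
                 queue_work(system_highpri_wq, &lgr->llc_event_work);
         }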
    • net/smc: immediate freeing in smc_lgr_cleanup_early() · f9aab6f2
      Ursula Braun authored
      smc_lgr_cleanup_early() schedules the free worker with a delay. DMB
      unregistering occurs in this delayed worker, which needlessly
      increases the risk of reaching the SMCD SBA limit. Terminate the
      link group immediately instead, since termination means early DMB
      unregistering.
      
      For SMCD the global smc_server_lgr_pending lock is given up early,
      so a link group to be given up with smc_lgr_cleanup_early() may
      already contain more than one connection. Using __smc_lgr_terminate()
      in smc_lgr_cleanup_early() covers this (see the sketch after this
      commit entry).
      
      Also consolidate smc_ism_put_vlan() and smc_put_device() into
      smc_lgr_free() only.
      Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
      Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
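      A minimal sketch of the immediate cleanup, assuming the existing
      __smc_lgr_terminate() helper; the list-unlink helper name is
      illustrative:

         /* Illustrative: terminate right away instead of scheduling the
          * delayed free worker, so DMBs are unregistered early and the
          * SMCD SBA limit is not approached without need. */
         static void smc_lgr_cleanup_early_sketch(struct smc_link_group *lgr)
         {
                 smc_lgr_list_del(lgr);          /* assumed unlink helper */
                 /* covers lgrs that already hold several connections */
                 __smc_lgr_terminate(lgr, true); /* soft termination */
         }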
    • net/smc: introduce better field names · 5ac54d87
      Ursula Braun authored
      The field names "srv_first_contact" and "cln_first_contact" are
      misleading, since both apply to server and client alike. Rename them
      to "first_contact_peer" and "first_contact_local".
      Rename "ism_gid" to the more precise name "ism_peer_gid".
      Rename the version constant "SMC_CLC_V1" to "SMC_V1".
      No functional change.
      Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
      Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 20 Jul 2020, 1 commit
    • net/smc: do not call dma sync for unmapped memory · 741a49a4
      Karsten Graul authored
      The dma-related ...sync_sg... functions check the link state before
      the dma function is actually called. But the check in
      smc_link_usable() also allows links in ACTIVATING state, which are
      not yet mapped to dma memory. Under high load it may happen that the
      sync_sg functions are called for such a link, which results in debug
      output like
         DMA-API: mlx5_core 0002:00:00.0: device driver tries to sync
         DMA memory it has not allocated [device address=0x0000000103370000]
         [size=65536 bytes]
      To fix that, introduce a helper that checks for the link state ACTIVE
      and use it where appropriate (see the sketch after this commit
      entry). Also move the link state update to ACTIVATING to the end of
      smcr_link_init(), when most of the initial setup is done.
      Reviewed-by: Ursula Braun <ubraun@linux.ibm.com>
      Fixes: d854fcbf ("net/smc: add new link state and related helpers")
      Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
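      A sketch of the stricter state check; smc_link_active() is a
      plausible name for the new helper, and the guarded sync function
      shown below is one assumed example of "where appropriate":

         /* Only ACTIVE links are guaranteed to be mapped to dma memory;
          * ACTIVATING links (accepted by smc_link_usable()) are not. */
         static inline bool smc_link_active(struct smc_link *lnk)
         {
                 return lnk->state == SMC_LNK_ACTIVE;
         }

         /* e.g. guarding a sync_sg call (body abbreviated) */
         void smc_ib_sync_sg_for_cpu(struct smc_link *lnk,
                                     struct smc_buf_desc *buf_slot,
                                     enum dma_data_direction data_direction)
         {
                 if (!smc_link_active(lnk))
                         return; /* link not mapped yet, skip dma sync */
                 /* ... sync the sg entries of buf_slot ... */
         }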
  8. 09 Jul 2020, 1 commit
  9. 06 May 2020, 1 commit
  10. 05 May 2020, 6 commits
  11. 04 May 2020, 2 commits
  12. 02 May 2020, 9 commits
  13. 01 May 2020, 2 commits