1. 20 Nov, 2009 (6 commits)
    • IB/addr: Fix IPv6 routing lookup · d14714df
      Sean Hefty authored
      Include link scope as part of address resolution.  Combine local
      and remote address resolution into a single, simpler code path.
      Fix error checking in the IPv6 routing lookups.
      
      Based on work from:
      David Wilder <dwilder@us.ibm.com>
      Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      
      [ Fix up cma_check_linklocal() for !IPV6 case.  - Roland ]
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
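      Under the hood this is kernel address-resolution code, but the reason
      link scope matters is easy to see from user space: a link-local IPv6
      destination (fe80::/10) is ambiguous without the interface it lives on,
      so the caller supplies the scope (interface index) before asking the
      rdma_cm to resolve it.  A minimal librdmacm sketch; the "ib0" name, the
      address, and the timeout are illustrative, not taken from the patch.

      	#include <string.h>
      	#include <arpa/inet.h>
      	#include <net/if.h>
      	#include <netinet/in.h>
      	#include <rdma/rdma_cma.h>

      	static int resolve_link_local(struct rdma_cm_id *id)
      	{
      		struct sockaddr_in6 dst;

      		memset(&dst, 0, sizeof(dst));
      		dst.sin6_family = AF_INET6;
      		inet_pton(AF_INET6, "fe80::1", &dst.sin6_addr);
      		/* Link-local addresses are only unique per link, so the scope
      		 * id (the outbound interface index) accompanies the address. */
      		dst.sin6_scope_id = if_nametoindex("ib0");

      		return rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst,
      					 2000 /* ms */);
      	}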
    • RDMA/cm: fix loopback address support · 6f8372b6
      Sean Hefty authored
      The RDMA CM is intended to support the use of a loopback address
      when establishing a connection; however, the behavior of the CM
      when loopback addresses are used is confusing and does not always
      work, depending on whether loopback was specified by the server,
      the client, or both.
      
      The defined behavior of rdma_bind_addr is to associate an RDMA
      device with an rdma_cm_id, as long as the user specified a non-zero
      address (i.e., they weren't just trying to reserve a port).
      Currently, if the loopback address is passed to rdma_bind_addr,
      no device is associated with the rdma_cm_id.  Fix this.
      
      If a loopback address is specified by the client as the destination
      address for a connection, it will fail to establish a connection.
      This is true even if the server is listening across all addresses or
      on the loopback address itself.  The issue is that the server tries
      to translate the IP address carried in the REQ message to a local
      net_device address, which fails.  The translation is not needed in
      this case, since the REQ carries the actual HW address that should
      be used.
      
      Finally, clean up loopback support to be more transport neutral.
      Replace the separate calls to get/set the sgid and dgid from the
      device address with a single call that behaves correctly depending
      on the format of the device address, and support both IPv4 and
      IPv6 address formats.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      
      [ Fixed RDS build by s/ib_addr_get/rdma_addr_get/  - Roland ]
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
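      A user-space sketch of the client case described above: resolve
      127.0.0.1 as the connection destination.  With this fix the rdma_cm_id
      ends up bound to a local RDMA device and the connection can proceed.
      The port number is illustrative.

      	#include <string.h>
      	#include <arpa/inet.h>
      	#include <netinet/in.h>
      	#include <rdma/rdma_cma.h>

      	static int resolve_loopback(struct rdma_cm_id *id)
      	{
      		struct sockaddr_in dst;

      		memset(&dst, 0, sizeof(dst));
      		dst.sin_family = AF_INET;
      		dst.sin_port = htons(7471);	/* illustrative port */
      		inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

      		/* The REQ carries the real HW address, so the server no longer
      		 * needs to translate 127.0.0.1 back to a net_device address. */
      		return rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst,
      					 2000 /* ms */);
      	}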
    • IB/addr: Store net_device type instead of translating to RDMA transport · c4315d85
      Sean Hefty authored
      The struct rdma_dev_addr stores net_device address information:
      the source device address, destination hardware address, and
      broadcast address.  For consistency, store the net_device type
      rather than converting it to the rdma_node_type.
      
      The type indicates the format of the various hardware addresses,
      which is what we're concerned with, and not the RDMA node type
      that the address may map to.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
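      A minimal kernel-style sketch of the idea; the struct below is
      illustrative, not the real struct rdma_dev_addr.  The point is that
      dev->type (an ARPHRD_* value such as ARPHRD_INFINIBAND) already says how
      to interpret the hardware addresses, so it is stored as-is rather than
      being translated to an RDMA node type.

      	#include <linux/netdevice.h>
      	#include <linux/if_arp.h>
      	#include <linux/string.h>

      	struct example_dev_addr {			/* illustrative */
      		unsigned char	src_dev_addr[MAX_ADDR_LEN];
      		unsigned char	broadcast[MAX_ADDR_LEN];
      		unsigned short	dev_type;	/* ARPHRD_*, describes the formats above */
      	};

      	static void example_copy_addr(struct example_dev_addr *a,
      				      struct net_device *dev)
      	{
      		a->dev_type = dev->type;	/* keep the net_device type itself */
      		memcpy(a->src_dev_addr, dev->dev_addr, dev->addr_len);
      		memcpy(a->broadcast, dev->broadcast, dev->addr_len);
      	}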
    • RDMA/cma: Replace net_device pointer with index · 6266ed6e
      Sean Hefty authored
      Provide the device interface when resolving route information to
      ensure that the correct outbound device is used.  This will also
      simplify processing of sin6_scope_id for IPv6 support.
      
      Based on work from:
      David Wilder <dwilder@us.ibm.com>
      Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
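      A hedged kernel-style sketch of holding an interface index instead of a
      net_device pointer: the device is looked up with dev_get_by_index() only
      for the short time it is needed.  Function and parameter names are
      illustrative.

      	#include <linux/netdevice.h>
      	#include <linux/errno.h>

      	static int example_use_bound_dev(struct net *net, int bound_dev_if)
      	{
      		struct net_device *dev;

      		if (!bound_dev_if)
      			return -ENODEV;

      		dev = dev_get_by_index(net, bound_dev_if); /* takes a reference */
      		if (!dev)
      			return -ENODEV;

      		/* ... resolve the route / sin6_scope_id using 'dev' ... */

      		dev_put(dev);
      		return 0;
      	}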
    • RDMA/cma: Fix AF_INET6 support in multicast joining · e2e62697
      Jason Gunthorpe authored
      If joining an AF_INET6 address, we need to map the address to an
      MGID in the same way as the IP stack does.  The old code would just
      fall through to the IPv4 case and generate garbage.
      Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
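      From user space the join itself is unchanged; a minimal librdmacm sketch
      of joining an AF_INET6 group on an already-resolved rdma_cm_id, which is
      the path this commit fixes.  The group address is illustrative.

      	#include <string.h>
      	#include <arpa/inet.h>
      	#include <netinet/in.h>
      	#include <rdma/rdma_cma.h>

      	static int join_ipv6_group(struct rdma_cm_id *id, void *context)
      	{
      		struct sockaddr_in6 mcast;

      		memset(&mcast, 0, sizeof(mcast));
      		mcast.sin6_family = AF_INET6;
      		inet_pton(AF_INET6, "ff12::1234", &mcast.sin6_addr); /* illustrative */

      		/* With this fix the IPv6 address is mapped to an MGID the same
      		 * way the IP stack maps it, instead of falling through to the
      		 * IPv4 code. */
      		return rdma_join_multicast(id, (struct sockaddr *)&mcast, context);
      	}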
    • RDMA/cma: Correct detection of SA Created MGID · 1c9b2819
      Jason Gunthorpe authored
      The RDMA CM treats AF_INET6 addresses that are either 0 or prefixed
      with FF1x:A01B::/32 as MGIDs, but the detection of the prefix was
      buggy; fix it up.
      Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
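      A stand-alone restatement of the prefix test described above, written as
      plain C for illustration (the in-kernel check differs in form): an
      address counts as an SA-assigned MGID if it is all zero or starts with
      FF1x:A01B::/32, where the scope nibble "x" is a wildcard.

      	#include <string.h>
      	#include <netinet/in.h>

      	static int is_sa_mgid(const struct in6_addr *a)
      	{
      		static const struct in6_addr zero;

      		if (!memcmp(a, &zero, sizeof(zero)))
      			return 1;			/* the all-zero case */

      		return a->s6_addr[0] == 0xff &&
      		       (a->s6_addr[1] & 0xf0) == 0x10 &&	/* FF1x, any scope */
      		       a->s6_addr[2] == 0xa0 &&
      		       a->s6_addr[3] == 0x1b;			/* :A01B */
      	}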
  2. 24 Jun, 2009 (1 commit)
  3. 09 Apr, 2009 (1 commit)
    • RDMA/cma: Create cm id even when IB port is down · d2ca39f2
      Yossi Etigin authored
      When doing rdma_resolve_addr(), if the relevant IB port is down, the
      function fails and the cm_id is not bound to the correct device.
      Therefore, the application does not have a device handle and cannot wait
      for the port to become active.  The function fails because the
      underlying IPoIB interface is not joined to the broadcast group and
      therefore the SA does not have a multicast record to take a Q_Key
      from.
      
      The fix is to use lazy Q_Key resolution - cma_set_qkey() will set
      id_priv->qkey if it was not set, and will be called just before the
      Q_Key is really required.
      Signed-off-by: Yossi Etigin <yosefe@voltaire.com>
      Acked-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
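      A generic user-space sketch of the lazy pattern (names are illustrative,
      not the kernel's): nothing is fetched at address-resolution time, so a
      down port no longer makes the resolve fail; the Q_Key is looked up and
      cached only when it is first needed.

      	#include <stdint.h>

      	struct lazy_qkey {
      		uint32_t qkey;			/* 0 means "not resolved yet" */
      		uint32_t (*query)(void *ctx);	/* e.g. an SA multicast-record query */
      		void *ctx;
      	};

      	static uint32_t get_qkey(struct lazy_qkey *q)
      	{
      		if (!q->qkey)			/* resolve lazily, just in time */
      			q->qkey = q->query(q->ctx);
      		return q->qkey;
      	}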
  4. 02 Apr, 2009 (1 commit)
  5. 25 Dec, 2008 (1 commit)
  6. 05 Aug, 2008 (1 commit)
    • RDMA/cma: Remove padding arrays by using struct sockaddr_storage · 3f446754
      Roland Dreier authored
      There are a few places where the RDMA CM code handles IPv6 by doing
      
      	struct sockaddr		addr;
      	u8			pad[sizeof(struct sockaddr_in6) -
      				    sizeof(struct sockaddr)];
      
      This is fragile and ugly; handle this in a better way with just
      
      	struct sockaddr_storage	addr;
      
      [ Also roll in patch from Aleksey Senin <alekseys@voltaire.com> to
        switch to struct sockaddr_storage and get rid of padding arrays in
        struct rdma_addr. ]
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
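      The replacement type works because sockaddr_storage is defined to be
      large enough and suitably aligned for any socket address, so one field
      covers both families and the reader just dispatches on ss_family.  A
      small illustrative sketch:

      	#include <sys/socket.h>
      	#include <netinet/in.h>

      	struct example_addr {
      		struct sockaddr_storage src_addr; /* fits sockaddr_in and sockaddr_in6 */
      		struct sockaddr_storage dst_addr;
      	};

      	static socklen_t example_addr_len(const struct sockaddr_storage *ss)
      	{
      		return ss->ss_family == AF_INET6 ? sizeof(struct sockaddr_in6)
      						 : sizeof(struct sockaddr_in);
      	}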
  7. 23 Jul, 2008 (2 commits)
  8. 15 Jul, 2008 (4 commits)
  9. 17 Apr, 2008 (1 commit)
  10. 31 Mar, 2008 (1 commit)
  11. 15 Feb, 2008 (1 commit)
    • RDMA/cma: Do not issue MRA if user rejects connection request · ead595ae
      Sean Hefty authored
      There's an undesirable interaction between issuing MRA requests to
      increase connection timeouts and the listen backlog.
      
      When the rdma_cm receives a connection request, it queues an MRA with
      the ib_cm.  (The ib_cm will send an MRA if it receives a duplicate
      REQ.)  The rdma_cm will then create a new rdma_cm_id and give that to
      the user, which in this case is the rdma_user_cm.
      
      If the listen backlog maintained in the rdma_user_cm is full, it
      destroys the rdma_cm_id, which in turn destroys the ib_cm_id.  The
      ib_cm_id generates a REJ because the state of the ib_cm_id has changed
      to MRA sent, versus REQ received.  When the backlog is full, we just
      want to drop the REQ so that it is retried later.
      
      Fix this by deferring queuing the MRA until after the user of the
      rdma_cm has examined the connection request.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
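      Seen from the rdma_cm user, the sequence is: listen with a backlog, pull
      RDMA_CM_EVENT_CONNECT_REQUEST events off the event channel, and accept
      or reject each one; with this change the MRA is only queued after that
      examination, so a request dropped by a full backlog is simply retried by
      the peer.  A minimal librdmacm sketch; the backlog value and the "busy"
      flag are illustrative.

      	#include <rdma/rdma_cma.h>

      	static int serve(struct rdma_cm_id *listen_id,
      			 struct rdma_event_channel *ch, int busy)
      	{
      		struct rdma_cm_event *event;
      		int ret;

      		ret = rdma_listen(listen_id, 16 /* backlog */);
      		if (ret)
      			return ret;

      		while (!rdma_get_cm_event(ch, &event)) {
      			if (event->event == RDMA_CM_EVENT_CONNECT_REQUEST) {
      				if (busy)
      					rdma_reject(event->id, NULL, 0);
      				else
      					rdma_accept(event->id, NULL);
      			}
      			rdma_ack_cm_event(event);
      		}
      		return 0;
      	}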
  12. 29 Jan, 2008 (2 commits)
  13. 26 Jan, 2008 (4 commits)
  14. 19 Oct, 2007 (1 commit)
  15. 17 Oct, 2007 (2 commits)
    • RDMA/cma: Fix deadlock destroying listen requests · d02d1f53
      Sean Hefty authored
      Deadlock condition reported by Kanoj Sarcar <kanoj@netxen.com>.
      The deadlock occurs when a connection request arrives at the same
      time that a wildcard listen is being destroyed.
      
      A wildcard listen maintains per device listen requests for each
      RDMA device in the system.  The per device listens are automatically
      added and removed when RDMA devices are inserted or removed from
      the system.
      
      When a wildcard listen is destroyed, rdma_destroy_id() acquires
      the rdma_cm's device mutex ('lock') to protect against hot-plug
      events adding or removing per device listens.  It then tries to
      destroy the per device listens by calling ib_destroy_cm_id() or
      iw_destroy_cm_id().  It does this while holding the device mutex.
      
      However, if the underlying iw/ib CM reports a connection request
      while this is occurring, the rdma_cm callback function will try
      to acquire the same device mutex.  Since we're in a callback,
      the ib_destroy_cm_id() or iw_destroy_cm_id() calls will block until
      their callback thread returns, but the callback is blocked waiting for
      the device mutex.
      
      Fix this by re-working how per device listens are destroyed.  Use
      rdma_destroy_id(), which avoids the deadlock, in place of
      cma_destroy_listen().  Additional synchronization is added to handle
      device hot-plug events and ensure that the id is not destroyed twice.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
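      The shape of the deadlock can be reproduced with two plain threads: the
      destroyer holds the mutex while waiting for the callback thread to
      finish, and the callback thread is waiting for that same mutex.  A
      self-contained pthread illustration, not kernel code:

      	#include <pthread.h>
      	#include <unistd.h>

      	static pthread_mutex_t device_mutex = PTHREAD_MUTEX_INITIALIZER;

      	static void *cm_callback(void *arg)
      	{
      		(void)arg;
      		sleep(1);				/* let the destroyer take the lock first */
      		pthread_mutex_lock(&device_mutex);	/* blocks: destroyer holds it */
      		pthread_mutex_unlock(&device_mutex);
      		return NULL;
      	}

      	int main(void)
      	{
      		pthread_t cb;

      		pthread_create(&cb, NULL, cm_callback, NULL);
      		pthread_mutex_lock(&device_mutex);	/* like rdma_destroy_id() taking 'lock' */
      		pthread_join(cb, NULL);			/* like waiting for the CM callback:
      							 * deadlock, never returns */
      		pthread_mutex_unlock(&device_mutex);
      		return 0;
      	}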
    • RDMA/cma: Add locking around QP accesses · c5483388
      Sean Hefty authored
      If a user allocates a QP on an rdma_cm_id, the rdma_cm will automatically
      transition the QP through its states (RTR, RTS, error, etc.).  While the
      QP state transitions are occurring, the QP itself must remain valid.
      Provide locking around the QP pointer to prevent its destruction while
      accessing the pointer.
      
      This fixes an issue reported by Olaf Kirch from Oracle that resulted in
      a system crash:
      
      "An incoming connection arrives and we decide to tear down the nascent
       connection.  The remote end decides to do the same.  We start to shut
       down the connection, and call rdma_destroy_qp on our cm_id. ... Now
       apparently a 'connect reject' message comes in from the other host,
       and cma_ib_handler() is called with an event of IB_CM_REJ_RECEIVED.
       It calls cma_modify_qp_err, which for some odd reason tries to modify
       the exact same QP we just destroyed."
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
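      A generic user-space sketch of the locking pattern (field and function
      names are illustrative, not the kernel's): the QP pointer is only
      dereferenced under a mutex, and destroy clears it under the same mutex,
      so a late "modify to error" on a torn-down connection becomes a no-op.

      	#include <pthread.h>
      	#include <infiniband/verbs.h>

      	struct conn {
      		pthread_mutex_t qp_mutex;
      		struct ibv_qp *qp;			/* NULL once destroyed */
      	};

      	static int conn_set_qp_err(struct conn *c)
      	{
      		struct ibv_qp_attr attr = { .qp_state = IBV_QPS_ERR };
      		int ret = 0;

      		pthread_mutex_lock(&c->qp_mutex);
      		if (c->qp)				/* only touch a QP that still exists */
      			ret = ibv_modify_qp(c->qp, &attr, IBV_QP_STATE);
      		pthread_mutex_unlock(&c->qp_mutex);
      		return ret;
      	}

      	static void conn_destroy_qp(struct conn *c)
      	{
      		pthread_mutex_lock(&c->qp_mutex);
      		if (c->qp) {
      			ibv_destroy_qp(c->qp);
      			c->qp = NULL;			/* later events see a NULL pointer */
      		}
      		pthread_mutex_unlock(&c->qp_mutex);
      	}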
  16. 11 Oct, 2007 (1 commit)
  17. 10 Oct, 2007 (2 commits)
  18. 18 Jul, 2007 (1 commit)
  19. 11 Jul, 2007 (1 commit)
  20. 08 Jun, 2007 (1 commit)
  21. 15 May, 2007 (3 commits)
  22. 07 Mar, 2007 (1 commit)
  23. 23 Feb, 2007 (1 commit)