1. 09 Dec 2015, 1 commit
  2. 31 Oct 2015, 1 commit
  3. 29 Oct 2015, 3 commits
  4. 22 Oct 2015, 4 commits
  5. 21 Oct 2015, 2 commits
  6. 07 Oct 2015, 1 commit
  7. 31 Aug 2015, 8 commits
  8. 29 Aug 2015, 1 commit
    • RDMA/cma: fix IPv6 address resolution · 6c26a771
      Spencer Baugh authored
      Resolving a link-local IPv6 address with an unspecified source address
      was broken by commit 5462eddd, which prevented the IPv6 stack from
      learning the scope id of the link-local IPv6 address, causing random
      failures as the IP stack chose a random link to resolve the address on.
      
      Commit 5462eddd made us bail out of cma_check_linklocal early if
      the address passed in was not an IPv6 link-local address.  On the
      address resolution path, the address passed in is the source
      address; if the source address is the unspecified address, which
      is not link-local, we bail out early.
      
      This is mostly correct, but if the destination address is a
      link-local address, then we will be following a link-local route,
      and we'll need to tell the IPv6 stack what the scope id of the
      destination address is.  This used to be done by the last line of
      cma_check_linklocal, which is skipped when bailing out early:
      
      	dev_addr->bound_dev_if = sin6->sin6_scope_id;
      
      (In cma_bind_addr, the sin6_scope_id of the source address is set to the
      sin6_scope_id of the destination address, so this is correct)
      This line, in turn, is required for the following line (L279 of
      addr6_resolve) to actually inform the IPv6 stack of the scope id:
      
            fl6.flowi6_oif = addr->bound_dev_if;
      
      Since we can only know we are in this failure case when we have
      access to both the source and destination IPv6 addresses, we have
      to deal with this further up the stack.  So detect this failure
      case in cma_bind_addr, and set bound_dev_if to the destination
      address's scope id to correct it, as sketched below.
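      A minimal sketch of that fix, assuming the link-local test uses
      the kernel's ipv6_addr_type()/IPV6_ADDR_LINKLOCAL helpers (an
      assumption; variable names are illustrative, not the verbatim
      patch):

      	/* In cma_bind_addr(): the source address is unspecified, so
      	 * copy the destination's scope id; if the destination is
      	 * link-local, also record it in bound_dev_if so that
      	 * addr6_resolve() can set fl6.flowi6_oif from it. */
      	if (dst_addr->sa_family == AF_INET6) {
      		struct sockaddr_in6 *src6 = (struct sockaddr_in6 *) src_addr;
      		struct sockaddr_in6 *dst6 = (struct sockaddr_in6 *) dst_addr;

      		src6->sin6_scope_id = dst6->sin6_scope_id;
      		if (ipv6_addr_type(&dst6->sin6_addr) & IPV6_ADDR_LINKLOCAL)
      			id->route.addr.dev_addr.bound_dev_if =
      				dst6->sin6_scope_id;
      	}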
      Signed-off-by: Spencer Baugh <sbaugh@catern.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
9. 02 Jun 2015, 1 commit
  10. 21 May 2015, 2 commits
  11. 19 May 2015, 12 commits
  12. 05 May 2015, 1 commit
  13. 11 Jun 2014, 1 commit
    • RDMA/core: Add support for iWARP Port Mapper user space service · 30dc5e63
      Tatyana Nikolova authored
      This patch adds iWARP Port Mapper (IWPM) Version 2 support.  The iWARP
      Port Mapper implementation is based on the port mapper specification
      section in the Sockets Direct Protocol paper -
      http://www.rdmaconsortium.org/home/draft-pinkerton-iwarp-sdp-v1.0.pdf
      
      Existing iWARP RDMA providers use the same IP address as the native
      TCP/IP stack when creating RDMA connections.  They need a mechanism to
      claim the TCP ports used for RDMA connections to prevent TCP port
      collisions when other host applications use TCP ports.  The iWARP Port
      Mapper provides a standard mechanism to accomplish this.  Without this
      service, it is possible for an RDMA application to bind/listen on
      a port which is already being used by a native TCP host
      application.  If that happens, the incoming TCP connection data
      can be delivered to the RDMA stack in error.
      
      The iWARP Port Mapper solution doesn't require any changes to the
      existing network stack in kernel space.  All the changes are
      contained within the InfiniBand tree and in user space.
      
      The iWARP Port Mapper service is implemented as a user space daemon
      process.  Source for the IWPM service is located at
      http://git.openfabrics.org/git?p=~tnikolova/libiwpm-1.0.0/.git;a=summary
      
      The iWARP driver (port mapper client) sends the IWPM service the
      local IP address and TCP port it has received from the RDMA
      application when starting a connection.  The IWPM service performs
      a socket bind from user space to get an available TCP port, called
      a mapped port, and communicates it back to the client.  In that
      sense, the IWPM service maps the TCP port which the RDMA
      application uses to any port available from the host TCP port
      space.  The mapped ports are used in iWARP RDMA connections to
      avoid collisions with the native TCP stack, which is aware that
      these ports are taken.  When an RDMA connection using a mapped
      port is terminated, the client notifies the IWPM service, which
      then releases the TCP port.
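      The "socket bind from user space" step is the standard
      ephemeral-port idiom: bind to port 0 and read back the port the
      kernel chose.  A minimal sketch of that idiom (illustrative only,
      not the IWPM daemon's actual code; get_mapped_port is a
      hypothetical helper name):

      	#include <arpa/inet.h>
      	#include <netinet/in.h>
      	#include <stdint.h>
      	#include <sys/socket.h>
      	#include <unistd.h>

      	/* Bind to port 0 so the kernel picks a free TCP port, then
      	 * read the chosen ("mapped") port back with getsockname().
      	 * Keeping the socket open is what reserves the port in the
      	 * native TCP stack. */
      	static int get_mapped_port(struct sockaddr_in *local, uint16_t *port)
      	{
      		struct sockaddr_in bound;
      		socklen_t len = sizeof(bound);
      		int fd = socket(AF_INET, SOCK_STREAM, 0);

      		if (fd < 0)
      			return -1;
      		local->sin_port = 0;	/* 0 = any available port */
      		if (bind(fd, (struct sockaddr *) local, sizeof(*local)) < 0 ||
      		    getsockname(fd, (struct sockaddr *) &bound, &len) < 0) {
      			close(fd);
      			return -1;
      		}
      		*port = ntohs(bound.sin_port);
      		return fd;	/* held open until the connection ends */
      	}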
      
      The message exchange between the IWPM service and the iWARP
      drivers (between user space and kernel space) is implemented using
      netlink sockets; a minimal sketch of the daemon's socket setup
      follows the list below.
      
      1) Netlink interface functions are added: ibnl_unicast() and
         ibnl_multicast() for sending netlink messages to user space
      
      2) The signature of the existing ibnl_put_msg() is changed to be more
         generic
      
      3) Two netlink clients are added: RDMA_NL_NES and RDMA_NL_C4IW,
         corresponding to the two iWARP drivers, nes and cxgb4, which
         use the IWPM service
      
      4) Enums are added to enumerate the attributes in the netlink
         messages, which are exchanged between the user space IWPM service
         and the iWARP drivers
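      The daemon's end of this channel is an ordinary netlink socket
      using the NETLINK_RDMA protocol; a minimal user-space sketch of
      opening it (open_rdma_netlink is a hypothetical helper name):

      	#include <linux/netlink.h>	/* NETLINK_RDMA, sockaddr_nl */
      	#include <string.h>
      	#include <sys/socket.h>
      	#include <unistd.h>

      	/* Open the user-space end of the kernel <-> daemon channel. */
      	static int open_rdma_netlink(void)
      	{
      		struct sockaddr_nl addr;
      		int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_RDMA);

      		if (fd < 0)
      			return -1;
      		memset(&addr, 0, sizeof(addr));
      		addr.nl_family = AF_NETLINK;
      		addr.nl_pid = getpid();	/* unicast address of this process */
      		if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
      			close(fd);
      			return -1;
      		}
      		return fd;
      	}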
      Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Reviewed-by: PJ Waskiewicz <pj.waskiewicz@solidfire.com>
      
      [ Fold in range checking fixes and nlh_next removal as suggested by Dan
        Carpenter and Steve Wise.  Fix sparse endianness in hash.  - Roland ]
      Signed-off-by: Roland Dreier <roland@purestorage.com>
14. 02 Apr 2014, 1 commit
  15. 23 Jan 2014, 1 commit