1. 12 Oct 2021, 1 commit
  2. 04 Aug 2021, 1 commit
  3. 17 Jun 2021, 2 commits
  4. 10 Jun 2021, 1 commit
  5. 04 Jun 2021, 1 commit
  6. 26 Mar 2021, 1 commit
    • RDMA: Support more than 255 rdma ports · 1fb7f897
      Committed by Mark Bloch
      Current code uses many different types when dealing with a port of an RDMA
      device: u8, unsigned int, and u32. Switch to u32 to clean up the logic.
      
      This allows us to make (at least) the core view consistent and use the
      same type. Unfortunately not all places can be converted: many uverbs
      functions expect the port to be a u8, so keep those places as-is in order
      not to break UAPIs. HW/spec-defined values must also not be changed.
      
      With the switch to u32 we can now support devices with more than 255
      ports. U32_MAX is reserved to make the control logic a bit easier to deal
      with. As a device with U32_MAX ports probably isn't going to happen any
      time soon, this seems like a non-issue.
      
      When a device with more than 255 ports is created, uverbs will report the
      RDMA device as having 255 ports, as this is the maximum currently supported.
      
      The verbs interface is not changed yet because the IBTA spec limits the
      port field in too many places to u8, and applications that rely on verbs
      would not be able to cope with this change. At this stage, we are
      extending only the interfaces that use the vendor channel.
      
      Once the limitation is lifted, mlx5 in switchdev mode will be able to have
      thousands of SFs created by the device. As the only instance of an RDMA
      device that reports more than 255 ports will be a representor device, and
      it exposes itself as a RAW Ethernet only device, CM/MAD/IPoIB and other
      ULPs aren't affected by this change, and their sysfs interfaces that are
      exposed to userspace can remain unchanged.
      
      While here, clean up some alignment issues and remove unneeded sanity
      checks (mainly in rdmavt).
      
      Link: https://lore.kernel.org/r/20210301070420.439400-1-leon@kernel.org
      Signed-off-by: Mark Bloch <mbloch@nvidia.com>
      Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
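
      To make the type change concrete, here is a minimal sketch of how one of
      the converted core entry points changes (ib_query_port is one of the
      functions this patch touches; treat the exact hunks below as illustrative
      rather than a quote from the patch):

      	/* Before: ports were addressed with a u8, capping a device at 255 ports. */
      	int ib_query_port(struct ib_device *device,
      			  u8 port_num, struct ib_port_attr *port_attr);

      	/* After: the core-facing type widens to u32. U32_MAX is reserved as a
      	 * sentinel for control logic, so the practical limit is U32_MAX - 1. */
      	int ib_query_port(struct ib_device *device,
      			  u32 port_num, struct ib_port_attr *port_attr);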
  7. 03 Feb 2021, 1 commit
  8. 11 Dec 2020, 1 commit
  9. 31 Oct 2020, 1 commit
  10. 27 Oct 2020, 7 commits
    • RDMA: Convert sysfs device * show functions to use sysfs_emit() · 1c7fd726
      Committed by Joe Perches
      Done with cocci script:
      
      @@
      identifier d_show;
      identifier dev, attr, buf;
      @@
      
      ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
      {
      	<...
      	return
      -	sprintf(buf,
      +	sysfs_emit(buf,
      	...);
      	...>
      }
      
      @@
      identifier d_show;
      identifier dev, attr, buf;
      @@
      
      ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
      {
      	<...
      	return
      -	snprintf(buf, PAGE_SIZE,
      +	sysfs_emit(buf,
      	...);
      	...>
      }
      
      @@
      identifier d_show;
      identifier dev, attr, buf;
      @@
      
      ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
      {
      	<...
      	return
      -	scnprintf(buf, PAGE_SIZE,
      +	sysfs_emit(buf,
      	...);
      	...>
      }
      
      @@
      identifier d_show;
      identifier dev, attr, buf;
      expression chr;
      @@
      
      ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
      {
      	<...
      	return
      -	strcpy(buf, chr);
      +	sysfs_emit(buf, chr);
      	...>
      }
      
      @@
      identifier d_show;
      identifier dev, attr, buf;
      identifier len;
      @@
      
      ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
      {
      	<...
      	len =
      -	sprintf(buf,
      +	sysfs_emit(buf,
      	...);
      	...>
      	return len;
      }
      
      @@
      identifier d_show;
      identifier dev, attr, buf;
      identifier len;
      @@
      
      ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
      {
      	<...
      	len =
      -	snprintf(buf, PAGE_SIZE,
      +	sysfs_emit(buf,
      	...);
      	...>
      	return len;
      }
      
      @@
      identifier d_show;
      identifier dev, attr, buf;
      identifier len;
      @@
      
      ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
      {
      	<...
      	len =
      -	scnprintf(buf, PAGE_SIZE,
      +	sysfs_emit(buf,
      	...);
      	...>
      	return len;
      }
      
      @@
      identifier d_show;
      identifier dev, attr, buf;
      identifier len;
      @@
      
      ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
      {
      	<...
      -	len += scnprintf(buf + len, PAGE_SIZE - len,
      +	len += sysfs_emit_at(buf, len,
      	...);
      	...>
      	return len;
      }
      
      @@
      identifier d_show;
      identifier dev, attr, buf;
      expression chr;
      @@
      
      ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
      {
      	...
      -	strcpy(buf, chr);
      -	return strlen(buf);
      +	return sysfs_emit(buf, chr);
      }
      
      Link: https://lore.kernel.org/r/7f406fa8e3aa2552c022bec680f621e38d1fe414.1602122879.git.joe@perches.com
      Signed-off-by: Joe Perches <joe@perches.com>
      Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
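
      To see what the script produces in practice, here is a before/after sketch
      of a typical show function (the attribute name and value are hypothetical,
      shown only to illustrate the transformation; sysfs_emit() knows the sysfs
      buffer is a full page, so the PAGE_SIZE bookkeeping disappears):

      	/* Before: open-coded formatting into the PAGE_SIZE sysfs buffer. */
      	static ssize_t state_show(struct device *dev,
      				  struct device_attribute *attr, char *buf)
      	{
      		return snprintf(buf, PAGE_SIZE, "%s\n", "ACTIVE");
      	}

      	/* After: sysfs_emit() checks the buffer bounds itself and refuses
      	 * to write past the page, so callers drop the size argument. */
      	static ssize_t state_show(struct device *dev,
      				  struct device_attribute *attr, char *buf)
      	{
      		return sysfs_emit(buf, "%s\n", "ACTIVE");
      	}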
    • RDMA: Check create_flags during create_qp · 1f11a761
      Committed by Jason Gunthorpe
      Each driver should check that the QP attrs create_flags is supported.
      Unfortunately, when create_flags was added to the QP attrs the drivers
      were not updated; uverbs_ex_cmd_mask was used to block it, even though
      kernel drivers use these flags too.
      
      Check that create_flags is zero in all drivers that don't use it, remove
      IB_USER_VERBS_EX_CMD_CREATE_QP from uverbs_ex_cmd_mask, and fix the error
      code to be EOPNOTSUPP.
      
      Link: https://lore.kernel.org/r/8-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
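
      The per-driver fix follows one pattern; a minimal sketch for a driver
      that implements no create_flags (driver name hypothetical, signature per
      the create_qp op of this era):

      	static struct ib_qp *mydrv_create_qp(struct ib_pd *pd,
      					     struct ib_qp_init_attr *init_attr,
      					     struct ib_udata *udata)
      	{
      		/* Reject any creation flag this driver doesn't implement,
      		 * with EOPNOTSUPP instead of a blanket uverbs_ex_cmd_mask block. */
      		if (init_attr->create_flags)
      			return ERR_PTR(-EOPNOTSUPP);

      		/* ... normal QP creation ... */
      	}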
    • RDMA: Check flags during create_cq · 1c407cb5
      Committed by Jason Gunthorpe
      Each driver should check that the CQ attrs are supported. Unfortunately,
      when flags was added to the CQ attrs the drivers were not updated;
      uverbs_ex_cmd_mask was used to block it. This was missed when create CQ
      was converted to ioctl, so non-zero flags could have been passed into
      drivers.
      
      Check that flags is zero in all drivers that don't use it, remove
      IB_USER_VERBS_EX_CMD_CREATE_CQ from uverbs_ex_cmd_mask.
      
      Fixes: 41b2a71f ("IB/uverbs: Move ioctl path of create_cq and destroy_cq to a new file")
      Link: https://lore.kernel.org/r/7-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
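
      The same pattern applies on the CQ path, against the CQ init attrs
      (again a sketch with a hypothetical driver name):

      	static int mydrv_create_cq(struct ib_cq *ibcq,
      				   const struct ib_cq_init_attr *attr,
      				   struct ib_udata *udata)
      	{
      		/* No CQ creation flags are implemented by this driver. */
      		if (attr->flags)
      			return -EOPNOTSUPP;

      		/* ... normal CQ creation ... */
      		return 0;
      	}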
    • RDMA: Check srq_type during create_srq · 652caba5
      Committed by Jason Gunthorpe
      uverbs was blocking srq_types the driver doesn't support based on the
      CREATE_XSRQ cmd_mask. Fix all drivers to check for supported srq_types
      during create_srq and move CREATE_XSRQ to the core code.
      
      Link: https://lore.kernel.org/r/5-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
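
      A hedged sketch of the per-driver check for a driver that only implements
      basic SRQs (driver name hypothetical):

      	static int mydrv_create_srq(struct ib_srq *srq,
      				    struct ib_srq_init_attr *init_attr,
      				    struct ib_udata *udata)
      	{
      		/* Reject SRQ types (XRC, tag matching) this driver doesn't
      		 * implement, instead of relying on the CREATE_XSRQ cmd_mask. */
      		if (init_attr->srq_type != IB_SRQT_BASIC)
      			return -EOPNOTSUPP;

      		/* ... normal SRQ creation ... */
      		return 0;
      	}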
    • RDMA: Move more uverbs_cmd_mask settings to the core · 44ce37bc
      Committed by Jason Gunthorpe
      These functions all depend on the driver providing a specific op:
      
      - REREG_MR is rereg_user_mr(). bnxt_re set this without providing the op
      - ATTACH/DETACH_MCAST is attach_mcast()/detach_mcast(). usnic set this
        without providing the op
      - OPEN_QP doesn't involve the driver but requires an XRCD. qedr provides
        xrcd but forgot to set it; usnic doesn't provide XRCD but set it anyhow
      - OPEN/CLOSE_XRCD are the ops alloc_xrcd()/dealloc_xrcd()
      - CREATE_SRQ/DESTROY_SRQ are the ops create_srq()/destroy_srq()
      - QUERY/MODIFY_SRQ are the ops query_srq()/modify_srq(). hns sets this but
        sometimes supplies a NULL op
      - RESIZE_CQ is the op resize_cq(). bnxt_re sets this but doesn't supply an op
      - ALLOC/DEALLOC_MW is alloc_mw()/dealloc_mw(). cxgb4 provided a
        (now deleted) implementation but no userspace support
      
      All drivers were checked to confirm that none provides an op without also
      setting uverbs_cmd_mask, so this should have no functional change.
      
      Link: https://lore.kernel.org/r/4-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
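
      Mechanically, the core-side replacement amounts to deriving the mask bits
      from whether the op is wired up, roughly like this (a sketch of the idea,
      not verbatim kernel code):

      	/* If the driver provides an SRQ implementation, the core enables the
      	 * matching uverbs commands; drivers no longer set these bits themselves. */
      	if (dev->ops.create_srq)
      		dev->uverbs_cmd_mask |=
      			BIT_ULL(IB_USER_VERBS_CMD_CREATE_SRQ) |
      			BIT_ULL(IB_USER_VERBS_CMD_MODIFY_SRQ) |
      			BIT_ULL(IB_USER_VERBS_CMD_QUERY_SRQ) |
      			BIT_ULL(IB_USER_VERBS_CMD_DESTROY_SRQ);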
    • RDMA: Remove elements in uverbs_cmd_mask that all drivers set · c074bb1e
      Committed by Jason Gunthorpe
      This is a step toward eliminating uverbs_cmd_mask. Preset these bits in
      the core code; only the reg_user_mr op wasn't already being required from
      the drivers.
      
      Link: https://lore.kernel.org/r/3-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
    • RDMA: Remove uverbs_ex_cmd_mask values that are linked to functions · b8e3130d
      Committed by Jason Gunthorpe
      For a while now the uverbs layer has checked whether the driver implements
      a function before allowing the ucmd to proceed. This largely obsoletes the
      cmd_mask stuff, but there are some tricky bits in drivers preventing it
      from being removed.
      
      Remove the easy elements of uverbs_ex_cmd_mask by pre-setting them in the
      core code. These are triggered solely based on the related ops function
      pointer.
      
      query_device_ex is not triggered based on an op, but all drivers already
      implement something compatible with the extension, so enable it globally
      too.
      
      Link: https://lore.kernel.org/r/2-v1-caa70ba3d1ab+1436e-ucmd_mask_jgg@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
  11. 17 Oct 2020, 1 commit
  12. 18 Sep 2020, 2 commits
  13. 10 Sep 2020, 2 commits
  14. 27 Aug 2020, 1 commit
  15. 19 Aug 2020, 1 commit
  16. 07 Jul 2020, 1 commit
  17. 03 Jun 2020, 1 commit
  18. 15 Apr 2020, 1 commit
  19. 13 Mar 2020, 1 commit
  20. 17 Jan 2020, 1 commit
  21. 13 Dec 2019, 1 commit
  22. 20 Nov 2019, 1 commit
  23. 07 Nov 2019, 1 commit
  24. 12 Aug 2019, 1 commit
  25. 12 Jun 2019, 1 commit
  26. 11 Jun 2019, 3 commits
  27. 09 Apr 2019, 2 commits
  28. 02 Apr 2019, 1 commit