  1. 28 May 2020, 1 commit
  2. 11 May 2020, 1 commit
    • net/mlx5: Replace zero-length array with flexible-array · b6ca09cb
      By Gustavo A. R. Silva
      The current codebase makes use of the zero-length array language
      extension to the C90 standard, but the preferred mechanism to declare
      variable-length types such as these ones is a flexible array member[1][2],
      introduced in C99:
      
      struct foo {
              int stuff;
              struct boo array[];
      };
      
      By making use of the mechanism above, we will get a compiler warning
      in case the flexible array does not occur last in the structure, which
      will help us prevent some kind of undefined behavior bugs from being
      inadvertently introduced[3] to the codebase from now on.
      
      Also, notice that dynamic memory allocations won't be affected by
      this change:
      
      "Flexible array members have incomplete type, and so the sizeof operator
      may not be applied. As a quirk of the original implementation of
      zero-length arrays, sizeof evaluates to zero."[1]
      
      sizeof(flexible-array-member) triggers a warning because flexible array
      members have incomplete type[1]. There are some instances of code in
      which the sizeof operator is being incorrectly/erroneously applied to
      zero-length arrays and the result is zero. Such instances may be hiding
      some bugs. So, this work (flexible-array member conversions) will also
      help to get completely rid of those sorts of issues.
      
      This issue was found with the help of Coccinelle.
      
      [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
      [2] https://github.com/KSPP/linux/issues/21
      [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour")
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  3. 03 May 2020, 1 commit
  4. 29 Apr 2020, 1 commit
  5. 19 Apr 2020, 2 commits
  6. 05 Jul 2019, 1 commit
  7. 04 Jul 2019, 1 commit
  8. 25 Jun 2019, 1 commit
  9. 24 Jun 2019, 1 commit
    • RDMA/mlx5: Introduce and implement new IB_WR_REG_MR_INTEGRITY work request · 38ca87c6
      By Max Gurtovoy
      This new WR will be used to perform PI (protection information) handover
      using the new API. Using the new API, the user will post a single WR that
      will internally perform all the needed actions to complete PI operation.
      This new WR will use a memory region that was allocated as
      IB_MR_TYPE_INTEGRITY and was mapped using ib_map_mr_sg_pi to perform the
      registration. In the old API, in order to perform a signature handover
      operation, each ULP should perform the following:
      1. Map and register the data buffers.
      2. Map and register the protection buffers.
      3. Post a special reg WR to configure the signature handover operation
         layout.
      4. Invalidate the signature memory key.
      5. Invalidate protection buffers memory key.
      6. Invalidate data buffers memory key.
      
      In the new API, the mapping of both data and protection buffers is
      performed using a single call to ib_map_mr_sg_pi function. Also the
      registration of the buffers and the configuration of the signature
      operation layout is done by a single new work request called
      IB_WR_REG_MR_INTEGRITY.
      This patch implements this operation for mlx5 devices that are capable
      of offloading data integrity generation/validation while performing the
      actual buffer transfer.
      This patch will not remove the old signature API that is used by the iSER
      initiator and target drivers. This will be done in the future.
      
      In the internal implementation, for each IB_WR_REG_MR_INTEGRITY work
      request we use a single UMR operation to register both data and
      protection buffers using KLMs.
      Afterwards, another UMR operation will describe the strided block format.
      These will be followed by 2 SET_PSV operations to set the memory/wire
      domains initial signature parameters passed by the user.
      At the end of the whole transaction, only the signature memory key
      (the one exposed for the RDMA operation) will be invalidated.
      Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
      Signed-off-by: Israel Rukshin <israelr@mellanox.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
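The new flow described above (one mapping call, one WR, one invalidation) can be sketched roughly as follows. This is a kernel-internal sketch, not a standalone program: error handling and the surrounding QP/PD setup are elided, and the exact field usage should be checked against the verbs headers of that kernel release:

```c
/* Allocate an integrity-enabled MR (new API). */
mr = ib_alloc_mr_integrity(pd, max_num_data_sg, max_num_meta_sg);

/* Old-API steps 1+2 collapse into one call: map data and
 * protection (metadata) scatterlists through the same MR. */
ret = ib_map_mr_sg_pi(mr, data_sg, data_nents, NULL,
                      prot_sg, prot_nents, NULL, PAGE_SIZE);

/* Old-API step 3: a single work request both registers the buffers
 * and configures the signature handover layout. */
reg_wr.wr.opcode = IB_WR_REG_MR_INTEGRITY;
reg_wr.mr        = mr;
reg_wr.key       = mr->rkey;
reg_wr.access    = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ;
ret = ib_post_send(qp, &reg_wr.wr, NULL);

/* Teardown: only the signature memory key (the one exposed for the
 * RDMA operation) needs to be invalidated afterwards. */
```

Compared with the six-step old API, the ULP no longer manages separate data and protection memory keys at all.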
  10. 24 Apr 2019, 1 commit
    • net/mlx5e: XDP, Inline small packets into the TX MPWQE in XDP xmit flow · c2273219
      By Shay Agroskin
      Upon high packet rate with multiple CPUs TX workloads, much of the HCA's
      resources are spent on prefetching TX descriptors, thus affecting
      transmission rates.
      This patch mitigates this problem by moving some of the workload to the
      CPU, reducing the HW data prefetch overhead for small packets (<= 256B).
      
      When forwarding packets with XDP, a packet smaller than a certain size
      (set to ~256 bytes) is sent inline within its WQE TX descriptor
      (mem-copied), when the hardware TX queue is congested beyond a
      pre-defined watermark.
      
      This is added to better utilize the HW resources (one less packet data
      prefetch) and allow better scalability, at the expense of CPU usage
      (which now 'memcpy's the packet into the WQE).
      
      To load-balance between HW and CPU and get the maximum packet rate, we
      use watermarks to detect how congested the HW is and move the workload
      back and forth between HW and CPU.
      
      Performance:
      Tested packet rate for UDP 64Byte multi-stream
      over two dual port ConnectX-5 100Gbps NICs.
      CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
      
      * Tested with hyper-threading disabled
      
      XDP_TX:
      
      | rings    | before | after   | change |
      | 24 rings | 51Mpps | 116Mpps | +126%  |
      | 1 ring   | 12Mpps | 12Mpps  | same   |
      
      XDP_REDIRECT:
      
      ** Below is the transmit rate, not the redirection rate
      which might be larger, and is not affected by this patch.
      
      | rings    | before  | after   | change |
      | 32 rings | 64Mpps  | 92Mpps  | +43%   |
      | 1 ring   | 6.4Mpps | 6.4Mpps | same   |
      
      As we can see, the feature significantly improves scaling without
      hurting single-ring performance.
      Signed-off-by: Shay Agroskin <shayag@mellanox.com>
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  11. 18 Mar 2019, 1 commit
  12. 13 Nov 2018, 1 commit
  13. 25 Sep 2018, 1 commit
  14. 09 Jan 2018, 1 commit
  15. 09 Nov 2017, 1 commit
  16. 05 Aug 2017, 1 commit
  17. 24 Jul 2017, 2 commits
  18. 27 Jun 2017, 1 commit
  19. 16 Jun 2017, 1 commit
  20. 17 Apr 2017, 1 commit
  21. 07 Feb 2017, 1 commit
    • net/mlx5: TX WQE update · 2b31f7ae
      By Saeed Mahameed
      Add new TX WQE fields for ConnectX-5 vlan insertion support, type and
      vlan_tci. When type = MLX5_ETH_WQE_INSERT_VLAN, the HW will insert the
      vlan and prio fields (vlan_tci) into the packet.

      Those bits and the inline header fields are mutually exclusive, and are
      valid only when:
      MLX5_CAP_ETH(mdev, wqe_inline_mode) == MLX5_CAP_INLINE_MODE_NOT_REQUIRED
      and MLX5_CAP_ETH(mdev, wqe_vlan_insert),
      which are set in ConnectX-5 and later HW generations.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
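The capability gating described above can be sketched as follows. This is a kernel-internal sketch, not standalone code, and the `eseg->insert.*` field names are an assumption about the WQE layout this commit introduces; treat them as illustrative:

```c
/* Gate HW vlan insertion on the two capabilities named above;
 * otherwise fall back to copying headers inline (the two mechanisms
 * are mutually exclusive in the WQE). */
if (MLX5_CAP_ETH(mdev, wqe_inline_mode) == MLX5_CAP_INLINE_MODE_NOT_REQUIRED &&
    MLX5_CAP_ETH(mdev, wqe_vlan_insert)) {
    /* ConnectX-5+: HW inserts the vlan/prio for us. */
    eseg->insert.type     = cpu_to_be16(MLX5_ETH_WQE_INSERT_VLAN);
    eseg->insert.vlan_tci = cpu_to_be16(skb_vlan_tag_get(skb));
} else {
    /* Older HW: build the vlan-tagged headers inline in the WQE. */
}
```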
  22. 03 Jan 2017, 3 commits
  23. 17 Aug 2016, 2 commits
    • {net,IB}/mlx5: Modify QP commands via mlx5 ifc · 1a412fb1
      By Saeed Mahameed
      Prior to this patch we assumed that all modify QP commands share the
      same layout.
      
      In ConnectX-4 for each QP transition there is a specific command
      and their layout can vary.
      
      e.g. the 2err/2rst commands don't have a QP context in their layout,
      yet before this patch we posted the QP context in those commands.
      
      Fortunately the FW only checks the suffix of the commands and executes
      them, while ignoring all invalid data sent after the valid command
      layout.
      
      This patch removes mlx5_modify_qp_mbox_in and changes
      mlx5_core_qp_modify to receive the required transition and QP context
      with opt_param_mask if needed.  This way the caller is not required to
      provide the command inbox layout and it will be generated automatically.
      
      mlx5_core_qp_modify will generate the command inbox/outbox layouts
      according to the requested transition and will fill the requested
      parameters.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
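After the change, a caller sketch looks roughly like this. The exact signature and opcode names are assumptions based on the kernel of that era; this is a kernel-internal illustration, not runnable code:

```c
/* The caller names only the transition opcode and the QP context;
 * the command inbox/outbox layout is generated internally. */
err = mlx5_core_qp_modify(dev, MLX5_CMD_OP_RST2INIT_QP,
                          opt_param_mask, qpc, &qp->mqp);

/* For transitions like 2err/2rst, whose layout carries no QP context,
 * the caller simply passes none: */
err = mlx5_core_qp_modify(dev, MLX5_CMD_OP_2RST_QP, 0, NULL, &qp->mqp);
```

The point of the refactor is visible in the second call: the caller no longer has to know that this command's inbox omits the context.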
    • {net,IB}/mlx5: QP/XRCD commands via mlx5 ifc · 09a7d9ec
      By Saeed Mahameed
      Remove the old representation of manually created QP/XRCD command
      layouts and use mlx5_ifc canonical structures and defines.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
  24. 23 Jun 2016, 2 commits
  25. 10 Jun 2016, 1 commit
  26. 07 Jun 2016, 1 commit
  27. 22 Apr 2016, 1 commit
  28. 02 Mar 2016, 2 commits
  29. 22 Jan 2016, 3 commits
  30. 24 Dec 2015, 2 commits