1. 14 Nov, 2017 38 commits
  2. 11 Nov, 2017 2 commits
    • IB/mlx5: Add PCI write end padding support · b1383aa6
      Authored by Noa Osherovich
      Add the PCI write end padding flag to the device_cap_flags enum and set
      it during mlx5_ib_query_device so it will be reported to user-space.

      During WQ/QP creation, set that capability for the WQ/QP if the user
      requested it and the HW supports it.

      Modifying PCI write end padding is not supported for now. There is no
      such flag for a QP, but for a WQ, create and modify use the same flag.
      Return an error if the PCI write end padding flag is set during modify_wq.
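      The flow above amounts to three checks. Below is a minimal,
      self-contained sketch of that logic, using simplified stand-in types
      and flag values; the real driver operates on the kernel's ib_*
      structures and the flags introduced by the companion IB/core patch,
      and the specific error code here is only illustrative.

          /* Stand-in definitions for the sketch; not the driver's types. */
          #include <errno.h>
          #include <stdbool.h>
          #include <stdint.h>

          #define CAP_PCI_WRITE_END_PADDING  (1ULL << 0) /* device capability bit */
          #define WQ_FLAG_PCI_WRITE_END_PAD  (1U << 0)   /* WQ create/modify flag */

          struct dev_caps { uint64_t device_cap_flags; };
          struct wq_attr  { uint32_t flags; };

          /* query_device: report the capability only when the HW supports it. */
          void query_device(bool hw_end_pad, struct dev_caps *caps)
          {
                  if (hw_end_pad)
                          caps->device_cap_flags |= CAP_PCI_WRITE_END_PADDING;
          }

          /* create_wq: honour the user's request only if the HW supports it. */
          int create_wq(const struct dev_caps *caps, const struct wq_attr *attr)
          {
                  if ((attr->flags & WQ_FLAG_PCI_WRITE_END_PAD) &&
                      !(caps->device_cap_flags & CAP_PCI_WRITE_END_PADDING))
                          return -EOPNOTSUPP; /* requested but unsupported */
                  /* ... build the WQ context with end padding enabled ... */
                  return 0;
          }

          /* modify_wq: changing the padding mode is not supported, reject it. */
          int modify_wq(const struct wq_attr *attr)
          {
                  if (attr->flags & WQ_FLAG_PCI_WRITE_END_PAD)
                          return -EOPNOTSUPP;
                  /* ... apply the remaining, supported modifications ... */
                  return 0;
          }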
      Signed-off-by: Noa Osherovich <noaos@mellanox.com>
      Reviewed-by: Majd Dibbiny <majd@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
    • IB/core: Add PCI write end padding flags for WQ and QP · e1d2e887
      Authored by Noa Osherovich
      There are root complexes that are able to optimize their performance
      when incoming data consists of multiples of a full cache line.
      
      PCI write end padding is the device's ability to pad the ending of
      incoming packets (scatter) to a full cache line, so that the last
      upstream write generated by an incoming packet will be a full cache
      line.
      
      Add a relevant entry to ib_device_cap_flags to report this capability
      of an RDMA device.
      
      Add the QP and WQ create flags:
       * A QP/WQ created with a scatter end padding flag will cause the
         HW to pad the last upstream write generated by a packet to a full
         cache line.
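      As a rough illustration of how an in-kernel consumer might opt in, the
      sketch below checks the device capability bit before setting the WQ
      create flag; it assumes the flag names added by this patch, abbreviates
      the ib_wq_init_attr fields, and is not taken from any real ULP.

          #include <rdma/ib_verbs.h>

          /* Create a receive WQ with PCI write end padding when supported.
           * pd and cq are provided by the caller; error handling is minimal. */
          static struct ib_wq *create_padded_rq(struct ib_pd *pd, struct ib_cq *cq,
                                                u32 max_wr, u32 max_sge)
          {
                  struct ib_wq_init_attr attr = {
                          .wq_type = IB_WQT_RQ,
                          .max_wr  = max_wr,
                          .max_sge = max_sge,
                          .cq      = cq,
                  };

                  /* Opt in only when the device advertises the capability. */
                  if (pd->device->attrs.device_cap_flags &
                      IB_DEVICE_PCI_WRITE_END_PADDING)
                          attr.create_flags |= IB_WQ_FLAGS_PCI_WRITE_END_PADDING;

                  return ib_create_wq(pd, &attr);
          }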
      
      Users should consider several factors before activating this feature:
      - Under high CPU memory load (which may in turn cause PCI back
        pressure), if a large percentage of the writes are partial cache
        lines, this feature is worth evaluating as a possible optimization.
      - This feature might reduce performance if most packets are between one
        and two cache lines and PCIe throughput has already reached its
        maximum capacity. For example, a 65B packet from the network port
        will lead to a 128B write on PCIe, which may push PCIe traffic
        toward saturation (see the worked example below).
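      A small worked example of the arithmetic behind the second point,
      assuming a 64B cache line and cache-line-aligned receive buffers (the
      packet sizes and line size are illustrative, not taken from the patch):

          #include <stdio.h>

          #define CACHE_LINE 64u /* assumed cache line size in bytes */

          /* Total bytes written upstream for one packet when the last write
           * is padded out to a full cache line. */
          static unsigned int padded_write_bytes(unsigned int pkt_bytes)
          {
                  return (pkt_bytes + CACHE_LINE - 1) / CACHE_LINE * CACHE_LINE;
          }

          int main(void)
          {
                  /* 65B packet -> 128B on PCIe: padding nearly doubles the
                   * bytes, which only pays off if the link is not already
                   * saturated. */
                  printf("65B  -> %uB\n", padded_write_bytes(65));  /* 128B */
                  printf("300B -> %uB\n", padded_write_bytes(300)); /* 320B */
                  return 0;
          }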
      Signed-off-by: Noa Osherovich <noaos@mellanox.com>
      Reviewed-by: Majd Dibbiny <majd@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Doug Ledford <dledford@redhat.com>