1. 28 Jul, 2015 1 commit
  2. 25 Jun, 2015 2 commits
  3. 28 May, 2015 1 commit
    • cpumask_set_cpu_local_first => cpumask_local_spread, lament · f36963c9
      Committed by Rusty Russell
      da91309e (cpumask: Utility function to set n'th cpu...) created a
      genuinely weird function.  I never saw it before; it went through DaveM.
      (He only does this to make us other maintainers feel better about our own
      mistakes.)
      
      cpumask_set_cpu_local_first's purpose is to say "I need to spread things
      across N online cpus, choose the ones on this numa node first"; you call
      it in a loop.
      
      It can fail.  One of the two callers ignores this; the other aborts and
      fails the device open.
      
      It can fail in two ways: failing to allocate the off-stack cpumask, or
      through a convoluted codepath which AFAICT can only occur if
      cpu_online_mask changes.  That shouldn't happen, because if
      cpu_online_mask can change while you call this, it could return a
      now-offline cpu anyway.
      
      It contains a nonsensical test "!cpumask_of_node(numa_node)".  This was
      drawn to my attention by Geert, who said this causes a warning on Sparc.
      It sets a single bit in a cpumask instead of returning a cpu number,
      because that's what the callers want.
      
      It could be made more efficient by passing the previous cpu rather than
      an index, but that would be more invasive to the callers.
      
      Fixes: da91309e
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (then rebased)
      Tested-by: Amir Vadai <amirv@mellanox.com>
      Acked-by: Amir Vadai <amirv@mellanox.com>
      Acked-by: David S. Miller <davem@davemloft.net>
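      A minimal sketch of how a caller might use the replacement API introduced
      here: cpumask_local_spread(i, node) simply returns a cpu number, preferring
      node-local cpus first, and cannot fail, so the caller's loop needs no error
      handling.  The ring names below are hypothetical, not from a real driver.

      #include <linux/cpumask.h>

      /* Illustrative only: assign each of nrings queues a cpu, preferring
       * cpus on the device's numa node first. */
      static void example_spread_rings(unsigned int nrings, int numa_node,
                                       unsigned int *ring_cpu)
      {
              unsigned int i;

              for (i = 0; i < nrings; i++)
                      ring_cpu[i] = cpumask_local_spread(i, numa_node);
      }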
  4. 30 Apr, 2015 1 commit
  5. 10 Apr, 2015 1 commit
    • mlx4/mlx5: Use dma_wmb/rmb where appropriate · 12b3375f
      Committed by Alexander Duyck
      This patch should help to improve the performance of the mlx4 and mlx5
      drivers on a number of architectures.  For example, on x86 dma_wmb/rmb
      equates to a barrier() call, as the architecture is already strongly
      ordered, while on PowerPC the call works out to an lwsync, which is
      significantly less expensive than the sync call that was being used for
      wmb.
      
      I placed the new barriers between any spots that seemed to be trying to
      order memory/memory reads or writes.  In any spots that involved MMIO I
      left the existing wmb in place, as the new barriers cannot order
      transactions between coherent and non-coherent memories.
      
      v2: Reduced the replacements to just the spots where I could clearly
          identify the usage pattern.
      
      Cc: Amir Vadai <amirv@mellanox.com>
      Cc: Ido Shamay <idos@mellanox.com>
      Cc: Eli Cohen <eli@mellanox.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
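      A minimal sketch of the placement pattern described above, with
      hypothetical descriptor and register names (not the real mlx4/mlx5
      structures): dma_wmb() orders writes to coherent DMA memory against each
      other, while the MMIO doorbell write still needs the full wmb().

      #include <linux/io.h>
      #include <linux/types.h>
      #include <asm/barrier.h>
      #include <asm/byteorder.h>

      struct example_desc {
              __be64 addr;
              __be32 len;
              __be32 flags;                /* ownership bit lives here */
      };

      #define EXAMPLE_DESC_HW_OWNED cpu_to_be32(1U << 31)

      static void example_post_desc(struct example_desc *desc, dma_addr_t addr,
                                    u32 len, void __iomem *doorbell)
      {
              desc->addr = cpu_to_be64(addr);
              desc->len  = cpu_to_be32(len);

              /* Both writes target coherent DMA memory, so the lighter
               * dma_wmb() suffices to publish the descriptor contents
               * before the ownership bit flips. */
              dma_wmb();
              desc->flags |= EXAMPLE_DESC_HW_OWNED;

              /* The doorbell is MMIO (non-coherent); dma_wmb() cannot
               * order against it, so the full wmb() stays. */
              wmb();
              writel(1, doorbell);
      }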
  6. 26 Jan, 2015 1 commit
  7. 14 Jan, 2015 1 commit
  8. 23 Dec, 2014 1 commit
  9. 12 Dec, 2014 1 commit
    • net/mlx4: Change QP allocation scheme · ddae0349
      Committed by Eugenia Emantayev
      When using BF (Blue-Flame), the QPN overrides the VLAN, CV, and SV fields
      in the WQE. Thus, BF may only be used for QPNs with bits 6,7 unset.
      
      The current Ethernet driver code reserves a Tx QP range with 256b alignment.
      
      This is wrong because if there are more than 64 Tx QPs in use,
      QPNs >= base + 64 will have bit 6 or 7 set.
      
      This problem is not specific to the Ethernet driver; any entity that
      tries to reserve more than 64 BF-enabled QPs should fail.  Also, using
      ranges is not necessary here and is wasteful.
      
      The new mechanism introduced here will support reservation for
      "Eth QPs eligible for BF" for all drivers: bare-metal, multi-PF, and VFs
      (when hypervisors support WC in VMs). The flow we use is:
      
      1. In mlx4_en, allocate Tx QPs one by one instead of a range allocation,
         and request "BF enabled QPs" if BF is supported for the function
      
      2. In the ALLOC_RES FW command, change param1 to:
         a. param1[23:0]  - number of QPs
         b. param1[31:24] - flags controlling QP reservation

      Bit 31 refers to Eth Blue-Flame supported QPs.  Those QPs must have
      bits 6 and 7 unset in order to be used in Ethernet.

      Bits 24-30 of the flags are currently reserved.
      
      When a function tries to allocate a QP, it states the required attributes
      for this QP.  Those attributes are considered "best-effort".  If an
      attribute, such as an Ethernet BF enabled QP, is a must-have attribute,
      the function has to check that the attribute is supported before trying
      to do the allocation.
      
      In a lower layer of the code, mlx4_qp_reserve_range masks out the bits
      which are unsupported.  If SRIOV is used, the PF validates those attributes
      and masks out unsupported attributes as well.  To learn which attributes
      are supported, a VF uses the QUERY_FUNC_CAP command.  This command's
      mailbox is filled by the PF, which reports the QP allocation attributes
      it supports.
      Signed-off-by: Eugenia Emantayev <eugenia@mellanox.co.il>
      Signed-off-by: Matan Barak <matanb@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
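      A minimal sketch of the two rules described above, with hypothetical
      names: the Blue-Flame eligibility test on a QPN, and the new param1
      packing for the ALLOC_RES command.

      #include <linux/types.h>

      #define QPN_BF_BITS            0xc0u        /* bits 6 and 7 */
      #define ALLOC_RES_QP_MASK      0x00ffffffu  /* param1[23:0]: QP count */
      #define ALLOC_RES_ETH_BF_FLAG  (1u << 31)   /* param1[31]: Eth BF QPs */

      /* A 256-aligned range therefore yields only 64 BF-usable QPNs
       * (base + 0 .. base + 63) before bit 6 gets set. */
      static bool example_qpn_bf_eligible(u32 qpn)
      {
              return (qpn & QPN_BF_BITS) == 0;
      }

      /* Pack the QP count into param1[23:0] and the reservation flags,
       * e.g. ALLOC_RES_ETH_BF_FLAG, into param1[31:24]. */
      static u32 example_alloc_res_param1(u32 nqps, u32 flags)
      {
              return (nqps & ALLOC_RES_QP_MASK) | (flags & ~ALLOC_RES_QP_MASK);
      }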
  10. 31 Oct, 2014 2 commits
  11. 09 Oct, 2014 1 commit
  12. 08 Oct, 2014 1 commit
  13. 06 Oct, 2014 12 commits
  14. 05 Oct, 2014 1 commit
  15. 29 Sep, 2014 1 commit
  16. 20 Sep, 2014 1 commit
    • net/mlx4_en: Add mlx4_en_get_cqe helper · b1b6b4da
      Committed by Ido Shamay
      This function derives the base address of the CQE from the CQE size,
      and calculates the real CQE context segment in it from the factor
      (as before).  Before this change the code used the factor to
      calculate the base address of the CQE as well.
      
      The factor indicates in which segment of the CQE stride the CQE information
      is located.  For 32-byte strides, the segment is 0, and for 64-byte strides,
      the segment is 1 (bytes 32..63).  Using the factor was fine as long as we
      had only 32- and 64-byte strides.  However, with larger strides, the factor
      is zero, and so cannot be used to calculate the base of the CQE.
      
      The helper uses the same method of pulling the CQE from the buffer as all
      other components that read the CQE buffer (the mlx4_ib driver and libmlx4).
      Signed-off-by: Ido Shamay <idos@mellanox.com>
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
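      A minimal sketch of the scheme described above (hypothetical names, not
      the exact driver code): the stride base comes from the CQE size alone,
      and the factor only selects the 32-byte context segment within it.

      struct example_cqe;  /* stand-in for a 32-byte CQE context struct */

      static inline struct example_cqe *example_get_cqe(void *buf, int idx,
                                                        int cqe_sz, int factor)
      {
              /* The base of the stride depends only on the CQE size... */
              void *cqe = buf + idx * cqe_sz;

              /* ...while the factor picks the 32-byte context segment:
               * 0 for 32-byte strides, 1 (bytes 32..63) for 64-byte ones. */
              return (struct example_cqe *)(cqe + factor * 32);
      }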
  17. 04 Sep, 2014 1 commit
  18. 23 Jul, 2014 1 commit
  19. 09 Jul, 2014 1 commit
    • net/mlx4_en: Ignore budget on TX napi polling · fbc6daf1
      Committed by Amir Vadai
      It is recommended that TX work not count against the quota.
      The cost of TX packet liberation is a minute percentage of what it costs to
      process an RX frame. Furthermore, that SKB freeing makes memory available for
      other paths in the stack.
      
      Give the TX a larger budget and be more aggressive about cleaning up the
      Tx descriptors.  This budget can be changed using ethtool:

      $ ethtool -C eth1 tx-frames-irq <budget>
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
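      A minimal sketch of the polling idea described above, with hypothetical
      ring and helper names: TX completion runs against its own ethtool-tunable
      limit, and only RX work counts toward the NAPI quota.

      #include <linux/kernel.h>
      #include <linux/netdevice.h>

      struct example_ring {
              struct napi_struct napi;
              int tx_work_limit;  /* set via ethtool -C tx-frames-irq */
      };

      /* Assumed driver helpers, declared for illustration only. */
      int example_clean_tx(struct example_ring *ring, int work_limit);
      int example_clean_rx(struct example_ring *ring, int budget);

      static int example_napi_poll(struct napi_struct *napi, int budget)
      {
              struct example_ring *ring =
                      container_of(napi, struct example_ring, napi);
              int rx_done;

              /* TX cleanup is bounded by its own, larger budget and does
               * not consume the NAPI quota. */
              example_clean_tx(ring, ring->tx_work_limit);

              /* Only RX work is reported back against the quota. */
              rx_done = example_clean_rx(ring, budget);
              if (rx_done < budget)
                      napi_complete(napi);

              return rx_done;
      }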
  20. 03 Jul, 2014 1 commit
  21. 03 Jun, 2014 1 commit
  22. 15 May, 2014 1 commit
  23. 09 May, 2014 1 commit
  24. 13 Mar, 2014 1 commit
  25. 03 Mar, 2014 3 commits