1. 12 Dec 2014, 3 commits
    • net/mlx4: Refactor QUERY_PORT · 431df8c7
      Committed by Matan Barak
      Currently, QUERY_PORT is done as part of the QUERY_DEV_CAP firmware command.
      
      Since we would like to use it without querying all device capabilities,
      extract this part to be a function of its own.
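
      As an illustration only, here is a compilable sketch of the refactor's
      shape. All names (query_port, query_dev_cap, struct port_cap) and the
      placeholder values are made up; the real driver parses a firmware
      mailbox instead:

      #include <stdio.h>

      struct port_cap { int max_mtu, max_gids, max_pkeys; };

      /* Hypothetical standalone per-port query, now usable on its own. */
      static int query_port(int port, struct port_cap *cap)
      {
          (void)port;
          /* Placeholder values; the real code parses the QUERY_PORT
           * firmware mailbox here. */
          cap->max_mtu = 4096;
          cap->max_gids = 128;
          cap->max_pkeys = 128;
          return 0;
      }

      /* The device-wide query delegates per-port work to the helper
       * instead of parsing port fields inline. */
      static int query_dev_cap(int num_ports, struct port_cap *caps)
      {
          for (int port = 1; port <= num_ports; port++)
              if (query_port(port, &caps[port - 1]))
                  return -1;
          return 0;
      }

      int main(void)
      {
          struct port_cap caps[2];
          if (query_dev_cap(2, caps) == 0)
              printf("port 1: max_mtu=%d\n", caps[0].max_mtu);
          return 0;
      }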
      Signed-off-by: Matan Barak <matanb@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx4: Add A0 hybrid steering · d57febe1
      Committed by Matan Barak
      A0 hybrid steering is a form of high-performance flow steering.
      In this mode, mlx4 cards use fast, limited table-based steering
      to enable fast steering of unicast packets to a QP.
      
      In order to implement A0 hybrid steering we allocate resources
      from different zones:
      (1) General range
      (2) Special MAC-assigned QPs [RSS, Raw-Ethernet], each with its own region.
      
      When we create an RSS QP or a raw Ethernet (A0 steerable and BF ready) QP,
      we try hard to allocate the QP from range (2). Otherwise, we try hard not
      to allocate from this range. However, when the system is pushed to its
      limits and needs every resource, the allocator uses every region it can.
      
      Meaning, when we run out of raw-eth QPs, the allocator allocates from the
      general range (and the special A0 area is no longer active). If we run out
      of RSS QPs, the mechanism tries to allocate from the raw-eth QP zone. If that
      is also exhausted, the allocator will allocate from the general range
      (and the A0 region is no longer active).

      Note that if a raw-eth QP is allocated from the general range, the allocator
      attempts to allocate it such that bits 6 and 7 (the blueflame bits) in the
      QP number are not set.
      
      When the feature is used in SRIOV, the VF has to notify the PF what
      kind of QP attributes it needs. In order to do that, along with the
      "Eth QP blueflame" bit, we reserve a new "A0 steerable QP" bit. According
      to the combination of these bits, the PF tries to allocate a suitable QP.
      
      In order to maintain backward compatibility (with older PFs), the PF
      notifies which QP attributes it supports via the QUERY_FUNC_CAP command.
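
      The fallback order can be sketched in plain C. Everything below
      (zone names, toy sizes, base QPNs) is illustrative; only the fallback
      order and the bits-6/7 blueflame constraint come from this message:

      #include <stdio.h>
      #include <stdbool.h>

      /* Illustrative zones; sizes and base QPNs are toy values. */
      enum zone { ZONE_GENERAL, ZONE_RAW_ETH, ZONE_RSS, ZONE_COUNT };
      static int zone_free[ZONE_COUNT] = { 4, 2, 1 };
      static unsigned int zone_base[ZONE_COUNT] = { 0x400, 0x100, 0x200 };

      /* Bits 6 and 7 of the QP number are the blueflame bits; a BF-ready
       * Ethernet QP needs both unset. */
      static bool qpn_bf_ok(unsigned int qpn)
      {
          return (qpn & 0xC0) == 0;
      }

      static int alloc_from_zone(enum zone z)
      {
          if (zone_free[z] == 0)
              return -1; /* zone exhausted */
          return (int)(zone_base[z] + (unsigned int)--zone_free[z]);
      }

      /* Fallback order from the text: RSS allocations spill into the
       * raw-eth zone, and both spill into the general range last. */
      static int alloc_rss_qp(void)
      {
          int qpn = alloc_from_zone(ZONE_RSS);
          if (qpn < 0)
              qpn = alloc_from_zone(ZONE_RAW_ETH);
          if (qpn < 0)
              qpn = alloc_from_zone(ZONE_GENERAL);
          return qpn;
      }

      int main(void)
      {
          for (int i = 0; i < 4; i++) {
              int qpn = alloc_rss_qp();
              printf("qpn=0x%x bf_ok=%d\n", (unsigned int)qpn,
                     qpn >= 0 && qpn_bf_ok((unsigned int)qpn));
          }
          return 0;
      }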
      Signed-off-by: Matan Barak <matanb@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx4: Change QP allocation scheme · ddae0349
      Committed by Eugenia Emantayev
      When using BF (Blue-Flame), the QPN overrides the VLAN, CV, and SV fields
      in the WQE. Thus, BF may only be used for QPNs with bits 6,7 unset.
      
      The current Ethernet driver code reserves a Tx QP range with 256b alignment.
      
      This is wrong because if there are more than 64 Tx QPs in use,
      QPNs >= base + 65 will have bits 6/7 set.
      
      This problem is not specific to the Ethernet driver; any entity that
      tries to reserve more than 64 BF-enabled QPs should fail. Also, using
      ranges is not necessary here and is wasteful.
      
      The new mechanism introduced here will support reservation for
      "Eth QPs eligible for BF" for all drivers: bare-metal, multi-PF, and VFs
      (when hypervisors support WC in VMs). The flow we use is:
      
      1. In mlx4_en, allocate Tx QPs one by one instead of reserving a range,
         and request "BF enabled QPs" if BF is supported for the function.
      
      2. In the ALLOC_RES FW command, change param1 to:
      a. param1[23:0]  - number of QPs
      b. param1[31:24] - flags controlling QP reservation
      
      Bit 31 refers to Eth blueflame supported QPs. Those QPs must have
      bits 6 and 7 unset in order to be used in Ethernet.
      
      Bits 24-30 of the flags are currently reserved.
      
      When a function tries to allocate a QP, it states the required attributes
      for this QP. Those attributes are considered "best-effort". If an attribute,
      such as Ethernet BF enabled QP, is a must-have attribute, the function has
      to check that the attribute is supported before attempting the allocation.
      
      In a lower layer of the code, mlx4_qp_reserve_range masks out the bits
      which are unsupported. If SRIOV is used, the PF validates those attributes
      and masks out unsupported attributes as well. In order to learn which
      attributes are supported, the VF uses the QUERY_FUNC_CAP command. This
      command's mailbox is filled by the PF, which notifies the VF which QP
      allocation attributes it supports.
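
      A toy sketch of the param1 layout and the best-effort masking; the
      helper names are made up, and only the [23:0]/[31:24] split and the
      bit-31 meaning come from this message:

      #include <stdio.h>
      #include <stdint.h>

      /* param1 layout per the text: [23:0] number of QPs, [31:24] flags,
       * bit 31 = Eth-blueflame-capable QPs (bits 6,7 of the QPN unset). */
      #define QP_COUNT_MASK  0x00FFFFFFu
      #define QP_FLAG_ETH_BF (1u << 31)

      static uint32_t encode_param1(uint32_t count, uint32_t flags)
      {
          return (count & QP_COUNT_MASK) | flags;
      }

      /* Best-effort semantics: a lower layer (and, under SRIOV, the PF)
       * masks out attributes that are not supported instead of failing;
       * 'supported' stands in for what QUERY_FUNC_CAP reported. */
      static uint32_t mask_unsupported(uint32_t requested, uint32_t supported)
      {
          return requested & supported;
      }

      int main(void)
      {
          uint32_t flags = mask_unsupported(QP_FLAG_ETH_BF, QP_FLAG_ETH_BF);
          uint32_t param1 = encode_param1(8, flags);
          printf("param1=0x%08x count=%u bf=%u\n", param1,
                 param1 & QP_COUNT_MASK, param1 >> 31);
          return 0;
      }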
      Signed-off-by: Eugenia Emantayev <eugenia@mellanox.co.il>
      Signed-off-by: Matan Barak <matanb@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 14 Nov 2014, 5 commits
  3. 12 Nov 2014, 1 commit
  4. 04 Nov 2014, 1 commit
  5. 01 Oct 2014, 2 commits
  6. 27 Sep 2014, 1 commit
  7. 20 Sep 2014, 1 commit
    • net/mlx4_core: Enable CQE/EQE stride support · 77507aa2
      Committed by Ido Shamay
      This feature is intended for archs having a cache line larger than 64B.
      
      Since our CQEs/EQEs are generally 64B in those systems, HW will write
      twice to the same cache line consecutively, causing pipe locks due to
      the hazard prevention mechanism. For elements in a cyclic buffer, writes
      are consecutive, so entries smaller than a cache line should be
      avoided, especially if they are written at a high rate.
      
      Reduce consecutive writes to the same cache line in CQs/EQs by allowing
      the driver to increase the distance between entries so that each resides
      in a different cache line. Until the introduction of this feature, there
      were two types of CQE/EQE:
      
      1. 32B stride and context in the [0-31] segment
      2. 64B stride and context in the [32-63] segment
      
      This feature introduces two additional types:
      
      3. 128B stride and context in the [0-31] segment (128B cache line)
      4. 256B stride and context in the [0-31] segment (256B cache line)
      
      Modify the mlx4_core driver to query the device for the CQE/EQE cache
      line stride capability and to enable that capability when the host
      cache line size is larger than 64 bytes (supported cache lines are
      128B and 256B).
      
      The mlx4 IB driver and libmlx4 need not be aware of this change. The PF
      context behaviour is changed to require this change in VF drivers
      running on such archs.
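
      The selection rule reduces to a small table. A sketch with
      illustrative names; the real driver negotiates this capability with
      the firmware rather than computing it locally:

      #include <stdio.h>

      /* Pick a CQE/EQE stride from the host cache line size, per the
       * rule above: 64B by default, 128B or 256B strides only when the
       * cache line is that large, so each entry owns its own line. */
      static int pick_cqe_stride(int cache_line_size)
      {
          if (cache_line_size >= 256)
              return 256;
          if (cache_line_size >= 128)
              return 128;
          return 64; /* default: no stride capability needed */
      }

      int main(void)
      {
          int lines[] = { 64, 128, 256 };
          for (int i = 0; i < 3; i++)
              printf("cache line %dB -> CQE stride %dB\n",
                     lines[i], pick_cqe_stride(lines[i]));
          return 0;
      }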
      Signed-off-by: Ido Shamay <idos@mellanox.com>
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 13 Aug 2014, 1 commit
  9. 05 Aug 2014, 1 commit
    • mlx4_core: Add support for secure-host and SMP firewall · 114840c3
      Committed by Jack Morgenstein
      Secure-host is the general term for the capability of a device
      to protect itself and the subnet from malicious host software.
      
      This is achieved by:
      1. Not allowing untrusted entities to access device configuration
         registers, either directly (through pci_cr or pci_conf) or
         indirectly (through MADs).

      2. Hiding the M_Key from untrusted entities.

      3. Preventing the modification of GUID0 by untrusted entities.

      4. Not allowing drivers on untrusted hosts to receive or transmit
         packets over QP0 (the SMP firewall).
      
      The secure-host capability depends on firmware handling all QP0
      packets, and not passing these packets up to the driver. Any information
      required by the driver for proper operation (e.g., SM lid) is passed
      via events generated by the firmware while processing QP0 MADs.
      
      Driver support mainly requires using the MAD_DEMUX FW command at startup,
      where the feature is enabled/disabled through a procedure described in
      the Mellanox HCA tools package.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      
      [ Fix error path in mlx4_setup_hca to go to err_mcg_table_free. - Roland ]
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  10. 23 Jul 2014, 1 commit
  11. 23 Jun 2014, 1 commit
  12. 11 Jun 2014, 1 commit
  13. 31 May 2014, 1 commit
  14. 30 May 2014, 2 commits
  15. 23 May 2014, 2 commits
  16. 15 May 2014, 1 commit
  17. 09 May 2014, 1 commit
  18. 06 May 2014, 2 commits
  19. 15 Apr 2014, 1 commit
  20. 14 Apr 2014, 1 commit
  21. 28 Mar 2014, 1 commit
  22. 21 Mar 2014, 3 commits
  23. 13 Mar 2014, 2 commits
  24. 07 Mar 2014, 1 commit
    • net/mlx4_core: mlx4_init_slave() shouldn't access comm channel before PF is ready · 97989356
      Committed by Amir Vadai
      Currently, the PF call to pci_enable_sriov from the PF probe function
      stalls for 10 seconds times the number of VFs probed on the host. This
      happens because the only way for such VFs to determine that the PF has
      finished initializing is to attempt a reset on the comm channel and
      wait for it to time out (after 10s).
      
      The PF probe function is called from a kernel workqueue, so during that
      time an RCU lock is held and the kernel's workqueue is stalled. This
      blocks other processes that try to use the workqueue or the RCU lock.
      For example, interface renaming, which calls rcu_synchronize, is blocked
      and timed out by systemd.
      
      Change mlx4_init_slave() to allow a VF probed on the host to immediately
      detect that the PF is not ready, and return EPROBE_DEFER instantly.
      
      Only when the PF finishes its initialization are such VFs allowed to
      access the comm channel.
      
      This issue and fix are relevant only to VFs probed on the hypervisor;
      there is no way to pass this information to a VM until the comm channel
      is ready. So in a VM, if the PF is not ready, the first command will
      time out after 10 seconds and return EPROBE_DEFER.
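
      The deferral pattern, sketched as standalone C. The readiness flag is
      a stand-in for the real PF-ready indication, and EPROBE_DEFER is
      defined locally (it is a kernel-internal errno value) just so the
      sketch compiles:

      #include <stdio.h>
      #include <stdbool.h>

      #define EPROBE_DEFER 517 /* kernel-internal errno; probe is retried */

      /* Stand-in for however the PF publishes "initialization done" to
       * VFs probed on the same host. */
      static bool pf_is_ready;

      static int init_slave(void)
      {
          /* The fix: return immediately instead of issuing a comm-channel
           * reset and blocking ~10s per VF inside the kernel workqueue. */
          if (!pf_is_ready)
              return -EPROBE_DEFER;

          /* Comm-channel handshake would go here. */
          return 0;
      }

      int main(void)
      {
          printf("probe -> %d (deferred)\n", init_slave());
          pf_is_ready = true;
          printf("probe -> %d (ok)\n", init_slave());
          return 0;
      }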
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  25. 06 Mar 2014, 1 commit
  26. 25 Feb 2014, 1 commit
    • net/mlx4: Fix limiting number of IRQ's instead of RSS queues · bb2146bc
      Committed by Ido Shamay
      This fixes a performance bug introduced by commit 90b1ebe7, "mlx4: set
      maximal number of default RSS queues", which limits the number of IRQs
      opened by the core module.
      The limit should be on the number of queues in the indirection table,
      rx_rings, and not on the number of IRQs. Also, applying the limit at
      mlx4_core initialization instead of in mlx4_en prevented using
      "ethtool -L" to utilize all the CPUs when performance mode is preferred,
      since limiting this number to 8 reduces the overall packet rate by
      15%-50% in multi-stream TCP applications.
      
      For example, after running ethtool -L <ethx> rx 16
      
                         Packet rate
      Before the fix        897799
      After the fix        1142070
      
      Results were obtained using netperf:
      
      S=200 ; ( for i in $(seq 1 $S) ; do ( \
        netperf -H 11.7.13.55 -t TCP_RR -l 30 &) ; \
        wait ; done | grep "1        1" | awk '{SUM+=$6} END {print SUM}' )
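
      For illustration, where the cap belongs after the fix can be sketched
      as follows; the function names are made up, and the default cap of 8
      mirrors the description above:

      #include <stdio.h>

      #define DEFAULT_MAX_RSS 8 /* default cap from the text */

      static int min_int(int a, int b) { return a < b ? a : b; }

      /* After the fix: cap only the default number of RX rings (the RSS
       * indirection-table size), not the IRQs/completion vectors opened
       * by the core. */
      static int default_rx_rings(int online_cpus)
      {
          return min_int(online_cpus, DEFAULT_MAX_RSS);
      }

      /* "ethtool -L <ethx> rx 16" can then raise the ring count up to the
       * number of completion vectors, which is no longer capped at 8. */
      static int max_rx_rings(int online_cpus, int completion_vectors)
      {
          return min_int(online_cpus, completion_vectors);
      }

      int main(void)
      {
          printf("default=%d max=%d\n",
                 default_rx_rings(16), max_rx_rings(16, 16));
          return 0;
      }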
      
      CC: Yuval Mintz <yuvalmin@broadcom.com>
      Signed-off-by: Ido Shamay <idos@mellanox.com>
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  27. 19 Feb 2014, 1 commit