1. 31 Jan 2017, 2 commits
  2. 30 Dec 2016, 1 commit
    • net/mlx4_core: Fix raw qp flow steering rules under SRIOV · 10b1c04e
      Committed by Jack Morgenstein
      Demoting simple flow steering rule priority (for DPDK) was achieved by
      wrapping FW commands MLX4_QP_FLOW_STEERING_ATTACH/DETACH for the PF
      as well, and forcing the priority to MLX4_DOMAIN_NIC in the wrapper
      function for the PF and all VFs.
      
      In function mlx4_ib_create_flow(), this change caused the main rule
      creation for the PF to be wrapped, while it left the associated
      tunnel steering rule creation unwrapped for the PF.
      
      This mismatch caused rule deletion failures in mlx4_ib_destroy_flow()
      for the PF when the detach wrapper function did not find the associated
      tunnel-steering rule (since creation of that rule for the PF did not
      go through the wrapper function).
      
      Fix this by setting MLX4_QP_FLOW_STEERING_ATTACH/DETACH to be "native"
      (so that the PF invocation does not go through the wrapper), and by
      performing the required priority demotion for the PF in the
      mlx4_ib_create_flow() code path (see the sketch after this entry).
      
      Fixes: 48564135 ("net/mlx4_core: Demote simple multicast and broadcast flow steering rules")
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      10b1c04e
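
      A minimal sketch of the demotion idea in C, not the actual patch: the
      struct and MLX4_DOMAIN_NIC come from include/linux/mlx4/device.h, but
      the helper name and call site are assumptions.

          #include <linux/mlx4/device.h>

          /* With the FW steering commands made native, the PF invocation
           * bypasses the SRIOV wrapper, so the PF code path must apply the
           * same priority demotion the wrapper applies for VFs. */
          static void demote_rule_priority(struct mlx4_net_trans_rule *rule)
          {
                  rule->priority = MLX4_DOMAIN_NIC;
          }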
  3. 25 Dec 2016, 1 commit
  4. 29 Nov 2016, 1 commit
    • Revert "net/mlx4_en: Avoid unregister_netdev at shutdown flow" · b4353708
      Committed by Tariq Toukan
      This reverts commit 9d769311.
      
      Calling unregister_netdev() in the shutdown flow prevents the stack
      from invoking the netdev's ndo callbacks or accessing its freed
      resources afterwards (see the sketch after this entry).
      
      This fixes crashes like the following:
       Call Trace:
        [<ffffffff81587a6e>] dev_get_phys_port_id+0x1e/0x30
        [<ffffffff815a36ce>] rtnl_fill_ifinfo+0x4be/0xff0
        [<ffffffff815a53f3>] rtmsg_ifinfo_build_skb+0x73/0xe0
        [<ffffffff815a5476>] rtmsg_ifinfo.part.27+0x16/0x50
        [<ffffffff815a54c8>] rtmsg_ifinfo+0x18/0x20
        [<ffffffff8158a6c6>] netdev_state_change+0x46/0x50
        [<ffffffff815a5e78>] linkwatch_do_dev+0x38/0x50
        [<ffffffff815a6165>] __linkwatch_run_queue+0xf5/0x170
        [<ffffffff815a6205>] linkwatch_event+0x25/0x30
        [<ffffffff81099a82>] process_one_work+0x152/0x400
        [<ffffffff8109a325>] worker_thread+0x125/0x4b0
        [<ffffffff8109a200>] ? rescuer_thread+0x350/0x350
        [<ffffffff8109fc6a>] kthread+0xca/0xe0
        [<ffffffff8109fba0>] ? kthread_park+0x60/0x60
        [<ffffffff816a1285>] ret_from_fork+0x25/0x30
      
      Fixes: 9d769311 ("net/mlx4_en: Avoid unregister_netdev at shutdown flow")
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Reported-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Reported-by: Steve Wise <swise@opengridcomputing.com>
      Cc: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b4353708
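
      A minimal sketch of the restored behaviour, with a hypothetical
      drvdata layout and helper name (the real driver's shutdown path is
      more involved):

          #include <linux/netdevice.h>
          #include <linux/pci.h>

          static void mlx4_shutdown_sketch(struct pci_dev *pdev)
          {
                  /* assumption: the netdev is reachable from drvdata */
                  struct net_device *netdev = pci_get_drvdata(pdev);

                  /* Detach from the stack so deferred work such as
                   * linkwatch (see the trace above) cannot call back
                   * into freed resources. */
                  if (netdev)
                          unregister_netdev(netdev);
          }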
  5. 30 Oct 2016, 1 commit
  6. 08 Oct 2016, 1 commit
    • IB/mlx4: Fix possible vl/sl field mismatch in LRH header in QP1 packets · fd10ed8e
      Committed by Jack Morgenstein
      In MLX qp packets, the LRH (built by the driver) has both a VL field
      and an SL field. When building a QP1 packet, the VL field should
      reflect the SLtoVL mapping and not arbitrarily contain zero (as is
      done now). This bug causes credit problems in IB switches at
      high rates of QP1 packets.
      
      The fix is to cache the SL-to-VL mapping in the driver and, when
      sending QP1 packets, look up the VL mapped to the SL provided in the
      send request (see the sketch after this entry).
      
      For FW versions which support generating a port_management_config_change
      event with subtype sl-to-vl-table-change, the driver uses that event
      to update its sl-to-vl mapping cache.  Otherwise, the driver snoops
      incoming SMP mads to update the cache.
      
      There remains the case where the FW is running in secure-host mode
      (so no QP0 packets are delivered to the driver), and the FW does not
      generate the sl2vl mapping change event. To support this case, when
      running in secure-host mode the driver updates its sl2vl mapping cache
      (by querying the FW) whenever it receives either a Port Up event or a
      client-reregister event (where the port is still up, but there may
      have been an OpenSM failover).
      OpenSM modifies the sl2vl mapping before Port Up and Client-reregister
      events occur, so if there is a mapping change the driver's cache will
      be properly updated.
      
      Fixes: 225c7b1f ("IB/mlx4: Add a driver for Mellanox ConnectX InfiniBand adapters")
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      fd10ed8e
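
      A minimal sketch of the cache lookup, with assumed type and field
      names (the driver packs one 4-bit VL per SL into a 64-bit value per
      port; the exact packing here is an assumption):

          #include <linux/atomic.h>
          #include <linux/types.h>

          struct sl2vl_cache {
                  atomic64_t sl2vl;       /* 16 nibbles: VL for SLs 0..15 */
          };

          /* Used while building the LRH of a QP1 (MLX transport) packet,
           * instead of hardcoding VL 0. */
          static u8 sl_to_vl(struct sl2vl_cache *c, u8 sl)
          {
                  u64 tbl = (u64)atomic64_read(&c->sl2vl);

                  return (tbl >> (4 * (sl & 0xf))) & 0xf;
          }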
  7. 24 Sep 2016, 1 commit
  8. 04 Aug 2016, 3 commits
    • IB/mlx4: Add diagnostic hardware counters · 3f85f2aa
      Committed by Mark Bloch
      Expose IB diagnostic hardware counters. The counters count IB events
      and are applicable to both IB and RoCE (a read example follows this
      entry).
      
      The counters can be divided into two groups, per device and per port.
      Device counters are always exposed.
      Port counters are exposed only if the firmware supports per port counters.
      
      rq_num_dup and sq_num_to are exposed only if the firmware supports
      them; when it does, they are exposed both per device and per port.
      rq_num_udsdprd and num_cqovf are device-only counters.
      
      rq - denotes responder.
      sq - denotes requester.
      
      |-----------------------|---------------------------------------|
      |	Name		|	Description			|
      |-----------------------|---------------------------------------|
      |rq_num_lle		| Number of local length errors		|
      |-----------------------|---------------------------------------|
      |sq_num_lle		| Number of local length errors		|
      |-----------------------|---------------------------------------|
      |rq_num_lqpoe		| Number of local QP operation errors	|
      |-----------------------|---------------------------------------|
      |sq_num_lqpoe		| Number of local QP operation errors	|
      |-----------------------|---------------------------------------|
      |rq_num_lpe		| Number of local protection errors	|
      |-----------------------|---------------------------------------|
      |sq_num_lpe		| Number of local protection errors	|
      |-----------------------|---------------------------------------|
      |rq_num_wrfe		| Number of CQEs with error		|
      |-----------------------|---------------------------------------|
      |sq_num_wrfe		| Number of CQEs with error		|
      |-----------------------|---------------------------------------|
      |sq_num_mwbe		| Number of Memory Window bind errors	|
      |-----------------------|---------------------------------------|
      |sq_num_bre		| Number of bad response errors		|
      |-----------------------|---------------------------------------|
      |sq_num_rire		| Number of Remote Invalid request	|
      |			| errors				|
      |-----------------------|---------------------------------------|
      |rq_num_rire		| Number of Remote Invalid request	|
      |			| errors				|
      |-----------------------|---------------------------------------|
      |sq_num_rae		| Number of remote access errors	|
      |-----------------------|---------------------------------------|
      |rq_num_rae		| Number of remote access errors	|
      |-----------------------|---------------------------------------|
      |sq_num_roe		| Number of remote operation errors	|
      |-----------------------|---------------------------------------|
      |sq_num_tree		| Number of transport retries exceeded	|
      |			| errors				|
      |-----------------------|---------------------------------------|
      |sq_num_rree		| Number of RNR NAK retries exceeded	|
      |			| errors				|
      |-----------------------|---------------------------------------|
      |rq_num_rnr		| Number of RNR NAKs sent		|
      |-----------------------|---------------------------------------|
      |sq_num_rnr		| Number of RNR NAKs received		|
      |-----------------------|---------------------------------------|
      |rq_num_oos		| Number of Out of Sequence requests	|
      |			| received				|
      |-----------------------|---------------------------------------|
      |sq_num_oos		| Number of Out of Sequence NAKs	|
      |			| received				|
      |-----------------------|---------------------------------------|
      |rq_num_udsdprd		| Number of UD packets silently		|
      |			| discarded on the Receive Queue due to	|
      |			| lack of receive descriptor		|
      |-----------------------|---------------------------------------|
      |rq_num_dup		| Number of duplicate requests received	|
      |-----------------------|---------------------------------------|
      |sq_num_to		| Number of timeouts received		|
      |-----------------------|---------------------------------------|
      |num_cqovf		| Number of CQ overflows		|
      |-----------------------|---------------------------------------|
      Signed-off-by: Mark Bloch <markb@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      3f85f2aa
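
      A sketch of reading one of these counters from userspace; the
      counters are exposed through the RDMA hw_counters sysfs interface,
      and the device name, port number and counter file below are
      assumptions about a particular system:

          #include <stdio.h>

          int main(void)
          {
                  const char *path = "/sys/class/infiniband/mlx4_0"
                                     "/ports/1/hw_counters/rq_num_lle";
                  unsigned long long val;
                  FILE *f = fopen(path, "r");

                  if (!f) {
                          perror(path);
                          return 1;
                  }
                  if (fscanf(f, "%llu", &val) == 1)
                          printf("rq_num_lle = %llu\n", val);
                  fclose(f);
                  return 0;
          }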
    • net/mlx4: Query performance and diagnostics counters · bfaf3168
      Committed by Mark Bloch
      Add a function to query diagnostics counters from the firmware.
      Signed-off-by: Mark Bloch <markb@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      bfaf3168
    • net/mlx4: Add diagnostic counters capability bit · c7c122ed
      Committed by Mark Bloch
      Add a bit that indicates if the firmware supports per port
      diagnostic counters.
      Signed-off-by: Mark Bloch <markb@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      c7c122ed
  9. 24 Jun 2016, 1 commit
  10. 23 Jun 2016, 1 commit
  11. 06 May 2016, 1 commit
    • net/mlx4: Avoid wrong virtual mappings · 73898db0
      Committed by Haggai Abramovsky
      The dma_alloc_coherent() function returns a virtual address which can
      be used for coherent access to the underlying memory.  On some
      architectures, like arm64, undefined behavior results if this memory is
      also accessed via virtual mappings that are not coherent.  Because of
      their undefined nature, operations like virt_to_page() return garbage
      when passed virtual addresses obtained from dma_alloc_coherent().  Any
      subsequent mappings via vmap() of the garbage page values are unusable
      and result in bad things like bus errors (synchronous aborts in ARM64
      speak).
      
      The mlx4 driver contains code that does the equivalent of
      vmap(virt_to_page(dma_alloc_coherent())), which results in an oops
      when the device is opened (see the sketch after this entry).
      
      Prevent the Ethernet driver from running this problematic code by
      forcing it to allocate contiguous memory. The InfiniBand driver first
      tries to allocate contiguous memory, and falls back to fragmented
      memory if that fails.
      Signed-off-by: Haggai Abramovsky <hagaya@mellanox.com>
      Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
      Reported-by: David Daney <david.daney@cavium.com>
      Tested-by: Sinan Kaya <okaya@codeaurora.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      73898db0
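
      An illustration in C of the broken pattern described above (do not
      use this; the function name is local to the sketch):

          #include <linux/dma-mapping.h>
          #include <linux/vmalloc.h>

          static void *broken_mapping(struct device *dev, size_t size,
                                      dma_addr_t *dma)
          {
                  void *buf = dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
                  struct page *page;

                  if (!buf)
                          return NULL;

                  /* Undefined for coherent memory on e.g. arm64: the page
                   * pointer is garbage ... */
                  page = virt_to_page(buf);

                  /* ... so this mapping is unusable and faults later. */
                  return vmap(&page, 1, VM_MAP, PAGE_KERNEL);
          }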
  12. 22 Apr 2016, 1 commit
  13. 01 Mar 2016, 1 commit
    • IB/mlx4: Add support for the don't trap rule · 0e451e88
      Committed by Marina Varshaver
      Add support for receiving multicast/unicast traffic with
      the don't trap rule.
      
      Sniffing these packets requires a flow steering rule of type NORMAL
      at priority 0 with flag IB_FLOW_ATTR_FLAGS_DONT_TRAP set.
      Choosing between multicast and unicast is done via the Ethernet L2
      dest_mac mask and value:
      - If the mask is all zeros, both unicast and multicast are matched.
      - If the mask is non-zero, only a mask with the multicast bit set and
        all other bits clear is supported; the MAC value then selects
        whether the rule matches multicast or unicast.
      
      If the mask has the multicast bit set along with other bits, that
      would request a specific multicast or unicast address, which is not
      supported: a rule receives either all multicast or all unicast.
      
      Only when these limitations are met will the registered QP receive
      the requested traffic type (other QPs can still receive the same
      traffic if registered for it). Otherwise, an error is returned. A
      sketch of building such a rule follows this entry.
      
      Limitations:
      - Rule must be with priority 0.
      - A0 mode is not supported.
      - Sniffer QP cannot appear in any other flow steering rule.
      Signed-off-by: Marina Varshaver <marinav@mellanox.com>
      Reviewed-by: Matan Barak <matanb@mellanox.com>
      Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      0e451e88
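
      A minimal sketch of building such a sniffer rule with the kernel
      verbs structures from include/rdma/ib_verbs.h; the QP comes from
      elsewhere, the port number is an assumption, and error handling is
      omitted:

          #include <rdma/ib_verbs.h>

          struct sniffer_rule {
                  struct ib_flow_attr     attr;
                  struct ib_flow_spec_eth eth;
          };

          static struct ib_flow *sniff_all_multicast(struct ib_qp *qp)
          {
                  struct sniffer_rule r = {
                          .attr = {
                                  .type         = IB_FLOW_ATTR_NORMAL,
                                  .size         = sizeof(r),
                                  .priority     = 0,   /* must be 0 */
                                  .num_of_specs = 1,
                                  .port         = 1,   /* assumption */
                                  .flags = IB_FLOW_ATTR_FLAGS_DONT_TRAP,
                          },
                          .eth = {
                                  .type = IB_FLOW_SPEC_ETH,
                                  .size = sizeof(struct ib_flow_spec_eth),
                                  /* only the L2 multicast bit in the mask;
                                   * its value selects multicast traffic */
                                  .val  = { .dst_mac = { 0x01 } },
                                  .mask = { .dst_mac = { 0x01 } },
                          },
                  };

                  return ib_create_flow(qp, &r.attr, IB_FLOW_DOMAIN_USER);
          }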
  14. 17 Feb 2016, 1 commit
    • net/mlx4_core: Set UAR page size to 4KB regardless of system page size · 85743f1e
      Committed by Huy Nguyen
      Problem description:
      
      The current code sets the UAR page size equal to the system page size.
      The ConnectX-3 and ConnectX-3 Pro HWs require a minimum of 128 UAR
      pages; the mlx4 kernel drivers are not loaded if there are fewer than
      128 UAR pages.
      
      Solution:
      
      Always set the UAR page size to 4KB. This allows more UAR pages if the
      OS has a PAGE_SIZE larger than 4KB. For example, a PowerPC kernel uses
      a 64KB system page size; with a 4MB UAR region there are 4MB/2/64KB =
      32 UARs (half for UAR, half for blueflame), which does not meet the
      minimum requirement of 128 UAR pages. With a 4KB UAR page there are
      4MB/2/4KB = 512 UARs, which does (the arithmetic is worked through in
      the sketch after this entry).
      
      Note that only the code in mlx4_core that deals with firmware knows
      that the UAR page size is 4KB. Code that deals with the UAR page in CQ
      and QP context (mlx4_ib, mlx4_en, and part of mlx4_core) still assumes
      that the UAR page size equals the system page size.
      
      Note that with this implementation, on a kernel with a 64KB system
      page size there are 16 UARs per system page but only one is used; the
      other 15 are ignored because of the above assumption.
      
      Regarding SR-IOV, mlx4_core in the hypervisor sets the UAR page size
      to 4KB, and the mlx4_core code in the virtual OS obtains the UAR page
      size from the firmware.
      
      Regarding backward compatibility under SR-IOV: if the hypervisor has
      this new code, the virtual OS must be updated. If the hypervisor has
      the old code and the virtual OS has this new code, the new code is
      backward compatible with the old code: if the UAR region is big
      enough, the new code in the VF continues to work with a 64KB UAR page
      size (on a PowerPC kernel); if the UAR region does not meet the
      128-UAR requirement, the new code does not load in the VF and prints
      the same error message as the old code does in the hypervisor.
      Signed-off-by: Huy Nguyen <huyn@mellanox.com>
      Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      85743f1e
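
      The arithmetic from the text, worked as a small C program (the 4MB
      region size is the example value from the text, not a fixed
      property):

          #include <stdio.h>

          int main(void)
          {
                  unsigned long region = 4UL << 20;   /* 4MB UAR region */
                  unsigned long pages[] = { 64UL << 10, 4UL << 10 };

                  /* half the region for UARs, half for blueflame */
                  for (int i = 0; i < 2; i++)
                          printf("%5luKB uar page -> %4lu uars\n",
                                 pages[i] >> 10, region / 2 / pages[i]);
                  return 0;
          }

      This prints 32 UARs for a 64KB UAR page and 512 for a 4KB one,
      matching the comparison above against the 128-UAR minimum.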
  15. 20 Jan 2016, 3 commits
  16. 09 Dec 2015, 1 commit
  17. 22 Oct 2015, 1 commit
  18. 15 Oct 2015, 1 commit
    • net/mlx4_core: Replace VF zero mac with random mac in mlx4_core · 2b3ddf27
      Committed by Jack Morgenstein
      By design, when no default MAC addresses are set in the Hypervisor for VFs,
      the VFs are passed zero-macs. When such a MAC is received by the VF, it
      generates a random MAC address and registers that MAC address
      with the Hypervisor.
      
      This random mac generation is currently done in the mlx4_en module.
      There is a problem, though, if the mlx4_ib module is loaded by a VF before
      the mlx4_en module. In this case, for RoCE, mlx4_ib will see the un-replaced
      zero-mac and register that zero-mac as part of QP1 initialization.
      
      Having a zero-mac in the port's MAC table creates problems for a
      Baseboard Management Console. The BMC occasionally sends packets with a
      zero-mac destination MAC. If there is a zero-mac present in the port's
      MAC table, the FW will send such BMC packets to the host driver rather than
      to the wire, and BMC will stop working.
      
      To address this problem, we move the replacement of zero-mac addresses
      with random-mac addresses to procedure mlx4_slave_cap(), which is part of the
      driver startup for VFs, and is before activation of mlx4_ib and mlx4_en.
      As a result, zero-mac addresses will never be registered in the port MAC table
      by the driver.
      
      In addition, when mlx4_en initializes the net device, it needs to set
      the NET_ADDR_RANDOM flag in the netdev structure if the address was
      randomly generated. This is done so that udev on the VM does not create
      a new device name after each VF probe (VM boot and such). To accomplish
      this, we add a per-port flag in mlx4_dev which gets set whenever
      mlx4_core replaces a zero-mac with a randomly-generated mac. This flag
      is examined when mlx4_en initializes the net device (see the sketch
      after this entry).
      
      Fix was suggested by Matan Barak <matanb@mellanox.com>
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2b3ddf27
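
      A minimal sketch of the two halves described above, using real
      helpers from linux/etherdevice.h but assumed surrounding structure
      (the bool stands in for the per-port flag in mlx4_dev):

          #include <linux/etherdevice.h>
          #include <linux/netdevice.h>

          /* mlx4_core side, early in VF startup (before mlx4_ib/mlx4_en
           * attach): never leave a zero MAC around to be registered. */
          static void fixup_vf_mac(u8 *mac, bool *was_random)
          {
                  if (!is_zero_ether_addr(mac))
                          return;
                  eth_random_addr(mac);  /* locally administered, unicast */
                  *was_random = true;
          }

          /* mlx4_en side, at netdev init: tell userspace the address is
           * random so udev keeps the interface name stable. */
          static void mark_random(struct net_device *dev, bool was_random)
          {
                  if (was_random)
                          dev->addr_assign_type = NET_ADDR_RANDOM;
          }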
  19. 31 Aug 2015, 1 commit
    • IB/mlx4: Implement ib_device callbacks · e26be1bf
      Committed by Moni Shoua
      get_netdev: get the net_device on the physical port of the IB transport
      port. In port aggregation mode it is required to return the netdev of
      the active port (see the sketch after this entry).
      
      modify_gid: take note of a change in the RoCE GID cache. Handle this by
      writing to the hardware GID table. It is possible that indexes in the
      cache and hardware tables won't match, so a translation is required
      when modifying a QP or creating an address handle.
      Signed-off-by: Moni Shoua <monis@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      e26be1bf
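
      A sketch of the get_netdev callback shape (the ib_device callback
      signature and dev_hold() are real; the active-port lookup helper is
      hypothetical):

          #include <rdma/ib_verbs.h>

          static struct net_device *
          sketch_get_netdev(struct ib_device *device, u8 port_num)
          {
                  struct net_device *dev;

                  rcu_read_lock();
                  /* hypothetical: resolves port aggregation to the
                   * currently active slave for this IB port */
                  dev = find_active_port_netdev(device, port_num);
                  if (dev)
                          dev_hold(dev);  /* caller is expected to put it */
                  rcu_read_unlock();
                  return dev;
          }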
  20. 28 Jul 2015, 1 commit
  21. 16 Jun 2015, 3 commits
  22. 13 Jun 2015, 1 commit
  23. 31 May 2015, 1 commit
    • net/mlx4: Add EQ pool · c66fa19c
      Committed by Matan Barak
      Previously, mlx4_en allocated EQs and used them exclusively.
      This affected RoCE performance, as event-sensitive applications
      were limited to using only the legacy EQs.
      
      Change that by introducing an EQ pool, managed by mlx4_core.
      EQs are assigned to ports (when there is a limited number of EQs,
      multiple ports can be assigned to the same EQs).
      
      An exception to this rule is the ASYNC EQ, which handles various
      events.
      
      Legacy EQs are completely removed, as all EQs can now be shared.
      
      When a consumer (mlx4_ib/mlx4_en) requests an EQ, it asks for an EQ
      serving a specific port, and the core driver calculates which EQ
      should be assigned to that request (a toy sketch of the assignment
      follows this entry).
      
      Because IRQs are shared between the IB and Ethernet modules, their
      names only include the PCI device BDF address.
      Signed-off-by: Matan Barak <matanb@mellanox.com>
      Signed-off-by: Ido Shamay <idos@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c66fa19c
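
      A toy sketch of the pool idea in plain C, not the mlx4 code: one EQ
      is reserved for async events, and completion EQs are handed out
      round-robin so that ports share EQs when there are fewer EQs than
      requests:

          #include <stdio.h>

          #define NUM_EQS 4               /* example: 1 async + 3 completion */

          static int next_eq;             /* round-robin cursor */

          static int request_eq(int port) /* port kept for illustration */
          {
                  (void)port;
                  return 1 + next_eq++ % (NUM_EQS - 1);  /* EQ 0 is async */
          }

          int main(void)
          {
                  for (int i = 0; i < 6; i++)
                          printf("request %d -> EQ %d\n", i,
                                 request_eq(i & 1));
                  return 0;
          }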
  24. 16 Apr 2015, 2 commits
  25. 03 Apr 2015, 7 commits
  26. 01 Apr 2015, 1 commit