1. 03 Jun, 2014 (1 commit)
  2. 31 May, 2014 (1 commit)
  3. 30 May, 2014 (1 commit)
    • mlx4: Add infrastructure for selecting VFs to enable QP0 via MLX proxy QPs · 99ec41d0
      Authored by Jack Morgenstein
      This commit adds the infrastructure for enabling selected VFs to
      operate SMI (QP0) MADs without restriction.
      
      Additionally, for these enabled VFs, their QP0 proxy and tunnel QPs
      are MLX QPs.  As such, they operate over VL15.  Therefore, they are
      not affected by "credit" problems or changes in the VLArb table (which
      may shut down VL0).
      
      Non-enabled VFs may only create UD proxy QP0 QPs (which are forced by
      the hypervisor to send packets using the q-key it assigns and places
      in the qp-context).  Thus, non-enabled VFs do not pose a security
      risk.  The hypervisor discards any privileged MADs it receives from
      these non-enabled VFs.
      
      By default, all VFs are NOT enabled, and must explicitly be enabled
      by the administrator.
      
      The sysfs interface which operates the VF enablement infrastructure
      is provided in the next commit.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      99ec41d0
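      For illustration, here is a minimal standalone C sketch of the gating
      described above: a per-VF flag, off by default, that the hypervisor
      consults before accepting privileged (QP0/SMI) MADs.  All names and the
      data layout are hypothetical, not the driver's actual code.

      /* Minimal model: per-VF SMI enablement, default off. */
      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_VFS 64

      static bool vf_smi_enabled[MAX_VFS];   /* zero-initialized: all disabled */

      /* Hypervisor-side filter: privileged MADs from non-enabled VFs are dropped. */
      static int handle_vf_mad(int vf, bool privileged_smi_mad)
      {
          if (privileged_smi_mad &&
              (vf < 0 || vf >= MAX_VFS || !vf_smi_enabled[vf])) {
              fprintf(stderr, "dropping privileged MAD from VF %d\n", vf);
              return -1;                     /* discard the MAD */
          }
          return 0;                          /* process the MAD normally */
      }

      int main(void)
      {
          vf_smi_enabled[3] = true;          /* administrator enables VF 3 */
          handle_vf_mad(3, true);            /* accepted */
          handle_vf_mad(7, true);            /* dropped */
          return 0;
      }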
  4. 17 May, 2014 (1 commit)
  5. 09 May, 2014 (1 commit)
  6. 14 Apr, 2014 (1 commit)
  7. 29 Mar, 2014 (1 commit)
  8. 21 Mar, 2014 (2 commits)
    • net/mlx4: Adapt code for N-Port VF · 449fc488
      Authored by Matan Barak
      Adds support for N-Port VFs. This includes:
      1. Adding support in the wrapped FW commands
      	In wrapped commands, we need to verify and convert
      	the slave's port into the real physical port (a sketch of
      	this translation follows this entry).
      	Furthermore, when sending the response back to the slave,
      	the reverse conversion must be made.
      2. Adjusting sqpn for QP1 para-virtualization
      	The slave assumes that sqpn is used for QP1 communication.
      	If the slave is assigned to a port other than the first port,
      	we need to adjust the sqpn so that its QP1 packets are directed
      	to the correct endpoint.
      3. Adjusting gid[5] to modify the port for raw Ethernet
      	In B0 steering, gid[5] contains the port. It needs
      	to be adjusted to the physical port.
      4. Adjusting the number of ports in the query / port caps in the FW commands
      	When a slave queries the hardware, it needs to view only
      	the physical ports it's assigned to.
      5. Adjusting the sched_qp according to the port number
      	The QP port is encoded in the sched_qp, thus in modify_qp we need
      	to encode the correct port in sched_qp.
      Signed-off-by: Matan Barak <matanb@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      449fc488
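      A minimal sketch of the port translation in point 1, assuming a simple
      per-slave mapping table; the helper names and table layout are
      hypothetical, not the driver's:

      #include <stdio.h>

      #define MAX_SLAVES 8

      /* Illustrative mapping: phys_port[slave][slave_port - 1] = physical port
       * (0 means the slave has no such port). */
      static const int phys_port[MAX_SLAVES][2] = {
          [0] = { 1, 2 },   /* the PF sees both physical ports */
          [1] = { 2, 0 },   /* slave 1 is a single-port VF on physical port 2 */
      };

      /* Wrapped-command path: convert the slave's port into the real one. */
      static int slave_to_phys_port(int slave, int slave_port)
      {
          return phys_port[slave][slave_port - 1];
      }

      /* Response path: the reverse conversion, back into the slave's view. */
      static int phys_to_slave_port(int slave, int port)
      {
          for (int p = 0; p < 2; p++)
              if (phys_port[slave][p] == port)
                  return p + 1;
          return -1;        /* slave is not assigned to this physical port */
      }

      int main(void)
      {
          printf("slave 1, port 1 -> phys port %d\n", slave_to_phys_port(1, 1));
          printf("phys port 2 -> slave 1, port %d\n", phys_to_slave_port(1, 2));
          return 0;
      }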
    • net/mlx4: Add utils for N-Port VFs · f74462ac
      Authored by Matan Barak
      This patch adds the following utils:
      1. Convert slave_id -> VF
      2. Get the active ports by slave_id
      3. Convert slave's port to real port
      4. Get the slave's port from real port
      5. Get all slaves that use the i'th real port
      6. Get all slaves that use the i'th real port exclusively
         (utils 5 and 6 are sketched after this entry)
      Signed-off-by: Matan Barak <matanb@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f74462ac
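      Utils 5 and 6 lend themselves to a short standalone sketch; the bitmap
      representation below (one active-port bit per slave) is an assumption
      for illustration, not the driver's layout:

      #include <stdint.h>
      #include <stdio.h>

      #define MAX_SLAVES 64

      /* One bit per physical port (bit 0 = port 1), per slave. */
      static uint8_t active_ports[MAX_SLAVES];

      /* Util 5: bitmask of all slaves that use physical port i. */
      static uint64_t slaves_on_port(int i)
      {
          uint64_t mask = 0;

          for (int s = 0; s < MAX_SLAVES; s++)
              if (active_ports[s] & (1u << (i - 1)))
                  mask |= 1ULL << s;
          return mask;
      }

      /* Util 6: only the slaves whose sole active port is i. */
      static uint64_t slaves_on_port_exclusively(int i)
      {
          uint64_t mask = 0;

          for (int s = 0; s < MAX_SLAVES; s++)
              if (active_ports[s] == (1u << (i - 1)))
                  mask |= 1ULL << s;
          return mask;
      }

      int main(void)
      {
          active_ports[0] = 0x3;   /* PF: ports 1 and 2 */
          active_ports[1] = 0x2;   /* VF 1: port 2 only */
          printf("port 2 users: %#llx, exclusive users: %#llx\n",
                 (unsigned long long)slaves_on_port(2),
                 (unsigned long long)slaves_on_port_exclusively(2));
          return 0;
      }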
  9. 13 Mar, 2014 (2 commits)
  10. 26 Feb, 2014 (1 commit)
  11. 15 Jan, 2014 (1 commit)
  12. 10 Dec, 2013 (1 commit)
    • mlx4_core: Roll back round robin bitmap allocation commit for CQs, SRQs, and MPTs · 7c6d74d2
      Authored by Jack Morgenstein
      Commit f4ec9e95 ("mlx4_core: Change bitmap allocator to work in
      round-robin fashion") introduced round-robin allocation for all
      resources that are allocated via the bitmap allocator.
      
      Round-robin allocation is desirable for MCGs, counters, PDs, UARs, and
      XRCDs.  These are simply numbers, with no involvement of ICM memory
      mapping.
      
      Round-robin is required for QPs, since we had a problem with immediate
      reuse of a 24-bit QP number (commit f4ec9e95).
      
      However, for other resources which use the bitmap allocator and involve
      mapping ICM memory -- MPTs, CQs, SRQs -- round-robin is not desirable.
      
      What happens in these cases is the following:
      
      ICM memory is allocated and mapped in chunks of 256K.
      
      Since the resource allocation index goes up monotonically, the allocator
      will eventually require mapping a new chunk. Now, chunks are also unmapped
      when their reference count goes back to zero.  Thus, if a single app is
      running and starts/exits frequently, we will have the following situation:
      
      When the app starts, a new chunk must be allocated and mapped.
      
      When the app exits, the chunk reference count goes back to zero, and the
      chunk is unmapped and freed. Therefore, the app must pay the cost of allocation
      and mapping of ICM memory each time it runs (although the price is paid only when
      allocating the initial entry in the new chunk).
      
      For apps which allocate MPTs/SRQs/CQs and which operate as described above,
      this presented a performance problem.
      
      We therefore roll back the round-robin allocator modification for MPTs,
      CQs, and SRQs (a standalone sketch contrasting the two policies follows
      this entry).
      Reported-by: Matthew Finlay <matt@mellanox.com>
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7c6d74d2
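      A standalone sketch contrasting the two policies (not the driver's bitmap
      code): with first-fit allocation the start/exit workload keeps reusing
      low indices, so only the first ICM chunk ever needs mapping, while
      round-robin keeps climbing and eventually forces a new chunk to be
      mapped.

      #include <stdbool.h>
      #include <stdio.h>

      #define NBITS 1024

      static bool bitmap[NBITS];
      static int last;                   /* round-robin cursor */

      static int alloc_first_fit(void)
      {
          for (int i = 0; i < NBITS; i++)
              if (!bitmap[i]) { bitmap[i] = true; return i; }
          return -1;
      }

      static int alloc_round_robin(void)
      {
          for (int n = 0; n < NBITS; n++) {
              int i = (last + n) % NBITS;
              if (!bitmap[i]) { bitmap[i] = true; last = i + 1; return i; }
          }
          return -1;
      }

      int main(void)
      {
          for (int run = 0; run < 3; run++) {
              int i = alloc_round_robin();   /* index climbs: 0, 1, 2, ... */
              printf("round-robin run %d -> %d\n", run, i);
              bitmap[i] = false;             /* app exits, entry freed */
          }
          for (int run = 0; run < 3; run++) {
              int i = alloc_first_fit();     /* always reuses index 0 */
              printf("first-fit   run %d -> %d\n", run, i);
              bitmap[i] = false;
          }
          return 0;
      }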
  13. 05 Nov, 2013 (4 commits)
    • net/mlx4_core: Implement resource quota enforcement · 146f3ef4
      Authored by Jack Morgenstein
      Implements the resource-quota grant decision made when resources are
      requested, for the following resource types: QPs, CQs, SRQs, MPTs, MTTs,
      VLANs, MACs, and counters.
      
      When granting a resource, the quota system increases the allocated-count
      for that slave.
      
      When the slave later frees the resource, its allocated-count is reduced.
      
      A spinlock is used to protect the integrity of each resource's free-pool
      counter; one slave may be in the process of being granted a resource
      while another slave has crashed, initiating cleanup of that slave's
      resource quotas.  A sketch of this bookkeeping follows this entry.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      146f3ef4
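      A minimal sketch of this grant/free bookkeeping; the field names are
      illustrative, and a pthread mutex stands in for the kernel spinlock:

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_SLAVES 64

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static int free_pool;                  /* shared first-come pool */
      static int quota[MAX_SLAVES];          /* hard max per slave */
      static int guaranteed[MAX_SLAVES];     /* reserved minimum per slave */
      static int allocated[MAX_SLAVES];      /* current usage per slave */

      static bool grant_resource(int slave)
      {
          bool ok = false;

          pthread_mutex_lock(&lock);
          if (allocated[slave] < quota[slave]) {
              if (allocated[slave] < guaranteed[slave])
                  ok = true;                 /* still inside the guarantee */
              else if (free_pool > 0) {
                  free_pool--;               /* dip into the shared pool */
                  ok = true;
              }
              if (ok)
                  allocated[slave]++;        /* raise the allocated-count */
          }
          pthread_mutex_unlock(&lock);
          return ok;
      }

      static void free_resource(int slave)
      {
          pthread_mutex_lock(&lock);
          if (allocated[slave]-- > guaranteed[slave])
              free_pool++;                   /* this one came from the pool */
          pthread_mutex_unlock(&lock);
      }

      int main(void)
      {
          quota[1] = 4; guaranteed[1] = 2; free_pool = 2;
          for (int i = 0; i < 5; i++)        /* the 5th request is refused */
              printf("grant %d: %s\n", i, grant_resource(1) ? "ok" : "refused");
          free_resource(1);
          return 0;
      }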
    • mlx4: Structures and init/teardown for VF resource quotas · 5a0d0a61
      Authored by Jack Morgenstein
      This is step #1 for implementing SRIOV resource quotas for VFs.
      
      Quotas are implemented per resource type for VFs and the PF, to prevent
      any entity from simply grabbing all the resources for itself and leaving
      the other entities unable to obtain such resources.
      
      Resources which are allocated using quotas: QPs, CQs, SRQs, MPTs, MTTs,
      MACs, VLANs, and counters.
      
      The quota system works as follows:
      Each entity (VF or PF) is given a max number of a given resource (its quota),
      and a guaranteed minimum number for each resource (starvation prevention).
      
      For QPs, CQs, SRQs, MPTs and MTTs:
      50% of the available quantity of the resource is divided equally among
      the PF and all the active VFs (as given by the mlx4_core module
      parameter "num_vfs"). This 50% represents the "guaranteed minimum" pool.
      The other 50% is the "free pool", allocated on a first-come-first-serve basis.
      For each VF/PF, resources are first allocated from its "guaranteed-minimum"
      pool. When that pool is exhausted, the driver attempts to allocate from
      the resource "free-pool".
      
      The quota (i.e., the max) for the VFs and the PF is:
        the free-pool amount (50% of the real max) + the guaranteed minimum
      (a worked example of this split follows this entry).
      
      For MACs:
        Guarantee 2 MACs per VF/PF per port. As a result, since we have only
        128 MACs per port, reduce the allowable number of VFs from 64 to 63.
        Any remaining MACs are put into a free pool.
      
      For VLANs:
        For the PF, the per-port quota is 128 and guarantee is 64
           (to allow the PF to register at least a VLAN per VF in VST mode).
        For the VFs, the per-port quota is 64 and the guarantee is 0.
            We assume that VGT VFs are trusted not to abuse the VLAN resource.
      
      For Counters:
        For all functions (PF and VFs), the quota is 128 and the guarantee is 0.
      
      In this patch, we define the needed structures, which are added to the
      resource-tracker struct.  In addition, we do initialization
      for the resource quota, and adjust the query_device response to use quotas
      rather than resource maxima.
      
      As part of the implementation, we introduce a new field in
      mlx4_dev: quotas.  This field holds the resource quotas used
      to report maxima to the upper layers (ib_core, via query_device).
      
      The HCA maxima of these values are passed to the VFs (via
      QUERY_HCA) so that they may continue to use these in handling
      QPs, CQs, SRQs and MPTs.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5a0d0a61
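      To make the arithmetic concrete, a small sketch with assumed numbers
      (65536 QPs and num_vfs = 7, both chosen only for illustration):

      #include <stdio.h>

      int main(void)
      {
          int max_qps = 65536, num_vfs = 7;
          int entities = num_vfs + 1;                 /* all VFs plus the PF */
          int guaranteed = (max_qps / 2) / entities;  /* 4096 reserved each */
          int free_pool = max_qps / 2;                /* 32768, first-come */
          int quota = free_pool + guaranteed;         /* 36864 max per entity */

          printf("guaranteed=%d free_pool=%d quota=%d\n",
                 guaranteed, free_pool, quota);
          return 0;
      }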
    • net/mlx4_core: Don't fail reg/unreg vlan for older guests · 2c957ff2
      Authored by Jack Morgenstein
      In upstream kernels under SRIOV, the VLAN register/unregister calls
      were NOPs (doing nothing and returning OK). We detect these old
      calls from guests (via the comm channel), since previously the
      port number in mlx4_register_vlan was passed (improperly) in the
      out_param. This has been corrected so that the port number is now
      passed in bits 8..15 of the in_modifier field.
      
      For old calls, these bits will be zero, so if the passed port
      number is zero, we can still look at the out_param field to see
      if it contains a valid port number. If yes, the VM is running
      an old driver (a sketch of this detection follows this entry).
      
      Since for old drivers the register/unregister_vlan wrappers were
      NOPs, we continue this policy. The reason is that upstream had an
      additional bug in the Ethernet driver running on guests, where
      procedure mlx4_en_vlan_rx_kill_vid() had the following code:
      if (!mlx4_find_cached_vlan(mdev->dev, priv->port, vid, &idx))
              mlx4_unregister_vlan(mdev->dev, priv->port, idx);
      else
              en_err(priv, "could not find vid %d in cache\n", vid);
      
      On a VM, mlx4_find_cached_vlan() will always fail, since the
      VLAN cache is located on the hypervisor; on guests it is empty.
      
      Therefore, if we allow upstream guests to register VLANs, we will
      have VLAN leakage, since the unregister will never be performed.
      Leaving VLAN reg/unreg as a NOP for old guest drivers is not a
      feature regression, since in upstream the register/unregister
      VLAN wrapper is a NOP.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2c957ff2
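      A sketch of the old-call detection described above; exactly which bits
      of out_param hold the port is an assumption here, as are the names:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* New drivers pass the port in bits 8..15 of in_modifier; old drivers
       * left those bits zero and (improperly) put the port in out_param. */
      static bool old_guest_call(uint32_t in_modifier, uint64_t out_param,
                                 int *port)
      {
          *port = (in_modifier >> 8) & 0xff;
          if (*port)
              return false;                  /* new-style call */
          *port = out_param & 0xff;          /* old drivers: look here */
          return *port != 0;                 /* valid port => old driver */
      }

      int main(void)
      {
          int port;

          printf("old? %d (port %d)\n", old_guest_call(0, 1, &port), port);
          printf("old? %d (port %d)\n", old_guest_call(1 << 8, 0, &port), port);
          return 0;
      }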
    • net/mlx4_en: Use vlan id instead of vlan index for unregistration · 2009d005
      Authored by Jack Morgenstein
      Use of vlan_index created problems when unregistering VLANs on guests.
      
      In addition, tools delete VLANs by tag, not by index; let's follow that
      convention.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2009d005
  14. 29 Jul, 2013 (1 commit)
  15. 02 Jul, 2013 (2 commits)
  16. 14 Jun, 2013 (1 commit)
  17. 27 Apr, 2013 (2 commits)
  18. 25 Apr, 2013 (2 commits)
  19. 12 Apr, 2013 (1 commit)
  20. 08 Mar, 2013 (1 commit)
  21. 26 Feb, 2013 (2 commits)
  22. 22 Feb, 2013 (1 commit)
  23. 08 Feb, 2013 (1 commit)
  24. 01 Feb, 2013 (2 commits)
  25. 20 Dec, 2012 (1 commit)
  26. 01 Oct, 2012 (5 commits)
    • mlx4_core: Clean up enabling of SENSE_PORT for older (ConnectX-1/-2) HCAs · ca3e57a5
      Authored by Roland Dreier
      Instead of having a hard-coded "PCI device ID != 0x1003" (which
      obviously breaks as newer devices with ID != 0x1003 become available),
      instead let's set a flag in our PCI device table for the older devices
      where we're supposed to force using SENSE_PORT.  This also avoids
      enabling SENSE_PORT for virtual functions by mistake.
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      ca3e57a5
    • mlx4_core: Stash PCI ID driver_data in mlx4_priv structure · 839f1243
      Authored by Roland Dreier
      That way we can check flags later on, when we've finished with the
      pci_device_id structure.  Also convert the "is VF" flag to an enum:
      "Never do in the preprocessor what can be done in C."
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      839f1243
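      The pattern these two commits describe (per-device flags in the PCI ID
      table, stashed at probe time) can be modeled in a few lines of plain C.
      The flag and structure names below are illustrative, and the device IDs
      are examples only:

      #include <stdio.h>

      /* Modeled after struct pci_device_id: per-entry driver_data flags. */
      enum {
          DEV_FORCE_SENSE_PORT = 1 << 0,   /* older ConnectX-1/-2 devices */
          DEV_IS_VF            = 1 << 1,   /* "is VF" as an enum, not a #define */
      };

      struct pci_id {
          unsigned vendor, device;
          unsigned long driver_data;       /* the flags live here */
      };

      static const struct pci_id id_table[] = {
          { 0x15b3, 0x6340, DEV_FORCE_SENSE_PORT },  /* example older HCA */
          { 0x15b3, 0x1004, DEV_IS_VF },             /* example VF device */
          { 0 }
      };

      struct priv {
          unsigned long pci_dev_data;      /* stashed copy of driver_data */
      };

      /* Stash the flags so they can be tested long after probe time,
       * when the pci_device_id entry is no longer at hand. */
      static void probe(struct priv *priv, const struct pci_id *id)
      {
          priv->pci_dev_data = id->driver_data;
      }

      int main(void)
      {
          struct priv p;

          probe(&p, &id_table[0]);
          if (p.pci_dev_data & DEV_FORCE_SENSE_PORT)
              printf("force SENSE_PORT on this device\n");
          return 0;
      }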
    • mlx4_core: Fix crash on uninitialized priv->cmd.slave_sem · f3d4c89e
      Authored by Roland Dreier
      On an SR-IOV master device, __mlx4_init_one() calls mlx4_init_hca()
      before mlx4_multi_func_init().  However, for unlucky configurations,
      mlx4_init_hca() might call mlx4_SENSE_PORT() (via mlx4_dev_cap()), and
      that calls mlx4_cmd_imm() with MLX4_CMD_WRAPPED set.
      
      But on a multifunction device with MLX4_CMD_WRAPPED set, __mlx4_cmd()
      calls into mlx4_slave_cmd(), and that immediately tries to do
      
      	down(&priv->cmd.slave_sem);
      
      but priv->cmd.slave_sem isn't initialized until mlx4_multi_func_init()
      (which we haven't called yet).  The next thing it tries to do is access
      priv->mfunc.vhcr, but that hasn't been allocated yet.
      
      Fix this by moving the initialization of slave_sem and vhcr up into
      mlx4_cmd_init(). Also, since slave_sem is really just being used as a
      mutex, convert it into a slave_cmd_mutex.
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      f3d4c89e
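      The shape of the fix, condensed into a standalone sketch (a pthread
      mutex stands in for the kernel mutex; the structure is simplified):

      #include <pthread.h>
      #include <stddef.h>

      struct cmd_state {
          pthread_mutex_t slave_cmd_mutex;   /* was the semaphore slave_sem */
          void *vhcr;                        /* virtual HCA command region */
      };

      /* Before the fix these fields were set up in multi_func_init(), which
       * runs after init_hca(); a wrapped command issued from init_hca()
       * (e.g. SENSE_PORT on unlucky configurations) then locked an
       * uninitialized semaphore.  Initializing them in cmd_init(), which runs
       * first, makes them valid for the whole life of the command layer. */
      static void cmd_init(struct cmd_state *cmd)
      {
          pthread_mutex_init(&cmd->slave_cmd_mutex, NULL);
          cmd->vhcr = NULL;                  /* allocated later; pointer is sane */
      }

      int main(void)
      {
          struct cmd_state cmd;

          cmd_init(&cmd);                    /* must precede any wrapped command */
          pthread_mutex_lock(&cmd.slave_cmd_mutex);
          pthread_mutex_unlock(&cmd.slave_cmd_mutex);
          return 0;
      }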
    • mlx4: Paravirtualize Node Guids for slaves · afa8fd1d
      Authored by Jack Morgenstein
      This is necessary in order to support > 1 VF/PF in a VM for software
      that uses the node guid as a discriminator, such as librdmacm.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      afa8fd1d
    • IB/mlx4: Miscellaneous adjustments for SR-IOV IB support · 992e8e6e
      Authored by Jack Morgenstein
      1. Allow only master to change node description.
      2. Prevent AH leakage in send MADs.
      3. Take device part number from PCI structure, so that guests see the
         VF part number (and not the PF part number).
      4. Place the device revision ID into caps structure at startup.
      5. SET_PORT in update_gids_task needs to go through wrapper on master.
      6. In mlx4_ib_event(), PORT_MGMT_EVENT needs be handled in a work
         queue on the master, since it propagates events to slaves using
         GEN_EQE.
      7. Do not support FMR on slaves.
      8. Add a spinlock to slave_event(), since it is called both in interrupt
         context and in process context (due to 6 above, and also if
         smp_snoop is used); a sketch follows this entry.  This fix was found
         and implemented by Saeed Mahameed <saeedm@mellanox.com>.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      992e8e6e
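      Point 8 is the classic "same lock from interrupt and process context"
      pattern, which requires the irqsave variant; a kernel-style fragment
      (simplified and illustrative, not the driver's exact code) looks like
      this:

      #include <linux/spinlock.h>

      static DEFINE_SPINLOCK(slave_event_lock);

      /* Called from both interrupt context (EQ handling) and process context
       * (the PORT_MGMT_EVENT work queue on the master), so plain spin_lock()
       * would risk deadlock; irqsave disables local interrupts while held. */
      static void slave_event(int slave)
      {
          unsigned long flags;

          spin_lock_irqsave(&slave_event_lock, flags);
          /* ... copy the event into the slave's event queue ... */
          spin_unlock_irqrestore(&slave_event_lock, flags);
      }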