1. 25 Jul 2014, 1 commit
  2. 23 Jul 2014, 4 commits
  3. 17 Jul 2014, 6 commits
  4. 15 Jul 2014, 1 commit
    • mlx4: mark napi id for gro_skb · 32b333fe
      Authored by Jason Wang
      The napi id was not marked for gro_skb, so the rx busy loop does not
      work correctly: the stack never tries the low-latency receive method
      because the socket's napi id stays zero. Fix this by marking the napi
      id for gro_skb.
      
      The transaction rate of a 1-byte netperf TCP_RR test increases by
      about 50% (from 20531.68 to 30610.88 transactions/s).
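      
      As a hedged sketch of what such a fix typically looks like in a
      driver's GRO receive path (the function and variable names below are
      illustrative assumptions, not a quote of the actual mlx4 patch), the
      napi id is copied onto the aggregated skb before it is passed up, so
      sockets record a non-zero napi id and busy polling can locate the
      right receive queue:
      
        #include <linux/netdevice.h>
        #include <net/busy_poll.h>
        
        /* Illustrative only: mark the GRO skb with this queue's napi id
         * before handing it to the GRO layer. */
        static void example_rx_gro(struct napi_struct *napi,
                                   struct sk_buff *gro_skb)
        {
                skb_mark_napi_id(gro_skb, napi); /* was missing for gro_skb */
                napi_gro_receive(napi, gro_skb);
        }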
      
      Cc: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 10 Jul 2014, 1 commit
  6. 09 Jul 2014, 7 commits
  7. 08 Jul 2014, 1 commit
  8. 03 Jul 2014, 2 commits
  9. 23 Jun 2014, 1 commit
  10. 12 Jun 2014, 1 commit
    • net/mlx4_en: Use affinity hint · 9e311e77
      Authored by Yuval Atias
      The “affinity hint” mechanism lets a driver indicate a preferred CPU
      mask for its IRQs to the user-space daemon, irqbalance, which can
      then balance those IRQs across the CPUs in the mask.
      
      We want the HCA to map the IRQs it uses to NUMA cores close to it. To
      accomplish this, use cpumask_set_cpu_local_first(), which sets the
      affinity hint according to the following policy: IRQs are first
      mapped to the “close” NUMA cores; once these are exhausted, the
      remaining IRQs are mapped to “far” NUMA cores.
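      
      A hedged sketch of this pattern follows (the surrounding function and
      its ring-index and IRQ-number parameters are illustrative
      assumptions, not the actual mlx4 patch): the mask is built with the
      helper named above and then published as the affinity hint that
      irqbalance reads:
      
        #include <linux/cpumask.h>
        #include <linux/interrupt.h>
        
        /* Illustrative only: build a mask that prefers CPUs on the
         * device's NUMA node for the ring_idx-th IRQ, then expose it
         * through the affinity-hint interface.  The hint stores a pointer
         * to 'mask', so the caller must keep the mask allocated (e.g.
         * embedded in its ring structure) while the hint is set. */
        static void example_set_irq_affinity_hint(unsigned int irq,
                                                  int ring_idx,
                                                  int numa_node,
                                                  cpumask_t *mask)
        {
                cpumask_clear(mask);
                if (cpumask_set_cpu_local_first(ring_idx, numa_node, mask))
                        return;         /* no hint on failure */
        
                irq_set_affinity_hint(irq, mask);
        }
      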
      Signed-off-by: Yuval Atias <yuvala@mellanox.com>
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 11 Jun 2014, 2 commits
    • net/mlx4_core: Keep only one driver entry release mlx4_priv · da1de8df
      Authored by Wei Yang
      Following commit befdf897 "net/mlx4_core: Preserve pci_dev_data after
      __mlx4_remove_one()", there are two mlx4 pci callbacks which will
      attempt to release the mlx4_priv object -- .shutdown and .remove.
      
      This leads to a use-after-free access to the already freed mlx4_priv
      instance and triggers a "Kernel access of bad area" crash when both
      .shutdown and .remove are called.
      
      During reboot or kexec, .shutdown is called: the VFs probed to the
      host go through shutdown first, and then the PF. Later, the PF will
      trigger the VFs' .remove, since the VFs still have a driver attached.
      
      Fix that by keeping only one driver entry which releases mlx4_priv.
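      
      A hedged sketch of the resulting pattern (driver and structure names
      here are hypothetical, not the mlx4 code): only .remove is allowed to
      free the private data, while .shutdown merely quiesces the device, so
      a .shutdown on the VFs followed by the PF-triggered .remove cannot
      free the same object twice:
      
        #include <linux/pci.h>
        #include <linux/slab.h>
        
        struct example_priv {
                bool hw_stopped;
        };
        
        static void example_stop_hw(struct example_priv *priv)
        {
                priv->hw_stopped = true;        /* placeholder teardown */
        }
        
        /* .shutdown: quiesce only; leave drvdata allocated so a later
         * .remove still sees a valid object. */
        static void example_shutdown(struct pci_dev *pdev)
        {
                struct example_priv *priv = pci_get_drvdata(pdev);
        
                if (priv)
                        example_stop_hw(priv);
        }
        
        /* .remove: the single release point for the private data. */
        static void example_remove(struct pci_dev *pdev)
        {
                struct example_priv *priv = pci_get_drvdata(pdev);
        
                if (!priv)
                        return;
        
                example_stop_hw(priv);
                pci_set_drvdata(pdev, NULL);
                kfree(priv);
        }
        
        static struct pci_driver example_driver = {
                .name     = "example",
                .remove   = example_remove,
                .shutdown = example_shutdown,
        };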
      
      Fixes: befdf897 ('net/mlx4_core: Preserve pci_dev_data after __mlx4_remove_one()')
      CC: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx4_core: Fix SRIOV free-pool management when enforcing resource quotas · 95646373
      Authored by Jack Morgenstein
      The Hypervisor driver tracks free slots and reserved slots at the global level
      and tracks allocated slots and guaranteed slots per VF.
      
      Guaranteed slots are treated as reserved by the driver, so the total
      reserved slots is the sum of all guaranteed slots over all the VFs.
      
      As VFs allocate resources, free (global) is decremented and allocated (per VF)
      is incremented for those resources. However, reserved (global) is never changed.
      
      This means that effectively, when a VF allocates a resource from its
      guaranteed pool, it is actually reducing that resource's free pool (since
      the global reserved count was not also reduced).
      
      The fix for this problem is the following: for each resource, as long
      as a VF's allocated count is <= its guaranteed number, an allocation
      for that VF also reduces the global reserved count by the same amount.
      
      When the global reserved count reaches zero, the remaining global free count
      is still accessible as the free pool for that resource.
      
      When the VF frees resources, the reverse happens: the global reserved
      count for a resource is incremented only once the VF's allocated
      number falls below its guaranteed number.
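      
      A hedged sketch of this accounting rule (structure and field names
      are illustrative, not the driver's actual resource-tracker code):
      allocations inside a VF's guarantee also shrink the global reserved
      count, and frees only grow it back once the VF drops below its
      guarantee again:
      
        #define EXAMPLE_MAX_VFS 16
        
        struct example_pool {
                int free;                         /* global free slots */
                int reserved;                     /* global: unused guarantees */
                int allocated[EXAMPLE_MAX_VFS];   /* per-VF allocated */
                int guaranteed[EXAMPLE_MAX_VFS];  /* per-VF guaranteed */
        };
        
        static int example_alloc(struct example_pool *p, int vf, int count)
        {
                int left = p->guaranteed[vf] - p->allocated[vf];
                int from_rsvd;
        
                if (left < 0)
                        left = 0;
                /* Beyond its guarantee, a VF competes for the shared pool,
                 * i.e. what remains of free after all reservations. */
                if (count > left + (p->free - p->reserved))
                        return -1;
        
                /* The part covered by the guarantee also comes out of the
                 * global reserved count, so it is not counted twice. */
                from_rsvd = count < left ? count : left;
        
                p->free -= count;
                p->reserved -= from_rsvd;
                p->allocated[vf] += count;
                return 0;
        }
        
        static void example_free(struct example_pool *p, int vf, int count)
        {
                int over = p->allocated[vf] - p->guaranteed[vf];
                int to_rsvd;
        
                if (over < 0)
                        over = 0;
                /* reserved grows back only for the part that brings the VF
                 * back under its guaranteed number. */
                to_rsvd = count > over ? count - over : 0;
        
                p->free += count;
                p->reserved += to_rsvd;
                p->allocated[vf] -= count;
        }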
      
      This fix was developed by Rick Kready <kready@us.ibm.com>.
      Reported-by: Rick Kready <kready@us.ibm.com>
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 07 Jun 2014, 1 commit
  13. 05 Jun 2014, 1 commit
  14. 03 Jun 2014, 2 commits
  15. 02 Jun 2014, 2 commits
  16. 31 May 2014, 2 commits
  17. 30 May 2014, 5 commits