1. 24 Jan 2018 (2 commits)
  2. 13 Jan 2018 (1 commit)
  3. 10 Jan 2018 (3 commits)
  4. 06 Jan 2018 (1 commit)
  5. 10 Oct 2017 (2 commits)
    • ixgbe: Update adaptive ITR algorithm · b4ded832
      Authored by Alexander Duyck
      This change updates the adaptive ITR algorithm to better match the
      needs of the network.  Specifically, the ITR algorithm now tries to
      avoid either starving a socket buffer of memory in the Tx case, or
      overrunning an Rx socket buffer on receive.

      In addition, a side effect of the calculations used is that we should
      function better with new features such as XDP, which can handle small
      packets at high rates without locking us into NAPI polling mode.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      b4ded832
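The rate-to-interval idea behind adaptive ITR can be sketched roughly as follows. This is a minimal illustration, not the driver's actual algorithm; the function name, thresholds, and interval values are all invented for the example.

```c
#include <stdint.h>

/* Hypothetical sketch of rate-based ITR selection: heavier traffic
 * per poll earns a shorter interrupt interval (in usecs), so Rx
 * buffers drain before they overrun and Tx completions free
 * socket-buffer memory before the sender starves.  Thresholds and
 * values are invented for illustration only. */
static unsigned int pick_itr_usecs(uint64_t bytes_per_poll)
{
    if (bytes_per_poll > 64 * 1024)      /* bulk: interrupt frequently */
        return 12;
    if (bytes_per_poll > 8 * 1024)       /* moderate traffic */
        return 50;
    return 200;                          /* light: batch up the work */
}
```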
    • ixgbe: add counter for times an Rx page gets allocated, not recycled · 86e23494
      Authored by Jesper Dangaard Brouer
      The ixgbe driver has a page-recycle scheme built around the RX ring
      queue, where an RX page is shared between two packets.  Based on the
      refcount, the driver can determine whether the RX page is currently
      used by only a single packet; if so, it can directly refill/recycle
      the RX slot with the opposite "side" of the page.

      While this is a clever trick, it is hard to determine when this
      recycling succeeds and when it fails.  This adds a counter, available
      via ethtool --statistics as 'alloc_rx_page', which counts the number
      of times the recycle fails and the real page allocator is invoked.
      When interpreting the stat, remember that every allocation serves
      two packets.

      The counter is collected per rx_ring, but is summed and exported via
      ethtool as 'alloc_rx_page'.  It would be useful to know which
      rx_ring cannot keep up, but that can be exported later if someone
      needs it.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      86e23494
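The counting logic can be sketched like this (names and structure invented; the real driver ties this into its rx_buffer handling):

```c
/* Hypothetical sketch: recycle the half-page when the stack no longer
 * holds a reference, otherwise fall back to the page allocator and
 * bump the per-ring counter.  Each fresh page serves two packets, so
 * read the summed 'alloc_rx_page' stat accordingly. */
struct rx_ring { unsigned long alloc_rx_page; };

static int refill_rx_slot(struct rx_ring *ring, int page_still_shared)
{
    if (!page_still_shared)
        return 1;                 /* recycled: flip to the other half */
    ring->alloc_rx_page++;        /* recycle failed, real allocation */
    return 0;
}

/* ethtool export: the per-ring counters are summed into one stat */
static unsigned long sum_alloc_rx_page(const struct rx_ring *rings, int n)
{
    unsigned long total = 0;
    for (int i = 0; i < n; i++)
        total += rings[i].alloc_rx_page;
    return total;
}
```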
  6. 14 Jun 2017 (2 commits)
  7. 30 Apr 2017 (4 commits)
  8. 19 Apr 2017 (1 commit)
    • ixgbe: Add support for maximum headroom when using build_skb · 541ea69a
      Authored by Alexander Duyck
      This patch increases the headroom allocated when using build_skb on
      a system with 4K pages.  Specifically, the breakdown of headroom
      versus cache line size is as follows:
          L1 Cache Size            Headroom
          64                       192
          64, NET_IP_ALIGN == 2    194
          128                      128
          128, NET_IP_ALIGN == 2   130
          256                      512
          256, NET_IP_ALIGN == 2   258
      
      I stopped at supporting a cache line size of 256, as that was the
      largest cache size I could find supported in the kernel.

      With this we guarantee at least 128 bytes of headroom to spare in
      the frame.  That should be enough to insert a couple of IPv6 headers
      if needed, which is likely enough room for anything XDP should need.

      I'm leaving the padding for systems with pages larger than 4K
      unmodified for now.  XDP currently isn't really set up to work on
      those systems, so we can cross that bridge when we get there.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      541ea69a
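The table can be spot-checked against the stated guarantee; the sketch below simply encodes the commit's numbers as data and verifies that every configuration leaves at least 128 bytes:

```c
/* The headroom table from the commit message, encoded as data, plus a
 * check of the stated guarantee (at least 128 bytes in every case). */
struct headroom_cfg { int cache_line; int net_ip_align; int headroom; };

static const struct headroom_cfg cfgs[] = {
    {  64, 0, 192 }, {  64, 2, 194 },
    { 128, 0, 128 }, { 128, 2, 130 },
    { 256, 0, 512 }, { 256, 2, 258 },
};

static int min_headroom(void)
{
    int min = cfgs[0].headroom;
    for (unsigned i = 1; i < sizeof(cfgs) / sizeof(cfgs[0]); i++)
        if (cfgs[i].headroom < min)
            min = cfgs[i].headroom;
    return min;
}
```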
  9. 03 Mar 2017 (2 commits)
    • ixgbe: Limit use of 2K buffers on architectures with 256B or larger cache lines · c74042f3
      Authored by Alexander Duyck
      On architectures with a cache line size larger than 64 bytes, the
      amount of headroom for the frame starts shrinking.

      The size of skb_shared_info on a system with a 64B L1 cache line is
      320 bytes.  This increases to 384 with a 128B cache line, and 512
      with a 256B cache line.

      In addition, the NET_SKB_PAD value increases consistent with the
      cache line size.  As a result, with a 256B cache line as seen on
      s390, we end up with 768 bytes used by padding and shared info,
      leaving only 1280 bytes for data storage.  On such architectures we
      should default to 3K Rx buffers out of an 8K page instead of trying
      to do 1.5K buffers out of a 4K page.

      To take all of this into account I have added one small check that
      compares max_frame to the amount of data we can actually store.
      This was already done for igb, but I had overlooked it for ixgbe,
      as ixgbe doesn't have strict limits for 82599 once jumbo frames are
      enabled.  With this check we automatically enable 3K Rx buffers as
      soon as the maximum frame size we can handle drops below the
      standard Ethernet MTU.

      I also fixed one small typo I found, where a copy/paste error had
      left an IGB in a variable name.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      c74042f3
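The arithmetic above can be sketched as follows. The skb_shared_info sizes are the ones quoted in the commit; treating NET_SKB_PAD as equal to the cache line size and using a 1518-byte standard Ethernet frame are assumptions of this illustration, not the driver's exact check.

```c
/* Hypothetical sketch of the decision: with a 4K page split in half,
 * usable data is 2048 bytes minus NET_SKB_PAD (assumed to track the
 * cache line size) and the skb_shared_info overhead quoted in the
 * commit.  If that drops below a standard Ethernet frame, use 3K
 * buffers from an 8K (order-1) page instead. */
static int shared_info_size(int cache_line)
{
    switch (cache_line) {
    case 64:  return 320;
    case 128: return 384;
    default:  return 512;        /* 256B cache line, e.g. s390 */
    }
}

static int use_3k_rx_buffers(int cache_line)
{
    int data = 2048 - cache_line - shared_info_size(cache_line);
    return data < 1518;          /* standard Ethernet frame w/ FCS */
}
```

With a 256B cache line this yields 2048 - 256 - 512 = 1280 bytes, matching the figure in the commit message.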
    • ixgbe: update the RSS key in h/w when ethtool asks for it · d3aa9c9f
      Authored by Paolo Abeni
      Currently ixgbe_set_rxfh() updates the rss_key copy in driver
      memory, but does not push the new value into the h/w.  This commit
      adds a new helper for the latter operation and calls it from
      ixgbe_set_rxfh(), so that the h/w RSS key value is actually updated
      via ethtool.
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      d3aa9c9f
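The shape of the fix can be sketched like this. The register access is mocked as an array and the names are invented, not the driver's actual helper:

```c
#include <stdint.h>
#include <string.h>

#define RSS_KEY_WORDS 10           /* 40-byte RSS key as 32-bit words */

static uint32_t hw_rssrk[RSS_KEY_WORDS];   /* stand-in for h/w registers */
static uint32_t sw_rss_key[RSS_KEY_WORDS]; /* driver's cached copy */

/* Before the fix, only the memcpy happened; the loop stands in for the
 * new helper that pushes each word of the key into the hardware. */
static void set_rss_key(const uint32_t *key)
{
    memcpy(sw_rss_key, key, sizeof(sw_rss_key));
    for (int i = 0; i < RSS_KEY_WORDS; i++)
        hw_rssrk[i] = key[i];
}
```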
  10. 16 Feb 2017 (5 commits)
  11. 04 Feb 2017 (1 commit)
  12. 04 Jan 2017 (2 commits)
  13. 05 Nov 2016 (1 commit)
  14. 13 Sep 2016 (1 commit)
  15. 21 Aug 2016 (1 commit)
  16. 19 Aug 2016 (1 commit)
  17. 22 Jul 2016 (1 commit)
  18. 04 May 2016 (3 commits)
  19. 25 Apr 2016 (4 commits)
    • ixgbe: use BIT() macro · b4f47a48
      Authored by Jacob Keller
      Several areas of ixgbe were written before widespread use of the
      BIT(n) macro.  With the impending release of GCC 6 and its new
      warnings, some usages such as (1 << 31) have been flagged in the
      ixgbe driver source.  Fix these wholesale, and prevent future
      issues, by using the BIT() macro instead of hand-coded bit shifts.

      Also fix a few shifts that move values into place by using the 'u'
      suffix to mark the constant as unsigned.  It doesn't strictly
      matter in these cases, because we're not shifting by too large a
      value, but these are all unsigned values and should be indicated as
      such.
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      b4f47a48
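For illustration, the macro and the hazard it avoids:

```c
#include <stdint.h>

/* BIT(n) expands to an unsigned shift, so BIT(31) is well defined,
 * whereas (1 << 31) shifts a signed int into its sign bit, which is
 * undefined behavior in C and what newer GCC versions warn about. */
#define BIT(n) (1UL << (n))
```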
    • ixgbe: Add work around for empty SFP+ cage crosstalk · 4319a797
      Authored by Don Skidmore
      On some systems it is possible for crosstalk to cause link flap on
      empty SFP+ cages.  A new NVM bit was defined to let software know
      it needs to implement the work-around, which consists of verifying
      that there is a module in the cage before acting on the LSC (link
      status change) interrupt.
      Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      4319a797
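The control flow of the work-around can be sketched as follows (function and flag names invented for the example):

```c
/* Hypothetical sketch: on a link status change (LSC) event, a
 * platform whose NVM flags the crosstalk work-around only acts on the
 * event if a module is actually present in the SFP+ cage; otherwise
 * the LSC is treated as crosstalk-induced noise and ignored. */
static int should_service_lsc(int nvm_needs_workaround,
                              int sfp_module_present)
{
    if (nvm_needs_workaround && !sfp_module_present)
        return 0;                /* empty cage: likely crosstalk */
    return 1;
}
```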
    • ixgbe: make 'action' field in struct ixgbe_fdir_filter a u64 value · 2a9ed5d1
      Authored by Sridhar Samudrala
      This field records the RX queue index for a redirect action passed
      via the ring_cookie field in struct ethtool_rx_flow_spec, which is
      a u64 value.

      For example, after adding a filter rule to redirect to a VF using
      ethtool:
        # echo 4 > /sys/class/net/p4p1/device/sriov_numvfs
        # ethtool -N p4p1 flow-type ip4 src-ip 192.168.0.1 action 0x100000000
      
      querying for the rule shows the action as 'Direct to queue 0':

        # ethtool -n p4p1
        4 RX rings available
        Total 1 rules

        Filter: 2045
            Rule Type: Raw IPv4
            Src IP addr: 192.168.0.1 mask: 0.0.0.0
            Dest IP addr: 0.0.0.0 mask: 255.255.255.255
            TOS: 0x0 mask: 0xff
            Protocol: 0 mask: 0xff
            L4 bytes: 0x0 mask: 0xffffffff
            VLAN EtherType: 0x0 mask: 0xffff
            VLAN: 0x0 mask: 0xffff
            User-defined: 0x0 mask: 0xffffffffffffffff
            Action: Direct to queue 0
      
      With this fix, ethtool reports the right queue index even for VFs:
          Action: Direct to queue 4294967296

      Here 4294967296 corresponds to 0x100000000.  We still need to
      update 'ethtool' to report the queue index as a hex value, so that
      it is more user-friendly and matches the 'action' value passed when
      adding the rule.
      Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      2a9ed5d1
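The truncation the fix removes can be demonstrated directly; 0x100000000 is the same VF-redirect cookie used in the example above (helper names are invented for the illustration):

```c
#include <stdint.h>

/* ethtool's ring_cookie is a u64; with a u32 'action' field the VF
 * bits in the upper half of 0x100000000 are silently dropped, which
 * is why the rule used to display as 'Direct to queue 0'. */
static uint32_t store_action_u32(uint64_t ring_cookie)
{
    return (uint32_t)ring_cookie;       /* old behavior: truncates */
}

static uint64_t store_action_u64(uint64_t ring_cookie)
{
    return ring_cookie;                 /* fixed: full value kept */
}
```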
    • ixgbe: set VLAN spoof checking unconditionally · d3dec7c7
      Authored by Emil Tantilov
      Previously the PF driver would only set VLAN spoof checking if the
      VF had created VLANs.  This was done by setting and checking a
      counter (vlan_count) whenever the VF created a VLAN.  However, it
      is possible for vlan_count to be nonzero while no VLANs are
      assigned to the VF, because the count increments every time VLAN 0
      is added on ifdown/up; this resulted in VLAN spoofing always being
      set for those VFs.

      This patch cleans up the logic by setting VLAN spoof checking
      unconditionally, based only on how the VF is configured (via
      ip link set ethX vf Y spoofchk on/off).  This change also resolves
      an issue where VLAN spoof checking could remain enabled even after
      being disabled by the user, because the driver enabled it every
      time a VLAN was added to the VF, but only allowed changes to the
      setting when vlan_count != 0.

      Also, default_vf_vlan_id and vlans_enabled were removed from the
      vf_data_storage structure, since they are not used in the driver.
      Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      d3dec7c7
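The cleaned-up logic reduces to the administrator's setting alone; a minimal sketch, with the struct and names invented:

```c
/* Hypothetical sketch: VLAN spoof checking now mirrors only the
 * per-VF spoofchk setting from 'ip link set ethX vf Y spoofchk',
 * rather than being gated on vlan_count, which could be nonzero with
 * no real VLANs assigned (VLAN 0 is added on every ifdown/up). */
struct vf_state { int spoofchk; int vlan_count; };

static int vlan_spoof_enable(const struct vf_state *vf)
{
    return vf->spoofchk;      /* unconditional: vlan_count ignored */
}
```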
  20. 08 Apr 2016 (1 commit)
  21. 05 Apr 2016 (1 commit)