1. 30 Apr 2017 (6 commits)
  2. 29 Apr 2017 (1 commit)
  3. 19 Apr 2017 (14 commits)
  4. 18 Mar 2017 (1 commit)
  5. 16 Mar 2017 (1 commit)
    • mqprio: Modify mqprio to pass user parameters via ndo_setup_tc. · 56f36acd
      Authored by Amritha Nambiar
      The configurable priority-to-traffic-class mapping and the user-specified
      queue ranges are used to configure the traffic class, overriding the
      hardware defaults, when the 'hw' option is set to 0. However, when the 'hw'
      option is non-zero, the hardware QoS defaults are used.
      
      This patch makes it so that we can pass the data the user provided to
      ndo_setup_tc. This allows us to pull in the queue configuration if the
      user requested it, as well as any additional hardware offload type
      requested via a hw value other than 1.
      
      Finally, it also provides a means for the device driver to return the level
      supported for the offload type via the qopt->hw value. Previously we always
      assumed the value to be 1; in the future, values beyond 1 may be supported.
      Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
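      As context for how a driver consumes this, here is a minimal user-space
      sketch of the new handshake, using the shape of the uapi struct
      tc_mqprio_qopt; the callback name and the granted level below are
      hypothetical stand-ins, not the actual Intel driver code:

      #include <stdio.h>
      #include <stdint.h>

      /* Simplified copy of the uapi layout (see linux/pkt_sched.h). */
      struct tc_mqprio_qopt {
              uint8_t  num_tc;
              uint8_t  prio_tc_map[16];
              uint8_t  hw;         /* in: requested offload; out: level granted */
              uint16_t count[16];  /* number of queues per traffic class */
              uint16_t offset[16]; /* first queue of each traffic class */
      };

      /* Hypothetical driver hook: consume the user's queue layout and
       * report back the offload level the hardware actually supports. */
      static int demo_setup_tc(struct tc_mqprio_qopt *qopt)
      {
              if (qopt->hw == 0)
                      return 0; /* software fallback, nothing to offload */
              for (int tc = 0; tc < qopt->num_tc; tc++)
                      printf("tc %d -> queues %u..%u\n", tc, qopt->offset[tc],
                             qopt->offset[tc] + qopt->count[tc] - 1);
              qopt->hw = 1; /* driver echoes the level it actually granted */
              return 0;
      }

      int main(void)
      {
              struct tc_mqprio_qopt q = { .num_tc = 2, .hw = 2,
                                          .count = { 4, 4 }, .offset = { 0, 4 } };
              demo_setup_tc(&q); /* user asked for level 2, driver grants 1 */
              printf("granted hw level: %u\n", q.hw);
              return 0;
      }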
  6. 13 Mar 2017 (1 commit)
  7. 03 Mar 2017 (2 commits)
    • ixgbe: Limit use of 2K buffers on architectures with 256B or larger cache lines · c74042f3
      Authored by Alexander Duyck
      On architectures that have a cache line size larger than 64 bytes, we start
      running into issues where the amount of headroom for the frame starts
      shrinking.
      
      The size of skb_shared_info on a system with a 64B L1 cache line is
      320 bytes. This increases to 384 with a 128B cache line, and to 512 with
      a 256B cache line.
      
      In addition, the NET_SKB_PAD value increases as well, consistent with the
      cache line size. As a result, when we get to a 256B cache line, as seen on
      s390, we end up with 768 bytes used by padding and shared info, leaving us
      with only 1280 bytes to use for data storage. On architectures such as
      this we should default to using 3K Rx buffers out of an 8K page instead of
      trying to do 1.5K buffers out of a 4K page.
      
      To take all of this into account, I have added one small check so that we
      compare the max_frame to the amount of actual data we can store. This was
      already occurring for igb, but I had overlooked it for ixgbe, as ixgbe
      doesn't have strict limits for 82599 once we enable jumbo frames. By adding
      this check we will automatically enable 3K Rx buffers as soon as the maximum
      frame size we can handle drops below the standard Ethernet MTU.
      
      I also fixed one small typo I found where an IGB had been left in a
      variable name due to a copy/paste error.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
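      The arithmetic above is easy to verify; this small, self-contained C
      sketch reproduces it using the skb_shared_info sizes quoted in this
      message (the 1518-byte standard frame figure is an illustrative
      assumption for MTU 1500 plus Ethernet header and FCS):

      #include <stdio.h>

      /* skb_shared_info sizes quoted above per L1 cache line size;
       * NET_SKB_PAD follows max(32, L1_CACHE_BYTES). */
      static const struct { int cache_line; int shinfo; } cfg[] = {
              { 64, 320 }, { 128, 384 }, { 256, 512 },
      };

      int main(void)
      {
              const int std_frame = 1500 + 14 + 4; /* MTU + header + FCS */

              for (unsigned i = 0; i < sizeof(cfg) / sizeof(cfg[0]); i++) {
                      int pad = cfg[i].cache_line > 32 ? cfg[i].cache_line : 32;
                      int data = 2048 - pad - cfg[i].shinfo;

                      printf("%3dB cache line: %4d data bytes in a 2K buffer: %s\n",
                             cfg[i].cache_line, data,
                             data < std_frame
                                 ? "use 3K buffers from an 8K page"
                                 : "1.5K buffers from a 4K page are fine");
              }
              return 0;
      }

      At 256B cache lines this prints 1280 data bytes, below a standard frame,
      which is exactly the case where the new check flips ixgbe to 3K buffers.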
    • ixgbe: update the rss key on h/w when ethtool asks for it · d3aa9c9f
      Authored by Paolo Abeni
      Currently ixgbe_set_rxfh() updates the rss_key copy in driver
      memory, but does not push the new value into the h/w. This commit
      adds a new helper for the latter operation and calls it in
      ixgbe_set_rxfh(), so that the h/w RSS key value is really
      updated via ethtool.
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
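      As a rough illustration of what such a helper does, here is a user-space
      sketch that models the 40-byte key as ten 32-bit register words (RSSRK in
      the ixgbe register map) held in a plain array; the function name is
      hypothetical, and the real driver would use a register write such as
      IXGBE_WRITE_REG instead of an array store:

      #include <stdio.h>
      #include <stdint.h>

      #define RSS_KEY_DWORDS 10 /* 40-byte RSS key, ten 32-bit words */

      static uint32_t rssrk_regs[RSS_KEY_DWORDS]; /* stand-in for IXGBE_RSSRK(i) */

      /* Hypothetical equivalent of the new helper: push the driver's cached
       * key into the hardware registers instead of only updating the copy. */
      static void demo_store_rss_key(const uint32_t *rss_key)
      {
              for (int i = 0; i < RSS_KEY_DWORDS; i++)
                      rssrk_regs[i] = rss_key[i];
      }

      int main(void)
      {
              uint32_t key[RSS_KEY_DWORDS];

              for (int i = 0; i < RSS_KEY_DWORDS; i++)
                      key[i] = 0x01020304u * (i + 1); /* dummy key material */
              demo_store_rss_key(key); /* what the ethtool set path now triggers */
              printf("first rssrk word: 0x%08x\n", rssrk_regs[0]);
              return 0;
      }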
  8. 28 Feb 2017 (1 commit)
  9. 16 Feb 2017 (13 commits)