1. 12 Feb 2017 (3 commits)
  2. 03 Feb 2017 (1 commit)
  3. 07 Dec 2016 (1 commit)
  4. 01 Nov 2016 (1 commit)
  5. 29 Oct 2016 (2 commits)
    • i40e: Drop redundant Rx descriptor processing code · 99dad8b3
      Authored by Alexander Duyck
      This patch cleans up several pieces of redundant code in the Rx clean-up
      paths.
      
      The first piece is that the hdr_addr and status_err_len portions of
      the Rx descriptor overlay the same value.  As such there is no point
      in zeroing that value twice.  I'm dropping the second spot where we
      update the value to 0 so that we only have 1 write for this value
      instead of 2.
      
      The second piece is the check for the DD bit in the packet.  We only
      need to check for a non-zero status_err_len value, because if the
      device is done with the descriptor it will have written something
      back, and the DD bit is just one piece of that write.  In addition I
      have moved the reading of the Rx descriptor bits related to rx_ptype
      down below the dma_rmb() call, so that we are guaranteed no ordering
      issues from 64-bit reads on 32-bit platforms.
      
      Change-ID: I256e44a025d3c64a7224aaaec37c852bfcb1871b
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
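
      As a minimal illustration of the ordering described above, here is a
      sketch in kernel-style C.  The structure and function names are
      hypothetical stand-ins, not the actual i40e descriptor definitions;
      only dma_rmb() and READ_ONCE() are real kernel primitives.

          #include <linux/types.h>
          #include <linux/compiler.h>     /* READ_ONCE() */
          #include <asm/barrier.h>        /* dma_rmb() */

          /* Illustrative writeback layout: status_err_len overlays the
           * hdr_addr field used by the software descriptor format. */
          struct rx_desc_wb {
                  __le64 status_err_len;
                  __le64 ptype_rss;
          };

          static bool rx_desc_done(const struct rx_desc_wb *desc)
          {
                  /* A completed descriptor has a non-zero writeback qword;
                   * the DD bit is only one piece of that write, so testing
                   * for non-zero is sufficient. */
                  if (!READ_ONCE(desc->status_err_len))
                          return false;

                  /* Order the done test before any further field reads, so
                   * 64-bit reads split on 32-bit platforms cannot be
                   * observed ahead of it. */
                  dma_rmb();
                  return true;
          }

      A caller would read ptype_rss (and any other writeback fields) only
      after rx_desc_done() returns true.
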
    • i40e/i40evf: fix interrupt affinity bug · 96db776a
      Authored by Alan Brady
      There exists a bug in which a 'perfect storm' can occur and cause
      interrupts to fail to be correctly affinitized. This causes unexpected
      behavior and has a substantial impact on performance when it happens.
      
      The bug occurs if there is heavy traffic, any number of CPUs that have
      an i40e interrupt are pegged at 100%, and the interrupt affinity for
      those CPUs is changed.  Instead of moving to the new CPU, the
      interrupt continues to be polled on the old CPU while there is heavy
      traffic.
      
      The bug is most readily observed when the driver is first brought up
      and all interrupts start on CPU0.  If there is heavy traffic and the
      interrupt starts polling before it is affinitized, the interrupt will
      be stuck on CPU0 until traffic stops.  The bug can also be reproduced
      more simply by affinitizing all the interrupts to a single CPU and
      then attempting to move any of those interrupts off that CPU while
      there is heavy traffic.
      
      This patch fixes the bug by registering for affinity-change
      notifications from the kernel.  When a notification fires, we cache
      the intended affinity mask.  Then, while polling, if the CPU is
      pegged at 100% and we failed to clean the rings, we check that we
      hold the correct affinity and stop polling if we're firing on the
      wrong CPU.  When the kernel successfully moves the interrupt, it
      will start polling on the correct CPU.  The performance impact is
      minimal, since the only time this section executes is when
      performance is already compromised by the pegged CPU.
      
      Change-ID: I4410a880159b9dba1f8297aa72bef36dca34e830
      Signed-off-by: Alan Brady <alan.brady@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
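
      A minimal sketch of the notification scheme described above, using
      hypothetical per-queue-vector names; irq_set_affinity_notifier() and
      struct irq_affinity_notify are the real kernel API, the rest is
      illustrative.

          #include <linux/interrupt.h>
          #include <linux/cpumask.h>
          #include <linux/kernel.h>
          #include <linux/smp.h>

          /* Illustrative per-queue-vector state. */
          struct qv {
                  struct irq_affinity_notify affinity_notify;
                  cpumask_t affinity_mask;        /* cached intended mask */
          };

          /* Runs when the interrupt's affinity is changed. */
          static void qv_affinity_notify(struct irq_affinity_notify *notify,
                                         const cpumask_t *mask)
          {
                  struct qv *q = container_of(notify, struct qv,
                                              affinity_notify);

                  cpumask_copy(&q->affinity_mask, mask);
          }

          /* Called from the poll loop when the budget was exhausted. */
          static bool qv_on_wrong_cpu(const struct qv *q)
          {
                  /* Stop polling when firing outside the intended mask so
                   * the kernel can actually migrate the interrupt. */
                  return !cpumask_test_cpu(smp_processor_id(),
                                           &q->affinity_mask);
          }

      Registration would pair irq_set_affinity_notifier(irq,
      &q->affinity_notify) at setup, with the .notify and kref .release
      callbacks filled in, against irq_set_affinity_notifier(irq, NULL)
      at teardown.
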
  6. 25 Sep 2016 (3 commits)
  7. 23 Sep 2016 (1 commit)
  8. 20 Aug 2016 (1 commit)
  9. 22 Jul 2016 (1 commit)
  10. 15 Jul 2016 (1 commit)
    • i40e/i40evf: Fix i40e_rx_checksum · 858296c8
      Authored by Alexander Duyck
      There are a couple of issues I found in i40e_rx_checksum while doing
      some recent testing.  The Rx checksum logic is pretty much broken for
      tunnels, reporting the checksum as valid in cases where it is not.
      
      First, the inner protocol types are not the correct values to use to
      test whether a tunnel is present.  In addition, the inner protocol
      types are not a bitmask, so performing an OR of the values doesn't
      make sense.  I have instead changed the code so that the inner
      protocol types are used to determine whether we report
      CHECKSUM_UNNECESSARY.  For anything whose innermost header is not
      UDP, TCP, or SCTP it doesn't make much sense to report a checksum
      offload, since such a frame won't contain a checksum anyway.
      
      This leaves us with the need to set the csum_level based on some
      value.  For that purpose I am using the tunnel_type field.  If the
      tunnel type is GRENAT or greater, we have a GRE or UDP tunnel with
      an inner header.  In the case of GRE or UDP a checksum may be
      present, so it should be safe to set the csum_level to 1 to indicate
      that we are reporting the state of the inner header.

      Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
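
      A minimal sketch of the reporting policy described above.  The enum
      values are hypothetical stand-ins for the descriptor ptype decode;
      skb->ip_summed, skb->csum_level, and CHECKSUM_UNNECESSARY are the
      real kernel fields being set.

          #include <linux/skbuff.h>

          /* Illustrative decode of the descriptor ptype fields. */
          enum inner_prot { INNER_NONE, INNER_UDP, INNER_TCP, INNER_SCTP };
          enum tun_type { TUN_NONE, TUN_IP_IP, TUN_IP_GRENAT };

          static void rx_csum_report(struct sk_buff *skb,
                                     enum inner_prot prot,
                                     enum tun_type tun)
          {
                  /* Only innermost headers that actually carry a checksum
                   * (TCP/UDP/SCTP) are worth reporting as verified. */
                  if (prot != INNER_TCP && prot != INNER_UDP &&
                      prot != INNER_SCTP)
                          return;

                  skb->ip_summed = CHECKSUM_UNNECESSARY;

                  /* GRENAT or greater means a GRE or UDP tunnel with an
                   * inner header, so the checksum we validated is the
                   * inner one. */
                  skb->csum_level = (tun >= TUN_IP_GRENAT) ? 1 : 0;
          }
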
  11. 21 May 2016 (2 commits)
  12. 06 May 2016 (3 commits)
  13. 02 May 2016 (1 commit)
  14. 28 Apr 2016 (1 commit)
  15. 26 Apr 2016 (1 commit)
  16. 14 Apr 2016 (1 commit)
  17. 07 Apr 2016 (3 commits)
  18. 06 Apr 2016 (1 commit)
  19. 05 Apr 2016 (4 commits)
  20. 19 Feb 2016 (8 commits)