  1. 29 January 2021, 1 commit
  2. 12 January 2021, 1 commit
    • i40e, xsk: clear the status bits for the next_to_use descriptor · 801a611d
      Björn Töpel committed
      stable inclusion
      from stable-5.10.4
      commit bc79bf6c581cde11213a2119bcf7dc2c59cd22ec
      bugzilla: 46903
      
      --------------------------------
      
      [ Upstream commit 64050b5b ]
      
      On the Rx side, the next_to_use index points to the next item in the
      HW ring to be refilled/allocated, and next_to_clean points to the
      next item to potentially be processed.
      
      When the HW Rx ring is fully refilled, i.e., no packets have been
      processed, next_to_use will be next_to_clean - 1. When the ring is
      fully processed, next_to_clean will be equal to next_to_use. The
      latter case is where the bug is triggered.
      
      If the status bits of the next_to_use descriptor are not cleared and
      the "fully processed" state is entered, a stale descriptor can be
      processed.
      
      The skb path correctly clears the status bits for the next_to_use
      descriptor, but the AF_XDP zero-copy path did not do that.
      
      This change adds clearing of the status bits for the next_to_use
      descriptor.
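      
      A minimal sketch of the fix (struct, macro, and field names follow
      the i40e driver's layout, but treat this as illustrative rather than
      the verbatim upstream diff):
      
          /* Hypothetical helper; the real fix clears the word inline in
           * the AF_XDP zero-copy buffer-allocation path. */
          static void i40e_clear_ntu_status(struct i40e_ring *rx_ring)
          {
                  union i40e_rx_desc *rx_desc;
          
                  rx_desc = I40E_RX_DESC(rx_ring, rx_ring->next_to_use);
                  /* Zeroing qword1 clears the DD (descriptor done) bit,
                   * so the cleaning loop stops at next_to_use instead of
                   * consuming a stale, already-handled descriptor. */
                  rx_desc->wb.qword1.status_error_len = 0;
          }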
      
      Fixes: 3b4f0b66 ("i40e, xsk: Migrate to new MEM_TYPE_XSK_BUFF_POOL")
      Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
      801a611d
  3. 11 November 2020, 1 commit
  4. 15 September 2020, 2 commits
  5. 01 September 2020, 3 commits
  6. 02 July 2020, 3 commits
  7. 29 May 2020, 1 commit
  8. 22 May 2020, 4 commits
  9. 15 May 2020, 1 commit
  10. 06 February 2020, 1 commit
  11. 21 December 2019, 1 commit
  12. 19 December 2019, 1 commit
  13. 09 November 2019, 1 commit
    • i40e: need_wakeup flag might not be set for Tx · 70563957
      Magnus Karlsson committed
      The need_wakeup flag for Tx might not be set for AF_XDP sockets that
      are only used to send packets. This happens if there is at least one
      outstanding packet that has not been completed by the hardware, and
      the corresponding completion arrives (without generating an
      interrupt, since interrupts are disabled in the napi poll loop)
      between the time we stopped processing Tx completions and the time
      interrupts are enabled again. In this case, the need_wakeup flag
      will have been cleared at the end of the Tx completion processing,
      as we believe we will get an interrupt from the outstanding
      completion at a later point in time. But if this completion
      interrupt occurs before interrupts are enabled, we lose it, and at
      that point we really should have set the need_wakeup flag, since
      there are no more outstanding completions that can generate an
      interrupt to continue the processing. When this happens, user space
      will see a Tx queue need_wakeup of 0 and skip issuing a syscall,
      which means we will never get into the Tx processing again and we
      have a deadlock.
      
      This patch introduces a quick fix for this issue by simply setting
      the need_wakeup flag for Tx to 1 all the time. I am working on a
      proper fix that will toggle the flag appropriately, but it is more
      challenging than I anticipated and I am afraid it will not be
      completed before the merge window closes, hence this easier fix for
      now. This fix has a negative performance impact in the range of 0%
      to 4%: towards the higher end of the scale if the driver and
      application run on the same core and issue a lot of packets, and
      towards no negative impact with two cores, lower transmission
      speeds, and/or a workload that also receives packets.
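      
      A minimal sketch of the quick fix (the function name is hypothetical
      and the completion-reclaim loop is elided; xsk_set_tx_need_wakeup()
      is the AF_XDP helper of this era, which took the umem):
      
          /* Tail of Tx completion processing for an AF_XDP queue. */
          static void i40e_finish_xdp_tx_clean(struct i40e_ring *tx_ring)
          {
                  /* ... completed Tx descriptors reclaimed above ... */
          
                  /* Quick fix: always ask user space to kick us with a
                   * syscall, so a completion interrupt lost while IRQs
                   * were off cannot deadlock the Tx path. The proper fix
                   * would toggle the flag instead. */
                  if (tx_ring->xsk_umem)
                          xsk_set_tx_need_wakeup(tx_ring->xsk_umem);
          }
      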
      Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      70563957
  14. 02 November 2019, 1 commit
  15. 16 September 2019, 1 commit
  16. 12 September 2019, 1 commit
    • i40e: fix potential RX buffer starvation for AF_XDP · 1f459bdc
      Magnus Karlsson committed
      When the RX rings are created, they are also populated with buffers
      so that packets can be received. Usually these are kernel buffers,
      but for AF_XDP in zero-copy mode they are user-space buffers, and in
      this case the application might not have sent down any buffers to
      the driver at that point. If no buffers are allocated at ring
      creation time, no packets can be received and no interrupts will be
      generated, so the NAPI poll function that allocates buffers to the
      rings will never get executed.
      
      To rectify this, we kick the NAPI context of any queue with an
      attached AF_XDP zero-copy socket in two places in the code: once
      after an XDP program has been loaded and once after the umem is
      registered. This takes care of both orderings: the XDP program is
      loaded first and then the AF_XDP socket is created, and the reverse,
      where the AF_XDP socket is created first and then the XDP program is
      loaded. A sketch of the kick follows.
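      
      A minimal sketch of the kick (the helper name is hypothetical;
      napi_schedule() is the standard kernel API and disables local
      interrupts itself, so it is safe to call from process context):
      
          /* Run the queue's NAPI poll once so it can allocate Rx buffers
           * from the AF_XDP fill ring even though no interrupt has fired
           * yet. */
          static void i40e_kick_rx_napi(struct i40e_vsi *vsi, u16 qid)
          {
                  struct i40e_ring *rx_ring = vsi->rx_rings[qid];
          
                  napi_schedule(&rx_ring->q_vector->napi);
          }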
      
      Fixes: 0a714186 ("i40e: add AF_XDP zero-copy Rx support")
      Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      1f459bdc
  17. 05 September 2019, 1 commit
  18. 31 August 2019, 2 commits
  19. 18 August 2019, 2 commits
  20. 28 June 2019, 1 commit
  21. 15 June 2019, 1 commit
  22. 02 April 2019, 1 commit
  23. 22 February 2019, 1 commit
    • i40e: fix potential RX buffer starvation for AF_XDP · 14ffeb52
      Magnus Karlsson committed
      When the RX rings are created, they are also populated with buffers
      so that packets can be received. Usually these are kernel buffers,
      but for AF_XDP in zero-copy mode they are user-space buffers, and in
      this case the application might not have sent down any buffers to
      the driver at that point. If no buffers are allocated at ring
      creation time, no packets can be received and no interrupts will be
      generated, so the NAPI poll function that allocates buffers to the
      rings will never get executed.
      
      To rectify this, we kick the NAPI context of any queue with an
      attached AF_XDP zero-copy socket in two places in the code: once
      after an XDP program has been loaded and once after the umem is
      registered. This takes care of both orderings: the XDP program is
      loaded first and then the AF_XDP socket is created, and the reverse,
      where the AF_XDP socket is created first and then the XDP program is
      loaded.
      
      Fixes: 0a714186 ("i40e: add AF_XDP zero-copy Rx support")
      Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      14ffeb52
  24. 15 February 2019, 1 commit
  25. 22 January 2019, 1 commit
  26. 13 December 2018, 2 commits
  27. 29 November 2018, 1 commit
  28. 26 September 2018, 2 commits