1. 17 Jan, 2019 (1 commit)
    • nfp: flower: increase cmesg reply timeout · 96439889
      Authored by Fred Lotter
      QA tests report occasional timeouts on REIFY message replies. Profiling
      of the two cmesg reply types under burst conditions, with a 12-core host
      under heavy CPU and I/O load (stress --cpu 12 --io 12), shows that both
      PHY MTU change and REIFY replies can exceed the 10ms timeout. The maximum
      MTU reply wait under burst is 16ms, while the maximum REIFY wait under a
      40 VF burst is 12ms. A 4 VF REIFY burst results in an 8ms maximum wait.
      A larger VF burst does increase the delay, but not linearly enough to
      justify a scaled REIFY delay. The worst-case values for MTU and REIFY
      appear close enough to justify a common timeout. Pick a conservative
      40ms to make a safer, future-proof common reply timeout. The delay only
      affects the failure case.
      
      Change the REIFY timeout mechanism to use wait_event_timeout() instead
      of wait_event_interruptible_timeout(), to match the MTU code. In the
      current implementation a signal could, in theory, interrupt the REIFY
      waiting period with a return code of ERESTARTSYS; however, this is
      caught under the general timeout error code EIO. I cannot see the
      benefit of exposing the REIFY waiting period to signals with such a
      short delay (40ms), while the MTU mechanism does not use the same logic.
      In the absence of any reply (wake_up() call), both reply types wake up
      the task after the timeout period. The REIFY timeout applies to the
      entire representor group being instantiated (e.g. VFs), while the MTU
      timeout applies to a single PHY MTU change.
      Signed-off-by: Fred Lotter <frederik.lotter@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 12 Nov, 2018 (1 commit)
  3. 08 Nov, 2018 (2 commits)
  4. 12 Oct, 2018 (1 commit)
  5. 08 Aug, 2018 (3 commits)
  6. 30 Jun, 2018 (1 commit)
  7. 25 May, 2018 (2 commits)
  8. 02 May, 2018 (1 commit)
  9. 13 Apr, 2018 (1 commit)
  10. 30 Mar, 2018 (1 commit)
    • nfp: flower: offload phys port MTU change · 29a5dcae
      Authored by John Hurley
      Trigger a port mod message to request an MTU change on the NIC when any
      physical port representor is assigned a new MTU value. The driver waits
      10 msec for an ack that the FW has set the MTU. If no ack is received,
      the request is rejected and an appropriate warning is flagged.
      
      Rather than maintaining an MTU queue per repr, one is maintained per
      app. Because the MTU ndo is protected by the rtnl lock, there can never
      be contention here. Portmod messages from the NIC are also protected by
      rtnl, so we first check whether the portmod is an ack and, if so, handle
      it outside rtnl and the cmsg work queue.

      Acks are detected by the marking of a bit in a portmod response. They
      are then verified by checking the port number and MTU value expected by
      the app. If the expected MTU is 0, then no acks are currently expected.
      
      Also, ensure that the packet headroom reserved by the flower firmware is
      considered when accepting an MTU change on any repr.
      Signed-off-by: John Hurley <john.hurley@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 27 Mar, 2018 (2 commits)
  12. 17 Feb, 2018 (1 commit)
  13. 04 Jan, 2018 (1 commit)
  14. 20 Dec, 2017 (2 commits)
  15. 12 Dec, 2017 (2 commits)
  16. 02 Nov, 2017 (1 commit)
  17. 22 Oct, 2017 (1 commit)
  18. 07 Oct, 2017 (5 commits)
  19. 27 Sep, 2017 (7 commits)
  20. 17 Aug, 2017 (1 commit)
  21. 12 Aug, 2017 (1 commit)
  22. 01 Jul, 2017 (2 commits)