1. 11 Sep, 2020 1 commit
  2. 22 Jul, 2020 2 commits
    • net: ena: support new LLQ acceleration mode · 0e3a3f6d
      Committed by Arthur Kiyanovski
      New devices add a new hardware acceleration engine, which adds some
      restrictions to the driver.
      A metadata descriptor must be present for each packet, and the maximum
      burst size between two doorbells is now limited to a value advertised
      by the device.
      
      This patch adds:
      1. A handshake protocol between the driver and the device, so the
      device will enable the accelerated queues only when both sides
      support it.
      
      2. The driver support for the new acceleration engine:
      2.1. Send metadata descriptor for each Tx packet.
      2.2. Limit the number of packets sent between doorbells (*); see the sketch below.
      
      (*) A previous driver implementation of this feature was committed in
      commit 05d62ca2 ("net: ena: add handling of llq max tx burst size");
      however, the design of the interface between the driver and the device
      has changed since then. This change is reflected in this commit.
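      
      To make point 2.2 concrete, below is a minimal sketch in plain C of
      capping the number of packets queued between two doorbell writes at a
      device-advertised limit. The names (tx_queue, max_tx_burst, etc.) are
      hypothetical; this is not the actual ena code.
      
      #include <stdbool.h>
      #include <stdint.h>
      
      /* Hypothetical per-queue state; a real driver would keep this in its
       * Tx ring structure. */
      struct tx_queue {
          uint16_t pkts_since_doorbell; /* packets queued since the last doorbell */
          uint16_t max_tx_burst;        /* burst limit advertised by the device   */
      };
      
      /* True when the doorbell must be written before queuing more packets. */
      static bool tx_burst_limit_reached(const struct tx_queue *q)
      {
          return q->max_tx_burst && q->pkts_since_doorbell >= q->max_tx_burst;
      }
      
      static void tx_ring_doorbell(struct tx_queue *q)
      {
          /* A real driver would write the device's doorbell register here. */
          q->pkts_since_doorbell = 0;
      }
      
      static void tx_queue_packet(struct tx_queue *q)
      {
          if (tx_burst_limit_reached(q))
              tx_ring_doorbell(q);
          q->pkts_since_doorbell++;
      }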
      Signed-off-by: Netanel Belgazal <netanel@amazon.com>
      Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: ena: avoid unnecessary rearming of interrupt vector when busy-polling · 1e5ae350
      Committed by Arthur Kiyanovski
      For an overview of the race created by this patch, see the
      'synchronization' section below.
      
      In napi busy-poll mode, the kernel invokes the napi handler of the
      device repeatedly to poll the NIC's receive queues. This repeats until
      a timeout, specific to each connection, expires.
      By polling packets in busy-poll mode the user may gain lower latency
      and higher throughput (since the kernel no longer waits for interrupts
      to poll the queues) at the expense of CPU usage.
      
      Upon completing a napi routine, the driver checks whether
      the routine was called by an interrupt handler. If so, the driver
      re-enables interrupts for the device. This is needed since an
      interrupt routine invocation disables future invocations until
      explicitly re-enabled.
      
      The driver avoids re-enabling the interrupts if they were not disabled
      in the first place (e.g. if the driver is in busy-poll mode).
      Originally, the driver checked whether interrupt re-enabling is needed
      by reading the 'ena_napi->unmask_interrupt' variable. This atomic
      variable was set upon interrupt and cleared after the interrupts were
      re-enabled.
      
      In Linux 4.10, the 'napi_complete_done' call was changed so that it
      returns 'false' when the device should not re-enable interrupts, and
      'true' otherwise. The change includes reading the
      NAPIF_STATE_IN_BUSY_POLL flag to check whether the napi call is in
      busy-poll mode and, if so, returning 'false'.
      The driver was changed to re-enable interrupts according to this
      routine's return value.
      The Linux community rejected the use of the
      'ena_napi->unmask_interrupt' variable to determine whether unmasking
      is needed, and urged relying solely on the napi_complete_done()
      return value.
      See https://lore.kernel.org/patchwork/patch/741149/ for more details
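      
      The polling pattern the community asked for can be sketched as
      follows. This is a hedged illustration, not the actual ena code:
      struct my_napi, my_clean_rx_tx() and my_unmask_irq() are placeholder
      names, and 'interrupts_masked' is the flag introduced by this patch.
      Interrupts are re-armed only when napi_complete_done() returns true
      AND the interrupt handler actually masked them.
      
      #include <linux/netdevice.h>
      
      struct my_napi {
          struct napi_struct napi;
          bool interrupts_masked;    /* set by the interrupt handler */
      };
      
      static int my_clean_rx_tx(struct my_napi *n, int budget); /* placeholder */
      static void my_unmask_irq(struct my_napi *n);             /* placeholder */
      
      static int my_io_poll(struct napi_struct *napi, int budget)
      {
          struct my_napi *n = container_of(napi, struct my_napi, napi);
          int work_done = my_clean_rx_tx(n, budget);
      
          if (work_done < budget &&
              napi_complete_done(napi, work_done) &&
              READ_ONCE(n->interrupts_masked)) {
              smp_rmb(); /* keep the flag read ordered before the unmask */
              WRITE_ONCE(n->interrupts_masked, false);
              my_unmask_irq(n);
          }
      
          return work_done;
      }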
      
      As explained, a busy-poll session lasts for a specified timeout, after
      which the connection exits busy-poll mode and re-enters it later.
      This leads to many invocations of the napi handler in which
      napi_complete_done() falsely indicates that interrupts should be
      re-enabled.
      This creates a bug in which the interrupts are re-enabled
      unnecessarily.
      To reproduce this bug:
          1) echo 50 | sudo tee /proc/sys/net/core/busy_poll
          2) echo 50 | sudo tee /proc/sys/net/core/busy_read
          3) Add counters that check whether
          'ena_unmask_interrupt(tx_ring, rx_ring);'
          is called without the interrupts having been disabled in the
          first place (the interrupts are disabled by the interrupt
          routine ena_intr_msix_io())
      
      Steps 1+2 enable busy-poll as the default mode for new connections.
      
      The busy poll routine rearms the interrupts after every session by
      design, and so we need to add an extra check that the interrupts were
      masked in the first place.
      
      synchronization:
      This patch introduces a race between the interrupt handler
      ena_intr_msix_io() and the napi routine ena_io_poll().
      Some macros and instructions were added to prevent this race from
      leaving the interrupts masked. The following describes the different
      race scenarios handled by this patch (a sketch of the
      interrupt-handler side of the handshake follows the scenarios):
      
      1) interrupt handler and napi routine run sequentially
          i) interrupt handler is called, sets the 'interrupts_masked' flag
      	and successfully schedules the napi handler via softirq.
      
          In this scenario the napi routine might not see the flag change
          for several reasons:
      	a) The flag may be kept in a register by the compiler. The
      	WRITE_ONCE macro is used to prevent this.
      	b) The compiler might reorder the instructions. The smp_wmb()
      	call, which implies a compiler memory barrier, prevents this.
      	c) On architectures with a weak consistency model (like ARM64)
      	the napi routine might be scheduled and start running before the
      	flag STORE instruction is committed to cache/memory. To ensure
      	this doesn't happen, the smp_wmb() call was added; it guarantees
      	that the flag-set store is committed before napi is scheduled.
      
          ii) the compiler reorders the check of the flag's value in the
          'if' with the write to the flag in the napi routine.
      
          This scenario is prevented by the smp_rmb() call after the flag
          check.
      
      2) interrupt handler and napi routine run in parallel (this can happen
      when the busy-poll routine invokes the napi handler)
      
          i) interrupt handler sets the flag on one core, while the napi
          routine reads it on another core.
      
          This scenario is also divided into two cases:
      	a) napi_complete_done() doesn't finish running, in which case
      	napi_schedule() would just set NAPIF_STATE_MISSED and the napi
      	routine would reschedule itself without changing the flag's value.
      
      	b) napi_complete_done() finishes running. In this case the
      	napi routine might override the flag's value.
      	This doesn't present any risk since it later unmasks the
      	interrupt vector.
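      
      A hedged sketch of the interrupt-handler side of the same handshake
      (again with placeholder names, reusing struct my_napi from the sketch
      above; this is not the actual ena code). The flag is published with
      WRITE_ONCE() and ordered with smp_wmb() before napi is scheduled, so
      the poll routine can observe it:
      
      #include <linux/interrupt.h>
      #include <linux/netdevice.h>
      
      static irqreturn_t my_intr_msix_io(int irq, void *data)
      {
          struct my_napi *n = data;
      
          WRITE_ONCE(n->interrupts_masked, true);
          smp_wmb(); /* commit the flag store before scheduling napi */
          napi_schedule_irqoff(&n->napi);
      
          return IRQ_HANDLED;
      }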
      Signed-off-by: Shay Agroskin <shayagr@amazon.com>
      Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 23 May, 2020 3 commits
  4. 15 May, 2020 1 commit
  5. 04 May, 2020 2 commits
  6. 29 Apr, 2020 1 commit
    • net/ena: Fix build warning in ena_xdp_set() · caec6619
      Committed by Gavin Shan
      This fixes the following build warning in ena_xdp_set(), observed on
      aarch64 with a 64KB page size.
      
         In file included from ./include/net/inet_sock.h:19,
            from ./include/net/ip.h:27,
            from drivers/net/ethernet/amazon/ena/ena_netdev.c:46:
         drivers/net/ethernet/amazon/ena/ena_netdev.c: In function ‘ena_xdp_set’:
         drivers/net/ethernet/amazon/ena/ena_netdev.c:557:6: warning: format ‘%lu’
         expects argument of type ‘long unsigned int’, but argument 4 has type
         ‘int’ [-Wformat=]
           "Failed to set xdp program, the current MTU (%d) is larger than
           the maximum allowed MTU (%lu) while xdp is on",
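      
      A hedged illustration of the usual way such a -Wformat warning is
      resolved: make the argument type match the conversion specifier. The
      variable name xdp_max_mtu below is an assumption standing in for
      whatever expression the driver prints; this is not the exact upstream
      change.
      
      netif_err(adapter, drv, netdev,
                "Failed to set xdp program, the current MTU (%d) is larger than the maximum allowed MTU (%lu) while xdp is on",
                netdev->mtu, (unsigned long)xdp_max_mtu); /* illustrative cast */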
      Signed-off-by: Gavin Shan <gshan@redhat.com>
      Acked-by: Shay Agroskin <shayagr@amazon.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 27 Feb, 2020 1 commit
  8. 25 Feb, 2020 1 commit
  9. 12 Feb, 2020 1 commit
    • net: ena: fix incorrectly saving queue numbers when setting RSS indirection table · 92569fd2
      Committed by Arthur Kiyanovski
      The indirection table holds the indices of the Rx queues. When we
      store it during a set-indirection operation, we convert the indices to
      our internal representation.
      
      Our internal representation of the indices is: even indices for Tx and
      odd indices for Rx, where every Tx/Rx pair appears in consecutive
      order starting from 0. For example, if the driver has 3 queues (3 for
      Tx and 3 for Rx) then the indices are as follows:
      0  1  2  3  4  5
      Tx Rx Tx Rx Tx Rx
      
      The BUG:
      The issue is that when we satisfy a get request for the indirection
      table, we don't convert the indices back to the original representation.
      
      The FIX:
      Simply apply the inverse function to the indices of the indirection
      table after we set it (see the sketch below).
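      
      A minimal sketch of the index mapping described above, in plain C
      (illustrative only, not the driver code): a Tx/Rx pair shares a queue
      number, Tx takes the even internal index and Rx the odd one, and the
      inverse function recovers the Rx queue number from the stored internal
      index.
      
      #include <assert.h>
      
      /* Rx queue number -> internal (odd) index, and its inverse. */
      static int rx_queue_to_internal(int rx_qid)   { return rx_qid * 2 + 1; }
      static int internal_to_rx_queue(int internal) { return internal / 2;   }
      
      int main(void)
      {
          for (int q = 0; q < 3; q++)
              assert(internal_to_rx_queue(rx_queue_to_internal(q)) == q);
          return 0;
      }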
      
      Fixes: 1738cd3e ("net: ena: Add a driver for Amazon Elastic Network Adapters (ENA)")
      Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
      Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 13 Dec, 2019 3 commits
  11. 07 Oct, 2019 3 commits
  12. 17 Sep, 2019 3 commits
  13. 27 Jun, 2019 1 commit
  14. 13 Jun, 2019 4 commits
  15. 04 Jun, 2019 3 commits
  16. 13 Feb, 2019 1 commit
  17. 20 Nov, 2018 1 commit
  18. 12 Oct, 2018 6 commits
  19. 09 Sep, 2018 1 commit
  20. 03 Jan, 2018 1 commit