1. 07 Oct 2019, 1 commit
  2. 05 Sep 2019, 1 commit
  3. 30 Aug 2019, 2 commits
    • dpaa2-eth: Add pause frame support · 8eb3cef8
      Ioana Radulescu authored
      Starting with firmware version MC10.18.0, we have support for
      L2 flow control. Asymmetrical configuration (Rx or Tx only) is
      supported, but not pause frame autonegotiation.
      
      Pause frame configuration is done via ethtool. By default, we start
      with flow control enabled on both Rx and Tx. Changes are propagated
      to hardware through firmware commands, using two flags (PAUSE,
      ASYM_PAUSE) to specify Rx and Tx pause configuration, as follows:
      
      PAUSE | ASYM_PAUSE | Rx pause | Tx pause
      ----------------------------------------
        0   |     0      | disabled | disabled
        0   |     1      | disabled | enabled
        1   |     0      | enabled  | enabled
        1   |     1      | enabled  | disabled
      
      The hardware can automatically send pause frames when the number
      of buffers in the pool goes below a predefined threshold. Due to
      this, flow control is incompatible with Rx frame queue taildrop
      (both mechanisms target the case when processing of ingress
      frames can't keep up with the Rx rate; for large frames, the number
      of buffers in the pool may never get low enough to trigger pause
      frames as long as taildrop is enabled). We therefore make pause
      frame generation and Rx FQ taildrop mutually exclusive.
      Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
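      The flag encoding in the table above reduces to a simple rule: PAUSE tracks the Rx pause state, and ASYM_PAUSE is set whenever Rx and Tx differ. A minimal sketch of that rule, with illustrative flag values and a hypothetical function name rather than the driver's real DPNI constants:

```c
#include <stdbool.h>

/* Illustrative flag values; the driver's real constants come from the
 * DPNI firmware interface headers. */
#define LINK_OPT_PAUSE      0x1ull
#define LINK_OPT_ASYM_PAUSE 0x2ull

/* Encode the desired Rx/Tx pause state into the two firmware flags,
 * following the table in the commit message. */
static unsigned long long pause_flags(bool rx_pause, bool tx_pause)
{
    unsigned long long opts = 0;

    if (rx_pause)
        opts |= LINK_OPT_PAUSE;        /* PAUSE set iff Rx pause enabled */
    if (rx_pause != tx_pause)
        opts |= LINK_OPT_ASYM_PAUSE;   /* ASYM_PAUSE set iff Rx != Tx */
    return opts;
}
```

      Note how ASYM_PAUSE alone means Tx-only pause, while PAUSE together with ASYM_PAUSE means Rx-only, matching the third and fourth table rows.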
    • dpaa2-eth: Use stored link settings · cce62943
      Ioana Radulescu authored
      Whenever a link state change occurs, we get notified and save
      the new link settings in the device's private data. In ethtool
      get_link_ksettings, use the stored state instead of interrogating
      the firmware each time.
      Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
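      The caching pattern described above can be sketched in userspace as follows; the structures and names are assumptions for illustration, not the driver's actual types:

```c
#include <stdbool.h>

/* Hypothetical stand-in for the link state reported by firmware. */
struct link_state {
    unsigned int speed;
    bool up;
};

/* Device private data holding the cached copy. */
struct eth_priv {
    struct link_state link;
};

/* Link state change notification: save the new settings. */
static void save_link_state(struct eth_priv *priv, struct link_state st)
{
    priv->link = st;
}

/* get_link_ksettings path: answer from the cache instead of issuing a
 * firmware command on every query. */
static struct link_state cached_link_state(const struct eth_priv *priv)
{
    return priv->link;
}
```

      The trade-off is the usual one for cached state: queries become cheap, and correctness relies on every link change generating a notification that refreshes the cache.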
  4. 13 Jun 2019, 3 commits
  5. 10 Jun 2019, 2 commits
  6. 27 May 2019, 1 commit
  7. 24 May 2019, 1 commit
    • Revert "dpaa2-eth: configure the cache stashing amount on a queue" · 16fa1cf1
      Ioana Radulescu authored
      This reverts commit f8b99585.
      
      The reverted change instructed the QMan hardware block to prefetch
      the Rx frame annotation and the beginning of the frame data into
      the cache before the core reads them.
      
      It turns out that in rare cases, a QMan stashing transaction can
      be delayed long enough that, by the time it executes, the frame in
      question has already been dequeued by the core and software
      processing has begun on it. If the core manages to unmap the frame
      buffer _before_ the stashing transaction executes, an SMMU
      exception is raised.
      
      Unfortunately there is no easy way to work around this while keeping
      the performance advantages brought by QMan stashing, so disable
      it altogether.
      Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 17 Apr 2019, 4 commits
  9. 27 Mar 2019, 2 commits
  10. 21 Mar 2019, 1 commit
  11. 04 Mar 2019, 2 commits
  12. 27 Feb 2019, 1 commit
  13. 07 Feb 2019, 3 commits
  14. 20 Jan 2019, 1 commit
  15. 18 Jan 2019, 1 commit
    • dpaa2-eth: Fix ndo_stop routine · 68d74315
      Ioana Ciocoi Radulescu authored
      In the current implementation, on interface down we disabled NAPI and
      then manually drained any remaining ingress frames. This could lead
      to a situation where, under heavy traffic, the data availability
      notification for some of the channels would not get rearmed correctly.
      
      Change the implementation such that we let all remaining ingress frames
      be processed as usual and only disable NAPI once the hardware queues
      are empty.
      
      We also add a wait on the Tx side, to allow the hardware time to
      process all in-flight Tx frames before issuing the disable command.
      Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
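      The shutdown order described above — keep processing until the hardware queues drain, then disable NAPI — can be modeled roughly as below; the retry count and all names are assumptions, and real NAPI polling is replaced by a toy stand-in:

```c
#include <stdbool.h>

/* Toy model of a channel; in the driver, pending_frames would come from
 * querying the hardware frame queues. */
struct channel {
    int pending_frames;
    bool napi_enabled;
};

/* Stand-in for one NAPI poll iteration consuming one pending frame. */
static void napi_poll_once(struct channel *ch)
{
    if (ch->pending_frames > 0)
        ch->pending_frames--;
}

/* ndo_stop order from the commit: let remaining ingress frames be
 * processed as usual, and only disable NAPI once the queues are empty. */
static int stop_channel(struct channel *ch, int max_polls)
{
    while (ch->pending_frames > 0 && max_polls-- > 0)
        napi_poll_once(ch);

    if (ch->pending_frames > 0)
        return -1;      /* queues did not drain in time */

    ch->napi_enabled = false;
    return 0;
}
```

      The key property is the ordering: NAPI is disabled only after the drain check succeeds, so no notification can be left unarmed with frames still queued.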
  16. 12 Jan 2019, 1 commit
  17. 30 Nov 2018, 1 commit
  18. 29 Nov 2018, 8 commits
  19. 17 Nov 2018, 3 commits
    • dpaa2-eth: bql support · 569dac6a
      Ioana Ciocoi Radulescu authored
      Add support for byte queue limits (BQL).
      
      On NAPI poll, we count the total number of Tx confirmed frames and
      bytes, and register them with BQL at the end of the poll function.
      Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
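      The accumulate-then-report pattern can be sketched as follows; in the driver the final report would be a single netdev_tx_completed_queue() call, modeled here with output parameters, and all names are illustrative:

```c
/* Per-poll accounting of Tx confirmation frames and bytes. */
struct tx_conf_stats {
    unsigned int frames;
    unsigned long bytes;
};

/* Called for each confirmed Tx frame dequeued during the poll. */
static void tx_conf_add(struct tx_conf_stats *st, unsigned long len)
{
    st->frames++;
    st->bytes += len;
}

/* End of the NAPI poll: hand the totals to BQL in one update
 * (netdev_tx_completed_queue() in the real driver), then reset. */
static void tx_conf_flush(struct tx_conf_stats *st,
                          unsigned int *pkts, unsigned long *bytes)
{
    *pkts = st->frames;
    *bytes = st->bytes;
    st->frames = 0;
    st->bytes = 0;
}
```

      Batching the report once per poll, rather than per frame, keeps the BQL bookkeeping off the per-packet fast path.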
    • dpaa2-eth: Update callback signature · dbcdf728
      Ioana Ciocoi Radulescu authored
      Change the frame consume callback signature:
      * the entire FQ structure is passed to the callback instead
      of just the queue index
      * the NAPI structure can easily be obtained from the channel it
      is associated with, so we don't need to pass it explicitly
      Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
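      A rough sketch of the new shape of the callback; the struct layout and all names are assumptions, meant only to show that the FQ carries enough context (its index, and through its channel the NAPI instance) on its own:

```c
/* Hypothetical channel; in the driver, the NAPI context lives here. */
struct channel {
    int id;
};

struct fq;  /* forward declaration so the callback can take the FQ */

/* New-style consume callback: receives the whole FQ instead of just a
 * queue index, so no separate NAPI argument is needed. */
struct fq {
    int idx;
    struct channel *channel;
    void (*consume)(struct fq *fq, int *out_channel_id);
};

static void rx_consume(struct fq *fq, int *out_channel_id)
{
    /* Everything is reachable from the FQ itself. */
    *out_channel_id = fq->channel->id;
}
```

      Passing the richer structure also future-proofs the callback: new per-queue state becomes visible to consumers without another signature change.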
    • dpaa2-eth: Don't use multiple queues per channel · b0e4f37b
      Ioana Ciocoi Radulescu authored
      The DPNI object on which we build a network interface has a
      certain number of {Rx, Tx, Tx confirmation} frame queues as
      resources. The default hardware setup offers one queue of each
      type, as well as one DPCON channel, for each core available
      in the system.
      
      There are however cases where the number of queues is greater
      than the number of cores or channels. Until now, we configured
      and used all the frame queues associated with a DPNI, even if it
      meant assigning multiple queues of one type to the same channel.
      
      Update the driver to only use a number of queues equal to the
      number of channels, ensuring each channel will contain exactly
      one Rx and one Tx confirmation queue.
      
      From the user's viewpoint, this change is completely transparent.
      Performance-wise, there is no impact in most scenarios. In case
      the number of queues is larger than, and not a multiple of, the
      number of channels, Rx hash distribution now offers better load
      balancing between cores, which can have a positive impact on
      overall system performance.
      Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
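      The queue-count decision described above amounts to capping the number of queues in use at the channel count; a one-line sketch with illustrative names:

```c
/* Use at most one {Rx, Tx confirmation} queue pair per channel: cap the
 * queue count reported by the DPNI at the number of available channels. */
static int num_queues_to_use(int dpni_num_queues, int num_channels)
{
    return dpni_num_queues < num_channels ? dpni_num_queues : num_channels;
}
```

      With this cap in place, each channel ends up with exactly one Rx and one Tx confirmation queue, and any surplus hardware queues simply go unused.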
  20. 10 Nov 2018, 1 commit