1. 18 Feb 2011, 1 commit
    • sfc: Implement hardware acceleration of RFS · 64d8ad6d
      Committed by Ben Hutchings
      Use the existing filter management functions to insert TCP/IPv4 and
      UDP/IPv4 4-tuple filters for Receive Flow Steering.
      
      For each channel, track how many RFS filters are being added during
      processing of received packets and scan the corresponding number of
      table entries for filters that may be reclaimed.  Do this in batches
      to reduce lock overhead.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
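      The batched reclaim described above can be sketched as a toy model in plain C. The structures and names below (filter_entry, channel, expire_batch) are simplified stand-ins, not the sfc driver's: the point is only that the scan length is tied to the number of recent insertions, which bounds the work done per batch under the filter-table lock.

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      #define FILTER_TABLE_SIZE 8

      struct filter_entry {
          bool in_use;
          bool flow_active;   /* stands in for "flow still being steered" */
      };

      struct channel {
          struct filter_entry table[FILTER_TABLE_SIZE];
          unsigned int rfs_filters_added;  /* since the last expiry scan */
          unsigned int expiry_index;       /* scan resumes here next time */
      };

      /* Scan exactly as many slots as filters were recently added,
       * reclaiming entries whose flows have gone idle. */
      static unsigned int expire_batch(struct channel *ch)
      {
          unsigned int reclaimed = 0;

          for (unsigned int n = 0; n < ch->rfs_filters_added; n++) {
              struct filter_entry *e = &ch->table[ch->expiry_index];

              if (e->in_use && !e->flow_active) {
                  e->in_use = false;
                  reclaimed++;
              }
              ch->expiry_index = (ch->expiry_index + 1) % FILTER_TABLE_SIZE;
          }
          ch->rfs_filters_added = 0;
          return reclaimed;
      }

      int main(void)
      {
          struct channel ch = { 0 };

          /* Insert three filters; the second one's flow then goes idle. */
          for (int i = 0; i < 3; i++) {
              ch.table[i].in_use = true;
              ch.table[i].flow_active = true;
              ch.rfs_filters_added++;
          }
          ch.table[1].flow_active = false;

          unsigned int got = expire_batch(&ch);
          assert(got == 1);
          assert(!ch.table[1].in_use);
          assert(ch.rfs_filters_added == 0);
          printf("reclaimed %u\n", got);
          return 0;
      }
      ```

      Because the scan counter resets after each batch, a quiet channel does no reclaim work at all, while a busy one amortises reclaim against its own insertion rate.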
  2. 17 Feb 2011, 1 commit
  3. 16 Feb 2011, 5 commits
    • net: RPS: Make hardware-accelerated RFS conditional on NETIF_F_NTUPLE · 69a19ee6
      Committed by Ben Hutchings
      For testing and debugging purposes it is useful to be able to disable
      hardware acceleration of RFS without disabling RFS altogether.  Since
      this is a similar feature to 'n-tuple' flow steering through the
      ethtool API, test the same feature flag that controls that.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
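      The gating this commit describes can be sketched as follows. The struct and the flag value here are illustrative, not the kernel's definitions; the shape of the check — test a device feature bit before attempting the accelerated path — is what the commit relies on.

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Illustrative flag value; the real NETIF_F_NTUPLE bit differs. */
      #define NETIF_F_NTUPLE (1u << 27)

      struct net_device {
          uint32_t features;
      };

      /* Hardware-accelerated RFS is only attempted when the device
       * advertises n-tuple filtering; otherwise software RFS is used. */
      static bool rfs_may_use_hw(const struct net_device *dev)
      {
          return (dev->features & NETIF_F_NTUPLE) != 0;
      }

      int main(void)
      {
          struct net_device dev = { .features = NETIF_F_NTUPLE };

          assert(rfs_may_use_hw(&dev));       /* accelerated path */
          dev.features &= ~NETIF_F_NTUPLE;    /* e.g. 'ethtool -K eth0 ntuple off' */
          assert(!rfs_may_use_hw(&dev));      /* software RFS still works */
          printf("ok\n");
          return 0;
      }
      ```

      Reusing the existing ethtool-visible flag means no new knob is needed: clearing n-tuple filtering disables only the hardware acceleration, leaving RFS itself intact.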
    • sfc: Add TX queues for high-priority traffic · 94b274bf
      Committed by Ben Hutchings
      Implement the ndo_setup_tc() operation with 2 traffic classes.
      
      Current Solarstorm controllers do not implement TX queue priority, but
      they do allow queues to be 'paced' with an enforced delay between
      packets.  Paced and unpaced queues are scheduled in round-robin within
      two separate hardware bins (paced queues with a large delay may be
      placed into a third bin temporarily, but we won't use that).  If there
      are queues in both bins, the TX scheduler will alternate between them.
      
      If we make high-priority queues unpaced and best-effort queues paced,
      and high-priority queues are mostly empty, a single high-priority queue
      can then instantly take 50% of the packet rate regardless of how many
      of the best-effort queues have descriptors outstanding.
      
      We do not actually want an enforced delay between packets on best-
      effort queues, so we set the pace value to a reserved value that
      actually results in a delay of 0.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
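      The 50% claim above follows directly from the alternation between the two bins, and a toy simulation (not driver code; the scheduler model is simplified to strict alternation with round-robin inside each bin) makes it concrete: one busy high-priority queue in the unpaced bin gets half the packet slots no matter how many best-effort queues are backlogged in the paced bin.

      ```c
      #include <assert.h>
      #include <stdio.h>

      #define SLOTS 1000
      #define N_BE  4   /* best-effort queues in the paced bin */

      int main(void)
      {
          int hi_sent = 0, be_sent[N_BE] = { 0 };
          int be_rr = 0;   /* round-robin cursor within the paced bin */

          /* The TX scheduler alternates between the two bins whenever
           * both contain backlogged queues. */
          for (int slot = 0; slot < SLOTS; slot++) {
              if (slot % 2 == 0) {
                  hi_sent++;                 /* unpaced bin: the one hi-pri queue */
              } else {
                  be_sent[be_rr]++;          /* paced bin (pace encodes delay 0) */
                  be_rr = (be_rr + 1) % N_BE;
              }
          }

          assert(hi_sent == SLOTS / 2);      /* 50%, independent of N_BE */
          assert(be_sent[0] == SLOTS / (2 * N_BE));
          printf("hi=%d be0=%d\n", hi_sent, be_sent[0]);
          return 0;
      }
      ```

      This also shows why the reserved zero-delay pace value matters: best-effort queues must sit in the paced bin to keep the bins separate, but should not actually be throttled.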
    • sfc: Distinguish queue lookup from test for queue existence · 525da907
      Committed by Ben Hutchings
      efx_channel_get_{rx,tx}_queue() currently return NULL if the channel
      isn't used for traffic in that direction.  In most cases this is a
      bug, but some callers rely on it as an existence test.
      
      Add existence test functions efx_channel_has_{rx_queue,tx_queues}()
      and use them as appropriate.
      
      Change efx_channel_get_{rx,tx}_queue() to assert that the requested
      queue exists.
      
      Remove now-redundant initialisation from efx_set_channels().
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
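      The lookup/existence split can be sketched as below. The structs and signatures are heavily simplified (a single TX queue pointer rather than the driver's real layout); what carries over is the contract: the existence test is safe on any channel, while the lookup asserts instead of returning NULL, so callers that misused the old NULL return as an existence check now fail loudly.

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      struct efx_tx_queue { int label; };

      struct efx_channel {
          struct efx_tx_queue *tx_queue;   /* NULL if no TX on this channel */
      };

      /* Existence test: may be called on any channel. */
      static bool efx_channel_has_tx_queues(const struct efx_channel *ch)
      {
          return ch->tx_queue != NULL;
      }

      /* Lookup: the requested queue must exist; assert rather than
       * returning NULL. */
      static struct efx_tx_queue *
      efx_channel_get_tx_queue(struct efx_channel *ch)
      {
          assert(ch->tx_queue != NULL);
          return ch->tx_queue;
      }

      int main(void)
      {
          struct efx_tx_queue txq = { .label = 0 };
          struct efx_channel tx_ch = { .tx_queue = &txq };
          struct efx_channel rx_ch = { .tx_queue = NULL };

          assert(efx_channel_has_tx_queues(&tx_ch));
          assert(!efx_channel_has_tx_queues(&rx_ch));
          assert(efx_channel_get_tx_queue(&tx_ch) == &txq);
          printf("ok\n");
          return 0;
      }
      ```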
    • sfc: Move TX queue core queue mapping into tx.c · 60031fcc
      Committed by Ben Hutchings
      efx_hard_start_xmit() needs to implement a mapping which is the
      inverse of tx_queue::core_txq.  Move the initialisation of
      tx_queue::core_txq next to efx_hard_start_xmit() to make the
      connection more obvious.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
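      A minimal sketch of the pair of mappings, with simplified structures that are not the driver's: each driver TX queue records its core (networking stack) queue index, and the transmit path must invert that, mapping the skb's selected core queue back to the driver queue. Keeping initialisation and use side by side makes it easy to verify the two stay inverses.

      ```c
      #include <assert.h>
      #include <stdio.h>

      #define N_TXQ 4

      struct tx_queue {
          int core_txq;   /* index of the corresponding core TX queue */
      };

      struct nic {
          struct tx_queue txq[N_TXQ];
      };

      /* Inverse of tx_queue::core_txq, as needed on the transmit path
       * (an skb's queue_mapping selects a core queue index). */
      static struct tx_queue *get_tx_queue(struct nic *nic, int core_index)
      {
          return &nic->txq[core_index];
      }

      int main(void)
      {
          struct nic nic;

          for (int i = 0; i < N_TXQ; i++)
              nic.txq[i].core_txq = i;   /* identity mapping in this toy */

          /* Round-trip: driver queue -> core index -> same driver queue. */
          for (int i = 0; i < N_TXQ; i++)
              assert(get_tx_queue(&nic, nic.txq[i].core_txq) == &nic.txq[i]);
          printf("ok\n");
          return 0;
      }
      ```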
    • net: Adjust TX queue kobjects if number of queues changes during unregister · 5c56580b
      Committed by Ben Hutchings
      If the root qdisc for a net device is mqprio, and the driver's
      ndo_setup_tc() operation dynamically adds and removes TX queues,
      netif_set_real_num_tx_queues() will be called during device
      unregistration to remove the extra TX queues when the qdisc is
      destroyed.  Currently this causes the corresponding kobjects
      to be leaked, and the device's reference count never drops to 0.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
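      The leak can be modelled with a toy refcount, under the assumption (a deliberate simplification, not the kernel's code) that each TX queue kobject holds one reference on its device: if shrinking the real queue count during unregister skips releasing those kobjects, the device reference count never reaches zero and the device is never freed.

      ```c
      #include <assert.h>
      #include <stdio.h>

      struct dev {
          int refcount;    /* one reference per registered queue kobject */
          int num_queues;
      };

      static void set_real_num_tx_queues(struct dev *d, int n)
      {
          while (d->num_queues > n) {
              d->num_queues--;
              d->refcount--;   /* the fix: release the kobject's reference
                                  even when called on the unregister path */
          }
          while (d->num_queues < n) {
              d->num_queues++;
              d->refcount++;
          }
      }

      int main(void)
      {
          struct dev d = { .refcount = 6, .num_queues = 6 };

          set_real_num_tx_queues(&d, 2);   /* mqprio qdisc destroyed */
          set_real_num_tx_queues(&d, 0);   /* remaining queues torn down */
          assert(d.refcount == 0);         /* device can now be freed */
          printf("refcount=%d\n", d.refcount);
          return 0;
      }
      ```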
  4. 15 Feb 2011, 1 commit
  5. 09 Feb 2011, 14 commits
  6. 08 Feb 2011, 18 commits