1. 21 Feb, 2015 1 commit
  2. 16 Jan, 2015 1 commit
  3. 14 Jan, 2015 2 commits
  4. 13 Dec, 2014 1 commit
    • net/macb: add TX multiqueue support for gem · 02c958dd
      Authored by Cyrille Pitchen
      gem devices designed with multiqueue CANNOT work without this patch.
      
      When probing a gem device, the driver must first prepare and enable the
      peripheral clock before accessing I/O registers. The second step is to read the
      MID register to find out whether the device is a gem or an old macb IP.
      For gem devices, it reads the Design Configuration Register 6 (DCFG6) to
      compute the total number of queues, whereas macb devices always have a single
      queue.
      Only then can it call alloc_etherdev_mq() with the correct number of queues.
      This is the reason why the order of some initializations has been changed in
      macb_probe().
      Finally, the dedicated IRQ and TX ring buffer descriptors are initialized
      for each queue.
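      
      As a rough sketch of that discovery step (register names, bit layout and the
      macb_is_gem_hw() helper below are illustrative assumptions, not necessarily the
      driver's exact code):
      
        /* Sketch: find out how many TX queues the hardware implements,
         * before the netdev is allocated. */
        static void macb_probe_queues(void __iomem *mem,
                                      unsigned int *queue_mask,
                                      unsigned int *num_queues)
        {
                unsigned int hw_q;
      
                *queue_mask = 0x1;              /* queue 0 always exists */
                *num_queues = 1;
      
                if (!macb_is_gem_hw(mem))       /* old macb IP: single queue */
                        return;
      
                /* DCFG6 reports which of queues 1..7 are implemented */
                *queue_mask |= readl_relaxed(mem + GEM_DCFG6) & 0xff;
      
                for (hw_q = 1; hw_q < 8; ++hw_q)
                        if (*queue_mask & BIT(hw_q))
                                (*num_queues)++;
        }
      
        /* ...and only then, back in macb_probe(), with the value computed above: */
        dev = alloc_etherdev_mq(sizeof(struct macb), num_queues);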
      
      For backward compatibility reasons, queue 0 uses the legacy registers ISR, IER,
      IDR, IMR, TBQP and RBQP, whereas the other queues use the new registers
      ISR[1..7], IER[1..7], IDR[1..7], IMR[1..7], TBQP[1..7] and RBQP[1..7].
      Apart from this hardware detail there is no real difference between queue 0 and
      the others. The driver hides that difference behind struct macb_queue.
      This structure allows a common set of functions to be shared by all the queues.
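      
      As an illustration of that idea (the field list below is an assumption based on
      the description above, not the exact layout in the driver), the per-queue
      structure can be pictured as:
      
        /* Per-queue state: each queue carries its own register offsets, so
         * queue 0 (legacy ISR/IER/IDR/IMR/TBQP) and queues 1..7 (ISR[n],
         * IER[n], ...) go through the same code paths. */
        struct macb_queue {
                struct macb             *bp;
                int                     irq;
      
                unsigned int            ISR;    /* interrupt status reg offset */
                unsigned int            IER;    /* interrupt enable reg offset */
                unsigned int            IDR;    /* interrupt disable reg offset */
                unsigned int            IMR;    /* interrupt mask reg offset */
                unsigned int            TBQP;   /* TX buffer queue base pointer offset */
      
                unsigned int            tx_head, tx_tail;
                struct macb_dma_desc    *tx_ring;
                struct macb_tx_skb      *tx_skb;
                dma_addr_t              tx_ring_dma;
                struct work_struct      tx_error_task;
        };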
      
      Besides, when a TX error occurs, the gem MUST be halted before writing any of
      the TBQP registers to reset the relevant queue. An immediate side effect is
      that the other queues are no longer processed by the gem either.
      So macb_tx_error_task() calls netif_tx_stop_all_queues() to notify the Linux
      network engine that all transmissions are stopped.
      
      Also macb_tx_error_task() now calls spin_lock_irqsave() to prevent the
      interrupt handlers of the other queues from running as each of them may wake
      its associated queue up (please refer to macb_tx_interrupt()).
      
      Finally, as all queues have previously been stopped, they should be restarted by
      calling netif_tx_start_all_queues() and by setting the TSTART bit in the Network
      Control Register. Before this patch, when dealing with a single queue, the
      driver used to defer the reset of the faulting queue and the write of the
      TSTART bit until the next call of macb_start_xmit().
      As explained before, this bit is now set by macb_tx_error_task() too. That is
      why the faulting queue MUST be reset, by setting the TX_USED bit in its first
      buffer descriptor, before the TSTART bit is written.
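      
      Put together, the error path can be sketched roughly as below; helper names such
      as macb_halt_tx() and the register macros are assumptions used for illustration:
      
        static void macb_tx_error_task(struct work_struct *work)
        {
                struct macb_queue *queue = container_of(work, struct macb_queue,
                                                        tx_error_task);
                struct macb *bp = queue->bp;
                unsigned long flags;
      
                /* halting the controller affects every queue, so tell the stack */
                netif_tx_stop_all_queues(bp->dev);
      
                /* keep the other queues' IRQ handlers from waking their queues up */
                spin_lock_irqsave(&bp->lock, flags);
      
                macb_halt_tx(bp);       /* gem must be halted before touching TBQP */
      
                /* ... complete or free the skbs still sitting in this queue's ring ... */
      
                /* reset the faulting queue: mark its first descriptor as used */
                queue->tx_ring[0].ctrl = MACB_BIT(TX_USED);
                queue->tx_head = queue->tx_tail = 0;
                wmb();
      
                /* restart transmission and wake all queues again */
                macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
                netif_tx_start_all_queues(bp->dev);
      
                spin_unlock_irqrestore(&bp->lock, flags);
        }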
      
      Queue 0 always exists and has the lowest priority when other queues are available.
      The higher the index of a queue, the higher its priority.
      
      When transmitting frames, the TX queue is selected by the skb->queue_mapping
      value, so a queueing discipline can be used to define the queue priority policy.
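      
      A minimal sketch of the transmit entry point under this scheme (the bp->queues
      array holding the per-queue state is an assumed field name):
      
        static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
        {
                struct macb *bp = netdev_priv(dev);
                u16 queue_index = skb_get_queue_mapping(skb);   /* chosen by the core/qdisc */
                struct macb_queue *queue = &bp->queues[queue_index];
      
                /* ... map the skb and place it on this queue's TX ring ... */
      
                return NETDEV_TX_OK;
        }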
      Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 25 Jul, 2014 3 commits
    • net/macb: add RX checksum offload feature · 924ec53c
      Authored by Cyrille Pitchen
      When RX checksum offload is enabled at GEM level (bit 24 set in the Network
      Configuration Register), frames with invalid IP, TCP or UDP checksums are
      discarded even if promiscuous mode is enabled (bit 4 set in the Network
      Configuration Register).
      
      This was verified with a simple userspace program, which corrupts the UDP checksum
      using libnetfilter_queue.
      
      Therefore, to enable RX checksum offload at GEM level, the IFF_PROMISC bit must be
      clear in dev->flags and the NETIF_F_RXCSUM bit must be set in dev->features. This
      way tcpdump is still able to capture corrupted frames.
      
      Also, skb->ip_summed is set to CHECKSUM_UNNECESSARY only when both the IP and the
      TCP or UDP checksums were verified by the GEM. Indeed, the GEM may verify only the
      IP checksum but not the one for ICMP (or protocols other than TCP and UDP).
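      
      A minimal sketch of that RX-path decision (the GEM_BFEXT()/GEM_RX_CSUM_CHECKED_MASK
      names used to read the checksum status out of the descriptor are assumptions):
      
        /* Only trust the hardware when offload is on, the interface is not in
         * promiscuous mode, and the descriptor reports that both the IP and the
         * TCP or UDP checksums were actually checked. */
        if ((bp->dev->features & NETIF_F_RXCSUM) &&
            !(bp->dev->flags & IFF_PROMISC) &&
            (GEM_BFEXT(RX_CSUM, ctrl) & GEM_RX_CSUM_CHECKED_MASK))
                skb->ip_summed = CHECKSUM_UNNECESSARY;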
      Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/macb: add scatter-gather hw feature · a4c35ed3
      Authored by Cyrille Pitchen
      The scatter-gather feature allows Generic Segmentation Offload (GSO) to be enabled.
      GSO can be enabled or disabled using ethtool -K DEVNAME gso on|off.
      
      e.g:
      ethtool -K eth0 gso off
      
      When enabled, the driver may be handed socket buffers split into many fragments.
      These fragments need to be queued into the TX ring in reverse order, starting from
      the last one down to the first one, to avoid a race condition with the MAC.
      In particular, the 'TX_USED' bit in word 1 of the transmit buffer descriptor of the
      first fragment should be cleared only at the very final step of the queueing
      algorithm. This tells the hardware that the fragments are ready to be sent.
      
      Also, since the MAC only updates the status word of the first buffer descriptor of
      the ethernet frame, the queueing algorithm can no longer expect a 'TX_USED' bit to
      be set by the MAC in the buffer descriptor following the one for the last fragment
      of the skb. This is why the driver sets the 'TX_USED' bit before queueing any
      fragment, so the end-of-queue position is well defined for the MAC.
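      
      The reverse-order queueing can be sketched as below; the descriptor layout, the
      frag_len[]/frag_mapping[] bookkeeping and the macb_tx_desc() helper are assumptions:
      
        /* Fill descriptors from the last fragment back to the first.  Every
         * descriptor is written with TX_USED set (still owned by software);
         * only the first one is handed over to the MAC, as the very last step. */
        for (i = nr_frags - 1; i >= 0; i--) {
                struct macb_dma_desc *desc = macb_tx_desc(queue, tx_head + i);
                u32 ctrl = frag_len[i] | MACB_BIT(TX_USED);
      
                if (i == nr_frags - 1)
                        ctrl |= MACB_BIT(TX_LAST);      /* last buffer of the frame */
      
                desc->addr = frag_mapping[i];
                desc->ctrl = ctrl;
        }
      
        wmb();  /* descriptors must be visible before ownership changes hands */
      
        /* hand the whole frame to the MAC in a single step */
        first_desc->ctrl &= ~MACB_BIT(TX_USED);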
      Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/macb: configure for FIFO mode and non-gigabit · e175587f
      Authored by Nicolas Ferre
      This addition will also allow the DMA burst length to be configured.
      Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
      Acked-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 11 Dec, 2013 1 commit
  7. 07 Jun, 2013 2 commits
  8. 15 May, 2013 1 commit
  9. 29 Mar, 2013 1 commit
  10. 24 Nov, 2012 1 commit
  11. 15 Nov, 2012 1 commit
  12. 08 Nov, 2012 3 commits
  13. 01 Nov, 2012 4 commits
  14. 20 Oct, 2012 8 commits
  15. 16 Dec, 2011 1 commit
  16. 22 Nov, 2011 5 commits
  17. 13 Aug, 2011 1 commit
  18. 09 Oct, 2008 1 commit
  19. 11 Oct, 2007 1 commit
    • [NET]: Make NAPI polling independent of struct net_device objects. · bea3348e
      Authored by Stephen Hemminger
      Several devices have multiple independent RX queues per net
      device, and some have a single interrupt doorbell for several
      queues.
      
      In either case, it's easier to support layouts like that if the
      structure representing the poll is independent from the net
      device itself.
      
      The signature of the ->poll() call back goes from:
      
      	int foo_poll(struct net_device *dev, int *budget)
      
      to
      
      	int foo_poll(struct napi_struct *napi, int budget)
      
      The caller is returned the number of RX packets processed (or
      the number of "NAPI credits" consumed if you want to get
      abstract).  The callee no longer messes around bumping
      dev->quota, *budget, etc. because that is all handled in the
      caller upon return.
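      
      A minimal poll handler under the new scheme might look like this (names are
      illustrative, and the exact completion helper has varied across kernel versions):
      
        /* Consume up to 'budget' packets and report how many were processed. */
        static int foo_poll(struct napi_struct *napi, int budget)
        {
                struct foo_priv *priv = container_of(napi, struct foo_priv, napi);
                int work_done = foo_clean_rx(priv, budget);     /* driver-specific RX work */
      
                if (work_done < budget)
                        napi_complete(napi);    /* done for now; the driver would also
                                                 * re-enable its RX interrupt here */
      
                return work_done;
        }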
      
      The napi_struct is to be embedded in the device driver private data
      structures.
      
      Furthermore, it is the driver's responsibility to disable all NAPI
      instances in its ->stop() device close handler.  Since the
      napi_struct is privatized into the driver's private data structures,
      only the driver knows how to get at all of the napi_struct instances
      it may have per-device.
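      
      A sketch of that embedding and of the shutdown responsibility (names illustrative):
      
        struct foo_priv {
                struct napi_struct napi;        /* embedded in the driver's private data */
                /* ... rings, registers, stats ... */
        };
      
        /* probe/open: register the instance with the stack and enable it */
        netif_napi_add(dev, &priv->napi, foo_poll, 64);
        napi_enable(&priv->napi);
      
        /* ->stop(): the driver must disable every napi_struct it owns, since
         * only the driver knows where they all live */
        napi_disable(&priv->napi);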
      
      With lots of help and suggestions from Rusty Russell, Roland Dreier,
      Michael Chan, Jeff Garzik, and Jamal Hadi Salim.
      
      Bug fixes from Thomas Graf, Roland Dreier, Peter Zijlstra,
      Joseph Fannin, Scott Wood, Hans J. Koch, and Michael Chan.
      
      [ Ported to current tree and all drivers converted.  Integrated
        Stephen's follow-on kerneldoc additions, and restored poll_list
        handling to the old style to fix mutual exclusion issues.  -DaveM ]
      Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  20. 17 Jul, 2007 1 commit