1. 26 Jun, 2017 (1 commit)
  2. 07 Apr, 2017 (1 commit)
  3. 11 Feb, 2017 (1 commit)
  4. 30 Jan, 2017 (1 commit)
  5. 20 Jan, 2017 (1 commit)
  6. 17 Nov, 2016 (1 commit)
  7. 20 Oct, 2016 (1 commit)
  8. 19 Aug, 2016 (1 commit)
  9. 11 Aug, 2016 (1 commit)
    • net: macb: Add 64 bit addressing support for GEM · fff8019a
      Authored by Harini Katakam
      This patch adds support for 64 bit addressing and BDs:
      -> Enable 64 bit addressing in the DMACFG register.
      -> Set the DMA mask when the design config register shows support for 64 bit addressing.
      -> Add new BD words for the upper address bits when 64 bit DMA support is present.
      -> Add and update TBQPH and RBQPH for the MSB of the BD pointers.
      -> Change extraction and updating of buffer addresses to use the full
      64 bit address.
      -> In gem_rx, extract the address in one place instead of two and use a
      separate flag for RXUSED.
      Signed-off-by: Harini Katakam <harinik@xilinx.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fff8019a
  10. 07 Aug, 2016 (1 commit)
  11. 25 Jun, 2016 (1 commit)
  12. 14 Mar, 2016 (1 commit)
  13. 11 Feb, 2016 (1 commit)
  14. 08 Jan, 2016 (1 commit)
  15. 15 Dec, 2015 (1 commit)
  16. 19 Nov, 2015 (1 commit)
  17. 28 Jul, 2015 (1 commit)
  18. 27 Jul, 2015 (3 commits)
  19. 23 May, 2015 (1 commit)
  20. 10 May, 2015 (1 commit)
  21. 01 Apr, 2015 (4 commits)
  22. 08 Mar, 2015 (2 commits)
  23. 06 Mar, 2015 (1 commit)
  24. 02 Mar, 2015 (1 commit)
  25. 21 Feb, 2015 (1 commit)
  26. 16 Jan, 2015 (1 commit)
  27. 14 Jan, 2015 (2 commits)
  28. 13 Dec, 2014 (1 commit)
    • net/macb: add TX multiqueue support for gem · 02c958dd
      Authored by Cyrille Pitchen
      gem devices designed with multiqueue CANNOT work without this patch.

      When probing a gem device, the driver must first prepare and enable the
      peripheral clock before accessing I/O registers. The second step is to read the
      MID register to find out whether the device is a gem or an old macb IP.
      For gem devices, it reads the Design Configuration Register 6 (DCFG6) to
      compute the total number of queues, whereas macb devices always have a single
      queue.
      Only then can it call alloc_etherdev_mq() with the correct number of queues.
      This is the reason why the order of some initializations has been changed in
      macb_probe().
      Finally, the dedicated IRQ and TX ring buffer descriptors are initialized
      for each queue.
      
      For backward compatibility reasons, queue0 uses the legacy registers ISR, IER,
      IDR, IMR, TBQP and RBQP. On the other hand, the other queues use new registers
      ISR[1..7], IER[1..7], IDR[1..7], IMR[1..7], TBQP[1..7] and RBQP[1..7].
      Apart from this hardware detail, there is no real difference between queue 0 and
      the others. The driver hides this behind struct macb_queue, which
      allows a common set of functions to be shared by all the queues.
      
      Besides, when a TX error occurs, the gem MUST be halted before writing any of
      the TBQP registers to reset the relevant queue. An immediate side effect is
      that the other queues are no longer processed by the gem either.
      So macb_tx_error_task() calls netif_tx_stop_all_queues() to notify the Linux
      network stack that all transmissions are stopped.
      
      Also macb_tx_error_task() now calls spin_lock_irqsave() to prevent the
      interrupt handlers of the other queues from running as each of them may wake
      its associated queue up (please refer to macb_tx_interrupt()).
      
      Finally, as all queues have previously been stopped, they should be restarted
      by calling netif_tx_start_all_queues() and setting the TSTART bit in the Network
      Control Register. Before this patch, when dealing with a single queue, the
      driver used to defer the reset of the faulting queue and the write of the
      TSTART bit until the next call of macb_start_xmit().
      As explained above, this bit is now set by macb_tx_error_task() too. That is
      why the faulting queue MUST be reset by setting the TX_USED bit in its first
      buffer descriptor before the TSTART bit is written.

      Queue 0 always exists and has the lowest priority when other queues are available.
      The higher the index of the queue, the higher its priority.

      When transmitting frames, the TX queue is selected by the skb->queue_mapping
      value, so a queueing discipline can be used to define the queue priority policy.
      Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      02c958dd
  29. 25 Jul, 2014 (3 commits)
    • net/macb: add RX checksum offload feature · 924ec53c
      Authored by Cyrille Pitchen
      When RX checksum offload is enabled at GEM level (bit 24 set in the Network
      Control Register), frames with invalid IP, TCP or UDP checksums are
      discarded even if promiscuous mode is enabled (bit 4 set in the Network Control
      Register).

      This was verified with a simple userspace program that corrupts the UDP checksum
      using libnetfilter_queue.

      Therefore the IFF_PROMISC bit must be clear in dev->flags and the NETIF_F_RXCSUM bit
      must be set in dev->features to enable RX checksum offload at GEM level. This
      way tcpdump is still able to capture corrupted frames.

      Also, skb->ip_summed is set to CHECKSUM_UNNECESSARY only when both the IP and the
      TCP or UDP checksums were verified by the GEM. Indeed the GEM may verify only the IP
      checksum but not the one for ICMP (or any protocol other than TCP or UDP).
      Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      924ec53c
    • net/macb: add scatter-gather hw feature · a4c35ed3
      Authored by Cyrille Pitchen
      The scatter-gather feature allows Generic Segmentation Offload to be enabled.
      Generic Segmentation Offload can be enabled/disabled using ethtool -K DEVNAME gso on|off.

      e.g.:
      ethtool -K eth0 gso off

      When enabled, the driver may be handed socket buffers split into many fragments.
      These fragments need to be queued into the TX ring in reverse order, starting from the
      last one down to the first one, to avoid a race condition with the MAC.
      In particular, the 'TX_USED' bit in word 1 of the transmit buffer descriptor of the
      first fragment should be cleared at the very final step of the queueing algorithm.
      This tells the hardware that the fragments are ready to be sent.

      Also, since the MAC only updates the status word of the first buffer descriptor of the
      ethernet frame, the queueing algorithm can no longer expect a 'TX_USED' bit to be set by
      the MAC in the buffer descriptor following the one for the last fragment of the skb.
      This is why the driver sets the 'TX_USED' bit before queueing any fragment,
      so the end-of-queue position is well defined for the MAC.
      Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a4c35ed3
    • net/macb: configure for FIFO mode and non-gigabit · e175587f
      Authored by Nicolas Ferre
      This addition also allows the DMA burst length to be configured.
      Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
      Acked-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e175587f
  30. 11 Dec, 2013 (1 commit)
  31. 07 Jun, 2013 (1 commit)