1. 10 Mar, 2017 (3 commits)
  2. 22 Feb, 2017 (13 commits)
  3. 20 Feb, 2017 (1 commit)
  4. 18 Feb, 2017 (1 commit)
  5. 11 Feb, 2017 (1 commit)
  6. 08 Feb, 2017 (2 commits)
  7. 06 Feb, 2017 (2 commits)
  8. 31 Jan, 2017 (1 commit)
  9. 19 Jan, 2017 (1 commit)
    • net: Remove usage of net_device last_rx member · 4a7c9726
      Authored by Tobias Klauser
      The network stack no longer uses the last_rx member of struct net_device
      since the bonding driver switched to use its own private last_rx in
      commit 9f242738 ("bonding: use last_arp_rx in slave_last_rx()").
      
      However, some drivers still (ab)use the field for their own purposes and
      some drivers just update it without actually using it.
      
      Previously, there was an accompanying comment for the last_rx member
      added in commit 4dc89133 ("net: add a comment on netdev->last_rx")
      which asked drivers not to update it unless really needed. However,
      this comment was removed in commit f8ff080d ("bonding: remove
      useless updating of slave->dev->last_rx"), so some drivers added later
      on still did update last_rx.
      
      Remove all usage of last_rx and switch three drivers (sky2, atp and
      smc91c92_cs) which actually read and write it to use their own private
      copy in netdev_priv.
      
      Compile-tested with allyesconfig and allmodconfig on x86 and arm.
      
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Jay Vosburgh <j.vosburgh@gmail.com>
      Cc: Veaceslav Falico <vfalico@gmail.com>
      Cc: Andy Gospodarek <andy@greyhouse.net>
      Cc: Mirko Lindner <mlindner@marvell.com>
      Cc: Stephen Hemminger <stephen@networkplumber.org>
      Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Jay Vosburgh <jay.vosburgh@canonical.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
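      As a rough illustration of the conversion (a minimal sketch, not the actual
      sky2/atp/smc91c92_cs patches; the example_* names are hypothetical), a driver
      that previously wrote dev->last_rx can keep the timestamp in its private data
      obtained via netdev_priv() instead:

      ```c
      #include <linux/netdevice.h>
      #include <linux/etherdevice.h>
      #include <linux/skbuff.h>
      #include <linux/jiffies.h>

      struct example_priv {
              struct net_device *dev;
              unsigned long last_rx;          /* driver-private replacement for dev->last_rx */
      };

      static void example_rx_packet(struct net_device *dev, struct sk_buff *skb)
      {
              struct example_priv *priv = netdev_priv(dev);

              priv->last_rx = jiffies;        /* was: dev->last_rx = jiffies; */
              skb->protocol = eth_type_trans(skb, dev);
              netif_rx(skb);
      }
      ```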
  10. 18 Jan, 2017 (2 commits)
  11. 17 Jan, 2017 (2 commits)
    • net: mvneta: add BQL support · a29b6235
      Authored by Marcin Wojtas
      Tests showed that when the whole bandwidth is consumed, the latency for
      various kinds of traffic can reach high values. With a saturated
      link (e.g. with iperf from target to host), a simple ping could take a
      significant amount of time. BQL proved to improve this situation
      when implemented in the mvneta driver. Measurements of ping latency
      for 3 link speeds:
      Speed (Mbps) | Latency w/o BQL | Latency with BQL
      10           |     7-14 ms     |      3.5 ms
      100          |     2-12 ms     |      0.6 ms
      1000         |  often timeout  |    up to 2 ms
      
      Decreasing latency as above results in a slight performance cost: 4 kpps
      (-1.4%) when pushing 64B packets via two bridged interfaces of Armada 38x.
      For 1500B packets in the same setup, the mpstat tool showed +8% of
      CPU occupation (default affinity, second CPU idle). This cost seems
      reasonable to take, considering the other improvements.
      
      This commit adds the byte queue limit mechanism to the mvneta driver.
      Signed-off-by: Marcin Wojtas <mw@semihalf.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
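      As a hedged sketch of what BQL support in a driver's TX path looks like (using
      the generic netdev_tx_sent_queue()/netdev_tx_completed_queue() API, not the
      actual mvneta patch; all example_* names are hypothetical):

      ```c
      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      /* TX path: report the bytes handed to the hardware. */
      static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
      {
              struct netdev_queue *nq =
                      netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

              /* ... fill TX descriptors and map the skb here ... */
              netdev_tx_sent_queue(nq, skb->len);
              return NETDEV_TX_OK;
      }

      /* TX completion path: report finished work so the stack can keep the
       * hardware queue short, which is what lowers the ping latency above. */
      static void example_tx_done(struct net_device *dev, int queue,
                                  unsigned int pkts, unsigned int bytes)
      {
              struct netdev_queue *nq = netdev_get_tx_queue(dev, queue);

              netdev_tx_completed_queue(nq, pkts, bytes);
      }
      ```

      A driver typically also calls netdev_tx_reset_queue() when it resets a TX
      queue, so the BQL accounting restarts from a clean state.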
    • net: mvneta: add xmit_more support · 2a90f7e1
      Authored by Simon Guinot
      Based on the xmit_more flag of the skb, TX descriptors can be concatenated
      before flushing. This commit delays the Tx descriptor flush if the queue is
      running and there are more skbs to send.
      
      The maximum number of descriptors that may be flushed at once is 255, due
      to a limitation of the MVNETA_TXQ_UPDATE_REG(q) registers. Because of that,
      a new macro (MVNETA_TXQ_DEC_SENT_MASK) was added to ensure that the number
      of concatenated descriptors does not exceed that value.
      Signed-off-by: Simon Guinot <simon.guinot@sequanux.org>
      Signed-off-by: Marcin Wojtas <mw@semihalf.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
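      The batching idea, as a hedged sketch (not the driver's actual code; at the
      time of this commit the hint lived in the skb->xmit_more bit, and the
      example_* names plus the pending counter are illustrative):

      ```c
      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      #define EXAMPLE_TXQ_MAX_PENDING 255     /* per-write HW limit, as for MVNETA_TXQ_UPDATE_REG */

      struct example_txq {
              int id;
              unsigned int pending;           /* descriptors queued since the last flush */
      };

      struct example_txq *example_pick_txq(struct net_device *dev, struct sk_buff *skb);
      void example_fill_desc(struct example_txq *txq, struct sk_buff *skb);
      void example_flush(struct example_txq *txq, unsigned int count);

      static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
      {
              struct example_txq *txq = example_pick_txq(dev, skb);

              example_fill_desc(txq, skb);
              txq->pending++;

              /* Defer the flush while more skbs are coming, as long as the queue
               * keeps running and we stay below the hardware limit. */
              if (!skb->xmit_more ||
                  netif_xmit_stopped(netdev_get_tx_queue(dev, txq->id)) ||
                  txq->pending >= EXAMPLE_TXQ_MAX_PENDING) {
                      example_flush(txq, txq->pending);
                      txq->pending = 0;
              }
              return NETDEV_TX_OK;
      }
      ```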
  12. 09 Jan, 2017 (1 commit)
  13. 26 Dec, 2016 (1 commit)
    • ktime: Cleanup ktime_set() usage · 8b0e1953
      Authored by Thomas Gleixner
      ktime_set(S,N) was required for the timespec storage type and is still
      useful for situations where a Seconds and Nanoseconds part of a time value
      needs to be converted. For anything where the Seconds argument is 0, this
      is pointless and can be replaced with a simple assignment.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
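      The before/after pattern, as a minimal sketch (assuming ktime_t is a plain
      scalar count of nanoseconds, which is what makes the assignment form possible;
      the function names are illustrative):

      ```c
      #include <linux/ktime.h>

      static ktime_t example_interval_ns(unsigned long period_ns)
      {
              /* Before: return ktime_set(0, period_ns); */
              return period_ns;               /* seconds part is 0, a plain value suffices */
      }

      static ktime_t example_interval(unsigned long secs, unsigned long ns)
      {
              /* ktime_set() remains useful when the seconds part is nonzero. */
              return ktime_set(secs, ns);
      }
      ```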
  14. 22 Dec, 2016 (1 commit)
    • net: mvpp2: fix dma unmapping of TX buffers for fragments · 8354491c
      Authored by Thomas Petazzoni
      Since commit 71ce391d ("net: mvpp2: enable proper per-CPU TX
      buffers unmapping"), we are not correctly DMA unmapping TX buffers for
      fragments.
      
      Indeed, the mvpp2_txq_inc_put() function only stores in the
      txq_cpu->tx_buffs[] array the physical address of the buffer to be
      DMA-unmapped when skb != NULL. In addition, when DMA-unmapping, we use
      skb_headlen(skb) to get the size to be unmapped. Both of these work fine
      for TX descriptors that are associated directly with a SKB, but not for
      the ones that are used for fragments, with a NULL pointer as skb:
      
       - We have a NULL physical address when calling DMA unmap
       - skb_headlen(skb) crashes because skb is NULL
      
      This causes random crashes when fragments are used.
      
      To solve this problem, we need to:
      
       - Store the physical address of the buffer to be unmapped
         unconditionally, regardless of whether it is tied to a SKB or not.
      
       - Store the length of the buffer to be unmapped, which requires a new
         field.
      
      Instead of adding a third array to store the length of the buffer to be
      unmapped, and as suggested by David Miller, this commit refactors the
      tx_buffs[] and tx_skb[] arrays of 'struct mvpp2_txq_pcpu' into a
      separate structure 'mvpp2_txq_pcpu_buf', to which a 'size' field is
      added. Therefore, instead of having three arrays to allocate/free, we
      have a single one, which also improves data locality, reducing the
      impact on the CPU cache.
      
      Fixes: 71ce391d ("net: mvpp2: enable proper per-CPU TX buffers unmapping")
      Reported-by: Raphael G <raphael.glon@corp.ovh.com>
      Cc: Raphael G <raphael.glon@corp.ovh.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
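      A sketch of that refactoring (struct and field names approximate, not copied
      from the driver): one per-descriptor record replaces the separate
      tx_buffs[]/tx_skb[] arrays and carries the unmap length, so fragment
      descriptors (skb == NULL) can be unmapped correctly too:

      ```c
      #include <linux/dma-mapping.h>
      #include <linux/skbuff.h>

      struct example_txq_pcpu_buf {
              struct sk_buff *skb;            /* NULL for fragment descriptors */
              dma_addr_t phys;                /* stored unconditionally, SKB or not */
              size_t size;                    /* length to pass to dma_unmap_single() */
      };

      static void example_txq_buf_release(struct device *dev,
                                          struct example_txq_pcpu_buf *buf)
      {
              dma_unmap_single(dev, buf->phys, buf->size, DMA_TO_DEVICE);
              if (buf->skb)
                      dev_kfree_skb_any(buf->skb);
      }
      ```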
  15. 18 Dec, 2016 (1 commit)
  16. 11 Dec, 2016 (1 commit)
    • net: mvneta: select GENERIC_ALLOCATOR · 0e85663b
      Authored by Arnd Bergmann
      We previously relied on GENERIC_ALLOCATOR to be selected by CONFIG_ARM,
      but now we can compile-test the driver on other architectures that
      don't select it:
      
      drivers/net/built-in.o: In function `mvneta_bm_remove':
      mvneta_bm.c:(.text+0x4ee35): undefined reference to `gen_pool_free'
      
      This adds an explicit select for the part of the driver that has
      the dependency.
      
      Fixes: a0627f77 ("net: marvell: Allow drivers to be built with COMPILE_TEST")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
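      For context, a hedged illustration (not the actual mvneta_bm code) of the
      genalloc calls behind this dependency: lib/genalloc.c, which provides
      gen_pool_alloc()/gen_pool_free(), is only built when CONFIG_GENERIC_ALLOCATOR
      is set, hence the explicit select:

      ```c
      #include <linux/genalloc.h>

      /* Without CONFIG_GENERIC_ALLOCATOR these calls fail to link, producing
       * "undefined reference to `gen_pool_free'" as in the log above. */
      static unsigned long example_buf_get(struct gen_pool *pool, size_t size)
      {
              return gen_pool_alloc(pool, size);
      }

      static void example_buf_put(struct gen_pool *pool, unsigned long addr, size_t size)
      {
              gen_pool_free(pool, addr, size);
      }
      ```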
  17. 09 Dec, 2016 (1 commit)
  18. 03 Dec, 2016 (5 commits)