1. 29 Jul 2018, 3 commits
  2. 23 Jun 2018, 1 commit
    • net: mvneta: fix the Rx desc DMA address in the Rx path · 271f7ff5
      Submitted by Antoine Tenart
      When using s/w buffer management, buffers are allocated and DMA mapped.
      When doing so on an arm64 platform, an offset correction is applied on
      the DMA address, before storing it in an Rx descriptor. The issue is
      this DMA address is then used later in the Rx path without removing the
      offset correction. Thus the DMA address is wrong, which can lead to
      various issues.
      
      This patch fixes this by removing the offset correction from the DMA
      address retrieved from the Rx descriptor before using it in the Rx path.
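      
      A hedged sketch of the idea (not the actual patch; the struct, field and
      macro names below are illustrative, not the exact mvneta symbols): the
      offset added when the buffer is mapped must be subtracted again before
      the address stored in the descriptor is handed back to the DMA API.
      
          /* Illustrative sketch only: names are placeholders, not mvneta code. */
          #include <linux/types.h>
          #include <linux/dma-mapping.h>
          
          #define RX_BUF_OFFSET 64                /* assumed headroom offset */
          
          struct rx_desc_sketch {
                  u32 buf_phys_addr;              /* DMA address as seen by the HW */
          };
          
          /* Mapping path: the offset is folded into the stored address. */
          static void rx_desc_set_addr(struct rx_desc_sketch *desc, dma_addr_t addr)
          {
                  desc->buf_phys_addr = addr + RX_BUF_OFFSET;
          }
          
          /* Fix in the Rx path: subtract the offset again so the DMA API is
           * given the same address that dma_map_single() originally returned. */
          static dma_addr_t rx_desc_get_addr(const struct rx_desc_sketch *desc)
          {
                  return desc->buf_phys_addr - RX_BUF_OFFSET;
          }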
      
      Fixes: 8d5047cf ("net: mvneta: Convert to be 64 bits compatible")
      Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 02 Apr 2018, 2 commits
  4. 31 Mar 2018, 2 commits
  5. 30 Mar 2018, 2 commits
  6. 27 Mar 2018, 1 commit
  7. 03 Jan 2018, 9 commits
  8. 21 Dec 2017, 3 commits
  9. 14 Nov 2017, 1 commit
    • net: mvneta: fix handling of the Tx descriptor counter · 0d63785c
      Submitted by Simon Guinot
      The mvneta controller provides an 8-bit register to update the pending
      Tx descriptor counter, so a maximum of 255 Tx descriptors can be added
      at once. The current code assumes the caller of mvneta_txq_pend_desc_add
      takes care of this limit, but this is not the case: in some situations
      (the xmit_more flag), more than 255 descriptors are added. When this
      happens, the Tx descriptor counter register is updated with a wrong
      value, which breaks the whole Tx queue management.
      
      This patch fixes the issue by allowing the mvneta_txq_pend_desc_add
      function to process more than 255 Tx descriptors.
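      
      A hedged sketch of the fix (names are illustrative, not the exact driver
      symbols): the pending count is pushed to the 8-bit register in chunks of
      at most 255 instead of assuming the caller never exceeds that limit.
      
          /* Illustrative sketch only: not the actual mvneta code. */
          #include <linux/kernel.h>
          
          #define TXQ_MAX_PEND_UPDATE 255 /* the pending-counter register is 8 bits */
          
          struct txq_sketch {
                  int pending;            /* descriptors not yet announced to the HW */
          };
          
          /* Assumed stub standing in for the real register write. */
          static void txq_write_pending(struct txq_sketch *txq, int count) { }
          
          static void txq_pend_desc_add_sketch(struct txq_sketch *txq, int pend_desc)
          {
                  int val;
          
                  txq->pending += pend_desc;
          
                  /* Only 255 descriptors can be announced per register write. */
                  do {
                          val = min(txq->pending, TXQ_MAX_PEND_UPDATE);
                          txq_write_pending(txq, val);
                          txq->pending -= val;
                  } while (txq->pending > 0);
          }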
      
      Fixes: 2a90f7e1 ("net: mvneta: add xmit_more support")
      Cc: stable@vger.kernel.org # 4.11+
      Signed-off-by: Simon Guinot <simon.guinot@sequanux.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 02 Sep 2017, 1 commit
  11. 24 Aug 2017, 1 commit
  12. 16 Jun 2017, 1 commit
    • networking: introduce and use skb_put_data() · 59ae1d12
      Submitted by Johannes Berg
      A common pattern with skb_put() is to just memcpy() some data into
      the newly added space; introduce skb_put_data() for this.
      
      An spatch similar to the one for skb_put_zero() converts many
      of the places using it:
      
          @@
          identifier p, p2;
          expression len, skb, data;
          type t, t2;
          @@
          (
          -p = skb_put(skb, len);
          +p = skb_put_data(skb, data, len);
          |
          -p = (t)skb_put(skb, len);
          +p = skb_put_data(skb, data, len);
          )
          (
          p2 = (t2)p;
          -memcpy(p2, data, len);
          |
          -memcpy(p, data, len);
          )
      
          @@
          type t, t2;
          identifier p, p2;
          expression skb, data;
          @@
          t *p;
          ...
          (
          -p = skb_put(skb, sizeof(t));
          +p = skb_put_data(skb, data, sizeof(t));
          |
          -p = (t *)skb_put(skb, sizeof(t));
          +p = skb_put_data(skb, data, sizeof(t));
          )
          (
          p2 = (t2)p;
          -memcpy(p2, data, sizeof(*p));
          |
          -memcpy(p, data, sizeof(*p));
          )
      
          @@
          expression skb, len, data;
          @@
          -memcpy(skb_put(skb, len), data, len);
          +skb_put_data(skb, data, len);
      
      (again, manually post-processed to retain some comments)
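      
      A minimal before/after usage sketch (the frame buffer and length below
      are placeholders, not taken from a specific driver):
      
          /* Illustrative usage of the new helper. */
          #include <linux/skbuff.h>
          #include <linux/string.h>
          
          static void copy_frame_sketch(struct sk_buff *skb, const void *data,
                                        unsigned int len)
          {
                  /* Old pattern: reserve the space, then copy into it. */
                  /* memcpy(skb_put(skb, len), data, len); */
          
                  /* New helper introduced by this commit: one call does both. */
                  skb_put_data(skb, data, len);
          }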
      Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 19 Apr 2017, 4 commits
  14. 18 Apr 2017, 1 commit
    • net: mvneta: fix failed to suspend if WOL is enabled · 82960fff
      Submitted by Jisheng Zhang
      Recently, suspend/resume and WOL support were added to the mvneta
      driver. If we enable WOL, we get the following error on Marvell BG4CT
      platforms during suspend:
      
      [  184.149723] dpm_run_callback(): mdio_bus_suspend+0x0/0x50 returns -16
      [  184.149727] PM: Device f7b62004.mdio-mi:00 failed to suspend: error -16
      
      -16 means -EBUSY; phy_suspend() returns -EBUSY if it finds that the
      device has WOL enabled.
      
      We fix this issue by properly setting the netdev's power.can_wakeup
      and power.wakeup flags, i.e.:
      
      1. in mvneta_mdio_probe(), call device_set_wakeup_capable() to set
      power.can_wakeup if the PHY supports WOL.
      
      2. in mvneta_ethtool_set_wol(), call device_set_wakeup_enable() to
      set power.wakeup if WOL has been successfully enabled in the PHY (see
      the sketch below).
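      
      A hedged sketch of the two call sites (error handling and the surrounding
      driver code are omitted; the function names and exact placement in the
      real driver may differ):
      
          /* Illustrative sketch: only the wakeup bookkeeping added by the fix. */
          #include <linux/ethtool.h>
          #include <linux/netdevice.h>
          #include <linux/phy.h>
          #include <linux/pm_wakeup.h>
          
          /* 1. At PHY probe time: mark the device wakeup-capable if the PHY
           * supports WOL, so that power.can_wakeup is set. */
          static void mdio_probe_wol_sketch(struct net_device *dev,
                                            struct phy_device *phy)
          {
                  struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
          
                  phy_ethtool_get_wol(phy, &wol);
                  device_set_wakeup_capable(&dev->dev, !!wol.supported);
          }
          
          /* 2. In the ethtool set_wol handler: record whether wakeup is
           * actually enabled, so that power.wakeup is set accordingly. */
          static int set_wol_sketch(struct net_device *dev,
                                    struct phy_device *phy,
                                    struct ethtool_wolinfo *wol)
          {
                  int ret = phy_ethtool_set_wol(phy, wol);
          
                  if (!ret)
                          device_set_wakeup_enable(&dev->dev, !!wol->wolopts);
                  return ret;
          }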
      Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 30 Mar 2017, 2 commits
  16. 17 Mar 2017, 1 commit
  17. 18 Feb 2017, 1 commit
  18. 11 Feb 2017, 1 commit
  19. 06 Feb 2017, 1 commit
  20. 31 Jan 2017, 1 commit
  21. 17 Jan 2017, 1 commit
    • net: mvneta: add BQL support · a29b6235
      Submitted by Marcin Wojtas
      Tests showed that when the whole bandwidth is consumed, the latency for
      various kinds of traffic can reach high values. With a saturated
      link (e.g. with iperf from target to host) a simple ping could take a
      significant amount of time. BQL proved to improve this situation
      when implemented in the mvneta driver. Measurements of ping latency
      for 3 link speeds:
      Speed (Mbps) | Latency w/o BQL | Latency with BQL
      10           |     7-14 ms     |     3.5 ms
      100          |     2-12 ms     |     0.6 ms
      1000         |  often timeout  |    up to 2 ms
      
      Decreasing the latency as above results in a slight performance cost:
      4 kpps (-1.4%) when pushing 64B packets via two bridged interfaces of
      an Armada 38x. For 1500B packets in the same setup, the mpstat tool
      showed +8% CPU occupation (default affinity, second CPU idle). This
      cost seems reasonable, considering the other improvements.
      
      This commit adds the byte queue limit mechanism to the mvneta driver.
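      
      A hedged sketch of where the standard BQL hooks go (queue lookup is
      shown, but the real descriptor handling is omitted and the function
      names are illustrative):
      
          /* Illustrative sketch: BQL bookkeeping only, not the full Tx path. */
          #include <linux/netdevice.h>
          
          /* Transmit side: tell BQL how many bytes were queued to the HW. */
          static void tx_enqueue_sketch(struct net_device *dev, int queue_index,
                                        unsigned int bytes)
          {
                  struct netdev_queue *nq = netdev_get_tx_queue(dev, queue_index);
          
                  netdev_tx_sent_queue(nq, bytes);
          }
          
          /* Completion side: report what the HW has actually sent, so BQL can
           * adapt the queue limit and keep the latency low. */
          static void tx_complete_sketch(struct net_device *dev, int queue_index,
                                         unsigned int pkts, unsigned int bytes)
          {
                  struct netdev_queue *nq = netdev_get_tx_queue(dev, queue_index);
          
                  netdev_tx_completed_queue(nq, pkts, bytes);
          }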
      Signed-off-by: Marcin Wojtas <mw@semihalf.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>