1. 01 May 2017, 1 commit
  2. 28 Apr 2017, 1 commit
  3. 26 Apr 2017, 2 commits
    • net: move xdp_prog field in RX cache lines · 7acedaf5
      By Eric Dumazet
      The (struct net_device, xdp_prog) field should be moved into the RX
      cache lines, reducing latency when a single packet is received on an
      idle host, since netif_elide_gro() needs it.
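
      For context, the helper that motivates the move looks roughly like
      this (a minimal sketch of netif_elide_gro() as introduced by the
      Generic XDP work; it runs for every received packet, so having
      xdp_prog share the RX cache lines avoids an extra cache miss):

      static inline bool netif_elide_gro(const struct net_device *dev)
      {
              /* GRO must be skipped when a generic XDP program is attached,
               * so this test touches dev->xdp_prog in the RX fast path. */
              if (!(dev->features & NETIF_F_GRO) || dev->xdp_prog)
                      return true;
              return false;
      }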
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Generic XDP · b5cdae32
      By David S. Miller
      This provides a generic, non-optimized, SKB-based XDP path which is
      used if either the driver lacks a specific XDP implementation or the
      user requests it via a new IFLA_XDP_FLAGS value named
      XDP_FLAGS_SKB_MODE.
      
      It is arguable that perhaps I should have required something like
      this as part of the initial XDP feature merge.
      
      I believe this is critical for two reasons:
      
      1) Accessibility.  More people can play with XDP with fewer
         dependencies.  Yes I know we have XDP support in virtio_net, but
         that just creates another dependency for learning how to use this
         facility.
      
         I wrote this to make life easier for the XDP newbies.
      
      2) As a model for what the expected semantics are.  If there is a pure
         generic core implementation, it serves as a semantic example for
         driver folks adding XDP support.
      
      One thing I have not tried to address here is the issue of
      XDP_PACKET_HEADROOM, thanks to Daniel for spotting that.  It seems
      incredibly expensive to do a skb_cow(skb, XDP_PACKET_HEADROOM) or
      whatever even if the XDP program doesn't try to push headers at all.
      I think we really need the verifier to somehow propagate whether
      certain XDP helpers are used or not.
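
      As a rough illustration of the generic path described above (a
      condensed sketch, not the exact code merged here; the upstream helper
      is netif_receive_generic_xdp(), but the details below are
      simplified):

      /* Condensed sketch: run the XDP prog over a linearized skb with
       * data pointing at the MAC header, and return its verdict. */
      static u32 netif_receive_generic_xdp(struct sk_buff *skb,
                                           struct bpf_prog *xdp_prog)
      {
              struct xdp_buff xdp;
              u32 act;

              if (skb_linearize(skb))
                      return XDP_DROP;

              /* The program must see the packet at the MAC header. */
              xdp.data = skb_mac_header(skb);
              xdp.data_end = xdp.data + skb_headlen(skb) + skb->mac_len;

              act = bpf_prog_run_xdp(xdp_prog, &xdp);

              switch (act) {
              case XDP_TX:
                      /* push the MAC header back and transmit on this dev */
                      break;
              case XDP_PASS:
                      /* continue into the normal receive path */
                      break;
              case XDP_DROP:
              default:
                      break;
              }
              return act;
      }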
      
      v5:
       - Handle both negative and positive offset after running prog
       - Fix mac length in XDP_TX case (Alexei)
       - Use rcu_dereference_protected() in free_netdev (kbuild test robot)
      
      v4:
       - Fix MAC header adjustment before calling prog (David Ahern)
       - Disable LRO when generic XDP is installed (Michael Chan)
       - Bypass qdisc et al. on XDP_TX and record the event (Alexei)
       - Do not perform generic XDP on reinjected packets (DaveM)
      
      v3:
       - Make sure XDP program sees packet at MAC header, push back MAC
         header if we do XDP_TX.  (Alexei)
       - Elide GRO when generic XDP is in use.  (Alexei)
       - Add XDP_FLAGS_SKB_MODE flag which the user can use to request
         generic XDP even if the driver has an XDP implementation.  (Alexei)
       - Report whether SKB mode is in use in rtnl_xdp_fill() via XDP_FLAGS
         attribute.  (Daniel)
      
      v2:
       - Add some "fall through" comments in switch statements based
         upon feedback from Andrew Lunn
       - Use RCU for generic xdp_prog, thanks to Johannes Berg.
      Tested-by: Andy Gospodarek <andy@greyhouse.net>
      Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Tested-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 22 Apr 2017, 1 commit
  5. 21 Apr 2017, 1 commit
  6. 14 Apr 2017, 2 commits
  7. 13 Apr 2017, 2 commits
  8. 29 Mar 2017, 1 commit
  9. 16 Mar 2017, 1 commit
    • mqprio: Modify mqprio to pass user parameters via ndo_setup_tc. · 56f36acd
      By Amritha Nambiar
      The configurable priority-to-traffic-class mapping and the
      user-specified queue ranges are used to configure the traffic
      classes, overriding the hardware defaults, when the 'hw' option is
      set to 0. However, when the 'hw' option is non-zero, the hardware
      QoS defaults are used.

      This patch makes it so that we can pass the data the user provided
      to ndo_setup_tc. This allows us to pull in the queue configuration
      if the user requested it, as well as any additional hardware offload
      type requested by using a value other than 1 for the hw value.

      Finally, it also provides a means for the device driver to return
      the level supported for the offload type via the qopt->hw value.
      Previously we were just always assuming the value to be 1; in the
      future, values beyond just 1 may be supported.
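
      A hypothetical driver-side sketch of the new contract (the mydrv_*
      names are illustrative; the point is that ndo_setup_tc now receives
      the user's struct tc_mqprio_qopt and reports the offload level back
      through qopt->hw):

      static int mydrv_setup_tc(struct net_device *dev, u32 handle,
                                __be16 proto, struct tc_to_netdev *tc)
      {
              struct tc_mqprio_qopt *qopt;

              if (tc->type != TC_SETUP_MQPRIO)
                      return -EINVAL;

              qopt = tc->mqprio;

              /* Apply the user-supplied prio->tc map and queue ranges
               * instead of the hardware defaults. */
              mydrv_apply_tc(dev, qopt->num_tc, qopt->prio_tc_map,
                             qopt->count, qopt->offset);

              /* Tell the stack which level of offload was provided. */
              qopt->hw = 1;
              return 0;
      }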
      Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 02 Mar 2017, 1 commit
    • net: solve a NAPI race · 39e6c820
      By Eric Dumazet
      While playing with mlx4 hardware timestamping of RX packets, I found
      that some packets were received by the TCP stack with a ~200 ms
      delay...

      Since the timestamp was provided by the NIC, and my probe was added
      in tcp_v4_rcv() while in the BH handler, I was confident it was not
      a sender issue, or a drop in the network.
      
      This would happen with a very low probability, but it hurt RPC
      workloads.
      
      A NAPI driver normally arms the IRQ after the napi_complete_done(),
      after NAPI_STATE_SCHED is cleared, so that the hard irq handler can grab
      it.
      
      The problem is that if another point in the stack grabs the
      NAPI_STATE_SCHED bit while IRQs are not disabled, we might later have
      an IRQ firing and finding this bit set, right before
      napi_complete_done() clears it.
      
      This can happen with busy polling users, or if gro_flush_timeout is
      used. But some other uses of napi_schedule() in drivers can cause this
      as well.
      
      thread 1                              thread 2 (could be on same cpu, or not)

      // busy polling or napi_watchdog()
      napi_schedule();
      ...
      napi->poll()

      device polling:
      read 2 packets from ring buffer
                                            Additional 3rd packet is available.

                                            device hard irq

                                            // does nothing because
                                            // NAPI_STATE_SCHED bit is
                                            // owned by thread 1
                                            napi_schedule();

      napi_complete_done(napi, 2);
      rearm_irq();
      
      Note that rearm_irq() will not force the device to send an additional
      IRQ for the packet it already signaled (the 3rd packet in my example).
      
      This patch adds a new NAPI_STATE_MISSED bit, which
      napi_schedule_prep() sets if it could not grab NAPI_STATE_SCHED.

      Then napi_complete_done() properly reschedules the napi to make sure
      we do not miss something.

      Since we manipulate multiple bits at once, use cmpxchg() like in
      sk_busy_loop() to provide proper transactions.

      In v2, I changed napi_watchdog() to use a relaxed variant of
      napi_schedule_prep(): no need to set NAPI_STATE_MISSED from this
      point.

      In v3, I added more details to the changelog and cleared
      NAPI_STATE_MISSED in busy_poll_stop().

      In v4, I added the ideas given by Alexander Duyck in the v3 review.
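
      A simplified sketch of the cmpxchg() transaction described above
      (the merged napi_schedule_prep() uses a branch-free variant of the
      MISSED update, but the logic is the same: remember a missed schedule
      attempt so napi_complete_done() can re-run the poll):

      bool napi_schedule_prep(struct napi_struct *n)
      {
              unsigned long val, new;

              do {
                      val = READ_ONCE(n->state);
                      if (unlikely(val & NAPIF_STATE_DISABLE))
                              return false;
                      new = val | NAPIF_STATE_SCHED;
                      /* Someone else owns SCHED: record that we missed. */
                      if (val & NAPIF_STATE_SCHED)
                              new |= NAPIF_STATE_MISSED;
              } while (cmpxchg(&n->state, val, new) != val);

              return !(val & NAPIF_STATE_SCHED);
      }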
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Duyck <alexander.duyck@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 15 Feb 2017, 2 commits
  12. 14 Feb 2017, 1 commit
  13. 09 Feb 2017, 1 commit
  14. 07 Feb 2017, 1 commit
  15. 06 Feb 2017, 1 commit
  16. 04 Feb 2017, 1 commit
  17. 02 Feb 2017, 1 commit
  18. 28 Jan 2017, 1 commit
  19. 27 Jan 2017, 1 commit
  20. 19 Jan 2017, 1 commit
    • net: Remove usage of net_device last_rx member · 4a7c9726
      By Tobias Klauser
      The network stack no longer uses the last_rx member of struct net_device
      since the bonding driver switched to use its own private last_rx in
      commit 9f242738 ("bonding: use last_arp_rx in slave_last_rx()").
      
      However, some drivers still (ab)use the field for their own purposes
      and some drivers just update it without actually using it.
      
      Previously, there was an accompanying comment for the last_rx member,
      added in commit 4dc89133 ("net: add a comment on netdev->last_rx"),
      which asked drivers not to update it unless really needed. However,
      this comment was removed in commit f8ff080d ("bonding: remove
      useless updating of slave->dev->last_rx"), so some drivers added
      later on still did update last_rx.
      
      Remove all usage of last_rx and switch three drivers (sky2, atp and
      smc91c92_cs) which actually read and write it to use their own private
      copy in netdev_priv.
      
      Compile-tested with allyesconfig and allmodconfig on x86 and arm.
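
      The conversion pattern for the three remaining users looks roughly
      like this (a hypothetical sketch; the mydrv_* names are illustrative,
      not the actual sky2/atp/smc91c92_cs code):

      struct mydrv_priv {
              /* ... driver state ... */
              unsigned long last_rx;  /* was: dev->last_rx */
      };

      static void mydrv_receive(struct net_device *dev, struct sk_buff *skb)
      {
              struct mydrv_priv *priv = netdev_priv(dev);

              /* Keep the timestamp privately instead of in net_device. */
              priv->last_rx = jiffies;
              netif_rx(skb);
      }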
      
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Jay Vosburgh <j.vosburgh@gmail.com>
      Cc: Veaceslav Falico <vfalico@gmail.com>
      Cc: Andy Gospodarek <andy@greyhouse.net>
      Cc: Mirko Lindner <mlindner@marvell.com>
      Cc: Stephen Hemminger <stephen@networkplumber.org>
      Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Jay Vosburgh <jay.vosburgh@canonical.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  21. 12 Jan 2017, 1 commit
  22. 11 Jan 2017, 1 commit
  23. 09 Jan 2017, 1 commit
  24. 30 Dec 2016, 1 commit
    • net: dev_weight: TX/RX orthogonality · 3d48b53f
      By Matthias Tafelmeier
      Often, it is not desirable that adjusting one of TX/RX via sysctl
      introduces side effects on packet processing in the other half of
      the stack. There are cases that demand asymmetric, orthogonal
      configurability.

      This holds true especially for nodes where RPS (for RFS usage on
      top) is configured and which therefore use the 'old' dev_weight.
      This is quite a common base configuration nowadays, even with NICs
      of superior processing support (e.g. aRFS).

      A good example use case are nodes acting as NoSQL databases, with a
      large number of tiny requests and fewer but larger packets as
      responses. It is affordable to have a large budget and RX dev_weight
      for the requests, but as a side effect, having this large a number
      processed on TX in one run can overwhelm drivers.

      This patch therefore exposes independent TX/RX configurability to
      userland via sysctl.
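
      The split roughly works as below (a sketch of the bias scheme this
      patch introduces: the effective per-direction weights are derived
      from the base dev_weight, simplified here from the real sysctl
      plumbing):

      /* net.core.dev_weight stays the base knob; the two new biases
       * scale the RX and TX budgets independently. */
      static int proc_do_dev_weight(struct ctl_table *table, int write,
                                    void __user *buffer, size_t *lenp,
                                    loff_t *ppos)
      {
              int ret = proc_dointvec(table, write, buffer, lenp, ppos);

              if (ret != 0)
                      return ret;

              dev_rx_weight = weight_p * dev_weight_rx_bias;
              dev_tx_weight = weight_p * dev_weight_tx_bias;

              return ret;
      }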
      Signed-off-by: Matthias Tafelmeier <matthias.tafelmeier@gmx.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  25. 09 Dec 2016, 1 commit
  26. 03 Dec 2016, 1 commit
    • net/sched: cls_flower: Add offload support using egress Hardware device · 7091d8c7
      By Hadar Hen Zion
      In order to support hardware offloading when the device given by the
      tc rule is different from the underlying hardware device, extract
      the mirred (egress) device from the tc action when a filter is
      added, using the new tc_action_ops callback, get_dev().

      Flower caches the information about the mirred device and uses it
      when calling ndo_setup_tc for filter changes, stats updates and
      deletion.

      Calling ndo_setup_tc of the mirred (egress) device instead of the
      ingress device allows a resolution between the software ingress
      device and the underlying hardware device.

      The resolution takes place inside the offloading driver, using the
      'egress_device' flag added to the tc_to_netdev struct which is
      provided to the offloading driver.
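
      A hypothetical sketch of the new hook (signature and field names are
      illustrative, not the exact merged API): the mirred action exposes
      its egress device so classifiers such as flower can offload against
      it.

      static int tcf_mirred_get_dev(const struct tc_action *a,
                                    struct net *net,
                                    struct net_device **mirred_dev)
      {
              int ifindex = tcf_mirred_ifindex(a);

              /* Hand the egress device back to the classifier so it can
               * call ndo_setup_tc on it instead of the ingress device. */
              *mirred_dev = __dev_get_by_index(net, ifindex);
              return *mirred_dev ? 0 : -EINVAL;
      }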
      Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  27. 30 Nov 2016, 1 commit
  28. 28 Nov 2016, 1 commit
  29. 25 Nov 2016, 1 commit
  30. 24 Nov 2016, 1 commit
  31. 17 Nov 2016, 3 commits
    • netpoll: more efficient locking · 89c4b442
      By Eric Dumazet
      Callers of netpoll_poll_lock() own NAPI_STATE_SCHED.

      Callers of netpoll_poll_unlock() have BH blocked between
      NAPI_STATE_SCHED being cleared and the poll_lock being released.

      We can therefore avoid the spinlock, which has no contention, and
      use cmpxchg() on poll_owner, which we need to set anyway.

      This removes a possible lockdep violation after the cited commit,
      since sk_busy_loop() re-enables BH before calling busy_poll_stop().
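
      A sketch of the resulting lock-free claim (close to the merged
      helper, though simplified here): because the caller already owns
      NAPI_STATE_SCHED, a cmpxchg() on poll_owner is enough to serialize
      against netpoll.

      static inline void *netpoll_poll_lock(struct napi_struct *napi)
      {
              struct net_device *dev = napi->dev;

              if (dev && dev->npinfo) {
                      int owner = smp_processor_id();

                      /* Spin until we claim ownership; no spinlock needed
                       * since the caller already holds NAPI_STATE_SCHED. */
                      while (cmpxchg(&napi->poll_owner, -1, owner) != -1)
                              cpu_relax();

                      return napi;
              }
              return NULL;
      }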
      
      Fixes: 217f6974 ("net: busy-poll: allow preemption in sk_busy_loop()")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: busy-poll: return busypolling status to drivers · 364b6055
      By Eric Dumazet
      NAPI drivers use napi_complete_done() or napi_complete() when they
      have drained the RX ring, right before re-enabling device
      interrupts.

      In busy polling, we can avoid interrupts being delivered, since we
      are polling the RX ring in a controlled loop.

      Drivers can choose to use the napi_complete_done() return value to
      reduce interrupt overhead while busy polling is active.

      This is optional; legacy drivers should work fine even if not
      updated.
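
      A hypothetical driver poll routine using the new return value (the
      mydrv_* names are illustrative): interrupts are only re-armed when
      napi_complete_done() reports that polling really ended.

      static int mydrv_poll(struct napi_struct *napi, int budget)
      {
              int done = mydrv_clean_rx_ring(napi, budget);

              /* napi_complete_done() now returns false while busy polling
               * owns the napi, letting us skip the IRQ re-arm entirely. */
              if (done < budget && napi_complete_done(napi, done))
                      mydrv_enable_irq(napi);

              return done;
      }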
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Cc: Adam Belay <abelay@google.com>
      Cc: Tariq Toukan <tariqt@mellanox.com>
      Cc: Yuval Mintz <Yuval.Mintz@cavium.com>
      Cc: Ariel Elior <ariel.elior@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: busy-poll: allow preemption in sk_busy_loop() · 217f6974
      By Eric Dumazet
      After commit 4cd13c21 ("softirq: Let ksoftirqd do its job"),
      sk_busy_loop() needs a bit of care: softirqs might be delayed since
      we do not allow preemption yet.

      This patch adds preemption points in sk_busy_loop(), and makes sure
      no unnecessary cache line dirtying or atomic operations are done
      while looping.

      A new flag is added to napi->state: NAPI_STATE_IN_BUSY_POLL.

      This prevents napi_complete_done() from clearing NAPIF_STATE_SCHED,
      so that sk_busy_loop() does not have to grab it again.

      Similarly, netpoll_poll_lock() is done only once.

      This gives about 10 to 20% improvement in various busy polling
      tests, especially when many threads are busy polling in
      configurations with a large number of NIC queues.

      This should allow experimenting with bigger delays without hurting
      overall latencies.
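
      A condensed fragment of the reworked loop (a sketch only; the
      initial NAPIF_STATE_IN_BUSY_POLL claim, the restart label and
      have_poll_lock come from the surrounding function and are omitted
      here): each poll runs with BH disabled, and need_resched() becomes
      an explicit preemption point.

      for (;;) {
              local_bh_disable();
              rc = napi->poll(napi, BUSY_POLL_BUDGET);
              local_bh_enable();

              if (!skb_queue_empty(&sk->sk_receive_queue) ||
                  busy_loop_timeout(end_time))
                      break;

              if (unlikely(need_resched())) {
                      busy_poll_stop(napi, have_poll_lock);
                      cond_resched();  /* let ksoftirqd and others run */
                      goto restart;    /* re-claim the napi afterwards */
              }
              cpu_relax();
      }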
      
      Tested:
       On a 40Gb mlx4 NIC, 32 RX/TX queues.
      
       echo 70 >/proc/sys/net/core/busy_read
       for i in `seq 1 40`; do echo -n $i: ; ./super_netperf $i -H lpaa24 -t UDP_RR -- -N -n; done
      
          Before:      After:
       1:   90072   92819
       2:  157289  184007
       3:  235772  213504
       4:  344074  357513
       5:  394755  458267
       6:  461151  487819
       7:  549116  625963
       8:  544423  716219
       9:  720460  738446
      10:  794686  837612
      11:  915998  923960
      12:  937507  925107
      13: 1019677  971506
      14: 1046831 1113650
      15: 1114154 1148902
      16: 1105221 1179263
      17: 1266552 1299585
      18: 1258454 1383817
      19: 1341453 1312194
      20: 1363557 1488487
      21: 1387979 1501004
      22: 1417552 1601683
      23: 1550049 1642002
      24: 1568876 1601915
      25: 1560239 1683607
      26: 1640207 1745211
      27: 1706540 1723574
      28: 1638518 1722036
      29: 1734309 1757447
      30: 1782007 1855436
      31: 1724806 1888539
      32: 1717716 1944297
      33: 1778716 1869118
      34: 1805738 1983466
      35: 1815694 2020758
      36: 1893059 2035632
      37: 1843406 2034653
      38: 1888830 2086580
      39: 1972827 2143567
      40: 1877729 2181851
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Cc: Adam Belay <abelay@google.com>
      Cc: Tariq Toukan <tariqt@mellanox.com>
      Cc: Yuval Mintz <Yuval.Mintz@cavium.com>
      Cc: Ariel Elior <ariel.elior@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  32. 13 Nov 2016, 1 commit
    • bpf: Fix bpf_redirect to an ipip/ip6tnl dev · 4e3264d2
      By Martin KaFai Lau
      If the bpf program calls bpf_redirect(dev, 0) and dev is an
      ipip/ip6tnl, it currently includes the mac header. E.g. if dev is
      ipip, the end result is IP-EthHdr-IP instead of IP-IP.

      The fix is to pull the mac header.  At ingress, skb_postpull_rcsum()
      is not needed because the ethhdr should have been pulled once
      already and then got pushed back just before calling the bpf_prog.
      At egress, this patch calls skb_postpull_rcsum().

      If bpf_redirect(dev, BPF_F_INGRESS) is called, it also fails now
      because it calls dev_forward_skb(), which eventually calls
      eth_type_trans(skb, dev).  The eth_type_trans() will set
      skb->pkt_type = PACKET_OTHERHOST because the mac address does not
      match the redirecting dev->dev_addr.  The PACKET_OTHERHOST will
      eventually cause ip_rcv() to error out.  To fix this,
      ____dev_forward_skb() is added.
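
      The core of the fix can be sketched as follows (simplified; the real
      patch splits __bpf_redirect() into mac and no-mac variants): before
      handing the skb to a device without a mac header, the ethernet
      header is pulled, and the checksum is adjusted on egress.

      static int __bpf_redirect_no_mac(struct sk_buff *skb,
                                       struct net_device *dev, u32 flags)
      {
              /* skb->mac_len is not set on normal egress, so derive it. */
              unsigned int mlen = skb->network_header - skb->mac_header;

              __skb_pull(skb, mlen);

              if (flags & BPF_F_INGRESS) {
                      /* ____dev_forward_skb() avoids eth_type_trans(), so
                       * pkt_type is not set to PACKET_OTHERHOST. */
                      int ret = ____dev_forward_skb(dev, skb);

                      if (ret)
                              return ret;
                      skb->dev = dev;
                      return netif_rx(skb);
              }

              /* Egress: the rcsum still covers the pulled ethhdr. */
              skb_postpull_rcsum(skb, skb_mac_header(skb), mlen);
              skb->dev = dev;
              return dev_queue_xmit(skb);
      }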
      
      Joint work with Daniel Borkmann.
      
      Fixes: cfc7381b ("ip_tunnel: add collect_md mode to IPIP tunnel")
      Fixes: 8d79266b ("ip6_tunnel: add collect_md mode to IPv6 tunnels")
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@fb.com>
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  33. 10 Nov 2016, 1 commit
  34. 01 Nov 2016, 1 commit