1. 13 Feb 2019 (8 commits)
    • net: sched: add flags to Qdisc class ops struct · dfcd2a2b
      Vlad Buslov authored
      Extend Qdisc_class_ops with flags. Create enum to hold possible class ops
      flag values. Add first class ops flags value QDISC_CLASS_OPS_DOIT_UNLOCKED
      to indicate that class ops functions can be called without taking rtnl
      lock.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
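      A minimal sketch of the shape of this change (the real struct in
      include/net/sch_generic.h carries the full callback list, elided here):

          enum qdisc_class_ops_flags {
                  QDISC_CLASS_OPS_DOIT_UNLOCKED = 1,
          };

          struct Qdisc_class_ops {
                  unsigned int flags;     /* combination of QDISC_CLASS_OPS_* */
                  /* ... graft/leaf/find/change/delete/walk callbacks ... */
          };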
    • net: sched: extend proto ops to support unlocked classifiers · 12db03b6
      Vlad Buslov authored
      Add 'rtnl_held' flag to tcf proto change, delete, destroy, dump, walk
      functions to track rtnl lock status. Extend users of these functions in cls
      API to propagate rtnl lock status to them. This allows classifiers to
      obtain rtnl lock when necessary and to pass rtnl lock status to extensions
      and driver offload callbacks.
      
      Add flags field to tcf proto ops. Add flag value to indicate that
      classifier doesn't require rtnl lock.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
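      A hedged sketch of how the callback signatures grow (member list
      abbreviated; exact prototypes follow the description above and may
      differ in detail from the tree):

          struct tcf_proto_ops {
                  unsigned int    flags;  /* e.g. TCF_PROTO_OPS_DOIT_UNLOCKED */
                  void            (*destroy)(struct tcf_proto *tp, bool rtnl_held,
                                             struct netlink_ext_ack *extack);
                  int             (*delete)(struct tcf_proto *tp, void *arg,
                                            bool *last, bool rtnl_held,
                                            struct netlink_ext_ack *extack);
                  void            (*walk)(struct tcf_proto *tp,
                                          struct tcf_walker *arg,
                                          bool rtnl_held);
                  /* change() and dump() take the same rtnl_held flag */
          };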
    • net: sched: extend proto ops with 'put' callback · 7d5509fa
      Vlad Buslov authored
      Add optional tp->ops->put() API to be implemented for filter reference
      counting. This new function is called by cls API to release filter
      reference for filters returned by tp->ops->change() or tp->ops->get()
      functions. Implement tfilter_put() helper to call tp->ops->put() only for
      classifiers that implement it.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
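      The helper itself is small; a sketch matching the description above:

          static void tfilter_put(struct tcf_proto *tp, void *fh)
          {
                  /* put() is optional: only classifiers that refcount
                   * their filters implement it
                   */
                  if (tp->ops->put && fh)
                          tp->ops->put(tp, fh);
          }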
    • net: sched: prevent insertion of new classifiers during chain flush · 726d0612
      Vlad Buslov authored
      Extend tcf_chain with 'flushing' flag. Use the flag to prevent insertion of
      new classifier instances when chain flushing is in progress in order to
      prevent resource leak when tcf_proto is created by unlocked users
      concurrently.
      
      Return -EAGAIN from tcf_chain_tp_insert_unique() to restart
      tc_new_tfilter() and look up the chain/proto again.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
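      A simplified sketch of the guard (the real function also verifies
      that no tcf_proto with the given priority already exists):

          /* under chain->filter_chain_lock */
          if (chain->flushing)
                  return ERR_PTR(-EAGAIN); /* tc_new_tfilter() restarts lookup */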
    • net: sched: refactor tp insert/delete for concurrent execution · 8b64678e
      Vlad Buslov authored
      Implement unique insertion function to atomically attach tcf_proto to chain
      after verifying that no other tcf proto with specified priority exists.
      Implement delete function that verifies that tp is actually empty before
      deleting it. Use these functions to refactor cls API to account for
      concurrent tp and rule update instead of relying on rtnl lock. Add new
      'deleting' flag to tcf proto. Use it to restart search when iterating over
      tp's on chain to prevent accessing a potentially invalid tp->next pointer.
      
      Extend tcf proto with spinlock that is intended to be used to protect its
      data from concurrent modification instead of relying on rtnl mutex. Use it
      to protect 'deleting' flag. Add lockdep macros to validate that lock is
      held when accessing protected fields.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
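      A hedged sketch of the new fields and the 'deleting' check (layout
      follows the description above, not a verbatim copy of the tree):

          struct tcf_proto {
                  /* ... */
                  spinlock_t      lock;           /* protects fields below */
                  bool            deleting;
          };

          static bool tcf_proto_is_deleting(struct tcf_proto *tp)
          {
                  bool deleting;

                  spin_lock(&tp->lock);
                  deleting = tp->deleting;
                  spin_unlock(&tp->lock);
                  return deleting;
          }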
    • net: sched: introduce reference counting for tcf_proto · 4dbfa766
      Vlad Buslov authored
      In order to remove dependency on rtnl lock and allow concurrent tcf_proto
      modification, extend tcf_proto with reference counter. Implement helper
      get/put functions for tcf proto and use them to modify cls API to always
      take reference to tcf_proto while using it. Only release reference to
      parent chain after releasing last reference to tp.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
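      A sketch of the get/put pair described above (tcf_proto_destroy()
      and the subsequent parent-chain put are elided):

          static struct tcf_proto *tcf_proto_get(struct tcf_proto *tp)
          {
                  refcount_inc(&tp->refcnt);
                  return tp;
          }

          static void tcf_proto_put(struct tcf_proto *tp,
                                    struct netlink_ext_ack *extack)
          {
                  if (refcount_dec_and_test(&tp->refcnt))
                          tcf_proto_destroy(tp, extack); /* then put chain */
          }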
    • net: sched: protect filter_chain list with filter_chain_lock mutex · ed76f5ed
      Vlad Buslov authored
      Extend tcf_chain with new filter_chain_lock mutex. Always lock the chain
      when accessing filter_chain list, instead of relying on rtnl lock.
      Dereference filter_chain with tcf_chain_dereference() lockdep macro to
      verify that all users of chain_list have the lock taken.
      
      Rearrange tp insert/remove code in tc_new_tfilter/tc_del_tfilter to execute
      all necessary code while holding chain lock in order to prevent
      invalidation of chain_info structure by potential concurrent change. This
      also serializes calls to tcf_chain0_head_change(), which allows head change
      callbacks to rely on filter_chain_lock for synchronization instead of rtnl
      mutex.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
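      The lockdep-checked accessor, sketched:

          #define tcf_chain_dereference(p, chain) \
                  rcu_dereference_protected(p, \
                          lockdep_is_held(&(chain)->filter_chain_lock))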
    • net: sched: protect block state with mutex · c266f64d
      Vlad Buslov authored
      Currently, tcf_block doesn't use any synchronization mechanisms to protect
      critical sections that manage lifetime of its chains. block->chain_list and
      multiple variables in tcf_chain that control its lifetime assume external
      synchronization provided by global rtnl lock. Converting chain reference
      counting to atomic reference counters is not possible because cls API uses
      multiple counters and flags to control chain lifetime, so all of them must
      be synchronized in chain get/put code.
      
      Use single per-block lock to protect block data and manage lifetime of all
      chains on the block. Always take block->lock when accessing chain_list.
      Chain get and put modify chain lifetime-management data and parent block's
      chain_list, so take the lock in these functions. Verify block->lock state
      with assertions in functions that expect to be called with the lock taken
      and are called from multiple places. Take block->lock when accessing
      filter_chain_list.
      
      In order to allow parallel update of rules on single block, move all calls
      to classifiers outside of critical sections protected by new block->lock.
      Rearrange chain get and put functions code to only access protected chain
      data while holding block lock:
      - Rearrange code to only access chain reference counter and chain action
        reference counter while holding block lock.
      - Extract code that requires block->lock from tcf_chain_destroy() into
        standalone tcf_chain_destroy() function that is called by
        __tcf_chain_put() in the same critical section that changes chain
        reference counters.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
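      A simplified sketch of the put-side pattern (counter and list
      handling reduced to the essentials; names follow the text above):

          static void __tcf_chain_put(struct tcf_chain *chain, bool by_act)
          {
                  struct tcf_block *block = chain->block;

                  mutex_lock(&block->lock);
                  if (by_act)
                          chain->action_refcnt--;
                  if (--chain->refcnt == 0) {
                          /* unlink from block->chain_list and destroy in
                           * the same critical section that changed the
                           * reference counters
                           */
                          tcf_chain_destroy(chain);
                  }
                  mutex_unlock(&block->lock);
          }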
  2. 20 Jan 2019 (1 commit)
  3. 12 Nov 2018 (1 commit)
    • net: sched: register callbacks for indirect tc block binds · 7f76fa36
      John Hurley authored
      Currently drivers can register to receive TC block bind/unbind callbacks
      by implementing the setup_tc ndo in any of their given netdevs. However,
      drivers may also be interested in binds to higher level devices (e.g.
      tunnel drivers) to potentially offload filters applied to them.
      
      Introduce indirect block devs, which allow drivers to register callbacks
      for block binds on other devices. The callback is triggered when the
      device is bound to a block, allowing the driver to register for rules
      applied to that block using already available functions.
      
      Freeing an indirect block callback will trigger an unbind event (if
      necessary) to direct the driver to remove any offloaded rules and
      unregister any block rule callbacks. It is the responsibility of the
      implementing driver to clean up any registered indirect block callbacks
      before exiting, if the block is still active at that time.
      
      Allow registering an indirect block dev callback for a device that is
      already bound to a block. In this case (if it is an ingress block),
      register and also trigger the callback, so that any already installed
      rules can be replayed to the calling driver.
      Signed-off-by: John Hurley <john.hurley@netronome.com>
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
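      The registration entry points, sketched (prototypes follow the
      description above and may differ in detail from the tree):

          typedef int tc_indr_block_bind_cb_t(struct net_device *dev,
                                              void *cb_priv,
                                              enum tc_setup_type type,
                                              void *type_data);

          int tc_indr_block_cb_register(struct net_device *dev, void *cb_priv,
                                        tc_indr_block_bind_cb_t *cb,
                                        void *cb_ident);
          void tc_indr_block_cb_unregister(struct net_device *dev,
                                           tc_indr_block_bind_cb_t *cb,
                                           void *cb_ident);

      A tunnel-offloading driver registers against a netdev it does not
      own (e.g. a vxlan device) and then receives the usual block
      bind/unbind commands for that device.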
  4. 09 Nov 2018 (2 commits)
  5. 26 Sep 2018 (5 commits)
  6. 15 Sep 2018 (1 commit)
  7. 11 Sep 2018 (3 commits)
  8. 31 Jul 2018 (2 commits)
    • net/tc: introduce TC_ACT_REINSERT. · cd11b164
      Paolo Abeni authored
      This is similar to TC_ACT_REDIRECT, but with a slightly different
      semantic:
      - on ingress the mirred skbs are passed to the target device
        network stack without any additional check or scrubbing.
      - the rcu-protected stats provided via the tcf_result struct
        are updated on error conditions.
      
      This new tcfa_action value is not exposed to user-space
      and can be used only internally by clsact.
      
      v1 -> v2: do not touch TC_ACT_REDIRECT code path, introduce
       a new action type instead
      v2 -> v3:
       - rename the new action value TC_ACT_REINJECT, update the
         helper accordingly
       - take care of uncloned reinjected packets in XDP generic
         hook
      v3 -> v4:
       - renamed the new action value again (JiriP)
      v4 -> v5:
       - fix build error with !NET_CLS_ACT (kbuild bot)
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
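      A sketch of the reinsert helper implementing these semantics:

          static inline void skb_tc_reinsert(struct sk_buff *skb,
                                             struct tcf_result *res)
          {
                  struct gnet_stats_queue *stats = res->qstats;
                  int ret;

                  if (res->ingress)
                          ret = netif_receive_skb(skb); /* no check/scrub */
                  else
                          ret = dev_queue_xmit(skb);
                  if (ret && stats)
                          qstats_overlimit_inc(res->qstats);
          }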
    • tc/act: remove unneeded RCU lock in action callback · 7fd4b288
      Paolo Abeni authored
      Each lockless action currently does its own RCU locking in ->act().
      This allows using the plain RCU accessors, even if the context
      is really RCU-BH.
      
      This change drops the per-action RCU lock, replaces the accessors
      with the _bh variant, cleans up the surrounding code a bit and
      documents the RCU status in the relevant header.
      No functional or performance change is intended.
      
      The goal of this patch is to clarify that the RCU critical section
      used by the tc actions extends up to the classifier's caller.
      
      v1 -> v2:
       - preserve rcu lock in act_bpf: it's needed by eBPF helpers,
         as pointed out by Daniel
      
      v3 -> v4:
       - fixed some typos in the commit message (JiriP)
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
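      The conversion pattern, sketched for a generic action's ->act()
      (names are illustrative):

          /* before: each action opened its own read-side section */
          rcu_read_lock();
          params = rcu_dereference(act->params);
          /* ... use params ... */
          rcu_read_unlock();

          /* after: the caller already holds RCU-BH across the whole
           * classify/act path, so the action just uses the _bh accessor
           */
          params = rcu_dereference_bh(act->params);
          /* ... use params ... */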
  9. 28 Jul 2018 (1 commit)
  10. 24 Jul 2018 (3 commits)
  11. 26 Jun 2018 (2 commits)
  12. 29 May 2018 (2 commits)
    • net: sched: add qstats.qlen to qlen · 6172abc1
      Jakub Kicinski authored
      AFAICT struct gnet_stats_queue.qlen is not used in Qdiscs.
      It may, however, be useful for offloads to report HW queue
      length there.  Add that value to the result of qdisc_qlen_sum().
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
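      A sketch of qdisc_qlen_sum() after the change:

          static inline int qdisc_qlen_sum(const struct Qdisc *q)
          {
                  __u32 qlen = q->qstats.qlen;  /* new: offload-reported */
                  int i;

                  if (q->flags & TCQ_F_NOLOCK) {
                          for_each_possible_cpu(i)
                                  qlen += per_cpu_ptr(q->cpu_qstats, i)->qlen;
                  } else {
                          qlen += q->q.qlen;
                  }
                  return qlen;
          }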
    • net: sched: shrink struct Qdisc · e9be0e99
      Paolo Abeni authored
      The struct Qdisc has a lot of holes, especially after commit
      a53851e2 ("net: sched: explicit locking in gso_cpu fallback"),
      which, as a side effect, moved the fields just after 'busylock'
      onto a new cacheline.
      
      Since both 'padded' and 'refcnt' are not updated frequently, and
      there is a hole before 'gso_skb', we can move such fields there,
      saving a cacheline without any performance side effect.
      
      Before this commit:
      
      pahole -C Qdisc net/sched/sch_generic.o
      	# ...
              /* size: 384, cachelines: 6, members: 25 */
              /* sum members: 236, holes: 3, sum holes: 92 */
              /* padding: 56 */
      
      After this commit:
      pahole -C Qdisc net/sched/sch_generic.o
      	# ...
      	/* size: 320, cachelines: 5, members: 25 */
      	/* sum members: 236, holes: 2, sum holes: 28 */
      	/* padding: 56 */
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 18 May 2018 (1 commit)
  14. 17 May 2018 (1 commit)
    • sched: manipulate __QDISC_STATE_RUNNING in qdisc_run_* helpers · 32f7b44d
      Paolo Abeni authored
      Currently NOLOCK qdiscs pay a measurable overhead to atomically
      manipulate the __QDISC_STATE_RUNNING bit. That bit is flipped twice per
      packet in the uncontended scenario with packet rate below the
      line rate: on packet dequeue and on the next, failing dequeue attempt.
      
      This changeset moves the bit manipulation into the qdisc_run_{begin,end}
      helpers, so that the bit is now flipped only once per packet, with
      measurable performance improvement in the uncontended scenario.
      
      This also allows simplifying the qdisc teardown code path - since
      qdisc_is_running() is now effective for each qdisc type - and avoids a
      possible race between qdisc_run() and dev_deactivate_many(), as
      some_qdisc_is_busy() can now properly detect NOLOCK qdiscs being busy
      dequeuing packets.
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
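      A hedged sketch of the reworked helpers (seqcount details simplified):

          static inline bool qdisc_run_begin(struct Qdisc *qdisc)
          {
                  if (qdisc->flags & TCQ_F_NOLOCK) {
                          if (test_and_set_bit(__QDISC_STATE_RUNNING,
                                               &qdisc->state))
                                  return false; /* another CPU is dequeuing */
                  } else if (qdisc_is_running(qdisc)) {
                          return false;
                  }
                  write_seqcount_begin(&qdisc->running);
                  return true;
          }

          static inline void qdisc_run_end(struct Qdisc *qdisc)
          {
                  write_seqcount_end(&qdisc->running);
                  if (qdisc->flags & TCQ_F_NOLOCK)
                          clear_bit(__QDISC_STATE_RUNNING, &qdisc->state);
          }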
  15. 27 Mar 2018 (1 commit)
    • net: sched, fix OOO packets with pfifo_fast · eb82a994
      John Fastabend authored
      After the qdisc lock was dropped in pfifo_fast we allow multiple
      enqueue threads and dequeue threads to run in parallel. On the
      enqueue side the skb bit ooo_okay is used to ensure all related
      skbs are enqueued in-order. On the dequeue side though there is
      no similar logic. What we observe is that, with fewer queues than
      CPUs, it is possible to re-order packets when two instances of
      __qdisc_run() are running in parallel. Each thread will dequeue
      a skb, and whichever thread calls the ndo op first gets its skb
      sent on the wire. This doesn't typically happen because
      qdisc_run() is usually triggered by the same core that did the
      enqueue. However, drivers will trigger __netif_schedule()
      when queues are transitioning from stopped to awake using the
      netif_tx_wake_* APIs. When this happens, netif_schedule() calls
      qdisc_run() on the same CPU that did the netif_tx_wake_*, which
      is usually done in the interrupt completion context. This CPU
      is selected with the irq affinity which is unrelated to the
      enqueue operations.
      
      To resolve this we add a RUNNING bit to the qdisc to ensure
      only a single dequeue per qdisc is running. Enqueue and dequeue
      operations can still run in parallel and also on multi queue
      NICs we can still have a dequeue in-flight per qdisc, which
      is typically per CPU.
      
      Fixes: c5ad119f ("net: sched: pfifo_fast use skb_array")
      Reported-by: Jakob Unterwurzacher <jakob.unterwurzacher@theobroma-systems.com>
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 08 Mar 2018 (1 commit)
    • sch_netem: fix skb leak in netem_enqueue() · 35d889d1
      Alexey Kodanev authored
      When we exceed the current packet limit and we have more than one
      segment in the list returned by skb_gso_segment(), netem drops
      only the first one, skipping the rest, hence kmemleak reports:
      
      unreferenced object 0xffff880b5d23b600 (size 1024):
        comm "softirq", pid 0, jiffies 4384527763 (age 2770.629s)
        hex dump (first 32 bytes):
          00 80 23 5d 0b 88 ff ff 00 00 00 00 00 00 00 00  ..#]............
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
        backtrace:
          [<00000000d8a19b9d>] __alloc_skb+0xc9/0x520
          [<000000001709b32f>] skb_segment+0x8c8/0x3710
          [<00000000c7b9bb88>] tcp_gso_segment+0x331/0x1830
          [<00000000c921cba1>] inet_gso_segment+0x476/0x1370
          [<000000008b762dd4>] skb_mac_gso_segment+0x1f9/0x510
          [<000000002182660a>] __skb_gso_segment+0x1dd/0x620
          [<00000000412651b9>] netem_enqueue+0x1536/0x2590 [sch_netem]
          [<0000000005d3b2a9>] __dev_queue_xmit+0x1167/0x2120
          [<00000000fc5f7327>] ip_finish_output2+0x998/0xf00
          [<00000000d309e9d3>] ip_output+0x1aa/0x2c0
          [<000000007ecbd3a4>] tcp_transmit_skb+0x18db/0x3670
          [<0000000042d2a45f>] tcp_write_xmit+0x4d4/0x58c0
          [<0000000056a44199>] tcp_tasklet_func+0x3d9/0x540
          [<0000000013d06d02>] tasklet_action+0x1ca/0x250
          [<00000000fcde0b8b>] __do_softirq+0x1b4/0x5a3
          [<00000000e7ed027c>] irq_exit+0x1e2/0x210
      
      Fix it by adding the rest of the segments, if any, to the skb 'to_free'
      list. Add new __qdisc_drop_all() and qdisc_drop_all() functions
      because they can be useful in the future if we need to drop segmented
      GSO packets in other places.
      
      Fixes: 6071bd1a ("netem: Segment GSO packets on enqueue")
      Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
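      A sketch of the new helpers (netem links the GSO segments via
      skb->next, with skb->prev caching the tail of the list):

          static inline void __qdisc_drop_all(struct sk_buff *skb,
                                              struct sk_buff **to_free)
          {
                  if (skb->prev)
                          skb->prev->next = *to_free; /* whole segment list */
                  else
                          skb->next = *to_free;
                  *to_free = skb;
          }

          static inline int qdisc_drop_all(struct sk_buff *skb,
                                           struct Qdisc *sch,
                                           struct sk_buff **to_free)
          {
                  __qdisc_drop_all(skb, to_free);
                  qdisc_qstats_drop(sch);
                  return NET_XMIT_DROP;
          }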
  17. 02 Mar 2018 (1 commit)
  18. 30 Jan 2018 (1 commit)
    • net_sched: plug in qdisc ops change_tx_queue_len · 48bfd55e
      Cong Wang authored
      Introduce a new qdisc ops ->change_tx_queue_len() so that
      each qdisc could decide how to implement this if it wants.
      Previously we simply read dev->tx_queue_len; after pfifo_fast
      switched to skb array, we need this API to resize the skb array
      when we change dev->tx_queue_len.
      
      To avoid handling race conditions with TX BH, we need to
      deactivate all TX queues before changing the value and bring them
      back after we are done; this also makes the implementation easier.
      
      Cc: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
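      A hedged sketch of the hook and the core helper that walks the
      (deactivated) TX queues; exact names follow the description above:

          struct Qdisc_ops {
                  /* ... */
                  int (*change_tx_queue_len)(struct Qdisc *, unsigned int);
          };

          /* called with all TX queues deactivated */
          int dev_qdisc_change_tx_queue_len(struct net_device *dev)
          {
                  int i, ret = 0;

                  for (i = 0; i < dev->num_tx_queues; i++) {
                          struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
                          struct Qdisc *qdisc = txq->qdisc_sleeping;

                          if (qdisc->ops->change_tx_queue_len)
                                  ret = qdisc->ops->change_tx_queue_len(
                                          qdisc, dev->tx_queue_len);
                          if (ret)
                                  break;
                  }
                  return ret;
          }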
  19. 25 Jan 2018 (1 commit)
  20. 20 Jan 2018 (2 commits)