1. 29 Jun 2019, 1 commit
    • net: sched: refactor reinsert action · 720f22fe
      John Hurley authored
      The TC_ACT_REINSERT return type was added as an in-kernel only option to
      allow a packet ingress or egress redirect. This is used to avoid
      unnecessary skb clones in situations where they are not required. If a TC
      hook returns this code then the packet is 'reinserted' and no skb consume
      is carried out as no clone took place.
      
      This return type is only used in act_mirred. Rather than have the reinsert
      called from the main datapath, call it directly in act_mirred. Instead of
      returning TC_ACT_REINSERT, change the type to the new TC_ACT_CONSUMED
      which tells the caller that the packet has been stolen by another process
      and that no consume call is required.
      
      Moving all redirect calls to the act_mirred code is in preparation for
      tracking recursion created by act_mirred.
      Signed-off-by: John Hurley <john.hurley@netronome.com>
      Reviewed-by: Simon Horman <simon.horman@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
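
      A minimal caller-side sketch of the new contract (illustrative only:
      handle_tc_verdict() is a hypothetical function, while the TC_ACT_*
      codes and skb helpers are the real kernel ones):

        #include <net/sch_generic.h>

        /* On TC_ACT_CONSUMED the packet was stolen (e.g. redirected by
         * act_mirred without a clone), so the caller must not free it;
         * on TC_ACT_SHOT the caller still owns the skb and drops it. */
        static int handle_tc_verdict(struct sk_buff *skb, int verdict)
        {
                switch (verdict) {
                case TC_ACT_CONSUMED:
                        return 0;        /* skb stolen: no consume_skb() */
                case TC_ACT_SHOT:
                        kfree_skb(skb);  /* drop: we still own the skb */
                        return -1;
                default:
                        return 1;        /* continue normal processing */
                }
        }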
  2. 24 Apr 2019, 1 commit
  3. 11 Apr 2019, 4 commits
    • Revert: "net: sched: put back q.qlen into a single location" · 73eb628d
      Paolo Abeni authored
      This reverts commit 46b1c18f ("net: sched: put back q.qlen into
      a single location").
      After the previous patch, when a NOLOCK qdisc is enslaved to a
      locking qdisc it switches to global stats accounting. As a consequence,
      when a classful qdisc directly accesses a child qdisc's qlen, that
      qdisc is not doing per-CPU accounting and the qlen value is consistent.

      In the control path nobody uses qlen directly since commit
      e5f0e8f8 ("net: sched: introduce and use qdisc tree flush/purge
      helpers"), so we can remove the contended atomic ops from the
      datapath.
      
      v1 -> v2:
       - complete the qdisc_qstats_atomic_qlen_dec() ->
         qdisc_qstats_cpu_qlen_dec() replacement, fix build issue
       - more descriptive commit message
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
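
      A hedged sketch of what the revert buys (the helper below is
      illustrative, not from the kernel tree): with the previous patch in
      the series applied, a locked classful parent can again read a child's
      qlen field directly.

        #include <net/sch_generic.h>

        /* The child either runs under the parent's qdisc lock or, if it
         * was a NOLOCK qdisc, has been switched to global stats
         * accounting, so q.qlen is consistent here. */
        static bool child_has_backlog(const struct Qdisc *child)
        {
                return child->q.qlen > 0;
        }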
    • net: sched: when clearing NOLOCK, clear TCQ_F_CPUSTATS, too · 8a53e616
      Paolo Abeni authored
      Since stats updating is always consistent with the TCQ_F_CPUSTATS flag,
      we can disable it at qdisc creation time by flipping that bit.

      In my experiments, if the NOLOCK flag is cleared, per-CPU stats
      accounting does not give any measurable performance gain, but it
      wastes some memory.

      Let's clear TCQ_F_CPUSTATS together with NOLOCK, when enslaving
      a NOLOCK qdisc to a 'locked' one.

      Use the stats update helper inside pfifo_fast to cope correctly with
      TCQ_F_CPUSTATS flag changes.

      As a side effect, the q.qlen value for any child qdisc is always
      consistent for all locked classful qdiscs.
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
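
      From memory, the flag handling reduces to something like the sketch
      below; treat the exact function body as an assumption rather than a
      verbatim copy of the patch.

        #include <net/sch_generic.h>

        /* Clear the per-CPU-stats bit together with NOLOCK when enslaving
         * a NOLOCK qdisc to a locked parent, so that all later stats
         * updates follow the single TCQ_F_CPUSTATS flag. */
        static inline void qdisc_clear_nolock(struct Qdisc *sch)
        {
                sch->flags &= ~(TCQ_F_NOLOCK | TCQ_F_CPUSTATS);
        }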
    • net: sched: always do stats accounting according to TCQ_F_CPUSTATS · 9c01c9f1
      Paolo Abeni authored
      The core sched implementation checks independently for the NOLOCK flag
      to acquire/release the root spin lock and for qdisc_is_percpu_stats()
      to account per-CPU values in many places.

      This change updates the last few places checking TCQ_F_NOLOCK to
      do per-CPU stats accounting according to the qdisc_is_percpu_stats()
      value.

      The above allows cleaning up the dev_requeue_skb() implementation a bit
      and makes stats updates always consistent with a single flag.
      
      v1 -> v2:
       - do not move qdisc_is_empty definition, fix build issue
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
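
      The resulting pattern, sketched with the stats helpers from
      include/net/sch_generic.h of this era (the wrapper function itself is
      illustrative):

        #include <net/sch_generic.h>

        /* Every update branches on qdisc_is_percpu_stats() alone, so the
         * accounting always matches the TCQ_F_CPUSTATS flag. */
        static void qdisc_account_dequeue(struct Qdisc *sch,
                                          struct sk_buff *skb)
        {
                if (qdisc_is_percpu_stats(sch)) {
                        qdisc_qstats_cpu_backlog_dec(sch, skb);
                        qdisc_qstats_cpu_qlen_dec(sch);
                } else {
                        qdisc_qstats_backlog_dec(sch, skb);
                        sch->q.qlen--;
                }
        }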
    • net: sched: prefer qdisc_is_empty() over direct qlen access · 1f5e6fdd
      Paolo Abeni authored
      When checking the root qdisc queue length, do not access q.qlen
      directly: in the following patches we will move qlen accounting back to
      per-CPU values for NOLOCK qdiscs.

      Instead, prefer the qdisc_is_empty() helper.
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
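
      A minimal sketch of the preferred check (qdisc_is_empty() is the real
      helper; the surrounding function is illustrative):

        #include <net/sch_generic.h>

        /* The helper hides whether qlen lives in a single field or in
         * per-CPU counters, so the check keeps working when NOLOCK qdiscs
         * move back to per-CPU qlen accounting. */
        static bool root_qdisc_idle(const struct Qdisc *q)
        {
                return qdisc_is_empty(q);   /* not: q->q.qlen == 0 */
        }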
  4. 02 Apr 2019, 2 commits
    • net: sched: introduce and use qdisc tree flush/purge helpers · e5f0e8f8
      Paolo Abeni authored
      The same code to flush the qdisc tree and purge the qdisc queue
      is duplicated in many places, and in most cases it does not
      respect NOLOCK qdiscs: the global backlog length is used and the
      per-CPU values are ignored.

      This change addresses the above, factoring out the relevant
      code and using the helpers introduced by the previous patch
      to fetch the correct backlog length.
      
      Fixes: c5ad119f ("net: sched: pfifo_fast use skb_array")
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
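
      Reconstructed from memory, the two helpers have roughly this shape
      (an approximation of the commit, not a verbatim copy):

        #include <net/sch_generic.h>

        /* Propagate the backlog still sitting in sch up the tree. */
        static inline void qdisc_tree_flush_backlog(struct Qdisc *sch)
        {
                __u32 qlen, backlog;

                qdisc_qstats_qlen_backlog(sch, &qlen, &backlog);
                qdisc_tree_reduce_backlog(sch, qlen, backlog);
        }

        /* Reset sch, then propagate the freed qlen/backlog up the tree. */
        static inline void qdisc_purge_queue(struct Qdisc *sch)
        {
                __u32 qlen, backlog;

                qdisc_qstats_qlen_backlog(sch, &qlen, &backlog);
                qdisc_reset(sch);
                qdisc_tree_reduce_backlog(sch, qlen, backlog);
        }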
    • net: sched: introduce and use qstats read helpers · 5dd431b6
      Paolo Abeni authored
      Classful qdiscs can't access a child qdisc's backlog
      length directly: if the child qdisc is NOLOCK, the per-CPU values
      should be accounted instead.

      Most qdiscs do not respect the above. As a result, qstats fetching
      for most classful qdiscs is currently incorrect: if the child qdisc is
      NOLOCK, it always reports a zero-length backlog.

      This change introduces a pair of helpers to safely fetch
      both backlog and qlen and uses them in class stats dumping
      functions, fixing the above issue and cleaning up the code a bit.

      DRR also needs to access the child qdisc queue length, so it
      needs custom handling.
      
      Fixes: c5ad119f ("net: sched: pfifo_fast use skb_array")
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
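
      The read helper hides the per-CPU vs. global distinction behind one
      call; sketched from memory of the commit (treat the details as
      approximate):

        #include <net/sch_generic.h>

        /* Fetch qlen and backlog, honoring per-CPU stats for NOLOCK
         * qdiscs instead of reading the global fields directly. */
        static inline void qdisc_qstats_qlen_backlog(struct Qdisc *sch,
                                                     __u32 *qlen,
                                                     __u32 *backlog)
        {
                struct gnet_stats_queue qstats = { 0 };
                __u32 len = qdisc_qlen_sum(sch);

                __gnet_stats_copy_queue(&qstats, sch->cpu_qstats,
                                        &sch->qstats, len);
                *qlen = qstats.qlen;
                *backlog = qstats.backlog;
        }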
  5. 24 Mar 2019, 1 commit
  6. 22 Mar 2019, 1 commit
    • net/sched: let actions use RCU to access 'goto_chain' · ee3bbfe8
      Davide Caratti authored
      Use RCU when accessing the action chain, to avoid a use-after-free in
      the traffic path when 'goto chain' is replaced on existing TC actions
      (see the script below). Since the control action is read in the traffic
      path without holding the action spinlock, we need to explicitly ensure
      that a->goto_chain is not NULL before dereferencing it (i.e. it's not
      sufficient to rely on the value of the TC_ACT_GOTO_CHAIN bits). Not
      doing so caused NULL dereferences in tcf_action_goto_chain_exec() when
      the following script:
      
       # tc chain add dev dd0 chain 42 ingress protocol ip flower \
       > ip_proto udp action pass index 4
       # tc filter add dev dd0 ingress protocol ip flower \
       > ip_proto udp action csum udp goto chain 42 index 66
       # tc chain del dev dd0 chain 42 ingress
       (start UDP traffic towards dd0)
       # tc action replace action csum udp pass index 66
      
      was run repeatedly for several hours.
      Suggested-by: Cong Wang <xiyou.wangcong@gmail.com>
      Suggested-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: Davide Caratti <dcaratti@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
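
      Pattern-wise the fix is the usual read-once-then-NULL-check RCU idiom;
      a hedged sketch, not the exact tcf_action_goto_chain_exec() body:

        #include <net/pkt_cls.h>
        #include <net/act_api.h>

        /* Read the RCU-protected pointer once and NULL-check it before
         * dereferencing: a concurrent 'tc action replace' may clear
         * a->goto_chain, and the TC_ACT_GOTO_CHAIN bits alone are not a
         * sufficient guarantee. */
        static int goto_chain_sketch(const struct tc_action *a,
                                     struct tcf_result *res)
        {
                const struct tcf_chain *chain;

                chain = rcu_dereference_bh(a->goto_chain);
                if (unlikely(!chain))
                        return TC_ACT_SHOT;  /* replaced concurrently */

                res->goto_tp = rcu_dereference_bh(chain->filter_chain);
                return TC_ACT_UNSPEC;        /* illustrative continuation */
        }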
  7. 03 Mar 2019, 1 commit
    • net: sched: put back q.qlen into a single location · 46b1c18f
      Eric Dumazet authored
      In the series fc8b81a5 ("Merge branch 'lockless-qdisc-series'")
      John made the assumption that the data path had no need to read
      the qdisc qlen (number of packets in the qdisc).
      
      It is true when pfifo_fast is used as the root qdisc, or as a direct
      MQ/MQPRIO child.
      
      But pfifo_fast can be used as a leaf in classful qdiscs, and existing
      logic needs to access the child qlen in an efficient way.
      
      HTB breaks badly, since it uses cl->leaf.q->q.qlen in:
        htb_activate() -> WARN_ON()
        htb_dequeue_tree(), to decide if a class can be htb_deactivate()'d
        when it has no more packets.
      
      HFSC, DRR, CBQ, QFQ have similar issues, and some calls to
      qdisc_tree_reduce_backlog() also read q.qlen directly.
      
      Using qdisc_qlen_sum() (which iterates over all possible CPUs)
      in the data path is a non-starter.
      
      It seems we have to put back qlen in a central location,
      at least for stable kernels.
      
      For all qdiscs but pfifo_fast, qlen is guarded by the qdisc lock,
      so the existing q.qlen{++|--} are correct.

      For 'lockless' qdiscs (pfifo_fast so far), we need to use atomic_{inc|dec}()
      because the spinlock might not be held (for example from
      pfifo_fast_enqueue() and pfifo_fast_dequeue()).
      
      This patch adds atomic_qlen (in the same location as qlen)
      and renames the following helpers, since we want to express that
      they can be used without the qdisc lock, and that qlen is no longer
      per-CPU:

      - qdisc_qstats_cpu_qlen_dec() -> qdisc_qstats_atomic_qlen_dec()
      - qdisc_qstats_cpu_qlen_inc() -> qdisc_qstats_atomic_qlen_inc()
      
      Later (net-next) we might revert this patch by tracking all these
      qlen uses and replacing them with a more efficient method (not having
      to access a precise qlen, but an empty/non-empty status that might
      be less expensive to maintain/track).
      
      Another possibility is to have a legacy pfifo_fast version that would
      be used when acting as a child qdisc, since the parent qdisc needs
      a spinlock anyway. But then, future lockless qdiscs would also
      have the same problem.
      
      Fixes: 7e66016f ("net: sched: helpers to sum qlen and qlen for per cpu logic")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: John Fastabend <john.fastabend@gmail.com>
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: David S. Miller <davem@davemloft.net>
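
      From memory, the renamed helpers reduce to plain atomic ops on the new
      field (approximate, not verbatim):

        #include <net/sch_generic.h>

        /* q.atomic_qlen is shared across CPUs, so lockless qdiscs such as
         * pfifo_fast must update it atomically; locked qdiscs keep using
         * the plain q.qlen counter under the qdisc spinlock. */
        static inline void qdisc_qstats_atomic_qlen_inc(struct Qdisc *sch)
        {
                atomic_inc(&sch->q.atomic_qlen);
        }

        static inline void qdisc_qstats_atomic_qlen_dec(struct Qdisc *sch)
        {
                atomic_dec(&sch->q.atomic_qlen);
        }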
  8. 13 Feb 2019, 8 commits
    • net: sched: add flags to Qdisc class ops struct · dfcd2a2b
      Vlad Buslov authored
      Extend Qdisc_class_ops with flags. Create an enum to hold possible
      class ops flag values. Add the first class ops flag value,
      QDISC_CLASS_OPS_DOIT_UNLOCKED, to indicate that class ops functions
      can be called without taking the rtnl lock.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
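
      The shape of the extension, sketched from the commit description (the
      field placement within Qdisc_class_ops is an assumption):

        /* Flag values for Qdisc_class_ops.flags. */
        enum qdisc_class_ops_flags {
                QDISC_CLASS_OPS_DOIT_UNLOCKED = 1,
        };

        struct Qdisc_class_ops {
                unsigned int flags;  /* QDISC_CLASS_OPS_* */
                /* ... existing callbacks (graft, leaf, find, change,
                 * dump, walk, ...) unchanged ... */
        };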
    • net: sched: extend proto ops to support unlocked classifiers · 12db03b6
      Vlad Buslov authored
      Add an 'rtnl_held' flag to the tcf proto change, delete, destroy, dump,
      and walk functions to track rtnl lock status. Extend users of these
      functions in the cls API to propagate rtnl lock status to them. This
      allows classifiers to obtain the rtnl lock when necessary and to pass
      rtnl lock status to extensions and driver offload callbacks.

      Add a flags field to tcf proto ops, and a flag value to indicate that a
      classifier doesn't require the rtnl lock.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sched: extend proto ops with 'put' callback · 7d5509fa
      Vlad Buslov authored
      Add an optional tp->ops->put() API to be implemented for filter
      reference counting. This new function is called by the cls API to
      release the filter reference for filters returned by tp->ops->change()
      or tp->ops->get(). Implement a tfilter_put() helper that calls
      tp->ops->put() only for classifiers that implement it.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
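
      The optional-callback pattern is small enough to sketch in full; this
      follows the commit description, with the exact body reconstructed from
      memory:

        #include <net/sch_generic.h>

        /* Release a filter reference only for classifiers implementing the
         * optional put() op; all others keep their existing lifetime
         * handling. */
        static void tfilter_put(struct tcf_proto *tp, void *fh)
        {
                if (tp->ops->put && fh)
                        tp->ops->put(tp, fh);
        }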
    • net: sched: prevent insertion of new classifiers during chain flush · 726d0612
      Vlad Buslov authored
      Extend tcf_chain with a 'flushing' flag. Use the flag to prevent
      insertion of new classifier instances while chain flushing is in
      progress, in order to prevent a resource leak when a tcf_proto is
      created concurrently by unlocked users.

      Return an EAGAIN error from tcf_chain_tp_insert_unique() to restart
      tc_new_tfilter() and look up the chain/proto again.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sched: refactor tp insert/delete for concurrent execution · 8b64678e
      Vlad Buslov authored
      Implement a unique insertion function to atomically attach a tcf_proto
      to a chain after verifying that no other tcf proto with the specified
      priority exists. Implement a delete function that verifies that the tp
      is actually empty before deleting it. Use these functions to refactor
      the cls API to account for concurrent tp and rule updates instead of
      relying on the rtnl lock. Add a new 'deleting' flag to tcf proto. Use
      it to restart the search when iterating over tp's on a chain, to
      prevent accessing a potentially invalid tp->next pointer.

      Extend tcf proto with a spinlock that is intended to protect its
      data from concurrent modification instead of relying on the rtnl
      mutex. Use it to protect the 'deleting' flag. Add lockdep macros to
      validate that the lock is held when accessing protected fields.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sched: introduce reference counting for tcf_proto · 4dbfa766
      Vlad Buslov authored
      In order to remove the dependency on the rtnl lock and allow concurrent
      tcf_proto modification, extend tcf_proto with a reference counter.
      Implement helper get/put functions for tcf proto and use them to modify
      the cls API to always take a reference to a tcf_proto while using it.
      Only release the reference to the parent chain after releasing the last
      reference to the tp.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
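
      The get/put pair follows the standard refcount_t idiom; a hedged
      sketch (the destroy path's rtnl/extack details are elided):

        #include <linux/refcount.h>
        #include <net/sch_generic.h>

        static void tcf_proto_get(struct tcf_proto *tp)
        {
                refcount_inc(&tp->refcnt);
        }

        static void tcf_proto_put(struct tcf_proto *tp)
        {
                /* tcf_proto_destroy() frees the tp and only then drops
                 * the reference the tp holds on its parent chain. */
                if (refcount_dec_and_test(&tp->refcnt))
                        tcf_proto_destroy(tp);
        }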
    • net: sched: protect filter_chain list with filter_chain_lock mutex · ed76f5ed
      Vlad Buslov authored
      Extend tcf_chain with a new filter_chain_lock mutex. Always take the
      lock when accessing the filter_chain list, instead of relying on the
      rtnl lock. Dereference filter_chain with the tcf_chain_dereference()
      lockdep macro to verify that all users of the chain list have the lock
      taken.

      Rearrange the tp insert/remove code in tc_new_tfilter/tc_del_tfilter to
      execute all necessary code while holding the chain lock, in order to
      prevent invalidation of the chain_info structure by a potential
      concurrent change. This also serializes calls to
      tcf_chain0_head_change(), which allows head change callbacks to rely on
      filter_chain_lock for synchronization instead of the rtnl mutex.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sched: protect block state with mutex · c266f64d
      Vlad Buslov authored
      Currently, tcf_block doesn't use any synchronization mechanisms to
      protect the critical sections that manage the lifetime of its chains.
      block->chain_list and multiple variables in tcf_chain that control its
      lifetime assume external synchronization provided by the global rtnl
      lock. Converting chain reference counting to atomic reference counters
      is not possible because the cls API uses multiple counters and flags to
      control chain lifetime, so all of them must be synchronized in the
      chain get/put code.

      Use a single per-block lock to protect block data and manage the
      lifetime of all chains on the block. Always take block->lock when
      accessing chain_list. Chain get and put modify chain lifetime-management
      data and the parent block's chain_list, so take the lock in these
      functions. Verify block->lock state with assertions in functions that
      expect to be called with the lock taken and are called from multiple
      places. Take block->lock when accessing filter_chain_list.

      In order to allow parallel updates of rules on a single block, move all
      calls to classifiers outside of the critical sections protected by the
      new block->lock. Rearrange the chain get and put functions to only
      access protected chain data while holding the block lock:
      - Rearrange code to only access the chain reference counter and chain
        action reference counter while holding the block lock.
      - Extract the code that requires block->lock from tcf_chain_destroy()
        into a standalone tcf_chain_destroy() function that is called by
        __tcf_chain_put() in the same critical section that changes the chain
        reference counters.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
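
      The locking discipline can be summarized by the assertion pattern the
      commit describes; an illustrative sketch (the helper name is
      hypothetical):

        #include <linux/lockdep.h>
        #include <net/sch_generic.h>

        /* Chain lifetime-management fields are only touched with
         * block->lock held; shared helpers assert this. */
        static void tcf_chain_hold_sketch(struct tcf_chain *chain)
        {
                lockdep_assert_held(&chain->block->lock);
                ++chain->refcnt;
        }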
  9. 20 Jan 2019, 1 commit
  10. 12 Nov 2018, 1 commit
    • net: sched: register callbacks for indirect tc block binds · 7f76fa36
      John Hurley authored
      Currently, drivers can register to receive TC block bind/unbind
      callbacks by implementing the setup_tc ndo in any of their given
      netdevs. However, drivers may also be interested in binds to higher
      level devices (e.g. tunnel drivers) to potentially offload filters
      applied to them.

      Introduce indirect block devs, which allow drivers to register
      callbacks for block binds on other devices. The callback is triggered
      when the device is bound to a block, allowing the driver to register
      for rules applied to that block using already available functions.

      Freeing an indirect block callback will trigger an unbind event (if
      necessary) to direct the driver to remove any offloaded rules and
      unregister any block rule callbacks. It is the responsibility of the
      implementing driver to clean up any registered indirect block callbacks
      before exiting, if the block is still active at that time.

      Allow registering an indirect block dev callback for a device that is
      already bound to a block. In this case (if it is an ingress block),
      register and also trigger the callback, meaning that any already
      installed rules can be replayed to the calling driver.
      Signed-off-by: John Hurley <john.hurley@netronome.com>
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
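
      Driver-side usage looks roughly as follows; the callback shape matches
      the tc_indr_block_bind_cb_t typedef of this era, but treat the exact
      registration prototype as reconstructed from memory:

        #include <net/pkt_cls.h>

        /* Called when a TC block is bound to the watched foreign device
         * (e.g. a tunnel netdev); the driver can then attach per-block
         * rule callbacks through the usual block offload path. */
        static int my_indr_setup_cb(struct net_device *dev, void *cb_priv,
                                    enum tc_setup_type type, void *type_data)
        {
                if (type != TC_SETUP_BLOCK)
                        return -EOPNOTSUPP;
                /* inspect the struct tc_block_offload in type_data and
                 * register/unregister block rule callbacks here */
                return 0;
        }

        /* at probe time (prototype reconstructed from memory):
         *   err = __tc_indr_block_cb_register(netdev, priv,
         *                                     my_indr_setup_cb, priv);
         */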
  11. 09 Nov 2018, 2 commits
  12. 26 Sep 2018, 5 commits
  13. 15 Sep 2018, 1 commit
  14. 11 Sep 2018, 3 commits
  15. 31 Jul 2018, 2 commits
    • net/tc: introduce TC_ACT_REINSERT. · cd11b164
      Paolo Abeni authored
      This is similar to TC_ACT_REDIRECT, but with a slightly different
      semantic:
      - on ingress the mirred skbs are passed to the target device's
        network stack without any additional check nor scrubbing.
      - the rcu-protected stats provided via the tcf_result struct
        are updated on error conditions.

      This new tcfa_action value is not exposed to user space
      and can be used only internally by clsact.
      
      v1 -> v2: do not touch TC_ACT_REDIRECT code path, introduce
       a new action type instead
      v2 -> v3:
       - rename the new action value TC_ACT_REINJECT, update the
         helper accordingly
       - take care of uncloned reinjected packets in XDP generic
         hook
      v3 -> v4:
       - renamed again the new action value (JiriP)
      v4 -> v5:
       - fix build error with !NET_CLS_ACT (kbuild bot)
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
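
      The helper backing this, reconstructed from memory (roughly the shape
      added to include/net/sch_generic.h; treat it as approximate):

        /* Hand the skb straight to the target device's stack -- no clone,
         * no scrubbing -- and bump the rcu-protected qstats on error. */
        static inline void skb_tc_reinsert(struct sk_buff *skb,
                                           struct tcf_result *res)
        {
                struct gnet_stats_queue *stats = res->qstats;
                int ret;

                if (res->ingress)
                        ret = netif_receive_skb(skb);
                else
                        ret = dev_queue_xmit(skb);
                if (ret && stats)
                        qstats_overlimit_inc(res->qstats);
        }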
    • tc/act: remove unneeded RCU lock in action callback · 7fd4b288
      Paolo Abeni authored
      Each lockless action currently does its own RCU locking in ->act().
      This allows using plain RCU accessors, even if the context
      is really RCU BH.

      This change drops the per-action RCU lock, replaces the accessors
      with the _bh variants, cleans up the surrounding code a bit, and
      documents the RCU status in the relevant header.
      No functional nor performance change is intended.

      The goal of this patch is to clarify that the RCU critical section
      used by the tc actions extends up to the classifier's caller.
      
      v1 -> v2:
       - preserve rcu lock in act_bpf: it's needed by eBPF helpers,
         as pointed out by Daniel
      
      v3 -> v4:
       - fixed some typos in the commit message (JiriP)
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
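
      After the change, an action's ->act() body reads its parameters with
      the _bh accessor and takes no RCU lock of its own; a hedged sketch
      with hypothetical struct and field names:

        #include <net/act_api.h>

        struct demo_params {                 /* hypothetical, RCU-managed */
                int action;
        };

        struct demo_action {                 /* hypothetical action type */
                struct tc_action common;
                struct demo_params __rcu *params;
        };

        /* The caller already runs ->act() inside an RCU BH critical
         * section, so no rcu_read_lock()/unlock() pair here; the _bh
         * accessor documents that context instead. */
        static int demo_act(struct sk_buff *skb, const struct tc_action *a,
                            struct tcf_result *res)
        {
                struct demo_action *d = (struct demo_action *)a;
                struct demo_params *p = rcu_dereference_bh(d->params);

                return p ? p->action : TC_ACT_SHOT;
        }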
  16. 28 Jul 2018, 1 commit
  17. 24 Jul 2018, 3 commits
  18. 26 Jun 2018, 2 commits