1. 29 Sep 2015 — 17 commits
  2. 22 Sep 2015 — 18 commits
  3. 28 Aug 2015 — 1 commit
  4. 27 Aug 2015 — 4 commits
    • A
      bpf: fix bpf_skb_set_tunnel_key() helper · 1dd34b5a
      Committed by Alexei Starovoitov
      Make sure to indicate to tunnel driver that key.tun_id is set,
      otherwise gre won't recognize the metadata.
      
      Fixes: d3aa45ce ("bpf: add helpers to access tunnel metadata")
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1dd34b5a
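The fix above is about marking the tunnel id as valid so that consumers such as gre recognize it. A minimal userspace sketch of that idea, using hypothetical stand-in types (the real kernel uses struct ip_tunnel_info / struct ip_tunnel_key and the TUNNEL_KEY flag from include/net/ip_tunnels.h; the flag value below is illustrative):

```c
#include <stdint.h>
#include <assert.h>

#define TUNNEL_KEY 0x04  /* "key present" flag; value illustrative */

struct tun_key {
    uint64_t tun_id;
    uint16_t tun_flags;
};

/* Before the fix: tun_id was written but TUNNEL_KEY was never set,
 * so a consumer like gre treated the id as absent. */
static void set_tunnel_key(struct tun_key *key, uint64_t id)
{
    key->tun_id = id;
    key->tun_flags = TUNNEL_KEY;  /* the essence of the fix: mark the id valid */
}

static int consumer_sees_key(const struct tun_key *key)
{
    return (key->tun_flags & TUNNEL_KEY) != 0;
}
```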
    • D
      Merge branch 'act_bpf_lockless' · 8c5bbe77
      Committed by David S. Miller
      Alexei Starovoitov says:
      
      ====================
      act_bpf: remove spinlock in fast path
      
      The v1 version had a race condition in the cleanup path of bpf_prog.
      I tried to fix it by adding a new callback 'cleanup_rcu' to 'struct tcf_common'
      and calling it outside the act_api cleanup path, but Daniel noticed
      (thanks for the idea!) that most of the classifiers already do action cleanup
      out of an rcu callback.
      So instead this set of patches converts the tcindex and rsvp classifiers to call
      tcf_exts_destroy() after an rcu grace period, and since the action cleanup logic
      in __tcf_hash_release() only runs when bind and refcnt both reach zero,
      the cleanup() callback is guaranteed to be called from an rcu callback.
      More specifically:
      patches 1 and 2 - simple fixes
      patches 3 and 4 - convert tcf_exts_destroy in tcindex and rsvp to call_rcu
      patch 5 - removes the spin_lock from act_bpf
      
      The cleanup of actions is now universally done after an rcu grace period,
      and in the future we can drop the (now unnecessary) call_rcu from tcf_hash_destroy().
      Patch 5 uses synchronize_rcu() in the act_bpf replacement path, since replacement is
      very rare and the alternative of dynamically allocating a 'struct tcf_bpf_cfg' just
      to pass to call_rcu looks even less appealing.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8c5bbe77
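The conversion described in this merge, deferring tcf_exts_destroy() until after an rcu grace period, can be sketched in userspace. The names below (rcu_head, call_rcu, tcf_exts, the grace-period stand-in) mirror the kernel API but are local simplifications, not the real implementation:

```c
#include <assert.h>
#include <stddef.h>

struct rcu_head {
    void (*func)(struct rcu_head *);
    struct rcu_head *next;
};

struct tcf_exts {
    int action_refcnt;          /* unused here; placeholder for real state */
    struct rcu_head rcu;
};

static struct rcu_head *pending;  /* callbacks waiting for the grace period */

/* Queue a callback instead of running it immediately. */
static void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *))
{
    head->func = func;
    head->next = pending;
    pending = head;
}

/* Stand-in for "a grace period has elapsed": run queued callbacks. */
static void rcu_grace_period_elapsed(void)
{
    struct rcu_head *h = pending;
    pending = NULL;
    while (h) {
        struct rcu_head *next = h->next;
        h->func(h);
        h = next;
    }
}

static int destroyed;

static void tcf_exts_destroy_rcu(struct rcu_head *head)
{
    (void)head;
    destroyed = 1;  /* real code would release actions and free memory */
}

/* Classifier destroy path after the conversion: defer, don't free now. */
static void classifier_destroy(struct tcf_exts *exts)
{
    call_rcu(&exts->rcu, tcf_exts_destroy_rcu);
}
```

The point of the pattern is visible in the ordering: destruction queued at classifier_destroy() time does not run until after the (simulated) grace period, so lockless readers in the packet path never see freed memory.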
    • A
      net_sched: act_bpf: remove spinlock in fast path · cff82457
      Committed by Alexei Starovoitov
      Similar to act_gact/act_mirred, act_bpf can be lockless in the packet processing
      path, with extra care taken to free bpf programs only after an rcu grace period.
      Replacement of an existing act_bpf instance (very rare) is done with synchronize_rcu(),
      and final destruction is done from the tc_action_ops->cleanup() callback, which is
      called from tcf_exts_destroy()->tcf_action_destroy()->__tcf_hash_release() when
      bind and refcnt reach zero, which is only possible when the classifier is destroyed.
      The previous two patches fixed the last two classifiers (tcindex and rsvp) to
      call tcf_exts_destroy() from an rcu callback.
      
      Similar to gact/mirred, there is a race between prog->filter and
      prog->tcf_action, meaning that the program being replaced may use the
      previous default action if it happens to return TC_ACT_UNSPEC.
      The act_mirred race between tcf_action and tcfm_dev is similar.
      In all cases the race is harmless.
      Long term we may want to improve the situation by replacing the whole
      tc_action->priv as a single pointer instead of updating inner fields one by one.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cff82457
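The replacement path this commit describes (publish the new program, wait out a grace period, then release the old one) can be sketched as follows. This is a userspace model under stated assumptions: synchronize_rcu() here is a no-op standing in for "all pre-existing readers are done", and bpf_prog, tcf_bpf, and bpf_prog_put() are simplified stand-ins for the kernel structures:

```c
#include <stdlib.h>
#include <assert.h>

struct bpf_prog { int id; };

struct tcf_bpf {
    struct bpf_prog *filter;   /* read locklessly in the packet path */
    int tcf_action;
};

static int freed_id;           /* records which program was released last */

static void bpf_prog_put(struct bpf_prog *prog)
{
    freed_id = prog->id;       /* real code drops a refcount / frees the prog */
    free(prog);
}

static void synchronize_rcu(void)
{
    /* Stand-in: block until all readers of the old pointer are done. */
}

/* Replacement is rare, so blocking in synchronize_rcu() is acceptable and
 * avoids allocating a config struct just to hand to call_rcu(). */
static void tcf_bpf_replace(struct tcf_bpf *act, struct bpf_prog *new_prog)
{
    struct bpf_prog *old = act->filter;
    act->filter = new_prog;    /* the kernel would use rcu_assign_pointer() */
    synchronize_rcu();         /* readers of the old prog have now finished */
    if (old)
        bpf_prog_put(old);     /* safe: no packet-path reader can hold it */
}
```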
    • A
      net_sched: convert rsvp to call tcf_exts_destroy from rcu callback · 9e528d89
      Committed by Alexei Starovoitov
      Adjust destroy path of cls_rsvp to call tcf_exts_destroy() after
      rcu grace period.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9e528d89