1. 31 Mar 2014, 4 commits
    • net: filter: rework/optimize internal BPF interpreter's instruction set · bd4cf0ed
      By Alexei Starovoitov
      This patch replaces/reworks the kernel-internal BPF interpreter with
      an optimized BPF instruction set format that is modelled more
      closely on native instruction sets and is designed to be JITed with
      a one-to-one mapping. Thus, the new interpreter is noticeably faster
      than the current implementation of sk_run_filter(), mainly for two
      reasons:
      
      1. Fall-through jumps:
      
        BPF jump instructions are forced to take either the 'true' or the
        'false' branch, which causes branch-miss penalties. The new BPF
        jump instructions have only one branch and fall through otherwise,
        which fits the CPU branch predictor logic better. `perf stat`
        shows a drastic difference in branch-misses between the old and
        new code.
      
      2. Jump-threaded implementation of interpreter vs switch
         statement:
      
        Instead of a single table jump at the top of the 'switch'
        statement, gcc now generates multiple table-jump instructions,
        which helps the CPU branch predictor logic (see the sketch below).
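
        A minimal sketch of the jump-threaded dispatch style for a toy
        two-opcode interpreter, using GCC's computed-goto extension
        (opcode and struct names here are illustrative, not the
        kernel's):

          enum { OP_ADD, OP_EXIT };
          struct insn { int code, dst, imm; };

          static int run_threaded(const struct insn *insn, int *regs)
          {
                  /* one indirect jump per handler gives each opcode its
                   * own branch-predictor history, instead of one shared
                   * table jump at the top of a switch () */
                  static const void *jumptable[] = {
                          [OP_ADD]  = &&do_add,
                          [OP_EXIT] = &&do_exit,
                  };

                  goto *jumptable[insn->code];
          do_add:
                  regs[insn->dst] += insn->imm;
                  insn++;
                  goto *jumptable[insn->code];
          do_exit:
                  return regs[0];
          }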
      
      Note that the verification of filters is still being done through
      sk_chk_filter() in classical BPF format, so filters from user or
      kernel space are verified in the same way as we do now, and the same
      restrictions/constraints hold as well.
      
      We reuse the current BPF JIT compilers in a way that this upgrade is
      fine even as-is, but it nevertheless allows for a successive upgrade
      of the BPF JIT compilers to the new format.
      
      The internal instruction set migration is being done after the
      probing for JIT compilation, so in case JIT compilers are able to
      create a native opcode image, we're going to use that, and in all
      other cases we're doing a follow-up migration of the BPF program's
      instruction set, so that it can be transparently run in the new
      interpreter.
      
      In short, the *internal* format extends BPF in the following way
      (more details can be taken from the appended documentation; a sketch
      of the 8-byte instruction encoding follows the list):
      
        - Number of registers increases from 2 to 10
        - Register width increases from 32-bit to 64-bit
        - Conditional jt/jf targets replaced with jt/fall-through
        - Adds signed > and >= insns
        - 16 4-byte stack slots for register spill-fill replaced
          with up to 512 bytes of multi-use stack space
        - Introduction of bpf_call insn and register passing convention
          for zero overhead calls from/to other kernel functions
        - Adds arithmetic right shift and endianness conversion insns
        - Adds atomic_add insn
        - Old tax/txa insns are replaced with 'mov dst,src' insn
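
      A rough sketch of the fixed 8-byte internal instruction encoding
      implied by the list above (field names follow the later struct
      bpf_insn; treat this as an illustration, not the exact layout of
      this patch):

        struct bpf_insn {
                __u8    code;           /* opcode */
                __u8    dst_reg:4;      /* destination register */
                __u8    src_reg:4;      /* source register */
                __s16   off;            /* signed offset, e.g. jump target */
                __s32   imm;            /* signed immediate constant */
        };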
      
      The performance of two BPF filters generated by libpcap and bpf_asm,
      respectively, was measured on x86_64, i386 and arm32 (other libpcap
      programs show similar performance differences):
      
      fprog #1 is taken from Documentation/networking/filter.txt:
      tcpdump -i eth0 port 22 -dd
      
      fprog #2 is taken from 'man tcpdump':
      tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) -
         ((tcp[12]&0xf0)>>2)) != 0)' -dd
      
      Raw performance data from BPF micro-benchmark: SK_RUN_FILTER on the
      same SKB (cache-hit) or 10k SKBs (cache-miss); time in ns per call,
      smaller is better:
      
      --x86_64--
               fprog #1  fprog #1   fprog #2  fprog #2
               cache-hit cache-miss cache-hit cache-miss
      old BPF      90       101        192       202
      new BPF      31        71         47        97
      old BPF jit  12        34         17        44
      new BPF jit TBD
      
      --i386--
               fprog #1  fprog #1   fprog #2  fprog #2
               cache-hit cache-miss cache-hit cache-miss
      old BPF     107       136        227       252
      new BPF      40       119         69       172
      
      --arm32--
               fprog #1  fprog #1   fprog #2  fprog #2
               cache-hit cache-miss cache-hit cache-miss
      old BPF     202       300        475       540
      new BPF     180       270        330       470
      old BPF jit  26       182         37       202
      new BPF jit TBD
      
      Thus, without changing any userland BPF filters, applications on
      top of AF_PACKET (or other families) such as libpcap/tcpdump, cls_bpf
      classifier, netfilter's xt_bpf, team driver's load-balancing mode,
      and many more will have better interpreter filtering performance.
      
      While we are replacing the internal BPF interpreter, we also need
      to convert seccomp BPF in the same step to make use of the new
      internal structure since it makes use of lower-level API details
      without being further decoupled through higher-level calls like
      sk_unattached_filter_{create,destroy}(), for example.
      
      Just as for normal socket filtering, also seccomp BPF experiences
      a time-to-verdict speedup:
      
      05-sim-long_jumps.c of libseccomp was used as micro-benchmark:
      
        seccomp_rule_add_exact(ctx,...
        seccomp_rule_add_exact(ctx,...
      
        rc = seccomp_load(ctx);
      
        for (i = 0; i < 10000000; i++)
           syscall(199, 100);
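
        A hypothetical, self-contained reconstruction of the 'short
        filter' case (the rule details are assumptions; the original
        05-sim-long_jumps.c differs):

          #define _GNU_SOURCE
          #include <seccomp.h>
          #include <unistd.h>

          int main(void)
          {
                  scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
                  int i;

                  if (!ctx)
                          return 1;

                  /* 'short filter': two exact-match allow rules */
                  seccomp_rule_add_exact(ctx, SCMP_ACT_ALLOW, 199, 0);
                  seccomp_rule_add_exact(ctx, SCMP_ACT_ALLOW,
                                         SCMP_SYS(exit_group), 0);

                  if (seccomp_load(ctx) < 0)
                          return 1;

                  for (i = 0; i < 10000000; i++)
                          syscall(199, 100);  /* getuid32 on i386/arm32 */
                  return 0;
          }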
      
      'short filter' has 2 rules
      'large filter' has 200 rules
      
      'short filter' performance is slightly better on x86_64/i386/arm32
      'large filter' is much faster on x86_64 and i386 and shows no
                     difference on arm32
      
      --x86_64-- short filter
      old BPF: 2.7 sec
       39.12%  bench  libc-2.15.so       [.] syscall
        8.10%  bench  [kernel.kallsyms]  [k] sk_run_filter
        6.31%  bench  [kernel.kallsyms]  [k] system_call
        5.59%  bench  [kernel.kallsyms]  [k] trace_hardirqs_on_caller
        4.37%  bench  [kernel.kallsyms]  [k] trace_hardirqs_off_caller
        3.70%  bench  [kernel.kallsyms]  [k] __secure_computing
        3.67%  bench  [kernel.kallsyms]  [k] lock_is_held
        3.03%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
      new BPF: 2.58 sec
       42.05%  bench  libc-2.15.so       [.] syscall
        6.91%  bench  [kernel.kallsyms]  [k] system_call
        6.25%  bench  [kernel.kallsyms]  [k] trace_hardirqs_on_caller
        6.07%  bench  [kernel.kallsyms]  [k] __secure_computing
        5.08%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
      
      --arm32-- short filter
      old BPF: 4.0 sec
       39.92%  bench  [kernel.kallsyms]  [k] vector_swi
       16.60%  bench  [kernel.kallsyms]  [k] sk_run_filter
       14.66%  bench  libc-2.17.so       [.] syscall
        5.42%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
        5.10%  bench  [kernel.kallsyms]  [k] __secure_computing
      new BPF: 3.7 sec
       35.93%  bench  [kernel.kallsyms]  [k] vector_swi
       21.89%  bench  libc-2.17.so       [.] syscall
       13.45%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
        6.25%  bench  [kernel.kallsyms]  [k] __secure_computing
        3.96%  bench  [kernel.kallsyms]  [k] syscall_trace_exit
      
      --x86_64-- large filter
      old BPF: 8.6 seconds
          73.38%    bench  [kernel.kallsyms]  [k] sk_run_filter
          10.70%    bench  libc-2.15.so       [.] syscall
           5.09%    bench  [kernel.kallsyms]  [k] seccomp_bpf_load
           1.97%    bench  [kernel.kallsyms]  [k] system_call
      new BPF: 5.7 seconds
          66.20%    bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
          16.75%    bench  libc-2.15.so       [.] syscall
           3.31%    bench  [kernel.kallsyms]  [k] system_call
           2.88%    bench  [kernel.kallsyms]  [k] __secure_computing
      
      --i386-- large filter
      old BPF: 5.4 sec
      new BPF: 3.8 sec
      
      --arm32-- large filter
      old BPF: 13.5 sec
       73.88%  bench  [kernel.kallsyms]  [k] sk_run_filter
       10.29%  bench  [kernel.kallsyms]  [k] vector_swi
        6.46%  bench  libc-2.17.so       [.] syscall
        2.94%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
        1.19%  bench  [kernel.kallsyms]  [k] __secure_computing
        0.87%  bench  [kernel.kallsyms]  [k] sys_getuid
      new BPF: 13.5 sec
       76.08%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
       10.98%  bench  [kernel.kallsyms]  [k] vector_swi
        5.87%  bench  libc-2.17.so       [.] syscall
        1.77%  bench  [kernel.kallsyms]  [k] __secure_computing
        0.93%  bench  [kernel.kallsyms]  [k] sys_getuid
      
      BPF filters generated by seccomp are very branchy, so the new
      internal BPF performance is better than the old one. Performance
      gains will be even higher when BPF JIT is committed for the
      new structure, which is planned in future work (as successive
      JIT migrations).
      
      BPF has also been stress-tested with trinity's BPF fuzzer.
      
      Joint work with Daniel Borkmann.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Paul Moore <pmoore@redhat.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: linux-kernel@vger.kernel.org
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: move filter accounting to filter core · fbc907f0
      By Daniel Borkmann
      This patch does two things: i) it removes the extern keyword from
      the include/linux/filter.h file to be more consistent with the rest
      of Joe's changes, and ii) it moves filter accounting into the filter
      core framework.
      
      Filter accounting, mainly done through sk_filter_{un,}charge(),
      takes care of the case when sockets are cloned through
      sk_clone_lock(), so that removal of the filter on one socket won't
      result in eviction while it's still referenced by the other.
      
      These functions actually belong to net/core/filter.c and not
      include/net/sock.h, as we want to keep all of that in a central
      place. They're also not in the fast path, so uninlining them is fine
      and even allows us to get rid of sk_filter_release_rcu()'s
      EXPORT_SYMBOL and a forward declaration.
      
      Joint work with Alexei Starovoitov.
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: keep original BPF program around · a3ea269b
      By Daniel Borkmann
      In order to open up the possibility to internally transform a BPF
      program into an alternative and possibly non-trivially reversible
      representation, we need to keep the original BPF program around, so
      that it can be passed back to user space without the need for a
      complex decoder.
      
      The reason for that use case resides in commit a8fc9277 ("sk-filter:
      Add ability to get socket filter program (v2)"), that is, the ability
      to retrieve the currently attached BPF filter from a given socket used
      mainly by the checkpoint-restore project, for example.
      
      Therefore, we add two helpers, sk_{store,release}_orig_filter, for
      taking care of that. In the sk_unattached_filter_create() case,
      there's no such possibility/requirement to retrieve a loaded BPF
      program, so we can spare ourselves the work in that case.
      
      This approach will simplify and slightly speed up both the
      sk_get_filter() and sock_diag_put_filterinfo() handlers, as we won't
      need to successively decode filters anymore through
      sk_decode_filter(). As we still need sk_decode_filter() later on,
      we're keeping it around.
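
      A minimal sketch of the idea: duplicate the user's classic BPF
      program before any internal transformation so it can be handed
      back verbatim later (struct and function names are illustrative,
      not the kernel's exact ones):

        struct orig_prog {
                unsigned short len;             /* number of insns */
                struct sock_filter *filter;     /* verbatim user copy */
        };

        static int store_orig_filter(struct orig_prog *orig,
                                     const struct sock_filter *insns,
                                     unsigned short len)
        {
                orig->filter = kmemdup(insns, len * sizeof(*insns),
                                       GFP_KERNEL);
                if (!orig->filter)
                        return -ENOMEM;
                orig->len = len;
                return 0;
        }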
      
      Joint work with Alexei Starovoitov.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: add jited flag to indicate jit compiled filters · f8bbbfc3
      By Daniel Borkmann
      This patch adds a jited flag to the sk_filter struct in order to
      indicate whether a filter is currently jited or not. The size of
      sk_filter is not expanded, as the 32-bit 'len' member allows its
      upper bits to be reused since a filter can currently only grow as
      large as BPF_MAXINSNS.
      
      Therefore, there's enough room in the 'len' field for other flags
      needed in the future as well. The jited flag also allows for having
      alternative interpreter functions running, as currently we can only
      detect jit-compiled filters by testing that fp->bpf_func does not
      equal the address of sk_run_filter().
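
      A sketch of the space trick described above, stealing one bit from
      'len', which only needs to count up to BPF_MAXINSNS (4096):

        struct sk_filter {
                atomic_t        refcnt;
                unsigned int    jited:1,        /* was it JIT compiled? */
                                len:31;         /* number of insns */
                /* ... */
        };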
      
      Joint work with Alexei Starovoitov.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 27 Mar 2014, 1 commit
  3. 16 Jan 2014, 1 commit
  4. 08 Oct 2013, 1 commit
    • net: fix unsafe set_memory_rw from softirq · d45ed4a4
      By Alexei Starovoitov
      On an x86 system with net.core.bpf_jit_enable = 1,
      
      sudo tcpdump -i eth1 'tcp port 22'
      
      causes the warning:
      [   56.766097]  Possible unsafe locking scenario:
      [   56.766097]
      [   56.780146]        CPU0
      [   56.786807]        ----
      [   56.793188]   lock(&(&vb->lock)->rlock);
      [   56.799593]   <Interrupt>
      [   56.805889]     lock(&(&vb->lock)->rlock);
      [   56.812266]
      [   56.812266]  *** DEADLOCK ***
      [   56.812266]
      [   56.830670] 1 lock held by ksoftirqd/1/13:
      [   56.836838]  #0:  (rcu_read_lock){.+.+..}, at: [<ffffffff8118f44c>] vm_unmap_aliases+0x8c/0x380
      [   56.849757]
      [   56.849757] stack backtrace:
      [   56.862194] CPU: 1 PID: 13 Comm: ksoftirqd/1 Not tainted 3.12.0-rc3+ #45
      [   56.868721] Hardware name: System manufacturer System Product Name/P8Z77 WS, BIOS 3007 07/26/2012
      [   56.882004]  ffffffff821944c0 ffff88080bbdb8c8 ffffffff8175a145 0000000000000007
      [   56.895630]  ffff88080bbd5f40 ffff88080bbdb928 ffffffff81755b14 0000000000000001
      [   56.909313]  ffff880800000001 ffff880800000000 ffffffff8101178f 0000000000000001
      [   56.923006] Call Trace:
      [   56.929532]  [<ffffffff8175a145>] dump_stack+0x55/0x76
      [   56.936067]  [<ffffffff81755b14>] print_usage_bug+0x1f7/0x208
      [   56.942445]  [<ffffffff8101178f>] ? save_stack_trace+0x2f/0x50
      [   56.948932]  [<ffffffff810cc0a0>] ? check_usage_backwards+0x150/0x150
      [   56.955470]  [<ffffffff810ccb52>] mark_lock+0x282/0x2c0
      [   56.961945]  [<ffffffff810ccfed>] __lock_acquire+0x45d/0x1d50
      [   56.968474]  [<ffffffff810cce6e>] ? __lock_acquire+0x2de/0x1d50
      [   56.975140]  [<ffffffff81393bf5>] ? cpumask_next_and+0x55/0x90
      [   56.981942]  [<ffffffff810cef72>] lock_acquire+0x92/0x1d0
      [   56.988745]  [<ffffffff8118f52a>] ? vm_unmap_aliases+0x16a/0x380
      [   56.995619]  [<ffffffff817628f1>] _raw_spin_lock+0x41/0x50
      [   57.002493]  [<ffffffff8118f52a>] ? vm_unmap_aliases+0x16a/0x380
      [   57.009447]  [<ffffffff8118f52a>] vm_unmap_aliases+0x16a/0x380
      [   57.016477]  [<ffffffff8118f44c>] ? vm_unmap_aliases+0x8c/0x380
      [   57.023607]  [<ffffffff810436b0>] change_page_attr_set_clr+0xc0/0x460
      [   57.030818]  [<ffffffff810cfb8d>] ? trace_hardirqs_on+0xd/0x10
      [   57.037896]  [<ffffffff811a8330>] ? kmem_cache_free+0xb0/0x2b0
      [   57.044789]  [<ffffffff811b59c3>] ? free_object_rcu+0x93/0xa0
      [   57.051720]  [<ffffffff81043d9f>] set_memory_rw+0x2f/0x40
      [   57.058727]  [<ffffffff8104e17c>] bpf_jit_free+0x2c/0x40
      [   57.065577]  [<ffffffff81642cba>] sk_filter_release_rcu+0x1a/0x30
      [   57.072338]  [<ffffffff811108e2>] rcu_process_callbacks+0x202/0x7c0
      [   57.078962]  [<ffffffff81057f17>] __do_softirq+0xf7/0x3f0
      [   57.085373]  [<ffffffff81058245>] run_ksoftirqd+0x35/0x70
      
      We cannot reuse the jited filter memory, since it's read-only, so
      use the original BPF insns memory to hold the work_struct, and defer
      the kfree of the sk_filter until the JIT has completed freeing it.
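
      A sketch of the deferral, simplified from the x86 side of this fix
      (treat names and details as approximate): the RCU callback runs in
      softirq context where set_memory_rw() is unsafe, so the writable
      insns area is reused to park a work_struct, and the actual free
      happens from process context:

        struct sk_filter {
                /* ... */
                unsigned int (*bpf_func)(const struct sk_buff *skb,
                                         const struct sock_filter *fp);
                union {
                        struct sock_filter insns[0]; /* writable area */
                        struct work_struct work;     /* reused when dead */
                };
        };

        static void bpf_jit_free_deferred(struct work_struct *work)
        {
                struct sk_filter *fp = container_of(work, struct sk_filter,
                                                    work);
                unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;

                set_memory_rw(addr, 1);         /* safe: process context */
                module_free(NULL, (void *)addr);
                kfree(fp);
        }

        void bpf_jit_free(struct sk_filter *fp)
        {
                if (fp->bpf_func != sk_run_filter) {
                        INIT_WORK(&fp->work, bpf_jit_free_deferred);
                        schedule_work(&fp->work);
                } else {
                        kfree(fp);
                }
        }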
      
      Tested on x86_64 and i386.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 11 Jun 2013, 1 commit
  6. 21 Mar 2013, 1 commit
  7. 17 Jan 2013, 1 commit
    • sk-filter: Add ability to lock a socket filter program · d59577b6
      By Vincent Bernat
      While a privileged program can open a raw socket, attach some
      restrictive filter and drop its privileges (or send the socket to an
      unprivileged program through some Unix socket), the filter can still
      be removed or modified by the unprivileged program. This commit adds a
      socket option to lock the filter (SO_LOCK_FILTER) preventing any
      modification of a socket filter program.
      
      This is similar to OpenBSD's BIOCLOCK ioctl on bpf descriptors,
      except that even root is not allowed to change/drop the filter.
      
      The state of the lock can be read with getsockopt(). No error is
      triggered if the state is not changed. -EPERM is returned when a
      user tries to remove the lock or to change/remove the filter while
      the lock is active. The check is done directly in sk_attach_filter()
      and sk_detach_filter(), so it is not limited to the setsockopt()
      syscall.
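
      A usage sketch (assuming a socket fd with a filter already
      attached; SO_LOCK_FILTER takes the usual int boolean):

        #include <sys/socket.h>

        static int lock_filter(int fd)
        {
                int on = 1;

                /* afterwards, clearing the lock or detaching the
                 * filter fails with EPERM, even for root */
                return setsockopt(fd, SOL_SOCKET, SO_LOCK_FILTER,
                                  &on, sizeof(on));
        }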
      Signed-off-by: Vincent Bernat <bernat@luffy.cx>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 30 Dec 2012, 1 commit
    • net: filter: return -EINVAL if BPF_S_ANC* operation is not supported · aa1113d9
      By Daniel Borkmann
      Currently, we return -EINVAL for malformed or wrong BPF filters.
      However, this is not done for BPF_S_ANC* operations, which makes it
      more difficult to detect whether they are actually supported by the
      BPF machine. Therefore, we should also return -EINVAL if K is within
      the SKF_AD_OFF universe and the ancillary operation did not match.
      
      Why exactly is it needed? If tools such as libpcap/tcpdump want to
      make use of new ancillary operations (like filtering VLANs in kernel
      space), there is currently no sane way to test whether this feature /
      BPF_S_ANC* op is present or not, since no error is returned. This
      patch makes life easier for that and allows for proper usage by
      user space applications.
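
      A hypothetical probe built on this behavior (uapi names only; the
      detach-after-success handling is elided):

        #include <sys/socket.h>
        #include <linux/filter.h>
        #include <errno.h>

        static int anc_op_supported(int fd, __u32 anc)
        {
                struct sock_filter insns[] = {
                        { BPF_LD | BPF_W | BPF_ABS, 0, 0,
                          SKF_AD_OFF + anc },
                        { BPF_RET | BPF_K, 0, 0, 0 },
                };
                struct sock_fprog prog = { .len = 2, .filter = insns };

                if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
                               &prog, sizeof(prog)) < 0)
                        return errno == EINVAL ? 0 : -1; /* 0: no such op */
                return 1;                                /* supported */
        }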
      
      There was concern about whether this patch will break userland.
      Short answer: Yes and no. Long answer: It will "break" only for code
      that calls ...
      
        { BPF_LD | BPF_(W|H|B) | BPF_ABS, 0, 0, <K> },
      
      ... where <K> is in [0xfffff000, 0xffffffff] _and_ <K> is *not* an
      ancillary offset. And here comes the BUT: assuming some *old* code
      has such an instruction where <K> is between [0xfffff000, 0xffffffff]
      and it doesn't know about ancillary operations, then it will already
      see unexpected/unwanted behavior (since after a failed
      load_pointer() we do not return from the BPF machine with 0, which
      was the case before ancillary operations were introduced, but
      instead load something into the accumulator and continue with the
      next instruction, for instance). Thus, user space code would already
      have been broken by
      introducing ancillary operations into the BPF machine per se. Code
      that does such a direct load, e.g. "load word at packet offset
      0xffffffff into accumulator" ("ld [0xffffffff]"), is quite broken,
      isn't it? The whole assumption behind ancillary operations is that
      no one intentionally calls things like "ld [0xffffffff]" and expects
      this word to be loaded from such a packet offset. Hence, we can also
      safely make use of this feature-testing patch and facilitate
      application development. Therefore, at least from this patch
      onwards, we have *for sure* a check for whether current or future
      BPF_S_ANC* ops are supported in the kernel. The patch was tested on
      x86_64.
      
      (Thanks to Eric for the previous review.)
      
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Reported-by: Ani Sinha <ani@aristanetworks.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 01 Nov 2012, 2 commits
    • sk-filter: Add ability to get socket filter program (v2) · a8fc9277
      By Pavel Emelyanov
      The SO_ATTACH_FILTER option is currently set-only. I propose to add
      the get ability by using SO_ATTACH_FILTER in getsockopt. To be less
      irritating to the eyes, the SO_GET_FILTER alias to it is declared.
      This ability is required by the checkpoint-restore project to be
      able to save the full state of a socket.
      
      There are two issues with getting the filter back.
      
      First, the kernel modifies sock_filter->code on filter load, thus in
      order to return the filter elements back to the user we have to
      decode them into user-visible constants. Fortunately, the
      modification in question is reversible.
      
      Second, the BPF_S_ALU_DIV_K code modifies the command argument k to
      speed up the run-time division by doing kernel_k = reciprocal(user_k).
      The bad news is that different user_k values may result in the same
      kernel_k, so we can't get the original user_k back. The good news is
      that we don't have to. What we need to do is calculate a user2_k
      such that
      
        reciprocal(user2_k) == reciprocal(user_k) == kernel_k
      
      i.e. if it's re-loaded, the re-compiled value will be exactly the
      same as it was. That said, user2_k can be calculated like this:
      
        user2_k = reciprocal(kernel_k)
      
      with the exception that if kernel_k == 0, then user2_k == 1.
      
      The optlen argument is treated like this: when zero, the kernel
      returns the number of sock_filter elements in the filter; otherwise
      it should be large enough to hold the whole array.
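
      A retrieval sketch following the optlen convention above (error
      handling abbreviated):

        #include <sys/socket.h>
        #include <linux/filter.h>
        #include <stdlib.h>

        #ifndef SO_GET_FILTER
        #define SO_GET_FILTER SO_ATTACH_FILTER  /* alias per this patch */
        #endif

        static struct sock_filter *get_filter(int fd, socklen_t *lenp)
        {
                socklen_t len = 0;
                struct sock_filter *insns;

                /* optlen == 0: the kernel reports the insn count */
                if (getsockopt(fd, SOL_SOCKET, SO_GET_FILTER, NULL, &len) < 0)
                        return NULL;

                insns = calloc(len, sizeof(*insns));
                if (!insns)
                        return NULL;
                if (getsockopt(fd, SOL_SOCKET, SO_GET_FILTER, insns, &len) < 0) {
                        free(insns);
                        return NULL;
                }
                *lenp = len;
                return insns;
        }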
      
      changes since v1:
      * Declared SO_GET_FILTER in all arch headers
      * Added decode of vlan-tag codes
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: add vlan tag access · f3335031
      By Eric Dumazet
      BPF filters lack the ability to access skb->vlan_tci.
      
      This patch adds two new ancillary accessors:
      
      SKF_AD_VLAN_TAG         (44) mapped to vlan_tx_tag_get(skb)
      
      SKF_AD_VLAN_TAG_PRESENT (48) mapped to vlan_tx_tag_present(skb)
      
      This allows libpcap/tcpdump to use a kernel filter instead of
      having to fall back to accepting all packets and then filtering
      them in user space.
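
      A sketch of a classic BPF snippet using the new ancillary loads to
      accept only packets whose VLAN TCI equals 10 (simplified; real code
      would mask out the PCP bits before comparing the VID):

        struct sock_filter vlan10[] = {
                { BPF_LD  | BPF_W   | BPF_ABS, 0, 0,
                  SKF_AD_OFF + SKF_AD_VLAN_TAG_PRESENT },
                { BPF_JMP | BPF_JEQ | BPF_K,   3, 0, 0 },  /* untagged? drop */
                { BPF_LD  | BPF_W   | BPF_ABS, 0, 0,
                  SKF_AD_OFF + SKF_AD_VLAN_TAG },
                { BPF_JMP | BPF_JEQ | BPF_K,   0, 1, 10 }, /* TCI != 10? drop */
                { BPF_RET | BPF_K,             0, 0, 0xffff },
                { BPF_RET | BPF_K,             0, 0, 0 },
        };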
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Suggested-by: Ani Sinha <ani@aristanetworks.com>
      Suggested-by: Daniel Borkmann <danborkmann@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 25 Sep 2012, 1 commit
  11. 11 Sep 2012, 1 commit
  12. 01 Aug 2012, 1 commit
    • netvm: allow skb allocation to use PFMEMALLOC reserves · c93bdd0e
      By Mel Gorman
      Change the skb allocation API to indicate RX usage and use this to fall
      back to the PFMEMALLOC reserve when needed.  SKBs allocated from the
      reserve are tagged in skb->pfmemalloc.  If an SKB is allocated from the
      reserve and the socket is later found to be unrelated to page reclaim, the
      packet is dropped so that the memory remains available for page reclaim.
      Network protocols are expected to recover from this packet loss.
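
      A sketch of the drop policy described above as it shows up in the
      socket filter path (simplified): pfmemalloc skbs may only reach
      sockets that are themselves helping memory reclaim:

        int sk_filter(struct sock *sk, struct sk_buff *skb)
        {
                if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
                        return -ENOMEM; /* drop; the protocol must recover */

                /* ... normal BPF filtering follows ... */
                return 0;
        }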
      
      [a.p.zijlstra@chello.nl: Ideas taken from various patches]
      [davem@davemloft.net: Use static branches, coding style corrections]
      [sebastian@breakpoint.cc: Avoid unnecessary cast, fix !CONFIG_NET build]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 09 Jun 2012, 1 commit
  14. 16 Apr 2012, 1 commit
  15. 14 Apr 2012, 1 commit
  16. 04 Apr 2012, 3 commits
  17. 29 Mar 2012, 1 commit
  18. 20 Oct 2011, 1 commit
  19. 02 Aug 2011, 1 commit
  20. 27 May 2011, 1 commit
  21. 24 May 2011, 1 commit
  22. 28 Apr 2011, 1 commit
    • net: filter: Just In Time compiler for x86-64 · 0a14842f
      By Eric Dumazet
      In order to speed up packet filtering, here is an implementation of
      a JIT compiler for x86_64.
      
      It is disabled by default, and must be enabled by the admin.
      
      echo 1 >/proc/sys/net/core/bpf_jit_enable
      
      It uses module_alloc() and module_free() to get memory in the 2GB
      kernel text range, since we call helper functions from the generated
      code.
      
      EAX : BPF A accumulator
      EBX : BPF X accumulator
      RDI : pointer to skb   (first argument given to JIT function)
      RBP : frame pointer (even if CONFIG_FRAME_POINTER=n)
      r9d : skb->len - skb->data_len (headlen)
      r8  : skb->data
      
      To get a trace of generated code, use:
      
      echo 2 >/proc/sys/net/core/bpf_jit_enable
      
      Example of generated code:
      
      # tcpdump -p -n -s 0 -i eth1 host 192.168.20.0/24
      
      flen=18 proglen=147 pass=3 image=ffffffffa00b5000
      JIT code: ffffffffa00b5000: 55 48 89 e5 48 83 ec 60 48 89 5d f8 44 8b 4f 60
      JIT code: ffffffffa00b5010: 44 2b 4f 64 4c 8b 87 b8 00 00 00 be 0c 00 00 00
      JIT code: ffffffffa00b5020: e8 24 7b f7 e0 3d 00 08 00 00 75 28 be 1a 00 00
      JIT code: ffffffffa00b5030: 00 e8 fe 7a f7 e0 24 00 3d 00 14 a8 c0 74 49 be
      JIT code: ffffffffa00b5040: 1e 00 00 00 e8 eb 7a f7 e0 24 00 3d 00 14 a8 c0
      JIT code: ffffffffa00b5050: 74 36 eb 3b 3d 06 08 00 00 74 07 3d 35 80 00 00
      JIT code: ffffffffa00b5060: 75 2d be 1c 00 00 00 e8 c8 7a f7 e0 24 00 3d 00
      JIT code: ffffffffa00b5070: 14 a8 c0 74 13 be 26 00 00 00 e8 b5 7a f7 e0 24
      JIT code: ffffffffa00b5080: 00 3d 00 14 a8 c0 75 07 b8 ff ff 00 00 eb 02 31
      JIT code: ffffffffa00b5090: c0 c9 c3
      
      The BPF program is 144 bytes long, so the native program is almost
      the same size ;)
      
      (000) ldh      [12]
      (001) jeq      #0x800           jt 2    jf 8
      (002) ld       [26]
      (003) and      #0xffffff00
      (004) jeq      #0xc0a81400      jt 16   jf 5
      (005) ld       [30]
      (006) and      #0xffffff00
      (007) jeq      #0xc0a81400      jt 16   jf 17
      (008) jeq      #0x806           jt 10   jf 9
      (009) jeq      #0x8035          jt 10   jf 17
      (010) ld       [28]
      (011) and      #0xffffff00
      (012) jeq      #0xc0a81400      jt 16   jf 13
      (013) ld       [38]
      (014) and      #0xffffff00
      (015) jeq      #0xc0a81400      jt 16   jf 17
      (016) ret      #65535
      (017) ret      #0
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Ben Hutchings <bhutchings@solarflare.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  23. 31 Mar 2011, 1 commit
  24. 19 Jan 2011, 1 commit
  25. 10 Jan 2011, 1 commit
  26. 22 Dec 2010, 1 commit
    • filter: optimize accesses to ancillary data · 12b16dad
      By Eric Dumazet
      We can translate pseudo load instructions at filter check time into
      dedicated instructions to speed up filtering and avoid one switch().
      libpcap currently uses SKF_AD_PROTOCOL, but custom filters probably
      use other ancillary accesses.
      
      Note: I made the assumption that ancillary data is always accessed
      with BPF_LD|BPF_?|BPF_ABS instructions, not with BPF_LD|BPF_?|BPF_IND
      ones (offset given by the K constant, not by K + the X register).
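
      A sketch of the check-time translation (simplified from the idea in
      sk_chk_filter(); treat the helper as illustrative): each ancillary
      offset is rewritten once into a dedicated opcode, so the run-time
      loop no longer needs to switch() on K:

        static void translate_anc_load(struct sock_filter *ftest)
        {
                if (ftest->code != (BPF_LD | BPF_W | BPF_ABS))
                        return;

                switch (ftest->k) {
                case SKF_AD_OFF + SKF_AD_PROTOCOL:
                        ftest->code = BPF_S_ANC_PROTOCOL;
                        break;
                case SKF_AD_OFF + SKF_AD_IFINDEX:
                        ftest->code = BPF_S_ANC_IFINDEX;
                        break;
                /* ... one case per ancillary offset ... */
                }
        }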
      
      On x86_64, this saves a few bytes of text:
      
      # size net/core/filter.o.*
         text	   data	    bss	    dec	    hex	filename
         4864	      0	      0	   4864	   1300	net/core/filter.o.new
         4944	      0	      0	   4944	   1350	net/core/filter.o.old
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  27. 10 Dec 2010, 1 commit
  28. 09 Dec 2010, 1 commit
  29. 07 Dec 2010, 3 commits
    • filter: add a security check at install time · 2d5311e4
      By Eric Dumazet
      We added some security checks in commit 57fe93b3
      (filter: make sure filters dont read uninitialized memory) to close
      a potential leak of kernel information to user space.
      
      That added a potential extra cost at run time, whereas we can
      instead check the filter itself at install time to make sure a
      malicious user doesn't try to abuse us.
      
      This patch adds a check_loads() function, whose sole purpose is to
      make this check, allocating a temporary array of masks. We scan the
      filter and propagate bitmask information, telling us whether a load
      M(K) is allowed because a previous store M(K) is guaranteed (so that
      sk_run_filter() cannot possibly read uninitialized memory).
      
      Note: this can uncover application bugs, denying a filter attach
      that was previously allowed.
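
      A minimal sketch of the propagation for straight-line code (the
      real check also propagates a mask along both jump targets, hence
      the temporary array; this assumes K < BPF_MEMWORDS was validated
      elsewhere):

        static int check_loads(const struct sock_filter *filter, int flen)
        {
                u16 seen = 0;   /* bit K set => M[K] was stored */
                int pc;

                for (pc = 0; pc < flen; pc++) {
                        switch (filter[pc].code) {
                        case BPF_ST:
                        case BPF_STX:
                                seen |= 1 << filter[pc].k;
                                break;
                        case BPF_LD | BPF_MEM:
                        case BPF_LDX | BPF_MEM:
                                if (!(seen & (1 << filter[pc].k)))
                                        return -EINVAL; /* load before store */
                                break;
                        }
                }
                return 0;
        }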
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Dan Rosenberg <drosenberg@vsecurity.com>
      Cc: Changli Gao <xiaosuo@gmail.com>
      Acked-by: Changli Gao <xiaosuo@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • filter: add SKF_AD_RXHASH and SKF_AD_CPU · da2033c2
      By Eric Dumazet
      Add SKF_AD_RXHASH and SKF_AD_CPU to the filter ancillary mechanism,
      to be able to build advanced filters.
      
      This can help spread packets across several sockets with a fast
      selection, after RPS dispatch to N cpus for example, or to catch a
      percentage of flows in one queue.
      
      tcpdump -s 500 "cpu = 1":
      
      [0] ld CPU
      [1] jeq #1  jt 2  jf 3
      [2] ret #500
      [3] ret #0
      
      # take 12.5 % of flows (average)
      tcpdump -s 1000 "rxhash & 7 = 2":
      
      [0] ld RXHASH
      [1] and #7
      [2] jeq #2  jt 3  jf 4
      [3] ret #1000
      [4] ret #0
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Rui <wirelesser@gmail.com>
      Acked-by: Changli Gao <xiaosuo@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • filter: fix sk_filter rcu handling · 46bcf14f
      By Eric Dumazet
      Pavel Emelyanov tried to fix a race between sk_filter_(de|at)tach and
      sk_clone() in commit 47e958ea
      
      The problem is that we can have several clones sharing a common
      sk_filter, and these clones might want to sk_filter_attach() their
      own filters at the same time, overwriting old_filter->rcu and
      corrupting the RCU queues.
      
      We cannot use filter->rcu without being sure that no other thread
      could do the same thing.
      
      Switch the code to a more conventional ref-counting technique: do
      the atomic decrement immediately and queue one RCU callback when the
      last reference is released.
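
      A minimal sketch of that scheme (simplified; the exact RCU flavor
      and helpers differ in the real code):

        static void sk_filter_release(struct sk_filter *fp)
        {
                /* drop our reference right away; only the last dropper
                 * queues the RCU callback that frees the filter */
                if (atomic_dec_and_test(&fp->refcnt))
                        call_rcu(&fp->rcu, sk_filter_release_rcu);
        }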
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  30. 20 Nov 2010, 3 commits
    • filter: use reciprocal divide · c26aed40
      By Eric Dumazet
      At compile time, we can replace the DIV_K instruction (divide by a
      constant value) by a reciprocal divide.
      
      At exec time, the expensive divide is replaced by a multiply, a less
      expensive operation on most processors.
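
      A sketch of the transformation, mirroring the kernel's
      reciprocal_value()/reciprocal_divide() helpers of that era (shown
      here in plain C; k must be nonzero):

        #include <stdint.h>

        static uint32_t reciprocal_value(uint32_t k)
        {
                /* done once, when the filter is attached */
                uint64_t val = (1ULL << 32) + (k - 1);

                return (uint32_t)(val / k);
        }

        static uint32_t reciprocal_divide(uint32_t a, uint32_t r)
        {
                /* per packet: one multiply + shift instead of a divide */
                return (uint32_t)(((uint64_t)a * r) >> 32);
        }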
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Changli Gao <xiaosuo@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • filter: cleanup codes[] init · 8c1592d6
      By Eric Dumazet
      Starting the translated instruction codes at 1 instead of 0 allows
      us to remove one decrement at check time and makes the codes[]
      array init cleaner.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Changli Gao <xiaosuo@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • filter: optimize sk_run_filter · 93aaae2e
      By Eric Dumazet
      Remove the pc variable to avoid arithmetic to compute fentry at each
      filter instruction; jumps directly manipulate the fentry pointer.
      
      As the last instruction of filter[] is guaranteed to be a RETURN,
      and all jumps are before the last instruction, we don't need to
      check filter bounds (the number of instructions in the filter array)
      at each iteration, so we remove that from the sk_run_filter()
      params.
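
      A sketch of the dispatch change (simplified; opcode names are the
      kernel-internal BPF_S_* ones of that era): the instruction pointer
      itself advances, with no pc index and no per-insn bounds check:

        unsigned int sk_run_filter(const struct sk_buff *skb,
                                   const struct sock_filter *fentry)
        {
                u32 A = 0;

                for (;; fentry++) {
                        switch (fentry->code) {
                        case BPF_S_JMP_JA:
                                /* the loop's fentry++ still runs after
                                 * 'continue', giving the classic
                                 * 'pc + 1 + k' jump target */
                                fentry += fentry->k;
                                continue;
                        case BPF_S_JMP_JEQ_K:
                                fentry += (A == fentry->k) ? fentry->jt
                                                           : fentry->jf;
                                continue;
                        case BPF_S_RET_K:
                                return fentry->k; /* guaranteed terminator */
                        /* ... */
                        }
                }
        }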
      
      On x86_32, remove the f_k var introduced in commit 57fe93b3
      (filter: make sure filters dont read uninitialized memory).
      
      Note: we could use a CONFIG_ARCH_HAS_{FEW|MANY}_REGISTERS in order
      to avoid too many ifdefs in this code.
      
      This helps the compiler use CPU registers to hold fentry and the A
      accumulator.
      
      On x86_32, this saves 401 bytes and, more importantly,
      sk_run_filter() runs much faster because of lower register pressure
      (one less conditional branch per BPF instruction).
      
      # size net/core/filter.o net/core/filter_pre.o
         text    data     bss     dec     hex filename
         2948       0       0    2948     b84 net/core/filter.o
         3349       0       0    3349     d15 net/core/filter_pre.o
      
      on x86_64 :
      # size net/core/filter.o net/core/filter_pre.o
         text    data     bss     dec     hex filename
         5173       0       0    5173    1435 net/core/filter.o
         5224       0       0    5224    1468 net/core/filter_pre.o
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Changli Gao <xiaosuo@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>