1. 02 Mar 2015, 11 commits
    • net: macb: Add on the fly CPU endianness detection · 62f6924c
      Authored by Arun Chandran
      Program the management descriptor's access mode according to the
      dynamically detected CPU endianness.
      Signed-off-by: Arun Chandran <achandran@mvista.com>
      Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
      Tested-by: Michal Simek <michal.simek@xilinx.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      62f6924c
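      A minimal, self-contained illustration of the detection idea above
      (not the macb driver's actual code): probe at run time which byte the
      CPU stores first, then pick the descriptor access mode from the
      result.

        #include <stdbool.h>
        #include <stdint.h>

        /* Runtime endianness probe: a big-endian CPU stores the most
         * significant byte of a 32-bit word first. */
        static bool cpu_is_big_endian(void)
        {
                const union {
                        uint32_t word;
                        uint8_t  bytes[4];
                } probe = { .word = 0x01020304 };

                return probe.bytes[0] == 0x01;
        }

        /* A driver would then program its descriptor access mode
         * accordingly (register and flag names are hardware specific). */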
    • Driver: Vmxnet3: Copy TCP header to mapped frame for IPv6 packets · 759c9359
      Authored by Shrikrishna Khare
      This allows packet parsing to be done by the fast path. The
      performance optimization already exists for IPv4; add similar logic
      for IPv6.
      Signed-off-by: Amitabha Banerjee <banerjeea@vmware.com>
      Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      759c9359
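      A rough sketch of the header span that has to land in the mapped
      frame for an IPv6/TCP packet. This is illustrative only: the constant
      names are generic, IPv6 extension headers are ignored, and it is not
      the vmxnet3 driver's code.

        #include <stddef.h>
        #include <stdint.h>

        #define ETH_HLEN  14   /* Ethernet header */
        #define IPV6_HLEN 40   /* fixed IPv6 header, extension headers ignored */

        /* Bytes the device's parser needs up front: Ethernet + IPv6 + the
         * full TCP header. The TCP header length is the data-offset nibble
         * (upper four bits of byte 12 of the TCP header) in 32-bit words. */
        static size_t ipv6_tcp_hdr_span(const uint8_t *frame)
        {
                const uint8_t *tcp = frame + ETH_HLEN + IPV6_HLEN;
                size_t tcp_hlen = (size_t)(tcp[12] >> 4) * 4;

                return ETH_HLEN + IPV6_HLEN + tcp_hlen;
        }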
    • Merge branch 'ebpf_support_for_cls_bpf' · 68932f71
      Authored by David S. Miller
      Daniel Borkmann says:
      
      ====================
      eBPF support for cls_bpf
      
      This is the non-RFC version of my patch set posted before the
      netdev01 conference [1]. It contains a couple of eBPF cleanups and
      preparation patches to get eBPF support into cls_bpf; the last patch
      adds the actual support. I'll post the iproute2 parts after the
      kernel bits are merged; an initial preview link to the code is
      mentioned in the last patch.
      
      Patches 4 and 5 were originally one patch, but I've split them into
      two parts upon request, as patch 4 alone is also needed for Alexei's
      tracing patches that go via the tip tree.
      
      Tested with tc and all in-kernel available BPF test suites.
      
      I have configured and built LLVM with --enable-experimental-targets=BPF,
      but as Alexei put it, the plan is to get rid of the experimental
      status in the future [2].
      
      Thanks a lot!
      
      v1 -> v2:
       - Removed arch patches from this series
        - x86 is already queued in tip tree, under x86/mm
        - arm64 just reposted directly to arm folks
       - Rest is unchanged
      
        [1] http://thread.gmane.org/gmane.linux.network/350191
        [2] http://article.gmane.org/gmane.linux.kernel/1874969
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      68932f71
    • cls_bpf: add initial eBPF support for programmable classifiers · e2e9b654
      Authored by Daniel Borkmann
      This work extends the "classic" BPF programmable tc classifier so
      that its scope also covers native eBPF code!
      
      This allows user space to implement its own custom, 'safe', C-like
      classifiers (or whatever other frontend language LLVM et al. may
      provide in the future), which can then be compiled with the LLVM eBPF
      backend into an eBPF ELF file. The result can be loaded into the
      kernel via iproute2's tc. In the kernel, such programs can be JITed
      on the major archs and thus run at native performance.
      
      Simple, minimal toy example to demonstrate the workflow:
      
        #include <linux/ip.h>
        #include <linux/if_ether.h>
        #include <linux/bpf.h>
      
        #include "tc_bpf_api.h"
      
        __section("classify")
        int cls_main(struct sk_buff *skb)
        {
          return (0x800 << 16) | load_byte(skb, ETH_HLEN + __builtin_offsetof(struct iphdr, tos));
        }
      
        char __license[] __section("license") = "GPL";
      
      The classifier can then be compiled into eBPF opcodes and loaded
      via tc, for example:
      
        clang -O2 -emit-llvm -c cls.c -o - | llc -march=bpf -filetype=obj -o cls.o
        tc filter add dev em1 parent 1: bpf cls.o [...]
      
      As has been demonstrated, the scope can even reach up to a fully
      fledged flow dissector (similar to samples/bpf/sockex2_kern.c).
      
      For tc, maps are allowed to be used, but only from kernel context;
      in other words, eBPF code can keep state across filter invocations.
      In the future, we may perhaps reattach to those maps from a different
      application, e.g. to read out collected statistics/state.
      
      As with socket filters, we may extend the functionality available to
      eBPF classifiers over time, depending on the use cases. For that
      purpose, cls_bpf programs use the BPF_PROG_TYPE_SCHED_CLS program
      type, so we can allow additional functions/accessors (e.g. an
      ABI-compatible offset translation to skb fields/metadata). For the
      initial cls_bpf support, we allow the same set of helper functions
      as eBPF socket filters, but the two could diverge at some point in
      time without problems.
      
      I was wondering whether cls_bpf and act_bpf could share C programs.
      I can imagine that at some point we introduce i) further common
      handlers for both (or even beyond their scope), and/or, if truly
      needed, ii) some restricted function space for each of them. Both
      can easily be abstracted through struct bpf_verifier_ops in the
      future.

      The context of cls_bpf versus act_bpf is slightly different, though:
      a cls_bpf program returns a specific classid, whereas act_bpf returns
      a drop/non-drop code; the latter may also mangle skbs in the future.
      That said, we can surely have a "classify" and an "action" section in
      a single object file, or, given the mentioned constraint, add the
      possibility of a shared section.
      
      The workflow for getting native eBPF running from tc [1] is as
      follows: for f_bpf, I've added slightly modified ELF parser code from
      Alexei's kernel sample, which reads out the LLVM-compiled object,
      sets up maps (and dynamically fixes up their fds) if any, and loads
      the eBPF instructions centrally through the bpf syscall.

      The fd of the loaded program is then passed down to cls_bpf, which
      looks up the struct bpf_prog in the fd store and holds a reference
      to it, so that it stays available beyond the lifetime of the tc
      invocation. On tc filter destruction, the reference is dropped.
      
      Moreover, I've added the option to annotate an eBPF filter with a
      name (e.g. the path to the object file, or anything else preferred),
      so that when tc dumps the currently installed filters, an admin gets
      more context for a given instance than just the file descriptor
      number.
      
      Last but not least, bpf_prog_get() and bpf_prog_put() needed to be
      exported so that eBPF can be used from cls_bpf built as a module.
      Thanks to 60a3b225 ("net: bpf: make eBPF interpreter images
      read-only"), I think this is of no concern, since anything wanting to
      alter eBPF opcodes after the verification stage would crash the
      kernel.
      
        [1] http://git.breakpoint.cc/cgit/dborkman/iproute2.git/log/?h=ebpf
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Cc: Jiri Pirko <jiri@resnulli.us>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e2e9b654
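      As a companion to the toy classifier above, the following hedged
      sketch illustrates the "state across filter invocations" point with a
      packet counter kept in an eBPF array map. The struct bpf_map_def
      layout and the bpf_map_lookup_elem() helper declaration are assumed
      to come from a samples/bpf-style header; the exact map-definition
      convention expected by the iproute2 ELF loader may differ.

        #include <linux/bpf.h>

        #include "tc_bpf_api.h"  /* __section(), struct sk_buff, helpers (assumed) */

        /* Assumed samples/bpf-style map description read by the ELF loader. */
        struct bpf_map_def {
                unsigned int type;
                unsigned int key_size;
                unsigned int value_size;
                unsigned int max_entries;
        };

        __section("maps")
        struct bpf_map_def pkt_count = {
                .type        = BPF_MAP_TYPE_ARRAY,
                .key_size    = sizeof(unsigned int),
                .value_size  = sizeof(unsigned long long),
                .max_entries = 1,
        };

        __section("classify")
        int cls_main(struct sk_buff *skb)
        {
                unsigned int key = 0;
                unsigned long long *count;

                /* The map lives in the kernel for as long as the filter
                 * holds a reference, so the counter survives individual
                 * invocations. */
                count = bpf_map_lookup_elem(&pkt_count, &key);
                if (count)
                        __sync_fetch_and_add(count, 1);

                /* Return a classid verdict, as in the toy example above. */
                return (0x800 << 16) | 1;
        }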
    • ebpf: move read-only fields to bpf_prog and shrink bpf_prog_aux · 24701ece
      Authored by Daniel Borkmann
      is_gpl_compatible and prog_type should be moved directly into
      bpf_prog, as they stay immutable during bpf_prog's lifetime, are core
      attributes, and can later be locked read-only via
      bpf_prog_select_runtime().

      With a bit of rearranging, this also allows us to shrink bpf_prog_aux
      to exactly one cacheline.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      24701ece
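      A hypothetical sketch of the layout idea, not the kernel's actual
      definitions: attributes that never change after loading live in the
      structure whose image later gets locked read-only, while mutable
      bookkeeping stays behind an aux pointer.

        #include <stdbool.h>
        #include <stdint.h>

        struct prog_aux;                 /* refcount, used maps, ... (mutable) */

        struct prog {
                uint16_t len;            /* number of instructions */
                bool     gpl_compatible; /* immutable once loaded */
                uint8_t  type;           /* immutable once loaded */
                struct prog_aux *aux;    /* mutable state, outside the RO image */
        };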
    • ebpf: add sched_cls_type and map it to sk_filter's verifier ops · 96be4325
      Authored by Daniel Borkmann
      As discussed recently and at netconf/netdev01, we want to prevent
      making bpf_verifier_ops registration available to modules and instead
      keep it at a controlled place inside the kernel.

      The reason is that out-of-tree modules could go crazy and define and
      register any verifier ops they want, doing all sorts of crap and even
      bypassing the available GPLed eBPF helper functions. We don't want to
      offer such a shiny playground, of course, but keep strict control
      inside the core kernel.
      
      This also encourages us to design eBPF user helpers carefully and
      generically, so they can be shared among various subsystems using eBPF.
      
      For the eBPF traffic classifier (cls_bpf), it's a good start to share
      the same helper facilities as we currently do in eBPF for socket filters.
      
      That way, BPF_PROG_TYPE_SCHED_CLS looks like its own type, so if one
      day there is a good reason to diverge its set of helper functions
      from the set available to socket filters, we keep ABI compatibility.
      
      In the future, we could perhaps place all bpf_prog_type_list entries
      at a central place.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      96be4325
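      A sketch of what such an in-kernel registration looks like, based on
      the bpf_register_prog_type()/struct bpf_prog_type_list interface of
      that era; the callback names mirror the existing socket filter ops
      and should be read as illustrative rather than as the exact patch
      contents.

        #include <linux/bpf.h>
        #include <linux/filter.h>
        #include <linux/init.h>

        static const struct bpf_verifier_ops sched_cls_ops = {
                .get_func_proto  = sk_filter_func_proto,
                .is_valid_access = sk_filter_is_valid_access,
        };

        static struct bpf_prog_type_list sched_cls_type __read_mostly = {
                .ops  = &sched_cls_ops,
                .type = BPF_PROG_TYPE_SCHED_CLS,
        };

        static int __init register_sched_cls_ops(void)
        {
                bpf_register_prog_type(&sched_cls_type);
                return 0;
        }
        late_initcall(register_sched_cls_ops);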
    • ebpf: remove CONFIG_BPF_SYSCALL ifdefs in socket filter code · d4052c4a
      Authored by Daniel Borkmann
      This gets rid of the CONFIG_BPF_SYSCALL ifdefs in the socket filter
      code, now that the BPF internal header can deal with it.

      While going over it, I also changed the eBPF-related functions to an
      sk_filter prefix to be more consistent with the rest of the file.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d4052c4a
    • ebpf: make internal bpf API independent of CONFIG_BPF_SYSCALL ifdefs · 0fc174de
      Authored by Daniel Borkmann
      Socket filter code and other subsystems with upcoming eBPF support
      should not need to care whether CONFIG_BPF_SYSCALL is defined or not.

      Having the bpf syscall as a config option is a nice thing, and I'd
      expect it to stay that way for expert users (though I presume its
      default setting might change one day), but code making use of it
      should not care whether it is actually enabled.

      Instead, hide this in the header files and let the rest of the code
      not worry about it.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0fc174de
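      The header-level pattern being described is roughly the following
      sketch (the stub return value is a plausible choice, not quoted from
      the patch): callers always see the same API, and the configuration
      without the syscall gets static inline stubs.

        #include <linux/err.h>
        #include <linux/types.h>

        #ifdef CONFIG_BPF_SYSCALL
        struct bpf_prog *bpf_prog_get(u32 ufd);
        void bpf_prog_put(struct bpf_prog *prog);
        #else
        /* Stubs keep callers free of ifdefs when the syscall is compiled out. */
        static inline struct bpf_prog *bpf_prog_get(u32 ufd)
        {
                return ERR_PTR(-EOPNOTSUPP);
        }

        static inline void bpf_prog_put(struct bpf_prog *prog)
        {
        }
        #endif /* CONFIG_BPF_SYSCALL */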
    • ebpf: export BPF_PSEUDO_MAP_FD to uapi · f1a66f85
      Authored by Daniel Borkmann
      We need to export BPF_PSEUDO_MAP_FD to user space, as it is used by
      the ELF BPF loader when loading instructions that need map fixups.

      An initial stage loads all maps into the kernel, and later on the
      related instructions in the eBPF blob are rewritten with
      BPF_PSEUDO_MAP_FD as the source register and the actual fd as the
      immediate value.

      The kernel verifier recognizes this marker and internally replaces
      the map fd with a real map pointer.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f1a66f85
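      A sketch of the user-space fixup step described above, with the
      loader-specific bookkeeping omitted: once a map has been created, the
      loader rewrites the 64-bit immediate load that should reference it,
      marking the instruction with BPF_PSEUDO_MAP_FD so the verifier knows
      the immediate is a map fd.

        #include <linux/bpf.h>

        /* A BPF_LD_IMM64 occupies two struct bpf_insn slots; for a map
         * reference only the first slot's immediate carries the fd. */
        static void fixup_map_load(struct bpf_insn *insn, int map_fd)
        {
                insn[0].src_reg = BPF_PSEUDO_MAP_FD;
                insn[0].imm     = map_fd;
                insn[1].imm     = 0;    /* upper 32 bits unused for a map fd */
        }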
    • ebpf: constify various function pointer structs · a2c83fff
      Authored by Daniel Borkmann
      We can move bpf_map_ops, bpf_verifier_ops and the other function
      pointer structs into a read-only section, and bpf_map_type_list and
      bpf_prog_type_list into read-mostly.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a2c83fff
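      In code, the change amounts to the following kind of annotation
      (struct and field names are sketched from memory and should be taken
      as illustrative): const ops tables can live in a read-only section,
      while the type lists, written only at registration time, become
      __read_mostly.

        static const struct bpf_map_ops array_ops = {
                .map_alloc       = array_map_alloc,
                .map_free        = array_map_free,
                .map_lookup_elem = array_map_lookup_elem,
        };

        static struct bpf_map_type_list array_type __read_mostly = {
                .ops  = &array_ops,
                .type = BPF_MAP_TYPE_ARRAY,
        };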
    • ebpf: remove kernel test stubs · f91fe17e
      Authored by Daniel Borkmann
      Now that we have BPF_PROG_TYPE_SOCKET_FILTER up and running, we can
      remove the test stubs that were added to get the verifier test suite
      up.

      We can just let the test cases probe under the socket filter type
      instead. In the fill/spill test case we cannot (yet) access fields of
      the context (skb), but we may adapt that test case in the future.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f91fe17e
  2. 01 Mar 2015, 11 commits
  3. 28 Feb 2015, 16 commits
  4. 27 Feb 2015, 2 commits
    • bridge: fix link notification skb size calculation to include vlan ranges · fed0a159
      Authored by Roopa Prabhu
      my previous patch skipped vlan range optimizations during skb size
      calculations for simplicity.
      
      This incremental patch considers vlan ranges during
      skb size calculations. This leads to a bit of code duplication
      in the fill and size calculation functions. But, I could not find a
      prettier way to do this. will take any suggestions.
      
      Previously, I had reused the existing br_get_link_af_size size calculation
      function to calculate skb size for notifications. Reusing it this time
      around creates some change in behaviour issues for the usual
      .get_link_af_size callback.
      
      This patch adds a new br_get_link_af_size_filtered() function to
      base the size calculation on the incoming filter flag and include
      vlan ranges.
      Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
      Reviewed-by: Scott Feldman <sfeldma@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fed0a159
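      To illustrate the size-calculation idea, here is a standalone sketch
      (not the bridge code): consecutive VIDs collapse into ranges, and the
      notification needs one attribute for a lone VID and a begin/end pair
      for a range, so the skb size scales with the number of ranges rather
      than the number of VLANs.

        #include <stddef.h>
        #include <stdint.h>

        /* Count the netlink attributes needed when a sorted VID list is
         * encoded as ranges: one per lone VID, two (begin + end) per range. */
        static size_t vlan_attr_count(const uint16_t *vids, size_t n)
        {
                size_t attrs = 0;
                size_t i = 0;

                while (i < n) {
                        size_t j = i;

                        /* extend the contiguous range [vids[i] .. vids[j]] */
                        while (j + 1 < n && vids[j + 1] == vids[j] + 1)
                                j++;
                        attrs += (j == i) ? 1 : 2;
                        i = j + 1;
                }
                return attrs;
        }

        /* The size contribution would then be roughly
         * attrs * nla_total_size(sizeof(struct bridge_vlan_info)). */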
    • Merge branch 'rocker-next' · 90030191
      Authored by David S. Miller
      Scott Feldman says:
      
      ====================
      rocker cleanups
      
      Pushing out some rocker cleanups I've had in my queue for a while.
      Nothing major, just some sync-up with changes that already went into
      the device code (hard-coding desc err return values and lport
      renaming). Also fix up port forwarding transitions, prompted by some
      DSA discussions about how to restore port state when a port leaves a
      bridge.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      90030191