1. 01 Apr 2014 (8 commits)
  2. 31 Mar 2014 (6 commits)
    • net: filter: rework/optimize internal BPF interpreter's instruction set · bd4cf0ed
      Authored by Alexei Starovoitov
      This patch replaces/reworks the kernel-internal BPF interpreter with
      an optimized BPF instruction set format that is modelled more closely
      on native instruction sets and is designed to be JITed with a
      one-to-one mapping. Thus, the new interpreter is noticeably faster
      than the current implementation of sk_run_filter(), mainly for two
      reasons:
      
      1. Fall-through jumps:
      
        Classic BPF jump instructions are forced to take either the 'true'
        or the 'false' branch, which causes a branch-miss penalty. The new
        BPF jump instructions have only one branch target and fall through
        otherwise, which fits CPU branch-predictor logic better. `perf stat`
        shows a drastic difference in branch-misses between the old and
        new code.
      
      2. Jump-threaded implementation of interpreter vs switch
         statement:
      
        Instead of a single table jump at the top of the 'switch' statement,
        gcc now generates multiple table-jump instructions, one per handler,
        which helps the CPU branch predictor (see the sketch after this
        list).
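
      To illustrate both points, here is a minimal, self-contained sketch
      (in userspace C, not the kernel's interpreter; opcodes and program
      are invented for illustration) of a jump-threaded dispatch loop with
      single-target, fall-through jumps, using GCC's computed-goto
      extension:

      #include <stdio.h>

      enum { OP_ADD, OP_JGT, OP_RET };   /* toy opcodes, not real BPF */

      struct insn { int op, imm, off; };

      static long run(const struct insn *pc, long acc)
      {
              /* Jump threading: each handler ends in its own indirect
               * jump instead of looping back to one central switch, so
               * the CPU predicts each dispatch site independently. */
              static void *jt[] = {
                      [OP_ADD] = &&add, [OP_JGT] = &&jgt, [OP_RET] = &&ret,
              };
              goto *jt[pc->op];
      add:
              acc += pc->imm; pc++;
              goto *jt[pc->op];
      jgt:
              /* Single branch target: a taken jump skips 'off' insns,
               * otherwise we simply fall through to the next insn. */
              pc += (acc > pc->imm) ? pc->off : 1;
              goto *jt[pc->op];
      ret:
              return acc;
      }

      int main(void)
      {
              const struct insn prog[] = {
                      { OP_ADD,   5, 0 },  /* acc += 5                  */
                      { OP_JGT,   3, 2 },  /* if (acc > 3) skip 2 insns */
                      { OP_ADD, 100, 0 },  /* skipped                   */
                      { OP_RET,   0, 0 },
              };
              printf("%ld\n", run(prog, 0));  /* prints 5 */
              return 0;
      }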
      
      Note that the verification of filters is still being done through
      sk_chk_filter() in classical BPF format, so filters from user or
      kernel space are verified in the same way as we do now, and the same
      restrictions/constraints hold as well.
      
      We reuse the current BPF JIT compilers in a way that this upgrade is
      fine even as is, but it nevertheless allows for a successive
      migration of the BPF JIT compilers to the new format.
      
      The internal instruction set migration is being done after the
      probing for JIT compilation, so in case JIT compilers are able to
      create a native opcode image, we're going to use that, and in all
      other cases we're doing a follow-up migration of the BPF program's
      instruction set, so that it can be transparently run in the new
      interpreter.
      
      In short, the *internal* format extends BPF in the following way (more
      details can be taken from the appended documentation; a sketch of the
      implied instruction encoding follows the list):
      
        - Number of registers increases from 2 to 10
        - Register width increases from 32-bit to 64-bit
        - Conditional jt/jf targets replaced with jt/fall-through
        - Adds signed > and >= insns
        - 16 4-byte stack slots for register spill-fill replaced
          with up to 512 bytes of multi-use stack space
        - Introduction of bpf_call insn and register passing convention
          for zero overhead calls from/to other kernel functions
        - Adds arithmetic right shift and endianness conversion insns
        - Adds atomic_add insn
        - Old tax/txa insns are replaced with 'mov dst,src' insn
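
      For reference, such a format implies a fixed-size 8-byte instruction
      encoding roughly along the following lines; the field names are
      illustrative and not necessarily the exact ones used in the kernel:

      struct bpf_insn_sketch {
              unsigned char code;       /* opcode */
              unsigned char dst_reg:4;  /* destination register, r0..r9 */
              unsigned char src_reg:4;  /* source register */
              short         off;        /* signed jump/branch offset */
              int           imm;        /* signed 32-bit immediate */
      };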
      
      The performance of two BPF filters, generated by libpcap and bpf_asm
      respectively, was measured on x86_64, i386 and arm32 (other libpcap
      programs have similar performance differences):
      
      fprog #1 is taken from Documentation/networking/filter.txt:
      tcpdump -i eth0 port 22 -dd
      
      fprog #2 is taken from 'man tcpdump':
      tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) -
         ((tcp[12]&0xf0)>>2)) != 0)' -dd
      
      Raw performance data from BPF micro-benchmark: SK_RUN_FILTER on the
      same SKB (cache-hit) or 10k SKBs (cache-miss); time in ns per call,
      smaller is better:
      
      --x86_64--
               fprog #1  fprog #1   fprog #2  fprog #2
               cache-hit cache-miss cache-hit cache-miss
      old BPF      90       101        192       202
      new BPF      31        71         47        97
      old BPF jit  12        34         17        44
      new BPF jit TBD
      
      --i386--
               fprog #1  fprog #1   fprog #2  fprog #2
               cache-hit cache-miss cache-hit cache-miss
      old BPF     107       136        227       252
      new BPF      40       119         69       172
      
      --arm32--
               fprog #1  fprog #1   fprog #2  fprog #2
               cache-hit cache-miss cache-hit cache-miss
      old BPF     202       300        475       540
      new BPF     180       270        330       470
      old BPF jit  26       182         37       202
      new BPF jit TBD
      
      Thus, without changing any userland BPF filters, applications on
      top of AF_PACKET (or other families) such as libpcap/tcpdump, cls_bpf
      classifier, netfilter's xt_bpf, team driver's load-balancing mode,
      and many more will have better interpreter filtering performance.
      
      While we are replacing the internal BPF interpreter, we also need
      to convert seccomp BPF in the same step to make use of the new
      internal structure, since seccomp makes use of lower-level API
      details without being decoupled through higher-level calls like
      sk_unattached_filter_{create,destroy}(), for example.
      
      Just as for normal socket filtering, seccomp BPF also experiences
      a time-to-verdict speedup:
      
      05-sim-long_jumps.c of libseccomp was used as micro-benchmark:
      
        seccomp_rule_add_exact(ctx,...
        seccomp_rule_add_exact(ctx,...
      
        rc = seccomp_load(ctx);
      
        for (i = 0; i < 10000000; i++)
           syscall(199, 100);
      
      'short filter' has 2 rules
      'large filter' has 200 rules
      
      'short filter' performance is slightly better on x86_64/i386/arm32;
      'large filter' is much faster on x86_64 and i386 and shows no
      difference on arm32.
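
      A hedged reconstruction of the benchmark's shape (the concrete rules
      in libseccomp's 05-sim-long_jumps.c differ; the two rules below are
      placeholders), built with -lseccomp:

      #define _GNU_SOURCE
      #include <seccomp.h>
      #include <unistd.h>

      int main(void)
      {
              /* Allow by default; every added rule lengthens the filter. */
              scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
              if (!ctx)
                      return 1;

              /* 'short filter': 2 rules; 'large filter': 200 rules. */
              if (seccomp_rule_add_exact(ctx, SCMP_ACT_KILL,
                                         SCMP_SYS(ptrace), 0) ||
                  seccomp_rule_add_exact(ctx, SCMP_ACT_KILL,
                                         SCMP_SYS(vhangup), 0))
                      return 1;

              if (seccomp_load(ctx))
                      return 1;
              seccomp_release(ctx);

              /* Every syscall entry now runs the BPF filter; 199 is
               * getuid32 on i386/arm32, as in the original benchmark. */
              for (int i = 0; i < 10000000; i++)
                      syscall(199, 100);
              return 0;
      }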
      
      --x86_64-- short filter
      old BPF: 2.7 sec
       39.12%  bench  libc-2.15.so       [.] syscall
        8.10%  bench  [kernel.kallsyms]  [k] sk_run_filter
        6.31%  bench  [kernel.kallsyms]  [k] system_call
        5.59%  bench  [kernel.kallsyms]  [k] trace_hardirqs_on_caller
        4.37%  bench  [kernel.kallsyms]  [k] trace_hardirqs_off_caller
        3.70%  bench  [kernel.kallsyms]  [k] __secure_computing
        3.67%  bench  [kernel.kallsyms]  [k] lock_is_held
        3.03%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
      new BPF: 2.58 sec
       42.05%  bench  libc-2.15.so       [.] syscall
        6.91%  bench  [kernel.kallsyms]  [k] system_call
        6.25%  bench  [kernel.kallsyms]  [k] trace_hardirqs_on_caller
        6.07%  bench  [kernel.kallsyms]  [k] __secure_computing
        5.08%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
      
      --arm32-- short filter
      old BPF: 4.0 sec
       39.92%  bench  [kernel.kallsyms]  [k] vector_swi
       16.60%  bench  [kernel.kallsyms]  [k] sk_run_filter
       14.66%  bench  libc-2.17.so       [.] syscall
        5.42%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
        5.10%  bench  [kernel.kallsyms]  [k] __secure_computing
      new BPF: 3.7 sec
       35.93%  bench  [kernel.kallsyms]  [k] vector_swi
       21.89%  bench  libc-2.17.so       [.] syscall
       13.45%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
        6.25%  bench  [kernel.kallsyms]  [k] __secure_computing
        3.96%  bench  [kernel.kallsyms]  [k] syscall_trace_exit
      
      --x86_64-- large filter
      old BPF: 8.6 seconds
          73.38%    bench  [kernel.kallsyms]  [k] sk_run_filter
          10.70%    bench  libc-2.15.so       [.] syscall
           5.09%    bench  [kernel.kallsyms]  [k] seccomp_bpf_load
           1.97%    bench  [kernel.kallsyms]  [k] system_call
      new BPF: 5.7 seconds
          66.20%    bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
          16.75%    bench  libc-2.15.so       [.] syscall
           3.31%    bench  [kernel.kallsyms]  [k] system_call
           2.88%    bench  [kernel.kallsyms]  [k] __secure_computing
      
      --i386-- large filter
      old BPF: 5.4 sec
      new BPF: 3.8 sec
      
      --arm32-- large filter
      old BPF: 13.5 sec
       73.88%  bench  [kernel.kallsyms]  [k] sk_run_filter
       10.29%  bench  [kernel.kallsyms]  [k] vector_swi
        6.46%  bench  libc-2.17.so       [.] syscall
        2.94%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
        1.19%  bench  [kernel.kallsyms]  [k] __secure_computing
        0.87%  bench  [kernel.kallsyms]  [k] sys_getuid
      new BPF: 13.5 sec
       76.08%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
       10.98%  bench  [kernel.kallsyms]  [k] vector_swi
        5.87%  bench  libc-2.17.so       [.] syscall
        1.77%  bench  [kernel.kallsyms]  [k] __secure_computing
        0.93%  bench  [kernel.kallsyms]  [k] sys_getuid
      
      BPF filters generated by seccomp are very branchy, so the new
      internal BPF performance is better than the old one. Performance
      gains will be even higher when BPF JIT is committed for the
      new structure, which is planned in future work (as successive
      JIT migrations).
      
      BPF has also been stress-tested with trinity's BPF fuzzer.
      
      Joint work with Daniel Borkmann.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Paul Moore <pmoore@redhat.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: linux-kernel@vger.kernel.org
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: ptp: do not reimplement PTP/BPF classifier · 164d8c66
      Authored by Daniel Borkmann
      The pch_gbe, cpts, and ixp4xx_eth drivers currently open-code and
      reimplement a BPF classifier for the PTP protocol. Since all of them
      effectively do the very same thing and load the very same PTP/BPF
      filter, we can consolidate that code by introducing ptp_classify_raw()
      in the time-stamping core framework, which can be used in drivers.
      
      As drivers get initialized after bootstrapping the core networking
      subsystem, they can make use of ptp_insns wrapped through
      ptp_classify_raw(), which allows us to simplify and remove the PTP
      classifier setup code in drivers.
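
      Driver-side use then looks roughly as follows (a sketch; the helper
      around it is hypothetical, only ptp_classify_raw() and
      PTP_CLASS_NONE come from the framework):

      #include <linux/ptp_classify.h>

      static bool my_drv_skb_is_ptp(struct sk_buff *skb)
      {
              unsigned int type = ptp_classify_raw(skb);

              /* PTP_CLASS_NONE: the shared BPF classifier rejected it. */
              return type != PTP_CLASS_NONE;
      }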
      
      Joint work with Alexei Starovoitov.
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Richard Cochran <richard.cochran@omicron.at>
      Cc: Jiri Benc <jbenc@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: ptp: use sk_unattached_filter_create() for BPF · e62d2df0
      Authored by Daniel Borkmann
      This patch migrates an open-coded sk_run_filter() implementation to
      proper use of the BPF API, that is, sk_unattached_filter_create().
      This migration is needed, as we will be internally transforming the
      filter to a different representation, and the code therefore needs
      to be decoupled from it.
      
      It is okay to do so as skb_timestamping_init() is called during
      initialization of the network stack in core initcall via sock_init().
      This would effectively also allow PTP filters to be JIT compiled if
      bpf_jit_enable is set.

      For better readability, some newlines are introduced as well; also,
      ptp_classify.h is only used in kernel space.
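
      The migration pattern, in rough outline (the one-insn program below
      is a placeholder, not the actual PTP classifier):

      #include <linux/filter.h>

      static struct sk_filter *ptp_filter __read_mostly;

      static int ptp_filter_init(void)
      {
              struct sock_filter insns[] = {
                      BPF_STMT(BPF_RET | BPF_K, 0),  /* drop everything */
              };
              struct sock_fprog fprog = {
                      .len    = ARRAY_SIZE(insns),
                      .filter = insns,
              };

              /* Verifies via sk_chk_filter() and may JIT the program. */
              return sk_unattached_filter_create(&ptp_filter, &fprog);
      }

      /* Classification then runs via SK_RUN_FILTER(ptp_filter, skb). */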
      
      Joint work with Alexei Starovoitov.
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Richard Cochran <richard.cochran@omicron.at>
      Cc: Jiri Benc <jbenc@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: move filter accounting to filter core · fbc907f0
      Authored by Daniel Borkmann
      This patch basically does two things: i) it removes the extern
      keyword from the include/linux/filter.h file to be more consistent
      with the rest of Joe's changes, and ii) it moves filter accounting
      into the filter core framework.
      
      Filter accounting, mainly done through sk_filter_{un,}charge(), takes
      care of the case when sockets are cloned through sk_clone_lock(), so
      that removal of the filter on one socket won't result in eviction
      while it is still referenced by the other.
      
      These functions actually belong to net/core/filter.c and not
      include/net/sock.h, as we want to keep all of that in a central
      place. They are also not in the fast path, so uninlining them is fine
      and even allows us to get rid of sk_filter_release_rcu()'s
      EXPORT_SYMBOL and a forward declaration.
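
      The clone case this refers to looks roughly like the following
      fragment (a sketch of the sk_clone_lock() interaction, not the
      exact kernel code):

      struct sk_filter *filter;

      /* The clone starts out sharing the parent's filter pointer; take
       * a reference and charge the new socket for it, so releasing the
       * filter on one socket cannot free it under the other. */
      filter = rcu_dereference_protected(newsk->sk_filter, 1);
      if (filter)
              sk_filter_charge(newsk, filter);

      /* On detach/close, each socket drops its own reference: */
      /* sk_filter_uncharge(sk, filter); */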
      
      Joint work with Alexei Starovoitov.
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: keep original BPF program around · a3ea269b
      Authored by Daniel Borkmann
      In order to open up the possibility to internally transform a BPF
      program into an alternative and possibly non-trivially reversible
      representation, we need to keep the original BPF program around, so
      that it can be passed back to user space without the need for a
      complex decoder.
      
      The reason for that use case resides in commit a8fc9277 ("sk-filter:
      Add ability to get socket filter program (v2)"), that is, the ability
      to retrieve the currently attached BPF filter from a given socket used
      mainly by the checkpoint-restore project, for example.
      
      Therefore, we add two helpers, sk_{store,release}_orig_filter, for
      taking care of that. In the sk_unattached_filter_create() case,
      there's no such possibility/requirement to retrieve a loaded BPF
      program, so we can spare ourselves the work in that case.
      
      This approach will simplify and slightly speed up both the
      sk_get_filter() and sock_diag_put_filterinfo() handlers, as we won't
      need to successively decode filters anymore through
      sk_decode_filter(). As we still need sk_decode_filter() later on,
      we're keeping it around.
      
      Joint work with Alexei Starovoitov.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: add jited flag to indicate jit compiled filters · f8bbbfc3
      Authored by Daniel Borkmann
      This patch adds a jited flag to the sk_filter struct in order to
      indicate whether a filter is currently JIT compiled or not. The size
      of sk_filter is not being expanded, as the 32-bit 'len' member allows
      upper bits to be reused since a filter can currently only grow as
      large as BPF_MAXINSNS.

      Therefore, there is also enough room for other flags needed in the
      future to reuse the 'len' field if necessary. The jited flag also
      allows for having alternative interpreter functions running, as
      currently we can only detect JIT compiled filters by testing that
      fp->bpf_func does not equal the address of sk_run_filter().
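
      The layout change amounts to a bitfield split of 'len', roughly
      (a sketch; surrounding members elided):

      struct sk_filter {
              atomic_t  refcnt;
              u32       jited:1,  /* is the filter JIT compiled? */
                        len:31;   /* number of filter blocks */
              /* ... */
      };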
      
      Joint work with Alexei Starovoitov.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 30 Mar 2014 (9 commits)
  4. 29 Mar 2014 (12 commits)
    • vlan: Warn the user if lowerdev has bad vlan features. · 2adb956b
      Authored by Vlad Yasevich
      Some drivers incorrectly assign vlan acceleration features to
      vlan_features, thus causing issues for Q-in-Q vlan configurations.
      Warn the user in such cases.
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: Fix crash with vlan filtering and tcpdump · fc92f745
      Authored by Vlad Yasevich
      When vlan filtering is enabled on the bridge, but
      the filter is not configured on the bridge device itself,
      running tcpdump on the bridge device will result in
      an oops with a NULL pointer dereference.  The reason
      is that br_pass_frame_up() will bypass the vlan
      check because the promisc flag is set.  It will then try
      to get the table pointer and process the packet based
      on the table.  Since the table pointer is NULL, we oops.
      Catch this special condition in br_handle_vlan().
      Reported-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      CC: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Acked-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Account for all vlan headers in skb_mac_gso_segment · 53d6471c
      Authored by Vlad Yasevich
      skb_network_protocol() already accounts for multiple vlan
      headers that may be present in the skb.  However, skb_mac_gso_segment()
      doesn't know anything about it and assumes that skb->mac_len
      is set correctly to skip all mac headers.  That may not
      always be the case.  If we are simply forwarding the packet (via
      bridge or macvtap), all vlan headers may not be accounted for.
      
      A simple solution is to allow skb_network_protocol() to return
      the vlan depth it has calculated.  This way skb_mac_gso_segment()
      will correctly skip all mac headers.
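
      In code, the idea is an out-parameter for the computed depth, roughly
      (a sketch; the exact signature in the patch may differ):

      int vlan_depth;  /* mac header plus all vlan headers, in bytes */
      __be16 type = skb_network_protocol(skb, &vlan_depth);

      if (likely(type))
              __skb_pull(skb, vlan_depth);  /* now at the network header */
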
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: move DAD and addrconf_verify processing to workqueue · c15b1cca
      Authored by Hannes Frederic Sowa
      addrconf_join_solict and addrconf_join_anycast may cause actions
      which need the rtnl lock held, especially on first address creation.
      A new DAD state is introduced which defers processing of the initial
      DAD processing into a workqueue.
      
      To get rtnl lock we need to push the code paths which depend on those
      calls up to workqueues, specifically addrconf_verify and the DAD
      processing.
      
      (v2)
      addrconf_dad_failure needs to be queued up to the workqueue, too. This
      patch introduces a new DAD state and stops the DAD processing in the
      workqueue (this is because of the possible ipv6_del_addr processing
      which removes the solicited multicast address from the device).
      
      addrconf_verify_lock is removed, too. After the transition it is not
      needed any more.
      
      As we are no longer processing in bottom-half context, we need to be
      a bit more careful about disabling bottom halves when we take
      spinlocks which are also used in BH context.
      
      Relevant backtrace:
      [  541.030090] RTNL: assertion failed at net/core/dev.c (4496)
      [  541.031143] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G           O 3.10.33-1-amd64-vyatta #1
      [  541.031145] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
      [  541.031146]  ffffffff8148a9f0 000000000000002f ffffffff813c98c1 ffff88007c4451f8
      [  541.031148]  0000000000000000 0000000000000000 ffffffff813d3540 ffff88007fc03d18
      [  541.031150]  0000880000000006 ffff88007c445000 ffffffffa0194160 0000000000000000
      [  541.031152] Call Trace:
      [  541.031153]  <IRQ>  [<ffffffff8148a9f0>] ? dump_stack+0xd/0x17
      [  541.031180]  [<ffffffff813c98c1>] ? __dev_set_promiscuity+0x101/0x180
      [  541.031183]  [<ffffffff813d3540>] ? __hw_addr_create_ex+0x60/0xc0
      [  541.031185]  [<ffffffff813cfe1a>] ? __dev_set_rx_mode+0xaa/0xc0
      [  541.031189]  [<ffffffff813d3a81>] ? __dev_mc_add+0x61/0x90
      [  541.031198]  [<ffffffffa01dcf9c>] ? igmp6_group_added+0xfc/0x1a0 [ipv6]
      [  541.031208]  [<ffffffff8111237b>] ? kmem_cache_alloc+0xcb/0xd0
      [  541.031212]  [<ffffffffa01ddcd7>] ? ipv6_dev_mc_inc+0x267/0x300 [ipv6]
      [  541.031216]  [<ffffffffa01c2fae>] ? addrconf_join_solict+0x2e/0x40 [ipv6]
      [  541.031219]  [<ffffffffa01ba2e9>] ? ipv6_dev_ac_inc+0x159/0x1f0 [ipv6]
      [  541.031223]  [<ffffffffa01c0772>] ? addrconf_join_anycast+0x92/0xa0 [ipv6]
      [  541.031226]  [<ffffffffa01c311e>] ? __ipv6_ifa_notify+0x11e/0x1e0 [ipv6]
      [  541.031229]  [<ffffffffa01c3213>] ? ipv6_ifa_notify+0x33/0x50 [ipv6]
      [  541.031233]  [<ffffffffa01c36c8>] ? addrconf_dad_completed+0x28/0x100 [ipv6]
      [  541.031241]  [<ffffffff81075c1d>] ? task_cputime+0x2d/0x50
      [  541.031244]  [<ffffffffa01c38d6>] ? addrconf_dad_timer+0x136/0x150 [ipv6]
      [  541.031247]  [<ffffffffa01c37a0>] ? addrconf_dad_completed+0x100/0x100 [ipv6]
      [  541.031255]  [<ffffffff8105313a>] ? call_timer_fn.isra.22+0x2a/0x90
      [  541.031258]  [<ffffffffa01c37a0>] ? addrconf_dad_completed+0x100/0x100 [ipv6]
      
      Hunks and backtrace stolen from a patch by Stephen Hemminger.
      Reported-by: Stephen Hemminger <stephen@networkplumber.org>
      Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: add a core netdev->tx_dropped counter · 015f0688
      Authored by Eric Dumazet
      Dropping packets in __dev_queue_xmit() when transmit queue
      is stopped (NIC TX ring buffer full or BQL limit reached) currently
      outputs a syslog message.
      
      It would be better to get a precise count of such events available in
      netdevice stats so that monitoring tools can have a clue.
      
      This extends the work done in caf586e5
      ("net: add a core netdev->rx_dropped counter")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • packet: respect devices with LLTX flag in direct xmit · 43279500
      Authored by Daniel Borkmann
      Quite often it can be useful to test with dummy or similar
      devices as a blackhole sink for skbs. Such devices are only
      equipped with a single txq, but marked as NETIF_F_LLTX as
      they do not require locking their internal queues on xmit
      (or implement locking themselves). Therefore, use the
      HARD_TX_{UN,}LOCK API instead, so that NETIF_F_LLTX will be
      respected.
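
      The direct-xmit path then takes the queue lock conditionally, roughly
      as follows (a sketch; HARD_TX_LOCK() is a no-op for NETIF_F_LLTX
      devices):

      /* ops = dev->netdev_ops; txq = the device's single tx queue */
      local_bh_disable();
      HARD_TX_LOCK(dev, txq, smp_processor_id());
      if (!netif_xmit_frozen_or_stopped(txq))
              ret = ops->ndo_start_xmit(skb, dev);
      HARD_TX_UNLOCK(dev, txq);
      local_bh_enable();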
      
      trafgen mmap/TX_RING example against dummy device with config
      foo: { fill(0xff, 64) } results in the following performance
      improvements for such scenarios on an ordinary Core i7/2.80GHz:
      
      Before:
      
       Performance counter stats for 'trafgen -i foo -o du0 -n100000000' (10 runs):
      
         160,975,944,159 instructions:k            #    0.55  insns per cycle          ( +-  0.09% )
         293,319,390,278 cycles:k                  #    0.000 GHz                      ( +-  0.35% )
             192,501,104 branch-misses:k                                               ( +-  1.63% )
                     831 context-switches:k                                            ( +-  9.18% )
                       7 cpu-migrations:k                                              ( +-  7.40% )
                  69,382 cache-misses:k            #    0.010 % of all cache refs      ( +-  2.18% )
             671,552,021 cache-references:k                                            ( +-  1.29% )
      
            22.856401569 seconds time elapsed                                          ( +-  0.33% )
      
      After:
      
       Performance counter stats for 'trafgen -i foo -o du0 -n100000000' (10 runs):
      
         133,788,739,692 instructions:k            #    0.92  insns per cycle          ( +-  0.06% )
         145,853,213,256 cycles:k                  #    0.000 GHz                      ( +-  0.17% )
              59,867,100 branch-misses:k                                               ( +-  4.72% )
                     384 context-switches:k                                            ( +-  3.76% )
                       6 cpu-migrations:k                                              ( +-  6.28% )
                  70,304 cache-misses:k            #    0.077 % of all cache refs      ( +-  1.73% )
              90,879,408 cache-references:k                                            ( +-  1.35% )
      
            11.719372413 seconds time elapsed                                          ( +-  0.24% )
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: fix get_timewait4_sock() delay computation on 64bit · e2a1d3e4
      Authored by Eric Dumazet
      It seems I missed one change in get_timewait4_sock() to compute
      the remaining time before deletion of an IPv4 timewait socket.

      This could result in wrong output in /proc/net/tcp for the tm->when
      field.
      
      Fixes: 96f817fe ("tcp: shrink tcp6_timewait_sock by one cache line")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • openvswitch: fix a possible deadlock and lockdep warning · 4f647e0a
      Authored by Flavio Leitner
      There are two problematic situations.
      
      A deadlock can happen when is_percpu is false, because the code can
      get interrupted while holding the spinlock. It then executes
      ovs_flow_stats_update() in softirq context, which tries to take
      the same lock.

      The second situation is that when is_percpu is true, the code
      correctly disables BH, but only on the local CPU, so the
      following can happen when locking the remote CPU without
      disabling BH:
      
             CPU#0                            CPU#1
        ovs_flow_stats_get()
         stats_read()
       +->spin_lock remote CPU#1        ovs_flow_stats_get()
       |  <interrupted>                  stats_read()
       |  ...                       +-->  spin_lock remote CPU#0
       |                            |     <interrupted>
       |  ovs_flow_stats_update()   |     ...
       |   spin_lock local CPU#0 <--+     ovs_flow_stats_update()
       +---------------------------------- spin_lock local CPU#1
      
      This patch disables BH for both cases fixing the deadlocks.
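
      The fix pattern is the usual one (a sketch; variable names
      illustrative):

      /* Disable BH on the reader side too, so a softirq on this CPU
       * cannot try to take a stats lock we are already holding. */
      spin_lock_bh(&stats->lock);
      /* ... fold this CPU's counters into the summary ... */
      spin_unlock_bh(&stats->lock);
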
      Acked-by: Jesse Gross <jesse@nicira.com>
      
      =================================
      [ INFO: inconsistent lock state ]
      3.14.0-rc8-00007-g632b06aa #1 Tainted: G          I
      ---------------------------------
      inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
      swapper/0/0 [HC0[0]:SC1[5]:HE1:SE0] takes:
      (&(&cpu_stats->lock)->rlock){+.?...}, at: [<ffffffffa05dd8a1>] ovs_flow_stats_update+0x51/0xd0 [openvswitch]
      {SOFTIRQ-ON-W} state was registered at:
      [<ffffffff810f973f>] __lock_acquire+0x68f/0x1c40
      [<ffffffff810fb4e2>] lock_acquire+0xa2/0x1d0
      [<ffffffff817d8d9e>] _raw_spin_lock+0x3e/0x80
      [<ffffffffa05dd9e4>] ovs_flow_stats_get+0xc4/0x1e0 [openvswitch]
      [<ffffffffa05da855>] ovs_flow_cmd_fill_info+0x185/0x360 [openvswitch]
      [<ffffffffa05daf05>] ovs_flow_cmd_build_info.constprop.27+0x55/0x90 [openvswitch]
      [<ffffffffa05db41d>] ovs_flow_cmd_new_or_set+0x4dd/0x570 [openvswitch]
      [<ffffffff816c245d>] genl_family_rcv_msg+0x1cd/0x3f0
      [<ffffffff816c270e>] genl_rcv_msg+0x8e/0xd0
      [<ffffffff816c0239>] netlink_rcv_skb+0xa9/0xc0
      [<ffffffff816c0798>] genl_rcv+0x28/0x40
      [<ffffffff816bf830>] netlink_unicast+0x100/0x1e0
      [<ffffffff816bfc57>] netlink_sendmsg+0x347/0x770
      [<ffffffff81668e9c>] sock_sendmsg+0x9c/0xe0
      [<ffffffff816692d9>] ___sys_sendmsg+0x3a9/0x3c0
      [<ffffffff8166a911>] __sys_sendmsg+0x51/0x90
      [<ffffffff8166a962>] SyS_sendmsg+0x12/0x20
      [<ffffffff817e3ce9>] system_call_fastpath+0x16/0x1b
      irq event stamp: 1740726
      hardirqs last  enabled at (1740726): [<ffffffff8175d5e0>] ip6_finish_output2+0x4f0/0x840
      hardirqs last disabled at (1740725): [<ffffffff8175d59b>] ip6_finish_output2+0x4ab/0x840
      softirqs last  enabled at (1740674): [<ffffffff8109be12>] _local_bh_enable+0x22/0x50
      softirqs last disabled at (1740675): [<ffffffff8109db05>] irq_exit+0xc5/0xd0
      
      other info that might help us debug this:
       Possible unsafe locking scenario:
      
             CPU0
             ----
        lock(&(&cpu_stats->lock)->rlock);
        <Interrupt>
          lock(&(&cpu_stats->lock)->rlock);
      
       *** DEADLOCK ***
      
      5 locks held by swapper/0/0:
       #0:  (((&ifa->dad_timer))){+.-...}, at: [<ffffffff810a7155>] call_timer_fn+0x5/0x320
       #1:  (rcu_read_lock){.+.+..}, at: [<ffffffff81788a55>] mld_sendpack+0x5/0x4a0
       #2:  (rcu_read_lock_bh){.+....}, at: [<ffffffff8175d149>] ip6_finish_output2+0x59/0x840
       #3:  (rcu_read_lock_bh){.+....}, at: [<ffffffff8168ba75>] __dev_queue_xmit+0x5/0x9b0
       #4:  (rcu_read_lock){.+.+..}, at: [<ffffffffa05e41b5>] internal_dev_xmit+0x5/0x110 [openvswitch]
      
      stack backtrace:
      CPU: 0 PID: 0 Comm: swapper/0 Tainted: G          I  3.14.0-rc8-00007-g632b06aa #1
      Hardware name:                  /DX58SO, BIOS SOX5810J.86A.5599.2012.0529.2218 05/29/2012
       0000000000000000 0fcf20709903df0c ffff88042d603808 ffffffff817cfe3c
       ffffffff81c134c0 ffff88042d603858 ffffffff817cb6da 0000000000000005
       ffffffff00000001 ffff880400000000 0000000000000006 ffffffff81c134c0
      Call Trace:
       <IRQ>  [<ffffffff817cfe3c>] dump_stack+0x4d/0x66
       [<ffffffff817cb6da>] print_usage_bug+0x1f4/0x205
       [<ffffffff810f7f10>] ? check_usage_backwards+0x180/0x180
       [<ffffffff810f8963>] mark_lock+0x223/0x2b0
       [<ffffffff810f96d3>] __lock_acquire+0x623/0x1c40
       [<ffffffff810f5707>] ? __lock_is_held+0x57/0x80
       [<ffffffffa05e26c6>] ? masked_flow_lookup+0x236/0x250 [openvswitch]
       [<ffffffff810fb4e2>] lock_acquire+0xa2/0x1d0
       [<ffffffffa05dd8a1>] ? ovs_flow_stats_update+0x51/0xd0 [openvswitch]
       [<ffffffff817d8d9e>] _raw_spin_lock+0x3e/0x80
       [<ffffffffa05dd8a1>] ? ovs_flow_stats_update+0x51/0xd0 [openvswitch]
       [<ffffffffa05dd8a1>] ovs_flow_stats_update+0x51/0xd0 [openvswitch]
       [<ffffffffa05dcc64>] ovs_dp_process_received_packet+0x84/0x120 [openvswitch]
       [<ffffffff810f93f7>] ? __lock_acquire+0x347/0x1c40
       [<ffffffffa05e3bea>] ovs_vport_receive+0x2a/0x30 [openvswitch]
       [<ffffffffa05e4218>] internal_dev_xmit+0x68/0x110 [openvswitch]
       [<ffffffffa05e41b5>] ? internal_dev_xmit+0x5/0x110 [openvswitch]
       [<ffffffff8168b4a6>] dev_hard_start_xmit+0x2e6/0x8b0
       [<ffffffff8168be87>] __dev_queue_xmit+0x417/0x9b0
       [<ffffffff8168ba75>] ? __dev_queue_xmit+0x5/0x9b0
       [<ffffffff8175d5e0>] ? ip6_finish_output2+0x4f0/0x840
       [<ffffffff8168c430>] dev_queue_xmit+0x10/0x20
       [<ffffffff8175d641>] ip6_finish_output2+0x551/0x840
       [<ffffffff8176128a>] ? ip6_finish_output+0x9a/0x220
       [<ffffffff8176128a>] ip6_finish_output+0x9a/0x220
       [<ffffffff8176145f>] ip6_output+0x4f/0x1f0
       [<ffffffff81788c29>] mld_sendpack+0x1d9/0x4a0
       [<ffffffff817895b8>] mld_send_initial_cr.part.32+0x88/0xa0
       [<ffffffff817691b0>] ? addrconf_dad_completed+0x220/0x220
       [<ffffffff8178e301>] ipv6_mc_dad_complete+0x31/0x50
       [<ffffffff817690d7>] addrconf_dad_completed+0x147/0x220
       [<ffffffff817691b0>] ? addrconf_dad_completed+0x220/0x220
       [<ffffffff8176934f>] addrconf_dad_timer+0x19f/0x1c0
       [<ffffffff810a71e9>] call_timer_fn+0x99/0x320
       [<ffffffff810a7155>] ? call_timer_fn+0x5/0x320
       [<ffffffff817691b0>] ? addrconf_dad_completed+0x220/0x220
       [<ffffffff810a76c4>] run_timer_softirq+0x254/0x3b0
       [<ffffffff8109d47d>] __do_softirq+0x12d/0x480
      Signed-off-by: Flavio Leitner <fbl@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: Fix handling stacked vlan tags · 99b192da
      Authored by Toshiaki Makita
      If a bridge with vlan_filtering enabled receives frames with stacked
      vlan tags, i.e., they have two vlan tags, br_vlan_untag() strips not
      only the outer tag but also the inner tag.
      
      br_vlan_untag() is called only from br_handle_vlan(), and in this case,
      it is enough to set skb->vlan_tci to 0 here, because vlan_tci has already
      been set before calling br_handle_vlan().
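
      The resulting fix is a one-liner in spirit (a sketch):

      /* The outer tag already lives out-of-band in vlan_tci, so undoing
       * it means clearing vlan_tci; a full untag helper would also strip
       * an inner tag embedded in skb->data. */
      skb->vlan_tci = 0;
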
      Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      Acked-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: Fix inability to retrieve vlan tags when tx offload is disabled · 12464bb8
      Authored by Toshiaki Makita
      Bridge vlan code (br_vlan_get_tag()) assumes that all frames have
      vlan_tci set if they are tagged, but if vlan tx offload is manually
      disabled on the bridge device and frames are sent from a vlan device
      on top of the bridge device, the tags are embedded in skb->data and
      they break this assumption.
      Extract embedded vlan tags and move them to vlan_tci at ingress.
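
      The ingress fix follows the usual pattern for in-band tags (a sketch
      using helper names from that kernel era):

      /* If the tag is still embedded in skb->data rather than carried
       * out-of-band in vlan_tci, pull it out first. */
      if (unlikely(!vlan_tx_tag_present(skb) &&
                   skb->protocol == htons(ETH_P_8021Q))) {
              skb = vlan_untag(skb);
              if (unlikely(!skb))
                      return false;
      }
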
      Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      Acked-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: make discovery domain a bearer attribute · 16470111
      Authored by Erik Hugne
      The node discovery domain is assigned when a bearer is enabled.
      In the previous commit we reflect this attribute directly in the
      bearer structure since it's needed to reinitialize the node
      discovery mechanism after a hardware address change.
      
      There's no need to replicate this attribute anywhere else, so we
      remove it from the tipc_link_req structure.
      Signed-off-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: fix neighbor detection problem after hw address change · a21a584d
      Authored by Erik Hugne
      If the hardware address of an underlying netdevice is changed, it is
      not enough to simply reset the bearer/links over this device. We
      also need to reflect this change in the TIPC bearer and node
      discovery structures.

      This patch adds the necessary reinitialization of the node discovery
      mechanism following a hardware address change so that the correct
      originating media address is advertised in the discovery messages.
      Signed-off-by: Erik Hugne <erik.hugne@ericsson.com>
      Reported-by: Dong Liu <dliu.cn@gmail.com>
      Reviewed-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 28 Mar 2014 (5 commits)
    • core, nfqueue, openvswitch: Orphan frags in skb_zerocopy and handle errors · 36d5fe6a
      Authored by Zoltan Kiss
      skb_zerocopy() can copy elements of the frags array between skbs, but
      it doesn't orphan them. It also doesn't handle errors, so this patch
      takes care of that as well and modifies the callers accordingly.
      skb_tx_error() is also added to the callers so they will signal the
      failed delivery towards the creator of the skb.
      Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • hsr: replace del_timer by del_timer_sync · 02f2d5a0
      Authored by Julia Lawall
      Use del_timer_sync to ensure that the timer is stopped on all CPUs
      before the driver exits.
      
      This change was suggested by Thomas Gleixner.
      
      The semantic patch that makes this change is as follows:
      (http://coccinelle.lip6.fr/)
      
      // <smpl>
      @r@
      declarer name module_exit;
      identifier ex;
      @@
      
      module_exit(ex);
      
      @@
      identifier r.ex;
      @@
      
      ex(...) {
        <...
      - del_timer
      + del_timer_sync
          (...)
        ...>
      }
      // </smpl>
      Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • atm: replace del_timer by del_timer_sync · 84275593
      Authored by Julia Lawall
      Use del_timer_sync to ensure that the timer is stopped on all CPUs
      before the driver exits.
      
      This change was suggested by Thomas Gleixner.
      
      The semantic patch that makes this change is as follows:
      (http://coccinelle.lip6.fr/)
      
      // <smpl>
      @r@
      declarer name module_exit;
      identifier ex;
      @@
      
      module_exit(ex);
      
      @@
      identifier r.ex;
      @@
      
      ex(...) {
        <...
      - del_timer
      + del_timer_sync
          (...)
        ...>
      }
      // </smpl>
      Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: tcp_make_synack() minor changes · a0b8486c
      Authored by Eric Dumazet
      There is no need to allocate 15 bytes in excess for a SYNACK packet,
      as it contains no data, only headers.

      SYNACK packets are always generated in softirq context and contain a
      single segment, so we can use TCP_INC_STATS_BH().
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: do not overwrite inetpeer metrics prematurely · e5fd387a
      Authored by Michal Kubeček
      If an IPv6 host route with metrics exists, an attempt to add a
      new route for the same target with different metrics fails but
      rewrites the metrics anyway:
      
      12sp0:~ # ip route add fec0::1 dev eth0 rto_min 1000
      12sp0:~ # ip -6 route show
      fe80::/64 dev eth0  proto kernel  metric 256
      fec0::1 dev eth0  metric 1024  rto_min lock 1s
      12sp0:~ # ip route add fec0::1 dev eth0 rto_min 1500
      RTNETLINK answers: File exists
      12sp0:~ # ip -6 route show
      fe80::/64 dev eth0  proto kernel  metric 256
      fec0::1 dev eth0  metric 1024  rto_min lock 1.5s
      
      This is caused by all IPv6 host routes using the metrics in
      their inetpeer (or the shared default). This also holds for the
      new route created in ip6_route_add() which shares the metrics
      with the already existing route and thus ip6_route_add()
      rewrites the metrics even if the new route ends up not being
      used at all.
      
      Another problem is that old metrics in inetpeer can reappear
      unexpectedly for a new route, e.g.
      
      12sp0:~ # ip route add fec0::1 dev eth0 rto_min 1000
      12sp0:~ # ip route del fec0::1
      12sp0:~ # ip route add fec0::1 dev eth0
      12sp0:~ # ip route change fec0::1 dev eth0 hoplimit 10
      12sp0:~ # ip -6 route show
      fe80::/64 dev eth0  proto kernel  metric 256
      fec0::1 dev eth0  metric 1024  hoplimit 10 rto_min lock 1s
      
      Resolve the first problem by moving the setting of metrics down
      into fib6_add_rt2node() to the point where we are sure we are
      inserting the new route into the tree. The second problem is
      addressed by introducing a new flag, DST_METRICS_FORCE_OVERWRITE,
      which is set for a new host route in ip6_route_add() and makes
      ipv6_cow_metrics() always overwrite the metrics in the inetpeer
      (even if they are not "new"); it is reset after that.
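
      Conceptually (a purely illustrative sketch; only the flag name comes
      from the patch, the helper and bit value are hypothetical):

      /* A spare low bit in the dst's _metrics word marks "overwrite the
       * inetpeer metrics once"; it is cleared again after the copy. */
      #define DST_METRICS_FORCE_OVERWRITE 0x2UL

      static inline bool dst_metrics_force_overwrite(struct dst_entry *dst)
      {
              return dst->_metrics & DST_METRICS_FORCE_OVERWRITE;
      }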
      
      v5: use a flag in _metrics member rather than one in flags
      
      v4: fix a typo making a condition always true (thanks to Hannes
      Frederic Sowa)
      
      v3: rewritten based on David Miller's idea to move setting the
      metrics (and allocation in non-host case) down to the point we
      already know the route is to be inserted. Also rebased to
      net-next as it is quite late in the cycle.
      Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
      Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>