1. 21 December 2018 (6 commits)
    • bpf: sk_msg, sock{map|hash} redirect through ULP · 0608c69c
      Committed by John Fastabend
      A sockmap program that redirects through a kTLS ULP enabled socket
      will not work correctly because the ULP layer is skipped. This
      fixes the behavior to call through the ULP layer on redirect to
      ensure any operations required on the data stream at the ULP layer
      continue to be applied.
      
      To do this we add an internal flag MSG_SENDPAGE_NOPOLICY to avoid
      calling the BPF layer on a redirected message. This is
      required to avoid calling the BPF layer multiple times (possibly
      recursively) which is not the current/expected behavior without
      ULPs. In the future we may add a redirect flag if users _do_
      want the policy applied again but this would need to work for both
      ULP and non-ULP sockets and be opt-in to avoid breaking existing
      programs.
      
      Also, to avoid polluting the flag space with an internal flag, we
      reuse the flag space by overlapping MSG_SENDPAGE_NOPOLICY with
      MSG_WAITFORONE. Here MSG_WAITFORONE is specific to the recv path and
      MSG_SENDPAGE_NOPOLICY is only used in sendpage hooks. The last thing
      to verify is that the user space API masks flags correctly so the
      flag cannot be set from user space. (Note this needs to be true
      regardless, because we already have internal flags in use that user
      space must not be able to set.) For completeness, there are two UAPI
      paths into sendpage: sendfile and splice.
      
      In the sendfile case, the function do_sendfile() zeroes the flags:
      
      ./fs/read_write.c:
       static ssize_t do_sendfile(int out_fd, int in_fd, loff_t *ppos,
      		   	    size_t count, loff_t max)
       {
         ...
         fl = 0;
      #if 0
         /*
          * We need to debate whether we can enable this or not. The
          * man page documents EAGAIN return for the output at least,
          * and the application is arguably buggy if it doesn't expect
          * EAGAIN on a non-blocking file descriptor.
          */
          if (in.file->f_flags & O_NONBLOCK)
      	fl = SPLICE_F_NONBLOCK;
      #endif
          file_start_write(out.file);
          retval = do_splice_direct(in.file, &pos, out.file, &out_pos, count, fl);
       }
      
      In the splice case the pipe_to_sendpage "actor" is used which
      masks flags with SPLICE_F_MORE.
      
      ./fs/splice.c:
       static int pipe_to_sendpage(struct pipe_inode_info *pipe,
      			    struct pipe_buffer *buf, struct splice_desc *sd)
       {
         ...
         more = (sd->flags & SPLICE_F_MORE) ? MSG_MORE : 0;
         ...
       }
      
      This confirms what we expect: the internal flags are in fact internal
      to the socket side.
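
      As an illustration only (numeric values assumed, not taken verbatim from
      this patch), the overlap can be pictured as two defines sharing one bit,
      which is safe because the two flags are never valid on the same call path:

       /* recvmmsg() only: return once at least one packet has been received */
       #define MSG_WAITFORONE        0x10000
       /* sendpage() internal only: skip the BPF policy on a ULP redirect */
       #define MSG_SENDPAGE_NOPOLICY 0x10000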
      
      Fixes: d3b18ad3 ("tls: add bpf support to sk_msg handling")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: sk_msg, zap ingress queue on psock down · a136678c
      Committed by John Fastabend
      In addition to releasing any corked data on a psock when the psock
      is removed, we should also release any skbs in the ingress work queue.
      Otherwise the skbs do eventually get freed, but so late in the teardown
      process that we hit the WARNING below due to a non-zero sk_forward_alloc.
      
        void sk_stream_kill_queues(struct sock *sk)
        {
      	...
      	WARN_ON(sk->sk_forward_alloc);
      	...
        }
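
      A hypothetical sketch of the missing teardown step (field and helper
      names assumed, not the exact patch): purge whatever is still queued for
      ingress when the psock goes away, so the charged memory is returned
      before sk_stream_kill_queues() runs its sanity checks.

       static void sk_psock_zap_ingress(struct sk_psock *psock)
       {
               struct sk_buff *skb;

               /* drop and uncharge every skb still waiting on the ingress queue */
               while ((skb = __skb_dequeue(&psock->ingress_skb)) != NULL)
                       kfree_skb(skb);
       }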
      
      Fixes: 604326b4 ("bpf, sockmap: convert to generic sk_msg interface")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: sk_msg, fix socket data_ready events · 552de910
      Committed by John Fastabend
      When an skb verdict program is in use and either another BPF program
      redirects to that socket or the new SK_PASS support is used, the
      data_ready callback does not wake up the application. Instead, because
      the stream parser/verdict is using the sk data_ready callback, we wake
      up the stream parser/verdict block.
      
      Fix this by adding a helper that checks whether the stream parser block
      is enabled on the sk and, if so, calls the saved pointer, which is the
      upper layer's wake-up function.
      
      This fixes application stalls observed when an application is waiting
      for data in a blocking read().
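
      A minimal sketch of such a helper (struct and field names assumed for
      illustration): if the strparser owns sk->sk_data_ready, wake the
      application through the callback that was saved when the parser attached.

       static void sk_psock_data_ready(struct sock *sk, struct sk_psock *psock)
       {
               if (psock->parser.enabled && psock->parser.saved_data_ready)
                       psock->parser.saved_data_ready(sk); /* wake the application */
               else
                       sk->sk_data_ready(sk);
       }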
      
      Fixes: d829e9c4 ("tls: convert to generic sk_msg interface")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: skb_verdict, support SK_PASS on RX BPF path · 51199405
      Committed by John Fastabend
      Add SK_PASS verdict support to SK_SKB_VERDICT programs. Now that
      support for redirects exists we can implement SK_PASS as a redirect
      to the same socket. This simplifies the BPF programs and avoids an
      extra map lookup on RX path for simple visibility cases.
      
      Further, this reduces user (BPF programmer, in this context) confusion
      when their program drops an skb due to lack of support.
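
      For example, a pure visibility program can now be as simple as the sketch
      below (header paths and SEC() convention assumed from libbpf usage), with
      no sockmap lookup or self-redirect required:

       #include <linux/bpf.h>
       #include <bpf/bpf_helpers.h>

       SEC("sk_skb/stream_verdict")
       int prog_pass(struct __sk_buff *skb)
       {
               /* inspect skb->len, ports, etc. here, then accept the skb */
               return SK_PASS;
       }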
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: skmsg, replace comments with BUILD bug · 7a69c0f2
      Committed by John Fastabend
      Enforce the comment about the structure layout dependency with a
      BUILD_BUG_ON to ensure the condition is maintained.
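
      Illustrative only (the struct and condition below are placeholders, not
      the actual check added by this patch): the pattern turns a layout comment
      into a compile-time assertion, so a layout change breaks the build rather
      than the code that silently relied on it.

       /* Was: "sg must stay the first member; code below depends on it." */
       BUILD_BUG_ON(offsetof(struct sk_msg, sg) != 0);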
      Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: sk_msg, improve offset chk in _is_valid_access · bc1b4f01
      Committed by John Fastabend
      The check for the max offset in sk_msg_is_valid_access uses sizeof(),
      which is incorrect because it could allow accesses past the end of the
      struct in the padded case. Further, it does not preclude accessing any
      padding that may be added in the middle of the struct. All told, this
      makes it fragile to rely on.
      
      To fix this explicitly check offsets with fields using the
      bpf_ctx_range() and bpf_ctx_range_till() macros.
      
      For reference, the current structure layout looks as follows (reported
      by pahole):
      
      struct sk_msg_md {
      	union {
      		void *             data;                 /*           8 */
      	};                                               /*     0     8 */
      	union {
      		void *             data_end;             /*           8 */
      	};                                               /*     8     8 */
      	__u32                      family;               /*    16     4 */
      	__u32                      remote_ip4;           /*    20     4 */
      	__u32                      local_ip4;            /*    24     4 */
      	__u32                      remote_ip6[4];        /*    28    16 */
      	__u32                      local_ip6[4];         /*    44    16 */
      	__u32                      remote_port;          /*    60     4 */
      	/* --- cacheline 1 boundary (64 bytes) --- */
      	__u32                      local_port;           /*    64     4 */
      	__u32                      size;                 /*    68     4 */
      
      	/* size: 72, cachelines: 2, members: 10 */
      	/* last cacheline: 8 bytes */
      };
      
      So there should be no padding at the moment but fixing this now
      prevents future errors.
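
      A sketch of the per-field pattern (not the exact hunk): bpf_ctx_range()
      expands to a case range covering exactly one field, so any offset that
      lands in padding falls through to the reject path.

       switch (off) {
       case bpf_ctx_range(struct sk_msg_md, family):
       case bpf_ctx_range(struct sk_msg_md, remote_ip4):
       case bpf_ctx_range(struct sk_msg_md, local_ip4):
       case bpf_ctx_range_till(struct sk_msg_md, remote_ip6[0], remote_ip6[3]):
       case bpf_ctx_range_till(struct sk_msg_md, local_ip6[0], local_ip6[3]):
               if (size != sizeof(__u32))
                       return false;
               break;
       default:
               return false;
       }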
      Reported-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  2. 20 December 2018 (1 commit)
    • xsk: simplify AF_XDP socket teardown · e2ce3674
      Committed by Björn Töpel
      Prior to this commit, when the struct socket object was being released,
      the UMEM did not have its reference count decreased. Instead, this was
      done in the struct sock sk_destruct function.
      
      There is no reason to keep the UMEM reference around when the socket
      is being orphaned, so in this patch xdp_put_umem is called in the
      xsk_release function. As a result, the xsk_destruct function can be
      removed.
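
      A heavily simplified, hypothetical sketch of the resulting release path
      (the real function also tears down rings and the XSKMAP binding):

       static int xsk_release(struct socket *sock)
       {
               struct sock *sk = sock->sk;

               if (!sk)
                       return 0;

               xdp_put_umem(xdp_sk(sk)->umem); /* no longer deferred to sk_destruct */
               sock_orphan(sk);
               sock->sk = NULL;
               sock_put(sk);
               return 0;
       }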
      
      Note that a struct xsk_sock reference might still linger in the XSKMAP
      after the UMEM is released, e.g. if a user does not clear the XSKMAP
      prior to closing the process. This sock will be in a "released",
      zombie-like state until the XSKMAP is removed.
      Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  3. 19 December 2018 (1 commit)
    • bpf: sockmap, metadata support for reporting size of msg · 3bdbd022
      Committed by John Fastabend
      This adds metadata to sk_msg_md for BPF programs to read the sk_msg
      size.
      
      When the SK_MSG program is running under an application that is using
      sendfile, the data is not copied into sk_msg buffers by default. Rather,
      the BPF program uses sk_msg_pull_data to read the bytes in. This avoids
      doing costly memory copies when they are not in fact needed. However,
      if we don't know the size of the sk_msg, we have to guess whether the
      needed bytes are available by doing a pull request, which may fail. By
      including the size of the sk_msg, BPF programs can check the size before
      issuing sk_msg_pull_data requests.
      
      Additionally, the same applies for sendmsg calls when the application
      provides multiple iovs. Here the BPF program needs to pull in data to
      update the data pointers, but it's not clear where the data ends without
      a size parameter. In many cases "guessing" is not easy to do and results
      in multiple calls to pull, and without bounded loops everything gets
      fairly tricky.
      
      Clean this up by including a u32 size field. Note, all writes into
      sk_msg_md are rejected already from sk_msg_is_valid_access so nothing
      additional is needed there.
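
      For example, a program could use the new field roughly as in the sketch
      below (libbpf header paths assumed) to avoid speculative pulls:

       #include <linux/bpf.h>
       #include <bpf/bpf_helpers.h>

       SEC("sk_msg")
       int prog_msg(struct sk_msg_md *msg)
       {
               const __u32 need = 16;  /* bytes of header we want to inspect */

               if (msg->size < need)
                       return SK_PASS; /* too short, nothing to parse */
               if (bpf_msg_pull_data(msg, 0, need, 0))
                       return SK_PASS; /* pull failed, let it through */
               /* msg->data .. msg->data_end now cover the first 16 bytes */
               return SK_PASS;
       }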
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  4. 11 December 2018 (4 commits)
  5. 10 December 2018 (1 commit)
  6. 09 December 2018 (3 commits)
  7. 08 December 2018 (6 commits)
    • ipv6: Check available headroom in ip6_xmit() even without options · 66033f47
      Committed by Stefano Brivio
      Even if we send an IPv6 packet without options, MAX_HEADER might not be
      enough to account for the additional headroom required by alignment of
      hardware headers.
      
      On a configuration without HYPERV_NET, WLAN, AX25, and with IPV6_TUNNEL,
      sending short SCTP packets over IPv4 over L2TP over IPv6, we start with
      100 bytes of allocated headroom in sctp_packet_transmit(), end up with 54
      bytes after l2tp_xmit_skb(), and 14 bytes in ip6_finish_output2().
      
      Those would be enough to append our 14-byte header, but we're going to
      align that to 16 bytes, and write 2 bytes outside the allocated slab
      object in neigh_hh_output().
      
      KASan says:
      
      [  264.967848] ==================================================================
      [  264.967861] BUG: KASAN: slab-out-of-bounds in ip6_finish_output2+0x1aec/0x1c70
      [  264.967866] Write of size 16 at addr 000000006af1c7fe by task netperf/6201
      [  264.967870]
      [  264.967876] CPU: 0 PID: 6201 Comm: netperf Not tainted 4.20.0-rc4+ #1
      [  264.967881] Hardware name: IBM 2827 H43 400 (z/VM 6.4.0)
      [  264.967887] Call Trace:
      [  264.967896] ([<00000000001347d6>] show_stack+0x56/0xa0)
      [  264.967903]  [<00000000017e379c>] dump_stack+0x23c/0x290
      [  264.967912]  [<00000000007bc594>] print_address_description+0xf4/0x290
      [  264.967919]  [<00000000007bc8fc>] kasan_report+0x13c/0x240
      [  264.967927]  [<000000000162f5e4>] ip6_finish_output2+0x1aec/0x1c70
      [  264.967935]  [<000000000163f890>] ip6_finish_output+0x430/0x7f0
      [  264.967943]  [<000000000163fe44>] ip6_output+0x1f4/0x580
      [  264.967953]  [<000000000163882a>] ip6_xmit+0xfea/0x1ce8
      [  264.967963]  [<00000000017396e2>] inet6_csk_xmit+0x282/0x3f8
      [  264.968033]  [<000003ff805fb0ba>] l2tp_xmit_skb+0xe02/0x13e0 [l2tp_core]
      [  264.968037]  [<000003ff80631192>] l2tp_eth_dev_xmit+0xda/0x150 [l2tp_eth]
      [  264.968041]  [<0000000001220020>] dev_hard_start_xmit+0x268/0x928
      [  264.968069]  [<0000000001330e8e>] sch_direct_xmit+0x7ae/0x1350
      [  264.968071]  [<000000000122359c>] __dev_queue_xmit+0x2b7c/0x3478
      [  264.968075]  [<00000000013d2862>] ip_finish_output2+0xce2/0x11a0
      [  264.968078]  [<00000000013d9b14>] ip_finish_output+0x56c/0x8c8
      [  264.968081]  [<00000000013ddd1e>] ip_output+0x226/0x4c0
      [  264.968083]  [<00000000013dbd6c>] __ip_queue_xmit+0x894/0x1938
      [  264.968100]  [<000003ff80bc3a5c>] sctp_packet_transmit+0x29d4/0x3648 [sctp]
      [  264.968116]  [<000003ff80b7bf68>] sctp_outq_flush_ctrl.constprop.5+0x8d0/0xe50 [sctp]
      [  264.968131]  [<000003ff80b7c716>] sctp_outq_flush+0x22e/0x7d8 [sctp]
      [  264.968146]  [<000003ff80b35c68>] sctp_cmd_interpreter.isra.16+0x530/0x6800 [sctp]
      [  264.968161]  [<000003ff80b3410a>] sctp_do_sm+0x222/0x648 [sctp]
      [  264.968177]  [<000003ff80bbddac>] sctp_primitive_ASSOCIATE+0xbc/0xf8 [sctp]
      [  264.968192]  [<000003ff80b93328>] __sctp_connect+0x830/0xc20 [sctp]
      [  264.968208]  [<000003ff80bb11ce>] sctp_inet_connect+0x2e6/0x378 [sctp]
      [  264.968212]  [<0000000001197942>] __sys_connect+0x21a/0x450
      [  264.968215]  [<000000000119aff8>] sys_socketcall+0x3d0/0xb08
      [  264.968218]  [<000000000184ea7a>] system_call+0x2a2/0x2c0
      
      [...]
      
      Just like ip_finish_output2() does for IPv4, check that we have enough
      headroom in ip6_xmit(), and reallocate it if we don't.
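
      A sketch of the approach (close to, though not necessarily identical with,
      the actual hunk): account for the link-layer header unconditionally and
      grow the headroom when the skb cannot hold it.

       head_room = sizeof(struct ipv6hdr) + LL_RESERVED_SPACE(dst->dev);
       if (opt)
               head_room += opt->opt_nflen + opt->opt_flen;

       if (unlikely(skb_headroom(skb) < head_room)) {
               struct sk_buff *skb2 = skb_realloc_headroom(skb, head_room);

               if (!skb2) {
                       kfree_skb(skb);
                       return -ENOBUFS;
               }
               if (skb->sk)
                       skb_set_owner_w(skb2, skb->sk);
               consume_skb(skb);
               skb = skb2;
       }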
      
      This issue is older than git history.
      Reported-by: Jianlin Shi <jishi@redhat.com>
      Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: lack of available data can also cause TSO defer · f9bfe4e6
      Committed by Eric Dumazet
      tcp_tso_should_defer() can return true in three different cases :
      
       1) We are cwnd-limited
       2) We are rwnd-limited
       3) We are application limited.
      
      Neal pointed out that my recent fix went too far, since it assumed
      that if we were not in case 1), we must be rwnd-limited.
      
      Fix this by properly populating the is_cwnd_limited and
      is_rwnd_limited booleans.
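
      A sketch of the intent (simplified from the actual patch): report which
      limit caused the deferral instead of assuming that "not cwnd-limited"
      implies "rwnd-limited".

       if (cong_win < send_win) {
               if (cong_win <= skb->len)
                       *is_cwnd_limited = true;
       } else {
               if (send_win <= skb->len)
                       *is_rwnd_limited = true;
       }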
      
      After this change, we can finally move the silly check for the FIN
      flag so it applies only to the application-limited case.
      
      The same move for EOR bit will be handled in net-next,
      since commit 1c09f7d0 ("tcp: do not try to defer skbs
      with eor mark (MSG_EOR)") is scheduled for linux-4.21
      
      Tested by running 200 concurrent "netperf -t TCP_RR -- -r 60000,100"
      instances and checking that none of them was rwnd_limited in the
      chrono_stat output of the "ss -ti" command.
      
      Fixes: 41727549 ("tcp: Do not underestimate rwnd_limited")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Suggested-by: Neal Cardwell <ncardwell@google.com>
      Reviewed-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
      Reviewed-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: call sk_dst_reset when set SO_DONTROUTE · 0fbe82e6
      Committed by yupeng
      After setting SO_DONTROUTE to 1, the IP layer should not route packets
      whose destination IP address is not in link scope. But if the socket has
      cached the dst_entry, such packets will still be routed until the
      sk_dst_cache expires. So we should clear the sk_dst_cache when a user
      sets the SO_DONTROUTE option. Below are server/client Python scripts
      which reproduce this issue:
      
      server side code:
      
      ==========================================================================
      import socket
      import struct
      import time
      
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.bind(('0.0.0.0', 9000))
      s.listen(1)
      sock, addr = s.accept()
      sock.setsockopt(socket.SOL_SOCKET, socket.SO_DONTROUTE, struct.pack('i', 1))
      while True:
          sock.send(b'foo')
          time.sleep(1)
      ==========================================================================
      
      client side code:
      ==========================================================================
      import socket
      import time
      
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.connect(('server_address', 9000))
      while True:
          data = s.recv(1024)
          print(data)
      ==========================================================================
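
      For reference, a minimal sketch of the corresponding kernel-side change
      (simplified sock_setsockopt() excerpt): drop the cached route whenever
      the option is set so the next transmit re-resolves it.

       case SO_DONTROUTE:
               sock_valbool_flag(sk, SOCK_LOCALROUTE, valbool);
               sk_dst_reset(sk);       /* forget the cached dst_entry */
               break;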
      Signed-off-by: yupeng <yupeng0921@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • neighbor: Improve garbage collection · 58956317
      Committed by David Ahern
      The existing garbage collection algorithm has a number of problems:
      
      1. The gc algorithm will not evict PERMANENT entries as those entries
         are managed by userspace, yet the existing algorithm walks the entire
         hash table which means it always considers PERMANENT entries when
         looking for entries to evict. In some use cases (e.g., EVPN) there
         can be tens of thousands of PERMANENT entries leading to wasted
         CPU cycles when gc kicks in. As an example, with 32k permanent
         entries, neigh_alloc has been observed taking more than 4 msec per
         invocation.
      
      2. Currently, when the number of neighbor entries hits gc_thresh2 and
         the last flush for the table was more than 5 seconds ago, gc kicks in
         and walks the entire hash table, evicting *all* entries not in
         PERMANENT or REACHABLE state and not marked as externally learned.
         There is no discriminator on when the neigh entry was created or
         whether it just moved from REACHABLE to another NUD_VALID state
         (e.g., NUD_STALE).
      
         It is possible for entries to be created or for established neighbor
         entries to be moved to STALE (e.g., an external node sends an ARP
         request) right before the 5 second window lapses:
      
              -----|---------x|----------|-----
                  t-5         t         t+5
      
         If that happens those entries are evicted during gc causing unnecessary
         thrashing on neighbor entries and userspace caches trying to track them.
      
         Further, this contradicts the description of gc_thresh2 which says
         "Entries older than 5 seconds will be cleared".
      
         One workaround is to make gc_thresh2 == gc_thresh3 but that negates the
         whole point of having separate thresholds.
      
      3. Clearing *all* non-PERMANENT/REACHABLE/externally learned neigh
         entries when gc_thresh2 is exceeded is overkill and contributes to
         thrashing, especially during startup.
      
      This patch addresses these problems as follows:
      
      1. Use of a separate list_head to track entries that can be garbage
         collected along with a separate counter. PERMANENT entries are not
         added to this list.
      
         The gc_thresh parameters are only compared to the new counter, not the
         total entries in the table. The forced_gc function is updated to only
         walk this new gc_list looking for entries to evict.
      
      2. Entries are added at the tail of the gc list and removed from the
         front.
      
      3. Entries are only evicted if they were last updated more than 5 seconds
         ago, adhering to the original intent of gc_thresh2.
      
      4. Forced gc is stopped once the number of gc_entries drops below
         gc_thresh2.
      
      5. Since gc checks do not apply to PERMANENT entries, gc levels are skipped
         when allocating a new neighbor for a PERMANENT entry. By extension this
         means there are no explicit limits on the number of PERMANENT entries
         that can be created, but this is no different than FIB entries or FDB
         entries.
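
      A hypothetical sketch of the forced-gc walk that results from the list
      above (field names assumed, eviction helper hypothetical):

       list_for_each_entry_safe(n, tmp, &tbl->gc_list, gc_list) {
               if (atomic_read(&tbl->gc_entries) <= tbl->gc_thresh2)
                       break;          /* point 4: stop once below gc_thresh2 */
               if (time_before(jiffies, n->updated + 5 * HZ))
                       continue;       /* point 3: updated within the last 5s */
               neigh_evict(n);         /* hypothetical eviction helper */
       }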
      Signed-off-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: Add br_fdb_clear_offload() · 43920edf
      Committed by Petr Machata
      When a driver unoffloads all FDB entries en bloc, it's inefficient to
      send the switchdev notifications one by one. Add a helper that unsets
      the offload flag on FDB entries on a given bridge port and VLAN.
      Signed-off-by: Petr Machata <petrm@mellanox.com>
      Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: sr: properly initialize flowi6 prior passing to ip6_route_output · 1b4e5ad5
      Committed by Shmulik Ladkani
      In 'seg6_output', stack variable 'struct flowi6 fl6' was missing
      initialization.
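
      The essence of the fix is simply zeroing the flow key before it is filled
      in, along the lines of (field assignment shown for illustration only):

       struct flowi6 fl6;

       memset(&fl6, 0, sizeof(fl6));   /* no stack garbage in the route key */
       fl6.daddr = hdr->daddr;         /* illustrative: fields set as before */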
      
      Fixes: 6c8702c6 ("ipv6: sr: add support for SRH encapsulation and injection with lwtunnels")
      Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 07 December 2018 (8 commits)
  9. 06 December 2018 (10 commits)
    • neighbor: Add extack messages for add and delete commands · 7a35a50d
      Committed by David Ahern
      Add extack messages for failures in neigh_add and neigh_delete.
      Signed-off-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: fix node keep alive interval calculation · f5d6c3e5
      Committed by Hoang Le
      When setting link tolerance, the node timer interval is calculated
      based on the link with the lowest tolerance.
      
      But when recalculated, the node timer interval was only updated if the
      new value (tolerance/4) was less than the old one, regardless of the
      number of links or of the links' lowest tolerance value.
      
      This caused two cases to be missed when the tolerance changed:
      Case 1:
      1.1/ There is one link (L1) available in the system
      1.2/ Lower L1's tolerance from 1500ms to a smaller value (e.g. 500ms)
      1.3/ Then set it back to the default (1500ms) or higher (e.g. 2000ms)
      
      Expected:
          node timer interval is 1500/4=375ms after 1.3
      
      Result:
      the node timer interval is not updated after changing the tolerance at 1.3,
      since its value 1500/4=375ms is not less than 500/4=125ms from step 1.2.
      
      Case 2:
      2.1/ There are two links (L1, L2) available in the system
      2.2/ L1 and L2 tolerance value are 2000ms as initial
      2.3/ Lower L2's tolerance from 2000ms to 1500ms
      2.4/ Disable link L2 (bring down its bearer)
      
      Expected:
          node timer interval is 2000ms/4=500ms after 2.4
      
      Result:
      the node timer interval is not updated after disabling L2, since its
      value 2000ms/4=500ms is still not less than 1500ms/4=375ms from step 2.3,
      although L2 is no longer available in the system.
      
      To fix this, we start the node interval calculation by initializing it to
      a value larger than any conceivable calculated value. This way, the link
      with the lowest tolerance will always determine the calculated value.
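
      A sketch of that recalculation (struct and field names assumed): start
      from a value larger than any valid tolerance and take the minimum over
      the links that are currently present.

       u32 intv = U32_MAX;
       int bearer_id;

       for (bearer_id = 0; bearer_id < MAX_BEARERS; bearer_id++) {
               struct tipc_link *l = n->links[bearer_id].link;

               if (l)
                       intv = min(intv, tipc_link_tolerance(l) / 4);
       }
       n->keepalive_intv = intv;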
      Acked-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv4: ipv6: netfilter: Adjust the frag mem limit when truesize changes · ebaf39e6
      Committed by Jiri Wiesner
      The *_frag_reasm() functions are susceptible to miscalculating the byte
      count of packet fragments in case the truesize of a head buffer changes.
      The truesize member may be changed by the call to skb_unclone(), leaving
      the fragment memory limit counter unbalanced even if all fragments are
      processed. This miscalculation goes unnoticed as long as the network
      namespace which holds the counter is not destroyed.
      
      Should an attempt be made to destroy a network namespace that holds an
      unbalanced fragment memory limit counter the cleanup of the namespace
      never finishes. The thread handling the cleanup gets stuck in
      inet_frags_exit_net() waiting for the percpu counter to reach zero. The
      thread is usually in running state with a stacktrace similar to:
      
       PID: 1073   TASK: ffff880626711440  CPU: 1   COMMAND: "kworker/u48:4"
        #5 [ffff880621563d48] _raw_spin_lock at ffffffff815f5480
        #6 [ffff880621563d48] inet_evict_bucket at ffffffff8158020b
        #7 [ffff880621563d80] inet_frags_exit_net at ffffffff8158051c
        #8 [ffff880621563db0] ops_exit_list at ffffffff814f5856
        #9 [ffff880621563dd8] cleanup_net at ffffffff814f67c0
       #10 [ffff880621563e38] process_one_work at ffffffff81096f14
      
      It is not possible to create new network namespaces, and processes
      that call unshare() end up being stuck in uninterruptible sleep state
      waiting to acquire the net_mutex.
      
      The bug was observed in the IPv6 netfilter code by Per Sundstrom.
      I thank him for his analysis of the problem. The parts of this patch
      that apply to IPv4 and IPv6 fragment reassembly are preemptive measures.
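
      The adjustment itself is small; a sketch of the pattern applied in the
      *_frag_reasm() functions (simplified):

       delta = -head->truesize;

       /* skb_unclone() may replace the head's data and change its truesize */
       if (skb_unclone(head, GFP_ATOMIC))
               goto out_oom;

       delta += head->truesize;
       if (delta)
               add_frag_mem_limit(fq->q.net, delta);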
      Signed-off-by: Jiri Wiesner <jwiesner@suse.com>
      Reported-by: Per Sundstrom <per.sundstrom@redqube.se>
      Acked-by: Peter Oskolkov <posk@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sctp: frag_point sanity check · afd0a800
      Committed by Jakub Audykowicz
      If for some reason an association's fragmentation point is zero,
      sctp_datamsg_from_user will endlessly try to divide a message into
      zero-sized chunks. This eventually causes a kernel panic due to running
      out of memory.
      
      Although this situation is quite unlikely, it has occurred before as
      reported. I propose to add this simple last-ditch sanity check due to
      the severity of the potential consequences.
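
      A sketch of the kind of check intended (helper and constant names are
      illustrative, not the exact patch):

       max_data = asoc->frag_point;
       if (unlikely(!max_data)) {
               pr_warn_ratelimited("%s: asoc:%p frag_point is zero, forcing max_data to a safe minimum\n",
                                   __func__, asoc);
               max_data = SCTP_DEFAULT_MINSEGMENT;     /* illustrative fallback */
       }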
      Signed-off-by: Jakub Audykowicz <jakub.audykowicz@gmail.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: netem: use a list in addition to rbtree · d66280b1
      Committed by Peter Oskolkov
      When testing high-bandwidth TCP streams with large windows,
      high latency, and low jitter, netem consumes a lot of CPU cycles
      doing rbtree rebalancing.
      
      This patch uses a linear list/queue in addition to the rbtree:
      if an incoming packet is past the tail of the linear queue, it is
      added there, otherwise it is inserted into the rbtree.
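
      A sketch of the enqueue decision (t_head/t_tail field names assumed, the
      rbtree helper is hypothetical):

       u64 tnext = netem_skb_cb(nskb)->time_to_send;

       if (!q->t_tail || tnext >= netem_skb_cb(q->t_tail)->time_to_send) {
               /* in-order arrival: append to the cheap linear tail queue */
               if (q->t_tail)
                       q->t_tail->next = nskb;
               else
                       q->t_head = nskb;
               q->t_tail = nskb;
       } else {
               /* reordered packet: fall back to rbtree insertion */
               tfifo_enqueue_rbtree(nskb, sch);        /* hypothetical helper */
       }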
      
      Without this patch, perf shows netem_enqueue, netem_dequeue,
      and rb_* functions among the top offenders. With this patch,
      only netem_enqueue is noticeable if jitter is low/absent.
      Suggested-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Peter Oskolkov <posk@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: bridge: increase multicast's default maximum number of entries · d08c6bc0
      Committed by Nikolay Aleksandrov
      The bridge's default hash_max was 512, which is rather conservative. Now
      that we're using the generic rhashtable API, which auto-shrinks, let's
      increase it to 4096 and move it to a define in br_private.h.
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: bridge: mark hash_elasticity as obsolete · cf332bca
      Committed by Nikolay Aleksandrov
      Now that the bridge multicast code uses the generic rhashtable interface,
      we can drop the hash_elasticity option, as that is already handled for us
      and is hardcoded to a maximum of RHT_ELASTICITY (currently 16). Add a
      warning about the obsolete option when hash_elasticity is set.
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: bridge: multicast: use non-bh rcu flavor · 4329596c
      Committed by Nikolay Aleksandrov
      The bridge multicast code has been using a mix of RCU and RCU-bh flavors,
      sometimes in questionable ways. Since we've moved to rhashtable, just use
      non-bh RCU everywhere. In addition, this simplifies freeing of objects
      and allows us to remove some unnecessary callback functions.
      
      v3: new patch
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: bridge: convert multicast to generic rhashtable · 19e3a9c9
      Committed by Nikolay Aleksandrov
      The bridge multicast code currently uses a custom resizable hashtable
      which predates the generic rhashtable interface. It has many shortcomings
      compared to, and duplicates functionality that is presently available via,
      the generic rhashtable, so this patch removes the custom implementation in
      favor of the kernel's generic rhashtable. The hash maximum is kept, and
      the rhashtable's size is used to do a loose check of whether it has been
      reached, in which case we revert to the old behaviour and disable further
      bridge multicast processing. Also, we can now support any hash maximum; it
      doesn't need to be a power of 2.
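
      Illustrative only (offsets and field names assumed): converting to the
      generic rhashtable typically boils down to a params block like the one
      below, with automatic_shrinking doing the resizing that the old custom
      table handled by hand.

       static const struct rhashtable_params br_mdb_rht_params = {
               .head_offset = offsetof(struct net_bridge_mdb_entry, rhnode),
               .key_offset = offsetof(struct net_bridge_mdb_entry, addr),
               .key_len = sizeof(struct br_ip),
               .automatic_shrinking = true,
       };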
      
      v3: add non-rcu br_mdb_get variant and use it where multicast_lock is
          held to avoid RCU splat, drop hash_max function and just set it
          directly
      
      v2: handle when IGMP snooping is undefined, add br_mdb_init/uninit
          placeholders
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: fix NULL ref in tail loss probe · b2b7af86
      Committed by Yuchung Cheng
      The TCP loss probe timer may fire when the retransmission queue is empty
      but the tp->packets_out counter is non-zero. tcp_send_loss_probe will
      call tcp_rearm_rto, which triggers a NULL pointer dereference by fetching
      the retransmission queue head in its sub-routines.
      
      Add a more detailed warning to help catch the root cause of the inflight
      accounting inconsistency.
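
      A sketch of the guard (simplified): bail out of the probe, and warn about
      the inconsistent accounting, rather than dereference a NULL retransmission
      queue head.

       skb = skb_rb_last(&sk->tcp_rtx_queue);
       if (unlikely(!skb)) {
               WARN_ONCE(tp->packets_out,
                         "invalid inflight: %u state %u cwnd %u mss %d\n",
                         tp->packets_out, sk->sk_state, tp->snd_cwnd, mss);
               inet_csk(sk)->icsk_pending = 0;
               return;
       }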
      Reported-by: Rafael Tinoco <rafael.tinoco@linaro.org>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>