1. 12 Jul, 2014 (1 commit)
  2. 09 Jul, 2014 (1 commit)
    • ip_tunnel: fix ip_tunnel_lookup · e0056593
      Committed by Dmitry Popov
      This patch fixes 3 similar bugs where incoming packets might be routed into
      the wrong non-wildcard tunnel:
      
      1) Consider the following setup:
          ip address add 1.1.1.1/24 dev eth0
          ip address add 1.1.1.2/24 dev eth0
          ip tunnel add ipip1 remote 2.2.2.2 local 1.1.1.1 mode ipip dev eth0
          ip link set ipip1 up
      
      Incoming ipip packets from 2.2.2.2 were routed into ipip1 even if their dst was
      1.1.1.2. Moreover, even if there was a wildcard tunnel like
         ip tunnel add ipip0 remote 2.2.2.2 local any mode ipip dev eth0
      but it was created before the explicit one (with local 1.1.1.1), incoming ipip
      packets with src = 2.2.2.2 and dst = 1.1.1.2 were still routed into ipip1.
      
      The same issue existed with all tunnels that use ip_tunnel_lookup (gre, vti).
      
      2)  ip address add 1.1.1.1/24 dev eth0
          ip tunnel add ipip1 remote 2.2.146.85 local 1.1.1.1 mode ipip dev eth0
          ip link set ipip1 up
      
      Incoming ipip packets with dst = 1.1.1.1 were routed into ipip1, no matter what
      the src address was. Any remote ip address with ip_tunnel_hash = 0 triggered this
      issue; 2.2.146.85 is just an example, there are more than 4 million of them.
      And again, a wildcard tunnel like
         ip tunnel add ipip0 remote any local 1.1.1.1 mode ipip dev eth0
      would never be matched if it was created before an explicit tunnel like the one above.
      
      GRE and VTI tunnels had the same issue.
      
      3)  ip address add 1.1.1.1/24 dev eth0
          ip tunnel add gre1 remote 2.2.146.84 local 1.1.1.1 key 1 mode gre dev eth0
          ip link set gre1 up
      
      Any incoming gre packet with key = 1 was routed into gre1, no matter what the
      src/dst addresses were. Any remote ip address with ip_tunnel_hash = 0 triggered
      the issue; 2.2.146.84 is just an example, there are more than 4 million of them.
      A wildcard tunnel like
         ip tunnel add gre2 remote any local any key 1 mode gre dev eth0
      would never be matched if it was created before an explicit tunnel like the one above.
      
      All of this happened because, while looking for a wildcard tunnel, we didn't
      check that the matched tunnel was actually a wildcard one. Fixed.
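      The shape of the fix, as a rough sketch (bucket scan modeled on
      ip_tunnel_lookup(), field names per struct ip_tunnel_parm; details may differ
      from the actual patch): a fallback match by remote address only must now also
      verify that the candidate really is a wildcard tunnel, i.e. has no local
      address set.
      
         hlist_for_each_entry_rcu(t, head, hash_node) {
                 if (remote != t->parms.iph.daddr ||
                     t->parms.iph.saddr != 0 ||       /* must really be "local any" */
                     !(t->dev->flags & IFF_UP))
                         continue;
                 /* candidate accepted ... */
         }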
      Signed-off-by: Dmitry Popov <ixaphire@qrator.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 08 Jul, 2014 (3 commits)
    • tcp: fix false undo corner cases · 6e08d5e3
      Committed by Yuchung Cheng
      The undo code assumes that, upon entering loss recovery, TCP
      1) always retransmits something, and
      2) the retransmission never fails locally (e.g., qdisc drop),
      
      so undo_marker is set in tcp_enter_recovery() and undo_retrans is
      incremented only when tcp_retransmit_skb() is successful.
      
      When these assumptions are broken, because TCP's cwnd is too small to
      retransmit or the retransmit fails locally, the next (DUP)ACK
      incorrectly reverts the cwnd and the congestion state in
      tcp_try_undo_dsack() or tcp_may_undo(). Subsequent (DUP)ACKs
      may re-enter the recovery state. The sender repeatedly enters and
      (incorrectly) exits recovery states if the retransmits continue to
      fail locally while (DUP)ACKs are received.
      
      The fix is to initialize undo_retrans to -1 and start counting on
      the first retransmission. Always increment undo_retrans even if the
      retransmissions fail locally, because they couldn't cause DSACKs to
      undo the cwnd reduction.
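      A rough sketch of the bookkeeping change (loosely based on
      tcp_enter_recovery() and tcp_retransmit_skb(); not the verbatim patch):
      
         /* On entering recovery: -1 means "no retransmission counted yet". */
         tp->undo_retrans = -1;
         
         /* In tcp_retransmit_skb(), counted even if the attempt fails locally: */
         if (tp->undo_retrans < 0)
                 tp->undo_retrans = 0;
         tp->undo_retrans += tcp_skb_pcount(skb);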
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • igmp: fix the problem when mc leave group · 52ad353a
      Committed by dingtianhong
      The problem was triggered by these steps:
      
      1) create socket, bind and then setsockopt for add mc group.
         mreq.imr_multiaddr.s_addr = inet_addr("255.0.0.37");
         mreq.imr_interface.s_addr = inet_addr("192.168.1.2");
         setsockopt(sockfd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
      
      2) drop the mc group for this socket.
         mreq.imr_multiaddr.s_addr = inet_addr("255.0.0.37");
         mreq.imr_interface.s_addr = inet_addr("0.0.0.0");
         setsockopt(sockfd, IPPROTO_IP, IP_DROP_MEMBERSHIP, &mreq, sizeof(mreq));
      
      3) then drop the socket; the mc group was still shown as used by the dev:
      
         netstat -g
      
         Interface       RefCnt Group
         --------------- ------ ---------------------
         eth2		   1	  255.0.0.37
      
      Normally, even if IP_DROP_MEMBERSHIP returns an error, the mc group still needs
      to be released from the netdev when the socket is dropped, but this process was
      broken when the default route is NULL. The reason:
      
      ip_mc_leave_group() chooses the in_dev based on imr_interface.s_addr. If the
      input address is NULL, the default route's device is chosen, the ifindex is
      taken from that device, and then walking inet->mc_list returns -ENODEV. But if
      the default route device is NULL, in_dev and ifindex are both NULL; while
      walking inet->mc_list, the mc group is released from the mc_list, but the dev
      never decrements its refcnt for this mc group. So when the socket is dropped,
      the mc_list is NULL and the dev still keeps the group.
      
      v1->v2: Following Hideaki's suggestion, we should align with IPv6 (RFC 3493)
      	and the BSDs, so check in_dev before walking the mc_list; this makes sure
      	that when we remove the mc group, we decrement the refcnt on the real dev
      	that was using the mc address. The problem can then no longer happen.
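      A rough sketch of the added check in ip_mc_leave_group() (net/ipv4/igmp.c),
      assuming the usual ip_mc_find_dev() lookup; the error label is illustrative:
      
         in_dev = ip_mc_find_dev(net, imr);
         if (!in_dev) {
                 ret = -ENODEV;  /* bail out before walking inet->mc_list */
                 goto out;
         }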
      Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv4: icmp: Fix pMTU handling for rare case · 68b7107b
      Committed by Edward Allcutt
      Some older router implementations still send Fragmentation Needed
      errors with the Next-Hop MTU field set to zero. This is explicitly
      described as an eventuality that hosts must deal with by the
      standard (RFC 1191) since older standards specified that those
      bits must be zero.
      
      Linux had a generic (for all of IPv4) implementation of the algorithm
      described in the RFC for searching a list of MTU plateaus for a good
      value. Commit 46517008 ("ipv4: Kill ip_rt_frag_needed().")
      removed this as part of the changes to remove the routing cache.
      Subsequently any Fragmentation Needed packet with a zero Next-Hop
      MTU has been discarded without being passed to the per-protocol
      handlers or notifying userspace for raw sockets.
      
      When there is a router which does not implement RFC 1191 on an
      MTU limited path then this results in stalled connections since
      large packets are discarded and the local protocols are not
      notified so they never attempt to lower the pMTU.
      
      One example I have seen is an OpenBSD router terminating IPSec
      tunnels. It's worth pointing out that this case is distinct from
      the BSD 4.2 bug which incorrectly calculated the Next-Hop MTU
      since the commit in question dismissed that as a valid concern.
      
      All of the per-protocol handlers implement the simple approach from
      RFC 1191 of immediately falling back to the minimum value. Although
      this is sub-optimal it is vastly preferable to connections hanging
      indefinitely.
      
      Remove the Next-Hop MTU != 0 check and allow such packets
      to follow the normal path.
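      A rough sketch of the guard that goes away in icmp_unreach() (net/ipv4/icmp.c);
      with it removed, a zero Next-Hop MTU packet reaches the per-protocol handlers
      like any other:
      
         case ICMP_FRAG_NEEDED:
                 /* Removed by this patch (sketch, not the verbatim diff):
                  *
                  *     if (!icmp_hdr(skb)->un.frag.mtu)
                  *             goto out;        // silently dropped
                  */
                 info = ntohs(icmp_hdr(skb)->un.frag.mtu);
                 break;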
      
      Fixes: 46517008 ("ipv4: Kill ip_rt_frag_needed().")
      Signed-off-by: Edward Allcutt <edward.allcutt@openmarket.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 03 Jul, 2014 (1 commit)
    • tcp: Fix divide by zero when pushing during tcp-repair · 5924f17a
      Committed by Christoph Paasch
      When in repair-mode and TCP_RECV_QUEUE is set, we end up calling
      tcp_push with mss_now being 0. If data is in the send-queue and
      tcp_set_skb_tso_segs gets called, we crash because it will divide by
      mss_now:
      
      [  347.151939] divide error: 0000 [#1] SMP
      [  347.152907] Modules linked in:
      [  347.152907] CPU: 1 PID: 1123 Comm: packetdrill Not tainted 3.16.0-rc2 #4
      [  347.152907] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
      [  347.152907] task: f5b88540 ti: f3c82000 task.ti: f3c82000
      [  347.152907] EIP: 0060:[<c1601359>] EFLAGS: 00210246 CPU: 1
      [  347.152907] EIP is at tcp_set_skb_tso_segs+0x49/0xa0
      [  347.152907] EAX: 00000b67 EBX: f5acd080 ECX: 00000000 EDX: 00000000
      [  347.152907] ESI: f5a28f40 EDI: f3c88f00 EBP: f3c83d10 ESP: f3c83d00
      [  347.152907]  DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
      [  347.152907] CR0: 80050033 CR2: 083158b0 CR3: 35146000 CR4: 000006b0
      [  347.152907] Stack:
      [  347.152907]  c167f9d9 f5acd080 000005b4 00000002 f3c83d20 c16013e6 f3c88f00 f5acd080
      [  347.152907]  f3c83da0 c1603b5a f3c83d38 c10a0188 00000000 00000000 f3c83d84 c10acc85
      [  347.152907]  c1ad5ec0 00000000 00000000 c1ad679c 010003e0 00000000 00000000 f3c88fc8
      [  347.152907] Call Trace:
      [  347.152907]  [<c167f9d9>] ? apic_timer_interrupt+0x2d/0x34
      [  347.152907]  [<c16013e6>] tcp_init_tso_segs+0x36/0x50
      [  347.152907]  [<c1603b5a>] tcp_write_xmit+0x7a/0xbf0
      [  347.152907]  [<c10a0188>] ? up+0x28/0x40
      [  347.152907]  [<c10acc85>] ? console_unlock+0x295/0x480
      [  347.152907]  [<c10ad24f>] ? vprintk_emit+0x1ef/0x4b0
      [  347.152907]  [<c1605716>] __tcp_push_pending_frames+0x36/0xd0
      [  347.152907]  [<c15f4860>] tcp_push+0xf0/0x120
      [  347.152907]  [<c15f7641>] tcp_sendmsg+0xf1/0xbf0
      [  347.152907]  [<c116d920>] ? kmem_cache_free+0xf0/0x120
      [  347.152907]  [<c106a682>] ? __sigqueue_free+0x32/0x40
      [  347.152907]  [<c106a682>] ? __sigqueue_free+0x32/0x40
      [  347.152907]  [<c114f0f0>] ? do_wp_page+0x3e0/0x850
      [  347.152907]  [<c161c36a>] inet_sendmsg+0x4a/0xb0
      [  347.152907]  [<c1150269>] ? handle_mm_fault+0x709/0xfb0
      [  347.152907]  [<c15a006b>] sock_aio_write+0xbb/0xd0
      [  347.152907]  [<c1180b79>] do_sync_write+0x69/0xa0
      [  347.152907]  [<c1181023>] vfs_write+0x123/0x160
      [  347.152907]  [<c1181d55>] SyS_write+0x55/0xb0
      [  347.152907]  [<c167f0d8>] sysenter_do_call+0x12/0x28
      
      This can easily be reproduced with the following packetdrill-script (the
      "magic" with netem, sk_pacing and limit_output_bytes is done to prevent
      the kernel from pushing all segments, because hitting the limit without
      doing this is not so easy with packetdrill):
      
      0   socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
      +0  setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
      
      +0  bind(3, ..., ...) = 0
      +0  listen(3, 1) = 0
      
      +0  < S 0:0(0) win 32792 <mss 1460>
      +0  > S. 0:0(0) ack 1 <mss 1460>
      +0.1  < . 1:1(0) ack 1 win 65000
      
      +0  accept(3, ..., ...) = 4
      
      // This ensures that not all segments of the snd-queue will be pushed
      +0 `tc qdisc add dev tun0 root netem delay 10ms`
      +0 `sysctl -w net.ipv4.tcp_limit_output_bytes=2`
      +0 setsockopt(4, SOL_SOCKET, 47, [2], 4) = 0
      
      +0 write(4,...,10000) = 10000
      +0 write(4,...,10000) = 10000
      
      // Set tcp-repair stuff, particularly TCP_RECV_QUEUE
      +0 setsockopt(4, SOL_TCP, 19, [1], 4) = 0
      +0 setsockopt(4, SOL_TCP, 20, [1], 4) = 0
      
      // This now will make the write push the remaining segments
      +0 setsockopt(4, SOL_SOCKET, 47, [20000], 4) = 0
      +0 `sysctl -w net.ipv4.tcp_limit_output_bytes=130000`
      
      // Now we will crash
      +0 write(4,...,1000) = 1000
      
      This happens since ec342325 (tcp: fix retransmission in repair
      mode). Prior to that, the call to tcp_push was prevented by a check for
      tp->repair.
      
      The patch fixes this by adding a new goto label, out_nopush. When exiting
      tcp_sendmsg and a push is not required, which is the case for tp->repair,
      we jump to this label.
      
      When repairing and calling send() with TCP_RECV_QUEUE, the data is
      actually put in the receive-queue. So, no push is required because no
      data has been added to the send-queue.
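      A rough sketch of the resulting control flow in tcp_sendmsg() (abridged; the
      surrounding error handling is omitted):
      
         if (unlikely(tp->repair) && tp->repair_queue == TCP_RECV_QUEUE) {
                 copied = tcp_send_rcvq(sk, msg, size);
                 goto out_nopush;        /* data went to the receive queue */
         }
         /* ... the normal send path runs here ... */
   out:
         if (copied)
                 tcp_push(sk, flags, mss_now, tp->nonagle);
   out_nopush:
         release_sock(sk);
         return copied;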
      
      Cc: Andrew Vagin <avagin@openvz.org>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Fixes: ec342325 (tcp: fix retransmission in repair mode)
      Signed-off-by: Christoph Paasch <christoph.paasch@uclouvain.be>
      Acked-by: Andrew Vagin <avagin@openvz.org>
      Acked-by: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 01 Jul, 2014 (1 commit)
    • ipv4: irq safe sk_dst_[re]set() and ipv4_sk_update_pmtu() fix · 7f502361
      Committed by Eric Dumazet
      We have two different ways to handle changes to sk->sk_dst.
      
      The first way (used by TCP) assumes the socket lock is owned by the caller
      and uses no extra lock: __sk_dst_set() & __sk_dst_reset().
      
      The other way (used by UDP) uses sk_dst_lock, because the socket lock is not
      always taken. Note that sk_dst_lock is not softirq safe.
      
      These ways are not interchangeable for a given socket type.
      
      ipv4_sk_update_pmtu(), added in linux-3.8, introduced a race, as it used
      the socket lock for synchronization, but its users might be UDP sockets.
      
      Instead of converting sk_dst_lock to a softirq safe version, use xchg()
      as we did for sk_rx_dst in commit e47eb5df ("udp: ipv4: do not use
      sk_dst_lock from softirq context")
      
      In a follow-up patch, we can probably remove sk_dst_lock, as it is then
      only used in IPv6.
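      A rough sketch of the xchg()-based setter (close to what sk_dst_set()
      becomes; helper details abridged):
      
         static inline void sk_dst_set(struct sock *sk, struct dst_entry *dst)
         {
                 struct dst_entry *old_dst;
         
                 sk_tx_queue_clear(sk);
                 /* Atomically publish the new dst; no spinlock needed. */
                 old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst);
                 dst_release(old_dst);
         }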
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Steffen Klassert <steffen.klassert@secunet.com>
      Fixes: 9cb3a50c ("ipv4: Invalidate the socket cached route on pmtu events if possible")
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 27 Jun, 2014 (1 commit)
  7. 26 Jun, 2014 (1 commit)
    • ipv4: fix dst race in sk_dst_get() · f8864972
      Committed by Eric Dumazet
      When the IP route cache was removed in linux-3.6, we broke the assumption
      that dst entries are all freed after an RCU grace period. DST_NOCACHE
      dsts were supposed to be freed from dst_release(), but it appears
      we want to keep such dsts around, either in UDP sockets or tunnels.
      
      In sk_dst_get() we need to make sure dst refcount is not 0
      before incrementing it, or else we might end up freeing a dst
      twice.
      
      DST_NOCACHE set on a dst does not mean this dst cannot be attached
      to a socket or a tunnel.
      
      Then, before the actual freeing, we need to observe an RCU grace period
      to make sure all other CPUs can catch the fact that the dst is no longer
      usable.
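      A rough sketch of the fixed reader, essentially what sk_dst_get() becomes:
      atomic_inc_not_zero() refuses to resurrect a dst whose refcount has already
      reached zero.
      
         static inline struct dst_entry *sk_dst_get(struct sock *sk)
         {
                 struct dst_entry *dst;
         
                 rcu_read_lock();
                 dst = rcu_dereference(sk->sk_dst_cache);
                 if (dst && !atomic_inc_not_zero(&dst->__refcnt))
                         dst = NULL;     /* lost the race: dst is being freed */
                 rcu_read_unlock();
                 return dst;
         }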
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Dormando <dormando@rydia.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 20 Jun, 2014 (1 commit)
    • tcp: fix tcp_match_skb_to_sack() for unaligned SACK at end of an skb · 2cd0d743
      Committed by Neal Cardwell
      If there is an MSS change (or misbehaving receiver) that causes a SACK
      to arrive that covers the end of an skb but is less than one MSS, then
      tcp_match_skb_to_sack() was rounding up pkt_len to the full length of
      the skb ("Round if necessary..."), then chopping all bytes off the skb
      and creating a zero-byte skb in the write queue.
      
      This was visible now because the recently simplified TLP logic in
      bef1909e ("tcp: fixing TLP's FIN recovery") could find that 0-byte
      skb at the end of the write queue, and now that we do not check that
      skb's length we could send it as a TLP probe.
      
      Consider the following example scenario:
      
       mss: 1000
       skb: seq: 0 end_seq: 4000  len: 4000
       SACK: start_seq: 3999 end_seq: 4000
      
      The tcp_match_skb_to_sack() code will compute:
      
       in_sack = false
       pkt_len = start_seq - TCP_SKB_CB(skb)->seq = 3999 - 0 = 3999
       new_len = (pkt_len / mss) * mss = (3999/1000)*1000 = 3000
       new_len += mss = 4000
      
      Previously we would find the new_len > skb->len check failing, so we
      would fall through and set pkt_len = new_len = 4000 and chop off
      pkt_len of 4000 from the 4000-byte skb, leaving a 0-byte segment
      afterward in the write queue.
      
      With this commit, the new new_len >= skb->len check succeeds, so we
      return without trying to fragment.
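      A rough sketch of the added guard in tcp_match_skb_to_sack()
      (net/ipv4/tcp_input.c), after new_len has been rounded up to an MSS
      boundary:
      
         new_len += mss;
         if (new_len >= skb->len)
                 return 0;       /* don't split off a zero-byte remainder */
         pkt_len = new_len;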
      
      Fixes: adb92db8 ("tcp: Make SACK code to split only at mss boundaries")
      Reported-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Ilpo Jarvinen <ilpo.jarvinen@helsinki.fi>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 17 Jun, 2014 (1 commit)
  10. 14 Jun, 2014 (1 commit)
  11. 13 Jun, 2014 (1 commit)
  12. 12 Jun, 2014 (3 commits)
    • net: Add skb_gro_postpull_rcsum to udp and vxlan · 6bae1d4c
      Committed by Tom Herbert
      We need to call skb_gro_postpull_rcsum() for GRO to work with checksum
      complete.
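      A rough sketch of the call-site pattern in the UDP GRO receive path, where
      uh is the struct udphdr pointer (the helper keeps skb->csum consistent after
      bytes are pulled; exact placement may differ from the patch):
      
         skb_gro_pull(skb, sizeof(*uh));               /* pull the UDP header  */
         skb_gro_postpull_rcsum(skb, uh, sizeof(*uh)); /* and fix up skb->csum */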
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Save software checksum complete · 7e3cead5
      Committed by Tom Herbert
      In skb_checksum_complete(), if we need to compute the checksum for the
      packet (via skb_checksum), save the result as CHECKSUM_COMPLETE.
      Subsequent checksum verification can then use this.
      
      Also, add a csum_complete_sw flag to distinguish between software- and
      hardware-generated checksum complete; we should always be able to trust
      the software computation.
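      A conceptual sketch of the idea, not the verbatim code (the skb_shared()
      test and the fault reporting of the real function are abridged):
      
         /* Compute the checksum in software ... */
         __wsum csum = skb_checksum(skb, 0, skb->len, 0);
         
         /* ... and cache it, flagged as a (trusted) software result. */
         if (!skb_shared(skb)) {
                 skb->csum = csum;
                 skb->ip_summed = CHECKSUM_COMPLETE;
                 skb->csum_complete_sw = 1;
         }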
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv4: fix a race in ip4_datagram_release_cb() · 9709674e
      Committed by Eric Dumazet
      Alexey provided an AddressSanitizer [1] report that finally gave a good hint
      at the origin of various problems already reported by Dormando
      in the past [2].
      
      The problem comes from the fact that UDP can have a lockless TX path:
      concurrent threads can manipulate sk_dst_cache while another thread,
      holding the socket lock, calls __sk_dst_set() in
      ip4_datagram_release_cb() (this was added in linux-3.8).
      
      It seems that all we need to do is to use sk_dst_check() and
      sk_dst_set() so that all the writers hold the same spinlock
      (sk->sk_dst_lock) to prevent corruption.
      
      The TCP stack does not need this protection, as all sk_dst_cache writers
      hold the socket lock.
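      A rough sketch of the resulting writer in ip4_datagram_release_cb(),
      assuming rt is the freshly re-looked-up struct rtable (route lookup
      omitted):
      
         dst = !IS_ERR(rt) ? &rt->dst : NULL;
         sk_dst_set(sk, dst);    /* was __sk_dst_set(); now serialized with
                                  * the lockless UDP TX path */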
      
      [1]
      https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
      
      AddressSanitizer: heap-use-after-free in ipv4_dst_check
      Read of size 2 by thread T15453:
       [<ffffffff817daa3a>] ipv4_dst_check+0x1a/0x90 ./net/ipv4/route.c:1116
       [<ffffffff8175b789>] __sk_dst_check+0x89/0xe0 ./net/core/sock.c:531
       [<ffffffff81830a36>] ip4_datagram_release_cb+0x46/0x390 ??:0
       [<ffffffff8175eaea>] release_sock+0x17a/0x230 ./net/core/sock.c:2413
       [<ffffffff81830882>] ip4_datagram_connect+0x462/0x5d0 ??:0
       [<ffffffff81846d06>] inet_dgram_connect+0x76/0xd0 ./net/ipv4/af_inet.c:534
       [<ffffffff817580ac>] SYSC_connect+0x15c/0x1c0 ./net/socket.c:1701
       [<ffffffff817596ce>] SyS_connect+0xe/0x10 ./net/socket.c:1682
       [<ffffffff818b0a29>] system_call_fastpath+0x16/0x1b
      ./arch/x86/kernel/entry_64.S:629
      
      Freed by thread T15455:
       [<ffffffff8178d9b8>] dst_destroy+0xa8/0x160 ./net/core/dst.c:251
       [<ffffffff8178de25>] dst_release+0x45/0x80 ./net/core/dst.c:280
       [<ffffffff818304c1>] ip4_datagram_connect+0xa1/0x5d0 ??:0
       [<ffffffff81846d06>] inet_dgram_connect+0x76/0xd0 ./net/ipv4/af_inet.c:534
       [<ffffffff817580ac>] SYSC_connect+0x15c/0x1c0 ./net/socket.c:1701
       [<ffffffff817596ce>] SyS_connect+0xe/0x10 ./net/socket.c:1682
       [<ffffffff818b0a29>] system_call_fastpath+0x16/0x1b
      ./arch/x86/kernel/entry_64.S:629
      
      Allocated by thread T15453:
       [<ffffffff8178d291>] dst_alloc+0x81/0x2b0 ./net/core/dst.c:171
       [<ffffffff817db3b7>] rt_dst_alloc+0x47/0x50 ./net/ipv4/route.c:1406
       [<     inlined    >] __ip_route_output_key+0x3e8/0xf70
      __mkroute_output ./net/ipv4/route.c:1939
       [<ffffffff817dde08>] __ip_route_output_key+0x3e8/0xf70 ./net/ipv4/route.c:2161
       [<ffffffff817deb34>] ip_route_output_flow+0x14/0x30 ./net/ipv4/route.c:2249
       [<ffffffff81830737>] ip4_datagram_connect+0x317/0x5d0 ??:0
       [<ffffffff81846d06>] inet_dgram_connect+0x76/0xd0 ./net/ipv4/af_inet.c:534
       [<ffffffff817580ac>] SYSC_connect+0x15c/0x1c0 ./net/socket.c:1701
       [<ffffffff817596ce>] SyS_connect+0xe/0x10 ./net/socket.c:1682
       [<ffffffff818b0a29>] system_call_fastpath+0x16/0x1b
      ./arch/x86/kernel/entry_64.S:629
      
      [2]
      <4>[196727.311203] general protection fault: 0000 [#1] SMP
      <4>[196727.311224] Modules linked in: xt_TEE xt_dscp xt_DSCP macvlan bridge coretemp crc32_pclmul ghash_clmulni_intel gpio_ich microcode ipmi_watchdog ipmi_devintf sb_edac edac_core lpc_ich mfd_core tpm_tis tpm tpm_bios ipmi_si ipmi_msghandler isci igb libsas i2c_algo_bit ixgbe ptp pps_core mdio
      <4>[196727.311333] CPU: 17 PID: 0 Comm: swapper/17 Not tainted 3.10.26 #1
      <4>[196727.311344] Hardware name: Supermicro X9DRi-LN4+/X9DR3-LN4+/X9DRi-LN4+/X9DR3-LN4+, BIOS 3.0 07/05/2013
      <4>[196727.311364] task: ffff885e6f069700 ti: ffff885e6f072000 task.ti: ffff885e6f072000
      <4>[196727.311377] RIP: 0010:[<ffffffff815f8c7f>]  [<ffffffff815f8c7f>] ipv4_dst_destroy+0x4f/0x80
      <4>[196727.311399] RSP: 0018:ffff885effd23a70  EFLAGS: 00010282
      <4>[196727.311409] RAX: dead000000200200 RBX: ffff8854c398ecc0 RCX: 0000000000000040
      <4>[196727.311423] RDX: dead000000100100 RSI: dead000000100100 RDI: dead000000200200
      <4>[196727.311437] RBP: ffff885effd23a80 R08: ffffffff815fd9e0 R09: ffff885d5a590800
      <4>[196727.311451] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
      <4>[196727.311464] R13: ffffffff81c8c280 R14: 0000000000000000 R15: ffff880e85ee16ce
      <4>[196727.311510] FS:  0000000000000000(0000) GS:ffff885effd20000(0000) knlGS:0000000000000000
      <4>[196727.311554] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4>[196727.311581] CR2: 00007a46751eb000 CR3: 0000005e65688000 CR4: 00000000000407e0
      <4>[196727.311625] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      <4>[196727.311669] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      <4>[196727.311713] Stack:
      <4>[196727.311733]  ffff8854c398ecc0 ffff8854c398ecc0 ffff885effd23ab0 ffffffff815b7f42
      <4>[196727.311784]  ffff88be6595bc00 ffff8854c398ecc0 0000000000000000 ffff8854c398ecc0
      <4>[196727.311834]  ffff885effd23ad0 ffffffff815b86c6 ffff885d5a590800 ffff8816827821c0
      <4>[196727.311885] Call Trace:
      <4>[196727.311907]  <IRQ>
      <4>[196727.311912]  [<ffffffff815b7f42>] dst_destroy+0x32/0xe0
      <4>[196727.311959]  [<ffffffff815b86c6>] dst_release+0x56/0x80
      <4>[196727.311986]  [<ffffffff81620bd5>] tcp_v4_do_rcv+0x2a5/0x4a0
      <4>[196727.312013]  [<ffffffff81622b5a>] tcp_v4_rcv+0x7da/0x820
      <4>[196727.312041]  [<ffffffff815fd9e0>] ? ip_rcv_finish+0x360/0x360
      <4>[196727.312070]  [<ffffffff815de02d>] ? nf_hook_slow+0x7d/0x150
      <4>[196727.312097]  [<ffffffff815fd9e0>] ? ip_rcv_finish+0x360/0x360
      <4>[196727.312125]  [<ffffffff815fda92>] ip_local_deliver_finish+0xb2/0x230
      <4>[196727.312154]  [<ffffffff815fdd9a>] ip_local_deliver+0x4a/0x90
      <4>[196727.312183]  [<ffffffff815fd799>] ip_rcv_finish+0x119/0x360
      <4>[196727.312212]  [<ffffffff815fe00b>] ip_rcv+0x22b/0x340
      <4>[196727.312242]  [<ffffffffa0339680>] ? macvlan_broadcast+0x160/0x160 [macvlan]
      <4>[196727.312275]  [<ffffffff815b0c62>] __netif_receive_skb_core+0x512/0x640
      <4>[196727.312308]  [<ffffffff811427fb>] ? kmem_cache_alloc+0x13b/0x150
      <4>[196727.312338]  [<ffffffff815b0db1>] __netif_receive_skb+0x21/0x70
      <4>[196727.312368]  [<ffffffff815b0fa1>] netif_receive_skb+0x31/0xa0
      <4>[196727.312397]  [<ffffffff815b1ae8>] napi_gro_receive+0xe8/0x140
      <4>[196727.312433]  [<ffffffffa00274f1>] ixgbe_poll+0x551/0x11f0 [ixgbe]
      <4>[196727.312463]  [<ffffffff815fe00b>] ? ip_rcv+0x22b/0x340
      <4>[196727.312491]  [<ffffffff815b1691>] net_rx_action+0x111/0x210
      <4>[196727.312521]  [<ffffffff815b0db1>] ? __netif_receive_skb+0x21/0x70
      <4>[196727.312552]  [<ffffffff810519d0>] __do_softirq+0xd0/0x270
      <4>[196727.312583]  [<ffffffff816cef3c>] call_softirq+0x1c/0x30
      <4>[196727.312613]  [<ffffffff81004205>] do_softirq+0x55/0x90
      <4>[196727.312640]  [<ffffffff81051c85>] irq_exit+0x55/0x60
      <4>[196727.312668]  [<ffffffff816cf5c3>] do_IRQ+0x63/0xe0
      <4>[196727.312696]  [<ffffffff816c5aaa>] common_interrupt+0x6a/0x6a
      <4>[196727.312722]  <EOI>
      <1>[196727.313071] RIP  [<ffffffff815f8c7f>] ipv4_dst_destroy+0x4f/0x80
      <4>[196727.313100]  RSP <ffff885effd23a70>
      <4>[196727.313377] ---[ end trace 64b3f14fae0f2e29 ]---
      <0>[196727.380908] Kernel panic - not syncing: Fatal exception in interrupt
      Reported-by: Alexey Preobrazhensky <preobr@google.com>
      Reported-by: dormando <dormando@rydia.net>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Fixes: 8141ed9f ("ipv4: Add a socket release callback for datagram sockets")
      Cc: Steffen Klassert <steffen.klassert@secunet.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 11 Jun, 2014 (5 commits)
  14. 06 Jun, 2014 (1 commit)
  15. 05 Jun, 2014 (8 commits)
  16. 03 Jun, 2014 (3 commits)
    • tcp: fix cwnd undo on DSACK in F-RTO · 0cfa5c07
      Committed by Yuchung Cheng
      This bug was discovered through a recent F-RTO issue on the tcpm list:
      https://www.ietf.org/mail-archive/web/tcpm/current/msg08794.html
      
      The bug is that F-RTO currently does not use DSACK to undo cwnd in
      certain cases: upon receiving an ACK after the RTO retransmission in
      F-RTO, where the ACK carries DSACK indicating the retransmission was
      spurious, the sender only calls tcp_try_undo_loss() if some
      never-retransmitted data is sacked (FLAG_ORIG_DATA_SACKED).
      
      The correct behavior is to unconditionally call tcp_try_undo_loss so
      the DSACK information is used properly to undo the cwnd reduction.
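      A rough sketch of the intent in tcp_process_loss() (net/ipv4/tcp_input.c);
      the exact conditions in the patch differ, this just contrasts the two
      behaviors:
      
         /* Before: the undo was attempted only when never-retransmitted data
          * was sacked. After: always try the undo, so a DSACK for the
          * spurious retransmission can also revert the cwnd reduction. */
         if (tcp_try_undo_loss(sk, flag & FLAG_ORIG_DATA_SACKED))
                 return;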
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • fib_trie: use seq_file_net rather than seq->private · 30f38d2f
      Committed by David Ahern
      Make fib_triestat_seq_show consistent with other /proc/net files and
      use seq_file_net.
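      A rough sketch of the change in fib_triestat_seq_show() (net/ipv4/fib_trie.c):
      
         static int fib_triestat_seq_show(struct seq_file *seq, void *v)
         {
                 struct net *net = seq_file_net(seq);    /* was: seq->private */
                 /* (rest of the function unchanged) */
         }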
      Signed-off-by: David Ahern <dsahern@gmail.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      Cc: James Morris <jmorris@namei.org>
      Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
      Cc: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • inetpeer: get rid of ip_id_count · 73f156a6
      Committed by Eric Dumazet
      Ideally, we would generate the IP ID using a per-destination-IP generator.
      
      Linux kernels used the inet_peer cache for this purpose, but this had a huge
      cost on servers that disable MTU discovery:
      
      1) each inet_peer struct consumes 192 bytes
      
      2) inetpeer cache uses a binary tree of inet_peer structs,
         with a nominal size of ~66000 elements under load.
      
      3) lookups in this tree hit a lot of cache lines, as the tree depth
         is about 20.
      
      4) If the server deals with many tcp flows, we have a high probability of
         not finding the inet_peer, allocating a fresh one, and inserting it in
         the tree with the same initial ip_id_count (cf. secure_ip_id()).
      
      5) We garbage collect inet_peer aggressively.
      
      IP ID generation does not have to be 'perfect':
      
      the goal is to avoid duplicates over a short period of time,
      so that reassembly units have a chance to complete reassembly of
      fragments belonging to one message before receiving other fragments
      with a recycled ID.
      
      We simply use an array of generators, and a Jenkins hash of the dst IP
      as the key.
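      A rough sketch of the new generator, close in spirit to __ip_select_ident()
      after this patch (IP_IDENTS_SZ sizing and the exact helpers are simplified):
      
         static atomic_t ip_idents[IP_IDENTS_SZ];
         
         void __ip_select_ident(struct iphdr *iph, int segs)
         {
                 static u32 ip_idents_hashrnd __read_mostly;
                 u32 hash, id;
         
                 net_get_random_once(&ip_idents_hashrnd, sizeof(ip_idents_hashrnd));
         
                 /* One generator per hash bucket, keyed by destination IP. */
                 hash = jhash_1word((__force u32)iph->daddr, ip_idents_hashrnd);
                 id = atomic_add_return(segs, &ip_idents[hash % IP_IDENTS_SZ]) - segs;
                 iph->id = htons(id);
         }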
      
      ipv6_select_ident() is put back into net/ipv6/ip6_output.c, where it
      belongs (it is only used from this file).
      
      secure_ip_id() and secure_ipv6_id() are no longer needed.
      
      Rename ip_select_ident_more() to ip_select_ident_segs() to avoid
      unnecessary decrement/increment of the number of segments.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 31 May, 2014 (1 commit)
  18. 24 May, 2014 (3 commits)
  19. 23 May, 2014 (2 commits)
    • ipv4: initialise the itag variable in __mkroute_input · fbdc0ad0
      Committed by Li RongQing
      The value of itag is a random value from the stack, and may not be
      initialized by fib_validate_source() (which goes through fib_combine_itag())
      if CONFIG_IP_ROUTE_CLASSID is not set.
      
      This makes the cached dst uncertain.
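      The fix itself is a one-line initialization in __mkroute_input()
      (net/ipv4/route.c), sketched:
      
         u32 itag = 0;   /* was: u32 itag; i.e. an uninitialized stack value */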
      Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: make cwnd-limited checks measurement-based, and gentler · ca8a2263
      Committed by Neal Cardwell
      Experience with the recent e114a710 ("tcp: fix cwnd limited
      checking to improve congestion control") has shown that there are
      common cases where that commit can cause cwnd to be much larger than
      necessary. This leads to TSO autosizing cooking skbs that are too
      large, among other things.
      
      The main problems seemed to be:
      
      (1) That commit attempted to predict the future behavior of the
      connection by looking at the write queue (if TSO or TSQ limit
      sending). That prediction sometimes overestimated future outstanding
      packets.
      
      (2) That commit always allowed cwnd to grow to twice the number of
      outstanding packets (even in congestion avoidance, where this is not
      needed).
      
      This commit improves both of these, by:
      
      (1) Switching to a measurement-based approach where we explicitly
      track the largest number of packets in flight during the past window
      ("max_packets_out"), and remember whether we were cwnd-limited at the
      moment we finished sending that flight.
      
      (2) Only allowing cwnd to grow to twice the number of outstanding
      packets ("max_packets_out") in slow start. In congestion avoidance
      mode we now only allow cwnd to grow if it was fully utilized.
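      A rough sketch of the resulting check, essentially tcp_is_cwnd_limited()
      after this patch (field names as described above; details may differ):
      
         static inline bool tcp_is_cwnd_limited(const struct sock *sk)
         {
                 const struct tcp_sock *tp = tcp_sk(sk);
         
                 /* In slow start, let cwnd grow to twice the measured flight. */
                 if (tp->snd_cwnd <= tp->snd_ssthresh)
                         return tp->snd_cwnd < 2 * tp->max_packets_out;
         
                 /* In congestion avoidance, grow only if cwnd was fully used. */
                 return tp->is_cwnd_limited;
         }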
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  20. 22 May, 2014 (1 commit)