1. 05 Nov 2014, 14 commits
  2. 31 Oct 2014, 7 commits
  3. 30 Oct 2014, 3 commits
    • inet: frags: remove the WARN_ON from inet_evict_bucket · d70127e8
      Nikolay Aleksandrov authored
      The WARN_ON in inet_evict_bucket can be triggered by a valid case:
      inet_frag_kill and inet_evict_bucket can be running in parallel on
      the same queue, which means that at least one more ref has been added
      by a previous inet_frag_find call; inet_frag_kill can then delete the
      timer before inet_evict_bucket does, causing the WARN_ON() there to
      trigger since we'll have refcnt != 1. This case is valid because the
      queue is being "killed" for some reason (removed from the chain list
      and its timer deleted), so it will get destroyed in the end by the
      inet_frag_put() call that reaches 0, i.e. the refcnt is still valid.
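
      A minimal C sketch of the race (a simplified illustration of the text
      above, not the verbatim kernel source; the function name is
      hypothetical):

          static void evict_bucket_sketch(struct inet_frag_queue *fq)
          {
                  /* inet_frag_kill() may already have run on another cpu:
                   * it unlinks the queue and deletes its timer, so by the
                   * time we get here refcnt can legitimately be != 1.
                   * The old WARN_ON(atomic_read(&fq->refcnt) != 1) thus
                   * fired on a valid path; the queue is still freed by
                   * whichever inet_frag_put() drops refcnt to 0.
                   */
                  if (!del_timer(&fq->timer))
                          return; /* kill path won the race; skip quietly */

                  /* ... otherwise move fq to the local expire list ... */
          }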
      
      CC: Florian Westphal <fw@strlen.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McLean <chutzpah@gentoo.org>
      
      Fixes: b13d3cbf ("inet: frag: move eviction of queues to work queue")
      Reported-by: Patrick McLean <chutzpah@gentoo.org>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d70127e8
    • inet: frags: fix a race between inet_evict_bucket and inet_frag_kill · 65ba1f1e
      Nikolay Aleksandrov authored
      When the evictor is running, it adds some chosen frags to a local
      list to be evicted once the chain lock has been released, but at the
      same time the *frag_queue timer functions can be running for some of
      the same queues and may call inet_frag_kill, which will wait on the
      chain lock and will then delete the queue from the wrong list, since
      it was added to the eviction one. The fix is simple: check, under the
      chain lock, whether the queue has the evict flag set before deleting
      it. This is safe because the evict flag is set only under that lock,
      and having the flag set also means that the queue has been detached
      from the chain list, so there is no need to delete it again.
      An important note is that we're safe w.r.t. refcnt because
      inet_frag_kill and inet_evict_bucket will sync on the del_timer
      operation, where only one of the two can succeed (or, if the timer is
      executing, neither of them); the cases are:
      1. inet_frag_kill succeeds in del_timer
       - then the timer ref is removed, but inet_evict_bucket will not add
         this queue to its expire list and will instead restart eviction in
         that chain
      2. inet_evict_bucket succeeds in del_timer
       - then the timer ref is kept until the evictor "expires" the queue,
         but inet_frag_kill will remove the initial ref and will set
         INET_FRAG_COMPLETE, which makes the frag_expire fn just drop its
         ref.
      In the end all of the queue users will do an inet_frag_put, and the
      one that reaches 0 will free the queue, so the refcount stays
      balanced.
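
      A minimal C sketch of the fix described above (simplified; the lock
      and flag names follow this era of the code and are not guaranteed
      verbatim):

          /* in inet_frag_kill(): only unlink from the hash chain if the
           * evictor has not already detached this queue. The evict flag
           * is only ever set under chain_lock, so testing it under the
           * same lock is race-free.
           */
          spin_lock(&hb->chain_lock);
          if (!(fq->flags & INET_FRAG_EVICTED))
                  hlist_del(&fq->list);
          spin_unlock(&hb->chain_lock);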
      
      CC: Florian Westphal <fw@strlen.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McLean <chutzpah@gentoo.org>
      
      Fixes: b13d3cbf ("inet: frag: move eviction of queues to work queue")
      Suggested-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Patrick McLean <chutzpah@gentoo.org>
      Tested-by: Patrick McLean <chutzpah@gentoo.org>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Reviewed-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      65ba1f1e
    • tcp: allow for bigger reordering level · dca145ff
      Eric Dumazet authored
      While testing an upcoming patch from Yaogong (converting the
      out-of-order queue into an RB tree), I hit the max reordering level
      of the Linux TCP stack.

      The reordering level was limited to 127 for no good reason, and some
      network setups [1] can easily reach this limit and get limited
      throughput.

      Allow a new max limit of 300, and add a sysctl so admins can allow
      even bigger (or lower) values if needed.
      
      [1] Aggregation of links, per packet load balancing, fabrics not doing
       deep packet inspections, alternative TCP congestion modules...
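
      Assuming the sysctl lands under the usual ipv4 tree with the name
      implied by the patch (an assumption here), raising the cap would look
      like the other knobs shown in this log:

          echo 300 > /proc/sys/net/ipv4/tcp_max_reordering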
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Yaogong Wang <wygivan@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      dca145ff
  4. 28 Oct 2014, 1 commit
  5. 26 Oct 2014, 1 commit
    • tcp: md5: do not use alloc_percpu() · 349ce993
      Eric Dumazet authored
      The percpu tcp_md5sig_pool contains memory blobs that ultimately
      go through sg_set_buf().

      -> sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));

      This requires that the whole area be in a physically contiguous
      portion of memory, and that @buf not be backed by vmalloc().
      
      Given that alloc_percpu() can use vmalloc() areas, this does not
      fit the requirements.
      
      Replace alloc_percpu() with a static DEFINE_PER_CPU(): tcp_md5sig_pool
      is small anyway, so there is no gain in allocating it dynamically.
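
      A minimal C sketch of the shape of the change (simplified; the
      accessor function is hypothetical):

          /* Static per-cpu storage lives in the kernel's initial percpu
           * area, which is in the linear mapping, so virt_to_page() on it
           * is valid; alloc_percpu() may hand out vmalloc()-backed memory,
           * which is not.
           */
          static DEFINE_PER_CPU(struct tcp_md5sig_pool, tcp_md5sig_pool);

          static struct tcp_md5sig_pool *md5sig_pool_sketch(void)
          {
                  return this_cpu_ptr(&tcp_md5sig_pool); /* no dynamic alloc */
          }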
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Fixes: 765cf997 ("tcp: md5: remove one indirection level in tcp_md5sig_pool")
      Reported-by: Crestez Dan Leonard <cdleonard@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      349ce993
  6. 23 Oct 2014, 1 commit
    • net: fix saving TX flow hash in sock for outgoing connections · 9e7ceb06
      Sathya Perla authored
      The commit "net: Save TX flow hash in sock and set in skbuf on xmit"
      introduced the inet_set_txhash() and ip6_set_txhash() routines to calculate
      and record flow hash(sk_txhash) in the socket structure. sk_txhash is used
      to set skb->hash which is used to spread flows across multiple TXQs.
      
      But, the above routines are invoked before the source port of the connection
      is created. Because of this all outgoing connections that just differ in the
      source port get hashed into the same TXQ.
      
      This patch fixes this problem for IPv4/6 by invoking the the above routines
      after the source port is available for the socket.
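
      A minimal C sketch of the corrected ordering on the IPv4 side
      (simplified from the description above; not the verbatim diff):

          /* pick the source port first ... */
          err = inet_hash_connect(&tcp_death_row, sk);
          if (err)
                  goto failure;

          /* ... then hash the now-complete 4-tuple into sk_txhash */
          inet_set_txhash(sk);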
      
      Fixes: b73c3d0e ("net: Save TX flow hash in sock and set in skbuf on xmit")
      Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9e7ceb06
  7. 21 Oct 2014, 2 commits
    • net: make skb_gso_segment error handling more robust · 330966e5
      Florian Westphal authored
      skb_gso_segment has three possible return values:
      1. a pointer to the first segmented skb
      2. an errno value (IS_ERR())
      3. NULL.  This can happen when GSO is used for header verification.
      
      However, several callers currently test IS_ERR instead of IS_ERR_OR_NULL
      and would oops when NULL is returned.
      
      Note that these call sites should never actually see such a NULL return
      value; all callers mask out the GSO bits in the feature argument.
      
      However, there have been issues with some protocol handlers
      erroneously not respecting the specified feature mask in some cases.
      
      It is preferable to get 'have to turn off hw offloading, else slow' reports
      rather than 'kernel crashes'.
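
      A minimal C sketch of the hardened caller pattern (simplified; the
      cleanup shown is illustrative):

          struct sk_buff *segs;

          segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
          if (IS_ERR_OR_NULL(segs)) { /* was IS_ERR(): oopsed on NULL */
                  kfree_skb(skb);
                  return;
          }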
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      330966e5
    • net: gso: use feature flag argument in all protocol gso handlers · 1e16aa3d
      Florian Westphal authored
      skb_gso_segment() has a 'features' argument representing offload features
      available to the output path.
      
      A few handlers, e.g. GRE, instead re-fetch the features of skb->dev
      and use those instead of the provided ones when handling
      encapsulation/tunnels.

      Depending on dev->hw_enc_features of the output device,
      skb_gso_segment() can then return NULL even when the caller has
      disabled all GSO feature bits, as the inner-header segmentation code
      assumes the device will take care of segmentation.

      This e.g. affects the tbf scheduler, which will silently drop
      GRE-encapsulated GSO skbs that did not fit the remaining token quota,
      as segmentation does not happen when the device advertises the
      corresponding hw offload capabilities.
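
      A minimal C sketch of the handler-side change (GRE-like; simplified,
      the function name is hypothetical):

          static struct sk_buff *encap_gso_segment_sketch(struct sk_buff *skb,
                                                          netdev_features_t features)
          {
                  netdev_features_t enc_features;

                  /* before: the handler re-fetched skb->dev features,
                   * ignoring the caller's mask; after: honour the
                   * 'features' argument we were given.
                   */
                  enc_features = skb->dev->hw_enc_features & features;

                  return skb_mac_gso_segment(skb, enc_features);
          }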
      
      Cc: Pravin B Shelar <pshelar@nicira.com>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1e16aa3d
  8. 19 Oct 2014, 1 commit
  9. 18 Oct 2014, 6 commits
  10. 15 Oct 2014, 4 commits
    • tcp: TCP Small Queues and strange attractors · 9b462d02
      Eric Dumazet authored
      TCP Small Queues tries to keep the number of packets in the qdisc
      as small as possible, and depends on a tasklet to feed the following
      packets at TX completion time.
      The choice of a tasklet was driven by latency requirements.

      The TCP stack also tries to avoid reorders, by locking flows with
      outstanding packets in the qdisc to a given TX queue.

      What can happen is that many flows get attracted to a low-performing
      TX queue, and the cpu servicing TX completions has to feed packets
      for all of them, making this cpu 100% busy in softirq mode.

      This became particularly visible with the latest skb->xmit_more
      support.

      The strategy adopted in this patch is to detect when tcp_wfree() is
      called from ksoftirqd and let the outstanding queue for this flow
      drain before feeding additional packets, so that skb->ooo_okay can
      be set to allow select_queue() to select the optimal queue.

      Incoming ACKs are normally handled by different cpus, so this patch
      gives those cpus more of a chance to take over the burden of feeding
      the qdisc with future packets.
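
      A minimal C sketch of the detection (simplified; the exact condition
      in the patch may differ):

          /* in tcp_wfree(): if TX completions are being serviced by
           * ksoftirqd, this cpu is under stress. Stop feeding this flow
           * and let its queue drain, so skb->ooo_okay can eventually be
           * set and an ACK processed on another cpu can migrate the flow.
           */
          if (this_cpu_ksoftirqd() == current)
                  goto out; /* defer to the tasklet / future ACKs */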
      
      Tested:
      
      lpaa23:~# ./super_netperf 1400 --google-pacing-rate 3028000 -H lpaa24 -l 3600 &
      
      lpaa23:~# sar -n DEV 1 10 | grep eth1
      06:16:18 AM      eth1 595448.00 1190564.00  38381.09 1760253.12      0.00      0.00      1.00
      06:16:19 AM      eth1 594858.00 1189686.00  38340.76 1758952.72      0.00      0.00      0.00
      06:16:20 AM      eth1 597017.00 1194019.00  38480.79 1765370.29      0.00      0.00      1.00
      06:16:21 AM      eth1 595450.00 1190936.00  38380.19 1760805.05      0.00      0.00      0.00
      06:16:22 AM      eth1 596385.00 1193096.00  38442.56 1763976.29      0.00      0.00      1.00
      06:16:23 AM      eth1 598155.00 1195978.00  38552.97 1768264.60      0.00      0.00      0.00
      06:16:24 AM      eth1 594405.00 1188643.00  38312.57 1757414.89      0.00      0.00      1.00
      06:16:25 AM      eth1 593366.00 1187154.00  38252.16 1755195.83      0.00      0.00      0.00
      06:16:26 AM      eth1 593188.00 1186118.00  38232.88 1753682.57      0.00      0.00      1.00
      06:16:27 AM      eth1 596301.00 1192241.00  38440.94 1762733.09      0.00      0.00      0.00
      Average:         eth1 595457.30 1190843.50  38381.69 1760664.84      0.00      0.00      0.50
      lpaa23:~# ./tc -s -d qd sh dev eth1 | grep backlog
       backlog 7606336b 2513p requeues 167982
       backlog 224072b 74p requeues 566
       backlog 581376b 192p requeues 5598
       backlog 181680b 60p requeues 1070
       backlog 5305056b 1753p requeues 110166    // Here, this TX queue is attracting flows
       backlog 157456b 52p requeues 1758
       backlog 672216b 222p requeues 3025
       backlog 60560b 20p requeues 24541
       backlog 448144b 148p requeues 21258
      
      lpaa23:~# echo 1 >/proc/sys/net/ipv4/tcp_tsq_enable_tcp_wfree_ksoftirqd_detect
      
      Immediate jump to full bandwidth, and traffic is properly sharded
      across all tx queues.
      
      lpaa23:~# sar -n DEV 1 10 | grep eth1
      06:16:46 AM      eth1 1397632.00 2795397.00  90081.87 4133031.26      0.00      0.00      1.00
      06:16:47 AM      eth1 1396874.00 2793614.00  90032.99 4130385.46      0.00      0.00      0.00
      06:16:48 AM      eth1 1395842.00 2791600.00  89966.46 4127409.67      0.00      0.00      1.00
      06:16:49 AM      eth1 1395528.00 2791017.00  89946.17 4126551.24      0.00      0.00      0.00
      06:16:50 AM      eth1 1397891.00 2795716.00  90098.74 4133497.39      0.00      0.00      1.00
      06:16:51 AM      eth1 1394951.00 2789984.00  89908.96 4125022.51      0.00      0.00      0.00
      06:16:52 AM      eth1 1394608.00 2789190.00  89886.90 4123851.36      0.00      0.00      1.00
      06:16:53 AM      eth1 1395314.00 2790653.00  89934.33 4125983.09      0.00      0.00      0.00
      06:16:54 AM      eth1 1396115.00 2792276.00  89984.25 4128411.21      0.00      0.00      1.00
      06:16:55 AM      eth1 1396829.00 2793523.00  90030.19 4130250.28      0.00      0.00      0.00
      Average:         eth1 1396158.40 2792297.00  89987.09 4128439.35      0.00      0.00      0.50
      
      lpaa23:~# tc -s -d qd sh dev eth1 | grep backlog
       backlog 7900052b 2609p requeues 173287
       backlog 878120b 290p requeues 589
       backlog 1068884b 354p requeues 5621
       backlog 996212b 329p requeues 1088
       backlog 984100b 325p requeues 115316
       backlog 956848b 316p requeues 1781
       backlog 1080996b 357p requeues 3047
       backlog 975016b 322p requeues 24571
       backlog 990156b 327p requeues 21274
      
      (All 8 TX queues get a fair share of the traffic)
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9b462d02
    • ipv4: fix nexthop attlen check in fib_nh_match · f76936d0
      Jiri Pirko authored
      fib_nh_match does not match nexthops correctly. Example:
      
      ip route add 172.16.10/24 nexthop via 192.168.122.12 dev eth0 \
                                nexthop via 192.168.122.13 dev eth0
      ip route del 172.16.10/24 nexthop via 192.168.122.14 dev eth0 \
                                nexthop via 192.168.122.15 dev eth0
      
      The del command is successful and the route is removed. After this
      patch is applied, the route is correctly matched and the result is:
      RTNETLINK answers: No such process
      
      Please consider this for stable trees as well.
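
      A minimal C sketch of the fix, consistent with the title and the
      behaviour described above (simplified, not the verbatim diff):

          attrlen = rtnh_attrlen(rtnh);
          if (attrlen > 0) { /* was: if (attrlen < 0), which never holds,
                              * so RTA_GATEWAY was never compared and any
                              * nexthop "matched"
                              */
                  struct nlattr *nla, *attrs = rtnh_attrs(rtnh);

                  nla = nla_find(attrs, attrlen, RTA_GATEWAY);
                  if (nla && nla_get_be32(nla) != nh->nh_gw)
                          return 1; /* gateways differ: no match */
          }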
      
      Fixes: 4e902c57 ("[IPv4]: FIB configuration using struct fib_config")
      Signed-off-by: Jiri Pirko <jiri@resnulli.us>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f76936d0
    • tcp: fix tcp_ack() performance problem · ad971f61
      Eric Dumazet authored
      We worked hard to improve tcp_ack() performance by not accessing
      skb_shinfo() in the fast path (commit cd7d8498, "tcp: change
      tcp_skb_pcount() location").

      We still have one spurious access because of ACK timestamping,
      added in commit e1c8a607 ("net-timestamp: ACK timestamp for
      bytestreams").

      By checking whether sk_tsflags has SOF_TIMESTAMPING_TX_ACK set,
      we can avoid two cache line misses in the common case.

      While we are at it, add two prefetchw() calls:

      One in tcp_ack(), to bring in the skb at the head of the write queue.

      One in the tcp_clean_rtx_queue() loop, to bring in the following skb,
      as we will delete skb from the write queue and dirty skb->next->prev.

      Add a couple of [un]likely() clauses.

      After this patch, tcp_ack() is no longer the most cycle-consuming
      function in the TCP stack.
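
      A minimal C sketch of both ideas (simplified; the helper name is
      hypothetical):

          static void tcp_ack_tstamp_sketch(struct sock *sk, struct sk_buff *skb)
          {
                  /* common case: no ACK timestamping requested, so never
                   * touch skb_shinfo() and save two cache line misses
                   */
                  if (likely(!(sk->sk_tsflags & SOF_TIMESTAMPING_TX_ACK)))
                          return;

                  /* ... only now inspect skb_shinfo(skb)->tx_flags ... */
          }

          /* in the tcp_clean_rtx_queue() loop: we are about to unlink skb
           * and dirty skb->next->prev, so prefetch the next skb for write
           */
          prefetchw(skb->next);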
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Van Jacobson <vanj@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ad971f61
    • tcp: fix ooo_okay setting vs Small Queues · b2532eb9
      Eric Dumazet authored
      TCP Small Queues (tcp_tsq_handler()) can hold one reference on
      sk->sk_wmem_alloc, preventing skb->ooo_okay from being set.

      We should relax the test done to set skb->ooo_okay, to take care of
      this extra reference.

      The minimal truesize of an skb containing one byte of payload is
      SKB_TRUESIZE(1).

      Without this fix, we have a higher chance of locking flows onto the
      wrong transmit queue.
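
      A minimal C sketch of the relaxed test (simplified from the
      description above):

          /* If no packet of ours sits in a qdisc/device queue, allow XPS
           * to pick another queue. tcp_tsq_handler() may hold one extra
           * reference on sk_wmem_alloc, so compare against SKB_TRUESIZE(1)
           * instead of expecting zero.
           */
          skb->ooo_okay = sk_wmem_alloc_get(sk) < SKB_TRUESIZE(1);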
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b2532eb9