1. 06 Nov 2014, 7 commits
  2. 05 Nov 2014, 14 commits
  3. 31 Oct 2014, 7 commits
  4. 30 Oct 2014, 3 commits
    • inet: frags: remove the WARN_ON from inet_evict_bucket · d70127e8
      Nikolay Aleksandrov committed
      The WARN_ON in inet_evict_bucket can be triggered by a valid case:
      inet_frag_kill and inet_evict_bucket can run in parallel on the same
      queue, which means at least one extra ref has been added by a previous
      inet_frag_find call. inet_frag_kill can then delete the timer before
      inet_evict_bucket does, and the WARN_ON() there triggers because
      refcnt != 1. This case is valid: the queue is being "killed" for some
      reason (removed from the chain list and its timer deleted), so it will
      still be destroyed in the end by whichever inet_frag_put() call brings
      the refcnt to 0, i.e. the refcount accounting is intact.
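      
      For context, here is a minimal sketch of the eviction slow path where
      the WARN_ON sat. This is reconstructed from the description above, not
      the verbatim kernel code; in particular inet_fragq_should_evict() and
      the exact control flow are assumptions:
      
          spin_lock(&hb->chain_lock);
          hlist_for_each_entry_safe(fq, n, &hb->chain, list) {
              if (!inet_fragq_should_evict(fq))  /* assumed helper name */
                  continue;
      
              if (!del_timer(&fq->timer)) {
                  /* Lost the del_timer race: the timer is firing, or a
                   * parallel inet_frag_kill already removed it. Hold a
                   * ref, wait the timer out, then drop our ref.
                   */
                  atomic_inc(&fq->refcnt);
                  spin_unlock(&hb->chain_lock);
                  del_timer_sync(&fq->timer);
                  /* The removed WARN_ON(atomic_read(&fq->refcnt) != 1)
                   * lived here; with a parallel inet_frag_kill holding
                   * its own references, refcnt != 1 is perfectly valid.
                   */
                  inet_frag_put(fq, f);
                  goto evict_again;  /* rescan this bucket */
              }
              /* ... otherwise move the queue to the local expire list ... */
          }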
      
      CC: Florian Westphal <fw@strlen.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McLean <chutzpah@gentoo.org>
      
      Fixes: b13d3cbf ("inet: frag: move eviction of queues to work queue")
      Reported-by: Patrick McLean <chutzpah@gentoo.org>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • inet: frags: fix a race between inet_evict_bucket and inet_frag_kill · 65ba1f1e
      Nikolay Aleksandrov committed
      When the evictor is running, it adds chosen frags to a local list to
      be evicted once the chain lock has been released. At the same time the
      *frag_queue functions can be running for some of the same queues and
      may call inet_frag_kill, which will wait on the chain lock and then
      delete the queue from the wrong list, since the queue was added to the
      eviction one. The fix is simple: check whether the queue has the evict
      flag set, under the chain lock, before deleting it. This is safe
      because the evict flag is set only under that lock, and a set flag
      also means the queue has already been detached from the chain list, so
      there is no need to delete it again.
      An important note: we are safe w.r.t. the refcnt because inet_frag_kill
      and inet_evict_bucket synchronize on the del_timer operation, where
      only one of the two can succeed (or, if the timer is executing,
      neither). The cases are:
      1. inet_frag_kill succeeds in del_timer
       - the timer ref is removed, but inet_evict_bucket will not add this
         queue to its expire list and will instead restart eviction in that
         chain
      2. inet_evict_bucket succeeds in del_timer
       - the timer ref is kept until the evictor "expires" the queue, but
         inet_frag_kill removes the initial ref and sets INET_FRAG_COMPLETE,
         which makes the frag_expire fn simply drop its own ref.
      In the end every user of the queue does an inet_frag_put, and the one
      that reaches 0 frees it; the refcount balance works out.
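      
      A minimal sketch of the fixed unlink path, assuming 3.17-era names
      (INET_FRAG_EVICTED and get_frag_bucket_locked() are taken from that
      code base and should be treated as assumptions, not the exact patch):
      
          static void fq_unlink(struct inet_frag_queue *fq, struct inet_frags *f)
          {
              struct inet_frag_bucket *hb;
      
              /* assumed helper: returns the hash bucket with chain_lock held */
              hb = get_frag_bucket_locked(fq, f);
              /* The evict flag is only ever set under chain_lock, so this
               * test is race-free; a set flag means the evictor already
               * detached the queue from the chain list.
               */
              if (!(fq->flags & INET_FRAG_EVICTED))
                  hlist_del(&fq->list);
              spin_unlock(&hb->chain_lock);
          }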
      
      CC: Florian Westphal <fw@strlen.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McLean <chutzpah@gentoo.org>
      
      Fixes: b13d3cbf ("inet: frag: move eviction of queues to work queue")
      Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Reported-by: Patrick McLean <chutzpah@gentoo.org>
      Tested-by: Patrick McLean <chutzpah@gentoo.org>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Reviewed-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: allow for bigger reordering level · dca145ff
      Eric Dumazet committed
      While testing an upcoming patch from Yaogong (converting the
      out-of-order queue into an RB tree), I hit the max reordering level
      of the Linux TCP stack.
      
      The reordering level was limited to 127 for no good reason, and some
      network setups [1] can easily reach this limit and see their
      throughput capped by it.
      
      Allow a new max of 300 by default, and add a sysctl so admins can
      allow even bigger (or lower) values if needed (see the reader sketch
      below).
      
      [1] Aggregation of links, per-packet load balancing, fabrics not
       doing deep packet inspection, alternative TCP congestion modules...
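      
      For reference, the new cap is exposed as a sysctl; assuming it landed
      as net.ipv4.tcp_max_reordering (the name usually associated with this
      patch), a trivial userspace reader looks like:
      
          /* Print the current TCP max reordering cap (assumed sysctl path). */
          #include <stdio.h>
      
          int main(void)
          {
              FILE *f = fopen("/proc/sys/net/ipv4/tcp_max_reordering", "r");
              int max_reordering;
      
              if (!f || fscanf(f, "%d", &max_reordering) != 1) {
                  perror("tcp_max_reordering");
                  return 1;
              }
              fclose(f);
              /* 300 by default after this patch */
              printf("tcp_max_reordering = %d\n", max_reordering);
              return 0;
          }
      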
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Yaogong Wang <wygivan@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 28 Oct 2014, 1 commit
  6. 26 Oct 2014, 1 commit
    • tcp: md5: do not use alloc_percpu() · 349ce993
      Eric Dumazet committed
      percpu tcp_md5sig_pool contains memory blobs that ultimately
      go through sg_set_buf().
      
      -> sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));
      
      This requires the whole area to lie in a physically contiguous
      portion of memory, with @buf not backed by vmalloc().
      
      Since alloc_percpu() can use vmalloc() areas, it does not meet these
      requirements.
      
      Replace alloc_percpu() with a static DEFINE_PER_CPU(), as
      tcp_md5sig_pool is small anyway; there is no gain in allocating it
      dynamically.
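      
      A sketch of the before/after pattern described above (simplified; the
      real tcp_md5sig_pool also carries the hash transform state):
      
          /* Before (problematic): alloc_percpu() may return vmalloc-backed
           * memory, which sg_set_buf() -> virt_to_page() cannot handle.
           */
          static struct tcp_md5sig_pool __percpu *md5sig_pool;
      
          static int md5sig_pool_alloc_old(void)
          {
              md5sig_pool = alloc_percpu(struct tcp_md5sig_pool);
              return md5sig_pool ? 0 : -ENOMEM;
          }
      
          /* After: static per-cpu storage lives in the kernel linear
           * mapping, so virt_to_page() on it is always valid.
           */
          static DEFINE_PER_CPU(struct tcp_md5sig_pool, tcp_md5sig_pool);
      
          static struct tcp_md5sig_pool *md5sig_pool_get(void)
          {
              return this_cpu_ptr(&tcp_md5sig_pool); /* caller disables preemption */
          }
      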
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Fixes: 765cf997 ("tcp: md5: remove one indirection level in tcp_md5sig_pool")
      Reported-by: Crestez Dan Leonard <cdleonard@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 23 Oct 2014, 1 commit
    • net: fix saving TX flow hash in sock for outgoing connections · 9e7ceb06
      Sathya Perla committed
      The commit "net: Save TX flow hash in sock and set in skbuf on xmit"
      introduced the inet_set_txhash() and ip6_set_txhash() routines to
      calculate and record the flow hash (sk_txhash) in the socket
      structure. sk_txhash is used to set skb->hash, which in turn spreads
      flows across multiple TXQs.
      
      But the above routines are invoked before the source port of the
      connection has been chosen. Because of this, all outgoing connections
      that differ only in the source port get hashed into the same TXQ.
      
      This patch fixes the problem for IPv4/6 by invoking the above routines
      after the source port is available for the socket.
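      
      A sketch of the corrected ordering in the IPv4 connect path
      (simplified from tcp_v4_connect(); error handling trimmed):
      
          /* inside tcp_v4_connect(), after the fix: */
          err = inet_hash_connect(&tcp_death_row, sk); /* picks the source port */
          if (err)
              goto failure;
      
          inet_set_txhash(sk); /* hash now covers the full 4-tuple, sport included */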
      
      Fixes: b73c3d0e ("net: Save TX flow hash in sock and set in skbuf on xmit")
      Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 21 Oct 2014, 2 commits
    • net: make skb_gso_segment error handling more robust · 330966e5
      Florian Westphal committed
      skb_gso_segment has three possible return values:
      1. a pointer to the first segmented skb
      2. an errno value (IS_ERR())
      3. NULL.  This can happen when GSO is used for header verification.
      
      However, several callers currently test IS_ERR instead of IS_ERR_OR_NULL
      and would oops when NULL is returned.
      
      Note that these call sites should never actually see such a NULL return
      value; all callers mask out the GSO bits in the feature argument.
      
      However, there have been issues with some protocol handlers
      erroneously not respecting the specified feature mask in some cases.
      
      It is preferable to get 'have to turn off hw offloading, else slow' reports
      rather than 'kernel crashes'.
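      
      An illustrative caller pattern (not a specific call site from this
      patch):
      
          struct sk_buff *segs;
      
          segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
          if (IS_ERR_OR_NULL(segs)) {
              /* IS_ERR() alone would let a NULL "header check only" result
               * through and oops when walking the segment list.
               */
              kfree_skb(skb);
              return segs ? PTR_ERR(segs) : -EINVAL;
          }
      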
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: gso: use feature flag argument in all protocol gso handlers · 1e16aa3d
      Florian Westphal committed
      skb_gso_segment() has a 'features' argument representing the offload
      features available to the output path.
      
      A few handlers, e.g. GRE, instead re-fetch the features of skb->dev
      and use those instead of the provided ones when handling
      encapsulation/tunnels.
      
      Depending on dev->hw_enc_features of the output device,
      skb_gso_segment() can then return NULL even when the caller has
      disabled all GSO feature bits, because segmentation of the inner
      header assumes the device will take care of it.
      
      This e.g. affects the tbf scheduler, which will silently drop
      GRE-encapsulated GSO skbs that did not fit the remaining token quota,
      since no segmentation happens when the device advertises the
      corresponding hw offload capabilities.
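      
      A sketch of the change pattern in an encapsulation handler such as
      gre_gso_segment() (simplified):
      
          /* Before (illustrative): re-derives features from the device,
           * ignoring the mask the caller passed down.
           */
          enc_features = skb->dev->hw_enc_features & netif_skb_features(skb);
      
          /* After: honor the features argument given to the gso handler. */
          enc_features = skb->dev->hw_enc_features & features;
      
          segs = skb_mac_gso_segment(skb, enc_features);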
      
      Cc: Pravin B Shelar <pshelar@nicira.com>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 19 Oct 2014, 1 commit
  10. 18 Oct 2014, 3 commits