1. 12 Mar 2014, 1 commit
  2. 17 Feb 2014, 1 commit
  3. 29 Jan 2014, 1 commit
  4. 27 Jan 2014, 1 commit
  5. 23 Jan 2014, 1 commit
    • fuse: fix pipe_buf_operations · 28a625cb
      Committed by Miklos Szeredi
      Having this struct in module memory could Oops if the module is
      unloaded while the buffer still persists in a pipe.
      
      Since sock_pipe_buf_ops is essentially the same as
      fuse_dev_pipe_buf_steal, merge them into nosteal_pipe_buf_ops (this is
      the same as default_pipe_buf_ops except that stealing the page from
      the buffer is not allowed).
      Reported-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: stable@vger.kernel.org
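      A minimal sketch of the resulting ops table, assuming the
      pipe_buf_operations layout of kernels of that era and the
      generic_pipe_buf_* helpers from fs/pipe.c:

          static int generic_pipe_buf_nosteal(struct pipe_inode_info *pipe,
                                              struct pipe_buffer *buf)
          {
                  return 1;       /* non-zero refuses to give the page away */
          }

          const struct pipe_buf_operations nosteal_pipe_buf_ops = {
                  .can_merge = 0,
                  .map = generic_pipe_buf_map,
                  .unmap = generic_pipe_buf_unmap,
                  .confirm = generic_pipe_buf_confirm,
                  .release = generic_pipe_buf_release,
                  .steal = generic_pipe_buf_nosteal, /* the one difference */
                  .get = generic_pipe_buf_get,
          };

      With the table living in core kernel code rather than in a module's
      text, a pipe buffer can outlive a fuse module unload without leaving
      a dangling ops pointer.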
  6. 15 Jan 2014, 1 commit
    • net: add skb_checksum_setup · ed1f50c3
      Committed by Paul Durrant
      This patch adds a function that sets up the partial checksum offset
      for IP packets (and optionally recalculates the pseudo-header
      checksum) to the core network code.
      The implementation was previously private and duplicated between
      xen-netback and xen-netfront; however, it is not Xen-specific and is
      potentially useful to any network driver.
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Veaceslav Falico <vfalico@redhat.com>
      Cc: Alexander Duyck <alexander.h.duyck@intel.com>
      Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
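      A hedged sketch of how a driver receive path might use the new
      helper; the stats field and error handling are illustrative, not
      taken from any particular driver:

          /* RX completion: the sender only provided a partial checksum. */
          if (skb->ip_summed == CHECKSUM_PARTIAL) {
                  /* Parse the IP and transport headers to fill csum_start
                   * and csum_offset; passing 'true' also recalculates the
                   * pseudo-header checksum. */
                  if (skb_checksum_setup(skb, true)) {
                          dev->stats.rx_errors++; /* illustrative accounting */
                          kfree_skb(skb);
                          return;
                  }
          }
          netif_receive_skb(skb);

      Centralizing the helper lets any driver that receives
      CHECKSUM_PARTIAL packets from a less trusted peer validate and
      complete the offsets in one place.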
  7. 07 Jan 2014, 1 commit
  8. 22 Dec 2013, 1 commit
  9. 18 Dec 2013, 1 commit
  10. 06 Dec 2013, 1 commit
  11. 22 Nov 2013, 1 commit
  12. 11 Nov 2013, 1 commit
    • netfilter: push reasm skb through instead of original frag skbs · 6aafeef0
      Committed by Jiri Pirko
      Pushing the original fragments through causes several problems. For
      example, for matching, frags may not be matched correctly. Take the
      following example:
      
      <example>
      On HOSTA do:
      ip6tables -I INPUT -p icmpv6 -j DROP
      ip6tables -I INPUT -p icmpv6 -m icmp6 --icmpv6-type 128 -j ACCEPT
      
      and on HOSTB you do:
      ping6 HOSTA -s2000    (MTU is 1500)
      
      Incoming echo requests will be filtered out on HOSTA. This issue does
      not occur with packets smaller than the MTU, where fragmentation does
      not happen.
      </example>
      
      As was discussed previously, the only correct solution seems to be to
      use the reassembled skb instead of separate frags. Doing this has
      positive side effects: it reduces sk_buff by one pointer
      (nfct_reasm), and the reasm dances in ipvs and conntrack can be
      removed.
      
      The future plan is to remove net/ipv6/netfilter/nf_conntrack_reasm.c
      entirely and use the code in net/ipv6/reassembly.c instead.
      Signed-off-by: Jiri Pirko <jiri@resnulli.us>
      Acked-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 08 Nov 2013, 2 commits
  14. 05 Nov 2013, 2 commits
  15. 04 Nov 2013, 1 commit
  16. 20 Oct 2013, 1 commit
    • net: generalize skb_segment() · 030737bc
      Committed by Eric Dumazet
      While implementing GSO/TSO support for IPIP, I found that
      skb_segment() was assuming the network header immediately follows the
      mac header.
      
      That is not really true in the case where inet_gso_segment() is
      stacked: by the time tcp_gso_segment() is called, the network header
      points to the inner IP header.
      
      Let's instead assume nothing and pick up the current offsets found in
      the original skb; we have the skb_headers_offset_update() helper for
      that.
      
      Also move the csum_start update inside skb_headers_offset_update().
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
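      A hedged sketch of the helper's job, assuming the sk_buff layout of
      that era, where all header fields are offsets relative to skb->head:

          static void skb_headers_offset_update(struct sk_buff *skb, int off)
          {
                  /* csum_start is an offset too, but only valid for
                   * CHECKSUM_PARTIAL skbs (the update moved in here). */
                  if (skb->ip_summed == CHECKSUM_PARTIAL)
                          skb->csum_start += off;
                  skb->transport_header += off;
                  skb->network_header += off;
                  if (skb_mac_header_was_set(skb))
                          skb->mac_header += off;
                  skb->inner_transport_header += off;
                  skb->inner_network_header += off;
                  skb->inner_mac_header += off;
          }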
  17. 10 Oct 2013, 1 commit
    • net: gro: allow to build full sized skb · 8a29111c
      Committed by Eric Dumazet
      skb_gro_receive() is currently limited to 16 or 17 MSS per GRO skb,
      typically 24616 bytes, because it fills up to MAX_SKB_FRAGS frags.
      
      It's relatively easy to extend the skb using frag_list to allow
      more frags to be appended into the last sk_buff.
      
      This still builds very efficient skbs, and allows reaching 45 MSS per
      skb.
      
      (A 45-MSS GRO packet uses one skb plus a frag_list containing two
      additional sk_buffs.)
      
      High-speed TCP flows benefit from this extension through lower TCP
      stack CPU usage (fewer packets stored in the receive queue, fewer ACK
      packets processed).
      
      Forwarding setups could be hurt, as such skbs will need to be
      linearized, although this is not a new problem, since GRO could
      already provide skbs with a frag_list.
      
      We could make the 65536-byte threshold a tunable to mitigate this.
      
      (The first time we need to linearize an skb in skb_needs_linearize(),
      we could lower the tunable to ~16*1460 so that subsequent
      skb_gro_receive() calls build smaller skbs.)
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
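      A hedged sketch of the chaining idea; the real skb_gro_receive() also
      merges page frags and adjusts head state, so this shows only the
      frag_list path:

          /* The head GRO skb 'p' has no room left in its frag array, so
           * chain the new segment 'skb' onto p's frag_list instead of
           * rejecting the merge. */
          if (NAPI_GRO_CB(p)->last == p)  /* frag_list still empty */
                  skb_shinfo(p)->frag_list = skb;
          else
                  NAPI_GRO_CB(p)->last->next = skb;
          NAPI_GRO_CB(p)->last = skb;

          skb->next = NULL;
          p->data_len += skb->len;
          p->truesize += skb->truesize;
          p->len += skb->len;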
  18. 04 Sep 2013, 1 commit
  19. 02 Aug 2013, 1 commit
  20. 25 Jul 2013, 1 commit
  21. 13 Jul 2013, 1 commit
  22. 04 Jul 2013, 1 commit
  23. 28 Jun 2013, 1 commit
  24. 26 Jun 2013, 1 commit
  25. 24 Jun 2013, 1 commit
    • net: Unmap fragment page once iterator is done · aeb193ea
      Committed by Wedson Almeida Filho
      Callers of skb_seq_read() are currently forced to call
      skb_abort_seq_read() even when consuming all the data, because the
      last call to skb_seq_read() (the one that returns 0 to indicate the
      end) fails to unmap the last fragment page.
      
      With this patch, callers are allowed to traverse the skb data by
      calling skb_prepare_seq_read() once and then calling skb_seq_read()
      repeatedly, as originally intended (and documented in the original
      commit 677e90ed); that is, skb_abort_seq_read() is only called if the
      sequential read is actually aborted.
      Signed-off-by: Wedson Almeida Filho <wedsonaf@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
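      A hedged sketch of the traversal pattern the commit restores, using
      the signatures from include/linux/skbuff.h:

          struct skb_seq_state st;
          const u8 *data;
          unsigned int len, consumed = 0;

          skb_prepare_seq_read(skb, 0, skb->len, &st);
          while ((len = skb_seq_read(consumed, &data, &st)) != 0) {
                  /* process 'len' bytes starting at 'data' */
                  consumed += len;
          }
          /* The terminating 0 return now unmaps the last fragment, so
           * skb_abort_seq_read() is only needed when bailing out early. */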
  26. 11 Jun 2013, 2 commits
  27. 05 Jun 2013, 1 commit
  28. 01 Jun 2013, 1 commit
  29. 29 May 2013, 1 commit
    • net: Fix build warnings after mac_header and transport_header became __u16. · 06ecf24b
      Committed by David S. Miller
      net/core/skbuff.c: In function ‘__alloc_skb_head’:
      net/core/skbuff.c:203:2: warning: large integer implicitly truncated to unsigned type [-Woverflow]
      net/core/skbuff.c: In function ‘__alloc_skb’:
      net/core/skbuff.c:279:2: warning: large integer implicitly truncated to unsigned type [-Woverflow]
      net/core/skbuff.c:280:2: warning: large integer implicitly truncated to unsigned type [-Woverflow]
      net/core/skbuff.c: In function ‘build_skb’:
      net/core/skbuff.c:348:2: warning: large integer implicitly truncated to unsigned type [-Woverflow]
      net/core/skbuff.c:349:2: warning: large integer implicitly truncated to unsigned type [-Woverflow]
      Signed-off-by: David S. Miller <davem@davemloft.net>
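      The warnings come from assigning the ~0U sentinel to fields that
      shrank to __u16; a sketch of the kind of cast that silences them (the
      exact form used in the commit is an assumption):

          /* ~0U no longer fits in a __u16 field; spell the sentinel in the
           * field's own type so -Woverflow stays quiet. */
          skb->mac_header = (typeof(skb->mac_header))~0U;
          skb->transport_header = (typeof(skb->transport_header))~0U;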
  30. 23 May 2013, 1 commit
    • net: Loosen constraints for recalculating checksum in skb_segment() · 1cdbcb79
      Committed by Simon Horman
      This is a generic solution to a specific problem that I have
      observed.
      
      If the encapsulation of an skb changes, then the ability to offload
      checksums may also change. In particular, it may be necessary to
      perform checksumming in software.
      
      An example of such a case is where a non-GRE packet is received but
      is to be encapsulated and transmitted as GRE.
      
      Another example relates to my proposed support for packets that are
      non-MPLS when received but MPLS when transmitted.
      
      The cost of this change is that the value of the csum variable may be
      checked when it previously was not. When the csum variable is true,
      this is pure overhead. When it is false, it leads to software
      checksumming, which I believe also produces correct checksums in
      transmitted packets for the cases described above.
      
      Further analysis:
      
      This patch relies on the return value of can_checksum_protocol()
      being correct and in turn the return value of skb_network_protocol(),
      used to provide the protocol parameter of can_checksum_protocol(),
      being correct. It also relies on the features passed to skb_segment()
      and in turn to can_checksum_protocol() being correct.
      
      I believe that this problem has not been observed for VLANs because
      it appears that almost all drivers, the exception being xgbe, set
      vlan_features such that the checksum offload support for VLAN packets
      is greater than or equal to that of non-VLAN packets.
      
      I wonder if the code in xgbe may be an oversight and the hardware
      does support checksumming of VLAN packets. If so, it may be worth
      updating the driver's vlan_features, as this patch will force such
      checksums to be performed in software rather than in hardware.
      Signed-off-by: Simon Horman <horms@verge.net.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
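      A hedged sketch of the shape of the loosened decision inside
      skb_segment(); the helper names come from the commit text, while the
      exact placement is an assumption:

          __be16 proto = skb_network_protocol(skb);
          /* Ask whether the device features can checksum this (possibly
           * re-encapsulated) protocol; if not, compute the checksum in
           * software while segmenting. */
          bool csum = !!can_checksum_protocol(features, proto);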
  31. 25 Apr 2013, 1 commit
  32. 20 Apr 2013, 2 commits
  33. 28 Mar 2013, 1 commit
  34. 10 Mar 2013, 3 commits