1. 11 Jul 2007 (2 commits)
    • [CORE] Stack changes to add multiqueue hardware support API · f25f4e44
      Committed by Peter P Waskiewicz Jr
      Add the multiqueue hardware device support API to the core network
      stack.  Allow drivers to allocate multiple queues and manage them at
      the netdev level if they choose to do so.
      
      Add a new field to sk_buff, namely queue_mapping, so drivers know
      which tx_ring to select based on the OS classification of the flow
      (a driver-side usage sketch follows this entry).
      Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
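      The entry above describes the API only in prose, so here is a minimal
      driver-side sketch (not taken from the patch) of how a transmit routine
      might honour skb->queue_mapping, assuming the per-queue
      netif_stop_subqueue() helper that accompanies this API; example_xmit(),
      example_ring_full() and example_post_skb() are hypothetical driver
      internals.

      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      /* Hypothetical driver internals, declared only for illustration. */
      bool example_ring_full(struct net_device *dev, u16 ring);
      void example_post_skb(struct net_device *dev, u16 ring, struct sk_buff *skb);

      static int example_xmit(struct sk_buff *skb, struct net_device *dev)
      {
              u16 ring = skb->queue_mapping;  /* queue chosen by the stack/qdisc */

              if (example_ring_full(dev, ring)) {
                      /* Stop only this subqueue; the other rings keep running. */
                      netif_stop_subqueue(dev, ring);
                      return NETDEV_TX_BUSY;
              }

              example_post_skb(dev, ring, skb);
              return NETDEV_TX_OK;
      }

      The point of the sketch is the flow-control granularity: only the ring
      the stack selected is stopped, while the remaining queues keep
      transmitting.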
    • [SKBUFF]: Keep track of writable header len of headerless clones · 334a8132
      Committed by Patrick McHardy
      Currently NAT (and others) that want to modify cloned skbs copy them,
      even though in the vast majority of cases it's not necessary: the skb
      is a clone made by TCP, and the portion NAT wants to modify is
      actually writable because TCP releases the header reference before
      cloning.
      
      The problem is that there is no clean way for NAT to find out how
      long the writable header area is, so this patch introduces skb->hdr_len
      to hold this length. When a headerless skb is cloned, skb->hdr_len
      is set to the current headroom; for regular clones it is copied from
      the original. A new function, skb_clone_writable(skb, len), returns
      whether the skb is writable up to len bytes from skb->data (a usage
      sketch follows this entry). To avoid enlarging the skb, the mac_len
      field is reduced to 16 bits and the new hdr_len field is put in the
      remaining 16 bits.
      
      I've done a few rough benchmarks of NAT (not with this exact patch,
      but a very similar one). As expected, it saves a huge amount of
      system time in the sendfile case, bringing it down to basically the
      same amount as without NAT; with sendmsg it only helps on loopback,
      probably because of the large MTU.
      
      Transmit a 1GB file using sendfile/sendmsg over eth0/lo with and
      without NAT:
      
      - sendfile eth0, no NAT:	sys     0m0.388s
      - sendfile eth0, NAT:		sys     0m1.835s
      - sendfile eth0, NAT + patch:	sys     0m0.370s	(~ -80%)
      
      - sendfile lo, no NAT:		sys     0m0.258s
      - sendfile lo, NAT:		sys     0m2.609s
      - sendfile lo, NAT + patch:	sys     0m0.260s	(~ -90%)
      
      - sendmsg eth0, no NAT:		sys     0m2.508s
      - sendmsg eth0, NAT:		sys     0m2.539s
      - sendmsg eth0, NAT + patch:	sys     0m2.445s	(no change)
      
      - sendmsg lo, no NAT:		sys	0m2.151s
      - sendmsg lo, NAT:		sys     0m3.557s
      - sendmsg lo, NAT + patch:	sys     0m2.159s	(~ -40%)
      
      I expect other users to see a similar performance improvement;
      packet-mangling iptables targets, ipip and ip_gre come to mind.
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
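      A reader's sketch, not part of the commit: the kind of check that
      skb_clone_writable() enables before mangling a possibly cloned header,
      falling back to pskb_expand_head() only when the clone really is
      shared; example_make_header_writable() is a hypothetical name.

      #include <linux/errno.h>
      #include <linux/gfp.h>
      #include <linux/skbuff.h>

      static int example_make_header_writable(struct sk_buff *skb,
                                              unsigned int needed)
      {
              /* With skb_clone_writable() the copy via pskb_expand_head()
               * only happens when the cloned header really is shared. */
              if (skb_cloned(skb) &&
                  !skb_clone_writable(skb, needed) &&
                  pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
                      return -ENOMEM;

              /* Bytes [skb->data, skb->data + needed) may now be modified
               * in place. */
              return 0;
      }

      This is exactly the effect behind the benchmark numbers above: the
      sendfile path produces headerless TCP clones, so the copy is skipped
      almost every time.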
  2. 16 Jun 2007 (1 commit)
  3. 03 May 2007 (1 commit)
  4. 30 Apr 2007 (1 commit)
  5. 26 Apr 2007 (35 commits)