1. 06 Dec 2008, 9 commits
  2. 05 Dec 2008, 1 commit
    • D
      tcp: tcp_vegas ssthresh bug fix · a6af2d6b
      Doug Leith authored
      This patch fixes a bug in tcp_vegas.c.  At the moment this code leaves
      ssthresh untouched.  However, this means that the vegas congestion
      control algorithm is effectively unable to reduce cwnd below the
      ssthresh value (if the vegas update lowers the cwnd below ssthresh,
      then slow start is activated to raise it back up).  One example where
      this matters is when during slow start cwnd overshoots the link
      capacity and a flow then exits slow start with ssthresh set to a value
      above where congestion avoidance would like to adjust it.
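      For illustration, a minimal user-space sketch of the kind of clamping
      the fix needs (variable names are illustrative, not tcp_vegas.c's):

      #include <stdio.h>

      /* Toy model: if the Vegas update wants cwnd below ssthresh, ssthresh
       * must be lowered as well, otherwise slow start immediately pushes
       * cwnd back up above the intended value. */
      static void vegas_update(unsigned int *cwnd, unsigned int *ssthresh,
                               unsigned int target_cwnd)
      {
          *cwnd = target_cwnd;
          if (*ssthresh > *cwnd)          /* the piece that was missing */
              *ssthresh = *cwnd;
      }

      int main(void)
      {
          unsigned int cwnd = 100, ssthresh = 80;

          vegas_update(&cwnd, &ssthresh, 40);
          printf("cwnd=%u ssthresh=%u\n", cwnd, ssthresh);   /* 40 40 */
          return 0;
      }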
      Signed-off-by: Doug Leith <doug.leith@nuim.ie>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a6af2d6b
  3. 04 Dec 2008, 3 commits
    • B
      net: /proc/net/ip_mr_cache, display Iif as a signed short · 999890b2
      Benjamin Thery authored
      Today, iproute2 fails to show multicast forwarding unresolved cache
      entries while scanning /proc/net/ip_mr_cache.
      
      Indeed, it expects to see -1 in the 'Iif' column to identify unresolved
      entries, but the kernel outputs 65535. It's a signed/unsigned issue:
      
      'Iif', the source interface, is retrieved from member mfc_parent in
      struct mfc_cache. mfc_parent is a vifi_t: unsigned short, but is
      displayed in ipmr_mfc_seq_show() as "%-3d", signed integer.
      
      In unresolved entries, the 65535 value (0xFFFF) comes from this define:
      #define ALL_VIFS    ((vifi_t)(-1))
      
      That may explain why the guy who added support for this in iproute2
      thought a -1 should be expected.
      
      I don't know if this must be fixed in kernel or in iproute2. Who is
      right? What is the correct API? How was it designed originally?
      
      I'll let you decide whether it should go in the kernel or be fixed in iproute2.
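      For reference, a small user-space demonstration of the formatting
      difference (a kernel-side fix would amount to casting before printing):

      #include <stdio.h>

      typedef unsigned short vifi_t;
      #define ALL_VIFS ((vifi_t)(-1))

      int main(void)
      {
          vifi_t iif = ALL_VIFS;

          /* What the kernel prints today: the unsigned short is promoted
           * to int, so 0xFFFF comes out as 65535. */
          printf("Iif=%-3d\n", iif);

          /* What iproute2 expects to see for unresolved entries. */
          printf("Iif=%-3d\n", (short)iif);
          return 0;
      }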
      Signed-off-by: Benjamin Thery <benjamin.thery@bull.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      999890b2
    • B
      net: fix /proc/net/ip_mr_cache display - V2 · 1ea472e2
      Benjamin Thery authored
      /proc/net/ip_mr_cache and /proc/net/ip6_mr_cache display garbage when
      showing unresolved mfc_cache entries.
      
      [root@qemu tests]# cat /proc/net/ip_mr_cache
      Group    Origin   Iif     Pkts    Bytes    Wrong Oifs
      014C00EF 010014AC 1         10    10050        0  2:1    3:1
      024C00EF 010014AC 65535      514        2 -559067475
      
      The first line is correct. It is a resolved cache entry, 10 packets used it...
      The second line represents an unresolved entry, and the columns Pkts(4th),
      Bytes(5th) and Wrong(6th) just show garbage.
      
      In struct mfc_cache, there's a union to store data for resolved and
      unresolved cases. And what ipmr_mfc_seq_show() is printing in these 
      columns for the unresolved entries is some bytes from mfc_cache.mfc_un.res.
      Bad.
      (eg. In our case -559067475 is in fact 0xdead4ead which is the spinlock
      magic from mfc_cache.mfc_un.unres.unresolved.lock.magic).
      
      This patch replaces the garbage data written in these columns for the
      unresolved entries with '0' (zeros), which is more correct.
      This change doesn't break the ABI.
      
      Also, mfc->mfc_un.res.pkt, mfc->mfc_un.res.bytes, mfc->mfc_un.res.wrong_if
      are unsigned long.
      
      It applies on top of net-next-2.6.
      
      The patch for net-2.6 is slightly different because of the NIP6_FMT to
      %pI6 conversion that was made in the seq_printf.
      
      Changelog:
      ==========
      V2:
      * Instead of breaking the ABI by suppressing the columns that have no
        meaning for unresolved entries, fill them with 0 values.
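      For illustration, a tiny user-space model of the union issue and the
      zero-fill fix (structure and field names are simplified stand-ins, not
      the kernel's):

      #include <stdio.h>

      /* Resolved and unresolved entries keep different data in the same
       * union, so reading u.res for an unresolved entry yields whatever
       * u.unres happens to have stored there (e.g. 0xdead4ead). */
      struct cache_entry {
          int resolved;
          union {
              struct { unsigned long pkt, bytes, wrong_if; } res;
              struct { unsigned long magic; } unres;
          } u;
      };

      static void show(const struct cache_entry *e)
      {
          if (e->resolved)
              printf("%8lu %8lu %8lu\n",
                     e->u.res.pkt, e->u.res.bytes, e->u.res.wrong_if);
          else
              /* The fix: keep the columns but print zeros. */
              printf("%8lu %8lu %8lu\n", 0UL, 0UL, 0UL);
      }

      int main(void)
      {
          struct cache_entry unres = { .resolved = 0,
                                       .u.unres.magic = 0xdead4eadUL };
          show(&unres);
          return 0;
      }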
      Signed-off-by: Benjamin Thery <benjamin.thery@bull.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1ea472e2
    • I
      tcp: make urg+gso work for real this time · f8269a49
      Ilpo Järvinen authored
      I should have noticed this earlier... :-) The previous solution
      to URG+GSO/TSO will cause SACK-block tcp_fragment to do zig-zag
      patterns, or even worse, a steep downward slope in packet
      counting, because each skb's pcount would be truncated to a pcount
      of 2 and then the following fragments of the later portion would
      restore the window again.
      
      Basically this reverts "tcp: Do not use TSO/GSO when there is
      urgent data" (33cf71ce). It also removes some unnecessary code
      from tcp_current_mss that didn't work as intended either (could
      be that something was changed down the road, or it might have
      been broken since the dawn of time), because it only takes effect
      once the urgent data is already written, while this bug shows up
      starting from ~64k before the urg point.
      
      Retransmissions are already split into MSS-sized chunks, so
      only the new-data sending paths need splitting in case they have
      a segment otherwise suitable for GSO/TSO. The actual check
      could be made narrower, but since we are already late in the -rc
      cycle, I'll postpone the more fine-grained refinements.
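      As a rough sketch of the idea (a hypothetical helper, not the patch's
      code): while urgent data is outstanding, cap what the new-data send
      path hands to GSO/TSO at one MSS so the split happens where needed:

      #include <stdio.h>

      static unsigned int send_limit(unsigned int mss, unsigned int tso_segs,
                                     int urg_mode)
      {
          if (tso_segs > 1 && !urg_mode)
              return tso_segs * mss;  /* normal case: build a large chunk */
          return mss;                 /* urgent mode: split at MSS bounds */
      }

      int main(void)
      {
          printf("%u\n", send_limit(1448, 44, 0));  /* ~64k GSO chunk */
          printf("%u\n", send_limit(1448, 44, 1));  /* one MSS while URG */
          return 0;
      }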
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f8269a49
  4. 02 Dec 2008, 1 commit
  5. 26 Nov 2008, 11 commits
  6. 25 Nov 2008, 13 commits
    • E
      netfilter: nfmark routing in OUTPUT, mangle, NFQUEUE · 5f145e44
      Eric Leblond authored
      This patch lets nfmark be evaluated in the routing decision for OUTPUT
      packets, in the mangle table, when the packet is processed in NFQUEUE.
      Until now, only changes (made while the packet was in NFQUEUE) to the
      fields src_addr, dest_addr and tos could make netfilter re-evaluate the
      routing.
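      A rough sketch of the decision the reinject path has to make
      (illustrative C, not the netfilter code itself): record the
      routing-relevant fields before queueing and redo the route lookup if
      any of them, now including the mark, changed:

      #include <stdio.h>

      struct route_keys {
          unsigned int saddr, daddr, mark;
          unsigned char tos;
      };

      static int must_reroute(const struct route_keys *before,
                              const struct route_keys *after)
      {
          return before->saddr != after->saddr ||
                 before->daddr != after->daddr ||
                 before->tos   != after->tos   ||
                 before->mark  != after->mark;   /* the part this patch adds */
      }

      int main(void)
      {
          struct route_keys before = { 0x0a000001, 0x0a000002, 0, 0 };
          struct route_keys after  = before;

          after.mark = 42;                /* userspace set a new nfmark */
          printf("reroute: %d\n", must_reroute(&before, &after));
          return 0;
      }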
      
      From: Laurent Licour <laurent@licour.com>
      Signed-off-by: Eric Leblond <eric@inl.fr>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      5f145e44
    • A
      fb7e0674
    • A
      ah4/ah6: remove useless NULL assignments · 6daad372
      Alexey Dobriyan authored
      struct will be kfreed in a moment, so...
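      As a generic illustration (hypothetical structure and field names, not
      the actual ah4/ah6 code):

      #include <stdlib.h>

      struct example_state {
          void *work_buf;
          void *tfm;
      };

      static void destroy(struct example_state *s)
      {
          free(s->work_buf);
          /* s->work_buf = NULL;  -- pointless: the struct is freed next */
          free(s->tfm);
          /* s->tfm = NULL;       -- likewise */
          free(s);
      }

      int main(void)
      {
          struct example_state *s = calloc(1, sizeof(*s));

          if (s)
              destroy(s);
          return 0;
      }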
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6daad372
    • I
      111cc8b9
    • I
      tcp: Make shifting not clear the hints · 92ee76b6
      Ilpo Järvinen authored
      The earlier version was just a very basic one which "plays it
      safe" by always clearing the hints. However, clearing a hint
      is an extremely costly operation with large windows, so it must be
      avoided at all cost whenever possible; with shifting there is a
      way to achieve not-clearing.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      92ee76b6
    • I
      tcp: Try to restore large SKBs while SACK processing · 832d11c5
      Ilpo Järvinen authored
      During SACK processing, most of the benefits of TSO are eaten by
      the SACK blocks that one by one fragment SKBs into MSS-sized chunks.
      Then we're in trouble when the cleanup work for them has to be done
      when a large cumulative ACK arrives. Try to return to the pre-split
      state already while more and more SACK info gets discovered, by
      combining newly discovered SACK areas with the previous skb if
      that one is SACKed as well.
      
      This approach has a number of benefits:
      
      1) The processing overhead is spread more equally over the RTT
      2) The write queue has fewer skbs to process (affects everything
         that has to walk the queue past the SACKed areas)
      3) The write queue is consistent the whole time, so no other part
         of TCP has to be aware of this (this was not the case with
         some other approach that was, well, quite intrusive all
         around).
      4) clean_rtx_queue can release most of the pages using a single
         put_page call instead of the previous PAGE_SIZE/mss+1 calls
      
      In case a hole is fully filled by the new SACK block, we attempt
      to combine the next skb too, which allows construction of skbs
      that are even larger than what TSO split them into, and it handles
      the hole-on-every-nth-segment patterns that often occur during
      slow-start overshoot pretty nicely. Though for this to be really
      useful, a retransmission would also have to get lost, since
      cumulative ACKs advance one hole at a time in the most typical case.
      
      TODO: handle upwards-only merging. That should be rather easy
      when the segment is fully SACKed, but I'm leaving that as a future
      work item (it won't make a very large difference anyway since
      the current approach already covers quite a lot of the normal
      cases).
      
      I was earlier thinking of some sophisticated way of tracking
      timestamps of the first and the last segment but later on
      realized that it won't be that necessary at all to store the
      timestamp of the last segment. The cases that can occur are
      basically either:
        1) ambiguous => no sensible measurement can be taken anyway
        2) non-ambiguous is due to reordering => having the timestamp
           of the last segment there just skews things further off
           rather than doing any good, since the ack got triggered by one
           of the holes (besides some subtle issues that would make
           determining the right hole/skb an even harder problem). Anyway,
           it has nothing to do with this change then.
      
      I chose to route some abnormal-looking cases with goto noop;
      some could be handled differently (e.g., by stopping the
      walk at that skb). In general, they either
      shouldn't happen at all or are rare enough to make no difference
      in practice.
      
      In theory this change (as a whole) could cause some macroscale
      (global) regression because of cache misses that are taken over
      the round-trip time, but it very likely gets better because of many
      fewer (local) cache misses for the other write-queue walkers and
      the big recovery-clearing cumulative ACK.
      
      Worth noting that these benefits would be very easy to get also
      without TSO/GSO being on, as long as the data is in pages so that
      we can merge them. Currently I won't let that happen, because
      DSACK splitting at a fragment would mess up pcounts due to
      sk_can_gso in tcp_set_skb_tso_segs. Once DSACK fragments get
      avoided, some conditions can be made less strict.
      
      TODO: I will probably have to convert the excessive pointer
      passing to struct sacktag_state... :-)
      
      My testing revealed that a considerable number of skbs couldn't
      be shifted because they were cloned (most likely still awaiting
      tx reclaim)...
      
      [The rest is left as future work instead, since I repeatedly got
      EFAULT from tcpdump's recvfrom when I added
      pskb_expand_head to deal with clones, so I separated that
      into another, later patch]
      
      ...To counter that, I gave up on the fifth advantage:
      
      5) When growing the previous SACK block, fewer allocs for new skbs
         are done; basically a new alloc is needed only when a new hole
         is detected or when the previous skb runs out of frags space
      
      ...which now only happens if reclaim is fast enough to dispose of
      the clone before the SACK block comes in (the window is one RTT long);
      otherwise we'll have to alloc some.
      
      With clones being handled I got these numbers (will be somewhat
      worse without that), taken with fine-grained mibs:
      
                        TCPSackShifted 398
                         TCPSackMerged 877
                  TCPSackShiftFallback 320
            TCPSACKCOLLAPSEFALLBACKGSO 0
        TCPSACKCOLLAPSEFALLBACKSKBBITS 0
        TCPSACKCOLLAPSEFALLBACKSKBDATA 0
          TCPSACKCOLLAPSEFALLBACKBELOW 0
          TCPSACKCOLLAPSEFALLBACKFIRST 1
       TCPSACKCOLLAPSEFALLBACKPREVBITS 318
            TCPSACKCOLLAPSEFALLBACKMSS 1
         TCPSACKCOLLAPSEFALLBACKNOHEAD 0
          TCPSACKCOLLAPSEFALLBACKSHIFT 0
                TCPSACKCOLLAPSENOOPSEQ 0
        TCPSACKCOLLAPSENOOPSMALLPCOUNT 0
           TCPSACKCOLLAPSENOOPSMALLLEN 0
                   TCPSACKCOLLAPSEHOLE 12
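      For a feel of the mechanism, a minimal user-space sketch of the
      shift/merge idea (all names are hypothetical; the real code moves page
      fragments and pcount between sk_buffs):

      #include <stdio.h>

      struct toy_skb { unsigned int len, pcount, sacked; };

      /* When a new SACK block covers skb and the previous skb is already
       * SACKed, fold skb's accounting into the previous one instead of
       * leaving two MSS-sized fragments in the write queue. */
      static int try_shift(struct toy_skb *prev, struct toy_skb *skb)
      {
          if (!prev->sacked || !skb->sacked)
              return 0;                 /* only merge runs of SACKed data */
          prev->len    += skb->len;
          prev->pcount += skb->pcount;
          skb->len = skb->pcount = 0;   /* the shifted skb can now be freed */
          return 1;
      }

      int main(void)
      {
          struct toy_skb a = { 1448, 1, 1 }, b = { 1448, 1, 1 };

          if (try_shift(&a, &b))
              printf("merged: len=%u pcount=%u\n", a.len, a.pcount);
          return 0;
      }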
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      832d11c5
    • I
      tcp: make tcp_sacktag_one able to handle partial skb too · f58b22fd
      Ilpo Järvinen authored
      This is preparatory work for the SACK combiner patch, which may
      have to count TCP state changes for only a part of the skb
      because it intentionally avoids splitting the skb into SACKed
      and not-SACKed parts.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f58b22fd
    • I
      tcp: Make SACK code to split only at mss boundaries · adb92db8
      Ilpo Järvinen authored
      Sadly enough, this adds a possible divide, though we try to avoid
      it by checking the one-MSS common case.
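      Something like the following hypothetical helper captures the
      common-case check that avoids the division:

      #include <stdio.h>

      /* Trim a byte count down to an MSS boundary, special-casing the
       * common at-most-one-MSS case so no divide is issued. */
      static unsigned int mss_align(unsigned int len, unsigned int mss)
      {
          if (len < 2 * mss)
              return len < mss ? 0 : mss;
          return (len / mss) * mss;     /* otherwise pay for the divide */
      }

      int main(void)
      {
          printf("%u %u %u\n", mss_align(700, 1448),
                 mss_align(1448, 1448), mss_align(3000, 1448));  /* 0 1448 2896 */
          return 0;
      }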
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      adb92db8
    • I
      tcp: more aggressive skipping · e8bae275
      Ilpo Järvinen authored
      I knew already when rewriting the sacktag code that this condition
      was too conservative; change it now since it prevents a lot of
      useless work (especially in the SACK shifter decision code
      that is being added by a later patch). This shouldn't change
      anything really, just saves some processing regardless of the
      shifter.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e8bae275
    • I
      e1aa680f
    • I
      tcp: collapse more than two on retransmission · 4a17fc3a
      Ilpo Järvinen authored
      I had always thought that collapsing up to two at a time was an
      intentional decision to avoid excessive processing if 1-byte
      skbs are to be combined into a full MTU, and that consecutive
      retransmissions would make the size of the retransmitted segment
      double each round anyway, but some recent discussion made me
      understand that was not the case. Thus, make collapse work
      more and wait less.
      
      It would be possible to take advantage of the shifting
      machinery (added in a later patch) in the case of paged
      data, but that can be implemented on top of this change.
      
      tcp_skb_is_last check is now provided by the loop.
      
      I tested a bit (ss-after-idle-off, fill 4096x4096B xfer,
      10s sleep + 4096 x 1-byte writes while dropping them
      for a while with netem):
      
      . 16774097:16775545(1448) ack 1 win 46
      . 16775545:16776993(1448) ack 1 win 46
      . ack 16759617 win 2399
      P 16776993:16777217(224) ack 1 win 46
      . ack 16762513 win 2399
      . ack 16765409 win 2399
      . ack 16768305 win 2399
      . ack 16771201 win 2399
      . ack 16774097 win 2399
      . ack 16776993 win 2399
      . ack 16777217 win 2399
      P 16777217:16777257(40) ack 1 win 46
      . ack 16777257 win 2399
      P 16777257:16778705(1448) ack 1 win 46
      P 16778705:16780153(1448) ack 1 win 46
      FP 16780153:16781313(1160) ack 1 win 46
      . ack 16778705 win 2399
      . ack 16780153 win 2399
      F 1:1(0) ack 16781314 win 2399
      
      While without the drop-all period I get this:
      
      . 16773585:16775033(1448) ack 1 win 46
      . ack 16764897 win 9367
      . ack 16767793 win 9367
      . ack 16770689 win 9367
      . ack 16773585 win 9367
      . 16775033:16776481(1448) ack 1 win 46
      P 16776481:16777217(736) ack 1 win 46
      . ack 16776481 win 9367
      . ack 16777217 win 9367
      P 16777217:16777218(1) ack 1 win 46
      P 16777218:16777219(1) ack 1 win 46
      P 16777219:16777220(1) ack 1 win 46
        ...
      P 16777247:16777248(1) ack 1 win 46
      . ack 16777218 win 9367
      . ack 16777219 win 9367
        ...
      . ack 16777233 win 9367
      . ack 16777248 win 9367
      P 16777248:16778696(1448) ack 1 win 46
      P 16778696:16780144(1448) ack 1 win 46
      FP 16780144:16781313(1169) ack 1 win 46
      . ack 16780144 win 9367
      F 1:1(0) ack 16781314 win 9367
      
      The window seems to be 30-40 segments, which were successfully
      combined into: P 16777217:16777257(40) ack 1 win 46
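      A minimal sketch of the looping collapse (toy structures, not the
      kernel's sk_buff handling):

      #include <stdio.h>

      struct toy_skb { unsigned int len; struct toy_skb *next; };

      /* Keep absorbing following skbs into the one being retransmitted as
       * long as the combined payload still fits into a single MSS. */
      static void collapse(struct toy_skb *skb, unsigned int mss)
      {
          while (skb->next && skb->len + skb->next->len <= mss) {
              struct toy_skb *nxt = skb->next;

              skb->len += nxt->len;
              skb->next = nxt->next;    /* unlink from the write queue */
          }
      }

      int main(void)
      {
          /* 40 one-byte writes, as in the trace above, collapse into one
           * 40-byte retransmission instead of only two at a time. */
          struct toy_skb q[40];
          int i;

          for (i = 0; i < 40; i++) {
              q[i].len = 1;
              q[i].next = (i < 39) ? &q[i + 1] : NULL;
          }
          collapse(&q[0], 1448);
          printf("collapsed len=%u\n", q[0].len);   /* 40 */
          return 0;
      }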
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4a17fc3a
    • E
      net: avoid a pair of dst_hold()/dst_release() in ip_push_pending_frames() · a21bba94
      Eric Dumazet authored
      We can reduce pressure on the dst entry refcount that slows down the
      UDP transmit path on SMP machines. This pressure is visible on RTP
      servers when delivering content to media gateways, especially big
      ones handling thousands of streams. Several CPUs send UDP frames to
      the same destination and hence use the same dst entry.
      
      This patch makes ip_push_pending_frames() steal the refcount its
      callers had to take when filling inet->cork.dst.
      
      This doesn't avoid all refcounting, but still gives speedups on SMP,
      on the UDP/RAW transmit path.
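      As a toy model of the refcount-stealing idea (names are hypothetical):

      #include <stdio.h>

      struct toy_dst { int refcnt; };

      static void dst_hold(struct toy_dst *d)    { d->refcnt++; }
      static void dst_release(struct toy_dst *d) { d->refcnt--; }

      /* Before: take an extra reference for the outgoing skb and drop the
       * caller's cork reference separately -- a hold/release pair on the
       * shared dst for every packet. */
      static void attach_old(struct toy_dst **cork_dst)
      {
          dst_hold(*cork_dst);          /* reference for the skb           */
          dst_release(*cork_dst);       /* caller drops its cork reference */
          *cork_dst = NULL;
      }

      /* After: the skb simply steals the reference already held in cork. */
      static void attach_new(struct toy_dst **cork_dst)
      {
          *cork_dst = NULL;             /* ownership moved, no atomics */
      }

      int main(void)
      {
          struct toy_dst a = { 1 }, b = { 1 };
          struct toy_dst *cork_a = &a, *cork_b = &b;

          attach_old(&cork_a);   /* refcnt bounced 1 -> 2 -> 1 */
          attach_new(&cork_b);   /* refcnt untouched           */
          printf("old=%d new=%d\n", a.refcnt, b.refcnt);
          return 0;
      }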
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a21bba94
    • E
      net: avoid a pair of dst_hold()/dst_release() in ip_append_data() · 2e77d89b
      Eric Dumazet authored
      We can reduce pressure on the dst entry refcount that slows down the
      UDP transmit path on SMP machines. This pressure is visible on RTP
      servers when delivering content to media gateways, especially big
      ones handling thousands of streams. Several CPUs send UDP frames to
      the same destination and hence use the same dst entry.
      
      This patch makes ip_append_data() eventually steal the refcount its
      callers had to take on the dst entry.
      
      This doesn't avoid all refcounting, but still gives speedups on SMP,
      on the UDP/RAW transmit path.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2e77d89b
  7. 24 Nov 2008, 2 commits
    • E
      net: Make sure BHs are disabled in sock_prot_inuse_add() · 920de804
      Eric Dumazet authored
      The rule for calling sock_prot_inuse_add() is that BHs must
      be disabled.  Some new calls were added where this was not
      true, and this triggers warnings as reported by Ilpo.
      
      Fix this by adding explicit BH disabling around those call sites,
      or by moving the sock_prot_inuse_add() call inside an existing
      BH-disabled section.
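      The fix pattern at a call site that is not already BH-disabled looks
      roughly like this (fragment for illustration only):

          local_bh_disable();
          sock_prot_inuse_add(net, sk->sk_prot, 1);
          local_bh_enable();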
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      920de804
    • A
      net: fix tunnels in netns after ndo_ changes · be77e593
      Alexey Dobriyan authored
      dev_net_set() should be the very first thing after alloc_netdev().
      
      "ndo_" changes turned a simple assignment (which is OK to do before the
      netns assignment) into a quite non-trivial operation (which is not OK;
      init_net was used). This leads to incomplete initialisation of the
      tunnel device in a netns.
      
      BUG: unable to handle kernel NULL pointer dereference at 00000004
      IP: [<c02efdb5>] ip6_tnl_exit_net+0x37/0x4f
      *pde = 00000000 
      Oops: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
      last sysfs file: /sys/class/net/lo/operstate
      
      Pid: 10, comm: netns Not tainted (2.6.28-rc6 #1) 
      EIP: 0060:[<c02efdb5>] EFLAGS: 00010246 CPU: 0
      EIP is at ip6_tnl_exit_net+0x37/0x4f
      EAX: 00000000 EBX: 00000020 ECX: 00000000 EDX: 00000003
      ESI: c5caef30 EDI: c782bbe8 EBP: c7909f50 ESP: c7909f48
       DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
      Process netns (pid: 10, ti=c7908000 task=c7905780 task.ti=c7908000)
      Stack:
       c03e75e0 c7390bc8 c7909f60 c0245448 c7390bd8 c7390bf0 c7909fa8 c012577a
       00000000 00000002 00000000 c0125736 c782bbe8 c7909f90 c0308fe3 c782bc04
       c7390bd4 c0245406 c084b718 c04f0770 c03ad785 c782bbe8 c782bc04 c782bc0c
      Call Trace:
       [<c0245448>] ? cleanup_net+0x42/0x82
       [<c012577a>] ? run_workqueue+0xd6/0x1ae
       [<c0125736>] ? run_workqueue+0x92/0x1ae
       [<c0308fe3>] ? schedule+0x275/0x285
       [<c0245406>] ? cleanup_net+0x0/0x82
       [<c0125ae1>] ? worker_thread+0x81/0x8d
       [<c0128344>] ? autoremove_wake_function+0x0/0x33
       [<c0125a60>] ? worker_thread+0x0/0x8d
       [<c012815c>] ? kthread+0x39/0x5e
       [<c0128123>] ? kthread+0x0/0x5e
       [<c0103b9f>] ? kernel_thread_helper+0x7/0x10
      Code: db e8 05 ff ff ff 89 c6 e8 dc 04 f6 ff eb 08 8b 40 04 e8 38 89 f5 ff 8b 44 9e 04 85 c0 75 f0 43 83 fb 20 75 f2 8b 86 84 00 00 00 <8b> 40 04 e8 1c 89 f5 ff e8 98 04 f6 ff 89 f0 e8 f8 63 e6 ff 5b 
      EIP: [<c02efdb5>] ip6_tnl_exit_net+0x37/0x4f SS:ESP 0068:c7909f48
      ---[ end trace 6c2f2328fccd3e0c ]---
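      The required ordering, paraphrased as a kernel-style fragment (not a
      verbatim quote of the patch):

          dev = alloc_netdev(sizeof(struct ip6_tnl), name, ip6_tnl_dev_setup);
          if (!dev)
              return NULL;
          /* Must come before anything that looks at dev_net(dev); otherwise
           * the device ends up associated with init_net and the per-netns
           * exit path later trips over it. */
          dev_net_set(dev, net);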
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      be77e593