1. 13 March 2011, 1 commit
  2. 11 March 2011, 3 commits
  3. 10 March 2011, 7 commits
  4. 09 March 2011, 2 commits
  5. 08 March 2011, 3 commits
  6. 05 March 2011, 5 commits
    • ipv4: Remove flowi from struct rtable. · 5e2b61f7
      David S. Miller authored
      The only necessary parts are the src/dst addresses, the
      interface indexes, the TOS, and the mark.
      
      The rest is unnecessary bloat, which amounts to nearly
      50 bytes on 64-bit.
      Signed-off-by: David S. Miller <davem@davemloft.net>
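      A minimal sketch of the idea, assuming illustrative field names (the exact
      members added upstream may differ): instead of embedding a full struct flowi,
      the cache entry keeps only the handful of members the lookup key actually needs.

      /* Illustrative only: roughly the lookup-key members the commit says are
       * worth keeping: src/dst addresses, interface indexes, TOS, and mark. */
      struct rtable_lookup_key {
              __be32  dst;    /* destination address */
              __be32  src;    /* source address */
              int     iif;    /* input interface index */
              int     oif;    /* output interface index */
              __u8    tos;    /* type of service */
              __u32   mark;   /* skb/netfilter mark */
      };
      /* Roughly 20 bytes of key versus a full struct flowi, which is where the
       * "nearly 50 bytes on 64-bit" saving comes from. */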
    • ipv4: Set rt->rt_iif more sanely on output routes. · 1018b5c0
      David S. Miller authored
      rt->rt_iif is only ever inspected on input routes; for example, DCCP
      uses it to populate a route lookup flow key when generating replies
      to another packet.
      
      Therefore, setting it to anything other than zero on output routes
      makes no sense.
      Signed-off-by: David S. Miller <davem@davemloft.net>
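      A small hedged sketch of the kind of consumer the message mentions: reply
      generation in the DCCP style seeds the reverse route lookup from the input
      route's rt_iif. The helper and its arguments are illustrative, not upstream code.

      /* Build a reply flow key from an *input* route; rt_iif is only meaningful
       * here, which is why output routes can simply leave it at zero. */
      static void fill_reply_flow_sketch(struct flowi *fl, const struct rtable *rt,
                                         __be32 saddr, __be32 daddr)
      {
              memset(fl, 0, sizeof(*fl));
              fl->oif     = rt->rt_iif;   /* reply leaves via the interface the packet arrived on */
              fl->fl4_dst = saddr;        /* addresses are swapped for the reply */
              fl->fl4_src = daddr;
      }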
    • ipv4: Get peer more cheaply in rt_init_metrics(). · 3c0afdca
      David S. Miller authored
      We know this is a new route object, so using atomic operations
      makes no sense at all.
      Signed-off-by: David S. Miller <davem@davemloft.net>
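      A minimal sketch of the point being made, with a helper invented purely for
      illustration: the route has just been allocated and is not yet visible to any
      other CPU, so the peer can be installed with a plain store instead of an
      atomic compare-and-exchange.

      /* Illustrative helper, not the upstream function. */
      static void rt_bind_peer_to_new_route(struct rtable *rt, __be32 daddr)
      {
              /* inet_getpeer_v4(addr, create) is the 2.6.38-era lookup/create API */
              struct inet_peer *peer = inet_getpeer_v4(daddr, 1);

              /* Plain assignment: nobody else can see this rtable yet, so there is
               * nothing to race with and no cmpxchg-style binding is needed. */
              rt->peer = peer;
      }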
    • ipv4: Optimize flow initialization in output route lookup. · 44713b67
      David S. Miller authored
      We burn a lot of useless cycles, CPU store-buffer traffic, and
      memory operations memset()'ing the on-stack flow used to perform
      output route lookups in __ip_route_output_key().
      
      Only the first half of the flow object members even matter for
      output route lookups in this context, specifically:
      
      FIB rules matching cares about:
      
      	dst, src, tos, iif, oif, mark
      
      FIB trie lookup cares about:
      
      	dst
      
      FIB semantic match cares about:
      
      	tos, scope, oif
      
      Therefore only initialize these specific members and elide the
      memset entirely.
      
      On Niagara2 this kills roughly 300 cycles from the output route
      lookup path.
      
      Likely, we can take things further, since all callers of output
      route lookups essentially throw away the on-stack flow they use.
      So they don't care if we use it as a scratch-pad to compute the
      final flow key.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
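      A sketch of the pattern using the 2.6.38-era struct flowi accessor names
      (fl4_dst and friends); the helper names are illustrative rather than the exact
      upstream diff. Only the members that FIB rule matching, trie lookup, and the
      semantic match actually read are filled in; the rest of the on-stack key is
      deliberately left uninitialized, eliding the memset() entirely.

      static struct rtable *output_route_lookup_sketch(struct net *net,
                                                       const struct flowi *oldflp)
      {
              struct flowi fl;        /* NOT memset(): only the members read below are set */

              fl.fl4_dst   = oldflp->fl4_dst;             /* trie lookup + rules    */
              fl.fl4_src   = oldflp->fl4_src;             /* rules                  */
              fl.fl4_tos   = oldflp->fl4_tos;             /* rules + semantic match */
              fl.fl4_scope = RT_SCOPE_UNIVERSE;           /* semantic match         */
              fl.mark      = oldflp->mark;                /* rules                  */
              fl.iif       = net->loopback_dev->ifindex;  /* rules (output path)    */
              fl.oif       = oldflp->oif;                 /* rules + semantic match */

              return __ip_route_output_key_sketch(net, &fl);  /* hypothetical callee */
      }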
    • inetpeer: seqlock optimization · 65e8354e
      Eric Dumazet authored
      David noticed:
      
      ------------------
      Eric, I was profiling the non-routing-cache case and something that
      stuck out is the case of calling inet_getpeer() with create==0.
      
      If an entry is not found, we have to redo the lookup under a spinlock
      to make certain that a concurrent writer rebalancing the tree does
      not "hide" an existing entry from us.
      
      This makes the case of a create==0 lookup for a not-present entry
      really expensive.  It is on the order of 600 cpu cycles on my
      Niagara2.
      
      I added a hack to not do the relookup under the lock when create==0
      and it now costs less than 300 cycles.
      
      This is now a pretty common operation with the way we handle COW'd
      metrics, so I think it's definitely worth optimizing.
      -----------------
      
      One solution is to use a seqlock instead of a spinlock to protect struct
      inet_peer_base.
      
      After a failed AVL tree lookup, we can easily detect whether a writer
      made changes during our lookup. Taking the lock and redoing the lookup
      is only necessary in that case.
      
      Note: add a private rcu_deref_locked() macro so that the access to the
      spinlock embedded in the seqlock is confined to one spot.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
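      A sketch of the read-side pattern the message describes, using the kernel's
      seqlock API; the base and lookup helper names are illustrative, and reference
      counting of the found peer is elided. The unlocked AVL walk is repeated under
      the lock only when the sequence counter shows that a writer rebalanced the
      tree while we were walking it.

      static struct inet_peer *getpeer_sketch(struct inet_peer_base *base, __be32 addr)
      {
              struct inet_peer *p;
              unsigned int seq;

              rcu_read_lock();
              seq = read_seqbegin(&base->lock);       /* base->lock is now a seqlock_t */
              p = lookup_rcu(base, addr);             /* lockless AVL walk (illustrative) */
              rcu_read_unlock();

              if (!p && read_seqretry(&base->lock, seq)) {
                      /* A writer changed the tree during our walk; only now do we
                       * pay for the spinlock and the second lookup. */
                      write_seqlock_bh(&base->lock);
                      p = lookup_locked(base, addr);  /* locked lookup (illustrative) */
                      write_sequnlock_bh(&base->lock);
              }
              return p;
      }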
  7. 04 March 2011, 2 commits
  8. 03 March 2011, 3 commits
  9. 02 March 2011, 13 commits
  10. 25 February 2011, 1 commit
    • ipv4: Rearrange how ip_route_newports() gets port keys. · dca8b089
      David S. Miller authored
      ip_route_newports() is the only place in the entire kernel that
      cares about the port members in the routing cache entry's lookup
      flow key.
      
      Therefore the only reason we store an entire flow inside of
      struct rtable is for this one special case.
      
      Rewrite ip_route_newports() such that:
      
      1) The caller passes in the original port values, so we don't need
         to use the rth->fl.fl_ip_{s,d}port values to remember them.
      
      2) The lookup flow is constructed by hand instead of being copied
         from the routing cache entry's flow.
      Signed-off-by: David S. Miller <davem@davemloft.net>
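      A sketch of the reworked shape (the helper name, argument order, and field
      accesses are illustrative, not necessarily the upstream signature): the caller
      supplies the original ports for the comparison, and the relookup flow is
      assembled member by member instead of struct-copying the cached flow and then
      patching the ports.

      static int ip_route_newports_sketch(struct rtable **rp, u8 protocol,
                                          __be16 orig_sport, __be16 orig_dport,
                                          __be16 sport, __be16 dport,
                                          struct sock *sk)
      {
              struct flowi fl;

              /* (1) Compare against the caller-supplied originals, so the cached
               *     route no longer has to remember the port values itself. */
              if (sport == orig_sport && dport == orig_dport)
                      return 0;

              /* (2) Build the lookup flow by hand rather than copying (*rp)->fl. */
              memset(&fl, 0, sizeof(fl));
              fl.fl4_dst     = (*rp)->fl.fl4_dst;
              fl.fl4_src     = (*rp)->fl.fl4_src;
              fl.fl4_tos     = (*rp)->fl.fl4_tos;
              fl.oif         = (*rp)->fl.oif;
              fl.mark        = (*rp)->fl.mark;
              fl.proto       = protocol;
              fl.fl_ip_sport = sport;
              fl.fl_ip_dport = dport;

              ip_rt_put(*rp);
              *rp = NULL;
              return ip_route_output_flow(sock_net(sk), rp, &fl, sk, 0);
      }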