1. 07 Apr, 2010 1 commit
    • flow: virtualize flow cache entry methods · fe1a5f03
      Authored by Timo Teräs
      This allows validating the cached object before returning it.
      It also allows destructing the object properly if the last reference
      was held in the flow cache. This is also a preparation for caching
      bundles in the flow cache.
      
      In return for virtualizing the methods, we save on:
      - not having to regenerate the whole flow cache on policy removal:
        each flow matching a killed policy gets refreshed as the getter
        function notices it smartly.
      - we do not have to call flow_cache_flush from policy gc, since the
        flow cache now properly deletes the object if it had any references
      Signed-off-by: Timo Teras <timo.teras@iki.fi>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fe1a5f03
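      The idea of the commit above can be sketched with an ops table attached to each cache entry: the getter validates the entry before it is returned, and the cache itself can drop the last reference. This is a minimal illustrative sketch, not the real kernel code; the field and function names (`dead`, `example_get`, `flow_cache_lookup`) are assumptions for illustration.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Hypothetical sketch: each cached entry carries an ops table so the
       * cache can validate and destruct entries generically. The struct names
       * echo the kernel's flow cache, but the bodies are illustrative only. */
      struct flow_cache_entry;

      struct flow_cache_ops {
          /* return the object if still valid, or NULL so the caller refreshes it */
          void *(*get)(struct flow_cache_entry *e);
          /* release the last reference held by the cache */
          void (*delete)(struct flow_cache_entry *e);
      };

      struct flow_cache_entry {
          const struct flow_cache_ops *ops;
          void *object;
          int dead;   /* set when the owning policy is killed */
      };

      static void *example_get(struct flow_cache_entry *e)
      {
          /* a stale entry is refreshed lazily on lookup, so no global
           * cache regeneration is needed when a policy is removed */
          return e->dead ? NULL : e->object;
      }

      static void example_delete(struct flow_cache_entry *e)
      {
          e->object = NULL;   /* stand-in for freeing the object */
      }

      static const struct flow_cache_ops example_ops = {
          .get = example_get,
          .delete = example_delete,
      };

      static void *flow_cache_lookup(struct flow_cache_entry *e)
      {
          void *obj = e->ops->get(e);
          if (!obj)
              e->ops->delete(e);   /* destruct properly inside the cache */
          return obj;
      }
      ```

      With this shape, a killed policy's entries simply fail validation on the next `get`, which is why the commit message notes that neither a full cache regeneration nor a `flow_cache_flush` from the policy gc is required.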
  2. 03 Mar, 2010 1 commit
  3. 23 Feb, 2010 3 commits
  4. 13 Feb, 2010 1 commit
  5. 28 Jan, 2010 1 commit
  6. 24 Jan, 2010 1 commit
  7. 26 Nov, 2009 1 commit
  8. 09 Nov, 2009 1 commit
    • xfrm: SAD entries do not expire correctly after suspend-resume · 9e0d57fd
      Authored by Yury Polyanskiy
        This fixes the following bug in the current implementation of
      net/xfrm: SAD entry timeouts do not count the time the machine spends
      in the suspended state. This leads to connectivity problems because,
      after resuming, the local machine thinks the SAD entry is still valid
      while it has already expired on the remote server.
      
        The cause of this is very simple: the timeouts in net/xfrm are bound to
      the old mod_timer() timers. This patch reassigns them to a
      CLOCK_REALTIME hrtimer.
      
        I have been using this version of the patch for a few months on my
      machines without any problems, and have also run a few stress tests
      without issues.
      
        This version of the patch uses the tasklet_hrtimer infrastructure by
      Peter Zijlstra (commit 9ba5f0).
      
        This patch is against 2.6.31.4. Please CC me.
      Signed-off-by: Yury Polyanskiy <polyanskiy@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9e0d57fd
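      The distinction the commit relies on can be shown with a small user-space sketch (not the kernel code): an expiry check against an absolute wall-clock deadline keeps counting across suspend, whereas a tick-based timer (the old mod_timer()/jiffies scheme) stops while the machine sleeps. The `sad_entry` struct and helper below are hypothetical names for illustration.

      ```c
      #include <assert.h>
      #include <time.h>

      /* Illustrative sketch: store the hard expiry as an absolute
       * CLOCK_REALTIME-style deadline. A wall-clock comparison still fires
       * correctly after resume, because suspend does not pause wall time,
       * while a relative tick counter would have under-counted. */
      struct sad_entry {
          time_t hard_expiry;   /* absolute wall-clock deadline (seconds) */
      };

      static int sad_entry_expired(const struct sad_entry *e, time_t now_realtime)
      {
          return now_realtime >= e->hard_expiry;
      }
      ```

      In the real patch the comparison is not done by polling like this; the timer is rearmed as an hrtimer on CLOCK_REALTIME so it fires as soon as the wall-clock deadline has passed, including immediately after resume.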
  9. 04 Nov, 2009 1 commit
  10. 19 Oct, 2009 1 commit
  11. 31 Jul, 2009 1 commit
    • xfrm: select sane defaults for xfrm[4|6] gc_thresh · a33bc5c1
      Authored by Neil Horman
      Choose saner defaults for xfrm[4|6] gc_thresh values on init
      
      Currently, the xfrm[4|6] code has hard-coded initial gc_thresh values
      (set to 1024).  Given that the ipv4 and ipv6 routing caches are sized
      dynamically at boot time, the static selections can be nonsensical.
      This patch dynamically selects an appropriate gc threshold based on
      the corresponding main routing table size, using the assumption that
      we should in the worst case be able to handle as many connections as
      the routing table can.
      
      For ipv4, the maximum route cache size is 16 * the number of hash
      buckets in the route cache.  Given that xfrm4 starts garbage
      collection at the gc_thresh and prevents new allocations at 2 *
      gc_thresh, we set gc_thresh to half the maximum route cache size.
      
      For ipv6, it's a bit trickier.  There is no maximum route cache size,
      but the ipv6 dst_ops gc_thresh is statically set to 1024.  It seems
      sane to select a similar gc_thresh for the xfrm6 code that is half
      the number of hash buckets in the v6 route cache times 16 (as the v4
      code does).
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a33bc5c1
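      The sizing rule described above reduces to simple arithmetic: the v4 route cache holds at most 16 entries per hash bucket, and xfrm4 stops new allocations at 2 * gc_thresh, so picking gc_thresh as half the maximum route cache size makes the hard limit equal the route cache capacity. A hedged sketch of that calculation, with `rt_hash_buckets` as a stand-in for the boot-time bucket count:

      ```c
      #include <assert.h>

      /* Illustrative only: compute a gc threshold from the route cache
       * hash size, following the rule in the commit message. The function
       * name and parameter are assumptions, not the kernel's symbols. */
      static unsigned int xfrm4_pick_gc_thresh(unsigned int rt_hash_buckets)
      {
          unsigned int max_route_cache = 16 * rt_hash_buckets; /* 16 per bucket */
          return max_route_cache / 2;   /* new allocations stop at 2 * this */
      }
      ```

      So a box whose route cache was sized to 1024 buckets at boot would start garbage collection at 8192 cached entries and refuse new ones at 16384, exactly the route cache's own capacity.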
  12. 23 Jun, 2009 1 commit
  13. 03 Jun, 2009 1 commit
  14. 26 Nov, 2008 23 commits
  15. 31 Oct, 2008 1 commit
  16. 29 Oct, 2008 1 commit