1. 31 Jul, 2009 (1 commit)
    • xfrm: select sane defaults for xfrm[4|6] gc_thresh · a33bc5c1
      Neil Horman committed
      Choose saner defaults for xfrm[4|6] gc_thresh values on init
      
      Currently, the xfrm[4|6] code has hard-coded initial gc_thresh values
      (set to 1024).  Given that the ipv4 and ipv6 routing caches are sized
      dynamically at boot time, the static selections can be nonsensical.
      This patch dynamically selects an appropriate gc threshold based on
      the corresponding main routing table size, using the assumption that
      we should in the worst case be able to handle as many connections as
      the routing table can hold.
      
      For ipv4, the maximum route cache size is 16 * the number of hash
      buckets in the route cache.  Given that xfrm4 starts garbage
      collection at the gc_thresh and prevents new allocations at 2 *
      gc_thresh, we set gc_thresh to half the maximum route cache size.
      
      For ipv6, it's a bit trickier.  There is no maximum route cache size,
      but the ipv6 dst_ops gc_thresh is statically set to 1024.  It seems
      sane to select a similar gc_thresh for the xfrm6 code that is half
      the number of hash buckets in the v6 route cache times 16 (like the
      v4 code does).
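      
      To make the sizing rule concrete, here is a minimal userspace sketch
      of the arithmetic described above.  The function name and the bucket
      counts are invented for illustration; the kernel derives the real
      bucket counts at boot.
      
          #include <stdio.h>
      
          /*
           * Sketch of the sizing rule only, not the kernel code: the route
           * cache holds at most 16 entries per hash bucket, and xfrm starts
           * GC at gc_thresh while refusing new allocations at 2 * gc_thresh,
           * so half the maximum cache size covers the worst case.
           */
          static unsigned int xfrm_default_gc_thresh(unsigned int rt_hash_buckets)
          {
              unsigned int rt_max_size = 16 * rt_hash_buckets;
      
              return rt_max_size / 2;
          }
      
          int main(void)
          {
              /* Hypothetical bucket counts; the real ones are sized at boot. */
              printf("xfrm4 gc_thresh: %u\n", xfrm_default_gc_thresh(65536));
              printf("xfrm6 gc_thresh: %u\n", xfrm_default_gc_thresh(1024));
              return 0;
          }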
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 14 Jan, 2009 (1 commit)
  3. 15 Aug, 2008 (1 commit)
  4. 26 Jul, 2008 (1 commit)
  5. 23 Jul, 2008 (5 commits)
  6. 22 Jul, 2008 (1 commit)
  7. 12 Jun, 2008 (1 commit)
  8. 22 Apr, 2008 (1 commit)
  9. 18 Apr, 2008 (1 commit)
  10. 26 Mar, 2008 (1 commit)
  11. 05 Mar, 2008 (2 commits)
  12. 04 Mar, 2008 (10 commits)
  13. 19 Feb, 2008 (1 commit)
  14. 29 Jan, 2008 (7 commits)
  15. 11 Oct, 2007 (1 commit)
  16. 20 Jul, 2007 (1 commit)
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Paul Mundt committed
      Slab destructors were no longer supported after Christoph's c59def9f
      change.  They've been BUGs for both slab and slub, and slob never
      supported them either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).
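      
      As a hedged illustration rather than code from the patch itself, a
      minimal cache user written against the post-change five-argument
      kmem_cache_create() looks like this; the module and structure names
      are invented, and before the change an extra trailing destructor
      argument followed the ctor, which call sites simply dropped.
      
          #include <linux/module.h>
          #include <linux/slab.h>
      
          struct example_obj {
              int value;
          };
      
          static struct kmem_cache *example_cache;
      
          static int __init example_init(void)
          {
              /* Post-change call: name, object size, alignment, flags, ctor.
               * NULL means no constructor; there is no destructor slot left. */
              example_cache = kmem_cache_create("example_cache",
                                                sizeof(struct example_obj),
                                                0, SLAB_HWCACHE_ALIGN, NULL);
              if (!example_cache)
                  return -ENOMEM;
              return 0;
          }
      
          static void __exit example_exit(void)
          {
              kmem_cache_destroy(example_cache);
          }
      
          module_init(example_init);
          module_exit(example_exit);
          MODULE_LICENSE("GPL");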
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  17. 31 May, 2007 (2 commits)
  18. 26 Apr, 2007 (1 commit)
  19. 26 Mar, 2007 (1 commit)
    • [IPV6]: Fix routing round-robin locking. · f11e6659
      David S. Miller committed
      As per RFC2461, section 6.3.6, item #2, when no routers on the
      matching list are known to be reachable or probably reachable we
      do round robin on those available routes so that we make sure
      to probe as many of them as possible to detect when one becomes
      reachable faster.
      
      Each routing table has a rwlock protecting the tree and the linked
      list of routes at each leaf.  The round robin code executes during
      lookup and thus with the rwlock taken as a reader.  A small local
      spinlock tries to provide protection but this does not work at all
      for two reasons:
      
      1) The round-robin list manipulation, as coded, goes like this (with
         read lock held):
      
      	walk routes finding head and tail
      
      	spin_lock();
      	rotate list using head and tail
      	spin_unlock();
      
         While one thread is rotating the list, another thread can
         end up with stale values of head and tail and then proceed
         to corrupt the list when it gets the lock.  This ends up causing
         the OOPS in fib6_add() later on that many people have been hitting.
      
      2) All the other code paths that run with the rwlock held as
         a reader do not expect the list to change on them, they
         expect it to remain completely fixed while they hold the
         lock in that way.
      
      So, simply stated, it is impossible to implement this correctly using
      a manipulation of the list without violating the rwlock locking
      semantics.
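      
      Reduced to its time-of-check/time-of-use shape, the broken pattern
      looks like the sketch below; the pthread spinlock and list layout are
      userspace stand-ins for the kernel types, and the names are invented.
      
          #include <pthread.h>
          #include <stdio.h>
      
          struct route {
              struct route *next;
          };
      
          static pthread_spinlock_t rr_lock;
      
          /*
           * Head and tail are found while holding only the table's read
           * lock, which other rotators hold too.  By the time the spinlock
           * is taken, a concurrent rotate may already have moved the list,
           * so the writes below can stitch it together incorrectly.
           */
          static void broken_rotate(struct route **listp)
          {
              struct route *head = *listp;
              struct route *tail = head;
      
              if (!head || !head->next)
                  return;                 /* nothing to rotate */
              while (tail->next)
                  tail = tail->next;
      
              /* a concurrent rotate can land here, staling head and tail */
      
              pthread_spin_lock(&rr_lock);
              tail->next = head;          /* old head becomes the new tail */
              *listp = head->next;        /* second entry becomes the head */
              head->next = NULL;
              pthread_spin_unlock(&rr_lock);
          }
      
          int main(void)
          {
              struct route c = { NULL }, b = { &c }, a = { &b };
              struct route *list = &a;
      
              pthread_spin_init(&rr_lock, PTHREAD_PROCESS_PRIVATE);
              broken_rotate(&list);       /* single-threaded, so safe here */
              printf("new head is %s\n", list == &b ? "b" : "?");
              return 0;
          }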
      
      Reimplement using a per-fib6_node round-robin pointer.  This way we
      don't need to manipulate the list at all, and since the round-robin
      pointer can only ever point to real existing entries we don't need
      to perform any locking on the changing of the round-robin pointer
      itself.  We only need to reset the round-robin pointer to NULL when
      the entry it is pointing to is removed.
      
      The idea is from Thomas Graf and it is very similar to how this
      was implemented before the advanced router selection code went in.
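      
      A minimal sketch of the per-node cursor, with illustrative types and
      helper names rather than the exact ones from the patch: the list is
      never modified, so readers under the rwlock always see it intact, and
      only the cursor advances.
      
          struct rt6_info {
              struct rt6_info *next;     /* route list at this leaf */
          };
      
          struct fib6_node {
              struct rt6_info *leaf;     /* fixed head of the route list */
              struct rt6_info *rr_ptr;   /* round-robin cursor, may be NULL */
          };
      
          /* Return the current round-robin candidate and advance the
           * cursor, wrapping back to the head of the list at the end. */
          static struct rt6_info *rr_next(struct fib6_node *fn)
          {
              struct rt6_info *rt = fn->rr_ptr ? fn->rr_ptr : fn->leaf;
      
              if (rt)
                  fn->rr_ptr = rt->next ? rt->next : fn->leaf;
              return rt;
          }
      
          /* On removal, invalidate the cursor if it points at the entry
           * being unlinked; the next lookup then restarts at the head,
           * matching the reset-to-NULL rule described above. */
          static void rr_invalidate(struct fib6_node *fn, struct rt6_info *gone)
          {
              if (fn->rr_ptr == gone)
                  fn->rr_ptr = NULL;
          }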
      Signed-off-by: David S. Miller <davem@davemloft.net>