1. 15 Jul, 2019 · 1 commit
  2. 09 Sep, 2017 · 1 commit
    •
      rbtree: cache leftmost node internally · cd9e61ed
      Committed by Davidlohr Bueso
      Patch series "rbtree: Cache leftmost node internally", v4.
      
      A series extending rbtrees to internally cache the leftmost node, so
      that all interval tree users can have a fast overlap-check
      optimization[1].  The benefits of this series:
      
      (i)   Unifies users that do internal leftmost-node caching.
      (ii)  Optimizes all interval tree users.
      (iii) Converts at least two new users (epoll and procfs) to the new interface.
      
      This patch (of 16):
      
      Red-black tree semantics imply that nodes with smaller keys always lie
      to the left and nodes with greater (or equal, for duplicates) keys to
      the right.  For the kernel this is most evident in our rb_first()
      semantics.  Enabling lookups for the smallest node in the
      tree in O(1) can save a good chunk of cycles in not having to walk down
      the tree each time.  To this end there are a few core users that
      explicitly do this, such as the scheduler and rtmutexes.  There is also
      the desire for interval trees to have this optimization allowing faster
      overlap checking.
      
      This patch introduces a new 'struct rb_root_cached' which is just the
      root with a cached pointer to the leftmost node.  The reason why the
      regular rb_root was not extended instead of adding a new structure was
      that this allows the user to have the choice between memory footprint
      and actual tree performance.  The new wrappers on top of the regular
      rb_root calls are:
      
       - rb_first_cached(cached_root) -- which is a fast replacement
           for rb_first.
      
       - rb_insert_color_cached(node, cached_root, new)
      
       - rb_erase_cached(node, cached_root)
      
      In addition, augmented cached interfaces are added for the basic
      insertion and deletion operations, which become important for the
      interval tree changes.
      
      With the exception of the inserts, which add a bool for updating the
      new leftmost, the interfaces are kept the same.  Porting rbtree
      users to the cached version is therefore trivial, and keeping current
      rbtree semantics for users that don't care about the optimization
      requires zero overhead.
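      
      The caching idea above can be sketched in plain C. This is a
      hypothetical, simplified illustration: it uses an ordinary unbalanced
      BST in place of the kernel's red-black tree, and the names
      (root_cached, insert_cached, first_cached) only mirror the kernel API.
      
      ```c
      #include <assert.h>
      #include <stddef.h>
      
      /* Simplified stand-in for a tree node (no color bits). */
      struct node {
          int key;
          struct node *left, *right;
      };
      
      /* Mirrors 'struct rb_root_cached': the root plus a cached
       * pointer to the leftmost (smallest) node. */
      struct root_cached {
          struct node *root;
          struct node *leftmost;
      };
      
      /* Insert and, as in rb_insert_color_cached(), update the cached
       * leftmost when the new node became the smallest; 'leftmost' here
       * plays the role of the bool the kernel API adds on insert. */
      static void insert_cached(struct root_cached *t, struct node *n)
      {
          struct node **link = &t->root;
          int leftmost = 1;
      
          n->left = n->right = NULL;
          while (*link) {
              if (n->key < (*link)->key) {
                  link = &(*link)->left;
              } else {
                  link = &(*link)->right;
                  leftmost = 0;   /* went right at least once */
              }
          }
          *link = n;
          if (leftmost)
              t->leftmost = n;    /* O(1) maintenance on insert */
      }
      
      /* Mirrors rb_first_cached(): O(1), no walk down the left spine. */
      static struct node *first_cached(const struct root_cached *t)
      {
          return t->leftmost;
      }
      
      int main(void)
      {
          struct root_cached tree = { NULL, NULL };
          struct node a = { 5 }, b = { 2 }, c = { 9 };
      
          insert_cached(&tree, &a);
          insert_cached(&tree, &b);
          insert_cached(&tree, &c);
          assert(first_cached(&tree)->key == 2);
          return 0;
      }
      ```
      
      Erase is the harder half in the real API: rb_erase_cached() must
      re-derive the leftmost via rb_next() when the cached node itself is
      removed, which the sketch above omits.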
      
      Link: http://lkml.kernel.org/r/20170719014603.19029-2-dave@stgolabs.net
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cd9e61ed
  3. 15 Jul, 2017 · 1 commit
  4. 14 Sep, 2015 · 1 commit
  5. 05 May, 2014 · 1 commit
  6. 09 Oct, 2012 · 2 commits
    •
      rbtree: move augmented rbtree functionality to rbtree_augmented.h · 9c079add
      Committed by Michel Lespinasse
      Provide rb_insert_augmented() and rb_erase_augmented() through a new
      rbtree_augmented.h include file.  rb_erase_augmented() is defined there as
      an __always_inline function, in order to allow inlining of augmented
      rbtree callbacks into it.  Since this generates a relatively large
      function, each augmented rbtree user should make sure to have a single
      call site.
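      
      The single-call-site requirement follows from the inlining pattern the
      commit describes. A hedged sketch of that pattern (hypothetical names;
      a generic always-inline helper taking a callback struct, standing in
      for rb_erase_augmented() and its augment callbacks):
      
      ```c
      #include <assert.h>
      
      /* Hypothetical stand-in for the kernel's augment-callback struct,
       * which carries propagate/copy/rotate function pointers. */
      struct callbacks {
          void (*propagate)(int *acc, int v);
      };
      
      /* Marked always_inline: at the (expected single) call site per user,
       * the compiler can fold the callback bodies directly into the loop,
       * eliminating the indirect calls.  Each instantiation is large,
       * which is why multiple call sites are discouraged. */
      static inline __attribute__((always_inline))
      int process(const int *vals, int n, const struct callbacks *cb)
      {
          int acc = 0;
          for (int i = 0; i < n; i++)
              cb->propagate(&acc, vals[i]);
          return acc;
      }
      
      /* One user's callback, analogous to an augmented-rbtree propagate. */
      static void sum_cb(int *acc, int v) { *acc += v; }
      
      int main(void)
      {
          static const struct callbacks cb = { sum_cb };
          const int vals[] = { 1, 2, 3 };
      
          assert(process(vals, 3, &cb) == 6);
          return 0;
      }
      ```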
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9c079add
    • M
      rbtree: faster augmented rbtree manipulation · 14b94af0
      Committed by Michel Lespinasse
      Introduce new augmented rbtree APIs that allow minimal recalculation of
      augmented node information.
      
      A new callback is added to the rbtree insertion and erase rebalancing
      functions, to be called on each tree rotation. Such rotations preserve
      the subtree root's augmented value, but require recalculation of the
      augmented value for the one child previously located at the subtree root.
      
      In the insertion case, the handcoded search phase must be updated to
      maintain the augmented information on insertion, and then the rbtree
      coloring/rebalancing algorithms keep it up to date.
      
      In the erase case, things are more complicated since it is library
      code that manipulates the rbtree in order to remove internal nodes.
      This requires a couple of additional callbacks to copy a subtree's
      augmented value when a new root is stitched in, and to recompute
      augmented values down the ancestry path when a node is removed from
      the tree.
      
      In order to preserve maximum speed for the non-augmented case,
      we provide two versions of each tree manipulation function.
      rb_insert_augmented() is the augmented equivalent of rb_insert_color(),
      and rb_erase_augmented() is the augmented equivalent of rb_erase().
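      
      The rotation invariant described above can be sketched in plain C.
      This is a hypothetical simplification: a bare binary-tree rotation
      whose augmented value is the subtree maximum (as in interval trees),
      not the kernel's actual rotate callback.
      
      ```c
      #include <assert.h>
      #include <stddef.h>
      
      struct node {
          int key;
          int max;                  /* augmented value: max key in subtree */
          struct node *left, *right;
      };
      
      /* Recompute one node's augmented value from its key and children. */
      static int subtree_max(const struct node *n)
      {
          int m = n->key;
          if (n->left  && n->left->max  > m) m = n->left->max;
          if (n->right && n->right->max > m) m = n->right->max;
          return m;
      }
      
      /* Left rotation: the new subtree root inherits the old root's
       * augmented value unchanged, and only the demoted old root must be
       * recomputed -- exactly the property the rotate callback exploits. */
      static struct node *rotate_left(struct node *x)
      {
          struct node *y = x->right;
      
          x->right = y->left;
          y->left = x;
          y->max = x->max;          /* preserved at the subtree root */
          x->max = subtree_max(x);  /* recompute the demoted child */
          return y;
      }
      
      int main(void)
      {
          struct node a = { 1, 1, NULL, NULL };
          struct node c = { 3, 3, NULL, NULL };
          struct node b = { 2, 3, &a, &c };   /* root over {1,2,3} */
          struct node *r = rotate_left(&b);
      
          assert(r == &c && r->max == 3);     /* root value preserved */
          assert(b.max == 2);                 /* b now covers {1,2} */
          return 0;
      }
      ```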
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      14b94af0
  7. 25 Jul, 2011 · 1 commit
  8. 11 Nov, 2010 · 1 commit
  9. 19 Feb, 2010 · 1 commit
  10. 13 Jun, 2009 · 2 commits
  11. 12 Feb, 2007 · 1 commit