1. 22 August 2018, 2 commits
    • ida: Remove old API · b03f8e43
      Matthew Wilcox committed
      Delete ida_pre_get(), ida_get_new(), ida_get_new_above() and ida_remove()
      from the public API.  Some of these functions still exist as internal
      helpers, but they should not be called by consumers.
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
    • ida: Add new API · 5ade60dd
      Matthew Wilcox committed
      Add ida_alloc(), ida_alloc_min(), ida_alloc_max(), ida_alloc_range()
      and ida_free().  The ida_alloc_max() and ida_alloc_range() functions
      differ from ida_simple_get() in that they take an inclusive 'max'
      parameter instead of an exclusive 'end' parameter.  Callers are about
      evenly split on whether they'd prefer inclusive or exclusive parameters,
      and 'max' is easier to document than 'end'.
      
      Change the IDA allocation to first attempt to allocate a bit using
      existing memory, and only allocate memory afterwards.  Also change the
      behaviour of 'min' > INT_MAX from being a BUG() to returning -ENOSPC.
      
      Leave compatibility wrappers in place for ida_simple_get() and
      ida_simple_remove() to avoid changing all callers.
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
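
      A minimal usage sketch of the new calls (the example_* names below are
      illustrative, not part of the patch):

      	#include <linux/idr.h>

      	static DEFINE_IDA(example_ida);

      	/* Allocate an ID from the inclusive range [1, 99]; contrast with
      	 * ida_simple_get(&example_ida, 1, 100, GFP_KERNEL), whose 'end'
      	 * is exclusive.  Returns the ID or a negative errno. */
      	int example_get_id(void)
      	{
      		return ida_alloc_range(&example_ida, 1, 99, GFP_KERNEL);
      	}

      	/* Return the ID once the owning object is gone. */
      	void example_put_id(int id)
      	{
      		ida_free(&example_ida, id);
      	}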
  2. 12 April 2018, 2 commits
    • xarray: add the xa_lock to the radix_tree_root · f6bb2a2c
      Matthew Wilcox committed
      This results in no change in structure size on 64-bit machines as it
      fits in the padding between the gfp_t and the void *.  32-bit machines
      will grow the structure from 8 to 12 bytes.  Almost all radix trees are
      protected with (at least) a spinlock, so as they are converted from
      radix trees to xarrays, the data structures will shrink again.
      
      Initialising the spinlock requires a name for the benefit of lockdep, so
      RADIX_TREE_INIT() now needs to know the name of the radix tree it's
      initialising, and so do IDR_INIT() and IDA_INIT().
      
      Also add the xa_lock() and xa_unlock() family of wrappers to make it
      easier to use the lock.  If we could rely on -fplan9-extensions in the
      compiler, we could avoid all of this syntactic sugar, but that wasn't
      added until gcc 4.6.
      
      Link: http://lkml.kernel.org/r/20180313132639.17387-8-willy@infradead.org
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
      Reviewed-by: Jeff Layton <jlayton@kernel.org>
      Cc: Darrick J. Wong <darrick.wong@oracle.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
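
      A sketch of the wrappers in use (the tree and function names here are
      hypothetical, not from the patch):

      	#include <linux/radix-tree.h>
      	#include <linux/xarray.h>	/* xa_lock()/xa_unlock() wrappers */

      	/* RADIX_TREE_INIT()/RADIX_TREE() now also initialise the embedded
      	 * xa_lock spinlock, passing the tree's name to lockdep.  GFP_ATOMIC
      	 * lets insertions allocate nodes while the lock is held. */
      	static RADIX_TREE(example_tree, GFP_ATOMIC);

      	static int example_store(unsigned long index, void *item)
      	{
      		int err;

      		xa_lock(&example_tree);		/* spin_lock(&example_tree.xa_lock) */
      		err = radix_tree_insert(&example_tree, index, item);
      		xa_unlock(&example_tree);
      		return err;
      	}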
    • radix tree: use GFP_ZONEMASK bits of gfp_t for flags · fa290cda
      Matthew Wilcox committed
      Patch series "XArray", v9.  (First part thereof).
      
      This patchset is, I believe, appropriate for merging for 4.17.  It
      contains the XArray implementation, to eventually replace the radix
      tree, and converts the page cache to use it.
      
      This conversion keeps the radix tree and XArray data structures in sync
      at all times.  That allows us to convert the page cache one function at
      a time and should allow for easier bisection.  Other than renaming some
      elements of the structures, the data structures are fundamentally
      unchanged; a radix tree walk and an XArray walk will touch the same
      number of cachelines.  I have changes planned to the XArray data
      structure, but those will happen in future patches.
      
      Improvements the XArray has over the radix tree:
      
       - The radix tree provides operations like other trees do; 'insert' and
         'delete'. But what most users really want is an automatically
         resizing array, and so it makes more sense to give users an API that
         is like an array -- 'load' and 'store'. We still have an 'insert'
         operation for users that really want that semantic.
      
       - The XArray considers locking as part of its API. This simplifies a
         lot of users who formerly had to manage their own locking just for
         the radix tree. It also improves code generation as we can now tell
         RCU that we're holding a lock and it doesn't need to generate as much
         fencing code. The other advantage is that tree nodes can be moved
         (not yet implemented).
      
       - GFP flags are now parameters to calls which may need to allocate
         memory. The radix tree forced users to decide what the allocation
         flags would be at creation time. It's much clearer to specify them at
         allocation time.
      
       - Memory is not preloaded; we don't tie up dozens of pages on the off
         chance that the slab allocator fails. Instead, we drop the lock,
         allocate a new node and retry the operation. We have to convert all
         the radix tree, IDA and IDR preload users before we can realise this
         benefit, but I have not yet found a user which cannot be converted.
      
       - The XArray provides a cmpxchg operation. The radix tree forces users
         to roll their own (and at least four have).
      
       - Iterators take a 'max' parameter. That simplifies many users and will
         reduce the amount of iteration done.
      
       - Iteration can proceed backwards. We only have one user for this, but
         since it's called as part of the pagefault readahead algorithm, that
         seemed worth mentioning.
      
       - RCU-protected pointers are not exposed as part of the API. There are
         some fun bugs where the page cache forgets to use rcu_dereference()
         in the current codebase.
      
       - Value entries gain an extra bit compared to radix tree exceptional
         entries. That gives us the extra bit we need to put huge page swap
         entries in the page cache.
      
       - Some iterators now take a 'filter' argument instead of having
         separate iterators for tagged/untagged iterations.
      
      The page cache is improved by this:
      
       - Shorter, easier to read code
      
       - More efficient iterations
      
       - Reduction in size of struct address_space
      
       - Fewer walks from the top of the data structure; the XArray API
         encourages staying at the leaf node and conducting operations there.
      
      This patch (of 8):
      
      None of these bits may be used for slab allocations, so we can use them
      as radix tree flags as long as we mask them off before passing them to
      the slab allocator. Move the IDR flag from the high bits to the
      GFP_ZONEMASK bits.
      
      Link: http://lkml.kernel.org/r/20180313132639.17387-3-willy@infradead.org
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
      Acked-by: Jeff Layton <jlayton@kernel.org>
      Cc: Darrick J. Wong <darrick.wong@oracle.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
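
      For context, the 'load'/'store' style described above looks roughly like
      this in the eventual mainline XArray API (from later patches, not this
      one; the example_* names are illustrative):

      	#include <linux/xarray.h>

      	static DEFINE_XARRAY(example_array);

      	/* 'store' and 'load' rather than 'insert' and 'delete'; the GFP
      	 * flags are supplied per call instead of being fixed at init time. */
      	static int example_set(unsigned long index, void *item)
      	{
      		void *old = xa_store(&example_array, index, item, GFP_KERNEL);

      		return xa_is_err(old) ? xa_err(old) : 0;
      	}

      	static void *example_get(unsigned long index)
      	{
      		return xa_load(&example_array, index);
      	}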
  3. 07 February 2018, 8 commits
    • idr: Add documentation · ac665d94
      Matthew Wilcox committed
      Move the idr kernel-doc to its own idr.rst file and add a few
      paragraphs about how to use it.  Also add some more kernel-doc.
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
    • idr: Make 1-based IDRs more efficient · 6ce711f2
      Matthew Wilcox committed
      About 20% of the IDR users in the kernel want the allocated IDs to start
      at 1.  The implementation currently searches all the way down the left
      hand side of the tree, finds no free ID other than ID 0, walks all the
      way back up, and then all the way down again.  This patch 'rebases' the
      ID so we fill the entire radix tree, rather than leave a gap at 0.
      
      Chris Wilson says: "I did the quick hack of allocating index 0 of the
      idr and that eradicated idr_get_free() from being at the top of the
      profiles for the many-object stress tests. This improvement will be
      much appreciated."
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
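
      The pattern this speeds up, in a hypothetical caller (not from the
      patch):

      	#include <linux/idr.h>

      	static DEFINE_IDR(example_idr);

      	/* A 1-based allocator: start == 1, end == 0 means "no upper limit".
      	 * With this change the permanently unused slot 0 no longer forces a
      	 * walk down and back up the left edge of the tree. */
      	static int example_alloc(void *ptr)
      	{
      		return idr_alloc(&example_idr, ptr, 1, 0, GFP_KERNEL);
      	}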
    • idr: Rename idr_for_each_entry_ext · 7a457577
      Matthew Wilcox committed
      In most places in the kernel where we need to distinguish functions by
      the type of their arguments, we use '_ul' as a suffix for the unsigned
      long variant, not '_ext'.  Also add kernel-doc.
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
    • idr: Remove idr_alloc_ext · 460488c5
      Matthew Wilcox committed
      It has no more users, so remove it.  Move idr_alloc() back into idr.c,
      move the guts of idr_alloc_cmn() into idr_alloc_u32(), remove the
      wrappers around idr_get_free_cmn() and rename it to idr_get_free().
      While there is now no interface to allocate IDs larger than a u32,
      the IDR internals remain ready to handle a larger ID should a need arise.
      
      These changes make it possible to provide the guarantee that, if the
      nextid pointer points into the object, the object's ID will be initialised
      before a concurrent lookup can find the object.
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
    • idr: Add idr_alloc_u32 helper · e096f6a7
      Matthew Wilcox committed
      All current users of idr_alloc_ext() actually want to allocate a u32
      and idr_alloc_u32() fits their needs better.
      
      Like idr_get_next(), it uses a 'nextid' argument which serves as both
      a pointer to the start ID and the assigned ID (instead of a separate
      minimum and pointer-to-assigned-ID argument).  It uses a 'max' argument
      rather than 'end' because the semantics that idr_alloc has for 'end'
      don't work well for unsigned types.
      
      Since idr_alloc_u32() returns an errno instead of the allocated ID, mark
      it as __must_check to help callers use it correctly.  Include copious
      kernel-doc.  Chris Mi <chrism@mellanox.com> has promised to contribute
      test-cases for idr_alloc_u32.
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
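
      A sketch of the calling convention (hypothetical caller, not from the
      patch):

      	#include <linux/idr.h>

      	static DEFINE_IDR(example_idr);

      	/* 'nextid' is both the lowest ID to try and, on success, the ID that
      	 * was actually assigned; the return value is 0 or a negative errno,
      	 * which is why the function is marked __must_check. */
      	static int example_alloc(void *ptr, u32 *id)
      	{
      		*id = 1;	/* start searching from 1 */
      		return idr_alloc_u32(&example_idr, ptr, id, UINT_MAX, GFP_KERNEL);
      	}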
    • idr: Delete idr_find_ext function · 322d884b
      Matthew Wilcox committed
      Simply changing idr_find's 'id' argument to 'unsigned long' works
      for all callers.
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
    • idr: Delete idr_replace_ext function · 234a4624
      Matthew Wilcox committed
      Changing idr_replace's 'id' argument to 'unsigned long' works for all
      callers.  Callers which passed a negative ID now get -ENOENT instead of
      -EINVAL.  No callers relied on this error value.
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
    • idr: Delete idr_remove_ext function · 9c160941
      Matthew Wilcox committed
      Simply changing idr_remove's 'id' argument to 'unsigned long' suffices
      for all callers.
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
  4. 15 December 2017, 1 commit
  5. 31 August 2017, 1 commit
  6. 14 February 2017, 3 commits
    • idr: Return the deleted entry from idr_remove · d3e709e6
      Matthew Wilcox committed
      It is a relatively common idiom (8 instances) to first look up an IDR
      entry, and then remove it from the tree if it is found, possibly doing
      further operations upon the entry afterwards.  If we change idr_remove()
      to return the removed object, all of these users can save themselves a
      walk of the IDR tree.
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
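
      A sketch of the simplified idiom (hypothetical caller, not from the
      patch):

      	#include <linux/idr.h>
      	#include <linux/slab.h>

      	/* Look-up-then-remove collapses into a single tree walk:
      	 * idr_remove() now hands back the entry it removed, or NULL. */
      	static void example_destroy(struct idr *idr, int id)
      	{
      		void *obj = idr_remove(idr, id);

      		kfree(obj);	/* kfree(NULL) is a no-op */
      	}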
    • ida: Move ida_bitmap to a percpu variable · 7ad3d4d8
      Matthew Wilcox committed
      When we preload the IDA, we allocate an IDA bitmap.  Instead of storing
      that preallocated bitmap in the IDA, we store it in a percpu variable.
      Generally there are more IDAs in the system than CPUs, so this cuts down
      on the number of preallocated bitmaps that are unused, and about half
      of the IDA users did not call ida_destroy() so they were leaking IDA
      bitmaps.
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
    • Reimplement IDR and IDA using the radix tree · 0a835c4f
      Matthew Wilcox committed
      The IDR is very similar to the radix tree.  It has some functionality that
      the radix tree did not have (alloc next free, cyclic allocation, a
      callback-based for_each, destroy tree), which is readily implementable on
      top of the radix tree.  A few small changes were needed in order to use a
      tag to represent nodes with free space below them.  More extensive
      changes were needed to support storing NULL as a valid entry in an IDR.
      Plain radix trees still interpret NULL as a not-present entry.
      
      The IDA is reimplemented as a client of the newly enhanced radix tree.  As
      in the current implementation, it uses a bitmap at the last level of the
      tree.
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
      Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  7. 15 December 2016, 3 commits
  8. 26 November 2015, 1 commit
  9. 07 June 2014, 1 commit
  10. 08 April 2014, 1 commit
  11. 17 February 2014, 1 commit
  12. 31 January 2014, 1 commit
  13. 30 April 2013, 1 commit
    • idr: introduce idr_alloc_cyclic() · 3e6628c4
      Jeff Layton committed
      As Tejun points out, there are several users of the IDR facility that
      attempt to use it in a cyclic fashion.  These users are likely to see
      -ENOSPC errors after the counter wraps one or more times however.
      
      This patchset adds a new idr_alloc_cyclic routine and converts several
      of these users to it.  Many of these users are in obscure parts of the
      kernel, and I don't have a good way to test some of them.  The change is
      pretty straightforward though, so hopefully it won't be an issue.
      
      There is one other cyclic user of idr_alloc that I didn't touch in
      ipc/util.c.  That one is doing some strange stuff that I didn't quite
      understand, but it looks like it should probably be converted later
      somehow.
      
      This patch:
      
      Thus spake Tejun Heo:
      
          Ooh, BTW, the cyclic allocation is broken.  It's prone to -ENOSPC
          after the first wraparound.  There are several cyclic users in the
          kernel and I think it probably would be best to implement cyclic
          support in idr.
      
      This patch does that by adding new idr_alloc_cyclic function that such
      users in the kernel can use.  With this, there's no need for a caller to
      keep track of the last value used as that's now tracked internally.  This
      should prevent the ENOSPC problems that can hit when the "last allocated"
      counter exceeds INT_MAX.
      
      Later patches will convert existing cyclic users to the new interface.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Tejun Heo <tj@kernel.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Eric Paris <eparis@parisplace.org>
      Cc: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Cc: John McCutchan <john@johnmccutchan.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Or Gerlitz <ogerlitz@mellanox.com>
      Cc: Robert Love <rlove@rlove.org>
      Cc: Roland Dreier <roland@purestorage.com>
      Cc: Sridhar Samudrala <sri@us.ibm.com>
      Cc: Steve Wise <swise@opengridcomputing.com>
      Cc: Tom Tucker <tom@opengridcomputing.com>
      Cc: Vlad Yasevich <vyasevich@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
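
      A sketch of the new helper in use (hypothetical caller, not from the
      patch):

      	#include <linux/idr.h>

      	static DEFINE_IDR(example_idr);

      	/* The IDR itself remembers the last ID handed out, so the caller
      	 * keeps no private "next id" counter and a wrap past INT_MAX no
      	 * longer leaves it stuck returning -ENOSPC. */
      	static int example_alloc(void *ptr)
      	{
      		return idr_alloc_cyclic(&example_idr, ptr, 1, 0, GFP_KERNEL);
      	}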
  14. 29 March 2013, 1 commit
  15. 14 March 2013, 1 commit
  16. 13 March 2013, 1 commit
  17. 28 February 2013, 10 commits
    • idr: implement lookup hint · 0ffc2a9c
      Tejun Heo committed
      While idr lookup isn't a particularly heavy operation, it still is too
      substantial to use in hot paths without worrying about the performance
      implications.  With recent changes, each idr_layer covers 256 slots
      which should be enough to cover most use cases with single idr_layer
      making lookup hint very attractive.
      
      This patch adds idr->hint which points to the idr_layer which
      allocated an ID most recently and the fast path lookup becomes
      
      	if (lookup target's prefix matches that of the hinted layer)
      		return hint->ary[ID's offset in the leaf layer];
      
      which can be inlined.
      
      idr->hint is set to the leaf node on idr_fill_slot() and cleared from
      free_layer().
      
      [andriy.shevchenko@linux.intel.com: always do slow path when hint is uninitialized]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • idr: add idr_layer->prefix · 54616283
      Tejun Heo committed
      Add a field which carries the prefix of ID the idr_layer covers.  This
      will be used to implement lookup hint.
      
      This patch doesn't make use of the new field and doesn't introduce any
      behavior difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • idr: make idr_layer larger · 050a6b47
      Tejun Heo committed
      With recent preloading changes, idr no longer keeps full layer cache per
      each idr instance (used to be ~6.5k per idr on 64bit) and the previous
      patch removed restriction on the bitmap size.  Both now allow us to have
      larger layers.
      
      Increase IDR_BITS to 8 regardless of BITS_PER_LONG.  Each layer is
      slightly larger than 2k on 64bit and 1k on 32bit and carries 256 entries.
      The size isn't too large, especially compared to what we used to waste on
      per-idr caches, and 256 entries should be able to serve most use cases
      with single layer.  The max tree depth is 4 which is much better than the
      previous 6 on 64bit and 7 on 32bit.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • idr: remove length restriction from idr_layer->bitmap · 1d9b2e1e
      Tejun Heo committed
      Currently, idr_layer->bitmap is declared as a single unsigned long, which
      restricts the number of bits an idr_layer can contain.  All bitops can
      handle arbitrary positive bit numbers and there's no reason for this
      restriction.
      
      Declare idr_layer->bitmap using DECLARE_BITMAP() instead of a single
      unsigned long.
      
      * idr_layer->bitmap is now an array.  '&' dropped from params to
        bitops.
      
      * Replaced "== IDR_FULL" tests with bitmap_full() and removed
        IDR_FULL.
      
      * Replaced find_next_bit() on ~bitmap with find_next_zero_bit().
      
      * Replaced "bitmap = 0" with bitmap_clear().
      
      This patch doesn't (or at least shouldn't) introduce any behavior
      changes.
      
      [akpm@linux-foundation.org: checkpatch fixes]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • idr: remove MAX_IDR_MASK and move left MAX_IDR_* into idr.c · e8c8d1bc
      Tejun Heo committed
      MAX_IDR_MASK is another weirdness in the idr interface.  As idr covers
      the whole positive integer range, it's defined as 0x7fffffff or INT_MAX.
      
      Its usage in idr_find(), idr_replace() and idr_remove() is bizarre.
      They basically mask off the sign bit and operate on the rest, so if
      the caller, by accident, passes in a negative number, the sign bit
      will be masked off and the remaining part will be used as if that was
      the input, which is worse than crashing.
      
      The constant is visible in idr.h and there are several users in the
      kernel.
      
      * drivers/i2c/i2c-core.c:i2c_add_numbered_adapter()
      
        Basically used to test if adap->nr is a negative number which isn't
        -1 and returns -EINVAL if so.  idr_alloc() already has negative
        @start checking (w/ WARN_ON_ONCE), so this can go away.
      
      * drivers/infiniband/core/cm.c:cm_alloc_id()
        drivers/infiniband/hw/mlx4/cm.c:id_map_alloc()
      
        Used to wrap cyclic @start.  Can be replaced with max(next, 0).
        Note that this type of cyclic allocation using idr is buggy.  These
        are prone to spurious -ENOSPC failure after the first wraparound.
      
      * fs/super.c:get_anon_bdev()
      
        The ID allocated from ida is masked off before being tested for whether
        it's inside the valid range.  An ida-allocated ID can never be a
        negative number, so the masking is unnecessary.
      
      Update idr_*() functions to fail with -EINVAL when negative @id is
      specified and update other MAX_IDR_MASK users as described above.
      
      This leaves MAX_IDR_MASK without any user, remove it and relocate
      other MAX_IDR_* constants to lib/idr.c.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jean Delvare <khali@linux-fr.org>
      Cc: Roland Dreier <roland@kernel.org>
      Cc: Sean Hefty <sean.hefty@intel.com>
      Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
      Cc: "Marciniszyn, Mike" <mike.marciniszyn@intel.com>
      Cc: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Cc: Or Gerlitz <ogerlitz@mellanox.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Acked-by: Wolfram Sang <wolfram@the-dreams.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • idr: implement idr_preload[_end]() and idr_alloc() · d5c7409f
      Tejun Heo committed
      The current idr interface is very cumbersome.
      
      * For all allocations, two function calls - idr_pre_get() and
        idr_get_new*() - should be made.
      
      * idr_pre_get() doesn't guarantee that the following idr_get_new*()
        will not fail from memory shortage.  If idr_get_new*() returns
        -EAGAIN, the caller is expected to retry pre_get and allocation.
      
      * idr_get_new*() can't enforce upper limit.  Upper limit can only be
        enforced by allocating and then freeing if above limit.
      
      * idr_layer buffer is unnecessarily per-idr.  Each idr ends up keeping
        around MAX_IDR_FREE idr_layers.  The memory consumed per idr is
        under two pages but it makes it difficult to make idr_layer larger.
      
      This patch implements the following new set of allocation functions.
      
      * idr_preload[_end]() - Similar to radix preload but doesn't fail.
        The first idr_alloc() inside preload section can be treated as if it
        were called with @gfp_mask used for idr_preload().
      
      * idr_alloc() - Allocate an ID w/ lower and upper limits.  Takes
        @gfp_flags and can be used w/o preloading.  When used inside
        preloaded section, the allocation mask of preloading can be assumed.
      
      If idr_alloc() can be called from a context which allows sufficiently
      relaxed @gfp_mask, it can be used by itself.  If, for example,
      idr_alloc() is called inside spinlock protected region, preloading can
      be used like the following.
      
      	idr_preload(GFP_KERNEL);
      	spin_lock(lock);
      
      	id = idr_alloc(idr, ptr, start, end, GFP_NOWAIT);
      
      	spin_unlock(lock);
      	idr_preload_end();
      	if (id < 0)
      		error;
      
      which is much simpler and less error-prone than idr_pre_get and
      idr_get_new*() loop.
      
      The new interface uses a per-cpu idr_layer buffer and thus the number of
      idr's in the system doesn't affect the amount of memory used for
      preloading.
      
      idr_layer_alloc() is introduced to handle idr_layer allocations for
      both old and new ID allocation paths.  This is a bit hairy now but the
      new interface is expected to replace the old and the internal
      implementation eventually will become simpler.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • idr: remove _idr_rc_to_errno() hack · 12d1b439
      Tejun Heo committed
      idr uses -1, IDR_NEED_TO_GROW and IDR_NOMORE_SPACE to communicate
      exception conditions internally.  The return value is later translated
      to errno values using _idr_rc_to_errno().
      
      This is confusing.  Drop the custom ones and consistently use -EAGAIN
      for "tree needs to grow", -ENOMEM for "need more memory" and -ENOSPC for
      "ran out of ID space".
      
      Due to the weird memory preloading mechanism, the id[ra]_get_new*()
      functions return -EAGAIN on memory shortage, so we need to substitute
      -ENOMEM with -EAGAIN in those interface functions.  They'll eventually be cleaned
      up and the translations will go away.
      
      This patch doesn't introduce any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • idr: relocate idr_for_each_entry() and reorganize id[r|a]_get_new() · 49038ef4
      Tejun Heo committed
      * Move idr_for_each_entry() definition next to other idr related
        definitions.
      
      * Make id[r|a]_get_new() inline wrappers of id[r|a]_get_new_above().
      
      This changes the implementation of idr_get_new() but the new
      implementation is trivial.  This patch doesn't introduce any
      functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • idr: cosmetic updates to struct / initializer definitions · 4106ecaf
      Tejun Heo committed
      * Tab align fields like a normal person.
      
      * Drop the unnecessary 0 inits from IDR_INIT().
      
      This patch is purely cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • idr: deprecate idr_remove_all() · fe6e24ec
      Tejun Heo committed
      There was only one legitimate use of idr_remove_all() and many more
      incorrect uses (or places that failed to call it).  Now that idr_destroy()
      implies idr_remove_all() and all the in-kernel users have been updated
      not to use it,
      there's no reason to keep it around.  Mark it deprecated so that we can
      later unexport it.
      
      idr_remove_all() is made an inline function calling __idr_remove_all()
      to avoid triggering deprecated warning on EXPORT_SYMBOL().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 19 February 2013, 1 commit