1. 26 Aug 2016 (1 commit)
    • rhashtable: add rhashtable_lookup_get_insert_key() · 5ca8cc5b
      Committed by Pablo Neira Ayuso
      This patch modifies __rhashtable_insert_fast() so it returns the
      existing object that clashes with the one that you want to insert.
      In case the object is successfully inserted, NULL is returned.
      Otherwise, you get an error via ERR_PTR().
      
      This patch adapts the existing callers of __rhashtable_insert_fast()
      so they handle this new logic, and it adds a new
      rhashtable_lookup_get_insert_key() interface to fetch this existing
      object.
      
      nf_tables needs this change to improve its handling of EEXIST cases:
      it can then honor the NLM_F_EXCL flag and check whether the data part
      of the mapping matches what we already have.
      
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      5ca8cc5b
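
      A hedged usage sketch of the new interface, assuming the behaviour
      described above (NULL on successful insertion, the clashing object on
      a duplicate, ERR_PTR() on error); the element type my_elem and its
      fields are illustrative, not taken from the patch:

          #include <linux/err.h>
          #include <linux/rhashtable.h>

          struct my_elem {
                  u32 key;
                  struct rhash_head node;
          };

          static int my_insert(struct rhashtable *ht, struct my_elem *elem,
                               const struct rhashtable_params params)
          {
                  void *old;

                  old = rhashtable_lookup_get_insert_key(ht, &elem->key,
                                  &elem->node, params);
                  if (IS_ERR(old))
                          return PTR_ERR(old);   /* insertion failed, e.g. -ENOMEM */
                  if (old)
                          return -EEXIST;        /* the clashing object was returned */
                  return 0;                      /* NULL: elem was inserted */
          }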
  2. 05 Apr 2016 (1 commit)
  3. 16 Dec 2015 (1 commit)
  4. 05 Dec 2015 (1 commit)
    • rhashtable: Prevent spurious EBUSY errors on insertion · 3cf92222
      Committed by Herbert Xu
      Thomas and Phil observed that under stress rhashtable insertion
      sometimes failed with EBUSY, even though this error should only
      ever be seen when we're under attack and our hash chain length
      has grown to an unacceptable level, even after a rehash.
      
      It turns out that the logic for detecting whether there is an
      existing rehash is faulty.  In particular, when two threads both
      try to grow the same table at the same time, one of them may see
      the newly grown table and thus erroneously conclude that it had
      been rehashed.  This is what leads to the EBUSY error.
      
      This patch fixes this by remembering the last table we used
      during insertion so that rhashtable_insert_rehash can detect
      when another thread has also done a resize/rehash.  When this is
      detected we give up our own resize/rehash and simply retry the
      insertion with the new table.
      Reported-by: Thomas Graf <tgraf@suug.ch>
      Reported-by: Phil Sutter <phil@nwl.cc>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Tested-by: Phil Sutter <phil@nwl.cc>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3cf92222
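
      A simplified sketch of the detection idea described above; this is a
      paraphrase of the fix, not the verbatim patch, and last_tbl names the
      table the caller remembered from its insertion attempt:

          #include <linux/rhashtable.h>

          /*
           * If the table we last hashed into already has a future_tbl,
           * another thread has done the resize/rehash for us: give up our
           * own attempt and retry the insertion against the newer table
           * instead of failing the insertion with -EBUSY.
           */
          static struct bucket_table *
          other_thread_rehashed(struct rhashtable *ht,
                                struct bucket_table *last_tbl)
          {
                  return rht_dereference_rcu(last_tbl->future_tbl, ht);
          }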
  5. 17 May 2015 (1 commit)
    • rhashtable: Add cap on number of elements in hash table · 07ee0722
      Committed by Herbert Xu
      We currently have no limit on the number of elements in a hash table.
      This is a problem because some users (tipc) set a ceiling on the
      maximum table size, and when that is reached the hash table may
      degenerate.  Others may encounter OOM when growing, and if we allow
      insertions when that happens the hash table performance may also
      suffer.
      
      This patch adds a new parameter, insecure_max_entries, which becomes
      the cap on the table.  If unset it defaults to max_size * 2.  If
      that is also zero it means that there is no cap on the number of
      elements in the table.  However, the table will grow whenever the
      utilisation hits 100%, and if that growth fails you will get ENOMEM
      on insertion.
      
      As allowing oversubscription is potentially dangerous, the name
      contains the word insecure.
      
      Note that the cap is not a hard limit.  This is done for performance
      reasons, as enforcing a hard limit would require atomic ops that are
      heavier than the ones we currently use.

      The reasoning is that we're only guarding against a gross
      oversubscription of the table, rather than a small breach of the limit.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      07ee0722
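
      An illustrative parameter block showing where the new cap plugs in;
      the element type and the numeric values are examples, not
      recommendations from the patch:

          #include <linux/rhashtable.h>
          #include <linux/stddef.h>

          struct my_elem {
                  u32 key;
                  struct rhash_head node;
          };

          static const struct rhashtable_params my_params = {
                  .head_offset          = offsetof(struct my_elem, node),
                  .key_offset           = offsetof(struct my_elem, key),
                  .key_len              = sizeof(u32),
                  .max_size             = 4096,
                  /* soft cap on elements; defaults to max_size * 2 if unset */
                  .insecure_max_entries = 8192,
          };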
  6. 24 Apr 2015 (1 commit)
    • rhashtable: don't attempt to grow when at max_size · 1d8dc3d3
      Committed by Johannes Berg
      The conversion of mac80211's station table to rhashtable had a bug
      that I found by accident during code review; it hadn't been caught
      earlier because rhashtable apparently managed to keep a maximum hash
      chain length of one (!) in all our testing.
      
      To test the bug and verify the fix, I set my rhashtable's
      max_size very low (4) in order to force hash collisions.
      
      At that point, rhashtable WARNed in rhashtable_insert_rehash() but
      didn't actually reject the hash table insertion. This caused it to
      lose insertions - my master list of stations would have 9 entries,
      but the rhashtable only had 5. This may warrant a deeper look, but
      that WARN_ON() just shouldn't happen.
      
      Fix this by not returning true from rht_grow_above_100() when the
      rhashtable's max_size has been reached - in this case the user is
      explicitly configuring it to be at most that big, so even if it's
      now above 100% it shouldn't attempt to resize.
      
      This fixes the "lost insertion" issue and consequently allows my
      code to display its error (and verify my fix for it).
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1d8dc3d3
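
      The change amounts to an extra condition in the growth check, roughly
      as follows; this is a sketch reconstructed from the description above
      (hence the _sketch suffix), not a verbatim copy of the patch:

          #include <linux/rhashtable.h>

          /*
           * Do not report >100% utilisation once max_size has been reached,
           * so the table no longer attempts to grow past the ceiling the
           * user explicitly configured.
           */
          static inline bool rht_grow_above_100_sketch(const struct rhashtable *ht,
                                                       const struct bucket_table *tbl)
          {
                  return atomic_read(&ht->nelems) > tbl->size &&
                         (!ht->p.max_size || tbl->size < ht->p.max_size);
          }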
  7. 26 Mar 2015 (1 commit)
  8. 25 Mar 2015 (5 commits)
  9. 24 Mar 2015 (5 commits)
  10. 21 Mar 2015 (4 commits)
  11. 19 Mar 2015 (3 commits)
  12. 15 Mar 2015 (4 commits)
  13. 13 Mar 2015 (1 commit)
    • rhashtable: kill ht->shift atomic operations · a5b6846f
      Committed by Daniel Borkmann
      Commit c0c09bfd ("rhashtable: avoid unnecessary wakeup for worker
      queue") changed ht->shift to be atomic, which is actually unnecessary.
      
      Instead of leaving the current shift in the core rhashtable structure,
      it can be cached inside the individual bucket tables.
      
      There, it will only be initialized once during a new table allocation
      in the shrink/expansion slow path, and from then onward it stays immutable
      for the rest of the bucket table's lifetime.
      
      That allows shift to be non-atomic. The patch also moves hash_rnd
      management into the table setup. The rhashtable structure now consumes
      3 instead of 4 cachelines.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Ying Xue <ying.xue@windriver.com>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a5b6846f
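
      A rough shape of the resulting layout, given as an assumption-level
      sketch with fields abridged rather than the exact structure:

          #include <linux/types.h>

          /*
           * After the patch, per-table state that used to sit in struct
           * rhashtable: shift and hash_rnd are written once when a new
           * table is allocated in the shrink/expansion slow path and stay
           * immutable for the table's lifetime, so no atomics are needed.
           */
          struct bucket_table {
                  size_t  size;
                  u32     hash_rnd;
                  u32     shift;
                  /* locks and the flexible buckets[] array omitted */
          };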
  14. 12 Mar 2015 (1 commit)
  15. 28 Feb 2015 (1 commit)
  16. 22 Feb 2015 (1 commit)
  17. 05 Feb 2015 (1 commit)
    • rhashtable: Introduce rhashtable_walk_* · f2dba9c6
      Committed by Herbert Xu
      Some existing rhashtable users get too intimate with it by walking
      the buckets directly.  This prevents us from easily changing the
      internals of rhashtable.
      
      This patch adds the helpers rhashtable_walk_init/exit/start/next/stop
      which will replace these custom walkers.
      
      They are meant to be usable both for procfs seq_file walks and for
      netlink dump walks.  The iterator structure should fit
      inside a netlink dump cb structure, with at least one element to
      spare.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f2dba9c6
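
      A hedged usage sketch of the walker, assuming the signatures this
      interface introduces (rhashtable_walk_init taking just ht and iter);
      my_elem is a placeholder element type and error handling is abridged:

          #include <linux/err.h>
          #include <linux/kernel.h>
          #include <linux/rhashtable.h>

          struct my_elem {
                  u32 key;
                  struct rhash_head node;
          };

          static void dump_table(struct rhashtable *ht)
          {
                  struct rhashtable_iter iter;
                  struct my_elem *obj;

                  if (rhashtable_walk_init(ht, &iter))
                          return;
                  if (rhashtable_walk_start(&iter) == -EAGAIN)
                          pr_debug("resize before walk; entries may repeat\n");

                  while ((obj = rhashtable_walk_next(&iter)) != NULL) {
                          if (IS_ERR(obj)) {
                                  if (PTR_ERR(obj) == -EAGAIN)
                                          continue;  /* resized mid-walk */
                                  break;
                          }
                          pr_info("key %u\n", obj->key);
                  }

                  rhashtable_walk_stop(&iter);
                  rhashtable_walk_exit(&iter);
          }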
  18. 26 Jan 2015 (1 commit)
  19. 16 Jan 2015 (1 commit)
    • rhashtable: Fix race in rhashtable_destroy() and use regular work_struct · 57699a40
      Committed by Ying Xue
      When we put our declared work task on the global workqueue with
      schedule_delayed_work(), its delay parameter is always zero.
      Therefore, we should define a regular work item in the rhashtable
      structure instead of a delayed work.

      We also add a check that the resizing functions are non-NULL before
      cancelling the work, so that we never cancel an uninitialized work
      item.

      Lastly, rhashtable_destroy() takes ht->mutex and then waits with
      cancel_delayed_work() for all previously submitted work items to run
      to completion.  cancel_delayed_work() does not return until all work
      items have finished, and when a work item runs, its function,
      rht_deferred_worker(), is called.  However, rht_deferred_worker()
      also needs to acquire ht->mutex, so a deadlock can occur because the
      lock is already held.  Moving the cancel call out of the scope
      covered by the lock avoids the deadlock.
      
      Fixes: 97defe1e ("rhashtable: Per bucket locks & deferred expansion/shrinking")
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Cc: Thomas Graf <tgraf@suug.ch>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      57699a40
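
      A simplified sketch of the resulting teardown ordering; this is an
      assumption-level paraphrase, and field names such as being_destroyed,
      run_work and the grow/shrink decision callbacks reflect the
      rhashtable of that era rather than a guaranteed API:

          /*
           * Cancel the deferred resize work before taking ht->mutex: the
           * worker, rht_deferred_worker(), also takes ht->mutex, so waiting
           * for it to finish while holding the lock could deadlock.
           */
          static void rhashtable_destroy_sketch(struct rhashtable *ht)
          {
                  ht->being_destroyed = true;

                  /* only cancel if deferred resizing was ever set up */
                  if (ht->p.grow_decision || ht->p.shrink_decision)
                          cancel_work_sync(&ht->run_work);

                  mutex_lock(&ht->mutex);
                  bucket_table_free(rht_dereference(ht->tbl, ht));
                  mutex_unlock(&ht->mutex);
          }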
  20. 14 Jan 2015 (2 commits)
  21. 09 Jan 2015 (2 commits)
  22. 05 Jan 2015 (1 commit)