1. 03 Dec 2015 (1 commit)
    • [Fix] FreeHeapBlock should check if the addr is in the redis heap. · b186c277
      Enrico Giordani committed
      Since the forked process allocates its memory from the system heap,
      it must verify whether an address is in the system heap or in the
      redis heap before freeing it.
      Also changed dictRehash to be a no-op when called by the forked
      process, to avoid extra processing that is not required while the
      forked process is saving the dataset.
  2. 02 Sep 2015 (2 commits)
  3. 09 Jul 2015 (1 commit)
  4. 08 Jul 2015 (1 commit)
  5. 03 Jun 2015 (1 commit)
    • Initial merge of antirez\3.0 · 3895a7d1
      Alexis Campailla committed
      Conflicts:
      	src/anet.c
      	src/aof.c
      	src/bitops.c
      	src/config.c
      	src/db.c
      	src/debug.c
      	src/dict.c
      	src/migrate.c
      	src/object.c
      	src/redis.c
      	src/redis.h
      	src/sentinel.c
      	src/t_list.c
      	src/t_set.c
      	src/t_zset.c
      	src/util.c
      	tests/instances.tcl
  6. 01 Apr 2015 (1 commit)
  7. 27 Mar 2015 (1 commit)
    • dict.c: add casting to avoid compilation warning. · adcb4701
      antirez committed
      rehashidx is always positive in the two code paths, since the only
      negative value it could have is -1 when there is no rehashing in
      progress, and the condition is explicitly checked.
  8. 11 Feb 2015 (10 commits)
  9. 06 Oct 2014 (2 commits)
    • Cleanup wording of dictScan() comment · eb19819d
      Matt Stancliff committed
      Some language in the comment was difficult to understand, so this
      commit clarifies wording, removes unnecessary words, and relocates
      some dependent clauses closer to what they actually describe.
      
      I also tried to break up longer chains of thought
      (if X, then Y, and Q, and also F, so obviously M)
      into more manageable chunks for ease of understanding.
    • Fix hash table size in comment for dictScan · c51baf64
      Michael Parker committed
      Closes #1351
  10. 26 Aug 2014 (4 commits)
  11. 21 Mar 2014 (1 commit)
    • Added dictGetRandomKeys() to dict.c: mass get random entries. · 26292670
      antirez committed
      This new function is useful to get a number of random entries from a
      hash table when we just need to do some sampling without a
      particularly good distribution.
      
      It simply jumps to a random place in the hash table and returns the
      first N items encountered by scanning linearly.
      
      The main usefulness of this function is to speed up Redis's internal
      sampling of the key space, for example for key eviction or expiry.
  12. 11 Mar 2014 (2 commits)
  13. 11 Dec 2013 (1 commit)
    • dict.c: added optional callback to dictEmpty(). · 2eb781b3
      antirez committed
      The Redis hash table implementation has many non-blocking features,
      like incremental rehashing; however, while deleting a large hash
      table there was no way to have a callback invoked to do some
      incremental work.
      
      This commit adds that support, as an optional callback argument to
      dictEmpty() that is currently called at a fixed interval (once every
      65k deletions).
  14. 05 Dec 2013 (1 commit)
  15. 05 Nov 2013 (1 commit)
  16. 28 Oct 2013 (1 commit)
  17. 25 Oct 2013 (4 commits)
  18. 20 Aug 2013 (1 commit)
  19. 19 Aug 2013 (4 commits)
    • Revert "Fixed type in dict.c comment: 265 -> 256." · 00ddc350
      antirez committed
      This reverts commit 6253180a.
    • Fixed type in dict.c comment: 265 -> 256. · 6253180a
      antirez committed
    • assert.h replaced with redisassert.h when appropriate. · 1c754084
      antirez committed
      Also a warning was suppressed by including unistd.h in redisassert.h
      (needed for _exit()).
    • dictFingerprint() fingerprinting made more robust. · 905d4822
      antirez committed
      The previous hashing used the trivial algorithm of xoring the
      integers together. This is not optimal, as it is very likely that
      different hash table setups will hash the same: for instance, a hash
      table at the start of the rehashing process and at the end will have
      the same fingerprint.
      
      Now we hash the N integers in a smarter way, by summing every integer
      into the previous hash and hashing the resulting integer again (see
      the code for further details). This way it is much less likely that
      we get a collision. Moreover, this way of hashing explicitly protects
      against the same set of integers in a different order hashing to the
      same number.
      
      This commit is related to issue #1240.