- 03 Dec 2015, 1 commit
-
-
Committed by Enrico Giordani
Since the forked process allocates memory from the system heap, it must verify whether an address belongs to the system heap or to the redis heap before freeing it. Also changed dictRehash to a no-op when called by the forked process, to avoid extra processing that is not needed while the forked process is saving the dataset.
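A minimal sketch of that ownership check and the no-op guard, using hypothetical stand-ins (isForkedChild, inRedisHeap, redisHeapFree) rather than the Windows port's actual names:

```c
#include <stdlib.h>

/* Hypothetical stand-ins for the port's own flag and heap helpers. */
static int isForkedChild = 0;                                   /* 1 in the dataset-saving child */
static int inRedisHeap(const void *p) { (void)p; return 0; }    /* stub: range check on the redis heap */
static void redisHeapFree(void *p)    { (void)p; }              /* stub: free from the redis heap */

/* Free a pointer only through the allocator that actually owns it. */
static void safeFree(void *ptr) {
    if (ptr == NULL) return;
    if (inRedisHeap(ptr))
        redisHeapFree(ptr);    /* inherited from the parent's redis heap */
    else
        free(ptr);             /* allocated by the child from the system heap */
}

/* In the child, incremental rehashing is skipped entirely. */
static int rehashStep(int steps) {
    if (isForkedChild) return 0;   /* no-op while the child is saving the dataset */
    /* ... normal incremental rehash work would go here ... */
    return steps;
}
```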
-
- 02 Sep 2015, 2 commits
-
-
Committed by Enrico Giordani
-
Committed by Enrico Giordani
[Fix] Windows portability: explicit type casting. [Cleanup] Code refactoring and comments; changed the function prefixes to match the function types.
-
- 09 Jul 2015, 1 commit
-
-
Committed by Enrico Giordani
-
- 08 Jul 2015, 1 commit
-
-
Committed by Enrico Giordani
-
- 03 Jun 2015, 1 commit
-
-
Committed by Alexis Campailla
Conflicts: src/anet.c src/aof.c src/bitops.c src/config.c src/db.c src/debug.c src/dict.c src/migrate.c src/object.c src/redis.c src/redis.h src/sentinel.c src/t_list.c src/t_set.c src/t_zset.c src/util.c tests/instances.tcl
-
- 01 Apr 2015, 1 commit
-
-
Committed by antirez
-
- 27 Mar 2015, 1 commit
-
-
Committed by antirez
rehashidx is never negative in the two code paths: the only negative value it can take is -1, which means no rehashing is in progress, and that condition is checked explicitly before rehashidx is used.
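A sketch of the pattern being relied on, with the dict structure reduced to the relevant field:

```c
#include <assert.h>

/* Simplified view of the relevant field. */
typedef struct dict {
    long rehashidx;   /* -1 when no rehashing is in progress */
} dict;

#define dictIsRehashing(d) ((d)->rehashidx != -1)

/* Both code paths only read rehashidx after the rehashing check,
 * so at that point the value is guaranteed to be >= 0. */
static void useRehashIndex(dict *d) {
    if (dictIsRehashing(d)) {
        assert(d->rehashidx >= 0);
        /* safe to use d->rehashidx as a bucket index here */
    }
}
```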
-
- 11 Feb 2015, 10 commits
-
-
Committed by antirez
Fixed by @oranagra, thank you.
-
Committed by antirez
-
Committed by antirez
Avoids the code repetition introduced with PR #2367 and also fixes the return value so that 0 is always returned when there is nothing more to rehash.
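A sketch of the intended contract, with the bucket-moving details elided:

```c
/* dictRehash performs at most n incremental steps and reports whether work
 * remains: it returns 0 when there is nothing (more) to rehash, 1 otherwise.
 * Sketch only; the real implementation lives in dict.c. */
typedef struct dict { long rehashidx; } dict;
#define dictIsRehashing(d) ((d)->rehashidx != -1)

int dictRehash(dict *d, int n) {
    if (!dictIsRehashing(d)) return 0;    /* nothing to do: always report 0 */
    while (n--) {
        /* ... move one bucket from ht[0] to ht[1] ...
         * when ht[0] becomes empty: swap tables, set rehashidx = -1, return 0 */
    }
    return 1;                             /* more buckets still to move */
}
```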
-
Committed by Sun He
-
Committed by antirez
This is very similar to the optimization applied to dictGetRandomKeys, but applied to the single key variant. Related to issue #2306.
-
Committed by antirez
Related to issue #2306.
-
Committed by antirez
We use the invariant that, while rehashing, the original table ht[0] has no entries at indexes below the current rehashing index. Related to issue #2306.
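A sketch of how that invariant is exploited when picking a random bucket (names simplified from dict.c):

```c
#include <stdlib.h>

/* While rehashing, buckets 0 .. rehashidx-1 of ht[0] are guaranteed empty,
 * so the random index is drawn from [rehashidx, ht0size + ht1size) instead
 * of starting at 0. */
unsigned long pickRandomBucket(unsigned long ht0size, unsigned long ht1size,
                               long rehashidx) {
    unsigned long span = ht0size + ht1size - (unsigned long)rehashidx;
    unsigned long idx = (unsigned long)rehashidx + ((unsigned long)rand() % span);
    /* idx < ht0size -> look in ht[0]; otherwise look in ht[1] at idx - ht0size */
    return idx;
}
```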
-
Committed by antirez
Related to issue #2306.
-
Committed by antirez
Related to issue #2306.
-
Committed by antirez
Related to issue #2306.
-
- 06 Oct 2014, 2 commits
-
-
Committed by Matt Stancliff
Some language in the comment was difficult to understand, so this commit: clarifies wording, removes unnecessary words, and relocates some dependent clauses closer to what they actually describe. I also tried to break up longer chains of thought (if X, then Y, and Q, and also F, so obviously M) into more manageable chunks for ease of understanding.
-
Committed by Michael Parker
Closes #1351
-
- 26 Aug 2014, 4 commits
- 21 Mar 2014, 1 commit
-
-
Committed by antirez
This new function is useful to get a number of random entries from a hash table when we just need to do some sampling without a particularly good distribution. It simply jumps to a random place in the hash table and returns the first N items encountered by scanning linearly. The main usefulness of this function is to speed up Redis' internal sampling of the key space, for example for key eviction or expiry.
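A minimal sketch of the sampling approach described above, with the table reduced to a flat array of bucket heads (the real function lives in dict.c and also walks the second table while rehashing):

```c
#include <stdlib.h>

/* Start at a random bucket and scan linearly, collecting the first entries
 * found until 'count' keys are gathered or the whole table has been visited. */
unsigned int sampleSomeKeys(void **buckets, unsigned long size,
                            void **dst, unsigned int count) {
    unsigned long i = (unsigned long)rand() % size;   /* random starting slot */
    unsigned int stored = 0;
    for (unsigned long visited = 0; stored < count && visited < size; visited++) {
        if (buckets[i] != NULL)
            dst[stored++] = buckets[i];   /* take the head of this bucket's chain */
        i = (i + 1) % size;               /* linear scan with wrap-around */
    }
    return stored;   /* may be fewer than requested */
}
```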
-
- 11 Mar 2014, 2 commits
-
-
Committed by zhanghailei
-
Committed by zhanghailei
-
- 11 Dec 2013, 1 commit
-
-
Committed by antirez
The Redis hash table implementation has many non-blocking features, like incremental rehashing; however, while deleting a large hash table there was no way to have a callback invoked to do some incremental work. This commit adds that support as an optional callback argument to dictEmpty() which is currently called at a fixed interval (once every 65k deletions).
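A sketch of the hook, assuming a simplified table layout (the real code frees each bucket's chain and passes the dict's private data pointer):

```c
/* While emptying a large table, invoke the optional callback once every
 * 65536 slots so the caller can perform periodic housekeeping during an
 * otherwise long, blocking delete. */
void clearTable(void **buckets, unsigned long size,
                void (*callback)(void *), void *privdata) {
    for (unsigned long i = 0; i < size; i++) {
        if (callback && (i & 65535) == 0) callback(privdata);
        buckets[i] = NULL;   /* the real code frees the whole chain here */
    }
}
```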
-
- 05 Dec 2013, 1 commit
-
-
Committed by antirez
-
- 05 Nov 2013, 1 commit
-
-
Committed by antirez
-
- 28 Oct 2013, 1 commit
-
-
Committed by antirez
-
- 25 Oct 2013, 4 commits
-
-
Committed by antirez
-
Committed by antirez
-
Committed by Pieter Noordhuis
The irrelevant bits shouldn't be masked to 1. This can result in slots being skipped when the hash table is resized between calls to the iterator.
-
Committed by Pieter Noordhuis
-
- 20 Aug 2013, 1 commit
-
-
Committed by antirez
-
- 19 Aug 2013, 4 commits
-
-
Committed by antirez
-
Committed by antirez
A warning was also suppressed by including unistd.h in redisassert.h (needed for _exit()).
-
Committed by antirez
The previous hashing used the trivial algorithm of xoring the integers together. This is not optimal, as it is very likely that different hash table setups will hash the same: for instance, a hash table at the start of the rehashing process and at the end will have the same fingerprint. Now we hash the N integers in a smarter way, by summing each integer into the previous hash and hashing the resulting integer again (see the code for further details). This way it is a lot less likely that we get a collision. Moreover, this way of hashing explicitly protects against the same set of integers in a different order hashing to the same number. This commit is related to issue #1240.
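A small sketch of that combining scheme; the mix function below is a stand-in (a common 64-bit finalizer), not necessarily the integer hash used in dict.c:

```c
#include <stdint.h>

/* Stand-in 64-bit integer hash, used only for illustration. */
static uint64_t mix64(uint64_t x) {
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
    x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
    x ^= x >> 33;
    return x;
}

/* Sum each value into the running hash, then re-hash the sum before mixing
 * in the next value, so both the values and their order affect the result. */
uint64_t fingerprint(const uint64_t *vals, int n) {
    uint64_t h = 0;
    for (int i = 0; i < n; i++) {
        h += vals[i];
        h = mix64(h);
    }
    return h;
}
```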