1. 04 Mar, 2014 · 2 commits
  2. 11 Dec, 2013 · 1 commit
    • dict.c: added optional callback to dictEmpty(). · 2eb781b3
      Committed by antirez
      The Redis hash table implementation has many non-blocking features,
      such as incremental rehashing; however, while deleting a large hash
      table there was no way to have a callback invoked to do some
      incremental work.

      This commit adds that support, as an optional callback argument to
      dictEmpty() that is currently called at a fixed interval (once every
      65k deletions).
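      As a sketch of the pattern (using a simplified table and illustrative
      names, not the real dict.c internals), the callback hook looks like
      this in C:

          #include <stdlib.h>

          /* Simplified stand-in for a hash table; the real dict.c walks
           * bucket chains across two tables, but the callback pattern is
           * the same. */
          typedef struct {
              void **slots;
              unsigned long size;
              void *privdata;   /* opaque pointer passed to the callback */
          } table;

          /* Empty the table, invoking 'callback' at a fixed interval,
           * here once every 65,536 iterations via a power-of-two mask. */
          void table_empty(table *t, void (*callback)(void *)) {
              for (unsigned long i = 0; i < t->size; i++) {
                  if (callback && (i & 65535) == 0) callback(t->privdata);
                  free(t->slots[i]);
                  t->slots[i] = NULL;
              }
          }

      A caller can pass, for example, a function that processes pending
      events, so that flushing a very large table does not block the server
      for the whole duration of the delete.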
  3. 05 Dec, 2013 · 1 commit
  4. 05 Nov, 2013 · 1 commit
  5. 28 Oct, 2013 · 1 commit
  6. 25 Oct, 2013 · 4 commits
  7. 20 Aug, 2013 · 1 commit
  8. 19 Aug, 2013 · 5 commits
    • Revert "Fixed typo in dict.c comment: 265 -> 256." · 00ddc350
      Committed by antirez
      This reverts commit 6253180a.
    • Fixed typo in dict.c comment: 265 -> 256. · 6253180a
      Committed by antirez
    • assert.h replaced with redisassert.h when appropriate. · 1c754084
      Committed by antirez
      A warning was also suppressed by including unistd.h in redisassert.h
      (needed for _exit()).
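      In outline, such a drop-in header can look like this (a sketch based
      on the commit message, not necessarily the exact file; note the
      unistd.h include that provides _exit()):

          #ifndef __REDIS_ASSERT_H__
          #define __REDIS_ASSERT_H__

          #include <unistd.h> /* for _exit() */

          /* Replace the libc assert with one that reports through Redis
           * and then terminates immediately. */
          #define assert(_e) \
              ((_e) ? (void)0 : (_redisAssert(#_e, __FILE__, __LINE__), _exit(1)))

          void _redisAssert(char *estr, char *file, int line);

          #endif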
    • dictFingerprint() fingerprinting made more robust. · 905d4822
      Committed by antirez
      The previous hashing used the trivial algorithm of xoring the integers
      together. This is not optimal, as it is very likely that different
      hash table setups will hash to the same value: for instance, a hash
      table at the start of the rehashing process and at the end will have
      the same fingerprint.

      Now we hash the N integers in a smarter way, by summing every integer
      into the previous hash and hashing the resulting integer again (see
      the code for further details). This way it is a lot less likely that
      we get a collision. Moreover, this way of hashing explicitly prevents
      the same set of integers in a different order from hashing to the
      same number.

      This commit is related to issue #1240.
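      The chaining scheme can be sketched as follows; int_hash() is Thomas
      Wang's 64-bit integer mix, which is the style of hash dict.c applies
      at each step (in dict.c the inputs are properties such as the two
      table pointers, their sizes, and their used counts):

          #include <stdint.h>

          /* Thomas Wang's 64-bit integer mix. */
          static uint64_t int_hash(uint64_t h) {
              h = (~h) + (h << 21);
              h = h ^ (h >> 24);
              h = (h + (h << 3)) + (h << 8);
              h = h ^ (h >> 14);
              h = (h + (h << 2)) + (h << 4);
              h = h ^ (h >> 28);
              h = h + (h << 31);
              return h;
          }

          /* Fold N integers into one fingerprint: add each value to the
           * running hash, then hash the result again. Unlike a plain XOR,
           * the result depends on the order of the inputs. */
          static uint64_t fingerprint(const uint64_t *vals, int n) {
              uint64_t h = 0;
              for (int i = 0; i < n; i++)
                  h = int_hash(h + vals[i]);
              return h;
          }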
    • dict.c iterator API misuse protection. · 48cde3fe
      Committed by antirez
      dict.c allows the user to create unsafe iterators: iterators that will
      not touch the dictionary data structure in any way (preventing
      copy-on-write), but that at the same time are limited in their usage.

      The limitation is that when iterating with an unsafe iterator, no
      calls to other dictionary functions may be made inside the iteration
      loop; otherwise the dictionary may be incrementally rehashed,
      resulting in missing elements in the set of elements returned by the
      iterator.

      However, after introducing this kind of iterator a number of bugs were
      found due to misuses of the API, and we are still finding bugs about
      this issue. The bugs are not trivial to track because the only effect
      is missing elements during the iteration.

      This commit introduces auto-detection of the API misuse. The idea is
      that an unsafe iterator has a contract: from initialization to the
      release of the iterator, the dictionary must not change.

      So we take a fingerprint of the dictionary state, xoring a few
      important dict properties together when the unsafe iterator is
      initialized. We later check, when the iterator is released, whether
      the fingerprint is still the same. If it is not, we have found a
      misuse of the iterator, as disallowed API calls changed the internal
      state of the dictionary.

      This code was checked against a real bug, issue #1240.

      This is what Redis prints (aborting) when a misuse is detected:

      Assertion failed: (iter->fingerprint == dictFingerprint(iter->d)),
      function dictReleaseIterator, file dict.c, line 587.
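      The contract can be sketched like this (self-contained, simplified
      stand-ins for the real dict and iterator structs; the real dict has
      two hash tables and more state):

          #include <assert.h>
          #include <stdint.h>

          typedef struct dict {
              void **table;
              unsigned long size;
              unsigned long used;
          } dict;

          typedef struct dictIterator {
              dict *d;
              int safe;              /* safe iterators pin rehashing instead */
              long long fingerprint; /* dict state snapshot taken at creation */
          } dictIterator;

          /* Combine a few important dict properties into one value. */
          static long long dict_fingerprint(dict *d) {
              return (long long)(uintptr_t)d->table ^
                     (long long)d->size ^ (long long)d->used;
          }

          static void iterator_init_unsafe(dictIterator *it, dict *d) {
              it->d = d;
              it->safe = 0;
              it->fingerprint = dict_fingerprint(d);
          }

          static void iterator_release(dictIterator *it) {
              /* If a disallowed call mutated the dict while the unsafe
               * iterator was live, the fingerprints differ and we abort. */
              if (!it->safe)
                  assert(it->fingerprint == dict_fingerprint(it->d));
          }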
  9. 19 Jan, 2013 · 1 commit
  10. 28 Nov, 2012 · 1 commit
  11. 09 Nov, 2012 · 1 commit
  12. 05 Oct, 2012 · 1 commit
    • Hash function switched to murmurhash2. · da920e75
      Committed by antirez
      The previously used hash function, djbhash, is not secure against
      collision attacks even when the seed is randomized, as there are
      simple ways to find seed-independent collisions.

      The new hash function appears to be safe (or at least much harder to
      exploit) in this case, and has better distribution.

      Better distribution does not always mean better performance. For
      instance, in a fast benchmark with "DEBUG POPULATE 1000000" I obtained
      the following results:

          1.6 seconds with djbhash
          2.0 seconds with murmurhash2

      This is because djbhash hashes keys that follow the pattern
      `prefix:<id>`, where the ids are numerically near, into nearby
      buckets. This improves locality.

      However, in other access patterns where the keys have no such
      relation, murmurhash2 has some (apparently minimal) speed advantage.

      On the other hand, better distribution should significantly improve
      the quality of the elements returned by dictGetRandomKey(), which is
      used in SPOP, SRANDMEMBER, RANDOMKEY, and other commands.

      Everything considered, and under the suspicion that this commit fixes
      a security issue in Redis, we are switching to the new hash function.
      If a serious speed regression is found in the future, we will be able
      to step back easily.

      This commit fixes issue #663.
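      For reference, a seeded 32-bit MurmurHash2 in the shape Redis adopted
      (a sketch; dict.c reads the seed from a global that can be set at
      startup):

          #include <stdint.h>
          #include <string.h>

          /* 32-bit MurmurHash2 (Austin Appleby). The random seed makes
           * the hash values unpredictable to clients, which is the point
           * of this change. */
          uint32_t murmurhash2(const void *key, int len, uint32_t seed) {
              const uint32_t m = 0x5bd1e995;
              const int r = 24;
              uint32_t h = seed ^ (uint32_t)len;
              const unsigned char *data = (const unsigned char *)key;

              while (len >= 4) {
                  uint32_t k;
                  memcpy(&k, data, 4); /* avoid unaligned loads */
                  k *= m; k ^= k >> r; k *= m;
                  h *= m; h ^= k;
                  data += 4; len -= 4;
              }
              switch (len) {              /* trailing bytes */
              case 3: h ^= data[2] << 16; /* fall through */
              case 2: h ^= data[1] << 8;  /* fall through */
              case 1: h ^= data[0]; h *= m;
              }
              h ^= h >> 13; h *= m; h ^= h >> 15;
              return h;
          }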
  13. 22 Apr, 2012 · 1 commit
  14. 19 Apr, 2012 · 1 commit
  15. 07 Apr, 2012 · 2 commits
  16. 15 Mar, 2012 · 1 commit
  17. 22 Jan, 2012 · 2 commits
  18. 09 Nov, 2011 · 2 commits
  19. 08 Nov, 2011 · 1 commit
  20. 02 Nov, 2011 · 1 commit
  21. 10 May, 2011 · 1 commit
  22. 11 Feb, 2011 · 1 commit
  23. 03 Nov, 2010 · 1 commit
  24. 15 Sep, 2010 · 1 commit
    • This should fix Issue 332: when there is a background process saving we still... · 3856f147
      Committed by antirez
      This should fix Issue 332: when there is a background process saving,
      we still allow the hash tables to grow, but only when a critical
      threshold is reached. Formerly we prevented the resize entirely,
      triggering pathological O(N) behavior. There is also a fix for the
      statistics in INFO about the number of keys expired.
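      The described logic can be sketched as follows (illustrative names;
      in dict.c the flag is toggled around the background save and the
      forced ratio is a small constant):

          /* Set to 0 while a background save (fork) is in progress, so
           * that rehashing does not touch pages and defeat copy-on-write. */
          static int dict_can_resize = 1;

          /* Even while saving, allow the resize once the table is badly
           * overloaded; otherwise chains grow and lookups degrade toward
           * O(N). */
          static unsigned int dict_force_resize_ratio = 5;

          static int table_needs_expand(unsigned long used,
                                        unsigned long size) {
              if (size == 0) return 1; /* initial allocation */
              return used >= size &&
                     (dict_can_resize ||
                      used / size > dict_force_resize_ratio);
          }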
  25. 27 Jul, 2010 · 1 commit
  26. 25 Jul, 2010 · 3 commits
    • Add zcalloc and use it where appropriate · 399f2f40
      Committed by Benjamin Kramer
      calloc is more efficient than malloc+memset when the system uses mmap
      to allocate memory: mmap always returns zeroed memory, so the memset
      can be avoided. The threshold for using mmap is 16k in the OS X libc
      and 128k in the BSD libc and glibc. The kernel can lazily allocate the
      pages; this reduces memory usage when we have a page table or hash
      table that is mostly empty.

      This change is most visible when you start a new Redis instance with
      VM enabled. You'll see no increased memory usage no matter how big
      your page table is.
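      The difference can be illustrated like this (a sketch, not the actual
      zmalloc.c, which also records per-allocation sizes):

          #include <stdlib.h>
          #include <string.h>

          /* malloc+memset writes to every page, forcing the kernel to
           * back the whole allocation with physical memory immediately. */
          void *zeroed_eager(size_t size) {
              void *p = malloc(size);
              if (p) memset(p, 0, size);
              return p;
          }

          /* For large sizes served via mmap, calloc can skip the memset:
           * fresh mmap pages are already zero and are mapped lazily, so a
           * mostly-empty table costs almost no physical memory. */
          void *zeroed_lazy(size_t size) {
              return calloc(1, size);
          }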
    • Remove _dictAlloc and friends · d9dd352b
      Committed by Benjamin Kramer
      zmalloc calls abort() on failure, so _dictPanic will never be called.
    • Reduce code duplication · b1e0bd4b
      Committed by Benjamin Kramer
  27. 01 Jul, 2010 · 1 commit
    • redis.c split into many different C files. · e2641e09
      Committed by antirez
      networking related stuff moved into networking.c
      
      moved more code
      
      more work on layout of source code
      
      SDS instantaneous memory saving. By Pieter and Salvatore at VMware ;)
      
      cleanly compiling again after the first split, now splitting it in more C files
      
      moving more things around... work in progress
      
      split replication code
      
      splitting more
      
      Sets split
      
      Hash split
      
      replication split
      
      even more splitting
      
      more splitting
      
      minor change