1. 19 August 2013 (8 commits)
    • Revert "Fixed type in dict.c comment: 265 -> 256." · 00ddc350
      antirez committed
      This reverts commit 6253180a.
    • Fixed type in dict.c comment: 265 -> 256. · 6253180a
      antirez committed
    • assert.h replaced with redisassert.h when appropriate. · 1c754084
      antirez committed
      Also a warning was suppressed by including unistd.h in redisassert.h
      (needed for _exit()).
    • Added redisassert.h as drop in replacement for assert.h. · ca294c6b
      antirez committed
      By using the redisassert.h version of assert() you get stack traces in
      the log instead of the process disappearing on assertions.
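      
      As a hedged sketch of what such a drop-in header can look like (the exact
      macro body is an assumption, not quoted from the commit): assert() is
      redefined to call _redisAssert(), the Redis-side assertion handler, so
      that the failed expression ends up in the log together with a stack
      trace before the process terminates via _exit(), which is also why
      unistd.h is needed.
      
        #ifndef __REDIS_ASSERT_H
        #define __REDIS_ASSERT_H
        
        #include <unistd.h> /* needed for _exit() */
        
        /* On failure, report expression, file and line through the Redis-side
         * assertion handler (which gets a stack trace into the log), then
         * terminate the process. */
        #define assert(_e) ((_e) ? (void)0 : (_redisAssert(#_e, __FILE__, __LINE__), _exit(1)))
        
        void _redisAssert(char *estr, char *file, int line);
        
        #endif
      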
    • dictFingerprint() fingerprinting made more robust. · 905d4822
      antirez committed
      The previous hashing used the trivial algorithm of xoring the integers
      together. This is not optimal, as it is very likely that different hash
      table setups will hash the same: for instance, a hash table at the start
      of the rehashing process and the same table at the end of it will have
      the same fingerprint.
      
      Now we hash the N integers in a smarter way: every integer is summed with
      the previous hash, and the result is passed through the integer hash
      again (see the code for further details). This way a collision is a lot
      less likely. Moreover, this way of hashing explicitly protects against
      the same set of integers in a different order hashing to the same number.
      
      This commit is related to issue #1240.
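      
      A hedged sketch of the described scheme (not the literal dict.c code):
      every integer is summed with the running hash and the result is passed
      through a 64-bit integer hash again; the Thomas Wang mix below is used as
      a plausible stand-in for whatever mixing function dict.c actually applies.
      
        #include <stdint.h>
        
        static long long fingerprint_sketch(const long long *integers, int n) {
            uint64_t hash = 0;
            for (int j = 0; j < n; j++) {
                hash += (uint64_t) integers[j];   /* sum with the previous hash */
                /* ...then hash the integer again (Thomas Wang 64-bit mix). */
                hash = (~hash) + (hash << 21);
                hash = hash ^ (hash >> 24);
                hash = (hash + (hash << 3)) + (hash << 8);
                hash = hash ^ (hash >> 14);
                hash = (hash + (hash << 2)) + (hash << 4);
                hash = hash ^ (hash >> 28);
                hash = hash + (hash << 31);
            }
            return (long long) hash;
        }
      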
    • Fix comments for correctness in zunionInterGenericCommand(). · 3039e806
      antirez committed
      Related to issue #1240.
    • Properly init/release iterators in zunionInterGenericCommand(). · cfb9d025
      antirez committed
      This commit mainly does two things:
      
      1) It fixes zunionInterGenericCommand() by removing the mass-initialization
      of all the iterators used, so that we don't violate the unsafe iterator
      API of dictionaries. This fixes issue #1240.
      
      2) Since the zui* APIs required the iterator to be initialized in the
      zsetopsrc structure even in order to use non-iterator related APIs, this
      commit removes that strict requirement by accessing objects directly via
      the op->subject->ptr pointer to the object.
    • dict.c iterator API misuse protection. · 48cde3fe
      antirez committed
      dict.c allows the user to create unsafe iterators, that is, iterators
      that will not touch the dictionary data structure in any way (preventing
      copy on write) but that are limited in their usage.
      
      The limitation is that while iterating with an unsafe iterator, no call
      to other dictionary functions must be made inside the iteration loop,
      otherwise the dictionary may be incrementally rehashed, resulting in
      missing elements in the set of elements returned by the iterator.
      
      However, after introducing this kind of iterator a number of bugs were
      found due to misuse of the API, and we are still finding bugs about this
      issue. The bugs are not trivial to track down because the only effect is
      missing elements during the iteration.
      
      This commit introduces auto-detection of the API misuse. The idea is
      that an unsafe iterator has a contract: from the initialization to the
      release of the iterator the dictionary should not change.
      
      So we take a fingerprint of the dictionary state, xoring a few important
      dict properties, when the unsafe iterator is initialized. We later check,
      when the iterator is released, whether the fingerprint is still the same.
      If it is not, we have found a misuse of the iterator, as API calls that
      are not allowed changed the internal state of the dictionary.
      
      This code was checked against a real bug, issue #1240.
      
      This is what Redis prints (aborting) when a misuse is detected:
      
      Assertion failed: (iter->fingerprint == dictFingerprint(iter->d)),
      function dictReleaseIterator, file dict.c, line 587.
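      
      A self-contained sketch of the contract check; dictFingerprint() and
      dictReleaseIterator() appear in the assertion above, while the field
      names and the stand-in fingerprint function here are illustrative only.
      
        #include <assert.h>
        
        /* Stand-in dictionary with just enough state to fingerprint; the real
         * code mixes properties such as table pointers, sizes and used counts. */
        typedef struct dict {
            unsigned long size;
            unsigned long used;
            long rehashidx;
        } dict;
        
        long long dictFingerprint(dict *d) {
            /* any deterministic mix of the relevant fields works for the sketch */
            return ((long long)d->size * 31 + (long long)d->used) * 31 + d->rehashidx;
        }
        
        typedef struct dictIterator {
            dict *d;
            int safe;              /* safe iterators skip the fingerprint check */
            long long fingerprint; /* snapshot taken when iteration starts */
        } dictIterator;
        
        /* Taken when an unsafe iterator is initialized... */
        void dictIteratorInit_sketch(dictIterator *iter) {
            if (!iter->safe) iter->fingerprint = dictFingerprint(iter->d);
        }
        
        /* ...and verified at release time: any dictionary mutation performed
         * inside the iteration loop changes the fingerprint and trips the
         * assertion. */
        void dictReleaseIterator_sketch(dictIterator *iter) {
            if (!iter->safe)
                assert(iter->fingerprint == dictFingerprint(iter->d));
        }
      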
  2. 12 August 2013 (6 commits)
  3. 08 August 2013 (2 commits)
    • redis-benchmark: changes to random arguments substitution. · db862e8e
      antirez committed
      Before this commit redis-benchmark supported random arguments in the
      form of :rand:000000000000. In every string of that form, the zeros were
      replaced with a random 12-digit number at every command invocation.
      
      However this was far from perfect, as it did not allow generating plain
      random numbers as arguments; there was always the :rand: prefix.
      
      Now instead every argument in the form __rand_int__ is replaced with a
      12-digit number. Note that "__rand_int__" is itself 12 characters long.
      
      In order to implement the new semantics it was necessary to change a few
      things in the internals of redis-benchmark, as new clients are created by
      cloning old clients, so without a stable prefix such as ":rand:" the old
      way of cloning a client was no longer able to understand, from the old
      command line, where the random strings to substitute were located.
      
      Now instead a client structure is passed as a reference for cloning, so
      that we can directly clone the offsets inside the command line. A sketch
      of the substitution follows below.
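      
      A hedged, self-contained sketch of the substitution idea (not the actual
      redis-benchmark code): the offsets of every "__rand_int__" occurrence are
      recorded once, and before each invocation those 12 bytes are overwritten
      in place with a zero-padded random number, which is also what makes
      cloning a client by offsets straightforward.
      
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        
        #define RAND_TOKEN     "__rand_int__"
        #define RAND_TOKEN_LEN 12   /* the replacement must also be 12 characters */
        
        /* Overwrite each recorded placeholder with a zero-padded random number. */
        static void randomize_placeholders(char *cmd, const size_t *offsets,
                                           int count, long long range) {
            for (int i = 0; i < count; i++) {
                char buf[RAND_TOKEN_LEN + 1];
                snprintf(buf, sizeof(buf), "%012lld", (long long)(rand() % range));
                memcpy(cmd + offsets[i], buf, RAND_TOKEN_LEN);
            }
        }
        
        int main(void) {
            char cmd[] = "SET key:__rand_int__ value";
            size_t offsets[8];
            int count = 0;
            /* Record the offsets once, as when cloning a client structure. */
            for (char *p = cmd; (p = strstr(p, RAND_TOKEN)) != NULL; p += RAND_TOKEN_LEN)
                offsets[count++] = (size_t)(p - cmd);
            randomize_placeholders(cmd, offsets, count, 100000);
            printf("%s\n", cmd);   /* e.g. SET key:000000012345 value */
            return 0;
        }
      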
    • redis-benchmark: replace snprintf()+memcpy with faster code. · 92ab77f8
      antirez committed
      This change was profiler-driven, but the actual effect is hard to
      measure in real-world redis benchmark runs.
  4. 07 August 2013 (6 commits)
  5. 06 August 2013 (4 commits)
    • Add per-db average TTL information in INFO output. · 112fa479
      antirez committed
      Example:
      
      db0:keys=221913,expires=221913,avg_ttl=655
      
      The algorithm uses a running average with only two samples (current and
      previous). Keys found to be already expired are counted at TTL zero even
      if their actual TTL may be negative.
      
      The TTL is reported in milliseconds.
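      
      A hedged sketch of the two-sample running average (names are illustrative,
      not the actual Redis fields): the stored average and the latest sample are
      weighted equally, and already-expired keys contribute a TTL of zero.
      
        long long update_avg_ttl(long long avg_ttl, long long sample_ttl) {
            if (sample_ttl < 0) sample_ttl = 0;  /* expired key: count as TTL 0 */
            if (avg_ttl == 0) return sample_ttl; /* first sample seeds the average */
            return (avg_ttl + sample_ttl) / 2;   /* only current and previous matter */
        }
      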
    • activeExpireCycle(): fix about fast cycle early start. · 4befe73b
      antirez committed
      We don't want to repeat a fast cycle too soon. The previous code was
      broken: we need to wait two times the period *since the start* of the
      previous cycle, so that there is an even spacing between cycles:
      
      .-> start                   .-> second start
      |                           |
      +-------------+-------------+--------------+
      | first cycle |    pause    | second cycle |
      +-------------+-------------+--------------+
      
      The second start and the first start must be PERIOD*2 microseconds apart,
      hence the *2 in the new code. A sketch of the guard follows below.
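      
      A hedged sketch of the guard with assumed names (the real constant and
      variable names inside activeExpireCycle() may differ): a new fast cycle is
      refused unless at least PERIOD*2 microseconds have passed since the start
      of the previous one.
      
        #define FAST_CYCLE_PERIOD 1000     /* assumed max fast cycle duration, usec */
        
        static long long last_fast_cycle_start = 0;
        
        /* Return 1 and record the start time if a fast cycle may begin at
         * now_us (microseconds); return 0 while still inside the cycle+pause
         * window of the previous cycle. */
        int fast_cycle_may_start(long long now_us) {
            if (now_us - last_fast_cycle_start < FAST_CYCLE_PERIOD * 2) return 0;
            last_fast_cycle_start = now_us;
            return 1;
        }
      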
    • Some activeExpireCycle() refactoring. · 6500fabf
      antirez committed
    • Remove dead code and fix comments for new expire code. · d398f388
      antirez committed
  6. 05 August 2013 (2 commits)
    • Darft #2 for key collection algo: more improvements. · 66a26471
      antirez committed
      This commit makes the fast collection cycle time configurable; at the
      same time it does not allow a new fast collection cycle to run again for
      an amount of time equal to the max duration of the fast collection cycle.
    • Draft #1 of a new expired keys collection algorithm. · b09ea1bd
      antirez committed
      The main idea here is that when we are no longer able to expire keys at
      the rate they are created, we can't block for longer in the normal expire
      cycle, as this would result in too big latency spikes.
      
      For this reason the commit introduces a "fast" expire cycle that does
      not run for more than 1 millisecond but is called in the beforeSleep()
      hook of the event loop, so much more often, and with a frequency bound
      to the frequency of executed commands.
      
      The fast expire cycle is only called when the standard expiration
      algorithm runs out of time, that is, when it consumed more than
      REDIS_EXPIRELOOKUPS_TIME_PERC of CPU in a given cycle without being able
      to bring the number of already expired but not yet collected keys below
      25% of the total number of keys.
      
      You can test this commit with different loads, but a simple way is to
      use the following:
      
      Extreme load with pipelining:
      
      redis-benchmark -r 100000000 -n 100000000  \
              -P 32 set ele:rand:000000000000 foo ex 2
      
      Remove the -P 32 in order to avoid the pipelining, for a more real-world
      load.
      
      In another terminal tab you can monitor the Redis behavior with:
      
      redis-cli -i 0.1 -r -1 info keyspace
      
      and
      
      redis-cli --latency-history
      
      Note: this commit will make Redis print a lot of debug messages; it is
      not a good idea to use it in production.
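      
      Beyond the test commands above, a hedged sketch of how the fast cycle
      plugs into the event loop; beforeSleep() is named in the commit, while
      timelimit_exit and EXPIRE_FAST are assumed names used here only for
      illustration.
      
        #define EXPIRE_FAST 1             /* assumed flag selecting the <=1 ms fast mode */
        
        extern int timelimit_exit;        /* assumed: set when the periodic expire cycle
                                           * last ran out of its CPU budget */
        void activeExpireCycle(int type); /* time-bounded expired-keys collection pass */
        
        /* Called from the event loop right before blocking on I/O. */
        void beforeSleep_sketch(void) {
            if (timelimit_exit)
                activeExpireCycle(EXPIRE_FAST);  /* runs for at most ~1 millisecond */
        }
      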
  7. 29 July 2013 (1 commit)
  8. 28 July 2013 (2 commits)
  9. 25 July 2013 (2 commits)
  10. 24 July 2013 (2 commits)
  11. 23 July 2013 (4 commits)
  12. 22 July 2013 (1 commit)