1. 03 Dec 2012 (4 commits)
    • Test: fixed osx "leaks" support in test. · e3973521
      antirez committed
      Due to changes in recent releases of the osx leaks utility, the osx leak
      detection no longer worked. It is now fixed in a way that should be
      backward compatible.
    • Blocking POP: use a dictionary to store keys client side. · 07a9f854
      antirez committed
      To store the keys a client blocks on during a blocking pop operation, in
      the case the client is blocked waiting for more data to arrive, we used a
      simple linear array of redis objects in the blockingState structure:
      
          robj **keys;
          int count;
      
      However, in order to fix issue #801, we also used a dictionary, so that
      the same client would not end up in the blocked clients queue for the
      same key multiple times.
      
      The dictionary was only temporary, just to avoid duplicates, but since we
      create and destroy it anyway there is no point in doing this work twice,
      so this commit simply uses the dictionary as the main structure to store
      the keys we are blocked on. Instead of the previous fields we now just
      have:
      
          dict *keys;
      
      This simplifies the code and reduces the work done by the server during
      a blocking POP operation.
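
      A minimal sketch of how the blocked-for keys can be walked with the new
      representation, using the standard dict iterator API; the c->bpop.keys
      access path is an assumption based on this description:

          dictIterator *di = dictGetIterator(c->bpop.keys);
          dictEntry *de;

          /* Walk every key the blocked client is waiting for. With the old
           * representation we had to loop over keys[0..count-1]; with the
           * dict we just iterate it, and duplicates cannot exist at all. */
          while ((de = dictNext(di)) != NULL) {
              robj *key = dictGetKey(de);
              /* ... unblock / cleanup logic for 'key' ... */
          }
          dictReleaseIterator(di);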
    • Test: regression for issue #801. · fbf3e33d
      antirez committed
    • Client should not block multiple times on the same key. · 64e69c0d
      antirez committed
      Sending a command like:
      
      BLPOP foo foo foo foo 0
      
      Resulted in a crash before this commit, since the client ended up being
      inserted into the waiting list for this key multiple times. This in turn
      caused the function handleClientsBlockedOnLists() to fail, because we
      have code like this:
      
          if (de) {
              list *clients = dictGetVal(de);
              int numclients = listLength(clients);
      
              while(numclients--) {
                  listNode *clientnode = listFirst(clients);
      
                  /* serve clients here... */
              }
          }
      
      The code that serves clients removes the served client from the waiting
      list, so if a client is blocked multiple times, eventually the call to
      listFirst() will return NULL, or worse, will access random memory, since
      the list may no longer exist: it is removed by the function
      unblockClientWaitingData() once there are no more clients waiting for
      this list.
      
      To avoid making the rest of the implementation more complex, this commit
      modifies blockForKeys() so that a client will be put just a single time
      into the waiting list for a given key.
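
      A sketch of what this means inside blockForKeys(): it relies on dictAdd()
      failing when the key is already present, so a duplicated key in the
      command is simply skipped (variable and field names are assumptions):

          for (j = 0; j < numkeys; j++) {
              /* If the key is already in the client's dict, dictAdd() returns
               * an error and we skip it: the client is queued at most once
               * per key. */
              if (dictAdd(c->bpop.keys, keys[j], NULL) != DICT_OK) continue;
              incrRefCount(keys[j]);
              /* ... append the client to the server-side list of clients
               *     waiting for keys[j] ... */
          }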
      
      Since it is Saturday, I hope this fixes issue #801.
  2. 01 Dec 2012 (3 commits)
    • SDIFF fuzz test added. · c1eda786
      antirez committed
    • SDIFF is now able to select between two algorithms for speed. · ccc974d9
      antirez committed
      SDIFF used an algorithm that was O(N) where N is the total number
      of elements of all the sets involved in the operation.
      
      The algorithm worked like this:
      
      ALGORITHM 1:
      
      1) For the first set, add all the members to an auxiliary set.
      2) For all the other sets, remove all the members of the set from the
      auxiliary set.
      
      So it is an O(N) algorithm where N is the total number of elements in
      all the sets involved in the diff operation.
      
      Cristobal Viedma suggested modifying the algorithm as follows:
      
      ALGORITHM 2:
      
      1) Iterate all the elements of the first set.
      2) For every element, check if the element also exists in all the other
      remaining sets.
      3) Add the element to the auxiliary set only if it does not exist in any
      of the other sets.
      
      The complexity of this algorithm in the worst case is O(N*M), where N is
      the size of the first set and M is the total number of sets involved in
      the operation.
      
      However, when there are elements in common, with this algorithm we can
      stop the computation for a given element as soon as we find it duplicated
      in another set.
      
      I (antirez) added an additional step to algorithm 2 to make it faster:
      sort the sets to subtract from the biggest to the smallest, so that it is
      more likely to find a duplicate in the larger sets, which are checked
      before the smaller ones.
      
      WHICH IS BETTER?
      
      Neither, of course: for instance, if the first set is much larger than
      the other sets, the second algorithm does a lot more work than the first.

      Similarly, if the first set is much smaller than the other sets, the
      original algorithm will do less work.
      
      So this commit makes Redis able to guess the number of operations
      required by each algorithm, and select the best at runtime according
      to the input received.
      
      However, since the second algorithm has better constant times and can do
      less work if there are duplicated elements, an advantage is given to the
      second algorithm.
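
      A sketch of the runtime selection, using the algorithm numbering of this
      message and assuming setTypeSize() returns a set's cardinality (variable
      names are illustrative, not necessarily those of the actual code):

          /* Estimate the work needed by both algorithms:
           * algorithm 1 scans every element of every set, while algorithm 2
           * performs up to 'setnum' lookups per element of the first set. */
          long long algo_one_work = 0, algo_two_work = 0;

          for (j = 0; j < setnum; j++) {
              if (sets[j] == NULL) continue;
              algo_one_work += setTypeSize(sets[j]);
              algo_two_work += setTypeSize(sets[0]);
          }

          /* Algorithm 2 has better constant times and can stop early when
           * duplicates are found, so give it some advantage. */
          algo_two_work /= 2;
          diff_algo = (algo_two_work <= algo_one_work) ? 2 : 1;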
    • redis-benchmark: seed the PRNG with time() at startup. · 2572bb13
      antirez committed
  3. 29 Nov 2012 (4 commits)
    • Make an EXEC test more latency proof. · e34c14df
      antirez committed
    • Introduced the Build ID in INFO and --version output. · 3b71404d
      antirez committed
      The idea is to be able to identify a build in a unique way, so that, for
      instance, after a bug report we can recognize that the build is the one
      shipped by a popular Linux distribution and perform the debugging in the
      same environment.
    • On-crash memory test rewritten so that it actually works. · cd99c14e
      antirez committed
      1) We no longer test location by location, otherwise the CPU write cache
      makes the test useless.
      2) We still need a memory test that operates in steps from the first to
      the last location, in order to never hit the cache, but that is still
      able to retain the memory content.
      
      This was tested using a Linux box containing a bad memory module with a
      single bit error (always zero).
      
      So the final solution has an error propagation procedure that works as
      follows:
      
      1) Invert bits at every location.
      2) Swap adjacent locations.
      3) Swap adjacent locations again.
      4) Invert bits at every location.
      5) Swap adjacent locations.
      6) Swap adjacent locations again.
      
      Before and after these steps, and after step 4, a CRC64 checksum is computed.
      If the three CRC64 checksums don't match, a memory error was detected.
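
      A sketch of the procedure, assuming helper passes memtest_invert() and
      memtest_swap_adjacent() over the tested region (those helper names are
      illustrative) plus the crc64() function already present in the Redis
      source:

          /* Return 1 if a memory error was detected, 0 otherwise. Every pass
           * writes each location (so the CPU write cache cannot hide errors),
           * while the full sequence leaves the memory content unchanged. */
          int memtest_preserving(unsigned char *p, size_t len) {
              uint64_t crc1, crc2, crc3;

              crc1 = crc64(0, p, len);         /* checksum before any step */
              memtest_invert(p, len);          /* 1) invert bits           */
              memtest_swap_adjacent(p, len);   /* 2) swap adjacent words   */
              memtest_swap_adjacent(p, len);   /* 3) swap them back        */
              memtest_invert(p, len);          /* 4) invert bits again     */
              crc2 = crc64(0, p, len);         /* checksum after step 4    */
              memtest_swap_adjacent(p, len);   /* 5) swap adjacent words   */
              memtest_swap_adjacent(p, len);   /* 6) swap them back        */
              crc3 = crc64(0, p, len);         /* checksum after all steps */

              return crc1 != crc2 || crc2 != crc3;
          }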
    • Jemalloc updated to version 3.2.0. · 053f1f35
      antirez committed
  4. 28 Nov 2012 (3 commits)
  5. 22 Nov 2012 (9 commits)
  6. 19 Nov 2012 (2 commits)
    • Children creating AOF or RDB files now report memory used by COW. · c342b075
      antirez committed
      Finally Redis is able to report the amount of memory used by
      copy-on-write while saving an RDB or writing an AOF file in the background.
      
      Note that this information is currently only logged (at NOTICE level)
      and not shown in INFO because this is less trivial (but surely doable
      with some minor form of interprocess communication).
      
      The reason we can't capture this information in the parent before we call
      wait3() is that the Linux kernel will release the child's memory ASAP,
      and only retain the minimal state for the process that is needed to
      report the child's termination to the parent.
      
      The COW size is obtained by summing all the Private_Dirty fields found
      in the "smaps" file for the process inside the proc filesystem.
      
      All this is Linux specific and is not available on other systems.
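
      A minimal standalone sketch of the measurement (the function name here is
      hypothetical; the real helper, zmalloc_get_private_dirty(), is the
      subject of the next entry):

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          /* Sum the Private_Dirty fields of /proc/self/smaps and return the
           * total in bytes. Returns 0 if the file cannot be read, e.g. on
           * non-Linux systems. */
          size_t private_dirty_bytes(void) {
              char line[1024];
              size_t bytes = 0;
              FILE *fp = fopen("/proc/self/smaps", "r");

              if (!fp) return 0;
              while (fgets(line, sizeof(line), fp) != NULL) {
                  if (strncmp(line, "Private_Dirty:", 14) == 0) {
                      char *p = strchr(line, 'k');    /* values are in kB */
                      if (p) {
                          *p = '\0';
                          bytes += strtol(line + 14, NULL, 10) * 1024;
                      }
                  }
              }
              fclose(fp);
              return bytes;
          }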
    • zmalloc_get_private_dirty() function added (Linux only). · 1d8af607
      antirez committed
      On non-Linux systems it just returns 0.

      This function is useful to estimate the amount of copy-on-write caused by
      children saving data on disk.
  7. 14 Nov 2012 (1 commit)
  8. 13 Nov 2012 (1 commit)
    • TTL API change: TTL returns -2 for non-existing keys. · 50c41de7
      antirez committed
      The previous behavior was to return -1 if:

      1) The key exists but has no expire set.
      2) The key does not exist.

      Now the second case is handled differently, and TTL returns -2 if the
      key does not exist at all.
      
      PTTL follows the same behavior.
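
      A sketch of the resulting reply logic, loosely following server helper
      names such as lookupKeyRead(), getExpire(), mstime() and
      addReplyLongLong(); the exact shape and the output_ms flag distinguishing
      TTL from PTTL are assumptions:

          /* TTL/PTTL: -2 if the key does not exist, -1 if it exists but has
           * no expire associated, otherwise the remaining time to live. */
          if (lookupKeyRead(c->db, c->argv[1]) == NULL) {
              addReplyLongLong(c, -2);
              return;
          }
          ttl = getExpire(c->db, c->argv[1]);
          if (ttl == -1) {
              addReplyLongLong(c, -1);
          } else {
              ttl -= mstime();
              addReplyLongLong(c, output_ms ? ttl : (ttl + 500) / 1000);
          }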
  9. 09 Nov 2012 (3 commits)
  10. 07 Nov 2012 (1 commit)
    • Type mismatch errors are now prefixed with WRONGTYPE. · 46c5d396
      antirez committed
      So instead of replying with a generic error like:
      
      -ERR ... wrong kind of value ...
      
      now it replies with:
      
      -WRONGTYPE ... wrong kind of value ...
      
      This makes this particular error easy to check for without resorting to
      (fragile) pattern matching of the error string (although the error string
      used to be consistent already).
      
      Client libraries should return a specific exeption type for this error.
      
      Most of the commit is about fixing unit tests.
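
      For example, a client can now classify the error with a cheap prefix
      check instead of matching the whole message; a sketch not tied to any
      specific client library:

          #include <string.h>

          /* Return non-zero if a Redis error string denotes a type mismatch. */
          int is_wrongtype_error(const char *err) {
              return strncmp(err, "WRONGTYPE", 9) == 0;
          }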
  11. 02 Nov 2012 (4 commits)
  12. 01 Nov 2012 (1 commit)
    • 32 bit build fixed on Linux. · 655e6838
      antirez committed
      It failed because of the way jemalloc was compiled (without passing the
      right flags to make, but only to configure). Now the same set of flags is
      also passed to the make command, fixing the issue.
      
      This fixes issue #744
  13. 31 Oct 2012 (4 commits)