1. 19 January 2013, 1 commit
  2. 30 November 2012, 1 commit
    • SDIFF is now able to select between two algorithms for speed. · f50e6584
      Committed by antirez
      SDIFF used an algorithm that was O(N) where N is the total number
      of elements of all the sets involved in the operation.
      
      The algorithm worked like that:
      
      ALGORITHM 1:
      
      1) For the first set, add all the members to an auxiliary set.
      2) For all the other sets, remove all the members of the set from the
      auxiliary set.
      
      So it is an O(N) algorithm where N is the total number of elements in
      all the sets involved in the diff operation.
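      The first approach can be sketched like this (illustrative Python, not the actual C code in t_set.c; the function name is made up for the example):

```python
def sdiff_algo1(first, *others):
    """ALGORITHM 1: copy the first set, then strip the others' members."""
    aux = set(first)             # 1) add all members of the first set
    for s in others:             # 2) for every other set...
        for member in s:
            aux.discard(member)  # ...remove its members (if present)
    return aux
```

      Every element of every set is touched exactly once, which is where the O(N) bound comes from.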
      
      Cristobal Viedma suggested modifying the algorithm as follows:
      
      ALGORITHM 2:
      
      1) Iterate all the elements of the first set.
      2) For every element, check if the element also exists in all the other
      remaining sets.
      3) Add the element to the auxiliary set only if it does not exist in any
      of the other sets.
      
      The complexity of this algorithm in the worst case is O(N*M), where N
      is the size of the first set and M is the total number of sets
      involved in the operation.
      
      However, when there are elements in common, this algorithm stops the
      computation for a given element as soon as it finds that element in
      another set.
      
      I (antirez) added an additional step to algorithm 2 to make it
      faster: the sets to subtract are sorted from the biggest to the
      smallest, so that a duplicate is more likely to be found in the
      larger sets, which are checked before the smaller ones.
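      The second approach, including the big-to-small sorting step, can be sketched like this (again illustrative Python with a made-up name, not the actual implementation):

```python
def sdiff_algo2(first, *others):
    """ALGORITHM 2: membership checks against the other sets."""
    # Extra step: sort the sets to subtract from the biggest to the
    # smallest, so a common element is found (and the scan for that
    # element stopped) as early as possible.
    others = sorted(others, key=len, reverse=True)
    result = set()
    for member in first:                      # 1) iterate the first set
        # 2) stop at the first other set that also contains the member
        if not any(member in s for s in others):
            result.add(member)                # 3) keep only unique members
    return result
```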
      
      WHAT IS BETTER?
      
      Neither, of course: for instance, if the first set is much larger
      than the other sets, the second algorithm does a lot more work than
      the first.
      
      Similarly, if the first set is much smaller than the other sets, the
      original algorithm does more work than the second.
      
      So this commit makes Redis estimate the number of operations required
      by each algorithm and select the better one at runtime, according to
      the input received.
      
      However, since the second algorithm has better constant times and can do
      less work if there are duplicated elements, an advantage is given to the
      second algorithm.
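      A minimal sketch of such a selection heuristic, with the cost estimates following the complexities stated above (the halving constant and the function name are illustrative assumptions, not necessarily what Redis uses):

```python
def choose_algorithm(sets):
    """Pick algorithm 1 or 2 from estimated operation counts."""
    # Algorithm 1 touches every element of every set once.
    algo1_work = sum(len(s) for s in sets)
    # Algorithm 2 may check each element of the first set against
    # every set involved (worst case).
    algo2_work = len(sets[0]) * len(sets)
    # Give algorithm 2 an advantage: better constants, and it can
    # stop early when duplicated elements exist.
    algo2_work //= 2
    return 1 if algo1_work <= algo2_work else 2
```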
      f50e6584
  3. 09 November 2012, 1 commit
  4. 21 September 2012, 2 commits
    • SRANDMEMBER <count> leak fixed. · 578c9459
      Committed by antirez
      For "CASE 4" (see code) we need to free the element when adding it to
      the result dictionary fails because it is already present there.
      578c9459
    • Added the SRANDMEMBER key <count> variant. · be90c803
      Committed by antirez
      SRANDMEMBER called with just the key argument returns a single random
      element from a Redis Set. However, many users need multiple unique
      elements from a Set. This is not a trivial problem to handle on the
      client side, and for truly good performance a C implementation was
      required.
      
      After many requests for this feature it was finally implemented.
      
      The hard part of implementing this command is choosing the right
      strategy when the number of elements requested is close to the number
      of elements already inside the set. In that case, asking the
      dictionary API for random elements and trying to add them to a
      temporary set may perform extremely poorly, as most add operations
      are wasted on duplicated elements.
      
      For this reason this implementation uses a different strategy in this
      case: the Set is copied, and random elements are evicted from the
      copy until only the requested count remains; the copy is then
      returned.
      
      The code actually uses 4 different algorithms optimized for the
      different cases.
      
      If the count is negative, the command changes behavior and allows for
      duplicated elements in the returned subset.
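      The copy-based strategy for the "count close to set size" case could be sketched like this (illustrative Python with a made-up name; it assumes the copy is trimmed by evicting random elements until the requested count remains, which is one way to realize what the text describes):

```python
import random

def srandmember_near_full(members, count):
    """Return `count` unique random members when count is near len(members)."""
    aux = set(members)                         # copy the whole Set
    while len(aux) > count:                    # evict random elements
        aux.remove(random.choice(tuple(aux)))  # until `count` remain
    return aux
```

      Eviction wins here because only len(members) - count removals are needed, whereas building the result by random insertion would waste most operations on duplicates.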
      be90c803
  5. 07 April 2012, 1 commit
  6. 09 November 2011, 1 commit
  7. 05 October 2011, 1 commit
  8. 20 June 2011, 1 commit
  9. 01 June 2011, 1 commit
  10. 15 May 2011, 1 commit
  11. 19 April 2011, 1 commit
  12. 16 April 2011, 1 commit
  13. 16 February 2011, 1 commit
  14. 30 December 2010, 1 commit
  15. 10 December 2010, 1 commit
  16. 09 December 2010, 1 commit
  17. 17 October 2010, 1 commit
  18. 03 September 2010, 1 commit
  19. 02 September 2010, 1 commit
  20. 30 August 2010, 1 commit
  21. 27 August 2010, 2 commits
  22. 26 August 2010, 1 commit
  23. 21 August 2010, 1 commit
  24. 12 July 2010, 1 commit
  25. 01 July 2010, 1 commit
    • redis.c split into many different C files. · e2641e09
      Committed by antirez
      networking related stuff moved into networking.c
      
      moved more code
      
      more work on layout of source code
      
      SDS instantaneous memory saving. By Pieter and Salvatore at VMware ;)
      
      cleanly compiling again after the first split, now splitting it in more C files
      
      moving more things around... work in progress
      
      split replication code
      
      splitting more
      
      Sets split
      
      Hash split
      
      replication split
      
      even more splitting
      
      more splitting
      
      minor change
      e2641e09