1. 22 Jan 2013, 1 commit
    • UNSUBSCRIBE and PUNSUBSCRIBE: always provide a reply. · 3ff75e58
      antirez committed
      UNSUBSCRIBE and PUNSUBSCRIBE are designed to mass-unsubscribe the client
      from all channels and all patterns, respectively, when called without
      arguments.
      
      However, when these commands are called without arguments but there are
      no channels or patterns we are subscribed to, the old behavior was to
      not reply at all.
      
      This behavior is broken, as every command should always reply.
      Also, it is possible that we are no longer subscribed to any channel but
      are still subscribed to patterns, or the other way around, and the
      client should be notified with the correct number of subscriptions.
      
      It is also not pretty that these commands sometimes produced no reply at
      all in a redis-cli session, leaving redis-cli blocked while trying to
      read the reply.
      
      This fixes issue #714.
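      
      For illustration, a minimal client-side sketch using the hiredis C
      library (assumed to be installed; host, port and exact reply layout are
      my understanding, not taken from the commit): even with no active
      subscriptions, UNSUBSCRIBE now produces a reply, roughly a three-element
      array of "unsubscribe", a nil channel and the remaining subscription
      count (0).
      
          /* Sketch: UNSUBSCRIBE must always reply, even with nothing to unsubscribe. */
          #include <stdio.h>
          #include <hiredis/hiredis.h>
          
          int main(void) {
              redisContext *c = redisConnect("127.0.0.1", 6379);
              if (c == NULL || c->err) return 1;
          
              /* No SUBSCRIBE was ever issued on this connection. */
              redisReply *r = redisCommand(c, "UNSUBSCRIBE");
              if (r && r->type == REDIS_REPLY_ARRAY && r->elements == 3)
                  printf("remaining subscriptions: %lld\n", r->element[2]->integer);
              freeReplyObject(r);
              redisFree(c);
              return 0;
          }
      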
      3ff75e58
  2. 21 Jan 2013, 3 commits
  3. 19 Jan 2013, 8 commits
    • Additionally two typos fixed thanks to @jodal · 635c532c
      antirez committed
      635c532c
    • Whitelist SIGUSR1 to avoid auto-triggering errors. · 39f0a33f
      antirez committed
      This commit fixes issue #875, which was caused by the following sequence
      of events:
      
      1) There is an active child doing BGSAVE.
      2) FLUSHALL is called (or any other condition that makes Redis kill the
      saving child process).
      3) Redis detects an error because the child exited with an error (killed
      by a signal), and stops accepting write commands until a BGSAVE is
      successfully executed.
      
      Whitelisting SIGUSR1, and making sure Redis always uses this signal to
      kill its own children, fixes the issue.
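      
      The POSIX mechanism behind the fix is easy to show in isolation. A
      self-contained sketch (not the Redis source): the parent kills its own
      child with SIGUSR1 and, when reaping it, treats that particular signal
      as a deliberate stop rather than a failed save.
      
          /* Sketch: SIGUSR1 from the parent means "killed on purpose, not an error". */
          #include <stdio.h>
          #include <signal.h>
          #include <unistd.h>
          #include <sys/types.h>
          #include <sys/wait.h>
          
          int main(void) {
              pid_t pid = fork();
              if (pid == 0) { pause(); _exit(0); }    /* child: stands in for BGSAVE */
          
              kill(pid, SIGUSR1);                     /* parent kills its own child  */
              int status;
              waitpid(pid, &status, 0);
          
              if (WIFSIGNALED(status) && WTERMSIG(status) == SIGUSR1)
                  printf("child killed by us: not a save error\n");
              else
                  printf("child failed: refuse writes until a BGSAVE succeeds\n");
              return 0;
          }
      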
      39f0a33f
    • Clear server.shutdown_asap on failed shutdown. · 1e20c939
      antirez committed
      When a SIGTERM is received, Redis schedules a shutdown. However, if it
      fails to perform the shutdown, it must clear the shutdown_asap flag,
      otherwise it will try again and again, possibly making the server
      unusable.
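      
      A minimal sketch of the control flow described above (hypothetical
      names, not the actual serverCron() code): when the prepare step fails,
      the flag is cleared so the next cron iteration does not retry forever.
      
          /* Sketch of the cron-side logic: clear the flag on a failed shutdown. */
          #include <stdio.h>
          
          static int shutdown_asap = 1;            /* set by the SIGTERM handler     */
          
          static int prepare_for_shutdown(void) {  /* stand-in for the real routine  */
              return -1;                           /* e.g. the final save failed     */
          }
          
          static void server_cron(void) {
              if (shutdown_asap) {
                  if (prepare_for_shutdown() == 0) {
                      /* the real server would exit(0) here */
                  } else {
                      printf("SIGTERM received but shutdown failed, continuing.\n");
                      shutdown_asap = 0;           /* the fix: do not retry forever  */
                  }
              }
          }
          
          int main(void) { server_cron(); return 0; }
      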
      1e20c939
    • Slowlog: don't log EXEC but just the executed commands. · d766907c
      antirez committed
      The Redis Slow Log always used to log the slow commands executed inside
      a MULTI/EXEC block. However, EXEC itself was also logged at the end,
      which is useless.
      
      Now EXEC is no longer logged, and a test was added for this behavior.
      
      This fixes issue #759.
      d766907c
    • Fixed many typos. · 1caf0939
      guiquanz committed
      Conflicts fixed, mainly because 2.8 has no cluster support. Affected
      files:
      	00-RELEASENOTES
      	src/cluster.c
      	src/crc16.c
      	src/redis-trib.rb
      	src/redis.h
      1caf0939
    • redis-cli prompt bug fix · ff1e4d22
      charsyam committed
      ff1e4d22
    • Always exit if connection fails. · a8f9cec1
      Jan-Erik Rediger committed
      This avoids unnecessary core dumps. Fixes antirez/redis#894
      a8f9cec1
    • Fix an error reply for CLIENT command · aadcda99
      bitterb committed
      aadcda99
  4. 18 Jan 2013, 1 commit
    • redis-cli --rdb fails if server sends a ping · f2bc198d
      Nathan Parry committed
      Redis pings slaves with newlines while they are in the
      pre-synchronization stage. (See
      https://github.com/antirez/redis/blob/2.6.9/src/replication.c#L814)
      However, redis-cli does not expect this: it sees the newline as the end
      of the bulk length line and ends up returning 0 as the bulk length.
      This manifests as the following when running redis-cli:
      
          $ ./src/redis-cli --rdb some_file
          SYNC sent to master, writing 0 bytes to 'some_file'
          Transfer finished with success.
      
      With this commit, we just ignore leading newlines while reading the bulk
      length line.
      
      To reproduce the problem, load enough data into Redis so that the
      preparation of the RDB snapshot takes long enough for a ping to occur
      while redis-cli is waiting for the data.
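      
      A self-contained sketch of the parsing idea (not the actual redis-cli
      code, which reads from the socket): any leading newline bytes are
      consumed before the $<len> bulk header is parsed.
      
          /* Sketch: parse a bulk length while ignoring keepalive newlines. */
          #include <stdio.h>
          #include <stdlib.h>
          
          /* Returns the bulk length, or -1 if the header is malformed. */
          static long parse_bulk_len(const char *buf) {
              while (*buf == '\n') buf++;          /* the fix: skip leading newlines */
              if (*buf != '$') return -1;
              return strtol(buf + 1, NULL, 10);
          }
          
          int main(void) {
              /* A ping newline from the master, then the real RDB bulk header. */
              printf("%ld\n", parse_bulk_len("\n$18446\r\n"));   /* prints 18446 */
              return 0;
          }
      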
      f2bc198d
  5. 17 Jan 2013, 1 commit
  6. 15 Jan 2013, 4 commits
    • Tests for CLIENT GETNAME/SETNAME. · 7897e0a0
      antirez committed
      7897e0a0
    • Typo fixed, ASCI -> ASCII. · f31d10b9
      antirez committed
      f31d10b9
    • CLIENT GETNAME and CLIENT SETNAME introduced. · 786bd393
      antirez committed
      Sometimes it is much simpler to debug complex Redis installations if it
      is possible to assign clients a name that is displayed in the CLIENT
      LIST output.
      
      This is the case, for example, for "leaked" connections. The ability to
      give the client a name makes it quite trivial to understand which part
      of the code implements the client that is not releasing its resources
      appropriately.
      
      Behavior:
      
          CLIENT SETNAME: set a name for the client, or remove the current
                          name if an empty name is set.
          CLIENT GETNAME: get the current name, or nil if no name is set.
          CLIENT LIST: now displays the client name, if any.
      
      Thanks to Mark Gravell for pushing this idea forward.
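      
      From a client's point of view the new commands look like this; a short
      hiredis sketch (the library and connection details are assumptions, not
      part of the commit):
      
          /* Sketch: name a connection so it is easy to spot in CLIENT LIST. */
          #include <stdio.h>
          #include <hiredis/hiredis.h>
          
          int main(void) {
              redisContext *c = redisConnect("127.0.0.1", 6379);
              if (c == NULL || c->err) return 1;
          
              redisReply *r = redisCommand(c, "CLIENT SETNAME %s", "queue-worker-7");
              freeReplyObject(r);
          
              r = redisCommand(c, "CLIENT GETNAME");
              if (r && r->type == REDIS_REPLY_STRING)
                  printf("my connection name: %s\n", r->str);   /* queue-worker-7 */
              freeReplyObject(r);
              redisFree(c);
              return 0;
          }
      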
      786bd393
    • Undo slave-master handshake when SLAVEOF sets a new slave. · aa9497fd
      antirez committed
      Issue #828 shows how Redis was not correctly undoing a non-blocking
      connection attempt with the previous master when the master was set to
      a new address using the SLAVEOF command.
      
      This was also a result of a lack of refactoring, so now there is a
      function to cancel the non-blocking handshake with the master.
      The new function is used when SLAVEOF NO ONE is called or when SLAVEOF
      is used to set the master to a different address.
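      
      A self-contained sketch of the general pattern (hypothetical names; not
      the Redis replication code): before pointing the slave at a new master,
      any connect that is still in progress is torn down and the state reset.
      
          /* Sketch: cancel an in-flight non-blocking connect before reconnecting. */
          #include <stdio.h>
          #include <unistd.h>
          
          enum repl_state { REPL_NONE, REPL_CONNECTING, REPL_CONNECTED };
          struct repl { int fd; enum repl_state state; };
          
          static void undo_connect_with_master(struct repl *r) {
              if (r->state == REPL_CONNECTING) {
                  /* The real server also removes the fd from the event loop. */
                  if (r->fd != -1) close(r->fd);
                  r->fd = -1;
                  r->state = REPL_NONE;
              }
          }
          
          static void set_master(struct repl *r, const char *host, int port) {
              undo_connect_with_master(r);   /* never leak a half-open handshake */
              printf("connecting to %s:%d\n", host, port);
              /* ... start a fresh non-blocking connect here ... */
              r->state = REPL_CONNECTING;
          }
          
          int main(void) {
              struct repl r = { -1, REPL_NONE };
              set_master(&r, "10.0.0.1", 6379);
              set_master(&r, "10.0.0.2", 6379);   /* SLAVEOF to a different address */
              return 0;
          }
      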
      aa9497fd
  7. 12 Jan 2013, 1 commit
  8. 10 Jan 2013, 3 commits
  9. 03 Jan 2013, 2 commits
  10. 20 Dec 2012, 1 commit
  11. 17 Dec 2012, 2 commits
  12. 15 Dec 2012, 1 commit
    • serverCron() frequency is now a runtime parameter (was REDIS_HZ). · a6d117b6
      antirez committed
      REDIS_HZ is the frequency at which our serverCron() function is called.
      Calling this function more frequently results in lower latency when the
      server has to handle very expensive background operations, such as mass
      expiration of a lot of keys at the same time.
      
      Redis 2.4 used an HZ of 10. This was good enough for almost every setup,
      but the incremental key expiration algorithm worked a bit better under
      *extreme* pressure when HZ was set to 100 in Redis 2.6.
      
      However, for most users a latency spike of 30 milliseconds when millions
      of keys expire at the same time is acceptable; on the other hand, the
      default HZ of 100 in Redis 2.6 caused idle instances to use some CPU
      time compared to Redis 2.4. The CPU usage was in the order of 0.3% for
      an idle instance, which is still a shame, since it wastes energy even if
      no important resource is consumed.
      
      This commit turns HZ into a runtime parameter that can be queried with
      INFO or CONFIG GET, and modified with CONFIG SET. At the same time the
      default frequency is set back to 10.
      
      In this way we default to a sane value of 10, but allow users to easily
      switch to values up to 500 for near real-time applications if needed and
      if they are willing to pay this small CPU usage penalty.
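      
      For illustration, a short hiredis sketch (client library and connection
      details are assumptions, not part of the commit) reading and changing
      the new runtime parameter:
      
          /* Sketch: query and change the serverCron() frequency at runtime. */
          #include <stdio.h>
          #include <hiredis/hiredis.h>
          
          int main(void) {
              redisContext *c = redisConnect("127.0.0.1", 6379);
              if (c == NULL || c->err) return 1;
          
              redisReply *r = redisCommand(c, "CONFIG GET hz");
              if (r && r->type == REDIS_REPLY_ARRAY && r->elements == 2)
                  printf("hz is currently %s\n", r->element[1]->str);  /* 10 by default */
              freeReplyObject(r);
          
              /* Trade a little idle CPU for lower latency on mass expirations. */
              r = redisCommand(c, "CONFIG SET hz %s", "100");
              freeReplyObject(r);
              redisFree(c);
              return 0;
          }
      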
      a6d117b6
  13. 13 Dec 2012, 1 commit
  14. 12 Dec 2012, 1 commit
    • Fix config.h endianness detection to work on Linux / PPC64. · bbee226e
      antirez committed
      Config.h performs endianness detection by including OS-specific headers
      that define the endianness macros or, when this is not possible, by
      checking the processor type via ifdefs.
      
      Sometimes, when the OS-specific header is included, only __BYTE_ORDER is
      defined, while BYTE_ORDER remains undefined. There is code at the end of
      the config.h endianness detection that defines the macros without the
      underscores, but it was not working correctly.
      
      This commit fixes endianness detection, fixing Redis on Linux / PPC64
      and possibly other systems.
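      
      A minimal sketch of the fallback idea (not the actual config.h
      contents): when only the underscored glibc-style macros are present,
      mirror them into the non-underscored names the rest of the code expects.
      
          /* Sketch: derive BYTE_ORDER from __BYTE_ORDER when only the latter is set. */
          #include <stdio.h>
          #include <sys/types.h>   /* on glibc this ends up including <endian.h> */
          
          #if !defined(BYTE_ORDER) && defined(__BYTE_ORDER)
          # if !defined(LITTLE_ENDIAN) && defined(__LITTLE_ENDIAN)
          #  define LITTLE_ENDIAN __LITTLE_ENDIAN
          # endif
          # if !defined(BIG_ENDIAN) && defined(__BIG_ENDIAN)
          #  define BIG_ENDIAN __BIG_ENDIAN
          # endif
          # define BYTE_ORDER __BYTE_ORDER
          #endif
          /* (The real config.h additionally falls back to CPU-type ifdefs.) */
          
          int main(void) {
          #if BYTE_ORDER == LITTLE_ENDIAN
              printf("little endian\n");
          #else
              printf("big endian\n");
          #endif
              return 0;
          }
      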
      bbee226e
  15. 03 Dec 2012, 6 commits
    • Issue 804 Add Default-Start and Default-Stop LSB tags for RedHat startup and... · 17e31eb0
      Brian J. McManus committed
      Issue 804 Add Default-Start and Default-Stop LSB tags for RedHat startup and update-rc.d compatibility.
      17e31eb0
    • Memory leak fixed: release client's bpop->keys dictionary. · 124cb6dd
      antirez committed
      Refactoring performed after the resolution of issue #801 (see commit
      2f87cf8b) introduced a memory leak that is fixed by this commit.
      
      I simply forgot to free the newly allocated dictionary in the client
      structure, trusting the output of "make test" on OSX.
      
      However, due to changes in the "leaks" utility, the test was no longer
      checking for memory leaks. This problem was also fixed.
      
      Fortunately the CI test running at ci.redis.io spotted the bug in the
      valgrind run.
      
      The leak never made it into a stable release.
      124cb6dd
    • Test: fixed osx "leaks" support in test. · e3973521
      antirez committed
      Due to changes in recent releases of the OSX "leaks" utility, the OSX
      leak detection no longer worked. It is now fixed in a way that should be
      backward compatible.
      e3973521
    • Blocking POP: use a dictionary to store keys client side. · 07a9f854
      antirez committed
      To store the keys we block on during a blocking POP operation, when the
      client is blocked waiting for more data to arrive, we used a simple
      linear array of Redis objects in the blockingState structure:
      
          robj **keys;
          int count;
      
      However, in order to fix issue #801, we also used a dictionary to avoid
      ending up in the blocked clients queue multiple times for the same key
      with the same client.
      
      The dictionary was only temporary, just to avoid duplicates, but since
      we create and destroy it anyway, there is no point in doing this
      duplicated work, so this commit simply uses the dictionary as the main
      structure to store the keys we are blocked on. So instead of the
      previous fields we now just have:
      
          dict *keys;
      
      This simplifies the code and reduces the work done by the server during
      a blocking POP operation.
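      
      A small self-contained sketch of why a set-like structure makes the
      duplicate-key case trivial (hypothetical helper names; the real code
      uses the Redis dict, where adding an existing key simply fails):
      
          /* Sketch: BLPOP foo foo foo foo 0 should queue the client for "foo" once. */
          #include <stdio.h>
          #include <string.h>
          
          #define MAX_KEYS 16
          
          struct blocking_state { const char *keys[MAX_KEYS]; int numkeys; };
          
          /* Set-like insert: adding a key that is already present is a no-op. */
          static void block_for_key(struct blocking_state *bs, const char *key) {
              for (int i = 0; i < bs->numkeys; i++)
                  if (strcmp(bs->keys[i], key) == 0) return;  /* already blocked on it */
              if (bs->numkeys < MAX_KEYS) bs->keys[bs->numkeys++] = key;
          }
          
          int main(void) {
              struct blocking_state bs = { {0}, 0 };
              const char *argv[] = { "foo", "foo", "foo", "foo" };
              for (int i = 0; i < 4; i++) block_for_key(&bs, argv[i]);
              printf("client queued for %d key(s)\n", bs.numkeys);   /* prints 1 */
              return 0;
          }
      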
      07a9f854
    • Test: regression for issue #801. · fbf3e33d
      antirez committed
      fbf3e33d
    • Client should not block multiple times on the same key. · 64e69c0d
      antirez committed
      Sending a command like:
      
      BLPOP foo foo foo foo 0
      
      resulted in a crash before this commit, since the client ended up being
      inserted into the waiting list for this key multiple times.
      This made the function handleClientsBlockedOnLists() fail, because we
      have code like this:
      
          if (de) {
              list *clients = dictGetVal(de);
              int numclients = listLength(clients);
      
              while(numclients--) {
                  listNode *clientnode = listFirst(clients);
      
                  /* serve clients here... */
              }
          }
      
      The code that serves clients used to remove the served client from the
      waiting list, so if a client is blocking multiple times, eventually the
      call to listFirst() will return NULL, or worse, will access random
      memory, since the list may no longer exist: it is removed by the
      function unblockClientWaitingData() if there are no more clients
      waiting for this list.
      
      To avoid making the rest of the implementation more complex, this
      commit modifies blockForKeys() so that a client is put only a single
      time into the waiting list for a given key.
      
      Since it is Saturday, I hope this fixes issue #801.
      64e69c0d
  16. 01 Dec 2012, 3 commits
    • SDIFF fuzz test added. · c1eda786
      antirez committed
      c1eda786
    • SDIFF is now able to select between two algorithms for speed. · ccc974d9
      antirez committed
      SDIFF used an algorithm that was O(N) where N is the total number
      of elements of all the sets involved in the operation.
      
      The algorithm worked like this:
      
      ALGORITHM 1:
      
      1) For the first set, add all the members to an auxiliary set.
      2) For all the other sets, remove all the members of the set from the
      auxiliary set.
      
      So it is an O(N) algorithm where N is the total number of elements in
      all the sets involved in the diff operation.
      
      Cristobal Viedma suggested modifying the algorithm to the following:
      
      ALGORITHM 2:
      
      1) Iterate all the elements of the first set.
      2) For every element, check if the element also exists in all the other
      remaining sets.
      3) Add the element to the auxiliary set only if it does not exist in any
      of the other sets.
      
      The complexity of this algorithm on the worst case is O(N*M) where N is
      the size of the first set and M the total number of sets involved in the
      operation.
      
      However, when there are elements in common, this algorithm stops the
      computation for a given element as soon as the element is found in
      another set.
      
      I (antirez) added an additional step to algorithm 2 to make it faster:
      the sets to subtract are sorted from the biggest to the smallest, so
      that a duplicate is more likely to be found in the larger sets, which
      are checked before the smaller ones.
      
      WHAT IS BETTER?
      
      Neither, of course. For instance, if the first set is much larger than
      the other sets, the second algorithm does a lot more work than the
      first algorithm.
      
      Similarly, if the first set is much smaller than the other sets, the
      original algorithm does more work.
      
      So this commit makes Redis able to guess the number of operations
      required by each algorithm, and select the best at runtime according
      to the input received.
      
      However, since the second algorithm has better constant times and can do
      less work if there are duplicated elements, an advantage is given to the
      second algorithm.
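      
      A self-contained sketch of the selection step (hypothetical names and
      bias factor; the real command computes its own estimates): the cost of
      each algorithm is estimated from the set cardinalities and the cheaper
      one is chosen, with an advantage given to algorithm 2.
      
          /* Sketch: choose between the two SDIFF strategies from the set sizes. */
          #include <stdio.h>
          
          /* card[0] is the set we subtract from; the others are subtracted from it. */
          static int choose_diff_algorithm(const long long *card, int numsets) {
              long long algo_one_work = 0;                   /* O(total elements)       */
              long long algo_two_work = card[0] * numsets;   /* O(|first| * num of sets) */
              for (int j = 0; j < numsets; j++) algo_one_work += card[j];
          
              /* Algorithm 2 has better constant times and can stop early on a
               * duplicate, so give it an advantage (the factor is illustrative). */
              algo_two_work /= 2;
              return (algo_one_work <= algo_two_work) ? 1 : 2;
          }
          
          int main(void) {
              long long small_first[] = { 10, 100000, 100000 };
              long long large_first[] = { 100000, 10, 10 };
              printf("small first set -> algorithm %d\n", choose_diff_algorithm(small_first, 3));
              printf("large first set -> algorithm %d\n", choose_diff_algorithm(large_first, 3));
              return 0;
          }
      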
      ccc974d9
    • redis-benchmark: seed the PRNG with time() at startup. · 2572bb13
      antirez committed
      2572bb13
  17. 29 Nov 2012, 1 commit