1. 30 Sep 2015, 3 commits
  2. 29 Sep 2015, 1 commit
  3. 15 Sep 2015, 2 commits
    • Test: fix false positive in HSTRLEN test. · 846da5b2
      Committed by antirez
      HINCRBY* tests later used the value "tmp" that was sometimes generated
      by the random key generation function. The result was overwriting what
      Tcl expected to be inside Redis with another value, causing the next
      HSTRLEN test to fail.
    • GEORADIUS: Don't report duplicates when radius is huge. · 3c23b5ff
      Committed by antirez
      GEORADIUS works by computing the center square plus the neighbor squares
      covering the whole area of the specified position and radius. Then a
      distance filter is used to remove the elements that are actually outside
      the range.
      
      When a huge radius is used, like 5000 km or more, adjacent neighbors may
      collide and be the same, leading to the reporting of the same element
      multiple times. This only happens in the edge case of a huge radius, but
      it is still not ideal.
      
      A robust but slow solution would involve qsorting the range to remove
      all the duplicates. However, since the collisions only happen between
      adjacent boxes, given the way they are ordered in the code, it is much
      faster to just check if the current box is the same as the previous one
      processed.
      
      This commit adds a regression test for the bug.
      
      Fixes #2767.
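
      A minimal C sketch of the dedup idea described above, using hypothetical
      names rather than the actual Redis geohash code: because duplicate
      squares can only show up next to each other in the processing order,
      comparing each box with the previously processed one is enough.

          #include <stdint.h>
          #include <stdio.h>

          /* Hypothetical stand-in for a geohash-encoded square. */
          typedef struct { uint64_t bits; uint8_t step; } Box;

          static int same_box(Box a, Box b) {
              return a.bits == b.bits && a.step == b.step;
          }

          /* Process the center square plus its neighbors in their fixed order,
           * skipping a square identical to the one processed right before it
           * (the huge-radius collision case). */
          static void process_area(const Box *boxes, int count) {
              for (int i = 0; i < count; i++) {
                  if (i > 0 && same_box(boxes[i], boxes[i-1])) continue; /* duplicate */
                  printf("querying box %llu (step %u)\n",
                         (unsigned long long)boxes[i].bits, (unsigned)boxes[i].step);
                  /* ...fetch members here, then apply the distance filter... */
              }
          }

          int main(void) {
              Box area[3] = {{42, 2}, {42, 2}, {43, 2}}; /* two colliding neighbors */
              process_area(area, 3);
              return 0;
          }
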
  4. 14 Sep 2015, 3 commits
    • Test: MOVE expire test improved. · 0a91fc45
      Committed by antirez
      Related to #2765.
    • MOVE re-add TTL check fixed. · 4fec5ee1
      Committed by antirez
      getExpire() returns -1 when no expire exists.
      
      Related to #2765.
    • MOVE now can move TTL metadata as well. · f529a01c
      Committed by antirez
      MOVE was not able to move the TTL: when a key was moved into a different
      database number, it became persistent as if PERSIST had been used.
      
      In some incredible way (I guess almost nobody uses Redis MOVE) this bug
      remained unnoticed inside Redis internals for many years.
      Finally Andy Grunwald discovered it and opened an issue.
      
      This commit fixes the bug and adds a regression test.
      
      Close #2765.
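
      A hedged sketch of the idea behind the two MOVE fixes above, using a toy
      database model instead of the real Redis internals: the expire is read
      before the move (-1, here NO_EXPIRE, means the key is persistent, as
      noted for getExpire() above) and is carried over to the destination
      database instead of being dropped.

          #include <string.h>

          #define NO_EXPIRE -1LL   /* mirrors getExpire() returning -1 when no TTL is set */
          #define MAX_KEYS  16

          /* Toy model, not the real redisDb. */
          typedef struct { char key[32]; char val[32]; long long expire_ms; } Entry;
          typedef struct { Entry entries[MAX_KEYS]; int used; } Db;

          static Entry *db_find(Db *db, const char *key) {
              for (int i = 0; i < db->used; i++)
                  if (strcmp(db->entries[i].key, key) == 0) return &db->entries[i];
              return NULL;
          }

          /* MOVE-like operation: returns 1 on success, 0 if the key is missing,
           * already present in the destination, or the destination is full. */
          static int db_move(Db *src, Db *dst, const char *key) {
              Entry *e = db_find(src, key);
              if (!e || db_find(dst, key) || dst->used == MAX_KEYS) return 0;

              long long expire = e->expire_ms;    /* NO_EXPIRE if the key is persistent */

              Entry moved = *e;
              moved.expire_ms = NO_EXPIRE;        /* pre-fix behavior: TTL silently lost */
              if (expire != NO_EXPIRE)
                  moved.expire_ms = expire;       /* the fix: re-apply the TTL */

              dst->entries[dst->used++] = moved;
              *e = src->entries[--src->used];     /* delete the key from the source db */
              return 1;
          }
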
  5. 08 Sep 2015, 2 commits
  6. 07 Sep 2015, 4 commits
    • Undo slaves state change on failed rdbSaveToSlavesSockets(). · 8e555374
      Committed by antirez
      As Oran Agra suggested, in startBgsaveForReplication() when the BGSAVE
      attempt returns an error, we scan the list of slaves in order to remove
      them, since there is no way to serve them currently.
      
      However we check for the replication state BGSAVE_START, which was
      modified by rdbSaveToSlavesSockets() before forking. So when fork fails,
      the state of the slaves remains BGSAVE_END and no cleanup is performed.
      
      This commit fixes the problem by making rdbSaveToSlavesSockets() able to
      undo the state change on fork failure.
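
      A hedged sketch of the undo logic, with simplified names standing in for
      the real replication structures: the slaves switched to the in-progress
      state before fork() are switched back when fork() fails, so the caller's
      error path (which looks for WAIT_BGSAVE_START) can actually clean them up.

          #include <stdlib.h>
          #include <sys/types.h>
          #include <unistd.h>

          enum { WAIT_BGSAVE_START, WAIT_BGSAVE_END };

          /* Hypothetical slave descriptor; the real code walks server.slaves. */
          typedef struct { int replstate; } Slave;

          static int save_to_slave_sockets(Slave *slaves, int numslaves) {
              int *changed = calloc(numslaves, sizeof(int));
              if (!changed) return -1;

              /* Mark the slaves that the child we are about to fork will serve. */
              for (int i = 0; i < numslaves; i++) {
                  if (slaves[i].replstate == WAIT_BGSAVE_START) {
                      slaves[i].replstate = WAIT_BGSAVE_END;
                      changed[i] = 1;
                  }
              }

              pid_t pid = fork();
              if (pid == -1) {
                  /* fork() failed: undo the state change so the caller's cleanup,
                   * which removes slaves still in WAIT_BGSAVE_START, can find them. */
                  for (int i = 0; i < numslaves; i++)
                      if (changed[i]) slaves[i].replstate = WAIT_BGSAVE_START;
                  free(changed);
                  return -1;
              }
              if (pid == 0) _exit(0);   /* child: the RDB transfer would happen here */

              free(changed);
              return 0;                  /* parent: diskless BGSAVE started */
          }
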
    • Merge pull request #2753 from ofirluzon/unstable · 5f813035
      Committed by Salvatore Sanfilippo
      SCAN iter parsing changed from atoi to strtoull
    • SCAN iter parsing changed from atoi to strtoull · 11381b09
      Committed by ubuntu
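
      A minimal sketch of why the change matters, with an illustrative helper
      name (not the actual Redis parsing routine): atoi() cannot represent a
      64-bit SCAN cursor and gives no error reporting, while strtoull() parses
      the full unsigned range and lets malformed cursors be rejected.

          #include <errno.h>
          #include <stdio.h>
          #include <stdlib.h>

          /* Parse a SCAN cursor as an unsigned 64-bit value.
           * Returns 0 on success, -1 on a malformed or out-of-range cursor. */
          static int parse_cursor(const char *s, unsigned long long *cursor) {
              char *end;
              errno = 0;
              unsigned long long v = strtoull(s, &end, 10);
              if (errno == ERANGE || end == s || *end != '\0') return -1;
              *cursor = v;
              return 0;
          }

          int main(void) {
              unsigned long long c;

              /* A cursor far beyond INT_MAX: atoi() could never return this value. */
              if (parse_cursor("9007199254740993", &c) == 0)
                  printf("cursor = %llu\n", c);

              /* Malformed input is detected instead of silently becoming a number. */
              if (parse_cursor("12abc", &c) != 0)
                  printf("rejected malformed cursor\n");
              return 0;
          }
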
    • Test: print info on HSTRLEN test failure. · 467de61c
      Committed by antirez
      This additional info may provide more clues about the test randomly
      failing from time to time. Probably the failure is due to some previous
      test that overwrites the logical content in the Tcl variable, but this
      will make the problem more obvious.
  7. 21 Aug 2015, 1 commit
  8. 20 Aug 2015, 1 commit
    • startBgsaveForReplication(): handle waiting slaves state change. · f18e5b63
      Committed by antirez
      Before this commit, after triggering a BGSAVE it was up to the caller of
      startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
      order to update them accordingly. However when the replication target is
      the socket, this is not possible, since the process of updating the
      slaves and sending the FULLRESYNC reply must be coupled with the process
      of starting an RDB save (the reason is, we need to send the FULLRESYNC
      reply and spawn a child that will start to send RDB data to the slaves
      ASAP).
      
      This commit moves the responsibility of handling slaves in
      WAIT_BGSAVE_START to startBgsaveForReplication(), so that for both
      diskless and disk-based replication we have the same chain of
      responsibility. In order to accommodate this change, syncCommand() also
      needs to put the client in the slave list ASAP (just after the initial
      checks) and not at the end, so that startBgsaveForReplication() can find
      the new slave already in the list.
      
      Another related change is what happens if the BGSAVE fails because of
      fork() or other errors: we now remove the slave from the list of slaves
      and send an error, scheduling the slave connection to be terminated.
      
      As a side effect of this change the following errors found by
      Oran Agra are fixed (thanks!):
      
      1. rdbSaveToSlavesSockets() on failed fork will get the slaves cleaned
      up, otherwise they remain in a wrong state forever, since we set them up
      for full resync before actually trying to fork.
      
      2. updateSlavesWaitingBgsave() with the replication target set to
      "socket" was broken, since the function changed the slaves' state from
      WAIT_BGSAVE_START to WAIT_BGSAVE_END via
      replicationSetupSlaveForFullResync(), so the later
      rdbSaveToSlavesSockets() would not find any slave in the right state
      (WAIT_BGSAVE_START) to feed.
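
      A hedged sketch of the new chain of responsibility, with illustrative
      stubs in place of the real replication code: the same function that
      starts the BGSAVE now walks the slaves waiting in WAIT_BGSAVE_START,
      either setting them up for full resync or, if the BGSAVE could not be
      started, replying with an error and scheduling the connection for
      closing.

          enum { WAIT_BGSAVE_START, WAIT_BGSAVE_END };

          typedef struct {
              int replstate;
              int close_asap;              /* schedule this connection for closing */
          } Slave;

          /* Illustrative stubs standing in for the real replication helpers. */
          static int start_bgsave(int to_socket) { (void)to_socket; return 0; }
          static void setup_slave_for_full_resync(Slave *s) {
              s->replstate = WAIT_BGSAVE_END;     /* conceptually: send +FULLRESYNC */
          }
          static void send_error_and_drop(Slave *s) {
              s->close_asap = 1;                  /* conceptually: reply error, close later */
          }

          /* Start a BGSAVE for replication and, in the same function, update every
           * slave waiting in WAIT_BGSAVE_START, for both disk and socket targets. */
          int start_bgsave_for_replication(Slave *slaves, int n, int to_socket) {
              int ok = (start_bgsave(to_socket) == 0);

              for (int i = 0; i < n; i++) {
                  if (slaves[i].replstate != WAIT_BGSAVE_START) continue;
                  if (ok) setup_slave_for_full_resync(&slaves[i]);
                  else    send_error_and_drop(&slaves[i]);
              }
              return ok ? 0 : -1;
          }
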
  9. 07 Aug 2015, 2 commits
  10. 06 Aug 2015, 4 commits
    • flushSlavesOutputBuffers(): details clarified via comments. · 55cb64bb
      Committed by antirez
      Talking with @oranagra we had to reason a little bit to understand
      whether this function could ever flush the output buffers of the wrong
      slaves: slaves that are in the online state but are not actually ready
      to receive writes before the first ACK is received from them (this
      happens with diskless replication).
      
      Next time we'll just read this comment.
    • startBgsaveForReplication(): log what you really do. · ce5761e0
      Committed by antirez
    • Client structure comments improved. · fd08839a
      Committed by antirez
    • Replication: add REPLCONF CAPA EOF support. · 3e6d4d59
      Committed by antirez
      Add the concept of slave capabilities to Redis: the slave now presents
      itself to the Redis master with a set of capabilities in the form:
      
          REPLCONF capa SOMECAPA capa OTHERCAPA ...
      
      This has the effect of setting slave->slave_capa with the corresponding
      SLAVE_CAPA macros that the master can test later to understand whether
      the slave will understand certain formats and protocols of the
      replication process. This makes it much simpler to introduce new
      replication capabilities in the future in a way that doesn't break old
      slaves or masters.
      
      This patch was designed and implemented together with Oran Agra
      (@oranagra).
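
      A hedged sketch of the capability handshake, with an illustrative parser
      (the SLAVE_CAPA_* bit names follow the commit message above): each
      "capa <name>" pair sets a bit in a per-slave bitmask that the master can
      test later; unknown names are simply ignored, which is what keeps old
      and new nodes compatible.

          #include <strings.h>

          /* Capability bits, modeled on the SLAVE_CAPA_* macros named above. */
          #define SLAVE_CAPA_NONE 0
          #define SLAVE_CAPA_EOF  (1<<0)   /* slave understands the diskless EOF-marker format */

          typedef struct { int slave_capa; } Slave;

          /* Parse "REPLCONF capa X capa Y ..." style argument pairs. */
          static void replconf_set_capa(Slave *slave, const char **argv, int argc) {
              for (int i = 0; i + 1 < argc; i += 2) {
                  if (strcasecmp(argv[i], "capa") != 0) continue;
                  if (strcasecmp(argv[i+1], "eof") == 0)
                      slave->slave_capa |= SLAVE_CAPA_EOF;
                  /* unknown capabilities fall through and are ignored */
              }
          }

          /* Later, before choosing the replication format, the master can test:
           *     if (slave->slave_capa & SLAVE_CAPA_EOF) { ... use the EOF format ... }
           */
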
  11. 05 Aug 2015, 9 commits
  12. 04 Aug 2015, 2 commits
    • PSYNC initial offset fix. · 292fec05
      Committed by antirez
      This commit attempts to fix a bug involving PSYNC and diskless
      replication (currently experimental) found by Yuval Inbar from Redis
      Labs, and that was later found to have even more far-reaching effects
      (the bug also exists when diskless replication is off).
      
      The gist of the bug is that a Redis master replies with +FULLRESYNC to
      a PSYNC attempt that fails and requires a full resynchronization.
      However, the baseline offset sent along with FULLRESYNC was always the
      current master replication offset. This is not ok, because there are
      many reasons that may delay the RDB file creation. And... guess what,
      the master offset we communicate must be the one of the time the RDB
      was created. So for example:
      
      1) When the BGSAVE for replication is delayed since there is one
         already in progress, but it is not good for replication.
      2) When the BGSAVE is not needed, as we attach to one currently ongoing.
      3) When, because of diskless replication, the BGSAVE is delayed.
      
      In all the above cases the PSYNC reply is wrong and the slave may
      reconnect later claiming to need a wrong offset: this may cause
      data corruption later.
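
      A hedged sketch of the fix's core idea, with illustrative names: the
      replication offset is snapshotted at the moment the RDB used for the
      full resync is actually started (or attached to), and it is that
      snapshot, not the live master offset, that goes into the +FULLRESYNC
      reply.

          #include <stdio.h>

          /* Illustrative state, not the real server/client structures. */
          typedef struct {
              long long master_repl_offset;   /* keeps moving while writes arrive */
              long long bgsave_offset;        /* offset snapshotted when the RDB started */
          } ReplState;

          /* Called when the BGSAVE used for replication actually starts. */
          static void on_replication_bgsave_start(ReplState *st) {
              st->bgsave_offset = st->master_repl_offset;
          }

          /* Build the +FULLRESYNC reply for a slave that needs a full resync. The
           * offset must describe the dataset inside the RDB being produced, not
           * whatever the master offset happens to be when the reply is sent. */
          static void full_resync_reply(const ReplState *st, const char *runid,
                                        char *buf, size_t buflen) {
              snprintf(buf, buflen, "+FULLRESYNC %s %lld\r\n",
                       runid, st->bgsave_offset);
          }
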
    • Test PSYNC with diskless replication. · d1ff3281
      Committed by antirez
      Thanks to Oran Agra from Redis Labs for providing this patch.
  13. 29 Jul 2015, 2 commits
  14. 28 Jul 2015, 4 commits
    • Support for CLIENT KILL TYPE MASTER. · 3c8861a7
      Committed by antirez
    • CLIENT_MASTER introduced. · e6f39338
      Committed by antirez
    • Force slaves to resync after unsuccessful PSYNC. · c1e94b6b
      Committed by antirez
      Using chained replication, where C is a slave of B which is in turn a
      slave of A, if B reconnects the replication link with A but discovers it
      is no longer possible to PSYNC, the slaves of B must be disconnected and
      not allowed to PSYNC, since the new B dataset may be completely
      different after the synchronization with the master.
      
      Note that there are various semantic differences in the way this is
      handled now compared to the past. In the past the semantics were:
      
      1. When a slave lost the connection with its master, the chained slaves
      were disconnected ASAP. This is not needed, since after a successful
      PSYNC with the master, the slaves can continue and don't need to resync
      in turn.
      
      2. However, after a failed PSYNC the replication backlog was not reset,
      so a slave was able to PSYNC successfully even if the instance did a
      full sync with its master, now containing an entirely different data set.
      
      Now, instead, chained slaves are not disconnected when the slave loses
      the connection with its master, but only when it is forced to do a full
      SYNC with its master. This means that if the slave having chained slaves
      does a successful PSYNC, all its slaves can continue without trouble.
      
      See issue #2694 for more details.
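
      A hedged sketch of the new semantics, with illustrative stubs: chained
      slaves survive both a dropped link and a successful PSYNC, and are
      disconnected (with the backlog discarded) only when the intermediate
      slave has to perform a full SYNC, since its dataset may then change
      completely.

          #include <stdbool.h>
          #include <stdio.h>

          /* Illustrative stubs for the real replication primitives. */
          static bool try_psync_with_master(void)     { return false; /* pretend PSYNC failed */ }
          static void disconnect_chained_slaves(void) { puts("disconnecting sub-slaves"); }
          static void free_replication_backlog(void)  { puts("discarding backlog"); }
          static void full_sync_with_master(void)     { puts("full SYNC with master"); }

          /* Reconnection logic of an intermediate slave (B in the A -> B -> C chain):
           * sub-slaves are kept across a partial resync, but a full SYNC means our
           * dataset may become entirely different, so they are dropped and the
           * backlog is reset so no stale PSYNC can succeed against it. */
          void reconnect_with_master(void) {
              if (try_psync_with_master()) {
                  /* Partial resync: chained slaves keep streaming from us. */
                  return;
              }
              disconnect_chained_slaves();
              free_replication_backlog();
              full_sync_with_master();
          }
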
    • 278ea9d1