1. 15 January 2014, 2 commits
    • Cluster: ignore empty lines in nodes.conf. · fb659cd3
      antirez committed
      Even if the user never edits the file manually, it is still possible to
      have blank lines (just a single "\n" per line) because of how the
      nodes.conf update/write process works.
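      The check involved is trivial; below is a minimal sketch of a parsing
      loop that skips such blank lines. The function and file handling are
      illustrative only, not the actual cluster config loading code.

          /* Sketch: skip blank lines while loading a nodes.conf-style file. */
          #include <stdio.h>

          void load_nodes_conf(const char *path) {
              char line[1024];
              FILE *fp = fopen(path, "r");
              if (fp == NULL) return;
              while (fgets(line, sizeof(line), fp) != NULL) {
                  /* A line holding only "\n" (or "\r\n") carries no node
                   * information: ignore it instead of failing. */
                  if (line[0] == '\n' || line[0] == '\r') continue;
                  /* ... parse the node description here ... */
              }
              fclose(fp);
          }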
    • Cluster: atomic update of nodes.conf file. · 6c63df30
      antirez committed
      The way the file was generated was unsafe and led to nodes.conf file
      corruption (a zero-length file) if the server was stopped or crashed
      while the file was being created.
      
      The previous file update method was as simple as an open with O_TRUNC
      followed by the write call. While the write was a single call with the
      full payload, ensuring no half-written files under POSIX semantics,
      stopping the server just after the open call resulted in a zero-length
      file (all the node information lost!).
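      One common way to make such a rewrite crash-safe is to write a temporary
      file and rename() it into place; the sketch below shows that pattern. It
      is illustrative only and not necessarily the exact approach taken by
      this commit.

          /* Sketch: atomically replace a config file. rename() is atomic on
           * POSIX, so readers see either the old file or the complete new
           * one, never a truncated or half-written file. */
          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>
          #include <fcntl.h>

          int save_config_atomically(const char *path, const char *payload) {
              char tmppath[256];
              snprintf(tmppath, sizeof(tmppath), "%s.tmp-%d", path, (int)getpid());

              int fd = open(tmppath, O_WRONLY | O_CREAT | O_TRUNC, 0644);
              if (fd == -1) return -1;

              size_t len = strlen(payload);
              if (write(fd, payload, len) != (ssize_t)len || fsync(fd) == -1) {
                  close(fd);
                  unlink(tmppath);
                  return -1;
              }
              close(fd);

              if (rename(tmppath, path) == -1) {
                  unlink(tmppath);
                  return -1;
              }
              return 0;
          }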
  2. 14 January 2014, 4 commits
  3. 13 January 2014, 3 commits
  4. 10 January 2014, 9 commits
  5. 09 January 2014, 4 commits
  6. 08 January 2014, 2 commits
    • Don't send REPLCONF ACK to old masters. · 90a81b4e
      antirez committed
      Masters that do not understand REPLCONF ACK will reply to our requests
      with errors, causing a number of possible issues.
      
      This commit detects a global replication offset set to -1 at the end of
      the replication, and marks the client representing the master with the
      REDIS_PRE_PSYNC flag.
      
      Note that this flag was called REDIS_PRE_PSYNC_SLAVE but now it is just
      REDIS_PRE_PSYNC as it is used for both slaves and masters starting with
      this commit.
      
      This commit fixes issue #1488.
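      A minimal, self-contained sketch of the check described above; the
      structure and helper names are illustrative, and only the
      REDIS_PRE_PSYNC flag name comes from the commit message.

          #include <stdint.h>

          #define REDIS_PRE_PSYNC (1 << 0)   /* illustrative flag value */

          typedef struct {
              uint64_t flags;
          } fake_master_client;

          /* At the end of the initial synchronization, a master that never
           * advertised a replication offset (still -1) predates PSYNC, so it
           * is flagged and REPLCONF ACK is never sent to it. */
          void mark_if_pre_psync(fake_master_client *master, long long offset) {
              if (offset == -1) master->flags |= REDIS_PRE_PSYNC;
          }

          int should_send_replconf_ack(const fake_master_client *master) {
              return (master->flags & REDIS_PRE_PSYNC) == 0;
          }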
    • Clarify a comment in slaveTryPartialResynchronization(). · 3f92e056
      antirez committed
  7. 26 December 2013, 6 commits
  8. 23 December 2013, 2 commits
    • Fix CONFIG REWRITE handling of unknown options. · e7893842
      antirez committed
      There were two problems with the implementation.
      
      1) "save" was not correctly processed when no save point was configured,
         as reported in issue #1416.
      2) The way the code checked if an option existed in the "processed"
         dictionary was wrong, as we add the element with as a key associated
         with a NULL value, so dictFetchValue() can't be used to check for
         existance, but dictFind() must be used, that returns NULL only if the
         entry does not exist at all.
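      The sketch below illustrates the dictFetchValue() vs dictFind()
      distinction, assuming the dict API from Redis' dict.h; the surrounding
      function is hypothetical.

          #include "dict.h"   /* Redis hash table API (assumed available) */

          void mark_and_check(dict *processed, char *option) {
              /* The option is recorded with a NULL value on purpose. */
              dictAdd(processed, option, NULL);

              /* Wrong: dictFetchValue() returns NULL both when the key is
               * missing and when it exists with a NULL value, so this test
               * can never detect that the option was already processed. */
              if (dictFetchValue(processed, option) != NULL) { /* never true */ }

              /* Right: dictFind() returns a non-NULL entry whenever the key
               * exists, regardless of the associated value. */
              if (dictFind(processed, option) != NULL) {
                  /* option already processed */
              }
          }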
    • Configuring port to 0 disables IP socket as specified. · 7e9433ce
      antirez committed
      This was no longer the case in 2.8 because of a bug introduced with the
      IPv6 support. Now it is fixed.
      
      This fixes issues #1287 and #1477.
  9. 22 December 2013, 3 commits
    • Make new masters inherit replication offsets. · 94e8c9e7
      antirez committed
      Currently replication offsets can be used only in a limited way to tell,
      out of a set of slaves, which one has the most up-to-date data. For
      example this comparison is possible if N slaves were all replicating
      from the same master.
      
      However the replication offset was not transferred in any way from a
      master to the slaves that are later promoted to masters, so for instance
      with three instances A, B and C, where A is the master and B and C
      replicate from A, the following could happen:
      
      C disconnects from A.
      B is turned into a master.
      A is switched to replicate from B (B is now its master).
      B receives some writes.
      
      In this context there was no way to compare the offsets of A and C,
      because B would use its own local master replication offset to
      initialize the replication with A.
      
      With this commit, when B is turned into a master it inherits the
      replication offset from A, making A and C comparable. In the above case,
      assuming no inconsistencies are created during the disconnection and
      failover process, A will show a replication offset greater than C's.
      
      Note that this does not mean offsets are always enough to tell which
      instance, in a set of instances, is the most up to date, since in more
      complex examples the replica with the highest replication offset could
      be partitioned away when the instance to elect as the new master is
      picked. However this in general improves the ability of a system to
      pick a good replica to promote to master.
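      A minimal sketch of the inheritance described above, with illustrative
      structure and function names (this is not the actual replication code):

          /* When a slave is promoted, keep the offset reached while
           * replicating from the old master instead of starting from the
           * node's own, unrelated, master offset. This keeps its offset
           * comparable with those of its former siblings. */
          typedef struct {
              long long master_repl_offset;   /* offset exposed as a master */
              long long slave_repl_offset;    /* stream processed as a slave */
              int is_slave;
          } fake_server_state;

          void promote_to_master(fake_server_state *srv) {
              if (srv->is_slave) {
                  srv->master_repl_offset = srv->slave_repl_offset;
                  srv->is_slave = 0;
              }
          }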
    • Slave disconnection is an event worth logging. · ba5eb44d
      antirez committed
    • Redis Cluster: add repl_ping_slave_period to slave data validity time. · 66ec1412
      antirez committed
      When the configured node timeout is very small, the data validity time
      (the maximum data age for a slave to still attempt a failover), which is
      ten times the configured node timeout, becomes too short when the
      replication link with the master is mostly idle. In that case the slave
      receives some data from the master only every
      server.repl_ping_slave_period seconds, which is what refreshes the last
      interaction with the master.
      
      This commit adds the slave ping period to the max data validity time, to
      avoid slaves considering their data too old without a good reason.
      However this max data validity time should probably become a setting
      that the Redis Cluster user can configure completely independently from
      the node timeout.
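      A sketch of the resulting freshness check, using the ten-times
      multiplier from the message; names, units and structure are illustrative
      and not the exact Redis Cluster code (all times in milliseconds):

          #include <stdbool.h>

          bool slave_data_is_fresh_enough(long long data_age_ms,
                                          long long node_timeout_ms,
                                          long long repl_ping_slave_period_ms) {
              /* Before this change the limit was just node_timeout * 10,
               * which can be shorter than the natural idle gap between
               * master pings when the node timeout is tiny. */
              long long max_age_ms = node_timeout_ms * 10
                                     + repl_ping_slave_period_ms;
              return data_age_ms <= max_age_ms;
          }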
  10. 21 December 2013, 2 commits
  11. 20 December 2013, 3 commits