1. 22 Dec 2013, 1 commit
    • Redis Cluster: add repl_ping_slave_period to slave data validity time. · 66ec1412
      Committed by antirez
      When the configured node timeout is very small, the data validity time
      (the maximum data age for a slave to attempt a failover), set to ten
      times the configured node timeout, is too short when the replication
      link with the master is mostly idle: in that case the slave only
      receives data from the master every server.repl_ping_slave_period
      seconds to refresh the time of the last interaction with the master.
      
      This commit adds the slave ping period to the maximum data validity
      time, to avoid slaves sensing their data as too old without a good
      reason. However, this maximum data validity time should probably be a
      setting the Redis Cluster user can configure independently of the
      node timeout.
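The validity check described above can be sketched as follows. This is an illustrative C sketch, not the exact Redis source: the names `SLAVE_VALIDITY_MULT`, `max_data_age`, and `slave_data_is_valid` are assumptions, and all times are in milliseconds.

```c
/* Illustrative sketch of the slave data validity check: a slave
 * refuses to start a failover when its data is older than the
 * allowed maximum. Names are hypothetical, not the Redis source. */
#define SLAVE_VALIDITY_MULT 10 /* ten times the node timeout */

/* Maximum data age, in milliseconds. Before this commit the limit
 * was node_timeout * SLAVE_VALIDITY_MULT alone; the ping period is
 * now added so an idle replication link does not make otherwise
 * fresh data look stale. */
long long max_data_age(long long node_timeout_ms,
                       long long repl_ping_slave_period_ms) {
    return node_timeout_ms * SLAVE_VALIDITY_MULT +
           repl_ping_slave_period_ms;
}

/* Returns 1 if the slave may try a failover, 0 otherwise. */
int slave_data_is_valid(long long data_age_ms,
                        long long node_timeout_ms,
                        long long repl_ping_slave_period_ms) {
    return data_age_ms <= max_data_age(node_timeout_ms,
                                       repl_ping_slave_period_ms);
}
```

With a 200 ms node timeout and a 10 s ping period, the limit becomes 12 s instead of 2 s, which matches the problem the commit describes: the old ten-times-timeout limit could be shorter than a single idle-ping interval.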
  2. 21 Dec 2013, 1 commit
  3. 20 Dec 2013, 4 commits
  4. 17 Dec 2013, 6 commits
  5. 05 Dec 2013, 1 commit
  6. 02 Dec 2013, 1 commit
  7. 30 Nov 2013, 1 commit
  8. 29 Nov 2013, 1 commit
  9. 09 Nov 2013, 4 commits
  10. 08 Nov 2013, 2 commits
  11. 05 Nov 2013, 1 commit
  12. 11 Oct 2013, 1 commit
  13. 09 Oct 2013, 4 commits
  14. 08 Oct 2013, 1 commit
    • Cluster: masters don't vote for a slave with stale config. · ae2763f5
      Committed by antirez
      When a slave requests our vote, the configEpoch it claims for its
      master and its set of served slots must be greater than or equal to
      the configEpoch of the nodes serving those slots in the current
      configuration of the master granting the vote.
      
      In other words, masters don't vote for slaves that have a stale
      configuration for the slots they want to serve.
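The voting rule above can be sketched like this. It is a minimal illustration under assumed names (`voter_view`, `can_grant_vote`), not the actual Redis vote-granting code, which also checks other conditions such as currentEpoch and vote rate limiting.

```c
/* Illustrative sketch of the vote check: a master grants its vote
 * only if, for every slot the requesting slave claims on behalf of
 * its failed master, the claimed configEpoch is >= the configEpoch
 * of the node currently serving that slot in the voter's own view.
 * Names are hypothetical, not the Redis source. */
#define NUM_SLOTS 16384

typedef struct {
    /* configEpoch of the current owner of each hash slot. */
    unsigned long long slot_epoch[NUM_SLOTS];
} voter_view;

/* Returns 1 to grant the vote, 0 to refuse it. */
int can_grant_vote(const voter_view *view,
                   const int *claimed_slots, int num_claimed,
                   unsigned long long claimed_epoch) {
    for (int i = 0; i < num_claimed; i++) {
        int slot = claimed_slots[i];
        if (claimed_epoch < view->slot_epoch[slot])
            return 0; /* stale configuration for this slot: refuse */
    }
    return 1;
}
```

The point of the rule is safety: a slave whose view of slot ownership is older than the voter's own view must not win an election for those slots.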
  15. 07 Oct 2013, 3 commits
  16. 03 Oct 2013, 1 commit
    • Cluster: new clusterDoBeforeSleep() API. · 7afc0dd5
      Committed by antirez
      The new API can remember operations to perform before returning to
      the event loop, such as checking whether a slave has reached the
      failover quorum, or saving and fsyncing the configuration file.
      
      Because these operations are performed before returning to the event
      loop, we are sure that messages sent during the same event loop run
      are delivered *after* the configuration is saved, which is sometimes
      a requirement: for instance, we want to publish a new epoch only once
      it is stored in nodes.conf, to avoid moving the logical clock
      backwards when a node is restarted.
      
      This new API provides a big performance advantage compared to saving and
      possibly fsyncing the configuration file multiple times in the same
      event loop run, especially in the case of big clusters with tens or
      hundreds of nodes.
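The batching pattern behind the API can be sketched as follows. This is a hedged sketch with assumed names (`TODO_*` flags, `config_saves` counter), not the actual Redis implementation: work requested during event handling is recorded as bit flags and executed once, just before the process goes back to waiting for events.

```c
/* Illustrative sketch of a "do before sleep" flag set. Several
 * requests in one event-loop run collapse into a single save (and
 * possibly fsync) of the configuration file. Names are
 * hypothetical, not the Redis source. */
#define TODO_HANDLE_FAILOVER (1 << 0)
#define TODO_SAVE_CONFIG     (1 << 1)
#define TODO_FSYNC_CONFIG    (1 << 2)

static int before_sleep_todo = 0;
static int config_saves = 0; /* counts actual saves, for illustration */

/* Called any number of times while processing events: it only
 * remembers the work to do, which is cheap. */
void cluster_do_before_sleep(int flags) {
    before_sleep_todo |= flags;
}

/* Called once before re-entering the event loop wait: performs the
 * accumulated work exactly once, then clears the flags. */
void cluster_before_sleep(void) {
    if (before_sleep_todo & TODO_SAVE_CONFIG)
        config_saves++; /* save (and, if requested, fsync) once */
    before_sleep_todo = 0;
}
```

This is why the commit message claims a large performance advantage for big clusters: many nodes generating events in one event-loop run still cost only one configuration save instead of one per event.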
  17. 02 Oct 2013, 3 commits
  18. 01 Oct 2013, 2 commits
  19. 30 Sep 2013, 2 commits