1. 03 Jun, 2015 1 commit
  2. 29 May, 2015 1 commit
  3. 06 Feb, 2015 2 commits
    • A
      Port diskless replication to Windows · b93dc1c7
      Committed by Alexis Campailla
      During diskless replication the master forks a child, which on posix
      simply inherits the socket file descriptors for the connections to
      the slaves.
      A unix pipe is also used for the child to report the results back
      to the master.
      
      The bulk of the porting work is in making sure that the socket
      file descriptors and pipe file descriptor are propagated correctly
      from the master to its child.
      b93dc1c7
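      The POSIX mechanism being ported can be sketched as follows (illustrative names, not the actual replication.c code): the parent creates a pipe, forks, and the child, which inherits the open descriptors, reports its result back through the pipe.

```c
#include <stdint.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal sketch of the POSIX flow the commit ports to Windows:
 * the forked child inherits open file descriptors (the slaves'
 * sockets, in Redis) and reports its result back to the master
 * through a unix pipe it also inherited. */
int run_diskless_child_sketch(void) {
    int report[2];                  /* child -> parent result pipe */
    if (pipe(report) == -1) return -1;

    pid_t pid = fork();
    if (pid == -1) return -1;

    if (pid == 0) {                 /* child: inherited report[1] */
        uint8_t ok = 1;             /* pretend the transfer succeeded */
        if (write(report[1], &ok, 1) != 1) _exit(1);
        _exit(0);
    }

    /* parent: read the child's result byte */
    close(report[1]);
    uint8_t result = 0;
    if (read(report[0], &result, 1) != 1) result = 0;
    close(report[0]);
    waitpid(pid, NULL, 0);
    return result;                  /* 1 on reported success */
}
```

      On Windows there is no fork(), so the porting work is exactly in recreating this descriptor inheritance explicitly.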
    • A
      Refactor BeginForkOperation · 5677f532
      Committed by Alexis Campailla
      Refactor BeginForkOperation in preparation for diskless replication:
      - Separate copying of operation data and child process creation
      - Provide specific entry points for each operation type
      5677f532
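      The shape of the refactor can be sketched as a per-operation record holding a data-copying step and a child entry point, so the generic begin-fork driver no longer special-cases operation types (identifiers here are illustrative, not the Windows port's actual names):

```c
#include <string.h>

/* Illustrative sketch: each fork operation type supplies its own
 * data-copying step and child entry point. */
typedef struct {
    void (*copy_operation_data)(void *dst, const void *src, size_t n);
    int  (*child_entry)(const void *data);
} fork_operation;

static void copy_rdb_data(void *dst, const void *src, size_t n) {
    memcpy(dst, src, n);            /* copy what the child will need */
}
static int rdb_child_entry(const void *data) {
    (void)data;
    return 0;                       /* would write the RDB file */
}

/* Generic driver: copy the operation data, then run the entry point. */
int begin_fork_operation_sketch(const fork_operation *op,
                                void *dst, const void *src, size_t n) {
    op->copy_operation_data(dst, src, n);
    return op->child_entry(dst);
}
```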
  4. 23 Dec, 2014 1 commit
    • A
      INFO loading stats: three fixes. · 22a0fe8d
      Committed by antirez
      1. Server unixtime may remain stale while loading the AOF, so the
      ETA is not updated correctly.
      
      2. The number of processed bytes was not initialized.
      
      3. Possible division by zero condition (likely cause of issue #1932).
      22a0fe8d
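      The pattern behind the third fix, guarding an ETA computation against a zero rate, might look like this (simplified sketch, not the actual rdb.c code):

```c
/* Sketch of an ETA computation guarded against division by zero:
 * if no bytes have been processed yet, the rate is zero and a naive
 * remaining/rate would fault. Returns -1 when no estimate exists. */
long eta_seconds_sketch(long processed_bytes, long total_bytes,
                        long elapsed_seconds) {
    if (processed_bytes <= 0 || elapsed_seconds <= 0) return -1;
    long rate = processed_bytes / elapsed_seconds;  /* bytes/sec */
    if (rate == 0) return -1;       /* too slow to estimate safely */
    return (total_bytes - processed_bytes) / rate;
}
```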
  5. 22 Dec, 2014 1 commit
  6. 04 Dec, 2014 1 commit
    • A
      Issue 173: add child process log messages to main log file · 7827f413
      Committed by Alexis Campailla
      Slave processes were not using the master process log file.
      On Unix this relies on the server.logfile variable being available
      to the slave processes through fork(), and on reopening the logfile
      in the slaves (on every log event).
      On Windows we don't use server.logfile and require an explicit call
      to setLogFile.
      I resorted to explicitly passing the logfile to the slaves as a
      command line argument, so the logfile argument (and logging) can be
      available to the slave before qfork and globals setup have completed.
      
      Writing to the same file atomically from multiple processes requires
      using CreateFile with FILE_APPEND_DATA instead of fopen, which
      provides atomicity on Unix but not on Windows.
      
      Also changed the implementation to not reopen the logfile on every
      log event, and to not flush the file on every write. Performance is
      dramatically improved this way.
      7827f413
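      The Unix-side atomic append the commit contrasts with comes from opening the log in append mode, which makes each write(2) land at end-of-file regardless of other writers; CreateFile with FILE_APPEND_DATA is the Windows analogue. A POSIX sketch:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Sketch: open the logfile with O_APPEND so concurrent writers
 * cannot interleave within a single write(2); POSIX moves the
 * offset to end-of-file atomically for each append-mode write. */
int append_log_line(const char *path, const char *line) {
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1) return -1;
    ssize_t n = write(fd, line, strlen(line));
    close(fd);
    return n == (ssize_t)strlen(line) ? 0 : -1;
}
```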
  7. 29 Oct, 2014 12 commits
  8. 06 Oct, 2014 1 commit
    • Z
      Fix incorrect comments · b72422e1
      Committed by zionwu
      error != success; and 0 != number of bytes written
      
      Closes #1806
      b72422e1
  9. 27 Aug, 2014 3 commits
  10. 28 Jul, 2014 1 commit
  11. 10 Jul, 2014 2 commits
  12. 27 Jun, 2014 1 commit
  13. 22 May, 2014 1 commit
    • A
      Process events with processEventsWhileBlocked() when blocked. · f4823497
      Committed by antirez
      When we are blocked and a few events are processed from time to
      time, it is smarter to call the event handler a few times in order
      to handle the accept, read, write, close cycle of a client in a
      single pass; otherwise too much latency is added before clients
      receive a reply while the server is busy in some way (for example
      during DB loading).
      f4823497
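      The idea can be modeled as a loop that drains up to a few pending events per invocation instead of one, so a client's whole request/reply cycle can finish in a single pass (a toy model, not the actual ae.c event loop API):

```c
/* Toy model of processing events in small batches while blocked:
 * each call handles up to `max_events` queued events, letting a
 * client's accept/read/write/close cycle complete in one pass. */
typedef struct {
    int pending;        /* events waiting to be handled */
    int handled;        /* events handled so far */
} toy_event_loop;

int process_events_while_blocked_sketch(toy_event_loop *loop,
                                        int max_events) {
    int n = 0;
    while (loop->pending > 0 && n < max_events) {
        loop->pending--;            /* "handle" one event */
        loop->handled++;
        n++;
    }
    return n;                       /* events processed this pass */
}
```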
  14. 12 May, 2014 2 commits
  15. 17 Apr, 2014 1 commit
  16. 16 Apr, 2014 1 commit
  17. 25 Mar, 2014 1 commit
    • M
      Fix data loss when saving AOF/RDB with no free space · 88c6c669
      Committed by Matt Stancliff
      Previously, the (!fp) check would only catch lack of free space
      on OS X. Linux waits to discover it can't write until it actually
      writes contents to disk.
      
      (fwrite() returns success even if the underlying file
      has no free space to write into.  All the errors
      only show up at flush/sync/close time.)
      
      Fixes antirez/redis#1604
      88c6c669
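      The failure mode is that fwrite() only copies into a stdio buffer, so a full disk can surface only at flush or close time; a save path therefore has to check all three return values. A simplified sketch (not the actual Redis save code):

```c
#include <stdio.h>
#include <string.h>

/* Sketch: on Linux, fwrite() can report success while the data
 * still sits in the stdio buffer; ENOSPC only shows up when
 * fflush() or fclose() pushes bytes to the kernel, so every step
 * must be checked before declaring the save successful. */
int save_payload_sketch(const char *path, const char *payload) {
    FILE *fp = fopen(path, "w");
    if (!fp) return -1;                      /* open failure */
    size_t len = strlen(payload);
    if (fwrite(payload, 1, len, fp) != len) {    /* may "succeed" on a full disk */
        fclose(fp);
        return -1;
    }
    if (fflush(fp) != 0) { fclose(fp); return -1; } /* ENOSPC can appear here */
    if (fclose(fp) != 0) return -1;                 /* ...or here */
    return 0;
}
```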
  18. 13 Feb, 2014 1 commit
    • A
      Update cached time in rdbLoad() callback. · 85492dcf
      Committed by antirez
      server.unixtime and server.mstime are cached, less precise
      timestamps that we use whenever we don't need an accurate time
      representation and a syscall would be too slow for the number of
      calls we require.
      
      One example is the initialization and update of the last
      interaction time with the client, which is used for timeouts.
      
      However rdbLoad() can take some time to load the DB, yet it did
      not update the cached time during loading. This resulted in the
      bug described in issue #1535, where during replication the slave
      loads the DB, creates the redisClient representation of its
      master, but the timestamp is so old that the master, under certain
      conditions, is sensed as already "timed out".
      
      Thanks to @yoav-steinberg and Redis Labs Inc for the bug report and
      analysis.
      85492dcf
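      The shape of the fix: the load loop periodically invokes a callback that refreshes the cached timestamp, so anything created mid-load (like the master's redisClient) gets a fresh time. A simplified model, not the actual rdb.c callback:

```c
#include <time.h>

/* Simplified model of a cached timestamp refreshed from a loading
 * callback: a long-running load invokes the callback every N items
 * so a server.unixtime-style cache never goes stale for the whole
 * duration of the load. */
static time_t cached_unixtime;

static void loading_progress_callback(void) {
    cached_unixtime = time(NULL);   /* refresh the cached clock */
}

/* Pretend to load `items` entries, ticking the callback every 100. */
time_t load_db_sketch(int items) {
    loading_progress_callback();    /* initialize the cache */
    for (int i = 0; i < items; i++) {
        if (i % 100 == 0) loading_progress_callback();
    }
    return cached_unixtime;
}
```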
  19. 08 Jan, 2014 1 commit
  20. 03 Jan, 2014 1 commit
  21. 11 Dec, 2013 2 commits
    • A
      Slaves heartbeats during sync improved. · 563d6b3f
      Committed by antirez
      The previous fix for false positive timeouts detected by the
      master was not complete. There is another blocking stage while
      loading data for the first synchronization with the master:
      flushing away the current data from the DB memory.
      
      This commit uses the newly introduced dict.c callback in order to make
      some incremental work (to send "\n" heartbeats to the master) while
      flushing the old data from memory.
      
      Unfortunately it is hard to write a regression test for this
      issue: the Redis core would need more debugging support, such as
      facilities to simulate a slow DB loading / deletion.
      563d6b3f
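      The dict.c callback mechanism can be modeled as a flush loop that fires a user callback every fixed number of deleted entries, giving the caller a hook to emit its "\n" heartbeat mid-flush (a toy model with illustrative names):

```c
/* Toy model of flushing a dataset with a periodic callback, as the
 * commit does via the dict.c callback: every `period` deletions the
 * callback fires, so the slave can send "\n" heartbeats while the
 * old data is being freed. */
typedef void (*flush_callback)(void *privdata);

static void count_heartbeats(void *privdata) {
    (*(int *)privdata)++;           /* stand-in for sending "\n" */
}

long flush_with_callback_sketch(long entries, long period,
                                flush_callback cb, void *privdata) {
    long fired = 0;
    for (long i = 0; i < entries; i++) {
        /* ... free entry i ... */
        if (cb && i % period == 0) { cb(privdata); fired++; }
    }
    return fired;   /* number of heartbeat opportunities */
}
```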
    • A
      Don't send more than 1 newline/sec while loading RDB. · 303cc97f
      Committed by antirez
      303cc97f
  22. 10 Dec, 2013 1 commit
    • A
      Slaves heartbeat while loading RDB files. · 75bf5a4a
      Committed by antirez
      Starting with Redis 2.8 masters are able to detect timed out slaves,
      while before 2.8 only slaves were able to detect a timed out master.
      
      Now that timeout detection is bi-directional the following problem
      happens as described "in the field" by issue #1449:
      
      1) Master and slave setup with big dataset.
      2) Slave performs the first synchronization, or a full sync
         after a failed partial resync.
      3) Master sends the RDB payload to the slave.
      4) Slave loads this payload.
      5) Master detects the slave as timed out since it does not receive
         back the REPLCONF ACK acknowledgements.
      
      Here the problem is that the master has no way to know how long
      the slave will take to load the RDB file into memory. The obvious
      solution is to use a greater replication timeout setting, but this
      is a shame since for the 0.1% of operation time we are forced to
      use a timeout that is not suited for the other 99.9%.
      
      This commit tries to fix the problem with a solution that is a bit
      of a hack, but that modifies little of the replication internals,
      so it can be safely back ported to 2.8.
      
      During RDB loading, we send the master newlines to avoid being
      sensed as timed out. This is the same thing the master already
      does while saving the RDB file, to keep signaling its presence to
      the slave.
      
      The single newline is used because:
      
      1) It can't desync the protocol, as it is transmitted all or
      nothing.
      2) It can be safely sent even when we don't have a client
      structure for the master, or in similar situations, with just
      write(2).
      75bf5a4a
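      The heartbeat itself is just a raw write(2) of a single byte, which needs no client structure and cannot split a reply; a sketch of the sender side (the real code writes to the master's socket):

```c
#include <unistd.h>

/* Sketch of the heartbeat: a single "\n" written straight to the
 * master's connection with write(2). One byte is transmitted all
 * or nothing, so it cannot desync the replication protocol, and no
 * client structure is needed to send it. */
int send_newline_heartbeat(int fd) {
    return write(fd, "\n", 1) == 1 ? 0 : -1;
}
```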
  23. 05 Dec, 2013 1 commit