1. 21 Jun, 2014 (2 commits)
  2. 22 May, 2014 (2 commits)
    • Process events with processEventsWhileBlocked() when blocked. · f4823497
      Committed by antirez
      When we are blocked and a few events are processed from time to time, it
      is smarter to call the event handler a few times in order to handle the
      accept, read, write, close cycle of a client in a single pass; otherwise
      too much latency is added before clients receive a reply while the
      server is busy in some way (for example during DB loading).
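
      A minimal sketch of the idea, assuming Redis's ae.h event loop API
      (aeProcessEvents(), AE_FILE_EVENTS, AE_DONT_WAIT); the function name and
      the iteration count below are illustrative, not the actual implementation:

        #include "ae.h"   /* Redis event loop API: aeProcessEvents(), AE_* flags */

        /* Run a handful of non-blocking event loop passes while the server is
         * otherwise busy, so a single client can go through its whole
         * accept/read/write/close cycle instead of advancing one step per call. */
        void processEventsWhileBlockedSketch(aeEventLoop *el) {
            int iterations = 4; /* illustrative: a few passes per invocation */
            while (iterations--) {
                /* Only file events, and never block waiting inside the poll call. */
                int processed = aeProcessEvents(el, AE_FILE_EVENTS | AE_DONT_WAIT);
                if (processed == 0) break; /* nothing left to do right now */
            }
        }
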
    • Accept multiple clients per iteration. · f3d3c606
      Committed by antirez
      When the listening socket's readable event is fired, we have the chance
      to accept multiple clients instead of accepting a single one. This makes
      Redis more responsive when there is a mass-connect event (for example
      after the server startup), and in workloads where a connect-disconnect
      pattern is used often, so that multiple clients are continuously waiting
      to be accepted.
      
      As a side effect, this commit makes the LOADING, BUSY, and similar
      errors much faster to deliver to the client, making Redis more
      responsive when it needs to return errors to inform clients that the
      server is blocked in a non-interruptible operation.
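
      A minimal sketch of the accept loop using plain POSIX accept(); the
      handler name and the per-event cap are illustrative, not Redis's actual
      code:

        #include <sys/socket.h>
        #include <errno.h>

        #define MAX_ACCEPTS_PER_CALL 1000  /* illustrative cap per readable event */

        /* Drain up to a fixed number of pending connections each time the
         * listening socket is reported readable, instead of accepting one. */
        static void accept_handler(int listen_fd) {
            int remaining = MAX_ACCEPTS_PER_CALL;
            while (remaining--) {
                int cfd = accept(listen_fd, NULL, NULL);
                if (cfd == -1) {
                    /* EAGAIN/EWOULDBLOCK: no more pending connections for now. */
                    if (errno == EAGAIN || errno == EWOULDBLOCK) return;
                    return; /* other errors: give up for this event as well */
                }
                /* Hand cfd to the normal client-creation path here. */
            }
        }
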
  3. 23 Apr, 2014 (1 commit)
  4. 10 Mar, 2014 (1 commit)
  5. 25 Jan, 2014 (1 commit)
  6. 26 Dec, 2013 (1 commit)
  7. 22 Dec, 2013 (1 commit)
  8. 21 Dec, 2013 (1 commit)
  9. 11 Dec, 2013 (1 commit)
  10. 10 Dec, 2013 (2 commits)
    • Slaves heartbeat while loading RDB files. · 75bf5a4a
      Committed by antirez
      Starting with Redis 2.8 masters are able to detect timed out slaves,
      while before 2.8 only slaves were able to detect a timed out master.
      
      Now that timeout detection is bi-directional, the following problem can
      happen, as described "in the field" in issue #1449:
      
      1) Master and slave setup with big dataset.
      2) Slave performs the first synchronization, or a full sync
         after a failed partial resync.
      3) Master sends the RDB payload to the slave.
      4) Slave loads this payload.
      5) Master detects the slave as timed out since it does not receive back
         the REPLCONF ACK acknowledgements.
      
      Here the problem is that the master has no way to know how long the
      slave will take to load the RDB file into memory. The obvious solution
      is to use a greater replication timeout setting, but this is a shame
      since, for the 0.1% of operation time, we would be forced to use a
      timeout that is not suited for the other 99.9% of operation time.
      
      This commit tries to fix the problem with a solution that is a bit of
      a hack, but that modifies little of the replication internals, so that
      it can be back-ported to 2.8 safely.
      
      During the RDB loading time, the slave sends the master newlines to
      avoid being detected as timed out. This is the same thing the master
      already does while saving the RDB file, to keep signaling its presence
      to the slave.
      
      The single newline is used because:
      
      1) It can't desync the protocol, as it is only transmitted all or
      nothing.
      2) It can be safely sent, just with write(2), while we don't have a
      client structure for the master or in similar situations.
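
      A minimal sketch of the heartbeat, assuming a hook called periodically
      from the RDB loading loop and a raw file descriptor connected to the
      master; the names and the interval are illustrative:

        #include <unistd.h>
        #include <time.h>

        #define NEWLINE_PERIOD 1 /* seconds between heartbeat newlines (illustrative) */

        /* Send a single "\n" to the master from time to time while loading the
         * RDB payload, so the master keeps seeing traffic and does not flag the
         * slave as timed out. A lone newline cannot desync the protocol and
         * needs no client structure, just write(2). */
        void rdb_loading_heartbeat(int master_fd) {
            static time_t last_newline = 0;
            time_t now = time(NULL);
            if (now - last_newline >= NEWLINE_PERIOD) {
                last_newline = now;
                /* Best effort: if this fails the replication link is broken anyway. */
                if (write(master_fd, "\n", 1) == -1) { /* ignore */ }
            }
        }
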
    • Handle inline requests terminated with just \n. · 8d0083ba
      Committed by antirez
  11. 05 Dec, 2013 (1 commit)
  12. 03 Dec, 2013 (2 commits)
  13. 04 Oct, 2013 (1 commit)
    • Replication: fix master timeout. · d7fa6d9a
      Committed by antirez
      Since we started sending REPLCONF ACK from slaves to masters, the
      lastinteraction field of the client structure is always refreshed as
      soon as there is room in the socket output buffer, so timed-out masters
      are detected with too much delay (the socket buffer takes a long time
      to be filled by the small REPLCONF ACK <number> entries).
      
      This commit only counts data received as interactions with a master,
      solving the issue.
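
      A minimal sketch of the rule, with illustrative names: only data
      actually received from the master link refreshes the timeout clock,
      while writing an ACK does not:

        #include <time.h>

        typedef struct link {
            time_t lastinteraction; /* last time we *received* data on this link */
        } link;

        void on_data_received(link *master) {
            master->lastinteraction = time(NULL); /* counts as an interaction */
        }

        void on_ack_written(link *master) {
            /* Intentionally do NOT touch lastinteraction here: being able to
             * queue a small REPLCONF ACK says nothing about the master being alive. */
            (void)master;
        }
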
  14. 27 Aug, 2013 (2 commits)
  15. 12 Aug, 2013 (1 commit)
  16. 24 Jul, 2013 (2 commits)
  17. 17 Jul, 2013 (1 commit)
  18. 11 Jul, 2013 (8 commits)
  19. 30 May, 2013 (1 commit)
  20. 27 May, 2013 (2 commits)
    • Replication: send REPLCONF ACK to master. · 146f1d7d
      Committed by antirez
    • REPLCONF ACK command. · 1e77b77d
      Committed by antirez
      This special command is used by the slave to inform the master of the
      amount of replication stream it has consumed so far.
      
      It does not return anything, so we do not consume the additional
      bandwidth the master would need to send a reply.
      
      The master can do a number of things knowing the amount of stream
      processed, such as understanding the "lag" in bytes of the slave,
      verifying whether a given command was already processed by the slave,
      and so forth.
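
      A minimal sketch of what the slave puts on the wire, encoded as a RESP
      multi-bulk command; the helper name is illustrative, not Redis's actual
      code:

        #include <stdio.h>

        /* Format "REPLCONF ACK <offset>" as a RESP array into buf, e.g.
         * "*3\r\n$8\r\nREPLCONF\r\n$3\r\nACK\r\n$4\r\n1024\r\n".
         * Returns the number of bytes the full command needs. */
        int format_replconf_ack(char *buf, size_t buflen, long long offset) {
            char off[32];
            int offlen = snprintf(off, sizeof(off), "%lld", offset);
            return snprintf(buf, buflen,
                "*3\r\n$8\r\nREPLCONF\r\n$3\r\nACK\r\n$%d\r\n%s\r\n", offlen, off);
        }

      The slave simply writes this to the master link and never waits for a
      reply, which is why the command is defined as returning nothing.
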
  21. 25 May, 2013 (3 commits)
  22. 06 Mar, 2013 (1 commit)
    • API to lookup commands with their original name. · bc1b2e8f
      Committed by antirez
      A new server.orig_commands table was added to the server structure; it
      contains a copy of the command table unaffected by rename-command
      statements in redis.conf.
      
      A new API lookupCommandOrOriginal() was added that checks both tables,
      the new one first and the original one later, so that
      rewriteClientCommandVector() and friends can look up commands by their
      new or original name in order to fix the client->cmd pointer when the
      argument vector is rewritten.
      
      This fixes the segfault of issue #986, but does not fix a wider range of
      problems resulting from renaming commands that actually operate on data
      and are registered into the AOF file or propagated to slaves... That is,
      command renaming should be handled with care.
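
      A minimal sketch of the lookup order described above, assuming Redis's
      dict API (dictFetchValue()) and the server/redisCommand definitions from
      redis.h; the exact wiring is illustrative:

        #include "redis.h"  /* struct redisCommand, server, sds, dictFetchValue() */

        /* Look up a command first in the (possibly renamed) command table,
         * then fall back to the untouched copy kept in server.orig_commands. */
        struct redisCommand *lookupCommandOrOriginalSketch(sds name) {
            struct redisCommand *cmd = dictFetchValue(server.commands, name);
            if (cmd == NULL)
                cmd = dictFetchValue(server.orig_commands, name);
            return cmd;
        }
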
  23. 12 Feb, 2013 (1 commit)
  24. 11 Feb, 2013 (1 commit)