1. 15 May, 2015 · 1 commit
  2. 04 May, 2015 · 1 commit
  3. 01 Apr, 2015 · 1 commit
    • fixes to diskless replication. · c72253ec
      Committed by Oran Agra
      The master was closing the connection if the RDB transfer took a long
      time, and it also sent PINGs to the slave before it got the initial ACK,
      in which case the slave wouldn't be able to find the EOF marker.
  4. 03 Dec, 2014 · 1 commit
    • Network bandwidth tracking + refactoring. · d56ef629
      Committed by antirez
      Track bandwidth used by clients and replication (but diskless
      replication is not tracked since the actual transfer happens in the
      child process).
      
      This includes a refactoring that makes tracking new instantaneous
      metrics simpler.
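      Illustration: a minimal sketch of how an instantaneous per-second metric
      can be derived from periodic samples of a growing counter. The struct and
      function names below are hypothetical stand-ins, not the identifiers used
      in the Redis source.

        /* Hypothetical sketch: sample a growing counter (e.g. bytes read from
         * clients) periodically and average the last few per-second rates. */
        #define METRIC_SAMPLES 16

        typedef struct inst_metric {
            long long last_sample_time;   /* ms timestamp of the previous sample */
            long long last_sample_count;  /* counter value at the previous sample */
            long long samples[METRIC_SAMPLES];
            int idx;
        } inst_metric;

        /* Record the per-second rate observed since the previous call. */
        void metric_track(inst_metric *m, long long now_ms, long long count) {
            long long dt = now_ms - m->last_sample_time;
            long long dc = count - m->last_sample_count;
            m->samples[m->idx] = (dt > 0) ? (dc * 1000) / dt : 0;
            m->idx = (m->idx + 1) % METRIC_SAMPLES;
            m->last_sample_time = now_ms;
            m->last_sample_count = count;
        }

        /* Smoothed instantaneous rate: the average of the stored samples. */
        long long metric_get(const inst_metric *m) {
            long long sum = 0;
            for (int i = 0; i < METRIC_SAMPLES; i++) sum += m->samples[i];
            return sum / METRIC_SAMPLES;
        }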
  5. 25 Nov, 2014 · 1 commit
    • Avoid valgrind memory leak false positive in processInlineBuffer(). · 9854d03f
      Committed by antirez
      zmalloc(0) actually triggers a non-zero allocation since, with the
      standard libc malloc, we have our own zmalloc header for memory tracking,
      but at the same time the returned pointer is at the end of the block and
      not in the middle. This triggers a false positive when testing with
      valgrind.
      
      When the inline protocol args count is 0, we now avoid reallocating
      c->argv, preventing the issue from happening.
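      Illustration: a simplified stand-in for the guard described above; the
      client structure and helper below are hypothetical, not the actual
      processInlineBuffer() code.

        #include <stdlib.h>

        typedef struct client {
            int argc;
            void **argv;   /* NULL or a previous allocation */
        } client;

        /* Only (re)allocate the argument vector when there is at least one
         * argument, so a zero-argument inline request never ends up calling
         * the allocator with a size of 0. */
        void set_inline_args(client *c, int argc) {
            free(c->argv);
            c->argv = NULL;
            c->argc = argc;
            if (argc == 0) return;   /* nothing to allocate */
            c->argv = malloc(sizeof(void *) * (size_t)argc);
        }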
  6. 12 Nov, 2014 · 1 commit
    • Diskless SYNC: fix RDB EOF detection. · a5fcf44f
      Committed by antirez
      RDB EOF detection relied on the final part of the RDB transfer being a
      magic 40-byte EOF marker. However, as the slave is put online
      immediately, and because of socket timeouts, the replication stream is
      actually contiguous with the RDB file.
      
      This means that to detect the EOF correctly we should either:
      
      1) Scan the whole stream searching for the marker. Sucks CPU-wise.
      2) Start sending the replication stream only after an acknowledgement.
      3) Implement a proper chunked encoding.
      
      For now solution "2" was picked, so in the case of diskless replication
      the master does not start sending the stream of commands right away. It
      waits for the first REPLCONF ACK command from the slave, which certifies
      that the slave correctly loaded the RDB file and is ready to receive more
      data.
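      Illustration: a minimal sketch of the "wait for the first ACK" gating
      picked as solution "2"; the state names below are hypothetical, not the
      actual replication state constants.

        /* Hypothetical slave states: the command stream is fed only after the
         * slave has acknowledged the end of the RDB transfer. */
        typedef enum {
            SLAVE_SENDING_RDB,     /* diskless RDB transfer in progress */
            SLAVE_WAIT_FIRST_ACK,  /* RDB sent, waiting for REPLCONF ACK */
            SLAVE_ONLINE           /* ACK received: safe to stream commands */
        } slave_state;

        typedef struct slave {
            slave_state state;
        } slave;

        /* Called when a REPLCONF ACK arrives from this slave. */
        void on_replconf_ack(slave *s) {
            if (s->state == SLAVE_WAIT_FIRST_ACK) s->state = SLAVE_ONLINE;
        }

        /* The master feeds the replication stream only to online slaves. */
        int can_feed_slave(const slave *s) {
            return s->state == SLAVE_ONLINE;
        }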
  7. 29 Oct, 2014 · 2 commits
    • Added a function to get slave name for logs. · 72ea77af
      Committed by antirez
    • Replication: better way to send a preamble before RDB payload. · ff8a3baa
      Committed by antirez
      During the replication full resynchronization process, the RDB file is
      transferred from the master to the slave. However, there is a short
      preamble to send, which is currently just the bulk payload length of the
      file in the usual Redis form $..length..<CR><LF>.
      
      This preamble used to be sent with a direct write call, assuming that
      there was always room in the socket output buffer to hold the few bytes
      needed. However this does not scale if we ever need to send more data,
      and it is not very robust code in general.
      
      This commit introduces a more general mechanism to send a preamble of up
      to 2GB in size (the max length of an sds string) in a non-blocking way.
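      Illustration: a minimal sketch of queueing the "$<length>\r\n" preamble
      and flushing it to a non-blocking socket; the struct and field names are
      hypothetical simplifications, not the actual replication code.

        #include <errno.h>
        #include <stdio.h>
        #include <unistd.h>

        typedef struct rdb_send_state {
            char preamble[64];
            size_t preamble_len;
            size_t preamble_sent;
        } rdb_send_state;

        /* Format the bulk length preamble once, before the RDB payload. */
        void preamble_init(rdb_send_state *st, long long rdb_size) {
            st->preamble_len = (size_t)snprintf(st->preamble, sizeof(st->preamble),
                                                "$%lld\r\n", rdb_size);
            st->preamble_sent = 0;
        }

        /* Called from the writable event handler: returns 1 when the whole
         * preamble has been written, 0 to retry later, -1 on error. */
        int preamble_flush(rdb_send_state *st, int fd) {
            while (st->preamble_sent < st->preamble_len) {
                ssize_t n = write(fd, st->preamble + st->preamble_sent,
                                  st->preamble_len - st->preamble_sent);
                if (n == -1) return (errno == EAGAIN) ? 0 : -1;
                st->preamble_sent += (size_t)n;
            }
            return 1;
        }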
  8. 06 Oct, 2014 · 1 commit
  9. 27 Aug, 2014 · 1 commit
  10. 18 Jul, 2014 · 1 commit
    • PubSub clients refactoring and new PUBSUB flag. · 294bcfc4
      Committed by antirez
      The code tested many times whether a client had active Pub/Sub
      subscriptions by checking the length of the list and dictionary where the
      patterns and channels are stored. This was replaced with a client flag
      called REDIS_PUBSUB, which is simpler to test for. Moreover, some code
      was refactored in order to manage this flag.
      
      This commit is believed to have no effect on the behavior of the
      server.
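      Illustration: a before/after sketch of the kind of check this flag
      simplifies; the flag value and the simplified counters below are
      hypothetical stand-ins for REDIS_PUBSUB and the real channel/pattern
      containers.

        /* Illustrative only: a bit flag makes "is this a Pub/Sub client?" a
         * single test instead of inspecting the channel and pattern storage. */
        #define CLIENT_FLAG_PUBSUB (1 << 0)   /* stand-in for REDIS_PUBSUB */

        typedef struct client {
            int flags;
            int subscribed_channels;  /* simplified counters standing in for */
            int subscribed_patterns;  /* the dict/list lengths mentioned above */
        } client;

        /* Before: derive the answer from the container sizes every time. */
        int is_pubsub_before(const client *c) {
            return c->subscribed_channels > 0 || c->subscribed_patterns > 0;
        }

        /* After: the flag is maintained on (un)subscribe and tested directly. */
        int is_pubsub_after(const client *c) {
            return (c->flags & CLIENT_FLAG_PUBSUB) != 0;
        }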
  11. 27 Jun, 2014 · 2 commits
  12. 24 Jun, 2014 · 1 commit
  13. 21 Jun, 2014 · 7 commits
    • CLIENT KILL API modified. · 674194ad
      Committed by antirez
      Added a new SKIPME option, true by default, that prevents the client
      sending the command from being killed, unless SKIPME NO is sent.
    • CLIENT KILL: fix closing link of the current client. · 61d9a73d
      Committed by antirez
    • New features for CLIENT KILL. · 09dc6dad
      Committed by antirez
    • Assign a unique non-repeating ID to each new client. · cad13223
      Committed by antirez
      This will be used by CLIENT KILL and is also a good way to ensure a
      given client is still the same across CLIENT LIST calls.
      
      The output of CLIENT LIST was modified to include the new ID, but this
      change is considered to be backward compatible as the API does not imply
      you can do positional parsing, since each field has a different name.
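      Illustration: a minimal sketch of the unique-ID idea, assuming clients
      are created from a single thread as in Redis of that era; the counter and
      struct names below are hypothetical.

        #include <stdint.h>

        /* Hypothetical global counter: a monotonically increasing 64-bit ID
         * that is never reused for the lifetime of the process. Lock-free use
         * is safe only because clients are created from a single thread. */
        static uint64_t next_client_id = 1;

        typedef struct client {
            uint64_t id;
            int fd;
        } client;

        void client_init(client *c, int fd) {
            c->id = next_client_id++;
            c->fd = fd;
        }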
    • Client types generalized. · b6a26b52
      Committed by antirez
      Because of output buffer limits, the Redis internals already had the idea
      of client types: normal, pubsub, slave. It is possible to set different
      output buffer limits for the three kinds of clients.
      
      However, all the macros and APIs were named after output buffer limit
      classes, while the idea of a client type is a generic one that can be
      reused.
      
      This commit does two things:
      
      1) Rename the API and defines with more general names.
      2) Change the class of clients executing the MONITOR command from "slave"
         to "normal".
      
      "2" is a good idea because you want very special settings for slaves,
      which are not appropriate for MONITOR clients: those are instead normal
      clients, even if they are conceptually slave-alike (since MONITOR uses a
      push protocol).
      
      The backward-compatibility breakage resulting from "2" is considered too
      minimal to matter, since MONITOR is a debugging command, and because this
      change does not break the format or the behavior anyway, but only changes
      when a connection is closed because of output buffer limits.
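      Illustration: a minimal sketch of a flag-to-type mapping along the lines
      described above; the flag bits and type constants below are hypothetical
      stand-ins for the renamed defines.

        /* Illustrative: derive a generic client type from flags, treating
         * MONITOR clients as normal for output buffer limit purposes. */
        #define FLAG_SLAVE   (1 << 0)
        #define FLAG_MONITOR (1 << 1)
        #define FLAG_PUBSUB  (1 << 2)

        enum client_type { TYPE_NORMAL, TYPE_SLAVE, TYPE_PUBSUB };

        int get_client_type(int flags) {
            /* MONITOR clients are flagged slave-alike internally, but for
             * buffer limits they are classified as normal (change "2" above). */
            if ((flags & FLAG_SLAVE) && !(flags & FLAG_MONITOR)) return TYPE_SLAVE;
            if (flags & FLAG_PUBSUB) return TYPE_PUBSUB;
            return TYPE_NORMAL;
        }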
    • CLIENT LIST speedup via peerid caching + smart allocation. · d8d415e7
      Committed by antirez
      This commit adds peer ID caching in the client structure, plus an API
      change and the use of sdsMakeRoomFor() to improve the reallocation
      pattern when generating the CLIENT LIST output.
      
      Both changes account for a very significant speedup.
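      Illustration: a minimal sketch of the peer-ID caching idea for an IPv4
      peer; the struct, helper, and fixed-size formatting below are
      hypothetical simplifications, not the actual CLIENT LIST code.

        #include <stdio.h>
        #include <stdlib.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        typedef struct client {
            int fd;
            char *peerid;   /* cached "ip:port", formatted lazily */
        } client;

        /* Format the peer "ip:port" string once and keep it in the client, so
         * repeated CLIENT LIST calls don't redo getpeername() and formatting. */
        const char *client_get_peerid(client *c) {
            if (c->peerid) return c->peerid;   /* cache hit */

            struct sockaddr_in sa;
            socklen_t salen = sizeof(sa);
            char ip[INET_ADDRSTRLEN] = "?";
            int port = 0;

            if (getpeername(c->fd, (struct sockaddr *)&sa, &salen) == 0) {
                inet_ntop(AF_INET, &sa.sin_addr, ip, sizeof(ip));
                port = ntohs(sa.sin_port);
            }
            c->peerid = malloc(64);
            snprintf(c->peerid, 64, "%s:%d", ip, port);
            return c->peerid;
        }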
    • 52189cb9
      Committed by antirez
  14. 22 May, 2014 · 2 commits
    • Process events with processEventsWhileBlocked() when blocked. · f4823497
      Committed by antirez
      When we are blocked and a few events are processed from time to time, it
      is smarter to call the event handler a few times in order to handle the
      accept, read, write, close cycle of a client in a single pass; otherwise
      too much latency is added before clients receive a reply while the
      server is busy in some way (for example during DB loading).
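      Illustration: a minimal sketch of the idea, assuming the event loop
      offers a "process whatever is ready, don't wait" primitive; the names
      below are hypothetical, not the actual ae.c API.

        typedef struct event_loop event_loop;

        /* Stub standing in for a non-blocking "process ready events" call;
         * it would return the number of events it handled. */
        static int loop_process_ready_events(event_loop *el) { (void)el; return 0; }

        /* Run a few passes of the event loop while the server is otherwise
         * blocked (e.g. loading the DB), so a client's accept/read/write/close
         * cycle can complete in a single visit instead of one step per visit. */
        int process_events_while_blocked(event_loop *el) {
            int iterations = 4;   /* a few passes, as described above */
            int processed = 0;
            while (iterations--) {
                int n = loop_process_ready_events(el);
                if (n == 0) break;   /* nothing ready right now */
                processed += n;
            }
            return processed;
        }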
    • Accept multiple clients per iteration. · f3d3c606
      Committed by antirez
      When the listening socket's readable event fires, we have the chance to
      accept multiple clients instead of accepting a single one. This makes
      Redis more responsive when there is a mass-connect event (for example
      after the server startup), and in workloads where a connect-disconnect
      pattern is used often, so that multiple clients are continuously waiting
      to be accepted.
      
      As a side effect, this commit makes the LOADING, BUSY, and similar
      errors much faster to deliver to the client, making Redis more
      responsive when it has to return errors informing the clients that the
      server is blocked in a non-interruptible operation.
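      Illustration: a minimal sketch of an accept handler that drains several
      pending connections per readable event, with a cap so one call cannot
      monopolize the event loop; the constant value and the registration hook
      below are hypothetical.

        #include <errno.h>
        #include <sys/socket.h>
        #include <unistd.h>

        #define MAX_ACCEPTS_PER_CALL 1000   /* cap so the handler eventually yields */

        /* Stub standing in for wiring a new connection into the server. */
        static void register_new_client(int client_fd) { close(client_fd); }

        /* Readable-event handler for a non-blocking listening socket. */
        void accept_handler(int listen_fd) {
            for (int i = 0; i < MAX_ACCEPTS_PER_CALL; i++) {
                int fd = accept(listen_fd, NULL, NULL);
                if (fd == -1) {
                    if (errno != EAGAIN && errno != EWOULDBLOCK) {
                        /* a real server would log unexpected errors here */
                    }
                    return;   /* no more pending connections right now */
                }
                register_new_client(fd);
            }
        }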
  15. 23 Apr, 2014 · 1 commit
  16. 10 Mar, 2014 · 1 commit
  17. 25 Jan, 2014 · 1 commit
  18. 26 Dec, 2013 · 1 commit
  19. 22 Dec, 2013 · 1 commit
  20. 21 Dec, 2013 · 1 commit
  21. 11 Dec, 2013 · 1 commit
  22. 10 Dec, 2013 · 2 commits
    • Slaves heartbeat while loading RDB files. · 75bf5a4a
      Committed by antirez
      Starting with Redis 2.8 masters are able to detect timed out slaves,
      while before 2.8 only slaves were able to detect a timed out master.
      
      Now that timeout detection is bi-directional, the following problem
      happens, as described "in the field" by issue #1449:
      
      1) Master and slave setup with big dataset.
      2) Slave performs the first synchronization, or a full sync
         after a failed partial resync.
      3) Master sends the RDB payload to the slave.
      4) Slave loads this payload.
      5) Master detects the slave as timed out since it does not receive back
         the REPLCONF ACK acknowledgements.
      
      Here the problem is that the master has no way to know how long the
      slave will take to load the RDB file into memory. The obvious solution is
      to use a greater replication timeout setting, but this is a shame since
      for 0.1% of the operation time we are forced to use a timeout that is
      not suited for the other 99.9% of operation time.
      
      This commit tries to fix this problem with a solution that is a bit of
      a hack, but one that modifies little of the replication internals, in
      order to be safely backported to 2.8.
      
      During RDB loading, the slave sends newlines to the master to avoid
      being detected as timed out. This is the same thing the master already
      does while saving the RDB file, to keep signaling its presence to the
      slave.
      
      The single newline is used because:
      
      1) It can't desync the protocol, as it is transmitted all or
      nothing.
      2) It can be safely sent even when we don't have a client structure for
      the master, or in similar situations, with just write(2).
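      Illustration: a minimal sketch of the newline heartbeat, assuming the
      RDB loading code calls a periodic progress hook; the function name and
      the one-second interval below are hypothetical.

        #include <time.h>
        #include <unistd.h>

        static time_t last_newline_sent = 0;

        /* Called periodically from the RDB loading progress hook: write a
         * bare newline to the master link so the master's timeout detection
         * keeps seeing traffic while the slave is busy loading. */
        void heartbeat_master_while_loading(int master_fd) {
            time_t now = time(NULL);
            if (now - last_newline_sent >= 1) {
                last_newline_sent = now;
                /* A single '\n' can't desync the protocol and needs no client
                 * structure: a raw write(2) is enough. Errors are ignored here;
                 * a broken link is detected later by the replication code. */
                (void)write(master_fd, "\n", 1);
            }
        }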
    • Handle inline requests terminated with just \n. · 8d0083ba
      Committed by antirez
  23. 05 Dec, 2013 · 1 commit
  24. 03 Dec, 2013 · 2 commits
  25. 04 Oct, 2013 · 1 commit
    • Replication: fix master timeout. · d7fa6d9a
      Committed by antirez
      Since we started sending REPLCONF ACK from slaves to masters, the
      lastinteraction field of the client structure is always refreshed as
      soon as there is room in the socket output buffer, so timed out masters
      are detected with too much delay (the socket buffer takes a long time
      to fill up with small REPLCONF ACK <number> entries).
      
      This commit only counts data received as interactions with a master,
      solving the issue.
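      Illustration: a minimal sketch of the distinction, using a per-link
      timestamp consulted by the timeout check; the struct and field names
      below are hypothetical.

        #include <stddef.h>
        #include <time.h>

        typedef struct master_link {
            time_t last_interaction;   /* consulted by the timeout check */
        } master_link;

        /* Refresh the timestamp only when data is actually received from the
         * master: that is real proof of life. */
        void on_data_received_from_master(master_link *m, size_t nread) {
            if (nread > 0) m->last_interaction = time(NULL);
        }

        /* Intentionally does NOT touch last_interaction: being able to write
         * our REPLCONF ACKs only proves our socket buffer had room, not that
         * the master is alive. */
        void on_data_written_to_master(master_link *m, size_t nwritten) {
            (void)m; (void)nwritten;
        }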
  26. 27 Aug, 2013 · 2 commits
  27. 12 Aug, 2013 · 1 commit
  28. 24 Jul, 2013 · 1 commit
    • sdsrange() does not need to return a value. · f899ab55
      Committed by antirez
      Actually the string is modified in place and a reallocation is never
      needed, so there is no need to return the new sds string pointer as the
      return value of the function, which is now just "void".
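      Illustration: a minimal sketch of why an in-place range operation needs
      no return value, using a simplified fixed-capacity string; this is not
      the actual sds implementation.

        #include <stddef.h>
        #include <string.h>

        typedef struct simple_str {
            size_t len;
            char buf[256];
        } simple_str;

        /* Keep only buf[start..end] (inclusive), clamping to the current
         * length. The bytes move inside the existing allocation and only the
         * stored length shrinks, so the buffer address never changes and the
         * caller does not need a returned pointer. */
        void simple_range(simple_str *s, size_t start, size_t end) {
            if (start >= s->len) { s->len = 0; s->buf[0] = '\0'; return; }
            if (end >= s->len) end = s->len - 1;
            size_t newlen = end - start + 1;
            memmove(s->buf, s->buf + start, newlen);
            s->buf[newlen] = '\0';
            s->len = newlen;
        }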