1. 16 Apr 2014, 1 commit
  2. 15 Apr 2014, 1 commit
  3. 14 Apr 2014, 2 commits
  4. 07 Apr 2014, 1 commit
    • Add casting to match printf format. · 67bb2c46
      Committed by antirez
      adjustOpenFilesLimit() and clusterUpdateSlotsWithConfig() were
      assuming uint64_t is the same as unsigned long long, which is
      probably true for all the systems we target, but GCC still emitted
      a warning since technically they are two different types.
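      A minimal sketch of the kind of cast involved, assuming a %llu format
      specifier at the call sites (the function and message below are
      illustrative, not the commit's actual code):

          #include <stdio.h>
          #include <stdint.h>

          void report_limit(uint64_t maxfiles) {
              /* uint64_t is not guaranteed to be unsigned long long, so the
               * cast silences -Wformat on compilers where the types differ. */
              printf("max number of open files: %llu\n",
                     (unsigned long long) maxfiles);
          }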
  5. 05 Apr 2014, 1 commit
  6. 03 Apr 2014, 1 commit
    • PFGETREG added for testing purposes. · aaaed66c
      Committed by antirez
      The new command allows getting a dump of the registers stored in a
      HyperLogLog data structure, for testing / debugging purposes.
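      A hedged illustration of what dumping the registers involves: the
      HyperLogLog registers are 6-bit counters packed into a byte array, so a
      dump has to unpack them one by one. The helper below is illustrative
      only and does not use the actual Redis macros:

          #include <stdint.h>
          #include <stddef.h>

          /* Extract the i-th 6-bit register from a packed register array. */
          uint8_t hll_get_register(const uint8_t *regs, size_t regs_len, int i) {
              size_t bit = (size_t)i * 6;
              size_t byte = bit / 8;          /* first byte holding register i */
              int shift = bit % 8;            /* bit offset inside that byte */
              uint16_t two = regs[byte];
              if (byte + 1 < regs_len) two |= (uint16_t)regs[byte + 1] << 8;
              return (two >> shift) & 0x3f;   /* keep only the 6 register bits */
          }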
  7. 01 Apr 2014, 2 commits
  8. 31 Mar 2014, 1 commit
    • HLLMERGE implemented. · f2277475
      Committed by antirez
      Merge N HLL data structures by selecting the max value for every
      register M[i] among the set of HLLs.
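      A minimal sketch of the register-wise max merge described above; the
      one-byte-per-register layout is a simplification for clarity, not the
      packed encoding Redis actually stores:

          #include <stdint.h>

          #define HLL_REGISTERS 16384   /* registers per HyperLogLog */

          /* Fold 'src' into 'dst' by keeping the max of each register pair. */
          void hll_merge_max(uint8_t *dst, const uint8_t *src) {
              for (int i = 0; i < HLL_REGISTERS; i++)
                  if (src[i] > dst[i]) dst[i] = src[i];
          }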
  9. 29 Mar 2014, 1 commit
  10. 28 Mar 2014, 2 commits
    • HLLADD implemented. · 156929ee
      Committed by antirez
    • HLLSELFTEST command implemented. · 552eb540
      Committed by antirez
      Testing the set/get macros for the bitfield array of counters from the
      Redis Tcl suite is hard, so a specialized command able to test the
      internals was developed.
  11. 25 Mar 2014, 4 commits
    • Fix off-by-one bug in freeMemoryIfNeeded() eviction pool. · 6540e9ee
      Committed by antirez
      Bug found by the continuous integration test running Redis under
      valgrind:
      
      ==6245== Invalid read of size 8
      ==6245==    at 0x4C2DEEF: memcpy@GLIBC_2.2.5 (mc_replace_strmem.c:876)
      ==6245==    by 0x41F9E6: freeMemoryIfNeeded (redis.c:3010)
      ==6245==    by 0x41D2CC: processCommand (redis.c:2069)
      
      The memmove() size argument was accounting for an extra element, going
      outside the bounds of the array.
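      A sketch of the off-by-one: when removing entry k from a pool of n
      elements, the tail to shift left is (n-k-1) entries, not (n-k). The
      names below are illustrative, not the actual redis.c code:

          /* Buggy version read one element past the end of the array:
           *   memmove(pool+k, pool+k+1, sizeof(pool[0])*(n-k));
           * Fixed version shifts exactly the remaining (n-k-1) entries: */
          memmove(pool+k, pool+k+1, sizeof(pool[0])*(n-k-1));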
    • adjustOpenFilesLimit() refactoring. · 6e33c908
      Committed by antirez
      In this commit:
      * Decrement steps are semantically differentiated from the reserved FDs.
        Previously both values were 32, but the meaning was different.
      * Make it clear that we save the setrlimit errno.
      * Don't explicitly handle wrapping of 'f', but prevent it from
        happening (see the sketch after this entry).
      * Add comments to make the function flow more readable.
      
      This integrates PR #1630
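      A sketch of a decrement loop that prevents the unsigned 'f' from
      wrapping instead of detecting the wrap after the fact; decr_step, its
      value, and the surrounding names are assumptions, not the exact
      redis.c code:

          while (f > oldlimit) {
              rlim_t decr_step = 16;

              limit.rlim_cur = f;
              limit.rlim_max = f;
              if (setrlimit(RLIMIT_NOFILE, &limit) != -1) break;
              setrlimit_error = errno;     /* keep the errno of the failure */

              /* Stop before the unsigned subtraction could wrap around. */
              if (f < decr_step) break;
              f -= decr_step;
          }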
    • Fix potentially incorrect errno usage · 386a4694
      Committed by Matt Stancliff
      errno may be reset by the previous call to redisLog(), so capture
      the original value for proper error reporting.
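      A sketch of the pattern: save errno at the failure point, before any
      logging call can clobber it. The surrounding check and message strings
      are assumptions:

          if (setrlimit(RLIMIT_NOFILE, &limit) == -1) {
              int saved_errno = errno;     /* capture before logging */
              redisLog(REDIS_WARNING, "Unable to raise the open files limit.");
              redisLog(REDIS_WARNING, "setrlimit error: %s", strerror(saved_errno));
          }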
    • Add REDIS_MIN_RESERVED_FDS define for open fds · 3b54ee6e
      Committed by Matt Stancliff
      Also update the original REDIS_EVENTLOOP_FDSET_INCR to
      include REDIS_MIN_RESERVED_FDS. REDIS_EVENTLOOP_FDSET_INCR
      exists to make sure more than (maxclients+RESERVED) entries
      are allocated, but we can only guarantee that if we include
      the current value of REDIS_MIN_RESERVED_FDS as a minimum
      for the INCR size.
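      One plausible pair of definitions consistent with the values mentioned
      in these messages (32 reserved FDs, an INCR of 128); the exact split is
      an assumption:

          #define REDIS_MIN_RESERVED_FDS 32
          #define REDIS_EVENTLOOP_FDSET_INCR (REDIS_MIN_RESERVED_FDS+96)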
  12. 24 Mar 2014, 5 commits
    • Fix infinite loop on startup if ulimit too low · 90b84421
      Committed by Matt Stancliff
      Fun fact: rlim_t is an unsigned long long on all platforms.
      
      Continually subtracting from a rlim_t makes it get smaller
      and smaller until it wraps; then you're up to 2^64-1.
      
      This was causing an infinite loop on Redis startup if
      your ulimit was extremely (almost comically) low.
      
      The loop condition (f > oldlimit) would never become false in a case like:
      
          f = 150
          while (f > 20) f -= 128
      
      Since f is unsigned, it can't go negative and would
      take on values of:
      
          Iteration 1: 150 - 128 => 22
          Iteration 2:  22 - 128 => 18446744073709551510
          Iterations 3-∞: ...
      
      To catch the wraparound, we use the previous value of f
      stored in limit.rlim_cur.  If we subtract from f and
      get a larger number than the value it had previously,
      we print an error and exit, since we don't have enough
      file descriptors to help the user at this point.
      
      Thanks to @bs3g for the inspiration to fix this problem.
      Patches existed from @bs3g at antirez#1227, but I needed to repair a few other
      parts of Redis simultaneously, so I didn't get a chance to use them.
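      A sketch of detecting the unsigned wraparound by comparing against the
      value saved just before the subtraction; names and the message are
      assumptions:

          limit.rlim_cur = f;           /* remember f before subtracting */
          f -= REDIS_EVENTLOOP_FDSET_INCR;
          if (f > limit.rlim_cur) {     /* f "grew": the subtraction wrapped */
              redisLog(REDIS_WARNING,
                  "Not enough file descriptors available for Redis to start.");
              exit(1);
          }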
    • Improve error handling around setting ulimits · 4a25983f
      Committed by Matt Stancliff
      The log messages about open file limits have always
      been slightly opaque and confusing.  Here's an attempt to
      fix their wording, detail, and meaning.  Users will have a
      better understanding of how to fix very common problems
      with these reworded messages.
      
      Also, we handle a new error case when maxclients becomes less
      than one, essentially rendering the server unusable.  We
      now exit on startup instead of leaving the user with a server
      unable to handle any connections.
      
      This fixes antirez#356 as well.
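      A sketch of the new startup check described above, with hypothetical
      message text:

          if (server.maxclients < 1) {
              redisLog(REDIS_WARNING,
                  "The open files limit is so low that no clients can be served. Exiting.");
              exit(1);
          }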
    • Replace magic 32 with REDIS_EVENTLOOP_FDSET_INCR · 491532a7
      Committed by Matt Stancliff
      32 was the additional number of file descriptors Redis
      would reserve when managing a too-low ulimit.  The
      number 32 appeared as a literal in too many places, so now
      we use a macro instead, which is more appropriate.
      
      When Redis sets up the server event loop, it uses:
          server.maxclients+REDIS_EVENTLOOP_FDSET_INCR
      
      So, when reserving file descriptors, it makes sense to
      reserve at least REDIS_EVENTLOOP_FDSET_INCR FDs instead
      of only 32.  Currently, REDIS_EVENTLOOP_FDSET_INCR is
      set to 128 in redis.h.
      
      Also, I replaced the static 128 used as the decrement in the
      file-limit adjustment loop with REDIS_EVENTLOOP_FDSET_INCR as well,
      which results in no change since it was already 128.
      
      Impact: Users now need at least maxclients+128 as
      their open file limit instead of maxclients+32 to obtain
      the actual "maxclients" number of clients.  Redis will carve
      the extra REDIS_EVENTLOOP_FDSET_INCR file descriptors it
      needs out of the "maxclients" range instead of failing
      to start (unless the local ulimit -n is too low to accommodate
      the request).
    • Sample and cache RSS in serverCron(). · 93253c27
      Committed by antirez
      Obtaining the RSS (Resident Set Size) is slow on Linux and OSX.
      This slowed down the generation of the INFO 'memory' section.
      
      Since the RSS does not need to be a real-time measurement, we
      now sample it at server.hz frequency (10 times per second by default)
      and use this value both to show the INFO rss field and to compute the
      fragmentation ratio.
      
      Practically this does not make any difference for memory profiling of
      Redis, but it speeds up the INFO call significantly.
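      A sketch of the caching described above; the field name
      resident_set_size and the exact helpers follow Redis conventions but
      should be treated as assumptions here:

          /* In serverCron(), executed at server.hz frequency: */
          server.resident_set_size = zmalloc_get_rss();

          /* In the INFO 'memory' section, reuse the cached sample: */
          float ratio = (float)server.resident_set_size / zmalloc_used_memory();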
    • Cache uname() output across INFO calls. · d3efe04c
      Committed by antirez
      uname() was profiled to be a slow syscall. It always produces the same
      output in the context of a single execution of Redis, so calling it at
      every INFO output generation does not make much sense.
      
      The uname utsname structure was made a static variable, and a static
      integer was added to track whether we still need to call uname the
      first time.
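      A sketch of the caching pattern described above (struct utsname comes
      from <sys/utsname.h>); the variable names are assumptions:

          static struct utsname name;
          static int call_uname = 1;

          if (call_uname) {
              /* Slow syscall, and its output never changes for this run. */
              uname(&name);
              call_uname = 0;
          }
          /* ... use name.sysname, name.release, name.machine in the INFO output ... */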
  13. 21 Mar 2014, 1 commit
  14. 20 Mar 2014, 5 commits
    • Use new dictGetRandomKeys() API to get samples for eviction. · c641b670
      Committed by antirez
      The eviction quality degrades a bit in my tests, but since the API is
      faster, it allows raising the number of samples, and overall it is a win.
    • struct dictEntry -> dictEntry. · 82b53c65
      Committed by antirez
    • LRU eviction pool implementation. · 22c9cfaf
      Committed by antirez
      This is an improvement over the previous eviction algorithm: we now use
      an eviction pool that is persistent across evictions of keys and gets
      populated with the best candidates for eviction found so far.
      
      It allows approximating LRU eviction for a given number of samples
      better than the previous algorithm did.
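      A sketch of what such a pool can look like (a small array of the best
      candidates seen so far, kept ordered by idle time and reused across
      eviction cycles); the exact struct in redis.c may differ:

          #define EVICTION_POOL_SIZE 16

          struct evictionPoolEntry {
              unsigned long long idle;  /* object idle time; bigger = evict sooner */
              sds key;                  /* candidate key name (sds is Redis's string
                                           type); NULL if the slot is unused */
          };

          /* Reused across eviction cycles rather than rebuilt each time. */
          static struct evictionPoolEntry eviction_pool[EVICTION_POOL_SIZE];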
    • Obtain LRU clock in a resolution dependent way. · ad6b0f70
      Committed by antirez
      For testing purposes it is handy to have a very high resolution LRU
      clock, so that it is possible to experiment, with scripts running in
      just a few seconds, with how the eviction algorithm works.
      
      This commit allows Redis to use the cached LRU clock, or a value
      computed on demand, depending on the resolution. So normally we have the
      good performance of a precomputed value, and a clock that wraps in many
      days using the normal resolution, but if needed, changing a define will
      switch behavior to a high resolution LRU clock.
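      A sketch of the resolution-dependent access: if serverCron refreshes the
      cached clock often enough for the configured resolution, use the cached
      value, otherwise compute the clock on demand. The macro and field names
      follow the description but are assumptions:

          #define REDIS_LRU_CLOCK_RESOLUTION 1000   /* LRU clock resolution in ms */

          #define LRU_CLOCK() ((1000/server.hz <= REDIS_LRU_CLOCK_RESOLUTION) ? \
                                server.lruclock : getLRUClock())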
    • Specify LRU resolution in milliseconds. · d77e2316
      Committed by antirez
  15. 19 Mar 2014, 1 commit
  16. 13 Mar 2014, 1 commit
  17. 10 Mar 2014, 3 commits
    • Cluster: SORT get keys helper implemented. · 04cf02e8
      Committed by antirez
    • Cluster: evalGetKey() added for EVAL/EVALSHA. · c0e818ab
      Committed by antirez
      Previously we used zunionInterGetKeys(), however after this function was
      fixed to account for the destination key (not needed when the API was
      designed for "diskstore"), the two sets of commands can no longer be
      served by a single keys-extraction function.
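      A hedged sketch of what extracting keys from EVAL amounts to: the key
      positions depend on the numkeys argument rather than on fixed argument
      slots. The snippet below is illustrative, not the commit's actual code,
      and treats argv[] as plain C strings for simplicity:

          /* EVAL <script> <numkeys> key [key ...] : key names start at argv[3]. */
          int numkeys = atoi(argv[2]);
          for (int j = 0; j < numkeys; j++)
              keys[j] = 3 + j;          /* record the argv positions of the keys */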
    • Cluster: getKeysFromCommand() API cleaned up. · 787b2970
      Committed by antirez
      This API originated from the "diskstore" experiment, not from Redis
      Cluster itself, so there were legacy/useless things trying to
      differentiate between keys that are going to be overwritten and keys
      that need to be fetched from disk (preloaded).
      
      All of this is useless with Cluster, so it was removed, with the result
      of simpler code.
  18. 07 Mar 2014, 1 commit
  19. 04 Mar 2014, 1 commit
  20. 03 Mar 2014, 1 commit
  21. 01 Mar 2014, 1 commit
    • Force INFO used_memory_peak to match peak memory · f1c9a203
      Committed by Matt Stancliff
      used_memory_peak is only updated in serverCron, which runs server.hz
      times per second, but Redis can use more memory, and a user can
      request INFO memory, before used_memory_peak gets updated in the next
      cron run.
      
      This patch updates used_memory_peak to the current
      memory usage if the current memory usage is higher
      than the recorded used_memory_peak value.
      
      (And it only calls zmalloc_used_memory() once instead of
      twice as it was doing before.)
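      A sketch of the update described above; the field name stat_peak_memory
      is an assumption about the internal counter behind used_memory_peak:

          size_t used = zmalloc_used_memory();   /* called once, reused below */
          if (used > server.stat_peak_memory)
              server.stat_peak_memory = used;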
  22. 27 Feb 2014, 1 commit
    • Initial implementation of BITPOS. · 38c620b3
      Committed by antirez
      It appears to work, but more stress testing, plus unit tests and fuzz
      testing, are needed in order to ensure the implementation is sane.
  23. 21 Feb 2014, 1 commit
    • Add cluster or sentinel to proc title · 2c273e35
      Committed by Matt Stancliff
      If you launch Redis with `redis-server --sentinel`, then
      in ps your output only says "redis-server IP:Port". This
      patch changes the proc title to include [sentinel] or
      [cluster] depending on the current server mode:
      e.g.  "redis-server IP:Port [sentinel]"
            "redis-server IP:Port [cluster]"
  24. 20 Feb 2014, 1 commit
    • Fix "can't bind to address" error reporting. · b20ae393
      Committed by Matt Stancliff
      Report the actual port used for the listening attempt instead of
      server.port.
      
      Originally, Redis would just listen on server.port.
      But, with clustering, Redis uses a Cluster Port too,
      so we can't say server.port is always where we are listening.
      
      If you tried to launch Redis with a too-high port number (any
      port where Port+10000 > 65535), Redis would refuse to start, but
      only print an error saying it can't connect to the Redis port.
      
      This patch fixes much of that confusion.