- 11 Sep 2018: 3 commits
- 07 Sep 2018: 4 commits
-
Committed by Salvatore Sanfilippo
Use geohash limit defines in constraint check
-
Committed by Salvatore Sanfilippo
CLI help text loop verifies arg count
-
Committed by Salvatore Sanfilippo
sentinel: fix randomized sentinelTimer.
-
Committed by Salvatore Sanfilippo
bio: fix bioWaitStepOfType.
-
- 06 Sep 2018: 4 commits
-
Committed by Salvatore Sanfilippo
Fix usage typo in redis-cli
-
Committed by Weiliang Li
-
Committed by antirez
-
Committed by antirez
See issue #5250 and issue #5292 for more info.
-
- 05 Sep 2018: 5 commits
-
Committed by antirez
The idea here is that we do not want freeMemoryIfNeeded() to propagate a DEL command before the script, changing what happens when the script execution reaches the slave. For example, consider this potential issue (in the words of @soloestoy). On the master, we run the following script:

    if redis.call('get','key')
    then
        redis.call('set','xxx','yyy')
    end
    redis.call('set','c','d')

When Redis attempts to execute redis.call('set','xxx','yyy'), we call freeMemoryIfNeeded() and the key may get deleted. Because redis.call('set','xxx','yyy') has already been executed on the master, this script will be replicated to the slave. But the slave received "DEL key" before the script, and it ignores maxmemory, so afterwards the master has xxx and c while the slave has only one key, c.

Note that this patch (and other related work) was authored collaboratively in issue #5250 with the help of @soloestoy and @oranagra. Related to issue #5250.
-
Committed by antirez
See issue #5250 and the new comments added to the code in this commit for details.
-
Committed by antirez
Related to #5250.
-
Committed by youjiali1995
-
Committed by youjiali1995
-
- 04 Sep 2018: 11 commits
-
Committed by antirez
See #5304.
-
Committed by antirez
-
Committed by Salvatore Sanfilippo
networking: fix unexpected negative or zero readlen
-
Committed by Sascha Roland
The conclusion that an XREAD request can be answered synchronously when the stream's last_id is larger than the passed last-received-id parameter assumes that entries must be present which could be returned immediately. This assumption fails for streams that are now empty but previously contained entries which were removed by XDEL. As a result, the client is answered synchronously with an empty result instead of blocking until new entries arrive. An additional check for a non-empty stream is required.
-
Committed by Salvatore Sanfilippo
Fix typo
-
Committed by Salvatore Sanfilippo
networking: optimize parsing large bulk greater than 32k
-
Committed by maya-rv
-
Committed by antirez
-
Committed by Salvatore Sanfilippo
Fix multiple unblock for clientsArePaused()
-
Committed by antirez
Related to #5305.
-
Committed by zhaozhao.zz
If we are going to read a large object from the network, try to make it likely that it will start at the c->querybuf boundary, so that we can optimize object creation by avoiding a large copy of data. But do this only when the data we have not yet parsed is less than or equal to ll+2. If the unparsed data length is greater than ll+2, trimming querybuf is just a waste of time, because at that point querybuf contains more than just our bulk.

It is easy to reproduce the issue:

Time 1: call `client pause 10000` on the slave.
Time 2: redis-benchmark -t set -r 10000 -d 33000 -n 10000.

The slave then hangs after 10 seconds.
-
- 03 Sep 2018: 2 commits
-
Committed by zhaozhao.zz
-
Committed by zhaozhao.zz
-
- 02 Sep 2018: 1 commit
-
Committed by Amin Mesbah
Slight robustness improvement, especially if the limit values are changed, as was suggested in antirez/redis#4291 [1].

[1] https://github.com/antirez/redis/pull/4291
-
- 31 Aug 2018: 6 commits
-
Committed by antirez
See #5297.
-
Committed by antirez
Technically speaking, we don't really need to put the master client in the list of clients that need to be processed, since in practice the PING commands from the master will take care of that; however, it is conceptually saner to do so.
-
Committed by antirez
Processing commands from the master while the slave is in a busy state is not correct; however, we also cannot simply reply -BUSY to the replication stream commands from the master. The correct solution is to stop processing data from the master, accumulate the stream into the buffers, and resume processing later. Related to #5297.
-
Committed by antirez
However, master scripts will be impossible to kill. Related to #5297.
-
Committed by antirez
See the reasoning in #5297.
-
Committed by zhaozhao.zz
To avoid copying buffers when creating a large Redis object exceeding PROTO_IOBUF_LEN (32KB), we just read the remaining data we need, which may be less than PROTO_IOBUF_LEN. But the remaining length may be zero if bulklen+2 equals sdslen(c->querybuf), in a client-pause context. For example:

Time 1: python

    >>> import os, socket
    >>> server="127.0.0.1"
    >>> port=6379
    >>> data1="*3\r\n$3\r\nset\r\n$1\r\na\r\n$33000\r\n"
    >>> data2="".join("x" for _ in range(33000)) + "\r\n"
    >>> data3="\n\n"
    >>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    >>> s.settimeout(10)
    >>> s.connect((server, port))
    >>> s.send(data1)
    28

Time 2: redis-cli client pause 10000

Time 3:

    >>> s.send(data2)
    33002
    >>> s.send(data3)
    2
    >>> s.send(data3)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    socket.error: [Errno 104] Connection reset by peer

To fix this, we should check that the remaining length is greater than zero.
-
- 29 Aug 2018: 4 commits
-
Committed by Salvatore Sanfilippo
Revise the comments of the latency command.
-
Committed by Salvatore Sanfilippo
Correct "did not received" -> "did not receive" typos/grammar.
-
Committed by Salvatore Sanfilippo
Remove duplicate bind in sentinel.conf
-
Committed by Salvatore Sanfilippo
Supplement to PR #4835: treat info/memory/command as random commands
-