- 28 Apr 2020, 1 commit
-
-
Committed by Guy Benoish
Introducing XINFO STREAM <key> FULL
-
- 24 Apr 2020, 1 commit
-
-
Committed by antirez
STRALGO should be a container for mostly read-only string algorithms in Redis. The algorithms should have two main characteristics: 1. They should be non-trivial to compute, and often not part of programming-language standard libraries. 2. They should be fast enough that optimized C implementations are a good idea. Next thing I would love to see? A small string-compression algorithm.
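The first algorithm shipped under STRALGO is LCS (longest common subsequence). As a rough illustration of what it computes, here is a plain dynamic-programming sketch in Python (not the optimized C implementation the commit refers to), using the string pair from the Redis documentation example:

```python
def lcs(a: str, b: str) -> str:
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Walk back from dp[len(a)][len(b)] to reconstruct one LCS
    out, i, j = [], len(a), len(b)
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ohmytext", "mynewtext"))  # -> mytext
```

This is O(len(a) * len(b)) in time and space, which is exactly the kind of "non-trivial to compute" cost that motivates an optimized server-side implementation.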
-
- 22 Apr 2020, 2 commits
- 19 Apr 2020, 1 commit
-
-
Committed by Guy Benoish
-
- 16 Apr 2020, 1 commit
-
-
Committed by Oran Agra
This test is time sensitive and sometimes fails to pass below the latency threshold, even on strong machines. This test was the reason we were running just 2 parallel tests in the GitHub Actions CI; reverting this.
-
- 06 Apr 2020, 2 commits
- 03 Apr 2020, 1 commit
-
-
Committed by Guy Benoish
There is an inherent race between the deferring client and the "main" client of the test: while the deferring client issues a blocking command, we can't know for sure that by the time the "main" client tries to issue another command (usually one that unblocks the deferring client) the deferring client is even blocked... For lack of a better choice, this commit uses Tcl's 'after' in order to give the deferring client some time to issue its blocking command before the "main" client does its thing. This problem probably exists in many other tests, but this commit only tries to fix blockonkeys.tcl.
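A fixed 'after' sleep works but is fragile; a more robust pattern is to poll the server until it reports the client as blocked. A Python sketch of that idea (hypothetical helper; the real test suite is written in Tcl, and `count_blocked` stands in for reading the blocked_clients field of INFO clients):

```python
import time

def wait_for_blocked_client(count_blocked, expected=1, timeout=1.0):
    # Poll until count_blocked() reaches the expected value instead of
    # sleeping for a fixed amount of time and hoping it was long enough.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if count_blocked() >= expected:
            return True
        time.sleep(0.01)
    return False
```

Polling bounds the worst case by the timeout while finishing as soon as the client actually blocks, instead of always paying the full sleep.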
-
- 02 Apr 2020, 1 commit
-
-
Committed by Guy Benoish
-
- 01 Apr 2020, 1 commit
-
-
Committed by Guy Benoish
By using a "circular BRPOPLPUSH"-like scenario it was possible to get the same client on db->blocking_keys twice (see the comment in moduleTryServeClientBlockedOnKey). The fix was actually already implemented in moduleTryServeClientBlockedOnKey, but it had a bug: the function should return 0 or 1 (not OK or ERR). Other changes: 1. Added two commands to the blockonkeys.c test module (to reproduce the case described above). 2. Simplified blockonkeys.c in order to make testing easier. 3. Cast raxSize() to avoid a warning with the format spec.
-
- 31 Mar 2020, 3 commits
-
-
Committed by Guy Benoish
Other changes: support streams in serverLogObjectDebugInfo
-
Committed by Guy Benoish
Make sure call() doesn't wrap replicated commands with a redundant MULTI/EXEC. Other, unrelated changes: 1. Fix a formatting compiler warning in INFO CLIENTS. 2. Use CLIENT_ID_AOF instead of UINT64_MAX.
-
Committed by antirez
-
- 26 Mar 2020, 3 commits
-
-
Committed by Valentino Geron
-
Committed by Valentino Geron
First, we must parse the IDs, so that we abort ASAP. The return value of this command cannot be an error if the client successfully acknowledged some messages, so it should be executed in an "all or nothing" fashion.
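The parse-first, apply-second pattern the commit describes can be sketched like this (hypothetical helper names, not the Redis C code; the pending-entries list is modelled as a plain dict keyed by (ms, seq) tuples):

```python
def parse_stream_id(s: str) -> tuple:
    # A stream ID is "<ms>-<seq>"; raise on anything malformed.
    ms, sep, seq = s.partition("-")
    if not sep or not ms.isdigit() or not seq.isdigit():
        raise ValueError("Invalid stream ID specified as stream command argument: %r" % s)
    return int(ms), int(seq)

def xack(pel: dict, raw_ids: list) -> int:
    # Step 1: parse ALL IDs first, so a malformed ID aborts
    # before any message is acknowledged.
    ids = [parse_stream_id(s) for s in raw_ids]
    # Step 2: only now remove entries from the pending-entries list.
    acked = 0
    for sid in ids:
        if pel.pop(sid, None) is not None:
            acked += 1
    return acked
```

Validating everything up front is what makes the command "all or nothing" with respect to errors: either the reply is an error and no state changed, or it is a count of acknowledged messages.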
-
Committed by Oran Agra
The AOF will be loaded successfully, but the stream will be missing, i.e. inconsistent with the original db. This was because XADD with an ID of 0-0 would error. Add a test to reproduce.
-
- 24 Mar 2020, 1 commit
-
-
Committed by Oran Agra
Redis refusing to run MULTI or EXEC during script timeout may cause partial transactions to run. 1) If the client sends MULTI + commands + EXEC in a pipeline without waiting for responses, and these arrive at the shards partially while there's a busy script and partially after it eventually finishes, we'll end up running only part of the transaction (since MULTI was ignored, and EXEC would fail). 2) Similarly, if EXEC arrives during a busy script, it'll be ignored and the client state remains in a transaction. The 3rd test, which I added for a case where MULTI and EXEC are ok and only the body arrives during a busy script, was already handled correctly, since processCommand calls flagTransaction.
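A toy model of the intended behavior, assuming the fix lets MULTI/EXEC through during a busy script while rejected body commands poison the transaction via the flagTransaction mechanism (all names here are illustrative, not Redis source):

```python
class Client:
    def __init__(self):
        self.in_multi = False
        self.dirty = False   # set when a queued command was rejected
        self.queue = []

def process(client, cmd, busy_script=False):
    # Simplified model of command dispatch while a Lua script is busy.
    name = cmd[0]
    if name == "MULTI":
        client.in_multi = True
        return "+OK"
    if name == "EXEC":
        client.in_multi = False
        if client.dirty:
            client.queue.clear()
            return "-EXECABORT Transaction discarded"
        cmds, client.queue = client.queue, []
        return ["ran %s" % c[0] for c in cmds]
    if busy_script and name not in ("SCRIPT", "SHUTDOWN"):
        if client.in_multi:
            client.dirty = True   # flagTransaction: EXEC will now abort
        return "-BUSY Redis is busy running a script"
    if client.in_multi:
        client.queue.append(cmd)
        return "+QUEUED"
    return "ran %s" % name
```

In this model the whole pipelined transaction either aborts cleanly (EXECABORT) or runs in full; it can no longer run half of its body.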
-
- 23 Mar 2020, 1 commit
-
-
Committed by antirez
-
- 20 Mar 2020, 1 commit
-
-
Committed by antirez
-
- 04 Mar 2020, 1 commit
-
-
Committed by bodong.ybd
-
- 27 Feb 2020, 1 commit
-
-
Committed by Oran Agra
It seems that running two clients at a time is ok too; this reduces action time from 20 minutes to 10. We'll use this for now, and if one day it won't be enough we'll have to run just the sensitive tests one by one, separately from the others. This commit also fixes an issue with the defrag test that appears to be very rare.
-
- 25 Feb 2020, 1 commit
-
-
Committed by Oran Agra
It seems that GitHub Actions runners are slow, so use just one client to reduce false positives. Also adding verbose output, testing only on the latest Ubuntu, and building on an older one. When doing that, I can reduce the test threshold back to something saner.
-
- 23 Feb 2020, 2 commits
-
-
Committed by Oran Agra
In some cases we were trying to kill the fork before it got created.
-
Committed by Oran Agra
I saw that the new defrag test for lists was failing in CI recently, so I relaxed its threshold from 12 to 60. Besides that, I added / improved the latency test for the other two defrag tests (adding a sensitive latency check and digest / save checks) and fixed bad usage of DEBUG POPULATE (it can't override existing keys); overriding was the original intention, since it creates higher fragmentation.
-
- 19 Feb 2020, 1 commit
-
-
Committed by Guy Benoish
-
- 18 Feb 2020, 1 commit
-
-
Committed by Oran Agra
When active defrag kicks in and finds a big list, it will create a bookmark to a node so that it is able to resume iteration from that node later. The quicklist manages that bookmark, and updates it in case that node is deleted. This increases memory usage, by 16 bytes, only on lists of over 1000 quicklist nodes (1000 ziplists, not 1000 items; see active-defrag-max-scan-fields). In a 32-bit build, this change reduces the maximum effective config of list-compress-depth and list-max-ziplist-size (from 32767 to 8191).
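The bookmark idea can be modelled in a few lines (a toy Python model, not the actual quicklist C structs; it assumes a deleted node's bookmark shifts to its successor, or is dropped if no successor exists):

```python
class Quicklist:
    def __init__(self, nodes):
        self.nodes = list(nodes)   # each "node" stands in for one ziplist
        self.bookmarks = {}        # name -> node, kept valid across deletes

    def bookmark_create(self, name, node):
        self.bookmarks[name] = node

    def delete_node(self, node):
        i = self.nodes.index(node)
        nxt = self.nodes[i + 1] if i + 1 < len(self.nodes) else None
        self.nodes.pop(i)
        # Any bookmark pointing at the deleted node moves to its
        # successor, so a paused iteration can still resume safely.
        for name, bm in list(self.bookmarks.items()):
            if bm is node:
                if nxt is None:
                    del self.bookmarks[name]
                else:
                    self.bookmarks[name] = nxt
```

The point of routing deletes through the list itself is that the bookmark owner (defrag) never has to know when nodes disappear underneath it.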
-
- 14 Feb 2020, 1 commit
-
-
Committed by antirez
-
- 04 Feb 2020, 3 commits
- 30 Jan 2020, 2 commits
-
-
Committed by Guy Benoish
This bug affected RM_StringToLongDouble and HINCRBYFLOAT. I added tests for both cases. Main changes: 1. Fixed string2ld to fail if the string contains \0 in the middle. 2. Use string2ld in getLongDoubleFromObject; there is no point in having duplicated code here. The two changes above broke RM_SaveLongDouble/RM_LoadLongDouble, because the long double string was saved with length+1 (an innocent mistake, but it's actually a bug: the length passed to RM_SaveLongDouble should not include the trailing \0).
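The underlying pitfall is generic to strtod-style parsing: the C conversion stops at the first \0, so a buffer that "parses" may not have been consumed entirely. A Python model of the stricter check (hypothetical helper, not the actual string2ld):

```python
def string2ld(buf: bytes) -> float:
    # Mimic the fixed C behavior: a conversion must consume the WHOLE
    # buffer, so an embedded NUL (which would silently truncate the
    # input to strtold) makes parsing fail instead of succeeding.
    if b"\0" in buf:
        raise ValueError("embedded NUL byte in numeric string")
    s = buf.decode("ascii")
    try:
        return float(s)   # stand-in for strtold()
    except ValueError:
        raise ValueError("not a valid long double: %r" % s)
```

Without the explicit length check, `b"3.14\x00junk"` would convert to 3.14 in C and the trailing garbage would go unnoticed.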
-
Committed by antirez
-
- 30 Dec 2019, 2 commits
-
-
Committed by Guy Benoish
If a blocked module client times out (or disconnects, is unblocked by the CLIENT command, etc.) we need to call moduleUnblockClient in order to free memory allocated by the module subsystem and the blocked client's private data. Other changes: made the blockedonkeys.tcl tests a bit more aggressive in order to smoke out potential memory leaks.
-
Committed by Guy Benoish
This commit solves the following bug:

127.0.0.1:6379> XGROUP CREATE x grp $ MKSTREAM
OK
127.0.0.1:6379> XADD x 666 f v
"666-0"
127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 0 STREAMS x >
1) 1) "x"
   2) 1) 1) "666-0"
         2) 1) "f"
            2) "v"
127.0.0.1:6379> XADD x 667 f v
"667-0"
127.0.0.1:6379> XDEL x 667
(integer) 1
127.0.0.1:6379> XREADGROUP GROUP grp Alice BLOCK 0 STREAMS x >
1) 1) "x"
   2) (empty array)

The root cause is that we use s->last_id in streamCompareID while we should use the last *valid* ID.
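The distinction can be sketched as follows (a toy Python model; `entries` is the sorted list of (ms, seq) IDs still present in the stream, and the buggy version would have consulted s->last_id even when the entry it names was deleted):

```python
def stream_compare_id(a, b):
    # IDs are (ms, seq) tuples, so ordinary tuple comparison matches
    # the lexicographic ms-then-seq ordering of stream IDs.
    return (a > b) - (a < b)

def has_new_entries(entries, group_last_delivered):
    # Buggy: compare group_last_delivered against s.last_id, which
    # still remembers a deleted entry (667-0 above) and so wakes the
    # blocked consumer only to hand it an empty array.
    # Fixed: compare against the last *valid* (still existing) ID.
    if not entries:
        return False
    last_valid = entries[-1]
    return stream_compare_id(last_valid, group_last_delivered) > 0
```

With the fix, adding and then deleting 667-0 leaves the last valid ID at 666-0, so a consumer that already read 666-0 keeps blocking instead of receiving an empty reply.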
-
- 26 Dec 2019, 2 commits
-
-
Committed by Oran Agra
- Make lua-replicate-commands mutable (it never was, but I don't see why it shouldn't be)
- Make tcp-backlog immutable (fix a recent refactoring mistake)
- Increase the max limit of a few configs to match what they were before the recent refactoring
-
Committed by Guy Benoish
This commit solves several edge cases that are related to exhausting the streamID limits: we should correctly calculate the succeeding streamID instead of blindly incrementing 'seq'. This affects both XREAD and XADD. Other (unrelated) changes: reply with a better error message when trying to add an entry to a stream that has exhausted last_id.
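Computing the successor of a stream ID safely means carrying a seq overflow into ms instead of blindly incrementing seq. A Python sketch of that logic (UINT64_MAX stands in for the C limit on each 64-bit half of the ID):

```python
UINT64_MAX = 2**64 - 1

def stream_incr_id(ms: int, seq: int) -> tuple:
    # Successor of <ms>-<seq>: bump seq, carrying into ms on overflow.
    if seq == UINT64_MAX:
        if ms == UINT64_MAX:
            # The stream ID space itself is exhausted; nothing follows
            # <UINT64_MAX>-<UINT64_MAX>, so the caller must error out.
            raise OverflowError("the stream has exhausted the last possible ID")
        return ms + 1, 0
    return ms, seq + 1
```

Naively doing `seq + 1` at the limit would wrap (or, in C, overflow) and produce an ID that compares *smaller* than its predecessor, which is exactly the edge case XREAD and XADD need to avoid.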
-
- 18 Dec 2019, 2 commits
-
-
Committed by zhaozhao.zz
- 17 Dec 2019, 1 commit
-
-
Committed by Madelyn Olson
-