- 15 Oct, 2021 1 commit
-
-
Committed by guoxiang1996
On macOS, calling fsync() does not guarantee that the cache on the disk itself is flushed.
-
- 14 Oct, 2021 1 commit
-
-
Committed by Shaya Potter
Verbatim strings in RESP3 have a type/extension. The existing redismodule reply function hard-coded it to "txt".
-
- 13 Oct, 2021 4 commits
-
-
Committed by Ofir Luzon
Add the -i option (sleep interval) of repeat and bigkeys to redis-cli --scan. When the keyspace contains many already-expired keys, scanning the dataset with redis-cli --scan can impact performance. Co-authored-by: Oran Agra <oran@redislabs.com>
-
Committed by Madelyn Olson
Improved the reliability of cluster replica sync tests
-
Committed by Ning Xie
The bigkeys sleep is meant to happen every 100 scanned keys, but it was checked only between scan cycles, so in cases where SCAN does not return exactly 10 keys it would never sleep. In addition, the comment said "sleep each 100 SCANs" when it was actually every 100 scanned keys.
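The fixed accounting can be sketched like this (a hypothetical helper, not the actual redis-cli code): count sleeps per scanned key, carrying the remainder across SCAN cycles, so uneven batch sizes still trigger sleeps.

```c
#include <stddef.h>

/* Hypothetical sketch: given the number of keys returned by each SCAN
 * cycle, compute how many sleeps should have happened, sleeping once per
 * interval_keys scanned keys regardless of batch size. */
int sleeps_for_batches(const int *batch, int nbatches, int interval_keys) {
    int since_sleep = 0, sleeps = 0;
    for (int i = 0; i < nbatches; i++) {
        since_sleep += batch[i];             /* keys scanned this cycle */
        while (since_sleep >= interval_keys) {
            sleeps++;                        /* here redis-cli would usleep() */
            since_sleep -= interval_keys;
        }
    }
    return sleeps;
}
```

With the old per-cycle check, batches that never hit an exact multiple of the interval would skip sleeping entirely; carrying the count across cycles avoids that.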
-
Committed by Yossi Gottlieb
Cherry-pick a more complete fix to 0215324a that also doesn't leak memory from the latest hiredis.
-
- 11 Oct, 2021 2 commits
-
-
Committed by yoav-steinberg
When calling `XADD` with a predefined id (instead of `*`) there's no need to run the code that replaces the supplied id with itself; only when a wildcard id is passed do we need to do this. For apps that always supply their own id this is a slight optimization.
-
Committed by zhaozhao.zz
-
- 10 Oct, 2021 1 commit
-
-
Committed by menwen
It looks like this field was never actually used, and the call to time() is excessive.
-
- 08 Oct, 2021 2 commits
-
-
Committed by Bjorn Svensson
Move config `logfile` to generic configs
-
Committed by Bjorn Svensson
-
- 07 Oct, 2021 3 commits
-
-
Committed by yoav-steinberg
The obuf-based eviction tests now run until eviction occurs, instead of assuming a certain number of writes will fill the obuf enough for eviction to occur. This handles the kernel buffering written data and emptying the obuf even though no one actually reads from it. The tests have a new timeout of 20 sec: if a test doesn't pass after 20 sec it will fail. Hopefully this is enough for our slow CI targets. This also eliminates the need to skip some tests in TLS.
-
Committed by Huang Zhw
Tracking invalidation messages were sometimes sent in inconsistent order, before the command's reply rather than after. In addition, they were sometimes embedded inside other commands' responses, like MULTI-EXEC and MGET.
-
Committed by GutovskyMaria
Hide empty and loading replicas from CLUSTER SLOTS responses
-
- 06 Oct, 2021 5 commits
-
-
Committed by Andy Pan
Implement createPipe() to combine creating a pipe and setting its flags, and reduce system calls by preferring pipe2() over pipe(). Without createPipe() we have to call pipe() to create a pipe and then call functions of anet.c (like anetCloexec() and anetNonBlock()) to set the flags, which leads to extra system calls; now we can leverage pipe2() to combine them and make the process of creating a pipe more convergent in createPipe(). Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by: Oran Agra <oran@redislabs.com>
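A minimal sketch of the idea, assuming Linux's pipe2() with a portable fallback (the function name mirrors the commit, but this is not the actual Redis code):

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <fcntl.h>

/* Sketch: on Linux, pipe2() creates the pipe and sets both flags in one
 * system call; elsewhere, fall back to pipe() plus fcntl() per fd. */
int createPipe(int pipefd[2]) {
#ifdef __linux__
    return pipe2(pipefd, O_CLOEXEC | O_NONBLOCK);   /* one syscall */
#else
    if (pipe(pipefd) == -1) return -1;
    for (int i = 0; i < 2; i++) {
        if (fcntl(pipefd[i], F_SETFD, FD_CLOEXEC) == -1 ||
            fcntl(pipefd[i], F_SETFL, O_NONBLOCK) == -1) {
            close(pipefd[0]);
            close(pipefd[1]);
            return -1;
        }
    }
    return 0;
#endif
}
```

On Linux this replaces three-plus syscalls (pipe + fcntl per fd) with one.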
-
Committed by yoav-steinberg
Flush db and *then* wait for the bgsave to complete.
-
Committed by yoav-steinberg
When queuing a MULTI command we duplicated the argv (meaning an alloc and a memcpy). This isn't needed, since we can reuse the previously allocated argv and just reset the client object's argv to NULL. This should save some memory and is a minor optimization under heavy MULTI/EXEC traffic, especially with many arguments.
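The ownership hand-off can be sketched as follows (the struct and function names here are simplified stand-ins, not Redis's real types): the queued command takes the client's argv pointer instead of copying it.

```c
#include <stdlib.h>

/* Simplified stand-ins for the client and queued-command structs. */
typedef struct { char **argv; int argc; } clientLike;
typedef struct { char **argv; int argc; } queuedCmd;

/* Sketch: transfer ownership of argv to the queued command, no alloc
 * and no memcpy; the client's pointer is reset so it won't be freed twice. */
void queueMultiCommand(clientLike *c, queuedCmd *mc) {
    mc->argv = c->argv;   /* take ownership of the existing array */
    mc->argc = c->argc;
    c->argv = NULL;       /* client no longer owns the array */
    c->argc = 0;
}
```

The NULL reset is the key detail: whoever frees the client's argv afterwards sees nothing to free, so the array lives on in the queued command.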
-
The new value indicates how long Redis waits to acquire the GIL after sleep. This can help identify problems where a module performs some background operation for a long time (with the GIL held) and blocks the Redis main thread.
-
Committed by tzongw
Scenario: 1. a client blocks on `XREAD BLOCK 0 STREAMS mystream $`; 2. in a module, `XADD mystream * field value` is called via Lua from a timer callback; 3. the client receives the response after latency of up to 100ms. Reason: when `XADD` signals the key `mystream` as ready, `beforeSleep` in the next event loop calls `handleClientsBlockedOnKeys` to unblock the client and add pending data to write, but does not actually install a write handler, so Redis then blocks in `aeApiPoll` for up to 100ms (given the default `hz` of 10); the pending data is only sent in yet another event loop iteration by `handleClientsWithPendingWritesUsingThreads`. Calling `handleClientsBlockedOnKeys` before `handleClientsWithPendingWritesUsingThreads` in `beforeSleep` solves the problem.
-
- 05 Oct, 2021 2 commits
-
-
Committed by yoav-steinberg
* Reduce delay between publishes to allow less time to write the obufs.
* More subscribed clients to buffer more data per publish.
* Make sure the main connection isn't evicted (it has a large qbuf).
-
Committed by yoav-steinberg
Changes in #9528 led to a memory leak if the command implementation used rewriteClientCommandArgument inside MULTI-EXEC. Add an explicit test for that case, since the test that uncovered it didn't specifically target this scenario.
-
- 04 Oct, 2021 9 commits
-
-
When Lua calls our C code, by default the Lua stack has room for 10 elements. In most cases this is more than enough, but sometimes it's not, and the caller must verify the Lua stack size before pushing elements. In 3 places in the code there was no verification of the Lua stack size. On specific inputs this missing verification could have led to an invalid memory write:
1. In 'luaReplyToRedisReply', one might return a nested reply that will explode the Lua stack.
2. In 'redisProtocolToLuaType', the Redis reply might be deep enough to explode the Lua stack (notice that currently no command in Redis returns such a nested reply, but modules might do it).
3. In 'ldbRedis', one might give a command with enough arguments to explode the Lua stack (all the arguments are pushed to the Lua stack).
This commit solves all 3 issues by calling 'lua_checkstack' to verify that there is enough room in the Lua stack to push elements. In case 'lua_checkstack' returns an error (there is not enough room in the Lua stack and it's not possible to increase the stack), we do the following:
1. In 'luaReplyToRedisReply', return an error to the user.
2. In 'redisProtocolToLuaType', exit with a panic (we assume this scenario is rare because it can only happen with a module).
3. In 'ldbRedis', return an error.
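The shape of the fix can be illustrated with a toy stack (this is an analogy to `lua_checkstack`, not the Lua C API itself; all names here are hypothetical): check for room before pushing, and report failure instead of writing out of bounds.

```c
/* Toy fixed-capacity stack illustrating check-before-push. */
#define STACK_CAP 8

typedef struct { int top; int slots[STACK_CAP]; } toyStack;

/* Analogous to lua_checkstack: is there room for `extra` more elements? */
int toy_checkstack(toyStack *s, int extra) {
    return s->top + extra <= STACK_CAP;
}

/* Push only after verifying capacity; the caller must handle failure,
 * mirroring the commit's choice to return an error (or panic) rather
 * than write past the end of the stack. */
int toy_push(toyStack *s, int v) {
    if (!toy_checkstack(s, 1)) return 0;
    s->slots[s->top++] = v;
    return 1;
}
```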
-
Committed by Oran Agra
A recently merged PR introduced a leak when loading AOF files. This was because argv_len wasn't set, so rewriteClientCommandArgument would shrink the argv array and update argc to a smaller value.
-
Committed by Oran Agra
The protocol parsing in 'ldbReplParseCommand' (Lua debugging) assumed protocol correctness. This means that if the following is given: *1 $100 test, the parser will try to read an additional 94 unallocated bytes after the client buffer. This commit fixes the issue by validating that there are actually enough bytes to read. It also limits the amount of data that can be sent by the debugger client to 1M, so the client will not be able to explode the memory. Co-authored-by: meir@redislabs.com <meir@redislabs.com>
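The missing validation can be sketched like this (a hypothetical helper, not the actual ldbReplParseCommand code): before consuming a `$<len>` bulk, verify the buffer really contains `len` payload bytes plus the trailing CRLF.

```c
#include <string.h>
#include <stdlib.h>

/* Sketch: parse "$<len>\r\n<payload>\r\n" from a NUL-terminated buffer of
 * buflen bytes. Returns the payload length on success, -1 if the declared
 * length exceeds what the buffer actually holds. */
long parseBulkChecked(const char *buf, size_t buflen, const char **payload) {
    if (buflen < 4 || buf[0] != '$') return -1;
    char *end;
    long len = strtol(buf + 1, &end, 10);
    if (len < 0 || end + 2 > buf + buflen ||
        end[0] != '\r' || end[1] != '\n') return -1;
    const char *p = end + 2;
    /* The fix: ensure len payload bytes (plus CRLF) really exist. */
    if ((size_t)(p - buf) + (size_t)len + 2 > buflen) return -1;
    *payload = p;
    return len;
}
```

With this check, a declared length of 100 against a 12-byte buffer is rejected instead of being read past the allocation.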
-
Committed by Oran Agra
- Fix possible heap corruption in ziplist and listpack resulting from trying to allocate more than the maximum size of 4GB.
- Prevent ziplist (hash and zset) from reaching a size above 1GB; it will be converted to HT encoding, since that's not a useful size.
- Prevent listpack (stream) from reaching a size above 1GB.
- XADD will start a new listpack if the new record may cause the previous listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB.
- The List type (ziplist in quicklist) was truncating strings that were over 4GB; now it responds with an error.
Co-authored-by: sundb <sundbcn@gmail.com>
-
Committed by Oran Agra
This change sets a low limit for multibulk and bulk length in the protocol for unauthenticated connections, so that they can't easily cause Redis to allocate massive amounts of memory by sending just a few characters on the network. The new limits are 10 arguments of 16KB each (instead of 1M arguments of 512MB each).
-
Committed by Oran Agra
The redis-cli command line tool and redis-sentinel service may be vulnerable to integer overflow when parsing specially crafted large multi-bulk network replies. This is a result of a vulnerability in the underlying hiredis library, which does not perform an overflow check before calling the calloc() heap allocation function. This issue only impacts systems with heap allocators that do not perform their own overflow checks. Most modern systems do and are therefore not likely to be affected. Furthermore, by default redis-sentinel uses the jemalloc allocator, which is also not vulnerable. Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
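The kind of guard involved can be sketched with a checked wrapper (the wrapper name is hypothetical; this is not hiredis's actual code): verify that `nmemb * size` cannot wrap before handing the values to calloc.

```c
#include <stdlib.h>
#include <stdint.h>

/* Sketch: refuse allocations whose element count times element size would
 * overflow size_t, since some allocators don't check this themselves. */
void *callocChecked(size_t nmemb, size_t size) {
    if (size != 0 && nmemb > SIZE_MAX / size) return NULL; /* would overflow */
    return calloc(nmemb, size);
}
```

The division-based test avoids performing the multiplication that could wrap; an attacker-controlled huge count then yields NULL instead of a small, corruptible allocation.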
-
Committed by Oran Agra
The vulnerability involves changing the default set-max-intset-entries configuration parameter to a very large value and constructing specially crafted commands to manipulate sets.
-
Committed by yiyuaner
The existing overflow checks handled the greedy growing, but didn't handle a case where the addition of the header size is what causes the overflow.
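The missed case can be sketched as follows (HDR_BYTES and the helper are hypothetical, not the real header layout): even when the payload length itself is in range, adding the header size can be what wraps the total.

```c
#include <stdint.h>
#include <stddef.h>

#define HDR_BYTES 16  /* hypothetical fixed header size */

/* Sketch: compute payload + header, rejecting the case where the addition
 * of the header itself overflows size_t. */
int totalAllocSize(size_t payload, size_t *total) {
    if (payload > SIZE_MAX - HDR_BYTES) return 0;  /* sum would wrap */
    *total = payload + HDR_BYTES;
    return 1;
}
```

Checking `payload > SIZE_MAX - HDR_BYTES` (rather than checking the sum after the fact) keeps the comparison itself free of overflow.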
-
Committed by YaacovHazan
Since we measure the COW size in this test by changing some keys and reading the reported COW size, we need to ensure that the "dismiss mechanism" (#8974) will not free memory and reduce the COW size. For that, this commit changes the size of the keys to 512B (less than a page), and because some keys may fall into the same page, we modify ten keys on each iteration and check for at least a 50% change in the COW size.
-
- 03 Oct, 2021 3 commits
-
-
Committed by yoav-steinberg
Note that this breaks compatibility, because in the past doing DECRBY x -9223372036854775808 would succeed (and produce an invalid result), while now it returns an error.
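The root cause is that DECRBY negates its argument, and -LLONG_MIN is not representable in a signed 64-bit integer. A minimal sketch of the checked arithmetic (the helper is hypothetical, not the actual command code):

```c
#include <limits.h>

/* Sketch: compute val - decr with overflow detection. Returns 0 (error)
 * when the negation or the addition would overflow, 1 on success. */
int decrbyChecked(long long val, long long decr, long long *out) {
    if (decr == LLONG_MIN) return 0;           /* -decr is unrepresentable */
    long long incr = -decr;
    if ((incr > 0 && val > LLONG_MAX - incr) ||
        (incr < 0 && val < LLONG_MIN - incr)) return 0;  /* sum overflows */
    *out = val + incr;
    return 1;
}
```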
-
Committed by yoav-steinberg
Remove the hard-coded multi-bulk limit (was 1,048,576); the new limit is INT_MAX. When a client sends an m-bulk that's larger than 1024, we initially only allocate the argv array for 1024 arguments, and gradually grow that allocation as arguments are received.
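The gradual-growth idea can be sketched like this (a hypothetical helper, not the actual parser code): start with room for 1024 arguments and double as more arrive, rather than trusting the client's declared count up front.

```c
#include <stdlib.h>

#define ARGV_INITIAL 1024  /* initial capacity, per the commit message */

/* Sketch: ensure argv has room for `needed` entries, doubling from the
 * current capacity. Updates *cap on success; returns NULL if realloc fails. */
char **growArgv(char **argv, int *cap, int needed) {
    int newcap = *cap ? *cap : ARGV_INITIAL;
    while (newcap < needed) newcap *= 2;
    if (newcap == *cap) return argv;          /* already big enough */
    char **grown = realloc(argv, sizeof(char *) * (size_t)newcap);
    if (grown) *cap = newcap;
    return grown;
}
```

The point of the scheme is that a declared m-bulk of, say, a billion arguments no longer forces a huge allocation before any argument bytes arrive.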
-
Committed by Binbin
1. Remove forward declarations from header files for functions that do not exist: hmsetCommand and rdbSaveTime.
2. Minor phrasing fixes in #9519.
3. Add missing sdsfree(title) and fix a typo in redis-benchmark.
4. Modify some error comments in some zset commands.
5. Fix a copy-paste bug in the comment in syncWithMaster about `ip-address`.
-
- 01 Oct, 2021 1 commit
-
-
Committed by Viktor Söderqvist
Just a cleanup to make the code easier to maintain and reduce the risk of something being overlooked.
-
- 30 Sep, 2021 3 commits
-
-
Committed by Eduardo Semprebon
It seems that this piece of doc was always wrong (there is no such error in the code).
-
Committed by Yunier Pérez
While the original issue was on Linux, this should work for other platforms as well.
-
Committed by Hanna Fadida
Adding an advanced API to enable loading data that was serialized with a specific encoding version.
-
- 29 Sep, 2021 2 commits
-
-
Committed by yoav-steinberg
-
Committed by Wen Hui
-
- 27 Sep, 2021 1 commit
-
-
Committed by Ozan Tezcan
-