- 25 Oct, 2018 (6 commits)
-
-
Committed by Jigar Bhati
Summary: 1. `WriteBufferManager` should be kept alive on the Java side through `Options`/`DBOptions`; otherwise, if it is GC'ed on the Java side, the native side can seg fault. 2. The native method `setWriteBufferManager()` in `DBOptions.java` was missing its JNI method invocation in rocksdbjni, which is added in this PR. 3. `DBOptionsTest.java` was referencing an `Options` object when it should be testing against `DBOptions`; this looks like a copy-paste error. 4. Add a getter for WriteBufferManager. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4579 Differential Revision: D10561150 Pulled By: sagar0 fbshipit-source-id: 139a15c7f051a9f77b4200215b88267b48fbc487
-
Committed by Abhishek Madan
Summary: Previously, range tombstones were accumulated from every level, which was necessary if a range tombstone in a higher level covered a key in a lower level. However, RangeDelAggregator::AddTombstones's complexity depends on the number of tombstones it currently stores, which is wasteful in the Get case, where we only need the highest sequence number of range tombstones that cover the key from higher levels, and the highest covering sequence number at the current level. This change introduces that optimization and removes the use of RangeDelAggregator from the Get path.

In the benchmark results, the following command was used to initialize the database:

```
./db_bench -db=/dev/shm/5k-rts -use_existing_db=false -benchmarks=filluniquerandom -write_buffer_size=1048576 -compression_type=lz4 -target_file_size_base=1048576 -max_bytes_for_level_base=4194304 -value_size=112 -key_size=16 -block_size=4096 -level_compaction_dynamic_level_bytes=true -num=5000000 -max_background_jobs=12 -benchmark_write_rate_limit=20971520 -range_tombstone_width=100 -writes_per_range_tombstone=100 -max_num_range_tombstones=50000 -bloom_bits=8
```

...and the following command was used to measure read throughput:

```
./db_bench -db=/dev/shm/5k-rts/ -use_existing_db=true -benchmarks=readrandom -disable_auto_compactions=true -num=5000000 -reads=100000 -threads=32
```

The filluniquerandom command was only run once, and the resulting database was used to measure read performance before and after the PR. Both binaries were compiled with `DEBUG_LEVEL=0`.

Readrandom results before PR:

```
readrandom : 4.544 micros/op 220090 ops/sec; 16.9 MB/s (63103 of 100000 found)
```

Readrandom results after PR:

```
readrandom : 11.147 micros/op 89707 ops/sec; 6.9 MB/s (63103 of 100000 found)
```

So it's actually slower right now, but this PR paves the way for future optimizations (see #4493).

Pull Request resolved: https://github.com/facebook/rocksdb/pull/4449 Differential Revision: D10370575 Pulled By: abhimadan fbshipit-source-id: 9a2e152be1ef36969055c0e9eb4beb0d96c11f4d
-
Committed by Zhongyi Xie
Summary: PR https://github.com/facebook/rocksdb/pull/4226 introduced per-level perf context which allows breaking down perf context by levels. This PR takes advantage of the feature to populate a few counters related to bloom filters Pull Request resolved: https://github.com/facebook/rocksdb/pull/4581 Differential Revision: D10518010 Pulled By: miasantreble fbshipit-source-id: 011244561783ec860d32d5b0fa6bce6e78d70ef8
-
Committed by Simon Grätzer
Summary: SetId and GetId are experimental APIs that have so far been used in WritePrepared and WriteUnPrepared transactions, where the id is assigned at prepare time. The patch extends the API to WriteCommitted transactions by setting the id at commit time. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4565 Differential Revision: D10557862 Pulled By: maysamyabandeh fbshipit-source-id: 2b27a140682b6185a4988fa88f8152628e0d67af
-
Committed by Sagar Vemuri
Summary: `WritableFileWrapper` was missing some newer methods that were added to `WritableFile`. Without these, the missing wrapper methods would fall back to the default implementations in WritableFile instead of using the corresponding implementations in, say, `PosixWritableFile` or `WinWritableFile`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4584 Differential Revision: D10559199 Pulled By: sagar0 fbshipit-source-id: 0d0f18a486aee727d5b8eebd3110a41988e27391
-
Committed by Yi Wu
Summary: Adds an option to print malloc stats to stdout at the end of db_bench. This is different from `--dump_malloc_stats`, which periodically prints the same information to the LOG file. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4582 Differential Revision: D10520814 Pulled By: yiwu-arbug fbshipit-source-id: beff5e514e414079d31092b630813f82939ffe5c
-
- 24 Oct, 2018 (5 commits)
-
-
Committed by Neil Mayhew
Summary: This fixes three tests that fail with relatively recent tools and libraries. The tests are:

* `spatial_db_test`
* `table_test`
* `db_universal_compaction_test`

I'm using:

* `gcc` 7.3.0
* `glibc` 2.27
* `snappy` 1.1.7
* `gflags` 2.2.1
* `zlib` 1.2.11
* `bzip2` 1.0.6.0.1
* `lz4` 1.8.2
* `jemalloc` 5.0.1

The versions used in the Travis environment (which is two Ubuntu LTS versions behind the current one and doesn't use `lz4` or `jemalloc`) don't seem to have a problem. However, to be safe, I verified that these tests pass with and without my changes in a trusty Docker container without `lz4` and `jemalloc`. However, I do get an unrelated set of other failures when using a trusty Docker container that uses `lz4` and `jemalloc`:

```
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/0, where GetParam() = (1, false) (1189 ms)
[ RUN ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/1
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/1, where GetParam() = (1, true) (1246 ms)
[ RUN ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/2
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/2, where GetParam() = (3, false) (1237 ms)
[ RUN ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/3
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/3, where GetParam() = (3, true) (1195 ms)
[ RUN ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/4
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/4, where GetParam() = (5, false) (1161 ms)
[ RUN ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/5
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/5, where GetParam() = (5, true) (1229 ms)
```

I haven't attempted to fix these since I'm not using trusty and Travis doesn't use `lz4` and `jemalloc`. However, the final commit in this PR does at least fix the compilation errors that occur when using trusty's version of `lz4`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4562 Differential Revision: D10510917 Pulled By: maysamyabandeh fbshipit-source-id: 59534042015ec339270e5fc2f6ac4d859370d189
-
Committed by Zhongyi Xie
Summary: clang analyzer currently fails with the following warnings:

```
db/log_reader.cc:323:9: warning: Undefined or garbage value returned to caller
    return r;
db/log_reader.cc:344:11: warning: Undefined or garbage value returned to caller
    return r;
db/log_reader.cc:369:11: warning: Undefined or garbage value returned to caller
    return r;
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/4583 Differential Revision: D10523517 Pulled By: miasantreble fbshipit-source-id: 0cc8b8f27657b202bead148bbe7c4aa84fed095b
-
Committed by Yi Wu
Summary: A fix similar to #4410, but on the write path. On IO error in `SelectBlobFile()` we didn't return the error code properly, but simply a nullptr `BlobFile`. The `AppendBlob()` method had no null check for the pointer and crashed. The fix makes sure we properly return the error code in this case. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4580 Differential Revision: D10513849 Pulled By: yiwu-arbug fbshipit-source-id: 80bca920d1d7a3541149de981015ad83e0aa14b5
-
Committed by Yi Wu
Summary: In fbcode, when we build with clang7++, -faligned-new is available in the compile phase, but we link against an older version of libstdc++.a that doesn't come with aligned-new support (e.g. `nm libstdc++.a | grep align_val_t` returns empty). In this case the previous -faligned-new detection passes but ends up with a link error. Fix it by running the detection only for non-fbcode builds. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4576 Differential Revision: D10500008 Pulled By: yiwu-arbug fbshipit-source-id: b375de4fbb61d2a08e54ab709441aa8e7b4b08cf
-
Committed by jsteemann
Summary: Couple of very minor improvements (typos in comments, full qualification of class name, reordering members of a struct to make it smaller) Pull Request resolved: https://github.com/facebook/rocksdb/pull/4564 Differential Revision: D10510183 Pulled By: maysamyabandeh fbshipit-source-id: c7ddf9bfbf2db08cd31896c3fd93789d3fa68c8b
-
- 23 Oct, 2018 (2 commits)
-
-
Committed by Maysam Yabandeh
Summary: There was a bug where the user comparator would receive the internal key instead of the user key. The bug was due to RangeMightExistAfterSortedRun expecting a user key but receiving an internal key when called in GenerateBottommostFiles. The patch augments an existing unit test to reproduce the bug and fixes it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4575 Differential Revision: D10500434 Pulled By: maysamyabandeh fbshipit-source-id: 858346d2fd102cce9e20516d77338c112bdfe366
-
Committed by Siying Dong
Summary: Level compaction usually performs poorly when writes are so heavy that the level targets can't be guaranteed. With this improvement to level_compaction_dynamic_level_bytes = true, in write-heavy cases the level multiplier can be slightly adjusted based on the size of L0. We keep the behavior the same if the number of L0 files is under 2X the compaction trigger and the total size is less than options.max_bytes_for_level_base, so that unless writes are so heavy that compaction cannot keep up, the behavior doesn't change. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4338 Differential Revision: D9636782 Pulled By: siying fbshipit-source-id: e27fc17a7c29c84b00064cc17536a01dacef7595
-
- 22 Oct, 2018 (1 commit)
-
-
Committed by Yi Wu
Summary: When `MockTimeEnv` is used in tests to mock time methods, we cannot use `CondVar::TimedWait` because it uses real time, not the mocked time, for the wait timeout. On Mac the method can return immediately without waking other waiting threads if the real time is larger than `wait_until` (which is a mocked time). When that happens, the `wait()` method falls into an infinite loop. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4560 Differential Revision: D10472851 Pulled By: yiwu-arbug fbshipit-source-id: 898902546ace7db7ac509337dd8677a527209d19
-
- 20 Oct, 2018 (3 commits)
-
-
Committed by Simon Grätzer
Summary: On MacOS with clang, compilation of _tools/db_bench_tool.cc_ always fails because the format used in a `fprintf` call has the wrong type. This PR should hopefully fix the issue:

```
tools/db_bench_tool.cc:4233:61: error: format specifies type 'unsigned long long' but the argument has type 'size_t' (aka 'unsigned long')
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/4533 Differential Revision: D10471657 Pulled By: maysamyabandeh fbshipit-source-id: f20f5f3756d3571b586c895c845d0d4d1e34a398
-
Committed by Siying Dong
Summary: WriteBatchWithIndex's SeekForPrev() had a bug: we internally placed the position just before the seek key rather than at it, which caused the iterator to miss a result equal to the seek key. Fix it by positioning the iterator at the entry equal to or smaller than the seek key. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4559 Differential Revision: D10468534 Pulled By: siying fbshipit-source-id: 2fb371ae809c561b60a1c11cef71e1c66fea1f19
-
Committed by Yanqin Jin
Summary: Current `log::Reader` does not perform retry after encountering `EOF`. In the future, we need the log reader to be able to retry tailing the log even after `EOF`. Current implementation is simple. It does not provide more advanced retry policies. Will address this in the future. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4394 Differential Revision: D9926508 Pulled By: riversand963 fbshipit-source-id: d86d145792a41bd64a72f642a2a08c7b7b5201e1
-
- 19 Oct, 2018 (4 commits)
-
-
Committed by Abhishek Madan
Summary: The new flag allows tombstones to be generated after enough keys have been written to the database, which makes it easier to ensure that tombstones cover a lot of keys. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4538 Differential Revision: D10455685 Pulled By: abhimadan fbshipit-source-id: f25d5421745a353c830dea12b79784e852056551
-
Committed by Maysam Yabandeh
Summary: We have already disabled it on Travis since it has been too flaky. The same problem arises in Appveyor as well. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4536 Differential Revision: D10452240 Pulled By: maysamyabandeh fbshipit-source-id: 728f4ecddf780097159dc0a0737d460eb5ce4f09
-
Committed by Philip Jameson
Reviewed By: andrewjcg Differential Revision: D10409082 fbshipit-source-id: a1432270f79c2baf2e52e3351b5f481c7398c58d
-
Committed by Huachao Huang
Summary: When HAVE_SSE42 is true, it should always add "-msse4.2 -mpclmul" no matter if FORCE_SSE42 is true or not. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4490 Differential Revision: D10384256 Pulled By: yiwu-arbug fbshipit-source-id: c82bc988f017981a0f84ddc136f36e2366c3ea8a
-
- 18 Oct, 2018 (3 commits)
-
-
Committed by Jigar Bhati
Summary: Allow RocksJava to explicitly create a WriteBufferManager by plumbing it to the native code through JNI. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4492 Differential Revision: D10428506 Pulled By: sagar0 fbshipit-source-id: cd9dd8c2ef745a0303416b44e2080547bdcca1fd
-
Committed by Abhishek Madan
Summary: When there are no range deletions, flush and compaction perform a binary search on an effectively empty map every time they call ShouldDelete. This PR lazily initializes each stripe map entry so that the binary search can be elided in these cases. After this PR, the total amount of time spent in compactions is 52.541331s, and the total amount of time spent in flush is 5.532608s, the former of which is a significant improvement from the results after #4495. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4497 Differential Revision: D10428610 Pulled By: abhimadan fbshipit-source-id: 6f7e1ce3698fac3ef86d1197955e6b72e0931a0f
-
Committed by Zhongyi Xie
Summary: The current implementation of perf context is level-agnostic, making it hard to do performance evaluation for the LSM tree. This PR adds `PerfContextByLevel` to decompose the counters by level. This will be helpful when analyzing point and range query performance as well as tuning the bloom filter. Also replaced the __thread keyword with thread_local for perf_context. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4226 Differential Revision: D10369509 Pulled By: miasantreble fbshipit-source-id: f1ced4e0de5fcebdb7f9cff36164516bc6382d82
-
- 16 Oct, 2018 (6 commits)
-
-
Committed by anand1976
Summary: When a CompactRange() call for a level is truncated before the end key is reached, because it exceeds max_compaction_bytes, we need to properly set the compaction_end parameter to indicate the stop key. The next CompactRange will use that as the begin key. We set it to the smallest key of the next file in the level after expanding inputs to get a clean cut. Previously, we were setting it before expanding inputs. So we could end up recompacting some files. In a pathological case, where a single key has many entries spanning all the files in the level (possibly due to merge operands without a partial merge operator, thus resulting in compaction output identical to the input), this would result in an endless loop over the same set of files. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4496 Differential Revision: D10395026 Pulled By: anand1976 fbshipit-source-id: f0c2f89fee29b4b3be53b6467b53abba8e9146a9
-
Committed by Yanqin Jin
Summary: Leverage existing `FlushJob` to implement atomic flush of multiple column families. This PR depends on other PRs and is a subset of #3752 . This PR itself is not sufficient in fulfilling atomic flush. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4262 Differential Revision: D9283109 Pulled By: riversand963 fbshipit-source-id: 65401f913e4160b0a61c0be6cd02adc15dad28ed
-
Committed by Andrew Kryczka
Summary: `CompactionIterator::snapshots_` is ordered by ascending seqnum, just like `DBImpl`'s linked list of snapshots from which it was copied. This PR exploits this ordering to make `findEarliestVisibleSnapshot` do binary search rather than linear scan. This can make flush/compaction significantly faster when many snapshots exist since that function is called on every single key. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4495 Differential Revision: D10386470 Pulled By: ajkr fbshipit-source-id: 29734991631227b6b7b677e156ac567690118a8b
-
Committed by Maysam Yabandeh
Summary: WritePrepared is declared production ready (overdue update) and the benchmark results are also reported. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4494 Differential Revision: D10385336 Pulled By: maysamyabandeh fbshipit-source-id: 662672ddfa286aa46af544f505b4d4b7a882d408
-
Committed by Yanqin Jin
Summary: Using const string& can avoid one extra string copy. This PR addresses a recent comment made by siying on #3933. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4491 Differential Revision: D10381211 Pulled By: riversand963 fbshipit-source-id: 27fc2d65d84bc7cd07833c77cdc47f06dcfaeb31
-
Committed by Yi Wu
Summary: Set the macro if the default allocator is jemalloc. It doesn't handle the case when an allocator is specified explicitly, e.g.

```
cpp_binary(
  name = "xxx",
  allocator = "jemalloc",  # or "malloc" or something else
  deps = ["//rocksdb:rocksdb"],
)
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/4489 Differential Revision: D10363683 Pulled By: yiwu-arbug fbshipit-source-id: 5da490336a8e78e0feb0900c29e8036e7ec6f12b
-
- 13 Oct, 2018 (4 commits)
-
-
Committed by Yanqin Jin
Summary: We would like to collect file-system-level statistics including file name, offset, length, return code, latency, etc., which requires adding callbacks to intercept file IO function calls while RocksDB is running. To collect file-system-level statistics, users can inherit the class `EventListener`, as in `TestFileOperationListener`. Note that `TestFileOperationListener::ShouldBeNotifiedOnFileIO()` returns true. Pull Request resolved: https://github.com/facebook/rocksdb/pull/3933 Differential Revision: D10219571 Pulled By: riversand963 fbshipit-source-id: 7acc577a2d31097766a27adb6f78eaf8b1e8ff15
-
Committed by John Calcote
Summary: Closes https://github.com/facebook/rocksdb/issues/4447 Pull Request resolved: https://github.com/facebook/rocksdb/pull/4448 Differential Revision: D10351852 Pulled By: ajkr fbshipit-source-id: 18287b5190ae0b8153ce425da9a0bdfe1af88c34
-
Committed by Yi Wu
Summary: The "je_" prefix of jemalloc APIs presents only when the macro `JEMALLOC_NO_RENAME` from jemalloc.h presents. With the patch I'm also adding -DROCKSDB_JEMALLOC flag in buck TARGETS. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4488 Differential Revision: D10355971 Pulled By: yiwu-arbug fbshipit-source-id: 03a2d69790a44ac89219c7525763fa937a63d95a
-
Committed by Chinmay Kamat
Summary: This commit adds code to acquire lock on the DB LOCK file before starting the repair process. This will prevent multiple processes from performing repair on the same DB simultaneously. Fixes repair_test to work with this change. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4435 Differential Revision: D10361499 Pulled By: riversand963 fbshipit-source-id: 3c512c48b7193d383b2279ccecabdb660ac1cf22
-
- 12 Oct, 2018 (3 commits)
-
-
Committed by Wilfried Goesgens
Summary: The default behaviour of RocksDB is to use the `*A(` Windows API functions, which accept filenames in the currently configured system encoding, be it Latin-1, UTF-8, or whatever. If the application intends to work entirely with UTF-8 strings internally, converting these to that codepage properly isn't even always possible. This patch therefore adds a switch to use the `*W(` functions, which accept UTF-16 filenames, and uses C++11 features to translate the UTF-8 std::string to a UTF-16 std::wstring. This feature is a compile-time option that can be enabled by setting `WITH_WINDOWS_UTF8_FILENAMES` to true. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4469 Differential Revision: D10356011 Pulled By: yiwu-arbug fbshipit-source-id: 27b6ae9171f209085894cdf80069e8a896642044
-
Committed by Abhishek Madan
Summary: Using `./range_del_aggregator_bench --use_collapsed=false --num_range_tombstones=5000 --num_runs=1000`, here are the results before and after this change.

Before:

```
=========================
Results:
=========================
AddTombstones: 1822.61 us
ShouldDelete (first): 94.5286 us
```

After:

```
=========================
Results:
=========================
AddTombstones: 199.26 us
ShouldDelete (first): 38.9344 us
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/4487 Differential Revision: D10347288 Pulled By: abhimadan fbshipit-source-id: d44efe3a166d583acfdc3ec1199e0892f34dbfb7
-
Committed by zpalmtree
Summary: Closes https://github.com/facebook/rocksdb/issues/4462. I'm not sure if you'll be happy with `std::random_device{}`; perhaps you would want to use your rand instance instead. I didn't test whether your rand instance satisfies the requirements that `std::shuffle` imposes. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4482 Differential Revision: D10325133 Pulled By: yiwu-arbug fbshipit-source-id: 47b7adaf4bb2b8d64cf090ea6b1b48ef53180581
-
- 11 Oct, 2018 (3 commits)
-
-
Committed by Young Tack Jin
Summary: "Write (GB)" of $9 rather than "Rnp1 (GB)" of $8 Pull Request resolved: https://github.com/facebook/rocksdb/pull/4442 Differential Revision: D10318193 Pulled By: yiwu-arbug fbshipit-source-id: 03a7ef1938d9332e06fb3fd8490ca212f61fac6b
-
Committed by UncP
Summary: In `WriteThread::LaunchParallelMemTableWriters`, there is `write_group->running.store(write_group->size);` https://github.com/facebook/rocksdb/blob/master/db/write_thread.cc#L510 Pull Request resolved: https://github.com/facebook/rocksdb/pull/4450 Differential Revision: D10201900 Pulled By: yiwu-arbug fbshipit-source-id: 96c8fbbba5aff7ba8a6ceb3117a2bd7cc9b2f34b
-
Committed by Simon Grätzer
Summary: When I override `WriteBatch::Handler::Continue` to return _false_ at some point, I always get the `Status::Corruption` error. I don't think this check is used correctly here: the counter in `found` cannot reflect all entries in the WriteBatch when we exit the loop early. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4478 Differential Revision: D10317416 Pulled By: yiwu-arbug fbshipit-source-id: cccae3382805035f9b3239b66682b5fcbba6bb61
-