- 14 Jun 2021, 2 commits
-
-
Submitted by Jay Zhuang

Summary: Recalculate the total size after generating new SST files. Newly generated files might differ in size from the previous run, which could cause the test to fail.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8396
Test Plan:
```
gtest-parallel ./db_compaction_test --gtest_filter=DBCompactionTest.ManualCompactionMax -r 1000 -w 100
```
Reviewed By: akankshamahajan15
Differential Revision: D29083299
Pulled By: jay-zhuang
fbshipit-source-id: 49d4bd619cefc0f9a1f452f8759ff4c2ba1b6fdb
-
Submitted by Peter Dillinger

Summary: Internal builds were failing.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8399
Test Plan: I can reproduce a failure by putting a bad version of `as` in my PATH. This indicates that before this change, the custom compiler was falsely relying on the host `as`. This change fixes that, ignoring the bad `as` on PATH.
Reviewed By: akankshamahajan15
Differential Revision: D29094159
Pulled By: pdillinger
fbshipit-source-id: c432e90404ea4d39d885a685eebbb08be9eda1c8
-
- 13 Jun 2021, 1 commit
-
-
Submitted by Levi Tamasi

Summary: The subcompaction boundary picking logic does not currently guarantee that all user keys that differ only by timestamp get processed by the same subcompaction. This can cause issues with the `CompactionIterator` state machine: for instance, one subcompaction that processes a subset of such KVs might drop a tombstone based on the KVs it sees, while in reality the tombstone might not have been eligible to be optimized out. (See also https://github.com/facebook/rocksdb/issues/6645, which adjusted the way compaction inputs are picked for the same reason.)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8393
Test Plan: Ran `make check` and the crash test script with timestamps enabled.
Reviewed By: jay-zhuang
Differential Revision: D29071635
Pulled By: ltamasi
fbshipit-source-id: f6c72442122b4e581871e096fabe3876a9e8a5a6
-
- 12 Jun 2021, 3 commits
-
-
Submitted by Peter Dillinger

Summary: DBImpl::DumpStats is supposed to do this:
- Dump DB stats to LOG
- For each CF, dump CFStatsNoFileHistogram to LOG
- For each CF, dump CFFileHistogram to LOG

Instead, due to a longstanding bug from 2017 (https://github.com/facebook/rocksdb/issues/2126), it would dump CFStats, which includes both CFStatsNoFileHistogram and CFFileHistogram, in both loops, resulting in near-duplicate output. This fixes the bug.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8380
Test Plan: Manual inspection of LOG after db_bench
Reviewed By: jay-zhuang
Differential Revision: D29017535
Pulled By: pdillinger
fbshipit-source-id: 3010604c4a629a80347f129cd746ce9b0d0cbda6
-
Submitted by Zhichao Cao

Summary: In the current logic, any IO error with retryable flag == true is handled by the special logic, and in most cases StartRecoverFromRetryableBGIOError is called to do the auto resume. If a NoSpace error with the retryable flag set occurs during a WAL write, it is mapped as a hard error, which triggers the auto recovery. During the recovery process, if writes continue and append to the WAL, the write path sees that bg_error is set to HardError and calls WriteStatusCheck(), which calls SetBGError() with a Status (not an IOStatus). This redirects to the regular SetBGError interface, in which recovery_error_ is set to the corresponding error. With recovery_error_ set, the auto resume thread created in StartRecoverFromRetryableBGIOError keeps failing as long as the user keeps trying to write. To fix this issue, all NoSpace errors (whether or not the retryable flag is set) are redirected to the regular SetBGError, and RecoverFromNoSpace() does the recovery job by calling SstFileManager::StartErrorRecovery().

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8376
Test Plan: make check, plus a new test case
Reviewed By: anand1976
Differential Revision: D29071828
Pulled By: zhichao-cao
fbshipit-source-id: 7171d7e14cc4620fdab49b7eff7a2fe9a89942c2
-
Submitted by Peter Dillinger

Summary: platform007 is being phased out and is sometimes broken.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8389
Test Plan: `make V=1` to see which compiler is being used
Reviewed By: jay-zhuang
Differential Revision: D29067183
Pulled By: pdillinger
fbshipit-source-id: d1b07267cbc55baa9395f2f4fe3967cc6dad52f7
-
- 11 Jun 2021, 5 commits
-
-
Submitted by mrambacher

Summary: Makes the Comparator class into a Customizable object. Added/updated the CreateFromString method to create Comparators. Added a test for using the ObjectRegistry to create one.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8336
Reviewed By: jay-zhuang
Differential Revision: D28999612
Pulled By: mrambacher
fbshipit-source-id: bff2cb2814eeb9fef6a00fddc61d6e34b6fbcf2e
-
Submitted by Akanksha Mahajan

Summary: This PR adds support for the Merge operation in the integrated BlobDB with base values (i.e., DB::Put). Merged values can be retrieved through DB::Get, DB::MultiGet, DB::GetMergeOperands, and iterator operations.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8292
Test Plan: Added new unit tests
Reviewed By: ltamasi
Differential Revision: D28415896
Pulled By: akankshamahajan15
fbshipit-source-id: e9b3478bef51d2f214fb88c31ed3c8d2f4a531ff
-
Submitted by Baptiste Lemaire

Summary: Changed an fprintf call to fputc in ApplyVersionEdit, and replaced null characters with whitespace. Added a unit test in ldb_test.py that verifies the manifest_dump --verbose output is correct when keys and values containing null characters are inserted.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8378
Reviewed By: pdillinger
Differential Revision: D29034584
Pulled By: bjlemaire
fbshipit-source-id: 50833687a8a5f726e247c38457eadc3e6dbab862
-
Submitted by matthewvon

Summary: fs_posix.cc GetFreeSpace() calculates free space based upon a call to statvfs(). However, there are two very different values in statvfs's returned structure: f_bfree, which is free space for root, and f_bavail, which is free space for non-root users. The existing code uses f_bfree. Many disks have 5 to 10% of the total disk space reserved for root only. Therefore GetFreeSpace() does not realize that non-root users may have no storage available. This PR detects whether the effective POSIX user is root or not, then selects the appropriate available-space value.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8370
Reviewed By: mrambacher
Differential Revision: D29032710
Pulled By: jay-zhuang
fbshipit-source-id: 57feba34ed035615a479956d28f98d85735281c0
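The f_bfree/f_bavail distinction described above can be sketched as follows. This is an illustrative standalone function, not RocksDB's actual fs_posix.cc code; error handling is simplified to returning 0.

```cpp
#include <sys/statvfs.h>
#include <unistd.h>
#include <cassert>
#include <cstdint>

// Report free space for the current user: f_bavail for ordinary users
// (excludes the root-only reserve), f_bfree when effectively root.
uint64_t GetFreeSpace(const char* path) {
  struct statvfs sbuf;
  if (statvfs(path, &sbuf) < 0) {
    return 0;  // real code would propagate an IOStatus instead
  }
  if (geteuid() == 0) {
    return static_cast<uint64_t>(sbuf.f_bfree) * sbuf.f_frsize;
  }
  return static_cast<uint64_t>(sbuf.f_bavail) * sbuf.f_frsize;
}
```

On a typical ext4 filesystem with the default 5% root reserve, the two branches can differ by several gigabytes, which is exactly the gap the PR closes.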
-
Submitted by Zhichao Cao

Summary: Currently, we use either the file system inode or a monotonically incrementing runtime ID as the block cache key prefix. However, the monotonically incrementing runtime ID (used when the file system does not support inode id generation) cannot ensure uniqueness in some cases (e.g., when a secondary cache is migrated from host to host). We now use DbSessionID (20 bytes) + the current file number (at most 10 bytes) as the cache block key prefix when the secondary cache is enabled, which accommodates scenarios such as transferring cache state across hosts.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8360
Test Plan: added a test to lru_cache_test
Reviewed By: pdillinger
Differential Revision: D29006215
Pulled By: zhichao-cao
fbshipit-source-id: 6cff686b38d83904667a2bd39923cd030df16814
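The "20 bytes + at most 10 bytes" sizing above comes from a fixed-length session id plus a varint-encoded file number (a 64-bit varint occupies 1 to 10 bytes). A minimal sketch, with illustrative helper names rather than RocksDB's actual ones:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// LEB128-style varint append, as used for the file-number suffix.
void PutVarint64(std::string* dst, uint64_t v) {
  while (v >= 0x80) {
    dst->push_back(static_cast<char>((v & 0x7f) | 0x80));
    v >>= 7;
  }
  dst->push_back(static_cast<char>(v));
}

// Prefix = 20-byte DB session id + varint file number (1..10 bytes),
// so the prefix stays unique even if the cache contents move hosts.
std::string MakeCacheKeyPrefix(const std::string& db_session_id,
                               uint64_t file_number) {
  std::string prefix = db_session_id;  // assumed to be 20 bytes
  PutVarint64(&prefix, file_number);
  return prefix;
}
```

Because the session id is globally unique per DB open, two hosts that exchange cache state can never collide on a prefix, unlike an inode- or counter-based scheme.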
-
- 10 Jun 2021, 1 commit
-
-
Submitted by Levi Tamasi

Summary: Logically, subcompactions process a key range [start, end); however, the way this is currently implemented is that the `CompactionIterator` for any given subcompaction keeps processing key-values until it actually outputs a key that is out of range, which is then discarded. Instead of doing this, the patch introduces a new type of internal iterator called `ClippingIterator` which wraps another internal iterator and "clips" its range of key-values so that any KVs returned are strictly in the [start, end) interval. This does eliminate a (minor) inefficiency by stopping processing in subcompactions exactly at the limit; however, the main motivation is related to BlobDB: namely, we need this to be able to measure the amount of garbage generated by a subcompaction precisely and prevent off-by-one errors.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8327
Test Plan: `make check`
Reviewed By: siying
Differential Revision: D28761541
Pulled By: ltamasi
fbshipit-source-id: ee0e7229f04edabbc7bed5adb51771fbdc287f69
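The clipping idea can be illustrated with a toy wrapper. RocksDB's `ClippingIterator` wraps an InternalIterator; here a sorted `std::vector` stands in for the inner iterator, and the class name and interface are simplified for the sketch:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Expose only keys in [start, end) from an ordered sequence; callers
// never see out-of-range KVs, so they need no post-hoc discarding.
class ClippingIter {
 public:
  ClippingIter(const std::vector<std::string>& keys,
               std::string start, std::string end)
      : keys_(keys), end_(std::move(end)), pos_(0) {
    // Position at the first key >= start.
    while (pos_ < keys_.size() && keys_[pos_] < start) ++pos_;
  }
  bool Valid() const { return pos_ < keys_.size() && keys_[pos_] < end_; }
  void Next() { ++pos_; }
  const std::string& key() const { return keys_[pos_]; }

 private:
  const std::vector<std::string>& keys_;
  std::string end_;
  size_t pos_;
};
```

The point for BlobDB is that `Valid()` turns false exactly at `end`, so per-subcompaction accounting (e.g., garbage bytes) counts precisely the KVs in the assigned range.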
-
- 08 Jun 2021, 2 commits
-
-
Submitted by Peter Dillinger

Summary: In final polishing of https://github.com/facebook/rocksdb/issues/8297 (after most manual testing), I broke my own caching layer by sanitizing an input parameter with std::min(0, x) instead of std::max(0, x). I resisted unit testing the timing part of the result caching because historically, these tests are either flaky or difficult to write, and this was not a correctness issue. This bug is essentially unnoticeable with a small number of column families but can explode background work with a large number of column families. This change fixes the logical error, removes some unnecessary related optimization, and adds mock time/sleeps to the unit test to ensure we can cache hit within the age limit.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8369
Test Plan: added time testing logic to existing unit test
Reviewed By: ajkr
Differential Revision: D28950892
Pulled By: pdillinger
fbshipit-source-id: e79cd4ff3eec68fd0119d994f1ed468c38026c3b
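The min/max mix-up above is a classic clamp bug worth spelling out: to force a value to be non-negative you need `std::max(0, x)`, while `std::min(0, x)` silently produces a value that is always <= 0. A one-line illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Clamp a possibly-negative input to zero. Using std::min here (the bug
// described in the commit) would instead zero out every positive input.
int64_t SanitizeNonNegative(int64_t x) {
  return std::max<int64_t>(0, x);
}
```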
-
Submitted by David Devecsery

Summary: Added the ability to cancel an in-progress range compaction by storing to an atomic "canceled" variable pointed to from within the CompactRangeOptions structure. Tested via two tests added to db_tests2.cc.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8351
Reviewed By: ajkr
Differential Revision: D28808894
Pulled By: ddevec
fbshipit-source-id: cb321361c9e23b084b188bb203f11c375a22c2dd
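The cancellation pattern here is cooperative: the caller owns an atomic flag, the long-running job polls it between units of work. A minimal sketch mirroring the `CompactRangeOptions::canceled` pointer; the struct and function names below are illustrative, not RocksDB's API:

```cpp
#include <atomic>
#include <cassert>

// Caller-owned options carrying a pointer to the cancellation flag,
// in the spirit of CompactRangeOptions::canceled.
struct RangeJobOptions {
  std::atomic<bool>* canceled = nullptr;
};

// Process up to `total` units of work, checking the flag between units.
// Returns how many units actually completed.
int RunJob(const RangeJobOptions& opts, int total) {
  int done = 0;
  while (done < total) {
    if (opts.canceled && opts.canceled->load(std::memory_order_relaxed)) {
      break;  // another thread requested cancellation
    }
    ++done;  // one unit of work
  }
  return done;
}
```

Relaxed ordering suffices for a pure stop flag: the job only needs to observe the store eventually, not synchronize any other data with it.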
-
- 05 Jun 2021, 2 commits
-
-
Submitted by Stepan Koltsov

Summary:

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8366
Test Plan: Run it; `TARGETS` is now unchanged.
Reviewed By: jay-zhuang
Differential Revision: D28914138
Pulled By: stepancheg
fbshipit-source-id: 04d24cdf1439edf4204a3ba1f646e9e75a00d92b
-
Submitted by Stiopa Koltsov

Summary: #forcetdhashing

Reviewed By: ndmitchell
Differential Revision: D28873060
fbshipit-source-id: 7d3be3e7d38619ec5b0b117f462ca1b9f427aa94
-
- 04 Jun 2021, 2 commits
-
-
Submitted by Andrew Kryczka

Summary: This is a duplicate of https://github.com/facebook/rocksdb/issues/4948 by mzhaom to fix tests after rebase. This change is a follow-up to https://github.com/facebook/rocksdb/issues/4927, which made this possible by allowing tombstone dropping/seqnum zeroing optimizations on the last key in the compaction. Now the `largest_seqno != 0` condition suffices to prevent snapshot-release-triggered compaction from entering an infinite loop. The issues caused by the extraneous condition `level_and_file.second->num_deletions > 1` are:
- files could have `largest_seqno > 0` forever, making it impossible to tell they cannot contain any covering keys
- it doesn't trigger compaction when there are many overwritten keys; some MyRocks use cases actually don't use Delete but instead call Put with an empty value to "delete" keys, so we'd like to be able to trigger compaction in this case too

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8357
Test Plan: make check
Reviewed By: jay-zhuang
Differential Revision: D28855340
Pulled By: ajkr
fbshipit-source-id: a261b51eecafec492499e6d01e8e43112f801798
-
Submitted by anand76

Summary: Update HISTORY and version to 6.21 on master.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8363
Reviewed By: jay-zhuang
Differential Revision: D28888818
Pulled By: anand1976
fbshipit-source-id: 9e5fac3b99ecc9f3b7d9f21474a39fa50decb117
-
- 02 Jun 2021, 3 commits
-
-
Submitted by PiyushDatta

Summary: Reference: https://github.com/facebook/rocksdb/issues/7201

Before fix:
```
/tmp/rocksdb_test_file/LOG.old.1622492586055679:Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
```
After fix:
```
/tmp/rocksdb_test_file/LOG:Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
```
Tests:
```
Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 0s Left: 0 AVG: 0.05s local:0/7720/100%/0.0s
rm -rf /dev/shm/rocksdb.CLRh
/usr/bin/python3 tools/check_all_python.py
No syntax errors in 34 .py files
/usr/bin/python3 tools/ldb_test.py
Running testCheckConsistency... .Running testColumnFamilies... .Running testCountDelimDump... .Running testCountDelimIDump... .Running testDumpLiveFiles... .Running testDumpLoad... Warning: 7 bad lines ignored. .Running testGetProperty... .Running testHexPutGet... .Running testIDumpBasics... .Running testIngestExternalSst... .Running testInvalidCmdLines... .Running testListColumnFamilies... .Running testManifestDump... .Running testMiscAdminTask... Sequence,Count,ByteSize,Physical Offset,Key(s) .Running testSSTDump... .Running testSimpleStringPutGet... .Running testStringBatchPut... .Running testTtlPutGet... .Running testWALDump... .
----------------------------------------------------------------------
Ran 19 tests in 15.945s
OK
sh tools/rocksdb_dump_test.sh
make check-format
make[1]: Entering directory '/home/piydatta/Documents/rocksdb'
$DEBUG_LEVEL is 1
Makefile:176: Warning: Compiling in debug mode. Don't use the resulting binary in production
build_tools/format-diff.sh -c
Checking format of uncommitted changes...
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8350
Reviewed By: jay-zhuang
Differential Revision: D28790567
Pulled By: zhichao-cao
fbshipit-source-id: dcb1e4c124361156435122f21f0a288335b2c8c8
-
Submitted by Jay Zhuang

Summary:
- Fix cmake build failure with gflags.
- Add CI tests for both gflags 2.1 and 2.2.
- Fix ctest config with gtest.
- Add CI to run tests with ctest.

One benefit of ctest is that it supports timeouts; it's set to 5 minutes in our CI, so we will know which test hangs.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8324
Test Plan: CI pass
Reviewed By: ajkr
Differential Revision: D28762517
Pulled By: jay-zhuang
fbshipit-source-id: 09063c5af5f9f33abfcdeb48593acbd9826cd199
-
Submitted by sdong

Summary: The whitebox crash test can run significantly over the time limit due to test slowness or no killing points. This indefinite job can create problems when the test is periodically scheduled as a job. Instead, kill the job if it is 15 minutes over the limit. Refactor the code slightly to consolidate the code for executing commands for the whitebox and blackbox tests.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8341
Test Plan: Ran both the blackbox and whitebox tests with both natural and explicit kill conditions.
Reviewed By: jay-zhuang
Differential Revision: D28756170
fbshipit-source-id: f253149890e62ace78f871be927e093e9b12f49b
-
- 01 Jun 2021, 2 commits
-
-
Submitted by Andrew Kryczka

Summary:

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8342
Reviewed By: ramvadiv
Differential Revision: D28762140
Pulled By: ajkr
fbshipit-source-id: c66ca865f5136d6ad321d0f54a62cbf46d9251ba
-
Submitted by anand76

Summary: Update graphs to remove FB-specific terms such as WSF, and update the link to the GitHub issue in the secondary cache blog post.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8348
Reviewed By: ramvadiv
Differential Revision: D28773858
Pulled By: anand1976
fbshipit-source-id: 86281d5c6928550d68d5aa66aae39a41a41f928f
-
- 28 May 2021, 4 commits
-
-
Submitted by sdong

Summary: A new blog post to introduce recent development related to online validation.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8338
Test Plan: Local test with "bundle exec jekyll serve"
Reviewed By: ltamasi
Differential Revision: D28757134
fbshipit-source-id: 42268e1af8dc0c6a42ae62ea61568409b7ce10e4
-
Submitted by sdong

Summary: SyncPoint is now used in the crash test but can significantly slow down the run. Add a Bloom filter before each process to speed it up.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8337
Test Plan: Run all existing tests
Reviewed By: ajkr
Differential Revision: D28730282
fbshipit-source-id: a187377a9d47877a36c5649e4b1f67d5e3033238
-
Submitted by anand76

Summary: Blog post about SecondaryCache.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8339
Reviewed By: zhichao-cao
Differential Revision: D28753501
Pulled By: anand1976
fbshipit-source-id: d3241b746a9266fb523e13ad45fd0288083f7470
-
Submitted by Peter (Stig) Edwards

Summary: I noticed an `openat` system call with the `O_WRONLY` flag, plus `sync_file_range` and `truncate` on the WAL file, when using `rocksdb::DB::OpenForReadOnly` by way of:
```
db_bench --readonly=true --benchmarks=readseq --use_existing_db=1 --num=1 ...
```
Noticed in `strace` after seeing the last modification time of the WAL file change after each run (with `--readonly=true`). I think this was introduced by https://github.com/facebook/rocksdb/commit/7d7f14480e135a4939ed6903f46b3f7056aa837a from https://github.com/facebook/rocksdb/pull/8122. I added a test to catch the WAL file being truncated and its modification time changing. I am not sure if a mock filesystem with a mock clock could be used to avoid having to sleep 1.1s. The test could also check that the set of files is the same and that the sizes are unchanged.

Before:
```
[ RUN      ] DBBasicTest.ReadOnlyReopenMtimeUnchanged
db/db_basic_test.cc:182: Failure
Expected equality of these values:
  file_mtime_after_readonly_reopen
    Which is: 1621611136
  file_mtime_before_readonly_reopen
    Which is: 1621611135
file is: 000010.log
[  FAILED  ] DBBasicTest.ReadOnlyReopenMtimeUnchanged (1108 ms)
```
After:
```
[ RUN      ] DBBasicTest.ReadOnlyReopenMtimeUnchanged
[       OK ] DBBasicTest.ReadOnlyReopenMtimeUnchanged (1108 ms)
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8313
Reviewed By: pdillinger
Differential Revision: D28656925
Pulled By: jay-zhuang
fbshipit-source-id: ea9e215cb53e7c830e76bc5fc75c45e21f12a1d6
-
- 27 May 2021, 2 commits
-
-
Submitted by sdong

Summary: The comment for ReadOptions.iterate_upper_bound is confusing. Improve it.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8331
Reviewed By: jay-zhuang
Differential Revision: D28696635
fbshipit-source-id: 7d9fa6fd1642562572140998c89d434058db8dda
-
Submitted by Levi Tamasi

Summary:

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8335
Reviewed By: ramvadiv
Differential Revision: D28715167
Pulled By: ltamasi
fbshipit-source-id: 1816196664b0d31aed0b9002df426579441da3f1
-
- 26 May 2021, 1 commit
-
-
Submitted by Peter Dillinger

Summary: Avoid people hitting bugs.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8330
Test Plan: comments only
Reviewed By: siying
Differential Revision: D28683157
Pulled By: pdillinger
fbshipit-source-id: 2b34d3efb5e2fa34bea93d54c940cbd425212d25
-
- 25 May 2021, 1 commit
-
-
Submitted by sdong

Summary: https://github.com/facebook/rocksdb/pull/8288 introduced a bug: SequenceIterWrapper should do Next() for the seek key using the internal key comparator rather than the user comparator. Fix it.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8328
Test Plan: Pass all existing tests
Reviewed By: ltamasi
Differential Revision: D28647263
fbshipit-source-id: 4081d684fd8a86d248c485ef8a1563c7af136447
-
- 24 May 2021, 1 commit
-
-
Submitted by Zhichao Cao

Summary: Fix for https://github.com/facebook/rocksdb/issues/8315. In the LRU caching test, 5100 is not enough to hold the meta block and the first block in some random cases; increase it to 6100. Also fix a reference binding to a null pointer by using a template.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8326
Test Plan: make check
Reviewed By: pdillinger
Differential Revision: D28625666
Pulled By: zhichao-cao
fbshipit-source-id: 97b85306ae3d09bfb74addc7c65e57fe55a976a5
-
- 22 May 2021, 6 commits
-
-
Submitted by Jay Zhuang

Summary: Error:
```
db/db_compaction_test.cc:5211:47: warning: The left operand of '*' is a garbage value
uint64_t total = (l1_avg_size + l2_avg_size * 10) * 10;
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8325
Test Plan: `$ make analyze`
Reviewed By: pdillinger
Differential Revision: D28620916
Pulled By: jay-zhuang
fbshipit-source-id: f6d58ab84eefbcc905cda45afb9522b0c6d230f8
-
Submitted by Zhichao Cao

Summary: The secondary cache is implemented to provide a secondary tier for the block cache. New Insert and Lookup APIs were introduced in https://github.com/facebook/rocksdb/issues/8271. To support and use the secondary cache in the block-based table reader, this PR introduces the corresponding callback functions that will be used by the secondary cache, and updates the Insert and Lookup APIs accordingly.

Benchmarking:
```
./db_bench --benchmarks="fillrandom" -num=1000000 -key_size=32 -value_size=256 -use_direct_io_for_flush_and_compaction=true -db=/tmp/rocks_t/db -partition_index_and_filters=true
./db_bench -db=/tmp/rocks_t/db -use_existing_db=true -benchmarks=readrandom -num=1000000 -key_size=32 -value_size=256 -use_direct_reads=true -cache_size=1073741824 -cache_numshardbits=5 -cache_index_and_filter_blocks=true -read_random_exp_range=17 -statistics -partition_index_and_filters=true -stats_dump_period_sec=30 -reads=50000000
```
master benchmarking results:
```
readrandom : 3.923 micros/op 254881 ops/sec; 33.4 MB/s (23849796 of 50000000 found)
rocksdb.db.get.micros P50 : 2.820992 P95 : 5.636716 P99 : 16.450553 P100 : 8396.000000 COUNT : 50000000 SUM : 179947064
```
Current PR benchmarking results:
```
readrandom : 4.083 micros/op 244925 ops/sec; 32.1 MB/s (23849796 of 50000000 found)
rocksdb.db.get.micros P50 : 2.967687 P95 : 5.754916 P99 : 15.665912 P100 : 8213.000000 COUNT : 50000000 SUM : 187250053
```
About a 3.8% throughput reduction. P50: 5.2% increase; P95: 2.09% increase; P99: 4.77% improvement.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8315
Test Plan: added a test case
Reviewed By: anand1976
Differential Revision: D28599774
Pulled By: zhichao-cao
fbshipit-source-id: 098c4df0d7327d3a546df7604b2f1602f13044ed
-
Submitted by Jay Zhuang

Summary: The macOS build is taking more than 1 hour; bump the instance type from the default medium to large (a large macOS instance was not available before).

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8320
Test Plan: watch CI pass
Reviewed By: ajkr
Differential Revision: D28589456
Pulled By: jay-zhuang
fbshipit-source-id: cff78dae5aaf9de90ade3468469290176de5ff32
-
Submitted by Peter Dillinger

Summary: With the Ribbon filter work and possible variance in the actual bits per key (or prefix; general term "entry") needed to achieve certain FP rates, I've received a request to be able to track the actual bits per key in generated filters. This change adds a num_filter_entries table property, which can be combined with filter_size to get bits per key (entry). This can vary from num_entries in at least these ways:
- Different versions of the same key are only counted once in filters.
- With prefix filters, several user keys map to the same filter entry.
- A single filter can include both prefixes and user keys.

Note that FilterBlockBuilder::NumAdded() didn't do anything useful except distinguish empty from non-empty.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8323
Test Plan: basic unit test included, others updated
Reviewed By: jay-zhuang
Differential Revision: D28596210
Pulled By: pdillinger
fbshipit-source-id: 529a111f3c84501e5a470bc84705e436ee68c376
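The derived metric the property enables is plain arithmetic: achieved bits per entry = filter size in bits divided by the number of filter entries. A small sketch (the function name is illustrative; the two inputs correspond to the filter_size and num_filter_entries table properties):

```cpp
#include <cassert>
#include <cstdint>

// Bits per entry actually achieved by a filter, from its size in bytes
// and the entry count reported by the table properties.
double BitsPerEntry(uint64_t filter_size_bytes, uint64_t num_filter_entries) {
  if (num_filter_entries == 0) return 0.0;  // no entries, nothing to measure
  return (8.0 * static_cast<double>(filter_size_bytes)) /
         static_cast<double>(num_filter_entries);
}
```

For example, a 1250-byte filter covering 1000 entries achieves 10 bits per entry, which can then be compared against the configured bits_per_key to gauge Ribbon's space variance.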
-
Submitted by Jay Zhuang

Summary: Fix a bug where, for manual compaction, `max_compaction_bytes` only limits the SST files from the input level, not the overlapping files on the output level.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8269
Test Plan: `make check`
Reviewed By: ajkr
Differential Revision: D28231044
Pulled By: jay-zhuang
fbshipit-source-id: 9d7d03004f30cc4b1b9819830141436907554b7c
-
Submitted by sdong

Summary: By default, try to build with liburing. For make, if ROCKSDB_USE_IO_URING is not set, treat it as 1, which means RocksDB will try to build with liburing. For cmake, add WITH_LIBURING to control it, with the default on.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8322
Test Plan: Build using cmake and make.
Reviewed By: anand1976
Differential Revision: D28586498
fbshipit-source-id: cfd39159ab697f4b93a9293a59c07f839b1e7ed5
-
- 21 May 2021, 2 commits
-
-
Submitted by sdong

Summary: When a memtable is flushed, it validates the number of entries it reads and compares that with how many entries were inserted into the memtable. This serves as one sanity check against memory corruption. This change will also allow more counters to be added in the future for better validation.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8288
Test Plan: Pass all existing tests
Reviewed By: ajkr
Differential Revision: D28369194
fbshipit-source-id: 7ff870380c41eab7f99eee508550dcdce32838ad
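The flush-time check boils down to: count inserts as they happen, then count entries while iterating at flush and compare. A toy illustration of that invariant (not RocksDB's memtable; a multimap stands in for the skiplist, which also keeps one entry per insert):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Track inserts and verify at "flush" that iterating the structure
// yields exactly as many entries; a mismatch would hint at corruption.
class ToyMemTable {
 public:
  void Add(const std::string& k, const std::string& v) {
    data_.emplace(k, v);  // like a skiplist, duplicates are kept
    ++num_inserted_;
  }
  bool ValidateOnFlush() const {
    uint64_t read = 0;
    for (const auto& kv : data_) {
      (void)kv;  // a real flush would write the entry out here
      ++read;
    }
    return read == num_inserted_;
  }

 private:
  std::multimap<std::string, std::string> data_;
  uint64_t num_inserted_ = 0;
};
```

The commit's point is that this counter is cheap to maintain on the write path, and the comparison at flush costs nothing extra since flush iterates every entry anyway.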
-
Submitted by Jay Zhuang

Summary: The test wants to make sure there's no compaction during `AddFile` (between `DBImpl::AddFile:MutexLock` and `DBImpl::AddFile:MutexUnlock`), but the mutex could be unlocked by `EnterUnbatched()`. Move the lock start point to after bumping the ingest file number. Also fix the deadlock when an ASSERT fails.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8307
Reviewed By: ajkr
Differential Revision: D28479849
Pulled By: jay-zhuang
fbshipit-source-id: b3c50f66aa5d5f59c5c27f815bfea189c4cd06cb
-