- 19 Sep 2021, 1 commit
-
-
Committed by anand76
Summary: In case of io_uring bugs, we need to provide a way for users to turn it off.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8931
Test Plan: Manually run db_bench with and without the option and verify the behavior.
Reviewed By: pdillinger
Differential Revision: D31040252
Pulled By: anand1976
fbshipit-source-id: 56f2537d6ac8488c9e126296d8190ad9e0158f70
-
- 18 Sep 2021, 5 commits
-
-
Committed by Jay Zhuang
Summary: Add support for falling back to local compaction: the user can return `CompactionServiceJobStatus::kUseLocal` to instruct RocksDB to run the compaction locally instead of waiting for the remote compaction result.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8709
Test Plan: unit test
Reviewed By: ajkr
Differential Revision: D30560163
Pulled By: jay-zhuang
fbshipit-source-id: 65d8905a4a1bc185a68daa120997f21d3198dbe1
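The fallback flow this entry describes can be sketched roughly as follows. The enum value mirrors `CompactionServiceJobStatus::kUseLocal` from the summary, but the job struct and scheduling function here are hypothetical stand-ins for illustration, not RocksDB's actual CompactionService API.

```cpp
#include <cassert>
#include <functional>
#include <string>

// Mirrors the spirit of RocksDB's CompactionServiceJobStatus; kUseLocal means
// "remote service declined, run this compaction in-process instead".
enum class CompactionServiceJobStatus { kSuccess, kFailure, kUseLocal };

// Hypothetical job descriptor, just enough for the sketch.
struct CompactionJob {
  std::string input_files;
};

// Tries the remote compaction service first; if the service returns
// kUseLocal, runs the compaction locally instead of failing the job.
CompactionServiceJobStatus RunCompaction(
    const CompactionJob& job,
    const std::function<CompactionServiceJobStatus(const CompactionJob&)>&
        remote,
    const std::function<bool(const CompactionJob&)>& local) {
  CompactionServiceJobStatus s = remote(job);
  if (s == CompactionServiceJobStatus::kUseLocal) {
    return local(job) ? CompactionServiceJobStatus::kSuccess
                      : CompactionServiceJobStatus::kFailure;
  }
  return s;
}
```

The point of the design is that declining a remote job is not an error: the scheduler treats `kUseLocal` as a routing decision, not a failure.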
-
Committed by Yanqin Jin
Summary: In batched `MultiGet()`, RocksDB now batches blob read IO and uses `RandomAccessFileReader::MultiRead()` to read the blobs instead of issuing multiple `Read()` calls.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8699
Test Plan: `make check`
Reviewed By: ltamasi
Differential Revision: D31030861
Pulled By: riversand963
fbshipit-source-id: a0df6060cbfd54cff9515a4eee08807b1dbcb0c8
-
Committed by sdong
Summary: As title. The reason is that after loading customized options, the env was not set back to the correct one. Fix it.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8929
Test Plan: Manually validated in an environment where the command previously failed.
Reviewed By: riversand963
Differential Revision: D31026931
fbshipit-source-id: c25dc788bf80ed5bf4b24922c442781943bcd65b
-
Committed by Peter Dillinger
Summary: Because even 32-bit systems can have large files. This is a "change" that I don't want intermingled with an upcoming refactoring.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8926
Test Plan: CI
Reviewed By: zhichao-cao
Differential Revision: D31020974
Pulled By: pdillinger
fbshipit-source-id: ca9eb4510697df6f1f55e37b37730b88b1809a92
-
Committed by Hui Xiao
Summary:
- Fixed a bug in `RateLimiterTest.GeneratePriorityIterationOrder` where the callbacks in this test were not called starting from `i = 1`. Fixed by increasing `rate_bytes_per_sec` and the requested bytes.
  - The bug was that the previous `rate_bytes_per_sec` was set too small, making `refill_bytes_per_period` less than `kMinRefillBytesPerPeriod`. Hence the actual `refill_bytes_per_period` was equal to `kMinRefillBytesPerPeriod` due to the logic [here](https://github.com/facebook/rocksdb/blob/main/util/rate_limiter.cc#L302-L303), and it ended up being greater than the previously set requested bytes. Therefore, starting from `i = 1`, `RefillBytesAndGrantRequests()` and `GeneratePriorityIterationOrder` were not called and the test callbacks were not triggered to execute the assertion.
- Added an internal flag asserting that the callbacks are called in `RateLimiterTest.GeneratePriorityIterationOrder`, to prevent future changes from defeating the purpose of the test [as suggested](https://github.com/facebook/rocksdb/pull/8890#discussion_r704915134).
- Increased `rate_bytes_per_sec` and the bytes of each request in `RateLimiterTest.GetTotalBytesThrough`, `RateLimiterTest.GetTotalRequests`, and `RateLimiterTest.GetTotalPendingRequests` to trigger the "long path" of execution (i.e., the one that triggers `RefillBytesAndGrantRequests()`) and increase test coverage.
  - This increased the running time of the three tests; see the test plan for the time difference when running locally.
- Cleared sync point effects after each test by calling `SyncPoint::GetInstance()->DisableProcessing();` and `SyncPoint::GetInstance()->ClearAllCallBacks();` in `~RateLimiterTest()` [as suggested](https://github.com/facebook/rocksdb/pull/8595/files#r697534279).
  - It's fine to call these two methods even when `EnableProcessing()` or `SetCallBack()` is not called in the test or is already cleaned up. In those cases, calling them in the destructor is effectively a no-op.
  - This allows cleaning up the sync point effects of the previous test even when that test failed an assertion.
- Added missing `SyncPoint::GetInstance()->DisableProcessing();` and `SyncPoint::GetInstance()->ClearCallBacks(..);` calls in existing tests for completeness.
- Called `SyncPoint::GetInstance()->DisableProcessing();` and `SyncPoint::GetInstance()->ClearCallBacks(..);` in a loop in `RateLimiterTest.GeneratePriorityIterationOrder` for completeness.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8904
Test Plan:
- Passing existing tests.
- To verify the 1st change, run `RateLimiterTest.GeneratePriorityIterationOrder` with assertions that the callbacks are indeed called, under both the original and the updated `rate_bytes_per_sec` and request bytes. The former fails the assertion while the latter succeeds.
- Here is the increased test time due to the 3rd change mentioned above. The three relevant tests increase the total test time by about 6s (~6000/33848 = 17.7% of the original total), which IMO is acceptable for better test coverage through running the "long path".
- Current (run on branch rate_limiter_ut_improve locally):
  [ RUN      ] RateLimiterTest.GetTotalBytesThrough
  [       OK ] RateLimiterTest.GetTotalBytesThrough (3000 ms)
  [ RUN      ] RateLimiterTest.GetTotalRequests
  [       OK ] RateLimiterTest.GetTotalRequests (3001 ms)
  [ RUN      ] RateLimiterTest.GetTotalPendingRequests
  [       OK ] RateLimiterTest.GetTotalPendingRequests (0 ms)
  ...
  [----------] 10 tests from RateLimiterTest (43349 ms total)
  [----------] Global test environment tear-down
  [==========] 10 tests from 1 test case ran. (43349 ms total)
  [  PASSED  ] 10 tests.
- Previous (run on branch main locally):
  [ RUN      ] RateLimiterTest.GetTotalBytesThrough
  [       OK ] RateLimiterTest.GetTotalBytesThrough (0 ms)
  [ RUN      ] RateLimiterTest.GetTotalRequests
  [       OK ] RateLimiterTest.GetTotalRequests (0 ms)
  [ RUN      ] RateLimiterTest.GetTotalPendingRequests
  [       OK ] RateLimiterTest.GetTotalPendingRequests (0 ms)
  ...
  [----------] 10 tests from RateLimiterTest (33848 ms total)
  [----------] Global test environment tear-down
  [==========] 10 tests from 1 test case ran. (33848 ms total)
  [  PASSED  ] 10 tests.
Reviewed By: ajkr
Differential Revision: D30872544
Pulled By: hx235
fbshipit-source-id: ff894f5c1a4bef70e8e407d53b00be45f776b3e4
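The refill arithmetic behind the bug described in this entry can be illustrated with a minimal sketch. The names echo `rate_limiter.cc`, but the constant values here are assumptions chosen for illustration, not RocksDB's exact numbers.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

constexpr int64_t kMicrosPerSecond = 1000000;
// Assumed minimum; the real constant lives in rate_limiter.cc.
constexpr int64_t kMinRefillBytesPerPeriod = 100;

// Bytes granted per refill period for a given rate. A too-small rate gets
// clamped up to the minimum, so a single refill can satisfy every queued
// request at once and RefillBytesAndGrantRequests() is never re-entered --
// which is exactly why the test callbacks were never triggered.
int64_t CalculateRefillBytesPerPeriod(int64_t rate_bytes_per_sec,
                                      int64_t refill_period_us) {
  int64_t refill = rate_bytes_per_sec * refill_period_us / kMicrosPerSecond;
  return std::max(refill, kMinRefillBytesPerPeriod);
}
```

With a rate of 10 bytes/sec and a 100 ms period the nominal refill is 1 byte, but the clamp raises it to 100, exceeding any small test request; a large rate avoids the clamp and exercises the "long path".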
-
- 17 Sep 2021, 5 commits
-
-
Committed by mrambacher
Summary: This keeps the implementations/API backward compatible. Implementations of Statistics will need to override this method (and be registered with the ObjectRegistry) in order to be created via CreateFromString.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8918
Reviewed By: pdillinger
Differential Revision: D30958916
Pulled By: mrambacher
fbshipit-source-id: 75b99a84e9e11fda2a9e8eff9ee1ef69a17517b2
-
Committed by Akanksha Mahajan
Summary:
1. Extend FlushJobInfo and CompactionJobInfo with information about the blob files generated by flush/compaction jobs. This PR adds two structures, BlobFileInfo and BlobFileGarbageInfo, that contain the required information about blob files.
2. Notify the creation and deletion of blob files through OnBlobFileCreationStarted, OnBlobFileCreated, and OnBlobFileDeleted.
3. Test OnFile*Finish operation notifications with blob files.
4. Log the blob file creation/deletion events through EventLogger in the log file.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8675
Test Plan: Add new unit tests in listener_test
Reviewed By: ltamasi
Differential Revision: D30412613
Pulled By: akankshamahajan15
fbshipit-source-id: ca51b63c6e8c8d0485a38c503572bc5a82bd5d07
-
Committed by sdong
Summary: Right now, the failure injection test for MultiGet() is not sufficient. Improve it by having TestFSRandomAccessFile::MultiRead() inject failures.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8925
Test Plan: Ran the crash test locally for a while.
Reviewed By: anand1976
Differential Revision: D31000529
fbshipit-source-id: 439c7e02cf7440ac5af82deb609e202abdca3e1f
-
Committed by Jay Zhuang
Summary: Add compaction priority information to RemoteCompaction, which can be used to schedule high-priority jobs first.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8707
Test Plan: unit test
Reviewed By: ajkr
Differential Revision: D30548401
Pulled By: jay-zhuang
fbshipit-source-id: b30446511fb31b4583c49edd8565d496cf013a34
-
Committed by sdong
Summary: One contrun name is incorrect, mixing its error reporting with another one's. Fix it.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8924
Reviewed By: ltamasi
Differential Revision: D30999477
fbshipit-source-id: 46a04b2e4b48f755181aa9a47c353d91f1128469
-
- 16 Sep 2021, 6 commits
-
-
Committed by Peter Dillinger
Summary: The test did not consider that the slower deletion rate only kicks in after a file is deleted. Fixes https://github.com/facebook/rocksdb/issues/7546
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8917
Test Plan: no longer reproduces using buck test mode/dev //internal_repo_rocksdb/repo:db_sst_test -- --exact 'internal_repo_rocksdb/repo:db_sst_test - DBWALTestWithParam/DBWALTestWithParam.WALTrashCleanupOnOpen/0' --jobs 40 --stress-runs 600 --record-results
Reviewed By: siying
Differential Revision: D30949127
Pulled By: pdillinger
fbshipit-source-id: 5d0607f8f548071b07410fe8f532b4618cd225e5
-
Committed by Peter Dillinger
Summary: kFlushOnly currently means "always" except in the case of remote compaction. This change makes it apply to flushes only.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8750
Test Plan: test updated
Reviewed By: akankshamahajan15
Differential Revision: D30968034
Pulled By: pdillinger
fbshipit-source-id: 5dbd24dde18852a0e937a540995fba9bfbe89037
-
Committed by Zhichao Cao
Summary: In order to propagate the IOStatus up to the higher levels, replace some of the Status uses with IOStatus.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8820
Test Plan: make check
Reviewed By: pdillinger
Differential Revision: D30967215
Pulled By: zhichao-cao
fbshipit-source-id: ccf9d5cfbd9d3de047c464aaa85f9fa43b474903
-
Committed by Andrew Kryczka
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/8907
Reviewed By: zhichao-cao
Differential Revision: D30922081
Pulled By: ajkr
fbshipit-source-id: ad7a32c21d0049342fd20c9b7f555e93674c3671
-
Committed by anand76
Summary: Potential bugs in the io_uring implementation can cause bad data to be returned in the completion queue. Add some checks in the PosixRandomAccessFile::MultiRead completion handling code to catch such errors and fail the entire MultiRead. Also log some diagnostic messages and a stack trace.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8894
Reviewed By: siying, pdillinger
Differential Revision: D30826982
Pulled By: anand1976
fbshipit-source-id: af91815ac760e095d6cc0466cf8bd5c10167fd15
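The kind of completion sanity check this entry describes can be sketched as follows. The `Completion` struct is a hypothetical stand-in for io_uring completion-queue entries; this is not the actual PosixRandomAccessFile code.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical shape of a completed read: how many bytes were requested and
// how many the completion claims were returned.
struct Completion {
  int64_t requested;
  int64_t returned;
};

// Returns true only if every completion is plausible: a non-negative byte
// count that does not exceed what was asked for. On any implausible entry,
// the caller would fail the entire MultiRead rather than trust the data.
bool ValidateCompletions(const std::vector<Completion>& cqes) {
  for (const auto& c : cqes) {
    if (c.returned < 0 || c.returned > c.requested) {
      return false;
    }
  }
  return true;
}
```

A short read (`returned < requested`) is legitimate near end-of-file, which is why only negative and over-long results are treated as corruption here.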
-
Committed by Levi Tamasi
Summary: Currently, `benchmark.sh` computes write amplification itself; the patch changes the script to use the value calculated by RocksDB (which is printed as part of the periodic statistics). This also has the benefit of being correct for BlobDB as well, since it also considers the amount of data written to blob files.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8915
Test Plan:
```
DB_DIR=/tmp/rocksdbtest/dbbench/ WAL_DIR=/tmp/rocksdbtest/dbbench/ NUM_KEYS=20000000 NUM_THREADS=32 tools/benchmark.sh overwrite --enable_blob_files=1 --enable_blob_garbage_collection=1
...
** Compaction Stats [default] **
Level Files  Size      Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0    7/5  43.93 MB   0.5    0.3     0.0    0.3      0.5       0.3      0.0       1.0   1.3      59.9     201.35    101.88            109       1.847    22M   499K    0.0       11.2
  L4    4/4 244.03 MB   0.0   11.4     0.3    1.6      1.6       0.0      0.0       1.1  50.6      49.3     231.10    288.84              7      33.014   156M    26M    9.5        9.5
  L5   36/0   3.28 GB   0.0    0.0     0.0    0.0      0.0       0.0      0.0       0.0   0.0       0.0       0.00      0.00              0       0.000      0      0    0.0        0.0
 Sum   47/9   3.56 GB   0.0   11.7     0.3    1.8      2.2       0.3      0.0       2.0  27.6      54.3     432.45    390.72            116       3.728   179M    26M    9.5       20.8
 Int    0/0   0.00 KB   0.0    3.5     0.1    0.5      0.6       0.1      0.0       2.2  31.2      55.6     115.01    109.53             29       3.966    51M  7353K    2.9        5.6
...
Completed overwrite (ID: ) in 289 seconds
ops/sec mb/sec Size-GB L0_GB Sum_GB W-Amp W-MB/s usec/op p50   p75   p99  p99.9 p99.99 Uptime Stall-time  Stall% Test             Date                          Version Job-ID
111784  44.8   0.0     0.5   2.2    2.0   9.2    285.9  215.3 264.4 1232 13299 23310  243    00:00:0.000 0.0    overwrite.t32.s0 2021-09-14T11:58:26.000-07:00 6.24
```
Reviewed By: zhichao-cao
Differential Revision: D30940352
Pulled By: ltamasi
fbshipit-source-id: ae7f5cd5440c8529788dda043266121fc2be0853
-
- 15 Sep 2021, 4 commits
-
-
Committed by sdong
Summary: ArenaWrappedDBIter::db_iter_ should never be nullptr. However, when debugging a segfault, it's hard to distinguish between it not being initialized (not possible) and other corruption. Add a check for this nullptr to help distinguish the cases.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8889
Test Plan: Run existing unit tests.
Reviewed By: pdillinger
Differential Revision: D30814756
fbshipit-source-id: 4b1f36896a33dc203d4f1f424ded9554927d61ba
-
Committed by Andrew Kryczka
Summary: After https://github.com/facebook/rocksdb/issues/8725, keys added to `WriteBatch` may be timestamp-suffixed, while `WriteBatch` has no awareness of the timestamp size. Therefore, `WriteBatch` can no longer calculate the timestamp checksum separately from the rest of the key's checksum in all cases. This PR changes the definition of the key in the KV checksum to include the timestamp suffix. That way we do not need to worry about where the timestamp begins within the key. I believe the only practical effect of this change is that `AssignTimestamp()` now requires recomputing the whole key checksum (`UpdateK()`) rather than just the timestamp portion (`UpdateT()`).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8914
Test Plan: run the stress command that used to fail
```
$ ./db_stress --batch_protection_bytes_per_key=8 -clear_column_family_one_in=0 -test_batches_snapshots=1
```
Reviewed By: riversand963
Differential Revision: D30925715
Pulled By: ajkr
fbshipit-source-id: c143f7ccb46c0efb390ad57ef415c250d754deff
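A minimal sketch of the redefined key checksum this entry describes, assuming a simple FNV-1a hash in place of RocksDB's actual per-KV protection scheme: the checksum covers the user key and its timestamp suffix as one contiguous unit, so assigning a new timestamp means recomputing it from scratch.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// FNV-1a stands in for the real protection hash; it is not RocksDB's.
uint64_t Fnv1a(const std::string& data) {
  uint64_t h = 1469598103934665603ull;
  for (unsigned char c : data) {
    h ^= c;
    h *= 1099511628211ull;  // FNV 64-bit prime
  }
  return h;
}

// The key checksum now covers user_key + timestamp as a single unit, so
// there is no need to know where the timestamp begins within the key.
uint64_t KeyChecksum(const std::string& user_key, const std::string& ts) {
  return Fnv1a(user_key + ts);
}
```

Under the old scheme the timestamp was hashed separately and could be swapped cheaply; with this definition, changing the timestamp changes the whole-key checksum, which is the `UpdateK()`-over-`UpdateT()` trade-off the summary mentions.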
-
Committed by Adam Retter
Summary:
* Started on some proper usage text to document the options.
* Added a `JOB_ID` parameter, so that we can trace jobs and relate them to other assets.
* Now generates a correct TSV file of the summary.
* The summary has new additional fields:
  * RocksDB Version
  * Date
  * Job ID
* db_bench log files now also include the Job ID.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8730
Reviewed By: mrambacher
Differential Revision: D30747344
Pulled By: jay-zhuang
fbshipit-source-id: 87eb78d20959b6d95804aebf129606fa9c71f407
-
Committed by Cheng Chang
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/8911
Reviewed By: zhichao-cao
Differential Revision: D30908552
Pulled By: cheng-chang
fbshipit-source-id: df2ab50d94ed46bfb54f0dd520f8a5cdbfa49fd1
-
- 14 Sep 2021, 5 commits
-
-
Committed by eharry
Summary: Fix WAL log data corruption when using DBOptions.manual_wal_flush(true) and WriteOptions.sync(true) together (https://github.com/facebook/rocksdb/issues/8723).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8746
Reviewed By: ajkr
Differential Revision: D30758468
Pulled By: riversand963
fbshipit-source-id: 07c20899d5f2447dc77861b4845efc68a59aa4e8
-
Committed by Peter Dillinger
Summary: These tests would frequently fail to find SST files due to a race condition in running ldb (read-only) on an open DB, which might do automatic compaction. But only sometimes would that failure translate into a test failure, because the implementation of ldb file_checksum_dump would swallow many errors. Now:
* The DB is closed while running ldb, to avoid the unnecessary race condition.
* More failures in `ldb file_checksum_dump` are detected and reported/propagated.
* --hex is used so that random binary data is not printed to the console.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8898
Test Plan: ./ldb_cmd_test --gtest_filter=*Checksum* --gtest_repeat=100
Reviewed By: zhichao-cao
Differential Revision: D30848738
Pulled By: pdillinger
fbshipit-source-id: 20290b517eeceba99bb538bb5a17088f7e878405
-
Committed by Peter Dillinger
Summary: Facebook infrastructure doesn't like continuously skipping tests, so this fixes a permanently disabled parameterization to BYPASS instead of SKIP. (Internal ref: T100525285)
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8910
Test Plan: manual
Reviewed By: anand1976
Differential Revision: D30905169
Pulled By: pdillinger
fbshipit-source-id: e23d63d2aa800e54676269fad3a093cd3f9f222d
-
Committed by Levi Tamasi
Summary: The patch adjusts the definition of BlobDB's DB properties a bit by switching from `GetTotalBlobBytes` to `GetBlobFileSize`. The difference is that the value returned by `GetBlobFileSize` includes the blob file header and footer as well, and thus matches the on-disk size of blob files. In addition, the patch removes the `Version` number from the `blob_stats` property and updates/extends the unit tests a little.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8902
Test Plan: `make check`
Reviewed By: riversand963
Differential Revision: D30859542
Pulled By: ltamasi
fbshipit-source-id: e3426d2d567bd1bd8c8636abdafaafa0743c854c
-
Committed by Romain Péchayre
Summary: Hi. Hope this helps :)
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8906
Reviewed By: jay-zhuang
Differential Revision: D30890111
Pulled By: zhichao-cao
fbshipit-source-id: 45a4119158dc38cb4220b1d6d571bb1ca9902ffc
-
- 13 Sep 2021, 2 commits
-
-
Committed by mrambacher
Summary: This allows the wrapper classes to own the wrapped object and eliminates confusion as to ownership. Previously, many classes implemented their own ownership solutions. Fixes https://github.com/facebook/rocksdb/issues/8606
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8618
Reviewed By: pdillinger
Differential Revision: D30136064
Pulled By: mrambacher
fbshipit-source-id: d0bf471df8818dbb1770a86335fe98f761cca193
-
Committed by Yanqin Jin
Summary: In the past, we unnecessarily required all keys in the same write batch to be from column families whose timestamp formats are the same, for simplicity. Specifically, we could not use the same write batch to write to two column families, one of which enables timestamps while the other disables them. The limitation was due to the member `timestamp_size_` that used to exist in each `WriteBatch` object: we passed a timestamp_size to the constructor of `WriteBatch`, so that users could simply use the old `WriteBatch::Put()`, `WriteBatch::Delete()`, etc. APIs for writes, while the internal implementation of `WriteBatch` took care of memory allocation for timestamps. The above is not necessary. On the one hand, users can set up a memory buffer to store the user key, contiguously append the timestamp to it, and pass this buffer to the `WriteBatch::Put(Slice&)` API. On the other hand, users can set up a SliceParts object, which is an array of Slices, with the last Slice pointing to the memory buffer storing the timestamp, and pass the SliceParts object to the `WriteBatch::Put(SliceParts&)` API.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8725
Test Plan: make check
Reviewed By: ltamasi
Differential Revision: D30654499
Pulled By: riversand963
fbshipit-source-id: 9d848c77ad3c9dd629aa5fc4e2bc16fb0687b4a2
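The contiguous-buffer approach this entry describes can be sketched as follows. The fixed-width little-endian encoding is an assumption for illustration only, since the actual timestamp format is up to the comparator; `std::string` stands in for a RocksDB `Slice`'s backing buffer.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>

// Builds a single buffer holding user_key immediately followed by an
// encoded timestamp, suitable for passing as the whole key to a
// WriteBatch::Put(Slice&)-style API. The little-endian fixed width here is
// an illustrative choice, not RocksDB's mandated encoding.
std::string MakeTimestampedKey(const std::string& user_key, uint64_t ts,
                               size_t ts_size) {
  std::string key = user_key;
  for (size_t i = 0; i < ts_size; ++i) {
    key.push_back(static_cast<char>((ts >> (8 * i)) & 0xff));
  }
  return key;
}
```

The alternative the summary mentions is a SliceParts-style array of slices, where the final slice points at a separate timestamp buffer; both approaches avoid WriteBatch needing to know the timestamp size at all.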
-
- 12 Sep 2021, 1 commit
-
-
Committed by Levi Tamasi
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/8905
Reviewed By: zhichao-cao
Differential Revision: D30873416
Pulled By: ltamasi
fbshipit-source-id: 6e55ec14a7fd2e562aa24cd0274e2436369923f5
-
- 11 Sep 2021, 2 commits
-
-
Committed by Peter Dillinger
Summary: It's always annoying to find that a header does not include its own dependencies and only works when included after other includes. This change adds `make check-headers`, which validates that each header can be included at the top of a file. Some headers are excluded, e.g. because of platform or external dependencies. rocksdb_namespace.h had to be reworked slightly to enable checking for failure to include it. (ROCKSDB_NAMESPACE is a valid namespace name.) Fixes mostly involve adding and cleaning up #includes, but for FileTraceWriter, a constructor was out-of-lined to make a forward declaration sufficient. This check is not currently run with `make check` but is added to CircleCI build-linux-unity, since that one is already relatively fast.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8893
Test Plan: existing tests and resolving issues detected by the new check
Reviewed By: mrambacher
Differential Revision: D30823300
Pulled By: pdillinger
fbshipit-source-id: 9fff223944994c83c105e2e6496d24845dc8e572
-
Committed by mrambacher
Summary: Make the Statistics object into a Customizable object. Statistics can now be stored in and created from the Options file.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8637
Reviewed By: zhichao-cao
Differential Revision: D30530550
Pulled By: mrambacher
fbshipit-source-id: 5fc7d01d8431f37b2c205bbbd8342c9f697023bd
-
- 10 Sep 2021, 5 commits
-
-
Committed by Hui Xiao
Summary: Context/Summary: As users requested, a public API `RateLimiter::GetTotalPendingRequests()` is added to expose the total number of pending requests for bytes in the rate limiter, which is the size of the request queue of the given priority (or of all priorities, if IO_TOTAL is specified) at the time the API is called.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8890
Test Plan:
- Passing added new unit tests
- Passing existing unit tests
Reviewed By: ajkr
Differential Revision: D30815500
Pulled By: hx235
fbshipit-source-id: 2dfa990f651c1c47378b6215c751ad76a5824300
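A toy model of what the new API exposes, assuming per-priority FIFO queues; the types here are illustrative stand-ins, not RocksDB's RateLimiter internals.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <map>

// kTotal plays the role of IO_TOTAL: "sum over all priorities".
enum class Priority { kLow, kMid, kHigh, kTotal };

struct PendingQueues {
  std::map<Priority, std::deque<int64_t>> queues;  // queued byte requests

  // Size of one priority's request queue, or the sum over all priorities
  // when the caller asks for the total -- a snapshot at call time.
  size_t GetTotalPendingRequests(Priority pri) const {
    if (pri != Priority::kTotal) {
      auto it = queues.find(pri);
      return it == queues.end() ? 0 : it->second.size();
    }
    size_t total = 0;
    for (const auto& kv : queues) {
      total += kv.second.size();
    }
    return total;
  }
};
```

Because the value is a point-in-time snapshot of a queue that other threads are draining, callers should treat it as advisory rather than exact.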
-
Committed by mrambacher
Summary: ManagedObjects are shared-pointer objects where RocksDB wants to share a single object between multiple configurations. For example, the Cache may be shared between multiple column families/tables, or the Statistics may be shared between multiple databases. ManagedObjects are stored in the ObjectRegistry by type (e.g. Cache) and ID. For a given type/ID name, a single object is stored. APIs were added to get/set/create these objects.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8658
Reviewed By: pdillinger
Differential Revision: D30806273
Pulled By: mrambacher
fbshipit-source-id: 832ac4423b210c4c4b4a456b35897334775d3160
-
Committed by Levi Tamasi
Summary: As a first step toward supporting user-defined timestamps with ingestion, the patch adds timestamp support to `SstFileWriter`; namely, it adds new versions of the `Put` and `Delete` APIs that take timestamps. (`Merge` and `DeleteRange` are currently not supported with user-defined timestamps in general, but once those features are implemented, we can handle them in `SstFileWriter` in a similar fashion.) The new APIs validate the size of the timestamp provided by the client. Similarly, calls to the pre-existing timestamp-less APIs are now disallowed when user-defined timestamps are in use according to the comparator.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8899
Test Plan: `make check`
Reviewed By: riversand963
Differential Revision: D30850699
Pulled By: ltamasi
fbshipit-source-id: 779154373618f19b8f0797976bb7286783c57b67
-
Committed by Hui Xiao
Add comment for new_memory_used parameter in CacheReservationManager::UpdateCacheReservation (#8895)
Summary: Context/Summary: this PR clarifies what the parameter new_memory_used means in CacheReservationManager::UpdateCacheReservation.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8895
Test Plan:
- Passing existing tests
- make format
Reviewed By: jay-zhuang
Differential Revision: D30844814
Pulled By: hx235
fbshipit-source-id: 3177f7abf5668ea9e73818ceaa355566f03acabc
-
Committed by Hui Xiao
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/8896
Reviewed By: ajkr
Differential Revision: D30846120
Pulled By: hx235
fbshipit-source-id: 9224ebce5437d63b0fb8af9171c6041a9ea5d90f
-
- 09 Sep 2021, 4 commits
-
-
Committed by anand76
Summary: Support custom Envs in these tests. Some custom Envs do not support reopening a file for write, in either normal mode or Random RW mode. Added some additional checks in external_sst_file_basic_test to accommodate those Envs.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8888
Reviewed By: riversand963
Differential Revision: D30824481
Pulled By: anand1976
fbshipit-source-id: c3ac7a628e6df29e94f42e370e679934a4f77eac
-
Committed by hx235
Summary: Context: While all the non-trivial write operations in BackupEngine go through the RateLimiter, reads currently do not. In general, this is not a huge issue because (especially since some I/O efficiency fixes) reads in BackupEngine are mostly limited by corresponding writes, for both backup and restore. But in principle we should charge the RateLimiter for reads as well.
- Charged read operations in `BackupEngineImpl::CopyOrCreateFile`, `BackupEngineImpl::ReadFileAndComputeChecksum`, `BackupEngineImpl::BackupMeta::LoadFromFile`, and `BackupEngineImpl::GetFileDbIdentities`
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8722
Test Plan:
- Passed existing tests
- Passed added unit tests
Reviewed By: pdillinger
Differential Revision: D30610464
Pulled By: hx235
fbshipit-source-id: 9b08c9387159a5385c8d390d6666377a0d0117e5
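The read-charging idea can be sketched as follows, with a toy limiter standing in for `RateLimiter::Request()`: each read is charged against the rate budget in bounded chunks before the bytes are actually read, so a single large file copy cannot bypass the limiter.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Stand-in for RocksDB's RateLimiter; Request() here just records the
// charge instead of blocking until tokens are available.
struct SimpleLimiter {
  int64_t total_through = 0;
  void Request(int64_t bytes) { total_through += bytes; }
};

// Reads `size` bytes, charging the limiter before each chunk. Capping the
// chunk size keeps individual charges small so other I/O can interleave.
int64_t RateLimitedRead(SimpleLimiter& limiter, int64_t size,
                        int64_t max_chunk) {
  int64_t read = 0;
  while (read < size) {
    int64_t chunk = std::min(max_chunk, size - read);
    limiter.Request(chunk);  // charge against the rate budget
    read += chunk;           // the actual file read would happen here
  }
  return read;
}
```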
-
Committed by Andrew Kryczka
Summary: A "LATEST_BACKUP" file was left in the backup directory by the "BackupEngineTest.NoDeleteWithReadOnly" test, affecting future test runs. In particular, it caused "BackupEngineTest.IOStats" to fail since it relies on the backup directory containing only data written by its `BackupEngine`. The fix is to promote "LATEST_BACKUP" to an explicitly managed file so it is deleted in the `BackupEngineTest` constructor if it exists.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8887
Test Plan: the below command used to fail. Now it passes:
```
$ TEST_TMPDIR=/dev/shm ./backupable_db_test --gtest_filter='BackupEngineTest.NoDeleteWithReadOnly:BackupEngineTest.IOStats'
```
Reviewed By: pdillinger
Differential Revision: D30812336
Pulled By: ajkr
fbshipit-source-id: 32dfbe1368ebdab872e610764bfea5daf9a2af09
-
Committed by Hui Xiao
Summary: Context: Some data blocks are temporarily buffered in memory in BlockBasedTableBuilder for building the compression dictionary used in data block compression. Currently this memory usage is not counted toward our global memory usage that utilizes block cache capacity. To improve that, this PR charges that memory usage to the block cache to achieve better memory tracking and limiting.
- Reserve memory in the block cache for buffered data blocks that are used to build a compression dictionary
- Release all the memory associated with buffering the data blocks mentioned above in EnterUnbuffered(), which is called when (a) the buffer limit is exceeded after buffering, OR (b) the block cache becomes full after reservation, OR (c) BlockBasedTableBuilder calls Finish()
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8428
Test Plan:
- Passing existing unit tests
- Passing new unit tests
Reviewed By: ajkr
Differential Revision: D30755305
Pulled By: hx235
fbshipit-source-id: 6e66665020b775154a94c4c5e0f2adaeaff13981
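A toy model of the reservation scheme described here, assuming a simple byte budget in place of dummy block-cache entries and `CacheReservationManager`: buffered bytes are charged against a fixed capacity, and a failed reservation is the "cache full" signal that makes the builder call EnterUnbuffered().

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for charging buffered data blocks against block cache capacity.
class CacheBudget {
 public:
  explicit CacheBudget(size_t capacity) : capacity_(capacity) {}

  // Returns false when the charge would exceed capacity (cache "full"),
  // signalling the table builder to stop buffering.
  bool Reserve(size_t bytes) {
    if (used_ + bytes > capacity_) {
      return false;
    }
    used_ += bytes;
    return true;
  }

  // Models EnterUnbuffered(): all buffered-block reservations are released
  // once the compression dictionary has been built.
  void ReleaseAll() { used_ = 0; }

  size_t used() const { return used_; }

 private:
  size_t capacity_;
  size_t used_ = 0;
};
```

The real mechanism inserts dummy entries into the shared block cache so the buffered bytes compete with cached blocks for the same capacity; this sketch only captures the accounting, not the cache interaction.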
-