- 15 Aug 2019 (2 commits)
Committed by Levi Tamasi
Summary: Some accesses to blob_files_ and open_ttl_files_ in BlobDBImpl, as well as to expiration_range_ in BlobFile, were not properly synchronized. The patch fixes this and also makes sure the invariant that obsolete_files_ is a subset of blob_files_ holds even when an attempt to delete an obsolete blob file fails. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5698 Test Plan:
```
COMPILE_WITH_TSAN=1 make blob_db_test
gtest-parallel --repeat=1000 ./blob_db_test --gtest_filter="*ShutdownWait*"
```
The test fails with TSAN errors ~20 times out of 1000 without the patch but completes successfully 1000 out of 1000 times with the fix. Differential Revision: D16793235 Pulled By: ltamasi fbshipit-source-id: 8034b987598d4fdc9f15098d4589cc49cde484e9
Committed by Manuel Ung
Summary: In MyRocks, there are cases where we write while iterating through keys. This currently breaks WBWIIterator, because if a write batch flushes during iteration, the delta iterator would point to invalid memory. For now, fix by disallowing flush if there are active iterators. In the future, we will loop through all the iterators on a transaction, and refresh the iterators when a write batch is flushed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5699 Differential Revision: D16794157 Pulled By: lth fbshipit-source-id: 5d5bf70688bd68fe58e8a766475ae88fd1be3190
- 14 Aug 2019 (2 commits)
Committed by Zhongyi Xie
Summary: Fix the following clang analyze failures:
```
In file included from utilities/transactions/transaction_test.cc:8:
./utilities/transactions/transaction_test.h:174:14: warning: Attempt to delete released memory
      delete root_db;
             ^
```
The destructor of StackableDB already deletes the root db and there is no need to delete the db separately. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5700 Test Plan:
```
USE_CLANG=1 TEST_TMPDIR=/dev/shm/rocksdb OPT=-g make -j24 analyze
```
Differential Revision: D16800579 Pulled By: maysamyabandeh fbshipit-source-id: 64c2d70f23e07e6a15242add97c744902ea33be5
Committed by Manuel Ung
Summary: Currently, if a write is done without a snapshot, then `largest_validated_seq_` is set to `kMaxSequenceNumber`. This is too aggressive, because an iterator with a snapshot created after this write should be valid. Set `largest_validated_seq_` to `GetLastPublishedSequence` instead. The variable means that no keys in the currently tracked key set have been changed by other transactions since `largest_validated_seq_`. Also, do some extra cleanup in Clear() for safety. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5697 Differential Revision: D16788613 Pulled By: lth fbshipit-source-id: f2aa40b8b12e0c0cf9e38c940fecc8f1cc0d2385
- 13 Aug 2019 (3 commits)
Committed by Yi Zhang
Summary: When updating the compiler version for MyRocks, I'm seeing this error with rocksdb:
```
/home/yzha/mysql/mysql-fork2/rocksdb/table/get_context.h:91:3: error: explicitly defaulted default constructor is implicitly deleted [-Werror,-Wdefaulted-function-deleted]
  GetContext() = default;
  ^
/home/yzha/mysql/mysql-fork2/rocksdb/table/get_context.h:166:18: note: default constructor of 'GetContext' is implicitly deleted because field 'tracing_get_id_' of const-qualified type 'const uint64_t' (aka 'const unsigned long') would not be initialized
  const uint64_t tracing_get_id_;
                 ^
```
The error itself is rather self-explanatory and makes sense. Given that no one seems to be using the default ctor (they shouldn't, anyway), I'm deleting it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5685 Differential Revision: D16747712 Pulled By: yizhang82 fbshipit-source-id: 95c0acb958a1ed41154c0047d2e6fce7644de53f
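The language rule behind this diagnostic can be reproduced with a minimal standalone example (illustrative types, not RocksDB code): a const non-static data member without an initializer makes the implicitly defined default constructor deleted, because the member could never be assigned later.

```cpp
#include <cassert>
#include <type_traits>

// A const member with no initializer: the implicit default constructor
// is deleted, since id_ would be left permanently uninitialized.
struct WithBareConst {
  const int id_;
};

// Giving the const member an in-class initializer restores default
// constructibility.
struct WithInitializedConst {
  const int id_ = 0;
};
```

In the PR's case, since `tracing_get_id_` intentionally has no initializer, deleting the unused default constructor (rather than adding an initializer) is the smaller change.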
Committed by Maysam Yabandeh
Summary: With changes made in https://github.com/facebook/rocksdb/pull/5664 we meant to pass snap_released parameter of ::IsInSnapshot from the read callbacks. Although the variable was defined, passing it to the callback in WritePreparedTxnReadCallback was missing, which is fixed in this PR. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5691 Differential Revision: D16767310 Pulled By: maysamyabandeh fbshipit-source-id: 3bf53f5964a2756a66ceef7c8f6b3ac75f102f48
Committed by Manuel Ung
Summary: This changes transaction_test to set `txn_db_options.default_write_batch_flush_threshold = 1` in order to give better test coverage for WriteUnprepared. As part of the change, some tests had to be updated. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5658 Differential Revision: D16740468 Pulled By: lth fbshipit-source-id: 3821eec20baf13917c8c1fab444332f75a509de9
- 11 Aug 2019 (1 commit)
Committed by Zhongyi Xie
Summary: PR https://github.com/facebook/rocksdb/pull/5676 added some test coverage for `TEST_ENV_URI`, which unfortunately isn't supported in lite mode, causing some test failures for rocksdb lite. For example:
```
db/db_test_util.cc: In constructor ‘rocksdb::DBTestBase::DBTestBase(std::__cxx11::string)’:
db/db_test_util.cc:57:16: error: ‘ObjectRegistry’ has not been declared
   Status s = ObjectRegistry::NewInstance()->NewSharedObject(test_env_uri,
                ^
```
This PR fixes these errors by excluding the new code from test functions for lite mode. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5686 Differential Revision: D16749000 Pulled By: miasantreble fbshipit-source-id: e8b3088c31a78b3dffc5fe7814261909d2c3e369
- 10 Aug 2019 (3 commits)
Committed by Maysam Yabandeh
Summary: SmallestUnCommittedSeq reads two data structures, prepared_txns_ and delayed_prepared_. These two are updated in CheckPreparedAgainstMax when max_evicted_seq_ advances some prepared entries. To avoid the cost of acquiring a mutex, the read from them in SmallestUnCommittedSeq is not atomic. This creates a potential race condition. The fix is to read the two data structures in the reverse order of their update: CheckPreparedAgainstMax copies the prepared entry to delayed_prepared_ before removing it from prepared_txns_, and SmallestUnCommittedSeq looks into prepared_txns_ before reading delayed_prepared_. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5683 Differential Revision: D16744699 Pulled By: maysamyabandeh fbshipit-source-id: b1bdb134018beb0b9de58827f512662bea35cad0
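The ordering invariant can be sketched as follows (hypothetical types, not the actual WritePrepared code; the sketch serializes everything with one mutex purely for brevity, whereas the real read path is unsynchronized, which is exactly why the order of operations matters): the writer publishes to the second set before removing from the first, and the reader probes the sets in the opposite order, so an in-flight entry is visible in at least one of them.

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>
#include <set>

class PreparedTracker {
 public:
  void Prepare(uint64_t seq) {
    std::lock_guard<std::mutex> l(mu_);
    prepared_.insert(seq);
  }
  // Writer side: copy into delayed_ FIRST, then erase from prepared_.
  void MoveToDelayed(uint64_t seq) {
    std::lock_guard<std::mutex> l(mu_);
    delayed_.insert(seq);
    prepared_.erase(seq);
  }
  // Reader side: probe in the REVERSE order of the update, so the
  // entry cannot be missed mid-move.
  bool IsTracked(uint64_t seq) const {
    std::lock_guard<std::mutex> l(mu_);
    if (prepared_.count(seq) != 0) return true;  // prepared_ first
    return delayed_.count(seq) != 0;             // then delayed_
  }

 private:
  mutable std::mutex mu_;
  std::set<uint64_t> prepared_;
  std::set<uint64_t> delayed_;
};
```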
Committed by Yanqin Jin
Summary: Most existing RocksDB unit tests run on `Env::Default()`. It will be useful to port the unit tests to non-default environments, e.g. `HdfsEnv`, etc. This pull request is one step towards this goal. If RocksDB unit tests are built with a static library exposing a function `RegisterCustomObjects()`, then it is possible to implement custom object registrar logic in the library. RocksDB unit tests can call `RegisterCustomObjects()` at the beginning. By default, `ROCKSDB_UNITTESTS_WITH_CUSTOM_OBJECTS_FROM_STATIC_LIBS` is not defined, thus this PR has no impact on existing RocksDB because `RegisterCustomObjects()` is a noop. Test plan (on devserver):
```
$make clean && COMPILE_WITH_ASAN=1 make -j32 all
$make check
```
All unit tests must pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5676 Differential Revision: D16679157 Pulled By: riversand963 fbshipit-source-id: aca571af3fd0525277cdc674248d0fe06e060f9d
Committed by haoyuhuang
Summary: This PR adds support in block cache trace analyzer to read from human readable trace file. This is needed when a user does not have access to the binary trace file. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5679 Test Plan: USE_CLANG=1 make check -j32 Differential Revision: D16697239 Pulled By: HaoyuHuang fbshipit-source-id: f2e29d7995816c389b41458f234ec8e184a924db
- 08 Aug 2019 (2 commits)
Committed by Zhongyi Xie
Summary: This PR fixes two test failures: 1. clang check:
```
third-party/folly/folly/detail/Futex.cpp:52:12: error: implicit conversion loses integer precision: 'long' to 'int' [-Werror,-Wshorten-64-to-32]
  int rv = syscall(
      ~~   ^~~~~~~~
third-party/folly/folly/detail/Futex.cpp:114:12: error: implicit conversion loses integer precision: 'long' to 'int' [-Werror,-Wshorten-64-to-32]
  int rv = syscall(
      ~~   ^~~~~~~~
```
2. lite:
```
./third-party/folly/folly/synchronization/DistributedMutex-inl.h:1337:7: error: exception handling disabled, use -fexceptions to enable
  } catch (...) {
    ^
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5680 Differential Revision: D16704042 Pulled By: miasantreble fbshipit-source-id: a53cb06128365d9e864f07476b0af8fc27140f07
Committed by Aaryaman Sagar
Summary: This ports `folly::DistributedMutex` into RocksDB. The PR includes everything else needed to compile and use DistributedMutex as a component within folly. Most files are unchanged except for some portability stuff and includes. For now, I've put this under `rocksdb/third-party`, but if there is a better folder to put this under, let me know. I also am not sure how or where to put unit tests for third-party stuff like this. It seems like gtest is included already, but I need to link with it from another third-party folder. This also includes some other common components from folly:
- folly/Optional
- folly/ScopeGuard (in particular `SCOPE_EXIT`)
- folly/synchronization/ParkingLot (a portable futex-like interface)
- folly/synchronization/AtomicNotification (the standard C++ interface for futexes)
- folly/Indestructible (for singletons that don't get destroyed without allocations)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5642 Differential Revision: D16544439 fbshipit-source-id: 179b98b5dcddc3075926d31a30f92fd064245731
- 07 Aug 2019 (3 commits)
Committed by haoyuhuang
Summary: This PR adds four more eviction policies:
- OPT [1]
- Hyperbolic caching [2]
- ARC [3]
- GreedyDualSize [4]

[1] L. A. Belady. 1966. A Study of Replacement Algorithms for a Virtual-storage Computer. IBM Syst. J. 5, 2 (June 1966), 78-101. DOI=http://dx.doi.org/10.1147/sj.52.0078
[2] Aaron Blankstein, Siddhartha Sen, and Michael J. Freedman. 2017. Hyperbolic caching: flexible caching for web applications. In Proceedings of the 2017 USENIX Conference on Usenix Annual Technical Conference (USENIX ATC '17). USENIX Association, Berkeley, CA, USA, 499-511.
[3] Nimrod Megiddo and Dharmendra S. Modha. 2003. ARC: A Self-Tuning, Low Overhead Replacement Cache. In Proceedings of the 2nd USENIX Conference on File and Storage Technologies (FAST '03). USENIX Association, Berkeley, CA, USA, 115-130.
[4] N. Young. The k-server dual and loose competitiveness for paging. Algorithmica, June 1994, vol. 11, (no. 6): 525-41. Rewritten version of "On-line caching as cache size varies", in The 2nd Annual ACM-SIAM Symposium on Discrete Algorithms, 241-250, 1991.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5644 Differential Revision: D16548817 Pulled By: HaoyuHuang fbshipit-source-id: 838f76db9179f07911abaab46c97e1c929cfcd63
Committed by Vijay Nadimpalli
Summary: This is a new API added to db.h to allow fetching all merge operands associated with a key. The main motivation for this API is to support use cases where doing a full online merge is not necessary, as it is performance sensitive. Example use cases:
1. Update a subset of columns and read a subset of columns - Imagine a SQL table where a row is encoded as a K/V pair (as is done in MyRocks). If there are many columns and users only updated one of them, we can use the merge operator to reduce write amplification. While users only read one or two columns in the read query, this feature can avoid a full merge of the whole row and save some CPU.
2. Updating very few attributes in a value which is a JSON-like document - Updating one attribute can be done efficiently using the merge operator, while reading back one attribute can be done more efficiently if we don't need to do a full merge.

API:
```
Status GetMergeOperands(
    const ReadOptions& options, ColumnFamilyHandle* column_family,
    const Slice& key, PinnableSlice* merge_operands,
    GetMergeOperandsOptions* get_merge_operands_options,
    int* number_of_operands)
```
Example usage:
```
int size = 100;
int number_of_operands = 0;
std::vector<PinnableSlice> values(size);
GetMergeOperandsOptions merge_operands_info;
db_->GetMergeOperands(ReadOptions(), db_->DefaultColumnFamily(), "k1",
                      values.data(), &merge_operands_info,
                      &number_of_operands);
```
Description: Returns all the merge operands corresponding to the key. If the number of merge operands in the DB is greater than merge_operands_options.expected_max_number_of_operands, no merge operands are returned and the status is Incomplete. Merge operands returned are in the order of insertion. merge_operands points to an array of at least merge_operands_options.expected_max_number_of_operands entries, and the caller is responsible for allocating it. If the status returned is Incomplete, then number_of_operands will contain the total number of merge operands found in the DB for the key. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5604 Test Plan: Added a unit test and a perf test in db_bench that can be run using the command:
```
./db_bench -benchmarks=getmergeoperands --merge_operator=sortlist
```
Differential Revision: D16657366 Pulled By: vjnadimpalli fbshipit-source-id: 0faadd752351745224ee12d4ae9ef3cb529951bf
Committed by Yun Tang
Summary: The actual value of the default write buffer size within `rocksdb/include/rocksdb/options.h` is 64 MB; we should correct this value in the Java doc. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5670 Differential Revision: D16668815 Pulled By: maysamyabandeh fbshipit-source-id: cc3a981c9f1c2cd4a8392b0ed5f1fd0a2d729afb
- 06 Aug 2019 (5 commits)
Committed by Kefu Chai
Summary:
- cmake: use the builtin FindBzip2.cmake from CMake
- cmake: require CMake v3.5.1
- cmake: add imported target for 3rd party libraries
- cmake: extract ReadVersion.cmake out and refactor it

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5662 Differential Revision: D16660974 Pulled By: maysamyabandeh fbshipit-source-id: 681594910e74253251fe14ad0befc41a4d0f4fd4
Committed by haoyuhuang
Summary: This PR updated the python script to plot graphs for stats output from block cache analyzer. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5673 Test Plan: Manually run the script to generate graphs. Differential Revision: D16657145 Pulled By: HaoyuHuang fbshipit-source-id: fd510b5fd4307835f9a986fac545734dbe003d28
Committed by Yanqin Jin
Summary: If a test is one of the parallel tests, then it should also be one of the 'tests'. Otherwise, `make all` won't build the binaries. For example:
```
$COMPILE_WITH_ASAN=1 make -j32 all
```
Then if you do
```
$make check
```
the second command will invoke the compilation and building of db_bloom_test and file_reader_writer_test **without** `COMPILE_WITH_ASAN=1`, causing the command to fail. Test plan (on devserver):
```
$make -j32 all
```
Verify all binaries are built so that `make check` won't have to compile anything. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5672 Differential Revision: D16655834 Pulled By: riversand963 fbshipit-source-id: 050131412b5313496f85ae3deeeeb8d28af75746
Committed by Maysam Yabandeh
Summary: If read_options.snapshot is not set, ::Get will take the last sequence number after taking a super-version and use that as the sequence number. Theoretically max_evicted_seq_ could advance past this sequence number. This could lead ::IsInSnapshot, which will be invoked by the ReadCallback, to notice the absence of the snapshot. In this case, the ReadCallback should have passed a non-null snap_released so that it could be set by ::IsInSnapshot. The patch does that, and adds a unit test to verify it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5664 Differential Revision: D16614033 Pulled By: maysamyabandeh fbshipit-source-id: 06fb3fd4aacd75806ed1a1acec7961f5d02486f2
Committed by Maysam Yabandeh
Summary: It sometimes times out when run under valgrind taking around 20m. The patch skips the test under Valgrind. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5671 Differential Revision: D16652382 Pulled By: maysamyabandeh fbshipit-source-id: 0f6f4f76d37337d56226b689e01b14523dd07aae
- 03 Aug 2019 (1 commit)
Committed by Yanqin Jin
Summary: Users may desire to specify extra dependencies via buck. This PR allows users to pass additional dependencies as a JSON object so that the buckifier script can generate a TARGETS file with the desired extra dependencies. Test plan (on dev server):
```
$python buckifier/buckify_rocksdb.py '{"fake": {"extra_deps": [":test_dep", "//fakes/module:mock1"], "extra_compiler_flags": ["-DROCKSDB_LITE", "-Os"]}}'
Generating TARGETS
Extra dependencies:
{'': {'extra_compiler_flags': [], 'extra_deps': []},
 'test_dep1': {'extra_compiler_flags': ['-O2', '-DROCKSDB_LITE'], 'extra_deps': [':fake', '//dep1/mock']}}
Generated TARGETS
Summary:
- 5 libs
- 0 binarys
- 296 tests
```
Verify the TARGETS file. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5648 Differential Revision: D16565043 Pulled By: riversand963 fbshipit-source-id: a6ef02274174fcf159692d7b846e828454d01e89
- 02 Aug 2019 (1 commit)
Committed by Zhongyi Xie
Summary: Currently in `DBImpl::PurgeObsoleteFiles`, the list of candidate files is created through a combination of calling LogFileName using `log_delete_files` and `full_scan_candidate_files`. In full_scan_candidate_files, the filenames look like this: {file_name = "074715.log", file_path = "/txlogs/3306"}, but LogFileName produces filenames that prepend a slash: {file_name = "/074715.log", file_path = "/txlogs/3306"}. This confuses the dedup step here: https://github.com/facebook/rocksdb/blob/bb4178066dc4f18b9b7f1d371e641db027b3edbe/db/db_impl/db_impl_files.cc#L339-L345 Because duplicates still exist, DeleteFile is called on the same file twice, and hits an error on the second try. Error message: Failed to mark /txlogs/3302/764418.log as trash. The root cause is the use of `kDumbDbName` when generating file names; it creates file names like /074715.log. This PR removes the use of `kDumbDbName` and creates paths without a leading '/' when dbname can be ignored. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5603 Test Plan: make check Differential Revision: D16413203 Pulled By: miasantreble fbshipit-source-id: 6ba8288382c55f7d5e3892d722fc94b57d2e4491
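The failure mode can be sketched in a few lines (hypothetical helpers, not the actual LogFileName implementation): joining a dummy empty db name with a separator produces a leading '/', so the same file appears in the candidate list under two different strings and string-based dedup keeps both.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Buggy variant: always prepends "dbname/", so an empty (dummy) db
// name yields "/074715.log", which does not match the "074715.log"
// produced by a plain directory scan.
std::string BuggyLogFileName(const std::string& dbname,
                             unsigned long long number) {
  char buf[64];
  std::snprintf(buf, sizeof(buf), "%06llu.log", number);
  return dbname + "/" + buf;
}

// Fixed variant: skip the separator entirely when the db name is to
// be ignored, so both code paths produce identical strings.
std::string FixedLogFileName(const std::string& dbname,
                             unsigned long long number) {
  char buf[64];
  std::snprintf(buf, sizeof(buf), "%06llu.log", number);
  return dbname.empty() ? std::string(buf) : dbname + "/" + buf;
}
```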
- 01 Aug 2019 (3 commits)
Committed by Levi Tamasi
Summary: MergeOperatorPinningTest.Randomized frequently times out under TSAN because it tests ~40 option configurations sequentially in a loop. The patch parallelizes the tests of the various configurations to make the test complete faster. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5659 Test Plan: Tested using buck test mode/dev-tsan ... Differential Revision: D16587518 Pulled By: ltamasi fbshipit-source-id: 65bd25c0ad9a23587fed5592e69c1a0097fa27f6
Committed by Manuel Ung
Summary: Add savepoint support when the current transaction has flushed unprepared batches. Rolling back to a savepoint is similar to rolling back a transaction. It requires the set of keys that have changed since the savepoint, re-reading the keys at the snapshot at that savepoint, and then restoring the old keys by writing out another unprepared batch. For this strategy to work, though, we must be capable of reading keys at a savepoint. This does not work if keys were written out using the same sequence number before and after a savepoint. Therefore, when we flush out unprepared batches, we must split the batch by savepoint if any savepoints exist. E.g. if we have the following:
```
Put(A)
Put(B)
Put(C)
SetSavePoint()
Put(D)
Put(E)
SetSavePoint()
Put(F)
```
Then we will write out 3 separate unprepared batches:
```
Put(A) 1
Put(B) 1
Put(C) 1

Put(D) 2
Put(E) 2

Put(F) 3
```
This is so that when we roll back to e.g. the first savepoint, we can just read keys at snapshot_seq = 1. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5627 Differential Revision: D16584130 Pulled By: lth fbshipit-source-id: 6d100dd548fb20c4b76661bd0f8a2647e64477fa
Committed by Manuel Ung
Summary: In DeferSnapshotSavePointTest, writes were failing with snapshot validation error because the key with the latest sequence number was an unprepared key from the current transaction. Fix this by passing down the correct read callback. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5657 Differential Revision: D16582466 Pulled By: lth fbshipit-source-id: 11645dac0e7c1374d917ef5fdf757d13c1d1108d
- 31 Jul 2019 (5 commits)
Committed by Eli Pozniansky
Summary: In some cases, we don't have to get a really accurate number. Something like 10% off is fine; we can create a new option for that use case. In this case, we can calculate the size of fully covered files first, and avoid estimation inside SST files if the full files already got us a huge number. For example, if we already covered 100GB of data, we should be able to skip partial dives into 10 SST files of 30MB. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5609 Differential Revision: D16433481 Pulled By: elipoz fbshipit-source-id: 5830b31e1c656d0fd3a00d7fd2678ddc8f6e601b
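The idea can be sketched as follows (illustrative names and a simplified placeholder fallback, not the RocksDB API): sum the exact sizes of fully contained files first; if the partially overlapping files cannot move the total by more than the allowed error margin, take a crude guess (e.g. half their size) instead of diving into their index blocks.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct FileInfo {
  uint64_t size;
  bool fully_contained;  // file range entirely inside the query range
};

// error_margin is a fraction, e.g. 0.1 for "10% off is fine".
uint64_t ApproximateRangeSize(const std::vector<FileInfo>& files,
                              double error_margin) {
  uint64_t full_total = 0, partial_total = 0;
  for (const auto& f : files) {
    if (f.fully_contained) full_total += f.size;
    else partial_total += f.size;
  }
  // Even counting every partial file whole stays within the margin:
  // skip the expensive per-file estimation and guess half their size.
  if (static_cast<double>(partial_total) <= error_margin * full_total) {
    return full_total + partial_total / 2;
  }
  // Otherwise an accurate (expensive) estimate would be needed; the
  // pessimistic upper bound stands in for it here.
  return full_total + partial_total;
}
```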
Committed by Levi Tamasi
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5653 Differential Revision: D16573445 Pulled By: ltamasi fbshipit-source-id: 19c639044fcfd43b5d5c627c8def33ff2dbb2af8
Committed by Fosco Marotto
Summary: The master branch had been left at 6.2 while the history of 6.3 and beyond was merged. Updated this to correct. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5652 Differential Revision: D16570498 Pulled By: gfosco fbshipit-source-id: 79f62ec570539a3e3d7d7c84a6cf7b722395fafe
Committed by Yanqin Jin
Summary: Update buckifier templates in the scripts. Test plan (on devserver):
```
$python buckifier/buckify_rocksdb.py
```
Then
```
$git diff
```
Verify that the generated TARGETS file is the same (except for indentation). Pull Request resolved: https://github.com/facebook/rocksdb/pull/5647 Differential Revision: D16555647 Pulled By: riversand963 fbshipit-source-id: 32574a4d0e820858eab2391304dd731141719bcd
Committed by Yi Wu
Summary: Fix -Wsign-compare warnings for gcc9. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5651 Test Plan: Tested with ubuntu19.10+gcc9 Differential Revision: D16567428 fbshipit-source-id: 730b2704d42ba0c4e4ea946a3199bbb34be4c25c
- 30 Jul 2019 (2 commits)
Committed by Manuel Ung
Summary: The `TransactionTest.MultiGetBatchedTest` tests were failing with unprepared batches because we were not using the correct callbacks. Override MultiGet to pass down the correct ReadCallback. A similar problem is also fixed in WritePrepared. This PR also fixes an issue similar to (https://github.com/facebook/rocksdb/pull/5147), but for MultiGet instead of Get. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5634 Differential Revision: D16552674 Pulled By: lth fbshipit-source-id: 736eaf8e919c6b13d5f5655b1c0d36b57ad04804
Committed by haoyuhuang
Summary: This PR optimizes the hybrid row-block cache simulator. If a Get request hits the cache, we treat all its future accesses as hits. Consider a Get request (no snapshot) that accesses multiple files, e.g., file1, file2, file3. We construct the row key as "fdnumber_key_0". Before this PR, if it hits the cache when searching the key in file1, we continue to process its accesses in file2 and file3, which is unnecessary. With this PR, if "file1_key_0" is in the cache, we treat all future accesses of this Get request as hits. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5616 Differential Revision: D16453187 Pulled By: HaoyuHuang fbshipit-source-id: 56f3169cc322322305baaf5543226a0824fae19f
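A simplified sketch of the row-cache probe (hypothetical structure, not the actual block_cache_trace_analyzer code; whether a miss admits the row entry is an assumption here): the row key is "<fd>_<key>_0", and the loop returns as soon as any per-file row entry hits, so the remaining files of the same Get are not processed.

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <string>
#include <vector>

struct RowCacheSim {
  std::set<std::string> cache;

  // Returns true if the Get request is a row-cache hit. On a miss,
  // admits the per-file row entries so later Gets can hit.
  bool AccessGet(const std::string& user_key,
                 const std::vector<uint64_t>& file_numbers) {
    for (uint64_t fd : file_numbers) {
      std::string row_key = std::to_string(fd) + "_" + user_key + "_0";
      if (cache.count(row_key) != 0) {
        // Hit in one file: treat the whole Get as a hit and skip the
        // remaining files.
        return true;
      }
      cache.insert(row_key);
    }
    return false;
  }
};
```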
- 27 Jul 2019 (7 commits)
Committed by Manuel Ung
Summary: The ssize_t type was introduced in https://github.com/facebook/rocksdb/pull/5633, but it seems like it's a POSIX specific type. I just need a signed type to represent number of bytes, so use int64_t instead. It seems like we have a typedef from SSIZE_T for Windows, but it doesn't seem like we ever include "port/port.h" in our public header files. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5638 Differential Revision: D16526269 Pulled By: lth fbshipit-source-id: 8d3a5c41003951b74b29bc5f1d949b2b22da0cee
Committed by Levi Tamasi
Summary: This test frequently times out under TSAN; reducing the number of random iterations to make it complete faster. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5635 Test Plan: buck test mode/dev-tsan internal_repo_rocksdb/repo:compact_on_deletion_collector_test Differential Revision: D16523505 Pulled By: ltamasi fbshipit-source-id: 6a69909bce9d204c891150fcb3d536547b3253d0
Committed by haoyuhuang
Summary: This PR implements cache eviction using reinforcement learning. It includes two implementations:
1. An implementation of Thompson Sampling for the Bernoulli Bandit [1].
2. An implementation of LinUCB with disjoint linear models [2].

The idea is that a cache uses multiple eviction policies, e.g., MRU, LRU, and LFU. The cache learns which eviction policy is the best and uses it upon a cache miss. Thompson Sampling is contextless and does not include any features. LinUCB includes features such as level, block type, caller, and column family id to decide which eviction policy to use.

[1] Daniel J. Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, and Zheng Wen. 2018. A Tutorial on Thompson Sampling. Found. Trends Mach. Learn. 11, 1 (July 2018), 1-96. DOI: https://doi.org/10.1561/2200000070
[2] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web (WWW '10). ACM, New York, NY, USA, 661-670. DOI=http://dx.doi.org/10.1145/1772690.1772758

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5610 Differential Revision: D16435067 Pulled By: HaoyuHuang fbshipit-source-id: 6549239ae14115c01cb1e70548af9e46d8dc21bb
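A minimal Thompson Sampling sketch for the Bernoulli bandit (illustrative code, not the analyzer's implementation): each arm is an eviction policy with a Beta(alpha, beta) posterior over its hit probability; selection draws one sample per arm (a Beta draw built from two Gamma draws) and picks the argmax, and a hit/miss observation updates the chosen arm's posterior.

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

class ThompsonSampler {
 public:
  explicit ThompsonSampler(size_t num_arms)
      : alpha_(num_arms, 1.0), beta_(num_arms, 1.0), rng_(42) {}

  // Draw a Beta(alpha_i, beta_i) sample for each arm; pick the argmax.
  size_t SelectArm() {
    size_t best = 0;
    double best_draw = -1.0;
    for (size_t i = 0; i < alpha_.size(); ++i) {
      double x = std::gamma_distribution<double>(alpha_[i], 1.0)(rng_);
      double y = std::gamma_distribution<double>(beta_[i], 1.0)(rng_);
      double draw = x / (x + y);  // Beta sample via two Gamma samples
      if (draw > best_draw) {
        best_draw = draw;
        best = i;
      }
    }
    return best;
  }

  // Bernoulli reward: a cache hit increments alpha, a miss increments beta.
  void Update(size_t arm, bool hit) {
    if (hit) alpha_[arm] += 1.0;
    else beta_[arm] += 1.0;
  }

 private:
  std::vector<double> alpha_, beta_;  // per-arm Beta posterior
  std::mt19937 rng_;
};
```

With one arm that always rewards and one that never does, the sampler concentrates on the rewarding arm after a short burn-in.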
Committed by Manuel Ung
Summary: Instead of reusing `TransactionOptions::max_write_batch_size` for determining when to flush a write batch for write unprepared, add a new variable called `write_batch_flush_threshold` for this use case instead. Also add `TransactionDBOptions::default_write_batch_flush_threshold` which sets the default value if `TransactionOptions::write_batch_flush_threshold` is unspecified. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5633 Differential Revision: D16520364 Pulled By: lth fbshipit-source-id: d75ae5a2141ce7708982d5069dc3f0b58d250e8c
Committed by Levi Tamasi
Summary: This test frequently times out under TSAN; parallelizing it should fix this issue. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5632 Test Plan: `make check`; `buck test mode/dev-tsan internal_repo_rocksdb/repo:db_bloom_filter_test` Differential Revision: D16519399 Pulled By: ltamasi fbshipit-source-id: 66e05a644d6f79c6d544255ffcf6de195d2d62fe
Committed by Manuel Ung
Summary: Transaction::RollbackToSavePoint undoes the modifications made since the SavePoint beginning, and also unlocks the corresponding keys, which are tracked in the last SavePoint. Currently, ::PopSavePoint simply discards these tracked keys, leaving them locked in the lock manager. This breaks a subsequent ::RollbackToSavePoint behavior, as it loses track of such keys and thus cannot unlock them. The patch fixes ::PopSavePoint by passing on the tracked key information to the previous SavePoint. Fixes https://github.com/facebook/rocksdb/issues/5618 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5628 Differential Revision: D16505325 Pulled By: lth fbshipit-source-id: 2bc3b30963ab4d36d996d1f66543c93abf358980
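The hand-off can be sketched with a plain stack of tracked-key sets (illustrative types, not the TransactionBaseImpl code): the buggy behavior is a bare `pop_back()`, which drops the top set's keys; the fix merges them into the previous SavePoint before popping, so a later rollback can still find and unlock them.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

class SavePointStack {
 public:
  void SetSavePoint() { stack_.emplace_back(); }

  // Record a key locked since the last SavePoint.
  void TrackKey(const std::string& key) {
    if (!stack_.empty()) stack_.back().insert(key);
  }

  // Fixed behavior: merge the top set into the previous SavePoint
  // before popping. (The bug was popping without the merge, losing
  // track of the keys and leaving them locked forever.)
  void PopSavePoint() {
    if (stack_.size() >= 2) {
      auto& prev = stack_[stack_.size() - 2];
      prev.insert(stack_.back().begin(), stack_.back().end());
    }
    stack_.pop_back();
  }

  const std::set<std::string>& Top() const { return stack_.back(); }

 private:
  std::vector<std::set<std::string>> stack_;
};
```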
Committed by Yanqin Jin
Summary: The current `clean` target in the Makefile does not remove parallel test binaries. Fix this. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5629 Test Plan: (on devserver) Take file_reader_writer_test for instance:
```
$make -j32 file_reader_writer_test
$make clean
```
Verify that the binary file 'file_reader_writer_test' is deleted by `make clean`. Differential Revision: D16513176 Pulled By: riversand963 fbshipit-source-id: 70acb9f56c928a494964121b86aacc0090f31ff6