- Dec 09, 2019: 1 commit
-
-
Committed by sdong
Summary: db_stress_tool.cc is now a giant file. In order to make it easier to improve and maintain, break it down into multiple source files. Most classes are turned into their own files. Separate .h and .cc files are created for gflag definitions. Another .h and .cc pair is created for some common functions. Some test execution logic that is only loosely related to class StressTest is moved to db_stress_driver.h and db_stress_driver.cc. All the files are located under db_stress_tool/. The directory is named this way because if it ended with either stress or test, .gitignore would ignore any file under it, making it prone to issues during development. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6134 Test Plan: Build under GCC7 with and without LITE using GNU Make. Build with GCC 4.8. Build with cmake with -DWITH_TOOL=1 Differential Revision: D18876064 fbshipit-source-id: b25d0a7451840f31ac0f5ebb0068785f783fdf7d
-
- Dec 06, 2019: 1 commit
-
-
Committed by Adam Simpkins
Summary: PR https://github.com/facebook/rocksdb/issues/5937 changed the db_stress tool to also require db_stress_tool.cc, and updated the Makefile but not the CMakeLists.txt file. This updates the CMakeLists.txt file so that the CMake build succeeds again. PR https://github.com/facebook/rocksdb/issues/5950 updated the Makefile build to package db_stress_tool.cc into its own librocksdb_stress.a library. I haven't done that here since there didn't really seem to be much benefit: the Makefile-based build does not install this library. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6117 Test Plan: Confirmed the CMake build succeeds on an Ubuntu 18.04 system. Differential Revision: D18835053 Pulled By: riversand963 fbshipit-source-id: 6e2a66834716e73b1eb736d9b7159870defffec5
-
- Dec 04, 2019: 1 commit
-
-
Committed by Connor
Summary:
```
In file included from /usr/include/c++/4.8.2/algorithm:62:0,
                 from ./db/merge_context.h:7,
                 from ./db/dbformat.h:16,
                 from ./tools/block_cache_analyzer/block_cache_trace_analyzer.h:12,
                 from tools/block_cache_analyzer/block_cache_trace_analyzer.cc:8:
/usr/include/c++/4.8.2/bits/stl_algo.h: In instantiation of ‘_RandomAccessIterator std::__unguarded_partition(_RandomAccessIterator, _RandomAccessIterator, const _Tp&, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<std::pair<std::basic_string<char>, long unsigned int>*, std::vector<std::pair<std::basic_string<char>, long unsigned int> > >; _Tp = std::pair<std::basic_string<char>, long unsigned int>; _Compare = rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1]’:
/usr/include/c++/4.8.2/bits/stl_algo.h:2296:78:   required from ‘_RandomAccessIterator std::__unguarded_partition_pivot(_RandomAccessIterator, _RandomAccessIterator, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<std::pair<std::basic_string<char>, long unsigned int>*, std::vector<std::pair<std::basic_string<char>, long unsigned int> > >; _Compare = rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1]’
/usr/include/c++/4.8.2/bits/stl_algo.h:2337:62:   required from ‘void std::__introsort_loop(_RandomAccessIterator, _RandomAccessIterator, _Size, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<std::pair<std::basic_string<char>, long unsigned int>*, std::vector<std::pair<std::basic_string<char>, long unsigned int> > >; _Size = long int; _Compare = rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1]’
/usr/include/c++/4.8.2/bits/stl_algo.h:5499:44:   required from ‘void std::sort(_RAIter, _RAIter, _Compare) [with _RAIter = __gnu_cxx::__normal_iterator<std::pair<std::basic_string<char>, long unsigned int>*, std::vector<std::pair<std::basic_string<char>, long unsigned int> > >; _Compare = rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1]’
tools/block_cache_analyzer/block_cache_trace_analyzer.cc:583:79:   required from here
/usr/include/c++/4.8.2/bits/stl_algo.h:2263:35: error: no match for call to ‘(rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1) (std::pair<std::basic_string<char>, long unsigned int>&, const std::pair<std::basic_string<char>, long unsigned int>&)’
   while (__comp(*__first, __pivot))
                                   ^
tools/block_cache_analyzer/block_cache_trace_analyzer.cc:582:9: note: candidates are:
       [=](std::pair<std::string, uint64_t>& a,
       ^
In file included from /usr/include/c++/4.8.2/algorithm:62:0,
                 from ./db/merge_context.h:7,
                 from ./db/dbformat.h:16,
                 from ./tools/block_cache_analyzer/block_cache_trace_analyzer.h:12,
                 from tools/block_cache_analyzer/block_cache_trace_analyzer.cc:8:
/usr/include/c++/4.8.2/bits/stl_algo.h:2263:35: note: bool (*)(std::pair<std::basic_string<char>, long unsigned int>&, std::pair<std::basic_string<char>, long unsigned int>&) <conversion>
   while (__comp(*__first, __pivot))
                                   ^
/usr/include/c++/4.8.2/bits/stl_algo.h:2263:35: note:   candidate expects 3 arguments, 3 provided
tools/block_cache_analyzer/block_cache_trace_analyzer.cc:583:46: note: rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1
           std::pair<std::string, uint64_t>& b) { return b.second < a.second; });
                                              ^
tools/block_cache_analyzer/block_cache_trace_analyzer.cc:583:46: note:   no known conversion for argument 2 from ‘const std::pair<std::basic_string<char>, long unsigned int>’ to ‘std::pair<std::basic_string<char>, long unsigned int>&’
In file included from /usr/include/c++/4.8.2/algorithm:62:0,
                 from ./db/merge_context.h:7,
                 from ./db/dbformat.h:16,
                 from ./tools/block_cache_analyzer/block_cache_trace_analyzer.h:12,
                 from tools/block_cache_analyzer/block_cache_trace_analyzer.cc:8:
/usr/include/c++/4.8.2/bits/stl_algo.h:2266:34: error: no match for call to ‘(rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1) (const std::pair<std::basic_string<char>, long unsigned int>&, std::pair<std::basic_string<char>, long unsigned int>&)’
   while (__comp(__pivot, *__last))
                                  ^
tools/block_cache_analyzer/block_cache_trace_analyzer.cc:582:9: note: candidates are:
       [=](std::pair<std::string, uint64_t>& a,
       ^
In file included from /usr/include/c++/4.8.2/algorithm:62:0,
                 from ./db/merge_context.h:7,
                 from ./db/dbformat.h:16,
                 from ./tools/block_cache_analyzer/block_cache_trace_analyzer.h:12,
                 from tools/block_cache_analyzer/block_cache_trace_analyzer.cc:8:
/usr/include/c++/4.8.2/bits/stl_algo.h:2266:34: note: bool (*)(std::pair<std::basic_string<char>, long unsigned int>&, std::pair<std::basic_string<char>, long unsigned int>&) <conversion>
   while (__comp(__pivot, *__last))
                                  ^
/usr/include/c++/4.8.2/bits/stl_algo.h:2266:34: note:   candidate expects 3 arguments, 3 provided
tools/block_cache_analyzer/block_cache_trace_analyzer.cc:583:46: note: rocksdb::BlockCacheTraceAnalyzer::WriteSkewness(const string&, const std::vector<long unsigned int>&, rocksdb::TraceType) const::__lambda1
           std::pair<std::string, uint64_t>& b) { return b.second < a.second; });
                                              ^
tools/block_cache_analyzer/block_cache_trace_analyzer.cc:583:46: note:   no known conversion for argument 1 from ‘const std::pair<std::basic_string<char>, long unsigned int>’ to ‘std::pair<std::basic_string<char>, long unsigned int>&’
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/6106 Differential Revision: D18783943 Pulled By: riversand963 fbshipit-source-id: cc7fc10565f0210b9eebf46b95cb4950ec0b15fa
-
- Nov 28, 2019: 1 commit
-
-
Committed by Peter Dillinger
Summary: format_version=5 enables the new Bloom filter. Use a 2/5 probability for "latest and greatest" rather than a naive 1/4. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6102 Test Plan: start 'make blackbox_crash_test' Differential Revision: D18735685 Pulled By: pdillinger fbshipit-source-id: e81529c8a3f53560d246086ee5f92ee7d79a2eab
-
- Nov 27, 2019: 2 commits
-
-
Committed by Adam Retter
Summary: **NOTE**: this also needs to be back-ported to 6.4.6 and possibly older branches if further releases from them are envisaged. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6081 Differential Revision: D18710107 Pulled By: zhichao-cao fbshipit-source-id: 03260f9316566e2bfc12c7d702d6338bb7941e01
-
Committed by sdong
Summary: https://github.com/facebook/rocksdb/pull/6060 broke forward compatibility for releases from 3.10 to 4.2. Update HISTORY.md to mention it. Also remove it from the compatibility tests. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6085 Differential Revision: D18691694 fbshipit-source-id: 4ef903783dc722b8a4d3e8229abbf0f021a114c9
-
- Nov 20, 2019: 1 commit
-
-
Committed by sdong
Summary: Recently, a bug was found related to a seek key that is close to an SST file boundary. However, it occurs only with a very small chance in db_stress, because the chance that a random key hits SST file boundaries is small. To boost the chance, we pick keys that are close to SST file boundaries with 1/16 chance. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6037 Test Plan: Did some manual printing and hacked the code to verify the key generation logic is correct. Differential Revision: D18598476 fbshipit-source-id: 13b76687d106c5be4e3e02a0c77fa5578105a071
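The 1/16 boosting can be sketched as follows. This is a simplified illustration only: `pick_seek_key` and its arguments are hypothetical names, and the real key-generation logic lives inside db_stress itself.

```python
import random

def pick_seek_key(random_key, boundary_keys, rng=random):
    # With probability 1/16, target a key close to an SST file boundary;
    # otherwise fall back to the uniformly random key.
    if boundary_keys and rng.randrange(16) == 0:
        return rng.choice(boundary_keys)
    return random_key
```

Over many iterations, roughly one in sixteen seeks then lands near a file boundary, which is enough to exercise the boundary-related seek logic regularly.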
-
- Nov 19, 2019: 1 commit
-
-
Committed by sdong
Summary: Right now in db_stress, as long as a prefix extractor is defined, TestIterator always uses prefix seek. There is value in covering total_order_seek = true even when a prefix extractor is defined. Add a small chance that this flag is turned on. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6039 Test Plan: Run the test for a while. Differential Revision: D18539689 fbshipit-source-id: 568790dd7789c9986b83764b870df0423a122d99
-
- Nov 15, 2019: 1 commit
-
-
Committed by sdong
Summary: Right now, crash_test always uses a 16KB max_manifest_file_size value. It is good to cover the logic of manifest file switching. However, information stored in manifest files might be useful in debugging failures. Switch to using a small manifest file size only 1/15 of the time. Pull Request resolved: https://github.com/facebook/rocksdb/pull/6034 Test Plan: Observe the command generated by db_crash_test.py multiple times and see the --max_manifest_file_size value distribution. Differential Revision: D18513824 fbshipit-source-id: 7b3ae6dbe521a0918df41064e3fa5ecbf2466e04
-
- Nov 12, 2019: 1 commit
-
-
Committed by sdong
Summary: Right now, db_stress doesn't cover SeekForPrev(). Add the coverage, which mirrors what we do for Seek(). Pull Request resolved: https://github.com/facebook/rocksdb/pull/6022 Test Plan: Run "make crash_test". Did some manual source code hacking to simulate wrong iterator results and saw them caught. Differential Revision: D18442193 fbshipit-source-id: 879b79000d5e33c625c7e970636de191ccd7776c
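Seek() and SeekForPrev() are mirror images of each other, which is why the coverage can be mirrored. On a sorted key list their semantics can be sketched with `bisect` (illustrative helpers, not the db_stress code):

```python
import bisect

def seek(keys, target):
    # Seek: position at the first key >= target (None if past the end).
    i = bisect.bisect_left(keys, target)
    return keys[i] if i < len(keys) else None

def seek_for_prev(keys, target):
    # SeekForPrev: position at the last key <= target (None if before the start).
    i = bisect.bisect_right(keys, target)
    return keys[i - 1] if i > 0 else None
```

For a target that is itself a stored key, both land on that key; between keys, Seek moves forward and SeekForPrev moves backward.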
-
- Nov 08, 2019: 1 commit
-
-
Committed by sdong
Summary: In the stress test, all iterator verification is turned off if lower bound is enabled. This might be stricter than needed. This PR relaxes the condition and includes the case where the lower bound is lower than both the seek key and the upper bound. It seems to work mostly fine when I run the crash test locally. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5869 Test Plan: Run crash_test Differential Revision: D18363578 fbshipit-source-id: 23d57e11ea507949b8100f4190ddfbe8db052d5a
-
- Nov 07, 2019: 3 commits
-
-
Committed by sdong
Summary: Right now, in db_stress's CF consistency test's TestGet case, if a failure happens, we do normal string printing rather than hex printing, so some bytes are not printed out, which makes debugging harder. Fix it by printing hex instead. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5989 Test Plan: Build db_stress and see it passes. Differential Revision: D18363552 fbshipit-source-id: 09d1b8f6fbff37441cbe7e63a1aef27551226cec
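The idea of the fix, sketched below; the exact formatting db_stress uses may differ, and `to_hex` is an illustrative helper:

```python
def to_hex(value: bytes) -> str:
    # Hex-print a value so non-printable bytes show up in failure output
    # instead of being swallowed by plain string printing.
    return "0x" + value.hex().upper()
```

For example, `to_hex(b"\x00\xffab")` yields `"0x00FF6162"`, so a value containing NUL or other non-printable bytes is still fully visible in the failure message.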
-
Committed by Zhichao Cao
Summary: In the previous PR https://github.com/facebook/rocksdb/issues/4788, users can use the db_bench mix_graph option to generate a workload that is from the social graph. The key is generated based on the key access hotness. In this PR, users can further model the key-range hotness and fit it to a two-term exponential distribution. First, the user cuts the whole key space into small key ranges (e.g., key ranges are the same size and the key-range number is the number of SST files). Then, the user calculates the average access count per key of each key range as the key-range hotness. Next, the user fits the key-range hotness to the two-term exponential distribution (f(x) = a*exp(b*x) + c*exp(d*x)) and generates the values of a, b, c, and d. They are the parameters in db_bench: prefix_dist_a, prefix_dist_b, prefix_dist_c, and prefix_dist_d. Finally, the user can run db_bench by specifying the parameters. For example: `./db_bench --benchmarks="mixgraph" -use_direct_io_for_flush_and_compaction=true -use_direct_reads=true -cache_size=268435456 -key_dist_a=0.002312 -key_dist_b=0.3467 -keyrange_dist_a=14.18 -keyrange_dist_b=-2.917 -keyrange_dist_c=0.0164 -keyrange_dist_d=-0.08082 -keyrange_num=30 -value_k=0.2615 -value_sigma=25.45 -iter_k=2.517 -iter_sigma=14.236 -mix_get_ratio=0.85 -mix_put_ratio=0.14 -mix_seek_ratio=0.01 -sine_mix_rate_interval_milliseconds=5000 -sine_a=350 -sine_b=0.0105 -sine_d=50000 --perf_level=2 -reads=1000000 -num=5000000 -key_size=48` Pull Request resolved: https://github.com/facebook/rocksdb/pull/5953 Test Plan: run db_bench with different parameters and checked the results. Differential Revision: D18053527 Pulled By: zhichao-cao fbshipit-source-id: 171f8b3142bd76462f1967c58345ad7e4f84bab7
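The two-term exponential model can be evaluated directly; below it is shown with the keyrange_dist_a/b/c/d values from the example command line. This is only a sketch of the formula, not db_bench code, and `keyrange_hotness` is an illustrative name.

```python
import math

def keyrange_hotness(x, a, b, c, d):
    # f(x) = a*exp(b*x) + c*exp(d*x): the fitted average access count
    # per key for key-range index x.
    return a * math.exp(b * x) + c * math.exp(d * x)

# Parameters from the example command line above.
A, B, C, D = 14.18, -2.917, 0.0164, -0.08082
```

With b and d both negative, the hotness decays with the key-range index, i.e. low-index ranges are the hottest, which matches the social-graph access skew the model is fitting.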
-
Committed by Maysam Yabandeh
Summary: DBImpl extends the public GetSnapshot() with GetSnapshotForWriteConflictBoundary() method that takes snapshots specially for write-write conflict checking. Compaction treats such snapshots differently to avoid GCing a value written after that, so that the write conflict remains visible even after the compaction. The patch extends stress tests with such snapshots. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5897 Differential Revision: D17937476 Pulled By: maysamyabandeh fbshipit-source-id: bd8b0c578827990302194f63ae0181e15752951d
-
- Nov 02, 2019: 1 commit
-
-
Committed by sdong
Summary: This reverts commit 351e2540. All branches have been fixed to be buildable in FB environments, so we can revert it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5999 Differential Revision: D18281947 fbshipit-source-id: 6deaaf1b5df2349eee5d6ed9b91208cd7e23ec8e
-
- Nov 01, 2019: 2 commits
-
-
Committed by sdong
Summary: A recent commit made the periodic compaction option valid in FIFO compaction, where it means TTL, but we failed to disable it in the crash test, causing an assert failure. Fix it by keeping it disabled. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5993 Test Plan: Restart "make crash_test" many times and make sure --periodic_compaction_seconds=0 is always the case when --compaction_style=2 Differential Revision: D18263223 fbshipit-source-id: c91a802017d83ae89ac43827d1b0012861933814
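The constraint being enforced can be sketched as a parameter sanitization step. The function and parameter-dict shape are illustrative assumptions; the real logic lives in tools/db_crashtest.py.

```python
def sanitize_crash_test_params(params):
    # FIFO compaction (compaction_style=2) interprets periodic compaction
    # as TTL, so the crash test must force it off to avoid the assert.
    if params.get("compaction_style") == 2:
        params["periodic_compaction_seconds"] = 0
    return params
```

This mirrors the test plan: whenever --compaction_style=2 is generated, --periodic_compaction_seconds must come out as 0.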
-
Committed by Levi Tamasi
Summary: We have updated earlier release branches going back to 5.5 so they are built using gcc7 by default. Disabling ancient versions before that until we figure out a plan for them. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5990 Test Plan: Ran the script locally. Differential Revision: D18252386 Pulled By: ltamasi fbshipit-source-id: a7bbb30dc52ff2eaaf31a29ecc79f7cf4e2834dc
-
- Oct 31, 2019: 2 commits
-
-
Committed by sdong
Summary: Recently, pipelined write is enabled even when atomic flush is enabled, which causes a sanitizing failure in db_stress. Revert this change. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5986 Test Plan: Run "make crash_test_with_atomic_flush" and see it run for a while, so that the old sanitizing error (which showed up quickly) doesn't show up. Differential Revision: D18228278 fbshipit-source-id: 27fdf2f8e3e77068c9725a838b9bef4ab25a2553
-
Committed by sdong
Summary: More release branches are created. We should include them in continuous format compatibility checks. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5985 Test Plan: Let's see whether it passes. Differential Revision: D18226532 fbshipit-source-id: 75d8cad5b03ccea4ce16f00cea1f8b7893b0c0c8
-
- Oct 30, 2019: 2 commits
-
-
Committed by sdong
Summary: In pipeline writing mode, memtable switching needs to wait for memtable writing to finish to make sure that when memtables are made immutable, inserts are not going to them. This is currently done in DBImpl::SwitchMemtable(). This is done after flush_scheduler_.TakeNextColumnFamily() is called to fetch the list of column families to switch. The function flush_scheduler_.TakeNextColumnFamily() itself, however, is not thread-safe when being called together with flush_scheduler_.ScheduleFlush(). This change provides a fix, which moves the waiting logic before flush_scheduler_.TakeNextColumnFamily(). WaitForPendingWrites() is a natural place where the logic can happen. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5716 Test Plan: Run all tests with ASAN and TSAN. Differential Revision: D18217658 fbshipit-source-id: b9c5e765c9989645bf10afda7c5c726c3f82f6c3
-
Committed by sdong
Summary: Right now, in db_stress's iterator tests, we always use the same CF to validate iterator results. This commit changes it so that a randomized CF is used in the CF consistency test, where every CF should have exactly the same data. This would help catch more bugs. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5983 Test Plan: Run "make crash_test_with_atomic_flush". Differential Revision: D18217643 fbshipit-source-id: 3ac998852a0378bb59790b20c5f236f6a5d681fe
-
- Oct 24, 2019: 1 commit
-
-
Committed by sdong
Summary: Right now in the CF consistency stress test's TestGet(), keys are just fetched without validation. With this change, half of the time we verify that all the CFs share the same value for the same key. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5863 Test Plan: Run "make crash_test_with_atomic_flush" and see tests pass. Hack the code to generate some inconsistency and observe the test fails as expected. Differential Revision: D17934206 fbshipit-source-id: 00ba1a130391f28785737b677f80f366fb83cced
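The cross-CF check boils down to: since every column family holds identical data, a Get for the same key must return the same value (or be missing) in every CF. A minimal sketch, with `cfs_consistent` as an illustrative name:

```python
def cfs_consistent(values_per_cf):
    # values_per_cf holds the Get result per column family for one key;
    # None stands for "not found". Consistent means at most one distinct
    # outcome across all CFs.
    return len(set(values_per_cf)) <= 1
```

A single mismatching value, or a key present in one CF but missing in another, makes the check fail.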
-
- Oct 23, 2019: 1 commit
-
-
Committed by Yanqin Jin
Summary: Since we already parse env_uri from the command line and create a custom Env accordingly, we should invoke the methods of such Envs instead of using Env::Default(). Test Plan (on devserver):
```
$make db_bench db_stress
$./db_bench -benchmarks=fillseq
./db_stress
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5943 Differential Revision: D18018550 Pulled By: riversand963 fbshipit-source-id: 03b61329aaae0dfd914a0b902cc677f570f102e3
-
- Oct 19, 2019: 4 commits
-
-
Committed by Yanqin Jin
Summary: Since SeekForPrev (used by Prev) is not supported by HashSkipList when prefix is used, we disable it when stress testing HashSkipList.
- Change the default memtablerep to skip list.
- Avoid Prev() when memtablerep is HashSkipList and prefix is used.
Test Plan (on devserver):
```
$make db_stress
$./db_stress -ops_per_thread=10000 -reopen=1 -destroy_db_initially=true -column_families=1 -threads=1 -column_families=1 -memtablerep=prefix_hash
$# or simply
$./db_stress
$./db_stress -memtablerep=prefix_hash
```
Results must print "Verification successful". Pull Request resolved: https://github.com/facebook/rocksdb/pull/5942 Differential Revision: D18017062 Pulled By: riversand963 fbshipit-source-id: af867e59aa9e6f533143c984d7d529febf232fd7
-
Committed by Peter Dillinger
Summary: Plain table SSTs could crash sst_dump because of a bug in PlainTableReader that can leave table_properties_ as null. Even if it was intended not to keep the table properties in some cases, they were leaked on the offending code path. Steps to reproduce:
```
$ db_bench --benchmarks=fillrandom --num=2000000 --use_plain_table --prefix-size=12
$ sst_dump --file=0000xx.sst --show_properties
from [] to []
Process /dev/shm/dbbench/000014.sst
Sst file format: plain table
Raw user collected properties
------------------------------
Segmentation fault (core dumped)
```
Also added missing unit testing of plain table full_scan_mode, and an assertion in NewIterator to check for regression. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5940 Test Plan: new unit test, manual, make check Differential Revision: D18018145 Pulled By: pdillinger fbshipit-source-id: 4310c755e824c4cd6f3f86a3abc20dfa417c5e07
-
Committed by Zhichao Cao
Summary: In the current trace replay, all the queries are serialized and executed by a single thread. This may not simulate the original application's query pattern closely. Multi-threaded replay is implemented in this PR. Users can set the number of threads to replay the trace. The queries generated according to the trace records are scheduled in the thread pool's job queue. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5934 Test Plan: tested with make check and real trace replay. Differential Revision: D17998098 Pulled By: zhichao-cao fbshipit-source-id: 87eecf6f7c17a9dc9d7ab29dd2af74f6f60212c8
-
Committed by Yanqin Jin
Summary: Expose the db_stress test by providing db_stress_tool.h as a public header. This PR does the following:
- adds a new header, db_stress_tool.h, in include/rocksdb/
- renames db_stress.cc to db_stress_tool.cc
- adds a db_stress.cc which simply invokes a test function
- updates the Makefile accordingly
Test Plan (dev server):
```
make db_stress
./db_stress
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5937 Differential Revision: D17997647 Pulled By: riversand963 fbshipit-source-id: 1a8d9994f89ce198935566756947c518f0052410
-
- Oct 18, 2019: 1 commit
-
-
Committed by Levi Tamasi
Summary: The patch adds a new command line parameter --decode_blob_index to sst_dump. If this switch is specified, sst_dump prints blob indexes in a human readable format, printing the blob file number, offset, size, and expiration (if applicable) for blob references, and the blob value (and expiration) for inlined blobs. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5926 Test Plan: Used db_bench's BlobDB mode to generate SST files containing blob references with and without expiration, as well as inlined blobs with and without expiration (note: the latter are stored as plain values), and confirmed sst_dump correctly prints all four types of records. Differential Revision: D17939077 Pulled By: ltamasi fbshipit-source-id: edc5f58fee94ba35f6699c6a042d5758f5b3963d
-
- Oct 15, 2019: 2 commits
-
-
Committed by Levi Tamasi
Summary: Currently, db_bench only supports PutWithTTL operations for BlobDB but not regular Puts. The patch adds support for regular (non-TTL) Puts and also changes the default for blob_db_max_ttl_range to zero, which corresponds to no TTL. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5921 Test Plan:
```
make check
./db_bench -benchmarks=fillrandom -statistics -stats_interval_seconds=1 -duration=90 -num=500000 -use_blob_db=1 -blob_db_file_size=1000000 -target_file_size_base=1000000
```
(issues Put operations with no TTL)
```
./db_bench -benchmarks=fillrandom -statistics -stats_interval_seconds=1 -duration=90 -num=500000 -use_blob_db=1 -blob_db_file_size=1000000 -target_file_size_base=1000000 -blob_db_max_ttl_range=86400
```
(issues PutWithTTL operations with random TTLs in the [0, blob_db_max_ttl_range) interval, as before)
Differential Revision: D17919798 Pulled By: ltamasi fbshipit-source-id: b946c3522b836b92b4c157ffbad24f92ba2b0a16
-
Committed by Maysam Yabandeh
Summary: This is the 3rd attempt after the revert of https://github.com/facebook/rocksdb/issues/4020 and https://github.com/facebook/rocksdb/issues/5895 The last bug is fixed https://github.com/facebook/rocksdb/pull/5907 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5918 Test Plan: ``` make -j32 crash_test ``` Differential Revision: D17909489 Pulled By: maysamyabandeh fbshipit-source-id: 7dfb8cf998c2d295c86465dd21734593d277887e
-
- Oct 11, 2019: 1 commit
-
-
Committed by Yanqin Jin
Summary: This reverts commit 2f4e2881. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5904 Differential Revision: D17871282 Pulled By: riversand963 fbshipit-source-id: d210725f8f3b26d8eac25892094da09d9694337e
-
- Oct 10, 2019: 1 commit
-
-
Committed by anand76
Summary: The loop in OperateDb() is getting quite complicated with the introduction of multiple-key operations such as MultiGet and reseeks. This results in a number of corner cases that hang db_stress due to synchronization problems during reopen (i.e., when the -reopen=<> option is specified). This PR makes it more robust by ensuring all db_stress threads vote to reopen the DB the exact same number of times. Most of the changes in this diff are due to indentation. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5893 Test Plan: Run crash test Differential Revision: D17823827 Pulled By: anand1976 fbshipit-source-id: ec893829f611ac7cac4057c0d3d99f9ffb6a6dd9
-
- Oct 09, 2019: 2 commits
-
-
Committed by Yanqin Jin
Summary: This PR allows for the creation of a custom Env when using sst_dump. If the user does not set options.env, or sets options.env to nullptr, then sst_dump will automatically try to create a custom Env depending on the path to the sst file or db directory. In order to use this feature, the user must call ObjectRegistry::Register() beforehand. Test Plan (on devserver):
```
$make all && make check
```
All tests must pass to ensure this change does not break anything. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5845 Differential Revision: D17678038 Pulled By: riversand963 fbshipit-source-id: 58ecb4b3f75246d52b07c4c924a63ee61c1ee626
-
Committed by Maysam Yabandeh
Summary: This is the 2nd attempt after the revert of https://github.com/facebook/rocksdb/pull/4020 Pull Request resolved: https://github.com/facebook/rocksdb/pull/5895 Test Plan: ``` ./tools/db_crashtest.py blackbox --simple --interval=10 --max_key=10000000 ``` Differential Revision: D17822137 Pulled By: maysamyabandeh fbshipit-source-id: 3d148c0d8cc129080410ff859c04b544223c8ea3
-
- Oct 04, 2019: 1 commit
-
-
Committed by anand76
Summary: When multiple operations are performed in a db_stress thread in one loop iteration, the reopen voting logic needs to take that into account. It was doing that for MultiGet, but a new option was introduced recently to do multiple iterator seeks per iteration, which broke it again. Fix the logic to be more robust and agnostic of the type of operation performed. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5876 Test Plan: Run db_stress Differential Revision: D17733590 Pulled By: anand1976 fbshipit-source-id: 787f01abefa1e83bba43e0b4f4abb26699b2089e
-
- Oct 01, 2019: 3 commits
-
-
Committed by sdong
Summary: Recent changes triggered a clang analyze warning. Fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5870 Test Plan: "USE_CLANG=1 TEST_TMPDIR=/dev/shm/rocksdb OPT=-g make -j60 analyze" and make sure it passes. Differential Revision: D17682533 fbshipit-source-id: 02716f2a24572550a22db4bbe9b54d4872dfae32
-
Committed by Jay Zhuang
Summary:
```
tools/block_cache_analyzer/block_cache_trace_analyzer.cc:653:48: error: implicit conversion loses integer precision: 'uint64_t' (aka 'unsigned long long') to 'std::__1::linear_congruential_engine<unsigned int, 48271, 0, 2147483647>::result_type' (aka 'unsigned int') [-Werror,-Wshorten-64-to-32]
  std::default_random_engine rand_engine(env_->NowMicros());
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5864 Differential Revision: D17668962 fbshipit-source-id: e08fa58b2a78a8dd8b334862b5714208f696b8ab
-
Committed by sdong
Summary: Two more bug fixes in db_stress:
1. Complete the fix of the regression bug causing overflow when supporting FLAGS_prefix_size = -1.
2. Fix regression bugs in the compare iterator itself: (1) the control iterator was created with the same read option as the normal iterator by mistake; (2) the comparison logic had some problems; fix them; (3) disable validation for lower bound for now, since it generated some wildly different results, to let normal tests pass while investigating.
3. Clean up snapshots in verification failure cases; memory is leaked otherwise.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5867 Test Plan: Run "make crash_test" for a while and see at least 1 is fixed. Differential Revision: D17671712 fbshipit-source-id: 011f98ea1a72aef23e19ff28656830c78699b402
-
- Sep 28, 2019: 2 commits
-
-
Committed by sdong
Summary: When prefix_size = -1, the stress test crashes with a runtime error because of an overflow. Fix it by using 7 instead of -1 in prefix scan mode. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5862 Test Plan: Run python -u tools/db_crashtest.py --simple whitebox --random_kill_odd 888887 --compression_type=zstd and see it doesn't crash. Differential Revision: D17642313 fbshipit-source-id: f029e7651498c905af1b1bee6d310ae50cdcda41
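The overflow class behind this bug: in C++, storing a signed -1 into an unsigned type wraps around modulo 2**bits, so a prefix size of -1 is read as a huge length instead of "disabled". A minimal illustration of that mechanism (not the db_stress code; `as_unsigned` is a hypothetical helper):

```python
def as_unsigned(value, bits=64):
    # Mimic C++ signed-to-unsigned conversion: the value is reduced
    # modulo 2**bits, so -1 maps to the maximum representable value.
    return value % (1 << bits)
```

This is why -1 cannot safely double as a sentinel once it flows into size-typed code, and why the fix substitutes a small positive prefix size instead.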
-
Committed by sdong
Summary: For now, crash_test is not able to report any failure in the logic related to iterator upper/lower bounds or reseeks. These features are prone to errors. Improve db_stress in several ways: (1) For each iterator run, reseek up to 3 times. (2) For every iterator, create a control iterator with upper or lower bound, with total order seek, and compare the results with the iterator. (3) Make the simple crash test avoid prefix size to get more coverage. (4) Make prefix_size = 0 a valid size and -1 indicate disabling the prefix extractor. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5846 Test Plan: Manually hack the code to create wrong results and see they are caught by the tool. Differential Revision: D17631760 fbshipit-source-id: acd460a177bd2124a5ffd7fff490702dba63030b
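The control-iterator idea above amounts to computing, from a total-order view of the data, exactly what a correct bounded iteration must yield, and diffing the real iterator's output against it. A simplified sketch (illustrative helper, not the db_stress implementation; bounds follow RocksDB's convention of inclusive lower and exclusive upper):

```python
def expected_range(all_items, lower=None, upper=None):
    # The "control" result: every key in [lower, upper), in sorted order,
    # taken from a trusted total-order view of the data.
    return [(k, v) for k, v in sorted(all_items.items())
            if (lower is None or k >= lower) and (upper is None or k < upper)]
```

Any key the real iterator skips, duplicates, or emits outside the bounds then shows up as a mismatch against this list.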
-