- 13 Jul, 2019 (2 commits)
-
-
Committed by Sergei Petrunia
Summary:
- Provide an assignment operator in CompactionStats
- Provide a copy constructor for FileDescriptor
- Remove std::move from "return std::move(t)" in BoundedQueue
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5553 Differential Revision: D16230170 fbshipit-source-id: fd7c6e52390b2db1be24141e25649cf62424d078
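The `std::move` item above refers to a common pessimization: `return std::move(t)` suppresses copy elision (NRVO) and forces a move. A minimal, hypothetical sketch of the pattern (names are illustrative, not from BoundedQueue):

```cpp
#include <vector>

// Returning a named local by value: the compiler may elide the copy entirely
// (NRVO); at worst it uses the move constructor. Wrapping the return in
// std::move() would suppress elision and force a move -- the pessimization
// this commit removes.
std::vector<int> make_values() {
  std::vector<int> v{1, 2, 3};
  return v;                // good: NRVO-eligible
  // return std::move(v);  // anti-pattern: blocks copy elision
}
```

Modern compilers flag the anti-pattern with warnings such as clang's `-Wpessimizing-move`.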
-
Committed by haoyuhuang
Summary: This PR adds more command line options to the block cache analyzer to better understand block cache access patterns:
-analyze_bottom_k_access_count_blocks
-analyze_top_k_access_count_blocks
-reuse_lifetime_labels
-reuse_lifetime_buckets
-analyze_callers
-access_count_buckets
-analyze_blocks_reuse_k_reuse_window
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5516 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16037440 Pulled By: HaoyuHuang fbshipit-source-id: b9a4ac0d4712053fab910732077a4d4b91400bc8
-
- 12 Jul, 2019 (1 commit)
-
-
Committed by haoyuhuang
Summary: This PR adds a ghost cache for admission control. Specifically, it admits an entry on its second access. It also adds a hybrid row-block cache that caches the referenced key-value pairs of a Get/MultiGet request instead of its blocks. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5534 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16101124 Pulled By: HaoyuHuang fbshipit-source-id: b99edda6418a888e94eb40f71ece45d375e234b1
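The second-access admission policy described above can be sketched as follows; this is a hypothetical illustration, not RocksDB's actual cache code (class and method names are made up):

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>

// The "ghost cache" remembers keys seen once without caching their values.
// A key is admitted to the real cache only when it is accessed again while
// still present in the ghost cache.
class GhostAdmissionCache {
 public:
  bool Lookup(const std::string& key) const { return cache_.count(key) > 0; }

  void Access(const std::string& key, int value) {
    if (cache_.count(key)) return;  // already cached
    if (ghost_.erase(key)) {
      cache_[key] = value;          // second access: admit
    } else {
      ghost_.insert(key);           // first access: remember only
    }
  }

 private:
  std::unordered_set<std::string> ghost_;        // keys seen exactly once
  std::unordered_map<std::string, int> cache_;   // admitted entries
};
```

The point of the design: single-touch scans never pollute the real cache, because their keys die in the ghost set.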
-
- 11 Jul, 2019 (1 commit)
-
-
Committed by Yanqin Jin
Summary: Add an extra cleanup step so that db directory can be saved and uploaded. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5554 Reviewed By: yancouto Differential Revision: D16168844 Pulled By: riversand963 fbshipit-source-id: ec7b2cee5f11c7d388c36531f8b076d648e2fb19
-
- 10 Jul, 2019 (5 commits)
-
-
Committed by ggaurav28
Summary: The current PosixLogger performs IO operations using posix calls, so it will not work for a non-posix env. Created a new logger class, EnvLogger, that uses the env-specific WritableFileWriter for IO operations. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5491 Test Plan: make check Differential Revision: D15909002 Pulled By: ggaurav28 fbshipit-source-id: 13a8105176e8e42db0c59798d48cb6a0dbccc965
-
Committed by Yanqin Jin
Summary: When the atomic flush stress test fails, we print internal keys within the range with mismatched key/values for all column families.
Test plan (on devserver): Manually hack the code to randomly insert wrong data. Run the test.
```
$make clean && COMPILE_WITH_TSAN=1 make -j32 db_stress
$./db_stress -test_atomic_flush=true -ops_per_thread=10000
```
Check that proper error messages are printed, as follows:
```
2019/07/08-17:40:14 Starting verification
Verification failed
Latest Sequence Number: 190903
[default] 000000000000050B => 56290000525350515E5F5C5D5A5B5859
[3] 0000000000000533 => EE100000EAEBE8E9E6E7E4E5E2E3E0E1FEFFFCFDFAFBF8F9
Internal keys in CF 'default', [000000000000050B, 0000000000000533] (max 8)
key 000000000000050B seq 139920 type 1
key 0000000000000533 seq 0 type 1
Internal keys in CF '3', [000000000000050B, 0000000000000533] (max 8)
key 0000000000000533 seq 0 type 1
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5549 Differential Revision: D16158709 Pulled By: riversand963 fbshipit-source-id: f07fa87763f87b3bd908da03c956709c6456bcab
-
Committed by sdong
Summary: Right now ldb can open a running DB through a read-only DB. However, it might leave info log files in the read-only DB directory. Add an option to open the DB as a secondary to avoid this. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5537 Test Plan: Run ./ldb scan --max_keys=10 --db=/tmp/rocksdbtest-2491/dbbench --secondary_path=/tmp --no_value --hex and ./ldb get 0x00000000000000103030303030303030 --hex --db=/tmp/rocksdbtest-2491/dbbench --secondary_path=/tmp against a normal db_bench run and observe the output changes. Also observe that no new info log files are created under /tmp/rocksdbtest-2491/dbbench. Run without --secondary_path and observe that new info log files are created under /tmp/rocksdbtest-2491/dbbench. Differential Revision: D16113886 fbshipit-source-id: 4e09dec47c2528f6ca08a9e7a7894ba2d9daebbb
-
Committed by sdong
Summary: https://github.com/facebook/rocksdb/pull/5520 caused a buffer overflow bug in DBWALTest.kTolerateCorruptedTailRecords. Fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5550 Test Plan: Run the test under UBSAN. It used to fail; now it succeeds. Differential Revision: D16165516 fbshipit-source-id: 42c56a6bc64eb091f054b87757fcbef60da825f7
-
Committed by Tim Hatch
Reviewed By: lisroach Differential Revision: D15362271 fbshipit-source-id: 48fab12ab6e55a8537b19b4623d2545ca9950ec5
-
- 09 Jul, 2019 (1 commit)
-
-
Committed by sdong
Summary: Print out some more information when db_stress fails with verification failures, to help debug problems.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5543
Test Plan: Manually ingest some failures and observe that the output looks like this:
Verification failed
[default] 0000000000199A5A => 7C3D000078797A7B74757677707172736C6D6E6F68696A6B
[6] 000000000019C8BD => 65380000616063626D6C6F6E69686B6A
internal keys in default CF [0000000000199A5A, 000000000019C8BD] (max 8)
key 0000000000199A5A seq 179246 type 1
key 000000000019C8BD seq 163970 type 1
Latest Sequence Number: 292234
Differential Revision: D16153717 fbshipit-source-id: b33fa50a828c190cbf8249a37955432044f92daf
-
- 08 Jul, 2019 (3 commits)
-
-
Committed by haoyuhuang
Summary: This PR fixes shadow errors. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5546 Test Plan: make clean && make check -j32 && make clean && USE_CLANG=1 make check -j32 && make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16147841 Pulled By: HaoyuHuang fbshipit-source-id: 1043500d70c134185f537ab4c3900452752f1534
-
Committed by Yanqin Jin
Summary: Previously `GetAllKeyVersions()` supported the default column family only. This PR adds support for other column families.
Test plan (devserver):
```
$make clean && COMPILE_WITH_ASAN=1 make -j32 db_basic_test
$./db_basic_test --gtest_filter=DBBasicTest.GetAllKeyVersions
```
All other unit tests must pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5544 Differential Revision: D16147551 Pulled By: riversand963 fbshipit-source-id: 5a61aece2a32d789e150226a9b8d53f4a5760168
-
Committed by Zhongyi Xie
Summary: PR https://github.com/facebook/rocksdb/pull/5520 adds DBImpl::wal_in_db_path_ and initializes it in DBImpl::Open. This PR fixes the valgrind error for the secondary instance:
```
==236417== Conditional jump or move depends on uninitialised value(s)
==236417==    at 0x62242A: rocksdb::DeleteDBFile(rocksdb::ImmutableDBOptions const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, bool) (file_util.cc:96)
==236417==    by 0x512432: rocksdb::DBImpl::DeleteObsoleteFileImpl(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::FileType, unsigned long) (db_impl_files.cc:261)
==236417==    by 0x515A7A: rocksdb::DBImpl::PurgeObsoleteFiles(rocksdb::JobContext&, bool) (db_impl_files.cc:492)
==236417==    by 0x499153: rocksdb::ColumnFamilyHandleImpl::~ColumnFamilyHandleImpl() (column_family.cc:75)
==236417==    by 0x499880: rocksdb::ColumnFamilyHandleImpl::~ColumnFamilyHandleImpl() (column_family.cc:84)
==236417==    by 0x4C9AF9: rocksdb::DB::DestroyColumnFamilyHandle(rocksdb::ColumnFamilyHandle*) (db_impl.cc:3105)
==236417==    by 0x44E853: CloseSecondary (db_secondary_test.cc:53)
==236417==    by 0x44E853: rocksdb::DBSecondaryTest::~DBSecondaryTest() (db_secondary_test.cc:31)
==236417==    by 0x44EC77: ~DBSecondaryTest_PrimaryDropColumnFamily_Test (db_secondary_test.cc:443)
==236417==    by 0x44EC77: rocksdb::DBSecondaryTest_PrimaryDropColumnFamily_Test::~DBSecondaryTest_PrimaryDropColumnFamily_Test() (db_secondary_test.cc:443)
==236417==    by 0x83D1D7: HandleSehExceptionsInMethodIfSupported<testing::Test, void> (gtest-all.cc:3824)
==236417==    by 0x83D1D7: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (gtest-all.cc:3860)
==236417==    by 0x8346DB: testing::TestInfo::Run() [clone .part.486] (gtest-all.cc:4078)
==236417==    by 0x8348D4: Run (gtest-all.cc:4047)
==236417==    by 0x8348D4: testing::TestCase::Run() [clone .part.487] (gtest-all.cc:4190)
==236417==    by 0x834D14: Run (gtest-all.cc:6100)
==236417==    by 0x834D14: testing::internal::UnitTestImpl::RunAllTests() (gtest-all.cc:6062)
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5545 Differential Revision: D16146224 Pulled By: miasantreble fbshipit-source-id: 184c90e451352951da4e955f054d4b1a1f29ea29
-
- 07 Jul, 2019 (1 commit)
-
-
Committed by anand76
Summary:
1. Cleanup WAL trash files on open.
2. Don't apply the deletion rate limit if the WAL dir is different from the db dir.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5520 Test Plan: Add new unit tests and make check Differential Revision: D16096750 Pulled By: anand1976 fbshipit-source-id: 6f07858ad864b754b711db416f0389c45ede599b
-
- 06 Jul, 2019 (1 commit)
-
-
Committed by sdong
Summary: clang analyze fails after https://github.com/facebook/rocksdb/pull/5514 with this failure:
table/block_based/block_based_table_reader.cc:3450:16: warning: Called C++ object pointer is null
if (!get_context->SaveValue(
1 warning generated.
The reason is that a branch on whether get_context is null was added earlier in the function, so the Clang analyzer thinks it can be null at the call site, where it is used without a null check. Fix the issue by removing the branch and adding an assert. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5542 Test Plan: "make all check" passes and the CLANG analyze failure goes away. Differential Revision: D16133988 fbshipit-source-id: d4627d03c4746254cc11926c523931086ccebcda
-
- 05 Jul, 2019 (1 commit)
-
-
Committed by Yi Wu
Summary: Since https://github.com/facebook/rocksdb/issues/5468, `LevelIterator` compares the lower bound and the file's smallest key in `NewFileIterator` and caches the result to reduce per-key lower bound checks. However, when iterating across a file boundary, it doesn't update the cached result, since `Valid()=false` because `Valid()` still reflects the status of the previous file iterator. Fix it by removing the `Valid()` check from `CheckMayBeOutOfLowerBound()`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5540 Test Plan: See the new test. Signed-off-by: Yi Wu <yiwu@pingcap.com> Differential Revision: D16127653 fbshipit-source-id: a0691e1164658d485c17971aaa97028812f74678
-
- 04 Jul, 2019 (3 commits)
-
-
Committed by sdong
Summary: Sometimes it is helpful to fetch the whole history of stats after a benchmark run. Add such an option. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5532 Test Plan: Run the benchmark manually and observe the output is as expected. Differential Revision: D16097764 fbshipit-source-id: 10b5b735a22a18be198b8f348be11f11f8806904
-
Committed by haoyuhuang
Summary: This PR associates a unique id with Get and MultiGet. This enables us to track how many blocks a Get/MultiGet request accesses. We can also measure the impact of row cache vs block cache. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5514 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16032681 Pulled By: HaoyuHuang fbshipit-source-id: 775b05f4440badd58de6667e3ec9f4fc87a0af4c
-
Committed by Sagar Vemuri
Summary: Fixed a bug in compaction reads due to which an incorrect number of bytes was being read/utilized. The bug was introduced in https://github.com/facebook/rocksdb/issues/5498, resulting in "Corruption: block checksum mismatch" and "heap-buffer-overflow" asan errors in our tests. https://github.com/facebook/rocksdb/issues/5498 was introduced recently and is not in any released versions.
ASAN:
```
==2280939==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6250005e83da at pc 0x000000d57f62 bp 0x7f954f483770 sp 0x7f954f482f20
=== How to use this, how to get the raw stack trace, and more: fburl.com/ASAN ===
READ of size 4 at 0x6250005e83da thread T4
SCARINESS: 27 (4-byte-read-heap-buffer-overflow-far-from-bounds)
  #0 tests+0xd57f61 __asan_memcpy
  #1 rocksdb/src/util/coding.h:124 rocksdb::DecodeFixed32(char const*)
  #2 rocksdb/src/table/block_fetcher.cc:39 rocksdb::BlockFetcher::CheckBlockChecksum()
  #3 rocksdb/src/table/block_fetcher.cc:99 rocksdb::BlockFetcher::TryGetFromPrefetchBuffer()
  #4 rocksdb/src/table/block_fetcher.cc:209 rocksdb::BlockFetcher::ReadBlockContents()
  #5 rocksdb/src/table/block_based/block_based_table_reader.cc:93 rocksdb::(anonymous namespace)::ReadBlockFromFile(rocksdb::RandomAccessFileReader*, rocksdb::FilePrefetchBuffer*, rocksdb::Footer const&, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, std::unique_ptr<...>*, rocksdb::ImmutableCFOptions const&, bool, bool, rocksdb::UncompressionDict const&, rocksdb::PersistentCacheOptions const&, unsigned long, unsigned long, rocksdb::MemoryAllocator*, bool)
  #6 rocksdb/src/table/block_based/block_based_table_reader.cc:2331 rocksdb::BlockBasedTable::RetrieveBlock(rocksdb::FilePrefetchBuffer*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::UncompressionDict const&, rocksdb::CachableEntry<...>*, rocksdb::BlockType, rocksdb::GetContext*, rocksdb::BlockCacheLookupContext*, bool) const
  #7 rocksdb/src/table/block_based/block_based_table_reader.cc:2090 rocksdb::DataBlockIter* rocksdb::BlockBasedTable::NewDataBlockIterator<...>(rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::DataBlockIter*, rocksdb::BlockType, bool, bool, rocksdb::GetContext*, rocksdb::BlockCacheLookupContext*, rocksdb::Status, rocksdb::FilePrefetchBuffer*, bool) const
  #8 rocksdb/src/table/block_based/block_based_table_reader.cc:2720 rocksdb::BlockBasedTableIterator<...>::InitDataBlock()
  #9 rocksdb/src/table/block_based/block_based_table_reader.cc:2607 rocksdb::BlockBasedTableIterator<...>::SeekToFirst()
  #10 rocksdb/src/table/iterator_wrapper.h:83 rocksdb::IteratorWrapperBase<...>::SeekToFirst()
  #11 rocksdb/src/table/merging_iterator.cc:100 rocksdb::MergingIterator::SeekToFirst()
  #12 rocksdb/compaction/compaction_job.cc:877 rocksdb::CompactionJob::ProcessKeyValueCompaction(rocksdb::CompactionJob::SubcompactionState*)
  #13 rocksdb/compaction/compaction_job.cc:590 rocksdb::CompactionJob::Run()
  #14 rocksdb/db_impl/db_impl_compaction_flush.cc:2689 rocksdb::DBImpl::BackgroundCompaction(bool*, rocksdb::JobContext*, rocksdb::LogBuffer*, rocksdb::DBImpl::PrepickedCompaction*, rocksdb::Env::Priority)
  #15 rocksdb/db_impl/db_impl_compaction_flush.cc:2248 rocksdb::DBImpl::BackgroundCallCompaction(rocksdb::DBImpl::PrepickedCompaction*, rocksdb::Env::Priority)
  #16 rocksdb/db_impl/db_impl_compaction_flush.cc:2024 rocksdb::DBImpl::BGWorkCompaction(void*)
  #23 rocksdb/src/util/threadpool_imp.cc:266 rocksdb::ThreadPoolImpl::Impl::BGThread(unsigned long)
  #24 rocksdb/src/util/threadpool_imp.cc:307 rocksdb::ThreadPoolImpl::Impl::BGThreadWrapper(void*)
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5531 Test Plan: Verified that this fixes the fb-internal Logdevice test which caught the issue. Differential Revision: D16109702 Pulled By: sagar0 fbshipit-source-id: 1fc08549cf7b553e338a133ae11eb9f4d5011914
-
- 03 Jul, 2019 (3 commits)
-
-
Committed by Andrew Kryczka
Summary: Fixes the below build failure for the clang compiler using glibc and jemalloc.
Platform: linux x86-64
Compiler: clang version 6.0.0-1ubuntu2
Build failure:
```
$ CXX=clang++ CC=clang USE_CLANG=1 WITH_JEMALLOC_FLAG=1 JEMALLOC=1 EXTRA_LDFLAGS="-L/home/andrew/jemalloc/lib/" EXTRA_CXXFLAGS="-I/home/andrew/jemalloc/include/" make check -j12
...
  CC       memory/jemalloc_nodump_allocator.o
In file included from memory/jemalloc_nodump_allocator.cc:6:
In file included from ./memory/jemalloc_nodump_allocator.h:11:
In file included from ./port/jemalloc_helper.h:16:
/usr/include/clang/6.0.0/include/mm_malloc.h:39:16: error: 'posix_memalign' is missing exception specification 'throw()'
extern "C" int posix_memalign(void **__memptr, size_t __alignment, size_t __size);
               ^
/home/andrew/jemalloc/include/jemalloc/jemalloc.h:388:26: note: expanded from macro 'posix_memalign'
# define posix_memalign je_posix_memalign
                        ^
/home/andrew/jemalloc/include/jemalloc/jemalloc.h:77:29: note: expanded from macro 'je_posix_memalign'
# define je_posix_memalign posix_memalign
                           ^
/home/andrew/jemalloc/include/jemalloc/jemalloc.h:232:38: note: previous declaration is here
JEMALLOC_EXPORT int JEMALLOC_NOTHROW je_posix_memalign(void **memptr,
                                     ^
/home/andrew/jemalloc/include/jemalloc/jemalloc.h:77:29: note: expanded from macro 'je_posix_memalign'
# define je_posix_memalign posix_memalign
                           ^
1 error generated.
Makefile:1972: recipe for target 'memory/jemalloc_nodump_allocator.o' failed
make: *** [memory/jemalloc_nodump_allocator.o] Error 1
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5522 Differential Revision: D16069869 Pulled By: miasantreble fbshipit-source-id: c489bbc993adee194b9a550134c6237a264bc443
-
Committed by Andrew Kryczka
Summary: Previously, if jemalloc was built with a nonempty string for `--with-jemalloc-prefix`, then `HasJemalloc()` would return false on Linux, so jemalloc would not be used at runtime. On Mac, it would cause a linker failure due to no definitions found for the weak functions declared in "port/jemalloc_helper.h". This should be a rare problem because (1) on Linux the default `--with-jemalloc-prefix` value is the empty string, and (2) Homebrew's build explicitly sets `--with-jemalloc-prefix` to the empty string. However, there are cases where `--with-jemalloc-prefix` is nonempty. For example, when building jemalloc from source on Mac, the default setting is `--with-jemalloc-prefix=je_`. Such jemalloc builds should be usable by RocksDB. The fix is simple. Defining `JEMALLOC_MANGLE` before including "jemalloc.h" causes it to define unprefixed symbols that are aliases for each of the prefixed symbols. Thanks to benesch for figuring this out and explaining it to me. Fixes https://github.com/facebook/rocksdb/issues/1462.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5521
Test Plan: build jemalloc with prefixed symbols:
```
$ ./configure --with-jemalloc-prefix=lol
$ make
```
compile rocksdb against it:
```
$ WITH_JEMALLOC_FLAG=1 JEMALLOC=1 EXTRA_LDFLAGS="-L/home/andrew/jemalloc/lib/" EXTRA_CXXFLAGS="-I/home/andrew/jemalloc/include/" make -j12 ./db_bench
```
run db_bench and verify jemalloc is actually used:
```
$ ./db_bench -benchmarks=fillrandom -statistics=true -dump_malloc_stats=true -stats_dump_period_sec=1
$ grep jemalloc /tmp/rocksdbtest-1000/dbbench/LOG
2019/06/29-12:20:52.088658 7fc5fb7f6700 [_impl/db_impl.cc:837] ___ Begin jemalloc statistics ___
...
```
Differential Revision: D16092758 fbshipit-source-id: c2c358346190ed62ceb2a3547a6c4c180b12f7c4
-
Committed by Yi Wu
Summary: This is a second attempt for https://github.com/facebook/rocksdb/issues/5111, with the fix to redo iterate bounds check after `SeekXXX()`. This is because MyRocks may change iterate bounds between seek. See https://github.com/facebook/rocksdb/issues/5111 for original benchmark result and discussion. Closes https://github.com/facebook/rocksdb/issues/5463. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5468 Test Plan: Existing rocksdb tests, plus myrocks test `rocksdb.optimizer_loose_index_scans` and `rocksdb.group_min_max`. Differential Revision: D15863332 fbshipit-source-id: ab4aba5899838591806b8673899bd465f3f53e18
-
- 02 Jul, 2019 (8 commits)
-
-
Committed by Zhongyi Xie
Summary: Recent commit 3886dddc introduced a new test which is not compatible with lite mode and breaks the contrun test:
```
[ RUN      ] StatsHistoryTest.ForceManualFlushStatsCF
monitoring/stats_history_test.cc:642: Failure
Expected: (cfd_stats->GetLogNumber()) < (cfd_test->GetLogNumber()), actual: 15 vs 15
```
This PR excludes the test from lite mode to appease the failing test. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5529 Differential Revision: D16080892 Pulled By: miasantreble fbshipit-source-id: 2f8a22758f71250cd9f204046404226ddc13b028
-
Committed by haoyuhuang
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5526 Test Plan: OPT=-g V=1 make J=1 unity_test -j32 make clean && make -j32 Differential Revision: D16079315 Pulled By: HaoyuHuang fbshipit-source-id: 294ab439cf0db8dd5da44e30eabf0cbb2bb8c4f6
-
Committed by Eli Pozniansky
Summary: Formatting fixes in db_bench_tool that were accidentally omitted Pull Request resolved: https://github.com/facebook/rocksdb/pull/5525 Test Plan: Unit tests Differential Revision: D16078516 Pulled By: elipoz fbshipit-source-id: bf8df0e3f08092a91794ebf285396d9b8a335bb9
-
Committed by Yanqin Jin
Summary: This is to prevent the bg flush thread from unrefing and deleting a cfd that has been dropped by a concurrent thread. Before RocksDB calls `DBImpl::WaitForFlushMemTables`, we should increase the refcount of each `ColumnFamilyData` so that its ref count will not drop to 0 even if the column family is dropped by another thread. Otherwise the bg flush thread can deref the cfd and delete it, causing a segfault in `WaitForFlushMemTables` upon accessing `cfd`. Test plan (on devserver):
```
$make clean && COMPILE_WITH_ASAN=1 make -j32
$make check
```
All unit tests must pass. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5513 Differential Revision: D16062898 Pulled By: riversand963 fbshipit-source-id: 37dc511f1dc99f036d0201bbd7f0a8f5677c763d
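The core idea of the fix (take a reference before waiting so a concurrent drop cannot free the object mid-wait) can be sketched with `std::shared_ptr` standing in for RocksDB's manual `Ref()`/`Unref()` on `ColumnFamilyData`; the names here are illustrative:

```cpp
#include <memory>
#include <vector>

// ColumnFamily stands in for ColumnFamilyData. Taking shared_ptr copies
// *before* waiting guarantees the objects outlive a concurrent drop of the
// caller's own reference.
struct ColumnFamily {
  bool dropped = false;
};

// Returns true iff every cfd stayed alive for the duration of the "wait".
bool WaitForFlushMemTables(
    const std::vector<std::shared_ptr<ColumnFamily>>& cfds) {
  for (const auto& cfd : cfds) {
    if (!cfd) return false;  // with raw pointers this would be a use-after-free
  }
  return true;
}
```

Even after the original owner resets its pointer, the copies held in the vector keep the objects alive until the wait completes.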
-
Committed by Eli Pozniansky
Summary: Fix some C-style casts in bloom.cc and ./tools/db_bench_tool.cc. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5524 Differential Revision: D16075626 Pulled By: elipoz fbshipit-source-id: 352948885efb64a7ef865942c75c3c727a914207
-
Committed by haoyuhuang
Cache simulator: Refactor the cache simulator so that we can add alternative policies easily (#5517) Summary: This PR creates cache_simulator.h file. It contains a CacheSimulator that runs against a block cache trace record. We can add alternative cache simulators derived from CacheSimulator later. For example, this PR adds a PrioritizedCacheSimulator that inserts filter/index/uncompressed dictionary blocks with high priority. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5517 Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32 Differential Revision: D16043689 Pulled By: HaoyuHuang fbshipit-source-id: 65f28ed52b866ffb0e6eceffd7f9ca7c45bb680d
-
Committed by Zhongyi Xie
Summary: The WAL records RocksDB writes to all column families. When the user flushes a column family, the old WAL will not accept new writes but cannot be deleted yet, because it may still contain live data for other column families. (See https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log#life-cycle-of-a-wal for a detailed explanation.) Because of this, a column family that receives very infrequent writes and no manual flushes can prevent a lot of WALs from being deleted. PR https://github.com/facebook/rocksdb/pull/5046 introduced the persistent stats column family, which is a good example of such a column family: depending on the config, it may have long intervals between writes, and the user is unaware of it, which makes it difficult to call manual flush for it. This PR addresses the problem for the persistent stats column family by forcing a flush for it when 1) another column family is flushed and 2) the persistent stats column family's log number is the smallest among all column families. This way the persistent stats column family keeps advancing its log number when necessary, allowing RocksDB to delete old WAL files. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5509 Differential Revision: D16045896 Pulled By: miasantreble fbshipit-source-id: 286837b633e988417f0096ff38384742d3b40ef4
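The WAL-retention rule behind this change can be sketched as follows: the oldest WAL is deletable only once every column family's log number has advanced past it, so a CF holding the smallest log number pins old WALs and becomes a candidate for a forced flush. Struct and function names below are hypothetical, not RocksDB's:

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Each column family tracks the oldest WAL (log number) it still needs.
struct CfState {
  std::string name;
  uint64_t log_number;  // oldest WAL still needed by this CF
};

// True if `cf` alone would benefit from a forced flush: its log number is
// the minimum, i.e., it is pinning the oldest undeletable WAL.
bool ShouldForceFlush(const std::vector<CfState>& cfs, const std::string& cf) {
  uint64_t min_log = UINT64_MAX;
  const CfState* target = nullptr;
  for (const auto& c : cfs) {
    min_log = std::min(min_log, c.log_number);
    if (c.name == cf) target = &c;
  }
  return target != nullptr && target->log_number == min_log;
}
```

After the forced flush, the stats CF's log number advances past `min_log` and the old WAL becomes deletable.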
-
Committed by Yanqin Jin
Summary: This PR allows users to run stress tests on a secondary instance.
Test plan (on devserver):
```
./db_stress -ops_per_thread=100000 -enable_secondary=true -threads=32 -secondary_catch_up_one_in=10000 -clear_column_family_one_in=1000 -reopen=100
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5479 Differential Revision: D16074325 Pulled By: riversand963 fbshipit-source-id: c0ed959e7b6c7cda3efd0b3070ab379de3b29f1c
-
- 01 Jul, 2019 (2 commits)
-
-
Committed by anand76
Summary: Enhancement to MultiGet batching to read data blocks required for keys in a batch in parallel from disk. It uses Env::MultiRead() API to read multiple blocks and reduce latency. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5464 Test Plan: 1. make check 2. make asan_check 3. make asan_crash Differential Revision: D15911771 Pulled By: anand1976 fbshipit-source-id: 605036b9af0f90ca0020dc87c3a86b4da6e83394
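The batching idea can be illustrated with a toy MultiRead-style call that services a vector of (offset, length) requests in one shot; this is a hypothetical sketch against an in-memory "file", not the real `Env::MultiRead` signature:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// One batched call replaces N single-block reads. A real implementation
// could dispatch these in parallel or coalesce adjacent ranges; this toy
// version reads from an in-memory string sequentially.
struct ReadRequest {
  std::size_t offset;
  std::size_t len;
  std::string result;
};

void MultiRead(const std::string& file, std::vector<ReadRequest>* reqs) {
  for (auto& r : *reqs) {
    r.result = file.substr(r.offset, r.len);
  }
}
```

The latency win comes from handing the env the whole batch at once instead of paying one round trip per data block.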
-
Committed by haoyuhuang
Summary: This PR is needed for integration into MyRocks. A second call on StartTrace returns Busy so that MyRocks may return an error to the user. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5519 Test Plan: make clean && USE_CLANG=1 make check -j32 Differential Revision: D16055476 Pulled By: HaoyuHuang fbshipit-source-id: a51772fb0965c873922757eb470a332b1e02a91d
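The documented behavior (a second `StartTrace` while tracing is active returns Busy) can be sketched with a toy tracer; the class and the string return values are simplified placeholders for RocksDB's `Status`-based API:

```cpp
#include <string>

// "OK" / "Busy" strings stand in for rocksdb::Status codes in this sketch.
class Tracer {
 public:
  std::string StartTrace() {
    if (tracing_) return "Busy";  // second call while active: refuse
    tracing_ = true;
    return "OK";
  }
  std::string EndTrace() {
    tracing_ = false;
    return "OK";
  }

 private:
  bool tracing_ = false;
};
```

Returning Busy instead of silently restarting lets the caller (MyRocks here) surface a clean error to its user.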
-
- 29 Jun, 2019 (1 commit)
-
-
Committed by sdong
Summary: tools/check_format_compatible.sh has lagged behind; catch it up. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5518 Test Plan: Run the command Differential Revision: D16063180 fbshipit-source-id: d063eb42df9653dec06a2cf0fb982b8a60ca3d2f
-
- 28 Jun, 2019 (2 commits)
-
-
Committed by Aaron Gao
Summary: The `create_column_family` command already exists but was somehow missing from the help message. Also add a `drop_column_family` command, which can drop a column family without opening the DB. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5503 Test Plan: Updated the existing ldb_test.py to test deleting a column family. Differential Revision: D16018414 Pulled By: lightmark fbshipit-source-id: 1fc33680b742104fea86b10efc8499f79e722301
-
Committed by sdong
Summary: Mid-point insertion is a useful feature and is mature now. Make it the default. Also changed cache_index_and_filter_blocks_with_high_priority=true as the default accordingly, so that index and filter blocks won't be evicted more easily after the change, to avoid too many surprises to users. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5508 Test Plan: Run all existing tests. Differential Revision: D16021179 fbshipit-source-id: ce8456e8d43b3bfb48df6c304b5290a9d19817eb
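Mid-point insertion can be sketched with a plain `std::list`: new low-priority entries enter at a midpoint instead of the head, so scans of cold data cannot push hot (index/filter) entries toward the eviction end. This is a hypothetical illustration, not RocksDB's `LRUCache`:

```cpp
#include <cstddef>
#include <iterator>
#include <list>
#include <string>

// Entries before the midpoint form the "high priority" pool; low-priority
// data blocks enter at the midpoint, so the eviction end of the list
// (the back) fills with cold data first.
class MidpointLru {
 public:
  explicit MidpointLru(std::size_t high_pri_slots)
      : high_pri_slots_(high_pri_slots) {}

  void Insert(const std::string& key, bool high_priority) {
    if (high_priority || list_.size() < high_pri_slots_) {
      list_.push_front(key);  // head: furthest from eviction
    } else {
      auto mid = list_.begin();
      std::advance(mid, high_pri_slots_);
      list_.insert(mid, key);  // midpoint: evicted before the hot pool
    }
  }

  const std::string& EvictionCandidate() const { return list_.back(); }

  std::list<std::string> list_;  // front = hottest, back = next victim

 private:
  std::size_t high_pri_slots_;
};
```

With the defaults in this commit, index and filter blocks take the high-priority path and data blocks the midpoint path.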
-
- 27 Jun, 2019 (2 commits)
-
-
Committed by Yanqin Jin
Summary: Add a C binding for the secondary instance, as well as a unit test.
Test plan (on devserver):
```
$make clean && COMPILE_WITH_ASAN=1 make -j20 all
$./c_test
$make check
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5505 Differential Revision: D16000043 Pulled By: riversand963 fbshipit-source-id: 3361ef6bfdf4ce12438cee7290a0ac203b5250bd
-
Committed by haoyuhuang
Summary: This PR makes sure that the trace record is not populated when tracing is disabled.
Before this PR:
DB path: [/data/mysql/rocks_regression_tests/OPTIONS-myrocks-40-33-10000000/2019-06-26-13-04-41/db]
readwhilewriting : 9.803 micros/op 1550408 ops/sec; 107.9 MB/s (5000000 of 5000000 found)
Microseconds per read: Count: 80000000 Average: 9.8045 StdDev: 12.64 Min: 1 Median: 7.5246 Max: 25343
Percentiles: P50: 7.52 P75: 12.10 P99: 37.44 P99.9: 75.07 P99.99: 133.60
After this PR:
DB path: [/data/mysql/rocks_regression_tests/OPTIONS-myrocks-40-33-10000000/2019-06-26-14-08-21/db]
readwhilewriting : 8.723 micros/op 1662882 ops/sec; 115.8 MB/s (5000000 of 5000000 found)
Microseconds per read: Count: 80000000 Average: 8.7236 StdDev: 12.19 Min: 1 Median: 6.7262 Max: 25229
Percentiles: P50: 6.73 P75: 10.50 P99: 31.54 P99.9: 74.81 P99.99: 132.82
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5510 Differential Revision: D16016428 Pulled By: HaoyuHuang fbshipit-source-id: 3b3d11e6accf207d18ec2545b802aa01ee65901f
-
- 26 Jun, 2019 (1 commit)
-
-
Committed by Mike Kolupaev
Summary: Found by valgrind_check. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5507 Differential Revision: D16002612 Pulled By: miasantreble fbshipit-source-id: 13c11c183190e0a0571844635457d434da3ac59a
-
- 25 Jun, 2019 (2 commits)
-
-
Committed by Mike Kolupaev
Summary: The first key is used to defer reading the data block until this file gets to the top of the merging iterator's heap. For short range scans, most files never make it to the top of the heap, so this change can sometimes reduce read amplification by a lot.

Consider the following workload. There are a few data streams (we'll call them "logs"), each stream consisting of a sequence of blobs (we'll call them "records"). Each record is identified by log ID and a sequence number within the log. The RocksDB key is the concatenation of log ID and sequence number (big endian). Reads are mostly relatively short range scans, each within a single log. Writes are mostly sequential for each log, but writes to different logs are randomly interleaved. Compactions are disabled; instead, when we accumulate a few tens of sst files, we create a new column family and start writing to it. So, a typical sst file consists of a few ranges of blocks, each range corresponding to one log ID (we use FlushBlockPolicy to cut blocks at log boundaries).

A typical read would go like this. First, iterator Seek() reads one block from each sst file. Then a series of Next()s move through one sst file (since writes to each log are mostly sequential) until the subiterator reaches the end of this log in this sst file; then Next() switches to the next sst file and reads sequentially from that, and so on. Often a range scan will only return records from a small number of blocks in a small number of sst files; in this case, the cost of the initial Seek() reading one block from each file may be bigger than the cost of reading the actually useful blocks.

Neither iterate_upper_bound nor bloom filters can prevent reading one block from each file in Seek(). But this PR can: if the index contains the first key from each block, we don't have to read the block until it actually makes it to the top of the merging iterator's heap, so for short range scans we won't read any blocks from most of the sst files.

This PR does the deferred block loading inside the value() call. This is not ideal: there's no good way to report an IO error from inside value(). As discussed with siying offline, it would probably be better to change InternalIterator's interface to explicitly fetch the deferred value and get the status. I'll do it in a separate PR. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5289 Differential Revision: D15256423 Pulled By: al13n321 fbshipit-source-id: 750e4c39ce88e8d41662f701cf6275d9388ba46a
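The deferred-read idea can be sketched as a min-heap keyed by each file's next first key (taken from the index), where a block is only "read" when its file reaches the heap top; this is a simplified, hypothetical model with one block per file:

```cpp
#include <cstddef>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Each file is represented in the heap only by its next block's first key,
// which comes from the index -- no data block I/O is needed to build the heap.
struct FileCursor {
  std::string first_key;  // from the index
  int file_id;
};

struct ByFirstKey {
  bool operator()(const FileCursor& a, const FileCursor& b) const {
    return a.first_key > b.first_key;  // min-heap on first key
  }
};

// Returns the ids of files whose blocks were actually read for a scan that
// stops after `limit` keys; files that never reach the top of the heap
// incur no block read at all.
std::vector<int> BlocksRead(std::vector<FileCursor> files, std::size_t limit) {
  std::priority_queue<FileCursor, std::vector<FileCursor>, ByFirstKey> heap(
      ByFirstKey{}, std::move(files));
  std::vector<int> read;
  while (!heap.empty() && read.size() < limit) {
    read.push_back(heap.top().file_id);  // deferred block load happens here
    heap.pop();
  }
  return read;
}
```

In the sketch, a scan that stops early touches only the files whose first keys sort earliest, which is exactly the read-amplification saving the commit describes.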
-
Committed by haoyuhuang
Summary: This PR adds a feature to the block cache trace analysis tool to write statistics into csv files.
1. The analysis tool supports grouping the number of accesses per second by various labels, e.g., block, column family, block type, or a combination of them.
2. It also computes reuse distance and reuse interval. Reuse distance: the cumulative size of unique blocks read between two consecutive accesses to the same block. Reuse interval: the time between two consecutive accesses to the same block.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5490 Differential Revision: D15901322 Pulled By: HaoyuHuang fbshipit-source-id: b5454fea408a32757a80be63de6fe1c8149ca70e
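The two reuse metrics defined above can be sketched over a toy trace of (timestamp, block) accesses; all blocks have size 1 here, so reuse distance reduces to a count of unique intervening blocks (a hypothetical illustration, not the analyzer's code):

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// distance: number of unique blocks accessed strictly between two consecutive
// accesses to the same block (all blocks size 1 in this sketch);
// interval: timestamp difference between those two accesses.
struct Reuse {
  std::int64_t distance;
  std::int64_t interval;
};

std::vector<Reuse> ComputeReuse(
    const std::vector<std::pair<std::int64_t, std::string>>& trace) {
  std::map<std::string, std::size_t> last_index;  // block -> last access index
  std::vector<Reuse> out;
  for (std::size_t i = 0; i < trace.size(); ++i) {
    const std::string& block = trace[i].second;
    auto it = last_index.find(block);
    if (it != last_index.end()) {
      std::set<std::string> unique;  // unique blocks seen in between
      for (std::size_t j = it->second + 1; j < i; ++j) {
        unique.insert(trace[j].second);
      }
      out.push_back({static_cast<std::int64_t>(unique.size()),
                     trace[i].first - trace[it->second].first});
    }
    last_index[block] = i;
  }
  return out;
}
```

A real analyzer would weight each unique block by its byte size rather than counting blocks.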
-