- 08 Nov, 2018 2 commits
-
-
Committed by Zhongyi Xie
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/4637 Differential Revision: D12963727 Pulled By: miasantreble fbshipit-source-id: 76053501afbecc6ef388ddc56542fa0185243e3f
-
Committed by Peter (Stig) Edwards
Summary: Ensure delete[] and not delete is called on buffer_, as it is reset with new char[buffer_size_]. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4647 Differential Revision: D12961327 Pulled By: ajkr fbshipit-source-id: c1af373b98359edfdc291caebe4e0acdfb8afdd8
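The bug class fixed here is the mismatch between `new[]` and `delete`: a buffer allocated with `new char[n]` must be released with `delete[]`, or the behavior is undefined. A minimal sketch of the correct pairing (a hypothetical `Buffer` class, not the actual RocksDB code):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical buffer class illustrating the new[]/delete[] pairing;
// not the actual RocksDB code touched by this commit.
class Buffer {
 public:
  Buffer() : buffer_(nullptr), buffer_size_(0) {}
  ~Buffer() { delete[] buffer_; }  // must be delete[], not delete

  // Copying would double-free buffer_, so disable it.
  Buffer(const Buffer&) = delete;
  Buffer& operator=(const Buffer&) = delete;

  // Re-allocate the underlying array, as the fixed code does with
  // new char[buffer_size_]; the old array must go through delete[].
  void Reset(size_t new_size) {
    delete[] buffer_;
    buffer_size_ = new_size;
    buffer_ = new char[buffer_size_];
  }

  char* data() { return buffer_; }
  size_t size() const { return buffer_size_; }

 private:
  char* buffer_;
  size_t buffer_size_;
};
```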
-
- 07 Nov, 2018 4 commits
-
-
Committed by Andrew Gallagher
Summary: clang modules warns about `#include`s inside of namespaces. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4629 Reviewed By: ajkr Differential Revision: D12927333 Pulled By: andrewjcg fbshipit-source-id: a9e0b069e63d8224f78b7c3be1c3acf09bb83d3f
-
Committed by Yanqin Jin
Summary: Include 5.16 and 5.17 in check_format_compatible.sh Pull Request resolved: https://github.com/facebook/rocksdb/pull/4634 Differential Revision: D12947140 Pulled By: riversand963 fbshipit-source-id: 79852b76d5139b2f31db59ed14cb368be01f2c32
-
Committed by Siying Dong
Summary: valgrind tests with 1 thread run too long. To make them shorter, blacklist some long tests. These are already blacklisted in the parallel valgrind tests, but not in non-parallel mode. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4642 Differential Revision: D12945237 Pulled By: siying fbshipit-source-id: 04cf977d435996480fe87aa09f14b17975b74f7d
-
Committed by Yanqin Jin
Summary: Move the line `Add xxhash64 checksum support` to `Unreleased` section because it has not been released to 5.17. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4627 Differential Revision: D12944123 Pulled By: riversand963 fbshipit-source-id: 2762857065b9e741c64ff8b6116ca62fe31891d8
-
- 06 Nov, 2018 2 commits
-
-
Committed by Maysam Yabandeh
Summary: When evicting an entry from the commit_cache, it is verified against the list of old snapshots to see if it overlaps with any. The list of old snapshots is split into two lists: an efficient concurrent cache and a slow vector protected by a lock. The patch fixes a bug that would stop the search once a match was found in the cache, without also checking the larger snapshots in the slower list. An extra info log entry is also removed: the condition that triggered it, although very rare, is still feasible and should not spam the LOG when it happens. Fixes #4621 Pull Request resolved: https://github.com/facebook/rocksdb/pull/4639 Differential Revision: D12934989 Pulled By: maysamyabandeh fbshipit-source-id: 4e0fe8147ba292b554ae78e94c21c2ef31e03e2d
-
Committed by Andrew Kryczka
Summary: This property can help debug why SST files aren't being deleted. Previously we only had the property "rocksdb.is-file-deletions-enabled". However, even when that returned true, obsolete SSTs may still not be deleted due to the coarse-grained mechanism we use to prevent newly created SSTs from being accidentally deleted. That coarse-grained mechanism uses a lower bound file number for SSTs that should not be deleted, and this property exposes that lower bound. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4618 Differential Revision: D12898179 Pulled By: ajkr fbshipit-source-id: fe68acc041ddbcc9276bbd48976524d95aafc776
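The coarse-grained mechanism described above can be modeled as a lower bound on deletable file numbers: obsolete SSTs at or above the bound are spared to protect newly created files, and the new property exposes that bound for debugging. A simplified sketch of the idea (illustrative types only, not RocksDB's implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// Simplified model of the mechanism described in the commit: obsolete
// SSTs are only deleted if their file number is below a lower bound that
// guards newly created files. Illustrative only, not RocksDB code.
class FileDeleter {
 public:
  explicit FileDeleter(uint64_t min_pending_output)
      : min_pending_output_(min_pending_output) {}

  // Mirrors what the new lower-bound property would expose for debugging
  // why an obsolete SST is still on disk.
  uint64_t MinObsoleteSstNumberToKeep() const { return min_pending_output_; }

  // Returns the subset of obsolete files that may actually be deleted.
  std::set<uint64_t> Deletable(const std::set<uint64_t>& obsolete) const {
    std::set<uint64_t> out;
    for (uint64_t f : obsolete) {
      if (f < min_pending_output_) out.insert(f);
    }
    return out;
  }

 private:
  uint64_t min_pending_output_;
};
```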
-
- 03 Nov, 2018 5 commits
-
-
Committed by Bo Hou
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/4626 Differential Revision: D12911848 Pulled By: jsjhoubo fbshipit-source-id: db6c59665e7cdbda20c6c63b0abd3ce24b473ae9
-
Committed by Siying Dong
Summary: ExternalSSTFileTest.IngestNonExistingFile occasionally fails because the number of SST files after manual compaction doesn't go down as expected. Although I haven't found how this can happen, adding an extra wait to make sure obsolete file purging has finished before we check the files doesn't hurt. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4625 Differential Revision: D12910586 Pulled By: siying fbshipit-source-id: 2a5ddec6908c99cf3bcc78431c6f93151c2cab59
-
Committed by Philip Jameson
Summary: Slightly changes the format of generated BUCK files for Facebook consumption. Generated targets end up looking like this: ``` cpp_library( name = "rocksdb_tools_lib", srcs = [ "tools/db_bench_tool.cc", "tools/trace_analyzer_tool.cc", "util/testutil.cc", ], auto_headers = AutoHeaders.RECURSIVE_GLOB, arch_preprocessor_flags = rocksdb_arch_preprocessor_flags, compiler_flags = rocksdb_compiler_flags, preprocessor_flags = rocksdb_preprocessor_flags, deps = [":rocksdb_lib"], external_deps = rocksdb_external_deps, ) ``` Instead of ``` cpp_library( name = "rocksdb_tools_lib", srcs = [ "tools/db_bench_tool.cc", "tools/trace_analyzer_tool.cc", "util/testutil.cc", ], headers = AutoHeaders.RECURSIVE_GLOB, arch_preprocessor_flags = rocksdb_arch_preprocessor_flags, compiler_flags = rocksdb_compiler_flags, preprocessor_flags = rocksdb_preprocessor_flags, deps = [":rocksdb_lib"], external_deps = rocksdb_external_deps, ) ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/4624 Reviewed By: riversand963 Differential Revision: D12906711 Pulled By: philipjameson fbshipit-source-id: 32ab64a3390cdcf2c4043ff77517ac1ad58a5e2b
-
Committed by Zhongyi Xie
Summary: fix current failing lite test: > In file included from ./util/testharness.h:15:0, from ./table/mock_table.h:23, from ./db/db_test_util.h:44, from db/db_flush_test.cc:10: db/db_flush_test.cc: In member function ‘virtual void rocksdb::DBFlushTest_ManualFlushFailsInReadOnlyMode_Test::TestBody()’: db/db_flush_test.cc:250:35: error: ‘Properties’ is not a member of ‘rocksdb::DB’ ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBackgroundErrors, ^ make: *** [db/db_flush_test.o] Error 1 Pull Request resolved: https://github.com/facebook/rocksdb/pull/4619 Differential Revision: D12898319 Pulled By: miasantreble fbshipit-source-id: 72de603b1f2e972fc8caa88611798c4e98e348c6
-
Committed by jiachun.fjc
Summary: In 'StatisticsCollector', the call to Thread.sleep() might be better outside the loop? Pull Request resolved: https://github.com/facebook/rocksdb/pull/4588 Differential Revision: D12903406 Pulled By: sagar0 fbshipit-source-id: 1647ed779e9972bc2cea03f4c38e37ab3ad7c361
-
- 02 Nov, 2018 4 commits
-
-
Committed by Yanqin Jin
Summary: The new case is directIO = true, write_global_seqno = false in which we no longer write global_seqno to the external SST file. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4614 Differential Revision: D12885001 Pulled By: riversand963 fbshipit-source-id: 7541bdc608b3a0c93d3c3c435da1b162b36673d4
-
Committed by Soli
Summary: Long absolute file names in the log make the LOG files hard to read, so we shorten them to paths relative to the root of the RocksDB project. In most cases, they will only have one directory level and a file name. There was [a talk](#4316) about making "util/logging.h" a public header file. But we are concerned about conflicts that might be introduced for the macros named `STRINGIFY`, `TOSTRING`, and `PREPEND_FILE_LINE`, so I prepend a prefix `ROCKS_LOG_` to them. I also remove the line that includes "port.h", which seems unnecessary here. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4616 Differential Revision: D12892857 Pulled By: siying fbshipit-source-id: af79aaf82153b8fd66b5966aced39a51fbca9c6c
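The path-shortening itself can be sketched as a helper that trims an absolute `__FILE__` down to its last two components; this is a hypothetical illustration of the idea, not the actual macro from the patch:

```cpp
#include <cassert>
#include <cstring>

// Hypothetical helper illustrating the idea of the logging change: trim
// an absolute source path to roughly "dir/file.cc" so LOG lines stay
// readable. Not the actual ROCKS_LOG_ macros.
inline const char* ShortFileName(const char* path) {
  const char* last = nullptr;  // final '/'
  const char* prev = nullptr;  // '/' before that
  for (const char* p = path; *p; ++p) {
    if (*p == '/') {
      prev = last;
      last = p;
    }
  }
  // Keep at most the last directory component plus the file name,
  // e.g. "/home/user/rocksdb/db/db_impl.cc" -> "db/db_impl.cc".
  if (prev) return prev + 1;
  if (last) return last + 1;
  return path;
}
```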
-
Committed by Bo Hou
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/4607 Reviewed By: siying Differential Revision: D12836696 Pulled By: jsjhoubo fbshipit-source-id: 7122ccb712d0b0f1cd998aa4477e0da1401bd870
-
Committed by Andrew Kryczka
Summary: The logic to wait for stall conditions to clear before beginning a manual flush didn't take into account whether the DB was in read-only mode. In read-only mode the stall conditions would never clear since no background work is happening, so the wait would be never-ending. It's probably better to return an error to the user. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4615 Differential Revision: D12888008 Pulled By: ajkr fbshipit-source-id: 1c474b42a7ac38d9fd0d0e2340ff1d53e684d83c
-
- 01 Nov, 2018 1 commit
-
-
Committed by Andrew Kryczka
Summary: A background compaction with pre-picked files (i.e., either a manual compaction or a bottom-pri compaction) fails when the DB is in read-only mode. In the failure handling, we forgot to unregister the compaction and the files it covered. Then subsequent manual compactions could conflict with this zombie compaction (possibly Halloween related) and wait forever for it to finish. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4611 Differential Revision: D12871217 Pulled By: ajkr fbshipit-source-id: 9d24e921d5bbd2ee8c2c9536a30abfa42a220c6e
-
- 31 Oct, 2018 6 commits
-
-
Committed by Yanqin Jin
Summary: Originally, the manual flush calls in db_stress flush only a single column family, which is not sufficient when atomic flush is enabled. With atomic flush, we should call `Flush(flush_opts, cfhs)` to better test this new feature. Specifically, we manually flush all column families so that database verification is easier. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4608 Differential Revision: D12849160 Pulled By: riversand963 fbshipit-source-id: ae1f0dd825247b42c0aba520a5c967335102c876
-
Committed by Yanqin Jin
Summary: Add unit tests to demonstrate that `VersionSet::Recover` is able to detect and handle cases in which the MANIFEST has a valid atomic group, an incomplete trailing atomic group, an atomic group mixed with normal version edits, or an atomic group with incorrect size. With this capability, RocksDB identifies invalid groups of version edits and does not apply them, thus guaranteeing that the db is restored to a state consistent with the most recent successful atomic flush before applying the WAL. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4433 Differential Revision: D10079202 Pulled By: riversand963 fbshipit-source-id: a0e0b8bf4da1cf68e044d397588c121b66c68876
-
Committed by Abhishek Madan
Summary: Since the number of range deletions are reported in TableProperties, it is confusing to not report the number of merge operands and point deletions as top-level properties; they are accessible through the public API, but since they are not the "main" properties, they do not appear in aggregated table properties, or the string representation of table properties. This change promotes those two property keys to `rocksdb/table_properties.h`, adds corresponding uint64 members for them, deprecates the old access methods `GetDeletedKeys()` and `GetMergeOperands()` (though they are still usable for now), and removes `InternalKeyPropertiesCollector`. The property key strings are the same as before this change, so this should be able to read DBs written from older versions (though I haven't tested this yet). Pull Request resolved: https://github.com/facebook/rocksdb/pull/4594 Differential Revision: D12826893 Pulled By: abhimadan fbshipit-source-id: 9e4e4fbdc5b0da161c89582566d184101ba8eb68
-
Committed by Yanqin Jin
Summary: This PR adds test of atomic flush to our continuous stress tests. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4605 Differential Revision: D12840607 Pulled By: riversand963 fbshipit-source-id: 0da187572791a59530065a7952697c05b1197ad9
-
Committed by Ben Clay
Summary: Punch through more flags for BlockBasedTableConfig, mostly around caching index + filter blocks and partitioned filters. sagar0 adamretter Pull Request resolved: https://github.com/facebook/rocksdb/pull/4589 Differential Revision: D12840626 Pulled By: sagar0 fbshipit-source-id: 3c289d367ceb2a012023aa791b990a437dd1393a
-
Committed by Siying Dong
Summary: EnableFileDeletions() does info logging inside db mutex. This is not recommended in the code base, since there could be I/O involved. Move this outside the DB mutex. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4604 Differential Revision: D12834432 Pulled By: siying fbshipit-source-id: ffe5c2626fcfdb4c54a661a3c3b0bc95054816cf
-
- 30 Oct, 2018 4 commits
-
-
Committed by Andrew Kryczka
Summary: When there's a gap between files, we do not need to output tombstones starting at the next output file's begin key to the current output file. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4592 Differential Revision: D12808627 Pulled By: ajkr fbshipit-source-id: 77c8b2e7523a95b1cd6611194144092c06acb505
-
Committed by Yanqin Jin
Summary: Since ErrorHandler::RecoverFromNoSpace is no-op in LITE mode, then we should not have this test in LITE mode. If we do keep it, it will cause the test thread to wait on bg_cv_ that will not be signalled. How to reproduce ``` $make clean && git checkout a27fce40 $OPT="-DROCKSDB_LITE -g" make -j20 $./db_io_failure_test --gtest_filter=DBIOFailureTest.NoSpaceCompactRange ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/4596 Differential Revision: D12818516 Pulled By: riversand963 fbshipit-source-id: bc83524f40fff1e29506979017f7f4c2b70322f3
-
Committed by Yanqin Jin
Summary: Test plan ``` $USE_CLANG=1 make -j32 all check ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/4593 Differential Revision: D12811159 Pulled By: riversand963 fbshipit-source-id: 5e3bbe058c5a8d5a286a19d7643593fc154a2d6d
-
Committed by Yanqin Jin
Summary: For flush triggered by RocksDB due to memory usage approaching certain threshold (WriteBufferManager or Memtable full), we should cut the memtable only when the current active memtable is not empty, i.e. contains data. This is what we do for non-atomic flush. If we always cut memtable even when the active memtable is empty, we will generate extra, empty immutable memtable. This is not ideal since it may cause write stall. It also causes some DBAtomicFlushTest to fail because cfd->imm()->NumNotFlushed() is different from expectation. Test plan ``` $make clean && make J=1 -j32 all check $make clean && OPT="-DROCKSDB_LITE -g" make J=1 -j32 all check $make clean && TEST_TMPDIR=/dev/shm/rocksdb OPT=-g make J=1 -j32 valgrind_test ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/4595 Differential Revision: D12818520 Pulled By: riversand963 fbshipit-source-id: d867bdbeacf4199fdd642debb085f94703c41a18
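The fix can be modeled as a guard in the memtable-switch path: only seal the active memtable when it contains data, so memory-pressure-triggered flushes never produce empty immutable memtables. A toy sketch (simplified types, not RocksDB code):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of the fix above: when flush is triggered by memory
// pressure, cut (seal) the active memtable only if it is non-empty,
// avoiding extra empty immutable memtables. Illustrative only.
struct Memtable {
  std::vector<std::string> entries;
  bool Empty() const { return entries.empty(); }
};

struct ColumnFamily {
  Memtable active;
  std::vector<Memtable> immutable;

  // Returns true if a new immutable memtable was produced.
  bool SwitchMemtableIfNeeded() {
    if (active.Empty()) return false;  // the fix: skip empty memtables
    immutable.push_back(active);
    active = Memtable{};
    return true;
  }
};
```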
-
- 27 Oct, 2018 3 commits
-
-
Committed by Yi Wu
Summary: Introduce `JemallocNodumpAllocator`, which allows excluding block cache usage from core dumps. It utilizes jemalloc's custom arena hook: when the jemalloc arena requests memory from the system, the allocator uses the hook to set `MADV_DONTDUMP` on the memory. The implementation is basically the same as `folly::JemallocNodumpAllocator`, except for some minor differences: 1. It only supports jemalloc >= 5.0. 2. When the allocator destructs, it explicitly destroys the corresponding arena via `arena.<i>.destroy` via `mallctl`. Depends on #4502. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4534 Differential Revision: D10435474 Pulled By: yiwu-arbug fbshipit-source-id: e80edea755d3853182485d2be710376384ce0bb4
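The underlying mechanism is `madvise` with `MADV_DONTDUMP` on pages obtained from the OS. A minimal sketch of that part alone, without the jemalloc arena hook wiring (assumes a POSIX `mmap`; the flag is Linux-specific and compile-guarded since not all platforms define it):

```cpp
#include <sys/mman.h>
#include <cassert>
#include <cstddef>

// Sketch of the core idea: when an arena obtains pages from the OS, mark
// them MADV_DONTDUMP so they are excluded from core dumps. This shows the
// general OS mechanism, not the jemalloc hook wiring itself.
void* AllocateNoDump(size_t size) {
  void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED) return nullptr;
#ifdef MADV_DONTDUMP
  // Exclude this range from core dumps; failure is deliberately ignored
  // on kernels that do not support the flag.
  madvise(p, size, MADV_DONTDUMP);
#endif
  return p;
}

void FreeNoDump(void* p, size_t size) { munmap(p, size); }
```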
-
Committed by Yanqin Jin
Summary: Adds a DB option `atomic_flush` to control whether to enable this feature. This PR is a subset of [PR 3752](https://github.com/facebook/rocksdb/pull/3752). Pull Request resolved: https://github.com/facebook/rocksdb/pull/4023 Differential Revision: D8518381 Pulled By: riversand963 fbshipit-source-id: 1e3bb33e99bb102876a31b378d93b0138ff6634f
-
Committed by Yi Wu
Summary: Rename the interface, as it is meant to be a generic interface for memory allocation. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4590 Differential Revision: D10866340 Pulled By: yiwu-arbug fbshipit-source-id: 85cb753351a40cb856c046aeaa3f3b369eef3d16
-
- 26 Oct, 2018 1 commit
-
-
Committed by Abhishek Madan
Summary: This allows tombstone fragmenting to only be performed when the table is opened, and cached for subsequent accesses. On the same DB used in #4449, running `readrandom` results in the following: ``` readrandom : 0.983 micros/op 1017076 ops/sec; 78.3 MB/s (63103 of 100000 found) ``` Now that Get performance in the presence of range tombstones is reasonable, I also compared the performance between a DB with range tombstones, "expanded" range tombstones (several point tombstones that cover the same keys the equivalent range tombstone would cover, a common workaround for DeleteRange), and no range tombstones. The created DBs had 5 million keys each, and DeleteRange was called at regular intervals (depending on the total number of range tombstones being written) after 4.5 million Puts. The table below summarizes the results of a `readwhilewriting` benchmark (in order to provide somewhat more realistic results): ``` Tombstones? | avg micros/op | stddev micros/op | avg ops/s | stddev ops/s ----------------- | ------------- | ---------------- | ------------ | ------------ None | 0.6186 | 0.04637 | 1,625,252.90 | 124,679.41 500 Expanded | 0.6019 | 0.03628 | 1,666,670.40 | 101,142.65 500 Unexpanded | 0.6435 | 0.03994 | 1,559,979.40 | 104,090.52 1k Expanded | 0.6034 | 0.04349 | 1,665,128.10 | 125,144.57 1k Unexpanded | 0.6261 | 0.03093 | 1,600,457.50 | 79,024.94 5k Expanded | 0.6163 | 0.05926 | 1,636,668.80 | 154,888.85 5k Unexpanded | 0.6402 | 0.04002 | 1,567,804.70 | 100,965.55 10k Expanded | 0.6036 | 0.05105 | 1,667,237.70 | 142,830.36 10k Unexpanded | 0.6128 | 0.02598 | 1,634,633.40 | 72,161.82 25k Expanded | 0.6198 | 0.04542 | 1,620,980.50 | 116,662.93 25k Unexpanded | 0.5478 | 0.0362 | 1,833,059.10 | 121,233.81 50k Expanded | 0.5104 | 0.04347 | 1,973,107.90 | 184,073.49 50k Unexpanded | 0.4528 | 0.03387 | 2,219,034.50 | 170,984.32 ``` After a large enough quantity of range tombstones are written, range tombstone Gets can become faster than reading from an 
equivalent DB with several point tombstones. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4493 Differential Revision: D10842844 Pulled By: abhimadan fbshipit-source-id: a7d44534f8120e6aabb65779d26c6b9df954c509
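Fragmentation can be sketched as splitting possibly-overlapping `[start, end)` deletion ranges at every boundary point, so each resulting fragment is covered by a fixed set of tombstones. The toy version below keeps only the highest covering sequence number per fragment (the quantity Get needs); the real `FragmentedRangeTombstoneIterator` keeps more state:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Toy range tombstone fragmentation: split overlapping [start, end)
// deletion ranges into non-overlapping fragments, each annotated with the
// highest sequence number covering it. Assumes all seqnos are > 0.
// Simplified illustration, not the actual RocksDB implementation.
struct Tombstone {
  std::string start, end;
  uint64_t seq;
};

std::vector<Tombstone> Fragment(std::vector<Tombstone> ts) {
  // Collect and deduplicate all boundary points.
  std::vector<std::string> bounds;
  for (const auto& t : ts) {
    bounds.push_back(t.start);
    bounds.push_back(t.end);
  }
  std::sort(bounds.begin(), bounds.end());
  bounds.erase(std::unique(bounds.begin(), bounds.end()), bounds.end());

  // For each interval between adjacent boundaries, record the max seqno
  // of the tombstones that fully cover it.
  std::vector<Tombstone> out;
  for (size_t i = 0; i + 1 < bounds.size(); ++i) {
    uint64_t max_seq = 0;
    for (const auto& t : ts) {
      if (t.start <= bounds[i] && bounds[i + 1] <= t.end) {
        max_seq = std::max(max_seq, t.seq);
      }
    }
    if (max_seq > 0) out.push_back({bounds[i], bounds[i + 1], max_seq});
  }
  return out;
}
```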
-
- 25 Oct, 2018 8 commits
-
-
Committed by Zhongyi Xie
Summary: Currently there are two contrun test failures: * rocksdb-contrun-lite: > tools/db_bench_tool.cc: In function ‘int rocksdb::db_bench_tool(int, char**)’: tools/db_bench_tool.cc:5814:5: error: ‘DumpMallocStats’ is not a member of ‘rocksdb’ rocksdb::DumpMallocStats(&stats_string); ^ make: *** [tools/db_bench_tool.o] Error 1 * rocksdb-contrun-unity: > In file included from unity.cc:44:0: db/range_tombstone_fragmenter.cc: In member function ‘void rocksdb::FragmentedRangeTombstoneIterator::FragmentTombstones(std::unique_ptr<rocksdb::InternalIteratorBase<rocksdb::Slice> >, rocksdb::SequenceNumber)’: db/range_tombstone_fragmenter.cc:90:14: error: reference to ‘ParsedInternalKeyComparator’ is ambiguous auto cmp = ParsedInternalKeyComparator(icmp_); This PR will fix them Pull Request resolved: https://github.com/facebook/rocksdb/pull/4587 Differential Revision: D10846554 Pulled By: miasantreble fbshipit-source-id: 8d3358879e105060197b1379c84aecf51b352b93
-
Committed by Yanqin Jin
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/4585 Differential Revision: D10841983 Pulled By: riversand963 fbshipit-source-id: 6a7e0b40065bcfbb10a2cac0cec1e8da0750a617
-
Committed by Jigar Bhati
Summary: 1. `WriteBufferManager` should have a reference kept alive on the Java side through `Options`/`DBOptions`; otherwise, if it's GC'ed on the Java side, the native side can seg fault. 2. The native method `setWriteBufferManager()` in `DBOptions.java` didn't have its JNI method invocation in rocksdbjni; it is added in this PR. 3. `DBOptionsTest.java` was referencing an object of `Options`; instead it should be testing against `DBOptions`. Seems like a copy-paste error. 4. Add a getter for WriteBufferManager. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4579 Differential Revision: D10561150 Pulled By: sagar0 fbshipit-source-id: 139a15c7f051a9f77b4200215b88267b48fbc487
-
Committed by Abhishek Madan
Summary: Previously, range tombstones were accumulated from every level, which was necessary if a range tombstone in a higher level covered a key in a lower level. However, RangeDelAggregator::AddTombstones's complexity is based on the number of tombstones that are currently stored in it, which is wasteful in the Get case, where we only need to know the highest sequence number of range tombstones that cover the key from higher levels, and compute the highest covering sequence number at the current level. This change introduces this optimization, and removes the use of RangeDelAggregator from the Get path. In the benchmark results, the following command was used to initialize the database: ``` ./db_bench -db=/dev/shm/5k-rts -use_existing_db=false -benchmarks=filluniquerandom -write_buffer_size=1048576 -compression_type=lz4 -target_file_size_base=1048576 -max_bytes_for_level_base=4194304 -value_size=112 -key_size=16 -block_size=4096 -level_compaction_dynamic_level_bytes=true -num=5000000 -max_background_jobs=12 -benchmark_write_rate_limit=20971520 -range_tombstone_width=100 -writes_per_range_tombstone=100 -max_num_range_tombstones=50000 -bloom_bits=8 ``` ...and the following command was used to measure read throughput: ``` ./db_bench -db=/dev/shm/5k-rts/ -use_existing_db=true -benchmarks=readrandom -disable_auto_compactions=true -num=5000000 -reads=100000 -threads=32 ``` The filluniquerandom command was only run once, and the resulting database was used to measure read performance before and after the PR. Both binaries were compiled with `DEBUG_LEVEL=0`. Readrandom results before PR: ``` readrandom : 4.544 micros/op 220090 ops/sec; 16.9 MB/s (63103 of 100000 found) ``` Readrandom results after PR: ``` readrandom : 11.147 micros/op 89707 ops/sec; 6.9 MB/s (63103 of 100000 found) ``` So it's actually slower right now, but this PR paves the way for future optimizations (see #4493). 
---- Pull Request resolved: https://github.com/facebook/rocksdb/pull/4449 Differential Revision: D10370575 Pulled By: abhimadan fbshipit-source-id: 9a2e152be1ef36969055c0e9eb4beb0d96c11f4d
-
Committed by Zhongyi Xie
Summary: PR https://github.com/facebook/rocksdb/pull/4226 introduced per-level perf context which allows breaking down perf context by levels. This PR takes advantage of the feature to populate a few counters related to bloom filters Pull Request resolved: https://github.com/facebook/rocksdb/pull/4581 Differential Revision: D10518010 Pulled By: miasantreble fbshipit-source-id: 011244561783ec860d32d5b0fa6bce6e78d70ef8
-
Committed by Simon Grätzer
Summary: SetId and GetId are the experimental APIs that have so far been used in WritePrepared and WriteUnPrepared transactions, where the id is assigned at prepare time. The patch extends the API to WriteCommitted transactions by setting the id at commit time. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4565 Differential Revision: D10557862 Pulled By: maysamyabandeh fbshipit-source-id: 2b27a140682b6185a4988fa88f8152628e0d67af
-
Committed by Sagar Vemuri
Summary: `WritableFileWrapper` was missing some newer methods that were added to `WritableFile`. Without these functions, the missing wrapper methods would fallback to using the default implementations in WritableFile instead of using the corresponding implementations in, say, `PosixWritableFile` or `WinWritableFile`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4584 Differential Revision: D10559199 Pulled By: sagar0 fbshipit-source-id: 0d0f18a486aee727d5b8eebd3110a41988e27391
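The failure mode described here is generic to the wrapper pattern: if a wrapper class misses an override for a newer virtual method, calls silently fall back to the base-class default instead of reaching the wrapped object. A minimal illustration with hypothetical classes (not the actual `WritableFile` API):

```cpp
#include <cassert>
#include <string>

// Hypothetical classes illustrating the wrapper bug fixed above: any
// virtual method the wrapper fails to override falls back to the base
// default instead of reaching the wrapped object.
class File {
 public:
  virtual ~File() {}
  virtual std::string Name() const { return "base-default"; }
};

class ConcreteFile : public File {
 public:
  std::string Name() const override { return "concrete"; }
};

class FileWrapper : public File {
 public:
  explicit FileWrapper(File* target) : target_(target) {}
  // Forward every virtual method to the wrapped object; omitting an
  // override like this one is exactly the kind of gap the commit closes.
  std::string Name() const override { return target_->Name(); }

 private:
  File* target_;
};
```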
-
Committed by Yi Wu
Summary: Option to print malloc stats to stdout at the end of db_bench. This is different from `--dump_malloc_stats`, which periodically print the same information to LOG file. Pull Request resolved: https://github.com/facebook/rocksdb/pull/4582 Differential Revision: D10520814 Pulled By: yiwu-arbug fbshipit-source-id: beff5e514e414079d31092b630813f82939ffe5c
-