1. 19 Nov 2019, 1 commit
  2. 13 Nov 2019, 1 commit
    • Batched MultiGet API for multiple column families (#5816) · 6c7b1a0c
      anand76 committed
      Summary:
      Add a new API that allows a user to call MultiGet specifying multiple keys belonging to different column families. This is mainly useful for users who want to do a consistent read of keys across column families, with the added performance benefits of batching and returning values using PinnableSlice.
      
      As part of this change, the code in the original multi-column-family MultiGet for acquiring the super versions has been refactored into a separate function that can be used by both the batching and non-batching versions of MultiGet.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5816
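
      For illustration, a minimal sketch of calling the new overload, assuming a DB already opened with two column-family handles (the key names are hypothetical):
      ```
      #include <rocksdb/db.h>

      // Consistent batched read across two column families; values come back
      // as PinnableSlice, so no copy is made for cached data.
      void CrossCfRead(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf1,
                       rocksdb::ColumnFamilyHandle* cf2) {
        constexpr size_t kNumKeys = 2;
        rocksdb::ColumnFamilyHandle* cfs[kNumKeys] = {cf1, cf2};
        rocksdb::Slice keys[kNumKeys] = {"key_in_cf1", "key_in_cf2"};
        rocksdb::PinnableSlice values[kNumKeys];
        rocksdb::Status statuses[kNumKeys];
        db->MultiGet(rocksdb::ReadOptions(), kNumKeys, cfs, keys, values,
                     statuses);
      }
      ```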
      
      Test Plan:
      make check
      make asan_check
      asan_crash_test
      
      Differential Revision: D18408676
      
      Pulled By: anand1976
      
      fbshipit-source-id: 933e7bec91dd70e7b633be4ff623a1116cc28c8d
  3. 12 Nov 2019, 1 commit
    • Fix a buffer overrun problem in BlockBasedTable::MultiGet (#6014) · 03ce7fb2
      anand76 committed
      Summary:
      The calculation in BlockBasedTable::MultiGet for the required buffer length for reading in compressed blocks is incorrect. It needs to take the 5-byte block trailer into account.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/6014
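
      In other words (a sketch only; `handle` stands for the BlockHandle of the block being read):
      ```
      // A compressed block on disk is followed by a 5-byte trailer
      // (1-byte compression type + 4-byte checksum), so the read buffer
      // must cover both.
      const size_t kBlockTrailerSize = 5;
      size_t buf_len = static_cast<size_t>(handle.size()) + kBlockTrailerSize;
      ```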
      
      Test Plan: Add a unit test DBBasicTest.MultiGetBufferOverrun that fails in asan_check before the fix, and passes after.
      
      Differential Revision: D18412753
      
      Pulled By: anand1976
      
      fbshipit-source-id: 754dfb66be1d5f161a7efdf87be872198c7e3b72
  4. 08 Nov 2019, 1 commit
  5. 25 Oct 2019, 1 commit
    • Misc hashing updates / upgrades (#5909) · ca7ccbe2
      Peter Dillinger committed
      Summary:
      - Updated our included xxhash implementation to version 0.7.2 (== the latest dev version as of 2019-10-09).
      - Using XXH_NAMESPACE (like other fb projects) to avoid potential name collisions.
      - Added fastrange64, and unit tests for it and fastrange32. These are faster alternatives to hash % range (see the sketch after this list).
      - Use preview version of XXH3 instead of MurmurHash64A for NPHash64
      -- Had to update cache_test to increase probability of passing for any given hash function.
      - Use fastrange64 instead of % with uses of NPHash64
      -- Had to fix WritePreparedTransactionTest.CommitOfDelayedPrepared to avoid deadlock apparently caused by new hash collision.
      - Set default seed for NPHash64 because specifying a seed rarely makes sense for it.
      - Removed unnecessary include xxhash.h in a popular .h file
      - Rename preview version of XXH3 to XXH3p for clarity and to ease backward compatibility in case final version of XXH3 is integrated.
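
      For context, fastrange replaces `hash % range` with a multiply-and-shift that needs no division. A minimal sketch of the 64-bit form (RocksDB's actual implementation may differ in detail):
      ```
      #include <cstdint>

      // Maps a 64-bit hash uniformly onto [0, range): floor(hash * range / 2^64).
      // Uses the gcc/clang __int128 extension for the wide multiply.
      inline uint64_t FastRange64(uint64_t hash, uint64_t range) {
        return static_cast<uint64_t>(
            (static_cast<unsigned __int128>(hash) * range) >> 64);
      }
      ```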
      
      Relying on existing unit tests for NPHash64-related changes. Each new implementation of fastrange64 passed unit tests when manipulating my local build to select it. I haven't done any integration performance tests, but I consider the improved performance of the pieces being swapped in to be well established.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5909
      
      Differential Revision: D18125196
      
      Pulled By: pdillinger
      
      fbshipit-source-id: f6bf83d49d20cbb2549926adf454fd035f0ecc0d
  6. 11 Oct 2019, 1 commit
    • MultiGet batching in memtable (#5818) · 4c49e38f
      Vijay Nadimpalli committed
      Summary:
      RocksDB has a MultiGet() API that implements batched key lookup for higher performance (https://github.com/facebook/rocksdb/blob/master/include/rocksdb/db.h#L468). Currently, batching is implemented in BlockBasedTableReader::MultiGet() for SST file lookups. One of the ways it improves performance is by pipelining bloom filter lookups (by prefetching required cachelines for all the keys in the batch, and then doing the probe) and thus hiding the cache miss latency. The same concept can be extended to the memtable as well. This PR involves implementing a pipelined bloom filter lookup in DynamicBloom, and implementing MemTable::MultiGet() that can leverage it.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5818
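
      The prefetch-then-probe pipeline can be sketched as follows (a hypothetical template; DynamicBloom's real interface differs in detail):
      ```
      #include <cstddef>
      #include <cstdint>

      // Issue prefetches for every key's filter cache lines first, then probe,
      // so one key's memory latency overlaps with work on the other keys.
      template <typename Bloom>
      void PipelinedMayContain(const Bloom& bloom, const uint64_t* hashes,
                               size_t n, bool* may_match) {
        for (size_t i = 0; i < n; ++i) {
          bloom.Prefetch(hashes[i]);  // assumed: prefetch the probe cache line
        }
        for (size_t i = 0; i < n; ++i) {
          may_match[i] = bloom.MayContainHash(hashes[i]);  // assumed probe call
        }
      }
      ```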
      
      Test Plan:
      Existing tests
      
      Performance Test:
      Ran the command below, which fills up the memtable, ensures there are no flushes, and then calls MultiGet. Ran it on master and on the new change, and saw at least a 1% performance improvement across all the test runs I did; sometimes the improvement was up to 5%.
      
      TEST_TMPDIR=/data/users/$USER/benchmarks/feature/ numactl -C 10 ./db_bench -benchmarks="fillseq,multireadrandom" -num=600000 -compression_type="none" -level_compaction_dynamic_level_bytes -write_buffer_size=200000000 -target_file_size_base=200000000 -max_bytes_for_level_base=16777216 -reads=90000 -threads=1 -compression_type=none -cache_size=4194304000 -batch_size=32 -disable_auto_compactions=true -bloom_bits=10 -cache_index_and_filter_blocks=true -pin_l0_filter_and_index_blocks_in_cache=true -multiread_batched=true -multiread_stride=4 -statistics -memtable_whole_key_filtering=true -memtable_bloom_size_ratio=10
      
      Differential Revision: D17578869
      
      Pulled By: vjnadimpalli
      
      fbshipit-source-id: 23dc651d9bf49db11d22375bf435708875a1f192
  7. 08 Oct 2019, 1 commit
  8. 21 Sep 2019, 1 commit
  9. 03 Sep 2019, 1 commit
    • Persistent globally unique DB ID in manifest (#5725) · 979fbdc6
      Vijay Nadimpalli committed
      Summary:
      Each DB has a globally unique ID. A DB can be physically copied around, or backed up and restored, and users should be able to identify it as the same DB. This unique ID is currently stored as plain text in the IDENTITY file under the DB directory. This approach introduces at least two problems: (1) the file is not checksummed; (2) the source of truth of a DB is the manifest file, which can be copied separately from the IDENTITY file, causing the DB ID to be wrong.
      The goal of this PR is to solve this problem by moving the DB ID into the manifest. To begin with, we will write to both the identity file and the manifest. Writing to the manifest is controlled via the flag write_dbid_to_manifest in Options, which defaults to false.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5725
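
      A minimal usage sketch (the path is hypothetical; `write_dbid_to_manifest` and `GetDbIdentity()` are the option and API described above):
      ```
      #include <rocksdb/db.h>
      #include <string>

      rocksdb::Options options;
      options.create_if_missing = true;
      options.write_dbid_to_manifest = true;  // also record the DB ID in MANIFEST

      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/dbid_demo", &db);
      if (s.ok()) {
        std::string id;
        db->GetDbIdentity(id);  // the globally unique ID discussed above
        delete db;
      }
      ```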
      
      Test Plan: Added unit tests.
      
      Differential Revision: D16963840
      
      Pulled By: vjnadimpalli
      
      fbshipit-source-id: 8a86a4c8c82c716003c40fd6b9d2d758030d92e9
  10. 30 Aug 2019, 1 commit
  11. 29 Aug 2019, 1 commit
    • Support row cache with batched MultiGet (#5706) · e1057033
      anand76 committed
      Summary:
      This PR adds support for row cache in ```rocksdb::TableCache::MultiGet```.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5706
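
      Enabling the row cache is a one-option change; a sketch (the ~4GB size mirrors the benchmark command below):
      ```
      #include <rocksdb/cache.h>
      #include <rocksdb/options.h>

      rocksdb::Options options;
      // Batched MultiGet now consults this per-row cache before falling
      // through to the block-based table lookup for each key.
      options.row_cache = rocksdb::NewLRUCache(4096ULL << 20);  // ~4GB
      ```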
      
      Test Plan:
      1. Unit tests in db_basic_test
      2. db_bench results with batch size of 2 (```Get``` is faster than ```MultiGet``` for single key) -
      Get -
      readrandom   :       3.935 micros/op 254116 ops/sec;   28.1 MB/s (22870998 of 22870999 found)
      MultiGet -
      multireadrandom :       3.743 micros/op 267190 ops/sec; (24047998 of 24047998 found)
      
      Command used -
      TEST_TMPDIR=/dev/shm/multiget numactl -C 10  ./db_bench -use_existing_db=true -use_existing_keys=false -benchmarks="readtorowcache,[read|multiread]random" -write_buffer_size=16777216 -target_file_size_base=4194304 -max_bytes_for_level_base=16777216 -num=12000000 -reads=12000000 -duration=90 -threads=1 -compression_type=none -cache_size=4194304000 -row_cache_size=4194304000 -batch_size=2 -disable_auto_compactions=true -bloom_bits=10 -cache_index_and_filter_blocks=true -pin_l0_filter_and_index_blocks_in_cache=true -multiread_batched=true -multiread_stride=131072
      
      Differential Revision: D17086297
      
      Pulled By: anand1976
      
      fbshipit-source-id: 85784378da913e05f1baf31ec1b4e7c9345e7f57
  12. 24 Aug 2019, 1 commit
    • Refactor trimming logic for immutable memtables (#5022) · 2f41ecfe
      Zhongyi Xie committed
      Summary:
      MyRocks currently sets `max_write_buffer_number_to_maintain` in order to maintain enough history for transaction conflict checking. The effectiveness of this approach depends on the size of memtables. When memtables are small, it may not keep enough history; when memtables are large, this may consume too much memory.
      We are proposing a new way to configure memtable list history: by limiting the memory usage of immutable memtables. The new option is `max_write_buffer_size_to_maintain` and it will take precedence over the old `max_write_buffer_number_to_maintain` if they are both set to non-zero values. The new option accounts for the total memory usage of flushed immutable memtables and mutable memtable. When the total usage exceeds the limit, RocksDB may start dropping immutable memtables (which is also called trimming history), starting from the oldest one.
      The old option's semantics actually work as both an upper bound and a lower bound: history trimming starts once the number of immutable memtables exceeds the limit, but the count never goes below (limit-1) due to trimming.
      In order to mimic that behavior with the new option, history trimming stops if dropping the next immutable memtable would cause the total memory usage to go below the size limit. For example, assume the size limit is 64MB and there are 3 immutable memtables with sizes of 20MB, 30MB, and 30MB. Although the total memory usage is 80MB > 64MB, dropping the oldest memtable would reduce usage to 60MB < 64MB, so in this case no memtable is dropped.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5022
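
      A configuration sketch of the new option (sizes are illustrative):
      ```
      #include <rocksdb/options.h>

      rocksdb::Options options;
      // Keep ~64MB of memtable history (mutable + flushed immutables) for
      // conflict checking; a non-zero value takes precedence over the old knob.
      options.max_write_buffer_size_to_maintain = 64LL << 20;
      options.max_write_buffer_number_to_maintain = 0;  // superseded above
      ```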
      
      Differential Revision: D14394062
      
      Pulled By: miasantreble
      
      fbshipit-source-id: 60457a509c6af89d0993f988c9b5c2aa9e45f5c5
  13. 22 Aug 2019, 1 commit
    • Fix MultiGet() bug when whole_key_filtering is disabled (#5665) · 9046bdc5
      anand76 committed
      Summary:
      The batched MultiGet() implementation was not correctly handling bloom filter lookups when whole_key_filtering is disabled. It was incorrectly skipping keys not in the prefix_extractor domain, and not calling the transform for keys that are in the domain. This PR fixes both problems by moving the domain check and transformation into the FilterBlockReader.
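
      For reference, a configuration that exercises the fixed path (prefix bloom only, whole-key filtering off; the prefix length is arbitrary):
      ```
      #include <rocksdb/filter_policy.h>
      #include <rocksdb/options.h>
      #include <rocksdb/slice_transform.h>
      #include <rocksdb/table.h>

      rocksdb::Options options;
      options.prefix_extractor.reset(rocksdb::NewFixedPrefixTransform(4));

      rocksdb::BlockBasedTableOptions table_options;
      table_options.whole_key_filtering = false;  // filter keys are prefixes only
      table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      options.table_factory.reset(
          rocksdb::NewBlockBasedTableFactory(table_options));
      ```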
      
      Tests:
      Unit test (confirmed failed before the fix)
      make check
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5665
      
      Differential Revision: D16902380
      
      Pulled By: anand1976
      
      fbshipit-source-id: a6be81ad68a6e37134a65246aec7a2c590eccf00
  14. 10 Aug 2019, 1 commit
    • Support loading custom objects in unit tests (#5676) · 5d9a67e7
      Yanqin Jin committed
      Summary:
      Most existing RocksDB unit tests run on `Env::Default()`. It will be useful to port the unit tests to non-default environments, e.g. `HdfsEnv`, etc.
      This pull request is one step towards this goal. If RocksDB unit tests are built with a static library exposing a function `RegisterCustomObjects()`, then it is possible to implement custom object registration logic in that library. RocksDB unit tests can then call `RegisterCustomObjects()` at startup.
      By default, `ROCKSDB_UNITTESTS_WITH_CUSTOM_OBJECTS_FROM_STATIC_LIBS` is not defined, thus this PR has no impact on existing RocksDB because `RegisterCustomObjects()` is a noop.
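
      Conceptually, the default looks like this sketch (the real wiring lives in the test harness):
      ```
      // When the macro is not defined, tests link this no-op. A custom static
      // library can define the macro and supply a real implementation that
      // registers, e.g., an HdfsEnv before the tests run.
      #ifndef ROCKSDB_UNITTESTS_WITH_CUSTOM_OBJECTS_FROM_STATIC_LIBS
      void RegisterCustomObjects(int /*argc*/, char** /*argv*/) {}
      #endif
      ```
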
      Test plan (on devserver):
      ```
      $make clean && COMPILE_WITH_ASAN=1 make -j32 all
      $make check
      ```
      All unit tests must pass.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5676
      
      Differential Revision: D16679157
      
      Pulled By: riversand963
      
      fbshipit-source-id: aca571af3fd0525277cdc674248d0fe06e060f9d
  15. 08 Jul 2019, 1 commit
    • Support GetAllKeyVersions() for non-default cf (#5544) · 7c76a7fb
      Yanqin Jin committed
      Summary:
      Previously `GetAllKeyVersions()` supported the default column family only. This PR adds support for other column families.
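
      A usage sketch, assuming `db` and a non-default `cf_handle` are already open (exact parameter names may differ slightly from rocksdb/utilities/debug.h):
      ```
      #include <rocksdb/utilities/debug.h>
      #include <vector>

      std::vector<rocksdb::KeyVersion> versions;
      // Dump up to 1000 internal key versions in ["a", "z") for this CF.
      rocksdb::Status s = rocksdb::GetAllKeyVersions(
          db, cf_handle, rocksdb::Slice("a"), rocksdb::Slice("z"),
          /*max_num_ikeys=*/1000, &versions);
      ```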
      
      Test plan (devserver):
      ```
      $make clean && COMPILE_WITH_ASAN=1 make -j32 db_basic_test
      $./db_basic_test --gtest_filter=DBBasicTest.GetAllKeyVersions
      ```
      All other unit tests must pass.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5544
      
      Differential Revision: D16147551
      
      Pulled By: riversand963
      
      fbshipit-source-id: 5a61aece2a32d789e150226a9b8d53f4a5760168
  16. 01 Jul 2019, 1 commit
    • MultiGet parallel IO (#5464) · 7259e28d
      anand76 committed
      Summary:
      Enhancement to MultiGet batching to read the data blocks required for the keys in a batch in parallel from disk. It uses the Env::MultiRead() API to read multiple blocks in one call and reduce latency.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5464
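
      The underlying interface takes an array of read requests against one file; a sketch (offsets and lengths are hypothetical):
      ```
      #include <rocksdb/env.h>

      // Read two blocks from one SST file in a single call. Assumes `file`
      // is an open rocksdb::RandomAccessFile.
      rocksdb::Status ReadTwoBlocks(rocksdb::RandomAccessFile* file) {
        static char buf0[4096], buf1[4096];
        rocksdb::ReadRequest reqs[2];
        reqs[0].offset = 0;
        reqs[0].len = sizeof(buf0);
        reqs[0].scratch = buf0;
        reqs[1].offset = 65536;
        reqs[1].len = sizeof(buf1);
        reqs[1].scratch = buf1;
        rocksdb::Status s = file->MultiRead(reqs, 2);
        // On return, each reqs[i].status and reqs[i].result is filled in.
        return s;
      }
      ```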
      
      Test Plan:
      1. make check
      2. make asan_check
      3. make asan_crash
      
      Differential Revision: D15911771
      
      Pulled By: anand1976
      
      fbshipit-source-id: 605036b9af0f90ca0020dc87c3a86b4da6e83394
  17. 06 Jun 2019, 1 commit
    • Add support for timestamp in Get/Put (#5079) · 340ed4fa
      Yanqin Jin committed
      Summary:
      It's useful to be able to (optionally) associate key-value pairs with user-provided timestamps. This PR is an early effort towards this goal and continues the work of facebook#4942. A suite of new unit tests exists in DBBasicTestWithTimestampWithParam. Timestamp support requires the user to provide the timestamp as a slice in `ReadOptions` and `WriteOptions`. All timestamps in the same database must share the same length and format, and the user is responsible for providing a comparator function (Comparator) that orders the <key, timestamp> tuples. Once created, the format and length of the timestamp cannot change (at least for now).
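
      A sketch of the slice-based plumbing described above (assumes a DB opened with a timestamp-aware comparator; the 8-byte encoding and the exact field shape are assumptions for this version):
      ```
      #include <rocksdb/db.h>
      #include <cstdint>
      #include <string>

      void PutAndGetAt(rocksdb::DB* db, uint64_t ts_val) {
        rocksdb::Slice ts(reinterpret_cast<const char*>(&ts_val), sizeof(ts_val));

        rocksdb::WriteOptions wopts;
        wopts.timestamp = &ts;  // assumed field per this change
        db->Put(wopts, "key", "value");

        rocksdb::ReadOptions ropts;
        ropts.timestamp = &ts;  // read as of this timestamp
        std::string value;
        db->Get(ropts, "key", &value);
      }
      ```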
      
      Test plan (on devserver):
      ```
      $COMPILE_WITH_ASAN=1 make -j32 all
      $./db_basic_test --gtest_filter=Timestamp/DBBasicTestWithTimestampWithParam.PutAndGet/*
      $make check
      ```
      All tests must pass.
      
      We also ran the following db_bench tests to verify whether there is any regression on Get/Put while timestamps are not enabled.
      ```
      $TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillseq,readrandom -num=1000000
      $TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -num=1000000
      ```
      Repeated 6 times for each version.
      
      Results are as follows:
      ```
      |        | readrandom | fillrandom |
      | master | 16.77 MB/s | 47.05 MB/s |
      | PR5079 | 16.44 MB/s | 47.03 MB/s |
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5079
      
      Differential Revision: D15132946
      
      Pulled By: riversand963
      
      fbshipit-source-id: 833a0d657eac21182f0f206c910a6438154c742c
  18. 31 May 2019, 1 commit
  19. 16 Apr 2019, 1 commit
    • Fix MultiGet ASSERT bug when passing unsorted result (#5195) · 3e63e553
      Yi Zhang committed
      Summary:
      Found this while test-driving the new MultiGet. If you pass an unsorted result with sorted_result = false, you'll incorrectly trigger the ASSERT even though we sort further down.
      
      I've also added a simple test covering the sorted_result=true/false scenarios, copied from MultiGetSimple.
      
      anand1976
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5195
      
      Differential Revision: D14935475
      
      Pulled By: yizhang82
      
      fbshipit-source-id: 1d2af5e3a003847d965066a16e3b19da68acf170
  20. 13 Apr 2019, 1 commit
  21. 12 Apr 2019, 1 commit
    • Introduce a new MultiGet batching implementation (#5011) · fefd4b98
      anand76 committed
      Summary:
      This PR introduces a new MultiGet() API, with the underlying implementation grouping keys based on SST file and batching lookups within a file. The reason for the new API is twofold: the definition allows callers to allocate storage for statuses and values on the stack instead of in a std::vector and to receive values as PinnableSlices in order to avoid copying, and it keeps the original MultiGet() implementation intact while we experiment with batching.
      
      Batching is useful when there is some spatial locality to the keys being queried, as well as at larger batch sizes. The main benefits are due to:
      1. Fewer function calls, especially to BlockBasedTableReader::MultiGet() and FullFilterBlockReader::KeysMayMatch()
      2. Bloom filter cachelines can be prefetched, hiding the cache miss latency
      
      The next step is to optimize the binary searches in the level_storage_info, index blocks and data blocks, since we could reduce the number of key comparisons if the keys are relatively close to each other. The batching optimizations also need to be extended to other formats, such as PlainTable and filter formats. This also needs to be added to db_stress.
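
      A sketch of the stack-allocated calling convention the new API enables (`db` and `cf` are assumed valid):
      ```
      #include <rocksdb/db.h>
      #include <array>

      void BatchedLookup(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf) {
        constexpr size_t kNum = 4;
        std::array<rocksdb::Slice, kNum> keys = {"k0", "k1", "k2", "k3"};
        std::array<rocksdb::PinnableSlice, kNum> values;  // no value copies
        std::array<rocksdb::Status, kNum> statuses;       // no heap allocation
        db->MultiGet(rocksdb::ReadOptions(), cf, kNum, keys.data(),
                     values.data(), statuses.data(), /*sorted_input=*/false);
      }
      ```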
      
      Benchmark results from db_bench for various batch size/locality of reference combinations are given below. Locality was simulated by offsetting the keys in a batch by a stride length. Each SST file is about 8.6MB uncompressed and key/value size is 16/100 uncompressed. To focus on the cpu benefit of batching, the runs were single threaded and bound to the same cpu to eliminate interference from other system events. The results show a 10-25% improvement in micros/op from smaller to larger batch sizes (4 - 32).
      
      Micros/op by batch size:
      
                               1     | 2     | 4     | 8     | 16    | 32
      
      Random pattern (stride length 0)
      Get                      4.158 | 4.109 | 4.026 | 4.05  | 4.1   | 4.074
      MultiGet (no batching)   4.438 | 4.302 | 4.165 | 4.122 | 4.096 | 4.075
      MultiGet (w/ batching)   4.461 | 4.256 | 4.277 | 4.11  | 4.182 | 4.14
      
      Good locality (stride length 16)
      Get                      4.048 | 3.659 | 3.248 | 2.99  | 2.84  | 2.753
      MultiGet (no batching)   4.429 | 3.728 | 3.406 | 3.053 | 2.911 | 2.781
      MultiGet (w/ batching)   4.452 | 3.45  | 2.833 | 2.451 | 2.233 | 2.135
      
      Good locality (stride length 256)
      Get                      4.066 | 3.786 | 3.581 | 3.447 | 3.415 | 3.232
      MultiGet (no batching)   4.406 | 4.005 | 3.644 | 3.49  | 3.381 | 3.268
      MultiGet (w/ batching)   4.393 | 3.649 | 3.186 | 2.882 | 2.676 | 2.62
      
      Medium locality (stride length 4096)
      Get                      4.012 | 3.922 | 3.768 | 3.61  | 3.582 | 3.555
      MultiGet (no batching)   4.364 | 4.057 | 3.791 | 3.65  | 3.57  | 3.465
      MultiGet (w/ batching)   4.479 | 3.758 | 3.316 | 3.077 | 2.959 | 2.891
      
      db_bench command used (on a DB with 4 levels and 12 million keys):
      TEST_TMPDIR=/dev/shm numactl -C 10  ./db_bench.tmp -use_existing_db=true -benchmarks="readseq,multireadrandom" -write_buffer_size=4194304 -target_file_size_base=4194304 -max_bytes_for_level_base=16777216 -num=12000000 -reads=12000000 -duration=90 -threads=1 -compression_type=none -cache_size=4194304000 -batch_size=32 -disable_auto_compactions=true -bloom_bits=10 -cache_index_and_filter_blocks=true -pin_l0_filter_and_index_blocks_in_cache=true -multiread_batched=true -multiread_stride=4
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5011
      
      Differential Revision: D14348703
      
      Pulled By: anand1976
      
      fbshipit-source-id: 774406dab3776d979c809522a67bedac6c17f84b
  22. 15 Feb 2019, 1 commit
    • Apply modernize-use-override (2nd iteration) · ca89ac2b
      Michael Liu committed
      Summary:
      Use C++11’s override and remove virtual where applicable.
      Changes are automatically generated.
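
      The transformation in miniature (hypothetical classes):
      ```
      class Base {
       public:
        virtual ~Base() = default;
        virtual int Value() const { return 0; }
      };

      class Derived : public Base {
       public:
        // Before: `virtual int Value() const { return 1; }`
        // After the codemod, `override` replaces the redundant `virtual`:
        int Value() const override { return 1; }
      };
      ```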
      
      Reviewed By: Orvid
      
      Differential Revision: D14090024
      
      fbshipit-source-id: 1e9432e87d2657e1ff0028e15370a85d1739ba2a
  23. 08 Feb 2019, 1 commit
  24. 03 Jan 2019, 1 commit
    • Lock free MultiGet (#4754) · b9d6ecca
      Anand Ananthabhotla committed
      Summary:
      Avoid locking the DB mutex in order to reference SuperVersions. Instead, we get the thread-local cached SuperVersion for each column family in the list. This depends on finding a sequence number that overlaps with all the open memtables. We start with the latest published sequence number, and if any of the memtables is sealed before we can get all the SuperVersions, the process is repeated. After a few attempts, we give up and lock the DB mutex.
      
      Tests:
      1. Unit tests
      2. make check
      3. db_bench -
      
      TEST_TMPDIR=/dev/shm ./db_bench -use_existing_db=true -benchmarks=readrandom -write_buffer_size=4194304 -target_file_size_base=4194304 -max_bytes_for_level_base=16777216 -num=5000000 -reads=1000000 -threads=32 -compression_type=none -cache_size=1048576000 -batch_size=1 -bloom_bits=1
      readrandom   :       0.167 micros/op 5983920 ops/sec;  426.2 MB/s (1000000 of 1000000 found)
      
      Multireadrandom with batch size 1:
      multireadrandom :       0.176 micros/op 5684033 ops/sec; (1000000 of 1000000 found)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4754
      
      Differential Revision: D13363550
      
      Pulled By: anand1976
      
      fbshipit-source-id: 6243e8de7dbd9c8bb490a8eca385da0c855b1dd4
  25. 10 Nov 2018, 1 commit
    • Update all unique/shared_ptr instances to be qualified with namespace std (#4638) · dc352807
      Sagar Vemuri committed
      Summary:
      Ran the following commands to recursively change all the files under RocksDB:
      ```
      find . -type f -name "*.cc" -exec sed -i 's/ unique_ptr/ std::unique_ptr/g' {} +
      find . -type f -name "*.cc" -exec sed -i 's/<unique_ptr/<std::unique_ptr/g' {} +
      find . -type f -name "*.cc" -exec sed -i 's/ shared_ptr/ std::shared_ptr/g' {} +
      find . -type f -name "*.cc" -exec sed -i 's/<shared_ptr/<std::shared_ptr/g' {} +
      ```
      Running `make format` updated some formatting on the files touched.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4638
      
      Differential Revision: D12934992
      
      Pulled By: sagar0
      
      fbshipit-source-id: 45a15d23c230cdd64c08f9c0243e5183934338a8
  26. 02 Nov 2018, 1 commit
  27. 14 Jul 2018, 1 commit
    • Per-thread unique test db names (#4135) · 8581a93a
      Maysam Yabandeh committed
      Summary:
      The patch makes sure that two parallel test threads will operate on different db paths. This enables using open source tools such as gtest-parallel to run the tests of a file in parallel.
      Example: `~/gtest-parallel/gtest-parallel ./table_test`
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/4135
      
      Differential Revision: D8846653
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 799bad1abb260e3d346bcb680d2ae207a852ba84
  28. 07 Mar 2018, 1 commit
    • Disallow compactions if there isn't enough free space · 0a3db28d
      amytai committed
      Summary:
      This diff handles cases where compaction causes an ENOSPC error.
      This does not handle corner cases where another background job is started while compaction is running, and the other background job triggers ENOSPC, although we do allow the user to provision for these background jobs with SstFileManager::SetCompactionBufferSize.
      It also does not handle the case where compaction has finished and some other background job independently triggers ENOSPC.
      
      Usage: The functionality is inside SstFileManager. In particular, users should set SstFileManager::SetMaxAllowedSpaceUsage, which is the reference high watermark for determining whether to cancel compactions.
      Closes https://github.com/facebook/rocksdb/pull/3449
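
      A configuration sketch (sizes are illustrative):
      ```
      #include <rocksdb/env.h>
      #include <rocksdb/options.h>
      #include <rocksdb/sst_file_manager.h>
      #include <memory>

      std::shared_ptr<rocksdb::SstFileManager> sfm(
          rocksdb::NewSstFileManager(rocksdb::Env::Default()));
      sfm->SetMaxAllowedSpaceUsage(100ULL << 30);  // high watermark: ~100GB
      sfm->SetCompactionBufferSize(2ULL << 30);    // headroom for other bg jobs

      rocksdb::Options options;
      options.sst_file_manager = sfm;  // pass to DB::Open
      ```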
      
      Differential Revision: D7016941
      
      Pulled By: amytai
      
      fbshipit-source-id: 8965ab8dd8b00972e771637a41b4e6c645450445
  29. 06 Mar 2018, 1 commit
  30. 24 Feb 2018, 1 commit
    • Fix the Logger::Close() and DBImpl::Close() design pattern · dfbe52e0
      Anand Ananthabhotla committed
      Summary:
      The recent Logger::Close() and DBImpl::Close() implementations rely on
      calling the CloseImpl() virtual function from the destructor, which will
      not work. Refactor the implementation to have a private close helper
      function in derived classes that can be called by both CloseImpl() and
      the destructor.
      Closes https://github.com/facebook/rocksdb/pull/3528
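
      A sketch of the pattern (hypothetical class names; virtual dispatch is disabled inside destructors, so the helper must be non-virtual):
      ```
      #include <rocksdb/status.h>
      using rocksdb::Status;

      class BaseLogger {
       public:
        virtual ~BaseLogger() {}
        Status Close() { return CloseImpl(); }

       protected:
        virtual Status CloseImpl() = 0;
      };

      class FileLogger : public BaseLogger {
       public:
        ~FileLogger() override { CloseHelper(); }  // safe: non-virtual call

       protected:
        Status CloseImpl() override { return CloseHelper(); }

       private:
        bool closed_ = false;
        Status CloseHelper() {  // flushes/closes the file exactly once
          if (closed_) return Status::OK();
          closed_ = true;
          return Status::OK();  // real close work elided
        }
      };
      ```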
      
      Reviewed By: gfosco
      
      Differential Revision: D7049303
      
      Pulled By: anand1976
      
      fbshipit-source-id: 76a64cbf403209216dfe4864ecf96b5d7f3db9f4
  31. 23 Feb 2018, 2 commits
  32. 06 Feb 2018, 1 commit
  33. 17 Jan 2018, 2 commits
    • Fix multiple build failures · dc360df8
      Yi Wu committed
      Summary:
      * Fix DBTest.CompactRangeWithEmptyBottomLevel lite build failure
      * Fix the DBTest.AutomaticConflictsWithManualCompaction failure introduced by #3366
      * Fix BlockBasedTableTest::IndexUncompressed, which should be disabled if snappy is disabled
      * Fix ASAN failure with DBBasicTest::DBClose test
      Closes https://github.com/facebook/rocksdb/pull/3373
      
      Differential Revision: D6732313
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: 1eb9b9d9a8d795f56188fa9770db9353f6fdedc5
    • Add a Close() method to DB to return status when closing a db · d0f1b49a
      Anand Ananthabhotla committed
      Summary:
      Currently, the only way to close an open DB is to destroy the DB
      object. There is no way for the caller to know the status. In one
      instance, the destructor encountered an error due to failure to
      close a log file on HDFS. In order to prevent silent failures, we add
      DB::Close() that calls CloseImpl() which must be implemented by its
      descendants.
      The main failure point in the destructor is closing the log file. This
      patch also adds a Close() entry point to Logger in order to get status.
      When DBOptions::info_log is allocated and owned by the DBImpl, it is
      explicitly closed by DBImpl::CloseImpl().
      Closes https://github.com/facebook/rocksdb/pull/3348
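
      Typical use of the new entry point (a sketch; error handling is up to the caller):
      ```
      #include <rocksdb/db.h>

      // Close explicitly so failures (e.g. closing a log file on HDFS)
      // surface as a Status instead of being swallowed by the destructor.
      rocksdb::Status s = db->Close();
      if (!s.ok()) {
        // report or handle the failure
      }
      delete db;  // still required to free the handle
      ```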
      
      Differential Revision: D6698158
      
      Pulled By: anand1976
      
      fbshipit-source-id: 9468e2892553eb09c4c41b8723f590c0dbd8ab7d
  34. 11 Jan 2018, 1 commit
  35. 20 Oct 2017, 1 commit
  36. 24 Aug 2017, 1 commit
  37. 19 Aug 2017, 1 commit
    • perf_context measure user bytes read · ed0a4c93
      Andrew Kryczka committed
      Summary:
      With this PR, we can measure read-amp for queries where perf_context is enabled as follows:
      
      ```
      SetPerfLevel(kEnableCount);
      Get(1, "foo");
      // Cast before dividing so the ratio isn't truncated by integer division.
      double read_amp = static_cast<double>(get_perf_context()->block_read_byte) /
                        get_perf_context()->get_read_bytes;
      SetPerfLevel(kDisable);
      ```
      
      Our internal infra enables perf_context for a sampling of queries, so we'll be able to compute the read-amp for the sampled set, which gives a good estimate of overall read-amp.
      Closes https://github.com/facebook/rocksdb/pull/2749
      
      Differential Revision: D5647240
      
      Pulled By: ajkr
      
      fbshipit-source-id: ad73550b06990cf040cc4528fa885360f308ec12
  38. 28 Jul 2017, 1 commit