1. 15 Aug 2019 (1 commit)
    • Fix regression affecting partitioned indexes/filters when... · d92a59b6
      Levi Tamasi authored
      Fix regression affecting partitioned indexes/filters when cache_index_and_filter_blocks is false (#5705)
      
      Summary:
      PR https://github.com/facebook/rocksdb/issues/5298 (and subsequent related patches) unintentionally changed the
      semantics of cache_index_and_filter_blocks: historically, this option
      only affected the main index/filter block; with the changes, it affects
      index/filter partitions as well. This can cause performance issues when
      cache_index_and_filter_blocks is false since in this case, partitions are
      neither cached nor preloaded (i.e. they are loaded on demand upon each
      access). The patch reverts to the earlier behavior, that is, partitions
      are cached similarly to data blocks regardless of the value of the above
      option.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5705
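
      For context, a minimal configuration sketch (my own, not part of the patch; these are standard BlockBasedTableOptions fields) of the combination the fix targets, i.e. partitioned indexes/filters with cache_index_and_filter_blocks left at false:
      ```
      #include <rocksdb/filter_policy.h>
      #include <rocksdb/options.h>
      #include <rocksdb/table.h>

      rocksdb::Options MakePartitionedOptions() {
        rocksdb::BlockBasedTableOptions table_options;
        // Partitioned index and partitioned filters.
        table_options.index_type =
            rocksdb::BlockBasedTableOptions::kTwoLevelIndexSearch;
        table_options.partition_filters = true;
        table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(20));
        // The affected setting: with this fix, index/filter partitions are again
        // cached like data blocks regardless of this value.
        table_options.cache_index_and_filter_blocks = false;

        rocksdb::Options options;
        options.table_factory.reset(
            rocksdb::NewBlockBasedTableFactory(table_options));
        return options;
      }
      ```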
      
      Test Plan:
      make check
      ./db_bench -benchmarks=fillrandom --statistics --stats_interval_seconds=1 --duration=30 --num=500000000 --bloom_bits=20 --partition_index_and_filters=true --cache_index_and_filter_blocks=false
      ./db_bench -benchmarks=readrandom --use_existing_db --statistics --stats_interval_seconds=1 --duration=10 --num=500000000 --bloom_bits=20 --partition_index_and_filters=true --cache_index_and_filter_blocks=false --cache_size=8000000000
      
      Relevant statistics from the readrandom benchmark with the old code:
      
      rocksdb.block.cache.index.miss COUNT : 0
      rocksdb.block.cache.index.hit COUNT : 0
      rocksdb.block.cache.index.add COUNT : 0
      rocksdb.block.cache.index.bytes.insert COUNT : 0
      rocksdb.block.cache.index.bytes.evict COUNT : 0
      rocksdb.block.cache.filter.miss COUNT : 0
      rocksdb.block.cache.filter.hit COUNT : 0
      rocksdb.block.cache.filter.add COUNT : 0
      rocksdb.block.cache.filter.bytes.insert COUNT : 0
      rocksdb.block.cache.filter.bytes.evict COUNT : 0
      
      With the new code:
      
      rocksdb.block.cache.index.miss COUNT : 2500
      rocksdb.block.cache.index.hit COUNT : 42696
      rocksdb.block.cache.index.add COUNT : 2500
      rocksdb.block.cache.index.bytes.insert COUNT : 4050048
      rocksdb.block.cache.index.bytes.evict COUNT : 0
      rocksdb.block.cache.filter.miss COUNT : 2500
      rocksdb.block.cache.filter.hit COUNT : 4550493
      rocksdb.block.cache.filter.add COUNT : 2500
      rocksdb.block.cache.filter.bytes.insert COUNT : 10331040
      rocksdb.block.cache.filter.bytes.evict COUNT : 0
      
      Differential Revision: D16817382
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 28a516b0da1f041a03313e0b70b28cf5cf205d00
  2. 07 Aug 2019 (1 commit)
    • New API to get all merge operands for a Key (#5604) · d150e014
      Vijay Nadimpalli authored
      Summary:
      This is a new API added to db.h that allows fetching all merge operands associated with a key. The main motivation is to support performance-sensitive use cases where a full online merge is not necessary. Example use cases:
      1. Updating a subset of columns and reading a subset of columns -
      Imagine a SQL table where a row is encoded as a K/V pair (as is done in MyRocks). If there are many columns and users update only one of them, we can use the merge operator to reduce write amplification. When users read only one or two columns in a query, this feature avoids a full merge of the whole row and saves some CPU.
      2. Updating very few attributes in a value which is a JSON-like document -
      Updating one attribute can be done efficiently using the merge operator, while reading back one attribute is more efficient if we don't need to do a full merge.
      ----------------------------------------------------------------------------------------------------
      API :
      Status GetMergeOperands(
            const ReadOptions& options, ColumnFamilyHandle* column_family,
            const Slice& key, PinnableSlice* merge_operands,
            GetMergeOperandsOptions* get_merge_operands_options,
            int* number_of_operands)
      
      Example usage :
      int size = 100;
      int number_of_operands = 0;
      std::vector<PinnableSlice> values(size);
      GetMergeOperandsOptions merge_operands_info;
      merge_operands_info.expected_max_number_of_operands = size;
      db_->GetMergeOperands(ReadOptions(), db_->DefaultColumnFamily(), "k1", values.data(), &merge_operands_info, &number_of_operands);
      
      Description :
      Returns all the merge operands corresponding to the key. If the number of merge operands in the DB is greater than merge_operands_options.expected_max_number_of_operands, no merge operands are returned and the status is Incomplete. Merge operands are returned in the order of insertion.
      merge_operands -> points to an array of at least merge_operands_options.expected_max_number_of_operands entries, which the caller is responsible for allocating. If the returned status is Incomplete, number_of_operands will contain the total number of merge operands found in the DB for the key.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5604
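
      A slightly fuller usage sketch (my own, not from the PR) of the retry pattern the description implies when the operand count exceeds expected_max_number_of_operands:
      ```
      #include <vector>
      #include <rocksdb/db.h>

      rocksdb::Status ReadAllOperands(rocksdb::DB* db,
                                      std::vector<rocksdb::PinnableSlice>* operands,
                                      int* count) {
        rocksdb::GetMergeOperandsOptions info;
        info.expected_max_number_of_operands = 100;
        operands->resize(info.expected_max_number_of_operands);
        rocksdb::Status s = db->GetMergeOperands(
            rocksdb::ReadOptions(), db->DefaultColumnFamily(), "k1",
            operands->data(), &info, count);
        if (s.IsIncomplete()) {
          // *count holds the total number of operands found for the key;
          // grow the buffer and retry.
          info.expected_max_number_of_operands = *count;
          operands->resize(*count);
          s = db->GetMergeOperands(rocksdb::ReadOptions(),
                                   db->DefaultColumnFamily(), "k1",
                                   operands->data(), &info, count);
        }
        return s;
      }
      ```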
      
      Test Plan:
      Added unit test and perf test in db_bench that can be run using the command:
      ./db_bench -benchmarks=getmergeoperands --merge_operator=sortlist
      
      Differential Revision: D16657366
      
      Pulled By: vjnadimpalli
      
      fbshipit-source-id: 0faadd752351745224ee12d4ae9ef3cb529951bf
  3. 24 Jul 2019 (2 commits)
    • Move the uncompression dictionary object out of the block cache (#5584) · 092f4170
      Levi Tamasi authored
      Summary:
      RocksDB has historically stored uncompression dictionary objects in the block
      cache as opposed to storing just the block contents. This necessitated
      evicting the object upon table close. With the new code, only the raw blocks
      are stored in the cache, eliminating the need for eviction.
      
      In addition, the patch makes the following improvements:
      
      1) Compression dictionary blocks are now prefetched/pinned similarly to
      index/filter blocks.
      2) A copy operation got eliminated when the uncompression dictionary is
      retrieved.
      3) Errors related to retrieving the uncompression dictionary are propagated as
      opposed to silently ignored.
      
      Note: the patch temporarily breaks the compression dictionary eviction stats.
      They will be fixed in a separate phase.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5584
      
      Test Plan: make asan_check
      
      Differential Revision: D16344151
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 2962b295f5b19628f9da88a3fcebbce5a5017a7b
    • Improve CPU Efficiency of ApproximateSize (part 1) (#5613) · 6b7fcc0d
      Eli Pozniansky authored
      Summary:
      1. Avoid creating the iterator in order to call BlockBasedTable::ApproximateOffsetOf(). Instead, directly call into it.
      2. Optimize BlockBasedTable::ApproximateOffsetOf() to keep the index block iterator on the stack.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5613
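
      For reference, a usage sketch of the public entry point whose implementation this PR speeds up (DB::GetApproximateSizes; the key range below is made up):
      ```
      #include <cstdint>
      #include <rocksdb/db.h>

      uint64_t ApproximateBytes(rocksdb::DB* db) {
        // Estimate the on-disk size of the key range ["a", "z").
        rocksdb::Range range("a", "z");
        uint64_t size = 0;
        db->GetApproximateSizes(&range, 1, &size);
        return size;
      }
      ```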
      
      Differential Revision: D16442660
      
      Pulled By: elipoz
      
      fbshipit-source-id: 9320be3e918c139b10e758cbbb684706d172e516
  4. 18 Jul 2019 (1 commit)
  5. 17 Jul 2019 (1 commit)
    • Move the filter readers out of the block cache (#5504) · 3bde41b5
      Levi Tamasi authored
      Summary:
      Currently, when the block cache is used for the filter block, it is not
      really the block itself that is stored in the cache but a FilterBlockReader
      object. Since this object is not pure data (it has, for instance, pointers that
      might dangle, including in one case a back pointer to the TableReader), it's not
      really sharable. To avoid the issues around this, the current code erases the
      cache entries when the TableReader is closed (which, BTW, is not sufficient
      since a concurrent TableReader might have picked up the object in the meantime).
      Instead of doing this, the patch moves the FilterBlockReader out of the cache
      altogether, and decouples the filter reader object from the filter block.
      In particular, instead of the TableReader owning, or caching/pinning the
      FilterBlockReader (based on the customer's settings), with the change the
      TableReader unconditionally owns the FilterBlockReader, which in turn
      owns/caches/pins the filter block. This change also enables us to reuse the code
      paths historically used for data blocks for filters as well.
      
      Note:
      Eviction statistics for filter blocks are temporarily broken. We plan to fix this in a
      separate phase.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5504
      
      Test Plan: make asan_check
      
      Differential Revision: D16036974
      
      Pulled By: ltamasi
      
      fbshipit-source-id: 770f543c5fb4ed126fd1e04bfd3809cf4ff9c091
  6. 08 Jul 2019 (1 commit)
    • Fix -Werror=shadow (#5546) · 6ca3feed
      haoyuhuang authored
      Summary:
      This PR fixes shadow errors.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5546
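
      An illustrative (made-up) example of the kind of -Wshadow error such a cleanup addresses; an inner declaration reusing an outer name is simply renamed:
      ```
      #include <vector>

      // Before the fix, the loop variable reused the name of the parameter,
      // which -Werror=shadow rejects; renaming it resolves the warning.
      int CountPositive(const std::vector<int>& values) {
        int count = 0;
        for (int value : values) {
          if (value > 0) {
            ++count;
          }
        }
        return count;
      }
      ```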
      
      Test Plan: make clean && make check -j32 && make clean && USE_CLANG=1 make check -j32 && make clean && COMPILE_WITH_ASAN=1 make check -j32
      
      Differential Revision: D16147841
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: 1043500d70c134185f537ab4c3900452752f1534
  7. 06 Jul 2019 (1 commit)
    • Assert get_context not null in BlockBasedTable::Get() (#5542) · 2de61d91
      sdong authored
      Summary:
      clang analyze fails after https://github.com/facebook/rocksdb/pull/5514 for this failure:
      table/block_based/block_based_table_reader.cc:3450:16: warning: Called C++ object pointer is null
                if (!get_context->SaveValue(
                     ^~~~~~~~~~~~~~~~~~~~~~~
      1 warning generated.
      
      The reason is that a branch on whether get_context is null was added earlier in the function, so clang analyze concludes that it can be null at the later call site, where the call is made without a null check.
      Fix the issue by removing the branch and adding an assert.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5542
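
      A simplified illustration (not the actual diff) of the pattern described: replace the null branch with an assert so the analyzer knows the pointer cannot be null at the call site.
      ```
      #include <cassert>

      struct GetContext {
        bool SaveValue(int v) { return v > 0; }
      };

      void Lookup(GetContext* get_context, int v) {
        // Previously a null check here made clang analyze consider a null path;
        // asserting the precondition removes the branch and the false positive.
        assert(get_context != nullptr);
        if (!get_context->SaveValue(v)) {
          // handle the stop condition as appropriate
        }
      }
      ```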
      
      Test Plan: "make all check" passes and CLANG analyze failure goes away.
      
      Differential Revision: D16133988
      
      fbshipit-source-id: d4627d03c4746254cc11926c523931086ccebcda
  8. 04 Jul 2019 (1 commit)
  9. 03 Jul 2019 (1 commit)
  10. 01 Jul 2019 (1 commit)
    • MultiGet parallel IO (#5464) · 7259e28d
      anand76 authored
      Summary:
      Enhancement to MultiGet batching to read data blocks required for keys in a batch in parallel from disk. It uses Env::MultiRead() API to read multiple blocks and reduce latency.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5464
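
      A usage sketch of the batched MultiGet interface this enhancement applies to (the exact overload is assumed from this era of the API; the keys are made up):
      ```
      #include <vector>
      #include <rocksdb/db.h>

      void BatchedLookup(rocksdb::DB* db) {
        std::vector<rocksdb::Slice> keys = {"k1", "k2", "k3"};
        std::vector<rocksdb::PinnableSlice> values(keys.size());
        std::vector<rocksdb::Status> statuses(keys.size());
        // With this change, the data blocks needed by the batch can be read
        // from disk in parallel via Env::MultiRead().
        db->MultiGet(rocksdb::ReadOptions(), db->DefaultColumnFamily(),
                     keys.size(), keys.data(), values.data(), statuses.data());
      }
      ```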
      
      Test Plan:
      1. make check
      2. make asan_check
      3. make asan_crash
      
      Differential Revision: D15911771
      
      Pulled By: anand1976
      
      fbshipit-source-id: 605036b9af0f90ca0020dc87c3a86b4da6e83394
  11. 28 Jun 2019 (1 commit)
    • LRU Cache to enable mid-point insertion by default (#5508) · 15fd3be0
      sdong authored
      Summary:
      Mid-point insertion is a useful feature and is now mature, so make it the default. Also change the default of cache_index_and_filter_blocks_with_high_priority to true accordingly, so that index and filter blocks are not evicted more easily after this change, avoiding surprises for users.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5508
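
      A configuration sketch (my own, not from the patch) of the settings involved: the LRU cache's high-priority pool drives mid-point insertion, and index/filter blocks are inserted with high priority.
      ```
      #include <rocksdb/cache.h>
      #include <rocksdb/table.h>

      rocksdb::BlockBasedTableOptions MakeTableOptions() {
        rocksdb::LRUCacheOptions cache_opts;
        cache_opts.capacity = 8ull << 30;      // 8 GB block cache
        cache_opts.high_pri_pool_ratio = 0.5;  // pool used for mid-point insertion

        rocksdb::BlockBasedTableOptions table_opts;
        table_opts.block_cache = rocksdb::NewLRUCache(cache_opts);
        table_opts.cache_index_and_filter_blocks = true;
        // Now true by default per this change; shown explicitly for clarity.
        table_opts.cache_index_and_filter_blocks_with_high_priority = true;
        return table_opts;
      }
      ```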
      
      Test Plan: Run all existing tests.
      
      Differential Revision: D16021179
      
      fbshipit-source-id: ce8456e8d43b3bfb48df6c304b5290a9d19817eb
  12. 27 Jun 2019 (1 commit)
    • Block cache tracer: Do not populate block cache trace record when tracing is disabled. (#5510) · a8975b62
      haoyuhuang authored
      Summary:
      This PR makes sure that trace record is not populated when tracing is disabled.
      
      Before this PR:
      DB path: [/data/mysql/rocks_regression_tests/OPTIONS-myrocks-40-33-10000000/2019-06-26-13-04-41/db]
      readwhilewriting :       9.803 micros/op 1550408 ops/sec;  107.9 MB/s (5000000 of 5000000 found)
      Microseconds per read:
      Count: 80000000 Average: 9.8045  StdDev: 12.64
      Min: 1  Median: 7.5246  Max: 25343
      Percentiles: P50: 7.52 P75: 12.10 P99: 37.44 P99.9: 75.07 P99.99: 133.60
      
      After this PR:
      DB path: [/data/mysql/rocks_regression_tests/OPTIONS-myrocks-40-33-10000000/2019-06-26-14-08-21/db]
      readwhilewriting :       8.723 micros/op 1662882 ops/sec;  115.8 MB/s (5000000 of 5000000 found)
      Microseconds per read:
      Count: 80000000 Average: 8.7236  StdDev: 12.19
      Min: 1  Median: 6.7262  Max: 25229
      Percentiles: P50: 6.73 P75: 10.50 P99: 31.54 P99.9: 74.81 P99.99: 132.82
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5510
      
      Differential Revision: D16016428
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: 3b3d11e6accf207d18ec2545b802aa01ee65901f
  13. 26 Jun 2019 (1 commit)
  14. 25 Jun 2019 (1 commit)
    • Add an option to put first key of each sst block in the index (#5289) · b4d72094
      Mike Kolupaev authored
      Summary:
      The first key is used to defer reading the data block until this file gets to the top of merging iterator's heap. For short range scans, most files never make it to the top of the heap, so this change can reduce read amplification by a lot sometimes.
      
      Consider the following workload. There are a few data streams (we'll be calling them "logs"), each stream consisting of a sequence of blobs (we'll be calling them "records"). Each record is identified by log ID and a sequence number within the log. RocksDB key is concatenation of log ID and sequence number (big endian). Reads are mostly relatively short range scans, each within a single log. Writes are mostly sequential for each log, but writes to different logs are randomly interleaved. Compactions are disabled; instead, when we accumulate a few tens of sst files, we create a new column family and start writing to it.
      
      So, a typical sst file consists of a few ranges of blocks, each range corresponding to one log ID (we use FlushBlockPolicy to cut blocks at log boundaries). A typical read would go like this. First, iterator Seek() reads one block from each sst file. Then a series of Next()s move through one sst file (since writes to each log are mostly sequential) until the subiterator reaches the end of this log in this sst file; then Next() switches to the next sst file and reads sequentially from that, and so on. Often a range scan will only return records from a small number of blocks in a small number of sst files; in this case, the cost of the initial Seek() reading one block from each file may be bigger than the cost of reading the actually useful blocks.
      
      Neither iterate_upper_bound nor bloom filters can prevent reading one block from each file in Seek(). But this PR can: if the index contains first key from each block, we don't have to read the block until this block actually makes it to the top of merging iterator's heap, so for short range scans we won't read any blocks from most of the sst files.
      
      This PR does the deferred block loading inside value() call. This is not ideal: there's no good way to report an IO error from inside value(). As discussed with siying offline, it would probably be better to change InternalIterator's interface to explicitly fetch deferred value and get status. I'll do it in a separate PR.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5289
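
      A configuration sketch of how a user might opt into the new index format; I am assuming it is exposed as the kBinarySearchWithFirstKey index type, which should be checked against the actual patch:
      ```
      #include <rocksdb/options.h>
      #include <rocksdb/table.h>

      rocksdb::Options MakeFirstKeyIndexOptions() {
        rocksdb::BlockBasedTableOptions table_options;
        // Store the first key of each block in the index so short range scans
        // can defer reading a data block until it reaches the top of the
        // merging iterator's heap.
        table_options.index_type =
            rocksdb::BlockBasedTableOptions::kBinarySearchWithFirstKey;

        rocksdb::Options options;
        options.table_factory.reset(
            rocksdb::NewBlockBasedTableFactory(table_options));
        return options;
      }
      ```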
      
      Differential Revision: D15256423
      
      Pulled By: al13n321
      
      fbshipit-source-id: 750e4c39ce88e8d41662f701cf6275d9388ba46a
  15. 21 Jun 2019 (2 commits)
    • Add more callers for table reader. (#5454) · 705b8eec
      haoyuhuang authored
      Summary:
      This PR adds more callers for table readers. This information is only used for block cache analysis so that we can know which caller accesses a block.
      1. It renames the BlockCacheLookupCaller to TableReaderCaller as passing the caller from upstream requires changes to table_reader.h and TableReaderCaller is a more appropriate name.
      2. It adds more table reader callers in table/table_reader_caller.h, e.g., kCompactionRefill, kExternalSSTIngestion, and kBuildTable.
      
      This PR is long as it requires modification of interfaces in table_reader.h, e.g., NewIterator.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5454
      
      Test Plan: make clean && COMPILE_WITH_ASAN=1 make check -j32.
      
      Differential Revision: D15819451
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: b6caa704c8fb96ddd15b9a934b7e7ea87f88092d
    • sanitize and limit block_size under 4GB (#5492) · 24f73436
      Zhongyi Xie authored
      Summary:
      `Block::restart_index_`, `Block::restarts_`, and `Block::current_` are defined as uint32_t but `BlockBasedTableOptions::block_size` is defined as a size_t, so a user might see corruption as in https://github.com/facebook/rocksdb/issues/5486.
      This PR adds a check in `BlockBasedTableFactory::SanitizeOptions` to disallow such configurations.
      yiwu-arbug
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5492
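
      A hypothetical sketch of the sanity check being described (the real check lives in BlockBasedTableFactory::SanitizeOptions):
      ```
      #include <cstddef>
      #include <cstdint>
      #include <limits>

      // Block's internal offsets (restart_index_, restarts_, current_) are
      // uint32_t, so a block_size that does not fit in 32 bits can silently
      // overflow and show up as corruption; reject such configurations up front.
      bool BlockSizeIsSane(size_t block_size) {
        return block_size <= std::numeric_limits<uint32_t>::max();
      }
      ```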
      
      Differential Revision: D15914047
      
      Pulled By: miasantreble
      
      fbshipit-source-id: c943f153d967e15aee7f2795730ab8259e2be201
  16. 20 Jun 2019 (1 commit)
  17. 19 Jun 2019 (2 commits)
  18. 15 Jun 2019 (1 commit)
  19. 14 Jun 2019 (1 commit)
  20. 12 Jun 2019 (1 commit)
  21. 11 Jun 2019 (2 commits)
    • Create a BlockCacheLookupContext to enable fine-grained block cache tracing. (#5421) · 5efa0d6b
      haoyuhuang authored
      Summary:
      BlockCacheLookupContext only contains the caller for now.
      We will trace block accesses at five places:
      1. BlockBasedTable::GetFilter.
      2. BlockBasedTable::GetUncompressedDict.
      3. BlockBasedTable::MaybeReadAndLoadToCache. (To trace access on data, index, and range deletion block.)
      4. BlockBasedTable::Get. (To trace the referenced key and whether the referenced key exists in a fetched data block.)
      5. BlockBasedTable::MultiGet. (To trace the referenced key and whether the referenced key exists in a fetched data block.)
      
      We create the context at:
      1. BlockBasedTable::Get. (kUserGet)
      2. BlockBasedTable::MultiGet. (kUserMGet)
      3. BlockBasedTable::NewIterator. (either kUserIterator, kCompaction, or external SST ingestion calls this function.)
      4. BlockBasedTable::Open. (kPrefetch)
      5. Index/Filter::CacheDependencies. (kPrefetch)
      6. BlockBasedTable::ApproximateOffsetOf. (kCompaction or kUserApproximateSize).
      
      I loaded 1 million key-value pairs into the database and ran the readrandom benchmark with a single thread. I gave the block cache 10 GB to make sure all reads hit the block cache after warmup. The throughput is comparable.
      Throughput of this PR: 231334 ops/s.
      Throughput of the master branch: 238428 ops/s.
      
      Experiment setup:
      RocksDB:    version 6.2
      Date:       Mon Jun 10 10:42:51 2019
      CPU:        24 * Intel Core Processor (Skylake)
      CPUCache:   16384 KB
      Keys:       20 bytes each
      Values:     100 bytes each (100 bytes after compression)
      Entries:    1000000
      Prefix:    20 bytes
      Keys per prefix:    0
      RawSize:    114.4 MB (estimated)
      FileSize:   114.4 MB (estimated)
      Write rate: 0 bytes/second
      Read rate: 0 ops/second
      Compression: NoCompression
      Compression sampling rate: 0
      Memtablerep: skip_list
      Perf Level: 1
      
      Load command: ./db_bench --benchmarks="fillseq" --key_size=20 --prefix_size=20 --keys_per_prefix=0 --value_size=100 --statistics --cache_index_and_filter_blocks --cache_size=10737418240 --disable_auto_compactions=1 --disable_wal=1 --compression_type=none --min_level_to_compress=-1 --compression_ratio=1 --num=1000000
      
      Run command: ./db_bench --benchmarks="readrandom,stats" --use_existing_db --threads=1 --duration=120 --key_size=20 --prefix_size=20 --keys_per_prefix=0 --value_size=100 --statistics --cache_index_and_filter_blocks --cache_size=10737418240 --disable_auto_compactions=1 --disable_wal=1 --compression_type=none --min_level_to_compress=-1 --compression_ratio=1 --num=1000000 --duration=120
      
      TODOs:
      1. Create a caller for external SST file ingestion and differentiate the callers for iterator.
      2. Integrate tracer to trace block cache accesses.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5421
      
      Differential Revision: D15704258
      
      Pulled By: HaoyuHuang
      
      fbshipit-source-id: 4aa8a55f8cb1576ffb367bfa3186a91d8f06d93a
    • Reuse data block iterator in BlockBasedTableReader::MultiGet() (#5314) · 63ace8ef
      anand76 authored
      Summary:
      Instead of creating a new DataBlockIterator for every key in a MultiGet batch, reuse it if the next key is in the same block. This results in a small 1-2% cpu improvement.
      
      TEST_TMPDIR=/dev/shm/multiget numactl -C 10  ./db_bench.tmp -use_existing_db=true -benchmarks="readseq,multireadrandom" -write_buffer_size=4194304 -target_file_size_base=4194304 -max_bytes_for_level_base=16777216 -num=12000000 -reads=12000000 -duration=90 -threads=1 -compression_type=none -cache_size=4194304000 -batch_size=32 -disable_auto_compactions=true -bloom_bits=10 -cache_index_and_filter_blocks=true -pin_l0_filter_and_index_blocks_in_cache=true -multiread_batched=true -multiread_stride=4
      
      Without the change -
      multireadrandom :       3.066 micros/op 326122 ops/sec; (29375968 of 29375968 found)
      
      With the change -
      multireadrandom :       3.003 micros/op 332945 ops/sec; (29983968 of 29983968 found)
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5314
      
      Differential Revision: D15742108
      
      Pulled By: anand1976
      
      fbshipit-source-id: 220fb0b8eea9a0d602ddeb371528f7af7936d771
  22. 08 Jun 2019 (1 commit)
  23. 07 Jun 2019 (2 commits)
    • simplify include directive involving inttypes (#5402) · d68f9f45
      Zhongyi Xie authored
      Summary:
      When using the `PRIu64` family of printf specifiers, the current code base does the following:
      ```
      #ifndef __STDC_FORMAT_MACROS
      #define __STDC_FORMAT_MACROS
      #endif
      #include <inttypes.h>
      ```
      However, this can be simplified to
      ```
      #include <cinttypes>
      ```
      as long as flag `-std=c++11` is used.
      This should solve issues like https://github.com/facebook/rocksdb/issues/5159
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5402
      
      Differential Revision: D15701195
      
      Pulled By: miasantreble
      
      fbshipit-source-id: 6dac0a05f52aadb55e9728038599d3d2e4b59d03
    • Refactor the handling of cache related counters and statistics (#5408) · bee2f48a
      Levi Tamasi authored
      Summary:
      The patch cleans up the handling of cache hit/miss/insertion related
      performance counters, get context counters, and statistics by
      eliminating some code duplication and factoring out the affected logic
      into separate methods. In addition, it makes the semantics of cache hit
      metrics more consistent by changing the code so that accessing a
      partition of partitioned indexes/filters through a pinned reference no
      longer counts as a cache hit.
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5408
      
      Differential Revision: D15610883
      
      Pulled By: ltamasi
      
      fbshipit-source-id: ee749c18965077aca971d8f8bee8b24ed8fa76f1
  24. 06 Jun 2019 (1 commit)
    • Add support for timestamp in Get/Put (#5079) · 340ed4fa
      Yanqin Jin authored
      Summary:
      It's useful to be able to (optionally) associate key-value pairs with user-provided timestamps. This PR is an early effort towards this goal and continues the work of facebook#4942. A suite of new unit tests exist in DBBasicTestWithTimestampWithParam. Support for timestamp requires the user to provide timestamp as a slice in `ReadOptions` and `WriteOptions`. All timestamps of the same database must share the same length, format, etc. The format of the timestamp is the same throughout the same database, and the user is responsible for providing a comparator function (Comparator) to order the <key, timestamp> tuples. Once created, the format and length of the timestamp cannot change (at least for now).
      
      Test plan (on devserver):
      ```
      $COMPILE_WITH_ASAN=1 make -j32 all
      $./db_basic_test --gtest_filter=Timestamp/DBBasicTestWithTimestampWithParam.PutAndGet/*
      $make check
      ```
      All tests must pass.
      
      We also run the following db_bench tests to verify whether there is regression on Get/Put while timestamp is not enabled.
      ```
      $TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillseq,readrandom -num=1000000
      $TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -num=1000000
      ```
      Repeat for 6 times for both versions.
      
      Results are as follows:
      ```
      |        | readrandom | fillrandom |
      | master | 16.77 MB/s | 47.05 MB/s |
      | PR5079 | 16.44 MB/s | 47.03 MB/s |
      ```
      Pull Request resolved: https://github.com/facebook/rocksdb/pull/5079
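
      A rough usage sketch based on the description above; the timestamp is supplied as a Slice through ReadOptions/WriteOptions, but the exact field names and the comparator wiring are assumptions, not taken from the patch:
      ```
      #include <string>
      #include <rocksdb/db.h>

      // Assumes a fixed-width encoded timestamp and a DB opened with a user
      // comparator that orders <key, timestamp> tuples as required by this PR.
      rocksdb::Status PutWithTimestamp(rocksdb::DB* db, const rocksdb::Slice& key,
                                       const rocksdb::Slice& value,
                                       const std::string& ts) {
        rocksdb::Slice ts_slice(ts);
        rocksdb::WriteOptions write_opts;
        write_opts.timestamp = &ts_slice;  // assumed field name
        return db->Put(write_opts, key, value);
      }

      rocksdb::Status GetWithTimestamp(rocksdb::DB* db, const rocksdb::Slice& key,
                                       const std::string& ts, std::string* value) {
        rocksdb::Slice ts_slice(ts);
        rocksdb::ReadOptions read_opts;
        read_opts.timestamp = &ts_slice;  // assumed field name
        return db->Get(read_opts, key, value);
      }
      ```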
      
      Differential Revision: D15132946
      
      Pulled By: riversand963
      
      fbshipit-source-id: 833a0d657eac21182f0f206c910a6438154c742c
  25. 04 Jun 2019 (1 commit)
  26. 01 Jun 2019 (2 commits)
  27. 31 May 2019 (2 commits)