1. Jul 21, 2016 (1 commit)
    • Introduce FullMergeV2 (eliminate memcpy from merge operators) · 68a8e6b8
      Islam AbdelRahman committed
      Summary:
      This diff updates the code to pin the merge operator operands while the merge operation is done, so that we can eliminate the memcpy cost. To do that we need a new public API for FullMerge that replaces the std::deque<std::string> with std::vector<Slice>.
      
      This diff is stacked on top of D56493 and D56511
      
      In this diff we
      - Update FullMergeV2 arguments to be encapsulated in MergeOperationInput and MergeOperationOutput which will make it easier to add new arguments in the future
      - Replace std::deque<std::string> with std::vector<Slice> to pass operands
      - Replace MergeContext std::deque with std::vector (based on a simple benchmark I ran https://gist.github.com/IslamAbdelRahman/78fc86c9ab9f52b1df791e58943fb187)
      - Allow FullMergeV2 output to be an existing operand
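      The new interface shape described above can be sketched as follows. `Slice`, `MergeOperationInput`, and `MergeOperationOutput` here are simplified stand-ins for the RocksDB types, so the field names and signatures are illustrative, not the exact API:

      ```cpp
      #include <cassert>
      #include <string>
      #include <vector>

      // Simplified stand-ins for rocksdb::Slice and the new argument structs.
      // Field names follow the diff's description; the real definitions differ.
      using Slice = std::string;

      struct MergeOperationInput {
        const Slice& key;                        // key being merged
        const Slice* existing_value;             // nullptr if no base value
        const std::vector<Slice>& operand_list;  // pinned operands, no memcpy
      };

      struct MergeOperationOutput {
        std::string& new_value;         // filled when a new value is produced
        const Slice* existing_operand;  // or point at an operand: zero copy
      };

      // A "max" merge operator in the FullMergeV2 style: instead of copying
      // the winning operand into new_value, it simply points at it.
      bool MaxFullMergeV2(const MergeOperationInput& in,
                          MergeOperationOutput* out) {
        const Slice* best = in.existing_value;
        for (const Slice& op : in.operand_list) {
          if (best == nullptr || op > *best) best = &op;
        }
        out->existing_operand = best;  // output is an existing operand
        return best != nullptr;
      }

      int main() {
        Slice key = "k";
        std::vector<Slice> ops = {"apple", "pear", "banana"};
        std::string scratch;
        MergeOperationInput in{key, nullptr, ops};
        MergeOperationOutput out{scratch, nullptr};
        assert(MaxFullMergeV2(in, &out));
        assert(*out.existing_operand == "pear");  // lexicographic max
        return 0;
      }
      ```

      Letting the output alias an existing operand is what removes the per-merge memcpy that the benchmarks below measure.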
      
      ```
      [Everything in Memtable | 10K operands | 10 KB each | 1 operand per key]
      
      DEBUG_LEVEL=0 make db_bench -j64 && ./db_bench --benchmarks="mergerandom,readseq,readseq,readseq,readseq,readseq" --merge_operator="max" --merge_keys=10000 --num=10000 --disable_auto_compactions --value_size=10240 --write_buffer_size=1000000000
      
      [FullMergeV2]
      readseq      :       0.607 micros/op 1648235 ops/sec; 16121.2 MB/s
      readseq      :       0.478 micros/op 2091546 ops/sec; 20457.2 MB/s
      readseq      :       0.252 micros/op 3972081 ops/sec; 38850.5 MB/s
      readseq      :       0.237 micros/op 4218328 ops/sec; 41259.0 MB/s
      readseq      :       0.247 micros/op 4043927 ops/sec; 39553.2 MB/s
      
      [master]
      readseq      :       3.935 micros/op 254140 ops/sec; 2485.7 MB/s
      readseq      :       3.722 micros/op 268657 ops/sec; 2627.7 MB/s
      readseq      :       3.149 micros/op 317605 ops/sec; 3106.5 MB/s
      readseq      :       3.125 micros/op 320024 ops/sec; 3130.1 MB/s
      readseq      :       4.075 micros/op 245374 ops/sec; 2400.0 MB/s
      ```
      
      ```
      [Everything in Memtable | 10K operands | 10 KB each | 10 operand per key]
      
      DEBUG_LEVEL=0 make db_bench -j64 && ./db_bench --benchmarks="mergerandom,readseq,readseq,readseq,readseq,readseq" --merge_operator="max" --merge_keys=1000 --num=10000 --disable_auto_compactions --value_size=10240 --write_buffer_size=1000000000
      
      [FullMergeV2]
      readseq      :       3.472 micros/op 288018 ops/sec; 2817.1 MB/s
      readseq      :       2.304 micros/op 434027 ops/sec; 4245.2 MB/s
      readseq      :       1.163 micros/op 859845 ops/sec; 8410.0 MB/s
      readseq      :       1.192 micros/op 838926 ops/sec; 8205.4 MB/s
      readseq      :       1.250 micros/op 800000 ops/sec; 7824.7 MB/s
      
      [master]
      readseq      :      24.025 micros/op 41623 ops/sec;  407.1 MB/s
      readseq      :      18.489 micros/op 54086 ops/sec;  529.0 MB/s
      readseq      :      18.693 micros/op 53495 ops/sec;  523.2 MB/s
      readseq      :      23.621 micros/op 42335 ops/sec;  414.1 MB/s
      readseq      :      18.775 micros/op 53262 ops/sec;  521.0 MB/s
      
      ```
      
      ```
      [Everything in Block cache | 10K operands | 10 KB each | 1 operand per key]
      
      [FullMergeV2]
      $ DEBUG_LEVEL=0 make db_bench -j64 && ./db_bench --benchmarks="readseq,readseq,readseq,readseq,readseq" --merge_operator="max" --num=100000 --db="/dev/shm/merge-random-10K-10KB" --cache_size=1000000000 --use_existing_db --disable_auto_compactions
      readseq      :      14.741 micros/op 67837 ops/sec;  663.5 MB/s
      readseq      :       1.029 micros/op 971446 ops/sec; 9501.6 MB/s
      readseq      :       0.974 micros/op 1026229 ops/sec; 10037.4 MB/s
      readseq      :       0.965 micros/op 1036080 ops/sec; 10133.8 MB/s
      readseq      :       0.943 micros/op 1060657 ops/sec; 10374.2 MB/s
      
      [master]
      readseq      :      16.735 micros/op 59755 ops/sec;  584.5 MB/s
      readseq      :       3.029 micros/op 330151 ops/sec; 3229.2 MB/s
      readseq      :       3.136 micros/op 318883 ops/sec; 3119.0 MB/s
      readseq      :       3.065 micros/op 326245 ops/sec; 3191.0 MB/s
      readseq      :       3.014 micros/op 331813 ops/sec; 3245.4 MB/s
      ```
      
      ```
      [Everything in Block cache | 10K operands | 10 KB each | 10 operand per key]
      
      DEBUG_LEVEL=0 make db_bench -j64 && ./db_bench --benchmarks="readseq,readseq,readseq,readseq,readseq" --merge_operator="max" --num=100000 --db="/dev/shm/merge-random-10-operands-10K-10KB" --cache_size=1000000000 --use_existing_db --disable_auto_compactions
      
      [FullMergeV2]
      readseq      :      24.325 micros/op 41109 ops/sec;  402.1 MB/s
      readseq      :       1.470 micros/op 680272 ops/sec; 6653.7 MB/s
      readseq      :       1.231 micros/op 812347 ops/sec; 7945.5 MB/s
      readseq      :       1.091 micros/op 916590 ops/sec; 8965.1 MB/s
      readseq      :       1.109 micros/op 901713 ops/sec; 8819.6 MB/s
      
      [master]
      readseq      :      27.257 micros/op 36687 ops/sec;  358.8 MB/s
      readseq      :       4.443 micros/op 225073 ops/sec; 2201.4 MB/s
      readseq      :       5.830 micros/op 171526 ops/sec; 1677.7 MB/s
      readseq      :       4.173 micros/op 239635 ops/sec; 2343.8 MB/s
      readseq      :       4.150 micros/op 240963 ops/sec; 2356.8 MB/s
      ```
      
      Test Plan: COMPILE_WITH_ASAN=1 make check -j64
      
      Reviewers: yhchiang, andrewkr, sdong
      
      Reviewed By: sdong
      
      Subscribers: lovro, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D57075
  2. Jul 20, 2016 (4 commits)
    • MemTable::PostProcess() can skip updating num_deletes if the delta is 0 · e70ba4e4
      sdong committed
      Summary: In many use cases there are no deletes. No need to pay the overhead of atomically updating num_deletes.
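      A minimal sketch of the idea (not the actual MemTable code): counter deltas batched during a write are only flushed to the shared atomics when they are non-zero, so delete-free workloads skip the atomic read-modify-write entirely.

      ```cpp
      #include <atomic>
      #include <cassert>
      #include <cstdint>

      // Illustrative counters; names loosely mirror MemTable's.
      struct Counters {
        std::atomic<uint64_t> num_entries{0};
        std::atomic<uint64_t> num_deletes{0};

        void PostProcess(uint64_t entry_delta, uint64_t delete_delta) {
          num_entries.fetch_add(entry_delta, std::memory_order_relaxed);
          if (delete_delta != 0) {  // skip the atomic RMW when no deletes
            num_deletes.fetch_add(delete_delta, std::memory_order_relaxed);
          }
        }
      };

      int main() {
        Counters c;
        c.PostProcess(10, 0);  // delete-free batch: num_deletes untouched
        c.PostProcess(5, 2);
        assert(c.num_entries.load() == 15);
        assert(c.num_deletes.load() == 2);
        return 0;
      }
      ```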
      
      Test Plan: Run existing test.
      
      Reviewers: ngbronson, yiwu, andrewkr, igor
      
      Reviewed By: andrewkr
      
      Subscribers: leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60555
    • DBTablePropertiesTest.GetPropertiesOfTablesInRange: Fix Flaky · 2a282e5f
      sdong committed
      Summary:
      There is a possibility that there is no L0 file after writing the data. Generate an L0 file to make the test work.
      
      Test Plan: Run the test many times.
      
      Reviewers: andrewkr, yiwu
      
      Reviewed By: yiwu
      
      Subscribers: leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60825
    • Persistent Read Cache (6) Persistent cache tier implementation - File layout · d9cfaa2b
      krad committed
      Summary:
      The persistent cache tier is the tier abstraction that can work for any block-device-based store mounted on a file system. The design/implementation can handle any generic block device.
      
      Generic block-device support is achieved by generalizing the access pattern as {io-size, q-depth, direct-io/buffered}.
      
      We have specifically tested and adapted the IO path for NVM and SSD.
      
      The persistent cache tier consists of three parts:
      
      1) File layout
      
      Provides the implementation for handling the IO path for reading and writing data (key/value pairs).
      
      2) Meta-data
      Provides the implementation for handling the index for the persistent read cache.
      
      3) Implementation
      Binds (1) and (2) and fleshes out the PersistentCacheTier interface.
      
      This patch provides the implementation for (1) and (2). A follow-up patch will provide (3)
      and tests.
      
      Test Plan: Compile and run check
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D57117
    • New Statistics to track Compression/Decompression (#1197) · 9430333f
      John Alexander committed
      * Added new statistics and refactored to allow ioptions to be passed around as required to access environment and statistics pointers (and, as a convenient side effect, info_log pointer).
      
      * Prevent incrementing compression counter when compression is turned off in options.
      
      * Added two more supported compression types to test code in db_test.cc
      
      * Added new StatsLevel that excludes compression timing.
      
      * Fixed casting error in coding.h
      
      * Fixed CompressionStatsTest for new StatsLevel.
      
      * Removed unused variable that was breaking the Linux build
  3. Jul 18, 2016 (1 commit)
  4. Jul 17, 2016 (1 commit)
  5. Jul 16, 2016 (2 commits)
    • DBTest.DynamicLevelCompressionPerLevel: Tune Threshold · 21c55bdb
      sdong committed
      Summary: Each SST's file size increases after we add more table properties. The threshold in DBTest.DynamicLevelCompressionPerLevel needs to be adjusted accordingly to avoid occasional failures.
      
      Test Plan: Run the test
      
      Reviewers: andrewkr, yiwu
      
      Subscribers: leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60819
    • Refactor cache.cc · 4b952535
      Yi Wu committed
      Summary: Refactor cache.cc so that I can plug in the clock cache (D55581). Mainly: move `ShardedCache` to a separate file, move `LRUHandle` back to cache.cc, and rename it to lru_cache.cc.
      
      Test Plan:
          make check -j64
      
      Reviewers: lightmark, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D59655
  6. Jul 15, 2016 (2 commits)
    • Update LANGUAGE-BINDINGS.md · c6a8665b
      Igor Canadi committed
    • ldb backup support · 880ee363
      Wanning Jiang committed
      Summary: Add backup support to the ldb tool, and use it to run a backup load test on two HDFS envs.
      
      Test Plan: first generate some db, then compile against load test in fbcode, run load_test --db=<db path> backup --backup_env_uri=<URI of backup env> --backup_dir=<backup directory> --num_threads=<number of thread>
      
      Reviewers: andrewkr
      
      Reviewed By: andrewkr
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60633
  7. Jul 14, 2016 (4 commits)
    • Avoid updating memtable allocated bytes if write_buffer_size is not set · 6797e6ff
      sdong committed
      Summary: If neither options.write_buffer_size nor options.write_buffer_manager is set, there is no need to update the bytes-allocated counter in MemTableAllocator, which is expensive in the parallel memtable insert case. Removing it improves parallel memtable insert throughput by 10% with write batch size 128.
      
      Test Plan:
      Run benchmarks
      TEST_TMPDIR=/dev/shm/ ./db_bench --benchmarks=fillrandom -disable_auto_compactions -level0_slowdown_writes_trigger=9999 -level0_stop_writes_trigger=9999 -num=10000000 --writes=1000000 -max_background_flushes=16 -max_write_buffer_number=16 --threads=32 --batch_size=128   -allow_concurrent_memtable_write -enable_write_thread_adaptive_yield
      
      The throughput grows 10% with the benchmark.
      
      Reviewers: andrewkr, yiwu, IslamAbdelRahman, igor, ngbronson
      
      Reviewed By: ngbronson
      
      Subscribers: ngbronson, leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60465
    • Add DestroyColumnFamilyHandle(ColumnFamilyHandle**) to db.h · dda6c72a
      Aaron Gao committed
      Summary:
      add DestroyColumnFamilyHandle(ColumnFamilyHandle**) to close a column family instead of deleting the handle pointer directly.
      Users should call this to close a column family, and then we can detect the deletion inside this function.
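      A hypothetical sketch of why the API takes a double pointer: the callee can both delete the handle and null out the caller's pointer, so a later double-close is detectable. Names mirror the commit title; this is not the actual RocksDB implementation.

      ```cpp
      #include <cassert>
      #include <cstddef>

      struct ColumnFamilyHandle { int id; };

      bool DestroyColumnFamilyHandle(ColumnFamilyHandle** handle) {
        if (handle == nullptr || *handle == nullptr) {
          return false;  // already closed: detected instead of crashing
        }
        delete *handle;
        *handle = nullptr;  // clear the caller's pointer
        return true;
      }

      int main() {
        ColumnFamilyHandle* cfh = new ColumnFamilyHandle{1};
        assert(DestroyColumnFamilyHandle(&cfh));
        assert(cfh == nullptr);
        assert(!DestroyColumnFamilyHandle(&cfh));  // second close detected
        return 0;
      }
      ```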
      
      Test Plan: make all check -j64
      
      Reviewers: andrewkr, yiwu, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60765
    • Avoid FileMetaData copy · 56222f57
      Andrew Kryczka committed
      Summary: as titled
      
      Test Plan: unit tests
      
      Reviewers: sdong, lightmark
      
      Reviewed By: lightmark
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60597
    • Fixed output size and removed unneeded loop · 15b7a4ab
      Jay Edgar committed
      Summary: In Zlib_Compress and BZip2_Compress the size calculation was slightly off when using compression_format_version 2 (which includes the decompressed size in the output). There were also unnecessary loops around the deflate/BZ2_bzCompress calls. In Zlib_Compress there was also a possible exit from the function after calling deflateInit2 that didn't call deflateEnd.
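      A sketch of the format-version-2 framing the summary refers to: the output carries the decompressed size as a varint32 header, so the output buffer must be sized for header plus compressed payload in one pass. This illustrates only the framing; the real code calls deflate/BZ2_bzCompress for the payload.

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <string>

      // Append v to dst in LEB128-style varint32 encoding.
      void PutVarint32(std::string* dst, uint32_t v) {
        while (v >= 0x80) {
          dst->push_back(static_cast<char>((v & 0x7f) | 0x80));
          v >>= 7;
        }
        dst->push_back(static_cast<char>(v));
      }

      // Decode a varint32 starting at *pos, advancing *pos past it.
      uint32_t GetVarint32(const std::string& src, size_t* pos) {
        uint32_t result = 0;
        for (int shift = 0; shift <= 28; shift += 7) {
          uint8_t byte = static_cast<uint8_t>(src[(*pos)++]);
          result |= static_cast<uint32_t>(byte & 0x7f) << shift;
          if ((byte & 0x80) == 0) break;
        }
        return result;
      }

      int main() {
        std::string out;
        PutVarint32(&out, 10240);     // decompressed-size header
        out += "<compressed bytes>";  // compressed payload would follow
        size_t pos = 0;
        assert(GetVarint32(out, &pos) == 10240);
        assert(pos == 2);  // 10240 fits in two varint bytes
        return 0;
      }
      ```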
      
      Test Plan: Standard tests
      
      Reviewers: sdong, IslamAbdelRahman, igor
      
      Reviewed By: igor
      
      Subscribers: sdong, IslamAbdelRahman, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60537
  8. Jul 13, 2016 (2 commits)
    • Fix deadlock when trying to update options when write stalls · 6ea41f85
      Yi Wu committed
      Summary:
      When writes stall because auto compaction is disabled, or the stop-write trigger is reached,
      the user may change these two options to unblock writes. Unfortunately, we had an issue where the write
      thread would block the attempt to persist the options, creating a deadlock. This diff
      fixes the issue and adds two test cases to detect such a deadlock.
      
      Test Plan:
      Run unit tests.
      
      Also, revert db_impl.cc to master (but don't revert `DBImpl::BackgroundCompaction:Finish` sync point) and run db_options_test. Both tests should hit deadlock.
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60627
    • Miscellaneous performance improvements · efd013d6
      Jay Edgar committed
      Summary:
      I was investigating performance issues in the SstFileWriter and found all of the following:
      
      - The SstFileWriter::Add() function created a local InternalKey every time it was called, generating an allocation and a free each time.  Changed to have an InternalKey member variable that can be reset with the new InternalKey::Set() function.
      - In SstFileWriter::Add() the smallest_key and largest_key values were assigned the result of a ToString() call, but it is simpler to just assign them directly from the user's key.
      - The Slice class had no move constructor, so each time one was returned from a function a new one had to be allocated, the old data copied to the new, and the old one freed.  I added the move constructor, which also required a copy constructor and assignment operator.
      - The BlockBuilder::CurrentSizeEstimate() function calculates the current estimated size, but was being called 2 or 3 times for each key added.  I changed the class to maintain a running estimate (equal to the original calculation) so that the function can return an already-calculated value.
      - The code in BlockBuilder::Add() that calculated the shared bytes between the last key and the new key duplicated what Slice::difference_offset does, so I replaced it with the standard function.
      - BlockBuilder::Add() had code to copy just the changed portion into the last key value (and asserted that it now matched the new key).  It is more efficient just to copy the whole new key over.
      - Moved this same code up into the 'if (use_delta_encoding_)' since the last key value is only needed when delta encoding is on.
      - FlushBlockBySizePolicy::BlockAlmostFull calculated a standard deviation value each time it was called, but this information only changes if block_size or block_size_deviation changes, so I created a member variable to hold the value and avoid the calculation each time.
      - Each PutVarint??() function has a buffer and calls std::string::append().  Two or three calls in a row could share a buffer and a single call to std::string::append().
      
      Some of these will be helpful outside of the SstFileWriter.  I'm not 100% sure the addition of the move constructor is appropriate, as I wonder why this wasn't done before - maybe because of compiler compatibility?  I tried it on gcc 4.8 and 4.9.
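      The running-estimate change can be sketched as follows. This is an illustrative toy, not the real BlockBuilder: the point is that Add() maintains the estimate incrementally so CurrentSizeEstimate() is O(1) even when called repeatedly per key.

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <string>

      class TinyBlockBuilder {
       public:
        void Add(const std::string& key, const std::string& value) {
          buffer_ += key;
          buffer_ += value;
          // Maintain the running estimate as entries are added...
          estimate_ += key.size() + value.size();
        }
        size_t CurrentSizeEstimate() const {
          return estimate_;  // ...so this is O(1), even called 2-3x per key
        }

       private:
        std::string buffer_;
        size_t estimate_ = 0;
      };

      int main() {
        TinyBlockBuilder b;
        b.Add("key1", "value1");
        b.Add("key2", "value2");
        assert(b.CurrentSizeEstimate() == 20);  // (4 + 6) * 2
        return 0;
      }
      ```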
      
      Test Plan: The changes should not affect the results so the existing tests should all still work and no new tests were added.  The value of the changes was seen by manually testing the SstFileWriter class through MyRocks and adding timing code to identify problem areas.
      
      Reviewers: sdong, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D59607
  9. Jul 12, 2016 (4 commits)
    • Update Makefile to fix dependency · e6f68faf
      omegaga committed
      Summary: In D33849 we updated the Makefile to generate .d files for all .cc sources. Since we now have more types of source files, this needs updating so the mechanism works for the new files.
      
      Test Plan: change a dependent .h file, re-make and see if .o file is recompiled.
      
      Reviewers: sdong, andrewkr
      
      Reviewed By: andrewkr
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60591
    • fix test failure · 816ae098
      Aaron Gao committed
      Summary: fix RocksDB Unit Test USER_FAILURE
      
      Test Plan: make all check -j64
      
      Reviewers: sdong, andrewkr
      
      Reviewed By: andrewkr
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60603
    • Fix Log() doc for default level · e295da12
      Andrew Kryczka committed
      Summary: as titled
      
      Test Plan: none
      
      Reviewers: sdong, lightmark
      
      Reviewed By: lightmark
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60507
    • update DB::AddFile to ingest list of sst files · 8e6b38d8
      Aaron Gao committed
      Summary:
      We have a DB::AddFile(std::string file_path) API that allows users to ingest an SST file created using SstFileWriter.
      We want to update this interface to be able to accept a list of files that will be ingested: DB::AddFile(std::vector<std::string> file_path_list).
      
      Test Plan:
      Add test case `AddExternalSstFileList` in `DBSSTTest` to make sure:
      1. file key ranges do not overlap with each other
      2. each file's key range does not overlap with the DB key range
      3. no snapshots are held
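      Check (1) from the test plan can be sketched as follows. This is a hypothetical illustration of the invariant, not the actual DB::AddFile code: given each file's [smallest, largest] key range, no two ranges may overlap.

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <string>
      #include <utility>
      #include <vector>

      using KeyRange = std::pair<std::string, std::string>;  // {smallest, largest}

      bool RangesNonOverlapping(std::vector<KeyRange> ranges) {
        std::sort(ranges.begin(), ranges.end());
        for (size_t i = 1; i < ranges.size(); i++) {
          // After sorting by smallest key, each range must start strictly
          // after the previous range ends.
          if (ranges[i].first <= ranges[i - 1].second) return false;
        }
        return true;
      }

      int main() {
        assert(RangesNonOverlapping({{"a", "c"}, {"d", "f"}, {"g", "k"}}));
        assert(!RangesNonOverlapping({{"a", "c"}, {"b", "f"}}));  // "b" in [a,c]
        return 0;
      }
      ```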
      
      Reviewers: andrewkr, sdong, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D58587
  10. Jul 09, 2016 (3 commits)
    • Fix clang analyzer errors · 296545a2
      Yi Wu committed
      Summary:
      Fixing errors reported by the clang static analyzer.
      * Removing some unused variables.
      * Adding assertions to fix false positives reported by clang analyzer.
      * Adding `__clang_analyzer__` macro to suppress false positive warnings.
      
      Test Plan:
          USE_CLANG=1 OPT=-g make analyze -j64
      
      Reviewers: andrewkr, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60549
    • Add release build to RocksDB per-diff/post-commit tests · 61dbfbb6
      Gunnar Kudrjavets committed
      Summary: To make sure that we'll have additional verification for release builds, define a new category and add `make release` to per-diff/post-commit tests. This should in theory prevent breakages of the release MyRocks integration builds.
      
      Test Plan:
      - `[p]arc diff --preview`
      - Observe the execution in Sandcastle and make sure that release build and tests are executed.
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60441
    • Concurrent memtable inserter to update counters and flush state after all inserts · 907f24d0
      sdong committed
      Summary: In the concurrent memtable insert case, updating counters in MemTable::Add() can account for 5% of CPU usage. By batching all the counter updates and applying them at the end of the write batch, the CPU overhead is amortized in use cases where more than one key is updated per write batch.
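      A sketch of the batching idea (illustrative, not the actual MemTable code): each thread accumulates counter deltas locally during the write batch and performs a single atomic update at the end, instead of one atomic read-modify-write per Add() call.

      ```cpp
      #include <atomic>
      #include <cassert>
      #include <cstdint>

      struct MemTableStats {
        std::atomic<uint64_t> num_entries{0};
        std::atomic<uint64_t> data_size{0};
      };

      struct BatchCounters {
        uint64_t entries = 0;
        uint64_t bytes = 0;

        void RecordAdd(uint64_t key_value_size) {  // cheap, non-atomic
          entries += 1;
          bytes += key_value_size;
        }
        void Flush(MemTableStats* stats) {  // one atomic update per batch
          stats->num_entries.fetch_add(entries, std::memory_order_relaxed);
          stats->data_size.fetch_add(bytes, std::memory_order_relaxed);
        }
      };

      int main() {
        MemTableStats stats;
        BatchCounters local;
        for (int i = 0; i < 128; i++) local.RecordAdd(110);  // one batch
        local.Flush(&stats);
        assert(stats.num_entries.load() == 128);
        assert(stats.data_size.load() == 128 * 110);
        return 0;
      }
      ```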
      
      Test Plan:
      Write throughput increases 12% with this benchmark setting:
      
      TEST_TMPDIR=/dev/shm/ ./db_bench --benchmarks=fillrandom -disable_auto_compactions -level0_slowdown_writes_trigger=9999 -level0_stop_writes_trigger=9999 -num=10000000 --writes=1000000 -max_background_flushes=16 -max_write_buffer_number=16 --threads=64 --batch_size=128   -allow_concurrent_memtable_write -enable_write_thread_adaptive_yield
      
      Reviewers: andrewkr, IslamAbdelRahman, ngbronson, igor
      
      Reviewed By: ngbronson
      
      Subscribers: ngbronson, leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60495
  11. Jul 08, 2016 (4 commits)
  12. Jul 07, 2016 (4 commits)
  13. Jul 06, 2016 (3 commits)
    • Add options.write_buffer_manager: control total memtable size across DB instances · 32df9733
      sdong committed
      Summary: Add option write_buffer_manager to help users control total memory spent on memtables across multiple DB instances.
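      A minimal sketch of the idea, assuming a simplified single-threaded manager: one shared object tracks memtable memory across several DB instances and reports when the total crosses a budget. The method names here are illustrative, not the exact RocksDB API.

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <memory>

      class WriteBufferManager {
       public:
        explicit WriteBufferManager(uint64_t buffer_size)
            : budget_(buffer_size) {}
        void ReserveMem(uint64_t bytes) { used_ += bytes; }
        void FreeMem(uint64_t bytes) { used_ -= bytes; }
        bool ShouldFlush() const { return used_ >= budget_; }

       private:
        uint64_t budget_;
        uint64_t used_ = 0;
      };

      int main() {
        // One manager shared by two hypothetical DB instances.
        auto wbm = std::make_shared<WriteBufferManager>(100);
        wbm->ReserveMem(60);  // db1's memtables
        assert(!wbm->ShouldFlush());
        wbm->ReserveMem(50);  // db2's memtables push the total over budget
        assert(wbm->ShouldFlush());
        wbm->FreeMem(60);     // db1 flushes
        assert(!wbm->ShouldFlush());
        return 0;
      }
      ```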
      
      Test Plan: Add a new unit test.
      
      Reviewers: yhchiang, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: adela, benj, sumeet, muthu, leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D59925
    • Group multiple batches of flushes into one manifest file (one call to LogAndApply) · 5aaef91d
      Aaron Gao committed
      Summary: Currently, if several flush outputs are committed together, we issue one manifest write per batch (1 batch = 1 flush = 1 sst file = 1+ contiguous memtables). Each manifest write requires one fsync plus one fsync of the parent directory. In some cases this becomes the write bottleneck. We should batch them into one manifest write when possible.
      
      Test Plan:
      ` ./db_bench -benchmarks="fillseq" -max_write_buffer_number=16 -max_background_flushes=16 -disable_auto_compactions=true -min_write_buffer_number_to_merge=1 -write_buffer_size=65536 -level0_stop_writes_trigger=10000 -level0_slowdown_writes_trigger=10000`
      **Before**
      ```
      Initializing RocksDB Options from the specified file
      Initializing RocksDB Options from command-line flags
      RocksDB:    version 4.9
      Date:       Fri Jul  1 15:38:17 2016
      CPU:        32 * Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
      CPUCache:   20480 KB
      Keys:       16 bytes each
      Values:     100 bytes each (50 bytes after compression)
      Entries:    1000000
      Prefix:    0 bytes
      Keys per prefix:    0
      RawSize:    110.6 MB (estimated)
      FileSize:   62.9 MB (estimated)
      Write rate: 0 bytes/second
      Compression: Snappy
      Memtablerep: skip_list
      Perf Level: 1
      WARNING: Assertions are enabled; benchmarks unnecessarily slow
      ------------------------------------------------
      Initializing RocksDB Options from the specified file
      Initializing RocksDB Options from command-line flags
      DB path: [/tmp/rocksdbtest-112628/dbbench]
      fillseq      :     166.277 micros/op 6014 ops/sec;    0.7 MB/s
      ```
      **After**
      ```
      Initializing RocksDB Options from the specified file
      Initializing RocksDB Options from command-line flags
      RocksDB:    version 4.9
      Date:       Fri Jul  1 15:35:05 2016
      CPU:        32 * Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
      CPUCache:   20480 KB
      Keys:       16 bytes each
      Values:     100 bytes each (50 bytes after compression)
      Entries:    1000000
      Prefix:    0 bytes
      Keys per prefix:    0
      RawSize:    110.6 MB (estimated)
      FileSize:   62.9 MB (estimated)
      Write rate: 0 bytes/second
      Compression: Snappy
      Memtablerep: skip_list
      Perf Level: 1
      WARNING: Assertions are enabled; benchmarks unnecessarily slow
      ------------------------------------------------
      Initializing RocksDB Options from the specified file
      Initializing RocksDB Options from command-line flags
      DB path: [/tmp/rocksdbtest-112628/dbbench]
      fillseq      :      52.328 micros/op 19110 ops/sec;    2.1 MB/s
      ```
      
      Reviewers: andrewkr, IslamAbdelRahman, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: igor, andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60075
    • Fix a bug that accesses an invalid address in the iterator cleanup function · a45ee831
      omegaga committed
      Summary: Reported in T11889874. When registering the cleanup function, we should copy the option so that we can still access it if the ReadOptions object is deleted.
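      The bug class this fixes can be sketched as follows (illustrative only, not the actual iterator code): a cleanup callback that keeps a reference into a caller-owned options object dangles once that object dies, while copying the needed value into the callback is safe.

      ```cpp
      #include <cassert>
      #include <functional>
      #include <memory>

      struct ReadOptions { bool background_purge = false; };

      std::function<bool()> RegisterCleanup(const ReadOptions& opts) {
        // BAD (the bug): capturing &opts would dangle after ReadOptions dies.
        // GOOD (the fix): copy the value the cleanup function needs.
        bool purge_copy = opts.background_purge;
        return [purge_copy]() { return purge_copy; };
      }

      int main() {
        std::function<bool()> cleanup;
        {
          auto opts = std::make_unique<ReadOptions>();
          opts->background_purge = true;
          cleanup = RegisterCleanup(*opts);
        }  // ReadOptions deleted here
        assert(cleanup() == true);  // still safe: the option was copied
        return 0;
      }
      ```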
      
      Test Plan: Add a unit test to reproduce this bug.
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60087
  14. Jul 05, 2016 (2 commits)
  15. Jul 02, 2016 (3 commits)