1. 17 Mar, 2015 (4 commits)
    • rocksdb: Replace ASSERT* with EXPECT* in functions that do not return void · 9fd6edf8
      Authored by Igor Sugak
      Summary:
      gtest does not use exceptions to fail a unit test by design, and `ASSERT*`s are implemented using `return`. As a consequence, we cannot use `ASSERT*` in a function that does not return `void` ([[ https://code.google.com/p/googletest/wiki/AdvancedGuide#Assertion_Placement | 1]]), and we have to fix our existing code. This diff does this in a generic way, with no manual changes.

      In order to detect all existing `ASSERT*` uses in functions that don't return `void`, I changed the code to generate compile errors for such cases.

      In `util/testharness.h` I defined `EXPECT*` assertions the same way as `ASSERT*`, and redefined `ASSERT*` to return `void`. Then I executed:
      
      ```lang=bash
      % USE_CLANG=1 make all -j55 -k 2> build.log
      % perl -naF: -e 'print "-- -number=".$F[1]." ".$F[0]."\n" if  /: error:/' \
      build.log | xargs -L 1 perl -spi -e 's/ASSERT/EXPECT/g if $. == $number'
      % make format
      ```
      After that I reverted the change to `ASSERT*` in `util/testharness.h`, but preserved the introduced `EXPECT*`, which behaves the same as `ASSERT*`. These will be deleted once we switch to gtest.
      
      This diff is independent and contains manual changes only in `util/testharness.h`.
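
      For intuition, here is a minimal sketch of why `ASSERT*` only compiles inside `void` functions while `EXPECT*` works anywhere. These are not the real gtest/testharness macros, and `ReportFailure` is a hypothetical stand-in for the framework's failure sink:
      ```lang=cpp
      // Minimal sketch, assuming a hypothetical ReportFailure() failure sink.
      // ASSERT* aborts the current function with a bare `return;`, which only
      // compiles in a function returning void; EXPECT* records the failure
      // and lets execution continue.
      #define SKETCH_ASSERT_TRUE(cond) \
        do { if (!(cond)) { ReportFailure(#cond); return; } } while (0)
      #define SKETCH_EXPECT_TRUE(cond) \
        do { if (!(cond)) { ReportFailure(#cond); } } while (0)
      ```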
      
      Test Plan:
      Make sure all tests are passing.
      ```lang=bash
      % USE_CLANG=1 make check
      ```
      
      Reviewers: igor, lgalanis, sdong, yufei.zhu, rven, meyering
      
      Reviewed By: meyering
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D33333
      9fd6edf8
    • Speed up RocksDB close call. · b2b30865
      Authored by Venkatesh Radhakrishnan
      Summary:
      In RocksDB, when there are multiple instances doing
      flushes/compactions in the background, the close call takes a long time
      because the flushes/compactions need to complete before the database can
      shut down. If another instance is using the background threads and a compaction for this instance is already scheduled and sitting in the queue, we still cannot shut down. We now remove the scheduled background tasks which have not yet started running, so that shutdown is sped up.
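
      A minimal sketch of the removal logic, assuming a hypothetical thread-pool type (the real change routes this through RocksDB's background thread pool):
      ```lang=cpp
      #include <list>
      #include <mutex>

      // Hypothetical sketch: drop queued-but-unstarted background jobs that
      // are tagged with a given DB instance, so Close() need not wait for them.
      struct Job {
        void* tag;  // identifies the DB instance that scheduled the job
      };

      class ThreadPoolSketch {
       public:
        int UnSchedule(void* tag) {
          std::lock_guard<std::mutex> lock(mu_);
          int removed = 0;
          queue_.remove_if([&](const Job& j) {
            if (j.tag != tag) return false;
            ++removed;  // never started running, so it is safe to drop
            return true;
          });
          return removed;  // jobs already running must still be waited on
        }

       private:
        std::mutex mu_;
        std::list<Job> queue_;  // pending, not-yet-started jobs
      };
      ```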
      
      Test Plan: DB Test added.
      
      Reviewers: yhchiang, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D33741
      b2b30865
    • rocksdb: Small refactoring before migrating to gtest · 95344346
      Authored by Igor Sugak
      Summary: These changes are necessary to make tests look more generic, and avoid feature conflicts with gtest.
      
      Test Plan:
      Make sure there are no build errors and all tests pass.
      ```
      % make check
      ```
      
      Reviewers: igor, meyering
      
      Reviewed By: meyering
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35145
      95344346
    • Fix compaction IO stats to handle large file counts · 56337faf
      Authored by Mark Callaghan
      Summary:
      The output did not have space for 6-digit file counts or for 3-digit
      counts of files being compacted. This adds space for that while preserving
      existing alignment. See https://gist.github.com/mdcallag/0a61c6a18dd467224c11
      
      Test Plan:
      run db_bench, look at output
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D35091
      56337faf
  2. 15 Mar, 2015 (2 commits)
    • Make RecordIn/RecordOut human readable · c6967a1a
      Authored by Igor Canadi
      Summary: I had a hard time understanding these big numbers. Here's how the output looks now: https://gist.github.com/igorcanadi/4c39c17685049584a992
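
      For illustration, a small formatter of the kind this change implies (not the actual RocksDB helper):
      ```lang=cpp
      #include <cinttypes>
      #include <cstdio>
      #include <string>

      // Illustrative only: render large counters with K/M/G suffixes so
      // RecordIn/RecordOut are readable at a glance.
      std::string HumanReadableNum(uint64_t n) {
        char buf[32];
        if (n >= 1000000000) {
          snprintf(buf, sizeof(buf), "%.1fG", n / 1e9);
        } else if (n >= 1000000) {
          snprintf(buf, sizeof(buf), "%.1fM", n / 1e6);
        } else if (n >= 1000) {
          snprintf(buf, sizeof(buf), "%.1fK", n / 1e3);
        } else {
          snprintf(buf, sizeof(buf), "%" PRIu64, n);
        }
        return buf;
      }
      ```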
      
      Test Plan: db_bench
      
      Reviewers: sdong, MarkCallaghan
      
      Reviewed By: MarkCallaghan
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35073
      c6967a1a
    • Stop printing per-level stall times. · c8da6703
      Authored by Mark Callaghan
      Summary:
      Per-level stall times are the suggested stall time, not the actual stall time, so this change stops printing them,
      both in the per-level output lines and in the summary. It also changes the output for total stall time to include units
      in all cases. The new output looks like:
      Level   Files   Size(MB) Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) Stall(cnt)    RecordIn   RecordDrop
      ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        L0     4/1          7   0.8      0.0     0.0      0.0       0.6      0.6       0.0   0.0      0.0     12.9        50       352    0.141        882            0            0
        L1     5/0          9   0.9      0.0     0.0      0.0       0.0      0.0       0.6   0.0      0.0      0.0         0         0    0.000          0            0            0
        L2    54/0         99   1.0      0.0     0.0      0.0       0.0      0.0       0.6   0.0      0.0      0.0         0         0    0.000          0            0            0
        L3   289/0        527   0.5      0.0     0.0      0.0       0.0      0.0       0.5   0.0      0.0      0.0         0         0    0.000          0            0            0
       Sum   352/1        642   0.0      0.0     0.0      0.0       0.6      0.6       1.7   1.0      0.0     12.9        50       352    0.141        882            0            0
       Int     0/0          0   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.5         0         3    0.118          7            0            0
      Flush(GB): accumulative 0.627, interval 0.005
      Stalls(count): 0 level0_slowdown, 0 level0_numfiles, 882 memtable_compaction, 0 leveln_slowdown_soft, 0 leveln_slowdown_hard
      
      Task ID: #6493861
      
      Test Plan:
      run db_bench, look at output
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D35085
      c8da6703
  3. 14 Mar, 2015 (12 commits)
    • Fixed the unit-test issue in PreShutdownCompactionMiddle · 12134139
      Authored by Yueh-Hsuan Chiang
      Summary: Fixed the unit-test issue in PreShutdownCompactionMiddle
      
      Test Plan: export ROCKSDB_TESTS=PreShutdownCompactionMiddle
      
      Reviewers: rven, sdong, igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35061
      12134139
    • Fix the issue in PreShutdownMultipleCompaction · fd1b3f38
      Authored by Yueh-Hsuan Chiang
      Summary: Fix the issue in PreShutdownMultipleCompaction
      
      Test Plan:
      export ROCKSDB_TESTS=PreShutdownMultipleCompaction
      ./db_test
      
      Reviewers: rven, sdong, igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35055
      fd1b3f38
    • Fix SIGSEGV when not using cache · 417367c4
      Authored by Igor Canadi
      417367c4
    • Prevent slowdowns and stalls in PreShutdown tests · e25ff039
      Authored by Venkatesh Radhakrishnan
      Summary:
      The preshutdown tests check for stopped compactions/flushes.
      This removes stalls on the write path.
      
      Test Plan: DBTests.PreShutdown*
      
      Reviewers: yhchiang, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35037
      e25ff039
    • Speed up db_bench shutdown · f6907126
      Authored by Igor Canadi
      Summary: See t6489044
      
      Test Plan: compiles
      
      Reviewers: MarkCallaghan
      
      Reviewed By: MarkCallaghan
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34977
      f6907126
    • Improve the robustness of ThreadStatusSingleCompaction · c1b3cde1
      Authored by Yueh-Hsuan Chiang
      Summary:
      Improve the robustness of ThreadStatusSingleCompaction
      by ensuring a deterministic number of files is flushed in the test.
      
      Test Plan:
      export ROCKSDB_TESTS=ThreadStatus
      ./db_test
      
      Reviewers: sdong, igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35019
      c1b3cde1
    • Fix the deadlock issue in ThreadStatusSingleCompaction. · 8c12426c
      Authored by Yueh-Hsuan Chiang
      Summary:
      Fix the deadlock issue in ThreadStatusSingleCompaction.
      
      In the previous version of ThreadStatusSingleCompaction, the compaction
      thread waits on a SYNC_POINT while the db_mutex is held.  However,
      if the test hasn't finished its Put cycle while a compaction is running,
      the test deadlocks.
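
      Sketched with illustrative code (`TEST_SYNC_POINT` is RocksDB's test hook from util/sync_point.h; the surrounding functions are hypothetical):
      ```lang=cpp
      #include <mutex>

      // The deadlock pattern: the compaction thread parks on a sync point
      // while still holding the DB mutex, so the test thread that must
      // signal the sync point can never acquire the mutex for its Puts.
      void CompactionBodyBuggy(std::mutex& db_mutex) {
        db_mutex.lock();
        TEST_SYNC_POINT("Compaction:WaitForTest");  // waits with mutex held
        db_mutex.unlock();
      }

      // The fix: release the mutex before waiting on the sync point.
      void CompactionBodyFixed(std::mutex& db_mutex) {
        db_mutex.lock();
        db_mutex.unlock();
        TEST_SYNC_POINT("Compaction:WaitForTest");  // waits lock-free
      }
      ```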
      
      Test Plan:
      export ROCKSDB_TESTS=ThreadStatus
      ./db_test
      
      Reviewers: sdong, igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35001
      8c12426c
    • DBTest.DynamicLevelCompressionPerLevel should not run without snappy support · b16ead53
      Authored by sdong
      Summary: The test depends on snappy being used. Skip the test when snappy is not supported.
      
      Test Plan: Run the test
      
      Reviewers: meyering, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D34995
      b16ead53
    • Fix a typo / test failure in ThreadStatusSingleCompaction · a5e60baf
      Authored by Yueh-Hsuan Chiang
      Summary:
      Fix a typo / test failure in ThreadStatusSingleCompaction
      
      Test Plan:
      export ROCKSDB_TESTS=ThreadStatus
      ./db_test
      a5e60baf
    • Don't run some tests if snappy is not present · cb2c9185
      Authored by Igor Canadi
      Summary: Currently, we have `ifdef SNAPPY` around a bunch of db_test code. Some tests that don't even use compression are also blocked when the running system doesn't have snappy. This also causes hard-to-catch bugs, like D34983. We should figure out dynamically whether compression is supported.
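
      A sketch of such a runtime check, assuming the usual snappy C++ API (the helper name is illustrative):
      ```lang=cpp
      #include <string>
      #ifdef SNAPPY
      #include <snappy.h>
      #endif

      // Illustrative: tests call this at runtime and skip themselves when it
      // returns false, instead of being compiled out wholesale by #ifdef SNAPPY.
      bool SnappySupported() {
      #ifdef SNAPPY
        std::string out;
        const std::string in = "sample payload";
        snappy::Compress(in.data(), in.size(), &out);
        return snappy::IsValidCompressedBuffer(out.data(), out.size());
      #else
        return false;
      #endif
      }
      ```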
      
      Test Plan: compiles
      
      Reviewers: sdong, meyering
      
      Reviewed By: meyering
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34989
      cb2c9185
    • Allow GetThreadList() to report operation stage. · c594b0e8
      Authored by Yueh-Hsuan Chiang
      Summary: Allow GetThreadList() to report operation stage.
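
      A hedged usage sketch, assuming the `Env::GetThreadList()` / `ThreadStatus` interface from `rocksdb/thread_status.h` (the stage accessor is an assumption based on this change; the printing is illustrative):
      ```lang=cpp
      #include <cinttypes>
      #include <cstdio>
      #include <vector>
      #include "rocksdb/db.h"
      #include "rocksdb/thread_status.h"

      // Illustrative: dump thread id, column family, operation, and the
      // newly reported stage for every background thread.
      void DumpThreadList(rocksdb::DB* db) {
        std::vector<rocksdb::ThreadStatus> threads;
        db->GetEnv()->GetThreadList(&threads);
        for (const auto& t : threads) {
          printf("%" PRIu64 " %s %s %s\n", t.thread_id, t.cf_name.c_str(),
                 rocksdb::ThreadStatus::GetOperationName(t.operation_type).c_str(),
                 rocksdb::ThreadStatus::GetOperationStageName(t.operation_stage).c_str());
        }
      }
      ```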
      
      Test Plan:
        ./thread_list_test
        ./db_bench --benchmarks=fillrandom --num=100000 --threads=40 \
          --max_background_compactions=10 --max_background_flushes=3 \
          --thread_status_per_interval=1000 --key_size=16 --value_size=1000 \
          --num_column_families=10
      
        export ROCKSDB_TESTS=ThreadStatus
        ./db_test
      
      Sample output
                ThreadID ThreadType                    cfName    Operation        OP_StartTime    ElapsedTime                                         Stage        State
         140116265861184    Low Pri
         140116270055488    Low Pri
         140116274249792   High Pri column_family_name_000005        Flush 2015/03/10-14:58:11           0 us                    FlushJob::WriteLevel0Table
         140116400078912    Low Pri column_family_name_000004   Compaction 2015/03/10-14:58:11           0 us     CompactionJob::FinishCompactionOutputFile
         140116358135872    Low Pri column_family_name_000006   Compaction 2015/03/10-14:58:10           1 us     CompactionJob::FinishCompactionOutputFile
         140116341358656    Low Pri
         140116295221312   High Pri                   default        Flush 2015/03/10-14:58:11           0 us                    FlushJob::WriteLevel0Table
         140116324581440    Low Pri column_family_name_000009   Compaction 2015/03/10-14:58:11           0 us      CompactionJob::ProcessKeyValueCompaction
         140116278444096    Low Pri
         140116299415616    Low Pri column_family_name_000008   Compaction 2015/03/10-14:58:11           0 us     CompactionJob::FinishCompactionOutputFile
         140116291027008   High Pri column_family_name_000001        Flush 2015/03/10-14:58:11           0 us                    FlushJob::WriteLevel0Table
         140116286832704    Low Pri column_family_name_000002   Compaction 2015/03/10-14:58:11           0 us     CompactionJob::FinishCompactionOutputFile
         140116282638400    Low Pri
      
      Reviewers: rven, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34683
      c594b0e8
    • EventLogger · 52d8347a
      Authored by Igor Canadi
      Summary:
      Here's my proposal for making our LOGs easier to read by machines.
      
      The idea is to dump all events as JSON objects. JSON is easy to read by humans, but more importantly, it's easy to read by machines. That way, we can parse it, load it into SQLite/mongo, and then query or visualize it.

      I started with table_create and table_delete events, but if everybody agrees, I'll continue by adding more events (flush/compaction/etc.).
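
      A self-contained sketch of the log-line shape (illustrative, not the actual EventLogger class):
      ```lang=cpp
      #include <chrono>
      #include <cinttypes>
      #include <cstdio>

      // Illustrative: one JSON object per event behind an EVENT_LOG_v1
      // prefix, so scrapers can grep the LOG and load events into SQLite/mongo.
      void LogTableFileCreation(FILE* log, uint64_t file_number,
                                uint64_t file_size) {
        uint64_t micros = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
        fprintf(log,
                "EVENT_LOG_v1 {\"time_micros\": %" PRIu64
                ", \"event\": \"table_file_creation\", \"file_number\": %" PRIu64
                ", \"file_size\": %" PRIu64 "}\n",
                micros, file_number, file_size);
      }
      ```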
      
      Test Plan:
      Ran db_bench. Observed:
      2015/01/15-14:13:25.788019 1105ef000 EVENT_LOG_v1 {"time_micros": 1421360005788015, "event": "table_file_creation", "file_number": 12, "file_size": 1909699}
      2015/01/15-14:13:25.956500 110740000 EVENT_LOG_v1 {"time_micros": 1421360005956498, "event": "table_file_deletion", "file_number": 12}
      
      Reviewers: yhchiang, rven, dhruba, MarkCallaghan, lgalanis, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D31647
      52d8347a
  4. 13 Mar, 2015 (2 commits)
  5. 12 Mar, 2015 (5 commits)
    • Fixing segmentation fault in db_bench · 1d43bc41
      Authored by Islam AbdelRahman
      Summary:
      Fixing a segmentation fault when running db_bench.

      This segfault happens because `num_created` is used without being initialized.
      
      Test Plan:
      Run db_bench with these arguments:
      bpl=10485760;overlap=10;mcz=2;del=300000000;levels=6;ctrig=4; delay=8; stop=12; wbn=3; mbc=20; mb=67108864;wbs=134217728; dds=0; sync=0; r=1000000; t=1; vs=800; bs=65536; cs=1048576; of=500000; si=1000000; ./db_bench --benchmarks=overwrite --disable_seek_compaction=1 --mmap_read=0 --statistics=1 --histogram=1 --num=$r --threads=$t --value_size=$vs --block_size=$bs --cache_size=$cs --bloom_bits=10 --cache_numshardbits=4 --open_files=$of --verify_checksum=1 --db=/home/tec/koko/ --sync=$sync --disable_wal=1 --compression_type=zlib --stats_interval=$si --compression_ratio=0.5 --disable_data_sync=$dds --write_buffer_size=$wbs --target_file_size_base=$mb --max_write_buffer_number=$wbn --max_background_compactions=$mbc --level0_file_num_compaction_trigger=$ctrig --level0_slowdown_writes_trigger=$delay --level0_stop_writes_trigger=$stop --num_levels=$levels --delete_obsolete_files_period_micros=$del --min_level_to_compress=$mcz --max_grandparent_overlap_factor=$overlap --stats_per_interval=1 --max_bytes_for_level_base=$bpl --use_existing_db=1
      
      Reviewers: sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D34881
      1d43bc41
    • Change the way options.compression_per_level is used when... · e9de8b65
      Authored by sdong
      Change the way options.compression_per_level is used when options.level_compaction_dynamic_level_bytes=true
      
      Summary:
      Change the way options.compression_per_level is used when options.level_compaction_dynamic_level_bytes=true so that options.compression_per_level[1] determines compression for the level L0 is merged to, options.compression_per_level[2] to the level after that, etc.
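
      An illustrative mapping of the new indexing (not the actual RocksDB code; the enum is trimmed for the sketch):
      ```lang=cpp
      #include <algorithm>
      #include <vector>

      enum CompressionType { kNoCompression, kSnappyCompression /* ... */ };

      // Illustrative: with level_compaction_dynamic_level_bytes=true, index
      // compression_per_level[] by distance from the base level (the level
      // L0 is merged to) instead of by the raw level number. Levels between
      // L0 and the base are empty in dynamic mode.
      CompressionType CompressionForLevel(
          int level, int base_level,
          const std::vector<CompressionType>& per_level) {
        if (per_level.empty()) return kNoCompression;
        // L0 -> [0]; base level -> [1]; the level after that -> [2]; ...
        int idx = (level == 0) ? 0 : level - base_level + 1;
        idx = std::min<int>(idx, static_cast<int>(per_level.size()) - 1);
        return per_level[idx];
      }
      ```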
      
      Test Plan: run all tests
      
      Reviewers: rven, yhchiang, kradhakrishnan, igor
      
      Reviewed By: igor
      
      Subscribers: yoshinorim, leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D34431
      e9de8b65
    • Fixed a bug where CompactFiles won't delete obsolete files until flush. · 2b785d76
      Authored by Yueh-Hsuan Chiang
      Summary: Fixed a bug where CompactFiles wouldn't delete obsolete files until a flush.
      
      Test Plan:
      ./compact_files_test
      export ROCKSDB_TESTS=CompactFiles
      ./db_test
      
      Reviewers: rven, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34671
      2b785d76
    • db_bench: Better way to randomize repeated read keys in -read_random_exp_range · 2884b100
      Authored by sdong
      Summary: Use a better way to map from a key with locality to a random location. Now, with the same -read_random_exp_range setting, the hit rate drops, which is expected.
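
      A sketch of the two-step idea: draw an exponentially skewed rank, then scatter it with a multiplicative hash so repeated hot ranks land at pseudo-random locations rather than clustering at one end (the distribution and constant are illustrative, not db_bench's exact code):
      ```lang=cpp
      #include <cmath>
      #include <cstdint>
      #include <random>

      // Illustrative: exponentially skewed rank + multiplicative scatter.
      uint64_t SkewedKey(std::mt19937_64& rng, uint64_t num_keys,
                         double exp_range) {
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        // Larger exp_range concentrates more draws on a small set of ranks.
        uint64_t rank =
            static_cast<uint64_t>(num_keys * std::exp(-uni(rng) * exp_range));
        if (rank >= num_keys) rank = num_keys - 1;
        // Scatter hot ranks across the whole key space with an odd multiplier.
        return (rank * 0x9E3779B97F4A7C15ULL) % num_keys;
      }
      ```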
      
      Test Plan: ./db_bench --benchmarks=readrandom -statistics -use_existing_db -cache_size=5000000 --read_random_exp_range=<multiple_values>
      
      Reviewers: MarkCallaghan, kradhakrishnan, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D34761
      2884b100
    • Provide a mechanism to inform RocksDB that it is shutting down · 284be570
      Authored by Venkatesh Radhakrishnan
      Summary:
      Provide an API which enables users to inform RocksDB that it is
      shutting down.
      
      Test Plan: db_test
      
      Reviewers: sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34617
      284be570
  6. 11 Mar, 2015 (2 commits)
    • Get OptimizeFilterForHits to work on Mac · 2ddf53b2
      Authored by Igor Canadi
      Summary: Got it working by some voodoo programming
      
      Test Plan: works!
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34611
      2ddf53b2
    • Allow GetThreadList() to report the start time of the current operation. · 89597bb6
      Authored by Yueh-Hsuan Chiang
      Summary: Allow GetThreadList() to report the start time of the current operation.
      
      Test Plan:
      ./db_bench --benchmarks=fillrandom --num=100000 --threads=40 \
        --max_background_compactions=10 --max_background_flushes=3 \
        --thread_status_per_interval=1000 --key_size=16 --value_size=1000 \
        --num_column_families=10
      
      Sample output:
                ThreadID ThreadType                    cfName    Operation        OP_StartTime         State
         140338840797248   High Pri column_family_name_000003        Flush 2015/03/09-17:49:59
         140338844991552   High Pri column_family_name_000004        Flush 2015/03/09-17:49:59
         140338849185856    Low Pri
         140338983403584    Low Pri
         140339008569408    Low Pri
         140338861768768    Low Pri
         140338924683328    Low Pri
         140338899517504    Low Pri
         140338853380160    Low Pri
         140338882740288    Low Pri
         140338865963072   High Pri column_family_name_000006        Flush 2015/03/09-17:49:59
         140338954043456    Low Pri
         140338857574464    Low Pri
      
      Reviewers: igor, rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: lgalanis, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34689
      89597bb6
  7. 10 Mar, 2015 (1 commit)
    • db_bench: Add option -read_random_exp_range to allow read skewness. · 37921b49
      Authored by sdong
      Summary: Introduce the parameter -read_random_exp_range in db_bench to provide some key skewness in the readrandom and multireadrandom benchmarks. It helps cover the block cache better.
      
      Test Plan:
      Run benchmarks with this new parameter. I can clearly see the block cache hit rate change as I increase this value (DB size is about 66MB):
      
      ./db_bench --benchmarks=readrandom -statistics -use_existing_db -cache_size=5000000 --read_random_exp_range=0.0
      rocksdb.block.cache.data.miss COUNT : 958418
      rocksdb.block.cache.data.hit COUNT : 41582
      
      ./db_bench --benchmarks=readrandom -statistics -use_existing_db -cache_size=5000000 --read_random_exp_range=5.0
      rocksdb.block.cache.data.miss COUNT : 819518
      rocksdb.block.cache.data.hit COUNT : 180482
      
      ./db_bench --benchmarks=readrandom -statistics -use_existing_db -cache_size=5000000 --read_random_exp_range=10.0
      rocksdb.block.cache.data.miss COUNT : 450479
      rocksdb.block.cache.data.hit COUNT : 549521
      
      ./db_bench --benchmarks=readrandom -statistics -use_existing_db -cache_size=5000000 --read_random_exp_range=20.0
      rocksdb.block.cache.data.miss COUNT : 223192
      rocksdb.block.cache.data.hit COUNT : 776808
      
      Reviewers: MarkCallaghan, kradhakrishnan, yhchiang, rven, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D34629
      37921b49
  8. 07 Mar, 2015 (2 commits)
    • Add rate_limiter to string options · 485ac0db
      Authored by Igor Canadi
      Summary: I want to be able to set this through mongo config.
      
      Test Plan: added unit test
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34599
      485ac0db
    • Add --thread_status_per_interval to db_bench · dc4532c4
      Authored by Yueh-Hsuan Chiang
      Summary:
      Add --thread_status_per_interval to db_bench, which allows
      db_bench to optionally print the current thread status
      periodically.
      
      Test Plan:
      ./db_bench --benchmarks=fillrandom --num=100000 --threads=40 --max_background_compactions=10 --max_background_flushes=3 --thread_status_per_interval=1000 --key_size=16 --value_size=1000 --num_column_families=10
      
      Sample output:
                ThreadID ThreadType                         dbName                     cfName       Operation           State
         140281571770432    Low Pri
         140281575964736   High Pri  /tmp/rocksdbtest-5297/dbbench  column_family_name_000001           Flush
         140281710182464    Low Pri  /tmp/rocksdbtest-5297/dbbench  column_family_name_000008      Compaction
         140281638879296    Low Pri  /tmp/rocksdbtest-5297/dbbench  column_family_name_000007      Compaction
         140281592741952    Low Pri
         140281580159040   High Pri  /tmp/rocksdbtest-5297/dbbench  column_family_name_000002           Flush
         140281676628032    Low Pri  /tmp/rocksdbtest-5297/dbbench  column_family_name_000006      Compaction
         140281584353344    Low Pri
         140281622102080    Low Pri  /tmp/rocksdbtest-5297/dbbench  column_family_name_000009      Compaction
         140281605324864    Low Pri  /tmp/rocksdbtest-5297/dbbench  column_family_name_000004      Compaction
         140281601130560   High Pri  /tmp/rocksdbtest-5297/dbbench                    default           Flush
         140281596936256    Low Pri
         140281588547648    Low Pri
      
      Reviewers: igor, rven, sdong
      
      Reviewed By: rven
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D34515
      dc4532c4
  9. 04 Mar, 2015 (1 commit)
    • Fix a bug in stall time counter. Improve its output format. · 694988b6
      Authored by Yueh-Hsuan Chiang
      Summary: Fix a bug in the stall time counter and improve its output format.
      
      Test Plan:
      export ROCKSDB_TESTS=Timeout
      ./db_test
      
      ./db_bench --benchmarks=fillrandom --stats_interval=10000 --statistics=true --stats_per_interval=1 --num=1000000 --threads=4 --level0_stop_writes_trigger=3 --level0_slowdown_writes_trigger=2
      
      sample output:
          Uptime(secs): 35.8 total, 0.0 interval
          Cumulative writes: 359590 writes, 359589 keys, 183047 batches, 2.0 writes per batch, 0.04 GB user ingest, stall seconds: 1786.008 ms
          Cumulative WAL: 359591 writes, 183046 syncs, 1.96 writes per sync, 0.04 GB written
          Interval writes: 253 writes, 253 keys, 128 batches, 2.0 writes per batch, 0.0 MB user ingest, stall time: 0 us
          Interval WAL: 253 writes, 128 syncs, 1.96 writes per sync, 0.00 MB written
      
      Reviewers: MarkCallaghan, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34275
      694988b6
  10. 03 Mar, 2015 (2 commits)
    • options.level_compaction_dynamic_level_bytes to allow RocksDB to pick size... · db037393
      Authored by Igor Canadi
      options.level_compaction_dynamic_level_bytes to allow RocksDB to pick size bases of levels dynamically.
      
      Summary:
      With a fixed max_bytes_for_level_base, the ratio between the size of the largest level and the second-largest one can range from 0 to the multiplier. This makes the LSM tree frequently irregular and unpredictable. It can also cause poor space amplification in some cases.

      In this improvement (proposed by Igor Kabiljo), we introduce the parameter options.level_compaction_dynamic_level_bytes. When it is turned on, RocksDB is free to pick a level base in the range of (options.max_bytes_for_level_base/options.max_bytes_for_level_multiplier, options.max_bytes_for_level_base] so that real level ratios are close to options.max_bytes_for_level_multiplier.
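
      A sketch of picking the base under that constraint (illustrative, not the real version-handling code):
      ```lang=cpp
      #include <cstdint>

      // Illustrative: walk down from the largest level's size until the base
      // falls into (max_base / multiplier, max_base], keeping actual level
      // size ratios close to the configured multiplier. (If the DB is still
      // smaller than max_base, its current size is returned unchanged.)
      uint64_t PickDynamicLevelBase(uint64_t largest_level_size,
                                    uint64_t max_base, double multiplier) {
        double base = static_cast<double>(largest_level_size);
        while (base > static_cast<double>(max_base)) {
          base /= multiplier;
        }
        return static_cast<uint64_t>(base);
      }
      ```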
      
      Test Plan: New unit tests and pass tests suites including valgrind.
      
      Reviewers: MarkCallaghan, rven, yhchiang, igor, ikabiljo
      
      Reviewed By: ikabiljo
      
      Subscribers: yoshinorim, ikabiljo, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D31437
      db037393
    • Fix typo in log message · c4bd03a9
      Authored by Mark Callaghan
      Summary:
      Fix a typo in a log message.
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D34251
      c4bd03a9
  11. 28 Feb, 2015 (1 commit)
  12. 27 Feb, 2015 (2 commits)
    • Add columnfamily option optimize_filters_for_hits to optimize for key hits only · e7c434c3
      Authored by Sameet Agarwal
      Summary:
      Added a new option to ColumnFamilyOptions - optimize_filters_for_hits. This option can be used when most
      accesses to the store are key hits and we don't need to optimize performance for key misses.
      This is useful when you have a very large database and most of your lookups succeed. The option allows the store to
      not store and use filters in the last level (the largest level, which contains data). These filters can take a large amount of
      space for large databases (in memory and on-disk). For the last level, these filters are only useful for key misses and not
      for key hits. If we are not optimizing for key misses, we can choose to not store these filters for that level.

      This option is only provided for BlockBasedTable. We skip the filters when compacting.
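
      The decision reduces to a predicate like this sketch (illustrative, not the actual table-builder code):
      ```lang=cpp
      // Illustrative: build bloom filters for every level except the
      // bottommost one when optimize_filters_for_hits is set; filters on
      // the last level only ever help key misses.
      bool ShouldBuildFilters(int level, int num_non_empty_levels,
                              bool optimize_filters_for_hits) {
        const bool is_last_level = (level == num_non_empty_levels - 1);
        return !(optimize_filters_for_hits && is_last_level);
      }
      ```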
      
      Test Plan:
      1. Modified db_test to also run tests with an additional option (skip_filters_on_last_level)
      2. Added another unit test to db_test which specifically tests that filters are being skipped
      
      Reviewers: rven, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: lgalanis, yoshinorim, MarkCallaghan, rven, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D33717
      e7c434c3
    • rocksdb: Add missing override · 62247ffa
      Authored by Igor Sugak
      Summary:
      When using the latest clang (3.6 or 3.7/trunk), rocksdb fails to build with many errors. Almost all of them are missing-override errors. This diff adds the missing override keywords. No manual changes.
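
      The class of fix applied everywhere, sketched:
      ```lang=cpp
      struct Base {
        virtual ~Base() {}
        virtual void Run();
      };

      struct Derived : Base {
        // Before: `void Run();` - latest clang flags the inconsistent
        // missing override. After: clang-modernize adds the keyword.
        void Run() override;
      };
      ```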
      
      Prerequisites: bear and clang 3.5 build with extra tools
      
      ```lang=bash
      % USE_CLANG=1 bear make all # generate a compilation database http://clang.llvm.org/docs/JSONCompilationDatabase.html
      % clang-modernize -p . -include . -add-override
      % make format
      ```
      
      Test Plan:
      Make sure all tests are passing.
      ```lang=bash
      % #Use default fb code clang.
      % make check
      ```
      Verify fewer errors and no missing-override errors.
      ```lang=bash
      % # Have trunk clang present in path.
      % ROCKSDB_NO_FBCODE=1 CC=clang CXX=clang++ make
      ```
      
      Reviewers: igor, kradhakrishnan, rven, meyering, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34077
      62247ffa
  13. 26 Feb, 2015 (1 commit)
    • Limit key range to number of keys, not number of writes · 182b4cea
      Authored by Mark Callaghan
      Summary:
      An old commit (482401) changed DoWrite to use the value of --writes rather
      than --num to determine the range for keys. This restores the old and correct
      behavior, which is to limit it using --num.
      
      Task ID: #6353043
      
      Test Plan:
      run db_bench
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D34065
      182b4cea
  14. 25 Feb, 2015 (3 commits)