1. Aug 6, 2015 (5 commits)
    • Add statistic histogram "rocksdb.sst.read.micros" · 3ae386ea
      Committed by sdong
      Summary: Measure the read latency histogram and record it in statistics. Compaction inputs are excluded when possible (unfortunately this is usually not possible, as we usually take the table reader from the table cache).
      
      Test Plan:
      Run db_bench and it shows the stats, like:
      
      rocksdb.sst.read.micros statistics Percentiles :=> 50 : 1.238522 95 : 2.529740 99 : 3.912180
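      The percentile figures in the output above can be illustrated with a generic nearest-rank computation (a sketch only, not RocksDB's bucketed histogram implementation; the sample latencies below are made up):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (p in 0..100)."""
    ordered = sorted(samples)
    # Rank of the p-th percentile, clamped to a valid index.
    rank = max(0, min(len(ordered) - 1,
                      math.ceil(p / 100.0 * len(ordered)) - 1))
    return ordered[rank]

# Hypothetical per-read latencies in microseconds.
read_micros = [1.1, 1.3, 1.2, 2.4, 2.6, 1.25, 3.9, 1.22, 2.5, 1.24]
for p in (50, 95, 99):
    print("%d : %f" % (p, percentile(read_micros, p)))
```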
      
      Reviewers: kradhakrishnan, rven, anthony, IslamAbdelRahman, MarkCallaghan, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D43275
    • "make commit-prereq" should clean up rocksjava properly · 8ecb51a7
      Committed by sdong
      Summary: "make commit-prereq" fails to clean up java, which can cause rocksjava failure.
      
      Test Plan: Run commit-prereq
      
      Reviewers: IslamAbdelRahman, rven, kradhakrishnan, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D43575
    • Enable DBTest.FlushSchedule under TSAN · 9aec75fb
      Committed by Islam AbdelRahman
      Summary: This patch fixes the false positive of DBTest.FlushSchedule under TSAN; we don't need to disable this test.
      
      Test Plan: COMPILE_WITH_TSAN=1 make -j64 db_test && ./db_test --gtest_filter="DBTest.FlushSchedule"
      
      Reviewers: yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D43599
    • Fix TSAN for delete_scheduler_test · bd2fc5f5
      Committed by Islam AbdelRahman
      Summary: Fix a TSAN false positive and relax the conditions when running under TSAN.
      
      Test Plan: COMPILE_WITH_TSAN=1 make -j64 delete_scheduler_test && ./delete_scheduler_test
      
      Reviewers: yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D43593
    • Fix misplaced position for reversing iterator direction while current key is a merge · 8e01bd11
      Committed by sdong
      Summary:
      While iterating forward, if the current key is a merge key, the internal iterator is positioned at the next key. If Prev() is called at that point, an extra Prev() is needed to recover the location.
      This is the second attempt at the fix, after reverting ec70fea4. This time the fix is narrowed to the case where the current key is a merge key, and it avoids the reseeking logic for max-iterating skipping.
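      A toy model of the positioning issue (not the RocksDB implementation; the entry layout and names are made up): after a forward read of a merge key the internal cursor has already moved past it, so switching to Prev() needs an extra step back first.

```python
class ToyIter:
    """Bidirectional cursor over sorted (key, is_merge) entries."""

    def __init__(self, entries):
        self.entries = entries
        self.pos = -1        # internal cursor into `entries`
        self.current = None  # key last returned to the caller

    def next(self):
        # Advance only if the cursor has not already run ahead (merge case).
        if self.current is None or self.entries[self.pos][0] <= self.current:
            self.pos += 1
        key, is_merge = self.entries[self.pos]
        self.current = key
        if is_merge and self.pos + 1 < len(self.entries):
            # Merging consumes operands, leaving the cursor on the next entry.
            self.pos += 1
        return key

    def prev(self):
        # Recover: if the cursor ran ahead of the current key, step back the
        # extra slot before moving to the previous entry.
        while self.pos > 0 and self.entries[self.pos - 1][0] >= self.current:
            self.pos -= 1
        self.pos -= 1
        self.current = self.entries[self.pos][0]
        return self.current
```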
      
      Test Plan: enable the two disabled tests and make sure they pass
      
      Reviewers: rven, IslamAbdelRahman, kradhakrishnan, tnovak, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D43557
  2. Aug 5, 2015 (23 commits)
  3. Aug 4, 2015 (6 commits)
    • Fix compile warning in compact_on_deletion_collector in some environment · be8621ff
      Committed by Yueh-Hsuan Chiang
      Summary: Fix a compile warning in compact_on_deletion_collector in some environments.
      
      Test Plan: make
      
      Reviewers: igor, sdong, anthony, IslamAbdelRahman
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43467
    • Add CompactOnDeletionCollector in utilities/table_properties_collectors. · 26894303
      Committed by Yueh-Hsuan Chiang
      Summary:
      This diff adds CompactOnDeletionCollector in utilities/table_properties_collectors.
      It applies a sliding window to an sst file and marks the file as need-compaction
      when it observes enough deletion entries within the consecutive keys covered by
      the sliding window.
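      The sliding-window rule can be sketched like this (the function name, window representation, and parameters are illustrative, not the collector's actual API):

```python
def needs_compaction(entry_types, window_size, deletion_threshold):
    """Return True if any window of `window_size` consecutive entries
    contains at least `deletion_threshold` deletions.
    entry_types: iterable of "put" / "delete" in key order."""
    window = []      # entry types inside the current window
    deletions = 0
    for t in entry_types:
        window.append(t)
        if t == "delete":
            deletions += 1
        if len(window) > window_size:
            # Slide: drop the oldest entry from the window.
            if window.pop(0) == "delete":
                deletions -= 1
        if deletions >= deletion_threshold:
            return True
    return False
```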
      
      Test Plan: compact_on_deletion_collector_test
      
      Reviewers: igor, anthony, IslamAbdelRahman, kradhakrishnan, yoshinorim, sdong
      
      Reviewed By: sdong
      
      Subscribers: maykov, dhruba
      
      Differential Revision: https://reviews.facebook.net/D41175
    • Fix CompactFiles by adding all necessary files · 20b244fc
      Committed by Venkatesh Radhakrishnan
      Summary:
      The CompactFiles API had a bug where some overlapping files were not added.
      These are files which overlap with files that were added to the compaction
      input files, but not with the original set of input files. This happens only
      when more than two levels are involved in the compaction. An example will
      illustrate this better.
      
      Level 2 has 1 input file 1.sst which spans [20,30].
      
      Level 3 has added file  2.sst which spans [10,25]
      
      Level 4 has file 3.sst which spans [35,40] and
              input file 4.sst which spans [46,50].
      
      The existing code would not add 3.sst to the set of input_files because
      it only becomes an overlapping file in level 4 and it wasn't one in
      level 3.
      
      When installing the results of the compaction, 3.sst would overlap with
      output file from the compact files and result in the assertion in
      version_set.cc:1130
      
       // Must not overlap
       assert(level <= 0 || level_files->empty() ||
              internal_comparator_->Compare(
                  (*level_files)[level_files->size() - 1]->largest,
                  f->smallest) < 0);
      This change now adds overlapping files from the current level to the set
      of input files also so that we don't hit the assertion above.
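      The fix can be sketched as a fixed-point expansion over key ranges (a simplification, not the actual VersionSet logic; the file names and ranges follow the example above):

```python
def overlaps(a, b):
    # Closed key ranges (lo, hi) overlap iff neither lies wholly before the other.
    return a[0] <= b[1] and b[0] <= a[1]

def expand_inputs(level_files, initial):
    """level_files: {level: [(name, (lo, hi)), ...]};
    initial: names already chosen as compaction inputs."""
    chosen = set(initial)
    ranges = [rng for files in level_files.values()
              for name, rng in files if name in chosen]
    lo = min(r[0] for r in ranges)
    hi = max(r[1] for r in ranges)
    changed = True
    while changed:  # widen the range until the input set stabilizes
        changed = False
        for files in level_files.values():
            for name, rng in files:
                if name not in chosen and overlaps(rng, (lo, hi)):
                    chosen.add(name)
                    lo, hi = min(lo, rng[0]), max(hi, rng[1])
                    changed = True
    return chosen

# The example from the summary: 3.sst now gets picked up as well.
level_files = {
    2: [("1.sst", (20, 30))],
    3: [("2.sst", (10, 25))],
    4: [("3.sst", (35, 40)), ("4.sst", (46, 50))],
}
inputs = expand_inputs(level_files, {"1.sst", "4.sst"})
```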
      
      Test Plan:
      d=/tmp/j; rm -rf $d; seq 1000 | parallel --gnu --eta
      'd=/tmp/j/d-{}; mkdir -p $d; TEST_TMPDIR=$d ./db_compaction_test
      --gtest_filter=*CompactilesOnLevel* --gtest_also_run_disabled_tests >&
      '$d'/log-{}'
      
      Reviewers: igor, yhchiang, sdong
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43437
    • Make SuggestCompactRangeNoTwoLevel0Compactions deterministic · 87df6295
      Committed by Venkatesh Radhakrishnan
      Summary:
      Made SuggestCompactRangeNoTwoLevel0Compactions deterministic by forcing
      a flush after generating a file and waiting for compaction at the end.
      
      Test Plan: Run SuggestCompactRangeNoTwoLevel0Compactions
      
      Reviewers: yhchiang, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43449
    • Parallelize L0-L1 Compaction: Restructure Compaction Job · 40c64434
      Committed by Ari Ekmekji
      Summary:
      As of now compactions involving files from Level 0 and Level 1 are single
      threaded because the files in L0, although sorted, are not range partitioned like
      the other levels. This means that during L0-L1 compaction each file from L1
      needs to be merged with potentially all the files from L0.
      
      This attempt to parallelize the L0-L1 compaction assigns a thread and a
      corresponding iterator to each L1 file that then considers only the key range
      found in that L1 file and only the L0 files that have those keys (and only the
      specific portion of those L0 files in which those keys are found). In this way
      the overlap is minimized and potentially eliminated between different iterators
      focusing on the same files.
      
      The first step is to restructure the compaction logic to break L0-L1 compactions
      into multiple, smaller, sequential compactions. Eventually each of these smaller
      jobs will be run simultaneously. Areas to pay extra attention to are
      
        # Correct aggregation of compaction job statistics across multiple threads
        # Proper opening/closing of output files (make sure each thread's is unique)
        # Keys that span multiple L1 files
        # Skewed distributions of keys within L0 files
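      The partitioning idea can be sketched as follows (names and the job representation are made up; the real restructuring also has to handle file boundaries and keys spanning multiple L1 files, per the list above):

```python
def plan_subcompactions(l0_files, l1_files):
    """One sub-job per L1 file: its key range plus only the L0 files that
    overlap that range. Files are (name, (lo, hi)) pairs."""
    jobs = []
    for l1_name, (lo, hi) in l1_files:
        # Keep only L0 files whose key range intersects this L1 file's range.
        overlapping = [name for name, (flo, fhi) in l0_files
                       if flo <= hi and lo <= fhi]
        jobs.append((l1_name, overlapping))
    return jobs

# Hypothetical layout: one wide L0 file and one narrow one.
l0 = [("00001.sst", (0, 100)), ("00002.sst", (60, 70))]
l1 = [("00003.sst", (0, 15)), ("00004.sst", (16, 80))]
jobs = plan_subcompactions(l0, l1)
```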
      
      Test Plan: Make and run db_test (newer version has separate compaction tests) and compaction_job_stats_test
      
      Reviewers: igor, noetzli, anthony, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: MarkCallaghan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D42699
    • dump_manifest supports DB with more number of levels · 47316c2d
      Committed by sdong
      Summary: Currently ldb dump_manifest refuses to work if there are 20 levels. Extend the limit to 64.
      
      Test Plan: Run the tool on a DB with 20 levels
      
      Reviewers: kradhakrishnan, anthony, IslamAbdelRahman, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D42879
  4. Aug 1, 2015 (1 commit)
  5. Jul 31, 2015 (2 commits)
    • Fixing fprintf of non string literal · 544be638
      Committed by Andres Noetzli
      Summary:
      sst_dump_tool contains two `fprintf` calls where the `format` argument is not
      a string literal. This prevents the code from compiling with some compilers or
      compiler options because of the potential security risks associated with
      printing non-literals.
      
      Test Plan: make all
      
      Reviewers: rven, igor, yhchiang, sdong, anthony
      
      Reviewed By: anthony
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43305
    • Fixing dead code in table_properties_collector_test · 193dc977
      Committed by Andres Notzli
      Summary:
      There was a bug in table_properties_collector_test that this patch
      fixes: `!backward_mode && !test_int_tbl_prop_collector` in
      TestCustomizedTablePropertiesCollector was never true, so the code
      in the if-block never got executed. The reason is that the
      CustomizedTablePropertiesCollector test was skipping tests with
      `!backward_mode_ && !encode_as_internal`. The reason for skipping
      those tests is unknown.
      
      Test Plan: make table_properties_collector_test && ./table_properties_collector_test
      
      Reviewers: rven, igor, yhchiang, anthony, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43281
  6. Jul 30, 2015 (3 commits)
    • Merge branch 'master' of github.com:facebook/rocksdb · 05d4265a
      Committed by Boyang Zhang
    • Compression sizes option for sst_dump_tool · 4be6d441
      Committed by Boyang Zhang
      Summary:
      Added a new feature to sst_dump_tool.cc to allow a user to see the sizes of the different compression algorithms on an .sst file.
      
      Usage:
      ./sst_dump --file=<filename> --show_compression_sizes
      ./sst_dump --file=<filename> --show_compression_sizes --set_block_size=<block_size>
      
      Note: If you do not set a block size, it will default to 16KB
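      A rough stand-in for what --show_compression_sizes reports, using Python's stdlib codecs in place of the compression libraries RocksDB actually links against (only the flag name and the 16KB default come from the text above):

```python
import bz2
import lzma
import zlib

def compression_sizes(data, block_size=16 * 1024):
    """Compress one block of `data` with each available algorithm and
    report the resulting sizes, uncompressed size included."""
    block = data[:block_size]
    return {
        "none": len(block),
        "zlib": len(zlib.compress(block)),
        "bzip2": len(bz2.compress(block)),
        "lzma": len(lzma.compress(block)),
    }

# Hypothetical, highly compressible payload standing in for sst block data.
sizes = compression_sizes(b"key0value0" * 2000)
```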
      
      Test Plan: Manual test, and then write a unit test
      
      Reviewers: IslamAbdelRahman, anthony, yhchiang, rven, kradhakrishnan, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D42963
    • WriteBatch Save Points · 8161bdb5
      Committed by agiardullo
      Summary:
      Support RollbackToSavePoint() in WriteBatch and WriteBatchWithIndex.  Support for partial transaction rollback is needed for MyRocks.
      
      An alternate implementation of Transaction::RollbackToSavePoint() exists in D40869.  However, the other implementation is messier because it is implemented outside of WriteBatch.  This implementation is much cleaner and also exposes a potentially useful feature to WriteBatch.
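      The save-point mechanism can be sketched with a toy batch (a hypothetical class, not the rocksdb API): setting a save point records the current batch length, and rolling back truncates the buffered operations to it.

```python
class ToyWriteBatch:
    def __init__(self):
        self.ops = []          # buffered (op, key, value) tuples
        self.save_points = []  # stack of batch lengths

    def put(self, key, value):
        self.ops.append(("put", key, value))

    def delete(self, key):
        self.ops.append(("delete", key, None))

    def set_save_point(self):
        # Remember how many operations the batch held at this point.
        self.save_points.append(len(self.ops))

    def rollback_to_save_point(self):
        # Discard every operation added since the most recent save point.
        if not self.save_points:
            raise RuntimeError("no save point set")
        self.ops = self.ops[:self.save_points.pop()]
```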
      
      Test Plan: Added unit tests
      
      Reviewers: IslamAbdelRahman, kradhakrishnan, maykov, yoshinorim, hermanlee4, spetrunia, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D42723