1. 05 Aug 2015 (19 commits)
  2. 04 Aug 2015 (6 commits)
    • Fix compile warning in compact_on_deletion_collector in some environments · be8621ff
      Committed by Yueh-Hsuan Chiang
      Summary: Fix a compile warning in compact_on_deletion_collector that appears in some environments.
      
      Test Plan: make
      
      Reviewers: igor, sdong, anthony, IslamAbdelRahman
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43467
    • Add CompactOnDeletionCollector in utilities/table_properties_collectors. · 26894303
      Committed by Yueh-Hsuan Chiang
      Summary:
      This diff adds CompactOnDeletionCollector in utilities/table_properties_collectors,
      which applies a sliding window to an sst file and marks the file as needing
      compaction when it observes enough deletion entries within the consecutive
      keys covered by the sliding window. A sketch of the idea follows.
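      
      The following is a minimal, self-contained sketch of the sliding-window
      idea only; the class name and members are illustrative and not the actual
      CompactOnDeletionCollector implementation (which is a
      TablePropertiesCollector):
      
        #include <cstddef>
        #include <deque>
        
        class DeletionWindowSketch {
         public:
          DeletionWindowSketch(size_t window_size, size_t deletion_trigger)
              : window_size_(window_size), deletion_trigger_(deletion_trigger) {}
        
          // Feed entries in key order; the window covers the last
          // window_size consecutive entries seen so far.
          void AddEntry(bool is_deletion) {
            window_.push_back(is_deletion);
            if (is_deletion) ++deletions_in_window_;
            if (window_.size() > window_size_) {
              if (window_.front()) --deletions_in_window_;
              window_.pop_front();
            }
            // Once enough deletions fall inside one window, mark the file.
            if (deletions_in_window_ >= deletion_trigger_) need_compaction_ = true;
          }
        
          bool NeedCompact() const { return need_compaction_; }
        
         private:
          std::deque<bool> window_;
          size_t window_size_;
          size_t deletion_trigger_;
          size_t deletions_in_window_ = 0;
          bool need_compaction_ = false;
        };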
      
      Test Plan: compact_on_deletion_collector_test
      
      Reviewers: igor, anthony, IslamAbdelRahman, kradhakrishnan, yoshinorim, sdong
      
      Reviewed By: sdong
      
      Subscribers: maykov, dhruba
      
      Differential Revision: https://reviews.facebook.net/D41175
    • Fix CompactFiles by adding all necessary files · 20b244fc
      Committed by Venkatesh Radhakrishnan
      Summary:
      The compact files API had a bug where some overlapping files were not
      added. These are files which overlap with files that were added to the
      compaction input set during expansion, but which do not overlap the
      original set of input files. This happens only when more than two levels
      are involved in the compaction. An example will illustrate this better.
      
      Level 2 has one input file, 1.sst, which spans [20,30].
      
      Level 3 has file 2.sst, which spans [10,25] and is added to the
      compaction because it overlaps 1.sst.
      
      Level 4 has file 3.sst, which spans [35,40], and
              input file 4.sst, which spans [46,50].
      
      The existing code would not add 3.sst to the set of input_files because
      it only becomes an overlapping file at level 4, and it was not one at
      level 3.
      
      When installing the results of the compaction, 3.sst would overlap with
      an output file of the compaction and trigger the assertion at
      version_set.cc:1130:
      
       // Must not overlap
       assert(level <= 0 || level_files->empty() ||
              internal_comparator_->Compare(
                  (*level_files)[level_files->size() - 1]->largest, f->smallest) <
                  0);
      
      This change also adds overlapping files from the current level to the set
      of input files, so that the assertion above is no longer hit. A sketch of
      the expansion loop follows.
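      
      The following is a simplified sketch of the expansion loop, using toy
      integer key ranges in place of InternalKeys and assumed names (FileRange,
      ExpandInputsAtLevel); it is not the actual VersionSet code:
      
        #include <algorithm>
        #include <cstdint>
        #include <vector>
        
        struct FileRange { uint64_t number; int smallest; int largest; };
        
        // Keep pulling in files on this level that overlap the key range
        // covered by the inputs; a newly added file can widen the range and
        // expose further overlaps, so iterate until a fixed point.
        // Assumes *inputs is non-empty.
        void ExpandInputsAtLevel(std::vector<FileRange>* inputs,
                                 const std::vector<FileRange>& level_files) {
          bool changed = true;
          while (changed) {
            changed = false;
            int lo = (*inputs)[0].smallest, hi = (*inputs)[0].largest;
            for (const auto& f : *inputs) {
              lo = std::min(lo, f.smallest);
              hi = std::max(hi, f.largest);
            }
            for (const auto& f : level_files) {
              bool overlaps = f.smallest <= hi && f.largest >= lo;
              bool already_in = std::any_of(
                  inputs->begin(), inputs->end(),
                  [&](const FileRange& g) { return g.number == f.number; });
              if (overlaps && !already_in) {
                inputs->push_back(f);
                changed = true;
              }
            }
          }
        }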
      
      Test Plan:
      d=/tmp/j; rm -rf $d; seq 1000 | parallel --gnu --eta
      'd=/tmp/j/d-{}; mkdir -p $d; TEST_TMPDIR=$d ./db_compaction_test
      --gtest_filter=*CompactilesOnLevel* --gtest_also_run_disabled_tests >&
      '$d'/log-{}'
      
      Reviewers: igor, yhchiang, sdong
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43437
    • Make SuggestCompactRangeNoTwoLevel0Compactions deterministic · 87df6295
      Committed by Venkatesh Radhakrishnan
      Summary:
      Made SuggestCompactRangeNoTwoLevel0Compactions deterministic by forcing
      a flush after generating each file and waiting for compaction at the end.
      A sketch of the pattern follows.
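      
      A sketch of the determinism pattern, assuming the usual DBTest helpers
      (Flush(), dbfull()->TEST_WaitForCompact()) and a hypothetical kNumFiles
      constant; this is illustrative, not the literal test body:
      
        for (int i = 0; i < kNumFiles; ++i) {
          // ... write the keys that make up one file ...
          ASSERT_OK(Flush());  // force the memtable out so the L0 file
                               // boundaries are fixed deterministically
        }
        // Drain all scheduled background compactions before asserting, so the
        // test does not race against the compaction thread.
        dbfull()->TEST_WaitForCompact();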
      
      Test Plan: Run SuggestCompactRangeNoTwoLevel0Compactions
      
      Reviewers: yhchiang, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43449
    • Parallelize L0-L1 Compaction: Restructure Compaction Job · 40c64434
      Committed by Ari Ekmekji
      Summary:
      As of now, compactions involving files from Level 0 and Level 1 are
      single-threaded because the files in L0, although sorted, are not range
      partitioned like the other levels. This means that during L0-L1 compaction
      each file from L1 needs to be merged with potentially all the files from L0.
      
      This attempt to parallelize the L0-L1 compaction assigns a thread and a
      corresponding iterator to each L1 file. Each iterator then considers only
      the key range of its L1 file and only the L0 files that contain keys in
      that range (and only the specific portions of those L0 files in which
      those keys are found). In this way the overlap between iterators is
      minimized and potentially eliminated.
      
      The first step is to restructure the compaction logic to break L0-L1
      compactions into multiple, smaller, sequential compactions. Eventually each
      of these smaller jobs will run simultaneously; a sketch of the per-L1-file
      partitioning follows the list below. Areas to pay extra attention to are:
      
        # Correct aggregation of compaction job statistics across multiple threads
        # Proper opening/closing of output files (make sure each thread's is unique)
        # Keys that span multiple L1 files
        # Skewed distributions of keys within L0 files
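      
      The following is a simplified sketch of the partitioning idea with assumed
      toy types (KeyRange, SubJob); the real code works on FileMetaData and
      compares InternalKeys with a comparator:
      
        #include <string>
        #include <vector>
        
        struct KeyRange { std::string start; std::string end; };
        
        // One sub-job per L1 file: its key range is that file's range and its
        // inputs are only the L0 files whose ranges overlap it. Sub-jobs run
        // sequentially for now; the plan is to run them in parallel later.
        struct SubJob {
          KeyRange range;
          std::vector<KeyRange> l0_inputs;
        };
        
        std::vector<SubJob> PartitionL0L1Compaction(
            const std::vector<KeyRange>& l0_files,
            const std::vector<KeyRange>& l1_files) {
          std::vector<SubJob> jobs;
          for (const auto& l1 : l1_files) {
            SubJob job{l1, {}};
            for (const auto& l0 : l0_files) {
              if (l0.start <= l1.end && l0.end >= l1.start) {  // ranges overlap
                job.l0_inputs.push_back(l0);
              }
            }
            jobs.push_back(job);
          }
          return jobs;
        }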
      
      Test Plan: Make and run db_test (newer version has separate compaction tests) and compaction_job_stats_test
      
      Reviewers: igor, noetzli, anthony, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: MarkCallaghan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D42699
    • dump_manifest supports DBs with a larger number of levels · 47316c2d
      Committed by sdong
      Summary: Currently, ldb dump_manifest refuses to work if there are more than 20 levels. Extend the limit to 64.
      
      Test Plan: Run the tool on a DB with 20 levels.
      
      Reviewers: kradhakrishnan, anthony, IslamAbdelRahman, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D42879
  3. 01 Aug 2015 (1 commit)
  4. 31 Jul 2015 (2 commits)
    • Fixing fprintf of non-string literal · 544be638
      Committed by Andres Noetzli
      Summary:
      sst_dump_tool contains two instances of `fprintf` where the `format` argument is not
      a string literal. This prevents the code from compiling with some compilers/compiler
      options because of the potential security risks associated with printing non-literals.
      The shape of the fix is sketched below.
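      
      A hedged illustration of the general pattern and its fix (the actual
      sst_dump_tool call sites are not reproduced here):
      
        #include <cstdio>
        
        void PrintMsg(FILE* out, const char* msg) {
          // Risky: if msg contains '%' conversions the behavior is undefined,
          // and compilers reject it under flags like -Wformat-security:
          //   fprintf(out, msg);
        
          // Safe: route the dynamic string through a literal "%s" format.
          fprintf(out, "%s", msg);
        }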
      
      Test Plan: make all
      
      Reviewers: rven, igor, yhchiang, sdong, anthony
      
      Reviewed By: anthony
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43305
    • Fixing dead code in table_properties_collector_test · 193dc977
      Committed by Andres Notzli
      Summary:
      There was a bug in table_properties_collector_test that this patch
      fixes: `!backward_mode && !test_int_tbl_prop_collector` in
      TestCustomizedTablePropertiesCollector was never true, so the code
      in the if-block never got executed. The reason is that the
      CustomizedTablePropertiesCollector test was skipping the cases with
      `!backward_mode_ && !encode_as_internal`. The reason for skipping
      those cases is unknown.
      
      Test Plan: make table_properties_collector_test && ./table_properties_collector_test
      
      Reviewers: rven, igor, yhchiang, anthony, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43281
  5. 30 Jul 2015 (4 commits)
    • Merge branch 'master' of github.com:facebook/rocksdb · 05d4265a
      Committed by Boyang Zhang
    • Compression sizes option for sst_dump_tool · 4be6d441
      Committed by Boyang Zhang
      Summary:
      Added a new feature to sst_dump_tool.cc that allows a user to see the sizes an .sst file would have under the different compression algorithms.
      
      Usage:
      ./sst_dump --file=<filename> --show_compression_sizes
      ./sst_dump --file=<filename> --show_compression_sizes --set_block_size=<block_size>
      
      Note: If you do not set a block size, it defaults to 16KB.
      
      Test Plan: Manual test, then write a unit test.
      
      Reviewers: IslamAbdelRahman, anthony, yhchiang, rven, kradhakrishnan, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D42963
    • WriteBatch Save Points · 8161bdb5
      Committed by agiardullo
      Summary:
      Support RollbackToSavePoint() in WriteBatch and WriteBatchWithIndex.  Support for partial transaction rollback is needed for MyRocks.
      
      An alternate implementation of Transaction::RollbackToSavePoint() exists in D40869.  However, the other implementation is messier because it is implemented outside of WriteBatch.  This implementation is much cleaner and also exposes a potentially useful feature to WriteBatch.  A minimal usage sketch follows.
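      
      A minimal usage sketch of the API described above, assuming the companion
      SetSavePoint() call added alongside RollbackToSavePoint():
      
        #include <cassert>
        #include "rocksdb/write_batch.h"
        
        int main() {
          rocksdb::WriteBatch batch;
          batch.Put("key1", "value1");
          batch.SetSavePoint();        // remember the current batch state
          batch.Put("key2", "value2");
          batch.Delete("key1");
          // Undo everything appended since the last SetSavePoint().
          rocksdb::Status s = batch.RollbackToSavePoint();
          assert(s.ok());
          assert(batch.Count() == 1);  // only the Put of key1 remains
          return 0;
        }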
      
      Test Plan: Added unit tests
      
      Reviewers: IslamAbdelRahman, kradhakrishnan, maykov, yoshinorim, hermanlee4, spetrunia, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D42723
    • tools/db_crashtest2.py should run on the same DB · 7bfae3a7
      Committed by sdong
      Summary:
      Crash tests are supposed to restart the same DB after crashing, but the script currently opens a different DB. Fix it.
      This is probably a leftover of https://reviews.facebook.net/D17073
      
      Test Plan: Run the test and make sure the same DB is opened.
      
      Reviewers: kradhakrishnan, rven, igor, IslamAbdelRahman, yhchiang, anthony
      
      Reviewed By: anthony
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D43197
  6. 29 Jul 2015 (2 commits)
  7. 28 Jul 2015 (2 commits)
  8. 25 Jul 2015 (3 commits)
    • Add missing hashCode() implementation · 6a82fba7
      Committed by Andres Noetzli
      Summary:
      Whenever a Java class implements equals(), it must also implement hashCode(); otherwise
      there may be weird behavior when inserting instances of the class into a hash map, for
      example. This adds two missing hashCode() implementations and extends the tests to
      cover them.
      
      Test Plan: make jtest
      
      Reviewers: rven, igor, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: anthony, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43017
    • Fixing Java tests. · f73c8014
      Committed by Andres Noetzli
      Summary:
      While working on https://reviews.facebook.net/D43017, I realized
      that some Java tests were failing due to a deprecated option.
      This patch removes the offending tests, adds @Deprecated annotations
      to the Java interface, and removes the corresponding functions in
      rocksjni.
      
      Test Plan: make jtest (all tests are passing now)
      
      Reviewers: rven, igor, sdong, anthony, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43035
    • Correct the comment of DB::GetApproximateSizes · 14f41376
      Committed by Yueh-Hsuan Chiang
      Summary: Correct the comment of DB::GetApproximateSizes.
      
      Test Plan: no code change
      
      Reviewers: igor, anthony, IslamAbdelRahman, kradhakrishnan, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D42939
  9. 24 Jul 2015 (1 commit)