1. 05 Aug, 2015 (22 commits)
    • M
      [wal changes 3/3] method in DB to sync WAL without blocking writers · e06cf1a0
      Committed by Mike Kolupaev
      Summary:
      As the subject says: add a method to DB that syncs the WAL without blocking writers. We really need this feature.
      
      Previous diff D40899 has most of the changes to make this possible; this diff just adds the method.
      
      Test Plan: `make check`, the new test fails without this diff; ran with ASAN, TSAN and valgrind.
      
      Reviewers: igor, rven, IslamAbdelRahman, anthony, kradhakrishnan, tnovak, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: MarkCallaghan, maykov, hermanlee4, yoshinorim, tnovak, dhruba
      
      Differential Revision: https://reviews.facebook.net/D40905
      e06cf1a0
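
      The method added here is DB::SyncWAL(). A minimal usage sketch, with a
      scratch DB path made up for illustration:

        #include <cassert>
        #include "rocksdb/db.h"

        int main() {
          rocksdb::DB* db;
          rocksdb::Options options;
          options.create_if_missing = true;
          rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/syncwal_demo", &db);
          assert(s.ok());

          // Unsynced writes land in the WAL without an fsync per write.
          rocksdb::WriteOptions wo;
          wo.sync = false;
          db->Put(wo, "key", "value");

          // Durability point: persist everything written to the WAL so far,
          // without blocking concurrent writers.
          s = db->SyncWAL();
          assert(s.ok());

          delete db;
          return 0;
        }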
    • A
      Update Tests To Enable Subcompactions · 5dc3e688
      Committed by Ari Ekmekji
      Summary:
      Updated DBTest, DBCompactionTest, and CompactionJobStatsTest
      to run compaction-related tests once with subcompactions enabled and
      once with them disabled, using Google Test's TEST_P parameterized
      tests (a sketch of the pattern follows this entry).
      
      Test Plan: ./db_test  ./db_compaction_test  ./compaction_job_stats_test
      
      Reviewers: sdong, igor, anthony, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43443
      5dc3e688
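
      A minimal sketch of the TEST_P pattern described above; the fixture and
      test names are illustrative, not the actual RocksDB tests:

        #include "gtest/gtest.h"

        // Parameterized fixture: the bool parameter stands in for
        // "subcompactions enabled".
        class CompactionParamTest : public ::testing::TestWithParam<bool> {};

        TEST_P(CompactionParamTest, CompactionBehavesTheSame) {
          bool subcompactions_enabled = GetParam();
          // A real test would configure the DB accordingly (e.g. set
          // options.max_subcompactions) and then exercise compaction.
          (void)subcompactions_enabled;
          SUCCEED();
        }

        // Google Test instantiates the test once per parameter value,
        // so each test body runs with both true and false.
        INSTANTIATE_TEST_CASE_P(SubCompactions, CompactionParamTest,
                                ::testing::Bool());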
    • I
      Support delete rate limiting · c45a57b4
      Committed by Islam AbdelRahman
      Summary:
      Introduce DeleteScheduler, which allows enforcing a rate limit on file deletion.
      Instead of deleting files immediately, they are moved to a trash directory and deleted in a background thread that applies a sleep penalty between deletes when needed.

      I have updated PurgeObsoleteFiles and PurgeObsoleteWALFiles to use the delete_scheduler instead of env_->DeleteFile.
      
      Test Plan:
      added delete_scheduler_test
      existing unit tests
      
      Reviewers: kradhakrishnan, anthony, rven, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D43221
      c45a57b4
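
      In current RocksDB releases the DeleteScheduler is owned by
      SstFileManager, so the trash directory and deletion rate are configured
      there; a sketch under that assumption, with a made-up path and rate:

        #include "rocksdb/db.h"
        #include "rocksdb/sst_file_manager.h"

        int main() {
          rocksdb::Options options;
          options.create_if_missing = true;
          // Rate-limit file deletions to roughly 1 MB/s; obsolete files are
          // moved to the trash directory and deleted in the background.
          options.sst_file_manager.reset(rocksdb::NewSstFileManager(
              rocksdb::Env::Default(), nullptr /* info_log */,
              "/tmp/rocksdb_trash" /* trash_dir */,
              1024 * 1024 /* rate_bytes_per_sec */));

          rocksdb::DB* db;
          rocksdb::Status s =
              rocksdb::DB::Open(options, "/tmp/delete_sched_demo", &db);
          if (s.ok()) {
            delete db;
          }
          return 0;
        }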
    • Y
      Update JAVA-HISTORY.md for v3.13 · 102ac118
      Committed by Yueh-Hsuan Chiang
      Summary: Update JAVA-HISTORY.md for v3.13
      
      Test Plan: no code change.
      
      Reviewers: igor, anthony, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43539
      102ac118
    • Y
      Merge pull request #670 from skunkwerks/fix_osx_shared_library_names · 3a1d4e6c
      Committed by Yueh-Hsuan Chiang
      Fix shared library names on OSX
      3a1d4e6c
    • Y
      Fixed RocksJava test failure of shouldSetTestCappedPrefixExtractor · f5d072ae
      Committed by Yueh-Hsuan Chiang
      Summary:
      Fixed RocksJava test failure of shouldSetTestCappedPrefixExtractor
      by adding the missing native implementation of
      useCappedPrefixExtractor.
      
      Test Plan:
      make jclean
      make rocksdbjava -j32
      make jtest
      
      Reviewers: igor, anthony, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43551
      f5d072ae
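
      useCappedPrefixExtractor() is the Java binding for the C++
      NewCappedPrefixTransform(); a sketch of the underlying C++ option, with
      an example cap length:

        #include "rocksdb/options.h"
        #include "rocksdb/slice_transform.h"

        rocksdb::Options MakeOptionsWithCappedPrefix() {
          rocksdb::Options options;
          // Prefix = up to the first 8 bytes of the key; keys shorter than
          // 8 bytes use the whole key (unlike the fixed prefix transform).
          options.prefix_extractor.reset(rocksdb::NewCappedPrefixTransform(8));
          return options;
        }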
    • Y
      Merge pull request #654 from adamretter/remove-emptyvalue-compactionfilter · f39cbcb0
      Committed by Yueh-Hsuan Chiang
      RemoveEmptyValueCompactionFilter
      f39cbcb0
    • A
      add support for capped prefix extractor in java · f0b5bcc7
      Committed by ashishn
      f0b5bcc7
    • Y
      Upgrading jemalloc from 3.6.0 to the latest for fbcode+gcc 4.8.1 · 18ba58a9
      Committed by Yoshinori Matsunobu
      Summary:
      MyRocks uses the latest jemalloc, not 3.6.0.
      Combining multiple versions (3.6.0 in RocksDB and the latest in
      MyRocks) broke some features; for example, enabling heap profiling
      caused a SIGSEGV.
      This diff switches to the latest jemalloc when the environment
      variable ROCKSDB_FBCODE_BUILD_WITH_481=1 is set.
      My understanding is that this variable is used only by MyRocks, so the
      change should be safe.

      Test Plan: built MyRocks, then verified that jemalloc heap profiling worked
      
      Reviewers: igor, rven, yhchiang, jtolmer, maykov, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D43479
      18ba58a9
    • Y
      Expose the BackupEngine from the Java API · ce21afd2
      Committed by Yueh-Hsuan Chiang
      Summary:
      Merge pull request #665 by adamretter
      
      Exposes BackupEngine from C++ to the Java API. Previously, only BackupableDB was available.
      
      Test Plan: BackupEngineTest.java
      
      Reviewers: fyrz, igor, ankgup87, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D42873
      ce21afd2
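
      A sketch of the C++ BackupEngine that the new Java class wraps, per the
      3.13-era API (the backup directory is made up):

        #include "rocksdb/db.h"
        #include "rocksdb/utilities/backupable_db.h"

        rocksdb::Status BackupOnce(rocksdb::DB* db) {
          rocksdb::BackupEngine* backup_engine;
          rocksdb::Status s = rocksdb::BackupEngine::Open(
              rocksdb::Env::Default(),
              rocksdb::BackupableDBOptions("/tmp/rocksdb_backups"),
              &backup_engine);
          if (!s.ok()) return s;
          // Flush the memtable first so the backup captures all data.
          s = backup_engine->CreateNewBackup(db, true /* flush_before_backup */);
          delete backup_engine;
          return s;
        }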
    • Y
      Merge pull request #569 from adamretter/travis-java-api · b0d12a13
      Committed by Yueh-Hsuan Chiang
      Another attempt at adding the Java API and tests to the Travis build
      b0d12a13
    • Y
      Make DBCompactionTest.SkipStatsUpdateTest more stable. · 241bb2ae
      Committed by Yueh-Hsuan Chiang
      Summary:
      Make DBCompactionTest.SkipStatsUpdateTest more stable by
      removing a flaky but unnecessary assertion on the size of the DB;
      simply checking the random-file open count suffices.
      
      Test Plan: db_compaction_test
      
      Reviewers: igor, anthony, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43533
      241bb2ae
    • Y
      Polish HISTORY.md · 3424eeb1
      Committed by Yueh-Hsuan Chiang
      Summary: Polish HISTORY.md
      
      Test Plan: no code change.
      
      Reviewers: igor, anthony, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D43527
      3424eeb1
    • S
      crash_test cleans up directory before testing if TEST_TMPDIR is set · cf3e0530
      Committed by sdong
      Summary: In a recent change, crash_test can put data under TEST_TMPDIR. However, the directory is not cleaned before running the test, which may cause unexpected results. Clean it.
      
      Test Plan: Run the whitebox and blackbox crash tests against non-existing, or non-empty but incompatible, DBs, and make sure they work as expected.
      
      Reviewers: kradhakrishnan, rven, yhchiang, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D43515
      cf3e0530
    • Y
      Fix a typo and update HISTORY.md for NewCompactOnDeletionCollectorFactory(). · 24daff6d
      Committed by Yueh-Hsuan Chiang
      Summary: Fix a typo and update HISTORY.md for NewCompactOnDeletionCollectorFactory().
      
      Test Plan: no code change.
      
      Reviewers: igor, anthony, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43521
      24daff6d
    • Y
      Add DBOptions::skip_stats_update_on_db_open · 14d0bfa4
      Committed by Yueh-Hsuan Chiang
      Summary:
      UpdateAccumulatedStats() is used to optimize compaction decisions,
      especially when the number of deletion entries is high, but this
      function can slow down DB::Open(), especially on disk-based storage.

      This patch adds DBOptions::skip_stats_update_on_db_open, which skips
      UpdateAccumulatedStats() during DB::Open() when set to true.
      
      Test Plan: Add DBCompactionTest.SkipStatsUpdateTest
      
      Reviewers: igor, anthony, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: tnovak, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D42843
      14d0bfa4
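
      A minimal sketch of the new option (the path is hypothetical): trading
      a faster DB::Open() for initially less-informed compaction decisions.

        #include <string>
        #include "rocksdb/db.h"

        rocksdb::Status OpenFast(const std::string& path, rocksdb::DB** db) {
          rocksdb::Options options;
          options.create_if_missing = true;
          // Skip UpdateAccumulatedStats() during open; stats accumulate
          // again as the DB runs.
          options.skip_stats_update_on_db_open = true;
          return rocksdb::DB::Open(options, path, db);
        }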
    • S
      First half of whitebox_crash_test to keep crashing the same DB · e2a3bfe7
      Committed by sdong
      Summary: Currently, the whitebox crash test does not really exercise crash recovery, because the DB is destroyed after each crash. With this fix, during the first half of the allotted time the test keeps reopening the crashed DB and continuing from there.
      
      Test Plan: "make whitebox_crash_test" and see that the same DB keeps crashing and being reopened.
      
      Reviewers: IslamAbdelRahman, yhchiang, rven, kradhakrishnan
      
      Reviewed By: kradhakrishnan
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D43503
      e2a3bfe7
    • S
      crash_test to put DB under TEST_TMPDIR · 2e73bd4f
      Committed by sdong
      Summary: Currently crash_test only puts data under /tmp, which is inflexible if we want to cover different file systems or media. Make crash_test honor TEST_TMPDIR so that users can run it against another file system.
      
      Test Plan: Run blackbox_crash_test and whitebox_crash_test with or without TEST_TMPDIR set and make sure DBs are put in the right place
      
      Reviewers: kradhakrishnan, yhchiang, rven, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D43509
      2e73bd4f
    • S
      crash_test to cover simple cases · 1205bdbc
      Committed by sdong
      Summary:
      crash_test now only runs complicated setups: multiple column families, prefix hash, frequently changing options, many compaction threads, etc. These options are good for covering new features, but we lose coverage of the most common use cases. Furthermore, by running only with multiple column families, we are not able to create LSM trees large enough to cover some stress cases.
      Make half of the crash_test runs use the simple setup: a single column family, the default memtable, one compaction thread, and no option changes.
      
      Test Plan: Run crash_test
      
      Reviewers: rven, yhchiang, IslamAbdelRahman, kradhakrishnan
      
      Reviewed By: kradhakrishnan
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D43461
      1205bdbc
    • B
      Merge branch 'master' of github.com:facebook/rocksdb · d5c0a6da
      Committed by Boyang Zhang
      Fixed memory leak error
      d5c0a6da
    • B
      Made change to fix the memory leak · 2d41403f
      Committed by Boyang Zhang
      Summary: So I took a look: the code used a raw pointer to TableBuilder. Changed it to a unique_ptr so the builder is freed on every path. I think this should work, but I cannot run valgrind correctly on my local machine to verify it.

      Test Plan: Run valgrind; it is not working locally (it reports an unrecognized instruction).
      
      Reviewers: yhchiang
      
      Subscribers: dhruba, sdong
      
      Differential Revision: https://reviews.facebook.net/D43485
      2d41403f
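
      The general shape of the fix, with illustrative names rather than the
      actual sst_dump code:

        #include <memory>

        struct Builder {
          void Add(int k, int v) { /* ... */ }
        };

        void BuildSomething(bool bail_early) {
          // Before: Builder* b = new Builder;  // leaked on the early return
          std::unique_ptr<Builder> b(new Builder);
          if (bail_early) {
            return;  // b is still freed here
          }
          b->Add(1, 2);
        }  // and here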
    • S
      fix memory corruption issue in sst_dump --show_compression_sizes · 92f7039e
      Committed by sdong
      Summary: In "sst_dump --show_compression_sizes", TableBuilderOptions keeps a reference to a CompressionOptions object that is destroyed before the reference's last use, leaving a dangling reference and causing memory corruption (see the sketch below).
      
      Test Plan: Run valgrind against SSTDumpToolTest.CompressedSizes and make sure it is fixed
      
      Reviewers: IslamAbdelRahman, yhchiang, kradhakrishnan, rven
      
      Reviewed By: rven
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D43497
      92f7039e
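
      The shape of the bug, with illustrative types: a struct holding a const
      reference outlives the temporary the reference was bound to.

        struct CompressionOpts { int level = -1; };

        struct BuilderOpts {
          explicit BuilderOpts(const CompressionOpts& copts)
              : compression_opts(copts) {}
          const CompressionOpts& compression_opts;  // does not own
        };

        BuilderOpts MakeBroken() {
          CompressionOpts local;       // destroyed when this function returns
          return BuilderOpts(local);   // the stored reference now dangles
        }
        // Fix: store a copy, or keep the referenced object alive for the
        // holder's entire lifetime.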
2. 04 Aug, 2015 (6 commits)
    • Y
      Fix compile warning in compact_on_deletion_collector in some environments · be8621ff
      Committed by Yueh-Hsuan Chiang
      Summary: Fix a compile warning in compact_on_deletion_collector in some environments
      
      Test Plan: make
      
      Reviewers: igor, sdong, anthony, IslamAbdelRahman
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43467
      be8621ff
    • Y
      Add CompactOnDeletionCollector in utilities/table_properties_collectors. · 26894303
      Committed by Yueh-Hsuan Chiang
      Summary:
      This diff adds CompactOnDeletionCollector in utilities/table_properties_collectors.
      It slides a window across an SST file's entries and marks the file as
      need-compaction when it observes enough deletion entries among the
      consecutive keys covered by the window (see the sketch below).
      
      Test Plan: compact_on_deletion_collector_test
      
      Reviewers: igor, anthony, IslamAbdelRahman, kradhakrishnan, yoshinorim, sdong
      
      Reviewed By: sdong
      
      Subscribers: maykov, dhruba
      
      Differential Revision: https://reviews.facebook.net/D41175
      26894303
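
      Registering the collector; the window size and trigger below are
      example values, not defaults:

        #include "rocksdb/options.h"
        #include "rocksdb/utilities/table_properties_collectors.h"

        rocksdb::Options MakeOptions() {
          rocksdb::Options options;
          // Mark a file as need-compaction once any window of 128
          // consecutive entries contains at least 50 deletions.
          options.table_properties_collector_factories.emplace_back(
              rocksdb::NewCompactOnDeletionCollectorFactory(
                  128 /* sliding_window_size */, 50 /* deletion_trigger */));
          return options;
        }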
    • V
      Fix CompactFiles by adding all necessary files · 20b244fc
      Committed by Venkatesh Radhakrishnan
      Summary:
      The CompactFiles API had a bug where some overlapping files were not
      added: files which overlap with files that were added to the compaction
      input files, but not with the original set of input files. This happens
      only when more than two levels are involved in the compaction. An
      example illustrates this:
      
      Level 2 has one input file, 1.sst, which spans [20,30].

      Level 3 has added file 2.sst, which spans [10,25].

      Level 4 has file 3.sst, which spans [35,40], and
              input file 4.sst, which spans [46,50].
      
      The existing code would not add 3.sst to the set of input_files because
      it only becomes an overlapping file in level 4 and it wasn't one in
      level 3.
      
      When installing the results of the compaction, 3.sst would overlap with
      the output file of CompactFiles and trip the assertion in
      version_set.cc:1130:

        // Must not overlap
        assert(level <= 0 || level_files->empty() ||
               internal_comparator_->Compare(
                   (*level_files)[level_files->size() - 1]->largest,
                   f->smallest) < 0);

      This change also adds overlapping files from the current level to the
      set of input files, so that the assertion above is no longer hit.
      
      Test Plan:
      d=/tmp/j; rm -rf $d; seq 1000 | parallel --gnu --eta
      'd=/tmp/j/d-{}; mkdir -p $d; TEST_TMPDIR=$d ./db_compaction_test
      --gtest_filter=*CompactilesOnLevel* --gtest_also_run_disabled_tests >&
      '$d'/log-{}'
      
      Reviewers: igor, yhchiang, sdong
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43437
      20b244fc
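
      A sketch of the API being fixed, reusing the example above (the file
      names and output level are illustrative):

        #include <string>
        #include <vector>
        #include "rocksdb/db.h"

        rocksdb::Status CompactSome(rocksdb::DB* db) {
          std::vector<std::string> input_files = {"1.sst", "2.sst", "4.sst"};
          // With this fix, overlapping files in the output level (3.sst in
          // the example) are pulled into the compaction automatically
          // instead of tripping the "must not overlap" assertion.
          return db->CompactFiles(rocksdb::CompactionOptions(), input_files,
                                  4 /* output_level */);
        }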
    • V
      Make SuggestCompactRangeNoTwoLevel0Compactions deterministic · 87df6295
      Committed by Venkatesh Radhakrishnan
      Summary:
      Made SuggestCompactRangeNoTwoLevel0Compactions deterministic by forcing
      a flush after generating each file and waiting for compaction at the end.
      
      Test Plan: Run SuggestCompactRangeNoTwoLevel0Compactions
      
      Reviewers: yhchiang, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43449
      87df6295
    • A
      Parallelize L0-L1 Compaction: Restructure Compaction Job · 40c64434
      Committed by Ari Ekmekji
      Summary:
      As of now, compactions involving files from Level 0 and Level 1 are
      single-threaded because the files in L0, although sorted, are not
      range-partitioned like the other levels. This means that during L0-L1
      compaction each file from L1 needs to be merged with potentially all
      the files from L0.
      
      This attempt to parallelize the L0-L1 compaction assigns a thread and a
      corresponding iterator to each L1 file that then considers only the key range
      found in that L1 file and only the L0 files that have those keys (and only the
      specific portion of those L0 files in which those keys are found). In this way
      the overlap is minimized and potentially eliminated between different iterators
      focusing on the same files.
      
      The first step is to restructure the compaction logic to break L0-L1 compactions
      into multiple, smaller, sequential compactions. Eventually each of these smaller
      jobs will be run simultaneously. Areas to pay extra attention to:

        1. Correct aggregation of compaction job statistics across multiple threads
        2. Proper opening/closing of output files (make sure each thread's is unique)
        3. Keys that span multiple L1 files
        4. Skewed distributions of keys within L0 files
      
      Test Plan: Make and run db_test (newer version has separate compaction tests) and compaction_job_stats_test
      
      Reviewers: igor, noetzli, anthony, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: MarkCallaghan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D42699
      40c64434
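
      An illustrative sketch of the range-partitioning idea (not the actual
      RocksDB internals): each L1 file's key span becomes one unit of work,
      and the matching L0 inputs are restricted to that span.

        #include <string>
        #include <vector>

        struct FileRange {
          std::string smallest;
          std::string largest;
        };

        std::vector<FileRange> SplitByL1Files(
            const std::vector<FileRange>& l1_files) {
          std::vector<FileRange> subcompactions;
          for (const FileRange& f : l1_files) {
            // One smaller, independently runnable compaction per L1 file;
            // only the slices of L0 overlapping this span are merged in.
            subcompactions.push_back({f.smallest, f.largest});
          }
          return subcompactions;
        }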
    • S
      dump_manifest supports DBs with more levels · 47316c2d
      Committed by sdong
      Summary: ldb dump_manifest currently refuses to work if there are 20 levels. Extend the limit to 64.
      
      Test Plan: Run the tool on a DB with 20 levels
      
      Reviewers: kradhakrishnan, anthony, IslamAbdelRahman, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D42879
      47316c2d
3. 01 Aug, 2015 (1 commit)
4. 31 Jul, 2015 (2 commits)
    • A
      Fixing fprintf of non-string literal · 544be638
      Committed by Andres Noetzli
      Summary:
      sst_dump_tool contains two instances of `fprintf` where the `format`
      argument is not a string literal. This prevents the code from compiling
      under some compilers/compiler options because of the potential security
      risks associated with printing non-literals (see the sketch below).
      
      Test Plan: make all
      
      Reviewers: rven, igor, yhchiang, sdong, anthony
      
      Reviewed By: anthony
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43305
      544be638
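
      The pattern being fixed: a non-literal format string lets stray '%'
      sequences in the data be interpreted as conversions, and flags such as
      -Werror=format-security turn the warning into a build failure.

        #include <cstdio>
        #include <string>

        void PrintMessage(const std::string& msg) {
          // Unsafe: fprintf(stdout, msg.c_str());
          fprintf(stdout, "%s", msg.c_str());  // data is data, not a format
        }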
    • A
      Fixing dead code in table_properties_collector_test · 193dc977
      Committed by Andres Notzli
      Summary:
      There was a bug in table_properties_collector_test that this patch
      is fixing: `!backward_mode && !test_int_tbl_prop_collector` in
      TestCustomizedTablePropertiesCollector was never true, so the code
      in the if-block never got executed. The reason is that the
      CustomizedTablePropertiesCollector test was skipping tests with
      `!backward_mode_ && !encode_as_internal`. The reason for skipping
      the tests is unknown.
      
      Test Plan: make table_properties_collector_test && ./table_properties_collector_test
      
      Reviewers: rven, igor, yhchiang, anthony, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43281
      193dc977
5. 30 Jul, 2015 (4 commits)
    • B
      Merge branch 'master' of github.com:facebook/rocksdb · 05d4265a
      Committed by Boyang Zhang
      05d4265a
    • B
      Compression sizes option for sst_dump_tool · 4be6d441
      Committed by Boyang Zhang
      Summary:
      Added a new feature to sst_dump_tool.cc that allows a user to see the sizes produced by the different compression algorithms on an .sst file.
      
      Usage:
      ./sst_dump --file=<filename> --show_compression_sizes
      ./sst_dump --file=<filename> --show_compression_sizes --set_block_size=<block_size>
      
      Note: If you do not set a block size, it defaults to 16 KB
      
      Test Plan: manual testing, plus a new unit test
      
      Reviewers: IslamAbdelRahman, anthony, yhchiang, rven, kradhakrishnan, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D42963
      4be6d441
    • A
      WriteBatch Save Points · 8161bdb5
      Committed by agiardullo
      Summary:
      Support RollbackToSavePoint() in WriteBatch and WriteBatchWithIndex.  Support for partial transaction rollback is needed for MyRocks.
      
      An alternate implementation of Transaction::RollbackToSavePoint() exists in D40869.  However, the other implementation is messier because it is implemented outside of WriteBatch.  This implementation is much cleaner and also exposes a potentially useful feature to WriteBatch.
      
      Test Plan: Added unit tests
      
      Reviewers: IslamAbdelRahman, kradhakrishnan, maykov, yoshinorim, hermanlee4, spetrunia, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D42723
      8161bdb5
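
      A minimal sketch of the new WriteBatch API:

        #include "rocksdb/write_batch.h"

        void SavePointDemo() {
          rocksdb::WriteBatch batch;
          batch.Put("a", "1");

          batch.SetSavePoint();
          batch.Put("b", "2");
          batch.Delete("a");

          // Undo everything since the last SetSavePoint(); the batch again
          // holds only Put("a", "1").
          rocksdb::Status s = batch.RollbackToSavePoint();
          // s would be NotFound() if no save point had been set.
        }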
    • S
      tools/db_crashtest2.py should run on the same DB · 7bfae3a7
      Committed by sdong
      Summary:
      Crash tests are supposed to restart the same DB after a crash, but the script currently opens a different DB. Fix it.
      It is probably a leftover from https://reviews.facebook.net/D17073.

      Test Plan: Run the test and make sure the same DB is opened.
      
      Reviewers: kradhakrishnan, rven, igor, IslamAbdelRahman, yhchiang, anthony
      
      Reviewed By: anthony
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D43197
      7bfae3a7
6. 29 Jul, 2015 (2 commits)
7. 28 Jul, 2015 (2 commits)
8. 25 Jul, 2015 (1 commit)
    • A
      Add missing hashCode() implementation · 6a82fba7
      Committed by Andres Noetzli
      Summary:
      Whenever a Java class implements equals(), it has to implement hashCode();
      otherwise instances can behave unpredictably when used as keys in
      hash-based collections such as HashMap. This adds two missing hashCode()
      implementations and extends the tests to exercise them.
      
      Test Plan: make jtest
      
      Reviewers: rven, igor, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: anthony, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D43017
      6a82fba7