1. 10 Apr 2015 (1 commit)
    • Remove use of whole-archive to include jemalloc · 91df4e96
      Committed by Igor Canadi
      Summary: I don't think we need to use whole-archive to include jemalloc. This change only affects our development builds -- it does not affect our open source builds (which don't support jemalloc) or our fbcode third-party2 builds (which use open-source build codepaths).
      
      Test Plan:
      make
      verify that jemalloc is running by running `MALLOC_CONF="prof:true" ./cache_test` and observing that file was created
      
      Reviewers: MarkCallaghan
      
      Reviewed By: MarkCallaghan
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36783
  2. 09 Apr 2015 (7 commits)
    • Add thread-safety documentation to MemTable and related classes · 84c5bd7e
      Committed by agiardullo
      Summary: Other than making some class members private, this is a documentation-only change
      
      Test Plan: unit tests
      
      Reviewers: sdong, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36567
    • Script to check whether RocksDB can read DB generated by previous releases and vice versa · ee9bdd38
      Committed by sdong
      Summary: Add a script that checks out each release from a list of tags, builds it, and loads the same data with it. Finally, it checks out the target build and makes sure it can successfully open the DB and read back all the data. It is implemented with the ldb tool, because ldb is available in all previous builds, so we don't have to cross-build anything.
      
      Test Plan: Run the script.
      
      Reviewers: yhchiang, rven, anthony, kradhakrishnan, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36639
    • Enabling checksum in repair db as it should have been. · 2b019a15
      Committed by krad
      Summary: I think the checksum was turned off by mistake.
      
      Test Plan: Run make check
      
      Reviewers: igor, sdong, chip
    • Create EnvOptions using sanitized DB Options · b1bbdd79
      Committed by sdong
      Summary: EnvOptions is currently created from unsanitized DB options. bytes_per_sync is supposed to be turned off when rate_limiter is used, but that change doesn't take effect. Create EnvOptions from the sanitized options instead.
      
      Test Plan: See different I/O pattern in db_bench running fillseq.
      
      Reviewers: yhchiang, kradhakrishnan, rven, anthony, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36723
    • Fix Makefile · edbb08b5
      Committed by Igor Canadi
      Summary: These two files are test binaries and are not included in TESTS in Makefile.
      
      Test Plan: `make clean` now deletes those files, too
      
      Reviewers: sdong, kradhakrishnan, meyering
      
      Reviewed By: kradhakrishnan, meyering
      
      Subscribers: kradhakrishnan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36705
    • build: create .o files specifically for java-related targets · 199313dc
      Committed by Jim Meyering
      Summary:
      When building rocksdbjava and rocksdbjavastatic, create -fPIC-enabled
      binaries in a temporary subdirectory, jl/.
      * Makefile (java_libobjects): New variable.
      (java_libobjects): New rule.
      (CLEAN_FILES): Arrange for "make clean" to remove that temporary dir.
      (rocksdbjavastatic): Depend on the new variable.
      Remove useless OPT=... line.
      (rocksdbjava): Likewise.
      
      Test Plan:
        JAVA_HOME=/usr/local/jdk-7u67-64 PATH=$JAVA_HOME/bin:$PATH \
          make rocksdbjavastatic
      
      Reviewers: yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36645
    • Trivial move to cover multiple input levels · b118238a
      Committed by sdong
      Summary: Now trivial move is only triggered when moving from level n to n+1. With dynamic level base, it is possible that file is moved from level 0 to level n, while levels from 1 to n-1 are empty. Extend trivial move to this case.
      
      Test Plan: Add one more unit test of sequential loading. Non-trivial compaction happened without the patch and no longer happens with it.
      
      Reviewers: rven, yhchiang, MarkCallaghan, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba, IslamAbdelRahman
      
      Differential Revision: https://reviews.facebook.net/D36669
  3. 08 Apr 2015 (7 commits)
    • Fix formatting of USERS.md · e7adfe69
      Committed by Igor Canadi
    • Add USERS.md · 4e7543dc
      Committed by Igor Canadi
      Summary: See the file.
      
      Test Plan: none
      
      Reviewers: lgalanis, meyering, MarkCallaghan, yhchiang, rven, anthony, kradhakrishnan, jayadev, sdong
      
      Reviewed By: jayadev
      
      Subscribers: jayadev, rdallman, andybons, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36621
    • Log writer record format doc. · 58346b9e
      Committed by krad
      Summary: Added an ASCII diagram to represent the log writer record format.
      
      Test Plan: None
      
      Reviewers: sdong
      
      CC: leveldb
      
      Task ID: 6179896
      
    • Fix the compilation error in flashcache.cc on Mac · db6569cd
      Committed by Yueh-Hsuan Chiang
      Summary:
      Fix the following compilation error in flashcache.cc on Mac
      
      Undefined symbols for architecture x86_64:
      
      "rocksdb::NewFlashcacheAwareEnv(rocksdb::Env*, int)", referenced from:
          rocksdb::Benchmark::Open(rocksdb::Options*) in db_bench.o
      
      Test Plan: make db_bench
      
      Reviewers: sdong, igor, rven
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36657
    • build: don't use a glob for java/rocksjni/* · cba59200
      Committed by Jim Meyering
      Summary:
      * src.mk (JNI_NATIVE_SOURCES): New variable, so we don't have to use
      a glob in Makefile
      * Makefile (JNI_NATIVE_SOURCES): Remove glob-using definition, now
      that the explicit list of sources is in src.mk.
      
      Test Plan:
        Run this:
          JAVA_HOME=/usr/local/jdk-7u67-64 PATH=$JAVA_HOME/bin:$PATH \
            make rocksdbjava
      
      Reviewers: yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36633
    • Fix github issue #563 · c66483c1
      Committed by Igor Canadi
      Summary:
      As described in https://github.com/facebook/rocksdb/issues/563, we should add minor version to SONAME, since we break ABI with minor releases.
      
      I also turned PLATFORM_SHARED_VERSIONED to true by default. This is true in LevelDB and it was switched to false by D15117 for no apparent reason. It should only be false for iOS.
      
      Test Plan: `make shared_lib` produced librocksdb.dylib.3.10.0
      
      Reviewers: sdong, yhchiang, meyering
      
      Reviewed By: meyering
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36573
    • Integrate Jenkins with Phabricator · de22c7bd
      Committed by Igor Canadi
      Summary:
      After this diff, when a user submits a diff from Facebook's VPN
      network, we'll automatically trigger a jenkins test. Once jenkins test
      is done, we'll update the diff with test results.
      
      Test Plan:
      Made sure that jenkins build is triggered on `arc diff` and
      that result is reflected back on the diff
      
      Reviewers: sdong, rven, kradhakrishnan, anthony, yhchiang
      
      Reviewed By: anthony
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36555
  4. 07 Apr 2015 (7 commits)
    • Fix TSAN build error of D36447 · f1261407
      Committed by Yoshinori Matsunobu
      Summary:
      D36447 caused build error when using COMPILE_WITH_TSAN=1.
      This diff fixes the error.
      
      Test Plan: jenkins
      
      Reviewers: igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36579
    • Adding another NewFlashcacheAwareEnv function to support pre-opened fd · 824e6463
      Committed by Yoshinori Matsunobu
      Summary:
      There are some cases where the flashcache file descriptor has
      already been allocated (i.e. fb-MySQL). Then NewFlashcacheAwareEnv returns an
      error at open() because the fd was already assigned. This diff adds another
      function to instantiate FlashcacheAwareEnv with a pre-allocated fd, cachedev_fd.
      
      Test Plan: Tested with MyRocks using this function; it worked.
      
      Reviewers: sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, MarkCallaghan, rven
      
      Differential Revision: https://reviews.facebook.net/D36447
    • Clean up compression logging · 5e067a7b
      Committed by Igor Canadi
      Summary: Now we add warnings when a user configures a compression type that is not supported.
      
      Test Plan:
      Configured compression to non-supported values. Observed messages in my log:
      
          2015/03/26-12:17:57.586341 7ffb8a496840 [WARN] Compression type chosen for level 2 is not supported: LZ4. RocksDB will not compress data on level 2.
      
          2015/03/26-12:19:10.768045 7f36f15c5840 [WARN] Compression type chosen is not supported: LZ4. RocksDB will not compress data.
      
      Reviewers: rven, sdong, yhchiang
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35979
    • run 'make check's rules (and even subtests) in parallel · e3ee98b3
      Committed by Jim Meyering
      Summary:
      When GNU parallel is available, "make check" tests are now run in parallel.
      When /dev/shm is usable, we tell those tests to create temporary files therein.
      Now, the longest-running single test, db_test (which is composed of hundreds of sub-tests),
      is no longer run sequentially: instead, each of its sub-tests is run independently, and can
      be parallelized along with all other tests. To make that process easier, this change
      creates a temporary directory, "t/", in which it puts a small script for each of those
      subtests. The output from each parallel-run test is now saved in t/log-TEST_NAME.
      
      When GNU parallel is not available, we run the tests in sequence, just as before.
      If GNU parallel is available and you don't like the default of running one subtest
      per core, you can invoke "make J=1 check" to run only one test at a time.
      Beware: this will take a long time, and it starts with the two longest-running tests, so you
      will wait for a long time before seeing any results. Instead, if you want to use fewer resources
      but still see useful progress, try "make J=60% check". That will attempt to ensure that 60% of
      the cores are occupied by test runs.
      
      To watch progress of individual tests (duration, success (PASS-or-FAIL), name), run "make watch-log"
      in the same directory from another window.  That will start with something like this:
      
      and when complete should show numbers/names like this:
      
        Every 0.1s: sort -k7,7nr -k4,4gr LOG|perl -n -e '@a=split("\t",$_,-1); $t=$a[8]; $t =~ s,^\./,,;' -e '$t =~ s, >.*,,; chomp $t;' -e '$t =~ /.*--gtest_filter=...  Wed Apr  1 10:51:42 2015
      
        152.221 PASS t/DBTest.FileCreationRandomFailure
        109.280 PASS t/DBTest.EncodeDecompressedBlockSizeTest
         82.315 PASS reduce_levels_test
         77.812 PASS t/DBTest.CompactionFilterWithValueChange
         73.236 PASS backupable_db_test
         63.428 PASS deletefile_test
         57.248 PASS table_test
         55.665 PASS prefix_test
         49.816 PASS t/DBTest.RateLimitingTest
        ...
      
      Test Plan:
      Timings (measured so as to exclude compile and link times):
      With this change, all tests complete in 2m40s on a system for which nproc prints 32.
      Prior to this this change, "make check" would take 24.5 minutes on that same system.
      
      Here are durations (in seconds) of the longest-running subtests:
      
      152.435 PASS t/DBTest.FileCreationRandomFailure
      107.070 PASS t/DBTest.EncodeDecompressedBlockSizeTest
       81.391 PASS ./reduce_levels_test
       71.587 PASS ./backupable_db_test
       61.746 PASS ./deletefile_test
       57.960 PASS ./table_test
       55.230 PASS ./prefix_test
       54.060 PASS t/DBTest.CompactionFilterWithValueChange
       48.873 PASS t/DBTest.RateLimitingTest
       47.569 PASS ./fault_injection_test
       46.593 PASS t/DBTest.Randomized
       42.662 PASS t/DBTest.CompactionFilter
       31.793 PASS t/DBTest.SparseMerge
       30.612 PASS t/DBTest.CompactionFilterV2
       25.891 PASS t/DBTest.GroupCommitTest
       23.863 PASS t/DBTest.DynamicLevelMaxBytesBase
       22.976 PASS ./rate_limiter_test
       18.942 PASS t/DBTest.OptimizeFiltersForHits
       16.851 PASS ./env_test
       15.399 PASS t/DBTest.CompactionFilterV2WithValueChange
       14.827 PASS t/DBTest.CompactionFilterV2NULLPrefix
      
      Reviewers: igor, sdong, rven, yhchiang, igor.sugak
      
      Reviewed By: igor.sugak
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D35379
    • Avoid naming conflict of EntryType · a45e7581
      Committed by sdong
      Summary:
      Fix build break on travis build:
      
      $ OPT=-DTRAVIS V=1 make unity && make clean && OPT=-DTRAVIS V=1 make db_test && ./db_test
      
      ......
      
      In file included from unity.cc:65:0:
      ./table/plain_table_key_coding.cc: In member function ‘rocksdb::Status rocksdb::PlainTableKeyDecoder::NextPrefixEncodingKey(const char*, const char*, rocksdb::ParsedInternalKey*, rocksdb::Slice*, size_t*, bool*)’:
      ./table/plain_table_key_coding.cc:224:3: error: reference to ‘EntryType’ is ambiguous
         EntryType entry_type;
         ^
      In file included from ./db/table_properties_collector.h:9:0,
                       from ./db/builder.h:11,
                       from ./db/builder.cc:10,
                       from unity.cc:1:
      ./include/rocksdb/table_properties.h:81:6: note: candidates are: enum rocksdb::EntryType
       enum EntryType {
            ^
      In file included from unity.cc:65:0:
      ./table/plain_table_key_coding.cc:16:6: note:                 enum rocksdb::{anonymous}::EntryType
       enum EntryType : unsigned char {
            ^
      ./table/plain_table_key_coding.cc:231:51: error: ‘entry_type’ was not declared in this scope
           const char* pos = DecodeSize(key_ptr, limit, &entry_type, &size);
                                                         ^
      make: *** [unity.o] Error 1
      
      Test Plan:
      OPT=-DTRAVIS V=1 make unity
      
      And make sure it doesn't break anymore.
      
      Reviewers: yhchiang, kradhakrishnan, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36549
    • Add p99.9 and p99.99 response time to benchmark report, add new summary report · 3be82bc8
      Committed by Mark Callaghan
      Summary:
      This adds p99.9 and p99.99 response times to the benchmark report and
      adds a second report, report2.txt that has tests listed in test order rather
      than the time in which they were run, so overwrite tests are listed for
      all thread counts, then update etc.
      
      Also changes fillseq to compress all levels to avoid write-amp from rewriting
      uncompressed files when they reach the first level to compress.
      
      Increase max_write_buffer_number to avoid stalls during fillseq and make
      max_background_flushes agree with max_write_buffer_number.
      
      See https://gist.github.com/mdcallag/297ff4316a25cb2988f7 for an example
      of the new report (report2.txt)
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36537
    • A new callback to TablePropertiesCollector to let users know whether an entry is an add, delete, or merge · 953a885e
      Committed by sdong
      Summary:
      Currently users have no way to tell from the TablePropertiesCollector callback whether a key is an add, delete, or merge. Add a new function to expose it.
      
      Also refactor the code so that
      (1) the table property collector and the internal table property collector are two separate data structures, with the latter one now exposed
      (2) table builders only receive internal table properties
      
      Test Plan: Add cases in table_properties_collector_test to cover both the old and new ways of using TablePropertiesCollector.
      
      Reviewers: yhchiang, igor.sugak, rven, igor
      
      Reviewed By: rven, igor
      
      Subscribers: meyering, yoshinorim, maykov, leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D35373
  5. 04 Apr 2015 (2 commits)
    • avoid returning a number-of-active-keys estimate of nearly 2^64 · d2a92c13
      Committed by Jim Meyering
      Summary:
      If accumulated_num_non_deletions_ were ever smaller than
      accumulated_num_deletions_, the computation of
      "accumulated_num_non_deletions_ - accumulated_num_deletions_"
      would result in a logically "negative" value, but since
      the two operands are unsigned (uint64_t), the result corresponding
      to e.g., -1 would be 2^64-1.
      
      Instead, return 0 in that case.
      
      Test Plan:
        - ensure "make check" still passes
        - temporarily add an "abort();" call in the new "if"-block, and
            observe that it fails in some test cases.  However, note that
            this case is triggered only when the two numbers are equal.
            Thus, no test case triggers the erroneous behavior this
            change is designed to avoid. If anyone can construct a
            scenario in which that bug would be triggered, I'll be
            happy to add a test case.
      
      Reviewers: ljin, igor, rven, igor.sugak, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36489
    • Fix level size overflow for options_.level_compaction_dynamic_level_bytes=true · a7ac6cef
      Committed by sdong
      Summary: Int is used for level size targets when options_.level_compaction_dynamic_level_bytes=true, which will cause an overflow when the database grows big. Fix it.
      
      Test Plan: Add a new unit test which fails without the fix.
      
      Reviewers: rven, yhchiang, MarkCallaghan, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba, yoshinorim
      
      Differential Revision: https://reviews.facebook.net/D36453
  6. 03 Apr 2015 (2 commits)
    • db_test: clean up sync points during test cleanup · 089509b8
      Committed by sdong
      Summary: In some db_test tests, sync points are not cleared, which causes unexpected results in the tests that follow. Clear them during test cleanup.
      
      Test Plan:
      Run the same tests that used to fail:
      
      build using USE_CLANG=1 and run
      ./db_test --gtest_filter="DBTest.CompressLevelCompaction:*DBTestUniversalCompactionParallel*"
      
      Reviewers: rven, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36429
    • Disallow trivial move if compression level is different · afbafeae
      Committed by Venkatesh Radhakrishnan
      Summary:
      Check compression level of start_level with output_compression
      before allowing trivial move
      
      Test Plan: New DBTest CompressLevelCompactionThirdPath added
      
      Reviewers: igor, yhchiang, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36213
  7. 02 Apr 2015 (2 commits)
  8. 31 Mar 2015 (10 commits)
    • Script to trigger jenkins test · df71c6b9
      Committed by Igor Canadi
      Summary: After you run `arc diff`, just run `build_tools/trigger_jenkins_test.sh` and Jenkins will test your diff!
      
      Test Plan: Triggered a build to jenkins
      
      Reviewers: sdong, rven, IslamAbdelRahman, anthony, yhchiang, meyering
      
      Reviewed By: meyering
      
      Subscribers: meyering, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36021
    • Update COMMIT.md · 38a01ed1
      Committed by Tian Xia
    • Fix one non-determinism of DBTest.DynamicCompactionOptions · 76d63b45
      Committed by sdong
      Summary:
      After the recent change to DBTest.DynamicCompactionOptions, we occasionally hit another non-deterministic case where the L0 slowdown is triggered while the timeout should not be triggered for the hard limit.
      Fix it by increasing the L0 slowdown trigger at the same time.
      
      Test Plan: Run the failed test.
      
      Reviewers: igor, rven
      
      Reviewed By: rven
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36219
    • Universal Compactions with Small Files · b23bbaa8
      Committed by sdong
      Summary:
      With this change, we use L1 and up to store compaction outputs in universal compaction.
      The compaction pick logic stays the same. Outputs are stored in the largest "level" as possible.
      
      If options.num_levels=1, it behaves all the same as now.
      
      Test Plan:
      1) convert most of the existing unit tests for universal compaction to include the option of one level and multiple levels.
      2) add a unit test to cover parallel compaction in universal compaction and run it in one level and multiple levels
      3) add unit test to migrate from multiple level setting back to one level setting
      4) add a unit test to insert keys to trigger multiple rounds of compactions and verify results.
      
      Reviewers: rven, kradhakrishnan, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: meyering, leveldb, MarkCallaghan, dhruba
      
      Differential Revision: https://reviews.facebook.net/D34539
    • Makefile minor cleanup · 2511b7d9
      Committed by Igor Canadi
      Summary:
      Just couple of small changes:
      1. removed signal_test, since it doesn't seem useful and we don't even run it as part of `make check`
      2. moved perf_context_test to TESTS instead of PROGRAMS
      3. `make release` probably shouldn't compile benchmarks. We currently rely on `make release` building db_bench (via Jenkins), so I left db_bench there.
      
      This is just a minor cleanup. We need to rethink our targets since they are a bit messy right now. We can do this during our tech debt week.
      
      Test Plan: make release
      
      Reviewers: anthony, rven, yhchiang, sdong, meyering
      
      Reviewed By: meyering
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36171
    • Add --stats_interval_seconds to db_bench · 1bd70fb5
      Committed by Mark Callaghan
      Summary:
      The --stats_interval_seconds determines interval for stats reporting
      and overrides --stats_interval when set. I also changed tools/benchmark.sh
      to report stats every 60 seconds so I can avoid trying to figure out a
      good value for --stats_interval per test and per storage device.
      
      Task ID: #6631621
      
      Test Plan:
      run tools/run_flash_bench, look at output
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36189
    • Clean up old log files in background threads · fd3dbef2
      Committed by Igor Canadi
      Summary:
      Cleaning up log files can do heavy IO, since we call ftruncate() in the destructor. We don't want to call ftruncate() in user threads.
      
      This diff moves cleaning to background threads (flush and compaction)
      
      Test Plan: make check, will also run valgrind
      
      Reviewers: yhchiang, rven, MarkCallaghan, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36177
    • Make the benchmark scripts configurable and add tests · 99ec2412
      Committed by Mark Callaghan
      Summary:
      This makes run_flash_bench.sh configurable. Previously it was hardwired for 1B keys and tests
      ran for 12 hours each, which kept me from using it. This change makes it configurable, adds more tests,
      makes the duration per-test configurable, and refactors the test scripts.
      
      Adds the seekrandomwhilemerging test to db_bench which is the same as seekrandomwhilewriting except
      the writer thread does Merge rather than Put.
      
      Forces the stall-time column in compaction IO stats to use a fixed format (H:M:S), which makes
      it easier to scrape and parse. Also adds an option to AppendHumanMicros to force a fixed format.
      Sometimes automation and humans want different formats.
      
      Calls thread->stats.AddBytes(bytes); in db_bench for more tests to get the MB/sec summary
      stats in the output at test end.
      
      Adds the average ingest rate to compaction IO stats. Output now looks like:
      https://gist.github.com/mdcallag/2bd64d18be1b93adc494
      
      More information on the benchmark output is at https://gist.github.com/mdcallag/db43a58bd5ac624f01e1
      
      For benchmark.sh changes default RocksDB configuration to reduce stalls:
      * min_level_to_compress from 2 to 3
      * hard_rate_limit from 2 to 3
      * max_grandparent_overlap_factor and max_bytes_for_level_multiplier from 10 to 8
      * L0 file count triggers from 4,8,12 to 4,12,20 for (start,stall,stop)
      
      Task ID: #6596829
      
      Test Plan:
      run tools/run_flash_bench.sh
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36075
    • Fix clang build · 2158e0f8
      Committed by Igor Canadi
      Summary: as title
      
      Test Plan: clang builds
      
      Reviewers: leveldb
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36183
    • db_bench can now disable flashcache for background threads · d61cb0b9
      Committed by Igor Canadi
      Summary: Most of the approach is copied from WebSQL's MySQL branch. It's nice that we can do this without touching core RocksDB code.
      
      Test Plan: Compiles and runs. Didn't test the flashcache code, as I don't have a flashcache device and most of it is copy-pasted.
      
      Reviewers: MarkCallaghan, sdong
      
      Reviewed By: sdong
      
      Subscribers: rven, lgalanis, kradhakrishnan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35391
  9. 28 Mar 2015 (2 commits)
    • build: always attempt to update util/build_version.cc · 1c47c433
      Committed by Jim Meyering
      Summary:
      This fixes two bugs: "make clean" would never remove the generated
      file, util/build_version.cc, and since D33591, it would be regenerated
      only if it were absent.
      * Makefile (clean): Remove the generated file.
      (util/build_version.cc): Depend on the no-prereq FORCE target,
      so that this target's rules are always run.
      Since this is a generated file, make it read-only.
      Also, be sure to remove the temporary file when it is the same
      as the original.
      
      Test Plan:
      Ensure that we attempt regeneration every time.
      Make it empty with an up-to-date time stamp and demonstrate
      that it is rebuilt with the expected content:
      
        $ : > util/build_version.cc
        $ make util/build_version.o
         GEN      util/build_version.cc
         GEN      util/build_version.d
         GEN      util/build_version.cc
         CC       util/build_version.o
        $ cat util/build_version.cc
        #include "build_version.h"
        const char* rocksdb_build_git_sha = "rocksdb_build_git_sha:v3.10-2-gb30e72a";
        const char* rocksdb_build_git_date = "rocksdb_build_git_date:2015-03-27";
        const char* rocksdb_build_compile_date = __DATE__;
      
      Reviewers: igor.sugak, sdong, ljin, igor, rven
      
      Reviewed By: rven
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36087
    • Formalize the DB properties string definitions. · e018892b
      Committed by Herman Lee
      Summary:
      Assign the string properties to const string variables under the
      DB::Properties namespace. This helps catch typos during compilation and
      also consolidates the property definition in one place.
      
      Test Plan: Run rocksdb unit tests
      
      Reviewers: sdong, yoshinorim, igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D35991