1. 10 Jul 2015 (6 commits)
  2. 09 Jul 2015 (3 commits)
    • All of these are in the new code added past 3.10 · d8586ab2
      Committed by Dmitri Smirnov
      1) Fix a crash in env_win.cc that prevented db_test and some new tests from running to completion.
      2) Fix the new corruption tests in DBTest by allowing shared truncation of files. Note that this is generally needed ONLY for tests.
      3) Close the database so the WAL is closed prior to inducing corruption, similar to what we do in the Corruption tests.
    • Fix function name format according to Google style · 4bed00a4
      Committed by Poornima Chozhiyath Raman
      Summary: Change the naming style of getters and setters in compaction.h to follow the Google C++ style.
      
      Test Plan: Compilation success
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D41265
    • Added multi-WAL log testing to recovery tests. · e2e3d84b
      Committed by krad
      Summary: Currently there is no test in the suite for the case where
      there are multiple WAL files and one of them is corrupted. We have
      tests for single-WAL-file corruption scenarios. Added tests mocking
      these scenarios for all combinations of recovery modes and corruption
      in specified file locations.
      
      Test Plan: Run make check
      
      Reviewers: sdong, igor
      
      CC: leveldb@
      
      Task ID: #7501229
      
      Blame Rev:
  3. 08 Jul 2015 (13 commits)
  4. 07 Jul 2015 (2 commits)
    • Added tests for ExpandWhileOverlapping() · 58d7ab3c
      Committed by Andres Notzli
      Summary:
      This patch adds three test cases for ExpandWhileOverlapping()
      to the compaction_picker_test test suite.
      ExpandWhileOverlapping() only has an effect if the comparison
      function for the internal keys allows for overlapping user
      keys in different SST files on the same level. Thus, this
      patch adds a comparator based on sequence numbers to
      compaction_picker_test for the new test cases.
      
      Test Plan:
      - make compaction_picker_test && ./compaction_picker_test
        -> All tests pass
      - Replace body of ExpandWhileOverlapping() with `return true`
        -> Compile and run ./compaction_picker_test as before
        -> New tests fail
      
      Reviewers: sdong, yhchiang, rven, anthony, IslamAbdelRahman, kradhakrishnan, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D41277
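To make the precondition concrete, here is a minimal sketch of such a comparator; the types and names are illustrative, not the ones used in compaction_picker_test:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Illustrative internal key: a user key plus a sequence number.
struct InternalKey {
  std::string user_key;
  uint64_t seq;
};

// Orders internal keys by sequence number alone (newest first), ignoring the
// user key entirely. Under such an ordering, SST files on the same level can
// hold overlapping user keys, so ExpandWhileOverlapping() has real work to do.
struct BySeqComparator {
  int Compare(const InternalKey& a, const InternalKey& b) const {
    if (a.seq > b.seq) return -1;  // newer keys sort first
    if (a.seq < b.seq) return 1;
    return 0;  // equal sequence numbers compare equal, whatever the user key
  }
};
```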
    • Fix compaction_job_test · 155ce60d
      Committed by Igor Canadi
      Summary:
      Two issues:
      * The input keys to the compaction don't include sequence numbers.
      * The sequence number is set to max(seq_num), but it should be set to max(seq_num)+1, because the condition here is strictly-greater (i.e. we will only zero out a sequence number if the DB's sequence number is strictly greater than the key's sequence number): https://github.com/facebook/rocksdb/blob/master/db/compaction_job.cc#L830
      
      Test Plan: make compaction_job_test && ./compaction_job_test
      
      Reviewers: sdong, lovro
      
      Reviewed By: lovro
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D41247
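The strictly-greater condition can be sketched in isolation (the helper name here is hypothetical; the real check lives in compaction_job.cc):

```cpp
#include <cassert>
#include <cstdint>

// Zero out a key's sequence number only when the visible sequence number is
// strictly greater than the key's own. With earliest_seq == key_seq nothing
// happens, which is why the test must feed max(seq_num) + 1, not max(seq_num).
uint64_t MaybeZeroOutSeq(uint64_t key_seq, uint64_t earliest_seq) {
  return (earliest_seq > key_seq) ? 0 : key_seq;  // strictly-greater check
}
```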
  5. 06 Jul 2015 (1 commit)
    • Replace std::priority_queue in MergingIterator with custom heap · b6655a67
      Committed by lovro
      Summary:
      While profiling compaction in our service I noticed a lot of CPU (~15% of compaction) being spent in MergingIterator and key comparison.  Looking at the code I found MergingIterator was (understandably) using std::priority_queue for the multiway merge.
      
      Keys in our dataset include sequence numbers that increase with time.  Adjacent keys in an L0 file are very likely to be adjacent in the full database.  Consequently, compaction will often pick a chunk of rows from the same L0 file before switching to another one.  It would be great to avoid the O(log K) operation per row while compacting.
      
      This diff replaces std::priority_queue with a custom binary heap implementation.  It has a "replace top" operation that is cheap when the new top is the same as the old one (i.e. the priority of the top entry is decreased but it still stays on top).
      
      Test Plan:
      make check
      
      To test the effect on performance, I generated databases with data patterns that mimic what I describe in the summary (rows have a mostly increasing sequence number).  I see a 10-15% CPU decrease for compaction (and a matching throughput improvement on tmpfs).  The exact improvement depends on the number of L0 files and the amount of locality.  Performance on randomly distributed keys seems on par with the old code.
      
      Reviewers: kailiu, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: yoshinorim, dhruba, tnovak
      
      Differential Revision: https://reviews.facebook.net/D29133
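A minimal sketch of such a heap follows, assuming a min-heap over plain values; the RocksDB version is templated over iterators and comparators, so this is only the shape of the idea:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Binary min-heap with a replace_top() operation: pop + push fused into one
// sift-down. When the replacement still belongs on top (the common case in a
// merge with good locality), it costs O(1) comparisons instead of O(log K).
template <typename T, typename Compare = std::less<T>>
class ReplaceTopHeap {
 public:
  void push(T v) {
    data_.push_back(std::move(v));
    size_t i = data_.size() - 1;
    while (i > 0) {  // sift up
      size_t parent = (i - 1) / 2;
      if (!cmp_(data_[i], data_[parent])) break;
      std::swap(data_[i], data_[parent]);
      i = parent;
    }
  }
  const T& top() const { return data_[0]; }
  // Replace the root, then sift down only as far as needed.
  void replace_top(T v) {
    data_[0] = std::move(v);
    sift_down(0);
  }
  size_t size() const { return data_.size(); }

 private:
  void sift_down(size_t i) {
    const size_t n = data_.size();
    for (;;) {
      size_t l = 2 * i + 1, smallest = i;
      if (l < n && cmp_(data_[l], data_[smallest])) smallest = l;
      if (l + 1 < n && cmp_(data_[l + 1], data_[smallest])) smallest = l + 1;
      if (smallest == i) return;  // new top already in place: cheap path
      std::swap(data_[i], data_[smallest]);
      i = smallest;
    }
  }
  std::vector<T> data_;
  Compare cmp_;
};
```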
  6. 03 Jul 2015 (10 commits)
    • Arena needs mman header for mmap · e25ee32e
      Committed by Dmitri Smirnov
    • Merge the latest changes from github/master · d2f0912b
      Committed by Dmitri Smirnov
    • Introduce InfoLogLevel::HEADER_LEVEL · 35cd75c3
      Committed by Ari Ekmekji
      Summary:
       Introduced a new category, HEADER_LEVEL, in the InfoLogLevel enum in env.h.
       Modified Log() in env.cc to use Header() when InfoLogLevel == HEADER_LEVEL.
       Updated tests in auto_roll_logger_test to ensure the header is handled
       properly in these cases.
      
      Test Plan: Augment existing tests in auto_roll_logger_test
      
      Reviewers: igor, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D41067
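The dispatch the summary describes can be sketched as follows; the signatures are simplified (the real Log() is a varargs free function over rocksdb::Logger), and the toy logger exists only to make the routing visible:

```cpp
#include <cassert>
#include <string>

enum class InfoLogLevel { kInfo, kWarn, kError, kHeader };

// Toy logger that records which sink the last message went through.
struct Logger {
  std::string last_sink;
  void Header(const std::string& msg) { last_sink = "header:" + msg; }
  void Logv(const std::string& msg) { last_sink = "log:" + msg; }
};

// Route HEADER_LEVEL messages to Header() so the rolling logger can treat
// them specially, instead of handling them as ordinary log lines.
void Log(InfoLogLevel level, Logger* logger, const std::string& msg) {
  if (level == InfoLogLevel::kHeader) {
    logger->Header(msg);
  } else {
    logger->Logv(msg);
  }
}
```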
    • Fixed endless loop in DBIter::FindPrevUserKey() · acee2b08
      Committed by Yueh-Hsuan Chiang
      Summary: Fixed endless loop in DBIter::FindPrevUserKey()
      
      Test Plan: ./db_stress --test_batches_snapshots=1 --threads=32 --write_buffer_size=4194304 --destroy_db_initially=0 --reopen=20 --readpercent=45 --prefixpercent=5 --writepercent=35 --delpercent=5 --iterpercent=10 --db=/tmp/rocksdb_crashtest_KdCI5F --max_key=100000000 --mmap_read=0 --block_size=16384 --cache_size=1048576 --open_files=500000 --verify_checksum=1 --sync=0 --progress_reports=0 --disable_wal=0 --disable_data_sync=1 --target_file_size_base=2097152 --target_file_size_multiplier=2 --max_write_buffer_number=3 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --filter_deletes=0 --memtablerep=prefix_hash --prefix_size=7 --ops_per_thread=200 --kill_random_test=97
      
      Reviewers: tnovak, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D41085
    • [wal changes 1/3] fixed unbounded wal growth in some workloads · 218487d8
      Committed by Mike Kolupaev
      Summary:
      This fixes the following scenario we've hit:
       - we reached max_total_wal_size, created a new wal and scheduled flushing all memtables corresponding to the old one,
       - before the last of these flushes started its column family was dropped; the last background flush call was a no-op; no one removed the old wal from alive_logs_,
       - hours have passed and no flushes happened even though lots of data was written; data is written to different column families, compactions are disabled; old column families are dropped before memtable grows big enough to trigger a flush; the old wal still sits in alive_logs_ preventing max_total_wal_size limit from kicking in,
        - a few more hours pass and we run out of disk space because of one huge .log file.
      
      Test Plan: `make check`; backported the new test, checked that it fails without this diff
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D40893
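The bookkeeping at fault can be sketched as reference counting per WAL; the names here are hypothetical (the real fix touches alive_logs_ maintenance in DBImpl):

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Each alive WAL carries the number of unflushed memtables still referencing
// it. The bug: a dropped column family's no-op "flush" never released its
// reference, so the oldest log stayed alive and max_total_wal_size never
// kicked in.
struct WalTracker {
  std::map<uint64_t, int> refs;        // log number -> referencing memtables
  std::map<uint64_t, uint64_t> sizes;  // log number -> file size in bytes

  void AddLog(uint64_t log, uint64_t size, int memtables) {
    refs[log] = memtables;
    sizes[log] = size;
  }
  // Must be called on flush completion AND when a column family is dropped.
  void Unref(uint64_t log) {
    if (--refs[log] == 0) {
      refs.erase(log);
      sizes.erase(log);  // log no longer counts toward max_total_wal_size
    }
  }
  uint64_t TotalWalSize() const {
    uint64_t total = 0;
    for (const auto& kv : sizes) total += kv.second;
    return total;
  }
};
```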
    • feb99c31
    • Fix unity build by removing anonymous namespace · e70115e7
      Committed by Aaron Feldman
      Summary: see title
      
      Test Plan: run 'make unity'
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D41079
    • Prepare 3.12 · 4159f5b8
      Committed by agiardullo
      Summary: About to cut release
      
      Test Plan: none
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D41061
    • Multithreaded backup and restore in BackupEngineImpl · a69bc91e
      Committed by Aaron Feldman
      Summary:
      Add a new field, BackupableDBOptions.max_background_copies.
      CreateNewBackup() and RestoreDBFromBackup() will use this number of threads to perform copies.
      If there is a backup rate limit, then max_background_copies must be 1.
      Update backupable_db_test.cc to test multi-threaded backup and restore,
      and to test backups when the backup environment differs from the database environment.
      
      Test Plan:
      Run ./backupable_db_test
      Run valgrind ./backupable_db_test
      Run with TSAN and ASAN
      
      Reviewers: yhchiang, rven, anthony, sdong, igor
      
      Reviewed By: igor
      
      Subscribers: yhchiang, anthony, sdong, leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D40725
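The fan-out can be sketched with a shared atomic work index; the names are hypothetical, and BackupEngineImpl's real implementation has its own work queue and rate limiting:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <thread>
#include <vector>

// Copy every file in `files`, using up to `max_background_copies` worker
// threads. `copy_one` stands in for the per-file copy (and checksum) work.
void ParallelCopy(const std::vector<std::string>& files,
                  int max_background_copies,
                  const std::function<void(const std::string&)>& copy_one) {
  std::atomic<size_t> next{0};
  std::vector<std::thread> workers;
  for (int t = 0; t < max_background_copies; ++t) {
    workers.emplace_back([&] {
      // Each worker claims the next unclaimed file until none remain.
      for (size_t i = next.fetch_add(1); i < files.size();
           i = next.fetch_add(1)) {
        copy_one(files[i]);
      }
    });
  }
  for (auto& w : workers) w.join();
}
```

This also shows why the commit restricts rate-limited backups to one thread: a single shared limiter across concurrent copies would need extra coordination.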
    • 9dbde727
  7. 02 Jul 2015 (5 commits)
    • [RocksJava] Fixed test failures · 03d433ee
      Committed by Yueh-Hsuan Chiang
      Summary:
      The option bottommost_level_compaction was introduced recently.
      This option breaks the Java API behavior. To prevent the library
      from doing so, we set that option to a fixed value in Java.

      In the future we are going to remove that portion and replace the
      hardcoded options using a more flexible approach.
      
      Fixed a bug introduced by the WriteBatchWithIndex patch.

      Recently icanadi changed the behavior of WriteBatchWithIndex.
      See commit: 821cff11

      This commit solves problems introduced by the above-mentioned commit.
      
      Test Plan:
      make rocksdbjava
      make jtest
      
      Reviewers: adamretter, ankgup87, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: igor, dhruba
      
      Differential Revision: https://reviews.facebook.net/D40647
    • Address GCC compilation issues · ca2fe2c1
      Committed by Dmitri Smirnov
      Fixes the following GCC diagnostics:
       - invalid suffix on literal
       - no return statement in function returning non-void (CuckooStep::operator=)
       - extra qualification 'rocksdb::spatial::Variant::
       - dereferencing type-punned pointer will break strict-aliasing rules
    • Fix header inclusion · 19e13a59
      Committed by Dmitri Smirnov
    • Windows Port from Microsoft · 18285c1e
      Committed by Dmitri Smirnov
       Summary: Make RocksDB build and run on Windows, functionally
       complete and performant. All existing test cases run with no
       regressions. Performance numbers are in the pull request.
      
       Test plan: make all of the existing unit tests pass, obtain perf numbers.
      
       Co-authored-by: Praveen Rao praveensinghrao@outlook.com
       Co-authored-by: Sherlock Huang baihan.huang@gmail.com
       Co-authored-by: Alex Zinoviev alexander.zinoviev@me.com
       Co-authored-by: Dmitri Smirnov dmitrism@microsoft.com