1. 25 Jun 2015 (2 commits)
  2. 24 Jun 2015 (5 commits)
  3. 23 Jun 2015 (4 commits)
    • K
      Introduce WAL recovery consistency levels · de85e4ca
      Committed by krad
      Summary:
      The "one size fits all" approach with WAL recovery will only introduce inconvenience for our varied clients as we go forward. The current recovery is a bit heuristic. We introduce the following levels of consistency while replaying the WAL.
      
      1. RecoverAfterRestart (kTolerateCorruptedTailRecords)
      
      This mimics the current recovery behavior.
      
      2. RecoverAfterCleanShutdown (kAbsoluteConsistency)
      
      This is ideal for unit tests and cases where the store is shut down cleanly. We tolerate no corruption or incomplete writes.
      
      3. RecoverPointInTime (kPointInTimeRecovery)
      
      This is ideal when using devices with a controller cache or file systems that can lose data on restart. We recover up to the point where there is no corruption or incomplete write.
      
      4. RecoverAfterDisaster (kSkipAnyCorruptRecord)
      
      This is the mode of last resort for recovering data. We tolerate corruption and incomplete writes, and we hop over the sections we cannot make sense of, salvaging as many records as possible.
      
      Test Plan:
      (1) Run added unit test to cover all levels.
      (2) Run make check.
      
      Reviewers: leveldb, sdong, igor
      
      Subscribers: yoshinorim, dhruba
      
      Differential Revision: https://reviews.facebook.net/D38487
      de85e4ca
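      A minimal C++ sketch of choosing one of these levels through the options, assuming the WALRecoveryMode enum names above and a wal_recovery_mode option on DBOptions; the database path is illustrative:

          #include <cassert>
          #include "rocksdb/db.h"
          #include "rocksdb/options.h"

          int main() {
            rocksdb::Options options;
            options.create_if_missing = true;
            // Recover only up to the last consistent record; anything after a
            // corruption or incomplete write in the WAL tail is dropped
            // (level 3, kPointInTimeRecovery, above).
            options.wal_recovery_mode = rocksdb::WALRecoveryMode::kPointInTimeRecovery;

            rocksdb::DB* db = nullptr;
            rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/wal_recovery_demo", &db);
            assert(s.ok());
            delete db;
            return 0;
          }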
    • I
      Fix trivial move merge · 530534fc
      Committed by Islam AbdelRahman
      Summary: Fixing bad merge
      
      Test Plan: make -j64 check (this is not enough to verify the fix)
      
      Reviewers: igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D40521
      530534fc
    • K
      Add read_nanos to IOStatsContext. · 7015fd81
      Committed by krad
      Summary: MyRocks needs a mechanism to track read outliers. We need to expose this stat.
      
      Test Plan: None
      
      Reviewers: sdong
      
      CC: leveldb
      
      Task ID: #7152512
      
      Blame Rev:
      7015fd81
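      A hedged sketch of how the new counter could be read from the thread-local IOStatsContext; the get_iostats_context() accessor and the perf-level gating shown here are assumptions about the surrounding API, not part of this change:

          #include <cstdio>
          #include <string>
          #include "rocksdb/db.h"
          #include "rocksdb/iostats_context.h"
          #include "rocksdb/perf_level.h"

          void GetWithReadTiming(rocksdb::DB* db, const rocksdb::Slice& key) {
            // Assumption: timing counters need an elevated perf level to be populated.
            rocksdb::SetPerfLevel(rocksdb::PerfLevel::kEnableTimeExceptForMutex);
            rocksdb::get_iostats_context()->Reset();

            std::string value;
            db->Get(rocksdb::ReadOptions(), key, &value);

            // read_nanos: time this thread spent in read() calls, in nanoseconds.
            std::printf("read_nanos = %llu\n",
                        static_cast<unsigned long long>(
                            rocksdb::get_iostats_context()->read_nanos));
          }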
    • A
      Fix broken gflags link · 7160f5d8
      Committed by Aaron Feldman
      Summary: Fix broken gflags link
      
      Test Plan: Follow the link
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D40503
      7160f5d8
  4. 20 Jun 2015 (5 commits)
  5. 19 Jun 2015 (10 commits)
    • I
      Disable CompressLevelCompaction() if Zlib is not supported · bf03f59c
      Committed by Igor Canadi
      Summary: CompressLevelCompaction() depends on Zlib. We should skip it when Zlib is not present.
      
      Test Plan: `make check` without zlib
      
      Reviewers: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40401
      bf03f59c
    • Y
      Make autovector_test runnable in ROCKSDB_LITE · df719d49
      Committed by Yueh-Hsuan Chiang
      Summary: Make autovector_test runnable in ROCKSDB_LITE
      
      Test Plan: autovector_test
      
      Reviewers: sdong, rven, anthony, kradhakrishnan, IslamAbdelRahman, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40245
      df719d49
    • Y
      Block geodb_test in ROCKSDB_LITE · 4d6d4768
      Committed by Yueh-Hsuan Chiang
      Summary:
      Block geodb_test in ROCKSDB_LITE as geodb is not supported
      in ROCKSDB_LITE
      
      Test Plan: geodb_test
      
      Reviewers: sdong, rven, anthony, kradhakrishnan, IslamAbdelRahman, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40335
      4d6d4768
    • Y
      Remove unused target --- compactor_test · 71b438c4
      Committed by Yueh-Hsuan Chiang
      Summary:
      Remove compactor_test, which depends on a directory that does not exist
      in our code base.
          make compactor_test
          GEN      util/build_version.cc
          GEN      util/build_version.cc
          make: *** No rule to make target `utilities/compaction/compactor_test.o', needed by `compactor_test'.  Stop.
      
      Test Plan: verify the output message of make compactor_test
      
      Reviewers: rven, anthony, kradhakrishnan, igor, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40341
      71b438c4
    • Y
      Block utilities/write_batch_with_index in ROCKSDB_LITE · eade498b
      Committed by Yueh-Hsuan Chiang
      Summary:
      Block utilities/write_batch_with_index in ROCKSDB_LITE, as we
      don't include any utilities in ROCKSDB_LITE.
      
      Test Plan: write_batch_with_index_test
      
      Reviewers: rven, anthony, kradhakrishnan, IslamAbdelRahman, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40347
      eade498b
    • I
      Fail DB::Open() when the requested compression is not available · 760e9a94
      Committed by Igor Canadi
      Summary:
      Currently RocksDB silently ignores this issue and doesn't compress the data. Based on discussion, we agree that this is pretty bad because it can cause confusion for our users.
      
      This patch fails DB::Open() if we don't support the compression that is specified in the options.
      
      Test Plan: make check with LZ4 not present. If Snappy is not present all tests will just fail because Snappy is our default library. We should make Snappy the requirement, since without it our default DB::Open() fails.
      
      Reviewers: sdong, MarkCallaghan, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39687
      760e9a94
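      A minimal sketch of the caller-visible behavior, assuming a binary built without LZ4; the path and error handling are illustrative:

          #include <iostream>
          #include "rocksdb/db.h"
          #include "rocksdb/options.h"

          int main() {
            rocksdb::Options options;
            options.create_if_missing = true;
            options.compression = rocksdb::kLZ4Compression;

            rocksdb::DB* db = nullptr;
            rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/compression_demo", &db);
            if (!s.ok()) {
              // After this patch, Open() fails here instead of silently writing
              // uncompressed data when LZ4 support is not compiled in.
              std::cerr << "Open failed: " << s.ToString() << std::endl;
              return 1;
            }
            delete db;
            return 0;
          }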
    • A
      Add Cache.GetPinnedUsage() · 69bb210d
      Committed by Aaron Feldman
      Summary:
        Add the function Cache.GetPinnedUsage() to return the memory size of entries
        that are in use by the system (that is, all the entries not in the LRU list).
      
      Test Plan:
        Run ./cache_test and examine PinnedUsageTest.
      
      Reviewers: tnovak, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D40305
      69bb210d
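      A minimal sketch of querying the new counter on a standalone LRU cache; the cache size is illustrative:

          #include <cstdio>
          #include <memory>
          #include "rocksdb/cache.h"

          int main() {
            // An 8 MB LRU cache, as would typically back the block cache.
            std::shared_ptr<rocksdb::Cache> cache = rocksdb::NewLRUCache(8 << 20);

            // GetUsage() counts all entries; GetPinnedUsage() counts only entries
            // currently referenced by the system, i.e. not on the LRU list.
            std::printf("usage = %zu bytes, pinned = %zu bytes\n",
                        cache->GetUsage(), cache->GetPinnedUsage());
            return 0;
          }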
    • I
      Skip bottommost level compaction if possible · 4eabbdb7
      Committed by Islam AbdelRahman
      Summary:
      This is https://reviews.facebook.net/D39999, but after introducing an option to force compacting the bottommost level.
      
      Changes in this patch
      - Introduce force_bottommost_level_compaction in CompactRangeOptions, which forces compacting the bottommost level during compaction
      - Skip bottommost level compaction if we don't have a compaction filter and the force_bottommost_level_compaction option is not set
      
      Although tests pass on my machine, I suspect there may be some tests I am not aware of that should use force_bottommost_level_compaction to pass deterministically.
      
      Test Plan:
      make check
      adding new tests
      
      Reviewers: igor, sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D40059
      4eabbdb7
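      A hedged usage sketch; the force_bottommost_level_compaction member name is taken from this commit's summary and may be exposed differently in later releases:

          #include <cassert>
          #include "rocksdb/db.h"
          #include "rocksdb/options.h"

          // Force a full-range compaction that also rewrites the bottommost level,
          // e.g. to make a test deterministic even without a compaction filter.
          void ForceFullCompaction(rocksdb::DB* db) {
            rocksdb::CompactRangeOptions cro;
            cro.force_bottommost_level_compaction = true;  // name as per this patch
            rocksdb::Status s = db->CompactRange(cro, nullptr, nullptr);
            assert(s.ok());
          }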
    • I
      Don't dump DBOptions for each column family · 4b8bb62f
      Committed by Igor Canadi
      Summary: Currently we dump DBOptions alongside each set of column family options we dump. This leads to duplicate lines in our LOG file. This diff fixes that.
      
      Test Plan: Check out the LOG
      
      Reviewers: sdong, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: IslamAbdelRahman, yoshinorim, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39729
      4b8bb62f
    • P
      Merge branch 'master' of github.com:facebook/rocksdb · 176f0bed
      Committed by Poornima Chozhiyath Raman
      D40233: Replace %llu with format macros in ParsedInternalKey::DebugString()
      176f0bed
  6. 18 Jun 2015 (10 commits)
    • Y
      Fixed a bug of CompactionStats in multi-level universal compaction case · bb1c74ce
      Committed by Yueh-Hsuan Chiang
      Summary:
      Universal compaction can involve multiple levels.  However,
      the current implementation of bytes_readn and bytes_readnp1
      (and some other stats with the postfixes `n` and `np1`) assumes a compaction
      can only involve two levels.
      
      This patch fixes this bug and redefines bytes_readn and bytes_readnp1:
      * bytes_readnp1: the number of bytes read in the compaction output level.
      * bytes_readn: the total number of bytes read minus bytes_readnp1
      
      Test Plan: Add a test in compaction_job_stats_test
      
      Reviewers: igor, sdong, rven, anthony, kradhakrishnan, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40239
      bb1c74ce
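      For illustration with hypothetical numbers: if a universal compaction whose output level is L4 reads 10 MB from L0, 20 MB from L2, and 30 MB from L4 itself, then under the new definitions bytes_readnp1 = 30 MB and bytes_readn = (10 + 20 + 30) - 30 = 30 MB.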
    • P
      Merge branch 'master' of github.com:facebook/rocksdb · a66b8157
      Committed by Poornima Chozhiyath Raman
      D40233: Replace %llu with format macros in ParsedInternalKey::DebugString()
      a66b8157
    • P
      Replace %llu with format macros in ParsedInternalKey::DebugString() · f06be62f
      Committed by Poornima Chozhiyath Raman
      Test Plan: successfully compiled the code
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D40233
      f06be62f
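      For context, a minimal sketch of the portability pattern involved: formatting a 64-bit value with the <cinttypes> macros instead of a hard-coded %llu. The function below is illustrative, not the actual DebugString() code:

          #include <cinttypes>
          #include <cstdio>
          #include <string>

          std::string DebugKeyString(uint64_t sequence, int type) {
            char buf[64];
            // PRIu64 expands to the right conversion specifier for uint64_t on the
            // target platform; %llu is only correct where uint64_t is unsigned long long.
            std::snprintf(buf, sizeof(buf), "seq:%" PRIu64 ",type:%d", sequence, type);
            return std::string(buf);
          }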
    • I
      Add --benchmark_write_rate_limit option to db_bench · 2dc3910b
      Committed by Igor Canadi
      Summary:
      So far, we benchmarked RocksDB by writing as fast as possible. With this change, we're able to limit our write throughput, which should help us better understand how RocksDB performs under varying write workloads.
      
      Specifically, I'm currently interested in the shape of the graph that has write throughput on one axis and write rate on another. This should help us with designing our stall system, as we have started to do with D36351.
      
      Test Plan:
          $ ./db_bench --benchmarks=fillrandom --benchmark_write_rate_limit=1000000
          fillrandom   :     118.523 micros/op 8437 ops/sec;    0.9 MB/s
          $ ./db_bench --benchmarks=fillrandom --benchmark_write_rate_limit=2000000
          fillrandom   :      59.136 micros/op 16910 ops/sec;    1.9 MB/s
      
      Reviewers: MarkCallaghan, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39759
      2dc3910b
    • I
      Use CompactRangeOptions for CompactRange · 12e030a9
      Committed by Islam AbdelRahman
      Summary:
      This diff updates DB::CompactRange to use CompactRangeOptions instead of multiple parameters.
      The old CompactRange is still available but deprecated.
      
      Test Plan:
      make all check
      make rocksdbjava
      USE_CLANG=1 make all
      OPT=-DROCKSDB_LITE make release
      
      Reviewers: sdong, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D40209
      12e030a9
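      A minimal sketch of the new signature with default options, which compacts the whole key range much like the old parameter-based call:

          #include <cassert>
          #include "rocksdb/db.h"
          #include "rocksdb/options.h"

          // Compact the entire key range using the new options struct; fields such as
          // change_level and target_level replace the old positional parameters.
          void FullRangeCompact(rocksdb::DB* db) {
            rocksdb::CompactRangeOptions options;
            rocksdb::Status s =
                db->CompactRange(options, /*begin=*/nullptr, /*end=*/nullptr);
            assert(s.ok());
          }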
    • I
      Move dockerbuild.sh to build_tools/ · c89369f5
      Committed by Igor Canadi
      Summary: That's where we keep build tools :)
      
      Test Plan: none
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D39741
      c89369f5
    • I
      Merge pull request #638 from HolodovAlexander/master · 4716ab4d
      Committed by Igor Canadi
      C api: human-readable statistics
      4716ab4d
    • I
      Clean up InstallSuperVersion · 25d60056
      Committed by Igor Canadi
      Summary:
      We go to great lengths to make sure MaybeScheduleFlushOrCompaction() is called outside of the write thread. But it is still called while holding the mutex, so it's not that much cheaper.
      
      This diff removes the "optimization" and cleans up the code a bit.
      
      Test Plan: make check
      
      Reviewers: rven, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40113
      25d60056
    • Y
      Only initialize the ThreadStatusData when necessary. · 1369f015
      Committed by Yueh-Hsuan Chiang
      Summary:
      Before this patch, any function call to ThreadStatusUtil might automatically initialize and register the thread status data.  However, if a user thread makes this call, the allocated thread-status data will never be released, as such threads are not managed by RocksDB.
      
      In this patch, I remove the automatic initialization.  Thread-status data is now initialized and uninitialized only in Env, during thread creation and destruction.
      
      Test Plan:
      db_test
      thread_list_test
      listener_test
      
      Reviewers: igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40017
      1369f015
    • Y
      Block c_test in ROCKSDB_LITE · 1a08d0be
      Committed by Yueh-Hsuan Chiang
      Summary: Block c_test in ROCKSDB_LITE as it's not supported in ROCKSDB_LITE.
      
      Test Plan: c_test
      
      Reviewers: sdong, rven, anthony, kradhakrishnan, IslamAbdelRahman, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40257
      1a08d0be
  7. 17 Jun 2015 (1 commit)
  8. 13 Jun 2015 (3 commits)
    • I
      db_bench periodically writes QPS to CSV file · d59d90bb
      Committed by Igor Canadi
      Summary:
      This is part of an effort to better understand and optimize RocksDB stalls under high load. I added a feature to db_bench to periodically write QPS to CSV files. That way we can nicely see how our QPS changes over time (especially when the DB is stalled) and can do a better job of evaluating our stall system (i.e. we want the QPS to be as constant as possible, as opposed to having a bunch of stalls).
      
      The cool part about CSV files is that we can easily graph them -- there are a bunch of tools available.
      
      Test Plan:
      Ran ./db_bench --report_interval_seconds=10 --benchmarks=fillrandom --num=10000000
      and observed this in report.csv:
      
      secs_elapsed,interval_qps
      10,2725860
      20,1980480
      30,1863456
      40,1454359
      50,1460389
      
      Reviewers: sdong, MarkCallaghan, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40047
      d59d90bb
    • S
      Cygwin build not to use -fPIC · 46296cc8
      Committed by sdong
      Summary:
      Cygwin doesn't support -fPIC, so remove it.
      Not sure whether we can build a shared library under Cygwin, but at least it now builds without the warning.
      
      Test Plan: Build under Cygwin
      
      Reviewers: yhchiang, rven, kradhakrishnan, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D40077
      46296cc8
    • Y
      Removed two unused macros in iostats_context · bee8d033
      Committed by Yueh-Hsuan Chiang
      Summary: Removed two unused macros in iostats_context
      
      Test Plan: make all check
      
      Reviewers: sdong, rven, IslamAbdelRahman, kradhakrishnan, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D40005
      bee8d033