1. 10 Apr 2015, 3 commits
  2. 09 Apr 2015, 4 commits
    • Add thread-safety documentation to MemTable and related classes · 84c5bd7e
      agiardullo committed
      Summary: Other than making some class members private, this is a documentation-only change.
      
      Test Plan: unit tests
      
      Reviewers: sdong, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36567
    • Enabling checksum in repair DB, as it should have been. · 2b019a15
      krad committed
      Summary: I think the checksum was turned off by mistake.
      
      Test Plan: Run make check
      
      Reviewers: igor, sdong, chip
      
    • Create EnvOptions using sanitized DB Options · b1bbdd79
      sdong committed
      Summary: EnvOptions is currently created from unsanitized DB options, so the sanitization that turns off bytes_per_sync when rate_limiter is used never takes effect. Create EnvOptions from the sanitized DB options instead.
      
      Test Plan: Observe the different I/O pattern in db_bench running fillseq.
      
      Reviewers: yhchiang, kradhakrishnan, rven, anthony, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36723
    • Trivial move to cover multiple input levels · b118238a
      sdong committed
      Summary: Currently, a trivial move is only triggered when moving from level n to n+1. With dynamic level base, it is possible that a file is moved from level 0 to level n while levels 1 through n-1 are empty. Extend trivial move to cover this case.
      
      Test Plan: Add one more unit test for sequential loading. A non-trivial compaction happened without the patch and no longer does.
      
      Reviewers: rven, yhchiang, MarkCallaghan, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba, IslamAbdelRahman
      
      Differential Revision: https://reviews.facebook.net/D36669
  3. 08 Apr 2015, 1 commit
    • Log writer record format doc. · 58346b9e
      krad committed
      Summary: Added an ASCII doodle to represent the log writer record format.
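      
      For reference, a sketch of the record layout such a doodle documents, based on the long-standing LevelDB/RocksDB legacy log format (the exact doodle added here may differ):
      
      ```cpp
      // One record; records are packed into 32KB blocks and split across
      // block boundaries when necessary.
      //
      //   +----------+-----------+-----------+--- ... ---+
      //   | CRC (4B) | Size (2B) | Type (1B) | Payload   |
      //   +----------+-----------+-----------+--- ... ---+
      //
      // CRC  : crc32c of the type byte and the payload
      // Size : length of the payload in bytes
      // Type : FULL, FIRST, MIDDLE or LAST, describing how a logical
      //        record maps onto physical blocks
      ```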
      
      Test Plan: None
      
      Reviewers: sdong
      
      CC: leveldb
      
      Task ID: 6179896
      
  4. 07 Apr 2015, 4 commits
    • Fix TSAN build error of D36447 · f1261407
      Yoshinori Matsunobu committed
      Summary:
      D36447 caused a build error when using COMPILE_WITH_TSAN=1.
      This diff fixes the error.
      
      Test Plan: jenkins
      
      Reviewers: igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36579
    • Adding another NewFlashcacheAwareEnv function to support pre-opened fd · 824e6463
      Yoshinori Matsunobu committed
      Summary:
      There are some cases where the flashcache file descriptor was
      already allocated (e.g., fb-MySQL). NewFlashcacheAwareEnv then returns an
      error at open() because the fd was already assigned. This diff adds another
      function to instantiate FlashcacheAwareEnv with a pre-allocated fd, cachedev_fd.
      
      Test Plan: Tested with MyRocks using this function; it worked.
      
      Reviewers: sdong, igor
      
      Reviewed By: igor
      
      Subscribers: dhruba, MarkCallaghan, rven
      
      Differential Revision: https://reviews.facebook.net/D36447
    • Clean up compression logging · 5e067a7b
      Igor Canadi committed
      Summary: We now log warnings when the user configures a compression type that is not supported.
      
      Test Plan:
      Configured compression to unsupported values. Observed these messages in my log:
      
          2015/03/26-12:17:57.586341 7ffb8a496840 [WARN] Compression type chosen for level 2 is not supported: LZ4. RocksDB will not compress data on level 2.
      
          2015/03/26-12:19:10.768045 7f36f15c5840 [WARN] Compression type chosen is not supported: LZ4. RocksDB will not compress data.
      
      Reviewers: rven, sdong, yhchiang
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35979
    • A new callback to TablePropertiesCollector to let users know whether an entry is an add, delete or merge · 953a885e
      sdong committed
      Summary:
      Currently users have no way to tell from the TablePropertiesCollector callback whether a key is an add, a delete or a merge. Add a new callback that exposes this; a sketch follows the list below.
      
      Also refactor the code so that
      (1) the table property collector and the internal table property collector are two separate data structures, with the latter now exposed, and
      (2) table builders only receive internal table properties.
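      
      A minimal sketch of such a collector, assuming the shape the API has in today's table_properties.h (EntryType and the AddUserKey signature); the exact interface introduced by this diff may have differed:
      
      ```cpp
      #include <rocksdb/table_properties.h>
      #include <cstdint>
      #include <string>
      
      // Counts entry kinds as a table file is built.
      class EntryTypeCounter : public rocksdb::TablePropertiesCollector {
       public:
        rocksdb::Status AddUserKey(const rocksdb::Slice& /*key*/,
                                   const rocksdb::Slice& /*value*/,
                                   rocksdb::EntryType type,
                                   rocksdb::SequenceNumber /*seq*/,
                                   uint64_t /*file_size*/) override {
          if (type == rocksdb::kEntryDelete) {
            ++num_deletes_;
          } else if (type == rocksdb::kEntryMerge) {
            ++num_merges_;
          } else {
            ++num_puts_;
          }
          return rocksdb::Status::OK();
        }
      
        // Emits the counters as user-collected properties of the file.
        rocksdb::Status Finish(rocksdb::UserCollectedProperties* props) override {
          (*props)["entries.puts"] = std::to_string(num_puts_);
          (*props)["entries.deletes"] = std::to_string(num_deletes_);
          (*props)["entries.merges"] = std::to_string(num_merges_);
          return rocksdb::Status::OK();
        }
      
        rocksdb::UserCollectedProperties GetReadableProperties() const override {
          return {};
        }
      
        const char* Name() const override { return "EntryTypeCounter"; }
      
       private:
        uint64_t num_puts_ = 0;
        uint64_t num_deletes_ = 0;
        uint64_t num_merges_ = 0;
      };
      ```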
      
      Test Plan: Add cases in table_properties_collector_test to cover both the old and the new ways of using TablePropertiesCollector.
      
      Reviewers: yhchiang, igor.sugak, rven, igor
      
      Reviewed By: rven, igor
      
      Subscribers: meyering, yoshinorim, maykov, leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D35373
  5. 04 Apr 2015, 2 commits
    • avoid returning a number-of-active-keys estimate of nearly 2^64 · d2a92c13
      Jim Meyering committed
      Summary:
      If accumulated_num_non_deletions_ were ever smaller than
      accumulated_num_deletions_, the computation of
      "accumulated_num_non_deletions_ - accumulated_num_deletions_"
      would result in a logically "negative" value, but since
      the two operands are unsigned (uint64_t), the result corresponding
      to, e.g., -1 would be 2^64-1.
      
      Instead, return 0 in that case.
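      
      A minimal sketch of the guard, with hypothetical names mirroring the summary (equality also returns 0, which matches the observation in the test plan below):
      
      ```cpp
      #include <cstdint>
      
      // Both counters are unsigned, so a plain subtraction would wrap to
      // roughly 2^64 instead of going negative.
      uint64_t EstimateActiveKeys(uint64_t num_non_deletions,
                                  uint64_t num_deletions) {
        if (num_non_deletions <= num_deletions) {
          return 0;  // the guard this change adds
        }
        return num_non_deletions - num_deletions;
      }
      ```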
      
      Test Plan:
        - ensure "make check" still passes
        - temporarily add an "abort();" call in the new "if"-block, and
            observe that it fails in some test cases.  However, note that
            this case is triggered only when the two numbers are equal.
            Thus, no test case triggers the erroneous behavior this
            change is designed to avoid. If anyone can construct a
            scenario in which that bug would be triggered, I'll be
            happy to add a test case.
      
      Reviewers: ljin, igor, rven, igor.sugak, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36489
    • Fix level size overflow for options_.level_compaction_dynamic_level_bytes=true · a7ac6cef
      sdong committed
      Summary: int is used for level size targets when options_.level_compaction_dynamic_level_bytes=true, which overflows once the database grows big. Fix it.
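      
      A small illustration of the failure mode, with made-up target sizes:
      
      ```cpp
      #include <cstdint>
      #include <iostream>
      
      int main() {
        // Level size targets are derived by repeated multiplication; a
        // 32-bit int overflows once a target passes ~2.1GB.
        int64_t target = 256LL * 1024 * 1024;  // 256MB base target
        target *= 10;  // 2.5GB: overflows int32_t, fits easily in int64_t
        std::cout << "INT32_MAX:    " << INT32_MAX << "\n"
                  << "level target: " << target << "\n";
      }
      ```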
      
      Test Plan: Add a new unit test which fails without the fix.
      
      Reviewers: rven, yhchiang, MarkCallaghan, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba, yoshinorim
      
      Differential Revision: https://reviews.facebook.net/D36453
  6. 03 Apr 2015, 2 commits
    • db_test: clean up sync points in test cleanup · 089509b8
      sdong committed
      Summary: In some db_test tests, sync points are not cleared, which causes unexpected results in subsequent tests. Clear them during test cleanup.
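      
      A minimal sketch of the cleanup pattern, using RocksDB's internal SyncPoint test utility (the exact call sites in db_test may differ):
      
      ```cpp
      #include "util/sync_point.h"  // internal RocksDB test utility
      
      void CleanUpSyncPoints() {
        // Stop processing sync points and drop registered callbacks so
        // that no state leaks into the next test.
        rocksdb::SyncPoint::GetInstance()->DisableProcessing();
        rocksdb::SyncPoint::GetInstance()->ClearAllCallBacks();
      }
      ```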
      
      Test Plan:
      Run the same tests that used to fail:
      
      build using USE_CLANG=1 and run
      ./db_test --gtest_filter="DBTest.CompressLevelCompaction:*DBTestUniversalCompactionParallel*"
      
      Reviewers: rven, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36429
    • Disallow trivial move if compression level is different · afbafeae
      Venkatesh Radhakrishnan committed
      Summary:
      Check the compression of start_level against output_compression
      before allowing a trivial move.
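      
      A minimal sketch of the check described above (the helper name is hypothetical; the real check sits in the compaction-picking logic):
      
      ```cpp
      #include <rocksdb/options.h>
      
      // A trivial move re-links files verbatim, so it is only valid when
      // the source data is already compressed the way the output level
      // requires.
      bool TrivialMoveAllowed(rocksdb::CompressionType start_level_compression,
                              rocksdb::CompressionType output_compression) {
        return start_level_compression == output_compression;
      }
      ```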
      
      Test Plan: New DBTest CompressLevelCompactionThirdPath added
      
      Reviewers: igor, yhchiang, IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36213
  7. 02 Apr 2015, 2 commits
  8. 31 Mar 2015, 7 commits
    • Fix one non-determinism of DBTest.DynamicCompactionOptions · 76d63b45
      sdong committed
      Summary:
      After a recent change to DBTest.DynamicCompactionOptions, we occasionally hit another non-deterministic case where L0 slowdown is triggered when the hard-limit timeout should not be.
      Fix it by increasing the L0 slowdown trigger at the same time.
      
      Test Plan: Run the failed test.
      
      Reviewers: igor, rven
      
      Reviewed By: rven
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D36219
    • Universal Compactions with Small Files · b23bbaa8
      sdong committed
      Summary:
      With this change, we use L1 and up to store compaction outputs in universal compaction.
      The compaction pick logic stays the same. Outputs are stored in the largest "level" possible.
      
      If options.num_levels=1, it behaves the same as now.
      
      Test Plan:
      1) convert most of the existing unit tests for universal compaction to cover both one level and multiple levels.
      2) add a unit test to cover parallel compaction in universal compaction and run it with one level and multiple levels
      3) add a unit test that migrates from the multiple-level setting back to the one-level setting
      4) add a unit test that inserts keys to trigger multiple rounds of compaction and verifies the results.
      
      Reviewers: rven, kradhakrishnan, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: meyering, leveldb, MarkCallaghan, dhruba
      
      Differential Revision: https://reviews.facebook.net/D34539
    • Makefile minor cleanup · 2511b7d9
      Igor Canadi committed
      Summary:
      Just couple of small changes:
      1. removed signal_test, since it doesn't seem useful and we don't even run it as part of `make check`
      2. moved perf_context_test to TESTS instead of PROGRAMS
      3. `make release` probably shouldn't compile benchmarks. We currently rely on `make release` building db_bench (via Jenkins), so I left db_bench there.
      
      This is just a minor cleanup. We need to rethink our targets since they are a bit messy right now. We can do this during our tech debt week.
      
      Test Plan: make release
      
      Reviewers: anthony, rven, yhchiang, sdong, meyering
      
      Reviewed By: meyering
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36171
    • Add --stats_interval_seconds to db_bench · 1bd70fb5
      Mark Callaghan committed
      Summary:
      The --stats_interval_seconds option determines the interval for stats reporting
      and overrides --stats_interval when set. I also changed tools/benchmark.sh
      to report stats every 60 seconds, so I can avoid trying to figure out a
      good value for --stats_interval per test and per storage device.
      
      Task ID: #6631621
      
      Test Plan:
      run tools/run_flash_bench, look at output
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36189
    • Clean up old log files in background threads · fd3dbef2
      Igor Canadi committed
      Summary:
      Cleaning up log files can do heavy IO, since we call ftruncate() in the destructor. We don't want to call ftruncate() in user threads.
      
      This diff moves cleaning to background threads (flush and compaction)
      
      Test Plan: make check, will also run valgrind
      
      Reviewers: yhchiang, rven, MarkCallaghan, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D36177
    • Make the benchmark scripts configurable and add tests · 99ec2412
      Mark Callaghan committed
      Summary:
      This makes run_flash_bench.sh configurable. Previously it was hardwired for 1B keys, and tests
      ran for 12 hours each, which kept me from using it. This change makes it configurable, adds more tests,
      makes the per-test duration configurable and refactors the test scripts.
      
      Adds the seekrandomwhilemerging test to db_bench, which is the same as seekrandomwhilewriting except that
      the writer thread does Merge rather than Put.
      
      Forces the stall-time column in compaction IO stats to use a fixed format (H:M:S), which makes
      it easier to scrape and parse. Also adds an option to AppendHumanMicros to force a fixed format,
      since automation and humans sometimes want different formats.
      
      Calls thread->stats.AddBytes(bytes); in db_bench for more tests to get the MB/sec summary
      stats in the output at test end.
      
      Adds the average ingest rate to compaction IO stats. Output now looks like:
      https://gist.github.com/mdcallag/2bd64d18be1b93adc494
      
      More information on the benchmark output is at https://gist.github.com/mdcallag/db43a58bd5ac624f01e1
      
      For benchmark.sh, changes the default RocksDB configuration to reduce stalls:
      * min_level_to_compress from 2 to 3
      * hard_rate_limit from 2 to 3
      * max_grandparent_overlap_factor and max_bytes_for_level_multiplier from 10 to 8
      * L0 file count triggers from 4,8,12 to 4,12,20 for (start,stall,stop)
      
      Task ID: #6596829
      
      Test Plan:
      run tools/run_flash_bench.sh
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D36075
    • db_bench can now disable flashcache for background threads · d61cb0b9
      Igor Canadi committed
      Summary: Most of the approach is copied from WebSQL's MySQL branch. It's nice that we can do this without touching core RocksDB code.
      
      Test Plan: Compiles and runs. Didn't test the flashcache code, as I don't have a flashcache device and most of it is copy/pasted.
      
      Reviewers: MarkCallaghan, sdong
      
      Reviewed By: sdong
      
      Subscribers: rven, lgalanis, kradhakrishnan, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35391
  9. 28 Mar 2015, 1 commit
  10. 27 Mar 2015, 1 commit
    • Dump compression info on startup · 030859eb
      Igor Canadi committed
      Summary: It's useful to know whether we have compression support or not.
      
      Test Plan:
      Observed this in my LOG:
      
            2015/03/26-10:34:35.460681 7f5b322b7840 Snappy supported
            2015/03/26-10:34:35.460682 7f5b322b7840 Zlib supported
            2015/03/26-10:34:35.460686 7f5b322b7840 Bzip supported
            2015/03/26-10:34:35.460687 7f5b322b7840 LZ4 NOT supported
      
      Reviewers: sdong, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35955
  11. 26 Mar 2015, 1 commit
    • fix compilation error (same as fix #284) · a3e4b324
      Alexander.Mikhaylov committed
      [maa@srv2-nskb-devg2 rocksdb-master]$ CXX=/usr/local/CC/gcc-4.7.4/bin/g++ EXTRA_CXXFLAGS=-std=c++11 DISABLE_WARNING_AS_ERROR=1  make db_bench
        CC       db/db_bench.o
      db/db_bench.cc: In member function 'rocksdb::Slice rocksdb::Benchmark::AllocateKey(std::unique_ptr<const char []>*)':
      db/db_bench.cc:1434:41: error: use of deleted function 'void std::unique_ptr<_Tp [], _Dp>::reset(_Up) [with _Up = char*; _Tp = const char; _Dp = std::default_delete<const char []>]'
      In file included from /usr/local/CC/gcc-4.7.4/lib/gcc/x86_64-unknown-linux-gnu/4.7.4/../../../../include/c++/4.7.4/memory:86:0,
                       from ./include/rocksdb/db.h:14,
                       from ./db/dbformat.h:14,
                       from ./db/db_impl.h:21,
                       from db/db_bench.cc:33:
  12. 25 Mar 2015, 2 commits
    • Adding stats for the merge and filter operations · 3d1a924f
      Anurag Indu committed
      Summary:
      We have added new stats and perf_context counters for measuring the time consumed by the merge and filter operations.
      We have wrapped all the merge operations within the GUARD statement and collected the total time for these operations in the DB.
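      
      A minimal sketch of the guard pattern, using RocksDB's PERF_TIMER_GUARD macro (the counter name is assumed from today's perf_context and may differ in this diff):
      
      ```cpp
      #include "util/perf_context_imp.h"  // defines PERF_TIMER_GUARD
      
      void TimedMerge(/* merge inputs elided */) {
        // Scope-based timer: the elapsed nanoseconds are added to
        // perf_context.merge_operator_time_nanos when the scope exits.
        PERF_TIMER_GUARD(merge_operator_time_nanos);
        // ... invoke the merge operator here ...
      }
      ```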
      
      Test Plan: WIP
      
      Reviewers: rven, yhchiang, kradhakrishnan, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D34377
    • Report elapsed time in micros in ThreadStatus instead of start time. · 248c063b
      Yueh-Hsuan Chiang committed
      Summary:
      Report elapsed time of a thread operation in micros in ThreadStatus
      instead of start time of a thread operation in seconds since the
      Epoch, 1970-01-01 00:00:00 (UTC).
      
      Test Plan:
      ./db_bench --benchmarks=fillrandom --num=100000 --threads=40 \
      --max_background_compactions=10 --max_background_flushes=3 \
      --thread_status_per_interval=1000 --key_size=16 --value_size=1000 \
      --num_column_families=10
      
      Sample Output:
                  ThreadID ThreadType                    cfName    Operation  ElapsedTime                                         Stage        State
           140667724562496   High Pri column_family_name_000002        Flush   772.419 ms                    FlushJob::WriteLevel0Table
           140667728756800   High Pri                   default        Flush   617.845 ms                    FlushJob::WriteLevel0Table
           140667732951104   High Pri column_family_name_000005        Flush   772.078 ms                    FlushJob::WriteLevel0Table
           140667875557440    Low Pri column_family_name_000008   Compaction  1409.216 ms                        CompactionJob::Install
           140667737145408    Low Pri
           140667749728320    Low Pri
           140667816837184    Low Pri column_family_name_000007   Compaction  1071.815 ms      CompactionJob::ProcessKeyValueCompaction
           140667787477056    Low Pri column_family_name_000009   Compaction   772.516 ms      CompactionJob::ProcessKeyValueCompaction
           140667741339712    Low Pri
           140667758116928    Low Pri column_family_name_000004   Compaction   620.739 ms      CompactionJob::ProcessKeyValueCompaction
           140667753922624    Low Pri
           140667842003008    Low Pri column_family_name_000006   Compaction  1260.079 ms      CompactionJob::ProcessKeyValueCompaction
           140667745534016    Low Pri
      
      Reviewers: sdong, igor, rven
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35769
  13. 24 Mar 2015, 1 commit
    • Improve ThreadStatusSingleCompaction · a057bb2a
      Yueh-Hsuan Chiang committed
      Summary:
      Improve ThreadStatusSingleCompaction in two ways:
      1. Use SYNC_POINT to ensure compaction won't happen
         before the test finishes its "Put Phase", instead of
         using sleep (see the sketch after this list).
      2. In the Put Phase, it continues until we have a sufficient
         number of L0 files.  Note that during the Put Phase,
         there won't be any compaction that consumes L0 files
         because of item 1.
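      
      A minimal sketch of the dependency pattern from item 1, using RocksDB's internal SyncPoint utility (the marker names here are illustrative, not the test's actual ones):
      
      ```cpp
      #include "util/sync_point.h"  // internal RocksDB test utility
      
      void GateCompactionOnPutPhase() {
        // A thread reaching the successor marker blocks until the
        // predecessor marker has fired, so no compaction can start
        // before the Put Phase is done.
        rocksdb::SyncPoint::GetInstance()->LoadDependency(
            {{"Test:PutPhaseDone", "CompactionJob:Start"}});
        rocksdb::SyncPoint::GetInstance()->EnableProcessing();
      }
      ```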
      
      Test Plan: ./db_test  --gtest_filter="*ThreadStatusSingleCompaction*"
      
      Reviewers: sdong, igor, rven
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35727
  14. 21 Mar 2015, 1 commit
  15. 20 Mar 2015, 2 commits
    • rocksdb: Remove #include "util/string_util.h" from util/testharness.h · 9405b5ef
      Igor Sugak committed
      Summary:
      1. Manually deleted #include "util/string_util.h" from util/testharness.h
      2.
      ```
      % USE_CLANG=1 make all -j55 -k 2> build.log
      % perl -naF: -E 'say $F[0] if /: error:/' build.log | sort -u | xargs sed -i '/#include "util\/testharness.h"/i #include "util\/string_util.h"'
      ```
      
      Test Plan:
      Make sure make all completes with no errors.
      ```
      % make all -j55
      ```
      
      Reviewers: meyering, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35493
    • Don't delete files when column family is dropped · b088c83e
      Igor Canadi committed
      Summary:
      To understand the bug, read t5943287 and check out the new test in column_family_test (ReadDroppedColumnFamily), iter 0.
      
      The RocksDB contract allows you to read a dropped column family as long as there is a live reference. However, since our iteration ignored dropped column families, AddLiveFiles() didn't mark their files as live, so we deleted them.
      
      In this patch I no longer ignore dropped column families in the iteration. I think this behavior was confusing and it also led to this bug. Now if an iteration client wants to ignore dropped column families, it needs to do so explicitly.
      
      Test Plan: Added a new unit test that fails on master. The unit test succeeds now.
      
      Reviewers: sdong, rven, yhchiang
      
      Reviewed By: yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D32535
  16. 19 Mar 2015, 4 commits
    • Clean up compactions_in_progress_ · 52e0f335
      Igor Canadi committed
      Summary: Surprisingly, the only way we use this vector is to keep track of level0 compactions. Thus, I simplified it.
      
      Test Plan: make check
      
      Reviewers: rven, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35313
    • rocksdb: change db_test::MultiThreadedDBTest to a value-parameterized test. · 6b626ff2
      Igor Sugak committed
      Summary: This is a simple change to make db_test::MultiThreadedDBTest a value-parameterized test. There is value in creating a separate set of such tests later.
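      
      A minimal sketch of gtest value parameterization, the mechanism applied here (the fixture and the range of option configs are illustrative, not the actual db_test code):
      
      ```cpp
      #include <gtest/gtest.h>
      
      // Each value produced by the generator becomes one test instance.
      class MultiThreadedDBTest : public ::testing::TestWithParam<int> {};
      
      TEST_P(MultiThreadedDBTest, RunsUnderEachOptionConfig) {
        int option_config = GetParam();
        EXPECT_GE(option_config, 0);
        // ... run the multithreaded scenario under this option config ...
      }
      
      // Instantiates the test once per option config in [0, 10).
      INSTANTIATE_TEST_CASE_P(MultiThreaded, MultiThreadedDBTest,
                              ::testing::Range(0, 10));
      ```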
      
      Test Plan:
      ```lang=bash
      % make db_test
      % ./db_test
      ```
      
      Also with the following command I can execute all db_test in 2:37.87 on my box
      ```
      % ./db_test --gtest_list_tests | sed 's/\# GetParam.*//' | tr -d ' ' | env time parallel --gnu --eta --joblog=LOG -- 'TEST_TMPDIR=/dev/shm/rocksdb-{} ./db_test --gtest_filter="*{}"'
      ```
      
      Reviewers: igor, rven, meyering, sdong
      
      Reviewed By: meyering
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D35361
    • Add a DB Property For Number of Deletions in Memtables · 0831a359
      sdong committed
      Summary: Add a DB property for the number of deletions in memtables. It can sometimes help people debug slowness caused by too many deletes.
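      
      A sketch of reading such a property; the property string is assumed from today's codebase and may differ in this diff:
      
      ```cpp
      #include <rocksdb/db.h>
      #include <iostream>
      #include <string>
      
      void PrintMemtableDeletes(rocksdb::DB* db) {
        std::string val;
        // Property name assumed: there is also a companion property for
        // the immutable memtables.
        if (db->GetProperty("rocksdb.num-deletes-active-mem-table", &val)) {
          std::cout << "deletes in active memtable: " << val << "\n";
        }
      }
      ```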
      
      Test Plan: Add test cases.
      
      Reviewers: rven, yhchiang, kradhakrishnan, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb, dhruba, yoshinorim
      
      Differential Revision: https://reviews.facebook.net/D35247
    • Add readwhilemerging benchmark · dfccc7b4
      Mark Callaghan committed
      Summary:
      This is like readwhilewriting but uses Merge rather than Put in the writer thread.
      I am using it for in-progress benchmarks. I don't think the other benchmarks for Merge
      cover this behavior. The purpose of this test is to measure read performance when
      readers might have to merge results. This will also benefit from work-in-progress
      to add skewed key generation.
      
      Reviewers: igor
      
      Reviewed By: igor
      
      Subscribers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D35115
  17. 18 Mar 2015, 2 commits
    • Create an abstract interface for write batches · 81345b90
      agiardullo committed
      Summary: WriteBatch and WriteBatchWithIndex now both inherit from a common abstract base class.  This makes it easier to write code that is agnostic toward the implementation of the particular write batch.  In particular, I plan on utilizing this abstraction to allow transactions to support using either implementation of a write batch.
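      
      A minimal sketch of the implementation-agnostic code this enables (the base class is WriteBatchBase in the RocksDB tree; Put/Delete return types have changed across versions, so treat this as illustrative):
      
      ```cpp
      #include <rocksdb/write_batch.h>
      #include <rocksdb/utilities/write_batch_with_index.h>
      
      // Programs against the abstract base, so callers may pass either
      // WriteBatch or WriteBatchWithIndex.
      void AddDefaults(rocksdb::WriteBatchBase* batch) {
        batch->Put("format_version", "1");
        batch->Delete("stale-key");
      }
      
      int main() {
        rocksdb::WriteBatch plain;
        rocksdb::WriteBatchWithIndex indexed;
        AddDefaults(&plain);
        AddDefaults(&indexed);  // same code path for both implementations
      }
      ```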
      
      Test Plan: modified existing WriteBatchWithIndex tests to test new functions.  Running all tests.
      
      Reviewers: igor, rven, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34017
    • Deprecate removeScanCountLimit in NewLRUCache · c88ff4ca
      Igor Canadi committed
      Summary: It is no longer used by the implementation, so we should also remove it from the public API.
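      
      For context, a sketch of creating a cache after the deprecation (capacity and shard-bit values are illustrative):
      
      ```cpp
      #include <rocksdb/cache.h>
      #include <memory>
      
      // removeScanCountLimit is gone; capacity and num_shard_bits remain.
      std::shared_ptr<rocksdb::Cache> cache =
          rocksdb::NewLRUCache(64 << 20 /* 64MB capacity */, 6 /* shard bits */);
      ```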
      
      Test Plan: make check
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D34971