1. 06 Aug, 2016  1 commit
  2. 02 Aug, 2016  2 commits
    • Fix parallel tests `make check -j` · 0155c73d
      Islam AbdelRahman committed
      Summary:
      Parallel tests are broken because gnu_parallel reads deprecated options from `/etc/parallel/config`.
      Fix this by passing `--plain` so that `/etc/parallel/config` is ignored.
      
      Test Plan: make check -j64
      
      Reviewers: kradhakrishnan, sdong, andrewkr, yiwu, arahut
      
      Reviewed By: arahut
      
      Subscribers: andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D61359
      0155c73d
    • Experiments on column-aware encodings · d51dc96a
      omegaga committed
      Summary:
      Experiments on column-aware encodings. Supported features:
      1) Extract data blocks from an SST file and encode them with specified encodings.
      2) Decode encoded data back into row format.
      3) Directly extract data blocks and write them in row format (without prefix encoding).
      4) Get column distribution statistics for the column format.
      5) Dump data blocks separated by columns in a human-readable format.
      
      There is still ongoing work on this diff. More refactoring is necessary.
      
      Test Plan: Wrote tests in `column_aware_encoding_test.cc`. More tests should be added.
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: arahut, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60027
      d51dc96a
  3. 30 Jul, 2016  1 commit
  4. 28 Jul, 2016  1 commit
  5. 26 Jul, 2016  1 commit
  6. 23 Jul, 2016  2 commits
  7. 22 Jul, 2016  4 commits
    • Need to make sure log file synced before flushing memtable of one column family · d5a51d4d
      sdong committed
      Summary: Multiput atomicity is broken across multiple column families if we don't sync the WAL before flushing one column family. The WAL file may contain a write batch with writes to a key in the CF being flushed and to a key in another CF. If we don't sync the WAL before flushing and the machine crashes after the flush, the write batch will only be partially recovered; data in the other CFs is lost.
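      
      For context, here is a minimal sketch (public API only, not the internal fix) of the scenario described above: a single write batch spanning two column families, whose atomicity across a crash is what syncing the WAL before the flush preserves. The helper name and handles are illustrative.
      
      ```cpp
      #include <cassert>
      
      #include "rocksdb/db.h"
      #include "rocksdb/write_batch.h"
      
      // Sketch only: cf1/cf2 are assumed to be handles obtained elsewhere, e.g.
      // from DB::CreateColumnFamily() or the DB::Open() overload that takes
      // column family descriptors.
      void WriteAcrossColumnFamilies(rocksdb::DB* db,
                                     rocksdb::ColumnFamilyHandle* cf1,
                                     rocksdb::ColumnFamilyHandle* cf2) {
        rocksdb::WriteBatch batch;
        batch.Put(cf1, "key1", "value1");  // key in the CF that will be flushed
        batch.Put(cf2, "key2", "value2");  // key in another CF
        // With the WAL enabled, both puts must survive a crash together; the fix
        // syncs the WAL before flushing only one of the column families so a
        // crash right after the flush cannot recover the batch partially.
        rocksdb::Status s = db->Write(rocksdb::WriteOptions(), &batch);
        assert(s.ok());
      }
      ```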
      
      Test Plan: Add a new unit test which will fail without the diff.
      
      Reviewers: yhchiang, IslamAbdelRahman, igor, yiwu
      
      Reviewed By: yiwu
      
      Subscribers: yiwu, leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60915
      d5a51d4d
    • Add unit test not on /dev/shm as part of the pre-commit tests · b5063292
      sdong committed
      Summary: RocksDB behavior differs slightly between data on tmpfs and on normal file systems. Add a test case that runs RocksDB on a normal file system.
      
      Test Plan: See the tests launched by Phabricator
      
      Reviewers: kradhakrishnan, IslamAbdelRahman, gunnarku
      
      Reviewed By: gunnarku
      
      Subscribers: leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D60963
      b5063292
    • Add EnvLibrados - RocksDB Env of RADOS (#1222) · 663afef8
      ryneli committed
      EnvLibrados is a customized RocksDB Env that uses RADOS as the backend file system of RocksDB. It overrides all file-system-related APIs of the default Env. The easiest way to use it is as follows:
      
      	std::string db_name = "test_db";
      	std::string config_path = "path/to/ceph/config";
      	DB* db;
      	Options options;
      	options.env = new EnvLibrados(db_name, config_path);
      	Status s = DB::Open(options, kDBPath, &db);
      
      Then EnvLibrados will forward all file read/write operations to the RADOS cluster specified by config_path. The default pool is db_name+"_pool".
      
      There are some options that users can set for EnvLibrados:
      - write_buffer_size. The maximum buffer size for a WritableFile. After the buffer reaches write_buffer_size, EnvLibrados syncs the buffer content to RADOS and then clears the buffer.
      - db_pool. Rather than using the default pool, users can set their own db pool name.
      - wal_dir. The directory for WAL files. Because RocksDB only has a 2-level structure (dir_name/file_name), the format of wal_dir is "/dir_name" (it CAN'T be "/dir1/dir2"). The default wal_dir is "/wal".
      - wal_pool. The corresponding pool name for WAL files. The default value is db_name+"_wal_pool".
      
      An example of setting these options looks like the following:
      
      	db_name = "test_db";
      	db_pool = db_name+"_pool";
      	wal_dir = "/wal";
      	wal_pool = db_name+"_wal_pool";
      	write_buffer_size = 1 << 20;
      	env_ = new EnvLibrados(db_name, config, db_pool, wal_dir, wal_pool, write_buffer_size);
      
      	DB* db;
      	Options options;
      	options.env = env_;
      	// The last level dir name should match the dir name in prefix_pool_map
      	options.wal_dir = "/tmp/wal";
      
      	// open DB
      	Status s = DB::Open(options, kDBPath, &db);
      
      Librados is required to compile EnvLibrados. Use "$ make LIBRADOS=1" to compile RocksDB with it. If you only want to compile the EnvLibrados test, run "$ make env_librados_test LIBRADOS=1". To run env_librados_test, you need a running RADOS cluster whose configuration file is located at "../ceph/src/ceph.conf" relative to "rocksdb/".
      663afef8
    • Fix flush not being committed while writing manifest · 32604e66
      Yi Wu committed
      Summary:
      Fix flush results not being committed while writing the manifest, a recent bug introduced by D60075.
      
      The issue:
      1. Options.max_background_flushes > 1
      2. Background thread A picks up a flush job, flushes, then commits to the manifest. (Note that the mutex is released before writing the manifest.)
      3. Background thread B picks up another flush job and flushes. When it gets to `MemTableList::InstallMemtableFlushResults`, it notices another thread is committing, so it quits.
      4. After the first commit, thread A doesn't double-check whether there are more flush results that need to be committed, leaving the second flush uncommitted.
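      
      A simplified, self-contained sketch (not the actual RocksDB code; the state struct and helper below are hypothetical) of the re-check pattern that avoids leaving later flush results uncommitted:
      
      ```cpp
      #include <mutex>
      #include <vector>
      
      // Hypothetical stand-ins for illustration; the real fix lives in
      // MemTableList::InstallMemtableFlushResults.
      struct FlushResult { int memtable_id; };
      void WriteResultsToManifest(const std::vector<FlushResult>&) { /* ... */ }
      
      struct FlushState {
        std::mutex mu;
        bool commit_in_progress = false;
        std::vector<FlushResult> pending_results;
      };
      
      void CommitFlushResults(FlushState* state) {
        std::unique_lock<std::mutex> lock(state->mu);
        if (state->commit_in_progress) {
          return;  // another committer is active; it must also pick up our result
        }
        state->commit_in_progress = true;
        // Re-check after every manifest write: results queued by other threads
        // while the mutex was released must not be left uncommitted (the bug).
        while (!state->pending_results.empty()) {
          std::vector<FlushResult> batch;
          batch.swap(state->pending_results);
          lock.unlock();                 // manifest I/O happens without the mutex
          WriteResultsToManifest(batch);
          lock.lock();
        }
        state->commit_in_progress = false;
      }
      ```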
      
      Test Plan: run the test. Also verify that the new test hits the deadlock without the fix.
      
      Reviewers: sdong, igor, lightmark
      
      Reviewed By: lightmark
      
      Subscribers: andrewkr, omegaga, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60969
      32604e66
  8. 21 Jul, 2016  2 commits
  9. 13 Jul, 2016  1 commit
    • Fix deadlock when trying to update options when write stalls · 6ea41f85
      Yi Wu committed
      Summary:
      When writes stall because auto compaction is disabled or the stop-writes trigger is reached,
      the user may change these two options to unblock writes. Unfortunately, we had an issue where the write
      thread would block the attempt to persist the options, creating a deadlock. This diff
      fixes the issue and adds two test cases to detect such deadlocks.
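      
      For reference, the kind of dynamic option change the summary refers to might look like the sketch below (the two option names are the ones that typically govern these stalls; the concrete trigger value is illustrative only):
      
      ```cpp
      #include <cassert>
      
      #include "rocksdb/db.h"
      
      // Sketch: re-enable auto compactions and raise the stop-writes trigger on an
      // already-open DB via DB::SetOptions(), which is the call path whose
      // persistence step used to deadlock against the stalled write thread.
      void UnblockWrites(rocksdb::DB* db) {
        rocksdb::Status s = db->SetOptions({
            {"disable_auto_compactions", "false"},
            {"level0_stop_writes_trigger", "40"},
        });
        assert(s.ok());
      }
      ```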
      
      Test Plan:
      Run unit tests.
      
      Also, revert db_impl.cc to master (but don't revert `DBImpl::BackgroundCompaction:Finish` sync point) and run db_options_test. Both tests should hit deadlock.
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60627
      6ea41f85
  10. 12 Jul, 2016  1 commit
    • Update Makefile to fix dependency · e6f68faf
      omegaga committed
      Summary: In D33849 we updated the Makefile to generate .d files for all .cc sources. Since we now have more types of source files, this needs to be updated so that the mechanism also works for the new files.
      
      Test Plan: change a dependent .h file, re-make and see if .o file is recompiled.
      
      Reviewers: sdong, andrewkr
      
      Reviewed By: andrewkr
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D60591
      e6f68faf
  11. 22 Jun, 2016  1 commit
    • Makefile warning for invalid paths in make_config.mk · f9bd6677
      Islam AbdelRahman committed
      Summary:
      Update Makefile to show warnings when we have invalid paths in our make_config.mk file
      
      sample output
      
      ```
      $ make static_lib -j64
      Makefile:150: Warning: /mnt/gvfs/third-party2/libgcc/53e0eac8911888a105aa98b9a35fe61cf1d8b278/4.9.x/gcc-4.9-glibc-2.20/024dbc3/libs dont exist
      Makefile:150: Warning: /mnt/gvfs/third-party2/llvm-fb/b91de48a4974ec839946d824402b098d43454cef/stable/centos6-native/7aaccbe/../../src/clang/tools/scan-build/scan-build dont exist
        GEN      util/build_version.cc
      ```
      
      Test Plan: check that warning is printed visually
      
      Reviewers: sdong, yiwu, andrewkr
      
      Reviewed By: andrewkr
      
      Subscribers: andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D59523
      f9bd6677
  12. 08 Jun, 2016  1 commit
    • Persistent Read Cache (5) Volatile cache tier implementation · d755c62f
      krad committed
      Summary:
      This provides an implementation of PersistentCacheTier that is
      specialized for RAM. This tier does not persist data, though.
      
      Why do we need this tier?
      
      It is ideal as tier 0. This tier can host data that is too hot.
      
      Why can't we use the Cache variants?
      
      You can use them instead. However, this tier can potentially outperform BlockCache
      in RAW mode by virtue of compression, and the compressed cache in the block cache doesn't
      seem very popular. Potentially, this tier can be modified to understand the
      disadvantages of the tier below and retain data that the tier below is bad at
      handling (for example, index and bloom data that is huge in size).
      
      Test Plan: Run unit tests added
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D57069
      d755c62f
  13. 04 Jun, 2016  2 commits
    • env_basic_test library for testing new Envs [pluggable Env part 3] · 5aca977b
      Andrew Kryczka committed
      Summary:
      - Provide env_test as a static library. We will build it for future releases so internal Envs can use env_test by linking against this library.
      - Add tests for CustomEnv, which is configurable via the ENV_TEST_URI environment variable. It uses the URI-based Env lookup (depends on D58449).
      - Refactor env_basic_test cases to use a unique/configurable directory for test files.
      
      Test Plan:
      built a test binary against librocksdb_env_test.a. It registered the
      default Env with URI prefix "a://".
      
      - verify it runs all CustomEnv tests when a URI with the correct prefix is provided
      
      ```
      $ ENV_TEST_URI="a://ok" ./tmp --gtest_filter="CustomEnv/*"
      ...
      [  PASSED  ] 12 tests.
      ```
      
      - verify it runs no CustomEnv tests when a URI with a non-matching prefix is provided
      
      ```
      $ ENV_TEST_URI="b://ok" ./tmp --gtest_filter="CustomEnv/*"
      ...
      [  PASSED  ] 0 tests.
      ```
      
      Reviewers: ldemailly, IslamAbdelRahman, lightmark, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D58485
      5aca977b
    • Create env_basic_test [pluggable Env part 2] · 6e6622ab
      Andrew Kryczka committed
      Summary:
      Extracted basic Env-related tests from mock_env_test and memenv_test into a
      parameterized test for Envs: env_basic_test.
      
      Depends on D58449. (The dependency is here only so I can keep this series of
      diffs in a chain -- there is no dependency on that diff's code.)
      
      Test Plan: ran tests
      
      Reviewers: IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D58635
      6e6622ab
  14. 03 Jun, 2016  2 commits
    • Env registry for URI-based Env selection [pluggable Env part 1] · af0c9ac0
      Andrew Kryczka committed
      Summary:
      This enables configurable Envs without recompiling. For example, my
      next diff will make env_test test an Env created by NewEnvFromUri(). Then,
      users can determine which Env is tested simply by providing the URI for
      NewEnvFromUri() (e.g., through a CLI argument or environment variable).
      
      The registration process allows us to register any Env that is linked with the
      RocksDB library, so we can register our internal Envs as well.
      
      The registration code is inspired by our internal InitRegistry.
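      
      A hypothetical usage sketch; NewEnvFromUri() is named in this summary, but its exact signature, header, and the "a://" URI prefix are assumptions (the prefix follows the env_basic_test entries above):
      
      ```cpp
      #include <memory>
      #include <string>
      
      #include "rocksdb/db.h"
      #include "rocksdb/env.h"
      
      // Sketch only: open a DB whose Env is selected at runtime from a URI,
      // without recompiling. The signature of NewEnvFromUri() is assumed.
      rocksdb::Status OpenWithUriEnv(const std::string& env_uri,
                                     const std::string& db_path,
                                     rocksdb::DB** db) {
        std::unique_ptr<rocksdb::Env> env_guard;
        rocksdb::Env* env = rocksdb::NewEnvFromUri(env_uri, &env_guard);
        if (env == nullptr) {
          return rocksdb::Status::NotFound("no Env registered for " + env_uri);
        }
        rocksdb::Options options;
        options.create_if_missing = true;
        options.env = env;  // e.g. env_uri = "a://test" selects the registered Env
        return rocksdb::DB::Open(options, db_path, db);
      }
      ```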
      
      Test Plan: new unit test
      
      Reviewers: IslamAbdelRahman, lightmark, ldemailly, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb, dhruba, andrewkr
      
      Differential Revision: https://reviews.facebook.net/D58449
      af0c9ac0
    • Allows db_bench to take an options file · 88acd932
      Yueh-Hsuan Chiang committed
      Summary:
      This patch allows db_bench to initialize its RocksDB Options via an
      options file, specified by the --options_file flag.  Note that if the
      --options_file flag is set, it takes priority over the corresponding
      command-line arguments.
      
      Test Plan: db_bench_tool_test
      
      Reviewers: sdong, IslamAbdelRahman, kradhakrishnan, yiwu, andrewkr
      
      Reviewed By: andrewkr
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D58533
      88acd932
  15. 28 May, 2016  1 commit
    • Add statically-linked library for tools/benchmarks · 8dfa980c
      Andrew Kryczka committed
      Summary:
      Currently all the tools are included in librocksdb.a (db_bench is not). With
      this separate library, we can access db_bench functionality from our internal
      repo and eventually move tools out of librocksdb.a.
      
      Test Plan: built a simple binary against this library that invokes db_bench_tool().
      
      Reviewers: IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D58977
      8dfa980c
  16. 24 May, 2016  1 commit
    • Add simulator Cache as class SimCache/SimLRUCache (with test) · 5d660258
      Aaron Gao committed
      Summary: Add class SimCache (a base class with the instrumentation API) and SimLRUCache (a derived class with the detailed implementation), used as an instrumented block cache that can predict the hit rate for a different cache size.
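      
      A hedged usage sketch of how such a simulator cache might be wired in; the NewSimCache() factory name, its arguments, and the header path are assumptions rather than details taken from this diff:
      
      ```cpp
      #include <memory>
      
      #include "rocksdb/cache.h"
      #include "rocksdb/table.h"
      #include "rocksdb/utilities/sim_cache.h"  // assumed header for NewSimCache()
      
      // Sketch: wrap the real 1 MB block cache with a simulator that tracks the
      // hit rate a 10 MB cache would have seen (mirrors -cache_size/-simcache_size
      // in the db_bench runs below).
      rocksdb::BlockBasedTableOptions MakeSimCachedTableOptions() {
        std::shared_ptr<rocksdb::Cache> real_cache =
            rocksdb::NewLRUCache(1 << 20 /* 1 MB */);
        std::shared_ptr<rocksdb::Cache> sim_cache =
            rocksdb::NewSimCache(real_cache, 10 << 20 /* simulated 10 MB */,
                                 /*num_shard_bits=*/6);
        rocksdb::BlockBasedTableOptions table_options;
        table_options.block_cache = sim_cache;  // the DB uses the instrumented cache
        return table_options;
      }
      ```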
      
      Test Plan:
      Add a test case in `db_block_cache_test.cc` called `SimCacheTest` to test the basic logic of SimCache.
      Also add the option `-simcache_size` to db_bench. If it is set to a value other than -1, the benchmark will use that value as the size of the simulator cache and will output the simulation result at the end.
      ```
      [gzh@dev9927.prn1 ~/local/rocksdb] ./db_bench -benchmarks "fillseq,readrandom" -cache_size 1000000 -simcache_size 1000000
      RocksDB:    version 4.8
      Date:       Tue May 17 16:56:16 2016
      CPU:        32 * Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
      CPUCache:   20480 KB
      Keys:       16 bytes each
      Values:     100 bytes each (50 bytes after compression)
      Entries:    1000000
      Prefix:    0 bytes
      Keys per prefix:    0
      RawSize:    110.6 MB (estimated)
      FileSize:   62.9 MB (estimated)
      Write rate: 0 bytes/second
      Compression: Snappy
      Memtablerep: skip_list
      Perf Level: 0
      WARNING: Assertions are enabled; benchmarks unnecessarily slow
      ------------------------------------------------
      DB path: [/tmp/rocksdbtest-112628/dbbench]
      fillseq      :       6.809 micros/op 146874 ops/sec;   16.2 MB/s
      DB path: [/tmp/rocksdbtest-112628/dbbench]
      readrandom   :       6.343 micros/op 157665 ops/sec;   17.4 MB/s (1000000 of 1000000 found)
      
      SIMULATOR CACHE STATISTICS:
      SimCache LOOKUPs: 986559
      SimCache HITs:    264760
      SimCache HITRATE: 26.84%
      
      [gzh@dev9927.prn1 ~/local/rocksdb] ./db_bench -benchmarks "fillseq,readrandom" -cache_size 1000000 -simcache_size 10000000
      RocksDB:    version 4.8
      Date:       Tue May 17 16:57:10 2016
      CPU:        32 * Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
      CPUCache:   20480 KB
      Keys:       16 bytes each
      Values:     100 bytes each (50 bytes after compression)
      Entries:    1000000
      Prefix:    0 bytes
      Keys per prefix:    0
      RawSize:    110.6 MB (estimated)
      FileSize:   62.9 MB (estimated)
      Write rate: 0 bytes/second
      Compression: Snappy
      Memtablerep: skip_list
      Perf Level: 0
      WARNING: Assertions are enabled; benchmarks unnecessarily slow
      ------------------------------------------------
      DB path: [/tmp/rocksdbtest-112628/dbbench]
      fillseq      :       5.066 micros/op 197394 ops/sec;   21.8 MB/s
      DB path: [/tmp/rocksdbtest-112628/dbbench]
      readrandom   :       6.457 micros/op 154870 ops/sec;   17.1 MB/s (1000000 of 1000000 found)
      
      SIMULATOR CACHE STATISTICS:
      SimCache LOOKUPs: 1059764
      SimCache HITs:    374501
      SimCache HITRATE: 35.34%
      
      [gzh@dev9927.prn1 ~/local/rocksdb] ./db_bench -benchmarks "fillseq,readrandom" -cache_size 1000000 -simcache_size 100000000
      RocksDB:    version 4.8
      Date:       Tue May 17 16:57:32 2016
      CPU:        32 * Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
      CPUCache:   20480 KB
      Keys:       16 bytes each
      Values:     100 bytes each (50 bytes after compression)
      Entries:    1000000
      Prefix:    0 bytes
      Keys per prefix:    0
      RawSize:    110.6 MB (estimated)
      FileSize:   62.9 MB (estimated)
      Write rate: 0 bytes/second
      Compression: Snappy
      Memtablerep: skip_list
      Perf Level: 0
      WARNING: Assertions are enabled; benchmarks unnecessarily slow
      ------------------------------------------------
      DB path: [/tmp/rocksdbtest-112628/dbbench]
      fillseq      :       5.632 micros/op 177572 ops/sec;   19.6 MB/s
      DB path: [/tmp/rocksdbtest-112628/dbbench]
      readrandom   :       6.892 micros/op 145094 ops/sec;   16.1 MB/s (1000000 of 1000000 found)
      
      SIMULATOR CACHE STATISTICS:
      SimCache LOOKUPs: 1150767
      SimCache HITs:    1034535
      SimCache HITRATE: 89.90%
      ```
      
      Reviewers: IslamAbdelRahman, andrewkr, sdong
      
      Reviewed By: sdong
      
      Subscribers: MarkCallaghan, andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D57999
      5d660258
  17. 21 May, 2016  1 commit
  18. 20 May, 2016  1 commit
  19. 19 May, 2016  1 commit
    • Move IO failure test to separate file · 3c69f77c
      omegaga committed
      Summary:
      This is part of the effort to reduce the size of db_test.cc. We move the following tests to a separate file, `db_io_failure_test.cc`:
      
      * DropWrites
      * DropWritesFlush
      * NoSpaceCompactRange
      * NonWritableFileSystem
      * ManifestWriteError
      * PutFailsParanoid
      
      Test Plan: Run `make check` to see if the tests are working properly.
      
      Reviewers: sdong, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D58341
      3c69f77c
  20. 18 May, 2016  1 commit
    • Persistent Read Cache (Part 2) Data structure for building persistent read cache index · 1f0142ce
      krad committed
      Summary:
      We expect the persistent read cache to perform at speeds up to 8 GB/s. In order
      to accomplish that, we need to build an index mechanism which operates at a rate of
      multiple millions of operations per second.
      
      This patch provides the basic data structures to accomplish that:
      
      (1) Hash table implementation with lock contention spread (see the lock-striping sketch after the TODO list)
          It is based on the StripedHashSet<T> implementation in
          The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit
      (2) LRU implementation
          Placeholder algorithm for further optimization
      (3) Evictable hash table implementation
          Building block for building an index data structure that evicts data such as
          files
      
      TODO:
      (1) Figure out whether the sharded hash table and LRU can be used instead
      (2) Figure out whether we need to support a configurable eviction algorithm for
      EvictableHashTable
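      
      Not this diff's code: a minimal, self-contained sketch of the lock-striping idea behind (1), where a small fixed pool of mutexes is shared across many buckets so that lock contention is spread out:
      
      ```cpp
      #include <functional>
      #include <list>
      #include <mutex>
      #include <string>
      #include <vector>
      
      // Illustrative only; the diff's hash table and EvictableHashTable are more
      // involved (eviction, sharding, etc.).
      class StripedHashSet {
       public:
        explicit StripedHashSet(size_t num_buckets = 1024, size_t num_locks = 16)
            : buckets_(num_buckets), locks_(num_locks) {}
      
        void Insert(const std::string& key) {
          size_t b = hasher_(key) % buckets_.size();
          std::lock_guard<std::mutex> guard(locks_[b % locks_.size()]);
          buckets_[b].push_back(key);
        }
      
        bool Contains(const std::string& key) {
          size_t b = hasher_(key) % buckets_.size();
          std::lock_guard<std::mutex> guard(locks_[b % locks_.size()]);
          for (const auto& k : buckets_[b]) {
            if (k == key) return true;
          }
          return false;
        }
      
       private:
        std::hash<std::string> hasher_;
        std::vector<std::list<std::string>> buckets_;  // each bucket is a chain
        std::vector<std::mutex> locks_;                // far fewer locks than buckets
      };
      ```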
      
      Test Plan: Run unit tests
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D55785
      1f0142ce
  21. 28 Apr, 2016  1 commit
    • Print memory allocation counters · 1c80dfab
      Sergey Makarenko committed
      Summary:
      Introduced an option to dump malloc statistics, controlled by a new option flag.
          Added a new command line option to the db_bench tool to enable this
          functionality.
          Also extended the build to support environments with and without jemalloc.
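      
      For reference, a small sketch of setting the option programmatically; it assumes the db_bench flag below maps to a DBOptions member named dump_malloc_stats:
      
      ```cpp
      #include <string>
      
      #include "rocksdb/db.h"
      #include "rocksdb/options.h"
      
      // Sketch: with jemalloc compiled in (-DROCKSDB_JEMALLOC), malloc statistics
      // are dumped to the info LOG; otherwise the LOG notes that the dump is
      // unavailable, as described in the test plan below.
      rocksdb::Status OpenWithMallocStats(const std::string& path, rocksdb::DB** db) {
        rocksdb::Options options;
        options.create_if_missing = true;
        options.dump_malloc_stats = true;  // assumed option member for this flag
        return rocksdb::DB::Open(options, path, db);
      }
      ```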
      
      Test Plan:
      1) Build rocksdb using the `make` command. Launch
          `./db_bench --benchmarks=fillrandom --dump_malloc_stats=true
          --num=10000000` and verify that the jemalloc dump is present in the LOG file.
          2) Build rocksdb using `DISABLE_JEMALLOC=1 make db_bench -j32`, run
          the same db_bench command, and find the following message in the LOG file:
          "Please compile with jemalloc to enable malloc dump".
          3) Also built rocksdb using the `make` command on MacOS to verify behavior
          in a non-FB environment.
          Also, to debug the build configuration change, temporarily changed
          AM_DEFAULT_VERBOSITY = 1 in the Makefile to see compiler and build
          tool output. For case 1), -DROCKSDB_JEMALLOC was present on the compiler
          command line. For both 2) and 3) this flag was not present.
      
      Reviewers: sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D57321
      1c80dfab
  22. 23 Apr, 2016  1 commit
    • Alpine Linux Build (#990) · b71c4e61
      dx9 committed
      * Musl libc does not provide adaptive mutex. Added feature test for PTHREAD_MUTEX_ADAPTIVE_NP.
      
      * Musl libc does not provide backtrace(3). Added a feature check for backtrace(3).
      
      * Fixed compiler error.
      
      * Musl libc does not implement backtrace(3). Added platform check for libexecinfo.
      
      * Alpine does not appear to support gcc -pg option. By default (gcc has PIE option enabled) it fails with:
      
      gcc: error: -pie and -pg|p|profile are incompatible when linking
      
      When -fno-PIE and -nopie are used it fails with:
      
      /usr/lib/gcc/x86_64-alpine-linux-musl/5.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find gcrt1.o: No such file or directory
      
      Added gcc -pg platform test and output PROFILING_FLAGS accordingly. Replaced pg var in Makefile with PROFILING_FLAGS.
      
      * fix segfault when TEST_IOCTL_FRIENDLY_TMPDIR is undefined and default candidates are not suitable
      
      * use ASSERT_DOUBLE_EQ instead of ASSERT_EQ
      
      * When compiled with ROCKSDB_MALLOC_USABLE_SIZE, the UniversalCompactionFourPaths and UniversalCompactionSecondPathRatio tests fail due to premature memtable flushes on systems with 16-byte alignment: the Arena runs out of block space before GenerateNewFile() completes.
      
      Increased options.write_buffer_size.
      b71c4e61
  23. 19 Apr, 2016  1 commit
    • Split db_test.cc · 792762c4
      Yi Wu committed
      Summary: Split db_test.cc into several files. Move several helper functions into DBTestBase.
      
      Test Plan: make check
      
      Reviewers: sdong, yhchiang, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: dhruba, andrewkr, kradhakrishnan, yhchiang, leveldb, sdong
      
      Differential Revision: https://reviews.facebook.net/D56715
      792762c4
  24. 18 Apr, 2016  1 commit
    • Make more tests run in parallel · 6affd45d
      Yi Wu committed
      Summary:
      Generate t/run-* scripts to run each test in $PARALLEL_TEST separately, then make the check_0 rule execute all of them.
      
      Run `time make check` after running `make all`.
      master: 71 sec
      with this diff: 63 sec.
      
      It seems moving more tests to $PARALLEL_TEST doesn't help improve test time though.
      
      Test Plan:
      Run the following
        make check
        J=16 make check
        J=1 make check
        make valgrind_check
        J=1 make valgrind_check
        J=16 make valgrind_check
      
      Reviewers: IslamAbdelRahman, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb, kradhakrishnan, dhruba, andrewkr, yhchiang
      
      Differential Revision: https://reviews.facebook.net/D56805
      6affd45d
  25. 16 Apr, 2016  1 commit
  26. 15 Apr, 2016  1 commit
    • Allow valgrind_check to run in parallel · 3894603f
      sdong committed
      Summary:
      Extend "J=<parallel>" to valgrind_check.
      For DBTest, modify the script to run valgrind. For other tests, prefix the launch command with valgrind.
      
      Test Plan: Run valgrind_check with J=1 and J>1 and make sure tests run under valgrind. Manually change the code to introduce a memory leak and make sure "make watch-log" correctly reports it.
      
      Reviewers: yhchiang, yiwu, andrewkr, kradhakrishnan, IslamAbdelRahman
      
      Reviewed By: kradhakrishnan, IslamAbdelRahman
      
      Subscribers: leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D56727
      3894603f
  27. 14 Apr, 2016  1 commit
  28. 12 Apr, 2016  1 commit
    • Don't run DBOptionsAllFieldsSettable under valgrind · a23c6052
      sdong committed
      Summary: The DBOptionsAllFieldsSettable test sometimes fails under valgrind. Move the option-settable tests to a separate test file and disable them under valgrind.
      
      Test Plan: Run valgrind test and make sure the test doesn't run.
      
      Reviewers: andrewkr, IslamAbdelRahman
      
      Reviewed By: IslamAbdelRahman
      
      Subscribers: kradhakrishnan, yiwu, yhchiang, leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D56529
      a23c6052
  29. 31 Mar, 2016  1 commit
  30. 19 Mar, 2016  1 commit
    • Add unit tests for RepairDB · e182f03c
      Andrew Kryczka committed
      Summary:
      Basic test cases:
      
      - Manifest is lost or corrupt
      - Manifest refers to too many or too few SST files
      - SST file is corrupt
      - Unflushed data is present when RepairDB is called
      
      Depends on D55065 for its CreateFile() function in file_utils
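      
      For context, exercising the public entry point these tests cover looks roughly like the sketch below (path and error handling are illustrative):
      
      ```cpp
      #include <cassert>
      #include <string>
      
      #include "rocksdb/db.h"
      #include "rocksdb/options.h"
      
      // Sketch: rebuild the MANIFEST from whatever SST files remain in the DB
      // directory, then reopen the repaired DB.
      void RepairAndReopen(const std::string& dbname) {
        rocksdb::Options options;
        rocksdb::Status s = rocksdb::RepairDB(dbname, options);
        assert(s.ok());
        rocksdb::DB* db = nullptr;
        s = rocksdb::DB::Open(options, dbname, &db);
        assert(s.ok());
        delete db;
      }
      ```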
      
      Test Plan: Ran the tests.
      
      Reviewers: IslamAbdelRahman, yhchiang, yoshinorim, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb, andrewkr, dhruba
      
      Differential Revision: https://reviews.facebook.net/D55485
      e182f03c
  31. 11 Mar, 2016  1 commit
    • Cache to have an option to fail Cache::Insert() when full · f71fc77b
      Yi Wu committed
      Summary:
      Give Cache an option to fail Cache::Insert() when full. Update call sites to check the returned status and handle the error.
      
      I'm not entirely sure what the correct behavior is for all the call sites when they encounter an error. Please let me know if you see something wrong or if more unit tests are needed.
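      
      A hedged sketch of what a call site looks like once Insert() can fail; the exact Insert() signature and deleter convention here are assumptions based on the summary, not details from this diff:
      
      ```cpp
      #include <string>
      
      #include "rocksdb/cache.h"
      
      // Sketch only: the point is that Insert() now returns a Status that call
      // sites must check instead of assuming the entry was accepted.
      void InsertOrDrop(rocksdb::Cache* cache, const std::string& key,
                        std::string* value) {
        auto deleter = [](const rocksdb::Slice& /*key*/, void* v) {
          delete static_cast<std::string*>(v);
        };
        rocksdb::Status s =
            cache->Insert(key, value, value->size(), deleter, /*handle=*/nullptr);
        if (!s.ok()) {
          delete value;  // the cache was full and refused the entry; free it here
        }
      }
      ```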
      
      Test Plan: make check -j32, see tests pass.
      
      Reviewers: anthony, yhchiang, andrewkr, IslamAbdelRahman, kradhakrishnan, sdong
      
      Reviewed By: sdong
      
      Subscribers: andrewkr, dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D54705
      f71fc77b
  32. 03 Mar, 2016  1 commit
    • Add Iterator Property rocksdb.iterator.version_number · e79ad9e1
      sdong committed
      Summary: We want to provide a way to detect whether an iterator is stale and needs to be recreated. Add an iterator property that returns the version number.
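      
      A brief usage sketch; it assumes the property is read through Iterator::GetProperty() with the name from the title (the exact signature is assumed):
      
      ```cpp
      #include <memory>
      #include <string>
      
      #include "rocksdb/db.h"
      
      // Sketch: read the version number the iterator was created against. The
      // caller can stash the value and compare it with a freshly created
      // iterator's value later to decide whether the old iterator is stale.
      bool GetIteratorVersionNumber(rocksdb::DB* db, std::string* version) {
        std::unique_ptr<rocksdb::Iterator> iter(
            db->NewIterator(rocksdb::ReadOptions()));
        rocksdb::Status s =
            iter->GetProperty("rocksdb.iterator.version_number", version);
        return s.ok();
      }
      ```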
      
      Test Plan: Add two unit tests for it.
      
      Reviewers: IslamAbdelRahman, yhchiang, anthony, kradhakrishnan, andrewkr
      
      Reviewed By: andrewkr
      
      Subscribers: leveldb, dhruba
      
      Differential Revision: https://reviews.facebook.net/D54921
      e79ad9e1