- 28 Apr, 2016 (3 commits)
-
-
Submitted by Andrew Kryczka

Summary: This adds a new metablock containing a shared dictionary that is used to compress all data blocks in the SST file. The size of the shared dictionary is configurable in CompressionOptions and defaults to 0. It is currently only used for zlib/lz4/lz4hc, but the block will be stored in the SST regardless of the compression type if the user chooses a nonzero dictionary size. During compaction, the dictionary is computed by randomly sampling the first output file in each subcompaction. The sampling intervals are pre-computed under the assumption that the output file will have the maximum allowable length; if the file turns out smaller, some of the pre-computed sampling intervals can fall beyond end-of-file, in which case we skip those samples and the dictionary ends up a bit smaller. After the dictionary is generated from the first file in a subcompaction, it is loaded into the compression library before writing each block in each subsequent file of that subcompaction. On the read path, the dictionary is fetched from the metablock, if it exists, and loaded into the compression library before reading each block.

Test Plan: new unit test

Reviewers: yhchiang, IslamAbdelRahman, cyan, sdong
Reviewed By: sdong
Subscribers: andrewkr, yoshinorim, kradhakrishnan, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D52287
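The effect of the shared dictionary described above can be demonstrated with Python's zlib bindings, which expose the same preset-dictionary mechanism (`zdict`) that the commit wires into zlib-compressed blocks. This is a hedged illustration only: the sample dictionary and block contents are made up, and in the commit itself the dictionary is built by sampling subcompaction output, not hand-written.

```python
import zlib

# Illustrative dictionary; the commit builds it by randomly sampling
# the first output file of each subcompaction.
dictionary = b"user_id:12345,timestamp:2016-04-28,"

def compress_block(block, zdict):
    c = zlib.compressobj(zdict=zdict)
    return c.compress(block) + c.flush()

def decompress_block(data, zdict):
    d = zlib.decompressobj(zdict=zdict)
    return d.decompress(data) + d.flush()

block = b"user_id:12345,timestamp:2016-04-28,event:click"
with_dict = compress_block(block, dictionary)

# Round-trips correctly as long as the reader loads the same dictionary,
# which is why the commit stores it in a metablock of the SST file.
assert decompress_block(with_dict, dictionary) == block

# Content shared with the dictionary is encoded as back-references,
# so the dictionary-compressed block is smaller than plain compression.
assert len(with_dict) < len(zlib.compress(block))
```

The key operational point mirrored here: the dictionary must be available on both the write and read paths, since decompression fails without it.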
-
Submitted by Yueh-Hsuan Chiang

Summary: Since db_stress with CompactFiles currently appears to hit a pre-existing bug, temporarily disable CompactFiles in db_stress's default settings so that new bugs can still be detected while the CompactFiles bug is investigated.

Test Plan: crash test

Reviewers: sdong, kradhakrishnan, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D57333
-
Submitted by Sergey Makarenko

Summary: Introduced an option to dump malloc statistics using a new option flag. Added a new command line option to the db_bench tool to enable this functionality. Also extended the build to support environments with and without jemalloc.

Test Plan:
1) Built rocksdb using `make`, launched `./db_bench --benchmarks=fillrandom --dump_malloc_stats=true --num=10000000`, and verified that the jemalloc dump is present in the LOG file.
2) Built rocksdb using `DISABLE_JEMALLOC=1 make db_bench -j32`, ran the same db_bench command, and found the following message in the LOG file: "Please compile with jemalloc to enable malloc dump".
3) Also built rocksdb using `make` on MacOS to verify the behavior in a non-FB environment.
Also, to debug the build configuration change, temporarily set AM_DEFAULT_VERBOSITY = 1 in the Makefile to see the compiler and build tool output. For case 1), -DROCKSDB_JEMALLOC was present on the compiler command line; for both 2) and 3) this flag was not present.

Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D57321
-
- 27 Apr, 2016 (4 commits)
-
-
Submitted by Islam AbdelRahman

Summary: Fix BackupableDBTest.NoDoubleCopy and BackupableDBTest.DifferentEnvs by mocking the db files in db_env instead of backup_env_.

Test Plan: make check -j64

Reviewers: sdong, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D57273
-
Submitted by sdong

Summary: CompactedDB skips the memtable, so we shouldn't use a compacted DB if there are outstanding WAL files.

Test Plan: Changed the options.max_open_files = -1 perf context test to create a compacted DB, which we shouldn't do.

Reviewers: yhchiang, kradhakrishnan, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D57057
-
Submitted by Islam AbdelRahman

Summary: While trying to reuse PinData() / ReleasePinnedData() to optimize away some memcpys, I realized that there is significant overhead in using PinData() / ReleasePinnedData() when they are called many times. This diff refactors the pinning logic by introducing PinnedIteratorsManager, a centralized component that is created once and is notified whenever we need to pin an iterator. This implementation has much less overhead than the original one.

Test Plan:
make check -j64
COMPILE_WITH_ASAN=1 make check -j64

Reviewers: yhchiang, sdong, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56493
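The centralization idea behind this refactor can be sketched as follows. This is a conceptual Python sketch, not RocksDB's actual class or signatures: one manager object is created per read operation, collects everything that needs pinning, and releases it all in one pass, replacing per-iterator pin/unpin bookkeeping.

```python
class PinnedIteratorsManager:
    """Conceptual sketch: a single per-operation registry of pinned
    resources, released together instead of individually."""

    def __init__(self):
        self._pinned = []

    def pin(self, resource, release_fn):
        # Pinning is a cheap O(1) append; no per-iterator state needed.
        self._pinned.append((resource, release_fn))

    def release_all(self):
        # Release everything in the order it was pinned, then reset.
        for resource, release_fn in self._pinned:
            release_fn(resource)
        self._pinned.clear()

released = []
mgr = PinnedIteratorsManager()
mgr.pin("block-A", released.append)
mgr.pin("block-B", released.append)
mgr.release_all()
assert released == ["block-A", "block-B"]
```

The design point is that the cost of pinning moves from every call site to a single bulk release at the end of the operation.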
-
Submitted by Andrew Kryczka

Summary: When db_env_ != backup_env_, InsertPathnameToSizeBytes() would use the wrong Env during backup creation, because it used backup_env_ instead of db_env_ to get WAL/data file sizes. This diff adds an argument to InsertPathnameToSizeBytes() indicating which Env to use.

Test Plan: ran @anirbanb's BackupTestTool

Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D57159
-
- 26 Apr, 2016 (5 commits)
-
-
Submitted by Islam AbdelRahman

Summary: The current implementation finds the first differing byte and tries to increment it; if it cannot, it returns the original key. We can improve on this by continuing past the first differing byte to find the first non-0xFF byte and incrementing that instead. After trying this patch on some logdevice SST files, I see an 8.5% decrease in their index block size.

Test Plan: existing tests and updated test

Reviewers: yhchiang, andrewkr, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56241
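The separator-shortening idea can be sketched in a few lines. This is a hypothetical reimplementation for illustration (the function name and structure are not RocksDB's FindShortestSeparator code): the goal is a short key k with start <= k < limit, and the improvement is to keep scanning past 0xFF bytes instead of giving up at the first differing byte.

```python
def shorten_separator(start: bytes, limit: bytes) -> bytes:
    """Return a short key k with start <= k < limit, or start unchanged
    if no shortening is possible. The old behavior gave up when the
    first differing byte could not be incremented; here we keep
    scanning for the first non-0xFF byte that can be."""
    # Find the first byte where start and limit diverge.
    diff = 0
    min_len = min(len(start), len(limit))
    while diff < min_len and start[diff] == limit[diff]:
        diff += 1
    # Scan onward for a byte we can increment; skip 0xFF bytes.
    for i in range(diff, len(start)):
        if start[i] != 0xFF:
            candidate = start[:i] + bytes([start[i] + 1])
            if start < candidate < limit:
                return candidate
    return start

# Simple case: the first differing byte is incrementable.
assert shorten_separator(b"abcdef", b"abzz") == b"abd"

# The case the patch improves: incrementing the first differing byte
# would reach the limit, but skipping the 0xFF still lets us shorten.
assert shorten_separator(b"ab\xffcdef", b"ac") == b"ab\xffd"
```

Shorter separators mean shorter keys in the index block, which is where the reported 8.5% size reduction comes from.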
-
Submitted by Islam AbdelRahman

Summary: In this test, automatic compactions sometimes do all the work and the manual compaction becomes a no-op. Update the test to make sure the manual compaction is not a no-op.

Test Plan: run the test

Reviewers: andrewkr, yhchiang, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D57189
-
Submitted by sdong

Summary: Currently we collect compaction stats per column family, but report the default column family's stats as the compaction stats for the DB. Fix it by reporting compaction stats per column family instead.

Test Plan: Run db_bench with --num_column_families=4 and see the numbers fixed.

Reviewers: IslamAbdelRahman, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D57063
-
https://github.com/facebook/infer
Submitted by Dhruba Borthakur

Test Plan: make check

Reviewers: leveldb, sdong
Reviewed By: sdong
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D57165
-
Submitted by Yueh-Hsuan Chiang

Summary: In https://reviews.facebook.net/D56271, we fixed an issue where we considered a flush to be a compaction. However, that made us mistakenly count FLUSH_WRITE_BYTES twice (once in flush_job and once in db_impl). This patch removes the increment in db_impl.

Test Plan: db_test

Reviewers: yiwu, andrewkr, IslamAbdelRahman, kradhakrishnan, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D57111
-
- 23 Apr, 2016 (7 commits)
-
-
Submitted by dx9

* Musl libc does not provide adaptive mutexes. Added a feature test for PTHREAD_MUTEX_ADAPTIVE_NP.
* Musl libc does not provide backtrace(3). Added a feature check for backtrace(3).
* Fixed a compiler error.
* Musl libc does not implement backtrace(3). Added a platform check for libexecinfo.
* Alpine does not appear to support the gcc -pg option. By default (gcc has the PIE option enabled) it fails with:
  gcc: error: -pie and -pg|p|profile are incompatible when linking
  When -fno-PIE and -nopie are used it fails with:
  /usr/lib/gcc/x86_64-alpine-linux-musl/5.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find gcrt1.o: No such file or directory
  Added a gcc -pg platform test that outputs PROFILING_FLAGS accordingly, and replaced the pg variable in the Makefile with PROFILING_FLAGS.
* Fix a segfault when TEST_IOCTL_FRIENDLY_TMPDIR is undefined and the default candidates are not suitable.
* Use ASSERT_DOUBLE_EQ instead of ASSERT_EQ.
* When compiled with ROCKSDB_MALLOC_USABLE_SIZE, the UniversalCompactionFourPaths and UniversalCompactionSecondPathRatio tests fail due to premature memtable flushes on systems with 16-byte alignment: the arena runs out of block space before GenerateNewFile() completes. Increased options.write_buffer_size.
-
Submitted by Igor Canadi
-
Submitted by Naitik Shah

rocksdb_backup_engine_purge_old_backups for the C library
-
Submitted by PraveenSinghRao
-
Submitted by Naitik Shah
-
Submitted by Naitik Shah
-
Submitted by Naitik Shah
-
- 22 Apr, 2016 (2 commits)
-
-
Submitted by Yueh-Hsuan Chiang

Summary: Fix the RocksDB Lite build of db_stress.

Test Plan: OPT=-DROCKSDB_LITE db_stress

Reviewers: IslamAbdelRahman, kradhakrishnan, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D57045
-
Submitted by Islam AbdelRahman

Summary: This is the original diff that I landed and reverted, and now I want to land again: https://reviews.facebook.net/D34269

For old SST files we will show
```
comparator name: N/A
merge operator name: N/A
property collectors names: N/A
```
For new SST files with no merge operator name and no property collectors
```
comparator name: leveldb.BytewiseComparator
merge operator name: nullptr
property collectors names: []
```
For new SST files with these properties
```
comparator name: leveldb.BytewiseComparator
merge operator name: UInt64AddOperator
property collectors names: [DummyPropertiesCollector1,DummyPropertiesCollector2]
```

Test Plan: unittests

Reviewers: andrewkr, yhchiang, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56487
-
- 21 Apr, 2016 (2 commits)
-
-
Submitted by Andrew Kryczka
-
Submitted by Andrew Kryczka

Summary: This is needed so we can measure the compression ratio improvements achieved by D52287. The property compares the raw data size against the total file size for a given level. If the level is empty, it returns 0.0.

Test Plan: new unit test

Reviewers: IslamAbdelRahman, yhchiang, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D56967
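The property's arithmetic is simple enough to state precisely. A minimal sketch, assuming the two inputs are the level's total raw (uncompressed) data size and its total on-disk file size; the function name here is illustrative, not RocksDB's property name:

```python
def raw_to_file_size_ratio(raw_bytes: int, file_bytes: int) -> float:
    """Raw data size divided by on-disk file size for one level.
    An empty level (no files, zero bytes) reports 0.0, matching the
    behavior described in the commit."""
    return raw_bytes / file_bytes if file_bytes else 0.0

# A level holding 400 MB of raw data in 100 MB of files compresses 4:1.
assert raw_to_file_size_ratio(400, 100) == 4.0
assert raw_to_file_size_ratio(0, 0) == 0.0
```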
-
- 20 Apr, 2016 (6 commits)
-
-
Submitted by Dmitri Smirnov

Comparable with Snappy in compression ratio. Implemented using the Windows API; does not require an external package. Available since Windows 8 and Server 2012. Use -DXPRESS=1 with CMake to enable.
-
Submitted by flabby

Fix a typo in a comment in options.h.
-
Submitted by Yueh-Hsuan Chiang

Summary: Enable testing CompactFiles in db_stress by adding the flag test_compact_files to db_stress.

Test Plan:
./db_stress --test_compact_files=1 --compaction_style=0 --allow_concurrent_memtable_write=false --ops_per_thread=100000
./db_stress --test_compact_files=1 --compaction_style=1 --allow_concurrent_memtable_write=false --ops_per_thread=100000

Sample output (note that it's normal for some CompactFiles() calls to fail):
```
Stress Test : 491.891 micros/op 65054 ops/sec
            : Wrote 21.98 MB (0.45 MB/sec) (45% of 3200352 ops)
            : Wrote 1440728 times
            : Deleted 441616 times
            : Single deleted 38181 times
            : 319251 read and 19025 found the key
            : Prefix scanned 640520 times
            : Iterator size sum is 9691415
            : Iterated 319704 times
            : Got errors 0 times
            : 1323 CompactFiles() succeed
            : 32 CompactFiles() failed
2016/04/11-15:50:58 Verification successful
```

Reviewers: sdong, IslamAbdelRahman, kradhakrishnan, yiwu, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D56565
-
Submitted by Islam AbdelRahman

Summary: We need to enable sync point processing before creating the SstFileManager, to ensure that we are holding the bg delete scheduler thread back from running.

Test Plan: run the test; debugged using printf

Reviewers: sdong, yhchiang, yiwu, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56871
-
Submitted by Islam AbdelRahman

Summary: @dulmarod ran infer on RocksDB and found that we dereference a nullptr in adaptive_table. https://fb.facebook.com/groups/rocksdb/permalink/1046374415411173/

Test Plan: make check -j64

Reviewers: sdong, yhchiang, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56973
-
Submitted by Andrew Kryczka

Summary: Corresponding change to D56331.

Test Plan: Now the build succeeds:
$ make jclean && make rocksdbjava

Reviewers: yhchiang, IslamAbdelRahman, adamretter
Reviewed By: adamretter
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D56913
-
- 19 Apr, 2016 (8 commits)
-
-
Submitted by Yueh-Hsuan Chiang

Summary: Make subcompaction random in crash_test.

Test Plan: make crash_test and verify that subcompaction changes randomly.

Reviewers: IslamAbdelRahman, kradhakrishnan, yiwu, sdong, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D56571
-
Submitted by Andrew Kryczka

Summary: This is taken from the "Write(GB)" column in the compaction stats, so the units should be GB, not MB.

Test Plan: none

Reviewers: sdong, yhchiang, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56889
-
Submitted by Andrew Kryczka

Summary: Added a python script to parse the combined stdout/stderr of legocastle steps. Previously we just matched words like 'Failure', which didn't work since even our test names matched that pattern. I went through all the legocastle steps to come up with strict failure regexes for the common failure cases. There is also some more complex logic to present gtest failures, since the test name and the failure message are not on the same line. There will definitely be error cases that don't match any of these patterns, so we can iterate on it over time.

Test Plan: no end-to-end test. I ran the legocastle steps locally and piped the output to my script, then verified the result, e.g.,
```
$ set -o pipefail && TEST_TMPDIR=/dev/shm/rocksdb COMPILE_WITH_ASAN=1 OPT=-g make J=1 asan_check |& /usr/facebook/ops/scripts/asan_symbolize.py -d |& python build_tools/error_filter.py asan
==2058029==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000a414 at pc 0x4c12f6 bp 0x7ffcfb7a0520 sp 0x7ffcfb7a0518
```

Reviewers: kradhakrishnan
Reviewed By: kradhakrishnan
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D56691
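The gtest-specific problem mentioned above — the test name and its failure message land on different lines — can be sketched with a small stateful parser. This is a hypothetical miniature for illustration, not the actual build_tools/error_filter.py; the regexes are loose approximations of gtest's `[ RUN      ]` / `[  FAILED  ]` markers.

```python
import re

# Loose matches for gtest's RUN/FAILED banner lines (illustrative).
RUN = re.compile(r"^\[\s*RUN\s*\]\s+(\S+)")
FAILED = re.compile(r"^\[\s*FAILED\s*\]\s+(\S+)")

def gtest_failures(log: str) -> dict:
    """Map each failed test's name to the output emitted while it ran.
    Requires remembering the most recent RUN line, since the failure
    message appears between RUN and FAILED, not on either line."""
    failures, current, buf = {}, None, []
    for line in log.splitlines():
        if m := RUN.match(line):
            current, buf = m.group(1), []
        elif FAILED.match(line) and current:
            # Trailing FAILED summary lines (current is None) are ignored.
            failures[current] = "\n".join(buf)
            current = None
        else:
            buf.append(line)
    return failures

log = """[ RUN      ] DBTest.Foo
db_test.cc:42: Failure
[  FAILED  ] DBTest.Foo (3 ms)
[ RUN      ] DBTest.Bar
[       OK ] DBTest.Bar (1 ms)"""

assert list(gtest_failures(log)) == ["DBTest.Foo"]
```

Matching only single lines would report the name without the message or vice versa, which is why the real script needs this kind of cross-line logic.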
-
Submitted by Yi Wu

Summary: D56715 moved some of the tests from db_test to db_block_cache_test. Some of them should be disabled in the lite build.

Test Plan:
make check -j32
OPT='-DROCKSDB_LITE' make check -j32

Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D56907
-
Submitted by Yi Wu

Summary: Fix the rocksdb lite build after D56715.

Test Plan: make -j40 'OPT=-g -DROCKSDB_LITE'

Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D56895
-
Submitted by sdong

Summary: write_callback_test fails if a previous run didn't finish cleanly. Clean the DB before running the test.

Test Plan: Run the test and see that it doesn't fail any more.

Reviewers: andrewkr, yhchiang, yiwu, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: kradhakrishnan, leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56859
-
Submitted by Yi Wu

Summary: Split db_test.cc into several files, moving several helper functions into DBTestBase.

Test Plan: make check

Reviewers: sdong, yhchiang, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: dhruba, andrewkr, kradhakrishnan, yhchiang, leveldb, sdong
Differential Revision: https://reviews.facebook.net/D56715
-
Submitted by Andrew Kryczka

Summary: This interface is redundant and has been deprecated for a while. It's also unused internally, so let's delete it. I moved the comments to the corresponding functions in BackupEngine/BackupEngineReadOnly, which caused the diff tool to not work cleanly.

Test Plan: unit tests
$ ./backupable_db_test

Reviewers: yhchiang, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D56331
-
- 18 Apr, 2016 (1 commit)
-
-
Submitted by Yi Wu

Summary: Generate t/run-* scripts to run the tests in $PARALLEL_TEST separately, then make the check_0 rule execute all of them. Running `time make check` after `make all`: master takes 71 sec; with this diff, 63 sec. Moving more tests to $PARALLEL_TEST doesn't seem to improve test time further, though.

Test Plan: Run the following:
make check J=16
make check J=1
make check
make valgrind_check J=1
make valgrind_check J=16
make_valgrind_check

Reviewers: IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: leveldb, kradhakrishnan, dhruba, andrewkr, yhchiang
Differential Revision: https://reviews.facebook.net/D56805
-
- 16 Apr, 2016 (2 commits)
-
-
Submitted by Victor Tyutyunov
-
Submitted by Victor Tyutyunov

Summary: The solution is not to change the db sequence number to start from 1, because the value 0 is used in multiple other places. The fix covers only compact_iterator::findEarliestVisibleSnapshot, with updated logic to support snapshot numbering starting from 0.

Test Plan: run `make all check`; it should pass all tests.

Reviewers: IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: lgalanis, mgalushka, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56601
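The essence of the fixed lookup can be sketched as follows. This is a hedged conceptual sketch, not RocksDB's actual signature or data structures: given the sorted snapshot sequence numbers, find the smallest snapshot that can see an entry written at a given sequence number, where "sees" means the snapshot's number is greater than or equal to the entry's.

```python
import bisect

def earliest_visible_snapshot(seq, snapshots):
    """Return the smallest snapshot sequence number >= seq, or None if
    no snapshot sees the entry. With sequence numbers allowed to start
    at 0, a snapshot taken at 0 must still see an entry at seq 0, so
    the comparison has to be >= rather than > -- treating 0 as a
    sentinel would get this wrong."""
    i = bisect.bisect_left(snapshots, seq)  # first index with snapshots[i] >= seq
    return snapshots[i] if i < len(snapshots) else None

# A snapshot at 0 sees the entry written at sequence 0.
assert earliest_visible_snapshot(0, [0, 5, 9]) == 0
# An entry at sequence 6 is first visible to the snapshot at 9.
assert earliest_visible_snapshot(6, [0, 5, 9]) == 9
# No snapshot is late enough to see sequence 10.
assert earliest_visible_snapshot(10, [0, 5, 9]) is None
```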
-