- 26 Aug, 2016 (6 commits)
-
-
Committed by Islam AbdelRahman

Summary: Instead of doing a cat for all the log files, we first sort them by exit code and cat the failing tests at the end. This will make it easier to debug failing tests, since we will just need to look at the end of the logs instead of searching through them.

Test Plan: run it locally

Reviewers: sdong, yiwu, lightmark, kradhakrishnan, yhchiang, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D62211
-
Committed by Justin Gibbs

Summary: Move the manual memtable flush for databases containing data that has bypassed the WAL from DBImpl's destructor to CancelAllBackgroundWork().

CancelAllBackgroundWork() is a publicly exposed API which allows async operations performed by background threads to be disabled on a database. In effect, this places the database into a "shutdown" state in advance of calling the database object's destructor. No compactions or flushing of SST files can occur once a call to this API completes.

When writes are issued to a database with WriteOptions::disableWAL set to true, DBImpl::has_unpersisted_data_ is set so that memtables can be flushed when the database object is destroyed. If CancelAllBackgroundWork() has been called prior to DBImpl's destructor, this flush operation is not possible and is skipped, causing unnecessary loss of data.

Since CancelAllBackgroundWork() is already invoked by DBImpl's destructor in order to perform the thread-join portion of its cleanup processing, moving the manual memtable flush to CancelAllBackgroundWork() ensures data is persisted regardless of client behavior.

Test Plan: Write an amount of data that will not cause a memtable flush to a rocksdb database with all writes marked with WriteOptions::disableWAL. Properly "close" the database. Reopen the database and verify that the data was persisted.

Reviewers: IslamAbdelRahman, yiwu, yoshinorim, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D62277
-
Committed by Islam AbdelRahman

Summary: I just realized that when we run parallel valgrind we don't actually run the parallel tests under valgrind (we run them normally). This patch makes sure that we run both parallel and non-parallel tests with valgrind.

Test Plan: DISABLE_JEMALLOC=1 make valgrind_check -j64

Reviewers: andrewkr, yiwu, lightmark, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D62469
-
Committed by Andrew Kryczka

Summary: see discussion in D62337

Test Plan: unit tests

Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62577
-
Committed by Alex Robinson
-
Committed by Adam Retter

Fix the Windows build of RocksDB Java. Similar to https://github.com/facebook/rocksdb/issues/1220 (#1284)
-
- 25 Aug, 2016 (5 commits)
-
-
Committed by Mike Kolupaev

Summary: We've got a crash with this stack trace:

```
Program terminated with signal SIGTRAP, Trace/breakpoint trap.
#0  0x00007fc85f2f4009 in raise () from /usr/local/fbcode/gcc-4.9-glibc-2.20-fb/lib/libpthread.so.0
#1  0x00000000005c8f61 in facebook::logdevice::handle_sigsegv(int) () at logdevice/server/sigsegv.cpp:159
#2  0x00007fc85f2f4150 in <signal handler called> () at /usr/local/fbcode/gcc-4.9-glibc-2.20-fb/lib/libpthread.so.0
#3  0x00000000031ed80c in rocksdb::NewReadaheadRandomAccessFile() at util/file_reader_writer.cc:383
#4  0x00000000031ed80c in rocksdb::NewReadaheadRandomAccessFile() at util/file_reader_writer.cc:472
#5  0x00000000031558e7 in rocksdb::TableCache::GetTableReader() at db/table_cache.cc:99
#6  0x0000000003156329 in rocksdb::TableCache::NewIterator() at db/table_cache.cc:198
#7  0x0000000003166568 in rocksdb::VersionSet::MakeInputIterator() at db/version_set.cc:3345
#8  0x000000000324a94f in rocksdb::CompactionJob::ProcessKeyValueCompaction(rocksdb::CompactionJob::SubcompactionState*) () at db/compaction_job.cc:650
#9  0x000000000324c2f6 in rocksdb::CompactionJob::Run() () at db/compaction_job.cc:530
#10 0x00000000030f5ae5 in rocksdb::DBImpl::BackgroundCompaction() at db/db_impl.cc:3269
#11 0x0000000003108d36 in rocksdb::DBImpl::BackgroundCallCompaction(void*) () at db/db_impl.cc:2970
#12 0x00000000029a2a9a in facebook::logdevice::RocksDBEnv::callback(void*) () at logdevice/server/locallogstore/RocksDBEnv.cpp:26
#13 0x00000000029a2a9a in facebook::logdevice::RocksDBEnv::callback(void*) () at logdevice/server/locallogstore/RocksDBEnv.cpp:30
#14 0x00000000031e7521 in rocksdb::ThreadPool::BGThread() at util/threadpool.cc:230
#15 0x00000000031e7663 in rocksdb::BGThreadWrapper(void*) () at util/threadpool.cc:254
#16 0x00007fc85f2ea7f1 in start_thread () at /usr/local/fbcode/gcc-4.9-glibc-2.20-fb/lib/libpthread.so.0
#17 0x00007fc85e8fb46d in clone () at /usr/local/fbcode/gcc-4.9-glibc-2.20-fb/lib/libc.so.6
```

From looking at the code, probably what happened is this:
- `TableCache::GetTableReader()` called `Env::NewRandomAccessFile()`, which dispatched to `PosixEnv::NewRandomAccessFile()`, where probably an `open()` call failed, so `NewRandomAccessFile()` left a nullptr in the resulting file,
- `TableCache::GetTableReader()` called `NewReadaheadRandomAccessFile()` with that `nullptr` file,
- it tried to call the file's method and crashed.

This diff is a trivial fix for this crash.

Test Plan: `make -j check`

Reviewers: sdong, andrewkr, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D62451
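The fix pattern can be sketched as follows; the types and names here (`NewFile`, `OpenWithReadahead`) are hypothetical stand-ins, not RocksDB's actual code — the point is simply to check the factory's `Status` before wrapping the resulting file.

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-ins for the types involved (not RocksDB's real ones).
struct Status {
  bool ok_;
  bool ok() const { return ok_; }
};
struct RandomAccessFile {};

// A factory that can fail: on failure it returns a non-OK Status and
// leaves *result empty, like a failed open() inside NewRandomAccessFile().
Status NewFile(bool fail, std::unique_ptr<RandomAccessFile>* result) {
  if (fail) return Status{false};
  result->reset(new RandomAccessFile());
  return Status{true};
}

// The crash came from wrapping the file without checking the factory's
// Status. The fix is to bail out early so a null file is never touched.
Status OpenWithReadahead(bool fail, std::unique_ptr<RandomAccessFile>* out) {
  std::unique_ptr<RandomAccessFile> raw;
  Status s = NewFile(fail, &raw);
  if (!s.ok()) return s;  // do not touch `raw`; it may be null
  *out = std::move(raw);  // readahead wrapping elided in this sketch
  return s;
}
```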
-
Committed by Andrew Kryczka

Summary: The global atomics we previously used for tickers had poor cache performance since they were typically updated from different threads, causing frequent invalidations. In this diff:
- recordTick() updates a local ticker value specific to the thread in which it was called
- When a thread exits, its local ticker value is added into merged_sum
- getTickerCount() returns the sum of all threads' local ticker values and the merged_sum
- setTickerCount() resets all threads' local ticker values and sets merged_sum to the value provided by the caller

In a next diff I will make a similar change for histogram stats.

Test Plan:

before:
```
$ TEST_TMPDIR=/dev/shm/ perf record -g ./db_bench --benchmarks=readwhilewriting --statistics --num=1000000 --use_existing_db --threads=64 --cache_size=250000000 --compression_type=lz4
$ perf report -g --stdio | grep recordTick
7.59% db_bench db_bench [.] rocksdb::StatisticsImpl::recordTick
...
```

after:
```
$ TEST_TMPDIR=/dev/shm/ perf record -g ./db_bench --benchmarks=readwhilewriting --statistics --num=1000000 --use_existing_db --threads=64 --cache_size=250000000 --compression_type=lz4
$ perf report -g --stdio | grep recordTick
1.46% db_bench db_bench [.] rocksdb::StatisticsImpl::recordTick
...
```

Reviewers: kradhakrishnan, MarkCallaghan, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: yiwu, andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62337
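A minimal sketch of the idea (names are illustrative, not RocksDB's `StatisticsImpl`): the hot path increments a counter private to the thread, so no shared cache line is touched; the shared atomic is updated only once per thread, when that thread finishes and merges its total. Reading live threads' local values is omitted here for brevity.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <thread>
#include <vector>

// Sketch of the per-thread ticker scheme described above.
class Stats {
 public:
  // In the real design this would run from a thread-exit hook.
  void MergeThreadTotal(uint64_t local_total) {
    merged_sum_.fetch_add(local_total, std::memory_order_relaxed);
  }
  uint64_t GetTickerCount() const {
    return merged_sum_.load(std::memory_order_relaxed);
  }
 private:
  std::atomic<uint64_t> merged_sum_{0};
};

uint64_t RunWorkers(Stats& stats, int threads, int ticks_per_thread) {
  std::vector<std::thread> pool;
  for (int t = 0; t < threads; ++t) {
    pool.emplace_back([&stats, ticks_per_thread] {
      uint64_t local = 0;             // thread-local ticker: no contention
      for (int i = 0; i < ticks_per_thread; ++i) local += 1;
      stats.MergeThreadTotal(local);  // one shared write at thread exit
    });
  }
  for (auto& th : pool) th.join();
  return stats.GetTickerCount();
}
```

The perf numbers in the commit come from exactly this trade: the frequent operation (recordTick) becomes contention-free, while the rare ones (getTickerCount, thread exit) pay the synchronization cost.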
-
Committed by Joel Marcey

This is the initial commit with the templates necessary to have our RocksDB user documentation hosted on GitHub Pages.

Ensure you meet the requirements here: https://help.github.com/articles/setting-up-your-github-pages-site-locally-with-jekyll/#requirements

Then you can run this right now by doing the following:

```
% bundle install
% bundle exec jekyll serve --config=_config.yml,_config_local_dev.yml
```

Then go to: http://127.0.0.1:4000/

Obviously, this is just the skeleton. Moving forward we will do these things in separate pull requests:
- Replace logos with RocksDB logos
- Update the color schemes
- Add current information on rocksdb.org to markdown in this infra
- Migrate current WordPress blog to Jekyll and Disqus comments
- Etc.
-
Committed by Islam AbdelRahman

Summary: Disable flaky test

Test Plan: run it

Reviewers: yiwu, andrewkr, kradhakrishnan, yhchiang, lightmark, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D62487
-
Committed by Yi Wu

Summary: Temporarily disable clock cache in db_crashtest while we investigate a data race issue with clock cache.

Test Plan: python ./tools/db_crashtest.py blackbox

Reviewers: sdong, lightmark, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62481
-
- 24 Aug, 2016 (6 commits)
-
-
Committed by Aaron Gao

Summary: Fixed the data race described in https://github.com/facebook/rocksdb/issues/1267 and added a regression test.

Test Plan:
```
./table_test --gtest_filter=BlockBasedTableTest.NewIndexIteratorLeak
make all check -j64
```
Core dump before the fix; ok after the fix.

Reviewers: andrewkr, sdong
Reviewed By: sdong
Subscribers: igor, andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62361
-
Committed by Yi Wu

Summary: We used to allow inserts into a full block cache as long as `strict_capacity_limit=false`. This diff further restricts inserts into a full cache when the caller doesn't intend to hold a handle to the cache entry after the insert.

Hopefully this diff fixes the assertion failure with db_stress: https://our.intern.facebook.com/intern/sandcastle/log/?instance_id=211853102&step_id=2475070014

```
db_stress: util/lru_cache.cc:278: virtual void rocksdb::LRUCacheShard::Release(rocksdb::Cache::Handle*): Assertion `lru_.next == &lru_' failed.
```

The assertion at lru_cache.cc:278 can fail when an entry is inserted into a full cache and stays in the LRU list.

Test Plan: make all check

Reviewers: IslamAbdelRahman, lightmark, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62325
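The policy can be sketched with a toy struct (illustrative, not `lru_cache.cc`; entry sizes are abstract "charges"): a full cache refuses the insert outright when no handle is requested, since such an entry would go straight onto the LRU list as dead weight.

```cpp
#include <cassert>
#include <cstddef>

// Toy model of the insert policy described above (not RocksDB's API).
struct CacheSketch {
  size_t usage;
  size_t capacity;
  bool strict_capacity_limit;

  // Returns true if the entry was inserted.
  bool Insert(size_t charge, bool caller_wants_handle) {
    if (usage + charge > capacity) {
      if (!caller_wants_handle) return false;   // new behavior: reject
      if (strict_capacity_limit) return false;  // strict: never over-fill
      // Otherwise allow a temporary over-fill: the caller holds the
      // handle, so the entry is in use and not sitting on the LRU list.
    }
    usage += charge;
    return true;
  }
};
```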
-
Committed by Yi Wu

Summary: Add an option to block based table to insert index/filter blocks into the block cache with priority. Combined with LRUCache with high_pri_pool_ratio, we can reserve space for index/filter blocks, making them less likely to be evicted. Depends on D61977.

Test Plan: See unit test.

Reviewers: lightmark, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, march, leveldb
Differential Revision: https://reviews.facebook.net/D62241
-
Committed by Yi Wu

Summary: ...to unblock the TSAN contbuild.

Test Plan:
```
COMPILE_WITH_TSAN=1 make db_stress -j64
./db_stress --use_clock_cache=true
```

Reviewers: sdong, lightmark, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62397
-
Committed by Andrew Kryczka

Summary: Make the variables static so capturing is unnecessary, since I couldn't find a portable way to capture variables in a lambda that's converted to a C-style pointer-to-function.

Test Plan: https://ci.appveyor.com/project/Facebook/rocksdb/build/1.0.1658

Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62403
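The workaround can be illustrated like this (hypothetical names, not the test in question): only a capture-less lambda converts implicitly to a plain C function pointer, so the value the lambda needs lives in a static variable instead of a capture.

```cpp
#include <cassert>

// Stands in for whatever variable the lambda needed to see.
static int g_base = 0;

using Callback = int (*)(int);  // C-style pointer-to-function

int Invoke(Callback cb, int x) { return cb(x); }

int Demo() {
  g_base = 10;
  // A lambda with captures has no conversion to Callback; this
  // capture-less one converts implicitly because it reads the
  // static variable instead of capturing it.
  Callback cb = [](int x) { return x + g_base; };
  return Invoke(cb, 5);
}
```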
-
Committed by Andrew Kryczka

Summary: There was an error when accessing kItersPerThread in the lambda: https://ci.appveyor.com/project/Facebook/rocksdb/build/1.0.1654

Test Plan: doitlive

Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62379
-
- 23 Aug, 2016 (4 commits)
-
-
Committed by Andrew Kryczka

Summary: This function allows the user to provide a custom function to fold all threads' local data. It will be used in my next diff for aggregating statistics stored in thread-local data. Note the test case uses atomics as thread-local values due to the synchronization requirement (documented in code).

Test Plan: unit test

Reviewers: yhchiang, sdong, kradhakrishnan
Reviewed By: kradhakrishnan
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62049
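An illustrative sketch of such a fold (hypothetical registry, not the actual `ThreadLocalPtr` API): the caller supplies a function that is applied to every registered thread's data pointer, accumulating into a result slot.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <mutex>
#include <vector>

// Toy registry of per-thread data pointers with a user-supplied fold.
class ThreadLocalRegistry {
 public:
  // In the real design a thread registers its slot on first use.
  void Register(void* ptr) {
    std::lock_guard<std::mutex> l(mu_);
    ptrs_.push_back(ptr);
  }

  using FoldFunc = std::function<void(void* entry, void* res)>;

  // Apply `func` to every thread's entry, accumulating into *res.
  void Fold(const FoldFunc& func, void* res) {
    std::lock_guard<std::mutex> l(mu_);
    for (void* p : ptrs_) func(p, res);
  }

 private:
  std::mutex mu_;
  std::vector<void*> ptrs_;
};
```

Summing all threads' ticker values for getTickerCount() is then one Fold call with an addition function, which is presumably the use case the commit has in mind.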
-
Committed by Adam Retter

* Rename RocksDB#remove -> RocksDB#delete to match the C++ API; added deprecated versions of RocksDB#remove for backwards compatibility.
* Add the missing experimental feature RocksDB#singleDelete
-
Committed by Adam Retter

Add Status to RocksDBException so that the meaningful function result Status from the C++ API isn't lost (#1273)
-
Committed by Andrew Kryczka

Summary: The value is not an InternalKey, so we do not need to decode it.

Test Plan:

setup:
```
$ ldb put --create_if_missing=true k v
$ ldb put --db=./tmp --create_if_missing k v
$ ldb compact --db=./tmp
```

before:
```
$ sst_dump --command=raw --file=./tmp/000004.sst
...
terminate called after throwing an instance of 'std::length_error'
```

after:
```
$ ./sst_dump --command=raw --file=./tmp/000004.sst
$ cat tmp/000004_dump.txt
...
ASCII k : v
...
```

Reviewers: sdong, yhchiang, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62301
-
- 20 Aug, 2016 (4 commits)
-
-
Committed by Yi Wu

Summary: Add mid-point insertion functionality to the LRU cache. A caller of `Cache::Insert()` can set an additional parameter to make a cache entry have higher priority. The LRU cache will reserve at most `capacity * high_pri_pool_pct` bytes for high-pri cache entries. If `high_pri_pool_pct` is zero, the cache degenerates to a normal LRU cache.

Context: If we are to put index and filter blocks into the RocksDB block cache, index/filter blocks can be swapped out too early. We want to add an option to RocksDB to reserve some capacity in the block cache just for index/filter blocks, to mitigate the issue.

In later diffs I'll update the block based table reader to use this interface to cache index/filter blocks at high priority, expose the option to `DBOptions`, and make it dynamically changeable.

Test Plan: unit test.

Reviewers: IslamAbdelRahman, sdong, lightmark
Reviewed By: lightmark
Subscribers: andrewkr, dhruba, march, leveldb
Differential Revision: https://reviews.facebook.net/D61977
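The mid-point scheme can be sketched with two list segments (an illustrative toy, not `lru_cache.cc`; it counts entries rather than byte charges). High-pri entries enter at the hot end; low-pri entries enter at the midpoint; eviction only takes from the cold end, so low-pri churn cannot displace the reserved high-pri pool.

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <string>

// Toy mid-point LRU: `high_` is the high-pri pool, `low_` the rest.
class MidpointLRU {
 public:
  MidpointLRU(size_t capacity, double high_pri_ratio)
      : capacity_(capacity),
        high_cap_(static_cast<size_t>(capacity * high_pri_ratio)) {}

  void Insert(const std::string& key, bool high_pri) {
    if (high_pri) {
      high_.push_front(key);
      // High-pri pool over budget: demote its coldest entry to the
      // top of the low-pri segment (the "midpoint").
      if (high_.size() > high_cap_) {
        low_.push_front(high_.back());
        high_.pop_back();
      }
    } else {
      low_.push_front(key);  // low-pri entries enter at the midpoint
    }
    // Evict from the cold end only; high-pri entries go last.
    while (high_.size() + low_.size() > capacity_ && !low_.empty()) {
      low_.pop_back();
    }
  }

  bool Contains(const std::string& key) const {
    for (const auto& k : high_) if (k == key) return true;
    for (const auto& k : low_) if (k == key) return true;
    return false;
  }

 private:
  size_t capacity_, high_cap_;
  std::list<std::string> high_, low_;
};
```

With a ratio of zero the high-pri pool is empty and every insert lands at the front of `low_`, i.e. plain LRU, matching the degenerate case described above.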
-
Committed by Islam AbdelRahman

Summary: Update SstFileWriter to use the user TablePropertiesCollectors that are passed in Options

Test Plan: unittests

Reviewers: sdong
Reviewed By: sdong
Subscribers: jkedgar, andrewkr, hermanlee4, dhruba, yoshinorim
Differential Revision: https://reviews.facebook.net/D62253
-
Committed by Wanning Jiang

Summary:
1. Range deletion tombstone structure
2. Modify Add() in table_builder to make it usable for adding range del tombstones
3. Expose the NewTombstoneIterator() API in table_reader

Test Plan: table_test.cc (now BlockBasedTableBuilder::Add() only accepts InternalKey, so I made table_test pass only InternalKey to BlockBasedTableBuilder. Also tested writing/reading range deletion tombstones in table_test.)

Reviewers: sdong, IslamAbdelRahman, lightmark, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D61473
-
Committed by Yi Wu

Summary: The clock-based cache implementation aims to have better concurrency than the default LRU cache. See inline comments for implementation details.

Test Plan: Updated cache_test to run on both LRUCache and ClockCache. Added some new tests to catch some of the bugs that I fixed while implementing the cache.

Reviewers: kradhakrishnan, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D61647
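For reference, the classic CLOCK eviction idea behind such a cache can be sketched as follows (a sequential toy, not the actual implementation, which is concurrent and lock-free): entries sit in a circular buffer with a reference bit; lookups set the bit, and the clock hand clears bits as it sweeps, evicting the first entry whose bit is already clear. Because eviction only needs this hand and the bits, readers never have to reorder a shared list the way LRU does, which is where the concurrency win comes from.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Toy CLOCK cache over a fixed-size circular buffer of slots.
class ClockCache {
 public:
  explicit ClockCache(size_t capacity) : slots_(capacity), hand_(0) {}

  bool Lookup(const std::string& key) {
    for (auto& s : slots_) {
      if (s.used && s.key == key) {
        s.ref = true;  // mark recently used; no list reordering
        return true;
      }
    }
    return false;
  }

  void Insert(const std::string& key) {
    // Sweep the hand until a victim with a clear ref bit is found,
    // giving recently-touched entries a "second chance".
    while (slots_[hand_].used && slots_[hand_].ref) {
      slots_[hand_].ref = false;
      hand_ = (hand_ + 1) % slots_.size();
    }
    slots_[hand_] = Slot{key, true, true};
    hand_ = (hand_ + 1) % slots_.size();
  }

 private:
  struct Slot { std::string key; bool used = false; bool ref = false; };
  std::vector<Slot> slots_;
  size_t hand_;
};
```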
-
- 19 Aug, 2016 (1 commit)
-
-
Committed by Yi Wu

Summary: Splitting out the makefile part of D55581.

Test Plan:
```
make all check -j32
ROCKSDB_FBCODE_BUILD_WITH_481=1 make all check -j32
ROCKSDB_NO_FBCODE=1 make all check -j32
export TBB_BASE=/mnt/gvfs/third-party2/tbb/afa54b33cfcf93f1d90a3160cdb894d6d63d5dca/4.0_update2/gcc-4.9-glibc-2.20/e9936bf; ROCKSDB_NO_FBCODE=1 CFLAGS="-I $TBB_BASE/include" LDFLAGS="-L $TBB_BASE/lib -Wl,-rpath=$TBB_BASE/lib" make all check -j32
```

Reviewers: IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: kradhakrishnan, yhchiang, IslamAbdelRahman, andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D56979
-
- 18 Aug, 2016 (2 commits)
-
-
Committed by Jay

Close #1262
-
Committed by Alexander Jipa

Fixes #1215: execute_process(COMMAND mkdir ${DIR}) fails to create a directory with cmake on Windows (#1219)

CMake: use a platform-neutral mkdir function
-
- 17 Aug, 2016 (2 commits)
-
-
Committed by Dmitri Smirnov

* Create the rate limiter using a factory function in the test.
* Convert function-local statics in the option helper to a C array that does not perform dynamic memory allocation. This is helpful when you try to memory-isolate different DB instances.
-
Committed by Yi Wu

Summary: ...so that I can include the header and create LRUCache-specific tests for D61977

Test Plan: make check

Reviewers: lightmark, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62145
-
- 16 Aug, 2016 (9 commits)
-
-
Committed by Anirban Rahut

Summary: Added 2 statistics to the compaction job statistics, to identify whether single deletes are not meeting a matching key (fallthrough) or single deletes are meeting a merge, delete or another single delete (i.e. not the expected case of put).

Test Plan: Tested the statistics using write_stress and compaction_job_stats_test

Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D61749
-
Committed by Andrew Kryczka

Summary: Add an API to WriteBatch to store range deletions in its buffer, which are later added to the memtable. In the WriteBatch buffer, a range deletion is encoded as "<optype><CF ID (optional)><begin key><end key>".

With this diff, the range tombstones are stored inline with the data in the memtable. This is useful for now because the test cases rely on the data being accessible via the memtable. My next step is to store range tombstones in a separate area in the memtable.

Test Plan: unit tests

Reviewers: IslamAbdelRahman, sdong, wanning
Reviewed By: wanning
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D61401
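The buffer layout can be illustrated with a toy encoder. This is a simplification, not the real format: RocksDB's write batch uses varint-encoded length-prefixed slices, the actual op-type byte value differs from the illustrative `0x0F` below, and the optional column family ID is omitted here.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Illustrative op-type tag; not the real kTypeRangeDeletion value.
constexpr char kTypeRangeDeletion = 0x0F;

// Fixed 32-bit little-or-native-endian length prefix for brevity;
// the real code uses varints.
void PutLengthPrefixed(std::string* dst, const std::string& s) {
  uint32_t n = static_cast<uint32_t>(s.size());
  dst->append(reinterpret_cast<const char*>(&n), sizeof(n));
  dst->append(s);
}

// Encode "<optype><begin key><end key>" into one record.
std::string EncodeRangeDeletion(const std::string& begin,
                                const std::string& end) {
  std::string rec;
  rec.push_back(kTypeRangeDeletion);
  PutLengthPrefixed(&rec, begin);
  PutLengthPrefixed(&rec, end);
  return rec;
}
```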
-
Committed by Andrew Kryczka

Summary: We were frequently seeing a race between SyncPoint::Process() and SyncPoint::~SyncPoint() (e.g., https://our.intern.facebook.com/intern/sandcastle/log/?instance_id=207289975&step_id=2412725431). The issue was that marked_thread_id_ gets deleted when the main thread is exiting while background threads may simultaneously access it. We can prevent this race condition by checking whether sync points are disabled (assuming the test terminates with them disabled) before attempting to access that member. I do not understand why accesses to the other members (mutex_ and enabled_) are ok, but anyways the test no longer fails tsan.

Test Plan: ran tests

Reviewers: sdong, yhchiang, IslamAbdelRahman, yiwu, wanning
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D62133
-
Committed by Islam AbdelRahman

Summary: Fix the test by releasing the last snapshot

Test Plan: run the test under valgrind

Reviewers: andrewkr, yiwu, lightmark, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D62091
-
Committed by Islam AbdelRahman

Summary: Fix the java build

Test Plan: make rocksdbjava -j64

Reviewers: yhchiang, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D62097
-
Committed by Mark Callaghan

Summary: Changes compaction IO stats to be printed once per interval rather than once per interval per thread. https://github.com/facebook/rocksdb/issues/1276

Task ID: #12698508

Test Plan: run db_bench

Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D62067
-
Committed by sdong

Summary: "Batch" is ambiguous in this context. It can mean "write batch" or "commit group". Change it to "commit group" to be clear.

Test Plan: Build

Reviewers: MarkCallaghan, yhchiang, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D62055
-
Committed by Edouard A

Fix typo: "Users shouldn't reply on" -> "users shouldn't rely on"
-
Committed by Yueh-Hsuan Chiang

Summary: Env holds a pointer to a ThreadStatusUpdater, which will be deleted when the Env is deleted. However, if a rocksdb database is deleted after the Env is deleted, this introduces a use-after-free of that ThreadStatusUpdater. This patch fixes this by never deleting the ThreadStatusUpdater in Env, which is in general safe as Env is a singleton in most cases.

Test Plan: thread_list_test

Reviewers: andrewkr, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D59187
-
- 13 Aug, 2016 (1 commit)
-
-
Committed by Philipp Unterbrunner

Summary: Added min/max/avg data block size output to sst_dump. Output was added to the end of BlockBasedTable::DumpDataBlocks, so it appears after the data block details, at the very end of the dump file.

Test Plan:
```
./db_bench --benchmarks=fillrandom
./sst_dump --file=/tmp/rocksdbtest-xyz/dbbench/000007.sst --command=raw
tail -n 6 /tmp/rocksdbtest-xyz/dbbench/000007_dump.txt
```
```
Data Block Summary:
--------------------------------------
  # data blocks: 11336
  min data block size: 903
  max data block size: 2268
  avg data block size: 2245.363356
```

Reviewers: IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D61815
-