- 13 Apr, 2013 (1 commit)
-
-
Committed by Haobo Xu
Summary: FindObsoleteFiles was slow while holding the single big lock, resulting in bad p99 behavior. Nothing was profiled, but several things could be improved: 1. VersionSet::AddLiveFiles worked with std::set, which is by itself slow (a tree); you also don't know how many dynamic allocations occur just to build up this tree. Switched to std::vector, and added logic to pre-calculate the total size and do just one allocation. 2. There is no reason env_->GetChildren() needs to be mutex protected; moved it to PurgeObsoleteFiles, where the mutex can be unlocked. 3. Switched std::set to std::unordered_set; the conversion from vector is also inside PurgeObsoleteFiles. I have a feeling this should pretty much fix it. Test Plan: make check; db_stress Reviewers: dhruba, heyongqiang, MarkCallaghan Reviewed By: dhruba CC: leveldb, zshao Differential Revision: https://reviews.facebook.net/D10197
-
- 12 Apr, 2013 (3 commits)
-
-
Committed by Abhishek Kona
[RocksDB] Expose LDB functionality as a library call so clients can build their own LDB binary with additional options. Summary: Primarily a refactor. Introduced an LDBTool interface to which customers can plug in their options, which will create their own version of the ldb tool. Test Plan: made the ldb tool and tried it. Reviewers: dhruba, heyongqiang Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D10191
-
Committed by Mayank Agarwal
Summary: Use unique_ptr to get automatic deletion of probableWALfiles in db_impl.cc. Test Plan: make Reviewers: sheki, dhruba Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D10083
-
Committed by Mayank Agarwal
Test Plan: make valgrind_check Reviewers: sheki Reviewed By: sheki Differential Revision: https://reviews.facebook.net/D10185
-
- 11 Apr, 2013 (5 commits)
-
-
Committed by Mayank Agarwal
Summary: new release 1.5.9.fb Test Plan: release Reviewers: heyongqiang Reviewed By: heyongqiang Differential Revision: https://reviews.facebook.net/D10149
-
Committed by Mayank Agarwal
Summary: The background compaction threads were never exited and therefore caused memory leaks while running rocksdb tests. Changed the PosixEnv destructor to exit and join them, and changed the tests likewise. The memory leaked has been reduced from 320 bytes to 64 bytes in all the tests. The 64 bytes relate to pthread_exit, but I still have to figure out why. The stack trace right now with table_test.cc = 64 bytes in 1 blocks are possibly lost in loss record 4 of 5 at 0x475D8C: malloc (jemalloc.c:914) by 0x400D69E: _dl_map_object_deps (dl-deps.c:505) by 0x4013393: dl_open_worker (dl-open.c:263) by 0x400F015: _dl_catch_error (dl-error.c:178) by 0x4013B2B: _dl_open (dl-open.c:569) by 0x5D3E913: do_dlopen (dl-libc.c:86) by 0x400F015: _dl_catch_error (dl-error.c:178) by 0x5D3E9D6: __libc_dlopen_mode (dl-libc.c:47) by 0x5048BF3: pthread_cancel_init (unwind-forcedunwind.c:53) by 0x5048DC9: _Unwind_ForcedUnwind (unwind-forcedunwind.c:126) by 0x5046D9F: __pthread_unwind (unwind.c:130) by 0x50413A4: pthread_exit (pthreadP.h:289) Test Plan: make all check Reviewers: dhruba, sheki, haobo Reviewed By: dhruba CC: leveldb, chip Differential Revision: https://reviews.facebook.net/D9573
-
Committed by heyongqiang
Summary: As subject. This was causing a problem in adsconv. Ideally, this flag should be set in open, but that is only supported in Linux kernel ≥2.6.23 and glibc ≥2.7. Test Plan: run db_test Reviewers: dhruba, MarkCallaghan, haobo Reviewed By: dhruba CC: leveldb, chip Differential Revision: https://reviews.facebook.net/D10089
-
Committed by Mayank Agarwal
Summary: To know which options the crashtest was run with. Also changed print to sys.stdout.write which is more standard. Test Plan: python tools/db_crashtest.py Reviewers: vamsi, akushner, dhruba Reviewed By: akushner Differential Revision: https://reviews.facebook.net/D10119
-
Committed by Dhruba Borthakur
Summary: The segfault was happening because the program was unable to open a new sst file (as part of the compaction) because the process ran out of file descriptors. The fix is to check the return status of the file creation before taking any other action. Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7fabf03f9700 (LWP 29904)] leveldb::DBImpl::OpenCompactionOutputFile (this=this@entry=0x7fabf9011400, compact=compact@entry=0x7fabf741a2b0) at db/db_impl.cc:1399 1399 db/db_impl.cc: No such file or directory. (gdb) where Test Plan: make check Reviewers: MarkCallaghan, sheki Reviewed By: MarkCallaghan CC: leveldb Differential Revision: https://reviews.facebook.net/D10101
-
- 10 Apr, 2013 (1 commit)
-
-
Committed by Mayank Agarwal
Summary: Was deleting incorrectly; should delete the whole array. Test Plan: make; valgrind stops complaining about Mismatched free/delete Reviewers: dhruba, sheki Reviewed By: sheki CC: leveldb, haobo Differential Revision: https://reviews.facebook.net/D10059
-
- 09 Apr, 2013 (4 commits)
-
-
Committed by Mayank Agarwal
Summary: make crash_test will now invoke the crash_test. Also some cleanup in the db_crashtest.py file Test Plan: make crash_test Reviewers: akushner, vamsi, sheki, dhruba Reviewed By: vamsi Differential Revision: https://reviews.facebook.net/D9987
-
Committed by Abhishek Kona
[RocksDB][Bug] Look at all the files, not just the first file, in TransactionLogIter, as BatchWrites can leave it in limbo. Summary: The Transaction Log Iterator did not move to the next file in the series if there was a write batch at the end of the current file. The solution: if the last seq no. of the current file is < RequestedSeqNo, assume the first seqNo. of the next file has to satisfy the request. Also major refactoring around the code: moved opening the log reader to a separate function, got rid of goto. Test Plan: added a unit test for it. Reviewers: dhruba, heyongqiang Reviewed By: heyongqiang CC: leveldb, emayanke Differential Revision: https://reviews.facebook.net/D10029
-
Committed by Mayank Agarwal
Summary: The crash_test depends on db_stress to work with pre-existing dir Test Plan: make db_stress; Run db_stress with 'destroy_db_initially=0' Reviewers: vamsi, dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D10041
-
Committed by Mayank Agarwal
Summary: For sanity w.r.t. the way we split up the reopens equally among the ops/thread Test Plan: make db_stress; db_stress --ops_per_thread=10 --reopens=10 => error Reviewers: vamsi, dhruba Reviewed By: dhruba CC: leveldb Differential Revision: https://reviews.facebook.net/D10023
-
- 06 Apr, 2013 (2 commits)
-
-
Committed by Mayank Agarwal
Summary: Renames in the Makefile. It will be used like this in third-party. Test Plan: make Reviewers: dhruba, sheki, heyongqiang, haobo Reviewed By: heyongqiang CC: leveldb Differential Revision: https://reviews.facebook.net/D9633
-
Committed by Abhishek Kona
Summary: TableAndFile was a struct used earlier to delete the file as we did not have std::unique_ptr in the codebase. With Chip introducing C++11 hotness like std::unique_ptr we can do away with the struct. Test Plan: make all check Reviewers: haobo, heyongqiang Reviewed By: heyongqiang CC: dhruba, leveldb Differential Revision: https://reviews.facebook.net/D9975
-
- 05 Apr, 2013 (1 commit)
-
-
Committed by Haobo Xu
Summary: 1. The stock LRUCache nukes itself whenever the working set (the total number of entries not released by client at a certain time) is bigger than the cache capacity. See https://our.dev.facebook.com/intern/tasks/?t=2252281 2. There's a bug in shard calculation leading to segmentation fault when only one shard is needed. Test Plan: make check Reviewers: dhruba, heyongqiang Reviewed By: heyongqiang CC: leveldb, zshao, sheki Differential Revision: https://reviews.facebook.net/D9927
-
- 04 Apr, 2013 (2 commits)
-
-
Committed by Vamsi Ponnekanti
Summary: When I run db_crashtest, I see a lot of warnings saying db_stress completed before it was killed. To fix that, I made ops per thread a very large value so that it keeps running until it is killed. I also set #reopens to 0: since we are killing the process anyway, the 'simulated crash' that happens during reopen may not add additional value. I usually see 10-25K ops happening before the kill, so I increased max_key from 100 to 1000 so that we use more distinct keys. Test Plan: Ran a few times. Revert Plan: OK Task ID: # Reviewers: emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D9909
-
Committed by Mayank Agarwal
Test Plan: make all check Reviewers: sheki Reviewed By: sheki Differential Revision: https://reviews.facebook.net/D9915
-
- 03 Apr, 2013 (3 commits)
-
-
Committed by Abhishek Kona
Summary: During recovery, the last_updated_manifest number was not set if there were no records in the write-ahead log. Now check the recovered manifest as well and set last_updated_manifest to the max value. Test Plan: unit test Reviewers: heyongqiang Reviewed By: heyongqiang CC: leveldb Differential Revision: https://reviews.facebook.net/D9891
-
Committed by Haobo Xu
Summary: As title. Code is shorter and cleaner See https://our.dev.facebook.com/intern/tasks/?t=2233981 Test Plan: make check Reviewers: dhruba, heyongqiang Reviewed By: dhruba CC: leveldb, zshao Differential Revision: https://reviews.facebook.net/D9789
-
Committed by Haobo Xu
Summary: 1. SetBackgroundThreads was not thread safe. 2. queue_size_ does not seem necessary. 3. Moved the condition signal after the shared state change. Even though the original order is in practice OK (because the mutex is still held), it looks fishy and non-intuitive. Test Plan: make check Reviewers: dhruba Reviewed By: dhruba CC: leveldb, zshao Differential Revision: https://reviews.facebook.net/D9825
-
- 02 Apr, 2013 (1 commit)
-
-
Committed by Mayank Agarwal
Summary: The script runs and kills the stress test periodically. Default values have been used in the script now. Should I make this a part of the Makefile or automated rocksdb build? The values can be easily changed in the script right now, but should I add some support for variable values or input to the script? I believe the script achieves its objective of unsafe crashes and reopening to expect sanity in the database. Test Plan: python tools/db_crashtest.py Reviewers: dhruba, vamsi, MarkCallaghan Reviewed By: vamsi CC: leveldb Differential Revision: https://reviews.facebook.net/D9369
-
- 29 Mar, 2013 (4 commits)
-
-
Committed by Haobo Xu
Summary: If a class owns an object: - If the object can be null => use a unique_ptr. no delete - If the object can not be null => don't even need new, let alone delete - for runtime sized array => use vector, no delete. Test Plan: make check Reviewers: dhruba, heyongqiang Reviewed By: heyongqiang CC: leveldb, zshao, sheki, emayanke, MarkCallaghan Differential Revision: https://reviews.facebook.net/D9783
-
Committed by Abhishek Kona
Summary: RocksDB does a binary search over the files which might contain the requested sequence number in the call to GetUpdatesSince. There was a bug in the binary search: when the file pointed to by the middle index of the bsearch was empty/corrupt, it needs to resize the vector and update the indexes. This now fixes that. Test Plan: existing unit tests pass. Reviewers: heyongqiang, dhruba Reviewed By: heyongqiang CC: leveldb Differential Revision: https://reviews.facebook.net/D9777
-
Committed by Abhishek Kona
Summary: If the vector returned by GetUpdatesSince is empty, it is still returned to the user, which causes it to throw a std::range error. The probable file list is now checked, and an IOError status is returned instead of OK. Test Plan: added a unit test. Reviewers: dhruba, heyongqiang Reviewed By: heyongqiang CC: leveldb Differential Revision: https://reviews.facebook.net/D9771
-
Committed by Abhishek Kona
Summary: Use non-mmap'd files for the write-ahead log. The earlier use of mmap'd files made the log iterator read ahead and miss records. Now the reader and writer will point to the same physical location. There is no perf regression: ./db_bench --benchmarks=fillseq --db=/dev/shm/mmap_test --num=$(million 20) --use_existing_db=0 --threads=2 with this diff: fillseq : 10.756 micros/op 185281 ops/sec; 20.5 MB/s; without this diff: fillseq : 11.085 micros/op 179676 ops/sec; 19.9 MB/s Test Plan: unit test included Reviewers: dhruba, heyongqiang Reviewed By: heyongqiang CC: leveldb Differential Revision: https://reviews.facebook.net/D9741
-
- 28 Mar, 2013 (1 commit)
-
-
Committed by Abhishek Kona
Summary: Earlier, the Statistics object was a raw pointer. This meant the user had to clean up the Statistics object after creating the database. In most use cases, the database is created in a function and the statistics pointer goes out of scope, so the statistics object would never be deleted. Now using a shared_ptr to manage this. Want this in before the next release. Test Plan: make all check. Reviewers: dhruba, emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D9735
-
- 27 Mar, 2013 (1 commit)
-
-
Committed by Haobo Xu
Summary: rocksdb uses a single global lock to protect in memory metadata. We should minimize the mutex protected code section to increase the effective parallelism of the program. See https://our.intern.facebook.com/intern/tasks/?t=2218928 Test Plan: make check db_bench Reviewers: dhruba, heyongqiang CC: zshao, leveldb Differential Revision: https://reviews.facebook.net/D9705
-
- 23 Mar, 2013 (1 commit)
-
-
Committed by Simon Marlow
Summary: Syntax: manifest_dump [--verbose] --num=<manifest_num> e.g. $ ./ldb --db=/home/smarlow/tmp/testdb manifest_dump --num=12 manifest_file_number 13 next_file_number 14 last_sequence 3 log_number 11 prev_log_number 0 --- level 0 --- version# 0 --- 6:116['a1' @ 1 : 1 .. 'a1' @ 1 : 1] 10:130['a3' @ 2 : 1 .. 'a4' @ 3 : 1] --- level 1 --- version# 0 --- --- level 2 --- version# 0 --- --- level 3 --- version# 0 --- --- level 4 --- version# 0 --- --- level 5 --- version# 0 --- --- level 6 --- version# 0 --- Test Plan: - Tested on an example DB (see output in summary) Reviewers: sheki, dhruba Reviewed By: sheki CC: leveldb, heyongqiang Differential Revision: https://reviews.facebook.net/D9609
-
- 22 Mar, 2013 (4 commits)
-
-
Committed by Abhishek Kona
Summary: The unit test fails as our solution does not work with MMap'd files. Disable the failing unit test. Put it back with the next diff which should fix the problem. Test Plan: db_test Reviewers: heyongqiang CC: dhruba Differential Revision: https://reviews.facebook.net/D9645
-
Committed by Abhishek Kona
Summary: * Add a method to check if the log reader is at EOF. * If we know a record has been flushed, force the log_reader to believe it is not at EOF, using a new method UnMarkEof(). This does not work with mmap'd files. Test Plan: added a unit test. Reviewers: dhruba, heyongqiang Reviewed By: heyongqiang CC: leveldb Differential Revision: https://reviews.facebook.net/D9567
-
Committed by Mayank Agarwal
Summary: This caused compilation problems on some gcc platforms during the third-party release. Test Plan: make Reviewers: sheki Reviewed By: sheki Differential Revision: https://reviews.facebook.net/D9627
-
Committed by Abhishek Kona
Summary: simple sed command to replace NULL in tools directory. Was missed by the previous codemod. Test Plan: it compiles Reviewers: emayanke Reviewed By: emayanke CC: leveldb Differential Revision: https://reviews.facebook.net/D9621
-
- 21 Mar, 2013 (4 commits)
-
-
Committed by Dhruba Borthakur
Summary: The events that trigger compaction: * opening the database * Get -> only if seek compaction is not disabled and other checks are true * MakeRoomForWrite -> when memtable is full * BackgroundCall -> If the background thread is about to do a compaction run, it schedules a new background task to trigger a possible compaction. This will cause additional background threads to find and process other compactions that can run concurrently. Test Plan: ran db_bench with overwrite and readonly alternatively. Reviewers: sheki, MarkCallaghan Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D9579
-
Committed by Dhruba Borthakur
Summary: This patch allows an application to specify whether to use bufferedio, reads-via-mmaps and writes-via-mmaps per database. Earlier, there was a global static variable that was used to configure this functionality. The default setting remains the same (and is backward compatible): 1. use bufferedio 2. do not use mmaps for reads 3. use mmap for writes 4. use readaheads for reads needed for compaction I also added a parameter to db_bench to be able to explicitly specify whether to do readaheads for compactions or not. Test Plan: make check Reviewers: sheki, heyongqiang, MarkCallaghan Reviewed By: sheki CC: leveldb Differential Revision: https://reviews.facebook.net/D9429
-
Committed by Dhruba Borthakur
Summary: Test Plan: Reviewers: CC: Task ID: # Blame Rev:
-
Committed by Mayank Agarwal
Summary: Getting rid of boost in our github codebase which caused problems on third-party Test Plan: make ldb; python tools/ldb_test.py Reviewers: sheki, dhruba Reviewed By: sheki Differential Revision: https://reviews.facebook.net/D9543
-
- 20 Mar, 2013 (2 commits)
-
-
Committed by Mayank Agarwal
Summary: Was causing an error (warning) in the third-party build about an unused result. Test Plan: make Reviewers: sheki, dhruba Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D9447
-
Committed by Mayank Agarwal
Summary: make complained about some comparisons left in log_test.cc and db_test.cc. Test Plan: make Reviewers: dhruba, sheki Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D9537
-