- 29 Aug 2014 (1 commit)

Committed by Wankai Zhang

- 28 Aug 2014 (3 commits)

Committed by Wankai Zhang
Make noncopyable classes use the more idiomatic C++11 approach and keep constructor parameter names consistent
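
A hedged sketch of the C++11 idiom the title refers to (the class name is illustrative, not taken from the RocksDB source): the copy operations are deleted in place, replacing the pre-C++11 trick of declaring them private and leaving them undefined.

```
// C++11 noncopyable: delete the copy constructor and copy assignment
// outright, so misuse fails at compile time with a clear diagnostic.
class NonCopyableExample {
 public:
  NonCopyableExample() = default;
  NonCopyableExample(const NonCopyableExample&) = delete;
  NonCopyableExample& operator=(const NonCopyableExample&) = delete;
};
```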

Committed by Igor Canadi
Summary: This assert makes Insert O(n^2) instead of O(n) in debug mode. Memtable insert is in the critical path. No need to assert uniqueness of the key here, since we're adding a sequence number to it anyway.
Test Plan: none
Reviewers: sdong, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22443
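
A self-contained toy illustrating the complexity argument (this is not the actual RocksDB memtable code): a debug-only uniqueness check that scans existing entries costs O(n) per insert, so n inserts degrade to O(n^2) in debug builds.

```
#include <algorithm>  // for the std::find shown in the removed check
#include <list>
#include <string>

struct ToyMemTable {
  std::list<std::string> entries;

  void Insert(const std::string& internal_key) {
    // The removed assert looked conceptually like this O(n) scan. It is
    // unnecessary because the sequence number appended to each key
    // already makes every internal key unique.
    // assert(std::find(entries.begin(), entries.end(), internal_key) ==
    //        entries.end());
    entries.push_front(internal_key);
  }
};
```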

Committed by Radheshyam Balasundaram
Summary:
- New Uint64 comparator
- Modify Reader and Builder to take custom user comparators instead of bytewise comparator
- Modify logic for choosing unused user key in builder
- Modify iterator logic in reader
- test changes
Test Plan: cuckoo_table_{builder,reader,db}_test; make check all
Reviewers: ljin, sdong
Reviewed By: ljin
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D22377
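
A hedged sketch of what a fixed-width uint64 comparator can look like against the public Comparator interface; the comparator this diff actually added lives in the RocksDB tree and may differ in detail.

```
#include <cstdint>
#include <cstring>
#include <string>
#include "rocksdb/comparator.h"
#include "rocksdb/slice.h"

class Uint64ComparatorSketch : public rocksdb::Comparator {
 public:
  int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const override {
    // Assumes every key is exactly 8 bytes holding a native-endian uint64.
    uint64_t va, vb;
    std::memcpy(&va, a.data(), sizeof(va));
    std::memcpy(&vb, b.data(), sizeof(vb));
    if (va < vb) return -1;
    if (va > vb) return 1;
    return 0;
  }
  const char* Name() const override { return "Uint64ComparatorSketch"; }
  // Fixed-width integer keys leave nothing useful to shorten.
  void FindShortestSeparator(std::string*, const rocksdb::Slice&) const override {}
  void FindShortSuccessor(std::string*) const override {}
};
```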

- 27 Aug 2014 (8 commits)

Committed by Igor Canadi

Committed by Igor Canadi

Committed by Igor Canadi

Committed by Radheshyam Balasundaram
Summary: In the DBImpl::Recover method, while loading memtables, also check whether the memtables are empty. Use this in DBImplReadonly to decide whether a memtable lookup is needed at all.
Test Plan: db_test; make check all
Reviewers: sdong, yhchiang, ljin, igor
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22281

Committed by Stanislau Hlebik
Summary: Add the property 'rocksdb.is-file-deletions-enable', which reflects the internal disable_delete_obsolete_files_ state.
Test Plan: make all check
Reviewers: sdong
Reviewed By: sdong
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22119
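
A usage sketch through the standard DB::GetProperty() call; the property string is spelled exactly as in the commit message, and the interpretation of the returned value is an assumption based on the summary.

```
#include <string>
#include "rocksdb/db.h"

void CheckFileDeletions(rocksdb::DB* db) {
  std::string value;
  // Returns true if the property name is recognized; the value string
  // mirrors the internal disable-file-deletions state (assumption drawn
  // from the commit summary).
  if (db->GetProperty("rocksdb.is-file-deletions-enable", &value)) {
    // e.g. log or act on `value` here
  }
}
```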

Committed by Lei Jin
Summary: Also fix HISTORY.md.
Test Plan: make all check
Reviewers: sdong, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22437

Committed by Igor Canadi
Summary: See https://github.com/facebook/rocksdb/issues/244#issuecomment-53372297. Also see this: https://github.com/facebook/rocksdb/blob/master/util/env_posix.cc#L1075
Test Plan: compiles
Reviewers: yhchiang, ljin, sdong
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22419

Committed by Lei Jin
Summary: The test was creating a BlockBasedTableOptions object in a loop without ever calling the matching destroy(), leaking one object per iteration.
Test Plan: valgrind --leak-check=full --show-reachable=yes ./c_test
Reviewers: sdong, igor
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22431
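
A hedged sketch of the leak pattern the diff fixed, using the public RocksDB C API (the loop body is illustrative): every create call inside the loop needs a matching destroy call before the next iteration.

```
#include "rocksdb/c.h"

void OptionsLoop() {
  for (int i = 0; i < 10; ++i) {
    rocksdb_block_based_table_options_t* table_options =
        rocksdb_block_based_table_options_create();
    // ... configure and use table_options ...
    // The missing call: without it, valgrind reports a leak per iteration.
    rocksdb_block_based_table_options_destroy(table_options);
  }
}
```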

- 26 Aug 2014 (5 commits)

Committed by Igor Canadi
Summary: Two things:
1. Use a hash-based index for the data column family.
2. Use Get() instead of an iterator Seek() when the DB is opened read-only.
Test Plan: added a read-only test in the unit test
Reviewers: yinwang
Reviewed By: yinwang
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22323

Committed by Lei Jin
Add ReadOptions.total_order_seek to allow total-order seeks on a block-based table when the hash index is enabled.
Summary: as title
Test Plan: table_test
Reviewers: igor, yhchiang, sdong
Reviewed By: sdong
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22239
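
A usage sketch of the new flag (function and variable names here are illustrative): with a prefix extractor and hash index configured, a default iterator only guarantees ordering within a prefix; setting total_order_seek requests a full-order scan instead.

```
#include <memory>
#include "rocksdb/db.h"
#include "rocksdb/options.h"

void FullOrderScan(rocksdb::DB* db) {
  rocksdb::ReadOptions read_options;
  read_options.total_order_seek = true;  // bypass the prefix hash index
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(read_options));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    // Keys arrive in full comparator order across all prefixes.
  }
}
```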

Committed by Lei Jin
Summary: Add a virtual function in the table factory that will print the table options.
Test Plan: make release
Reviewers: igor, yhchiang, sdong
Reviewed By: sdong
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22149

Committed by Lei Jin
Summary: as title
Test Plan: tested on my mac: make rocksdbjava; make jtest
Reviewers: sdong, igor, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D21963

Committed by Lei Jin
Summary: I will move the compression-related options to a separate diff, since this diff is already pretty lengthy. I guess I will also need to change the JNI bindings accordingly :(
Test Plan: make all check
Reviewers: yhchiang, igor, sdong
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D21915

- 24 Aug 2014 (2 commits)

Committed by Igor Canadi
Add missing include to use std::unique_ptr
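
The fix in a nutshell, as a minimal sketch: std::unique_ptr is declared in <memory>, so any file that names it must include that header itself rather than relying on transitive includes.

```
#include <memory>  // declares std::unique_ptr

std::unique_ptr<int> p(new int(42));
```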

Committed by Andrew Bonventre
This was causing issues when including this header from another file.

- 23 Aug 2014 (1 commit)

Committed by Igor Canadi
Summary: I am currently working on a project that uses RocksDB. While debugging some perf issues, I came across an interesting compaction concurrency issue. Namely, I had 15 idle threads and a good compaction to do, but CompactionPicker returned "Compaction nothing to do". Here's how the internal stats looked:

```
2014/08/22-08:08:04.551982 7fc7fc3f5700 ------- DUMPING STATS -------
2014/08/22-08:08:04.552000 7fc7fc3f5700 ** Compaction Stats [default] **
Level Files Size(MB) Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) RW-Amp W-Amp Rd(MB/s) Wr(MB/s) Rn(cnt) Rnp1(cnt) Wnp1(cnt) Wnew(cnt) Comp(sec) Comp(cnt) Avg(sec) Stall(sec) Stall(cnt) Avg(ms)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0    7/5   353  1.0  0.0  0.0  0.0   2.3  2.3  0.0  0.0   0.0   9.4   0    0    0   0  247  46  5.359  8.53  1  8526.25
L1    2/2    86  1.3  2.6  1.9  0.7   2.6  1.9  2.7  1.3  24.3  24.0  39   19   71  52  109  11  9.938  0.00  0     0.00
L2   26/0   833  1.3  5.7  1.7  4.0   5.2  1.2  6.3  3.0  15.6  14.2  47  112  147  35  373  44  8.468  0.00  0     0.00
L3   12/0   505  0.1  0.0  0.0  0.0   0.0  0.0  0.0  0.0   0.0   0.0   0    0    0   0    0   0  0.000  0.00  0     0.00
Sum  47/7  1778  0.0  8.3  3.6  4.6  10.0  5.4  8.1  4.4  11.6  14.1  86  131  218  87  728 101  7.212  8.53  1  8526.25
Int   0/0     0  0.0  2.4  0.8  1.6   2.7  1.2 11.5  6.1  12.0  13.6  20   43   63  20  203  23  8.845  0.00  0     0.00
Flush(GB): accumulative 2.266, interval 0.444
Stalls(secs): 0.000 level0_slowdown, 0.000 level0_numfiles, 8.526 memtable_compaction, 0.000 leveln_slowdown_soft, 0.000 leveln_slowdown_hard
Stalls(count): 0 level0_slowdown, 0 level0_numfiles, 1 memtable_compaction, 0 leveln_slowdown_soft, 0 leveln_slowdown_hard

** DB Stats **
Uptime(secs): 336.8 total, 60.4 interval
Cumulative writes: 61584000 writes, 6480589 batches, 9.5 writes per batch, 1.39 GB user ingest
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, 0.00 GB written
Interval writes: 11235257 writes, 1175050 batches, 9.6 writes per batch, 259.9 MB user ingest
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, 0.00 MB written
```

To see what happened, go here: https://github.com/facebook/rocksdb/blob/47b452cfcf9b1487d41f886a98bc0d6f95587e90/db/compaction_picker.cc#L430
* The for loop started with level 1, because it had the worst score.
* PickCompactionBySize on L429 returned nullptr because all files were being compacted.
* ExpandWhileOverlapping(c) returned true (because that's what it does when it gets nullptr!?).
* The for loop break-ed, never trying compactions for level 2 :( :(
This bug has been present at least since January. I have no idea how we didn't find it sooner.
Test Plan: Unit testing the compaction picker is hard. I tested this by running my service and observing L0->L1 and L2->L3 compactions in parallel. However, for the long term, I opened task #4968469. @yhchiang is currently refactoring CompactionPicker; hopefully the new version will be unit-testable ;)
Here's how my compactions look after the patch:

```
2014/08/22-08:50:02.166699 7f3400ffb700 ------- DUMPING STATS -------
2014/08/22-08:50:02.166722 7f3400ffb700 ** Compaction Stats [default] **
Level Files Size(MB) Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) RW-Amp W-Amp Rd(MB/s) Wr(MB/s) Rn(cnt) Rnp1(cnt) Wnp1(cnt) Wnew(cnt) Comp(sec) Comp(cnt) Avg(sec) Stall(sec) Stall(cnt) Avg(ms)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0    8/5    404  1.5   0.0  0.0   0.0   4.3   4.3   0.0  0.0   0.0   9.6    0    0    0    0   463  88   5.260  0.00  0  0.00
L1    2/2     60  0.9   4.8  3.9   0.8   4.7   3.9   2.4  1.2  23.9  23.6   80   23  131  108   204  19  10.747  0.00  0  0.00
L2   23/3    697  1.0  11.6  3.5   8.1  10.9   2.8   6.4  3.1  17.7  16.6   95  242  317   75   669  92   7.268  0.00  0  0.00
L3  58/14   2207  0.3   6.2  1.6   4.6   5.9   1.3   7.4  3.6  14.6  13.9   43  121  159   38   436  36  12.106  0.00  0  0.00
Sum 91/24   3368  0.0  22.5  9.1  13.5  25.8  12.4  11.2  6.0  13.0  14.9  218  386  607  221  1772 235   7.538  0.00  0  0.00
Int   0/0      0  0.0   3.2  0.9   2.3   3.6   1.3  15.3  8.0  12.4  13.7   24   66   89   23   266  27   9.838  0.00  0  0.00
Flush(GB): accumulative 4.336, interval 0.444
Stalls(secs): 0.000 level0_slowdown, 0.000 level0_numfiles, 0.000 memtable_compaction, 0.000 leveln_slowdown_soft, 0.000 leveln_slowdown_hard
Stalls(count): 0 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 0 leveln_slowdown_soft, 0 leveln_slowdown_hard

** DB Stats **
Uptime(secs): 577.7 total, 60.1 interval
Cumulative writes: 116960736 writes, 11966220 batches, 9.8 writes per batch, 2.64 GB user ingest
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, 0.00 GB written
Interval writes: 11643735 writes, 1206136 batches, 9.7 writes per batch, 269.2 MB user ingest
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, 0.00 MB written
```

Yay for concurrent L0->L1 and L2->L3 compactions!
Reviewers: sdong, yhchiang, ljin
Reviewed By: yhchiang
Subscribers: yhchiang, leveldb
Differential Revision: https://reviews.facebook.net/D22305
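
A self-contained toy reproducing the control flow described above (all names are illustrative, not the real CompactionPicker code): a helper that reports success when handed a nullptr makes the caller stop at a busy level instead of falling through to the next one.

```
#include <vector>

struct Compaction { int level; };

// Returns nullptr when every file at `level` is already being compacted.
Compaction* PickAtLevel(int level, const std::vector<bool>& level_busy) {
  return level_busy[level] ? nullptr : new Compaction{level};
}

// The culprit: reports success even for a null compaction.
bool ExpandWhileOverlapping(Compaction* c) {
  if (c == nullptr) return true;
  return true;  // real code may also shrink the input set or fail here
}

// Buggy picker: once the best-scoring level is busy, it returns nullptr
// ("nothing to do") instead of trying the remaining levels.
Compaction* PickCompaction(const std::vector<bool>& level_busy) {
  for (int level = 0; level < static_cast<int>(level_busy.size()); ++level) {
    Compaction* c = PickAtLevel(level, level_busy);
    if (ExpandWhileOverlapping(c)) {
      return c;  // bug path: c may be nullptr here
    }
    delete c;
  }
  return nullptr;
}
```

The fix described above amounts to treating a null pick as "try the next level" rather than as a successful result.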

- 22 Aug 2014 (2 commits)

Committed by Siying Dong
Fix compilation issue on OSX

Committed by Shao Yu Zhang

- 21 Aug 2014 (7 commits)

Committed by Radheshyam Balasundaram
Summary:
- Implement the Prepare method.
- Rewrite the performance tests in cuckoo_table_reader_test to write a new file only if one doesn't already exist.
- Add performance tests for batch lookup along with prefetching.
Test Plan: ./cuckoo_table_reader_test --enable_perf
Results (we get better results if we use an int64 comparator instead of a string comparator; TBD in future diffs):
With 100000000 items and hash table ratio 0.500000, number of hash functions used: 2.
  Time taken per op is 0.208us (4.8 Mqps) with batch size of 0
  Time taken per op is 0.182us (5.5 Mqps) with batch size of 10
  Time taken per op is 0.161us (6.2 Mqps) with batch size of 25
  Time taken per op is 0.161us (6.2 Mqps) with batch size of 50
  Time taken per op is 0.163us (6.1 Mqps) with batch size of 100
With 100000000 items and hash table ratio 0.600000, number of hash functions used: 3.
  Time taken per op is 0.252us (4.0 Mqps) with batch size of 0
  Time taken per op is 0.192us (5.2 Mqps) with batch size of 10
  Time taken per op is 0.195us (5.1 Mqps) with batch size of 25
  Time taken per op is 0.191us (5.2 Mqps) with batch size of 50
  Time taken per op is 0.194us (5.1 Mqps) with batch size of 100
With 100000000 items and hash table ratio 0.750000, number of hash functions used: 3.
  Time taken per op is 0.228us (4.4 Mqps) with batch size of 0
  Time taken per op is 0.185us (5.4 Mqps) with batch size of 10
  Time taken per op is 0.186us (5.4 Mqps) with batch size of 25
  Time taken per op is 0.189us (5.3 Mqps) with batch size of 50
  Time taken per op is 0.188us (5.3 Mqps) with batch size of 100
With 100000000 items and hash table ratio 0.900000, number of hash functions used: 3.
  Time taken per op is 0.325us (3.1 Mqps) with batch size of 0
  Time taken per op is 0.196us (5.1 Mqps) with batch size of 10
  Time taken per op is 0.199us (5.0 Mqps) with batch size of 25
  Time taken per op is 0.196us (5.1 Mqps) with batch size of 50
  Time taken per op is 0.209us (4.8 Mqps) with batch size of 100
Reviewers: sdong, yhchiang, igor, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22167

Committed by Yueh-Hsuan Chiang
Summary: Fix the error in c_test.c.
Test Plan: make c_test; ./c_test

Committed by Yueh-Hsuan Chiang
Summary: Add the missing implementation of SanitizeDBOptions in simple_table_db_test.cc.
Test Plan: make simple_table_db_test

Committed by Yueh-Hsuan Chiang
Summary: Currently, PlainTable requires mmap reads. When PlainTable is used but allow_mmap_reads is not set, RocksDB will fail during flush. This diff improves Options sanitization and adds MmapReadRequired() to TableFactory.
Test Plan: export ROCKSDB_TESTS=PlainTableOptionsSanitizeTest; make db_test -j32; ./db_test
Reviewers: sdong, ljin
Reviewed By: ljin
Subscribers: you, leveldb
Differential Revision: https://reviews.facebook.net/D21939
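
A usage sketch of the constraint being sanitized (the helper function is illustrative): PlainTable reads via mmap, so allow_mmap_reads must be set alongside the factory; the summary says this combination used to fail only later, at flush time.

```
#include "rocksdb/options.h"
#include "rocksdb/table.h"

rocksdb::Options MakePlainTableOptions() {
  rocksdb::Options options;
  options.table_factory.reset(rocksdb::NewPlainTableFactory());
  options.allow_mmap_reads = true;  // required by PlainTable
  return options;
}
```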

Committed by Jonah Cohen
Summary: ManifestDumpCommand::DoCommand was allocating a VersionSet and never freeing it.
Test Plan: make
Reviewers: igor
Reviewed By: igor
Differential Revision: https://reviews.facebook.net/D22221

Committed by sdong
Summary: A previous change unintentionally made DestroyDB() keep info logs under the DB directory. Revert the unintended behavior.
Test Plan: Add a unit test case to verify it.
Reviewers: ljin, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22209

Committed by Igor Canadi
Summary: We need to start compression at level 1, while OptimizeForLevelCompaction() only sets up RocksDB to start compressing at level 2. I also adjusted some other things.
Test Plan: compiles
Reviewers: yinwang
Reviewed By: yinwang
Differential Revision: https://reviews.facebook.net/D22203
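
A hedged sketch of the kind of tuning the summary describes, using the public compression_per_level option (the choice of Snappy and the helper function are illustrative): index i of the vector applies to level i, so starting compression at level 1 means leaving only index 0 uncompressed.

```
#include "rocksdb/options.h"

rocksdb::Options MakeCompressionOptions() {
  rocksdb::Options options;
  options.compression_per_level.resize(options.num_levels);
  for (int level = 0; level < options.num_levels; ++level) {
    // Level 0 stays uncompressed; compression starts at level 1.
    options.compression_per_level[level] =
        (level == 0) ? rocksdb::kNoCompression : rocksdb::kSnappyCompression;
  }
  return options;
}
```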

- 20 Aug 2014 (1 commit)

Committed by sdong
Summary: Make table_reader_bench cover all three table formats.
Test Plan: Run it using the three options.
Reviewers: radheshyamb, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22137

- 19 Aug 2014 (9 commits)

Committed by Igor Canadi

Committed by Igor Canadi
Summary: I was checking some functions in coding.h and coding.cc when I noticed these unused functions. Let's remove them.
Test Plan: compiles
Reviewers: sdong, ljin, yhchiang, dhruba
Reviewed By: dhruba
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D22077

Committed by Radheshyam Balasundaram
Summary: Add a num_column_families flag and support for column families in the DoWrite and ReadRandom methods. [Igor, please let me know if this approach sounds good. I shall add it to other methods too.]
Test Plan: Ran fillseq on 1M keys and 10 column families, then ran readrandom.
Reviewers: sdong, yhchiang, igor, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D21387

Committed by sdong
Summary: Add WriteBatchWithIndex so that a user can query data out of a WriteBatch, to support MongoDB's read-its-own-writes. WriteBatchWithIndex uses a skiplist to store the binary index; each index entry stores the offset of the corresponding entry in the write batch. When searching for a key, the key for an entry is recovered by reading the entry from the write batch at that offset. Also define a new iterator class for querying data out of WriteBatchWithIndex: a user can create an iterator over the write batch for one column family, seek to a key, and keep calling Next() to see subsequent entries. I will add more unit tests if people are OK with this API.
Test Plan: make all check; add unit tests.
Reviewers: yhchiang, igor, MarkCallaghan, ljin
Reviewed By: ljin
Subscribers: dhruba, leveldb, xjin
Differential Revision: https://reviews.facebook.net/D21381
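
A usage sketch of the API described above, against the default column family (treat the details as illustrative rather than definitive): write into the batch, then read your own writes back through the new iterator before the batch is ever committed to the DB.

```
#include <memory>
#include "rocksdb/utilities/write_batch_with_index.h"

void ReadYourOwnWrites() {
  rocksdb::WriteBatchWithIndex batch;
  batch.Put("key1", "value1");
  batch.Put("key2", "value2");

  // Iterate over the batch's index: seek to a key, then keep calling
  // Next() to walk subsequent entries, all without touching the DB.
  std::unique_ptr<rocksdb::WBWIIterator> iter(batch.NewIterator());
  for (iter->Seek("key1"); iter->Valid(); iter->Next()) {
    rocksdb::WriteEntry entry = iter->Entry();
    // entry.type / entry.key / entry.value describe the buffered write;
    // the Slices point into the underlying write batch.
  }
}
```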

Committed by sdong
Summary: N/A
Test Plan: N/A
Reviewers: yhchiang, ljin
Reviewed By: ljin
Subscribers: dhruba, leveldb, igor
Differential Revision: https://reviews.facebook.net/D22053

Committed by sdong
Summary: Bump up the version now that we've cut 3.4.
Test Plan: N/A
Reviewers: yhchiang, ljin
Reviewed By: ljin
Subscribers: leveldb, dhruba, igor
Differential Revision: https://reviews.facebook.net/D22047

Committed by Radheshyam Balasundaram
Summary: Add flags to use cuckoo table SST in db_bench.cc.
Test Plan: Ran the benchmark with fillseq and readrandom.
Reviewers: sdong, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D21729

Committed by Igor Canadi

Committed by Lei Jin
Summary: auto_roll_logger_test fails from time to time. I wasn't able to repro the issue, but by looking at the code, it seems the initial ctime_ value can be set right at a second boundary, so the log may still get rolled even when the interval is set to 1 second.

```
util/auto_roll_logger_test.cc:120: failed: 118 > 708
==19470== Syscall param msync(start) points to unaddressable byte(s)
==19470==    at 0x4E46CE0: __msync_nocancel (in /usr/local/fbcode/gcc-4.8.1-glibc-2.17/lib/libpthread-2.17.so)
==19470==    by 0x584EFB: access_mem (Ginit.c:137)
==19470==    by 0x5834E3: _ULx86_64_access_reg (libunwind_i.h:162)
==19470==    by 0x585601: apply_reg_state (Gparser.c:742)
==19470==    by 0x5866BE: _ULx86_64_dwarf_find_save_locs (Gparser.c:883)
==19470==    by 0x584550: _ULx86_64_dwarf_step (Gstep.c:34)
==19470==    by 0x583653: _ULx86_64_step (Gstep.c:71)
==19470==    by 0x583FD2: _ULx86_64_tdep_trace (Gtrace.c:217)
==19470==    by 0x5831C3: backtrace (backtrace.c:69)
```

Test Plan: ./auto_roll_logger_test
Reviewers: sdong, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D21951

- 18 Aug 2014 (1 commit)

Committed by Igor Canadi
Pass parsed user key to prefix extractor in V2 compaction