- 20 Dec 2012, 1 commit
-
-
Committed by Dhruba Borthakur
Summary: Leveldb has an API OpenForReadOnly() that opens the database in read-only mode. This call had an option to not process the transaction log. This patch removes that option and always processes all transactions that had been committed. It is done in such a way that it does not create or write to any new files in the process, so the invariant of "no writes" to the leveldb data directory still holds. This enhancement allows multiple threads to open the same database in read-only mode and see all transactions that were committed right up to the OpenForReadOnly call. I changed the public API to match the new semantics because there are no users currently using this API.
Test Plan: make clean check
Reviewers: sheki
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7479
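The read-only open described above can be sketched as a replay of committed log records into an in-memory table: nothing is written back, so any number of concurrent readers is safe. This is an illustrative Python model of the idea, not the RocksDB API.

```python
def open_for_read_only(log_records):
    """Toy model of OpenForReadOnly: replay every committed log record
    into an in-memory table. No file in the db directory is created or
    modified, so multiple readers can do this concurrently."""
    memtable = {}
    for op, key, value in log_records:
        if op == "put":
            memtable[key] = value
        elif op == "delete":
            memtable.pop(key, None)
    return memtable  # read-only view covering all committed transactions

# every transaction committed before the open is visible in the view
log = [("put", "a", 1), ("put", "b", 2), ("delete", "a", None)]
view = open_for_read_only(log)
```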
-
- 19 Dec 2012, 1 commit
-
-
Committed by Zheng Shao
Summary: This command accepts key-value pairs from stdin in the same format as the "ldb dump" command. This allows us to try out different compression algorithms/block sizes easily.
Test Plan: dump, load, dump, verify the data is the same.
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7443
-
- 18 Dec 2012, 4 commits
-
-
Committed by Abhishek Kona
Test Plan: it compiles and deploys.
Reviewers: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7341
-
Committed by Dhruba Borthakur
Summary:
1. The OpenForReadOnly() call should not lock the db. This is useful so that multiple processes can open the same database concurrently for reading.
2. GetUpdatesSince should not error out if the archive directory does not exist.
3. A new constructor for WriteBatch that takes a serialized string as a parameter.
Test Plan: make clean check
Reviewers: sheki
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7449
-
Committed by Kosie van der Merwe
Summary: Added kMetaDatabase for meta-databases in db/filename.h along with supporting functions. Fixed the switch in DBImpl so that it also handles kMetaDatabase. Fixed DestroyDB() so that it can handle destroying meta-databases.
Test Plan: make check
Reviewers: sheki, emayanke, vamsi, dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D7245
-
Committed by Abhishek Kona
Summary: C tests would sometimes fail because DestroyDB would return a failure Status when deleting an archival directory that was never created (WAL_ttl_seconds = 0). Fix: ignore the Status returned when deleting the archival directory.
Test Plan: make check
Reviewers: dhruba, emayanke
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7395
-
- 17 Dec 2012, 3 commits
-
-
Committed by Zheng Shao
Summary: The old code was omitting the leading 0 if the char value is less than 16.
Test Plan: Tried the following program:
  int main() {
    unsigned char c = 1;
    printf("%X\n", c);
    printf("%02X\n", c);
    return 0;
  }
The output is:
  1
  01
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7437
-
Committed by Zheng Shao
Summary: This allows us to use ldb to run more experiments, such as block_size changes.
Test Plan: ran it by hand.
Reviewers: dhruba, sheki, emayanke
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7431
-
Committed by Zheng Shao
Summary: Without this option, manifest_dump does not print binary keys for files in a human-readable way.
Test Plan:
  ./manifest_dump --hex=1 --verbose=0 --file=/data/users/zshao/fdb_comparison/leveldb/fbobj.apprequest-0_0_original/MANIFEST-000002
  manifest_file_number 589 next_file_number 590 last_sequence 2311567 log_number 543 prev_log_number 0
  --- level 0 --- version# 0 ---
  532:1300357['0000455BABE20000' @ 2183973 : 1 .. 'FFFCA5D7ADE20000' @ 2184254 : 1]
  536:1308170['000198C75CE30000' @ 2203313 : 1 .. 'FFFCF94A79E30000' @ 2206463 : 1]
  542:1321644['0002931AA5E50000' @ 2267055 : 1 .. 'FFF77B31C5E50000' @ 2270754 : 1]
  544:1286390['000410A309E60000' @ 2278592 : 1 .. 'FFFE470A73E60000' @ 2289221 : 1]
  538:1298778['0006BCF4D8E30000' @ 2217050 : 1 .. 'FFFD77DAF7E30000' @ 2220489 : 1]
  540:1282353['00090D5356E40000' @ 2231156 : 1 .. 'FFFFF4625CE40000' @ 2231969 : 1]
  --- level 1 --- version# 0 ---
  510:2112325['000007F9C2D40000' @ 1782099 : 1 .. '146F5B67B8D80000' @ 1905458 : 1]
  511:2121742['146F8A3023D60000' @ 1824388 : 1 .. '28BC8FBB9CD40000' @ 1777993 : 1]
  512:801631['28BCD396F1DE0000' @ 2080191 : 1 .. '3082DBE9ADDB0000' @ 1989927 : 1]
Reviewers: dhruba, sheki, emayanke
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7425
-
- 15 Dec 2012, 1 commit
-
-
Committed by Abhishek Kona
Summary: Old versions of linters use "lint_engine" instead of "lint.engine". Some bookkeeping in .gitignore.
Reviewers: abhishekk
-
- 13 Dec 2012, 3 commits
-
-
Committed by Abhishek Kona
Summary: WriteBatch is now used by the GetUpdatesSince API. This API is external and will be used by the rocks server. The rocks server and others will need to know the sequence number in the WriteBatch. This public method allows for that.
Test Plan: make all check
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7293
-
Committed by Dhruba Borthakur
Summary: Expose the serialized string that represents a WriteBatch. This is helpful for replicating a WriteBatch operation from one machine to another.
Test Plan: make clean check
Reviewers: sheki
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7317
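The replication idea behind exposing the serialized form can be sketched with a toy batch: serialize on one machine, ship the bytes, reconstruct and replay on another. The class, record format, and method names below are illustrative inventions, not the real WriteBatch encoding.

```python
import struct

class MiniWriteBatch:
    """Toy write batch: operations are appended to a byte string so the
    whole batch can be shipped elsewhere and replayed there."""

    PUT, DELETE = 0, 1

    def __init__(self, data=b""):
        self.data = bytearray(data)  # start from a serialized batch, or empty

    def _append(self, blob):
        self.data += struct.pack("<I", len(blob)) + blob

    def put(self, key, value):
        self.data.append(self.PUT)
        self._append(key)
        self._append(value)

    def delete(self, key):
        self.data.append(self.DELETE)
        self._append(key)

    def apply_to(self, store):
        i = 0
        while i < len(self.data):
            tag = self.data[i]; i += 1
            klen = struct.unpack_from("<I", self.data, i)[0]; i += 4
            key = bytes(self.data[i:i + klen]); i += klen
            if tag == self.PUT:
                vlen = struct.unpack_from("<I", self.data, i)[0]; i += 4
                store[key] = bytes(self.data[i:i + vlen]); i += vlen
            else:
                store.pop(key, None)

# "replicate": serialize on one machine, reconstruct and replay on another
src = MiniWriteBatch()
src.put(b"k1", b"v1")
src.delete(b"k2")
wire = bytes(src.data)            # the exposed serialized string
replica = {}
MiniWriteBatch(wire).apply_to(replica)
```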
-
Committed by Abhishek Kona
Summary: Debugged and ported changes from the open-source GitHub repo to our repo. Wrote a script to easily build the Java library; future compiles of the Java lib should just run this script.
Test Plan: it compiles.
Reviewers: dhruba, leveldb
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D7323
-
- 12 Dec 2012, 1 commit
-
-
Committed by Abhishek Kona
Fix bug in binary search for files containing a seq no. and delete archived log files during DestroyDB.
Summary:
* Fixed an implementation bug in the binary search introduced in https://reviews.facebook.net/D7119.
* The binary search is also overflow-safe now.
* Delete archive log files and the archive dir during DestroyDB.
Test Plan: make check
Reviewers: dhruba
CC: kosievdmerwe, emayanke
Differential Revision: https://reviews.facebook.net/D7263
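An overflow-safe binary search avoids computing `(lo + hi) / 2` directly, since in C++ that sum can overflow for large indices; the usual pattern is `lo + (hi - lo) / 2`. A minimal sketch of finding the file containing a sequence number, under the assumption that files are described by their first sequence numbers (illustrative, not the actual RocksDB code):

```python
def find_file(start_seqs, target):
    """Return the index of the last file whose first sequence number is
    <= target, or -1 if target precedes every file.
    start_seqs: first sequence number of each log file, sorted ascending."""
    lo, hi = 0, len(start_seqs) - 1
    ans = -1
    while lo <= hi:
        mid = lo + (hi - lo) // 2  # overflow-safe midpoint (matters in C++)
        if start_seqs[mid] <= target:
            ans = mid              # candidate; look for a later one
            lo = mid + 1
        else:
            hi = mid - 1
    return ans
```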
-
- 11 Dec 2012, 3 commits
-
-
Committed by Dhruba Borthakur
Summary: Implement an interface to retrieve the most current transaction id from the database.
Test Plan: Added unit test.
Reviewers: sheki
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7269
-
Committed by sheki
Summary: Porting various options, mostly related to multi-threaded compaction, to Java.
Test Plan: mvn test. No clear plan on how else to test.
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb, emayanke
Differential Revision: https://reviews.facebook.net/D7221
-
Committed by Abhishek Kona
Summary: filename.h has functions that do similar things. Moving code out of db_impl.cc.
Test Plan: make check
Reviewers: dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D7251
-
- 10 Dec 2012, 1 commit
-
-
Committed by Dhruba Borthakur
Summary: Suppose you submit 100 background tasks one after another. The first enqueued task finds that the queue is empty and wakes up one worker thread. Now suppose the remaining 99 work items are enqueued; they do not wake up any worker threads because the queue is already non-empty. This causes a situation where there are 99 tasks in the task queue but only one worker thread is processing a task while the remaining worker threads are waiting. The fix is to always wake up one worker thread when enqueuing a task. I also added a check to count the number of elements in the queue to help in debugging.
Test Plan: make clean check
Reviewers: chip
Reviewed By: chip
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7203
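The fix described above is the classic "signal on every enqueue" pattern for a condition-variable task queue. A minimal, self-contained sketch (the class and its shape are illustrative, not the actual RocksDB thread pool):

```python
import threading
from collections import deque

class Pool:
    """Minimal task queue illustrating the fix: notify a worker on EVERY
    enqueue, not only when the queue goes from empty to non-empty."""

    def __init__(self, num_workers):
        self.cv = threading.Condition()
        self.queue = deque()
        self.done = False
        self.workers = [threading.Thread(target=self._run)
                        for _ in range(num_workers)]
        for w in self.workers:
            w.start()

    def submit(self, task):
        with self.cv:
            self.queue.append(task)
            self.cv.notify()  # the fix: wake one worker unconditionally

    def shutdown(self):
        with self.cv:
            self.done = True
            self.cv.notify_all()
        for w in self.workers:
            w.join()

    def _run(self):
        while True:
            with self.cv:
                while not self.queue and not self.done:
                    self.cv.wait()
                if self.queue:
                    task = self.queue.popleft()
                else:
                    return  # done and drained
            task()  # run outside the lock

results = []
pool = Pool(4)
for _ in range(100):
    pool.submit(lambda: results.append(1))
pool.shutdown()
```

With the buggy scheme (notify only when the queue was empty), all but one worker could sleep through a burst of submissions; notifying unconditionally keeps every idle worker eligible to pick up work.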
-
- 08 Dec 2012, 4 commits
-
-
Committed by Zheng Shao
Summary: Added the following two options:
  [--bloom_bits=<int, e.g. 14>]
  [--compression_type=<no|snappy|zlib|bzip2>]
These options will be used when ldb opens the leveldb database.
Test Plan: Tried by hand for both success and failure cases. We do need a test framework.
Reviewers: dhruba, emayanke, sheki
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7197
-
Committed by Abhishek Kona
Summary: How it works:
* GetUpdatesSince takes a SequenceNumber.
* The LogFile with the first SequenceNumber nearest to and less than the requested sequence number is found.
* Seek in the log file until the requested sequence number is found.
* Return an iterator that contains the logic to return records one by one.
Test Plan:
* Test case included to check the good code path.
* Will update with more test cases.
* Feedback required on test cases.
Reviewers: dhruba, emayanke
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7119
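The lookup-then-scan flow can be sketched with a toy model where log files are just (start_sequence, records) pairs; the function name and data shapes are illustrative assumptions, not the RocksDB API:

```python
import bisect

def get_updates_since(log_files, seq):
    """log_files: list of (start_seq, [(seq, record), ...]) sorted by
    start_seq. Yield every record whose sequence number is >= seq."""
    starts = [f[0] for f in log_files]
    # the file with the largest start_seq that is <= the requested seq
    idx = bisect.bisect_right(starts, seq) - 1
    if idx < 0:
        idx = 0  # requested seq predates all files; start at the oldest
    for _, records in log_files[idx:]:
        for s, rec in records:
            if s >= seq:  # "seek" within the file
                yield rec

logs = [(1, [(1, "a"), (2, "b")]), (3, [(3, "c"), (4, "d")])]
updates = list(get_updates_since(logs, 2))
```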
-
Committed by Kosie van der Merwe
Summary: Added 1 to indices where I shouldn't have, so overran the array.
Test Plan: make check
Reviewers: sheki, emayanke, vamsi, dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D7227
-
Committed by Kosie van der Merwe
Summary: Added BitStreamPutInt() and BitStreamGetInt(), which take a stream of chars and can write integers of arbitrary bit sizes to that stream at arbitrary positions. There are also convenience versions of these functions that take std::strings and leveldb::Slices.
Test Plan: make check
Reviewers: sheki, vamsi, dhruba, emayanke
Reviewed By: vamsi
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7071
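A put/get pair over a byte buffer at arbitrary bit positions can be sketched as follows. This is a toy model of the idea; the bit ordering (LSB-first within each byte) is an assumption, not necessarily what the actual BitStreamPutInt()/BitStreamGetInt() use.

```python
def bitstream_put_int(buf, bit_offset, num_bits, value):
    """Write the low num_bits of value into bytearray buf, starting at
    bit position bit_offset (LSB-first within each byte, by assumption)."""
    for i in range(num_bits):
        bit = (value >> i) & 1
        byte, off = divmod(bit_offset + i, 8)
        if bit:
            buf[byte] |= (1 << off)
        else:
            buf[byte] &= ~(1 << off)

def bitstream_get_int(buf, bit_offset, num_bits):
    """Read num_bits starting at bit_offset and return them as an int."""
    value = 0
    for i in range(num_bits):
        byte, off = divmod(bit_offset + i, 8)
        if buf[byte] & (1 << off):
            value |= (1 << i)
    return value

buf = bytearray(4)
bitstream_put_int(buf, 3, 5, 0b10110)   # a 5-bit value at bit position 3
bitstream_put_int(buf, 11, 7, 42)       # a 7-bit value at another position
```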
-
- 05 Dec 2012, 1 commit
-
-
Committed by Dhruba Borthakur
Summary: A compaction is picked based on its score. It is useful to print the compaction score in the LOG because it aids in debugging. If one looks at the logs, one can find out why one compaction was preferred over another.
Test Plan: make clean check
Differential Revision: https://reviews.facebook.net/D7137
-
- 30 Nov 2012, 1 commit
-
-
Committed by Dhruba Borthakur
Summary: rocksdb README file.
-
- 29 Nov 2012, 5 commits
-
-
Committed by sheki
Summary: Create a directory "archive" in the DB directory. During DeleteObsoleteFiles, move the WAL files (*.log) to the archive directory instead of deleting them.
Test Plan: Created a DB using db_bench. Reopened it. Checked that the files move.
Reviewers: dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D6975
-
Committed by Abhishek Kona
Summary: Scripted removal of all trailing spaces and converted all tabs to spaces. Also fixed other lint errors. All lint errors from this point on should be taken seriously.
Test Plan: make all check
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7059
-
Committed by Dhruba Borthakur
-
Committed by Dhruba Borthakur
Summary: LevelDB should delete almost-new keys when a long-open snapshot exists. The previous behavior was to keep all versions that were created after the oldest open snapshot. This can lead to database size bloat for high-update workloads when there are long-open snapshots, and a long-open snapshot will be used for logical backup. By "almost new" I mean that the key was updated more than once after the oldest snapshot. If there are two snapshots with sequence numbers s1 and s2 (s1 < s2), and we find two instances of the same key k1 that lie entirely within s1 and s2 (i.e. s1 < k1 < s2), then the earlier version of k1 can be safely deleted because that version is not visible in any snapshot.
Test Plan: unit test attached; make clean check
Differential Revision: https://reviews.facebook.net/D6999
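The rule above can be modeled as a predicate over sequence numbers: a version of a key is droppable when the next-newer version of the same key falls in the same snapshot interval, because then no snapshot can observe the older one. This is an illustrative sketch of the invariant, not the compaction code itself.

```python
import bisect

def droppable_versions(version_seqs, snapshot_seqs):
    """version_seqs: sequence numbers of versions of ONE key, ascending.
    snapshot_seqs: sequence numbers of open snapshots, ascending.
    A snapshot s sees the newest version with seq <= s, so if two
    consecutive versions land between the same pair of snapshots, the
    older one is invisible to every snapshot and can be dropped."""
    def interval(seq):
        # index of the snapshot interval that seq falls into
        return bisect.bisect_left(snapshot_seqs, seq)
    drop = []
    for older, newer in zip(version_seqs, version_seqs[1:]):
        if interval(older) == interval(newer):
            drop.append(older)
    return drop

# snapshots at s1=100 and s2=200; key updated at 110, 150, and 250:
# 110 and 150 both lie strictly between s1 and s2, so 110 is droppable.
drop = droppable_versions([110, 150, 250], [100, 200])
```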
-
Committed by Abhishek Kona
Summary: Added FBCODE-like linting support to our codebase.
Test Plan: arc lint lints the code.
Reviewers: dhruba
Reviewed By: dhruba
CC: emayanke, leveldb
Differential Revision: https://reviews.facebook.net/D7041
-
- 28 Nov 2012, 1 commit
-
-
Committed by Dhruba Borthakur
Summary: Print out the status at the end of a compaction run. This helps in debugging.
Test Plan: make clean check
Reviewers: sheki
Reviewed By: sheki
Differential Revision: https://reviews.facebook.net/D7035
-
- 27 Nov 2012, 5 commits
-
-
Committed by sheki
Test Plan: make check
Reviewers: dhruba
Reviewed By: dhruba
CC: emayanke
Differential Revision: https://reviews.facebook.net/D6993
-
Committed by Dhruba Borthakur
Summary: When we expand the range of keys for a level-0 compaction, we need to invoke ParentFilesInCompaction() only once for the entire range of keys being compacted. We were invoking it for each file being compacted, which triggers an assertion because each file's range is contiguous but non-overlapping. I renamed ParentFilesInCompaction to ParentRangeInCompaction to adequately represent that it is the range of keys, and not individual files, that we compact in a single compaction run. Here is the assertion fixed by this patch:
  db_test: db/version_set.cc:585: void leveldb::Version::ExtendOverlappingInputs(int, const leveldb::Slice&, const leveldb::Slice&, std::vector<leveldb::FileMetaData*, std::allocator<leveldb::FileMetaData*> >*, int): Assertion `user_cmp->Compare(flimit, user_begin) >= 0' failed.
Test Plan: make clean check OPT=-g
Reviewers: sheki
Reviewed By: sheki
CC: MarkCallaghan, emayanke, leveldb
Differential Revision: https://reviews.facebook.net/D6963
-
Committed by Dhruba Borthakur
-
Committed by Dhruba Borthakur
Summary: On fast filesystems (e.g. /dev/shm and ext4), the flushing of the memstore to disk was quick, and the background compaction thread was not getting scheduled fast enough to delete obsolete files before the db was closed. This caused the repair method to pick up files that were not part of the db, and the unit test was failing. The fix is to enhance the unit test to run a compaction before closing the database, so that all files that are not part of the database are truly deleted from the filesystem.
Test Plan: make c_test; ./c_test
Reviewers: chip, emayanke, sheki
Reviewed By: chip
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6915
-
Committed by Chip Turner
Summary: It would appear our unit tests make use of code from ldb_cmd and don't always require a valid database handle. D6855 was not aware db_ could sometimes be NULL for such commands, and so it broke reduce_levels_test. This moves the check elsewhere to (at least) fix the 'ldb dump' case of segfaulting when it couldn't open a database.
Test Plan: make check
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6903
-
- 22 Nov 2012, 2 commits
-
-
Committed by Chip Turner
Summary: Link statically against snappy, using the gvfs one for Facebook environments and the bundled one otherwise. In addition, fix a few minor segfaults in ldb when it couldn't open the database, and update .gitignore to include a few other build artifacts.
Test Plan: make check
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6855
-
Committed by Dhruba Borthakur
Support taking a configurable number of files from the same level to compact in a single compaction run.
Summary: The compaction process takes some files from level K and merges them into level K+1. The number of files it picks from level K was capped such that the total amount of data picked does not exceed the max file size of that level. This essentially meant that only one file from level K is picked for a single compaction. For bulk loads, we would like to take many files from level K and compact them in a single compaction run. This patch introduces an option called 'source_compaction_factor' (similar to expanded_compaction_factor). It is a multiplier applied to the max file size of that level to arrive at the limit used to throttle the number of source files from level K. For bulk loads, set source_compaction_factor to a very high number so that multiple files from the same level are picked for compaction in a single compaction. The default value of source_compaction_factor is 1, to keep backward compatibility with existing compaction semantics.
Test Plan: make clean check
Reviewers: emayanke, sheki
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6867
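The throttle described above can be sketched as a greedy picker: take files from level K until the running total would exceed `max_file_size * source_compaction_factor`. Function and parameter names are illustrative, not the actual compaction picker.

```python
def pick_source_files(file_sizes, max_file_size, source_compaction_factor=1):
    """Toy model of the source-file throttle: greedily take files from
    level K while the total stays within the multiplied size limit.
    Always takes at least one file so a compaction can make progress."""
    limit = max_file_size * source_compaction_factor
    picked, total = [], 0
    for size in file_sizes:
        if picked and total + size > limit:
            break
        picked.append(size)
        total += size
    return picked

# factor 1 reproduces the old behavior (one near-max-size file per run);
# a large factor lets a bulk load compact many files at once
one = pick_source_files([90, 80, 70], max_file_size=100)
many = pick_source_files([90, 80, 70], max_file_size=100,
                         source_compaction_factor=10)
```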
-
- 21 Nov 2012, 2 commits
-
-
Committed by Dhruba Borthakur
Summary: This option is needed for fast bulk uploads. The goal is to load all the data into files in L0 without any interference from background compactions.
Test Plan: make clean check
Reviewers: sheki
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6849
-
Committed by Dhruba Borthakur
Summary: The method Finalize() recomputes the compaction score of each level and then sorts these scores from largest to smallest. The idea is that the level with the largest compaction score will be a better candidate for compaction. There are usually very few levels, so bubble-sort code was used to sort these compaction scores. There was a bug in the sorting code that skipped looking at the score for level n-1. This meant that even if the compaction score of level n-1 was large, it would not be picked for compaction. This patch fixes the bug and also introduces asserts in the code to detect any possible inconsistencies caused by future bugs. This bug existed in the very first change that introduced multi-threaded compaction to the leveldb code. That version was committed on Oct 19 via https://github.com/facebook/leveldb/commit/1ca0584345af85d2dccc434f451218119626d36e
Test Plan: make clean check OPT=-g
Reviewers: emayanke, sheki, MarkCallaghan
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6837
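The shape of the bug can be illustrated with a bubble sort whose inner loop stops one element early, so the last level's score is never compared. This is a guess at the off-by-one's form, sketched in Python; the real code sorts C++ arrays inside VersionSet::Finalize().

```python
def sort_scores_buggy(scores):
    """Bubble sort descending, with an off-by-one: the inner loop stops
    one pair early, so the final element is never examined and a large
    score at the last level stays stuck out of place."""
    s = list(scores)
    n = len(s)
    for i in range(n):
        for j in range(n - i - 2):      # bug: should be n - i - 1
            if s[j] < s[j + 1]:
                s[j], s[j + 1] = s[j + 1], s[j]
    return s

def sort_scores_fixed(scores):
    """Same sort with the correct bound: every adjacent pair is compared."""
    s = list(scores)
    n = len(s)
    for i in range(n):
        for j in range(n - i - 1):
            if s[j] < s[j + 1]:
                s[j], s[j + 1] = s[j + 1], s[j]
    return s

scores = [0.5, 1.2, 3.0, 9.9]   # the highest score sits at the last level
buggy = sort_scores_buggy(scores)
fixed = sort_scores_fixed(scores)
```

With the buggy bound, 9.9 never moves to the front, so the level most in need of compaction is never picked.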
-
- 20 Nov 2012, 1 commit
-
-
Committed by Dhruba Borthakur
Summary: make check OPT=-g fails with the following assert:
  ==== Test DBTest.ApproximateSizes
  db_test: db/version_set.cc:765: void leveldb::VersionSet::Builder::CheckConsistencyForDeletes(leveldb::VersionEdit*, int, int): Assertion `found' failed.
The assertion was that file #7, which was being deleted, did not pre-exist, but it actually did pre-exist, as the manifest dump below shows. The bug was that we did not check for file existence at the same level.
  *************************Edit[0] = VersionEdit { Comparator: leveldb.BytewiseComparator }
  *************************Edit[1] = VersionEdit { LogNumber: 8 PrevLogNumber: 0 NextFile: 9 LastSeq: 80 AddFile: 0 7 8005319 'key000000' @ 1 : 1 .. 'key000079' @ 80 : 1 }
  *************************Edit[2] = VersionEdit { LogNumber: 8 PrevLogNumber: 0 NextFile: 13 LastSeq: 80 CompactPointer: 0 'key000079' @ 80 : 1 DeleteFile: 0 7 AddFile: 1 9 2101425 'key000000' @ 1 : 1 .. 'key000020' @ 21 : 1 AddFile: 1 10 2101425 'key000021' @ 22 : 1 .. 'key000041' @ 42 : 1 AddFile: 1 11 2101425 'key000042' @ 43 : 1 .. 'key000062' @ 63 : 1 AddFile: 1 12 1701165 'key000063' @ 64 : 1 .. 'key000079' @ 80 : 1 }
-