- 28 Mar 2013 (1 commit)

Committed by Abhishek Kona
Summary:
Earlier, the Statistics object was a raw pointer. This meant the user had to clean up the Statistics object after creating the database. In most use cases the database is created in a function and the statistics pointer goes out of scope, so the Statistics object would never be deleted. Now a shared_ptr manages it. Want this in before the next release.

Test Plan: make all check.
Reviewers: dhruba, emayanke
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9735
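
A minimal sketch of the ownership change described above; the Options::statistics field and CreateDBStatistics() factory are assumed names for illustration, not necessarily this commit's exact API.

```cpp
// Sketch of the ownership model this commit moves to (assumed names:
// Options::statistics and CreateDBStatistics() are illustrative).
#include <memory>

struct Statistics {
  virtual ~Statistics() {}
};

struct Options {
  // Before: Statistics* statistics;  // caller had to delete it manually.
  std::shared_ptr<Statistics> statistics;  // After: freed automatically.
};

std::shared_ptr<Statistics> CreateDBStatistics() {
  return std::make_shared<Statistics>();
}

void OpenDatabase() {
  Options options;
  options.statistics = CreateDBStatistics();
  // ... open the DB with options; when both the caller and the DB drop
  // their references, the Statistics object is deleted automatically.
}
```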

- 21 Mar 2013 (1 commit)

Committed by Dhruba Borthakur
Summary:
This patch allows an application to specify, per database, whether to use buffered IO, reads via mmap, and writes via mmap. Earlier, a global static variable was used to configure this functionality. The default settings remain the same (and are backward compatible):

1. use buffered IO
2. do not use mmaps for reads
3. use mmap for writes
4. use readaheads for reads needed for compaction

I also added a parameter to db_bench to be able to explicitly specify whether to do readaheads for compactions or not.

Test Plan: make check
Reviewers: sheki, heyongqiang, MarkCallaghan
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9429
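
A sketch of per-database IO configuration replacing a global static, under the commit's defaults; the field names below are assumptions for illustration, not the commit's exact API.

```cpp
// Illustrative per-database IO options; field names are assumed, not the
// commit's actual identifiers.
struct Options {
  bool use_os_buffer = true;              // 1. use buffered IO
  bool use_mmap_reads = false;            // 2. do not mmap for reads
  bool use_mmap_writes = true;            // 3. mmap for writes
  bool use_readahead_compactions = true;  // 4. readahead for compaction reads
};

int main() {
  Options db_a;                // defaults, backward compatible
  Options db_b;
  db_b.use_mmap_reads = true;  // a second DB in the same process can differ
  return 0;
}
```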

- 20 Mar 2013 (2 commits)

Committed by Mayank Agarwal
Summary:
The Makefile had options to ignore sign-comparison and unused-parameter warnings, which should not be there. Also fixed the specific errors in the codebase.

Test Plan: make
Reviewers: chip, dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9531

Committed by Mark Callaghan
Summary:
Add the --benchmarks=levelstats option to report per-level stats (#files, #bytes).

Change the readwhilewriting test to report response time for writes but exclude them from the stats merged across all threads.

Prevent "NaN" in stats output by preventing division by 0.

Remove an "o" file I committed by mistake.

Test Plan: make check
Reviewers: dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D9513

- 15 Mar 2013 (1 commit)

Committed by Mark Callaghan
Summary:
Add --benchmarks=updaterandom for read-modify-write workloads. This differs from --benchmarks=readrandomwriterandom in two ways. First, an "operation" is the combined time to do the read and the write rather than treating them as two ops. Second, the same key is used for the read and the write.

Change RandomGenerator to support rows larger than 1M. It was using "assert" to fail, and assert is compiled away when -DNDEBUG is used.

Add more options to db_bench:
--duration - sets the number of seconds for tests to run. When not set, the operation count continues to be the limit. This is used by random operation tests.
--use_snapshot - when set, GetSnapshot() is called prior to each random read. This is to measure the overhead from using snapshots.
--get_approx - when set, GetApproximateSizes() is called prior to each random read. This is to measure the overhead for a query optimizer.

Test Plan: run db_bench
Reviewers: dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D9267
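
A sketch of the read-modify-write pattern that updaterandom times as a single operation; the DB type below is a stand-in for illustration, not db_bench's actual code.

```cpp
// One updaterandom "operation" covers both the Get and the Put on the same
// key. The in-memory DB here is a stand-in, not db_bench's interface.
#include <map>
#include <string>

struct DB {
  std::map<std::string, std::string> kv;
  bool Get(const std::string& k, std::string* v) {
    auto it = kv.find(k);
    if (it == kv.end()) return false;
    *v = it->second;
    return true;
  }
  void Put(const std::string& k, const std::string& v) { kv[k] = v; }
};

void UpdateRandomOnce(DB* db, const std::string& key) {
  std::string value;
  db->Get(key, &value);  // read...
  value.append("x");     // ...modify...
  db->Put(key, value);   // ...write: all timed as one operation
}
```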

- 04 Mar 2013 (1 commit)

Committed by Mark Callaghan
Summary:
This adds the rate_delay_limit_milliseconds option to make the delay configurable in MakeRoomForWrite when the max compaction score is too high. This delay is called the Ln slowdown. This change also counts the Ln slowdown per level, to make it possible to see where the stalls occur.

From IO-bound performance testing, the Level N stalls occur:
* with compression - at the largest uncompressed level. This makes sense because compaction for compressed levels is much slower. When Lx is uncompressed and Lx+1 is compressed, files pile up at Lx because the (Lx,Lx+1) -> Lx+1 compaction process is the first to be slowed by compression.
* without compression - at level 1

Task ID: #1832108

Test Plan: run with real data, added test
Reviewers: dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D9045

- 02 Mar 2013 (1 commit)

Committed by bil
Summary:
1. The default value for key size is still 16.
2. Enable the ability to set the key size via the command line: --key_size=

Test Plan: Build and run db_bench, passing some value via the command line; verify it works correctly.
Reviewers: sheki
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8943

- 26 Feb 2013 (1 commit)

Committed by Abhishek Kona
Summary:
Use only the counter mechanism. Do away with incNumFileOpens, incNumFileClose, incNumFileErrors.
s/NULL/nullptr/g in db/table_cache.cc

Test Plan: make clean check
Reviewers: dhruba, heyongqiang, emayanke
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8841

- 22 Feb 2013 (2 commits)

Committed by Vamsi Ponnekanti
Summary:
I had added FLAGS_numdistinct and FLAGS_deletepercent for randomwithverify but forgot to add command-line parsing for those flags.

Test Plan:
[nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_bench --benchmarks=randomwithverify --numdistinct=500
LevelDB: version 1.5
Date: Thu Feb 21 10:34:40 2013
CPU: 24 * Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
CPUCache: 12288 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
RawSize: 110.6 MB (estimated)
FileSize: 62.9 MB (estimated)
Compression: snappy
WARNING: Assertions are enabled; benchmarks unnecessarily slow
------------------------------------------------
Created bg thread 0x7fbf90bff700
randomwithverify : 4.693 micros/op 213098 ops/sec; ( get:900000 put:80000 del:20000 total:1000000 found:714556)

[nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_bench --benchmarks=randomwithverify --deletepercent=5
LevelDB: version 1.5
Date: Thu Feb 21 10:35:03 2013
CPU: 24 * Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
CPUCache: 12288 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
RawSize: 110.6 MB (estimated)
FileSize: 62.9 MB (estimated)
Compression: snappy
WARNING: Assertions are enabled; benchmarks unnecessarily slow
------------------------------------------------
Created bg thread 0x7fe14dfff700
randomwithverify : 4.883 micros/op 204798 ops/sec; ( get:900000 put:50000 del:50000 total:1000000 found:443847)

[nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_bench --benchmarks=randomwithverify --deletepercent=5 --numdistinct=500
LevelDB: version 1.5
Date: Thu Feb 21 10:36:18 2013
CPU: 24 * Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
CPUCache: 12288 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
RawSize: 110.6 MB (estimated)
FileSize: 62.9 MB (estimated)
Compression: snappy
WARNING: Assertions are enabled; benchmarks unnecessarily slow
------------------------------------------------
Created bg thread 0x7fc31c7ff700
randomwithverify : 4.920 micros/op 203233 ops/sec; ( get:900000 put:50000 del:50000 total:1000000 found:445522)

Revert Plan: OK
Reviewers: dhruba, emayanke
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8769

Committed by Vamsi Ponnekanti
Summary:
Added the RandomWithVerify benchmark option.

Test Plan:
This whole diff is to test.
[nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_bench --benchmarks=randomwithverify
LevelDB: version 1.5
Date: Tue Feb 19 17:50:28 2013
CPU: 24 * Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
CPUCache: 12288 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
RawSize: 110.6 MB (estimated)
FileSize: 62.9 MB (estimated)
Compression: snappy
WARNING: Assertions are enabled; benchmarks unnecessarily slow
------------------------------------------------
Created bg thread 0x7fa9c3fff700
randomwithverify : 5.004 micros/op 199836 ops/sec; ( get:900000 put:80000 del:20000 total:1000000 found:711992)

Revert Plan: OK
Reviewers: dhruba, emayanke
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8685

- 21 Feb 2013 (1 commit)

Committed by Abhishek Kona
Summary:
* Introduce histogram in statistics.h
* Stop watch to measure time
* Introduce two timers as a proof of concept

Replaced NULL with nullptr to fix some lint errors. Should be useful for google.

Test Plan: ran db_bench and checked stats. make all check
Reviewers: dhruba, heyongqiang
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8637
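
A minimal stopwatch sketch of the timing pattern described above; the class shape and the histogram hook are assumptions for illustration, not this commit's exact code.

```cpp
// Illustrative stopwatch: start on construction, read elapsed microseconds.
#include <chrono>
#include <cstdint>
#include <cstdio>

class StopWatch {
 public:
  StopWatch() : start_(std::chrono::steady_clock::now()) {}
  uint64_t ElapsedMicros() const {
    auto d = std::chrono::steady_clock::now() - start_;
    return std::chrono::duration_cast<std::chrono::microseconds>(d).count();
  }
 private:
  std::chrono::steady_clock::time_point start_;
};

int main() {
  StopWatch sw;  // start timing
  // ... the operation being measured would run here ...
  std::printf("elapsed: %llu us\n",
              static_cast<unsigned long long>(sw.ElapsedMicros()));
  return 0;  // a real version would record the value into a histogram
}
```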

- 26 Jan 2013 (1 commit)

Committed by Chip Turner
Summary:
Previously, if you opened a db with num_levels set lower than the database, you received the unhelpful message "Corruption: VersionEdit: new-file entry." Now you get a more verbose message describing the issue.

Also, fix handling of compression_levels (both the run-over-the-end issue and the memory management of it).

Lastly, unique_ptr'ify a couple of minor calls.

Test Plan: make check
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8151

- 24 Jan 2013 (1 commit)

Committed by Chip Turner
Summary:
Replace manual memory management with std::unique_ptr in a number of places; not exhaustive, but this fixes a few leaks with file handles as well as clarifying the semantics of file-handle ownership in the log classes.

Test Plan: db_stress, make check
Reviewers: dhruba
Reviewed By: dhruba
CC: zshao, leveldb, heyongqiang
Differential Revision: https://reviews.facebook.net/D8043
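
A small before/after sketch of the ownership change; the WritableFile type below is a stand-in for illustration, not the commit's actual class.

```cpp
// Sketch of unique_ptr replacing manual delete for a file handle.
#include <cstdio>
#include <memory>

struct WritableFile {
  explicit WritableFile(std::FILE* f) : f_(f) {}
  ~WritableFile() { if (f_) std::fclose(f_); }
  std::FILE* f_;
};

// Before: raw pointer, easy to leak on early return.
//   WritableFile* file = new WritableFile(std::fopen("LOG", "w"));
//   ... every exit path must remember `delete file;`

// After: unique_ptr releases the handle on every path automatically.
int main() {
  std::unique_ptr<WritableFile> file(
      new WritableFile(std::fopen("LOG", "w")));
  if (!file->f_) return 1;  // no leak: unique_ptr cleans up here too
  return 0;
}
```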

- 09 Jan 2013 (1 commit)

Committed by Dhruba Borthakur
Summary:
This was a performance regression caused by https://reviews.facebook.net/D6729. The default value of max_grandparent_overlap_factor was erroneously set to 0 in db_bench, which was causing compactions to create really, really small files.

Test Plan: Run --benchmarks=overwrite
Reviewers: heyongqiang, emayanke, sheki, MarkCallaghan
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7797

- 08 Jan 2013 (1 commit)

Committed by Mark Callaghan
Summary:
Adds the option --seed to db_bench to specify the base for the per-thread RNG. When not set, each thread uses the same value across runs of db_bench, which defeats IO stress testing.

Adds the option --read_range. When set to a value > 1, an iterator is created and each query done for the readrandom benchmark will do a range scan for that many rows. When not set or set to 1, the existing behavior (a point lookup) is done.

Fixes a bug where a printf format string was missing.

Test Plan: run db_bench
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7749
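
A sketch of deriving distinct per-thread RNG seeds from a base seed, which is what a flag like --seed enables; the variable names are illustrative, not db_bench's own.

```cpp
// Each thread gets a reproducible but distinct stream: base seed + thread id.
#include <cstdint>
#include <random>
#include <vector>

int main() {
  uint64_t base_seed = 1397;  // e.g., what a --seed flag would supply
  int num_threads = 4;
  std::vector<std::mt19937_64> rngs;
  for (int t = 0; t < num_threads; ++t) {
    rngs.emplace_back(base_seed + t);  // change the base, change every stream
  }
  return 0;
}
```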

- 29 Nov 2012 (2 commits)

Committed by sheki
Summary:
Create a directory "archive" in the DB directory. During DeleteObsoleteFiles, move the WAL files (*.log) to the archive directory instead of deleting them.

Test Plan: Created a DB using db_bench. Reopened it. Checked that the files move.
Reviewers: dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D6975

Committed by Abhishek Kona
Summary:
Scripted and removed all trailing spaces and converted all tabs to spaces. Also fixed other lint errors. All lint errors from this point of time should be taken seriously.

Test Plan: make all check
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7059

- 22 Nov 2012 (1 commit)

Committed by Dhruba Borthakur
Support taking a configurable number of files from the same level to compact in a single compaction run.

Summary:
The compaction process takes some files from level K and merges them into level K+1. The number of files it picks from level K was capped in such a way that the total amount of data picked does not exceed the maxfilesize of that level. This essentially meant that only one file from level K is picked for a single compaction.

For bulk loads, we would like to take many files from level K and compact them in a single compaction run. This patch introduces an option called 'source_compaction_factor' (similar to expanded_compaction_factor). It is a multiplier that is multiplied by the maxfilesize of that level to arrive at the limit used to throttle the number of source files from level K. For bulk loads, set source_compaction_factor to a very high number so that multiple files from the same level are picked for compaction in a single compaction.

The default value of source_compaction_factor is 1, to keep backward compatibility with existing compaction semantics.

Test Plan: make clean check
Reviewers: emayanke, sheki
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6867
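
A worked sketch of the throttle arithmetic the summary describes: the cap on source bytes is the level's max file size times source_compaction_factor. The numbers and names below are illustrative.

```cpp
// limit = maxfilesize * source_compaction_factor, per the summary above.
#include <cstdint>
#include <cstdio>

int main() {
  uint64_t max_file_size = 64ull << 20;  // assume 64 MB files at level K
  int source_compaction_factor = 1;      // default: ~one source file
  uint64_t limit = max_file_size * source_compaction_factor;
  std::printf("compaction picks source files until ~%llu bytes\n",
              static_cast<unsigned long long>(limit));

  // For a bulk load, a very high factor lets one compaction drain many files:
  source_compaction_factor = 1000000;
  limit = max_file_size * static_cast<uint64_t>(source_compaction_factor);
  std::printf("bulk-load limit: ~%llu bytes\n",
              static_cast<unsigned long long>(limit));
  return 0;
}
```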

- 21 Nov 2012 (1 commit)

Committed by Dhruba Borthakur
Summary:
This option is needed for fast bulk uploads. The goal is to load all the data into files in L0 without any interference from background compactions.

Test Plan: make clean check
Reviewers: sheki
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6849

- 15 Nov 2012 (1 commit)

Committed by Dhruba Borthakur
Summary:
The value specified in max_grandparent_overlap_factor is used to limit the file size in a compaction run. This patch makes it configurable when using db_bench.

Test Plan: make clean db_bench
Reviewers: MarkCallaghan, heyongqiang
Reviewed By: heyongqiang
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6729

- 13 Nov 2012 (1 commit)

Committed by Dhruba Borthakur
Summary:
The db_bench utility was broken in 1.5.4.fb because of a signed-unsigned comparison. The static variable FLAGS_min_level_to_compress was recently changed from int to 'unsigned int', but it is initialized to a negative value, -1.

The segfault is of this type:
Program received signal SIGSEGV, Segmentation fault.
Open (this=0x7fffffffdee0) at db/db_bench.cc:939
939     db/db_bench.cc: No such file or directory.
(gdb) where

Test Plan: run db_bench with no options.
Reviewers: heyongqiang
Reviewed By: heyongqiang
CC: MarkCallaghan, emayanke, sheki
Differential Revision: https://reviews.facebook.net/D6663
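
A standalone demo of this bug class: initializing an unsigned variable to -1 silently wraps to a huge value, so ordered comparisons misbehave. This is generic illustration code, not db_bench's actual logic.

```cpp
// An unsigned flag initialized to -1 wraps to UINT_MAX (4294967295 here).
#include <cstdio>

static unsigned int min_level_to_compress = -1;  // wraps, no error at runtime

int main() {
  int level = 2;
  // Intended: "compress only when level >= min_level_to_compress (if set)".
  // With the wrapped value this is never true, and mixing the signed `level`
  // with the unsigned flag promotes `level` to unsigned as well.
  if (static_cast<unsigned int>(level) >= min_level_to_compress) {
    std::printf("would compress level %d\n", level);
  } else {
    std::printf("flag wrapped to %u; branch never taken\n",
                min_level_to_compress);
  }
  return 0;
}
```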

- 10 Nov 2012 (2 commits)

Committed by Abhishek Kona
Summary:
Record BloomFilter hits and drop-off reasons during compaction.

Test Plan: Unit tests work.
Reviewers: dhruba, heyongqiang
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D6591

Committed by heyongqiang
Summary:
Disable size compaction in ldb reduce_levels; this will avoid compactions other than the manual compaction. Added --compression=none|snappy|zlib|bzip2 and --file_size= (per-file size) options to the ldb reduce_levels command.

Test Plan: run ldb
Reviewers: dhruba, MarkCallaghan
Reviewed By: dhruba
CC: sheki, emayanke
Differential Revision: https://reviews.facebook.net/D6597

- 09 Nov 2012 (1 commit)

Committed by Abhishek Kona
Summary:
Prototype stats collection. This diff is a good estimate of what the final code will look like.

A few assumptions:
* Used a global static instance of the statistics object. Plan to pass it to each internal function. Static allows metrics only at the app level.
* The Tickers do no locking of their own. They depend on the mutex at each function of LevelDB. If we ever remove the mutex, we should change this too. The other option is to use atomic objects anyway, as there won't be any contention since they will only ever be acquired by one thread.
* The counters are dumb; they increment through the lifecycle. Plan to use ODS etc. to get last-5-min stats and the like.

Test Plan:
Made changes in db_bench.
Ran ./db_bench --statistics=1 --num=10000 --cache_size=5000
This will print the cache hit/miss stats.

Reviewers: dhruba, heyongqiang
Differential Revision: https://reviews.facebook.net/D6441
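
A minimal sketch of the "dumb counter" ticker idea described above: counters that only ever increment, read by an external poller. The names are illustrative, not this diff's exact API.

```cpp
// Tickers increment for their whole lifetime; no internal locking, per the
// assumption that LevelDB's per-call mutex already serializes updates.
#include <cstdint>
#include <cstdio>

enum Ticker { BLOCK_CACHE_HIT, BLOCK_CACHE_MISS, TICKER_MAX };

struct Statistics {
  uint64_t tickers[TICKER_MAX] = {};
  void RecordTick(Ticker t) { tickers[t] += 1; }
};

static Statistics global_stats;  // global static instance, as in the diff

int main() {
  global_stats.RecordTick(BLOCK_CACHE_MISS);
  global_stats.RecordTick(BLOCK_CACHE_HIT);
  std::printf("hits=%llu misses=%llu\n",
              (unsigned long long)global_stats.tickers[BLOCK_CACHE_HIT],
              (unsigned long long)global_stats.tickers[BLOCK_CACHE_MISS]);
  return 0;
}
```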

- 08 Nov 2012 (1 commit)

Committed by heyongqiang
Summary: as subject

Test Plan: run db_bench readrandom
Reviewers: dhruba
Reviewed By: dhruba
CC: MarkCallaghan, emayanke, sheki
Differential Revision: https://reviews.facebook.net/D6495

- 07 Nov 2012 (1 commit)

Committed by Dhruba Borthakur
Summary:
The default compilation process now uses "-Wall". Fix all compilation errors generated by gcc.

Test Plan: make all check
Reviewers: heyongqiang, emayanke, sheki
Reviewed By: heyongqiang
CC: MarkCallaghan
Differential Revision: https://reviews.facebook.net/D6525

- 03 Nov 2012 (1 commit)

Committed by amayank
Make compression options configurable. These include window bits, level, and strategy for ZlibCompression.

Summary:
LevelDB currently uses windowBits=-14 while using zlib compression (it was earlier 15). This makes the setting configurable. Related changes here: https://reviews.facebook.net/D6105

Test Plan: make all check
Reviewers: dhruba, MarkCallaghan, sheki, heyongqiang
Differential Revision: https://reviews.facebook.net/D6393
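
These knobs map onto zlib's deflateInit2 parameters; below is a plain-zlib sketch (not LevelDB's wrapper code) showing where window bits, level, and strategy plug in, assuming zlib is available and linked with -lz.

```cpp
// Plain zlib usage: deflateInit2 takes level, windowBits, and strategy.
// windowBits = -14 means raw deflate (no zlib header) with a 2^14 window.
#include <zlib.h>
#include <cstdio>

int main() {
  z_stream strm = {};
  int level = Z_DEFAULT_COMPRESSION;  // compression-level knob
  int window_bits = -14;              // negative => raw deflate, 14-bit window
  int mem_level = 8;                  // zlib's default memory level
  int strategy = Z_DEFAULT_STRATEGY;  // strategy knob
  int rc = deflateInit2(&strm, level, Z_DEFLATED, window_bits,
                        mem_level, strategy);
  if (rc != Z_OK) {
    std::fprintf(stderr, "deflateInit2 failed: %d\n", rc);
    return 1;
  }
  deflateEnd(&strm);
  return 0;
}
```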

- 02 Nov 2012 (1 commit)

Committed by heyongqiang
Summary: as subject

Test Plan: run db_bench and db_test
Reviewers: dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D6111

- 30 Oct 2012 (3 commits)

Committed by Dhruba Borthakur
Summary:
The leveldb API is enhanced to support different compression algorithms at different levels.

This adds the option min_level_to_compress to db_bench, which specifies the minimum level for which compression should be done when compression is enabled. This can be used to disable compression for levels 0 and 1, which are likely to suffer from stalls because of the CPU load for memtable flushes and (L0,L1) compaction. Level 0 is special as it gets frequent memtable flushes. Level 1 is special as it frequently gets all:all file compactions between it and level 0. But all other levels could be the same: for any level N where N > 1, the rate of sequential IO for that level should be the same. The last level is the exception, because it might not be full and because files from it are not read to compact with the next larger level.

The same amount of time will be spent doing compaction at any level N, excluding N=0, 1, or the last level. By this standard all of those levels should use the same compression. The difference is that the loss (using more disk space) from a faster compression algorithm is less significant for N=2 than for N=3. So we might be willing to trade disk space for faster write rates with no compression for L0 and L1, snappy for L2, and zlib for L3. Using a faster compression algorithm for the mid levels also allows us to reclaim some CPU without trading off much loss in disk space overhead.

Also note that little is to be gained by compressing levels 0 and 1. For a 4-level tree they account for 10% of the data. For a 5-level tree they account for 1% of the data.

With compression enabled:
* memtable flush rate is ~18 MB/second
* (L0,L1) compaction rate is ~30 MB/second

With compression enabled but min_level_to_compress=2:
* memtable flush rate is ~320 MB/second
* (L0,L1) compaction rate is ~560 MB/second

This practically takes the same code from https://reviews.facebook.net/D6225 but makes the leveldb API more general purpose with a few additional lines of code.

Test Plan: make check
Differential Revision: https://reviews.facebook.net/D6261
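
A sketch of a per-level compression policy matching the example in the summary (no compression for L0/L1, snappy for L2, zlib deeper); the enum and function below are illustrative, not the exact API this commit adds.

```cpp
// Illustrative per-level compression policy keyed off min_level_to_compress.
#include <cstdio>

enum CompressionType { kNoCompression, kSnappyCompression, kZlibCompression };

CompressionType CompressionForLevel(int level, int min_level_to_compress) {
  if (level < min_level_to_compress) return kNoCompression;  // L0, L1: fast
  if (level == min_level_to_compress) return kSnappyCompression;
  return kZlibCompression;  // deeper levels: trade CPU for disk space
}

int main() {
  int min_level_to_compress = 2;
  for (int level = 0; level < 4; ++level) {
    std::printf("L%d -> compression type %d\n", level,
                CompressionForLevel(level, min_level_to_compress));
  }
  return 0;
}
```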

Committed by Mark Callaghan
Summary: Adds the "MB/sec in" and "MB/sec out" to this line: Amplification: 1.7 rate, 0.01 GB in, 0.02 GB out, 8.24 MB/sec in, 13.75 MB/sec out Changes all values to be reported per interval and since test start for this line: ... thread 0: (10000,60000) ops and (19155.6,27307.5) ops/second in (0.522041,2.197198) seconds Task ID: # Blame Rev: Test Plan: run db_bench Revert Plan: Database Impact: Memcache Impact: Other Notes: EImportant: - begin *PUBLIC* platform impact section - Bugzilla: # - end platform impact - Reviewers: dhruba Reviewed By: dhruba Differential Revision: https://reviews.facebook.net/D6291

Committed by Mark Callaghan
Summary:
Adds a method that returns the score for the next level that most needs compaction. That method is then used by db_bench to rate-limit threads. Threads are put to sleep at the end of each stats interval until the score is less than the limit. The limit is set via the --rate_limit=$double option; the specified value must be > 1.0.

Also adds the option --stats_per_interval to enable additional metrics reported every stats interval.

Test Plan: run db_bench
Reviewers: dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D6243
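
A sketch of the sleep-until-the-score-drops loop described above; MaxCompactionScore() and the one-second backoff are stand-ins for illustration, not the commit's actual code.

```cpp
// At the end of each stats interval, sleep until the compaction score of the
// neediest level falls below the --rate_limit value.
#include <chrono>
#include <thread>

double MaxCompactionScore() {
  // Stand-in: a real implementation would ask the DB for the score of the
  // level most in need of compaction.
  return 0.5;
}

void ThrottleAtIntervalEnd(double rate_limit /* must be > 1.0 */) {
  while (MaxCompactionScore() >= rate_limit) {
    // Back off so background compaction can catch up with the writers.
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
  }
}

int main() {
  ThrottleAtIntervalEnd(2.0);
  return 0;
}
```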

- 25 Oct 2012 (1 commit)

Committed by Mark Callaghan
Summary:
This adds more statistics to be reported by GetProperty("leveldb.stats"). The new stats include time spent waiting on stalls in MakeRoomForWrite. This also includes the total amplification rate, where that is:
(#bytes of sequential IO during compaction) / (#bytes from Put)

This also includes a lot more data for the per-level compaction report:
* Rn(MB) - MB read from level N during compaction between levels N and N+1
* Rnp1(MB) - MB read from level N+1 during compaction between levels N and N+1
* Wnew(MB) - new data written to the level during compaction
* Amplify - ( Write(MB) + Rnp1(MB) ) / Rn(MB)
* Rn - files read from level N during compaction between levels N and N+1
* Rnp1 - files read from level N+1 during compaction between levels N and N+1
* Wnp1 - files written to level N+1 during compaction between levels N and N+1
* NewW - new files written to level N+1 during compaction
* Count - number of compactions done for this level

This is the new output from DB::GetProperty("leveldb.stats"). The old output stopped at Write(MB).

Compactions
Level Files Size(MB) Time(sec) Read(MB) Write(MB) Rn(MB) Rnp1(MB) Wnew(MB) Amplify Read(MB/s) Write(MB/s)  Rn Rnp1 Wnp1 NewW Count
-------------------------------------------------------------------------------------------------------------------------------------
    0     3        6        33        0      576      0       0      576    -1.0        0.0         1.3     0    0    0    0   290
    1   127      242       351     5316     5314    570    4747      567    17.0       12.1        12.1   287 2399 2685  286    32
    2   161      328        54      822      824    326     496      328     4.0        1.9         1.9   160  251  411  160   161
Amplification: 22.3 rate, 0.56 GB in, 12.55 GB out
Uptime(secs): 439.8
Stalls(secs): 206.938 level0_slowdown, 0.000 level0_numfiles, 24.129 memtable_compaction

Test Plan: run db_bench
(cherry picked from commit ecdeead38f86cc02e754d0032600742c4f02fec8)
Reviewers: dhruba
Differential Revision: https://reviews.facebook.net/D6153

- 20 Oct 2012 (2 commits)

Committed by Dhruba Borthakur
Summary:
The parameter delete_obsolete_files_period_micros controls the periodicity of deleting obsolete files. db_bench was reading this parameter into a local variable called 'l' but was incorrectly using another local variable called 'n' while setting it in the db.options data structure. This patch also logs the value of delete_obsolete_files_period_micros in the LOG file at db startup time.

I am hoping that this will improve the overall write throughput drastically.

Test Plan: run db_bench
Reviewers: MarkCallaghan, heyongqiang
Reviewed By: MarkCallaghan
Differential Revision: https://reviews.facebook.net/D6099

Committed by Dhruba Borthakur
Published in https://reviews.facebook.net/D5997.

Summary:
This patch allows compaction to occur in multiple background threads concurrently. If a manual compaction is issued, the system falls back to a single-compaction-thread model. This is done to ensure correctness and simplicity of code. When the manual compaction is finished, the system resumes its concurrent-compaction mode automatically. The updates to the manifest are done via a group-commit approach.

Test Plan: run db_bench

- 17 Oct 2012 (1 commit)

Committed by Dhruba Borthakur
Summary:
The method DeleteObsoleteFiles is very costly, especially when the number of files in a system is large. It makes a list of all live files and then scans the directory to compute the diff. By default, this method is executed after every compaction run.

This patch makes it such that DeleteObsoleteFiles is never invoked twice within a configured period.

Test Plan: run all unit tests
Reviewers: heyongqiang, MarkCallaghan
Reviewed By: MarkCallaghan
Differential Revision: https://reviews.facebook.net/D6045
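
A sketch of the "never twice within a period" check described above; the struct, field names, and clock helper are illustrative, not the patch's actual code.

```cpp
// Skip the expensive scan if it ran within the configured period.
#include <chrono>
#include <cstdint>

static uint64_t NowMicros() {
  using namespace std::chrono;
  return duration_cast<microseconds>(
             steady_clock::now().time_since_epoch()).count();
}

struct DBImpl {
  uint64_t delete_obsolete_files_period_micros = 1000000;  // configurable
  uint64_t last_delete_obsolete_files_micros = 0;

  void MaybeDeleteObsoleteFiles() {
    uint64_t now = NowMicros();
    if (now - last_delete_obsolete_files_micros <
        delete_obsolete_files_period_micros) {
      return;  // ran recently; skip the directory scan this time
    }
    last_delete_obsolete_files_micros = now;
    // ... list live files, scan the directory, delete the difference ...
  }
};

int main() {
  DBImpl db;
  db.MaybeDeleteObsoleteFiles();
  return 0;
}
```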

- 16 Oct 2012 (1 commit)

Committed by Dhruba Borthakur
Summary:
Enhance db_bench to allow setting the number of levels in a database.

Test Plan: run db_bench and look at LOG
Reviewers: heyongqiang, MarkCallaghan
Reviewed By: MarkCallaghan
CC: MarkCallaghan
Differential Revision: https://reviews.facebook.net/D6027

- 04 Oct 2012 (3 commits)

Committed by Dhruba Borthakur
Summary:
We have seen that reading data via the pread call (instead of mmap) is much faster on Linux 2.6.x kernels. This patch makes an equivalent option to switch off mmaps for the write path as well. db_bench --mmap_write=0 will use write() instead of mmap() to write data to a file.

This change is backward compatible: the default option is to continue using mmap for writing to a file.

Test Plan: "make check all"
Differential Revision: https://reviews.facebook.net/D5781

Committed by Mark Callaghan
Summary:
The option is zero by default, and in that case reporting is unchanged. By unchanged: the interval at which stats are reported is scaled after each report, and a newline is not issued after each report, so one line is rewritten. When non-zero, it specifies the constant interval (in operations) at which statistics are reported, and the stats include the rate per interval. This makes it easier to determine whether QPS changes over the duration of the test.

Test Plan: run db_bench
Reviewers: dhruba
Reviewed By: dhruba
CC: heyongqiang
Differential Revision: https://reviews.facebook.net/D5817

Committed by Mark Callaghan
Summary: see above

Test Plan: run db_bench with an invalid value for the option
Reviewers: dhruba
Reviewed By: dhruba
CC: heyongqiang
Differential Revision: https://reviews.facebook.net/D5823

- 03 Oct 2012 (1 commit)

Committed by Mark Callaghan
Summary:
stdlib.h is needed for exit().
--readhead --> --readahead
Also fix compiler warnings and errors.

Test Plan: compile
Reviewers: dhruba
Reviewed By: dhruba
CC: heyongqiang
Differential Revision: https://reviews.facebook.net/D5805