  1. 16 Oct, 2012 · 1 commit
  2. 04 Oct, 2012 · 3 commits
    • A configurable option to write data using write instead of mmap. · c1006d42
      Committed by Dhruba Borthakur
      Summary:
      We have seen that reading data via the pread call (instead of
      mmap) is much faster on Linux 2.6.x kernels. This patch adds
      an equivalent option to switch off mmap for the write path
      as well.
      
      db_bench --mmap_write=0 will use write() instead of mmap() to
      write data to a file.
      
      This change is backward compatible: the default is to
      continue using mmap for writing to a file.
      
      Test Plan: "make check all"
      
      Differential Revision: https://reviews.facebook.net/D5781
      c1006d42
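      For illustration, a minimal sketch of the two append paths such an option
      chooses between (POSIX write() versus copying into an mmap'ed window);
      this is not the commit's actual Env/WritableFile code:

      // Illustrative only: two ways to append n bytes at a given file offset.
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      // Append using plain write(2): no page-cache mapping required.
      bool AppendWithWrite(int fd, const char* data, size_t n) {
        return write(fd, data, n) == static_cast<ssize_t>(n);
      }

      // Append by extending the file and memcpy-ing into an mmap'ed window.
      bool AppendWithMmap(int fd, off_t offset, const char* data, size_t n) {
        const long page = sysconf(_SC_PAGESIZE);
        const off_t aligned = offset - (offset % page);     // mmap offsets must be page-aligned
        const size_t len = static_cast<size_t>(offset - aligned) + n;
        if (ftruncate(fd, offset + n) != 0) return false;   // grow the file first
        char* base = static_cast<char*>(
            mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, aligned));
        if (base == MAP_FAILED) return false;
        memcpy(base + (offset - aligned), data, n);
        return munmap(base, len) == 0;
      }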
    • Add --stats_interval option to db_bench · e678a594
      Committed by Mark Callaghan
      Summary:
      The option is zero by default, and in that case reporting is unchanged:
      the reporting interval is scaled after each report, and no newline is
      issued after each report, so a single line is rewritten in place.
      When non-zero, it specifies a constant interval (in operations) at which
      statistics are reported, and the stats include the rate per interval. This
      makes it easier to determine whether QPS changes over the duration of the test.
      
      Test Plan:
      run db_bench
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D5817
      e678a594
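      As a sketch of the reporting logic described above (illustrative only, not
      the actual db_bench code; the class name is hypothetical):

      #include <chrono>
      #include <cstdint>
      #include <cstdio>

      // Report every 'interval' ops and include the rate over that interval;
      // an interval of 0 keeps the old behavior (no interval reporting here).
      class IntervalReporter {
       public:
        explicit IntervalReporter(int64_t interval)
            : interval_(interval), done_(0), last_done_(0),
              last_time_(std::chrono::steady_clock::now()) {}

        void FinishedSingleOp() {
          if (interval_ <= 0) return;
          if (++done_ % interval_ != 0) return;
          auto now = std::chrono::steady_clock::now();
          double secs = std::chrono::duration<double>(now - last_time_).count();
          double rate = secs > 0 ? static_cast<double>(done_ - last_done_) / secs : 0.0;
          std::fprintf(stderr, "... finished %lld ops; %.1f ops/sec in this interval\n",
                       static_cast<long long>(done_), rate);
          last_done_ = done_;
          last_time_ = now;
        }

       private:
        int64_t interval_;
        int64_t done_;
        int64_t last_done_;
        std::chrono::steady_clock::time_point last_time_;
      };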
    • Fix the bounds check for the --readwritepercent option · d8763abe
      Committed by Mark Callaghan
      Summary:
      see above
      
      Test Plan:
      run db_bench with invalid value for option
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D5823
      d8763abe
  3. 03 Oct, 2012 · 1 commit
    • Fix compiler warnings and errors in ldb.c · 98804f91
      Committed by Mark Callaghan
      Summary:
      stdlib.h is needed for exit()
      --readhead --> --readahead
      
      Test Plan:
      compile
      
      fix compiler warnings & errors
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D5805
      98804f91
  4. 02 Oct, 2012 · 1 commit
    • Command-line tool to compact LevelDB databases. · fec81318
      Committed by Abhishek Kona
      Summary:
      A simple CLI which calls DB->CompactRange().
      It can take string keys as the range.
      
      Test Plan:
      Inserted data into a table.
      Waited for a minute, then used the compact tool on it. File modification
      times changed, so Compact did something to the files.
      
      Existing unit tests work.
      
      Reviewers: heyongqiang, dhruba
      
      Reviewed By: dhruba
      
      Differential Revision: https://reviews.facebook.net/D5697
      fec81318
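      For context, the LevelDB API such a tool wraps is DB::CompactRange(); a
      minimal sketch, with a placeholder database path:

      #include "leveldb/db.h"

      int main() {
        leveldb::DB* db = nullptr;
        leveldb::Options options;
        leveldb::Status s = leveldb::DB::Open(options, "/tmp/testdb", &db);  // placeholder path
        if (!s.ok()) return 1;

        // Compact everything between two string keys; passing nullptr for a
        // bound means "from the beginning" / "to the end" of the key space.
        leveldb::Slice begin("key000"), end("key999");
        db->CompactRange(&begin, &end);

        delete db;
        return 0;
      }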
  5. 18 Sep, 2012 · 1 commit
    • add an option to disable seek compaction · a8464ed8
      Committed by heyongqiang
      Summary:
      as subject. This diff should be good for benchmarking.
      
      A follow-up diff will improve the case where seek compaction is enabled;
      in that diff, a seek will not be counted if the bloom filter filters out the read.
      
      Test Plan: build
      
      Reviewers: dhruba, MarkCallaghan
      
      Reviewed By: MarkCallaghan
      
      Differential Revision: https://reviews.facebook.net/D5481
      a8464ed8
  6. 17 Sep, 2012 · 1 commit
  7. 15 Sep, 2012 · 2 commits
  8. 14 Sep, 2012 · 2 commits
  9. 13 Sep, 2012 · 1 commit
  10. 07 Sep, 2012 · 1 commit
  11. 05 Sep, 2012 · 1 commit
    • Benchmark with both reads and writes at the same time. · 94208a78
      Committed by Dhruba Borthakur
      Summary:
      This patch enables the db_bench benchmark to issue both random reads and random writes at the same time. This option can be triggered via
      ./db_bench --benchmarks=readrandomwriterandom

      The default percentage of reads is 90.
      
      One can change the percentage of reads by specifying --readwritepercent:
      ./db_bench --benchmarks=readrandomwriterandom --readwritepercent=50
      
      This is a feature request from Jeffro asking for leveldb performance with a 90:10 read:write ratio.
      
      Test Plan: run on test machine.
      
      Reviewers: heyongqiang
      
      Reviewed By: heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D5067
      94208a78
  12. 30 Aug, 2012 · 1 commit
    • The sharding of the block cache is limited to 2**20 pieces. · e5fe80e4
      Committed by Dhruba Borthakur
      Summary:
      The number of shards that the block cache is divided into is
      configurable. However, if the user asks for the block cache to be
      divided into more than 2**20 pieces, the system will try to allocate
      a huge array of that size, which could fail.

      It is better to limit the sharding of the block cache to an
      upper bound. The default sharding is 16 shards (i.e. 2**4)
      and the maximum is now roughly one million shards (i.e. 2**20).
      
      Also fixed a bug in LRUCache: numShardBits should be a private
      member of the LRUCache object rather than a static variable.
      
      Test Plan:
      run db_bench with --cache_numshardbits=64.
      
      Reviewers: heyongqiang
      
      Reviewed By: heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D5013
      e5fe80e4
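      A sketch of the bounded sharding scheme described above (illustrative only,
      not the actual LRUCache code):

      #include <cstdint>

      // Pick a shard from the high bits of the key's hash, and clamp the number
      // of shard bits so the shard array can never exceed 2**20 entries.
      class ShardedCacheSketch {
       public:
        explicit ShardedCacheSketch(int num_shard_bits) {
          if (num_shard_bits < 0) num_shard_bits = 4;     // default: 2**4 = 16 shards
          if (num_shard_bits > 20) num_shard_bits = 20;   // cap: 2**20 shards
          num_shard_bits_ = num_shard_bits;
        }

        uint32_t Shard(uint32_t hash) const {
          return num_shard_bits_ > 0 ? (hash >> (32 - num_shard_bits_)) : 0;
        }

        int NumShards() const { return 1 << num_shard_bits_; }

       private:
        int num_shard_bits_;   // a member, not a static, as the fix above requires
      };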
  13. 29 Aug, 2012 · 1 commit
    • merge 1.5 · a4f9b8b4
      Committed by heyongqiang
      Summary:
      as subject

      Test Plan:
      db_test table_test
      
      Reviewers: dhruba
      a4f9b8b4
  14. 28 Aug, 2012 · 1 commit
    • Introduce a new method Env->Fsync() that issues fsync (instead of fdatasync). · fc20273e
      Committed by Dhruba Borthakur
      Summary:
      Introduce a new method Env->Fsync() that issues fsync (instead of fdatasync).
      This is needed for data durability when running on ext3 filesystems.
      Added options to the benchmark db_bench to generate performance numbers
      with either fsync or fdatasync enabled.
      
      Cleaned up Makefile to build leveldb_shell only when building the thrift
      leveldb server.
      
      Test Plan: build and run benchmark
      
      Reviewers: heyongqiang
      
      Reviewed By: heyongqiang
      
      Differential Revision: https://reviews.facebook.net/D4911
      fc20273e
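      The distinction comes down to which POSIX call is issued; a minimal sketch
      (not the actual Env implementation):

      #include <unistd.h>

      // fsync() flushes file data and metadata, which ext3 needs for durability;
      // fdatasync() may skip metadata that is not required to read the data back,
      // so it can be cheaper.
      bool SyncFile(int fd, bool use_fsync) {
        return (use_fsync ? fsync(fd) : fdatasync(fd)) == 0;
      }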
  15. 23 Aug, 2012 · 1 commit
  16. 22 Aug, 2012 · 1 commit
  17. 20 Aug, 2012 · 1 commit
    • add disable wal to db_bench · deb1a1fa
      Committed by heyongqiang
      Summary:
      as subject.
      
      ./db_bench --benchmarks=fillrandom --num=1000000 --disable_data_sync=1 --write_buffer_size=50000000 --target_file_size_base=100000000 --disable_wal=1
      
      LevelDB:    version 1.4
      Date:       Sun Aug 19 16:01:59 2012
      CPU:        8 * Intel(R) Xeon(R) CPU           L5630  @ 2.13GHz
      CPUCache:   12288 KB
      Keys:       16 bytes each
      Values:     100 bytes each (50 bytes after compression)
      Entries:    1000000
      RawSize:    110.6 MB (estimated)
      FileSize:   62.9 MB (estimated)
      ------------------------------------------------
      fillrandom   :       4.591 micros/op 217797 ops/sec;   24.1 MB/s
      
      ./db_bench --benchmarks=fillrandom --num=1000000 --disable_data_sync=1 --write_buffer_size=50000000 --target_file_size_base=100000000
      
      LevelDB:    version 1.4
      Date:       Sun Aug 19 16:02:54 2012
      CPU:        8 * Intel(R) Xeon(R) CPU           L5630  @ 2.13GHz
      CPUCache:   12288 KB
      Keys:       16 bytes each
      Values:     100 bytes each (50 bytes after compression)
      Entries:    1000000
      RawSize:    110.6 MB (estimated)
      FileSize:   62.9 MB (estimated)
      ------------------------------------------------
      fillrandom   :       3.696 micros/op 270530 ops/sec;   29.9 MB/s
      
      Test Plan: db_bench
      
      Reviewers: dhruba
      
      Differential Revision: https://reviews.facebook.net/D4767
      deb1a1fa
  18. 16 Aug, 2012 · 1 commit
  19. 14 Jun, 2012 · 1 commit
  20. 02 Jun, 2012 · 1 commit
  21. 31 May, 2012 · 1 commit
  22. 30 May, 2012 · 1 commit
  23. 23 May, 2012 · 1 commit
  24. 19 May, 2012 · 1 commit
  25. 17 May, 2012 · 1 commit
  26. 12 May, 2012 · 1 commit
  27. 10 May, 2012 · 1 commit
  28. 17 Apr, 2012 · 1 commit
    • Added bloom filter support. · 85584d49
      Committed by Sanjay Ghemawat
      In particular, we add a new FilterPolicy class.  An instance
      of this class can be supplied in Options when opening a
      database.  If supplied, the instance is used to generate
      summaries of keys (e.g., a bloom filter) which are placed in
      sstables.  These summaries are consulted by DB::Get() so we
      can avoid reading sstable blocks that are guaranteed to not
      contain the key we are looking for.
      
      This change provides one implementation of FilterPolicy
      based on bloom filters.
      
      Other changes:
      - Updated version number to 1.4.
      - Some build tweaks.
      - C binding for CompactRange.
      - A few more benchmarks: deleteseq, deleterandom, readmissing, seekrandom.
      - Minor .gitignore update.
      85584d49
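      Using the new FilterPolicy from application code looks roughly like this
      (a minimal sketch; 10 bits per key is just an example setting and the
      database path is a placeholder):

      #include "leveldb/db.h"
      #include "leveldb/filter_policy.h"

      int main() {
        leveldb::Options options;
        // Attach a bloom-filter-based FilterPolicy so DB::Get() can skip sstable
        // blocks that cannot contain the key.
        options.filter_policy = leveldb::NewBloomFilterPolicy(10);
        options.create_if_missing = true;

        leveldb::DB* db = nullptr;
        if (!leveldb::DB::Open(options, "/tmp/testdb", &db).ok()) return 1;  // placeholder path

        // ... reads and writes ...

        delete db;
        delete options.filter_policy;  // the DB does not take ownership
        return 0;
      }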
  29. 01 Nov, 2011 · 1 commit
    • A number of fixes: · 36a5f8ed
      Committed by Hans Wennborg
      - Replace raw slice comparison with a call to user comparator.
        Added test for custom comparators.
      
      - Fix end of namespace comments.
      
      - Fixed bug in picking inputs for a level-0 compaction.
      
        When finding overlapping files, the covered range may expand
        as files are added to the input set.  We now correctly expand
        the range when this happens instead of continuing to use the
        old range.  For example, suppose L0 contains files with the
        following ranges:
      
            F1: a .. d
            F2:    c .. g
            F3:       f .. j
      
        and the initial compaction target is F3.  We used to search
        for range f..j which yielded {F2,F3}.  However we now expand
        the range as soon as another file is added.  In this case,
        when F2 is added, we expand the range to c..j and restart the
        search.  That picks up file F1 as well.
      
        This change fixes a bug related to deleted keys showing up
        incorrectly after a compaction as described in Issue 44.
      
      (Sync with upstream @25072954)
      36a5f8ed
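      The expansion described above amounts to re-scanning the level-0 file list
      until the covered range stops growing; a simplified sketch (not the actual
      leveldb routine; plain byte-wise string comparison stands in for the user
      comparator):

      #include <string>
      #include <vector>

      // Simplified stand-in for leveldb's per-file metadata, for illustration only.
      struct FileRange { std::string smallest, largest; };

      // Because level-0 files overlap each other, adding a file can widen the
      // target range, so restart the scan whenever that happens. With the F1/F2/F3
      // example above, starting from [f..j] this pulls in F2, expands to [c..j],
      // restarts, and finally pulls in F1 as well.
      std::vector<FileRange> OverlappingLevel0Inputs(const std::vector<FileRange>& files,
                                                     std::string begin, std::string end) {
        std::vector<FileRange> inputs;
        for (size_t i = 0; i < files.size(); ) {
          const FileRange& f = files[i++];
          if (f.largest < begin || f.smallest > end) continue;   // no overlap
          inputs.push_back(f);
          bool grew = false;
          if (f.smallest < begin) { begin = f.smallest; grew = true; }
          if (f.largest > end)    { end = f.largest;    grew = true; }
          if (grew) {              // the covered range expanded: restart the scan
            inputs.clear();
            i = 0;
          }
        }
        return inputs;
      }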
  30. 06 Oct, 2011 · 1 commit
    • A number of bugfixes: · 299ccedf
      Committed by Gabor Cselle
      - Added DB::CompactRange() method.
      
        Changed manual compaction code so it breaks up compactions of
        big ranges into smaller compactions.
      
        Changed the code that pushes the output of memtable compactions
        to higher levels to obey the grandparent constraint: i.e., we
        must never have a single file in level L that overlaps too
        much data in level L+1 (to avoid very expensive L-1 compactions).
      
        Added code to pretty-print internal keys.
      
      - Fixed bug where we would not detect overlap with files in
        level-0 because we were incorrectly using binary search
        on an array of files with overlapping ranges.
      
        Added "leveldb.sstables" property that can be used to dump
        all of the sstables and ranges that make up the db state.
      
      - Removing post_write_snapshot support.  Email to leveldb mailing
        list brought up no users, just confusion from one person about
        what it meant.
      
      - Fixing static_cast char to unsigned on BIG_ENDIAN platforms.
      
        Fixes Issue 35 and Issue 36.
      
      - Comment clarification to address leveldb Issue 37.
      
      - Change license in posix_logger.h to match other files.
      
      - Fixed a build problem where uint32 was used instead of uint32_t.
      
      Sync with upstream @24408625
      299ccedf
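      The new property is read through the existing DB::GetProperty() interface;
      a minimal sketch, with a placeholder database path:

      #include <cstdio>
      #include <string>
      #include "leveldb/db.h"

      int main() {
        leveldb::DB* db = nullptr;
        leveldb::Options options;
        options.create_if_missing = true;
        if (!leveldb::DB::Open(options, "/tmp/testdb", &db).ok()) return 1;  // placeholder path

        // Dump every sstable and its key range, per level, as described above.
        std::string dump;
        if (db->GetProperty("leveldb.sstables", &dump)) {
          std::printf("%s\n", dump.c_str());
        }

        delete db;
        return 0;
      }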
  31. 02 Sep, 2011 · 1 commit
    • Bugfixes: for Get(), don't hold mutex while writing log. · 72630236
      Committed by gabor@google.com
      - Fix bug in Get: when it triggers a compaction, it could sometimes
        mark the compaction with the wrong level (if there was a gap
        in the set of levels examined for the Get).
      
      - Do not hold mutex while writing to the log file or to the
        MANIFEST file.
      
        Added a new benchmark that runs a writer thread concurrently with
        reader threads.
      
        Percentiles
        ------------------------------
        micros/op: avg  median 99   99.9  99.99  99.999 max
        ------------------------------------------------------
        before:    42   38     110  225   32000  42000  48000
        after:     24   20     55   65    130    1100   7000
      
      - Fixed race in optimized Get.  It should have been using the
        pinned memtables, not the current memtables.
      
      
      
      git-svn-id: https://leveldb.googlecode.com/svn/trunk@50 62dab493-f737-651d-591e-8d6aee1b9529
      72630236
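      The locking change follows the usual pattern of dropping the mutex around
      slow file I/O; an illustrative sketch only (std::mutex stands in for the
      port-layer mutex, and the function names are hypothetical):

      #include <mutex>

      std::mutex mu;

      void WriteLogEntryWithoutHoldingMutex() {
        std::unique_lock<std::mutex> lock(mu);
        // ... decide what needs to be written while the lock is held ...
        lock.unlock();                // drop the lock for the slow part
        // AppendToLogAndManifest();  // hypothetical slow I/O, done without the mutex
        lock.lock();                  // re-acquire before touching shared state again
        // ... install the result under the lock ...
      }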
  32. 23 Aug, 2011 · 1 commit
    • Bugfix for issue 33; reduce lock contention in Get(), parallel benchmarks. · e3584f9c
      Committed by gabor@google.com
      - Fix for issue 33 (non-null-terminated result from
        leveldb_property_value())
      
      - Support for running multiple instances of a benchmark in parallel.
      
      - Reduce lock contention on Get():
        (1) Do not hold the lock while searching memtables.
        (2) Shard block and table caches 16-ways.
      
        Benchmark for evaluating this change:
        $ db_bench --benchmarks=fillseq1,readrandom --threads=$n
        (fillseq1 is a small hack to make sure fillseq runs once regardless
        of number of threads specified on the command line).
      
      
      
      git-svn-id: https://leveldb.googlecode.com/svn/trunk@49 62dab493-f737-651d-591e-8d6aee1b9529
      e3584f9c
  33. 21 Jul, 2011 · 1 commit
    • Speed up Snappy uncompression, new Logger interface. · 60bd8015
      Committed by gabor@google.com
      - Removed one copy of the uncompressed block contents by changing
        the signature of Snappy_Uncompress() so it uncompresses into a
        flat array instead of a std::string.
              
        Speeds up readrandom ~10%.
      
      - Instead of a combination of Env/WritableFile, we now have a
        Logger interface that can easily be overridden by applications
        that want to supply their own logging.
      
      - Separated out the gcc and Sun Studio parts of atomic_pointer.h
        so we can use 'asm', 'volatile' keywords for Sun Studio.
      
      
      
      
      git-svn-id: https://leveldb.googlecode.com/svn/trunk@39 62dab493-f737-651d-591e-8d6aee1b9529
      60bd8015
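      The change is essentially uncompressing into a caller-allocated flat buffer;
      a sketch using the public Snappy API (not the actual port-layer code):

      #include <cstddef>
      #include <memory>
      #include <snappy.h>

      // Uncompress directly into a flat char array, avoiding the extra copy that
      // going through a std::string would require. Returns nullptr on failure;
      // the caller owns the returned buffer.
      char* UncompressToFlatArray(const char* input, size_t input_len, size_t* out_len) {
        if (!snappy::GetUncompressedLength(input, input_len, out_len)) return nullptr;
        std::unique_ptr<char[]> buf(new char[*out_len]);
        if (!snappy::RawUncompress(input, input_len, buf.get())) return nullptr;
        return buf.release();
      }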
  34. 22 Jun, 2011 · 1 commit
    • A number of smaller fixes and performance improvements: · ccf0fcd5
      Committed by gabor@google.com
      - Implemented Get() directly instead of building on top of a full
        merging iterator stack.  This speeds up the "readrandom" benchmark
        by up to 15-30%.
      
      - Fixed an opensource compilation problem.
        Added --db=<name> flag to control where the database is placed.
      
      - Automatically compact a file when we have done enough
        overlapping seeks to that file.
      
      - Fixed a performance bug where we would read from at least one
        file in a level even if none of the files overlapped the key
        being read.
      
      - Makefile fix for Mac OSX installations that have XCode 4 without XCode 3.
      
      - Unified the two occurrences of binary search in a file-list
        into one routine.
      
      - Found and fixed a bug where we would unnecessarily search the
        last file when looking for a key larger than all data in the
        level.
      
      - A fix to avoid the need for trivial move compactions and
        therefore gets rid of two out of five syncs in "fillseq".
      
      - Removed the MANIFEST file write when switching to a new
        memtable/log-file for a 10-20% improvement on fill speed on ext4.
      
      - Adding a SNAPPY setting in the Makefile for folks who have
        Snappy installed. Snappy compresses values and speeds up writes.
      
      
      
      git-svn-id: https://leveldb.googlecode.com/svn/trunk@32 62dab493-f737-651d-591e-8d6aee1b9529
      ccf0fcd5
  35. 28 May, 2011 · 1 commit
  36. 21 May, 2011 · 1 commit