1. 03 Sep 2014, 1 commit
    • Refactor PerfStepTimer to stop on destruct · 6614a484
      Torrie Fischer authored
      This eliminates the need to remember to call PERF_TIMER_STOP when a section has
      been timed. This allows a more useful design for the perf timers and enables
      possible return value optimizations. A simplified example:
      
      class Foo {
        public:
          explicit Foo(int v) : m_v(v) {}
        private:
          int m_v;
      };

      Foo makeFrobbedFoo(int *err)
      {
        *err = 0;
        return Foo(0);
      }

      Foo bar(int *err)
      {
        PERF_TIMER_GUARD(some_timer);

        return makeFrobbedFoo(err);
      }

      int main(int argc, char **argv)
      {
        int err;

        Foo f = bar(&err);

        if (err)
          return -1;
        return 0;
      }
      
      After bar() is called, perf_context.some_timer is incremented as if
      Stop(&perf_context.some_timer) had been called at the end of bar(), and the
      compiler is still able to apply return value optimization to the value returned
      from makeFrobbedFoo() all the way through to main().
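      For illustration, a minimal sketch of the RAII idea behind a guard-style perf
      timer; the class name and counter type below are simplified stand-ins, not
      RocksDB's actual PerfStepTimer implementation:

      #include <chrono>
      #include <cstdint>

      // Simplified stand-in: starts timing on construction and adds the elapsed
      // nanoseconds to the supplied counter when it goes out of scope, so the
      // caller never has to remember to stop it explicitly.
      class ScopedPerfTimer {
       public:
        explicit ScopedPerfTimer(uint64_t* counter)
            : counter_(counter), start_(std::chrono::steady_clock::now()) {}
        ~ScopedPerfTimer() {
          auto elapsed = std::chrono::steady_clock::now() - start_;
          *counter_ +=
              std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed).count();
        }
       private:
        uint64_t* counter_;
        std::chrono::steady_clock::time_point start_;
      };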
  2. 05 Aug 2014, 1 commit
  3. 31 Jul 2014, 1 commit
  4. 08 May 2014, 1 commit
  5. 02 May 2014, 3 commits
  6. 09 Apr 2014, 1 commit
  7. 09 Feb 2014, 1 commit
  8. 08 Feb 2014, 3 commits
    • Fix incompatible compilation in Linux server · b8ea5e36
      Kai Liu authored
    • Make table properties shareable · 161ab42a
      kailiu authored
      Summary:
      We are going to expose the properties of all tables to end users through "some" db interface.
      However, the current design doesn't naturally fit this need, because:

      1. If a table is present in the table cache, we cannot simply return a reference to its table properties, because the table may be destroyed after compaction (and we don't want to hold a ref to the version).
      2. Copying the table properties is OK, but it's slow.

      Thus in this diff, I change the table reader's interface to return a shared pointer (to const table properties) instead of a const reference.
      
      Test Plan: `make check` passed
      
      Reviewers: haobo, sdong, dhruba
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15999
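      As an illustration of the interface change described above, a minimal sketch
      with simplified type and member names (not the exact RocksDB declarations):

      #include <memory>

      struct TableProperties { /* per-table statistics ... */ };

      class TableReader {
       public:
        // Returning shared ownership lets callers keep the properties alive
        // even if the table object itself is destroyed after a compaction.
        std::shared_ptr<const TableProperties> GetTableProperties() const {
          return table_properties_;
        }
       private:
        std::shared_ptr<const TableProperties> table_properties_;
      };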
    • Add support for plain table format to sst_dump. · 3ce8d9a9
      Yueh-Hsuan Chiang authored
      Summary:
      This diff enables the command line tool `sst_dump` to work for sst files
      under plain table format.  Changes include:
        * In tools/sst_dump.cc:
          - add support for plain table format
          - display prefix_extractor information when --show_properties is on
        * In table/format.cc
          - The table magic number of a Footer can now be initialized later
            via ReadFooterFromFile().
        * In table/meta_blocks:
          - add function ReadTableMagicNumber() that reads the magic number of
            the specified file.
      
      Minor fixes:
       - remove a duplicate #include in table/table_test.cc
       - fix a comment typo in include/rocksdb/memtablerep.h
       - fix lint errors.
      
      Test Plan:
      Ran sst_dump on both block-based and plain-table format files with
      different arguments, specifically those with --show_properties and --from.
      
      * sample output:
        https://reviews.facebook.net/P261
      
      Reviewers: kailiu, sdong, xjin
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D15903
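      A rough sketch of the kind of helper described above; it assumes the magic
      number sits in the last 8 bytes of the file, and all names and layout details
      here are illustrative rather than RocksDB's actual format code:

      #include <cstdint>
      #include <fstream>
      #include <string>

      // Illustrative only: read the trailing 8 bytes of a file and decode them
      // as a little-endian magic number.
      bool ReadTrailingMagicNumber(const std::string& fname, uint64_t* magic) {
        std::ifstream in(fname, std::ios::binary | std::ios::ate);
        if (!in) return false;
        std::streamoff size = in.tellg();
        if (size < 8) return false;
        in.seekg(-8, std::ios::end);
        unsigned char buf[8];
        in.read(reinterpret_cast<char*>(buf), sizeof(buf));
        if (!in) return false;
        uint64_t v = 0;
        for (int i = 7; i >= 0; --i) v = (v << 8) | buf[i];  // little-endian decode
        *magic = v;
        return true;
      }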
  9. 05 Dec 2013, 2 commits
  10. 07 Nov 2013, 1 commit
    • Fix stress test failure when using mmap-reads. · 292c2b33
      Dhruba Borthakur authored
      Summary:
      The mmap-based file->Read() does not use the scratch buffer to
      read in the file contents.
      
      Test Plan: ./db_stress --test_batches_snapshots=1 --ops_per_thread=100000000 --threads=32 --write_buffer_size=4194304 --destroy_db_initially=0 --reopen=0 --readpercent=45 --prefixpercent=5 --writepercent=35 --delpercent=5 --iterpercent=10 --db=/tmp/dhruba --max_key=100000000 --disable_seek_compaction=0 --mmap_read=1 --block_size=16384 --cache_size=1048576 --open_files=500000 --verify_checksum=1 --sync=1 --disable_wal=0 --disable_data_sync=0 --target_file_size_base=2097152 --target_file_size_multiplier=2 --max_write_buffer_number=3 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --filter_deletes=0
      
      Reviewers: haobo, kailiu
      
      Reviewed By: kailiu
      
      CC: leveldb, kailiu, emayanke
      
      Differential Revision: https://reviews.facebook.net/D13923
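      To illustrate the behavior the stress test tripped over, a simplified sketch;
      the types and function names below are assumptions, not the actual Env API:

      #include <cstddef>
      #include <cstring>

      struct Slice { const char* data; size_t size; };

      // An mmap-based read can return a pointer straight into the mapped region
      // and leave the caller's scratch buffer untouched...
      Slice MmapRead(const char* mapped_base, size_t offset, size_t n,
                     char* /*scratch*/) {
        return Slice{mapped_base + offset, n};  // no copy into scratch
      }

      // ...while a buffered read copies into scratch. Callers must therefore use
      // the returned Slice rather than assume the bytes landed in scratch.
      Slice BufferedRead(const char* src, size_t offset, size_t n, char* scratch) {
        std::memcpy(scratch, src + offset, n);
        return Slice{scratch, n};
      }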
  11. 02 Nov 2013, 1 commit
    • Implement a compressed block cache. · b4ad5e89
      Dhruba Borthakur authored
      Summary:
      RocksDB can now support an uncompressed block cache, a compressed
      block cache, or both. Lookups first look for a block in the
      uncompressed cache; only if it is not found there is it looked up
      in the compressed cache. If it is found in the compressed cache,
      it is uncompressed and inserted into the uncompressed cache.
      
      It is possible that the same block resides in the compressed cache
      as well as the uncompressed cache at the same time. Both caches
      have their own individual LRU policy.
      
      Test Plan: Unit test case attached.
      
      Reviewers: kailiu, sdong, haobo, leveldb
      
      Reviewed By: haobo
      
      CC: xjin, haobo
      
      Differential Revision: https://reviews.facebook.net/D12675
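      A toy sketch of the lookup flow described above; the cache type and helper
      here are assumptions for illustration, not RocksDB's actual block cache,
      and the real caches have capacity limits with their own LRU policies:

      #include <memory>
      #include <string>
      #include <unordered_map>

      using Block = std::string;
      using ToyCache = std::unordered_map<std::string, std::shared_ptr<Block>>;

      std::shared_ptr<Block> Uncompress(const Block& compressed) {
        // Stand-in for real decompression (e.g. Snappy); returns the bytes unchanged.
        return std::make_shared<Block>(compressed);
      }

      std::shared_ptr<Block> GetBlock(ToyCache& uncompressed_cache,
                                      ToyCache& compressed_cache,
                                      const std::string& key) {
        auto hit = uncompressed_cache.find(key);
        if (hit != uncompressed_cache.end()) return hit->second;  // fast path
        auto chit = compressed_cache.find(key);
        if (chit != compressed_cache.end()) {
          auto block = Uncompress(*chit->second);  // decompress the cached bytes
          uncompressed_cache[key] = block;         // block may now live in both caches
          return block;
        }
        return nullptr;  // miss in both caches: caller reads the block from the file
      }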
  12. 17 Oct 2013, 1 commit
  13. 05 Oct 2013, 1 commit
  14. 08 Sep 2013, 1 commit
    • [RocksDB] Added nano second stopwatch and new perf counters to track block read cost · f2f4c807
      Haobo Xu authored
      Summary: The purpose of this diff is to expose per user-call level precise timing of block reads, so that we can answer questions like: a Get() costs me 100ms, is that somehow related to loading blocks from the file system, or something else? We will answer that with EXACTLY how many blocks have been read, how much time was spent transferring the bytes from the OS, how much time was spent on checksum verification, and how much time was spent on block decompression, just for that one Get. A nanosecond stopwatch was introduced to track time with higher precision. The cost/precision of the stopwatch is also measured in a unit test. On my dev box, retrieving one time instance costs about 30ns on average. The deviation of timing results is good enough to track 100ns-1us level events. And the overhead can be safely ignored for 100us level events (10000 instances/s), for example, a viewstate thrift call.
      
      Test Plan: perf_context_test, also testing with viewstate shadow traffic.
      
      Reviewers: dhruba
      
      Reviewed By: dhruba
      
      CC: leveldb, xjin
      
      Differential Revision: https://reviews.facebook.net/D12351
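      A minimal sketch of the idea, with assumed counter names rather than the
      exact perf_context fields:

      #include <chrono>
      #include <cstdint>

      struct PerfCounters {
        uint64_t block_read_count = 0;
        uint64_t block_read_time_nanos = 0;
      };

      // Time a single block read with nanosecond resolution and charge it to the
      // per-call counters.
      template <typename ReadFn>
      void TimedBlockRead(PerfCounters& perf, ReadFn&& read_block) {
        auto start = std::chrono::steady_clock::now();
        read_block();  // e.g. read bytes, verify checksum, decompress
        auto elapsed = std::chrono::steady_clock::now() - start;
        perf.block_read_count += 1;
        perf.block_read_time_nanos +=
            std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed).count();
      }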
  15. 24 Aug 2013, 1 commit
  16. 30 Apr 2013, 1 commit
  17. 01 Mar 2013, 1 commit
  18. 10 Jan 2013, 1 commit
    • Fixed wrong assumption in Table::Open() · 4e9d9d98
      Kosie van der Merwe authored
      Summary:
      `Table::Open()` assumes that `size` correctly describes the size of `file`; this diff adds a check that the file is actually large enough to hold a footer, and for good measure adds assertions to `Footer::DecodeFrom()`.
      
      This was discovered by running `valgrind ./db_test` and seeing that `Footer::DecodeFrom()` was accessing uninitialized memory.
      
      Test Plan:
      make clean check
      
      ran `valgrind ./db_test` and saw that DBTest.NoSpace no longer complains about a conditional jump depending on uninitialized memory.
      
      Reviewers: dhruba, vamsi, emayanke, sheki
      
      Reviewed By: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D7815
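      The guard amounts to something like the following sketch; the constant and
      status type are assumptions for illustration, not the actual on-disk values:

      #include <cstdint>

      constexpr uint64_t kFooterEncodedLength = 48;  // assumed footer size

      enum class Status { kOk, kCorruption };

      // Refuse to decode a footer from a file that is too short to contain one,
      // so DecodeFrom() never reads uninitialized or out-of-range bytes.
      Status CheckFooterSize(uint64_t file_size) {
        if (file_size < kFooterEncodedLength) return Status::kCorruption;
        return Status::kOk;
      }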
  19. 21 Dec 2012, 1 commit
  20. 30 Jun 2012, 1 commit
  21. 29 Jun 2012, 1 commit
  22. 17 Apr 2012, 1 commit
    • Added bloom filter support. · 85584d49
      Sanjay Ghemawat authored
      In particular, we add a new FilterPolicy class.  An instance
      of this class can be supplied in Options when opening a
      database.  If supplied, the instance is used to generate
      summaries of keys (e.g., a bloom filter) which are placed in
      sstables.  These summaries are consulted by DB::Get() so we
      can avoid reading sstable blocks that are guaranteed to not
      contain the key we are looking for.
      
      This change provides one implementation of FilterPolicy
      based on bloom filters.
      
      Other changes:
      - Updated version number to 1.4.
      - Some build tweaks.
      - C binding for CompactRange.
      - A few more benchmarks: deleteseq, deleterandom, readmissing, seekrandom.
      - Minor .gitignore update.
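      A typical usage sketch of the FilterPolicy hook; the headers and calls follow
      the standard LevelDB distribution, with 10 bits per key and the database path
      chosen only as examples:

      #include <cassert>
      #include "leveldb/db.h"
      #include "leveldb/filter_policy.h"

      int main() {
        leveldb::Options options;
        options.create_if_missing = true;
        // Keep ~10 bits per key in each sstable's filter block so DB::Get() can
        // skip blocks that provably do not contain the key.
        options.filter_policy = leveldb::NewBloomFilterPolicy(10);

        leveldb::DB* db = nullptr;
        leveldb::Status s = leveldb::DB::Open(options, "/tmp/testdb", &db);
        assert(s.ok());

        // ... reads and writes ...

        delete db;                     // close the database first
        delete options.filter_policy;  // the caller owns the policy object
        return 0;
      }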
  23. 16 Mar 2012, 1 commit
  24. 01 Nov 2011, 1 commit
    • A number of fixes: · 36a5f8ed
      Hans Wennborg authored
      - Replace raw slice comparison with a call to user comparator.
        Added test for custom comparators.
      
      - Fix end of namespace comments.
      
      - Fixed bug in picking inputs for a level-0 compaction.
      
        When finding overlapping files, the covered range may expand
        as files are added to the input set.  We now correctly expand
        the range when this happens instead of continuing to use the
        old range.  For example, suppose L0 contains files with the
        following ranges:
      
            F1: a .. d
            F2:    c .. g
            F3:       f .. j
      
        and the initial compaction target is F3.  We used to search
        for range f..j which yielded {F2,F3}.  However we now expand
        the range as soon as another file is added.  In this case,
        when F2 is added, we expand the range to c..j and restart the
        search.  That picks up file F1 as well.
      
        This change fixes a bug related to deleted keys showing up
        incorrectly after a compaction as described in Issue 44.
      
      (Sync with upstream @25072954)
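      A simplified sketch of the corrected selection loop; file metadata and key
      comparisons are reduced to plain strings here for illustration only:

      #include <string>
      #include <vector>

      struct FileRange { std::string smallest, largest; };

      // Whenever adding an overlapping file widens the key range, restart the
      // scan with the wider range so transitively overlapping files (like F1 in
      // the example above) are also picked up.
      std::vector<FileRange> PickOverlapping(const std::vector<FileRange>& level0,
                                             std::string begin, std::string end) {
        std::vector<FileRange> inputs;
        bool restart = true;
        while (restart) {
          restart = false;
          inputs.clear();
          for (const auto& f : level0) {
            if (f.largest < begin || f.smallest > end) continue;  // no overlap
            inputs.push_back(f);
            if (f.smallest < begin) { begin = f.smallest; restart = true; }
            if (f.largest > end)    { end = f.largest;    restart = true; }
          }
        }
        return inputs;
      }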
  25. 21 Jul 2011, 1 commit
    • Speed up Snappy uncompression, new Logger interface. · 60bd8015
      gabor@google.com authored
      - Removed one copy of the uncompressed block contents by changing
        the signature of Snappy_Uncompress() so that it uncompresses into a
        flat array instead of a std::string.
              
        Speeds up readrandom ~10%.
      
      - Instead of a combination of Env/WritableFile, we now have a
        Logger interface that can be easily overridden by applications
        that want to supply their own logging.
      
      - Separated out the gcc and Sun Studio parts of atomic_pointer.h
        so we can use 'asm', 'volatile' keywords for Sun Studio.
      
      
      
      
      git-svn-id: https://leveldb.googlecode.com/svn/trunk@39 62dab493-f737-651d-591e-8d6aee1b9529
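      A rough sketch of what overriding the Logger interface can look like; the
      virtual signature shown is an assumption for illustration:

      #include <cstdarg>
      #include <cstdio>

      class Logger {
       public:
        virtual ~Logger() = default;
        virtual void Logv(const char* format, va_list ap) = 0;
      };

      // Application-supplied logger that routes library logging to stderr
      // instead of an Env/WritableFile pair.
      class StderrLogger : public Logger {
       public:
        void Logv(const char* format, va_list ap) override {
          std::vfprintf(stderr, format, ap);
          std::fputc('\n', stderr);
        }
      };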
  26. 21 Apr 2011, 1 commit
  27. 20 Apr 2011, 2 commits
  28. 19 Apr 2011, 1 commit
  29. 13 Apr 2011, 1 commit
  30. 31 Mar 2011, 1 commit
  31. 23 Mar 2011, 1 commit
  32. 19 Mar 2011, 1 commit