1. 05 Sep 2014, 6 commits
  2. 04 Sep 2014, 7 commits
  3. 03 Sep 2014, 10 commits
    • Merge pull request #259 from wankai/master · 9b976e34
      Committed by Igor Canadi
      typo improvement
    • Merge remote-tracking branch 'upstream/master' · 5d25a469
      Committed by wankai
    • Call SanitizeDBOptionsByCFOptions() in the right place · 9b58c73c
      Committed by Lei Jin
      Summary: It only covers Open() with the default column family right now.
      
      Test Plan: make release
      
      Reviewers: igor, yhchiang, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22467
    • Ignore missing column families · a84234a6
      Committed by Igor Canadi
      Summary:
      Before this diff, a Write() to a non-existing column family would fail.
      
      This diff adds an option so that a Write() does not fail when the WriteBatch points to a non-existing column family. MongoDB said this would be useful for them, since a transaction might update an index that was dropped by another thread. This way, they don't have to check that every index is still alive on each write; they don't care about losing writes to a dropped index. (A usage sketch follows this entry.)
      
      Test Plan: added a small unit test
      
      Reviewers: sdong, yhchiang, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22143
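      A minimal usage sketch of the option described above, assuming it landed as `WriteOptions::ignore_missing_column_families` (the name it carries in later RocksDB releases); the path and key names are illustrative:
      ```
      #include <cassert>

      #include "rocksdb/db.h"
      #include "rocksdb/options.h"
      #include "rocksdb/write_batch.h"

      int main() {
        rocksdb::DB* db;
        rocksdb::Options options;
        options.create_if_missing = true;
        rocksdb::Status s =
            rocksdb::DB::Open(options, "/tmp/ignore_missing_cf_demo", &db);
        assert(s.ok());

        rocksdb::ColumnFamilyHandle* index_cf;
        s = db->CreateColumnFamily(rocksdb::ColumnFamilyOptions(), "index",
                                   &index_cf);
        assert(s.ok());

        // Build a batch against the column family, then drop it, simulating
        // another thread dropping an index mid-transaction.
        rocksdb::WriteBatch batch;
        batch.Put(index_cf, "key", "value");
        s = db->DropColumnFamily(index_cf);
        assert(s.ok());

        // With the option set, the write to the dropped family is skipped
        // instead of failing the whole batch.
        rocksdb::WriteOptions write_options;
        write_options.ignore_missing_column_families = true;
        s = db->Write(write_options, &batch);
        assert(s.ok());

        delete index_cf;  // the handle still has to be freed after the drop
        delete db;
        return 0;
      }
      ```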
    • Add assert to db Put in db_stress test · 8ed70fc2
      Committed by Feng Zhu
      Summary:
      1. Assert that db->Put succeeds in db_stress.
      2. Start column family names at "1".
      
      Test Plan: 1. ./db_stress
      
      Reviewers: ljin, yhchiang, dhruba, sdong, igor
      
      Reviewed By: sdong, igor
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22659
    • Merge pull request #242 from tdfischer/perf-timer-destructors · 7f19bb93
      Committed by Igor Canadi
      Refactor PerfStepTimer to automatically stop on destruct
    • Fix dropping column family bug · 8438a193
      Committed by Feng Zhu
      Summary: db/db_impl.cc:2324 (DBImpl::BackgroundCompaction) should not raise bg_error_ when a column family is dropped during compaction.
      
      Test Plan: 1. db_stress
      
      Reviewers: ljin, yhchiang, dhruba, igor, sdong
      
      Reviewed By: igor
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22653
    • Refactor PerfStepTimer to stop on destruct · 6614a484
      Committed by Torrie Fischer
      This eliminates the need to remember to call PERF_TIMER_STOP when a section has
      been timed, allows more useful designs with the perf timers, and enables
      possible return-value optimizations. Simplistic example:
      
      class Foo {
        public:
          Foo() : m_v(0) {}    // needed by `return Foo();` below
          Foo(int v) : m_v(v) {}
        private:
          int m_v;
      };  // note the required semicolon

      // `err` replaces the original `errno`, which collides with the <cerrno> macro
      Foo makeFrobbedFoo(int *err)
      {
        *err = 0;
        return Foo();
      }

      Foo bar(int *err)
      {
        PERF_TIMER_GUARD(some_timer);  // RocksDB perf macro; stops on scope exit

        return makeFrobbedFoo(err);
      }

      int main(int argc, char *argv[])
      {
        int err;

        Foo f = bar(&err);

        if (err)
          return -1;
        return 0;
      }
      
      After bar() is called, perf_context.some_timer would be incremented as if
      Stop(&perf_context.some_timer) was called at the end, and the compiler is still
      able to produce optimizations on the return value from makeFrobbedFoo() through
      to main().
    • Fix compile · 076bd01a
      Committed by Igor Canadi
      Summary: gcc on our dev boxes is not happy about __attribute__((unused))
      
      Test Plan: compiles now
      
      Reviewers: sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22707
    • Fix iOS compile · 990df99a
      Committed by Igor Canadi
      Summary: We need to set contbuild for this :)
      
      Test Plan: compiles
      
      Reviewers: sdong, yhchiang, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22701
  4. 02 Sep 2014, 2 commits
    • Don't let flush preempt compaction in certain cases · 7dcadb1d
      Committed by Igor Canadi
      Summary:
      I have an application configured with 16 background threads and high write rates. The L0->L1 compaction is very slow and limits the concurrency of the system; while it's running, the other 15 threads are idle. However, when a flush is needed, the one thread that is busy with L0->L1 ends up doing the flush, instead of any of the 15 threads that are just sitting there.
      
      This diff prevents that. If there are idle threads, we don't let a flush preempt a compaction (sketched after this entry).
      
      Test Plan: Will run stress test
      
      Reviewers: ljin, sdong, yhchiang
      
      Reviewed By: sdong, yhchiang
      
      Subscribers: dhruba, leveldb
      
      Differential Revision: https://reviews.facebook.net/D22299
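      A schematic of the scheduling rule described above; this is not the actual RocksDB thread-pool code, and all identifiers are illustrative:
      ```
      // Illustrative only: a thread in the middle of a compaction asks whether
      // it should switch to a pending flush.
      bool ShouldPreemptCompactionForFlush(int bg_threads_total,
                                           int bg_threads_busy,
                                           bool flush_pending) {
        if (!flush_pending) {
          return false;
        }
        // If any background thread is idle, let it pick up the flush instead
        // of interrupting the (often slow) L0->L1 compaction on this thread.
        bool have_idle_thread = bg_threads_busy < bg_threads_total;
        return !have_idle_thread;
      }
      ```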
    • typo improvement · dff2b1a8
      Committed by Wankai Zhang
  5. 01 Sep 2014, 1 commit
  6. 31 Aug 2014, 1 commit
  7. 30 Aug 2014, 4 commits
    • Limit max bytes that can be read/written per pread/write syscall · 7e9f28cb
      Committed by Lei Jin
      Summary:
      A BlockBasedTable sst file can grow to a large size when universal
      compaction is used. When the index block exceeds 2GB, pread seems to fail
      and return truncated data, causing a "truncated block" error. I tried to use
      ```
        #define _FILE_OFFSET_BITS 64
      ```
      but the problem still persists. Splitting a big write/read into smaller
      batches seems to solve the problem (see the sketch after this entry).
      
      Test Plan:
      successfully compacted a case with resulting sst file at ~90G (2.1G
      index block size)
      
      Reviewers: yhchiang, igor, sdong
      
      Reviewed By: sdong
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22569
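      A minimal sketch of the chunking pattern the summary describes; this is not the actual RocksDB env code, and the 1GB per-syscall cap is an assumption:
      ```
      #include <unistd.h>

      #include <cstddef>

      // Illustrative: issue pread() in bounded chunks so that no single
      // syscall has to move more bytes than the OS reliably handles.
      ssize_t PReadInChunks(int fd, char* buf, size_t n, off_t offset) {
        static const size_t kMaxBytesPerSyscall = 1UL << 30;  // 1GB (assumed)
        size_t left = n;
        while (left > 0) {
          size_t chunk = left < kMaxBytesPerSyscall ? left : kMaxBytesPerSyscall;
          ssize_t done = pread(fd, buf, chunk, offset);
          if (done < 0) {
            return done;  // propagate the error to the caller
          }
          if (done == 0) {
            break;  // unexpected end of file
          }
          buf += done;
          offset += done;
          left -= done;
        }
        return static_cast<ssize_t>(n - left);
      }
      ```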
    • Improve Cuckoo Table Reader performance. Inlined hash function and number of buckets a power of two. · d20b8cfa
      Committed by Radheshyam Balasundaram
      
      Summary:
      Use inlined hash functions instead of a function pointer. Make the number of buckets a power of two and use bitwise AND instead of mod (see the sketch after this entry).
      After these changes, we get almost a 50% improvement in performance.
      
      Results:
      With 120000000 items, utilization is 89.41%, number of hash functions: 2.
      Time taken per op is 0.231us (4.3 Mqps) with batch size of 0
      Time taken per op is 0.229us (4.4 Mqps) with batch size of 0
      Time taken per op is 0.185us (5.4 Mqps) with batch size of 0
      With 120000000 items, utilization is 89.41%, number of hash functions: 2.
      Time taken per op is 0.108us (9.3 Mqps) with batch size of 10
      Time taken per op is 0.100us (10.0 Mqps) with batch size of 10
      Time taken per op is 0.103us (9.7 Mqps) with batch size of 10
      With 120000000 items, utilization is 89.41%, number of hash functions: 2.
      Time taken per op is 0.101us (9.9 Mqps) with batch size of 25
      Time taken per op is 0.098us (10.2 Mqps) with batch size of 25
      Time taken per op is 0.097us (10.3 Mqps) with batch size of 25
      With 120000000 items, utilization is 89.41%, number of hash functions: 2.
      Time taken per op is 0.100us (10.0 Mqps) with batch size of 50
      Time taken per op is 0.097us (10.3 Mqps) with batch size of 50
      Time taken per op is 0.097us (10.3 Mqps) with batch size of 50
      With 120000000 items, utilization is 89.41%, number of hash functions: 2.
      Time taken per op is 0.102us (9.8 Mqps) with batch size of 100
      Time taken per op is 0.098us (10.2 Mqps) with batch size of 100
      Time taken per op is 0.115us (8.7 Mqps) with batch size of 100
      
      With 100000000 items, utilization is 74.51%, number of hash functions: 2.
      Time taken per op is 0.201us (5.0 Mqps) with batch size of 0
      Time taken per op is 0.155us (6.5 Mqps) with batch size of 0
      Time taken per op is 0.152us (6.6 Mqps) with batch size of 0
      With 100000000 items, utilization is 74.51%, number of hash functions: 2.
      Time taken per op is 0.089us (11.3 Mqps) with batch size of 10
      Time taken per op is 0.084us (11.9 Mqps) with batch size of 10
      Time taken per op is 0.086us (11.6 Mqps) with batch size of 10
      With 100000000 items, utilization is 74.51%, number of hash functions: 2.
      Time taken per op is 0.087us (11.5 Mqps) with batch size of 25
      Time taken per op is 0.085us (11.7 Mqps) with batch size of 25
      Time taken per op is 0.093us (10.8 Mqps) with batch size of 25
      With 100000000 items, utilization is 74.51%, number of hash functions: 2.
      Time taken per op is 0.094us (10.6 Mqps) with batch size of 50
      Time taken per op is 0.094us (10.7 Mqps) with batch size of 50
      Time taken per op is 0.093us (10.8 Mqps) with batch size of 50
      With 100000000 items, utilization is 74.51%, number of hash functions: 2.
      Time taken per op is 0.092us (10.9 Mqps) with batch size of 100
      Time taken per op is 0.089us (11.2 Mqps) with batch size of 100
      Time taken per op is 0.088us (11.3 Mqps) with batch size of 100
      
      With 80000000 items, utilization is 59.60%, number of hash functions: 2.
      Time taken per op is 0.154us (6.5 Mqps) with batch size of 0
      Time taken per op is 0.168us (6.0 Mqps) with batch size of 0
      Time taken per op is 0.190us (5.3 Mqps) with batch size of 0
      With 80000000 items, utilization is 59.60%, number of hash functions: 2.
      Time taken per op is 0.081us (12.4 Mqps) with batch size of 10
      Time taken per op is 0.077us (13.0 Mqps) with batch size of 10
      Time taken per op is 0.083us (12.1 Mqps) with batch size of 10
      With 80000000 items, utilization is 59.60%, number of hash functions: 2.
      Time taken per op is 0.077us (13.0 Mqps) with batch size of 25
      Time taken per op is 0.073us (13.7 Mqps) with batch size of 25
      Time taken per op is 0.073us (13.7 Mqps) with batch size of 25
      With 80000000 items, utilization is 59.60%, number of hash functions: 2.
      Time taken per op is 0.076us (13.1 Mqps) with batch size of 50
      Time taken per op is 0.072us (13.8 Mqps) with batch size of 50
      Time taken per op is 0.072us (13.8 Mqps) with batch size of 50
      With 80000000 items, utilization is 59.60%, number of hash functions: 2.
      Time taken per op is 0.077us (13.0 Mqps) with batch size of 100
      Time taken per op is 0.074us (13.6 Mqps) with batch size of 100
      Time taken per op is 0.073us (13.6 Mqps) with batch size of 100
      
      With 70000000 items, utilization is 52.15%, number of hash functions: 2.
      Time taken per op is 0.190us (5.3 Mqps) with batch size of 0
      Time taken per op is 0.186us (5.4 Mqps) with batch size of 0
      Time taken per op is 0.184us (5.4 Mqps) with batch size of 0
      With 70000000 items, utilization is 52.15%, number of hash functions: 2.
      Time taken per op is 0.079us (12.7 Mqps) with batch size of 10
      Time taken per op is 0.070us (14.2 Mqps) with batch size of 10
      Time taken per op is 0.072us (14.0 Mqps) with batch size of 10
      With 70000000 items, utilization is 52.15%, number of hash functions: 2.
      Time taken per op is 0.080us (12.5 Mqps) with batch size of 25
      Time taken per op is 0.072us (14.0 Mqps) with batch size of 25
      Time taken per op is 0.071us (14.1 Mqps) with batch size of 25
      With 70000000 items, utilization is 52.15%, number of hash functions: 2.
      Time taken per op is 0.082us (12.1 Mqps) with batch size of 50
      Time taken per op is 0.071us (14.1 Mqps) with batch size of 50
      Time taken per op is 0.073us (13.6 Mqps) with batch size of 50
      With 70000000 items, utilization is 52.15%, number of hash functions: 2.
      Time taken per op is 0.080us (12.5 Mqps) with batch size of 100
      Time taken per op is 0.077us (13.0 Mqps) with batch size of 100
      Time taken per op is 0.078us (12.8 Mqps) with batch size of 100
      
      Test Plan:
      make check all
      make valgrind_check
      make asan_check
      
      Reviewers: sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22539
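      The bucket-index trick from the summary, as a hedged sketch; the identifiers are illustrative, not the actual CuckooTableReader code:
      ```
      #include <cstdint>

      // When num_buckets is a power of two, h % num_buckets equals
      // h & (num_buckets - 1), replacing an integer division with one AND.
      inline uint64_t BucketIndex(uint64_t hash, uint64_t num_buckets) {
        return hash & (num_buckets - 1);  // requires num_buckets == 2^k
      }

      // Usage:
      //   uint64_t num_buckets = uint64_t{1} << 27;  // rounded up to a power of two
      //   uint64_t bucket = BucketIndex(some_hash, num_buckets);
      ```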
    • ForwardIterator: reset incomplete iterators on Seek() · 0f9c43ea
      Committed by Tomislav Novak
      Summary:
      When reading from kBlockCacheTier, ForwardIterator's internal child iterators
      may end up in the incomplete state (read was unable to complete without doing
      disk I/O). `ForwardIterator::status()` will correctly report that; however, the
      iterator may be stuck in that state until all sub-iterators are rebuilt:
      
        * `NeedToSeekImmutable()` may return false even if some sub-iterators are
          incomplete
        * one of the child iterators may be an empty iterator without any state other
          than the kIncomplete status (created using `NewErrorIterator()`); seeking on
          any such iterator has no effect -- we need to construct it again
      
      Akin to rebuilding iterators after a superversion bump, this diff makes the forward
      iterator reset all incomplete child iterators when `Seek()` or `Next()` is
      called (a caller-side sketch follows this entry).
      
      Test Plan: TEST_TMPDIR=/dev/shm/rocksdbtest ROCKSDB_TESTS=TailingIterator ./db_test
      
      Reviewers: igor, sdong, ljin
      
      Reviewed By: ljin
      
      Subscribers: lovro, march, leveldb
      
      Differential Revision: https://reviews.facebook.net/D22575
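      A hedged sketch of the caller-side pattern this diff fixes: a tailing (forward) iterator reading with read_tier = kBlockCacheTier, then re-seeking after an incomplete read. The function name and keys are illustrative:
      ```
      #include <memory>

      #include "rocksdb/db.h"
      #include "rocksdb/options.h"

      // Assumes `db` is an open rocksdb::DB*. With kBlockCacheTier, a read
      // that would need disk I/O leaves the iterator in the Incomplete state.
      void ScanCacheOnly(rocksdb::DB* db) {
        rocksdb::ReadOptions read_options;
        read_options.tailing = true;                        // ForwardIterator
        read_options.read_tier = rocksdb::kBlockCacheTier;  // no disk I/O

        std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(read_options));
        for (it->Seek("start_key"); it->Valid(); it->Next()) {
          // ... consume it->key() / it->value() ...
        }
        if (it->status().IsIncomplete()) {
          // The data wasn't fully in the block cache. After this diff, a
          // later Seek() on the same iterator rebuilds its incomplete child
          // iterators instead of leaving them stuck in that state.
          it->Seek("start_key");
        }
      }
      ```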
    • Reduce recordTick overhead in compaction loop · 722d80c3
      Committed by Lei Jin
      Summary: It is too expensive to bump the ticker on every key/value pair (a batching sketch follows this entry).
      
      Test Plan: make release
      
      Reviewers: sdong, yhchiang, igor
      
      Reviewed By: igor
      
      Subscribers: leveldb
      
      Differential Revision: https://reviews.facebook.net/D22527
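      A hedged sketch of the batching idea behind the summary; the loop shape, ticker choice, and flush interval are illustrative, not the actual compaction code (Statistics::recordTick() and the NUMBER_KEYS_READ ticker are real RocksDB API):
      ```
      #include <cstdint>

      #include "rocksdb/statistics.h"

      // Illustrative: accumulate a local count and flush it to the ticker
      // periodically instead of calling recordTick() once per key/value pair.
      void CompactionLoopSketch(rocksdb::Statistics* stats, uint64_t num_entries) {
        const uint64_t kFlushEvery = 1000;  // assumed batching interval
        uint64_t pending = 0;
        for (uint64_t i = 0; i < num_entries; ++i) {
          // ... process one key/value pair ...
          if (++pending >= kFlushEvery) {
            stats->recordTick(rocksdb::NUMBER_KEYS_READ, pending);
            pending = 0;
          }
        }
        if (pending > 0) {
          stats->recordTick(rocksdb::NUMBER_KEYS_READ, pending);
        }
      }
      ```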
  8. 29 Aug 2014, 8 commits
  9. 28 Aug 2014, 1 commit