1. 19 Jan 2017 — 3 commits
  2. 18 Jan 2017 — 2 commits
  3. 16 Jan 2017 — 2 commits
  4. 14 Jan 2017 — 2 commits
  5. 13 Jan 2017 — 1 commit
  6. 12 Jan 2017 — 4 commits
    • direct reads refactor · dc2584ee
      Committed by Aaron Gao
      Summary:
      Refactors direct I/O reads: removes unnecessary classes and unifies the interfaces.
      Tested with db_bench.

      More changes are still needed for the options and for turning direct I/O ON/OFF for different file types. Since it is disabled by default, this should be fine for now.
      Closes https://github.com/facebook/rocksdb/pull/1636
      
      Differential Revision: D4307189
      
      Pulled By: lightmark
      
      fbshipit-source-id: 6991e22
    • Abort compactions more reliably when closing DB · d18dd2c4
      Committed by Mike Kolupaev
      Summary:
      DB shutdown aborts running compactions by setting an atomic shutting_down=true flag that CompactionJob periodically checks. Without this PR, the flag is checked only before processing every _output_ value, so if the compaction filter filters everything out, the compaction is uninterruptible. This PR adds checks for shutting_down on every _input_ value (in CompactionIterator and MergeHelper).
      
      There's also some minor code cleanup along the way.
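      The per-input check can be sketched as follows. This is a minimal stdlib-only illustration, not RocksDB's actual CompactionIterator code; all names here are hypothetical:

      ```cpp
      #include <atomic>
      #include <cstddef>
      #include <vector>

      // Hypothetical sketch: a compaction loop that checks an atomic
      // shutting_down flag on every *input* value, so it stays interruptible
      // even when the compaction filter drops every output value.
      std::atomic<bool> shutting_down{false};

      // Processes inputs until done or shutdown is observed; returns the
      // number of input values consumed.
      size_t ProcessCompactionInputs(const std::vector<int>& inputs) {
        size_t processed = 0;
        for (int v : inputs) {
          if (shutting_down.load(std::memory_order_acquire)) {
            break;  // abort promptly instead of waiting for an output value
          }
          (void)v;  // filter/merge work would happen here
          ++processed;
        }
        return processed;
      }
      ```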
      Closes https://github.com/facebook/rocksdb/pull/1639
      
      Differential Revision: D4306571
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: f050890
    • Guarding extra fallocate call with TRAVIS because it's not working pro… · 62384ebe
      Committed by Anirban Rahut
      Summary:
      …perly on Travis
      
       There is some old code in PosixWritableFile::Close() that
      truncates the file to the measured size and then does an extra fallocate
      with KEEP_SIZE. This is commented as a failsafe because in some
      cases ftruncate doesn't do the right job (I don't know of an instance of
      this, btw). Doing an fallocate with KEEP_SIZE should not increase
      the file size, yet on the Travis worker, which is Docker (likely AUFS),
      it is not working. There are comments on the web showing that the AUFS
      author had initially not implemented fallocate and added it later,
      so the quality of that implementation is unclear.
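      The failsafe being discussed can be sketched like this (a hypothetical, Linux-only helper; `CloseFailsafe` is not RocksDB's actual function):

      ```cpp
      #include <fcntl.h>
      #include <sys/stat.h>
      #include <unistd.h>

      // Hypothetical sketch of the Close() failsafe: truncate to the measured
      // size, then fallocate with KEEP_SIZE, which preallocates blocks but must
      // not change the reported file size. Returns true if the size is intact.
      bool CloseFailsafe(int fd, off_t measured_size) {
        if (ftruncate(fd, measured_size) != 0) return false;
      #if defined(FALLOC_FL_KEEP_SIZE)
        // On AUFS (as used by the Travis Docker workers) this call reportedly
        // misbehaved, which is why the commit guards it with a TRAVIS check.
        fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, measured_size + (1 << 20));
      #endif
        struct stat st;
        if (fstat(fd, &st) != 0) return false;
        return st.st_size == measured_size;  // KEEP_SIZE must not grow the file
      }
      ```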
      Closes https://github.com/facebook/rocksdb/pull/1765
      
      Differential Revision: D4401340
      
      Pulled By: anirbanr-fb
      
      fbshipit-source-id: e2d8100
    • Performance: Iterate vector by reference · 9f246298
      Committed by Changli Gao
      Summary: Closes https://github.com/facebook/rocksdb/pull/1763
      
      Differential Revision: D4398796
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: b82636d
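      The pattern behind this one-line class of change is worth spelling out (illustrative only; the function below is hypothetical, not the code touched by the PR):

      ```cpp
      #include <cstddef>
      #include <string>
      #include <vector>

      // Iterating by value copies each element; for non-trivial element types
      // (std::string here) iterating by const reference avoids one copy per
      // element, which is the point of the commit above.
      size_t TotalLength(const std::vector<std::string>& names) {
        size_t total = 0;
        for (const auto& s : names) {  // `const auto&`, not `auto`: no copy
          total += s.size();
        }
        return total;
      }
      ```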
  7. 11 Jan 2017 — 2 commits
    • Allow incrementing refcount on cache handles · fe395fb6
      Committed by Andrew Kryczka
      Summary:
      Previously the only way to increment a handle's refcount was to invoke Lookup(), which (1) did a hash table lookup to get the cache handle, and (2) incremented that handle's refcount. For a future DeleteRange optimization, I added a function, Ref(), for when the caller already has a cache handle and only needs to do (2).
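      The distinction between Lookup() and the new Ref() can be sketched with a toy cache (stdlib-only; not RocksDB's LRUCache, and all names below are hypothetical):

      ```cpp
      #include <string>
      #include <unordered_map>

      // Toy cache illustrating the two ways to take a reference:
      // Lookup() = (1) hash-table lookup + (2) refcount increment;
      // Ref()    = just (2), for callers that already hold the handle.
      struct Handle {
        std::string value;
        int refs = 0;
      };

      struct ToyCache {
        std::unordered_map<std::string, Handle> table;

        Handle* Lookup(const std::string& key) {
          auto it = table.find(key);       // (1)
          if (it == table.end()) return nullptr;
          ++it->second.refs;               // (2)
          return &it->second;
        }
        void Ref(Handle* h) { ++h->refs; }      // skip the hash lookup
        void Release(Handle* h) { --h->refs; }
      };
      ```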
      Closes https://github.com/facebook/rocksdb/pull/1761
      
      Differential Revision: D4397114
      
      Pulled By: ajkr
      
      fbshipit-source-id: 9addbe5
    • Fix build on FreeBSD · 2172b660
      Committed by Sunpoet Po-Chuan Hsieh
      Summary:
      ```
        CC       utilities/column_aware_encoding_exp.o
      utilities/column_aware_encoding_exp.cc:149:5: error: use of undeclared identifier 'exit'
          exit(1);
          ^
      utilities/column_aware_encoding_exp.cc:154:5: error: use of undeclared identifier 'exit'
          exit(1);
          ^
      utilities/column_aware_encoding_exp.cc:158:5: error: use of undeclared identifier 'exit'
          exit(1);
          ^
      3 errors generated.
      ```
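      The errors above are the classic symptom of a missing `<cstdlib>`/`<stdlib.h>` include, which FreeBSD's standard headers do not pull in transitively the way glibc often does. A minimal illustration (hypothetical helper, not the actual code in the failing file):

      ```cpp
      #include <cstdlib>  // declares exit(); without it the FreeBSD build fails as above

      // Hypothetical helper mirroring the usage-check pattern in the failing file.
      void RequireArgs(int argc, int required) {
        if (argc < required) {
          exit(1);  // compiles only once <cstdlib> (or <stdlib.h>) is included
        }
      }
      ```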
      Closes https://github.com/facebook/rocksdb/pull/1754
      
      Differential Revision: D4399044
      
      Pulled By: IslamAbdelRahman
      
      fbshipit-source-id: fbab5a2
  8. 10 Jan 2017 — 4 commits
  9. 09 Jan 2017 — 2 commits
    • Revert "PinnableSlice" · d0ba8ec8
      Committed by Maysam Yabandeh
      Summary:
      This reverts commit 54d94e9c.
      
      The pull request was landed by mistake.
      Closes https://github.com/facebook/rocksdb/pull/1755
      
      Differential Revision: D4391678
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 36d5149
    • PinnableSlice · 54d94e9c
      Committed by Maysam Yabandeh
      Summary:
      Currently, point-lookup values are copied into a string provided by the user,
      which incurs an extra memcpy cost. This patch allows doing a point lookup
      via a PinnableSlice, which pins the source memory location (instead of
      copying its content) and releases it after the content is consumed
      by the user. The old Get(string) API is translated to the new API
      underneath.
      
       Here is the summary of the improvements:
       1. 100-byte values: 1.8% regular, 1.2% merge values
       2. 1k-byte values: 11.5% regular, 7.5% merge values
       3. 10k-byte values: 26% regular, 29.9% merge values

       The improvement for merge could be larger if we extend this approach to
       pin the merge output and delay the full merge operation until the user
       actually needs it. We leave that for future work.
      
      PS:
      Sometimes we observe a small decrease in performance when switching from
      t5452014 to this patch while still using the old Get(string) API. The
      difference is small and could be noise; more importantly, it is safely
      cancelled out.
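      A stdlib-only sketch of the pinning idea (an illustration of the mechanism, not RocksDB's actual PinnableSlice class):

      ```cpp
      #include <cstddef>
      #include <functional>
      #include <string>

      // Sketch: a slice that pins the source buffer and runs a release
      // callback when the consumer is done, instead of copying the value
      // into a user-provided string.
      class PinnableSliceSketch {
       public:
        void PinSlice(const char* data, size_t size, std::function<void()> release) {
          data_ = data;
          size_ = size;
          release_ = std::move(release);
          pinned_ = true;
        }
        std::string ToString() const { return std::string(data_, size_); }
        void Reset() {  // consumer is done; release the pinned source
          if (pinned_ && release_) release_();
          pinned_ = false;
          data_ = nullptr;
          size_ = 0;
        }
        ~PinnableSliceSketch() { Reset(); }

       private:
        const char* data_ = nullptr;
        size_t size_ = 0;
        bool pinned_ = false;
        std::function<void()> release_;
      };
      ```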
      Closes https://github.com/facebook/rocksdb/pull/1732
      
      Differential Revision: D4374613
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: a077f1a
  10. 07 Jan 2017 — 3 commits
  11. 06 Jan 2017 — 1 commit
    • Maintain position in range deletions map · b104b878
      Committed by Andrew Kryczka
      Summary:
      When deletion-collapsing mode is enabled (i.e., for DBIter/CompactionIterator), we maintain position in the tombstone maps across calls to ShouldDelete(). Since iterators often access keys sequentially (or reverse-sequentially), scanning forward/backward from the last position can be faster than binary-searching the map for every key.
      
      - When Next() is invoked on an iterator, we use kForwardTraversal to scan forwards, if needed, until arriving at the range deletion containing the next key.
      - Similarly for Prev(), we use kBackwardTraversal to scan backwards in the range deletion map.
      - When the iterator seeks, we use kBinarySearch for repositioning.
      - After tombstones are added or before the first ShouldDelete() invocation, the current position is set to invalid, which forces kBinarySearch to be used.
      - Non-iterator users (i.e., Get()) use kFullScan, which has the same behavior as before---scan the whole map for every key passed to ShouldDelete().
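      The position-maintenance idea can be sketched with integer "keys" and half-open tombstone ranges (simplified and hypothetical; RocksDB's RangeDelAggregator operates on internal keys and per-snapshot-stripe maps):

      ```cpp
      #include <algorithm>
      #include <cstddef>
      #include <vector>

      // Tombstones sorted by start key; sequential lookups advance a cached
      // position instead of binary-searching the map from scratch each time.
      struct Tombstone { int start, end; };  // covers [start, end)

      class PositionedDelAggregator {
       public:
        explicit PositionedDelAggregator(std::vector<Tombstone> ts)
            : ts_(std::move(ts)), pos_(0) {}

        // kForwardTraversal: scan forward from the cached position.
        bool ShouldDeleteForward(int key) {
          while (pos_ < ts_.size() && ts_[pos_].end <= key) ++pos_;
          return pos_ < ts_.size() && ts_[pos_].start <= key;
        }

        // kBinarySearch: used after a seek, when the cached position is stale.
        bool ShouldDeleteSeek(int key) {
          pos_ = std::upper_bound(
                     ts_.begin(), ts_.end(), key,
                     [](int k, const Tombstone& t) { return k < t.end; }) -
                 ts_.begin();
          return pos_ < ts_.size() && ts_[pos_].start <= key;
        }

       private:
        std::vector<Tombstone> ts_;
        size_t pos_;  // cached position into ts_
      };
      ```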
      Closes https://github.com/facebook/rocksdb/pull/1701
      
      Differential Revision: D4350318
      
      Pulled By: ajkr
      
      fbshipit-source-id: 5129b76
  12. 04 Jan 2017 — 5 commits
  13. 02 Jan 2017 — 1 commit
  14. 01 Jan 2017 — 1 commit
  15. 30 Dec 2016 — 2 commits
    • Delegate Cleanables · 0712d541
      Committed by Maysam Yabandeh
      Summary:
      Cleanable objects perform their registered cleanups when
      they are destructed. Sometimes, however, we would rather delay the cleaning,
      e.g. while gathering merge operands. The current approach is to create the
      Cleanable object on the heap (instead of on the stack) and delay deleting it.

      By allowing a Cleanable to delegate its cleanups to another Cleanable
      object, we can delay the cleaning without needing to create the
      Cleanable object on the heap and keep it around. This patch applies this
      technique to the cleanups of BlockIter and shows improved performance
      for some in-memory benchmarks:
      +1.8% for the merge workload, +6.4% for the non-merge workload when the merge
      operator is specified.
      https://our.intern.facebook.com/intern/tasks?t=15168163
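      The delegation trick can be illustrated with a stdlib-only sketch (hypothetical class; RocksDB's Cleanable stores its cleanups in an intrusive list rather than a vector):

      ```cpp
      #include <functional>
      #include <utility>
      #include <vector>

      // Sketch: instead of heap-allocating a cleanable just to delay its
      // cleanups, a stack-allocated one hands its registered cleanups to a
      // longer-lived delegate; they then run when the *delegate* dies.
      class CleanableSketch {
       public:
        void RegisterCleanup(std::function<void()> fn) {
          cleanups_.push_back(std::move(fn));
        }
        // Move our pending cleanups to `other`.
        void DelegateCleanupsTo(CleanableSketch* other) {
          for (auto& fn : cleanups_) other->cleanups_.push_back(std::move(fn));
          cleanups_.clear();
        }
        ~CleanableSketch() {
          for (auto& fn : cleanups_) fn();
        }

       private:
        std::vector<std::function<void()>> cleanups_;
      };
      ```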
      
      Non-merge benchmark:
      ```
      TEST_TMPDIR=/dev/shm/v100nocomp/ ./db_bench --benchmarks=fillrandom
      --num=1000000 -value_size=100 -compression_type=none
      ```

      Reading random with no merge operator specified:
      ```
      TEST_TMPDIR=/dev/shm/v100nocomp/ ./db_bench
      --benchmarks="read
      ```
      Closes https://github.com/facebook/rocksdb/pull/1711
      
      Differential Revision: D4361163
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 9801e07
    • Allow SstFileWriter to Fadvise the file away from page cache · d58ef52b
      Committed by Islam AbdelRahman
      Summary:
      Add `fadvise_trigger` option to `SstFileWriter`
      
      If `fadvise_trigger` is passed a non-zero value, SstFileWriter will invalidate the OS page cache every `fadvise_trigger` bytes for the sst file.
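      The trigger logic described above can be sketched like this (a hypothetical helper, not the SstFileWriter API itself; the real writer would issue `posix_fadvise(POSIX_FADV_DONTNEED)` over the written range when the trigger fires):

      ```cpp
      #include <cstddef>

      // Sketch: every `trigger` bytes appended, signal that the written range
      // should be dropped from the page cache. A trigger of 0 disables it.
      class FadviseTrigger {
       public:
        explicit FadviseTrigger(size_t trigger) : trigger_(trigger) {}

        // Returns true when an fadvise call would be issued for this write.
        bool OnAppend(size_t bytes) {
          if (trigger_ == 0) return false;  // feature disabled
          written_ += bytes;
          if (written_ - last_fadvise_ >= trigger_) {
            last_fadvise_ = written_;
            return true;
          }
          return false;
        }

       private:
        size_t trigger_;
        size_t written_ = 0;
        size_t last_fadvise_ = 0;
      };
      ```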
      Closes https://github.com/facebook/rocksdb/pull/1731
      
      Differential Revision: D4371246
      
      Pulled By: IslamAbdelRahman
      
      fbshipit-source-id: 91caff1
  16. 29 Dec 2016 — 5 commits