1. 12 Jan 2017 (1 commit)
  2. 11 Jan 2017 (2 commits)
    • Allow incrementing refcount on cache handles · fe395fb6
      Committed by Andrew Kryczka
      Summary:
      Previously the only way to increment a handle's refcount was to invoke Lookup(), which (1) performed a hash table lookup to get the cache handle, and (2) incremented that handle's refcount. For a future DeleteRange optimization, I added a function, Ref(), for when the caller already has a cache handle and only needs to do (2).
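      For illustration, a minimal sketch of the intended usage, assuming the Cache::Lookup()/Ref()/Release() interface described above:
      ```cpp
      #include "rocksdb/cache.h"

      // Sketch: hand a cache handle to a second owner without paying a
      // redundant hash-table lookup. Assumes Cache::Ref(Handle*) as
      // introduced by this patch.
      void ShareHandle(rocksdb::Cache* cache, const rocksdb::Slice& key) {
        rocksdb::Cache::Handle* h = cache->Lookup(key);  // (1) hash lookup + ref
        if (h == nullptr) {
          return;  // not cached
        }
        cache->Ref(h);      // (2) refcount bump only; no second hash lookup
        // ... pass the extra reference to another component ...
        cache->Release(h);  // release the delegated reference
        cache->Release(h);  // release the original reference from Lookup()
      }
      ```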
      Closes https://github.com/facebook/rocksdb/pull/1761
      
      Differential Revision: D4397114
      
      Pulled By: ajkr
      
      fbshipit-source-id: 9addbe5
    • Fix build on FreeBSD · 2172b660
      Committed by Sunpoet Po-Chuan Hsieh
      Summary:
      ```
        CC       utilities/column_aware_encoding_exp.o
      utilities/column_aware_encoding_exp.cc:149:5: error: use of undeclared identifier 'exit'
          exit(1);
          ^
      utilities/column_aware_encoding_exp.cc:154:5: error: use of undeclared identifier 'exit'
          exit(1);
          ^
      utilities/column_aware_encoding_exp.cc:158:5: error: use of undeclared identifier 'exit'
          exit(1);
          ^
      3 errors generated.
      ```
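      The errors arise because exit() is declared in <cstdlib>, which GNU libstdc++ tends to pull in transitively while FreeBSD's libc++ does not. A sketch of what the one-line fix presumably looks like (the actual change is in the PR):
      ```cpp
      // utilities/column_aware_encoding_exp.cc
      #include <cstdlib>  // declares exit(); not included transitively on FreeBSD
      ```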
      Closes https://github.com/facebook/rocksdb/pull/1754
      
      Differential Revision: D4399044
      
      Pulled By: IslamAbdelRahman
      
      fbshipit-source-id: fbab5a2
  3. 10 Jan 2017 (4 commits)
  4. 09 Jan 2017 (2 commits)
    • Revert "PinnableSlice" · d0ba8ec8
      Committed by Maysam Yabandeh
      Summary:
      This reverts commit 54d94e9c.
      
      The pull request was landed by mistake.
      Closes https://github.com/facebook/rocksdb/pull/1755
      
      Differential Revision: D4391678
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 36d5149
    • PinnableSlice · 54d94e9c
      Committed by Maysam Yabandeh
      Summary:
      Currently the point lookup values are copied to a string provided by the
      user, which incurs an extra memcpy cost. This patch allows doing the point
      lookup via a PinnableSlice, which pins the source memory location (instead
      of copying its content) and releases it after the content is consumed by
      the user. The old Get(string) API is translated to the new API underneath.
      
       Here is the summary of improvements:
       1. value 100 bytes: 1.8% regular, 1.2% merge values
       2. value 1k bytes: 11.5% regular, 7.5% merge values
       3. value 10k bytes: 26% regular, 29.9% merge values
      
       The improvement for merge could be larger if we extended this approach to
       pin the merge output and delay the full merge operation until the user
       actually needs it. We have left that for future work.
      
      PS:
      Sometimes we observe a small decrease in performance when switching from
      t5452014 to this patch while still using the old Get(string) API. The
      difference is small and could be noise. More importantly, it is safely
      cancelled
      Closes https://github.com/facebook/rocksdb/pull/1732
      
      Differential Revision: D4374613
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: a077f1a
  5. 07 Jan 2017 (3 commits)
  6. 06 Jan 2017 (1 commit)
    • Maintain position in range deletions map · b104b878
      Committed by Andrew Kryczka
      Summary:
      When deletion-collapsing mode is enabled (i.e., for DBIter/CompactionIterator), we maintain position in the tombstone maps across calls to ShouldDelete(). Since iterators often access keys sequentially (or reverse-sequentially), scanning forward/backward from the last position can be faster than binary-searching the map for every key.
      
      - When Next() is invoked on an iterator, we use kForwardTraversal to scan forwards, if needed, until arriving at the range deletion containing the next key.
      - Similarly for Prev(), we use kBackwardTraversal to scan backwards in the range deletion map.
      - When the iterator seeks, we use kBinarySearch for repositioning (as sketched below).
      - After tombstones are added, or before the first ShouldDelete() invocation, the current position is set to invalid, which forces kBinarySearch to be used.
      - Non-iterator users (i.e., Get()) use kFullScan, which has the same behavior as before: scan the whole map for every key passed to ShouldDelete().
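      For illustration, a hedged sketch of the positioning logic; the simplified map type and helper below are hypothetical stand-ins for RangeDelAggregator's internals (kFullScan omitted):
      ```cpp
      #include <map>
      #include <string>

      // Hypothetical, simplified tombstone map for collapsed (non-overlapping)
      // ranges: start key -> end key.
      using TombstoneMap = std::map<std::string, std::string>;

      enum class Mode { kForwardTraversal, kBackwardTraversal, kBinarySearch };

      // 'pos' persists across calls, so sequentially increasing (or decreasing)
      // keys reuse it instead of paying a binary search per key.
      bool ShouldDelete(const TombstoneMap& map, TombstoneMap::const_iterator& pos,
                        const std::string& key, Mode mode) {
        if (mode == Mode::kBinarySearch || pos == map.end()) {
          pos = map.upper_bound(key);       // first tombstone starting after key
          if (pos == map.begin()) return false;
          --pos;                            // candidate with start <= key
        } else if (mode == Mode::kForwardTraversal) {
          while (pos != map.end() && pos->second <= key) ++pos;  // step forward
          if (pos == map.end()) return false;
        } else {  // kBackwardTraversal
          while (pos != map.begin() && pos->first > key) --pos;  // step backward
        }
        return pos->first <= key && key < pos->second;  // covered iff in [start, end)
      }
      ```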
      Closes https://github.com/facebook/rocksdb/pull/1701
      
      Differential Revision: D4350318
      
      Pulled By: ajkr
      
      fbshipit-source-id: 5129b76
  7. 04 Jan 2017 (5 commits)
  8. 02 Jan 2017 (1 commit)
  9. 01 Jan 2017 (1 commit)
  10. 30 Dec 2016 (2 commits)
    • Delegate Cleanables · 0712d541
      Committed by Maysam Yabandeh
      Summary:
      Cleanable objects perform their registered cleanups when they are
      destructed. Sometimes, however, we would rather delay this cleanup, for
      example while gathering merge operands. The current approach is to create
      the Cleanable object on the heap (instead of on the stack) and delay
      deleting it.
      
      By allowing Cleanables to delegate their cleanups to another Cleanable
      object, we can delay the cleanup without having to create the Cleanable
      object on the heap and keep it around. This patch applies this technique
      to the cleanups of BlockIter and shows improved performance in some
      in-memory benchmarks:
      +1.8% for the merge workload, +6.4% for the non-merge workload when the
      merge operator is specified.
      https://our.intern.facebook.com/intern/tasks?t=15168163
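      For illustration, a minimal sketch of the delegation idea, assuming the RegisterCleanup()/DelegateCleanupsTo() interface described above:
      ```cpp
      #include <cstdio>
      #include "rocksdb/iterator.h"  // Cleanable lived here at the time;
                                     // newer releases have rocksdb/cleanable.h

      static void ReleaseBlock(void* arg1, void* /*arg2*/) {
        std::printf("releasing block %p\n", arg1);
      }

      void Example(void* block, rocksdb::Cleanable* long_lived) {
        rocksdb::Cleanable short_lived;  // e.g., a stack-allocated BlockIter
        short_lived.RegisterCleanup(ReleaseBlock, block, nullptr);

        // Hand the pending cleanup to a longer-lived owner, so the block stays
        // valid after short_lived goes out of scope (e.g., while merge
        // operands are still being gathered).
        short_lived.DelegateCleanupsTo(long_lived);
      }  // short_lived destructs here without running ReleaseBlock
      ```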
      
      Non-merge benchmark:
      TEST_TMPDIR=/dev/shm/v100nocomp/ ./db_bench --benchmarks=fillrandom --num=1000000 -value_size=100 -compression_type=none
      
      Reading random with no merge operator specified:
      TEST_TMPDIR=/dev/shm/v100nocomp/ ./db_bench --benchmarks="read
      Closes https://github.com/facebook/rocksdb/pull/1711
      
      Differential Revision: D4361163
      
      Pulled By: maysamyabandeh
      
      fbshipit-source-id: 9801e07
    • Allow SstFileWriter to Fadvise the file away from page cache · d58ef52b
      Committed by Islam AbdelRahman
      Summary:
      Add `fadvise_trigger` option to `SstFileWriter`
      
      If `fadvise_trigger` is passed a non-zero value, SstFileWriter will invalidate the OS page cache every `fadvise_trigger` bytes for the sst file.
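      As a sketch of the underlying mechanism at the POSIX level (not RocksDB's exact code): every `fadvise_trigger` bytes, the range written since the last trigger is synced and dropped from the page cache.
      ```cpp
      #include <fcntl.h>
      #include <unistd.h>

      // Drop written-but-no-longer-needed pages from the OS page cache.
      // [last_synced, offset) is the range written since the previous trigger.
      void DropPageCache(int fd, off_t last_synced, off_t offset) {
        // POSIX_FADV_DONTNEED only drops clean pages, so sync the file first.
        (void)fdatasync(fd);
        (void)posix_fadvise(fd, last_synced, offset - last_synced,
                            POSIX_FADV_DONTNEED);
      }
      ```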
      Closes https://github.com/facebook/rocksdb/pull/1731
      
      Differential Revision: D4371246
      
      Pulled By: IslamAbdelRahman
      
      fbshipit-source-id: 91caff1
  11. 29 Dec 2016 (5 commits)
  12. 28 Dec 2016 (2 commits)
  13. 24 Dec 2016 (1 commit)
  14. 23 Dec 2016 (2 commits)
    • Print cache options to info log · ab48c165
      Committed by Yi Wu
      Summary:
      Improve logging of cache options to the info log. Also print the value of
      cache_index_and_filter_blocks_with_high_priority.
      Closes https://github.com/facebook/rocksdb/pull/1709
      
      Differential Revision: D4358776
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: 8f030a0
    • direct io write support · 972f96b3
      Committed by Aaron Gao
      Summary:
      rocksdb direct io support
      
      ```
      [gzh@dev11575.prn2 ~/rocksdb] ./db_bench -benchmarks=fillseq --num=1000000
      Initializing RocksDB Options from the specified file
      Initializing RocksDB Options from command-line flags
      RocksDB:    version 5.0
      Date:       Wed Nov 23 13:17:43 2016
      CPU:        40 * Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
      CPUCache:   25600 KB
      Keys:       16 bytes each
      Values:     100 bytes each (50 bytes after compression)
      Entries:    1000000
      Prefix:    0 bytes
      Keys per prefix:    0
      RawSize:    110.6 MB (estimated)
      FileSize:   62.9 MB (estimated)
      Write rate: 0 bytes/second
      Compression: Snappy
      Memtablerep: skip_list
      Perf Level: 1
      WARNING: Assertions are enabled; benchmarks unnecessarily slow
      ------------------------------------------------
      Initializing RocksDB Options from the specified file
      Initializing RocksDB Options from command-line flags
      DB path: [/tmp/rocksdbtest-112628/dbbench]
      fillseq      :       4.393 micros/op 227639 ops/sec;   25.2 MB/s
      
      [gzh@dev11575.prn2 ~/roc
      ```
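      For context, a hedged sketch of what direct I/O writes require at the POSIX level (buffer, offset, and size all aligned); this is not RocksDB's implementation:
      ```cpp
      #include <fcntl.h>
      #include <unistd.h>
      #include <cstdlib>
      #include <cstring>

      // O_DIRECT (Linux-specific flag) bypasses the OS page cache, but requires
      // the buffer address, file offset, and I/O size to be aligned, typically
      // to the logical block size (512 B) or page size (4 KB).
      bool DirectWrite(const char* path, const char* data, size_t len) {
        const size_t kAlign = 4096;
        int fd = ::open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) return false;
        size_t padded = ((len + kAlign - 1) / kAlign) * kAlign;  // round up
        void* buf = nullptr;
        if (posix_memalign(&buf, kAlign, padded) != 0) {  // aligned buffer
          ::close(fd);
          return false;
        }
        std::memset(buf, 0, padded);
        std::memcpy(buf, data, len);
        ssize_t n = ::write(fd, buf, padded);  // aligned size and address
        std::free(buf);
        ::close(fd);
        return n == static_cast<ssize_t>(padded);
      }
      ```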
      Closes https://github.com/facebook/rocksdb/pull/1564
      
      Differential Revision: D4241093
      
      Pulled By: lightmark
      
      fbshipit-source-id: 98c29e3
  15. 22 Dec 2016 (3 commits)
  16. 21 Dec 2016 (3 commits)
  17. 20 Dec 2016 (2 commits)
    • Collapse range deletions · 50e305de
      Committed by Andrew Kryczka
      Summary:
      Added a tombstone-collapsing mode to RangeDelAggregator, which eliminates overlap in the TombstoneMap. In this mode, we can check whether a tombstone covers a user key using upper_bound() (i.e., binary search). However, the tradeoff is the overhead to add tombstones is now higher, so at first I've only enabled it for range scans (compaction/flush/user iterators), where we expect a high number of calls to ShouldDelete() for the same tombstones. Point queries like Get() will still use the linear scan approach.
      
      Also in this diff I changed RangeDelAggregator's TombstoneMap to use multimap with user keys instead of map with internal keys. Callers sometimes provided ParsedInternalKey directly, from which it would've required string copying to derive an internal key Slice with which we could search the map.
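      For illustration, a hedged sketch of the collapsed-map coverage check, with a plain std::map standing in for the TombstoneMap:
      ```cpp
      #include <map>
      #include <string>

      // Collapsed map: non-overlapping ranges, start key -> end key. With no
      // overlap, at most one range can cover a key, so a single upper_bound()
      // (binary search) answers the coverage question.
      bool Covers(const std::map<std::string, std::string>& tombstones,
                  const std::string& user_key) {
        auto it = tombstones.upper_bound(user_key);  // first start > user_key
        if (it == tombstones.begin()) return false;  // nothing starts at/before
        --it;                                        // candidate: start <= user_key
        return user_key < it->second;                // covered iff user_key < end
      }
      ```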
      Closes https://github.com/facebook/rocksdb/pull/1614
      
      Differential Revision: D4270397
      
      Pulled By: ajkr
      
      fbshipit-source-id: 93092c7
    • Dump persistent cache options · 5d1457db
      Committed by Yi Wu
      Summary:
      Dump persistent cache options
      Closes https://github.com/facebook/rocksdb/pull/1679
      
      Differential Revision: D4337019
      
      Pulled By: yiwu-arbug
      
      fbshipit-source-id: 3812f8a