1. 21 May 2014 · 1 commit
  2. 20 May 2014 · 1 commit
    • ThreadPool: allow decreasing the number of threads, and instantly schedule work when the number of threads is increased · 3df07d17
      Committed by sdong
      
      Summary:
      Add a feature to decrease the number of threads in the thread pool.
      Also, when the number of threads is increased, the new threads are instantly scheduled to pick up work.
      
      Here is how it is implemented: each background thread knows its own thread ID. After the number of threads is decreased, all threads are woken up. The thread with the largest thread ID terminates. If there are more threads to terminate, that thread wakes up all threads again before exiting.
      
      Another change is made so that when the number of threads is increased, more threads are created and all previously excess threads are woken up to do the work. (A minimal sketch of this scheme follows below.)
      
      Test Plan: Add a unit test.
      
      Reviewers: haobo, dhruba
      
      Reviewed By: haobo
      
      CC: yhchiang, igor, nkg-, leveldb
      
      Differential Revision: https://reviews.facebook.net/D18675
      3df07d17
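
      A minimal sketch of the termination scheme described above, using assumed names (ThreadPoolSketch, BGThread, total_threads_) rather than the actual RocksDB ThreadPool code: every worker knows its thread ID; lowering the thread count wakes all workers, the workers whose IDs are now out of range exit, and each exiting worker wakes the rest so the next one can follow.

        #include <condition_variable>
        #include <mutex>

        class ThreadPoolSketch {
         public:
          void SetBackgroundThreads(int num) {
            std::lock_guard<std::mutex> lock(mu_);
            total_threads_ = num;
            cv_.notify_all();  // wake everyone so excess workers can exit
          }

          // Loop run by the worker that was created with this thread_id.
          void BGThread(int thread_id) {
            while (true) {
              std::unique_lock<std::mutex> lock(mu_);
              cv_.wait(lock, [&] { return thread_id >= total_threads_ || HasWork(); });
              if (thread_id >= total_threads_) {  // threads with the largest IDs go first
                cv_.notify_all();                 // let the next excess worker wake and exit
                return;
              }
              // ... pop one queued background job and run it here ...
            }
          }

         private:
          bool HasWork() const { return false; }  // placeholder for a real queue check
          std::mutex mu_;
          std::condition_variable cv_;
          int total_threads_ = 0;
        };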
  3. 22 Apr 2014 · 3 commits
  4. 11 Apr 2014 · 2 commits
  5. 10 Apr 2014 · 1 commit
    • Turn on -Wmissing-prototypes · 4daea663
      Committed by Igor Canadi
      Summary: Compiling for iOS turns on -Wmissing-prototypes by default, which causes rocksdb to fail to compile. This diff turns on -Wmissing-prototypes in our compile options as well and cleans up all functions with missing prototypes. (A small illustration of this kind of cleanup follows below.)
      
      Test Plan: compiles
      
      Reviewers: dhruba, haobo, ljin, sdong
      
      Reviewed By: ljin
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D17649
      4daea663
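
      A small, hypothetical illustration of the kind of cleanup this diff performs (the function and file names are made up, not taken from the actual change): -Wmissing-prototypes fires for any non-static function definition that has no prior declaration, and the usual fixes are internal linkage or a declaration in a header.

        #include <string>

        // Before: "warning: no previous prototype for 'SanitizePath'" when the
        // definition sits in a .cc file with no matching header declaration.
        //
        // Fix 1: give the helper internal linkage if it is only used in this file.
        namespace {
        std::string SanitizePath(const std::string& path) {
          return path.empty() ? "." : path;
        }
        }  // anonymous namespace

        // Fix 2 (alternative): declare the prototype in a header that this .cc
        // file includes, so callers and the definition agree on the signature.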
  6. 27 Mar 2014 · 1 commit
  7. 22 Mar 2014 · 1 commit
    • Fix data corruption by LogBuffer · 83ab62e2
      Committed by sdong
      Summary: LogBuffer::AddLogToBuffer() uses vsnprintf() in the wrong way, which might cause a buffer overflow when a log line is too long. Fix it. (A sketch of the safe vsnprintf pattern follows below.)
      
      Test Plan: Add a unit test to cover most of LogBuffer's logic.
      
      Reviewers: igor, haobo, dhruba
      
      Reviewed By: igor
      
      CC: ljin, yhchiang, leveldb
      
      Differential Revision: https://reviews.facebook.net/D17103
      83ab62e2
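
      A minimal sketch of the safe vsnprintf() pattern, not the actual LogBuffer code (AppendToBuffer and its parameters are assumptions): the key point is that vsnprintf() returns the length the output would have had even when it truncates, so that value must be clamped before it is used as an offset.

        #include <cstdarg>
        #include <cstddef>
        #include <cstdio>

        // Appends a formatted message at *offset without ever writing past buf_size.
        void AppendToBuffer(char* buf, size_t buf_size, size_t* offset,
                            const char* format, ...) {
          if (*offset >= buf_size) return;          // buffer already full
          size_t avail = buf_size - *offset;
          va_list ap;
          va_start(ap, format);
          int n = vsnprintf(buf + *offset, avail, format, ap);
          va_end(ap);
          if (n < 0) return;                        // formatting error
          // vsnprintf truncates but reports the untruncated length; clamp it so
          // *offset never moves past the terminating NUL it wrote.
          size_t written = static_cast<size_t>(n);
          *offset += (written < avail) ? written : avail - 1;
        }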
  8. 15 Mar 2014 · 2 commits
  9. 12 Mar 2014 · 1 commit
  10. 07 Mar 2014 · 1 commit
  11. 06 Mar 2014 · 1 commit
    • Make sure GetUniqueID related tests run on "regular" storage · abeee9f2
      Committed by Kai Liu
      Summary:
      With tmpfs or ramfs, unit tests related to GetUniqueID() failed because
      the underlying ioctl call does not work on these file systems at all.
      
      I fixed this issue and made sure all related tests run on "regular"
      storage (disk or flash). (A sketch of the failing ioctl path follows below.)
      
      Test Plan: TEST_TMPDIR=/dev/shm make check -j32
      
      Reviewers: igor, dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D16593
      abeee9f2
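
      A hedged sketch of why the ioctl-based unique ID fails on tmpfs/ramfs (this is not the exact RocksDB GetUniqueId() code): FS_IOC_GETVERSION is simply not supported there, so the call fails and there is nothing stable for the test to compare, which is why the tests are pointed at regular disk or flash storage instead.

        #include <linux/fs.h>
        #include <sys/ioctl.h>

        // Returns false on file systems such as tmpfs/ramfs, where the
        // FS_IOC_GETVERSION ioctl is not implemented (typically errno == ENOTTY).
        bool GetFileVersion(int fd, long* version) {
          return ioctl(fd, FS_IOC_GETVERSION, version) == 0;
        }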
  12. 17 Nov 2013 · 1 commit
  13. 17 Oct 2013 · 1 commit
  14. 16 Oct 2013 · 2 commits
  15. 10 Oct 2013 · 1 commit
    • Env class that can randomly read and write · d0beadd4
      Committed by Igor Canadi
      Summary: I have implemented the basic, simple use case that I need for the External Value Store I'm working on. There is potential to make this prettier by refactoring/combining WritableFile and RandomAccessFile, avoiding some copy-paste. However, I decided to implement just the basic functionality, so I can continue working on the other diff. (A sketch of such a random read/write file handle follows below.)
      
      Test Plan: Added a unittest
      
      Reviewers: dhruba, haobo, kailiu
      
      Reviewed By: haobo
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D13365
      d0beadd4
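
      A minimal sketch of the kind of file handle this diff adds to Env; the class and method names here are assumptions, not the exact API, but the idea is a single handle that supports positioned reads and writes via pread()/pwrite().

        #include <sys/types.h>
        #include <unistd.h>
        #include <string>

        class PosixRandomRWFileSketch {
         public:
          explicit PosixRandomRWFileSketch(int fd) : fd_(fd) {}

          // Write data at the given offset without moving any file cursor.
          bool Write(off_t offset, const std::string& data) {
            return pwrite(fd_, data.data(), data.size(), offset) ==
                   static_cast<ssize_t>(data.size());
          }

          // Read up to n bytes at the given offset into *result.
          bool Read(off_t offset, size_t n, std::string* result) {
            result->resize(n);
            ssize_t r = pread(fd_, &(*result)[0], n, offset);
            if (r < 0) return false;
            result->resize(static_cast<size_t>(r));
            return true;
          }

         private:
          int fd_;
        };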
  16. 05 Oct 2013 · 1 commit
  17. 24 Sep 2013 · 1 commit
  18. 14 Sep 2013 · 1 commit
    • [RocksDB] fix build env_test · 88664480
      Committed by Haobo Xu
      Summary: Move the TwoPools test to the end of the thread-related tests. Otherwise, the SetBackgroundThreads call would increase the LOW pool size and affect the results of other tests.
      
      Test Plan: make env_test; ./env_test
      
      Reviewers: dhruba, emayanke, xjin
      
      Reviewed By: xjin
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12939
      88664480
  19. 13 Sep 2013 · 1 commit
    • [RocksDB] Enhance Env to support two thread pools LOW and HIGH · 1565dab8
      Committed by Haobo Xu
      Summary:
      This is the groundwork for separating memtable flush jobs into their own thread pool.
      Both SetBackgroundThreads and Schedule take an additional Priority parameter to indicate which thread pool they operate on. The names LOW and HIGH are just identifiers for two different thread pools and do not indicate a real difference in 'priority'. The number of threads in each pool can be set independently. (A short usage sketch follows below.)
      The thread pool implementation is refactored.
      
      Test Plan: make check
      
      Reviewers: dhruba, emayanke
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D12885
      1565dab8
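
      A short usage sketch of the two-pool API described above. SetBackgroundThreads(int, Priority) and Schedule(..., Priority) are the entry points named in this commit; the namespace and header path are shown as in later RocksDB releases and may differ slightly from the tree at this revision.

        #include "rocksdb/env.h"

        void ConfigurePools(rocksdb::Env* env) {
          // LOW and HIGH name two independent pools; they do not imply OS priority.
          env->SetBackgroundThreads(4, rocksdb::Env::LOW);   // e.g. compactions
          env->SetBackgroundThreads(1, rocksdb::Env::HIGH);  // e.g. memtable flushes
        }

        void RunFlush(void* /*arg*/) { /* flush a memtable here */ }

        void ScheduleFlush(rocksdb::Env* env, void* arg) {
          env->Schedule(&RunFlush, arg, rocksdb::Env::HIGH);
        }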
  20. 24 Aug 2013 · 1 commit
  21. 13 Jun 2013 · 1 commit
    • [RocksDB] cleanup EnvOptions · bdf10859
      Committed by Haobo Xu
      Summary:
      This diff simplifies EnvOptions by treating it as POD, similar to Options.
      - virtual functions are removed and member fields are accessed directly.
      - StorageOptions is removed.
      - Options.allow_readahead and Options.allow_readahead_compactions are deprecated.
      - Unused global variables are removed: useOsBuffer, useFsReadAhead, useMmapRead, useMmapWrite
      (A sketch of the resulting POD-style struct follows below.)
      
      Test Plan: make check; db_stress
      
      Reviewers: dhruba
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D11175
      bdf10859
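
      A minimal sketch of the POD-style shape this cleanup moves EnvOptions toward; the field names mirror the removed globals but should be treated as illustrative rather than the exact members.

        // Plain public members replace the old virtual accessors and globals.
        struct EnvOptionsSketch {
          bool use_os_buffer = true;     // formerly useOsBuffer
          bool use_mmap_reads = false;   // formerly useMmapRead
          bool use_mmap_writes = true;   // formerly useMmapWrite
        };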
  22. 21 Mar 2013 · 1 commit
    • Ability to configure bufferedio-reads, filesystem-readaheads and mmap-read-write per database. · ad96563b
      Committed by Dhruba Borthakur
      Summary:
      This patch allows an application to specify whether to use buffered I/O,
      reads via mmap, and writes via mmap on a per-database basis. Earlier, a
      global static variable was used to configure this functionality.
      
      The default setting remains the same (and is backward compatible):
       1. use bufferedio
       2. do not use mmaps for reads
       3. use mmap for writes
       4. use readaheads for reads needed for compaction
      
      I also added a parameter to db_bench to explicitly specify whether to do
      readaheads for compactions or not. (A usage sketch of the per-database
      settings follows below.)
      
      Test Plan: make check
      
      Reviewers: sheki, heyongqiang, MarkCallaghan
      
      Reviewed By: sheki
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D9429
      ad96563b
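
      A hedged usage sketch of the per-database settings replacing the old globals. The field names (allow_os_buffer, allow_mmap_reads, allow_mmap_writes, allow_readahead_compactions) and the rocksdb namespace are assumptions reconstructed from this and related commit messages, not verified against the tree at this revision.

        #include "rocksdb/options.h"

        rocksdb::Options MakeOptions() {
          rocksdb::Options options;
          options.allow_os_buffer = true;              // 1. use buffered I/O
          options.allow_mmap_reads = false;            // 2. do not mmap reads
          options.allow_mmap_writes = true;            // 3. mmap writes
          options.allow_readahead_compactions = true;  // 4. readahead for compaction reads
          return options;
        }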
  23. 01 Mar 2013 · 1 commit
  24. 01 Feb 2013 · 1 commit
    • Fixed cache key for block cache · 4dcc0c89
      Committed by Kosie van der Merwe
      Summary:
      Added a function to `RandomAccessFile` to generate a unique ID for that file. Currently only `PosixRandomAccessFile` implements this behaviour, and only on Linux.
      
      Changed how the key is generated in `Table::BlockReader`.
      
      Added tests to check that the unique ID is stable, unique and not a prefix of another unique ID. Added tests to see that `Table` uses the cache more efficiently. (A sketch of the resulting cache-key layout follows below.)
      
      Test Plan: make check
      
      Reviewers: chip, vamsi, dhruba
      
      Reviewed By: chip
      
      CC: leveldb
      
      Differential Revision: https://reviews.facebook.net/D8145
      4dcc0c89
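
      A minimal sketch of the cache-key idea described above; the layout is an assumption rather than the exact Table::BlockReader code. Because each file's unique ID is itself guaranteed not to be a prefix of another file's ID, appending a fixed-width block offset yields keys that never collide across files.

        #include <cstdint>
        #include <cstring>
        #include <string>

        std::string MakeBlockCacheKey(const std::string& file_unique_id,
                                      uint64_t block_offset) {
          std::string key = file_unique_id;   // e.g. derived from device/inode/version
          char buf[sizeof(block_offset)];
          std::memcpy(buf, &block_offset, sizeof(block_offset));
          key.append(buf, sizeof(buf));       // fixed width, so offsets cannot bleed together
          return key;
        }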
  25. 07 Nov 2012 · 1 commit
  26. 26 Jan 2012 · 1 commit
  27. 01 Nov 2011 · 1 commit
    • A number of fixes: · 36a5f8ed
      Committed by Hans Wennborg
      - Replace raw slice comparison with a call to the user comparator.
        Added a test for custom comparators.
      
      - Fix end of namespace comments.
      
      - Fixed a bug in picking inputs for a level-0 compaction.
      
        When finding overlapping files, the covered range may expand
        as files are added to the input set.  We now correctly expand
        the range when this happens instead of continuing to use the
        old range.  For example, suppose L0 contains files with the
        following ranges:
      
            F1: a .. d
            F2:    c .. g
            F3:       f .. j
      
        and the initial compaction target is F3.  We used to search
        for range f..j which yielded {F2,F3}.  However we now expand
        the range as soon as another file is added.  In this case,
        when F2 is added, we expand the range to c..j and restart the
        search.  That picks up file F1 as well.
      
        This change fixes a bug related to deleted keys showing up
        incorrectly after a compaction, as described in Issue 44.
        (A sketch of the expanding search follows below.)
      
      (Sync with upstream @25072954)
      36a5f8ed
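
      A hedged sketch of the expanding search described above (not the actual VersionSet code; plain std::string comparison stands in for the user comparator): whenever an overlapping file widens the covered range, the scan restarts with the wider range, so in the F1/F2/F3 example a compaction started at F3 eventually picks up F2 and then F1.

        #include <string>
        #include <vector>

        struct FileRange { std::string smallest, largest; };

        std::vector<FileRange> PickOverlappingLevel0Inputs(
            const std::vector<FileRange>& level0,
            std::string begin, std::string end) {
          std::vector<FileRange> inputs;
          bool restart = true;
          while (restart) {
            restart = false;
            inputs.clear();
            for (const FileRange& f : level0) {
              if (f.largest < begin || f.smallest > end) continue;  // no overlap
              inputs.push_back(f);
              // The covered range grew: widen it and rescan from the start.
              if (f.smallest < begin) { begin = f.smallest; restart = true; break; }
              if (f.largest > end)    { end = f.largest;    restart = true; break; }
            }
          }
          return inputs;
        }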
  28. 20 Apr 2011 · 2 commits
  29. 19 Apr 2011 · 1 commit
  30. 13 Apr 2011 · 1 commit
  31. 31 Mar 2011 · 1 commit
  32. 19 Mar 2011 · 1 commit