1. 15 Nov 2017, 1 commit
    • Btrfs: add write_flags for compression bio · f82b7359
      Liu Bo authored
      The compression code path has only flagged bios with REQ_OP_WRITE, no
      matter where the bios came from, but a bio could be a sync write if
      fsync starts the writeback, or a normal writeback write if the wb
      kthread starts a periodic writeback.
      
      This breaks the rule that sync writes and writeback writes need to be
      differentiated from each other, because from the block layer's point
      of view all bios need to be recognized by these flags in order to do
      some management, e.g. throttling.
      
      This passes writeback_control to the compression write path so that
      it can send bios with the proper flags to the block layer.
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
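      A minimal userspace C sketch of the idea, assuming simplified
      stand-ins for the kernel's writeback_control and request flags (the
      names mirror the kernel's, but the definitions are illustrative, not
      the actual patch): the writeback_control that triggered the writeback
      decides whether a bio is tagged as a sync write.

        #include <stdio.h>

        /* Illustrative stand-ins for kernel types and flags. */
        enum sync_mode { WB_SYNC_NONE, WB_SYNC_ALL };
        struct writeback_control { enum sync_mode sync_mode; };
        #define REQ_SYNC (1u << 0)   /* placeholder bit for a sync write */

        /* Mirrors the purpose of the kernel's wbc_to_write_flags():
         * fsync-initiated writeback (WB_SYNC_ALL) yields a sync-flagged
         * bio, periodic kthread writeback yields a plain write. */
        static unsigned int wbc_to_write_flags(const struct writeback_control *wbc)
        {
                return wbc->sync_mode == WB_SYNC_ALL ? REQ_SYNC : 0;
        }

        int main(void)
        {
                struct writeback_control fsync_wb = { WB_SYNC_ALL };
                struct writeback_control periodic_wb = { WB_SYNC_NONE };

                printf("fsync writeback flags:    %#x\n",
                       wbc_to_write_flags(&fsync_wb));
                printf("periodic writeback flags: %#x\n",
                       wbc_to_write_flags(&periodic_wb));
                return 0;
        }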
  2. 02 Nov 2017, 1 commit
    • btrfs: allow to set compression level for zlib · f51d2b59
      David Sterba authored
      Preliminary support for setting the compression level for zlib; the
      following works:
      
      $ mount -o compress=zlib                # default
      $ mount -o compress=zlib0               # same
      $ mount -o compress=zlib9               # level 9, slower sync, less data
      $ mount -o compress=zlib1               # level 1, faster sync, more data
      $ mount -o remount,compress=zlib3       # level set by remount
      
      compress-force works the same way as compress. The level is visible
      in the same format in /proc/mounts. Setting the level via a file
      property does not work yet.
      
      Required patch: "btrfs: prepare for extensions in compression options"
      Signed-off-by: David Sterba <dsterba@suse.com>
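      A hypothetical standalone parser for the "zlib<N>" option syntax
      shown above (the in-kernel helper differs; the function name and
      error handling here are invented for illustration):

        #include <ctype.h>
        #include <stdio.h>
        #include <string.h>

        /* Level 0, or no digit at all, means "use the default level". */
        static int parse_zlib_level(const char *opt)
        {
                if (strncmp(opt, "zlib", 4) != 0)
                        return -1;              /* not a zlib option */
                if (opt[4] == '\0')
                        return 0;               /* "zlib": default */
                if (isdigit((unsigned char)opt[4]) && opt[5] == '\0')
                        return opt[4] - '0';    /* "zlib0" .. "zlib9" */
                return -1;                      /* malformed */
        }

        int main(void)
        {
                const char *opts[] = { "zlib", "zlib0", "zlib9", "zlib1" };
                for (int i = 0; i < 4; i++)
                        printf("%-5s -> level %d\n", opts[i],
                               parse_zlib_level(opts[i]));
                return 0;
        }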
  3. 18 Aug 2017, 1 commit
  4. 16 Aug 2017, 2 commits
    • Btrfs: add skeleton code for compression heuristic · c2fcdcdf
      Timofey Titovets authored
      Add skeleton code for compression heuristics. For now it iterates
      over all the pages but in the end always says "yes, compress please",
      i.e. it does not change the current behaviour.
      
      In the future we're going to add various heuristics to analyze the
      data. This patch can be used as a baseline for measuring
      effectiveness and performance.
      Signed-off-by: Timofey Titovets <nefelim4ag@gmail.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ enhanced changelog, modified comments ]
      Signed-off-by: David Sterba <dsterba@suse.com>
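      The shape of such a skeleton, as a hedged C sketch (the names and the
      page accessor are invented; the kernel code operates on struct page
      and inode ranges): walk every page, leave room to gather statistics
      later, and unconditionally answer "compress".

        #include <stdbool.h>
        #include <stddef.h>

        struct page;                    /* opaque stand-in */

        /* Invented accessor standing in for a page-cache lookup. */
        static const struct page *range_get_page(size_t index)
        {
                (void)index;
                return NULL;
        }

        static bool compress_heuristic(size_t first_index, size_t last_index)
        {
                for (size_t i = first_index; i <= last_index; i++) {
                        const struct page *page = range_get_page(i);
                        (void)page;     /* future: sample bytes, gather stats */
                }
                return true;            /* baseline: always try to compress */
        }

        int main(void)
        {
                return compress_heuristic(0, 7) ? 0 : 1;
        }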
    • btrfs: Add zstd support · 5c1aab1d
      Nick Terrell authored
      Add zstd compression and decompression support to BtrFS. zstd at its
      fastest level compresses almost as well as zlib, while offering much
      faster compression and decompression, approaching lzo speeds.
      
      I benchmarked btrfs with zstd compression against no compression, lzo
      compression, and zlib compression. I benchmarked two scenarios:
      copying a set of files to btrfs and then reading them back, and
      copying a tarball to btrfs, extracting it there, and then reading the
      extracted files.
      After every operation, I call `sync` and include the sync time.
      Between every pair of operations I unmount and remount the filesystem
      to avoid caching. The benchmark files can be found in the upstream
      zstd source repository under
      `contrib/linux-kernel/{btrfs-benchmark.sh,btrfs-extract-benchmark.sh}`
      [1] [2].
      
      I ran the benchmarks on an Ubuntu 14.04 VM with 2 cores and 4 GiB of
      RAM. The VM ran on a MacBook Pro with a 3.1 GHz Intel Core i7
      processor, 16 GB of RAM, and an SSD.
      
      The first compression benchmark is copying 10 copies of the unzipped
      Silesia corpus [3] into a BtrFS filesystem mounted with
      `-o compress-force=Method`. The decompression benchmark times how long
      it takes to `tar` all 10 copies into `/dev/null`. The compression ratio is
      measured by comparing the output of `df` and `du`. See the benchmark file
      [1] for details. I benchmarked multiple zstd compression levels, although
      the patch uses zstd level 1.
      
      | Method  | Ratio | Compression MB/s | Decompression MB/s |
      |---------|-------|------------------|---------------------|
      | None    |  0.99 |              504 |                 686 |
      | lzo     |  1.66 |              398 |                 442 |
      | zlib    |  2.58 |               65 |                 241 |
      | zstd 1  |  2.57 |              260 |                 383 |
      | zstd 3  |  2.71 |              174 |                 408 |
      | zstd 6  |  2.87 |               70 |                 398 |
      | zstd 9  |  2.92 |               43 |                 406 |
      | zstd 12 |  2.93 |               21 |                 408 |
      | zstd 15 |  3.01 |               11 |                 354 |
      
      The next benchmark first copies `linux-4.11.6.tar` [4] to btrfs. Then it
      measures the compression ratio, extracts the tar, and deletes the tar.
      Then it measures the compression ratio again, and `tar`s the extracted
      files into `/dev/null`. See the benchmark file [2] for details.
      
      | Method | Tar Ratio | Extract Ratio | Copy (s) | Extract (s) | Read (s) |
      |--------|-----------|---------------|----------|------------|----------|
      | None   |      0.97 |          0.78 |    0.981 |      5.501 |    8.807 |
      | lzo    |      2.06 |          1.38 |    1.631 |      8.458 |    8.585 |
      | zlib   |      3.40 |          1.86 |    7.750 |     21.544 |   11.744 |
      | zstd 1 |      3.57 |          1.85 |    2.579 |     11.479 |    9.389 |
      
      [1] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/btrfs-benchmark.sh
      [2] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/btrfs-extract-benchmark.sh
      [3] http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia
      [4] https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.11.6.tar.xz
      
      zstd source repository: https://github.com/facebook/zstd
      Signed-off-by: Nick Terrell <terrelln@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
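      The kernel integration cannot be reproduced in a few lines, but the
      level-1 trade-off the tables show can be tried with the userspace
      libzstd one-shot API (compile with -lzstd; the repetitive input is
      just a stand-in for compressible data):

        #include <stdio.h>
        #include <stdlib.h>
        #include <zstd.h>

        int main(void)
        {
                /* Repetitive input compresses well even at level 1. */
                static char src[64 * 1024];
                for (size_t i = 0; i < sizeof(src); i++)
                        src[i] = "abcabcabd"[i % 9];

                size_t bound = ZSTD_compressBound(sizeof(src));
                void *dst = malloc(bound);
                if (!dst)
                        return 1;

                /* Level 1 is what the btrfs patch uses. */
                size_t csize = ZSTD_compress(dst, bound, src, sizeof(src), 1);
                if (ZSTD_isError(csize)) {
                        fprintf(stderr, "zstd: %s\n", ZSTD_getErrorName(csize));
                        free(dst);
                        return 1;
                }
                printf("ratio: %.2f (%zu -> %zu bytes)\n",
                       (double)sizeof(src) / csize, sizeof(src), csize);
                free(dst);
                return 0;
        }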
  5. 20 Jun 2017, 1 commit
  6. 09 Jun 2017, 1 commit
  7. 28 Feb 2017, 5 commits
  8. 30 Nov 2016, 1 commit
  9. 12 Mar 2016, 1 commit
  10. 17 Feb 2015, 1 commit
  11. 01 Dec 2014, 1 commit
  12. 07 May 2013, 1 commit
    • btrfs: make static code static & remove dead code · 48a3b636
      Eric Sandeen authored
      Big patch, but all it does is add statics to functions which
      are in fact static, then remove the associated dead-code fallout.
      
      removed functions:
      
      btrfs_iref_to_path()
      __btrfs_lookup_delayed_deletion_item()
      __btrfs_search_delayed_insertion_item()
      __btrfs_search_delayed_deletion_item()
      find_eb_for_page()
      btrfs_find_block_group()
      range_straddles_pages()
      extent_range_uptodate()
      btrfs_file_extent_length()
      btrfs_scrub_cancel_devid()
      btrfs_start_transaction_lflush()
      
      btrfs_print_tree() is left because it is used for debugging.
      btrfs_start_transaction_lflush() and btrfs_reada_detach() are
      left for symmetry.
      
      ulist.c functions are left, another patch will take care of those.
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
  13. 22 Mar 2012, 1 commit
  14. 02 May 2011, 1 commit
  15. 22 Dec 2010, 3 commits
    • btrfs: Extract duplicate decompress code · 3a39c18d
      Li Zefan authored
      Add a common function to copy decompressed data from the working
      buffer to bio pages.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
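      The shared helper's job, sketched in standalone C under invented
      names (the real function maps bio pages and handles offsets within
      the extent; this shows only the scatter loop every decompressor
      previously duplicated):

        #include <stddef.h>
        #include <stdio.h>
        #include <string.h>

        #define PAGE_SIZE 4096

        /* Scatter one contiguous working buffer across page-sized
         * destination chunks. */
        static void buf_to_pages(const char *buf, size_t len,
                                 char pages[][PAGE_SIZE], size_t npages)
        {
                for (size_t i = 0; i < npages && len > 0; i++) {
                        size_t chunk = len < PAGE_SIZE ? len : PAGE_SIZE;
                        memcpy(pages[i], buf, chunk);
                        buf += chunk;
                        len -= chunk;
                }
        }

        int main(void)
        {
                static char buf[PAGE_SIZE + PAGE_SIZE / 2];
                static char pages[2][PAGE_SIZE];

                buf_to_pages(buf, sizeof(buf), pages, 2);
                printf("copied %zu bytes into 2 pages\n", sizeof(buf));
                return 0;
        }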
    • btrfs: Add lzo compression support · a6fa6fae
      Li Zefan authored
      Lzo is a much faster compression algorithm than zlib, so it allows
      more users to enable transparent compression, and users can choose
      between compression ratio and speed for different applications.
      
      Usage:
      
       # mount -t btrfs -o compress[=<zlib,lzo>] dev /mnt
      or
       # mount -t btrfs -o compress-force[=<zlib,lzo>] dev /mnt
      
      "-o compress" without an argument is still allowed for compatibility.
      
      Compatibility:
      
      If we mount a filesystem with lzo compression, it will not be
      mountable by old kernels. One reason is that otherwise btrfs would
      dump compressed data, which sits in an inline extent, directly to the
      user.
      
      Performance:
      
      The test copied a linux source tarball (~400M) from an ext4 partition
      to the btrfs partition, and then extracted it.
      
      (time in seconds)
                 lzo        zlib        nocompress
      copy:      10.6       21.7        14.9
      extract:   70.1       94.4        66.6
      
      (data size in MB)
                 lzo        zlib        nocompress
      copy:      185.87     108.69      394.49
      extract:   193.80     132.36      381.21
      
      Changelog:
      
      v1 -> v2:
      - Select LZO_COMPRESS and LZO_DECOMPRESS in btrfs Kconfig.
      - Add incompatibility flag.
      - Fix error handling in compress code.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
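      Why the incompatibility flag matters, as a hedged sketch (the bit
      value and helper are invented, not btrfs's on-disk format): a kernel
      that sees an unknown incompat bit in the superblock must refuse the
      mount instead of misreading compressed inline extents.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define FEATURE_INCOMPAT_COMPRESS_LZO (1ULL << 3)  /* invented bit */

        /* What an old kernel, unaware of lzo, would support. */
        static const uint64_t supported_incompat = 0;

        static bool can_mount(uint64_t sb_incompat_flags)
        {
                /* Any bit we do not understand makes the fs unmountable. */
                return (sb_incompat_flags & ~supported_incompat) == 0;
        }

        int main(void)
        {
                printf("old kernel mounts lzo fs: %s\n",
                       can_mount(FEATURE_INCOMPAT_COMPRESS_LZO) ? "yes" : "no");
                return 0;
        }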
    • btrfs: Allow to add new compression algorithm · 261507a0
      Li Zefan authored
      Make the code aware of compression type, instead of always assuming
      zlib compression.
      
      Also turn the zlib workspace handling into common code for all
      compression types.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
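      A hedged sketch of the dispatch idea (the struct layout, field names,
      and stub ratios are invented, not btrfs's): each algorithm supplies
      the same operations, so callers select by compression type instead of
      hard-coding zlib, and adding an algorithm becomes one more table row.

        #include <stddef.h>
        #include <stdio.h>

        struct compress_ops {
                const char *name;
                int (*compress)(const void *src, size_t srclen,
                                void *dst, size_t *dstlen);
        };

        /* Stubs pretending fixed ratios, just to exercise the table. */
        static int zlib_stub(const void *s, size_t sl, void *d, size_t *dl)
        {
                (void)s; (void)d;
                *dl = sl / 2;           /* pretend 2:1 */
                return 0;
        }

        static int lzo_stub(const void *s, size_t sl, void *d, size_t *dl)
        {
                (void)s; (void)d;
                *dl = sl * 2 / 3;       /* pretend 1.5:1 */
                return 0;
        }

        enum compress_type { COMPRESS_ZLIB, COMPRESS_LZO, COMPRESS_NR };

        static const struct compress_ops ops_table[COMPRESS_NR] = {
                [COMPRESS_ZLIB] = { "zlib", zlib_stub },
                [COMPRESS_LZO]  = { "lzo",  lzo_stub },
        };

        int main(void)
        {
                char in[4096] = { 0 }, out[4096];

                for (int t = 0; t < COMPRESS_NR; t++) {
                        size_t outlen;

                        ops_table[t].compress(in, sizeof(in), out, &outlen);
                        printf("%s: %zu -> %zu\n", ops_table[t].name,
                               sizeof(in), outlen);
                }
                return 0;
        }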
  16. 30 Oct 2008, 1 commit
    • Btrfs: Add zlib compression support · c8b97818
      Chris Mason authored
      This is a large change for adding compression on reading and writing,
      both for inline and regular extents.  It does some fairly large
      surgery to the writeback paths.
      
      Compression is off by default and enabled by mount -o compress.  Even
      when the -o compress mount option is not used, it is possible to read
      compressed extents off the disk.
      
      If compression for a given set of pages fails to make them smaller,
      the file is flagged to avoid future compression attempts.
      
      * While finding delalloc extents, the pages are locked before being sent down
      to the delalloc handler.  This allows the delalloc handler to do complex things
      such as cleaning the pages, marking them writeback and starting IO on their
      behalf.
      
      * Inline extents are inserted at delalloc time now.  This allows us to compress
      the data before inserting the inline extent, and it allows us to insert
      an inline extent that spans multiple pages.
      
      * All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
      are changed to record both an in-memory size and an on disk size, as well
      as a flag for compression.
      
      From a disk format point of view, the extent pointers in the file are changed
      to record the on disk size of a given extent and some encoding flags.
      Space in the disk format is allocated for compression encoding, as well
      as encryption and a generic 'other' field.  Neither the encryption
      nor the 'other' field is currently used.
      
      In order to limit the amount of data read for a single random read in
      the file, the size of a compressed extent is limited to 128k.  This
      is a software-only limit; the disk format supports u64-sized
      compressed extents.
      
      In order to limit the ram consumed while processing extents, the
      uncompressed size of a compressed extent is limited to 256k.  This is
      a software-only limit and will be subject to tuning later.
      
      Checksumming is still done on compressed extents, and it is done on the
      uncompressed version of the data.  This way additional encodings can be
      layered on without having to figure out which encoding to checksum.
      
      Compression happens at delalloc time, which is basically single threaded because
      it is usually done by a single pdflush thread.  This makes it tricky to
      spread the compression load across all the cpus on the box.  We'll have to
      look at parallel pdflush walks of dirty inodes at a later time.
      
      Decompression is hooked into readpages and it does spread across CPUs nicely.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
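      The two limits in the text translate directly into a splitting rule;
      a minimal sketch using the numbers from the commit (the helper name
      is invented):

        #include <stdio.h>

        #define MAX_COMPRESSED_EXTENT   (128 * 1024)  /* on-disk cap */
        #define MAX_UNCOMPRESSED_EXTENT (256 * 1024)  /* ram cap */

        /* Each compressed extent covers at most MAX_UNCOMPRESSED_EXTENT
         * bytes of file data, so bigger ranges are split. */
        static unsigned long extents_needed(unsigned long long len)
        {
                return (len + MAX_UNCOMPRESSED_EXTENT - 1) /
                       MAX_UNCOMPRESSED_EXTENT;
        }

        int main(void)
        {
                printf("1 MiB write -> %lu compressed extents\n",
                       extents_needed(1024ULL * 1024));
                return 0;
        }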