1. 21 Aug 2017 (1 commit)
    • btrfs: Do not use data_alloc_cluster in ssd mode · 583b7231
      Authored by Hans van Kranenburg
          This patch provides a band aid to improve the 'out of the box'
      behaviour of btrfs for disks that are detected as being an ssd.  In a
      general purpose mixed workload scenario, the current ssd mode causes
      overallocation of available raw disk space for data, while leaving
      behind increasing amounts of unused fragmented free space. This
      situation leads to early ENOSPC problems which are harming user
      experience and adoption of btrfs as a general purpose filesystem.
      
      This patch modifies the data extent allocation behaviour of the ssd mode
      to make it behave identically to nossd mode.  The metadata behaviour and
      additional ssd_spread option stay untouched so far.
      
      Recommendations for future development are to reconsider the current
      oversimplified nossd / ssd distinction and the broken detection
      mechanism based on the rotational attribute in sysfs and provide
      experienced users with a more flexible way to choose allocator behaviour
      for data and metadata, optimized for certain use cases, while keeping
      sane 'out of the box' default settings.  The internals of the current
      btrfs code have more potential than what currently gets exposed to the
      user to choose from.
      
          The SSD story...
      
          In the first year of btrfs development, around early 2008, btrfs
      gained a mount option which enables specific functionality for
      filesystems on solid state devices. The first occurrence of this
      functionality is in commit e18e4809, labeled "Add mount -o ssd, which
      includes optimizations for seek free storage".
      
      The effect on allocating free space for doing (data) writes is to
      'cluster' writes together, writing them out in contiguous space, as
      opposed to a 'tetris' way of putting all separate writes into any free
      space fragment that fits (which is what the -o nossd behaviour does).
      
      A somewhat simplified explanation of what happens is that, when for
      example, the 'cluster' size is set to 2MiB, when we do some writes, the
      data allocator will search for a free space block that is 2MiB big, and
      put the writes in there. The ssd mode itself might allow a 2MiB cluster
      to be composed of multiple free space extents with some existing data in
      between, while the additional ssd_spread mount option kills off this
      option and requires fully free space.
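
      Purely as an illustration of the difference (this is not btrfs code,
      just a toy model of the two strategies over a list of free-space
      extents):

        #include <stdint.h>     /* uint64_t */
        #include <stddef.h>     /* NULL */

        struct free_extent { uint64_t start; uint64_t len; };

        /* 'ssd' / cluster style: only accept a hole of at least cluster_size
         * (e.g. 2MiB) for the whole write burst; otherwise give up and let
         * the caller allocate a brand new chunk. */
        static struct free_extent *find_cluster(struct free_extent *ext, int n,
                                                uint64_t cluster_size)
        {
                for (int i = 0; i < n; i++)
                        if (ext[i].len >= cluster_size)
                                return &ext[i];
                return NULL;
        }

        /* 'nossd' / tetris style: accept any fragment the write fits in. */
        static struct free_extent *find_first_fit(struct free_extent *ext, int n,
                                                  uint64_t write_size)
        {
                for (int i = 0; i < n; i++)
                        if (ext[i].len >= write_size)
                                return &ext[i];
                return NULL;
        }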
      
      The idea behind this is (commit 536ac8ae): "The [...] clusters make it
      more likely a given IO will completely overwrite the ssd block, so it
      doesn't have to do an internal rwm cycle."; ssd block meaning nand erase
      block. So, effectively this means applying a "locality based algorithm"
      and trying to outsmart the actual ssd.
      
      Since then, various changes have been made to the involved code, but the
      basic idea is still present, and gets activated whenever the ssd mount
      option is active. This also happens by default, when the rotational flag
      as seen at /sys/block/<device>/queue/rotational is set to 0.
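
      A hedged sketch of roughly what that auto-detection looks like
      (simplified from the device-open and mount paths; not a verbatim copy
      of the kernel code):

        #include <linux/blkdev.h>   /* bdev_get_queue(), blk_queue_nonrot() */

        /* While opening member devices: remember whether any of them spins. */
        static void note_rotational(struct btrfs_fs_devices *fs_devices,
                                    struct block_device *bdev)
        {
                struct request_queue *q = bdev_get_queue(bdev);

                if (!blk_queue_nonrot(q))
                        fs_devices->rotating = 1;
        }

        /* At mount time: no rotating member and no explicit -o nossd means
         * the ssd option is switched on automatically. */
        static void maybe_enable_ssd(struct btrfs_fs_info *fs_info,
                                     struct btrfs_fs_devices *fs_devices)
        {
                if (!btrfs_test_opt(fs_info, NOSSD) && !fs_devices->rotating)
                        btrfs_set_opt(fs_info->mount_opt, SSD);
        }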
      
          However, there are a number of problems with this approach.
      
          First, what the optimization is trying to do is outsmart the ssd by
      assuming there is a relation between the physical address space of the
      block device as seen by btrfs and the actual physical storage of the
      ssd, and then adjusting data placement. However, since the introduction
      of the Flash Translation Layer (FTL) which is a part of the internal
      controller of an ssd, these attempts are futile. The use of good quality
      FTL in consumer ssd products might have been limited in 2008, but this
      situation has changed drastically soon after that time. Today, even the
      flash memory in your automatic cat feeding machine or your grandma's
      wheelchair has a full-featured one.
      
      Second, the behaviour as described above results in the filesystem being
      filled up with badly fragmented free space extents because of relatively
      small pieces of space that are freed up by deletes, but not selected
      again as part of a 'cluster'. Since the algorithm prefers allocating a
      new chunk over going back to tetris mode, the end result is a filesystem
      in which all raw space is allocated, but which is composed of
      underutilized chunks with a 'shotgun blast' pattern of fragmented free
      space. Usually, the next problematic thing that happens is the
      filesystem wanting to allocate new space for metadata, which causes the
      filesystem to fail in spectacular ways.
      
      Third, the default mount options you get for an ssd ('ssd' mode enabled,
      'discard' not enabled), in combination with spreading out writes over
      the full address space and ignoring freed up space leads to worst case
      behaviour in providing information to the ssd itself, since it will
      never learn that all the free space left behind is actually free.  There
      are two ways to let an ssd know previously written data does not have to
      be preserved, which are sending explicit signals using discard or
      fstrim, or by simply overwriting the space with new data.  The worst
      case behaviour is the btrfs ssd_spread mount option in combination with
      not having discard enabled. It has the side effect of minimizing the
      reuse of free space that was previously written to.
      
      Fourth, the rotational flag in /sys/ does not reliably indicate if the
      device is a locally attached ssd. For example, iSCSI or NBD devices
      display as non-rotational, while a loop device on an ssd shows up as
      rotational.
      
      The combination of the second and third problem effectively means that
      despite all the good intentions, the btrfs ssd mode reliably causes the
      ssd hardware and the filesystem structures and performance to be choked
      to death. The clickbait version of the title of this story would have
      been "Btrfs ssd optimizations considered harmful for ssds".
      
      The current nossd 'tetris' mode (even still without discard) allows a
      pattern of overwriting much more previously used space, causing many
      more implicit discards to happen because of the overwrite information
      the ssd gets. The actual location in the physical address space, as seen
      from the point of view of btrfs is irrelevant, because the actual writes
      to the low level flash are reordered anyway thanks to the FTL.
      
          Changes made in the code
      
      1. Make ssd mode data allocation identical to tetris mode, like nossd.
      2. Adjust and clean up filesystem mount messages so that we can easily
      identify whether a kernel has this patch applied when providing support
      to end users. Also, make better use of the *_and_info helpers to only
      trigger messages on actual state changes (see the sketch below).
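
      A minimal sketch of the *_and_info pattern referred to in point 2,
      assuming the existing btrfs_set_and_info()/btrfs_clear_and_info()
      helpers (the message strings and the exact set of options touched are
      illustrative):

        /* Inside the mount option parsing switch: the helper only prints
         * when the option actually changes state, so a remount with the
         * same options does not repeat the message. */
        case Opt_ssd:
                btrfs_set_and_info(info, SSD, "enabling ssd optimizations");
                btrfs_clear_opt(info->mount_opt, NOSSD);
                break;
        case Opt_nossd:
                btrfs_set_and_info(info, NOSSD, "not using ssd optimizations");
                btrfs_clear_opt(info->mount_opt, SSD);
                break;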
      
          Backporting notes
      
      Notes for whoever wants to backport this patch to their 4.9 LTS kernel:
      * First apply commit 951e7966 "btrfs: drop the nossd flag when
        remounting with -o ssd", or fixup the differences manually.
      * The rest of the conflicts are because of the fs_info refactoring. So,
        for example, instead of using fs_info, it's root->fs_info in
        extent-tree.c
      Signed-off-by: Hans van Kranenburg <hans.van.kranenburg@mendix.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  2. 16 Aug 2017 (6 commits)
    • btrfs: prepare for extensions in compression options · a7164fa4
      Authored by David Sterba
      This is a minimal patch intended to be backported to older kernels.
      We're going to extend the string specifying the compression method and
      this would fail on kernels before that change (the string is compared
      exactly).
      
      Relax the string matching only to the prefix, i.e. ignoring anything that
      goes after "zlib" or "lzo", regardless of the format extension we decide
      to use. This applies to the mount options and properties.
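
      In practice this amounts to a strcmp-to-strncmp relaxation; a hedged
      sketch of the option parsing (simplified, with a hypothetical level
      suffix as the motivating example):

        /* Before: exact match only, so an extended string from a newer
         * kernel (e.g. a level suffix appended to "zlib") is rejected. */
        if (strcmp(value, "zlib") == 0)
                compress_type = BTRFS_COMPRESS_ZLIB;

        /* After: match only the prefix and ignore whatever follows it. */
        if (strncmp(value, "zlib", 4) == 0)
                compress_type = BTRFS_COMPRESS_ZLIB;
        else if (strncmp(value, "lzo", 3) == 0)
                compress_type = BTRFS_COMPRESS_LZO;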
      
      That way, patched old kernels could be booted on systems already
      utilizing the new compression spec.
      
      Applicable since commit 63541927, v3.14.
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use GFP_KERNEL in mount and remount · 3ec83621
      Authored by David Sterba
      We don't need to restrict the allocation flags in btrfs_mount or
      _remount. No big filesystem locks are held (possibly s_umount, but that
      does not count here).
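
      A hedged before/after sketch of the kind of call site this affects
      (the variable names are illustrative):

        /* before: GFP_NOFS "to be safe", even though no reclaim-sensitive
         * locks are held while handling the mount options */
        char *opts = kstrdup(data, GFP_NOFS);

        /* after: plain GFP_KERNEL, which gives the allocator more freedom */
        char *opts = kstrdup(data, GFP_KERNEL);
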
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: Do chunk level check for degraded remount · b382cfe8
      Authored by Qu Wenruo
      Just as for the mount-time check, use btrfs_check_rw_degradable() to
      check if we are OK to be remounted rw.
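
      A hedged sketch of where such a check sits in the ro-to-rw remount path
      (the error message and exact placement are illustrative):

        /* Going from read-only to read-write: refuse if too many devices
         * are missing for the chunk profiles actually in use. */
        if (!btrfs_check_rw_degradable(fs_info)) {
                btrfs_warn(fs_info,
                        "too many missing devices, writable remount is not allowed");
                ret = -EACCES;
                goto restore;
        }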
      Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: resume qgroup rescan on rw remount · 6c6b5a39
      Authored by Aleksa Sarai
      Several distributions mount the "proper root" as ro during initrd and
      then remount it as rw before pivot_root(2). Thus, if a rescan had been
      aborted by a previous shutdown, the rescan would never be resumed.
      
      This issue would manifest itself as several btrfs ioctl(2)s causing the
      entire machine to hang when btrfs_qgroup_wait_for_completion was hit
      (due to the fs_info->qgroup_rescan_running flag being set but the rescan
      itself not being resumed). Notably, Docker's btrfs storage driver makes
      regular use of BTRFS_QUOTA_CTL_DISABLE and BTRFS_IOC_QUOTA_RESCAN_WAIT
      (causing this problem to be manifested on boot for some machines).
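
      A hedged sketch of the shape of the fix (the surrounding condition in
      btrfs_remount() is simplified):

        if (remounting_rw) {            /* ro -> rw transition (illustrative) */
                /* ... existing read-write remount setup ... */

                /* Restart a qgroup rescan that was interrupted while the
                 * filesystem was still mounted read-only (e.g. in the
                 * initrd), instead of leaving it stuck forever. */
                btrfs_qgroup_rescan_resume(fs_info);
        }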
      
      Cc: <stable@vger.kernel.org> # v3.11+
      Cc: Jeff Mahoney <jeffm@suse.com>
      Fixes: b382a324 ("Btrfs: fix qgroup rescan resume on mount")
      Signed-off-by: Aleksa Sarai <asarai@suse.de>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: backref, add tracepoints for prelim_ref insertion and merging · 00142756
      Authored by Jeff Mahoney
      This patch adds tracepoint events for prelim_ref insertion and
      merging.  For each, the ref being inserted or merged and the count
      of tree nodes are reported.
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: Add zstd support · 5c1aab1d
      Authored by Nick Terrell
      Add zstd compression and decompression support to BtrFS. zstd at its
      fastest level compresses almost as well as zlib, while offering much
      faster compression and decompression, approaching lzo speeds.
      
      I benchmarked btrfs with zstd compression against no compression, lzo
      compression, and zlib compression. I benchmarked two scenarios. Copying
      a set of files to btrfs, and then reading the files. Copying a tarball
      to btrfs, extracting it to btrfs, and then reading the extracted files.
      After every operation, I call `sync` and include the sync time.
      Between every pair of operations I unmount and remount the filesystem
      to avoid caching. The benchmark files can be found in the upstream
      zstd source repository under
      `contrib/linux-kernel/{btrfs-benchmark.sh,btrfs-extract-benchmark.sh}`
      [1] [2].
      
      I ran the benchmarks on an Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM.
      The VM is running on a MacBook Pro with a 3.1 GHz Intel Core i7 processor,
      16 GB of RAM, and an SSD.
      
      The first compression benchmark is copying 10 copies of the unzipped
      Silesia corpus [3] into a BtrFS filesystem mounted with
      `-o compress-force=Method`. The decompression benchmark times how long
      it takes to `tar` all 10 copies into `/dev/null`. The compression ratio is
      measured by comparing the output of `df` and `du`. See the benchmark file
      [1] for details. I benchmarked multiple zstd compression levels, although
      the patch uses zstd level 1.
      
      | Method  | Ratio | Compression MB/s | Decompression MB/s |
      |---------|-------|------------------|--------------------|
      | None    |  0.99 |              504 |                686 |
      | lzo     |  1.66 |              398 |                442 |
      | zlib    |  2.58 |               65 |                241 |
      | zstd 1  |  2.57 |              260 |                383 |
      | zstd 3  |  2.71 |              174 |                408 |
      | zstd 6  |  2.87 |               70 |                398 |
      | zstd 9  |  2.92 |               43 |                406 |
      | zstd 12 |  2.93 |               21 |                408 |
      | zstd 15 |  3.01 |               11 |                354 |
      
      The next benchmark first copies `linux-4.11.6.tar` [4] to btrfs. Then it
      measures the compression ratio, extracts the tar, and deletes the tar.
      Then it measures the compression ratio again, and `tar`s the extracted
      files into `/dev/null`. See the benchmark file [2] for details.
      
      | Method | Tar Ratio | Extract Ratio | Copy (s) | Extract (s)| Read (s) |
      |--------|-----------|---------------|----------|------------|----------|
      | None   |      0.97 |          0.78 |    0.981 |      5.501 |    8.807 |
      | lzo    |      2.06 |          1.38 |    1.631 |      8.458 |    8.585 |
      | zlib   |      3.40 |          1.86 |    7.750 |     21.544 |   11.744 |
      | zstd 1 |      3.57 |          1.85 |    2.579 |     11.479 |    9.389 |
      
      [1] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/btrfs-benchmark.sh
      [2] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/btrfs-extract-benchmark.sh
      [3] http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia
      [4] https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.11.6.tar.xz
      
      zstd source repository: https://github.com/facebook/zstd
      Signed-off-by: Nick Terrell <terrelln@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
  3. 06 Jul 2017 (1 commit)
    • VFS: Don't use save/replace_mount_options if not using generic_show_options · c3d98ea0
      Authored by David Howells
      btrfs, debugfs, reiserfs and tracefs call save_mount_options() and reiserfs
      calls replace_mount_options(), but they then implement their own
      ->show_options() methods and don't touch s_options, rendering the saved
      options unnecessary.  I'm trying to eliminate s_options to make it easier
      to implement a context-based mount where the mount options can be passed
      individually over a file descriptor.
      
      Remove the calls to save/replace_mount_options() in these cases.
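
      A hedged, hypothetical example of why the saved options are redundant:
      a filesystem with its own ->show_options never reads s_options (the
      foo_* names are made up):

        static int foo_show_options(struct seq_file *seq, struct dentry *root)
        {
                struct foo_fs_info *info = root->d_sb->s_fs_info;  /* hypothetical */

                if (info->ssd)
                        seq_puts(seq, ",ssd");
                seq_printf(seq, ",max_inline=%llu", info->max_inline);
                return 0;
        }

        static const struct super_operations foo_super_ops = {
                .show_options   = foo_show_options,
                /* ... */
        };

        /* ...so the save_mount_options(sb, data) call in foo_fill_super()
         * can simply be dropped. */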
      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: Chris Mason <clm@fb.com>
      cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      cc: Steven Rostedt <rostedt@goodmis.org>
      cc: linux-btrfs@vger.kernel.org
      cc: reiserfs-devel@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  4. 30 Jun 2017 (1 commit)
  5. 20 Jun 2017 (4 commits)
  6. 21 Apr 2017 (1 commit)
  7. 18 Apr 2017 (1 commit)
  8. 12 Apr 2017 (1 commit)
  9. 17 Feb 2017 (1 commit)
  10. 14 Feb 2017 (1 commit)
  11. 15 Dec 2016 (1 commit)
  12. 13 Dec 2016 (1 commit)
    • printk/btrfs: handle more message headers · 262c5e86
      Authored by Petr Mladek
      Commit 4bcc595c ("printk: reinstate KERN_CONT for printing
      continuation lines") allows defining more message headers for a single
      message.  The motivation is that continuation lines might get mixed.
      Therefore it makes sense to define the right log level for every piece of
      a cont line.
      
      The current btrfs_printk() macros do not support continuation lines at
      the moment, but we had better be prepared for custom messages and avoid a
      potential "lvl" buffer overflow.
      
      This patch iterates over the entire message header.  It is interested
      only in the message level, like the original code.
      
      This patch also introduces PRINTK_MAX_SINGLE_HEADER_LEN.  Two bytes are
      enough for the message level header at the moment, but it used to be
      three; see commit 04d2c8c8 ("printk: convert the format for
      KERN_<LEVEL> to a 2 byte pattern").
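
      A hedged sketch of the header-walking loop (simplified from
      btrfs_printk(); printk_get_level() returns the level character or 0,
      printk_skip_level() steps over one header, and logtypes[] is the
      btrfs-internal level-to-string table):

        char lvl[PRINTK_MAX_SINGLE_HEADER_LEN + 1] = "\0";
        const char *type = logtypes[4];         /* default level */
        int kern_level;

        /* A format string may start with several "\001<c>" headers; walk
         * them all, but only the message-level ones ('0'..'7') end up in
         * lvl. */
        while ((kern_level = printk_get_level(fmt)) != 0) {
                size_t size = printk_skip_level(fmt) - fmt;

                if (kern_level >= '0' && kern_level <= '7') {
                        memcpy(lvl, fmt, size);
                        lvl[size] = '\0';
                        type = logtypes[kern_level - '0'];
                }
                fmt += size;
        }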
      
      Also I fixed the default ratelimit level.  It looked very strange when it
      was different from the default log level.
      
      [pmladek@suse.com: Fix a check of the valid message level]
        Link: http://lkml.kernel.org/r/20161111183236.GD2145@dhcp128.suse.cz
      Link: http://lkml.kernel.org/r/1478695291-12169-4-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Acked-by: David Sterba <dsterba@suse.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: Jaroslav Kysela <perex@perex.cz>
      Cc: Takashi Iwai <tiwai@suse.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 06 Dec 2016 (5 commits)
  14. 30 Nov 2016 (1 commit)
  15. 27 Sep 2016 (3 commits)
  16. 26 Sep 2016 (1 commit)
    • Btrfs: add a flags field to btrfs_fs_info · afcdd129
      Authored by Josef Bacik
      We have a lot of random ints in btrfs_fs_info that can be put into flags.
      This is mostly equivalent, with the exception of how we deal with quota
      going on or off: now we set a flag when we are turning it on or off and
      deal with that appropriately, rather than just having a pending state that
      the current quota_enabled gets set to.  Thanks,
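
      A hedged sketch of the resulting pattern (the flag names are the ones
      this patch introduces; the call sites are illustrative):

        /* in struct btrfs_fs_info: one bitmap instead of several ints */
        unsigned long flags;

        /* Turning quota on: mark it as in progress, do the work, then flip
         * the enabled bit and clear the transitional one. */
        set_bit(BTRFS_FS_QUOTA_ENABLING, &fs_info->flags);
        /* ... create the quota tree, set up accounting ... */
        set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
        clear_bit(BTRFS_FS_QUOTA_ENABLING, &fs_info->flags);

        /* Elsewhere, checks become simple atomic bit tests: */
        if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
                return 0;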
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  17. 25 Aug 2016 (1 commit)
    • btrfs: fix fsfreeze hang caused by delayed iputs deal · 9e7cc91a
      Authored by Wang Xiaoguang
      When running fstests generic/068, sometimes we got below deadlock:
        xfs_io          D ffff8800331dbb20     0  6697   6693 0x00000080
        ffff8800331dbb20 ffff88007acfc140 ffff880034d895c0 ffff8800331dc000
        ffff880032d243e8 fffffffeffffffff ffff880032d24400 0000000000000001
        ffff8800331dbb38 ffffffff816a9045 ffff880034d895c0 ffff8800331dbba8
        Call Trace:
        [<ffffffff816a9045>] schedule+0x35/0x80
        [<ffffffff816abab2>] rwsem_down_read_failed+0xf2/0x140
        [<ffffffff8118f5e1>] ? __filemap_fdatawrite_range+0xd1/0x100
        [<ffffffff8134f978>] call_rwsem_down_read_failed+0x18/0x30
        [<ffffffffa06631fc>] ? btrfs_alloc_block_rsv+0x2c/0xb0 [btrfs]
        [<ffffffff810d32b5>] percpu_down_read+0x35/0x50
        [<ffffffff81217dfc>] __sb_start_write+0x2c/0x40
        [<ffffffffa067f5d5>] start_transaction+0x2a5/0x4d0 [btrfs]
        [<ffffffffa067f857>] btrfs_join_transaction+0x17/0x20 [btrfs]
        [<ffffffffa068ba34>] btrfs_evict_inode+0x3c4/0x5d0 [btrfs]
        [<ffffffff81230a1a>] evict+0xba/0x1a0
        [<ffffffff812316b6>] iput+0x196/0x200
        [<ffffffffa06851d0>] btrfs_run_delayed_iputs+0x70/0xc0 [btrfs]
        [<ffffffffa067f1d8>] btrfs_commit_transaction+0x928/0xa80 [btrfs]
        [<ffffffffa0646df0>] btrfs_freeze+0x30/0x40 [btrfs]
        [<ffffffff81218040>] freeze_super+0xf0/0x190
        [<ffffffff81229275>] do_vfs_ioctl+0x4a5/0x5c0
        [<ffffffff81003176>] ? do_audit_syscall_entry+0x66/0x70
        [<ffffffff810038cf>] ? syscall_trace_enter_phase1+0x11f/0x140
        [<ffffffff81229409>] SyS_ioctl+0x79/0x90
        [<ffffffff81003c12>] do_syscall_64+0x62/0x110
        [<ffffffff816acbe1>] entry_SYSCALL64_slow_path+0x25/0x25
      
      From this warning, freeze_super() already holds SB_FREEZE_FS, but
      btrfs_freeze() will call btrfs_commit_transaction() again, if
      btrfs_commit_transaction() finds that it has delayed iputs to handle,
      it'll start_transaction(), which will try to get SB_FREEZE_FS lock
      again, then deadlock occurs.
      
      The root cause is that in btrfs, sync_filesystem(sb) does not make
      sure all metadata is updated. There may still be some code adding
      delayed iputs; see the sample race window below:
      
               CPU1                                  |         CPU2
      |-> freeze_super()                             |
          |-> sync_filesystem(sb);                   |
          |                                          |-> cleaner_kthread()
          |                                          |   |-> btrfs_delete_unused_bgs()
          |                                          |       |-> btrfs_remove_chunk()
          |                                          |           |-> btrfs_remove_block_group()
          |                                          |               |-> btrfs_add_delayed_iput()
          |                                          |
          |-> sb->s_writers.frozen = SB_FREEZE_FS;   |
          |-> sb_wait_write(sb, SB_FREEZE_FS);       |
          |   acquire SB_FREEZE_FS lock.             |
          |                                          |
          |-> btrfs_freeze()                         |
              |-> btrfs_commit_transaction()         |
                  |-> btrfs_run_delayed_iputs()      |
                  |   will handle delayed iputs,     |
                  |   that means start_transaction() |
                  |   will be called, which will try |
                  |   to get SB_FREEZE_FS lock.      |
      
      To fix this issue, introduce an "int fs_frozen" field to record internally
      whether the fs has been frozen. If the fs has been frozen, we cannot
      handle delayed iputs.
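
      A hedged sketch of the fix's shape (field placement and the exact
      check site are simplified):

        static int btrfs_freeze(struct super_block *sb)
        {
                struct btrfs_fs_info *fs_info = btrfs_sb(sb);

                fs_info->fs_frozen = 1;   /* new field in btrfs_fs_info */
                /* ... commit the current transaction as before ... */
                return 0;
        }

        static int btrfs_unfreeze(struct super_block *sb)
        {
                btrfs_sb(sb)->fs_frozen = 0;
                return 0;
        }

        /* In the transaction commit path (simplified): delayed iputs would
         * need a new transaction, so skip them while the fs is frozen. */
        if (!fs_info->fs_frozen)
                btrfs_run_delayed_iputs(root);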
      Signed-off-by: Wang Xiaoguang <wangxg.fnst@cn.fujitsu.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ add comment to btrfs_freeze ]
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
  18. 26 Jul 2016 (6 commits)
  19. 18 Jun 2016 (2 commits)
    • btrfs: avoid blocking open_ctree from cleaner_kthread · 90c711ab
      Authored by Zygo Blaxell
      This fixes a problem introduced in commit 2f3165ec
      "btrfs: don't force mounts to wait for cleaner_kthread to delete one or more subvolumes".
      
      open_ctree eventually calls btrfs_replay_log which in turn calls
      btrfs_commit_super which tries to lock the cleaner_mutex, causing a
      recursive mutex deadlock during mount.
      
      Instead of playing whack-a-mole trying to keep up with all the
      functions that may want to lock cleaner_mutex, put all the cleaner_mutex
      lockers back where they were, and attack the problem more directly:
      keep cleaner_kthread asleep until the filesystem is mounted.
      
      When filesystems are mounted read-only and later remounted read-write,
      open_ctree did not set fs_info->open and neither does anything else.
      Set this flag in btrfs_remount so that neither btrfs_delete_unused_bgs
      nor cleaner_kthread get confused by the common case of "/" filesystem
      read-only mount followed by read-write remount.
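
      A hedged sketch of the two pieces described above (condition placement
      is simplified):

        /* In cleaner_kthread()'s main loop: do nothing until the filesystem
         * is actually open, so mount no longer has to wait for it. */
        if (!fs_info->open)
                goto sleep;

        /* In btrfs_remount(), on the ro -> rw path: mark the fs open so the
         * cleaner and btrfs_delete_unused_bgs() behave as after a normal
         * read-write mount. */
        fs_info->open = 1;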
      Signed-off-by: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: account for non-CoW'd blocks in btrfs_abort_transaction · 64c12921
      Authored by Jeff Mahoney
      The test for !trans->blocks_used in btrfs_abort_transaction is
      insufficient to determine whether it's safe to drop the transaction
      handle on the floor.  btrfs_cow_block, informed by should_cow_block,
      can return blocks that have already been CoW'd in the current
      transaction.  trans->blocks_used is only incremented for new block
      allocations. If an operation overlaps the blocks in the current
      transaction entirely and must abort the transaction, we'll happily
      let it clean up the trans handle even though it may have modified
      the blocks and will commit an incomplete operation.
      
      In the long-term, I'd like to do closer tracking of when the fs
      is actually modified so we can still recover as gracefully as possible,
      but that approach will need some discussion.  In the short term,
      since this is the only code using trans->blocks_used, let's just
      switch it to a bool indicating whether any blocks were used and set
      it when should_cow_block returns false.
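
      A hedged sketch of the change (the bool field name and the exact
      abort-path test are assumptions drawn from the description above):

        /* In btrfs_cow_block(): even when no new block is allocated because
         * this block was already CoW'ed in the current transaction, the
         * handle has still touched metadata. */
        if (!should_cow_block(trans, root, buf)) {
                trans->dirty = true;            /* assumed field name */
                *cow_ret = buf;
                return 0;
        }

        /* btrfs_abort_transaction() then keys off that bool instead of
         * trans->blocks_used when deciding whether the handle can be
         * dropped silently. */
        if (!trans->dirty)
                return;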
      
      Cc: stable@vger.kernel.org # 3.4+
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  20. 06 Jun 2016 (1 commit)