1. 30 Nov 2016, 1 commit
  2. 28 Oct 2016, 1 commit
    • btrfs: fix races on root_log_ctx lists · 570dd450
      Committed by Chris Mason
      btrfs_remove_all_log_ctxs takes a shortcut where it avoids walking the
      list because it knows all of the waiters are patiently waiting for the
      commit to finish.
      
      But, there's a small race where btrfs_sync_log can remove itself from
      the list if it finds a log commit is already done.  Also, it uses
      list_del_init() to remove itself from the list, but there's no way to
      know if btrfs_remove_all_log_ctxs has already run, so we don't know for
      sure if it is safe to call list_del_init().
      
      This gets rid of all the shortcuts for btrfs_remove_all_log_ctxs(), and
      just calls it with the proper locking.
      
      This is part two of the corruption fixed by cbd60aa7.  I should have
      done this in the first place, but convinced myself the optimizations were
      safe.  A 12 hour run of dbench 2048 will eventually trigger a list debug
      WARN_ON for the list_del_init() in btrfs_sync_log().
      
      Fixes: d1433deb
      Reported-by: Dave Jones <davej@codemonkey.org.uk>
      cc: stable@vger.kernel.org # 3.15+
      Signed-off-by: Chris Mason <clm@fb.com>
      570dd450
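      The pattern above can be modelled in user space. Below is a minimal
      sketch using pthreads instead of the kernel's wait queues (the struct
      and function names are illustrative, not the btrfs API): waiters queue
      a context and sleep, and only the committer unlinks contexts, always
      under the same mutex, so no waiter ever runs the equivalent of
      list_del_init() on an entry the committer may already have removed.
      Build with cc -pthread.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct log_ctx {
                int log_ret;
                int done;
                struct log_ctx *next;
        };

        static pthread_mutex_t log_mutex = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t log_cond = PTHREAD_COND_INITIALIZER;
        static struct log_ctx *ctx_list;
        static int nr_queued;

        /* Waiter: queue a ctx and sleep until the committer marks it done.
         * It never unlinks its own ctx, so it cannot race with the committer
         * emptying the list. */
        static void *waiter(void *arg)
        {
                struct log_ctx *ctx = calloc(1, sizeof(*ctx));

                pthread_mutex_lock(&log_mutex);
                ctx->next = ctx_list;
                ctx_list = ctx;
                nr_queued++;
                pthread_cond_broadcast(&log_cond);
                while (!ctx->done)
                        pthread_cond_wait(&log_cond, &log_mutex);
                pthread_mutex_unlock(&log_mutex);

                printf("waiter %ld: log_ret=%d\n", (long)arg, ctx->log_ret);
                free(ctx);
                return NULL;
        }

        /* Committer: walk and empty the whole list under the same mutex. */
        static void remove_all_log_ctxs(int ret)
        {
                struct log_ctx *ctx;

                pthread_mutex_lock(&log_mutex);
                while ((ctx = ctx_list) != NULL) {
                        ctx_list = ctx->next;
                        ctx->log_ret = ret;
                        ctx->done = 1;
                }
                pthread_cond_broadcast(&log_cond);
                pthread_mutex_unlock(&log_mutex);
        }

        int main(void)
        {
                pthread_t t[4];
                long i;

                for (i = 0; i < 4; i++)
                        pthread_create(&t[i], NULL, waiter, (void *)i);

                pthread_mutex_lock(&log_mutex);
                while (nr_queued < 4)   /* wait until every waiter queued */
                        pthread_cond_wait(&log_cond, &log_mutex);
                pthread_mutex_unlock(&log_mutex);

                remove_all_log_ctxs(0);

                for (i = 0; i < 4; i++)
                        pthread_join(t[i], NULL);
                return 0;
        }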
  3. 27 Sep 2016, 1 commit
  4. 26 Sep 2016, 1 commit
    • Btrfs: add a flags field to btrfs_fs_info · afcdd129
      Committed by Josef Bacik
      We have a lot of random ints in btrfs_fs_info that can be put into flags.  This
      is mostly equivalent, with the exception of how we deal with quota going on or
      off: now we set a flag while we are turning it on or off and deal with
      that appropriately, rather than just having a pending state that the current
      quota_enabled gets set to.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      afcdd129
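      As a rough sketch of the idea (plain bit operations standing in for the
      kernel's set_bit/test_bit helpers, and hypothetical flag names): several
      independent int fields collapse into one flags word, and "quota is being
      turned on/off" becomes a bit instead of a separate pending variable.

        #include <stdio.h>

        /* Hypothetical flag bits standing in for the former random ints. */
        enum {
                FS_FLAG_LOG_RECOVERING = 1 << 0,
                FS_FLAG_QUOTA_ENABLED  = 1 << 1,
                FS_FLAG_QUOTA_ENABLING = 1 << 2,  /* turn-on in progress */
        };

        struct fs_info {
                unsigned long flags;   /* replaces several separate ints */
        };

        static void quota_enable_start(struct fs_info *fs)
        {
                fs->flags |= FS_FLAG_QUOTA_ENABLING;
        }

        static void quota_enable_finish(struct fs_info *fs)
        {
                fs->flags &= ~FS_FLAG_QUOTA_ENABLING;
                fs->flags |= FS_FLAG_QUOTA_ENABLED;
        }

        int main(void)
        {
                struct fs_info fs = { 0 };

                quota_enable_start(&fs);
                printf("enabling in progress: %d\n",
                       !!(fs.flags & FS_FLAG_QUOTA_ENABLING));
                quota_enable_finish(&fs);
                printf("quota enabled: %d\n",
                       !!(fs.flags & FS_FLAG_QUOTA_ENABLED));
                return 0;
        }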
  5. 16 Sep 2016, 1 commit
  6. 06 Sep 2016, 1 commit
  7. 25 Aug 2016, 2 commits
    • Btrfs: fix lockdep warning on deadlock against an inode's log mutex · 28a23593
      Committed by Filipe Manana
      Commit 44f714da ("Btrfs: improve performance on fsync against new
      inode after rename/unlink"), which landed in 4.8-rc2, introduced a
      possibility for a deadlock due to double locking of an inode's log mutex
      by the same task, which lockdep reports with:
      
      [23045.433975] =============================================
      [23045.434748] [ INFO: possible recursive locking detected ]
      [23045.435426] 4.7.0-rc6-btrfs-next-34+ #1 Not tainted
      [23045.436044] ---------------------------------------------
      [23045.436044] xfs_io/3688 is trying to acquire lock:
      [23045.436044]  (&ei->log_mutex){+.+...}, at: [<ffffffffa038552d>] btrfs_log_inode+0x13a/0xc95 [btrfs]
      [23045.436044]
                     but task is already holding lock:
      [23045.436044]  (&ei->log_mutex){+.+...}, at: [<ffffffffa038552d>] btrfs_log_inode+0x13a/0xc95 [btrfs]
      [23045.436044]
                     other info that might help us debug this:
      [23045.436044]  Possible unsafe locking scenario:
      
      [23045.436044]        CPU0
      [23045.436044]        ----
      [23045.436044]   lock(&ei->log_mutex);
      [23045.436044]   lock(&ei->log_mutex);
      [23045.436044]
                      *** DEADLOCK ***
      
      [23045.436044]  May be due to missing lock nesting notation
      
      [23045.436044] 3 locks held by xfs_io/3688:
      [23045.436044]  #0:  (&sb->s_type->i_mutex_key#15){+.+...}, at: [<ffffffffa035f2ae>] btrfs_sync_file+0x14e/0x425 [btrfs]
      [23045.436044]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffff8118446b>] __sb_start_write+0x5f/0xb0
      [23045.436044]  #2:  (&ei->log_mutex){+.+...}, at: [<ffffffffa038552d>] btrfs_log_inode+0x13a/0xc95 [btrfs]
      [23045.436044]
                     stack backtrace:
      [23045.436044] CPU: 4 PID: 3688 Comm: xfs_io Not tainted 4.7.0-rc6-btrfs-next-34+ #1
      [23045.436044] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org 04/01/2014
      [23045.436044]  0000000000000000 ffff88022f5f7860 ffffffff8127074d ffffffff82a54b70
      [23045.436044]  ffffffff82a54b70 ffff88022f5f7920 ffffffff81092897 ffff880228015d68
      [23045.436044]  0000000000000000 ffffffff82a54b70 ffffffff829c3f00 ffff880228015d68
      [23045.436044] Call Trace:
      [23045.436044]  [<ffffffff8127074d>] dump_stack+0x67/0x90
      [23045.436044]  [<ffffffff81092897>] __lock_acquire+0xcbb/0xe4e
      [23045.436044]  [<ffffffff8109155f>] ? mark_lock+0x24/0x201
      [23045.436044]  [<ffffffff8109179a>] ? mark_held_locks+0x5e/0x74
      [23045.436044]  [<ffffffff81092de0>] lock_acquire+0x12f/0x1c3
      [23045.436044]  [<ffffffff81092de0>] ? lock_acquire+0x12f/0x1c3
      [23045.436044]  [<ffffffffa038552d>] ? btrfs_log_inode+0x13a/0xc95 [btrfs]
      [23045.436044]  [<ffffffffa038552d>] ? btrfs_log_inode+0x13a/0xc95 [btrfs]
      [23045.436044]  [<ffffffff814a51a4>] mutex_lock_nested+0x77/0x3a7
      [23045.436044]  [<ffffffffa038552d>] ? btrfs_log_inode+0x13a/0xc95 [btrfs]
      [23045.436044]  [<ffffffffa039705e>] ? btrfs_release_delayed_node+0xb/0xd [btrfs]
      [23045.436044]  [<ffffffffa038552d>] btrfs_log_inode+0x13a/0xc95 [btrfs]
      [23045.436044]  [<ffffffffa038552d>] ? btrfs_log_inode+0x13a/0xc95 [btrfs]
      [23045.436044]  [<ffffffff810a0ed1>] ? vprintk_emit+0x453/0x465
      [23045.436044]  [<ffffffffa0385a61>] btrfs_log_inode+0x66e/0xc95 [btrfs]
      [23045.436044]  [<ffffffffa03c084d>] log_new_dir_dentries+0x26c/0x359 [btrfs]
      [23045.436044]  [<ffffffffa03865aa>] btrfs_log_inode_parent+0x4a6/0x628 [btrfs]
      [23045.436044]  [<ffffffffa0387552>] btrfs_log_dentry_safe+0x5a/0x75 [btrfs]
      [23045.436044]  [<ffffffffa035f464>] btrfs_sync_file+0x304/0x425 [btrfs]
      [23045.436044]  [<ffffffff811acaf4>] vfs_fsync_range+0x8c/0x9e
      [23045.436044]  [<ffffffff811acb22>] vfs_fsync+0x1c/0x1e
      [23045.436044]  [<ffffffff811acc79>] do_fsync+0x31/0x4a
      [23045.436044]  [<ffffffff811ace99>] SyS_fsync+0x10/0x14
      [23045.436044]  [<ffffffff814a88e5>] entry_SYSCALL_64_fastpath+0x18/0xa8
      [23045.436044]  [<ffffffff8108f039>] ? trace_hardirqs_off_caller+0x3f/0xaa
      
      An example reproducer for this is:
      
         $ mkfs.btrfs -f /dev/sdb
         $ mount /dev/sdb /mnt
         $ mkdir /mnt/dir
         $ touch /mnt/dir/foo
         $ sync
         $ mv /mnt/dir/foo /mnt/dir/bar
         $ touch /mnt/dir/foo
         $ xfs_io -c "fsync" /mnt/dir/bar
      
      This is because while logging the inode of file bar we end up logging its
      parent directory (since its inode has an unlink_trans field matching the
      current transaction id due to the rename operation), which in turn logs
      the inodes for all its new dentries, so that the new inode for the new
      file named foo gets logged, which in turn triggers another logging attempt
      for the inode we are fsync'ing, since that inode had an old name that
      corresponds to the name of the new inode.
      
      So fix this by ensuring that when logging the inode for a new dentry that
      has a name matching an old name of some other inode, we don't log the
      original inode that we are fsync'ing again.
      
      Fixes: 44f714da ("Btrfs: improve performance on fsync against new inode after rename/unlink")
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      28a23593
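      A schematic sketch of the shape of the fix (hypothetical helper and
      field names, not the actual tree-log.c code): when the dentries of the
      parent directory are logged and a conflicting old name points back to
      the inode the fsync started from, that inode is skipped, since its
      log_mutex is already held further up the call chain and logging it
      again is exactly the recursive locking lockdep complained about.

        #include <stdio.h>

        struct toy_inode {
                unsigned long ino;
                /* the real struct btrfs_inode embeds the log_mutex used here */
        };

        /* Hypothetical helper: log 'other', whose old name collides with a new
         * dentry, unless it is the very inode the fsync started from. */
        static void maybe_log_conflicting_inode(struct toy_inode *fsync_target,
                                                struct toy_inode *other)
        {
                if (other == fsync_target) {
                        /* Its log_mutex is already held higher up the call
                         * chain; recursing here is what used to deadlock. */
                        printf("skip inode %lu (fsync target)\n", other->ino);
                        return;
                }
                printf("log inode %lu\n", other->ino);
        }

        int main(void)
        {
                struct toy_inode bar = { .ino = 257 }; /* file being fsync'ed */
                struct toy_inode foo = { .ino = 258 }; /* new file, old name  */

                maybe_log_conflicting_inode(&bar, &foo);
                maybe_log_conflicting_inode(&bar, &bar);
                return 0;
        }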
    • btrfs: qgroup: Fix qgroup incorrectness caused by log replay · df2c95f3
      Committed by Qu Wenruo
      When doing log replay at mount time (after a power loss), qgroup will leak
      the numbers of replayed data extents.
      
      The cause is almost the same as for balance.
      So fix it by manually informing qgroup about owner-changed extents.
      
      The bug can be detected by the btrfs/119 test case.
      
      Cc: Mark Fasheh <mfasheh@suse.de>
      Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
      Reviewed-and-Tested-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      df2c95f3
  8. 01 Aug 2016, 1 commit
    • Btrfs: improve performance on fsync against new inode after rename/unlink · 44f714da
      Committed by Filipe Manana
      With commit 56f23fdb ("Btrfs: fix file/data loss caused by fsync after
      rename and new inode") we got simple fix for a functional issue when the
      following sequence of actions is done:
      
        at transaction N
        create file A at directory D
        at transaction N + M (where M >= 1)
        move/rename existing file A from directory D to directory E
        create a new file named A at directory D
        fsync the new file
        power fail
      
      The solution was to simply detect such a scenario and fall back to a full
      transaction commit when we detect it. However this turned out to have a
      significant impact on throughput (and a bit on latency too) for benchmarks
      using the dbench tool, which simulates real workloads from smbd (Samba)
      servers. For example on a test vm (with a debug kernel):
      
      Unpatched:
      Throughput 19.1572 MB/sec  32 clients  32 procs  max_latency=1005.229 ms
      
      Patched:
      Throughput 23.7015 MB/sec  32 clients  32 procs  max_latency=809.206 ms
      
      The patched results (this patch is applied) are similar to the results of
      a kernel with the commit 56f23fdb ("Btrfs: fix file/data loss caused
      by fsync after rename and new inode") reverted.
      
      This change avoids the fallback to a transaction commit and instead makes
      sure all the names of the conflicting inode (the one that had a name in a
      past transaction that matches the name of the new file in the same parent
      directory) are logged so that at log replay time we lose neither the
      new file nor the old file, and the old file gets the name it was renamed
      to.
      
      This also ends up avoiding a full transaction commit for a similar case
      that involves an unlink instead of a rename of the old file:
      
        at transaction N
        create file A at directory D
        at transaction N + M (where M >= 1)
        remove file A
        create a new file named A at directory D
        fsync the new file
        power fail
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      44f714da
  9. 26 Jul 2016, 3 commits
  10. 18 Jun 2016, 1 commit
  11. 26 May 2016, 1 commit
  12. 13 May 2016, 3 commits
    • Btrfs: add semaphore to synchronize direct IO writes with fsync · 5f9a8a51
      Committed by Filipe Manana
      Due to the optimization of lockless direct IO writes (the inode's i_mutex
      is not held) introduced in commit 38851cc1 ("Btrfs: implement unlocked
      dio write"), we started having races between such writes with concurrent
      fsync operations that use the fast fsync path. These races were addressed
      in the patches titled "Btrfs: fix race between fsync and lockless direct
      IO writes" and "Btrfs: fix race between fsync and direct IO writes for
      prealloc extents". The races happened because the direct IO path, like
      every other write path, creates extent maps followed by the
      corresponding ordered extents, while the fast fsync path collected
      ordered extents first and then collected the extent maps. This made it possible
      to log file extent items (based on the collected extent maps) without
      waiting for the corresponding ordered extents to complete (get their IO
      done). The two fixes mentioned before added a solution that consists of
      making the direct IO path create first the ordered extents and then the
      extent maps, while the fsync path attempts to collect any new ordered
      extents once it collects the extent maps. This was simple and did not
      require adding any synchonization primitive to any data structure (struct
      btrfs_inode for example) but it makes things more fragile for future
      development endeavours and adds an exceptional approach compared to the
      other write paths.
      
      This change adds a read-write semaphore to the btrfs inode structure and
      makes the direct IO path create the extent maps and the ordered extents
      while holding read access on that semaphore, while the fast fsync path
      collects extent maps and ordered extents while holding write access on
      that semaphore. The logic for the direct IO write path is encapsulated in a
      new helper function that is used both for cow and nocow direct IO writes.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      5f9a8a51
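      A condensed user-space model of the synchronization described above,
      with a pthreads read-write lock standing in for the semaphore added to
      the btrfs inode (names and counters are illustrative): direct IO writers
      create the ordered extent and the extent map under the read side, so any
      number of them can run concurrently, while the fast fsync path takes the
      write side around its collection, so it never sees one without the
      other. Build with cc -pthread.

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        /* Stand-in for the new read-write semaphore in the btrfs inode. */
        static pthread_rwlock_t dio_sem = PTHREAD_RWLOCK_INITIALIZER;
        static atomic_int nr_ordered_extents;
        static atomic_int nr_extent_maps;

        /* Direct IO write path: create ordered extent + extent map as one
         * unit with respect to fsync; other writers may run concurrently. */
        static void *dio_write(void *arg)
        {
                (void)arg;
                pthread_rwlock_rdlock(&dio_sem);
                atomic_fetch_add(&nr_ordered_extents, 1);
                atomic_fetch_add(&nr_extent_maps, 1);
                pthread_rwlock_unlock(&dio_sem);
                return NULL;
        }

        /* Fast fsync path: under the write side, every extent map it sees has
         * its ordered extent already created, so nothing is logged too early. */
        static void fast_fsync(void)
        {
                pthread_rwlock_wrlock(&dio_sem);
                printf("fsync sees %d extent maps / %d ordered extents\n",
                       atomic_load(&nr_extent_maps),
                       atomic_load(&nr_ordered_extents));
                pthread_rwlock_unlock(&dio_sem);
        }

        int main(void)
        {
                pthread_t writers[8];
                int i;

                for (i = 0; i < 8; i++)
                        pthread_create(&writers[i], NULL, dio_write, NULL);
                fast_fsync();
                for (i = 0; i < 8; i++)
                        pthread_join(writers[i], NULL);
                fast_fsync();
                return 0;
        }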
    • Btrfs: fix empty symlink after creating symlink and fsync parent dir · 3f9749f6
      Committed by Filipe Manana
      If we create a symlink, fsync its parent directory, crash/power fail and
      mount the filesystem, we end up with an empty symlink, which is not only
      useless but also not allowed in linux (the man page symlink(2) is
      explicit about that).  So we just need to make sure to fully log an inode
      if it's a symlink, to ensure its inline extent gets logged, ensuring the
      same behaviour as ext3, ext4, xfs, reiserfs, f2fs, nilfs2, etc.
      
      Example reproducer:
      
        $ mkfs.btrfs -f /dev/sdb
        $ mount /dev/sdb /mnt
        $ mkdir /mnt/testdir
        $ sync
        $ ln -s /mnt/foo /mnt/testdir/bar
        $ xfs_io -c fsync /mnt/testdir
        <power fail>
        $ mount /dev/sdb /mnt
        $ readlink /mnt/testdir/bar
        <empty string>
      
      A test case for fstests follows soon.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      3f9749f6
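      A rough sketch of the mode selection this fix implies (illustrative
      enum names, not the real tree-log.c constants): when the inode being
      logged is a symlink, always use the full logging mode so that its
      inline extent, which holds the link target, makes it into the log.

        #include <stdio.h>
        #include <sys/stat.h>

        /* Illustrative logging modes; the real constants differ. */
        enum log_mode {
                LOG_MODE_INODE_EXISTS,
                LOG_MODE_INODE_ALL
        };

        /* A symlink's target lives in an inline extent, so logging only that
         * the inode exists would replay as an empty symlink; force full
         * logging instead. */
        static enum log_mode pick_log_mode(mode_t i_mode, enum log_mode requested)
        {
                if (S_ISLNK(i_mode))
                        return LOG_MODE_INODE_ALL;
                return requested;
        }

        int main(void)
        {
                printf("symlink logged with mode %d (1 == full)\n",
                       pick_log_mode(S_IFLNK | 0777, LOG_MODE_INODE_EXISTS));
                return 0;
        }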
    • Btrfs: fix for incorrect directory entries after fsync log replay · 657ed1aa
      Committed by Filipe Manana
      If we move a directory to a new parent and later log that parent and don't
      explicitly log the old parent, when we replay the log we can end up with
      entries for the moved directory in both the old and new parent directories.
      Besides it being illegal to have directories with multiple hard links in linux,
      it also resulted in leaving the inode item with a link count of 1.
      A similar issue also happens if we move a regular file - after the log tree
      is replayed the file has a link in both the old and new parent directories,
      when it should only be in the new directory.
      
      Sample reproducer:
      
        $ mkfs.btrfs -f /dev/sdc
        $ mount /dev/sdc /mnt
        $ mkdir /mnt/x
        $ mkdir /mnt/y
        $ touch /mnt/x/foo
        $ mkdir /mnt/y/z
        $ sync
        $ ln /mnt/x/foo /mnt/x/bar
        $ mv /mnt/y/z /mnt/x/z
        < power fail >
        $ mount /dev/sdc /mnt
        $ ls -1Ri /mnt
        /mnt:
        257 x
        258 y
      
        /mnt/x:
        259 bar
        259 foo
        260 z
      
        /mnt/x/z:
      
        /mnt/y:
        260 z
      
        /mnt/y/z:
      
        $ umount /dev/sdc
        $ btrfs check /dev/sdc
        Checking filesystem on /dev/sdc
        UUID: a67e2c4a-a4b4-4fdc-b015-9d9af1e344be
        checking extents
        checking free space cache
        checking fs roots
        root 5 inode 260 errors 2000, link count wrong
              unresolved ref dir 257 index 4 namelen 1 name z filetype 2 errors 0
              unresolved ref dir 258 index 2 namelen 1 name z filetype 2 errors 0
        (...)
      
      Attempting to remove the directory becomes impossible:
      
        $ mount /dev/sdc /mnt
        $ rmdir /mnt/y/z
        $ ls -lh /mnt/y
        ls: cannot access /mnt/y/z: No such file or directory
        total 0
        d????????? ? ? ? ?            ? z
        $ rmdir /mnt/x/z
        rmdir: failed to remove ‘/mnt/x/z’: Stale file handle
        $ ls -lh /mnt/x
        ls: cannot access /mnt/x/z: Stale file handle
        total 0
        -rw-r--r-- 2 root root 0 Apr  6 18:06 bar
        -rw-r--r-- 2 root root 0 Apr  6 18:06 foo
        d????????? ? ?    ?    ?            ? z
      
      So make sure that on rename we set the last_unlink_trans value for our
      inode, even if it's a directory, to the value of the current transaction's
      ID and that, if the new parent directory is logged, we fall back to a
      transaction commit.
      
      A test case for fstests is being submitted as well.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      657ed1aa
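      A toy model of the rule described above (hypothetical field and
      function names): rename records the current transaction id in the
      inode's last_unlink_trans, and an fsync that finds a value newer than
      the last committed transaction falls back to a full commit.

        #include <stdio.h>

        typedef unsigned long long u64;

        struct toy_inode {
                u64 last_unlink_trans; /* updated on unlink and, after the
                                          fix, also on rename */
        };

        /* Rename path (after the fix): record the current transaction id
         * even when the renamed inode is a directory. */
        static void on_rename(struct toy_inode *inode, u64 current_trans)
        {
                inode->last_unlink_trans = current_trans;
        }

        /* Fsync path: if the inode was moved/unlinked in a transaction that
         * is not committed yet, the log alone cannot be replayed
         * consistently, so fall back to a full transaction commit. */
        static int needs_full_commit(const struct toy_inode *inode,
                                     u64 last_committed_trans)
        {
                return inode->last_unlink_trans > last_committed_trans;
        }

        int main(void)
        {
                struct toy_inode dir_z = { 0 };
                u64 last_committed = 5, current = 6;

                on_rename(&dir_z, current);    /* mv /mnt/y/z /mnt/x/z */
                printf("fsync of new parent forces commit: %d\n",
                       needs_full_commit(&dir_z, last_committed));
                return 0;
        }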
  13. 29 Apr 2016, 1 commit
  14. 28 Apr 2016, 1 commit
  15. 11 Apr 2016, 1 commit
  16. 07 Apr 2016, 1 commit
    • Btrfs: fix file/data loss caused by fsync after rename and new inode · 56f23fdb
      Committed by Filipe Manana
      If we rename an inode A (be it a file or a directory), create a new
      inode B with the old name of inode A and under the same parent directory,
      fsync inode B and then power fail, at log tree replay time we end up
      removing inode A completely. If inode A is a directory then all its files
      are gone too.
      
      Example scenarios where this happens:
      This is reproducible with the following steps, taken from a couple of
      test cases written for fstests which are going to be submitted upstream
      soon:
      
         # Scenario 1
      
         mkfs.btrfs -f /dev/sdc
         mount /dev/sdc /mnt
         mkdir -p /mnt/a/x
         echo "hello" > /mnt/a/x/foo
         echo "world" > /mnt/a/x/bar
         sync
         mv /mnt/a/x /mnt/a/y
         mkdir /mnt/a/x
         xfs_io -c fsync /mnt/a/x
         <power failure happens>
      
         The next time the fs is mounted, log tree replay happens and
         the directory "y" does not exist nor do the files "foo" and
         "bar" exist anywhere (neither in "y" nor in "x", nor the root
         nor anywhere).
      
         # Scenario 2
      
         mkfs.btrfs -f /dev/sdc
         mount /dev/sdc /mnt
         mkdir /mnt/a
         echo "hello" > /mnt/a/foo
         sync
         mv /mnt/a/foo /mnt/a/bar
         echo "world" > /mnt/a/foo
         xfs_io -c fsync /mnt/a/foo
         <power failure happens>
      
         The next time the fs is mounted, log tree replay happens and the
         file "bar" does not exists anymore. A file with the name "foo"
         exists and it matches the second file we created.
      
      Another related problem that does not involve file/data loss is when a
      new inode is created with the name of a deleted snapshot and we fsync it:
      
         mkfs.btrfs -f /dev/sdc
         mount /dev/sdc /mnt
         mkdir /mnt/testdir
         btrfs subvolume snapshot /mnt /mnt/testdir/snap
         btrfs subvolume delete /mnt/testdir/snap
         rmdir /mnt/testdir
         mkdir /mnt/testdir
         xfs_io -c fsync /mnt/testdir # or fsync some file inside /mnt/testdir
         <power failure>
      
         The next time the fs is mounted the log replay procedure fails because
         it attempts to delete the snapshot entry (which has dir item key type
         of BTRFS_ROOT_ITEM_KEY) as if it were a regular (non-root) entry,
         resulting in the following error that causes mount to fail:
      
         [52174.510532] BTRFS info (device dm-0): failed to delete reference to snap, inode 257 parent 257
         [52174.512570] ------------[ cut here ]------------
         [52174.513278] WARNING: CPU: 12 PID: 28024 at fs/btrfs/inode.c:3986 __btrfs_unlink_inode+0x178/0x351 [btrfs]()
         [52174.514681] BTRFS: Transaction aborted (error -2)
         [52174.515630] Modules linked in: btrfs dm_flakey dm_mod overlay crc32c_generic ppdev xor raid6_pq acpi_cpufreq parport_pc tpm_tis sg parport tpm evdev i2c_piix4 proc
         [52174.521568] CPU: 12 PID: 28024 Comm: mount Tainted: G        W       4.5.0-rc6-btrfs-next-27+ #1
         [52174.522805] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS by qemu-project.org 04/01/2014
         [52174.524053]  0000000000000000 ffff8801df2a7710 ffffffff81264e93 ffff8801df2a7758
         [52174.524053]  0000000000000009 ffff8801df2a7748 ffffffff81051618 ffffffffa03591cd
         [52174.524053]  00000000fffffffe ffff88015e6e5000 ffff88016dbc3c88 ffff88016dbc3c88
         [52174.524053] Call Trace:
         [52174.524053]  [<ffffffff81264e93>] dump_stack+0x67/0x90
         [52174.524053]  [<ffffffff81051618>] warn_slowpath_common+0x99/0xb2
         [52174.524053]  [<ffffffffa03591cd>] ? __btrfs_unlink_inode+0x178/0x351 [btrfs]
         [52174.524053]  [<ffffffff81051679>] warn_slowpath_fmt+0x48/0x50
         [52174.524053]  [<ffffffffa03591cd>] __btrfs_unlink_inode+0x178/0x351 [btrfs]
         [52174.524053]  [<ffffffff8118f5e9>] ? iput+0xb0/0x284
         [52174.524053]  [<ffffffffa0359fe8>] btrfs_unlink_inode+0x1c/0x3d [btrfs]
         [52174.524053]  [<ffffffffa038631e>] check_item_in_log+0x1fe/0x29b [btrfs]
         [52174.524053]  [<ffffffffa0386522>] replay_dir_deletes+0x167/0x1cf [btrfs]
         [52174.524053]  [<ffffffffa038739e>] fixup_inode_link_count+0x289/0x2aa [btrfs]
         [52174.524053]  [<ffffffffa038748a>] fixup_inode_link_counts+0xcb/0x105 [btrfs]
         [52174.524053]  [<ffffffffa038a5ec>] btrfs_recover_log_trees+0x258/0x32c [btrfs]
         [52174.524053]  [<ffffffffa03885b2>] ? replay_one_extent+0x511/0x511 [btrfs]
         [52174.524053]  [<ffffffffa034f288>] open_ctree+0x1dd4/0x21b9 [btrfs]
         [52174.524053]  [<ffffffffa032b753>] btrfs_mount+0x97e/0xaed [btrfs]
         [52174.524053]  [<ffffffff8108e1b7>] ? trace_hardirqs_on+0xd/0xf
         [52174.524053]  [<ffffffff8117bafa>] mount_fs+0x67/0x131
         [52174.524053]  [<ffffffff81193003>] vfs_kern_mount+0x6c/0xde
         [52174.524053]  [<ffffffffa032af81>] btrfs_mount+0x1ac/0xaed [btrfs]
         [52174.524053]  [<ffffffff8108e1b7>] ? trace_hardirqs_on+0xd/0xf
         [52174.524053]  [<ffffffff8108c262>] ? lockdep_init_map+0xb9/0x1b3
         [52174.524053]  [<ffffffff8117bafa>] mount_fs+0x67/0x131
         [52174.524053]  [<ffffffff81193003>] vfs_kern_mount+0x6c/0xde
         [52174.524053]  [<ffffffff8119590f>] do_mount+0x8a6/0x9e8
         [52174.524053]  [<ffffffff811358dd>] ? strndup_user+0x3f/0x59
         [52174.524053]  [<ffffffff81195c65>] SyS_mount+0x77/0x9f
         [52174.524053]  [<ffffffff814935d7>] entry_SYSCALL_64_fastpath+0x12/0x6b
         [52174.561288] ---[ end trace 6b53049efb1a3ea6 ]---
      
      Fix this by forcing a transaction commit when such cases happen.
      This means we check in the commit root of the subvolume tree if there
      was any other inode with the same reference when the inode we are
      fsync'ing is a new inode (created in the current transaction).
      
      Test cases covering all the scenarios given above were submitted
      upstream for fstests:
      
        * fstests: generic test for fsync after renaming directory
          https://patchwork.kernel.org/patch/8694281/
      
        * fstests: generic test for fsync after renaming file
          https://patchwork.kernel.org/patch/8694301/
      
        * fstests: add btrfs test for fsync after snapshot deletion
          https://patchwork.kernel.org/patch/8670671/
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      56f23fdb
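      A toy sketch of the check described above (hypothetical structures, not
      the real commit-root search): when the fsync'ed inode is new in the
      current transaction, look up its (parent, name) pair in the committed
      state; if the same name pointed to a different inode there, a plain log
      replay could delete that inode, so force a transaction commit.

        #include <stdio.h>
        #include <string.h>

        typedef unsigned long long u64;

        /* A committed (on-disk) directory entry: (parent, name) -> inode. */
        struct committed_dentry {
                u64 parent_ino;
                const char *name;
                u64 ino;
        };

        /* Toy stand-in for searching the subvolume tree's commit root. */
        static const struct committed_dentry commit_root[] = {
                { 256, "foo", 257 },  /* old "foo", renamed to "bar" in RAM */
        };

        /* If the new inode's name belonged to a different inode in the last
         * committed transaction, force a transaction commit. */
        static int must_commit_transaction(u64 parent_ino, const char *name,
                                           u64 new_ino)
        {
                size_t i;

                for (i = 0; i < sizeof(commit_root) / sizeof(commit_root[0]); i++) {
                        const struct committed_dentry *d = &commit_root[i];

                        if (d->parent_ino == parent_ino &&
                            !strcmp(d->name, name) && d->ino != new_ino)
                                return 1;
                }
                return 0;
        }

        int main(void)
        {
                /* New file "foo" (inode 258), created after the rename. */
                printf("force commit: %d\n",
                       must_commit_transaction(256, "foo", 258));
                return 0;
        }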
  17. 14 Mar 2016, 1 commit
  18. 12 Mar 2016, 1 commit
  19. 02 Mar 2016, 3 commits
    • Btrfs: do not collect ordered extents when logging that inode exists · 5e33a2bd
      Committed by Filipe Manana
      When logging that an inode exists, for example as part of a directory
      fsync operation, we were collecting any ordered extents for the inode but
      we ended up doing nothing with them except tagging them as processed, by
      setting the flag BTRFS_ORDERED_LOGGED on them, which prevented a
      subsequent fsync of that inode (using the LOG_INODE_ALL mode) from
      collecting and processing them. This created a time window where a second
      fsync against the inode, using the fast path, ended up not logging the
      checksums for the new extents but it logged the extents since they were
      part of the list of modified extents. This happened because the ordered
      extents were not collected and checksums were not yet added to the csum
      tree - the ordered extents had not gone through btrfs_finish_ordered_io()
      yet (which is where we add them to the csum tree by calling
      inode.c:add_pending_csums()).
      
      So fix this by not collecting an inode's ordered extents if we are logging
      it with the LOG_INODE_EXISTS mode.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      5e33a2bd
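      The decision boils down to a simple predicate; here is a sketch with
      illustrative mode names (the real constants are the LOG_INODE_* modes
      mentioned above): only a full logging pass may collect and tag ordered
      extents, while an exists-only pass must leave them for a later full fsync.

        #include <stdio.h>

        /* Hypothetical logging modes mirroring the two mentioned above. */
        enum log_mode {
                LOG_MODE_INODE_EXISTS, /* only log that the inode exists */
                LOG_MODE_INODE_ALL     /* full logging, including extents */
        };

        /* An "exists" pass must not consume (and tag) the inode's ordered
         * extents, so that a later full fsync can still collect them and
         * log their checksums. */
        static int should_collect_ordered_extents(enum log_mode mode)
        {
                return mode == LOG_MODE_INODE_ALL;
        }

        int main(void)
        {
                printf("exists-only pass collects ordered extents: %d\n",
                       should_collect_ordered_extents(LOG_MODE_INODE_EXISTS));
                printf("full pass collects ordered extents: %d\n",
                       should_collect_ordered_extents(LOG_MODE_INODE_ALL));
                return 0;
        }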
    • Btrfs: fix file loss on log replay after renaming a file and fsync · 2be63d5c
      Committed by Filipe Manana
      We have two cases where we end up deleting a file at log replay time
      when we should not. For this to happen the file must have been renamed
      and a directory inode must have been fsynced/logged.
      
      Two examples that exercise these two cases are listed below.
      
        Case 1)
      
        $ mkfs.btrfs -f /dev/sdb
        $ mount /dev/sdb /mnt
        $ mkdir -p /mnt/a/b
        $ mkdir /mnt/c
        $ touch /mnt/a/b/foo
        $ sync
        $ mv /mnt/a/b/foo /mnt/c/
        # Create file bar just to make sure the fsync on directory a/ does
        # something and it's not a no-op.
        $ touch /mnt/a/bar
        $ xfs_io -c "fsync" /mnt/a
        < power fail / crash >
      
        The next time the filesystem is mounted, the log replay procedure
        deletes file foo.
      
        Case 2)
      
        $ mkfs.btrfs -f /dev/sdb
        $ mount /dev/sdb /mnt
        $ mkdir /mnt/a
        $ mkdir /mnt/b
        $ mkdir /mnt/c
        $ touch /mnt/a/foo
        $ ln /mnt/a/foo /mnt/b/foo_link
        $ touch /mnt/b/bar
        $ sync
        $ unlink /mnt/b/foo_link
        $ mv /mnt/b/bar /mnt/c/
        $ xfs_io -c "fsync" /mnt/a/foo
        < power fail / crash >
      
        The next time the filesystem is mounted, the log replay procedure
        deletes file bar.
      
      The reason why the files are deleted is because when we log inodes
      other than the fsync target inode, we ignore their last_unlink_trans
      value and leave the log without enough information to later replay the
      rename operations. So we need to look at the last_unlink_trans values
      and fall back to a transaction commit if they are greater than the
      id of the last committed transaction.
      
      So fix this by looking at the last_unlink_trans values and falling back to
      transaction commits when needed. Also, when logging other inodes (for
      case 1 we logged descendants of the fsync target inode while for case 2
      we logged ascendants) we need to care about concurrent tasks updating
      the last_unlink_trans of inodes we are logging (which was already an
      existing problem in check_parent_dirs_for_sync()). Since we can not
      acquire their inode mutex (vfs' struct inode ->i_mutex), as that causes
      deadlocks with other concurrent operations that acquire the i_mutex of
      2 inodes (other fsyncs or renames for example), we need to serialize on
      the log_mutex of the inode we are logging. A task setting a new value for
      an inode's last_unlink_trans must acquire the inode's log_mutex and it
      must do this update before doing the actual unlink operation (which is
      already the case except when deleting a snapshot). Conversely the task
      logging the inode must first log the inode and then check the inode's
      last_unlink_trans value while holding its log_mutex, as if its value is
      not greater than the id of the last committed transaction it means it
      logged a safe state of the inode's items, while if its value is not
      smaller than the id of the last committed transaction it means the inode
      state it has logged might not be safe (the concurrent task might have
      just updated last_unlink_trans but hasn't done yet the unlink operation)
      and therefore a transaction commit must be done.
      
      Test cases for xfstests follow in separate patches.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      2be63d5c
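      A minimal user-space sketch of the serialization protocol described
      above, using a pthread mutex for the inode's log_mutex and toy fields
      (not the btrfs structures): the unlink/rename side publishes
      last_unlink_trans under the log_mutex before doing the unlink, and the
      logging side logs the inode and then, still under the log_mutex, checks
      that value against the last committed transaction to decide whether a
      full commit is required. Build with cc -pthread.

        #include <pthread.h>
        #include <stdio.h>

        typedef unsigned long long u64;

        struct toy_inode {
                pthread_mutex_t log_mutex;
                u64 last_unlink_trans;
        };

        static u64 current_trans = 7;
        static u64 last_committed_trans = 6;

        /* Unlink/rename side: publish last_unlink_trans under log_mutex
         * BEFORE doing the actual unlink work. */
        static void record_unlink(struct toy_inode *inode)
        {
                pthread_mutex_lock(&inode->log_mutex);
                inode->last_unlink_trans = current_trans;
                pthread_mutex_unlock(&inode->log_mutex);
                /* ... the real removal of the name happens after this ... */
        }

        /* Logging side: log the inode first, then decide, still under
         * log_mutex, whether the logged state can be trusted. */
        static int log_inode(struct toy_inode *inode)
        {
                int need_full_commit;

                pthread_mutex_lock(&inode->log_mutex);
                /* ... copy the inode's items into the log tree here ... */
                need_full_commit =
                        inode->last_unlink_trans > last_committed_trans;
                pthread_mutex_unlock(&inode->log_mutex);
                return need_full_commit;
        }

        int main(void)
        {
                static struct toy_inode dir = {
                        .log_mutex = PTHREAD_MUTEX_INITIALIZER,
                        .last_unlink_trans = 0,
                };

                printf("before rename: full commit needed = %d\n",
                       log_inode(&dir));
                record_unlink(&dir);
                printf("after rename:  full commit needed = %d\n",
                       log_inode(&dir));
                return 0;
        }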
    • Btrfs: fix unreplayable log after snapshot delete + parent dir fsync · 1ec9a1ae
      Committed by Filipe Manana
      If we delete a snapshot, fsync its parent directory and crash/power fail
      before the next transaction commit, on the next mount when we attempt to
      replay the log tree of the root containing the parent directory we will
      fail and prevent the filesystem from mounting, which is solvable by wiping
      out the log trees with the btrfs-zero-log tool but very inconvenient as
      we will lose any data and metadata fsynced before the parent directory
      was fsynced.
      
      For example:
      
        $ mkfs.btrfs -f /dev/sdc
        $ mount /dev/sdc /mnt
        $ mkdir /mnt/testdir
        $ btrfs subvolume snapshot /mnt /mnt/testdir/snap
        $ btrfs subvolume delete /mnt/testdir/snap
        $ xfs_io -c "fsync" /mnt/testdir
        < crash / power failure and reboot >
        $ mount /dev/sdc /mnt
        mount: mount(2) failed: No such file or directory
      
      And in dmesg/syslog we get the following message and trace:
      
      [192066.361162] BTRFS info (device dm-0): failed to delete reference to snap, inode 257 parent 257
      [192066.363010] ------------[ cut here ]------------
      [192066.365268] WARNING: CPU: 4 PID: 5130 at fs/btrfs/inode.c:3986 __btrfs_unlink_inode+0x17a/0x354 [btrfs]()
      [192066.367250] BTRFS: Transaction aborted (error -2)
      [192066.368401] Modules linked in: btrfs dm_flakey dm_mod ppdev sha256_generic xor raid6_pq hmac drbg ansi_cprng aesni_intel acpi_cpufreq tpm_tis aes_x86_64 tpm ablk_helper evdev cryptd sg parport_pc i2c_piix4 psmouse lrw parport i2c_core pcspkr gf128mul processor serio_raw glue_helper button loop autofs4 ext4 crc16 mbcache jbd2 sd_mod sr_mod cdrom ata_generic virtio_scsi ata_piix libata virtio_pci virtio_ring crc32c_intel scsi_mod e1000 virtio floppy [last unloaded: btrfs]
      [192066.377154] CPU: 4 PID: 5130 Comm: mount Tainted: G        W       4.4.0-rc6-btrfs-next-20+ #1
      [192066.378875] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS by qemu-project.org 04/01/2014
      [192066.380889]  0000000000000000 ffff880143923670 ffffffff81257570 ffff8801439236b8
      [192066.382561]  ffff8801439236a8 ffffffff8104ec07 ffffffffa039dc2c 00000000fffffffe
      [192066.384191]  ffff8801ed31d000 ffff8801b9fc9c88 ffff8801086875e0 ffff880143923710
      [192066.385827] Call Trace:
      [192066.386373]  [<ffffffff81257570>] dump_stack+0x4e/0x79
      [192066.387387]  [<ffffffff8104ec07>] warn_slowpath_common+0x99/0xb2
      [192066.388429]  [<ffffffffa039dc2c>] ? __btrfs_unlink_inode+0x17a/0x354 [btrfs]
      [192066.389236]  [<ffffffff8104ec68>] warn_slowpath_fmt+0x48/0x50
      [192066.389884]  [<ffffffffa039dc2c>] __btrfs_unlink_inode+0x17a/0x354 [btrfs]
      [192066.390621]  [<ffffffff81184b55>] ? iput+0xb0/0x266
      [192066.391200]  [<ffffffffa039ea25>] btrfs_unlink_inode+0x1c/0x3d [btrfs]
      [192066.391930]  [<ffffffffa03ca623>] check_item_in_log+0x1fe/0x29b [btrfs]
      [192066.392715]  [<ffffffffa03ca827>] replay_dir_deletes+0x167/0x1cf [btrfs]
      [192066.393510]  [<ffffffffa03cccc7>] replay_one_buffer+0x417/0x570 [btrfs]
      [192066.394241]  [<ffffffffa03ca164>] walk_up_log_tree+0x10e/0x1dc [btrfs]
      [192066.394958]  [<ffffffffa03cac72>] walk_log_tree+0xa5/0x190 [btrfs]
      [192066.395628]  [<ffffffffa03ce8b8>] btrfs_recover_log_trees+0x239/0x32c [btrfs]
      [192066.396790]  [<ffffffffa03cc8b0>] ? replay_one_extent+0x50a/0x50a [btrfs]
      [192066.397891]  [<ffffffffa0394041>] open_ctree+0x1d8b/0x2167 [btrfs]
      [192066.398897]  [<ffffffffa03706e1>] btrfs_mount+0x5ef/0x729 [btrfs]
      [192066.399823]  [<ffffffff8108ad98>] ? trace_hardirqs_on+0xd/0xf
      [192066.400739]  [<ffffffff8108959b>] ? lockdep_init_map+0xb9/0x1b3
      [192066.401700]  [<ffffffff811714b9>] mount_fs+0x67/0x131
      [192066.402482]  [<ffffffff81188560>] vfs_kern_mount+0x6c/0xde
      [192066.403930]  [<ffffffffa03702bd>] btrfs_mount+0x1cb/0x729 [btrfs]
      [192066.404831]  [<ffffffff8108ad98>] ? trace_hardirqs_on+0xd/0xf
      [192066.405726]  [<ffffffff8108959b>] ? lockdep_init_map+0xb9/0x1b3
      [192066.406621]  [<ffffffff811714b9>] mount_fs+0x67/0x131
      [192066.407401]  [<ffffffff81188560>] vfs_kern_mount+0x6c/0xde
      [192066.408247]  [<ffffffff8118ae36>] do_mount+0x893/0x9d2
      [192066.409047]  [<ffffffff8113009b>] ? strndup_user+0x3f/0x8c
      [192066.409842]  [<ffffffff8118b187>] SyS_mount+0x75/0xa1
      [192066.410621]  [<ffffffff8147e517>] entry_SYSCALL_64_fastpath+0x12/0x6b
      [192066.411572] ---[ end trace 2de42126c1e0a0f0 ]---
      [192066.412344] BTRFS: error (device dm-0) in __btrfs_unlink_inode:3986: errno=-2 No such entry
      [192066.413748] BTRFS: error (device dm-0) in btrfs_replay_log:2464: errno=-2 No such entry (Failed to recover log tree)
      [192066.415458] BTRFS error (device dm-0): cleaner transaction attach returned -30
      [192066.444613] BTRFS: open_ctree failed
      
      This happens because when we are replaying the log and processing the
      directory entry pointing to the snapshot in the subvolume tree, we treat
      its btrfs_dir_item item as having a location with a key type matching
      BTRFS_INODE_ITEM_KEY, which is wrong because the type matches
      BTRFS_ROOT_ITEM_KEY and therefore must be processed differently, as the
      object id refers to a root number and not to an inode in the root
      containing the parent directory.
      
      So fix this by triggering a transaction commit if an fsync against the
      parent directory is requested after deleting a snapshot. This is the
      simplest approach for a rare use case. An alternative that avoids the
      transaction commit would require more code to explicitly delete the
      snapshot at log replay time (factoring out common code from ioctl.c:
      btrfs_ioctl_snap_destroy()), special care at fsync time to remove the
      log tree of the snapshot's root from the log root of the root of tree
      roots, amongst other steps.
      
      A test case for xfstests that triggers the issue follows.
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            _cleanup_flakey
            cd /
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
        . ./common/dmflakey
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs btrfs
        _supported_os Linux
        _require_scratch
        _require_dm_target flakey
        _require_metadata_journaling $SCRATCH_DEV
      
        rm -f $seqres.full
      
        _scratch_mkfs >>$seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create a snapshot at the root of our filesystem (mount point path), delete it,
        # fsync the mount point path, crash and mount to replay the log. This should
        # succeed and after the filesystem is mounted the snapshot should not be visible
        # anymore.
        _run_btrfs_util_prog subvolume snapshot $SCRATCH_MNT $SCRATCH_MNT/snap1
        _run_btrfs_util_prog subvolume delete $SCRATCH_MNT/snap1
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT
        _flakey_drop_and_remount
        [ -e $SCRATCH_MNT/snap1 ] && \
            echo "Snapshot snap1 still exists after log replay"
      
        # Similar scenario as above, but this time the snapshot is created inside a
        # directory and not directly under the root (mount point path).
        mkdir $SCRATCH_MNT/testdir
        _run_btrfs_util_prog subvolume snapshot $SCRATCH_MNT $SCRATCH_MNT/testdir/snap2
        _run_btrfs_util_prog subvolume delete $SCRATCH_MNT/testdir/snap2
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/testdir
        _flakey_drop_and_remount
        [ -e $SCRATCH_MNT/testdir/snap2 ] && \
            echo "Snapshot snap2 still exists after log replay"
      
        _unmount_flakey
      
        echo "Silence is golden"
        status=0
        exit
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Tested-by: Liu Bo <bo.li.liu@oracle.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      1ec9a1ae
  20. 26 Jan 2016, 1 commit
    • Btrfs: fix race between fsync and lockless direct IO writes · de0ee0ed
      Committed by Filipe Manana
      An fsync, using the fast path, can race with a concurrent lockless direct
      IO write and end up logging a file extent item that points to an extent
      that wasn't written to yet. This is because the fast fsync path collects
      ordered extents into a local list and then collects all the new extent
      maps to log file extent items based on them, while the direct IO write
      path creates the new extent map before it creates the corresponding
      ordered extent (and submitting the respective bio(s)).
      
      So fix this by making the direct IO write path create ordered extents
      before the extent maps and make the fast fsync path collect any new
      ordered extents after it collects the extent maps.
      Note that making the fsync handler call inode_dio_wait() (after acquiring
      the inode's i_mutex) would not work and lead to a deadlock when doing
      AIO, as through AIO we end up in a path where the fsync handler is called
      (through dio_aio_complete_work() -> dio_complete() -> vfs_fsync_range())
      before the inode's dio counter is decremented (inode_dio_wait() waits
      for this counter to have a value of zero).
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      de0ee0ed
  21. 26 Oct 2015, 1 commit
    • Btrfs: fix regression running delayed references when using qgroups · b06c4bf5
      Committed by Filipe Manana
      In the kernel 4.2 merge window we had big changes to the implementation
      of delayed references and qgroups which made the no_quota field of delayed
      references not used anymore. More specifically the no_quota field is not
      used anymore as of:
      
        commit 0ed4792a ("btrfs: qgroup: Switch to new extent-oriented qgroup mechanism.")
      
      Leaving the no_quota field actually prevents delayed references from
      getting merged, which in turn causes the following BUG_ON(), at
      fs/btrfs/extent-tree.c, to be hit when qgroups are enabled:
      
        static int run_delayed_tree_ref(...)
        {
           (...)
           BUG_ON(node->ref_mod != 1);
           (...)
        }
      
      This happens on a scenario like the following:
      
        1) Ref1 bytenr X, action = BTRFS_ADD_DELAYED_REF, no_quota = 1, added.
      
        2) Ref2 bytenr X, action = BTRFS_DROP_DELAYED_REF, no_quota = 0, added.
           It's not merged with Ref1 because Ref1->no_quota != Ref2->no_quota.
      
        3) Ref3 bytenr X, action = BTRFS_ADD_DELAYED_REF, no_quota = 1, added.
           It's not merged with the reference at the tail of the list of refs
           for bytenr X because the reference at the tail, Ref2 is incompatible
           due to Ref2->no_quota != Ref3->no_quota.
      
        4) Ref4 bytenr X, action = BTRFS_DROP_DELAYED_REF, no_quota = 0, added.
           It's not merged with the reference at the tail of the list of refs
           for bytenr X because the reference at the tail, Ref3 is incompatible
           due to Ref3->no_quota != Ref4->no_quota.
      
        5) We run delayed references, trigger merging of delayed references,
           through __btrfs_run_delayed_refs() -> btrfs_merge_delayed_refs().
      
        6) Ref1 and Ref3 are merged as Ref1->no_quota = Ref3->no_quota and
           all other conditions are satisfied too. So Ref1 gets a ref_mod
           value of 2.
      
        7) Ref2 and Ref4 are merged as Ref2->no_quota = Ref4->no_quota and
           all other conditions are satisfied too. So Ref2 gets a ref_mod
           value of 2.
      
        8) Ref1 and Ref2 aren't merged, because they have different values
           for their no_quota field.
      
        9) Delayed reference Ref1 is picked for running (select_delayed_ref()
           always prefers references with an action == BTRFS_ADD_DELAYED_REF).
           So run_delayed_tree_ref() is called for Ref1 which triggers the
           BUG_ON because Ref1->ref_mod != 1 (it equals 2).
      
      So fix this by removing the no_quota field, as it's not used anymore as
      of commit 0ed4792a ("btrfs: qgroup: Switch to new extent-oriented
      qgroup mechanism.").
      
      The use of no_quota was also buggy in at least two places:
      
      1) At delayed-refs.c:btrfs_add_delayed_tree_ref() - we were setting
         no_quota to 0 instead of 1 when the following condition was true:
         is_fstree(ref_root) || !fs_info->quota_enabled
      
      2) At extent-tree.c:__btrfs_inc_extent_ref() - we were attempting to
         reset a node's no_quota when the condition "!is_fstree(root_objectid)
         || !root->fs_info->quota_enabled" was true but we did it only in
         an unused local stack variable, that is, we never reset the no_quota
         value in the node itself.
      
      This fixes the remainder of problems several people have been having when
      running delayed references, mostly while a balance is running in parallel,
      on a 4.2+ kernel.
      
      Very special thanks to Stéphane Lesimple for helping debugging this issue
      and testing this fix on his multi terabyte filesystem (which took more
      than one day to balance alone, plus fsck, etc).
      
      Also, this fixes deadlock issue when using the clone ioctl with qgroups
      enabled, as reported by Elias Probst in the mailing list. The deadlock
      happens because after calling btrfs_insert_empty_item we have our path
      holding a write lock on a leaf of the fs/subvol tree and then before
      releasing the path we called check_ref() which did backref walking, when
      qgroups are enabled, and tried to read lock the same leaf. The trace for
      this case is the following:
      
        INFO: task systemd-nspawn:6095 blocked for more than 120 seconds.
        (...)
        Call Trace:
          [<ffffffff86999201>] schedule+0x74/0x83
          [<ffffffff863ef64c>] btrfs_tree_read_lock+0xc0/0xea
          [<ffffffff86137ed7>] ? wait_woken+0x74/0x74
          [<ffffffff8639f0a7>] btrfs_search_old_slot+0x51a/0x810
          [<ffffffff863a129b>] btrfs_next_old_leaf+0xdf/0x3ce
          [<ffffffff86413a00>] ? ulist_add_merge+0x1b/0x127
          [<ffffffff86411688>] __resolve_indirect_refs+0x62a/0x667
          [<ffffffff863ef546>] ? btrfs_clear_lock_blocking_rw+0x78/0xbe
          [<ffffffff864122d3>] find_parent_nodes+0xaf3/0xfc6
          [<ffffffff86412838>] __btrfs_find_all_roots+0x92/0xf0
          [<ffffffff864128f2>] btrfs_find_all_roots+0x45/0x65
          [<ffffffff8639a75b>] ? btrfs_get_tree_mod_seq+0x2b/0x88
          [<ffffffff863e852e>] check_ref+0x64/0xc4
          [<ffffffff863e9e01>] btrfs_clone+0x66e/0xb5d
          [<ffffffff863ea77f>] btrfs_ioctl_clone+0x48f/0x5bb
          [<ffffffff86048a68>] ? native_sched_clock+0x28/0x77
          [<ffffffff863ed9b0>] btrfs_ioctl+0xabc/0x25cb
        (...)
      
      The problem goes away by eliminating check_ref(), which is no longer
      needed as its purpose was to get a value for the no_quota field of
      a delayed reference (this patch removes the no_quota field as mentioned
      earlier).
      Reported-by: Stéphane Lesimple <stephane_btrfs@lesimple.fr>
      Tested-by: Stéphane Lesimple <stephane_btrfs@lesimple.fr>
      Reported-by: Elias Probst <mail@eliasprobst.eu>
      Reported-by: Peter Becker <floyd.net@gmail.com>
      Reported-by: Malte Schröder <malte@tnxip.de>
      Reported-by: Derek Dongray <derek@valedon.co.uk>
      Reported-by: Erkki Seppala <flux-btrfs@inside.org>
      Cc: stable@vger.kernel.org  # 4.2+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
      b06c4bf5
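      The merging scenario in steps 1) to 9) can be reproduced with a small
      toy model (a simplification, not the real btrfs_merge_delayed_refs()
      algorithm): each reference is a signed ref_mod, and two refs for the
      same bytenr may merge only if, under the old rule, their no_quota
      values match. With the rule in place an ADD ref survives with
      ref_mod == 2, which is what trips the BUG_ON above; with the field
      removed everything cancels out.

        #include <stdbool.h>
        #include <stdio.h>

        /* Toy delayed reference: action folded into a signed ref_mod
         * (+1 == BTRFS_ADD_DELAYED_REF, -1 == BTRFS_DROP_DELAYED_REF). */
        struct toy_ref {
                int ref_mod;
                int no_quota;
                bool dead;
        };

        /* Merge b into a if compatible; with 'use_no_quota' the old rule
         * "a->no_quota == b->no_quota" is enforced, otherwise ignored.
         * All refs here are for the same bytenr. */
        static bool try_merge(struct toy_ref *a, struct toy_ref *b,
                              bool use_no_quota)
        {
                if (a->dead || b->dead)
                        return false;
                if (use_no_quota && a->no_quota != b->no_quota)
                        return false;
                a->ref_mod += b->ref_mod;
                b->dead = true;
                if (a->ref_mod == 0)
                        a->dead = true; /* ADD and DROP cancelled out */
                return true;
        }

        static void run(bool use_no_quota)
        {
                /* Ref1..Ref4 from the scenario above. */
                struct toy_ref refs[4] = {
                        { +1, 1, false }, { -1, 0, false },
                        { +1, 1, false }, { -1, 0, false },
                };
                int i, j;

                for (i = 0; i < 4; i++)
                        for (j = i + 1; j < 4; j++)
                                try_merge(&refs[i], &refs[j], use_no_quota);

                printf("%s no_quota:", use_no_quota ? "with" : "without");
                for (i = 0; i < 4; i++)
                        if (!refs[i].dead)
                                printf("  surviving ref_mod=%d",
                                       refs[i].ref_mod);
                printf("\n");
        }

        int main(void)
        {
                run(true);  /* old code: an ADD ref survives with ref_mod == 2,
                               tripping BUG_ON(node->ref_mod != 1) */
                run(false); /* field removed: everything cancels out */
                return 0;
        }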
  22. 11 Oct 2015, 2 commits
  23. 29 Sep 2015, 1 commit
  24. 20 Aug 2015, 3 commits
    • Btrfs: fix file read corruption after extent cloning and fsync · b84b8390
      Committed by Filipe Manana
      If we partially clone one extent of a file into a lower offset of the
      file, fsync the file, power fail and then mount the fs to trigger log
      replay, we can get multiple checksum items in the csum tree that overlap
      each other and result in checksum lookup failures later. Those failures
      can make file data read requests assume a checksum value of 0, but they
      will not return an error (-EIO for example) to userspace exactly because
      the expected checksum value 0 is a special value that makes the read bio
      endio callback return success and set all the bytes of the corresponding
      page with the value 0x01 (at fs/btrfs/inode.c:__readpage_endio_check()).
      From a userspace perspective this is equivalent to file corruption
      because we are not returning what was written to the file.
      
      Details about how this can happen, and why, are included inline in the
      following reproducer test case for fstests and the comment added to
      tree-log.c.
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            _cleanup_flakey
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
        . ./common/dmflakey
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs btrfs
        _supported_os Linux
        _require_scratch
        _require_dm_flakey
        _require_cloner
        _require_metadata_journaling $SCRATCH_DEV
      
        rm -f $seqres.full
      
        _scratch_mkfs >>$seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create our test file with a single 100K extent starting at file
        # offset 800K. We fsync the file here to make sure the fsync log tree gets
        # a single csum item that covers the whole 100K extent, which causes
        # the second fsync, done after the cloning operation below, to not
        # leave in the log tree two csum items covering two sub-ranges
        # ([0, 20K[ and [20K, 100K[)) of our extent.
        $XFS_IO_PROG -f -c "pwrite -S 0xaa 800K 100K"  \
                        -c "fsync"                     \
                         $SCRATCH_MNT/foo | _filter_xfs_io
      
        # Now clone part of our extent into file offset 400K. This adds a file
        # extent item to our inode's metadata that points to the 100K extent
        # we created before, using a data offset of 20K and a data length of
        # 20K, so that it refers to the sub-range [20K, 40K[ of our original
        # extent.
        $CLONER_PROG -s $((800 * 1024 + 20 * 1024)) -d $((400 * 1024)) \
            -l $((20 * 1024)) $SCRATCH_MNT/foo $SCRATCH_MNT/foo
      
        # Now fsync our file to make sure the extent cloning is durably
        # persisted. This fsync will not add a second csum item to the log
        # tree containing the checksums for the blocks in the sub-range
        # [20K, 40K[ of our extent, because there was already a csum item in
        # the log tree covering the whole extent, added by the first fsync
        # we did before.
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo
      
        echo "File digest before power failure:"
        md5sum $SCRATCH_MNT/foo | _filter_scratch
      
        # Silently drop all writes and ummount to simulate a crash/power
        # failure.
        _load_flakey_table $FLAKEY_DROP_WRITES
        _unmount_flakey
      
        # Allow writes again, mount to trigger log replay and validate file
        # contents.
        # The fsync log replay first processes the file extent item
        # corresponding to the file offset 400K (the one which refers to the
        # [20K, 40K[ sub-range of our 100K extent) and then processes the file
        # extent item for file offset 800K. It used to happen that when
        # processing the later, it erroneously left in the csum tree 2 csum
        # items that overlapped each other, 1 for the sub-range [20K, 40K[ and
        # 1 for the whole range of our extent. This introduced a problem where
        # subsequent lookups for the checksums of blocks within the range
        # [40K, 100K[ of our extent would not find anything because lookups in
        # the csum tree ended up looking only at the smaller csum item, the
        # one covering the subrange [20K, 40K[. This made read requests assume
        # an expected checksum with a value of 0 for those blocks, which caused
        # checksum verification failure when the read operations finished.
        # However those checksum failures did not result in read requests
        # returning an error to user space (like -EIO for example) because the
        # expected checksum value had the special value 0, and in that case
        # btrfs sets all bytes of the corresponding pages to the value 0x01
        # and produces the following warning in dmesg/syslog:
        #
        #  "BTRFS warning (device dm-0): csum failed ino 257 off 917504 csum\
        #   1322675045 expected csum 0"
        #
        _load_flakey_table $FLAKEY_ALLOW_WRITES
        _mount_flakey
      
        echo "File digest after log replay:"
        # Must match the same digest it had after cloning the extent and
        # before the power failure happened.
        md5sum $SCRATCH_MNT/foo | _filter_scratch
      
        _unmount_flakey
      
        status=0
        exit
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      b84b8390
    • btrfs: Remove unused arguments in tree-log.c · 60d53eb3
      Committed by Zhaolei
      The following arguments are not used in tree-log.c:
       insert_one_name(): path, type
       wait_log_commit(): trans
       wait_for_writer(): trans
      
      This patch removes them.
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      60d53eb3
    • btrfs: Remove useless condition in start_log_trans() · 34eb2a52
      Committed by Zhaolei
      Dan Carpenter <dan.carpenter@oracle.com> reported a smatch warning
      for start_log_trans():
       fs/btrfs/tree-log.c:178 start_log_trans()
       warn: we tested 'root->log_root' before and it was 'false'
      
       fs/btrfs/tree-log.c
       147          if (root->log_root) {
       We test "root->log_root" here.
       ...
      
      Reason:
       Condition of:
       fs/btrfs/tree-log.c:178: if (!root->log_root) {
       is not necessary after commit: 7237f183
      
       It caused a smatch warning, but no functional error.
      
      Fix:
       Deleting the above condition will make smatch shut up,
       but a better way is to clean up start_log_trans()
       to remove duplicated code and make the code more readable.
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      34eb2a52
  25. 09 Aug 2015, 2 commits
    • Btrfs: fix stale dir entries after removing a link and fsync · 18aa0922
      Committed by Filipe Manana
      We have one more case where after a log tree is replayed we get
      inconsistent metadata leading to stale directory entries, due to
      some directories having entries pointing to some inode while the
      inode does not have a matching BTRFS_INODE_[REF|EXTREF]_KEY item.
      
      To trigger the problem we need to have a file with multiple hard links
      belonging to different parent directories. Then if one of those hard
      links is removed and we fsync the file using one of its other links
      that belongs to a different parent directory, we end up not logging
      the fact that the removed hard link doesn't exist anymore in the
      parent directory.
      
      Simple reproducer:
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            _cleanup_flakey
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
        . ./common/dmflakey
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs generic
        _supported_os Linux
        _require_scratch
        _require_dm_flakey
        _require_metadata_journaling $SCRATCH_DEV
      
        rm -f $seqres.full
      
        _scratch_mkfs >>$seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create our test directory and file.
        mkdir $SCRATCH_MNT/testdir
        touch $SCRATCH_MNT/foo
        ln $SCRATCH_MNT/foo $SCRATCH_MNT/testdir/foo2
        ln $SCRATCH_MNT/foo $SCRATCH_MNT/testdir/foo3
      
        # Make sure everything done so far is durably persisted.
        sync
      
        # Now we remove one of our file's hardlinks in the directory testdir.
        unlink $SCRATCH_MNT/testdir/foo3
      
        # We now fsync our file using the "foo" link, which has a parent that
        # is not the directory "testdir".
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo
      
        # Silently drop all writes and unmount to simulate a crash/power
        # failure.
        _load_flakey_table $FLAKEY_DROP_WRITES
        _unmount_flakey
      
        # Allow writes again, mount to trigger journal/log replay.
        _load_flakey_table $FLAKEY_ALLOW_WRITES
        _mount_flakey
      
        # After the journal/log is replayed we expect to not see the "foo3"
        # link anymore and we should be able to remove all names in the
        # directory "testdir" and then remove it (no stale directory entries
        # left after the journal/log replay).
        echo "Entries in testdir:"
        ls -1 $SCRATCH_MNT/testdir
      
        rm -f $SCRATCH_MNT/testdir/*
        rmdir $SCRATCH_MNT/testdir
      
        _unmount_flakey
      
        status=0
        exit
      
      The test fails with:
      
        $ ./check generic/107
        FSTYP         -- btrfs
        PLATFORM      -- Linux/x86_64 debian3 4.1.0-rc6-btrfs-next-11+
        MKFS_OPTIONS  -- /dev/sdc
        MOUNT_OPTIONS -- /dev/sdc /home/fdmanana/btrfs-tests/scratch_1
      
        generic/107 3s ... - output mismatch (see .../results/generic/107.out.bad)
          --- tests/generic/107.out	2015-08-01 01:39:45.807462161 +0100
          +++ /home/fdmanana/git/hub/xfstests/results//generic/107.out.bad
          @@ -1,3 +1,5 @@
           QA output created by 107
           Entries in testdir:
           foo2
          +foo3
          +rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/testdir': Directory not empty
          ...
          _check_btrfs_filesystem: filesystem on /dev/sdc is inconsistent \
            (see /home/fdmanana/git/hub/xfstests/results//generic/107.full)
          _check_dmesg: something found in dmesg (see .../results/generic/107.dmesg)
        Ran: generic/107
        Failures: generic/107
        Failed 1 of 1 tests
      
        $ cat /home/fdmanana/git/hub/xfstests/results//generic/107.full
        (...)
        checking fs roots
        root 5 inode 257 errors 200, dir isize wrong
      	unresolved ref dir 257 index 3 namelen 4 name foo3 filetype 1 errors 5, no dir item, no inode ref
        (...)
      
      And produces the following warning in dmesg:
      
        [127298.759064] BTRFS info (device dm-0): failed to delete reference to foo3, inode 258 parent 257
        [127298.762081] ------------[ cut here ]------------
        [127298.763311] WARNING: CPU: 10 PID: 7891 at fs/btrfs/inode.c:3956 __btrfs_unlink_inode+0x182/0x35a [btrfs]()
        [127298.767327] BTRFS: Transaction aborted (error -2)
        (...)
        [127298.788611] Call Trace:
        [127298.789137]  [<ffffffff8145f077>] dump_stack+0x4f/0x7b
        [127298.790090]  [<ffffffff81095de5>] ? console_unlock+0x356/0x3a2
        [127298.791157]  [<ffffffff8104b3b0>] warn_slowpath_common+0xa1/0xbb
        [127298.792323]  [<ffffffffa065ad09>] ? __btrfs_unlink_inode+0x182/0x35a [btrfs]
        [127298.793633]  [<ffffffff8104b410>] warn_slowpath_fmt+0x46/0x48
        [127298.794699]  [<ffffffffa065ad09>] __btrfs_unlink_inode+0x182/0x35a [btrfs]
        [127298.797640]  [<ffffffffa065be8f>] btrfs_unlink_inode+0x1e/0x40 [btrfs]
        [127298.798876]  [<ffffffffa065bf11>] btrfs_unlink+0x60/0x9b [btrfs]
        [127298.800154]  [<ffffffff8116fb48>] vfs_unlink+0x9c/0xed
        [127298.801303]  [<ffffffff81173481>] do_unlinkat+0x12b/0x1fb
        [127298.802450]  [<ffffffff81253855>] ? lockdep_sys_exit_thunk+0x12/0x14
        [127298.803797]  [<ffffffff81174056>] SyS_unlinkat+0x29/0x2b
        [127298.805017]  [<ffffffff81465197>] system_call_fastpath+0x12/0x6f
        [127298.806310] ---[ end trace bbfddacb7aaada7b ]---
        [127298.807325] BTRFS warning (device dm-0): __btrfs_unlink_inode:3956: Aborting unused transaction(No such entry).
      
      So fix this by logging all parent inodes, current and old ones, to make
      sure we do not get stale entries after log replay. A simpler approach,
      such as triggering a full transaction commit, was not used because it
      would imply a full transaction commit whenever an inode is fsynced in the
      same transaction that modified it and reloaded it after eviction (since
      its last_unlink_trans is set to the same value as its last_trans, as of
      the commit titled "Btrfs: fix stale dir entries after unlink, inode
      eviction and fsync"), and it would also make fstest generic/066 fail,
      because one of its fsyncs triggers a full commit and the next fsync would
      then no longer find the inode in the log (and therefore not remove the
      xattr).
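
      As a rough manual verification, not part of the original commit (the
      variables come from the fstests environment and are assumptions here),
      the scratch filesystem can be checked by hand after the reproducer, the
      same way the framework does; with the fix applied the check should no
      longer report the "dir isize wrong" and unresolved ref errors quoted
      above:

        # Read-only check of the scratch device once the reproducer has
        # unmounted it (btrfs check only writes when --repair is given).
        btrfs check $SCRATCH_DEV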
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      18aa0922
    • F
      Btrfs: fix stale directory entries after fsync log replay · bb53eda9
      Filipe Manana committed
      We have another case where, after an fsync log replay, we get an inode
      with a wrong link count (smaller than it should be) and a number of
      directory entries greater than its link count. This happens when we add
      a new hard link to our inode A and then fsync some other inode B, which
      has the side effect of logging the parent directory inode too. In this
      case, at log replay time we add the new hard link (the item with key
      BTRFS_INODE_REF_KEY) to our inode A when processing the parent directory,
      but we never adjust the link count of inode A. As a result we get stale
      dir entries for inode A that can never be deleted, which makes it
      impossible to remove the parent directory (as its i_size can never
      decrease back to 0).
      
      A simple reproducer for fstests that triggers this issue:
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            _cleanup_flakey
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
        . ./common/dmflakey
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs generic
        _supported_os Linux
        _require_scratch
        _require_dm_flakey
        _require_metadata_journaling $SCRATCH_DEV
      
        rm -f $seqres.full
      
        _scratch_mkfs >>$seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create our test directory and files.
        mkdir $SCRATCH_MNT/testdir
        touch $SCRATCH_MNT/testdir/foo
        touch $SCRATCH_MNT/testdir/bar
      
        # Make sure everything done so far is durably persisted.
        sync
      
        # Create one hard link for file foo and another one for file bar. After
        # that fsync only the file bar.
        ln $SCRATCH_MNT/testdir/bar $SCRATCH_MNT/testdir/bar_link
        ln $SCRATCH_MNT/testdir/foo $SCRATCH_MNT/testdir/foo_link
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/testdir/bar
      
        # Silently drop all writes on scratch device to simulate power failure.
        _load_flakey_table $FLAKEY_DROP_WRITES
        _unmount_flakey
      
        # Allow writes again and mount the fs to trigger log/journal replay.
        _load_flakey_table $FLAKEY_ALLOW_WRITES
        _mount_flakey
      
        # Now verify both our files have a link count of 2.
        echo "Link count for file foo: $(stat --format=%h $SCRATCH_MNT/testdir/foo)"
        echo "Link count for file bar: $(stat --format=%h $SCRATCH_MNT/testdir/bar)"
      
        # We should be able to remove all the links of our files in testdir, and
        # after that the parent directory should become empty and therefore
        # possible to remove it.
        rm -f $SCRATCH_MNT/testdir/*
        rmdir $SCRATCH_MNT/testdir
      
        _unmount_flakey
      
        # The fstests framework will call fsck against our filesystem which will verify
        # that all metadata is in a consistent state.
      
        status=0
        exit
      
      The test fails with:
      
       -Link count for file foo: 2
       +Link count for file foo: 1
        Link count for file bar: 2
       +rm: cannot remove '/home/fdmanana/btrfs-tests/scratch_1/testdir/foo_link': Stale file handle
       +rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/testdir': Directory not empty
       (...)
       _check_btrfs_filesystem: filesystem on /dev/sdc is inconsistent
      
      And fsck's output:
      
        (...)
        checking fs roots
        root 5 inode 258 errors 2001, no inode item, link count wrong
            unresolved ref dir 257 index 5 namelen 8 name foo_link filetype 1 errors 4, no inode ref
        Checking filesystem on /dev/sdc
        (...)
      
      So fix this by marking inodes for link count fixup at log replay time
      whenever a directory entry is replayed, if the entry was created in the
      transaction where the fsync was made and it points to a non-directory
      inode.

      This isn't a new problem/regression; the issue has existed for a long
      time, possibly since the log tree feature was added (2008).
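
      As a rough follow-up check, not part of the original commit (paths come
      from the fstests environment used above): with the fixup in place every
      name left in testdir resolves to a live inode whose link count matches
      the number of names, so a simple stat loop succeeds for all entries
      instead of failing with "Stale file handle" as in the output above:

        # List inode number and link count for every entry in the test
        # directory; a stale name would make stat fail here.
        for f in $SCRATCH_MNT/testdir/*; do
            stat --format='%n inode=%i links=%h' "$f"
        done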
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      bb53eda9
  26. 02 Jul 2015, 1 commit
    • F
      Btrfs: fix fsync after truncate when no_holes feature is enabled · a89ca6f2
      Filipe Manana committed
      When we have the no_holes feature enabled, if we truncate a file to a
      smaller size, then truncate it again to a size greater than or equal to
      its original size, and then fsync it, the log tree will not have any
      information about the hole covering the range
      [truncate_1_offset, new_file_size[. This means that if the fsync log is
      replayed, the file will keep the state it had before both truncate
      operations.
      
      Without the no_holes feature this does not happen, since when the inode
      is logged (full sync flag is set) it will find in the fs/subvol tree a
      leaf with a generation matching the current transaction id that has an
      explicit extent item representing the hole.
      
      Fix this by adding an explicit extent item representing a hole between
      the last extent and the inode's i_size if we are doing a full sync.
      
      The issue is easy to reproduce with the following test case for fstests:
      
        . ./common/rc
        . ./common/filter
        . ./common/dmflakey
      
        _need_to_be_root
        _supported_fs generic
        _supported_os Linux
        _require_scratch
        _require_dm_flakey
      
        # This test was motivated by an issue found in btrfs when the btrfs
        # no-holes feature is enabled (introduced in kernel 3.14). So enable
        # the feature if the fs being tested is btrfs.
        if [ $FSTYP == "btrfs" ]; then
            _require_btrfs_fs_feature "no_holes"
            _require_btrfs_mkfs_feature "no-holes"
            MKFS_OPTIONS="$MKFS_OPTIONS -O no-holes"
        fi
      
        rm -f $seqres.full
      
        _scratch_mkfs >>$seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create our test files and make sure everything is durably persisted.
        $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 64K"         \
                        -c "pwrite -S 0xbb 64K 61K"       \
                        $SCRATCH_MNT/foo | _filter_xfs_io
        $XFS_IO_PROG -f -c "pwrite -S 0xee 0 64K"         \
                        -c "pwrite -S 0xff 64K 61K"       \
                        $SCRATCH_MNT/bar | _filter_xfs_io
        sync
      
        # Now truncate our file foo to a smaller size (64Kb) and then truncate
        # it to the size it had before the shrinking truncate (125Kb). Then
        # fsync our file. If a power failure happens after the fsync, we expect
        # our file to have a size of 125Kb, with the first 64Kb of data having
        # the value 0xaa and the second 61Kb of data having the value 0x00.
        $XFS_IO_PROG -c "truncate 64K" \
                     -c "truncate 125K" \
                     -c "fsync" \
                     $SCRATCH_MNT/foo
      
        # Do something similar to our file bar, but the first truncation sets
        # the file size to 0 and the second truncation expands the size to the
        # double of what it was initially.
        $XFS_IO_PROG -c "truncate 0" \
                     -c "truncate 253K" \
                     -c "fsync" \
                     $SCRATCH_MNT/bar
      
        _load_flakey_table $FLAKEY_DROP_WRITES
        _unmount_flakey
      
        # Allow writes again, mount to trigger log replay and validate file
        # contents.
        _load_flakey_table $FLAKEY_ALLOW_WRITES
        _mount_flakey
      
        # We expect foo to have a size of 125Kb, the first 64Kb of data all
        # having the value 0xaa and the remaining 61Kb to be a hole (all bytes
        # with value 0x00).
        echo "File foo content after log replay:"
        od -t x1 $SCRATCH_MNT/foo
      
        # We expect bar to have a size of 253Kb and no extents (any byte read
        # from bar has the value 0x00).
        echo "File bar content after log replay:"
        od -t x1 $SCRATCH_MNT/bar
      
        status=0
        exit
      
      The expected file contents in the golden output are:
      
        File foo content after log replay:
        0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
        *
        0200000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        *
        0372000
        File bar content after log replay:
        0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        *
        0772000
      
      Without this fix, their contents are:
      
        File foo content after log replay:
        0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
        *
        0200000 bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb
        *
        0372000
        File bar content after log replay:
        0000000 ee ee ee ee ee ee ee ee ee ee ee ee ee ee ee ee
        *
        0200000 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
        *
        0372000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        *
        0772000
      
      A test case submission for fstests follows soon.
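
      As an extra hedged check, not part of the original commit (the mount
      point comes from the fstests environment): after log replay the truncated
      range of foo should show up as a hole rather than as the old 0xbb data,
      which can be inspected directly with xfs_io's fiemap command:

        # Print the extent mapping of foo; with the fix the range beyond 64K
        # is expected to be reported as a hole.
        $XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/foo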
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      a89ca6f2
  27. 01 Jul 2015, 2 commits
    • F
      Btrfs: fix fsync xattr loss in the fast fsync path · 36283bf7
      Filipe Manana committed
      After commit 4f764e51 ("Btrfs: remove deleted xattrs on fsync log
      replay"), we can end up in a situation where log replay deletes xattrs
      that were never deleted when their file was last fsynced.

      This happens in the fast fsync path (flag BTRFS_INODE_NEEDS_FULL_SYNC is
      not set in the inode) if the inode has the flag BTRFS_INODE_COPY_EVERYTHING
      set, the xattr was added in a past transaction and the leaf where the
      xattr is located was not updated (COWed or created) in the current
      transaction. In this scenario the xattr item never ends up in the log
      tree, so at log replay time the replay code deletes the xattr from the
      fs/subvol tree, as it thinks the xattr was deleted prior to the last
      fsync.
      
      Fix this by always logging all xattrs, which is the simplest and most
      reliable way to detect deleted xattrs and replay the deletes at log replay
      time.
      
      This issue is reproducible with the following test case for fstests:
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
      
        here=`pwd`
        tmp=/tmp/$$
        status=1	# failure is the default!
      
        _cleanup()
        {
            _cleanup_flakey
            rm -f $tmp.*
        }
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
        . ./common/dmflakey
        . ./common/attr
      
        # real QA test starts here
      
        # We create a lot of xattrs for a single file. Only btrfs and xfs are currently
        # able to store such a large amount of xattrs per file; other filesystems,
        # such as ext3/4 and f2fs for example, fail with ENOSPC even if we attempt to add
        # less than 1000 xattrs with very small values.
        _supported_fs btrfs xfs
        _supported_os Linux
        _need_to_be_root
        _require_scratch
        _require_dm_flakey
        _require_attrs
        _require_metadata_journaling $SCRATCH_DEV
      
        rm -f $seqres.full
      
        _scratch_mkfs >> $seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create the test file with some initial data and make sure everything is
        # durably persisted.
        $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 32k" $SCRATCH_MNT/foo | _filter_xfs_io
        sync
      
        # Add many small xattrs to our file.
        # We create such a large amount because it's needed to trigger the issue found
        # in btrfs - we need to have an amount that causes the fs to have at least 3
        # btree leaves with xattrs stored in them, and it must work on any leaf size
        # (maximum leaf/node size is 64Kb).
        num_xattrs=2000
        for ((i = 1; i <= $num_xattrs; i++)); do
            name="user.attr_$(printf "%04d" $i)"
            $SETFATTR_PROG -n $name -v "val_$(printf "%04d" $i)" $SCRATCH_MNT/foo
        done
      
        # Sync the filesystem to force a commit of the current btrfs transaction, this
        # is a necessary condition to trigger the bug on btrfs.
        sync
      
        # Now update our file's data and fsync the file.
        # After a successful fsync, if the fsync log/journal is replayed we expect to
        # see all the xattrs we added before with the same values (and the updated file
        # data of course). Btrfs used to delete some of these xattrs when it replayed
        # its fsync log/journal.
        $XFS_IO_PROG -c "pwrite -S 0xbb 8K 16K" \
                     -c "fsync" \
                     $SCRATCH_MNT/foo | _filter_xfs_io
      
        # Simulate a crash/power loss.
        _load_flakey_table $FLAKEY_DROP_WRITES
        _unmount_flakey
      
        # Allow writes again and mount. This makes the fs replay its fsync log.
        _load_flakey_table $FLAKEY_ALLOW_WRITES
        _mount_flakey
      
        echo "File content after crash and log replay:"
        od -t x1 $SCRATCH_MNT/foo
      
        echo "File xattrs after crash and log replay:"
        for ((i = 1; i <= $num_xattrs; i++)); do
            name="user.attr_$(printf "%04d" $i)"
            echo -n "$name="
            $GETFATTR_PROG --absolute-names -n $name --only-values $SCRATCH_MNT/foo
            echo
        done
      
        status=0
        exit
      
      The golden output expects all xattrs to be available, and with the correct
      values, after the fsync log is replayed.
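
      As a quicker hedged spot check than the per-xattr loop above, not part of
      the original commit ($GETFATTR_PROG and the mount point come from the
      fstests environment): all attributes can be dumped at once and counted,
      which should report the 2000 xattrs created by the test.

        # Dump all user.* xattrs of foo and count the attr_NNNN entries.
        $GETFATTR_PROG --absolute-names -d $SCRATCH_MNT/foo | grep -c '^user\.attr_'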
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      36283bf7
    • F
      Btrfs: fix fsync data loss after append write · e4545de5
      Filipe Manana committed
      If we do an append write to a file (which increases its inode's i_size)
      that does not have the flag BTRFS_INODE_NEEDS_FULL_SYNC set in its inode,
      and the previous transaction added a new hard link to the file, which sets
      the flag BTRFS_INODE_COPY_EVERYTHING in the file's inode, and then fsync
      the file, the inode's new i_size isn't logged. This has the consequence
      that after the fsync log is replayed, the file size remains what it was
      before the append write operation, which means users/applications will
      not be able to read the data that was successfully fsync'ed before.
      
      This happens because neither the inode item nor the delayed inode get
      their i_size updated when the append write is made - doing so would
      require starting a transaction in the buffered write path, something we
      intentionally avoid for performance reasons.
      
      Fix this by making sure that when the flag BTRFS_INODE_COPY_EVERYTHING is
      set the inode is logged with its current i_size (log the in-memory inode
      into the log tree).
      
      This issue is not a recent regression and is easy to reproduce with the
      following test case for fstests:
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
      
        here=`pwd`
        tmp=/tmp/$$
        status=1	# failure is the default!
      
        _cleanup()
        {
                _cleanup_flakey
                rm -f $tmp.*
        }
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
        . ./common/dmflakey
      
        # real QA test starts here
        _supported_fs generic
        _supported_os Linux
        _need_to_be_root
        _require_scratch
        _require_dm_flakey
        _require_metadata_journaling $SCRATCH_DEV
      
        _crash_and_mount()
        {
                # Simulate a crash/power loss.
                _load_flakey_table $FLAKEY_DROP_WRITES
                _unmount_flakey
                # Allow writes again and mount. This makes the fs replay its fsync log.
                _load_flakey_table $FLAKEY_ALLOW_WRITES
                _mount_flakey
        }
      
        rm -f $seqres.full
      
        _scratch_mkfs >> $seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create the test file with some initial data and then fsync it.
        # The fsync here is only needed to trigger the issue in btrfs, as it causes
        # the flag BTRFS_INODE_NEEDS_FULL_SYNC to be removed from the btrfs inode.
        $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 32k" \
                        -c "fsync" \
                        $SCRATCH_MNT/foo | _filter_xfs_io
        sync
      
        # Add a hard link to our file.
        # On btrfs this sets the flag BTRFS_INODE_COPY_EVERYTHING on the btrfs inode,
        # which is a necessary condition to trigger the issue.
        ln $SCRATCH_MNT/foo $SCRATCH_MNT/bar
      
        # Sync the filesystem to force a commit of the current btrfs transaction, this
        # is a necessary condition to trigger the bug on btrfs.
        sync
      
        # Now append more data to our file, increasing its size, and fsync the file.
        # In btrfs, because the inode flag BTRFS_INODE_COPY_EVERYTHING was set and
        # the write path did not update the inode item in the btree nor the delayed
        # inode item (an in-memory structure) in the current transaction (created by
        # the fsync handler), the fsync did not record the inode's new i_size in the
        # fsync log/journal. This made the data unavailable after the fsync
        # log/journal was replayed.
        $XFS_IO_PROG -c "pwrite -S 0xbb 32K 32K" \
                     -c "fsync" \
                     $SCRATCH_MNT/foo | _filter_xfs_io
      
        echo "File content after fsync and before crash:"
        od -t x1 $SCRATCH_MNT/foo
      
        _crash_and_mount
      
        echo "File content after crash and log replay:"
        od -t x1 $SCRATCH_MNT/foo
      
        status=0
        exit
      
      The expected file content, both before and after the crash/power failure,
      has the appended data available:
      
        0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
        *
        0100000 bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb
        *
        0200000
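
      A complementary hedged check, not part of the original commit (the mount
      point comes from the fstests environment): since the append adds 32K at
      offset 32K, the inode's i_size after log replay must be 64K, which can be
      confirmed directly with stat:

        # With the fix the replayed inode reports the post-append size.
        stat --format='%n size=%s' $SCRATCH_MNT/foo    # expect size=65536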
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      e4545de5
  28. 03 Jun 2015, 1 commit
    • L
      Btrfs: remove csum_bytes_left · 0c304304
      Liu Bo committed
      After commit 8407f553
      ("Btrfs: fix data corruption after fast fsync and writeback error"),
      during wait_ordered_extents() we wait for the ordered extent to set
      BTRFS_ORDERED_IO_DONE or BTRFS_ORDERED_IOERR, at which point we already
      have the checksum information, so we don't need to check
      (csum_bytes_left == 0) anywhere in the logging path.
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      0c304304