1. 12 Mar, 2018 - 11 commits
  2. 08 Mar, 2018 - 1 commit
  3. 07 Mar, 2018 - 1 commit
  4. 02 Mar, 2018 - 3 commits
  5. 01 Mar, 2018 - 9 commits
    • ceph: fix potential memory leak in init_caches() · 1c789249
      Authored by Chengguang Xu
      There is no cache destroy operation for ceph_file_cachep when fscache
      registration fails, so the cache is leaked on that error path.
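
      A hedged sketch of the missing cleanup, assuming the usual KMEM_CACHE()
      pairing in an init_caches()-style function; the slab flag and the other
      ceph caches created earlier are omitted for brevity:

        ceph_file_cachep = KMEM_CACHE(ceph_file_info, SLAB_MEM_SPREAD);
        if (!ceph_file_cachep)
                return -ENOMEM;

        err = ceph_fscache_register();
        if (err) {
                /* the destroy that was missing: without it the cache is
                 * leaked whenever fscache registration fails */
                kmem_cache_destroy(ceph_file_cachep);
                return err;
        }

        return 0;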
      Signed-off-by: Chengguang Xu <cgxu519@icloud.com>
      Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      1c789249
    • Btrfs: fix log replay failure after unlink and link combination · 1f250e92
      Authored by Filipe Manana
      If we have a file with 2 (or more) hard links in the same directory,
      remove one of the hard links, create a new file (or link an existing file)
      in the same directory with the name of the removed hard link, and then
      finally fsync the new file, we end up with a log that fails to replay,
      causing a mount failure.
      
      Example:
      
        $ mkfs.btrfs -f /dev/sdb
        $ mount /dev/sdb /mnt
      
        $ mkdir /mnt/testdir
        $ touch /mnt/testdir/foo
        $ ln /mnt/testdir/foo /mnt/testdir/bar
      
        $ sync
      
        $ unlink /mnt/testdir/bar
        $ touch /mnt/testdir/bar
        $ xfs_io -c "fsync" /mnt/testdir/bar
      
        <power failure>
      
        $ mount /dev/sdb /mnt
        mount: mount(2) failed: /mnt: No such file or directory
      
      When replaying the log, for that example, we also see the following in
      dmesg/syslog:
      
        [71813.671307] BTRFS info (device dm-0): failed to delete reference to bar, inode 258 parent 257
        [71813.674204] ------------[ cut here ]------------
        [71813.675694] BTRFS: Transaction aborted (error -2)
        [71813.677236] WARNING: CPU: 1 PID: 13231 at fs/btrfs/inode.c:4128 __btrfs_unlink_inode+0x17b/0x355 [btrfs]
        [71813.679669] Modules linked in: btrfs xfs f2fs dm_flakey dm_mod dax ghash_clmulni_intel ppdev pcbc aesni_intel aes_x86_64 crypto_simd cryptd glue_helper evdev psmouse i2c_piix4 parport_pc i2c_core pcspkr sg serio_raw parport button sunrpc loop autofs4 ext4 crc16 mbcache jbd2 zstd_decompress zstd_compress xxhash raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c crc32c_generic raid1 raid0 multipath linear md_mod ata_generic sd_mod virtio_scsi ata_piix libata virtio_pci virtio_ring crc32c_intel floppy virtio e1000 scsi_mod [last unloaded: btrfs]
        [71813.679669] CPU: 1 PID: 13231 Comm: mount Tainted: G        W        4.15.0-rc9-btrfs-next-56+ #1
        [71813.679669] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org 04/01/2014
        [71813.679669] RIP: 0010:__btrfs_unlink_inode+0x17b/0x355 [btrfs]
        [71813.679669] RSP: 0018:ffffc90001cef738 EFLAGS: 00010286
        [71813.679669] RAX: 0000000000000025 RBX: ffff880217ce4708 RCX: 0000000000000001
        [71813.679669] RDX: 0000000000000000 RSI: ffffffff81c14bae RDI: 00000000ffffffff
        [71813.679669] RBP: ffffc90001cef7c0 R08: 0000000000000001 R09: 0000000000000001
        [71813.679669] R10: ffffc90001cef5e0 R11: ffffffff8343f007 R12: ffff880217d474c8
        [71813.679669] R13: 00000000fffffffe R14: ffff88021ccf1548 R15: 0000000000000101
        [71813.679669] FS:  00007f7cee84c480(0000) GS:ffff88023fc80000(0000) knlGS:0000000000000000
        [71813.679669] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        [71813.679669] CR2: 00007f7cedc1abf9 CR3: 00000002354b4003 CR4: 00000000001606e0
        [71813.679669] Call Trace:
        [71813.679669]  btrfs_unlink_inode+0x17/0x41 [btrfs]
        [71813.679669]  drop_one_dir_item+0xfa/0x131 [btrfs]
        [71813.679669]  add_inode_ref+0x71e/0x851 [btrfs]
        [71813.679669]  ? __lock_is_held+0x39/0x71
        [71813.679669]  ? replay_one_buffer+0x53/0x53a [btrfs]
        [71813.679669]  replay_one_buffer+0x4a4/0x53a [btrfs]
        [71813.679669]  ? rcu_read_unlock+0x3a/0x57
        [71813.679669]  ? __lock_is_held+0x39/0x71
        [71813.679669]  walk_up_log_tree+0x101/0x1d2 [btrfs]
        [71813.679669]  walk_log_tree+0xad/0x188 [btrfs]
        [71813.679669]  btrfs_recover_log_trees+0x1fa/0x31e [btrfs]
        [71813.679669]  ? replay_one_extent+0x544/0x544 [btrfs]
        [71813.679669]  open_ctree+0x1cf6/0x2209 [btrfs]
        [71813.679669]  btrfs_mount_root+0x368/0x482 [btrfs]
        [71813.679669]  ? trace_hardirqs_on_caller+0x14c/0x1a6
        [71813.679669]  ? __lockdep_init_map+0x176/0x1c2
        [71813.679669]  ? mount_fs+0x64/0x10b
        [71813.679669]  mount_fs+0x64/0x10b
        [71813.679669]  vfs_kern_mount+0x68/0xce
        [71813.679669]  btrfs_mount+0x13e/0x772 [btrfs]
        [71813.679669]  ? trace_hardirqs_on_caller+0x14c/0x1a6
        [71813.679669]  ? __lockdep_init_map+0x176/0x1c2
        [71813.679669]  ? mount_fs+0x64/0x10b
        [71813.679669]  mount_fs+0x64/0x10b
        [71813.679669]  vfs_kern_mount+0x68/0xce
        [71813.679669]  do_mount+0x6e5/0x973
        [71813.679669]  ? memdup_user+0x3e/0x5c
        [71813.679669]  SyS_mount+0x72/0x98
        [71813.679669]  entry_SYSCALL_64_fastpath+0x1e/0x8b
        [71813.679669] RIP: 0033:0x7f7cedf150ba
        [71813.679669] RSP: 002b:00007ffca71da688 EFLAGS: 00000206
        [71813.679669] Code: 7f a0 e8 51 0c fd ff 48 8b 43 50 f0 0f ba a8 30 2c 00 00 02 72 17 41 83 fd fb 74 11 44 89 ee 48 c7 c7 7d 11 7f a0 e8 38 f5 8d e0 <0f> ff 44 89 e9 ba 20 10 00 00 eb 4d 48 8b 4d b0 48 8b 75 88 4c
        [71813.679669] ---[ end trace 83bd473fc5b4663b ]---
        [71813.854764] BTRFS: error (device dm-0) in __btrfs_unlink_inode:4128: errno=-2 No such entry
        [71813.886994] BTRFS: error (device dm-0) in btrfs_replay_log:2307: errno=-2 No such entry (Failed to recover log tree)
        [71813.903357] BTRFS error (device dm-0): cleaner transaction attach returned -30
        [71814.128078] BTRFS error (device dm-0): open_ctree failed
      
      This happens because the log has inode reference items for both inode 258
      (the first file we created) and inode 259 (the second file created), and
      when processing the reference item for inode 258, we replace the
      corresponding item in the subvolume tree (which has two names, "foo" and
      "bar") with the one in the log (which only has one name, "foo") without
      removing the corresponding dir index keys from the parent directory.
      Later, when processing the inode reference item for inode 259, which has
      a name of "bar" associated to it, we notice that dir index entries exist
      for that name and for a different inode, so we attempt to unlink that
      name, which fails because the inode reference item for inode 258 no longer
      has the name "bar" associated to it, making a call to btrfs_unlink_inode()
      fail with a -ENOENT error.
      
      Fix this by unlinking all the names in an inode reference item from a
      subvolume tree that are not present in the inode reference item found in
      the log tree, before overwriting it with the item from the log tree.
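
      A hedged, simplified sketch of that replay-side pass; the helpers
      subvol_ref_next_name() and log_ref_has_name() are hypothetical
      placeholders standing in for the actual inode ref iteration done by the
      log-replay code:

        /* Before copying the inode ref item from the log over the one in the
         * subvolume tree, drop every name that only the subvolume ref has. */
        while (subvol_ref_next_name(inode, &name, &name_len, &parent_dir)) {
                if (log_ref_has_name(log_ref, name, name_len))
                        continue;
                /* removes the name and its dir index keys from the parent */
                ret = btrfs_unlink_inode(trans, root, parent_dir, inode,
                                         name, name_len);
                if (ret)
                        return ret;
        }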
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      1f250e92
    • Btrfs: fix log replay failure after linking special file and fsync · 9a6509c4
      Authored by Filipe Manana
      If, in the same transaction, we rename a special file (fifo, character/block
      device or symbolic link), create a hard link for it with its old name and
      then sync the log, we end up with a log that can not be replayed: when
      attempting to replay it, an EEXIST error is returned and mounting the
      filesystem fails. Example scenario:
      
        $ mkfs.btrfs -f /dev/sdc
        $ mount /dev/sdc /mnt
        $ mkdir /mnt/testdir
        $ mkfifo /mnt/testdir/foo
        # Make sure everything done so far is durably persisted.
        $ sync
      
        # Create some unrelated file and fsync it, this is just to create a log
        # tree. The file must be in the same directory as our special file.
        $ touch /mnt/testdir/f1
        $ xfs_io -c "fsync" /mnt/testdir/f1
      
        # Rename our special file and then create a hard link with its old name.
        $ mv /mnt/testdir/foo /mnt/testdir/bar
        $ ln /mnt/testdir/bar /mnt/testdir/foo
      
        # Create some other unrelated file and fsync it, this is just to persist
        # the log tree which was modified by the previous rename and link
        # operations. Alternatively we could have modified file f1 and fsync it.
        $ touch /mnt/f2
        $ xfs_io -c "fsync" /mnt/f2
      
        <power failure>
      
        $ mount /dev/sdc /mnt
        mount: mount /dev/sdc on /mnt failed: File exists
      
      This happens because both the log tree and the subvolume's tree have an
      entry in the directory "testdir" with the same name, that is, there
      is one key (258 INODE_REF 257) in the subvolume tree and another one in
      the log tree (where 258 is the inode number of our special file and 257
      is the inode of the directory "testdir"). Only the data of those two
      keys differs: in the subvolume tree the index field of the inode
      reference has a value of 3, while in the log tree it has a value of 5.
      Because the same key exists in both trees but with different data (the
      index), the log replay fails with an -EEXIST error when attempting to
      replay the inode reference from the log tree.
      
      Fix this by setting the last_unlink_trans field of the inode (our special
      file) to the current transaction id when a hard link is created, as this
      forces logging the parent directory inode, solving the conflict at log
      replay time.
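
      A hedged sketch of where that assignment lands, assuming the usual shape
      of the btrfs hard-link path (btrfs_link()); treat it as illustrative
      rather than the exact hunk:

        /* After the new name has been added for the inode being linked,
         * remember the current transaction so that a later fsync in this
         * transaction also logs the parent directory. */
        BTRFS_I(inode)->last_unlink_trans = trans->transid;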
      
      A new generic test case for fstests was also submitted.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      9a6509c4
    • Btrfs: send, fix issuing write op when processing hole in no data mode · d4dfc0f4
      Authored by Filipe Manana
      When doing an incremental send of a filesystem with the no-holes feature
      enabled, we end up issuing a write operation when using the no data mode
      send flag, instead of issuing an update extent operation. Fix this by
      issuing the update extent operation instead.
      
      Trivial reproducer:
      
        $ mkfs.btrfs -f -O no-holes /dev/sdc
        $ mkfs.btrfs -f /dev/sdd
        $ mount /dev/sdc /mnt/sdc
        $ mount /dev/sdd /mnt/sdd
      
        $ xfs_io -f -c "pwrite -S 0xab 0 32K" /mnt/sdc/foobar
        $ btrfs subvolume snapshot -r /mnt/sdc /mnt/sdc/snap1
      
        $ xfs_io -c "fpunch 8K 8K" /mnt/sdc/foobar
        $ btrfs subvolume snapshot -r /mnt/sdc /mnt/sdc/snap2
      
        $ btrfs send /mnt/sdc/snap1 | btrfs receive /mnt/sdd
        $ btrfs send --no-data -p /mnt/sdc/snap1 /mnt/sdc/snap2 \
             | btrfs receive -vv /mnt/sdd
      
      Before this change the output of the second receive command is:
      
        receiving snapshot snap2 uuid=f6922049-8c22-e544-9ff9-fc6755918447...
        utimes
        write foobar, offset 8192, len 8192
        utimes foobar
        BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=f6922049-8c22-e544-9ff9-...
      
      After this change it is:
      
        receiving snapshot snap2 uuid=564d36a3-ebc8-7343-aec9-bf6fda278e64...
        utimes
        update_extent foobar: offset=8192, len=8192
        utimes foobar
        BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=564d36a3-ebc8-7343-aec9-bf6fda278e64...
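
      A hedged sketch of the decision when the send code hits a hole during an
      incremental send; send_update_extent() is the metadata-only helper the
      text above refers to, while the surrounding variables are illustrative:

        /* In the hole-handling path: with no file data requested, describe
         * the extent change instead of writing zeroes for the hole range. */
        if (sctx->flags & BTRFS_SEND_FLAG_NO_FILE_DATA)
                return send_update_extent(sctx, offset, end - offset);
        /* otherwise fall through to the existing write-based hole filling */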
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d4dfc0f4
    • btrfs: use proper endianness accessors for super_copy · 3c181c12
      Authored by Anand Jain
      The fs_info::super_copy is a byte copy of the on-disk structure and all
      members must use the accessor macros/functions to obtain the right
      value.  This was missing in update_super_roots and in sysfs readers.
      
      Moving between opposite endianness hosts will report bogus numbers in
      sysfs, and mount may fail as the root will not be restored correctly. If
      the filesystem is always used on a same endian host, this will not be a
      problem.
      
      Fix this by using the btrfs_set_super...() functions to set
      fs_info::super_copy values, and for the sysfs, use the cached
      fs_info::nodesize/sectorsize values.
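
      A hedged sketch of both sides of the fix; new_root_bytenr and
      new_root_level are placeholder values for this illustration:

        /* transaction commit side: go through the endian-aware setters when
         * writing into the byte-copy superblock */
        btrfs_set_super_generation(fs_info->super_copy, trans->transid);
        btrfs_set_super_root(fs_info->super_copy, new_root_bytenr);
        btrfs_set_super_root_level(fs_info->super_copy, new_root_level);

        /* sysfs side: report the cached native-endian values instead of
         * reading fs_info->super_copy fields directly */
        return snprintf(buf, PAGE_SIZE, "%u\n", fs_info->nodesize);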
      
      CC: stable@vger.kernel.org
      Fixes: df93589a ("btrfs: export more from FS_INFO to sysfs")
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ update changelog ]
      Signed-off-by: David Sterba <dsterba@suse.com>
      3c181c12
    • btrfs: alloc_chunk: fix DUP stripe size handling · 92e222df
      Authored by Hans van Kranenburg
      In case of using DUP, we search for enough unallocated disk space on a
      device to hold two stripes.
      
      The devices_info[ndevs-1].max_avail that holds the amount of unallocated
      space found is directly assigned to stripe_size, while it's actually
      twice the stripe size.
      
      Later on in the code, an unconditional division of stripe_size by
      dev_stripes corrects the value, but in the meantime there's a check that
      stripe_size does not exceed max_chunk_size. Since during this check
      stripe_size is twice the intended amount, the check will reduce
      stripe_size to max_chunk_size whenever the correct stripe_size to use is
      more than half of max_chunk_size.
      
      The unconditional division later tries to correct stripe_size, but will
      actually make sure we can't allocate more than half the max_chunk_size.
      
      Fix this by moving the division by dev_stripes before the max chunk size
      check, so it always contains the right value, instead of putting a duct
      tape division in further on to get it fixed again.
      
      Since in all other cases than DUP, dev_stripes is 1, this change only
      affects DUP.
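
      A hedged before/after sketch of the ordering change; variable names
      follow the description above and the rounding details are elided:

        /* before (sketch): max_avail still holds room for dev_stripes copies,
         * so for DUP the clamp below sees a doubled stripe_size */
        stripe_size = devices_info[ndevs - 1].max_avail;
        if (stripe_size * data_stripes > max_chunk_size)
                stripe_size = div_u64(max_chunk_size, data_stripes);
        stripe_size = div_u64(stripe_size, dev_stripes);  /* duct-tape division */

        /* after (sketch): divide by dev_stripes first, then apply the clamp
         * to the real per-stripe value */
        stripe_size = div_u64(devices_info[ndevs - 1].max_avail, dev_stripes);
        if (stripe_size * data_stripes > max_chunk_size)
                stripe_size = div_u64(max_chunk_size, data_stripes);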
      
      Other attempts in the past were made to fix this:
      * 37db63a4 "Btrfs: fix max chunk size check in chunk allocator" tried
      to fix the same problem, but still resulted in part of the code acting
      on a wrongly doubled stripe_size value.
      * 86db2578 "Btrfs: fix max chunk size on raid5/6" unintentionally
      broke this fix again.
      
      The real problem was already introduced with the rest of the code in
      73c5de00.
      
      The user visible result however will be that the max chunk size for DUP
      will suddenly double, while it's actually acting according to the limits
      in the code again like it was 5 years ago.
      Reported-by: Naohiro Aota <naohiro.aota@wdc.com>
      Link: https://www.spinics.net/lists/linux-btrfs/msg69752.html
      Fixes: 73c5de00 ("btrfs: quasi-round-robin for chunk allocation")
      Fixes: 86db2578 ("Btrfs: fix max chunk size on raid5/6")
      Signed-off-by: Hans van Kranenburg <hans.van.kranenburg@mendix.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ update comment ]
      Signed-off-by: David Sterba <dsterba@suse.com>
      92e222df
    • btrfs: Handle btrfs_set_extent_delalloc failure in relocate_file_extent_cluster · 765f3ceb
      Authored by Nikolay Borisov
      Essentially duplicate the error handling from the block above, which
      handles the !PageUptodate(page) case, and additionally clear
      EXTENT_BOUNDARY.
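
      A hedged sketch of the added error path; the btrfs_set_extent_delalloc()
      arguments and the other extent bits cleared by the duplicated block are
      elided here:

        if (ret) {
                unlock_page(page);
                put_page(page);
                /* same cleanup as the !PageUptodate() block above, plus
                 * EXTENT_BOUNDARY so the boundary marker is not left behind */
                clear_extent_bits(&BTRFS_I(inode)->io_tree,
                                  page_start, page_end, EXTENT_BOUNDARY);
                goto out;
        }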
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      765f3ceb
    • btrfs: handle failure of add_pending_csums · ac01f26a
      Authored by Nikolay Borisov
      add_pending_csums was added as part of the new data=ordered
      implementation in e6dcd2dc ("Btrfs: New data=ordered
      implementation"). Even back then it called btrfs_csum_file_blocks(),
      which can fail, but it never bothered handling the failure. In an ENOMEM
      situation this could lead to the filesystem failing to write the
      checksums for a particular extent without detecting it. On read this
      could lead to the filesystem erroring out due to a crc mismatch. Fix it
      by propagating failures from add_pending_csums() and handling them.
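
      A hedged sketch of the caller-side handling this implies; aborting the
      transaction is one reasonable reaction, shown here for illustration:

        ret = add_pending_csums(trans, inode, &ordered_extent->list);
        if (ret) {
                /* do not silently continue with missing checksum items */
                btrfs_abort_transaction(trans, ret);
                goto out;
        }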
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      ac01f26a
    • btrfs: use kvzalloc to allocate btrfs_fs_info · a8fd1f71
      Authored by Jeff Mahoney
      The srcu_struct in btrfs_fs_info scales in size with NR_CPUS.  On
      kernels built with NR_CPUS=8192, this can result in kmalloc failures
      that prevent mounting.
      
      There is work in progress to try to resolve this for every user of
      srcu_struct but using kvzalloc will work around the failures until
      that is complete.
      
      As an example with NR_CPUS=512 on x86_64: the overall size of
      subvol_srcu is 3460 bytes, fs_info is 6496.
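
      A hedged sketch of the allocation change; the surrounding mount-path
      code is elided:

        /* fall back to vmalloc when the NR_CPUS-scaled fs_info no longer fits
         * a kmalloc slab; the free side must then use the matching kvfree() */
        struct btrfs_fs_info *fs_info = kvzalloc(sizeof(*fs_info), GFP_KERNEL);
        if (!fs_info)
                return ERR_PTR(-ENOMEM);
        /* ... */
        kvfree(fs_info);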
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a8fd1f71
  6. 27 Feb, 2018 - 5 commits
    • xfs: fix potential memory leak in mount option parsing · 5b4c845e
      Authored by Chengguang Xu
      When a string-type mount option (e.g., logdev) is specified several
      times in a mount, the current option parsing may leak the memory of the
      earlier value. Hence, call kfree() on the previous value in this case.
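
      A hedged sketch of the pattern in the option-parsing loop, using the
      logdev example from above; the token name and the value variable are
      illustrative:

        case Opt_logdev:
                /* a repeated logdev= would otherwise leak the earlier copy;
                 * kfree() is a no-op on NULL the first time around */
                kfree(mp->m_logname);
                mp->m_logname = kstrndup(value, MAXNAMELEN, GFP_KERNEL);
                if (!mp->m_logname)
                        return -ENOMEM;
                break;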
      Signed-off-by: Chengguang Xu <cgxu519@icloud.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      5b4c845e
    • blockdev: Avoid two active bdev inodes for one device · 560e7cb2
      Authored by Jan Kara
      When blkdev_open() races with device removal and creation, it can happen
      that an unhashed bdev inode gets associated with a newly created gendisk,
      like:
      
      CPU0					CPU1
      blkdev_open()
        bdev = bd_acquire()
      					del_gendisk()
      					  bdev_unhash_inode(bdev);
      					remove device
      					create new device with the same number
        __blkdev_get()
          disk = get_gendisk()
            - gets reference to gendisk of the new device
      
      Now another blkdev_open() will not find the original 'bdev' as it got
      unhashed, so it creates a new one and associates it with the same 'disk',
      at which point problems start as we have two independent page caches for
      one device.
      
      Fix the problem by verifying that the bdev inode didn't get unhashed
      before we acquired gendisk reference. That way we make sure gendisk can
      get associated only with visible bdev inodes.
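
      A hedged sketch of that verification inside __blkdev_get(); the restart
      label and the exact placement are illustrative:

        disk = get_gendisk(bdev->bd_dev, &partno);
        if (!disk)
                goto out;
        /* the device may have been removed and recreated while we slept: an
         * unhashed bdev inode must not get bound to the new gendisk */
        if (inode_unhashed(bdev->bd_inode)) {
                put_disk_and_module(disk);
                goto restart;   /* look up the bdev inode again */
        }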
      Tested-by: Hou Tao <houtao1@huawei.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      560e7cb2
    • genhd: Fix use after free in __blkdev_get() · 89736653
      Authored by Jan Kara
      When two blkdev_open() calls race with device removal and recreation,
      __blkdev_get() can use the looked-up gendisk after it is freed:
      
      CPU0				CPU1			CPU2
      							del_gendisk(disk);
      							  bdev_unhash_inode(inode);
      blkdev_open()			blkdev_open()
        bdev = bd_acquire(inode);
          - creates and returns new inode
      				  bdev = bd_acquire(inode);
      				    - returns the same inode
        __blkdev_get(devt)		  __blkdev_get(devt)
          disk = get_gendisk(devt);
            - got structure of device going away
      							<finish device removal>
      							<new device gets
      							 created under the same
      							 device number>
      				  disk = get_gendisk(devt);
      				    - got new device structure
      				  if (!bdev->bd_openers) {
      				    does the first open
      				  }
          if (!bdev->bd_openers)
            - false
          } else {
            put_disk_and_module(disk)
              - remember this was old device - this was last ref and disk is
                now freed
          }
          disk_unblock_events(disk); -> oops
      
      Fix the problem by making sure we drop reference to disk in
      __blkdev_get() only after we are really done with it.
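
      A hedged sketch of the ordering change; only the relative order of the
      calls matters here, the surrounding code is elided:

        /* keep the reference from get_gendisk() until after the last use of
         * the disk; it used to be dropped inside the reopen branch above */
        disk_unblock_events(disk);
        put_disk_and_module(disk);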
      Reported-by: Hou Tao <houtao1@huawei.com>
      Tested-by: Hou Tao <houtao1@huawei.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      89736653
    • genhd: Add helper put_disk_and_module() · 9df6c299
      Authored by Jan Kara
      Add a proper counterpart to get_disk_and_module() -
      put_disk_and_module(). Currently it is opencoded in several places.
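
      A hedged sketch of what such a helper looks like, pairing put_disk()
      with the module reference drop; treat the body as illustrative of the
      open-coded pattern it replaces rather than the exact implementation:

        void put_disk_and_module(struct gendisk *disk)
        {
                if (disk) {
                        struct module *owner = disk->fops->owner;

                        put_disk(disk);
                        module_put(owner);
                }
        }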
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      9df6c299
    • direct-io: Fix sleep in atomic due to sync AIO · d9c10e5b
      Authored by Jan Kara
      Commit e864f395 "fs: add RWF_DSYNC aand RWF_SYNC" added an additional
      way for direct IO to become synchronous and thus trigger fsync from the
      IO completion handler. Then commit 9830f4be "fs: Use RWF_* flags for
      AIO operations" allowed these flags to be set for AIO as well. However,
      that commit forgot to update the condition checking whether the IO
      completion handling should be deferred to a workqueue, and thus AIO DIO
      with RWF_[D]SYNC set will call fsync() from IRQ context, resulting in a
      sleep in atomic context.
      
      Fix the problem by checking directly iocb flags (the same way as it is
      done in dio_complete()) instead of checking all conditions that could
      lead to IO being synchronous.
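
      A hedged sketch of the changed condition; the defer-completion plumbing
      around it is elided:

        /* decide from the iocb flags themselves (as dio_complete() does)
         * whether the completion work, including fsync, must run from a
         * workqueue rather than from the IRQ completion path */
        if (dio->is_async && iov_iter_rw(iter) == WRITE &&
            (iocb->ki_flags & IOCB_DSYNC))
                retval = dio_set_defer_completion(dio);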
      
      CC: Christoph Hellwig <hch@lst.de>
      CC: Goldwyn Rodrigues <rgoldwyn@suse.com>
      CC: stable@vger.kernel.org
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Fixes: 9830f4be
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      d9c10e5b
  7. 26 Feb, 2018 - 5 commits
  8. 23 Feb, 2018 - 5 commits