1. 17 Jan 2019, 8 commits
  2. 13 Jan 2019, 9 commits
  3. 10 Jan 2019, 21 commits
    • smb3: fix large reads on encrypted connections · ba77e8c7
      Paul Aurich authored
      commit 6d2f84eee098540ae857998fe32f29b9e2cd9613 upstream.
      
      When passing a large read to receive_encrypted_read(), ensure that the
      demultiplex_thread knows that a MID was processed.  Without this, those
      operations never complete.
      
      This is a similar issue/fix to lease break handling:
      commit 7af929d6
      ("smb3: fix lease break problem introduced by compounding")
      
      CC: Stable <stable@vger.kernel.org> # 4.19+
      Fixes: b24df3e3 ("cifs: update receive_encrypted_standard to handle compounded responses")
      Signed-off-by: Paul Aurich <paul@darkrain42.org>
      Tested-by: Yves-Alexis Perez <corsac@corsac.net>
      Signed-off-by: Steve French <stfrench@microsoft.com>
      Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      ba77e8c7
    • CIFS: Fix error mapping for SMB2_LOCK command which caused OFD lock problem · 1827d1c4
      Georgy A Bystrenin authored
      commit 9a596f5b39593414c0ec80f71b94a226286f084e upstream.
      
      While resolving a bug with locks on Samba shares, a strange behavior was
      found: when a file is locked by one node and we try to lock it from
      another node, the operation fails with errno 5 (EIO), but in that case
      errno must be set to EACCES (or EAGAIN). This does not happen when we
      try to lock the file a second time on the same node; in that case it
      returns EACCES as expected. The issue also does not reproduce with the
      SMB1 protocol (vers=1.0 in the mount options).

      Further investigation showed that the status_to_posix_error mapping
      differs between the SMB1 and SMB2+ implementations. For SMB1 the mapping
      is [NT_STATUS_LOCK_NOT_GRANTED -> ERRlock] (see fs/cifs/netmisc.c line 66),
      but for SMB2+ it is [STATUS_LOCK_NOT_GRANTED -> -EIO] (see
      fs/cifs/smb2maperror.c line 383).

      Changing the SMB2+ mapping from EIO to EACCES fixes the issue.
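
      Sketched as a one-line change to the SMB2+ status-to-errno table (the
      entry layout follows the status_to_posix_error table in
      fs/cifs/smb2maperror.c; this is an illustration, not the verbatim diff):

      static const struct status_to_posix_error smb2_error_map_table[] = {
      	/* ... */
      	/* was: {STATUS_LOCK_NOT_GRANTED, -EIO, "STATUS_LOCK_NOT_GRANTED"}, */
      	{STATUS_LOCK_NOT_GRANTED, -EACCES, "STATUS_LOCK_NOT_GRANTED"},
      	/* ... */
      };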
      
      BUG: https://bugzilla.kernel.org/show_bug.cgi?id=201971
      Signed-off-by: Georgy A Bystrenin <gkot@altlinux.org>
      Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
      CC: Stable <stable@vger.kernel.org>
      Signed-off-by: Steve French <stfrench@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      1827d1c4
    • f2fs: sanity check of xattr entry size · 5036fcd9
      Jaegeuk Kim authored
      commit 64beba0558fce7b59e9a8a7afd77290e82a22163 upstream.
      
      There is a security report showing that f2fs_getxattr() can expose the
      wrong memory region when the image is malformed, as in this example:
      
      f2fs_getxattr: entry->e_name_len: 4, size: 12288, buffer_size: 16384, len: 4
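
      A hedged sketch of the kind of bounds check this calls for, using the
      entry-walking macros from fs/f2fs/xattr.h (the helper name and error
      code are illustrative, not the exact upstream change):

      static int sanity_check_xattr_entries(void *base_addr, void *end_addr)
      {
      	struct f2fs_xattr_entry *entry;

      	list_for_each_xattr(entry, base_addr) {
      		/* after this entry there must still be room for the
      		 * 4-byte terminator inside the xattr area */
      		if ((void *)XATTR_NEXT_ENTRY(entry) + sizeof(__u32) > end_addr)
      			return -EFAULT;
      	}
      	return 0;
      }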
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      5036fcd9
    • f2fs: fix validation of the block count in sanity_check_raw_super · 58d7ab71
      Martin Blumenstingl authored
      commit 88960068f25fcc3759455d85460234dcc9d43fef upstream.
      
      Treat "block_count" from struct f2fs_super_block as 64-bit little endian
      value in sanity_check_raw_super() because struct f2fs_super_block
      declares "block_count" as "__le64".
      
      This fixes a bug where the superblock validation fails on big endian
      devices with the following error:
        F2FS-fs (sda1): Wrong segment_count / block_count (61439 > 0)
        F2FS-fs (sda1): Can't find valid F2FS filesystem in 1th superblock
        F2FS-fs (sda1): Wrong segment_count / block_count (61439 > 0)
        F2FS-fs (sda1): Can't find valid F2FS filesystem in 2th superblock
      As a result of this, the partition cannot be mounted.
      
      With this patch applied the superblock validation works fine and the
      partition can be mounted again:
        F2FS-fs (sda1): Mounted with checkpoint version = 7c84
      
      My little endian x86-64 hardware was able to mount the partition without
      this fix.
      To confirm that mounting f2fs filesystems works on big endian machines
      again I tested this on a 32-bit MIPS big endian (lantiq) device.
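
      A minimal sketch of the fix, assuming the field names of struct
      f2fs_super_block (block_count is __le64, so it needs le64_to_cpu();
      the 32-bit conversion yields 0 on big-endian CPUs for typical block
      counts).  The helper name is illustrative:

      static bool block_count_covers_segments(struct f2fs_super_block *raw_super)
      {
      	u64 block_count = le64_to_cpu(raw_super->block_count);	/* not le32_to_cpu() */
      	u32 segment_count = le32_to_cpu(raw_super->segment_count);
      	u32 log_blocks_per_seg = le32_to_cpu(raw_super->log_blocks_per_seg);

      	/* false here corresponds to the "Wrong segment_count / block_count"
      	 * error shown above */
      	return (u64)segment_count <= (block_count >> log_blocks_per_seg);
      }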
      
      Fixes: 0cfe75c5 ("f2fs: enhance sanity_check_raw_super() to avoid potential overflows")
      Cc: stable@vger.kernel.org
      Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      58d7ab71
    • f2fs: read page index before freeing · ce5b0057
      Pan Bian authored
      commit 0ea295dd853e0879a9a30ab61f923c26be35b902 upstream.
      
      The function truncate_node frees the page with f2fs_put_page. However,
      the page index is read after that. So, the patch reads the index before
      freeing the page.
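
      The pattern of the fix, as a fragment (assuming the truncate_node()
      context, where the page is still referenced at this point):

      	pgoff_t index = page->index;	/* read while we still hold the page */

      	f2fs_put_page(page, 1);		/* the page must not be touched after this */

      	invalidate_mapping_pages(NODE_MAPPING(sbi), index, index);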
      
      Fixes: bf39c00a ("f2fs: drop obsolete node page when it is truncated")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Pan Bian <bianpan2016@163.com>
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      ce5b0057
    • dax: Use non-exclusive wait in wait_entry_unlocked() · 9621ea6b
      Dan Williams authored
      commit d8a706414af4827fc0b4b1c0c631c607351938b9 upstream.
      
      get_unlocked_entry() uses an exclusive wait because it is guaranteed to
      eventually obtain the lock and follow on with an unlock+wakeup cycle.
      The wait_entry_unlocked() path does not have the same guarantee. Rather
      than open-code an extra wakeup, just switch to a non-exclusive wait.
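
      In terms of the generic wait API the switch looks roughly like this
      (fragment for illustration; waiter setup and the surrounding loop are
      elided):

      	/* before: exclusive waiter, correct only if a wake_up() is guaranteed */
      	prepare_to_wait_exclusive(wq, &ewait.wait, TASK_UNINTERRUPTIBLE);

      	/* after: non-exclusive waiter, no extra wakeup needed */
      	prepare_to_wait(wq, &ewait.wait, TASK_UNINTERRUPTIBLE);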
      
      Cc: Matthew Wilcox <willy@infradead.org>
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      9621ea6b
    • dax: Don't access a freed inode · c555772c
      Matthew Wilcox authored
      commit 55e56f06ed71d9441f3abd5b1d3c1a870812b3fe upstream.
      
      After we drop the i_pages lock, the inode can be freed at any time.
      The get_unlocked_entry() code has no choice but to reacquire the lock,
      so it can't be used here.  Create a new wait_entry_unlocked() which takes
      care not to acquire the lock or dereference the address_space in any way.
      
      Fixes: c2a7d2a1 ("filesystem-dax: Introduce dax_lock_mapping_entry()")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      c555772c
    • Btrfs: send, fix race with transaction commits that create snapshots · 9eec74b4
      Filipe Manana authored
      commit be6821f82c3cc36e026f5afd10249988852b35ea upstream.
      
      If we create a snapshot of a snapshot currently being used by a send
      operation, we can end up with send failing unexpectedly (returning
      -ENOENT error to user space for example). The following diagram shows
      how this happens.
      
                  CPU 1                                   CPU2                                CPU3
      
       btrfs_ioctl_send()
        (...)
                                           create_snapshot()
                                            -> creates snapshot of a
                                               root used by the send
                                               task
                                            btrfs_commit_transaction()
                                             create_pending_snapshot()
        __get_inode_info()
         btrfs_search_slot()
          btrfs_search_slot_get_root()
           down_read commit_root_sem
      
           get reference on eb of the
           commit root
            -> eb with bytenr == X
      
           up_read commit_root_sem
      
                                              btrfs_cow_block(root node)
                                               btrfs_free_tree_block()
                                                -> creates delayed ref to
                                                   free the extent
      
                                             btrfs_run_delayed_refs()
                                              -> runs the delayed ref,
                                                 adds extent to
                                                 fs_info->pinned_extents
      
                                             btrfs_finish_extent_commit()
                                              unpin_extent_range()
                                               -> marks extent as free
                                                  in the free space cache
      
                                            transaction commit finishes
      
                                                                             btrfs_start_transaction()
                                                                              (...)
                                                                              btrfs_cow_block()
                                                                               btrfs_alloc_tree_block()
                                                                                btrfs_reserve_extent()
                                                                                 -> allocates extent at
                                                                                    bytenr == X
                                                                                btrfs_init_new_buffer(bytenr X)
                                                                                 btrfs_find_create_tree_block()
                                                                                  alloc_extent_buffer(bytenr X)
                                                                                   find_extent_buffer(bytenr X)
                                                                                    -> returns existing eb,
                                                                                       which the send task got
      
                                                                              (...)
                                                                               -> modifies content of the
                                                                                  eb with bytenr == X
      
          -> uses an eb that now
             belongs to some other
             tree and no longer matches
             the commit root of the
             snapshot, results will be
             unpredictable
      
      The consequences of this race vary. It can lead to searches in the
      commit root performed by the send task failing unexpectedly (unable to
      find inode items, returning -ENOENT to user space, for example), or not
      failing at all because an inode item with the same number was added to
      the tree that reused the metadata extent, in which case send behaves
      incorrectly in the worst case or simply fails later for some reason.
      
      Fix this by performing a copy of the commit root's extent buffer when doing
      a search in the context of a send operation.
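
      A hedged sketch of that approach in btrfs_search_slot_get_root(): while
      commit_root_sem is held, a send-style search (search_commit_root with
      skip_locking) takes a private clone of the commit root's extent buffer
      instead of a bare reference, so later reuse of the buffer by another
      transaction cannot be observed by the search.  Error handling is elided:

      	down_read(&fs_info->commit_root_sem);
      	if (p->need_commit_sem && p->skip_locking) {
      		b = btrfs_clone_extent_buffer(root->commit_root);	/* NULL on -ENOMEM */
      	} else {
      		b = root->commit_root;
      		extent_buffer_get(b);
      	}
      	up_read(&fs_info->commit_root_sem);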
      
      CC: stable@vger.kernel.org # 4.4.x: 1fc28d8e: Btrfs: move get root out of btrfs_search_slot to a helper
      CC: stable@vger.kernel.org # 4.4.x: f9ddfd05: Btrfs: remove unused check of skip_locking
      CC: stable@vger.kernel.org # 4.4.x
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      9eec74b4
    • btrfs: run delayed items before dropping the snapshot · 6911b074
      Josef Bacik authored
      commit 0568e82dbe2510fc1fa664f58e5c997d3f1e649e upstream.
      
      With my delayed refs patches in place we started seeing a large amount
      of aborts in __btrfs_free_extent:
      
       BTRFS error (device sdb1): unable to find ref byte nr 91947008 parent 0 root 35964  owner 1 offset 0
       Call Trace:
        ? btrfs_merge_delayed_refs+0xaf/0x340
        __btrfs_run_delayed_refs+0x6ea/0xfc0
        ? btrfs_set_path_blocking+0x31/0x60
        btrfs_run_delayed_refs+0xeb/0x180
        btrfs_commit_transaction+0x179/0x7f0
        ? btrfs_check_space_for_delayed_refs+0x30/0x50
        ? should_end_transaction.isra.19+0xe/0x40
        btrfs_drop_snapshot+0x41c/0x7c0
        btrfs_clean_one_deleted_snapshot+0xb5/0xd0
        cleaner_kthread+0xf6/0x120
        kthread+0xf8/0x130
        ? btree_invalidatepage+0x90/0x90
        ? kthread_bind+0x10/0x10
        ret_from_fork+0x35/0x40
      
      This was because btrfs_drop_snapshot depends on the root not being
      modified while it's dropping the snapshot.  It will unlock the root node
      (and really every node) as it walks down the tree, only to re-lock it
      when it needs to do something.  This is a problem because if we modify
      the tree we could cow a block in our path, which frees our reference to
      that block.  Then once we get back to that shared block we'll free our
      reference to it again, and get ENOENT when trying to lookup our extent
      reference to that block in __btrfs_free_extent.
      
      This is ultimately happening because we have delayed items left to be
      processed for our deleted snapshot _after_ all of the inodes are closed
      for the snapshot.  We only run the delayed inode item if we're deleting
      the inode, and even then we do not run the delayed insertions or delayed
      removals.  These can be run at any point after our final inode does its
      last iput, which is what triggers the snapshot deletion.  We can end up
      with the snapshot deletion happening and then have the delayed items run
      on that file system, resulting in the above problem.
      
      This problem has existed forever, however my patches made it much easier
      to hit as I wake up the cleaner much more often to deal with delayed
      iputs, which made us more likely to start the snapshot dropping work
      before the transaction commits, which is when the delayed items would
      generally be run.  Before, generally speaking, we would run the delayed
      items, commit the transaction, and wakeup the cleaner thread to start
      deleting snapshots, which means we were less likely to hit this problem.
      You could still hit it if you had multiple snapshots to be deleted and
      ended up with lots of delayed items, but it was definitely harder.
      
      Fix for now by simply running all the delayed items before starting to
      drop the snapshot.  We could make this smarter in the future by making
      the delayed items per-root, and then simply drop any delayed items for
      roots that we are going to delete.  But for now just a quick and easy
      solution is the safest.
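
      The quick fix is essentially a single call near the top of
      btrfs_drop_snapshot(), before the tree walk begins (sketch; the error
      label follows that function's conventions):

      	err = btrfs_run_delayed_items(trans);
      	if (err)
      		goto out_end_trans;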
      
      CC: stable@vger.kernel.org # 4.4+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6911b074
    • Btrfs: fix fsync of files with multiple hard links in new directories · 10b04210
      Filipe Manana authored
      commit 41bd60676923822de1df2c50b3f9a10171f4338a upstream.
      
      The log tree has a long standing problem that when a file is fsync'ed we
      only check for new ancestors, created in the current transaction, by
      following only the hard link for which the fsync was issued. We follow the
      ancestors using the VFS' dget_parent() API. This means that if we create a
      new link for a file in a directory that is new (or in an any other new
      ancestor directory) and then fsync the file using an old hard link, we end
      up not logging the new ancestor, and on log replay that new hard link and
      ancestor do not exist. In some cases, involving renames, the file will not
      exist at all.
      
      Example:
      
        mkfs.btrfs -f /dev/sdb
        mount /dev/sdb /mnt
      
        mkdir /mnt/A
        touch /mnt/foo
        ln /mnt/foo /mnt/A/bar
        xfs_io -c fsync /mnt/foo
      
        <power failure>
      
      In this example after log replay only the hard link named 'foo' exists
      and directory A does not exist, which is unexpected. In other major Linux
      filesystems, such as ext4, xfs and f2fs, both hard links exist and so
      does directory A after mounting the filesystem again.
      
      Checking if any new ancestors are new and need to be logged was added in
      2009 by commit 12fcfd22 ("Btrfs: tree logging unlink/rename fixes"),
      however only for the ancestors of the hard link (dentry) for which the
      fsync was issued, instead of checking for all ancestors for all of the
      inode's hard links.
      
      So fix this by tracking the id of the last transaction where a hard link
      was created for an inode, and then on fsync fall back to a full
      transaction commit when the inode has more than one hard link and at
      least one new hard link was created in the current transaction. This is
      the simplest solution, since this is not a common use case (frequently
      adding hard links for which an ancestor was created in the current
      transaction and then fsyncing the file). If it ever becomes a common use
      case, a solution that iterates the fs/subvol btree for each hard link and
      checks whether any ancestor is new could be implemented.
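
      A hedged sketch of that mechanism; the field and variable names here
      (last_link_trans, last_committed) illustrate the approach and are not a
      verbatim copy of the patch:

      	/* in btrfs_link(), after the new directory entry has been added */
      	BTRFS_I(inode)->last_link_trans = trans->transid;

      	/* in the fsync/logging path, before logging the inode */
      	if (inode->i_nlink > 1 &&
      	    BTRFS_I(inode)->last_link_trans > last_committed) {
      		ret = 1;	/* tell the caller to fall back to a transaction commit */
      		goto end_no_trans;
      	}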
      
      This solves many unexpected scenarios reported by Jayashree Mohan and
      Vijay Chidambaram, and for which there is a new test case for fstests
      under review.
      
      Fixes: 12fcfd22 ("Btrfs: tree logging unlink/rename fixes")
      CC: stable@vger.kernel.org # 4.4+
      Reported-by: Vijay Chidambaram <vvijay03@gmail.com>
      Reported-by: Jayashree Mohan <jayashree2912@gmail.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      10b04210
    • btrfs: skip file_extent generation check for free_space_inode in run_delalloc_nocow · 7708a830
      Lu Fengqi authored
      commit 27a7ff554e8d349627a90bda275c527b7348adae upstream.
      
      The test case btrfs/001 with inode_cache mount option will encounter the
      following warning:
      
        WARNING: CPU: 1 PID: 23700 at fs/btrfs/inode.c:956 cow_file_range.isra.19+0x32b/0x430 [btrfs]
        CPU: 1 PID: 23700 Comm: btrfs Kdump: loaded Tainted: G        W  O      4.20.0-rc4-custom+ #30
        Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
        RIP: 0010:cow_file_range.isra.19+0x32b/0x430 [btrfs]
        Call Trace:
         ? free_extent_buffer+0x46/0x90 [btrfs]
         run_delalloc_nocow+0x455/0x900 [btrfs]
         btrfs_run_delalloc_range+0x1a7/0x360 [btrfs]
         writepage_delalloc+0xf9/0x150 [btrfs]
         __extent_writepage+0x125/0x3e0 [btrfs]
         extent_write_cache_pages+0x1b6/0x3e0 [btrfs]
         ? __wake_up_common_lock+0x63/0xc0
         extent_writepages+0x50/0x80 [btrfs]
         do_writepages+0x41/0xd0
         ? __filemap_fdatawrite_range+0x9e/0xf0
         __filemap_fdatawrite_range+0xbe/0xf0
         btrfs_fdatawrite_range+0x1b/0x50 [btrfs]
         __btrfs_write_out_cache+0x42c/0x480 [btrfs]
         btrfs_write_out_ino_cache+0x84/0xd0 [btrfs]
         btrfs_save_ino_cache+0x551/0x660 [btrfs]
         commit_fs_roots+0xc5/0x190 [btrfs]
         btrfs_commit_transaction+0x2bf/0x8d0 [btrfs]
         btrfs_mksubvol+0x48d/0x4d0 [btrfs]
         btrfs_ioctl_snap_create_transid+0x170/0x180 [btrfs]
         btrfs_ioctl_snap_create_v2+0x124/0x180 [btrfs]
         btrfs_ioctl+0x123f/0x3030 [btrfs]
      
      The file extent generation of the free space inode is equal to the last
      snapshot of the file root, so the inode will be passed to cow_file_range.
      But the inode was created and its extents were preallocated in
      btrfs_save_ino_cache, so there are no CoW copies on disk.
      
      The preallocated extent is not yet in the extent tree, and
      btrfs_cross_ref_exist will ignore the -ENOENT returned by
      check_committed_ref, so we can directly write the inode to the disk.
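
      A sketch of the resulting check in run_delalloc_nocow(): ordinary inodes
      still fall back to CoW when the extent's generation predates the last
      snapshot, while the free space inode is exempted:

      	if (!btrfs_is_free_space_inode(BTRFS_I(inode)) &&
      	    btrfs_file_extent_generation(leaf, fi) <=
      	    btrfs_root_last_snapshot(&root->root_item))
      		goto out_check;	/* force the CoW path for regular inodes only */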
      
      Fixes: 78d4295b ("btrfs: lift some btrfs_cross_ref_exist checks in nocow path")
      CC: stable@vger.kernel.org # 4.18+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7708a830
    • btrfs: dev-replace: go back to suspend state if another EXCL_OP is running · c1f90eb0
      Anand Jain authored
      commit 05c49e6bc1e8866ecfd674ebeeb58cdbff9145c2 upstream.
      
      In a scenario where balance and replace co-exist as below,
      
        - start balance
        - pause balance
        - start replace
        - reboot
      
      when the system restarts, the balance resumes first. The replace then
      attempts to restart but fails because the EXCL_OP lock is already held
      by the balance. In that case, put the replace back into the
      BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED state.
      
      Fixes: 010a47bd ("btrfs: add proper safety check before resuming dev-replace")
      CC: stable@vger.kernel.org # 4.18+
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      c1f90eb0
    • btrfs: dev-replace: go back to suspended state if target device is missing · 28867a52
      Anand Jain authored
      commit 0d228ece59a35a9b9e8ff0d40653234a6d90f61e upstream.
      
      At the time of a forced unmount we place the running replace into the
      BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED state, so when the system comes
      back the target device is expected to be missing.

      In that case, let the replace state remain
      BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED instead of
      BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED, as there isn't any matching scrub
      running as part of the replace.
      
      Fixes: e93c89c1 ("Btrfs: add new sources for device replace code")
      CC: stable@vger.kernel.org # 4.4+
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      28867a52
    • ext4: check for shutdown and r/o file system in ext4_write_inode() · 0cb4f655
      Theodore Ts'o authored
      commit 18f2c4fcebf2582f96cbd5f2238f4f354a0e4847 upstream.
      
      If the file system has been shut down or is read-only, then
      ext4_write_inode() needs to bail out early.
      
      Also use jbd2_complete_transaction() instead of ext4_force_commit() so
      we only force a commit if it is needed.
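
      Sketched against the ext4_write_inode() context: the new checks sit next
      to the existing PF_MEMALLOC bail-out, and the data-sync path waits for
      the transaction that carries the inode instead of forcing a commit:

      	if (WARN_ON_ONCE(current->flags & PF_MEMALLOC) ||
      	    sb_rdonly(inode->i_sb))
      		return 0;

      	if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb))))
      		return -EIO;

      	/* later, for WB_SYNC_ALL writeback with journalling enabled: */
      	err = jbd2_complete_transaction(EXT4_SB(inode->i_sb)->s_journal,
      					EXT4_I(inode)->i_datasync_tid);
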
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      0cb4f655
    • ext4: force inode writes when nfsd calls commit_metadata() · bf2fd1f9
      Theodore Ts'o authored
      commit fde872682e175743e0c3ef939c89e3c6008a1529 upstream.
      
      Some time back, nfsd switched from calling vfs_fsync() to using a new
      commit_metadata() hook in export_operations().  If the file system did
      not provide a commit_metadata() hook, it fell back to using
      sync_inode_metadata().  Unfortunately, that doesn't work on all file
      systems.  In particular, it doesn't work on ext4 due to how the inode
      gets journalled --- the VFS writeback code will not always call
      ext4_write_inode().

      So we need to provide our own ext4_nfs_commit_metadata() method which
      calls ext4_write_inode() directly.
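
      A sketch of such a hook, modeled on the described change (illustration;
      the real patch also registers it as .commit_metadata in ext4's
      export_operations and adds a tracepoint):

      static int ext4_nfs_commit_metadata(struct inode *inode)
      {
      	struct writeback_control wbc = {
      		.sync_mode = WB_SYNC_ALL
      	};

      	/* write the inode synchronously through the journal path */
      	return ext4_write_inode(inode, &wbc);
      }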
      
      Google-Bug-Id: 121195940
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      bf2fd1f9
    • ext4: avoid declaring fs inconsistent due to invalid file handles · 26366388
      Theodore Ts'o authored
      commit 8a363970d1dc38c4ec4ad575c862f776f468d057 upstream.
      
      If we receive a file handle, either from NFS or open_by_handle_at(2),
      and it points at an inode which has not been initialized, and the file
      system has metadata checksums enabled, we shouldn't try to get the
      inode, discover the checksum is invalid, and then declare the file
      system as being inconsistent.
      
      This can be reproduced by creating a test file system via "mke2fs -t
      ext4 -O metadata_csum /tmp/foo.img 8M", mounting it, cd'ing into that
      directory, and then running the following program.
      
      #define _GNU_SOURCE
      #include <fcntl.h>
      
      struct handle {
      	struct file_handle fh;
      	unsigned char fid[MAX_HANDLE_SZ];
      };
      
      int main(int argc, char **argv)
      {
      	struct handle h = {{8, 1 }, { 12, }};
      
      	open_by_handle_at(AT_FDCWD, &h.fh, O_RDONLY);
      	return 0;
      }
      
      Google-Bug-Id: 120690101
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      26366388
    • ext4: include terminating u32 in size of xattr entries when expanding inodes · 6633fcb2
      Theodore Ts'o authored
      commit a805622a757b6d7f65def4141d29317d8e37b8a1 upstream.
      
      In ext4_expand_extra_isize_ea(), we calculate the total size of the
      xattr header, plus the xattr entries so we know how much of the
      beginning part of the xattrs to move when expanding the inode extra
      size.  We need to include the terminating u32 at the end of the xattr
      entries, or else if there are uninitialized, non-zero bytes after the
      xattr entries and before the xattr values, the list of xattr entries
      won't be properly terminated.
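
      For illustration (not the literal diff), the amount of data to move can
      be computed with the entry-walking macros from fs/ext4/xattr.h, with the
      trailing zero u32 included:

      	struct ext4_xattr_entry *last = IFIRST(header);
      	size_t entry_size;

      	while (!IS_LAST_ENTRY(last))
      		last = EXT4_XATTR_NEXT(last);

      	/* distance from the first entry up to and including the terminator */
      	entry_size = (char *)last - (char *)IFIRST(header) + sizeof(__u32);
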
      Reported-by: Steve Graham <stgraham2000@gmail.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6633fcb2
    • ext4: fix EXT4_IOC_GROUP_ADD ioctl · 11bb168b
      ruippan (潘睿) authored
      commit e647e29196b7f802f8242c39ecb7cc937f5ef217 upstream.
      
      Commit e2b911c5 ("ext4: clean up feature test macros with
      predicate functions") broke the EXT4_IOC_GROUP_ADD ioctl.  This was
      not noticed since only very old versions of resize2fs (before
      e2fsprogs 1.42) use this ioctl.  However, using a new kernel with an
      enterprise Linux userspace will cause attempts to use online resize to
      fail with "No reserved GDT blocks".
      
      Fixes: e2b911c5 ("ext4: clean up feature test macros with predicate...")
      Cc: stable@kernel.org # v4.4
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: ruippan (潘睿) <ruippan@tencent.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      11bb168b
    • ext4: missing unlock/put_page() in ext4_try_to_write_inline_data() · 0d078853
      Maurizio Lombardi authored
      commit 132d00becb31e88469334e1e62751c81345280e0 upstream.
      
      In case of error, ext4_try_to_write_inline_data() should unlock
      and release the page it holds.
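
      The missing cleanup on the error path amounts to (sketch, assuming the
      page is locked and referenced at the point of failure):

      	if (ret && page) {
      		unlock_page(page);
      		put_page(page);
      	}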
      
      Fixes: f19d5870 ("ext4: add normal write support for inline data")
      Cc: stable@kernel.org # 3.8
      Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      0d078853
    • ext4: fix possible use after free in ext4_quota_enable · 0a1c177d
      Pan Bian authored
      commit 61157b24e60fb3cd1f85f2c76a7b1d628f970144 upstream.
      
      The function frees qf_inode via iput but then passes qf_inode to
      lockdep_set_quota_inode on the failure path. This may result in a
      use-after-free bug. The patch now frees qf_inode only once it is no
      longer used.
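
      The resulting ordering, sketched from the description above (the
      reference is dropped only after the failure path is done with qf_inode):

      	err = dquot_enable(qf_inode, type, format_id, flags);
      	if (err)
      		lockdep_set_quota_inode(qf_inode, I_DATA_SEM_NORMAL);
      	iput(qf_inode);		/* previously this ran before the lockdep reset */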
      
      Fixes: daf647d2 ("ext4: add lockdep annotations for i_data_sem")
      Cc: stable@kernel.org # 4.6
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Pan Bian <bianpan2016@163.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      0a1c177d
    • ext4: add ext4_sb_bread() to disambiguate ENOMEM cases · b878c8a7
      Theodore Ts'o authored
      commit fb265c9cb49e2074ddcdd4de99728aefdd3b3592 upstream.
      
      Today, when sb_bread() returns NULL, this can either be because of an
      I/O error or because the system failed to allocate the buffer.  Since
      it's an old interface, changing it would require changing many call
      sites.
      
      So instead we create our own ext4_sb_bread(), which also allows us to
      set the REQ_META flag.
      
      Also fixed a problem in the xattr code where a NULL return in a
      function could also mean that the xattr was not found, which could
      lead to the wrong error getting returned to userspace.
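
      A hedged sketch of a helper with these semantics, modeled on the
      described interface (shown for illustration):

      struct buffer_head *ext4_sb_bread(struct super_block *sb, sector_t block,
      				  int op_flags)
      {
      	struct buffer_head *bh = sb_getblk(sb, block);

      	if (bh == NULL)
      		return ERR_PTR(-ENOMEM);	/* allocation failure, not I/O error */
      	if (buffer_uptodate(bh))
      		return bh;
      	ll_rw_block(REQ_OP_READ, REQ_META | op_flags, 1, &bh);
      	wait_on_buffer(bh);
      	if (buffer_uptodate(bh))
      		return bh;
      	put_bh(bh);
      	return ERR_PTR(-EIO);			/* the read itself failed */
      }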
      
      Fixes: ac27a0ec ("ext4: initial copy of files from ext3")
      Cc: stable@kernel.org # 2.6.19
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b878c8a7
  4. 29 Dec 2018, 2 commits
    • proc/sysctl: don't return ENOMEM on lookup when a table is unregistering · 6bb41321
      Ivan Delalande authored
      commit ea5751ccd665a2fd1b24f9af81f6167f0718c5f6 upstream.
      
      proc_sys_lookup can fail with ENOMEM instead of ENOENT when the
      corresponding sysctl table is being unregistered. In our case we see
      this upon opening /proc/sys/net/*/conf files while network interfaces
      are being deleted, which confuses our configuration daemon.
      
      The problem was successfully reproduced and this fix tested on v4.9.122
      and v4.20-rc6.
      
      v2: return ERR_PTRs in all cases when proc_sys_make_inode fails instead
      of mixing them with NULL. Thanks Al Viro for the feedback.
      
      Fixes: ace0c791 ("proc/sysctl: Don't grab i_lock under sysctl_lock.")
      Cc: stable@vger.kernel.org
      Signed-off-by: Ivan Delalande <colona@arista.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6bb41321
    • ubifs: Handle re-linking of inodes correctly while recovery · 07364588
      Richard Weinberger authored
      commit e58725d51fa8da9133f3f1c54170aa2e43056b91 upstream.
      
      UBIFS's recovery code strictly assumes that a deleted inode will never
      come back, therefore it removes all data which belongs to that inode as
      soon as it encounters an inode with link count 0 in the replay list.
      Before O_TMPFILE this assumption was perfectly fine. With O_TMPFILE it
      can lead to data loss upon a power-cut.
      
      Consider a journal with entries like:
      0: inode X (nlink = 0) /* O_TMPFILE was created */
      1: data for inode X /* Someone writes to the temp file */
      2: inode X (nlink = 0) /* inode was changed, xattr, chmod, … */
      3: inode X (nlink = 1) /* inode was re-linked via linkat() */
      
      Upon replay of entry #2, UBIFS will drop all data that belongs to inode
      X, which will lead to an empty file after mounting.
      
      As a solution for this problem, scan the replay list for a re-link entry
      before dropping data.
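
      A hedged sketch of that scan; the helper name is illustrative and the
      fields follow UBIFS's replay_entry list in fs/ubifs/replay.c:

      static bool inode_relinked_later(struct ubifs_info *c, struct replay_entry *rino)
      {
      	struct replay_entry *r = rino;

      	/* look at replay entries that come after the nlink == 0 entry */
      	list_for_each_entry_continue(r, &c->replay_list, list) {
      		if (key_type(c, &r->key) == UBIFS_INO_KEY &&
      		    key_inum(c, &r->key) == key_inum(c, &rino->key) &&
      		    !r->deletion)
      			return true;	/* a later journal entry restores the link */
      	}
      	return false;
      }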
      
      Fixes: 474b9370 ("ubifs: Implement O_TMPFILE")
      Cc: stable@vger.kernel.org
      Cc: Russell Senior <russell@personaltelco.net>
      Cc: Rafał Miłecki <zajec5@gmail.com>
      Reported-by: Russell Senior <russell@personaltelco.net>
      Reported-by: Rafał Miłecki <zajec5@gmail.com>
      Tested-by: Rafał Miłecki <rafal@milecki.pl>
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      07364588