- 20 Sep 2019, 1 commit
-
-
By Matthew Bobrowski

Modify the calling convention for the iomap_dio_rw ->end_io() callback. Rather than passing either dio->error or dio->size as the 'size' argument, pass both dio->error and dio->size separately. When an error occurs during a write, the ->end_io() callback currently cannot determine whether any blocks were allocated beyond the current EOF and data was subsequently written to them, so it cannot judge whether to take the truncate-failed-write path. Having both dio->error and dio->size available allows such checks to be performed within the callback.

Signed-off-by: Matthew Bobrowski <mbobrowski@mbobrowski.org>
[hch: minor cleanups]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
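A hedged sketch of the split convention, assuming the callback keeps its kiocb and flags arguments; the handler body and the fs_truncate_failed_write() helper are illustrative, not the actual implementation:

```c
/* Error and size now arrive as separate arguments. */
typedef int (iomap_dio_end_io_t)(struct kiocb *iocb, ssize_t size,
				 int error, unsigned int flags);

/* Hypothetical filesystem handler showing why both values are needed. */
static int fs_dio_end_io(struct kiocb *iocb, ssize_t size, int error,
			 unsigned int flags)
{
	if (error) {
		/*
		 * Even on error, 'size' tells us how far the write got,
		 * so blocks allocated beyond EOF can be truncated away.
		 */
		return fs_truncate_failed_write(iocb, size); /* hypothetical */
	}
	return 0;
}
```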
-
- 26 Jul 2019, 5 commits
-
-
By Al Viro

We need to drop everything we remove from the tree, whether mnt_has_parent() is true or not. Usually the bug manifests as a slow memory leak (a leaked struct mount for initramfs); it becomes much more visible with mount_subtree() users such as btrfs. There we leak a struct mount for the btrfs superblock being mounted, which prevents fs shutdown on subsequent umount.

Fixes: 56cbb429 ("switch the remnants of releasing the mountpoint away from fs_pin")
Reported-by: Nikolay Borisov <nborisov@suse.com>
Tested-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Naohiro Aota

btrfs_lock_and_flush_ordered_range() loads the given "*cached_state" into a local cachedp, which in general is NULL. lock_extent_bits() then updates "cachedp", but the update never propagates back to the caller. The caller thus still sees its "cached_state" as NULL and never frees the state allocated under btrfs_lock_and_flush_ordered_range(). As a result, we see a massive state leak with e.g. fstests btrfs/005. Fix this bug by properly handling the pointers.

Fixes: bd80d94e ("btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range")
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
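The bug pattern in miniature, as a self-contained sketch; every name here is illustrative, only the btrfs functions named above are real:

```c
#include <stdio.h>
#include <stdlib.h>

static void alloc_state(int **slot)
{
	*slot = malloc(sizeof(int));	/* stands in for lock_extent_bits() */
}

static void lock_range_buggy(int **cached)
{
	int *local = *cached;		/* usually NULL on entry */

	alloc_state(&local);		/* allocation lands in 'local' only */
	/* BUG: missing  *cached = local;  so the caller never sees (or
	 * frees) the allocated state - the leak described above */
}

static void lock_range_fixed(int **cached)
{
	int *local = *cached;

	alloc_state(&local);
	*cached = local;		/* propagate back to the caller */
}

int main(void)
{
	int *cached = NULL;

	lock_range_fixed(&cached);	/* with _buggy, 'cached' stays NULL */
	printf("caller sees %p and can free it\n", (void *)cached);
	free(cached);
	return 0;
}
```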
-
By Gustavo A. R. Silva

In preparation for enabling -Wimplicit-fallthrough, mark the switch cases where we expect to fall through. This patch fixes the following warnings (warning level 3, i.e. -Wimplicit-fallthrough=3, was used):

  fs/afs/fsclient.c: In function ‘afs_deliver_fs_fetch_acl’:
  fs/afs/fsclient.c:2199:19: warning: this statement may fall through [-Wimplicit-fallthrough=]
     call->unmarshall++;
     ~~~~~~~~~~~~~~~~^~
  fs/afs/fsclient.c:2202:2: note: here
     case 1:
     ^~~~
  fs/afs/fsclient.c:2216:19: warning: this statement may fall through [-Wimplicit-fallthrough=]
     call->unmarshall++;
     ~~~~~~~~~~~~~~~~^~
  fs/afs/fsclient.c:2219:2: note: here
     case 2:
     ^~~~
  fs/afs/fsclient.c:2225:19: warning: this statement may fall through [-Wimplicit-fallthrough=]
     call->unmarshall++;
     ~~~~~~~~~~~~~~~~^~
  fs/afs/fsclient.c:2228:2: note: here
     case 3:
     ^~~~

This patch is part of the ongoing effort to enable -Wimplicit-fallthrough.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
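For illustration, a self-contained sketch of the annotation style these patches add; the state machine only mirrors the shape of the afs unmarshalling code:

```c
#include <stdio.h>

/* With -Wimplicit-fallthrough=3, the comment below is recognised as a
 * deliberate fall-through marker and silences the warning. */
static void deliver(int *unmarshall)
{
	switch (*unmarshall) {
	case 0:
		puts("unmarshal the header");
		(*unmarshall)++;
		/* Fall through */
	case 1:
		puts("unmarshal the payload");
		(*unmarshall)++;
		break;
	default:
		break;
	}
}

int main(void)
{
	int state = 0;

	deliver(&state);	/* prints both steps: case 0 falls into 1 */
	return 0;
}
```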
-
By Gustavo A. R. Silva

In preparation for enabling -Wimplicit-fallthrough, mark the switch cases where we expect to fall through. This patch fixes the following warnings (warning level 3, i.e. -Wimplicit-fallthrough=3, was used):

  fs/afs/yfsclient.c: In function ‘yfs_deliver_fs_fetch_opaque_acl’:
  fs/afs/yfsclient.c:1984:19: warning: this statement may fall through [-Wimplicit-fallthrough=]
     call->unmarshall++;
     ~~~~~~~~~~~~~~~~^~
  fs/afs/yfsclient.c:1987:2: note: here
     case 1:
     ^~~~
  fs/afs/yfsclient.c:2005:19: warning: this statement may fall through [-Wimplicit-fallthrough=]
     call->unmarshall++;
     ~~~~~~~~~~~~~~~~^~
  fs/afs/yfsclient.c:2008:2: note: here
     case 2:
     ^~~~
  fs/afs/yfsclient.c:2014:19: warning: this statement may fall through [-Wimplicit-fallthrough=]
     call->unmarshall++;
     ~~~~~~~~~~~~~~~~^~
  fs/afs/yfsclient.c:2017:2: note: here
     case 3:
     ^~~~
  fs/afs/yfsclient.c:2035:19: warning: this statement may fall through [-Wimplicit-fallthrough=]
     call->unmarshall++;
     ~~~~~~~~~~~~~~~~^~
  fs/afs/yfsclient.c:2038:2: note: here
     case 4:
     ^~~~
  fs/afs/yfsclient.c:2047:19: warning: this statement may fall through [-Wimplicit-fallthrough=]
     call->unmarshall++;
     ~~~~~~~~~~~~~~~~^~
  fs/afs/yfsclient.c:2050:2: note: here
     case 5:
     ^~~~

Also, fix some commenting style issues.

This patch is part of the ongoing effort to enable -Wimplicit-fallthrough.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
-
By Jens Axboe

Daniel reports that when testing an http server that uses io_uring to poll for incoming connections, it sometimes hard crashes. This is due to an uninitialized list member in the io_uring request. Normally this doesn't trigger, and none of the test cases caught it.

Reported-by: Daniel Kozak <kozzi11@gmail.com>
Tested-by: Daniel Kozak <kozzi11@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 25 Jul 2019, 4 commits
-
-
By Nikolay Borisov

Commit 06297d8c ("btrfs: switch extent_buffer blocking_writers from atomic to int") changed the type of blocking_writers but forgot to adjust the relevant code in btrfs_tree_unlock by converting the smp_mb__after_atomic to a full smp_mb. This opened up the possibility of a deadlock due to reordering of the store to blocking_writers against the check/wakeup of the waiter. This particular lockup is explained in a comment above the waitqueue_active() function. Fix it by converting the memory barrier to a full smp_mb, accounting for the fact that blocking_writers is a plain integer.

Fixes: 06297d8c ("btrfs: switch extent_buffer blocking_writers from atomic to int")
Tested-by: Johannes Thumshirn <jthumshirn@suse.com>
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
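A sketch of the pattern in question, simplified from the btrfs unlock path; the field names follow the extent_buffer locking code of that era, but this is illustrative rather than the exact fix:

```c
static void btrfs_tree_unlock_sketch(struct extent_buffer *eb)
{
	eb->blocking_writers = 0;	/* now a plain int store */
	/*
	 * smp_mb__after_atomic() is a no-op after a plain store, so a
	 * full smp_mb() is needed to keep the store above from being
	 * reordered past the waitqueue_active() check below (see the
	 * comment above waitqueue_active() for the lockup scenario).
	 */
	smp_mb();
	if (waitqueue_active(&eb->write_lock_wq))
		wake_up(&eb->write_lock_wq);
}
```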
-
By Jann Horn

When going through execve(), zero out the NUMA fault statistics instead of freeing them. During execve the task is reachable through procfs and the scheduler, so a concurrent /proc/*/sched reader can read data from a freed ->numa_faults allocation (confirmed by KASAN) and write it back to userspace. I believe a use-after-free read could also occur through a race between a NUMA fault and execve(): task_numa_fault() can lead to task_numa_compare(), which invokes task_weight() on the currently running task of a different CPU. Another way to fix this would be to make ->numa_faults RCU-managed or to add extra locking, but it seems easier to wipe the NUMA fault statistics on execve.

Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Fixes: 82727018 ("sched/numa: Call task_numa_free() from do_execve()")
Link: https://lkml.kernel.org/r/20190716152047.14424-1-jannh@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
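A sketch of the zero-instead-of-free idea; the 'final' flag and the numa_faults_size() helper are assumptions for illustration, and the real patch may plumb this differently:

```c
void task_numa_free_sketch(struct task_struct *p, bool final)
{
	if (final) {
		/* task is really going away: no /proc reader can race */
		kfree(p->numa_faults);
		p->numa_faults = NULL;
	} else {
		/* execve(): task is still reachable via procfs and the
		 * scheduler, so wipe in place instead of freeing */
		memset(p->numa_faults, 0, numa_faults_size(p));
	}
}
```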
-
By Masahiro Yamada

Detected by:

  $ ./scripts/spdxcheck.py
  fs/iomap/Makefile: 1:27 Invalid License ID: GPL-2.0-or-newer

(GPL-2.0-or-newer is not a valid SPDX license identifier; the valid spelling is GPL-2.0-or-later.)

Fixes: 1c230208 ("iomap: start moving code to fs/iomap/")
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Linus Torvalds

It turns out that 'access()' (and 'faccessat()') can cause a lot of RCU work because they install a temporary credential that gets allocated and freed for each system call. The allocation and freeing overhead is mostly benign, but because credentials can be accessed under the RCU read lock, the freeing involves an RCU grace period. That is not a huge deal normally, but if you have a lot of access() calls it causes a fair amount of secondary damage: instead of a nice alloc/free pattern that hits hot per-CPU slab caches, you have all those delayed frees, and on big machines with hundreds of cores the RCU overhead can end up being enormous.

But it turns out that all of this is entirely unnecessary. Exactly because access() only installs the credential as the thread-local subjective credential, the temporary cred pointer doesn't actually need to be RCU-freed at all. Once we're done using it, we can just free it synchronously and avoid all the RCU overhead.

So add a 'non_rcu' flag to 'struct cred', which can be set by users that know they only use the cred in non-RCU context (there are other potential users for this). We can make it a union with the RCU freeing list head that we need for the RCU case, so this doesn't need any extra storage.

Note that this also makes 'get_current_cred()' clear the new non_rcu flag, in case we have filesystems that take a long-term reference to the cred and then expect the RCU-delayed freeing afterwards. It's not entirely clear that this is required, but it makes for clear semantics: the subjective cred remains non-RCU as long as you only access it synchronously using the thread-local accessors, but you _can_ use it as a generic cred if you want to.

It is possible that we should just remove the RCU markings for ->cred entirely. Only ->real_cred is really supposed to be accessed through RCU, and the long-term cred copies that nfs uses might want to explicitly re-enable RCU freeing if required, rather than have get_current_cred() do it implicitly. But this is a "minimal semantic changes" change for the immediate problem.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Eric Dumazet <edumazet@google.com>
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Jan Glauber <jglauber@marvell.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Jayachandran Chandrasekharan Nair <jnair@marvell.com>
Cc: Greg KH <greg@kroah.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
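A sketch of the storage trick described above; the union follows the commit description, while the free-path function is a simplified illustration:

```c
struct cred {
	/* ... */
	union {
		int non_rcu;		/* can we skip RCU deletion? */
		struct rcu_head rcu;	/* RCU deletion hook */
	};
};

/* Simplified free path: synchronous when the cred never escaped the
 * thread-local subjective slot, RCU-deferred otherwise. */
static void release_cred_sketch(struct cred *cred)
{
	if (cred->non_rcu)
		put_cred_rcu(&cred->rcu);		/* free right away */
	else
		call_rcu(&cred->rcu, put_cred_rcu);	/* after grace period */
}
```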
-
- 22 Jul 2019, 3 commits
-
-
By Zhengyuan Liu

We are using PAGE_SIZE as the unit to determine whether the total len in async_list has exceeded max_pages, which is unfair to smaller IO sizes. For example, with 1k-size IO streams we will never exceed max_pages, since len >>= PAGE_SHIFT always yields zero. So account in original bytes to make it accurate.

Signed-off-by: Zhengyuan Liu <liuzhengyuan@kylinos.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
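The rounding problem in one self-contained demo: with 1 KiB requests and 4 KiB pages, len >> PAGE_SHIFT is always zero, so a page-based counter never grows while a byte-based one stays accurate:

```c
#include <stdio.h>

#define PAGE_SHIFT 12			/* 4 KiB pages */

int main(void)
{
	unsigned long pages = 0, bytes = 0;

	for (int i = 0; i < 1000; i++) {
		unsigned long len = 1024;	/* 1 KiB per request */

		pages += len >> PAGE_SHIFT;	/* 1024 >> 12 == 0 */
		bytes += len;			/* accurate */
	}
	printf("pages=%lu (stuck), bytes=%lu\n", pages, bytes);
	return 0;
}
```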
-
By Jens Axboe

Hrvoje reports that when a large fixed buffer is registered and IO is done to the latter pages of said buffer, the IO submission time is much worse:

  reading to the start of the buffer: 11238 ns
  reading to the end of the buffer: 1039879 ns

In fact, it's worse by two orders of magnitude. The reason is how io_uring sets up the iov_iter: we point the iter at the first bvec and then use iov_iter_advance() to fast-forward to the offset within that buffer that we need. However, that is abysmally slow, as it entails iterating the bvecs that we set up as part of buffer registration. There's really no need to use this generic helper: we know it's a BVEC-type iterator, and we also know that each bvec is PAGE_SIZE in size, apart from possibly the first and last. Hence we can just use a shift on the offset to find the right index, and then adjust the iov_iter appropriately. After this fix, the timings are:

  reading to the start of the buffer: 10135 ns
  reading to the end of the buffer: 1377 ns

Or about a 755x improvement for the tail page.

Reported-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
Tested-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
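A sketch of the shift-based fast-forward described above; the imu/iter naming is assumed, so treat this as an approximation of the real io_uring change rather than a quote of it:

```c
/* We set this iterator up ourselves, so we know:
 *   1) it is a BVEC iterator, and
 *   2) every bvec is PAGE_SIZE except possibly the first and last.
 */
if (offset) {
	const struct bio_vec *bvec = imu->bvec;

	if (offset <= bvec->bv_len) {
		iov_iter_advance(iter, offset);	/* cheap within one bvec */
	} else {
		unsigned long seg_skip;

		offset -= bvec->bv_len;		/* skip the first bvec */
		seg_skip = 1 + (offset >> PAGE_SHIFT);

		iter->bvec = bvec + seg_skip;
		iter->nr_segs -= seg_skip;
		iter->count -= bvec->bv_len + offset;
		iter->iov_offset = offset & ~PAGE_MASK;
	}
}
```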
-
By Jens Axboe

A caller is supposed to pass in REQ_NOWAIT if we can't block for any given operation, but O_DIRECT for block devices just ignores this. Hence we'll block for various resource shortages on the block layer side, like having to wait for requests. Use the new REQ_NOWAIT_INLINE to ask for this error to be returned inline, so we can handle it appropriately and return -EAGAIN to the caller.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 19 Jul 2019, 13 commits
-
-
By Matteo Croce

In the sysctl code the proc_dointvec_minmax() function is often used to validate that a user-supplied value lies within an allowed range. This function uses the extra1 and extra2 members of struct ctl_table as the minimum and maximum allowed values. When declaring a sysctl handler, every source file defines some read-only variables containing just an integer, whose addresses are assigned to the extra1 and extra2 members so that the range is enforced. The special values 0, 1 and INT_MAX are very often used as range boundaries, leading to duplicated variables like zero=0, one=1, int_max=INT_MAX across source files:

  $ git grep -E '\.extra[12].*&(zero|one|int_max)' | wc -l
  248

Add a const int array containing the most commonly used values, plus some macros to refer more easily to the correct array member, and use them instead of creating a local copy in every object file. This is the bloat-o-meter output comparing the old and new binaries compiled with the default Fedora config:

  # scripts/bloat-o-meter -d vmlinux.o.old vmlinux.o
  add/remove: 2/2 grow/shrink: 0/2 up/down: 24/-188 (-164)
  Data                     old     new   delta
  sysctl_vals                -      12     +12
  __kstrtab_sysctl_vals      -      12     +12
  max                       14      10      -4
  int_max                   16       -     -16
  one                       68       -     -68
  zero                     128      28    -100
  Total: Before=20583249, After=20583085, chg -0.00%

[mcroce@redhat.com: tipc: remove two unused variables]
Link: http://lkml.kernel.org/r/20190530091952.4108-1-mcroce@redhat.com
[akpm@linux-foundation.org: fix net/ipv6/sysctl_net_ipv6.c]
[arnd@arndb.de: proc/sysctl: make firmware loader table conditional]
Link: http://lkml.kernel.org/r/20190617130014.1713870-1-arnd@arndb.de
[akpm@linux-foundation.org: fix fs/eventpoll.c]
Link: http://lkml.kernel.org/r/20190430180111.10688-1-mcroce@redhat.com
Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Aaron Tomlin <atomlin@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
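A sketch of the shared-constant scheme; the array values and macro names follow the commit text, while the ctl_table entry is a hypothetical usage example:

```c
const int sysctl_vals[] = { 0, 1, INT_MAX };

#define SYSCTL_ZERO	((void *)&sysctl_vals[0])
#define SYSCTL_ONE	((void *)&sysctl_vals[1])
#define SYSCTL_INT_MAX	((void *)&sysctl_vals[2])

/* Instead of a per-file 'static int zero/one', a table entry can now
 * reference the shared constants ("some_knob" is hypothetical): */
static struct ctl_table example_table[] = {
	{
		.procname	= "some_knob",
		.data		= &some_knob,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,		/* minimum */
		.extra2		= SYSCTL_ONE,		/* maximum */
	},
	{ }
};
```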
-
By Keith Busch

migrate_page_move_mapping() doesn't use the mode argument. Remove it and update callers accordingly.

Link: http://lkml.kernel.org/r/20190508210301.8472-1-keith.busch@intel.com
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Yang Shi

Commit 7635d9cb ("mm, thp, proc: report THP eligibility for each vma") introduced the THPeligible bit in a process's smaps. But when checking eligibility for a shmem vma, __transparent_hugepage_enabled() is called and overrides the result from shmem_huge_enabled(), so the anonymous vma's THP setting can override shmem's. For example, running a simple test which creates THP for shmem but has anonymous THP disabled, reading the process's smaps may show:

  7fc92ec00000-7fc92f000000 rw-s 00000000 00:14 27764 /dev/shm/test
  Size:            4096 kB
  ... [snip] ...
  ShmemPmdMapped:  4096 kB
  ... [snip] ...
  THPeligible:     0

while /proc/meminfo does show THP allocated and PMD-mapped:

  ShmemHugePages:  4096 kB
  ShmemPmdMapped:  4096 kB

This doesn't make much sense: shmem objects should be treated separately from anonymous THP. Calling shmem_huge_enabled() with a check of MMF_DISABLE_THP is good enough, and we can skip the stack and dax vma checks since we already know the vma is shmem. Also check whether the vma is suitable for THP by calling transhuge_vma_suitable(), and make a minor fix to the smaps output format and documentation.

Link: http://lkml.kernel.org/r/1560401041-32207-3-git-send-email-yang.shi@linux.alibaba.com
Fixes: 7635d9cb ("mm, thp, proc: report THP eligibility for each vma")
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Steve French

To 2.21

Signed-off-by: Steve French <stfrench@microsoft.com>
-
By Ronnie Sahlberg

Servers can defer destaging any data and updating the mtime until close(). This means that if we do a setinfo to modify the mtime while other handles are open for write, the server may overwrite our setinfo timestamps when it flushes the file on close() of the writeable handle. To solve this, add an explicit flush when the mtime is about to be updated. This fixes "cp -p" so that it preserves the mtime when copying a file onto an SMB2 share.

CC: Stable <stable@vger.kernel.org>
Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
-
By Steve French

We can cut one third of the traffic on open by not querying the inode number explicitly via an SMB3 query_info, since it is now returned on open in the qfid context. This is better in multiple ways and speeds up file open by about 10% (more if the network is slow).

Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
-
By Trond Myklebust

Add tracepoints to allow debugging of the event chain that leads to a pNFS fallback to doing I/O through the MDS.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
-
By Trond Myklebust

If the client has to stop in pnfs_update_layout() to wait for another layoutget to complete, it currently exits and defaults to I/O through the MDS even if that layoutget was successful.

Fixes: d03360aa ("pNFS: Ensure we return the error if someone kills...")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: stable@vger.kernel.org # v4.20+
-
By Amir Goldstein

cifs keeps both the source and destination inodes locked throughout the copy. As in ->write_iter(), we update the mtime and strip the setuid bits of the destination file before the copy, and as in ->read_iter(), we update the atime of the source file after the copy.

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
-
By Aurelien Aptel

Prevent a deadlock between open_shroot() and cifs_mark_open_files_invalid() by releasing the lock before entering SMB2_open, taking it again afterwards, and checking whether we still need to use the result.

Link: https://lore.kernel.org/linux-cifs/684ed01c-cbca-2716-bc28-b0a59a0f8521@prodrive-technologies.com/T/#u
Fixes: 3d4ef9a1 ("smb3: fix redundant opens on root")
Signed-off-by: Aurelien Aptel <aaptel@suse.com>
Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
CC: Stable <stable@vger.kernel.org>
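A sketch of the drop-and-revalidate pattern described above; the crfid field names are assumptions for illustration, not a quote of the cifs code:

```c
/* Drop our lock across the blocking server round-trip so that
 * cifs_mark_open_files_invalid() can never wait on us while we
 * ourselves wait on the server. */
mutex_unlock(&tcon->crfid.fid_mutex);
rc = SMB2_open(/* ... */);
mutex_lock(&tcon->crfid.fid_mutex);

/* Revalidate: another thread may have cached a root handle (or
 * invalidated the cache) while the lock was dropped. */
if (tcon->crfid.is_valid) {
	/* lost the race: discard our open and reuse the cached handle */
}
```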
-
By Trond Myklebust

mirror->mirror_ds can be NULL if it is uninitialised, but it can also contain a PTR_ERR() if the call to GETDEVICEINFO failed.

Fixes: 65990d1a ("pNFS/flexfiles: Fix a deadlock on LAYOUTGET")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: stable@vger.kernel.org # 4.10+
-
By Trond Myklebust

The NFSv4.1 protocol explicitly forbids us from using the zero stateid together with layoutget, so when we see that nfs4_select_rw_stateid() is unable to return a valid delegation, lock or open stateid, we should initiate recovery and retry.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
-
By Zhengyuan Liu

There is a hang issue when using fio to do some basic tests. The issue can be easily reproduced with the script below:

  while true
  do
          fio --ioengine=io_uring -rw=write -bs=4k -numjobs=1 \
              -size=1G -iodepth=64 -name=uring --filename=/dev/zero
  done

After several minutes (or more), fio blocks at io_uring_enter->io_cqring_wait waiting for previously committed sqes to complete, and cannot return to userspace until we send it a SIGTERM. After receiving SIGTERM, fio hangs at io_ring_ctx_wait_and_kill with a backtrace like this:

  [54133.243816] Call Trace:
  [54133.243842]  __schedule+0x3a0/0x790
  [54133.243868]  schedule+0x38/0xa0
  [54133.243880]  schedule_timeout+0x218/0x3b0
  [54133.243891]  ? sched_clock+0x9/0x10
  [54133.243903]  ? wait_for_completion+0xa3/0x130
  [54133.243916]  ? _raw_spin_unlock_irq+0x2c/0x40
  [54133.243930]  ? trace_hardirqs_on+0x3f/0xe0
  [54133.243951]  wait_for_completion+0xab/0x130
  [54133.243962]  ? wake_up_q+0x70/0x70
  [54133.243984]  io_ring_ctx_wait_and_kill+0xa0/0x1d0
  [54133.243998]  io_uring_release+0x20/0x30
  [54133.244008]  __fput+0xcf/0x270
  [54133.244029]  ____fput+0xe/0x10
  [54133.244040]  task_work_run+0x7f/0xa0
  [54133.244056]  do_exit+0x305/0xc40
  [54133.244067]  ? get_signal+0x13b/0xbd0
  [54133.244088]  do_group_exit+0x50/0xd0
  [54133.244103]  get_signal+0x18d/0xbd0
  [54133.244112]  ? _raw_spin_unlock_irqrestore+0x36/0x60
  [54133.244142]  do_signal+0x34/0x720
  [54133.244171]  ? exit_to_usermode_loop+0x7e/0x130
  [54133.244190]  exit_to_usermode_loop+0xc0/0x130
  [54133.244209]  do_syscall_64+0x16b/0x1d0
  [54133.244221]  entry_SYSCALL_64_after_hwframe+0x49/0xbe

The reason is that we added a req to ctx->pending_async at the very end, but it never got a chance to be processed. How could this happen?

  fio#cpu0                        wq#cpu1
  io_add_to_prev_work             io_sq_wq_submit_work
  atomic_read()      <<< 1
                                  atomic_dec_return()  << 1->0
                                  list_empty()         <<< true
  list_add_tail()
  atomic_read()      << 0 or 1?

As atomic_ops.rst states, atomic_read() does not guarantee that a runtime modification by any other thread is visible yet, so we must take care of that with a proper implicit or explicit memory barrier. This issue was detected with the help of Jackie <liuyun01@kylinos.cn>.

Fixes: 31b51510 ("io_uring: allow workqueue item to handle multiple buffered requests")
Signed-off-by: Zhengyuan Liu <liuzhengyuan@kylinos.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
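A sketch of the barrier placement implied by the diagram above, with simplified surroundings (the locking and cleanup details of the real function are elided):

```c
/* submitter side (io_add_to_prev_work, simplified) */
list_add_tail(&req->list, &async_list->list);
/*
 * Order the list addition against the read below; it pairs with the
 * ordering provided by atomic_dec_return() on the worker side.
 * Without it, we can see the old count while the worker sees the old
 * (empty) list, and the request is queued but never processed.
 */
smp_mb();
if (!atomic_read(&async_list->cnt)) {
	/* worker has already exited: undo and submit it ourselves */
	list_del_init(&req->list);
}
```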
-
- 18 Jul 2019, 1 commit
-
-
By Trond Myklebust

Add a per-transport maximum limit in the socket case, and add helpers to allow the NFSv4 code to discover that limit.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
-
- 17 Jul 2019, 13 commits
-
-
By Johannes Thumshirn

btrfs_get_io_geometry() calls btrfs_get_chunk_map() to acquire a reference on an extent_map, but in normal operation it never drops this reference. This leads to excessive kmemleak reports. Always call free_extent_map(), not just in the error case.

Fixes: 5f141126 ("btrfs: Introduce btrfs_io_geometry infrastructure")
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
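A sketch of the corrected shape of the function; the body is compressed to the reference-counting skeleton, but the point is that the reference from btrfs_get_chunk_map() is now dropped on every return path:

```c
struct extent_map *em;
int ret = 0;

em = btrfs_get_chunk_map(fs_info, logical, len);
if (IS_ERR(em))
	return PTR_ERR(em);

/* ... compute the io geometry from 'em', setting 'ret' and jumping
 * to 'out' on failure ... */

out:
	free_extent_map(em);	/* previously only reached on error */
	return ret;
```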
-
By Johannes Thumshirn

fs_info::csum_hash gets initialized in btrfs_init_csum_hash(), which is called by open_ctree(), but it only gets freed if open_ctree() fails, not in normal operation. This leads to a memory leak like the following found by kmemleak:

  unreferenced object 0xffff888132cb8720 (size 96):
    comm "mount", pid 450, jiffies 4294912436 (age 17.584s)
    hex dump (first 32 bytes):
      04 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<000000000c9643d4>] crypto_create_tfm+0x2d/0xd0
      [<00000000ae577f68>] crypto_alloc_tfm+0x4b/0xb0
      [<000000002b5cdf30>] open_ctree+0xb84/0x2060 [btrfs]
      [<0000000043204297>] btrfs_mount_root+0x552/0x640 [btrfs]
      [<00000000c99b10ea>] legacy_get_tree+0x22/0x40
      [<0000000071a6495f>] vfs_get_tree+0x1f/0xc0
      [<00000000f180080e>] fc_mount+0x9/0x30
      [<000000009e36cebd>] vfs_kern_mount.part.11+0x6a/0x80
      [<0000000004594c05>] btrfs_mount+0x174/0x910 [btrfs]
      [<00000000c99b10ea>] legacy_get_tree+0x22/0x40
      [<0000000071a6495f>] vfs_get_tree+0x1f/0xc0
      [<00000000b86e92c5>] do_mount+0x6b0/0x940
      [<0000000097464494>] ksys_mount+0x7b/0xd0
      [<0000000057213c80>] __x64_sys_mount+0x1c/0x20
      [<00000000cb689b5e>] do_syscall_64+0x43/0x130
      [<000000002194e289>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

Free fs_info::csum_hash in close_ctree() to avoid the memory leak.

Fixes: 6d97c6e3 ("btrfs: add boilerplate code for directly including the crypto framework")
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
-
By YueHaibing

If CONFIG_BTRFS_FS is y and CONFIG_LIBCRC32C is m, building fails:

  fs/btrfs/super.o: In function `btrfs_mount_root':
  super.c:(.text+0xb7f9): undefined reference to `crc32c_impl'
  fs/btrfs/super.o: In function `init_btrfs_fs':
  super.c:(.init.text+0x3465): undefined reference to `crc32c_impl'
  fs/btrfs/extent-tree.o: In function `hash_extent_data_ref':
  extent-tree.c:(.text+0xe60): undefined reference to `crc32c'
  extent-tree.c:(.text+0xe78): undefined reference to `crc32c'
  extent-tree.c:(.text+0xe8b): undefined reference to `crc32c'
  fs/btrfs/dir-item.o: In function `btrfs_insert_xattr_item':
  dir-item.c:(.text+0x291): undefined reference to `crc32c'
  fs/btrfs/dir-item.o: In function `btrfs_insert_dir_item':
  dir-item.c:(.text+0x429): undefined reference to `crc32c'

Select LIBCRC32C to fix it.

Reported-by: Hulk Robot <hulkci@huawei.com>
Fixes: d5178578 ("btrfs: directly call into crypto framework for checksumming")
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
-
By Qu Wenruo

As btrfs(5) specifies:

  Note: If nodatacow or nodatasum are enabled, compression is disabled.

If NODATASUM or NODATACOW is set, we should not compress the extent. Normally NODATACOW is detected properly in run_delalloc_range(), so compression won't happen for NODATACOW. However, for NODATASUM we don't have any check, and a compressed extent without csum can be created quite easily, just by:

  mkfs.btrfs -f $dev
  mount $dev $mnt -o nodatasum
  touch $mnt/foobar
  mount -o remount,datasum,compress $mnt
  xfs_io -f -c "pwrite 0 128K" $mnt/foobar

In fact, we have a bug report about a corrupted compressed extent without a proper data checksum, such that even RAID1 could not recover the corruption (https://bugzilla.kernel.org/show_bug.cgi?id=199707). Running compression without a proper checksum can cause more damage when corruption happens, as compressed data can make the whole extent unreadable, so there is no need to allow compression for NODATASUM. The fix refactors the inode compression check into two parts (sketched below):

  - inode_can_compress(): the hard requirement, checked at btrfs_run_delalloc_range(), so no compression will happen for a NODATASUM inode at all.
  - inode_need_compress(): the soft requirement, checked at btrfs_run_delalloc_range() and compress_file_range().

Reported-by: James Harvey <jamespharvey20@gmail.com>
CC: stable@vger.kernel.org # 4.4+
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
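A sketch of the two-level check, following the function names and flag checks described above (simplified from the btrfs inode code):

```c
/* Hard requirement: never compress nodatacow/nodatasum inodes.
 * Checked at btrfs_run_delalloc_range(). */
static inline bool inode_can_compress(struct inode *inode)
{
	if (BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW ||
	    BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)
		return false;
	return true;
}

/* Soft requirement: mount options, per-inode flags, heuristics.
 * Checked at btrfs_run_delalloc_range() and compress_file_range(). */
static inline bool inode_need_compress(struct inode *inode, u64 start, u64 end)
{
	if (!inode_can_compress(inode))
		return false;
	/* ... existing mount-option and heuristic checks ... */
	return true;
}
```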
-
By Darrick J. Wong

Move internal function declarations out of fs/internal.h into include/linux/iomap.h so that our transition is complete.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
By Darrick J. Wong

Move the main iteration code into a separate file so that we can group related functions in a single file instead of having a single enormous source file.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
By Darrick J. Wong

Move the buffered IO code into a separate file so that we can group related functions in a single file instead of having a single enormous source file.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
By Darrick J. Wong

Move the direct IO code into a separate file so that we can group related functions in a single file instead of having a single enormous source file.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
By Darrick J. Wong

Move the SEEK_HOLE/SEEK_DATA code into a separate file so that we can group related functions in a single file instead of having a single enormous source file.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
By Darrick J. Wong

Move the file mapping reporting code (FIEMAP/FIBMAP) into a separate file so that we can group related functions in a single file instead of having a single enormous source file.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
By Darrick J. Wong

Move the swapfile activation code into a separate file so that we can group related functions in a single file instead of having a single enormous source file.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
By Al Viro

We used to need rather convoluted ordering trickery to guarantee that the dput() of an ex-mountpoint happens before the final mntput() of the same. Since we don't need that anymore, there's no point playing with fs_pin for it.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

Lift getting the original mount of the mountpoint (the dentry is actually not needed at all) into the callers, to the do_move_mount() and pivot_root() level. That simplifies the cleanup in those functions and allows saner arguments for attach_recursive_mnt().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-