- 02 Aug 2014, 8 commits
-
Committed by Pavel Shilovsky
Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
-
Committed by Pavel Shilovsky
If wsize changes on reconnect, we need to use a new writedata structure for retrying.

Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
-
Committed by Pavel Shilovsky
If a server changes the maximum buffer size for write (wsize) requests on reconnect, we can fail by repeating a request with a buffer that is now too big after an -EAGAIN error in writepages. Fix this by re-checking wsize every time before repeating the request in writepages.

Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
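A minimal sketch of the retry pattern described above, with hypothetical helper names (server_current_wsize, send_write) standing in for the CIFS internals:

```c
#include <stddef.h>
#include <errno.h>

struct conn;

/* Hypothetical transport helpers, declared only for this sketch. */
size_t server_current_wsize(struct conn *c);  /* may shrink on reconnect */
long   send_write(struct conn *c, const char *buf, size_t len);

long write_all(struct conn *c, const char *buf, size_t remaining)
{
        while (remaining) {
                /* Re-read wsize on every pass instead of caching it
                 * once before the loop; a reconnect may have lowered it. */
                size_t wsize = server_current_wsize(c);
                size_t chunk = remaining < wsize ? remaining : wsize;
                long rc = send_write(c, buf, chunk);

                if (rc == -EAGAIN)
                        continue;       /* reconnected: loop re-reads wsize */
                if (rc < 0)
                        return rc;
                buf += rc;
                remaining -= rc;
        }
        return 0;
}
```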
-
Committed by Pavel Shilovsky
Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
-
Committed by Pavel Shilovsky
Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
-
Committed by Steve French
The recent session setup patch set (cifs-Separate-rawntlmssp-auth-from-CIFS_SessSetup.patch) introduced a trivial sparse build warning.

Signed-off-by: Steve French <smfrench@gmail.com>
Cc: Sachin Prabhu <sprabhu@redhat.com>
-
Committed by Pavel Shilovsky
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Reviewed-by: Jeff Layton <jlayton@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
-
Committed by Pavel Shilovsky
If we get into read_into_pages() from cifs_readv_receive() and then lose the network, we issue cifs_reconnect, which moves all mids to a private list and issues their callbacks. The callback of the async read request marks the mid for retry, frees it, and wakes up the process waiting on the rdata completion. After the connection is re-established we return from read_into_pages() with a short read, use the mid that was freed earlier, and try to read the remaining data from the newly created socket. Neither action is what we want. In reconnect cases (-EAGAIN) we should not mask off the error with a short read but should return the error code instead.

Acked-by: Jeff Layton <jlayton@samba.org>
Cc: stable@vger.kernel.org
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
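A sketch of the behavioral change, with simplified stand-ins for the CIFS names: a negative return from the page-read step is now propagated instead of being folded into a short read.

```c
/* Kernel-style fragment (illustrative, not the exact CIFS code). */
length = read_into_pages(server, rdata, data_len);
if (length < 0)
        return length;  /* e.g. -EAGAIN after reconnect: caller retries */
server->total_read += length;
```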
-
- 01 Aug 2014, 5 commits
-
Committed by Sachin Prabhu
Separate rawntlmssp authentication from CIFS_SessSetup(). Also clean up CIFS_SessSetup(), since we no longer do any auth within it.

Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Steve French <smfrench@gmail.com>
-
Committed by Sachin Prabhu
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Steve French <smfrench@gmail.com>
-
Committed by Sachin Prabhu
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Steve French <smfrench@gmail.com>
-
Committed by Sachin Prabhu
In preparation for splitting CIFS_SessSetup() into smaller, more manageable chunks, we first add helper functions. We then proceed to split lanman auth out of CIFS_SessSetup().

Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Steve French <smfrench@gmail.com>
-
Committed by Sachin Prabhu
The functionality provided by free_rsp_buf() is duplicated in a number of places. Replace these instances with a call to free_rsp_buf().

Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Steve French <smfrench@gmail.com>
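The consolidation replaces open-coded frees like the sketch below (a close paraphrase of the CIFS pattern, not the exact diff):

```c
/* Before: each call site picked the release routine by hand. */
if (resp_buftype == CIFS_SMALL_BUFFER)
        cifs_small_buf_release(rsp);
else if (resp_buftype == CIFS_LARGE_BUFFER)
        cifs_buf_release(rsp);

/* After: one helper everywhere. */
free_rsp_buf(resp_buftype, rsp);
```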
-
- 30 Jul 2014, 1 commit
-
Committed by David Howells
Correctly assemble the client UUID by OR'ing in the flags rather than assigning them over the other components.

Reported-by: Himangi Saraogi <himangi774@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
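The bug class is easiest to see in a self-contained sketch (the field layout here is illustrative, not the real UUID assembly):

```c
#include <stdint.h>

uint64_t make_client_uuid(uint64_t time_bits, uint64_t host_bits,
                          uint64_t flags)
{
        uint64_t uuid = 0;

        uuid |= time_bits;
        uuid |= host_bits;
        uuid |= flags;   /* buggy version: uuid = flags; wiped the rest */
        return uuid;
}
```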
-
- 24 Jul 2014, 4 commits
-
Committed by Vasily Averin
Currently umount on a symlink blocks the following umount (/vz is a separate mount):

# ls /vz/ -al | grep test
drwxr-xr-x. 2 root root 4096 Jul 19 01:14 testdir
lrwxrwxrwx. 1 root root   11 Jul 19 01:16 testlink -> /vz/testdir
# umount -l /vz/testlink
umount: /vz/testlink: not mounted (expected)
# lsof /vz
# umount /vz
umount: /vz: device is busy. (unexpected)

In this case mountpoint_last() gets an extra refcount on path->mnt.

Signed-off-by: Vasily Averin <vvs@openvz.org>
Acked-by: Ian Kent <raven@themaw.net>
Acked-by: Jeff Layton <jlayton@primarydata.com>
Cc: stable@vger.kernel.org
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Committed by Boaz Harrosh
The following warnings:

fs/direct-io.c: In function ‘__blockdev_direct_IO’:
fs/direct-io.c:1011:12: warning: ‘to’ may be used uninitialized in this function [-Wmaybe-uninitialized]
fs/direct-io.c:913:16: note: ‘to’ was declared here
fs/direct-io.c:1011:12: warning: ‘from’ may be used uninitialized in this function [-Wmaybe-uninitialized]
fs/direct-io.c:913:10: note: ‘from’ was declared here

are false positives, because dio_get_page() either fails or sets both 'from' and 'to'.

Paul Bolle said: Maybe it's better to move initializing "to" and "from" out of dio_get_page(). That _might_ make it easier for both the reader and the compiler to understand what's going on. Something like this:

Christoph Hellwig said: The fix of moving the code definitively looks nicer, while I think uninitialized_var is a horrible wart that won't get anywhere near my code.

Boaz Harrosh: I agree with Christoph and Paul.

Signed-off-by: Boaz Harrosh <boaz@plexistor.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
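A sketch of the shape of the fix (not the exact fs/direct-io.c diff): compute 'from' and 'to' at the call site, after dio_get_page() succeeds, so the compiler can prove both are initialized before use.

```c
/* Kernel-style fragment; names follow fs/direct-io.c loosely. */
struct page *page;
size_t from, to;

page = dio_get_page(dio, sdio);
if (IS_ERR(page))
        return PTR_ERR(page);

from = sdio->head ? 0 : sdio->from;
to = (sdio->head == sdio->tail - 1) ? sdio->to : PAGE_SIZE;
sdio->head++;
```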
-
Committed by Hugh Dickins
If a filesystem uses simple_xattr to support user extended attributes, LTP setxattr01 and xfstests generic/062 fail with "Cannot allocate memory": simple_xattr_alloc()'s wrap-around test mistakenly excludes values of zero size. Fix that off-by-one (though apparently no filesystem needs zero-size values yet).

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jeff Layton <jlayton@poochiereds.net>
Cc: Aristeu Rozanski <aris@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
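A self-contained sketch of the wrap-around check (struct layout simplified): the buggy test used `<=`, which also rejected a legitimate zero-size value.

```c
#include <stdlib.h>

struct simple_xattr_hdr { size_t size; char value[]; };

struct simple_xattr_hdr *xattr_alloc(size_t size)
{
        size_t len = sizeof(struct simple_xattr_hdr) + size;

        /* buggy: if (len <= sizeof(struct simple_xattr_hdr)) */
        if (len < sizeof(struct simple_xattr_hdr))  /* size wrapped */
                return NULL;
        return malloc(len);
}
```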
-
Committed by Silesh C V
Commit 079148b9 ("coredump: factor out the setting of PF_DUMPCORE") cleaned up the setting of PF_DUMPCORE by removing it from all the linux_binfmt->core_dump() implementations and moving it to zap_threads(). But this ended up clearing all the previously set flags. This causes issues during core generation when tsk->flags is checked again (e.g. for PF_USED_MATH to dump floating point registers). Fix this.

Signed-off-by: Silesh C V <svellattu@mvista.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Mandeep Singh Baines <msb@chromium.org>
Cc: <stable@vger.kernel.org> [3.10+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
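A runnable illustration of the regression (the flag values mirror the kernel's; the task struct is reduced to a bare bitmask):

```c
#include <stdio.h>

#define PF_DUMPCORE  0x00000200u
#define PF_USED_MATH 0x00002000u

int main(void)
{
        unsigned int flags = PF_USED_MATH;  /* set earlier in task life */

        flags |= PF_DUMPCORE;       /* the fix: preserve existing flags */
        /* flags = PF_DUMPCORE; */  /* the regression: clears PF_USED_MATH */

        printf("FPU state dumped: %s\n",
               (flags & PF_USED_MATH) ? "yes" : "no");
        return 0;
}
```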
-
- 23 Jul 2014, 1 commit
-
Committed by Kinglong Mee
Commit 8c7424cf ("nfsd4: don't try to encode conflicting owner if low on space") forgot to free conf->data in nfsd4_encode_lockt, and to reset conf->data to NULL after freeing it in nfsd4_encode_lock_denied, causing a leak. Worse, kfree() can be called on an uninitialized pointer in the case of a successful lock (or one that fails for a reason other than a conflict). (Note that lock->lk_denied.ld_owner.data looks like it should be zero here, until you notice that it's one arm of a union, the other arm of which is written to in the successful case by the memcpy(&lock->lk_resp_stateid, &lock_stp->st_stid.sc_stateid, sizeof(stateid_t)); in nfsd4_lock(). In the 32-bit case this overwrites ld_owner.data.)

Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
Fixes: 8c7424cf ("nfsd4: don't try to encode conflicting owner if low on space")
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 22 Jul 2014, 2 commits
-
Committed by Andrew Gallagher
Here are some additional changes to set a capability flag so that clients can detect when it's appropriate to return -ENOSYS from open. This amends the following commit introduced in 3.14: 7678ac50 "fuse: support clients that don't implement 'open'". However, we can only add the flag to 3.15 and later, since there was no protocol version update in 3.14.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Cc: <stable@vger.kernel.org> # v3.15+
-
Committed by Miklos Szeredi
The default s_time_gran is 1; don't overwrite that if userspace didn't explicitly specify one.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Cc: <stable@vger.kernel.org> # v3.15+
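A sketch of the intent (the function and parameter names are illustrative, not the exact FUSE fields):

```c
#include <stdint.h>

/* s_time_gran already defaults to 1 ns; only override it when the
 * client explicitly negotiated a granularity (0 means unspecified). */
void apply_time_gran(uint32_t *s_time_gran, uint32_t negotiated)
{
        if (negotiated)
                *s_time_gran = negotiated;
        /* buggy version wrote unconditionally, clobbering the default */
}
```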
-
- 20 Jul 2014, 2 commits
-
Committed by Eric Sandeen
Commit 99994cde ("btrfs: dev delete should remove sysfs entry") added a btrfs_kobj_rm_device, which dereferences device->bdev... right after we check whether device->bdev might be NULL. I don't honestly know if it's possible to have a NULL device->bdev here, but assuming that it is (given the test), we need to move the kobject removal to be under that test. (Coverity spotted this.)

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Chris Mason <clm@fb.com>
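A compilable sketch of the reordering, with minimal stand-in types for the btrfs ones: every bdev-dereferencing call moves inside the existing NULL check.

```c
#include <stddef.h>

struct block_device;
struct device_state {
        struct block_device *bdev;
        int open_devices;
};

/* Stand-in for the sysfs removal that dereferences bdev. */
void kobj_rm_device(struct device_state *dev);

void remove_device(struct device_state *dev)
{
        if (dev->bdev) {
                dev->open_devices--;
                kobj_rm_device(dev);  /* safe: bdev is non-NULL here */
        }
}
```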
-
Committed by Liu Bo
xfstests generic/127 detected this problem. With commit 7fc34a62, fsync now only flushes data within the passed range. This is the cause of the above problem: btrfs's fsync has a stage called 'sync log' which waits for all the ordered extents it has recorded to finish. In xfstests/generic/127, with mixed operations such as truncate, fallocate, punch hole, and mapwrite, we get some pre-allocated extents, and mapwrite will mmap and then msync. I found that msync waits for quite a long time (about 20s in my case); thanks to ftrace, it turns out that the previous fallocate calls btrfs_wait_ordered_range() to flush dirty pages, but as the range of dirty pages may be larger than btrfs_wait_ordered_range() wants, some ordered extents can be created without getting their corresponding pages flushed. They are then left in memory until we fsync, which runs into the 'sync log' stage, and fsync just waits for the system writeback thread to flush those pages and finish the ordered extents, so the latency is inevitable. This adds a flush similar to btrfs_start_ordered_extent() in btrfs_wait_logged_extents() to fix that.

Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
-
- 18 Jul 2014, 8 commits
-
Committed by Fabian Frederick
Cc: cluster-devel@redhat.com
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
Committed by Geert Uytterhoeven
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: cluster-devel@redhat.com
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
Committed by Bob Peterson
This patch removes the GLF_NOCACHE flag from the glocks associated with flocks. There should be no good reason not to cache glocks for flocks: the flag only forces the glock to be demoted before it can be reacquired, which can slow down performance and even cause glock hangs, especially in cases where the flocks are held in Shared (SH) mode.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
Committed by Bob Peterson
This patch allows flock glocks to use a non-blocking dequeue rather than dq_wait. It also reverts the previous patch I had posted regarding dq_wait. The reverted patch isn't necessarily a bad idea, but I decided this might avoid unforeseen side effects, and was therefore safer.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
Committed by Fabian Frederick
kcalloc manages the count * sizeof overflow check.

Cc: cluster-devel@redhat.com
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
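The point of the conversion, as a self-contained sketch (calloc is the userspace analogue of kcalloc):

```c
#include <stdlib.h>

int *make_table(size_t count)
{
        /* return malloc(count * sizeof(int));  may silently wrap */
        return calloc(count, sizeof(int));   /* overflow fails safely */
}
```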
-
Committed by Steven Whitehouse
Normally GFP_KERNEL is OK here, but there is now a rarely used code path relating to deallocation of unlinked inodes (in certain corner cases) which, if hit at times of memory shortage, can cause recursion while trying to free memory. One solution would be to move the gfs2_glock_get() call so that it is no longer called while another glock is held, but that doesn't look at all easy, so GFP_NOFS is the best solution for the time being.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
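A kernel-style fragment of the shape of the change (the cache name follows GFS2; the surrounding code is elided): GFP_NOFS stops the allocator from recursing into filesystem reclaim while a glock may already be held.

```c
gl = kmem_cache_alloc(gfs2_glock_cachep, GFP_NOFS);  /* was GFP_KERNEL */
if (!gl)
        return -ENOMEM;
```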
-
Committed by Steven Whitehouse
We must not leave items on the LRU list with GLF_LOCK set, since they can be removed if the glock is brought back into use, which may then potentially result in a hang, waiting for GLF_LOCK to clear. It doesn't happen very often, since it requires a glock that has not been used for a long time to be brought back into use at the same moment that the shrinker is part way through disposing of glocks. The fix is to set GLF_LOCK at a later time, when we already know that the other locks can be obtained. Also, we now only release the lru_lock if a resched is needed, rather than on every iteration.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
Committed by Bob Peterson
Function gfs2_glock_dq_wait is supposed to dequeue a glock and then wait for the lock to be demoted. The problem is that if this is a shared lock, its demote will depend on the other holders, which means you might end up waiting forever because the other process is blocked. This problem is especially apparent when dealing with nested flocks.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 16 Jul 2014, 1 commit
-
Committed by Niu Yawei
Commit 1ab6c499 ("fs: convert fs shrinkers to new scan/count API") accidentally removed locking from the quota shrinker. Fix it: dqcache_shrink_scan() should use dq_list_lock to protect the scan of the free_dquots list.

CC: stable@vger.kernel.org
Fixes: 1ab6c499
Signed-off-by: Niu Yawei <yawei.niu@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
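A kernel-style sketch of the fixed scan (the loop body is elided; the point is that the list walk now sits under dq_list_lock):

```c
static unsigned long dqcache_shrink_scan_sketch(unsigned long nr_to_scan)
{
        unsigned long freed = 0;

        spin_lock(&dq_list_lock);
        while (!list_empty(&free_dquots) && nr_to_scan--) {
                /* detach one dquot from free_dquots and free it */
                freed++;
        }
        spin_unlock(&dq_list_lock);
        return freed;
}
```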
-
- 15 Jul 2014, 4 commits
-
Committed by Dave Chinner
When quota is on, it is expected that unused quota inodes have a value of NULLFSINO. The changes to support a separate project quota in 3.12 broke this rule for filesystems without a project quota inode, as the code now refuses to write the group quota inode if neither group nor project quotas are enabled. This regression was introduced by commit d892d586 ("xfs: Start using pquotaino from the superblock"). In this case, we should be writing NULLFSINO rather than nothing, to ensure that we leave the group quota inode in a valid state while quotas are enabled. Failure to do so doesn't cause a current kernel to break - the separate project quota inodes introduced translation code to always treat a zero inode as NULLFSINO. This was introduced by commit 01026297 ("xfs: Initialize all quota inodes to be NULLFSINO"), which is also in 3.12, but older kernels do not do this, and hence taking a filesystem back to an older kernel can result in quotas failing initialisation at mount time. When that happens, we see this in dmesg:

[ 1649.215390] XFS (sdb): Mounting Filesystem
[ 1649.316894] XFS (sdb): Failed to initialize disk quotas.
[ 1649.316902] XFS (sdb): Ending clean mount

By ensuring that we write NULLFSINO to quota inodes that aren't active, we avoid this problem. We have to be really careful when determining whether the quota inodes are active or not, because we don't want to write a NULLFSINO if the quota inodes are active and we simply aren't updating them.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Committed by Dave Chinner
The allocation stack switch at xfs_bmapi_allocate() has served its purpose, but is no longer a sufficient solution to the stack usage problem we have in the XFS allocation path. Whilst the kernel stack size is now 16k, that is not a valid reason for undoing all our "keep stack usage down" modifications. What it does allow us to do is have the freedom to refine and perfect the modifications knowing that if we get it wrong it won't blow up in our faces - we have a safety net now. This is important because we still have the issue of older kernels having smaller stacks; they are still supported and are demonstrating a wide range of different stack overflows. Red Hat has several open bugs for allocation-based stack overflows from directory modifications and direct IO block allocation, and these problems still need to be solved. If we can solve them upstream, then distros won't need to bake their own unique solutions.

To that end, I've observed that every allocation-based stack overflow report has had a specific characteristic: it has happened during or directly after a bmap btree block split. That event requires a new block to be allocated to the tree, and so we effectively stack one allocation stack on top of another, and that's when we get into trouble. A further observation is that bmap btree block splits are much rarer than writeback allocation - over a range of different workloads I've observed the ratio of bmap btree inserts to splits range from 100:1 (xfstests run) to 10000:1 (local VM image server with sparse files that range in the hundreds of thousands to millions of extents). Either way, bmap btree split events are much, much rarer than allocation events.

Finally, we have to move the kswapd state to the allocation workqueue work when allocation is done on behalf of kswapd. This is proving to cause significant perturbation in performance under memory pressure and appears to be generating allocation deadlock warnings under some workloads, so avoiding the use of a workqueue for the majority of kswapd writeback allocation will minimise the impact of such behaviour.

Hence it makes sense to move the stack switch to xfs_btree_split() and only do it for bmap btree splits. Stack switches during allocation will be much rarer, so there won't be significant performance overhead caused by switching stacks. The worst-case stack from all allocation paths will be split, not just writeback. And the majority of memory allocations will be done in the correct context (e.g. kswapd) without causing additional latency, and so we simplify the memory reclaim interactions between processes, workqueues and kswapd.

The worst stack I've been able to generate with this patch in place is 5600 bytes deep. It's very revealing because we exit XFS at:

37) 1768 64 kmem_cache_alloc+0x13b/0x170

with about 1800 bytes of stack consumed, and the remaining 3800 bytes (and 36 functions) is memory reclaim, swap and the IO stack. And this occurs in the inode allocation from an open(O_CREAT) syscall, not writeback. The amount of stack being used is much less than I've previously been able to generate - fs_mark testing has been able to generate stack usage of around 7k without too much trouble; with this patch it's only just getting to 5.5k. This is primarily because the metadata allocation paths (e.g. directory blocks) are no longer causing double splits on the same stack, and hence stack tracing now shows swapping being the worst stack consumer rather than XFS.

Performance of fs_mark inode create workloads is unchanged. Performance of fs_mark async fsync workloads is consistently good, with context switches reduced by around 150,000/s (30%). Performance of dbench, streaming IO and postmark is unchanged. Allocation deadlock warnings have not been seen on the workloads that generated them since adding this patch.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Committed by Dave Chinner
This reverts commit 1f6d6482. That commit resulted in performance regressions in low-memory situations where kswapd was doing writeback of delayed allocation blocks. It resulted in significant parallelism of the kswapd work, and with the special kswapd flags it meant that hundreds of active allocations could dip into kswapd-specific memory reserves and avoid being throttled. This caused a large amount of performance variation, as well as random OOM-killer invocations that didn't previously exist.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Committed by Benjamin LaHaise
As of commit f8567a38 it is now possible to have put_reqs_available() called from irq context. While put_reqs_available() is per-cpu, it did not protect itself from interrupts on the same CPU. This led to aio_complete() corrupting the available io requests count when run under heavy O_DIRECT workloads, as reported by Robert Elliott. Fix this by disabling irqs around the per-cpu batch updates of reqs_available. Many thanks to Robert and folks for testing and tracking this down.

Reported-by: Robert Elliot <Elliott@hp.com>
Tested-by: Robert Elliot <Elliott@hp.com>
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Cc: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@infradead.org>
Cc: stable@vger.kernel.org
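A kernel-style sketch of the fix (struct fields follow fs/aio.c loosely; treat it as illustrative): the per-cpu batch update is bracketed with local_irq_save/restore so an interrupt on the same CPU cannot race it.

```c
static void put_reqs_available_sketch(struct kioctx *ctx, unsigned nr)
{
        struct kioctx_cpu *kcpu;
        unsigned long flags;

        local_irq_save(flags);          /* added by the fix */
        kcpu = this_cpu_ptr(ctx->cpu);
        kcpu->reqs_available += nr;

        while (kcpu->reqs_available >= ctx->req_batch) {
                kcpu->reqs_available -= ctx->req_batch;
                atomic_add(ctx->req_batch, &ctx->reqs_available);
        }
        local_irq_restore(flags);
}
```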
-
- 14 Jul 2014, 3 commits
-
Committed by Fabian Frederick
kcalloc manages the count * sizeof overflow check.

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
-
Committed by Maxim Patlasov
Free tmp_page if fuse_write_file_get() returns NULL.

Signed-off-by: Maxim Patlasov <mpatlasov@parallels.com>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
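A kernel-style fragment of the error path (the goto labels are illustrative): the freshly allocated tmp_page must be released when no writable file is found.

```c
tmp_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
if (!tmp_page)
        goto err_free_req;

ff = fuse_write_file_get(fc, fi);
if (!ff) {
        __free_page(tmp_page);  /* the fix: don't leak the page */
        goto err_free_req;
}
```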
-
Committed by Trond Myklebust
Once we've started sending unstable NFS writes, we do not want to clear pg_moreio, or we may end up sending the very last request as a stable write if the commit lists are still empty. Do, however, reset pg_moreio in the case where we end up having to recoalesce the write because an attempt to use pNFS failed.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
- 13 Jul 2014, 1 commit
-
Committed by Trond Myklebust
Cc: Weston Andros Adamson <dros@primarydata.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-