- 09 September 2010, 7 commits
-
-
Committed by Jeff Layton
When cifs_demultiplex_thread exits, it does a number of cleanup tasks, including freeing the TCP_Server_Info struct. Much of the existing code in cifs assumes that when there is a cifsSesInfo struct, it holds a reference to a valid TCP_Server_Info struct. We can never allow cifsd to exit while a cifsSesInfo struct is still holding a reference to the server; the server pointers would then point to freed memory. This patch eliminates a couple of questionable conditions where that can happen. The idea is to make an -EINTR return from kernel_recvmsg behave the same way as -ERESTARTSYS or -EAGAIN. If the task was signalled from cifs_put_tcp_session, then tcpStatus will be CifsExiting and the kernel_recvmsg call will return quickly. There is another condition where this can occur too -- if tcpStatus is still CifsNew, the thread will also exit if the server closes the socket prematurely. We will probably need to fix that situation as well, but it requires a bit more consideration. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
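To illustrate the intent, here is a sketch of the receive-loop handling, not the literal patch; the loop structure and the msleep are simplified from cifs_demultiplex_thread:

    length = kernel_recvmsg(csocket, &smb_msg, &iov, 1, 4, 0);
    if (server->tcpStatus == CifsExiting)
            break;          /* cifs_put_tcp_session signalled us; exit cleanly */
    if (length == -ERESTARTSYS || length == -EAGAIN || length == -EINTR) {
            msleep(1);      /* transient condition: retry the receive */
            continue;
    }

Treating -EINTR like the other transient return codes means a signal sent during unmount only causes the thread to loop back, notice CifsExiting, and exit through the normal teardown path instead of while a cifsSesInfo still references the server.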
-
Committed by Steve French
This function is not used, so remove the definition and declaration. Reviewed-by: Jeff Layton <jlayton@samba.org> Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Committed by Jeff Layton
The VFS always checks that the source and target of a rename are on the same vfsmount, and hence have the same superblock, so this check is redundant. Remove it and simplify the error handling. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Committed by Steve French
This reverts commit 9fbc5908. The change to kernel crypto and the fixes to the ntlmv2 and ntlmssp series introduced a regression, so this patch series is deferred to 2.6.37 until Shirish fixes it. Signed-off-by: Steve French <sfrench@us.ibm.com> Acked-by: Jeff Layton <jlayton@redhat.com> CC: Shirish Pargaonkar <shirishp@us.ibm.com>
-
Committed by Steve French
This reverts commit 3ec6bbcd. The change to kernel crypto and the fixes to the ntlmv2 and ntlmssp series introduced a regression, so this patch series is deferred to 2.6.37 until Shirish fixes it. Signed-off-by: Steve French <sfrench@us.ibm.com> Acked-by: Jeff Layton <jlayton@redhat.com> CC: Shirish Pargaonkar <shirishp@us.ibm.com>
-
Committed by Steve French
This reverts commit 2d20ca83. The change to kernel crypto and the fixes to the ntlmv2 and ntlmssp series introduced a regression, so this patch series is deferred to 2.6.37 until Shirish fixes it. Signed-off-by: Steve French <sfrench@us.ibm.com> Acked-by: Jeff Layton <jlayton@redhat.com> CC: Shirish Pargaonkar <shirishp@us.ibm.com>
-
Committed by Steve French
The change to kernel crypto and the fixes to the ntlmv2 and ntlmssp series introduced a regression, so this patch series is deferred to 2.6.37 until Shirish fixes it. This reverts commit c89e5198. Signed-off-by: Steve French <sfrench@us.ibm.com> Acked-by: Jeff Layton <jlayton@redhat.com> CC: Shirish Pargaonkar <shirishp@us.ibm.com>
-
- 08 September 2010, 1 commit
-
-
Committed by Valerie Aurora
Sanity check the flags passed to change_mnt_propagation(): exactly one flag should be set; return EINVAL otherwise. Userspace can pass arbitrary combinations of MS_* flags to mount(). do_change_type() is called if any of MS_SHARED, MS_PRIVATE, MS_SLAVE, or MS_UNBINDABLE is set. do_change_type() clears MS_REC and then calls change_mnt_propagation() with the rest of the user-supplied flags. change_mnt_propagation() clearly assumes only one flag is set, but do_change_type() does not check that this is true. For example, mount() with flags MS_SHARED | MS_RDONLY does not actually make the mount shared or read-only, but does clear MNT_UNBINDABLE. Signed-off-by: Valerie Aurora <vaurora@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
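A minimal sketch of the added check follows; the helper name is illustrative, and the real test lives in do_change_type() before change_mnt_propagation() is called:

    static int check_propagation_type(int flags)
    {
            int type = flags & ~MS_REC;     /* MS_REC may be combined freely */

            /* Exactly one propagation type, and nothing else, may be set. */
            if (type != MS_SHARED && type != MS_PRIVATE &&
                type != MS_SLAVE && type != MS_UNBINDABLE)
                    return -EINVAL;
            return 0;
    }

With this in place, the MS_SHARED | MS_RDONLY example above is rejected with EINVAL instead of silently clearing MNT_UNBINDABLE.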
-
- 04 September 2010, 1 commit
-
-
Committed by Dan Carpenter
d_path() returns an ERR_PTR; it never returns NULL. Signed-off-by: Dan Carpenter <error27@gmail.com> Cc: stable <stable@kernel.org> Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
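A sketch of the corrected error check; the caller, buffer, and size are illustrative:

    char *name = d_path(&file->f_path, buf, PAGE_SIZE);

    if (IS_ERR(name))               /* d_path() returns ERR_PTR(-errno), never NULL */
            return PTR_ERR(name);

Checking for NULL here can never trigger, so the error would be silently ignored and the ERR_PTR dereferenced later.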
-
- 03 September 2010, 2 commits
-
-
Committed by Tao Ma
In xfs_vn_fiemap, we set bmv_count to fi_extent_max + 1 and want to return fi_extent_max extents, but that does not work for a sparse file. The reason is that in xfs_getbmap we also record holes in 'out', while 'out' is allocated with bmv_count (fi_extent_max + 1) entries, which does not account for holes. So in the worst case, if the 'out' vector looks like [hole, extent, hole, extent, hole, ..., hole, extent, hole], we will only return half of fi_extent_max extents. This patch adds a new flag, BMV_IF_NO_HOLES, for bmv_iflags. With this flag set, we do not consume an 'out' entry in xfs_getbmap for a hole. The solution is a bit ugly in that it simply avoids advancing the index of the 'out' vector; skipping holes at the very beginning is not easy because of the complicated checks and helpers like xfs_getbmapx_fix_eof_hole that adjust 'out'. Cc: Dave Chinner <david@fromorbit.com> Signed-off-by: Tao Ma <tao.ma@oracle.com> Signed-off-by: Alex Elder <aelder@sgi.com>
-
Committed by Dave Chinner
If we attempt to preallocate more than 2^32 blocks of space in a single syscall, the transaction block reservation will overflow, leading to a hang in the superblock block accounting code. This is trivially reproduced with xfs_io. Fix the problem by capping the allocation reservation to the maximum number of blocks a single xfs_bmapi() call can allocate (2^21 blocks). Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
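The idea can be sketched as a clamp on the per-transaction reservation; the variable names follow the surrounding xfs code, and the exact site and expression in the patch may differ:

    /* never reserve more than a single xfs_bmapi() call can allocate */
    resblks = min_t(xfs_fileoff_t, allocatesize_fsb, MAXEXTLEN);

A huge preallocation request is then satisfied over multiple allocation passes instead of overflowing one transaction's block reservation.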
-
- 02 September 2010, 2 commits
-
-
Committed by Arkadiusz Miśkiewicz
Currently the on-disk structure can hold only a 16-bit project quota id, so disallow 32-bit ones. This fixes a problem where some of the kernel structures holding a project quota id are 32-bit while the on-disk field is 16-bit, which causes project quota member files to be inaccessible for some operations (like mv/rm). Signed-off-by: Arkadiusz Miśkiewicz <arekm@maven.pl> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alex Elder <aelder@sgi.com>
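A sketch of the bound check; the actual patch touches the ioctl path that sets the project id, and the variable name here is illustrative:

    if (projid > (u16)-1)           /* would be silently truncated on disk */
            return -EINVAL;

Rejecting ids above 65535 keeps the 32-bit in-memory value consistent with the 16-bit on-disk field.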
-
Committed by Dave Chinner
When doing large parallel file creates on a 16p machine, a large amount of time is spent in _xfs_buf_find(). A system-wide profile with perf top shows this: 1134740.00 19.3% _xfs_buf_find, 733142.00 12.5% __ticket_spin_lock. The problem is that the hash contains 45,000 buffers while the hash table width is only 256 buckets. That means there are around 200 buffers per chain, and searching it is quite expensive; the hash table size needs to increase. Secondly, every time we do a lookup we promote the buffer we find to the head of the hash chain. This dirties cachelines and causes invalidation of cachelines across all CPUs that may have walked the hash chain recently, so every walk of the hash chain is effectively a cold cache walk. Remove the promotion to avoid this invalidation. The results are: 1045043.00 21.2% __ticket_spin_lock, 326184.00 6.6% _xfs_buf_find -- a 70% drop in the CPU usage when looking up buffers. Unfortunately that does not translate into increased performance under this workload, as contention on the inode_lock soaks up most of the reduction in CPU usage. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
- 30 August 2010, 2 commits
-
-
Committed by Dan Carpenter
p9_client_walk() can return error values if we run out of space or there is a problem with the network. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Committed by Ryusuke Konishi
If load_nilfs() gets an error while doing recovery, it fails to free the shadow inode of the DAT (nilfs->ns_gc_dat). This fixes the leak. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
-
- 28 August 2010, 3 commits
-
-
Committed by Eric Paris
The fsnotify main loop has two bools which indicate whether we processed the inode or vfsmount mark in that particular pass through the loop. These bools can be replaced with the inode_group and vfsmount_group variables, which actually makes the code a little easier to understand. Signed-off-by: Eric Paris <eparis@redhat.com>
-
Committed by Eric Paris
Marks were stored on the inode and vfsmount mark lists in order from highest memory address to lowest memory address, but the code that walks those lists assumed they were ordered from lowest to highest, with unpredictable results when trying to match up marks from each. It was possible that extra events would be sent to userspace when inode marks that ignore events were not matched up with the corresponding vfsmount marks. This problem only affected fanotify when using both vfsmount and inode marks simultaneously. Signed-off-by: Eric Paris <eparis@redhat.com>
-
Committed by Andreas Gruenbacher
The appropriate error code when privileged operations are denied is EPERM, not EACCES. Signed-off-by: Andreas Gruenbacher <agruen@suse.de> Signed-off-by: Eric Paris <paris@paris.rdu.redhat.com>
-
- 27 August 2010, 11 commits
-
-
Committed by Tyler Hicks
Fixes a regression caused by 21edad32. When file name encryption was enabled, ecryptfs_lookup() failed to use the encrypted and encoded version of the upper, plaintext file name when performing a lookup in the lower file system. This made it impossible to look up existing encrypted file names, and any newly created files would have plaintext file names in the lower file system. https://bugs.launchpad.net/ecryptfs/+bug/623087 Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
-
Committed by Jerome Marchand
Some ecryptfs init functions are not prefixed by __init and thus are not freed after initialization. This patch saves about 1 kB in the ecryptfs module. Signed-off-by: Jerome Marchand <jmarchan@redhat.com> Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
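The fix is just the annotation; a sketch with an illustrative function name and body:

    /* __init places this code in a section the kernel discards after
     * module initialization, which is where the ~1 kB saving comes from. */
    static int __init ecryptfs_example_init(void)
    {
            return 0;
    }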
-
Committed by Julia Lawall
In this code, 0 is returned on memory allocation failure, even though other failures return -ENOMEM or similar values. A simplified version of the semantic match that finds this problem is as follows (http://coccinelle.lip6.fr/): // <smpl> @@ expression ret; expression x,e1,e2,e3; @@ ret = 0 ... when != ret = e1 *x = \(kmalloc\|kcalloc\|kzalloc\)(...) ... when != ret = e2 if (x == NULL) { ... when != ret = e3 return ret; } // </smpl> Signed-off-by: Julia Lawall <julia@diku.dk> Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
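The shape of the bug and the fix, as a hedged sketch; the variable names are illustrative, not the exact ecryptfs call site:

    rc = -ENOMEM;
    buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
    if (!buf)
            goto out;       /* previously rc was still 0 here, so failure looked like success */
    rc = 0;

That is, set the return code to -ENOMEM before the allocation-failure branch rather than returning the default 0.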
-
Committed by Takashi Iwai
The commit ebabe9a9 ("pass a struct path to vfs_statfs") introduced the struct path initialization, and this seems to trigger an Oops on my machine. The fh_dentry field may be NULL and only set later in fh_verify(), so the initialization of path must happen after fh_verify(). Signed-off-by: Takashi Iwai <tiwai@suse.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Minchan Kim <minchan.kim@gmail.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
If we already had an RW open for a file and get a readonly open, we were piggybacking on the existing RW open. That is inconsistent with the downgrade logic, which blows away the RW open assuming you will still have a readonly open. Also, make sure there is a readonly or writeonly open available for locking, again to prevent bad behavior in downgrade cases when any RW open may be lost. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
It's OK for this function to return without setting filp -- we do it in the special-stateid case. And there is a legitimate case where we can hit this, since we do permit reads on write-only stateids. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Suresh Jayaraman
On 08/26/2010 01:56 AM, joe hefner wrote: > On a recent Fedora (13), I am seeing a mount failure message that I can not explain. I have a Windows Server 2003 with a share set up for access only for a specific username (say userfoo). If I try to mount it from Linux, using userfoo and the correct password, all is well. If I try with a bad password or with some other username (userbar), it fails with "Permission denied" as expected. If I try to mount as username = administrator, and give the correct administrator password, I would also expect "Permission denied", but I see "Cannot allocate memory" instead. > fs/cifs/netmisc.c: Mapping smb error code 5 to POSIX err -13 > fs/cifs/cifssmb.c: Send error in QPathInfo = -13 > CIFS VFS: cifs_read_super: get root inode failed Looks like commit 0b8f18e3 assumed that cifs_get_inode_info() and friends fail only due to a memory allocation error when the inode is NULL, which is not the case if CIFSSMBQPathInfo() fails and returns an error. Fix this by propagating the actual error code back. Acked-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Steve French <sfrench@us.ibm.com>
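A sketch of the change in intent; the exact call site is in the cifs root-inode setup, the argument list is shown as commonly used in this era, and the cleanup path is abridged:

    rc = cifs_get_inode_info(&inode, full_path, NULL, sb, xid, NULL);
    if (rc || !inode) {
            if (!rc)
                    rc = -ENOMEM;   /* only a NULL inode means allocation failure */
            goto out;               /* -EACCES etc. now reaches the caller unchanged */
    }

With this, an access-denied response from the server surfaces as "Permission denied" at mount time instead of "Cannot allocate memory".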
-
Committed by Dan Carpenter
get_ticket_handler() returns a valid pointer or ERR_PTR(-ENOMEM) if kzalloc() fails. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
We are in a position to return an error; do that instead. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Dan Carpenter
ceph_mdsc_build_path() returns an ERR_PTR, but this code is set up to handle NULL returns. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Sage Weil <sage@newdream.net>
-
- 26 August 2010, 3 commits
-
-
Committed by Steve French
CC: Shirish Pargaonkar <shirishp@us.ibm.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Committed by Alan Cox
Just scrubbing some warnings so I can see the real problem ones in the build noise. For 32-bit we need to coax gcc politely into believing we really, honestly intend the casts. Using (u64)(unsigned long) means we cast from a pointer to a type of the right size and then extend it. This stops the warning spew. Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Sage Weil <sage@newdream.net>
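The idiom itself, for reference; the helper name and pointer are illustrative:

    static u64 ptr_to_u64(void *p)
    {
            return (u64)(unsigned long)p;   /* pointer -> pointer-sized int -> u64 */
    }

Casting a pointer directly to u64 on a 32-bit build triggers "cast from pointer to integer of different size"; going through unsigned long first makes the widening explicit and is quiet on both 32-bit and 64-bit builds.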
-
Committed by Dan Carpenter
ceph_get_inode() returns an ERR_PTR; it never returns NULL. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Sage Weil <sage@newdream.net>
-
- 25 August 2010, 3 commits
-
-
Committed by Sage Weil
Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Sage Weil
We used to use i_head_snapc to keep track of which snapc the current epoch of dirty data was dirtied under. It is used by queue_cap_snap to set up the cap_snap. However, since we queue cap snaps for any dirty caps, not just for dirty file data, we need to keep a valid i_head_snapc any time we have dirty or flushing caps. This fixes a NULL pointer dereference in queue_cap_snap when writing back dirty caps without data (e.g., snaptest-authwb.sh). Signed-off-by: Sage Weil <sage@newdream.net>
-
Eliminate sparse warnings ("error: bad constant expression") during usage of the crypto_shash_* APIs. Allocate memory for the shash descriptors once, so that we do not kmalloc/kfree one for every signature generation (shash descriptor for the md5 hash). Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
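A rough sketch of the "allocate once, reuse" pattern for the md5 shash descriptor; the struct and function names are illustrative, and the real code keeps the descriptor in the per-server structure:

    struct sdesc {
            struct shash_desc shash;
            char ctx[];
    };

    static struct sdesc *alloc_md5_sdesc(struct crypto_shash *md5)
    {
            struct sdesc *sdesc;
            int size = sizeof(struct shash_desc) + crypto_shash_descsize(md5);

            sdesc = kmalloc(size, GFP_KERNEL);
            if (!sdesc)
                    return NULL;
            sdesc->shash.tfm = md5;
            return sdesc;   /* kept around and reused for every signature */
    }

Each signing call then only does crypto_shash_init/update/final on the already-allocated descriptor instead of a kmalloc/kfree pair per signature.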
-
- 24 August 2010, 5 commits
-
-
Committed by Christoph Hellwig
If xfs_map_blocks returns EAGAIN because of lock contention, we must redirty the page rather than discard the pagecache content and return an error from writepage. We used to do this correctly, but the logic got lost during the recent reshuffle of the writepage code. Signed-off-by: Christoph Hellwig <hch@lst.de> Reported-by: Mike Gao <ygao.linux@gmail.com> Tested-by: Mike Gao <ygao.linux@gmail.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <dchinner@redhat.com>
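The intended behaviour, sketched; the xfs_map_blocks argument list is abridged and illustrative, and the positive error code follows the XFS convention of that era:

    err = xfs_map_blocks(inode, offset, &imap, type);   /* argument list illustrative */
    if (err == EAGAIN) {
            /* lock contention, not a real failure: keep the page dirty and retry later */
            redirty_page_for_writepage(wbc, page);
            unlock_page(page);
            return 0;
    }

The buggy version instead fell through to the error path, which discarded the pagecache content and returned an error from writepage.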
-
Committed by Dave Chinner
Formatting items requires memory allocation when using delayed logging. Currently that allocation is done while holding the CIL context lock in read mode. This means that if memory allocation takes some time (e.g. enters reclaim), we cannot push on the CIL until the allocation(s) required by formatting complete. This can stall CIL pushes for some time, and once a push is stalled so are all new transaction commits. Fix this by splitting the item formatting into two steps. The first step, which does the allocation and the memcpy() into the allocated buffer, is now done outside the CIL context lock; only the CIL insert is done inside it. This avoids the stall. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Committed by Dave Chinner
Delayed logging adds some serialisation to the log force process to ensure that it does not dereference a bad commit context structure when determining whether a CIL push is necessary. It does this by grabbing the CIL context lock exclusively, then dropping it before pushing the CIL if necessary. This serialises all log forces and pushes regardless of whether a force is necessary or not. As a result, fsync-heavy workloads (like dbench) can be significantly slower with delayed logging than without. To avoid this penalty, copy the current sequence from the context to the CIL structure when they are swapped. This allows us to do unlocked checks on the current sequence without having to worry about dereferencing context structures that may already have been freed. Hence we can remove the CIL context locking in the forcing code and only call into the push code if the current context matches the sequence we need to force. By passing the sequence into the push code, we can check the sequence again once we hold the CIL lock exclusively and abort if the sequence has already been pushed. This avoids a lock round trip and unnecessary CIL pushes when there are racing push calls. The result is that the regression in dbench performance goes away: this change improves dbench performance on a ramdisk from ~2100 MB/s to ~2500 MB/s, which compares favourably to the ~2500 MB/s the same workload achieves without delayed logging. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Committed by Dave Chinner
When we need to cover the log, we issue dummy transactions to ensure the current log tail is on disk. Unfortunately we currently use the root inode in the dummy transaction, and the act of committing the transaction dirties the inode at the VFS level. As a result, VFS writeback of the dirty inode prevents the filesystem from idling long enough for the log covering state machine to complete; the state machine gets stuck in a loop issuing new dummy transactions to cover the log and never makes progress. To avoid this problem, the dummy transactions should not cause externally visible state changes. To ensure that, make the dummy transactions log an unchanging field in the superblock, as its state is never propagated outside the filesystem. This allows the log covering state machine to complete successfully, and the filesystem now correctly enters a fully idle state about 90s after the last modification was made. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Committed by Stuart Brodsky
Because of delayed updates to the sb_icount field in the superblock, it is possible to allocate more than maxicount inodes. This causes the arithmetic to produce a negative number of free inodes in user commands like df or stat -f. Since maxicount is a somewhat arbitrary number, a slight over-allocation is not critical, but the value shown to user commands should be 0 or greater and never go negative. To do this, the f_ffree value in the stats buffer is capped so it never goes negative. [ Modified to use max_t as per Christoph's comment. ] Signed-off-by: Stu Brodsky <sbrodsky@sgi.com> Signed-off-by: Dave Chinner <dchinner@redhat.com>
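A sketch of the clamp in the statfs path; the exact expression for the free-inode count is abridged, and statp is the struct kstatfs being filled in:

    s64 ffree = statp->f_files - (icount - ifree);

    /* delayed sb_icount updates can push this transiently negative */
    statp->f_ffree = max_t(s64, ffree, 0);

With this, df and stat -f report 0 free inodes in the worst case rather than a negative number.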
-