- 18 Sep 2010, 1 commit
-
-
Submitted by Jeff Layton
Each NFS version has its own version of the rename args container. Standardize them on a common one that's identical to the one NFSv4 uses. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
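For illustration, a minimal sketch of what such a common container might look like, modeled on the NFSv4 variant (the exact field names are assumptions, not a verbatim copy of the patch):

    /* Sketch of a version-independent rename args container */
    struct nfs_renameargs {
            const struct nfs_fh     *old_dir;       /* source directory handle */
            const struct nfs_fh     *new_dir;       /* target directory handle */
            const struct qstr       *old_name;      /* source entry name */
            const struct qstr       *new_name;      /* target entry name */
    };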
-
- 17 Sep 2010, 11 commits
-
-
Submitted by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Trond Myklebust
Remove all remaining references to the struct nameidata from the low-level NFS layers. As before, pass down a partially initialised struct nfs_open_context when we want to do an atomic open+create. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Trond Myklebust
Remove references to 'struct nameidata' from the low-level open_revalidate code, and replace them with a struct nfs_open_context which will be correctly initialised upon success. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Trond Myklebust
Start moving the 'struct nameidata'-dependent code out of the lower-level NFS code in preparation for the removal of open intents. Instead of the struct nameidata, we pass down a partially initialised struct nfs_open_context that will be fully initialised by the atomic open upon success. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Chuck Lever
As a convenience, introduce a kernel command line option to enable NFSROOT debugging messages. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
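A plausible shape for such a boot-time switch, using the kernel's __setup() mechanism; the option name and flag variable below are assumptions for illustration:

    static int nfs_root_debug;

    static int __init nfs_root_debug_setup(char *__unused)
    {
            nfs_root_debug = 1;     /* enable NFSROOT debugging output */
            return 1;               /* option consumed */
    }
    __setup("nfsrootdebug", nfs_root_debug_setup);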
-
Submitted by Chuck Lever
Clean up: now that mount option parsing for nfsroot is handled in fs/nfs/super.c, remove code in fs/nfs/nfsroot.c that is no longer used. This includes code that constructs the legacy nfs_mount_data structure, and code that does a MNT call to the server. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Chuck Lever
Replace duplicate code in NFSROOT for mounting an NFS server on '/' with logic that uses the existing mainline text-based logic in the NFS client. Add documenting comments where appropriate. Note that this means NFSROOT mounts now use the same default settings as v2/v3 mounts done via mount(2) from user space: vers=3,tcp,rsize=<negotiated default>,wsize=<negotiated default>. As before, however, no version/protocol negotiation with the server is done. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Chuck Lever
Clean up: to reduce confusion, rename nfs_root_name to nfs_root_parms, as this buffer contains more than just the name of the remote server. Introduce documenting comments around nfs_root_setup(). Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Chuck Lever
During boot, a random character is displayed instead of a tab. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 28 Aug 2010, 3 commits
-
-
Submitted by Eric Paris
The fsnotify main loop has two bools which indicate whether we processed the inode or vfsmount mark in that particular pass through the loop. These bools can be replaced with the inode_group and vfsmount_group variables, which actually makes the code a little easier to understand. Signed-off-by: Eric Paris <eparis@redhat.com>
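A rough sketch of the idea: the per-pass bools disappear because the loop can compare the two groups directly (the helper names here are illustrative, not the actual fsnotify code):

    /* walk both mark lists in parallel, matching marks by group */
    while (inode_mark || vfsmount_mark) {
            struct fsnotify_group *inode_group =
                    inode_mark ? inode_mark->group : NULL;
            struct fsnotify_group *vfsmount_group =
                    vfsmount_mark ? vfsmount_mark->group : NULL;

            if (inode_group && (!vfsmount_group || inode_group < vfsmount_group)) {
                    handle_event(inode_mark, NULL);          /* inode mark only */
                    inode_mark = next_inode_mark();
            } else if (vfsmount_group &&
                       (!inode_group || vfsmount_group < inode_group)) {
                    handle_event(NULL, vfsmount_mark);       /* vfsmount mark only */
                    vfsmount_mark = next_vfsmount_mark();
            } else {
                    handle_event(inode_mark, vfsmount_mark); /* same group: pair up */
                    inode_mark = next_inode_mark();
                    vfsmount_mark = next_vfsmount_mark();
            }
    }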
-
Submitted by Eric Paris
Marks were stored on the inode and vfsmount mark lists in order from highest memory address to lowest memory address. The code to walk those lists assumed they were in order from lowest to highest, with unpredictable results when trying to match up marks from each. It was possible that extra events would be sent to userspace when inode marks ignoring events wouldn't get matched with the vfsmount marks. This problem only affected fanotify when using both vfsmount and inode marks simultaneously. Signed-off-by: Eric Paris <eparis@redhat.com>
-
Submitted by Andreas Gruenbacher
The appropriate error code when privileged operations are denied is EPERM, not EACCES. Signed-off-by: Andreas Gruenbacher <agruen@suse.de> Signed-off-by: Eric Paris <paris@paris.rdu.redhat.com>
-
- 27 Aug 2010, 11 commits
-
-
Submitted by Tyler Hicks
Fixes a regression caused by commit 21edad32. When file name encryption was enabled, ecryptfs_lookup() failed to use the encrypted and encoded version of the upper, plaintext, file name when performing a lookup in the lower file system. This made it impossible to look up existing encrypted file names, and any newly created files would have plaintext file names in the lower file system. https://bugs.launchpad.net/ecryptfs/+bug/623087 Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
-
Submitted by Jerome Marchand
Some ecryptfs init functions are not prefixed by __init and thus not freed after initialization. This patch saves about 1 kB in the ecryptfs module. Signed-off-by: Jerome Marchand <jmarchan@redhat.com> Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
-
Submitted by Julia Lawall
In this code, 0 is returned on memory allocation failure, even though other failures return -ENOMEM or other similar values. A simplified version of the semantic match that finds this problem is as follows (http://coccinelle.lip6.fr/):

// <smpl>
@@
expression ret;
expression x,e1,e2,e3;
@@
ret = 0
... when != ret = e1
*x = \(kmalloc\|kcalloc\|kzalloc\)(...)
... when != ret = e2
if (x == NULL) { ... when != ret = e3
  return ret;
}
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk> Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
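In concrete terms, the pattern the semantic match flags looks like this (a hypothetical function, for illustration only), with the fix being to return -ENOMEM instead of the stale 0:

    static int example_init(struct example **out)
    {
            struct example *x;

            x = kzalloc(sizeof(*x), GFP_KERNEL);
            if (!x)
                    return -ENOMEM; /* previously: return ret; with ret still 0 */

            *out = x;
            return 0;
    }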
-
Submitted by Takashi Iwai
Commit ebabe9a9 ("pass a struct path to vfs_statfs") introduced the struct path initialization, and this seems to trigger an Oops on my machine. The fh_dentry field may be NULL and only set later in fh_verify(), thus the initialization of the path must happen after fh_verify(). Signed-off-by: Takashi Iwai <tiwai@suse.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Minchan Kim <minchan.kim@gmail.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
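A sketch of the reordering, following fs/nfsd naming conventions (error handling trimmed; treat this as an illustration of the description above, not the literal patch):

    err = fh_verify(rqstp, fhp, 0, NFSD_MAY_NOP);
    if (!err) {
            struct path path = {
                    .mnt    = fhp->fh_export->ex_path.mnt,
                    .dentry = fhp->fh_dentry,  /* valid only after fh_verify() */
            };
            err = nfserrno(vfs_statfs(&path, stat));
    }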
-
Submitted by J. Bruce Fields
If we already had a RW open for a file and get a readonly open, we were piggybacking on the existing RW open. That's inconsistent with the downgrade logic, which blows away the RW open assuming you'll still have a readonly open. Also, make sure there is a readonly or writeonly open available for locking, again to prevent bad behavior in downgrade cases when any RW open may be lost. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by J. Bruce Fields
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by J. Bruce Fields
It's OK for this function to return without setting filp; we do it in the special-stateid case. And there's a legitimate case where we can hit this, since we do permit reads on write-only stateids. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Suresh Jayaraman
On 08/26/2010 01:56 AM, joe hefner wrote: > On a recent Fedora (13), I am seeing a mount failure message that I can not explain. I have a Windows Server 2003 with a share set up for access only for a specific username (say userfoo). If I try to mount it from Linux, using userfoo and the correct password, all is well. If I try with a bad password or with some other username (userbar), it fails with "Permission denied" as expected. If I try to mount as username = administrator, and give the correct administrator password, I would also expect "Permission denied", but I see "Cannot allocate memory" instead. > fs/cifs/netmisc.c: Mapping smb error code 5 to POSIX err -13 > fs/cifs/cifssmb.c: Send error in QPathInfo = -13 > CIFS VFS: cifs_read_super: get root inode failed It looks like commit 0b8f18e3 assumed that cifs_get_inode_info() and friends fail only due to a memory allocation error when the inode is NULL, which is not the case if CIFSSMBQPathInfo() fails and returns an error. Fix this by propagating the actual error code back. Acked-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Steve French <sfrench@us.ibm.com>
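A sketch of the fix in the caller (argument list abbreviated; assume the usual cifs_get_inode_info() signature): the server's return code is passed through instead of being collapsed into -ENOMEM:

    rc = cifs_get_inode_info(&inode, full_path, NULL, sb, xid, NULL);
    if (rc)
            goto out;       /* e.g. -EACCES mapped from the SMB error, not -ENOMEM */
    if (!inode) {
            rc = -ENOMEM;   /* allocation genuinely failed */
            goto out;
    }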
-
Submitted by Dan Carpenter
get_ticket_handler() returns a valid pointer or ERR_PTR(-ENOMEM) if kzalloc() fails. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Sage Weil <sage@newdream.net>
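Which means callers must check with IS_ERR() rather than for NULL; a minimal sketch:

    th = get_ticket_handler(ac, service);
    if (IS_ERR(th))
            return PTR_ERR(th);     /* -ENOMEM when kzalloc() failed */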
-
Submitted by Sage Weil
We are in a position to return an error; do that instead. Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Dan Carpenter
ceph_mdsc_build_path() returns an ERR_PTR, but this code is set up to handle NULL returns. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Sage Weil <sage@newdream.net>
-
- 26 Aug 2010, 3 commits
-
-
Submitted by Steve French
CC: Shirish Pargaonkar <shirishp@us.ibm.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Submitted by Alan Cox
Just scrubbing some warnings so I can see real problem ones in the build noise. For 32-bit we need to coax gcc politely into believing we really, honestly intend the casts. Using (u64)(unsigned long) means we cast from a pointer to a type of the right size and then extend it. This stops the warning spew. Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Sage Weil <sage@newdream.net>
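A minimal example of the double cast on a 32-bit build:

    void *p = buf;
    /* cast to the pointer-sized unsigned long first, then widen to u64;
     * avoids "cast from pointer to integer of different size" warnings */
    u64 cookie = (u64)(unsigned long)p;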
-
Submitted by Dan Carpenter
ceph_get_inode() returns an ERR_PTR; it never returns NULL. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Sage Weil <sage@newdream.net>
-
- 25 Aug 2010, 3 commits
-
-
Submitted by Sage Weil
Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Sage Weil
We used to use i_head_snapc to keep track of which snapc the current epoch of dirty data was dirtied under. It is used by queue_cap_snap to set up the cap_snap. However, since we queue cap snaps for any dirty caps, not just for dirty file data, we need to keep a valid i_head_snapc any time we have dirty|flushing caps. This fixes a NULL pointer deref in queue_cap_snap when writing back dirty caps without data (e.g., snaptest-authwb.sh). Signed-off-by: Sage Weil <sage@newdream.net>
-
Submitted by Shirish Pargaonkar
Eliminate sparse warnings during usage of the crypto_shash_* APIs (error: bad constant expression). Allocate memory for shash descriptors once, so that we do not kmalloc/kfree it for every signature generation (shash descriptor for the md5 hash). Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
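A sketch of the one-time setup with the crypto_shash_* API (error handling abbreviated; variable names are illustrative):

    struct crypto_shash *md5;
    struct shash_desc *desc;

    md5 = crypto_alloc_shash("md5", 0, 0);
    if (IS_ERR(md5))
            return PTR_ERR(md5);

    /* allocate the descriptor once, sized for this transform */
    desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(md5), GFP_KERNEL);
    if (!desc) {
            crypto_free_shash(md5);
            return -ENOMEM;
    }
    desc->tfm = md5;

    /* reused for every signature instead of kmalloc/kfree per call */
    crypto_shash_digest(desc, data, len, signature);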
-
- 24 Aug 2010, 8 commits
-
-
Submitted by Christoph Hellwig
If xfs_map_blocks returns EAGAIN because of lock contention, we must redirty the page rather than discard the pagecache content and return an error from writepage. We used to do this correctly, but the logic got lost during the recent reshuffle of the writepage code. Signed-off-by: Christoph Hellwig <hch@lst.de> Reported-by: Mike Gao <ygao.linux@gmail.com> Tested-by: Mike Gao <ygao.linux@gmail.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <dchinner@redhat.com>
-
Submitted by Dave Chinner
Formatting items requires memory allocation when using delayed logging. Currently that memory allocation is done while holding the CIL context lock in read mode. This means that if memory allocation takes some time (e.g. enters reclaim), we cannot push on the CIL until the allocation(s) required by formatting complete. This can stall CIL pushes for some time, and once a push is stalled, so are all new transaction commits. Fix this by splitting the item formatting into two steps. The first step, which does the allocation and memcpy() into the allocated buffer, is now done outside the CIL context lock, and only the CIL insert is done inside the CIL context lock. This avoids the stall issue. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
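Schematically, the commit path becomes (function names are illustrative of the two steps described above, not necessarily the exact XFS symbols):

    xlog_cil_format_items(log, tp->t_items);    /* kmalloc + memcpy, lock not held */

    down_read(&cil->xc_ctx_lock);               /* CIL context lock */
    xlog_cil_insert_items(log, tp->t_items);    /* cheap list manipulation only */
    up_read(&cil->xc_ctx_lock);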
-
Submitted by Dave Chinner
Delayed logging adds some serialisation to the log force process to ensure that it does not dereference a bad commit context structure when determining if a CIL push is necessary or not. It does this by grabbing the CIL context lock exclusively, then dropping it before pushing the CIL if necessary. This causes serialisation of all log forces and pushes regardless of whether a force is necessary or not. As a result, fsync-heavy workloads (like dbench) can be significantly slower with delayed logging than without. To avoid this penalty, copy the current sequence from the context to the CIL structure when they are swapped. This allows us to do unlocked checks on the current sequence without having to worry about dereferencing context structures that may have already been freed. Hence we can remove the CIL context locking in the forcing code and only call into the push code if the current context matches the sequence we need to force. By passing the sequence into the push code, we can check the sequence again once we have the CIL lock held exclusive and abort if the sequence has already been pushed. This avoids a lock round-trip and unnecessary CIL pushes when we have racing push calls. The result is that the regression in dbench performance goes away: this change improves dbench performance on a ramdisk from ~2100 MB/s to ~2500 MB/s. This compares favourably to not using delayed logging, which returns ~2500 MB/s for the same workload. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Dave Chinner
When we need to cover the log, we issue dummy transactions to ensure the current log tail is on disk. Unfortunately we currently use the root inode in the dummy transaction, and the act of committing the transaction dirties the inode at the VFS level. As a result, the VFS writeback of the dirty inode will prevent the filesystem from idling long enough for the log covering state machine to complete. The state machine gets stuck in a loop issuing new dummy transactions to cover the log and never makes progress. To avoid this problem, the dummy transactions should not cause externally visible state changes. To ensure this occurs, make sure that dummy transactions log an unchanging field in the superblock, as its state is never propagated outside the filesystem. This allows the log covering state machine to complete successfully, and the filesystem now correctly enters a fully idle state about 90s after the last modification was made. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Stuart Brodsky
Because of delayed updates to the sb_icount field in the super block, it is possible to allocate more than maxicount inodes. This causes the arithmetic to calculate a negative number of free inodes in user commands like df or stat -f. Since maxicount is a somewhat arbitrary number, a slight over-allocation is not critical, but the value shown to user commands should be 0 or greater and never go negative. To do this, the f_ffree value in the stats buffer is capped so it never goes negative. [ Modified to use max_t as per Christoph's comment. ] Signed-off-by: Stu Brodsky <sbrodsky@sgi.com> Signed-off-by: Dave Chinner <dchinner@redhat.com>
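The clamp itself is a one-liner in the statfs path; a sketch, with the field and variable names assumed from the surrounding XFS statfs code:

    /* never report a negative free-inode count to df/statfs */
    statp->f_ffree = max_t(__int64_t, ffree, 0);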
-
Submitted by Dave Chinner
During data integrity (WB_SYNC_ALL) writeback, wbc->nr_to_write will go negative on inodes with more than 1024 dirty pages due to implementation details of write_cache_pages(). Currently XFS will abort page clustering in writeback once nr_to_write drops below zero, and so for data integrity writeback we will do very inefficient page-at-a-time allocation and IO submission for inodes with large numbers of dirty pages. Fix this by only aborting the page clustering code when wbc->nr_to_write is negative and the sync mode is WB_SYNC_NONE. Cc: <stable@kernel.org> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
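The corrected bail-out condition, sketched from the description above (not the literal patch):

    /* stop clustering only for non-integrity writeback; WB_SYNC_ALL
     * must keep going even after the nr_to_write budget is exhausted */
    if (wbc->nr_to_write < 0 && wbc->sync_mode == WB_SYNC_NONE)
            done = 1;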
-
Submitted by Dave Chinner
Commit 7124fe0a ("xfs: validate untrusted inode numbers during lookup") changed the inode lookup code to do btree lookups for untrusted inode numbers. This change made an invalid assumption about the alignment of inodes and hence incorrectly calculated the first inode in the cluster. As a result, some inode numbers were being incorrectly considered invalid when they were actually valid. The issue was not picked up by the xfstests suite because it always runs fsr and dump (the two utilities that utilise the bulkstat interface) on cache-hot inodes, and hence the lookup code in the cold cache path was not sufficiently exercised to uncover this intermittent problem. Fix the issue by relaxing the btree lookup criteria and then checking whether the record returned contains the inode number we are looking up. If we get an incorrect record, then the inode number is invalid. Cc: <stable@kernel.org> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Dave Chinner
Under heavy parallel metadata load (e.g. dbench), we can fail to mark all the inodes in a cluster being freed as XFS_ISTALE, as we skip inodes we cannot get the XFS_ILOCK_EXCL or the flush lock on. When this happens and the inode cluster buffer has already been marked stale and freed, inode reclaim can try to write the inode out, as it is dirty and not marked stale. This can result in writing the metadata to a freed extent or, in the case it has already been overwritten, trigger a magic number check failure and return an EUCLEAN error such as: Filesystem "ram0": inode 0x442ba1 background reclaim flush failed with 117 Fix this by ensuring that we hoover up all in-memory inodes in the cluster and mark them XFS_ISTALE when freeing the cluster. Cc: <stable@kernel.org> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-