- 10 April 2018, 12 commits
-
-
Committed by David Howells

Split the AFS dynamic root stuff out of the main directory handling file and into its own file, as they share little in common. The dynamic root code also gets its own dentry and inode ops tables.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Each afs dentry is tagged with the version that the parent directory was at the last time it was validated and, currently, if this differs, the directory is scanned and the dentry is refreshed.

However, this leads to an excessive amount of revalidation on directories that get modified on the client without conflict with another client. We know there's no conflict because the parent directory's data version number got incremented by exactly 1 on any create, mkdir, unlink, etc., so we can trust the current state of the unaffected dentries when we perform a local directory modification.

Optimise by keeping track, in the parent directory's vnode, of the last version of the parent directory that was changed outside of the client, and using that to validate the dentries rather than the current version.

Signed-off-by: David Howells <dhowells@redhat.com>
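A minimal sketch of the idea, with hypothetical field and helper names rather than the exact kAFS structures:

    #include <linux/types.h>

    /* Track the last directory version changed by another client and
     * validate dentries against that rather than the current version. */
    struct dir_vnode {
            u64 data_version;    /* current data version of the directory */
            u64 invalid_before;  /* last version changed outside this client */
    };

    /* A local create/mkdir/unlink bumps data_version by exactly 1 but
     * leaves invalid_before alone, so unaffected dentries validated at or
     * after invalid_before stay valid without rescanning the directory. */
    static bool dentry_still_valid(const struct dir_vnode *dir,
                                   u64 dentry_parent_version)
    {
            return dentry_parent_version >= dir->invalid_before;
    }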
-
Committed by David Howells

Rearrange the AFSFetchStatus to inode attribute mapping code in a number of ways:

 (1) Use an XDR structure rather than a series of incremented pointer accesses when decoding an AFSFetchStatus object. This allows out-of-order decode.

 (2) Don't store the if_version value but rather just check it and abort if it's not something we can handle.

 (3) Store the owner and group in the status record as raw values rather than converting them to kuid/kgid. Do that when they're mapped into i_uid/i_gid.

 (4) Validate the type and abort code up front and abort if they're wrong.

 (5) Split the inode attribute setting out of the XDR decode of an AFSFetchStatus object and into its own function. This allows it to be called from elsewhere too.

 (6) Differentiate changes to data from changes to metadata.

 (7) Use the split-out attribute mapping function from afs_iget().

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Store the data version number indicated by an FS.FetchData op in the read request structure so that it's accessible by the page reader.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

We no longer parse symlinks when we get the inode to determine whether the symlink is actually a mountpoint, as we detect that by examining the mode instead (symlinks are always 0777 and mountpoints 0644).

Access the cache after mapping the status so that we don't have to manually set the inode size now. Note that this may need adjusting if disconnected operation is implemented, as the file metadata may then have to be obtained from the cache.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Introduce a proc file that displays a bunch of statistics for the AFS filesystem in the current network namespace.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Dump an AFS FileStatus record that is detected as invalid.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Implement @cell substitution handling such that if @cell is seen as a name in a dynamic root mount, then the name of the root cell for that network namespace will be substituted for @cell during lookup.

The substitution of @cell for the current net namespace is set by writing the cell name to /proc/fs/afs/rootcell. The value can be obtained by reading the file. For example:

    # mount -t afs none /kafs -o dyn
    # echo grand.central.org >/proc/fs/afs/rootcell
    # ls /kafs/@cell
    archive/  cvs/  doc/  local/  project/  service/  software/  user/  www/
    # cat /proc/fs/afs/rootcell
    grand.central.org

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Implement the AFS feature by which @sys at the end of a pathname component may be substituted for one of a list of values, typically naming the operating system. Up to 16 alternatives may be specified and these are tried in turn until one works. Each network namespace has[*] a separate independent list.

Upon creation of a new network namespace, the list of values is initialised[*] to a single OpenAFS-compatible string representing the arch type plus "_linux26". For example, on x86_64, the sysname is "amd64_linux26".

[*] Or will, once network namespace support is finalised in kAFS.

The list may be set by:

    # for i in foo bar linux-x86_64; do echo $i; done >/proc/fs/afs/sysname

for which separate writes to the same fd are amalgamated and applied on close. The LF character may be used as a separator to specify multiple items in the same write() call.

The list may be cleared by:

    # echo >/proc/fs/afs/sysname

and read by:

    # cat /proc/fs/afs/sysname
    foo
    bar
    linux-x86_64

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

When afs_lookup() is called, prospectively look up the next 50 uncached fids from that same directory as well and cache the results, rather than just looking up the one file requested. This lets us use the FS.InlineBulkStatus RPC op to increase efficiency by fetching up to 50 file statuses at a time.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

AFS cells that are added or set as the workstation cell through /proc are pinned against removal by setting the AFS_CELL_FL_NO_GC flag on them and taking a ref. The ref should only be taken if the flag wasn't already set, so make it conditional. Without this, an assertion failure occurs during module removal, indicating that the refcount is too elevated.

Signed-off-by: David Howells <dhowells@redhat.com>
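The shape of the fix is roughly the following sketch; the flag name comes from the text above, while the ref-taking helper is illustrative:

    /* Only take a ref the first time the cell is pinned; repeated writes
     * to /proc must not each add a reference. */
    if (!test_and_set_bit(AFS_CELL_FL_NO_GC, &cell->flags))
            afs_get_cell(cell);     /* illustrative ref-taking helper */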
-
Committed by David Howells

Fix warnings raised by checker, including:

 (*) Warnings raised by unequal comparison for the purposes of sorting, where the endianness doesn't matter:

    fs/afs/addr_list.c:246:21: warning: restricted __be16 degrades to integer
    fs/afs/addr_list.c:246:30: warning: restricted __be16 degrades to integer
    fs/afs/addr_list.c:248:21: warning: restricted __be32 degrades to integer
    fs/afs/addr_list.c:248:49: warning: restricted __be32 degrades to integer
    fs/afs/addr_list.c:283:21: warning: restricted __be16 degrades to integer
    fs/afs/addr_list.c:283:30: warning: restricted __be16 degrades to integer

 (*) afs_set_cb_interest() is not actually used and can be removed.

 (*) afs_cell_gc_delay() should be provided with a sysctl.

 (*) afs_cell_destroy() needs to use rcu_access_pointer() to read cell->vl_addrs.

 (*) afs_init_fs_cursor() should be static.

 (*) struct afs_vnode::permit_cache needs to be marked __rcu.

 (*) afs_server_rcu() needs to use rcu_access_pointer().

 (*) afs_destroy_server() should use rcu_access_pointer() on server->addresses, as the server object is no longer accessible.

 (*) afs_find_server() casts __be16/__be32 values to int in order to directly compare them for the purpose of finding a match in a list, but it should also annotate the cast with __force to avoid checker warnings.

 (*) afs_check_permit() accesses vnode->permit_cache outside of the RCU read lock, though it doesn't then use the value; the extraneous access is deleted.

False positives:

 (*) Conditional locking around the code in xdr_decode_AFSFetchStatus. This can be dealt with in a separate patch.

    fs/afs/fsclient.c:148:9: warning: context imbalance in 'xdr_decode_AFSFetchStatus' - different lock contexts for basic block

 (*) Incorrect handling of seq-retry lock context balance:

    fs/afs/inode.c:455:38: warning: context imbalance in 'afs_getattr' - different lock contexts for basic block
    fs/afs/server.c:52:17: warning: context imbalance in 'afs_find_server' - different lock contexts for basic block
    fs/afs/server.c:128:17: warning: context imbalance in 'afs_find_server_by_uuid' - different lock contexts for basic block

Errors:

 (*) afs_lookup_cell_rcu() needs to break out of the seq-retry loop, not go round again, if it successfully found the workstation cell.

 (*) Fix UUID decode in afs_deliver_cb_probe_uuid().

 (*) afs_cache_permit() has a missing rcu_read_unlock() before one of the jumps to the someone_else_changed_it label. Move the unlock to after the label.

 (*) afs_vl_get_addrs_u() is using ntohl() rather than htonl() when encoding to XDR.

 (*) afs_deliver_yfsvl_get_endpoints() is using htonl() rather than ntohl() when decoding from XDR.

Signed-off-by: David Howells <dhowells@redhat.com>
-
- 06 April 2018, 1 commit
-
-
Committed by David Howells

Pass the object size in to fscache_acquire_cookie() and fscache_write_page() rather than having the netfs provide a callback by which it can be received. This makes it easier to update the size of the object when a new page is written that extends the object.

The current object size is also passed by fscache to the check_aux function, obviating the need to store it in the aux data.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Anna Schumaker <anna.schumaker@netapp.com>
Tested-by: Steve Dickson <steved@redhat.com>
-
- 04 April 2018, 4 commits
-
-
Committed by David Howells

Attach copies of the index key and auxiliary data to the fscache cookie so that:

 (1) The callbacks to the netfs for this stuff can be eliminated. This can simplify things in the cache as the information is still available, even after the cache has relinquished the cookie.

 (2) The locking requirements of accessing the information are simplified as we don't have to worry about the netfs object going away on us.

 (3) The cache can do lazy updating of the coherency information on disk. As long as the cache is flushed before reboot/poweroff, there's no need to update the coherency info on disk every time it changes.

 (4) Cookies can be hashed or put in a tree as the index key is easily available. This allows:

     (a) Checks for duplicate cookies to be made at the top fscache layer rather than down in the bowels of the cache backend.

     (b) Caching to be added to a netfs object that has a cookie if the cache is brought online after the netfs object is allocated.

A certain amount of space is made in the cookie for inline copies of the data, but if it won't fit there, extra memory will be allocated for it. The downside of this is that live cache operation requires more memory.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Anna Schumaker <anna.schumaker@netapp.com>
Tested-by: Steve Dickson <steved@redhat.com>
-
Committed by David Howells

When relinquishing cookies, either due to iget failure or to inode eviction, retire a cookie if we think the corresponding vnode got deleted on the server rather than just letting it lie in the cache.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

AFS vnodes (files) are referenced by a triplet of { volume ID, vnode ID, uniquifier }. Currently, kafs is only using the vnode ID as the file key in the volume fscache index and checking the uniquifier on cookie acquisition against the contents of the auxiliary data stored in the cache.

Unfortunately, this is subject to a race in which an FS.RemoveFile or FS.RemoveDir op is issued against the server but the local afs inode isn't torn down and disposed of before another thread issues something like FS.CreateFile. The latter then gets given the vnode ID that just got removed, but with a new uniquifier, and a cookie collision occurs in the cache because the cookie is only keyed on the vnode ID, whereas the inode is keyed on the vnode ID plus the uniquifier.

Fix this by keying the cookie on the uniquifier in addition to the vnode ID and dropping the uniquifier from the auxiliary data supplied.

Signed-off-by: David Howells <dhowells@redhat.com>
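Schematically, the cache key changes from { vnode ID } to { vnode ID, uniquifier }; a purely illustrative sketch of such a key, not the exact kAFS layout:

    #include <linux/types.h>

    /* Illustrative cookie key: a recreated file reuses the vnode ID but
     * gets a new uniquifier, so keying on both avoids the collision. */
    struct vnode_cache_key {
            __be32 vnode_id;
            __be32 vnode_unique;
    } __packed;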
-
Committed by David Howells

Invalidate any data stored in fscache for a vnode that changes on the server so that we don't end up with the cache in a bad state locally.

Signed-off-by: David Howells <dhowells@redhat.com>
-
- 28 March 2018, 1 commit
-
-
Committed by David Howells

In rxrpc and afs, use the debug_ids that are monotonically allocated to various objects as they're allocated, rather than pointers, as kernel pointers are now hashed, making them less useful. Further, the debug ids aren't reused anywhere near as quickly.

In addition, allow kernel services that use rxrpc, such as afs, to take numbers from the rxrpc counter, assign them to their own call struct and pass them in to rxrpc for both client and service calls, so that the trace lines for each will have the same ID tag.

Signed-off-by: David Howells <dhowells@redhat.com>
-
- 20 March 2018, 1 commit
-
-
Committed by Peter Zijlstra

The old wait_on_atomic_t() is going to be removed; use the more flexible wait_var_event() API instead. No change in functionality.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
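A sketch of the new API in use; the object and its usage counter here are illustrative, not taken from any particular caller:

    /* Wait until the counter drops to zero; wait_var_event() takes an
     * address used purely as a wait-queue key plus an arbitrary condition. */
    wait_var_event(&obj->usage, atomic_read(&obj->usage) == 0);

    /* The release path pairs with it by waking the same address. */
    if (atomic_dec_and_test(&obj->usage))
            wake_up_var(&obj->usage);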
-
- 06 February 2018, 7 commits
-
-
Committed by David Howells

Support the AFS dynamic root, which is a pseudo-volume that doesn't connect to any server resource but rather is just a root directory that dynamically creates mountpoint directories, where the name of such a directory is the name of the cell. Such a mount can be created thus:

    mount -t afs none /afs -o dyn

Dynamic root superblocks aren't shared except by bind mounts and propagation.

Cell root volumes can then be mounted by referring to them by name, e.g.:

    ls /afs/grand.central.org/
    ls /afs/.grand.central.org/

The kernel will upcall to consult the DNS if the address wasn't supplied directly.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Rearrange afs_select_fileserver() a little to put the use_server chunk before the next_server chunk so that, with the removal of a couple of gotos, the main path through the function is all one sequence.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Remove some old unused code.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Fix server list handling in the following ways:

 (1) In afs_alloc_volume(), remove the duplicate server list build code. This was already done by afs_alloc_server_list(), which afs_alloc_volume() previously called; the duplication just results in twice as many VL RPCs.

 (2) In afs_deliver_vl_get_entry_by_name_u(), use the number of server records indicated by ->nServers in the UVLDB record returned by the VL.GetEntryByNameU RPC call rather than scanning all NMAXNSERVERS slots. Unused slots may contain garbage.

 (3) In afs_alloc_server_list(), don't stop converting a UVLDB record into a server list just because we can't look up one of the servers. Just skip that server and go on to the next. If we can't look up any of the servers then we'll fail at the end.

Without this patch, an attempt to view the umich.edu root cell using something like "ls /afs/umich.edu" on a dynamic root (future patch) mount or an autocell mount will result in ENOMEDIUM. The failure is due to kafs not stopping after nServers' worth of records have been read, but then trying to access a server with a garbage UUID and getting an error, which aborts the server list build.

Fixes: d2ddc776 ("afs: Overhaul volume and server record caching and fileserver rotation")
Reported-by: Jonathan Billings <jsbillings@jsbillings.org>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
-
Committed by David Howells

In afs_select_fileserver(), we need to clear the ->responded flag in the address list when reusing it. We should also clear it in afs_select_current_fileserver(). To this end, just memset() the object before initialising it.

Fixes: d2ddc776 ("afs: Overhaul volume and server record caching and fileserver rotation")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
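The pattern is simply to zero the whole structure before setting it up again; a sketch with illustrative field names rather than the exact kAFS cursor layout:

    /* Zero the cursor so stale state such as the ->responded flag can't
     * leak into the next rotation attempt, then re-initialise only the
     * fields that are actually needed. */
    memset(&cursor, 0, sizeof(cursor));
    cursor.alist = alist;
    cursor.index = 0;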
-
Committed by David Howells

afs_select_fileserver() ends the address cursor it is using in the case in which we get some sort of network error and run out of addresses to iterate through, before it jumps to try the next server. This also needs to be done when the server aborts with some sort of error that means we should try the next server.

Fix this by:

 (1) Moving the iterate_address afs_end_cursor() call to the next_server case.

 (2) Ending the cursor in the failed case.

 (3) Making afs_end_cursor() clear the ->begun flag and ->addr pointer in the address cursor.

 (4) Making afs_end_cursor() able to be called on an already cleared cursor.

Without this, something like the following oops may occur:

    AFS: Assertion failed
    18446612134397189888 == 0 is false
    0xffff88007c279f00 == 0x0 is false
    ------------[ cut here ]------------
    kernel BUG at fs/afs/rotate.c:360!
    RIP: 0010:afs_select_fileserver+0x79b/0xa30 [kafs]
    Call Trace:
     afs_statfs+0xcc/0x180 [kafs]
     ? p9_client_statfs+0x9e/0x110 [9pnet]
     ? _cond_resched+0x19/0x40
     statfs_by_dentry+0x6d/0x90
     vfs_statfs+0x1b/0xc0
     user_statfs+0x4b/0x80
     SYSC_statfs+0x15/0x30
     SyS_statfs+0xe/0x10
     entry_SYSCALL_64_fastpath+0x20/0x83

Fixes: d2ddc776 ("afs: Overhaul volume and server record caching and fileserver rotation")
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
-
Committed by David Howells

afs_alloc_volume() needs to release the cell ref it obtained in the case of an error. Fix this by adding an afs_put_cell() call to the error path.

This can be triggered when a lookup for a cell in a dynamic root or an autocell mount returns an error whilst trying to look up the server (such as ENOMEDIUM). The result is an assertion failure oops when the module is unloaded, due to outstanding refs on a cell record.

Fixes: d2ddc776 ("afs: Overhaul volume and server record caching and fileserver rotation")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
-
- 29 January 2018, 1 commit
-
-
Committed by Jeff Layton

For AFS, the i_version counter is generally treated as an opaque value, so we use the *_raw variants of the API here.

Note that AFS has quite a different definition for this counter. AFS only increments it on changes to the data in regular files and the contents of directories; inode metadata changes do not result in a version increment. We'll need to reconcile that somehow if we ever want to present this to userspace via statx.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
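A sketch of the raw helpers in use; the server-supplied data_version parameter is illustrative, while inode_set_iversion_raw() and inode_peek_iversion_raw() are the helpers from <linux/iversion.h>:

    #include <linux/iversion.h>

    /* Store the server-owned version number verbatim, without treating it
     * as a counter the client increments itself. */
    static void set_server_version(struct inode *inode, u64 data_version)
    {
            inode_set_iversion_raw(inode, data_version);
    }

    /* Compare it as an opaque value to decide whether the data changed. */
    static bool server_data_changed(struct inode *inode, u64 data_version)
    {
            return inode_peek_iversion_raw(inode) != data_version;
    }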
-
- 02 January 2018, 3 commits
-
-
Committed by David Howells

afs_write_end() is missing a page unlock and put if afs_fill_page() fails.

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Repeatedly creating and deleting a file on an afs mount will run the box out of memory, e.g.:

    dd if=/dev/zero of=/afs/scratch/m0 bs=$((1024*1024)) count=512
    rm /afs/scratch/m0

The problem seems to be that the nlink count isn't being properly decremented, so the inode can never be scrapped.

Note that this doesn't fix local creation followed by remote deletion. That's harder to handle and will require a separate patch, as we're not told that the file has been deleted - only that the directory has changed.

Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
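The essence of the fix is to drop the local link count once the server-side remove has succeeded; a sketch using the generic VFS helper, with the surrounding unlink path elided:

    /* After the FS.RemoveFile RPC succeeds, reflect the removal locally so
     * the inode becomes eligible for eviction once the last ref is gone. */
    if (inode->i_nlink > 0)
            drop_nlink(inode);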
-
Committed by Dan Carpenter

Smatch warns that:

    fs/afs/rxrpc.c:922 afs_extract_data() error: uninitialized symbol 'remote_abort'.

Smatch is right that "remote_abort" might be uninitialized when we pass it to afs_set_call_complete(). I don't know if that function uses the uninitialized variable. Anyway, the comment for rxrpc_kernel_recv_data() says that "*_abort should also be initialised to 0", and this patch does that.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David Howells <dhowells@redhat.com>
-
- 01 December 2017, 2 commits
-
-
Committed by David Howells

When an AFS inode is allocated by afs_alloc_inode(), the allocated afs_vnode struct isn't necessarily reset from the last time it was used as an inode, because the slab constructor is only invoked once, when the memory is first obtained from the page allocator.

This means that information can leak from one inode to the next because we're not calling kmem_cache_zalloc(). Some of the information isn't reset, in particular the permit cache pointer.

Bring the clearances up to date.

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marc Dionne <marc.dionne@auristor.com>
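The key point is that a slab constructor runs only when a slab page is first populated, not on every kmem_cache_alloc(), so per-object state must be re-cleared at allocation time. A sketch of the allocation path; field names other than permit_cache are illustrative:

    static struct inode *afs_alloc_inode(struct super_block *sb)
    {
            struct afs_vnode *vnode;

            vnode = kmem_cache_alloc(afs_inode_cachep, GFP_KERNEL);
            if (!vnode)
                    return NULL;

            /* The object may be recycled from a previous inode's life, so
             * clear anything the constructor won't have re-run over. */
            vnode->permit_cache = NULL;
            vnode->flags = 0;
            memset(&vnode->fid, 0, sizeof(vnode->fid));

            return &vnode->vfs_inode;
    }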
-
Committed by David Howells

Fix four refcount bugs in afs_cache_permit():

 (1) When checking the result of the kzalloc(), we can't just return, but must put 'permits'.

 (2) We shouldn't put permits immediately after hashing a new permit, as we need to keep the pointer stable so that we can check whether vnode->permit_cache has changed before we decide whether to assign to it.

 (3) 'permits' is being put twice.

 (4) We need to put either the replacement or the thing replaced after the assignment to vnode->permit_cache.

Without this, lots of the following are seen:

    Kernel BUG at ffffffffa039857b [verbose debug info unavailable]
    ------------[ cut here ]------------
    Kernel BUG at ffffffffa039858a [verbose debug info unavailable]
    ------------[ cut here ]------------

The addresses are in the .text..refcount section of the kafs.ko module. Following the relocation records for the __ex_table section shows one to be due to the decrement in afs_put_permits() and the other to be key_get() in afs_cache_permit().

Occasionally, the following is seen:

    refcount_t overflow at afs_cache_permit+0x57d/0x5c0 [kafs] in cc1[562], uid/euid: 0/0
    WARNING: CPU: 0 PID: 562 at kernel/panic.c:657 refcount_error_report+0x9c/0xac
    ...

Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marc Dionne <marc.dionne@auristor.com>
-
- 28 November 2017, 1 commit
-
-
Committed by Linus Torvalds

This is a pure automated search-and-replace of the internal kernel superblock flags.

The s_flags are now called SB_*, with the names and the values for the moment mirroring the MS_* flags that they're equivalent to.

Note how the MS_xyz flags are the ones passed to the mount system call, while the SB_xyz flags are what we then use in sb->s_flags.

The script to do this was:

    # places to look in; re security/*: it generally should *not* be
    # touched (that stuff parses mount(2) arguments directly), but
    # there are two places where we really deal with superblock flags.
    FILES="drivers/mtd drivers/staging/lustre fs ipc mm \
            include/linux/fs.h include/uapi/linux/bfs_fs.h \
            security/apparmor/apparmorfs.c security/apparmor/include/lib.h"
    # the list of MS_... constants
    SYMS="RDONLY NOSUID NODEV NOEXEC SYNCHRONOUS REMOUNT MANDLOCK \
          DIRSYNC NOATIME NODIRATIME BIND MOVE REC VERBOSE SILENT \
          POSIXACL UNBINDABLE PRIVATE SLAVE SHARED RELATIME KERNMOUNT \
          I_VERSION STRICTATIME LAZYTIME SUBMOUNT NOREMOTELOCK NOSEC BORN \
          ACTIVE NOUSER"

    SED_PROG=
    for i in $SYMS; do SED_PROG="$SED_PROG -e s/MS_$i/SB_$i/g"; done

    # we want files that contain at least one of MS_...,
    # with fs/namespace.c and fs/pnode.c excluded.
    L=$(for i in $SYMS; do git grep -w -l MS_$i $FILES; done| sort|uniq|grep -v '^fs/namespace.c'|grep -v '^fs/pnode.c')

    for f in $L; do sed -i $f $SED_PROG; done

Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 November 2017, 5 commits
-
-
Committed by Colin Ian King

The assignment of dvnode to itself is redundant and can be removed. This cleans up a warning detected by cppcheck:

    fs/afs/dir.c:975: (warning) Redundant assignment of 'dvnode' to itself.

Fixes: d2ddc776 ("afs: Overhaul volume and server record caching and fileserver rotation")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by Gustavo A. R. Silva

Due to recent changes, this piece of code is no longer needed.

Addresses-Coverity-ID: 1462033
Link: https://lkml.kernel.org/r/4923.1510957307@warthog.procyon.org.uk
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

afs_mkdir(), afs_create(), afs_link() and afs_symlink() all need to drop the target dentry if a signal causes the operation to be killed immediately before we try to contact the server.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Fix some of the dentry handling in the AFS directory ops:

 (1) Do d_drop() on the new_dentry before assigning a new inode to it in afs_vnode_new_inode(). It's fine to do this before calling afs_iget() because the operation has taken place on the server.

 (2) Replace d_instantiate()/d_rehash() with d_add().

 (3) Don't d_drop() the new_dentry in afs_rename() on error.

Also fix afs_link() and afs_rename() to call key_put() on all error paths where the key is taken.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Make afs_write_begin() wait for a page that's marked PG_writeback because:

 (1) We need to avoid interference with the data being stored so that the data on the server ends up in a defined state.

 (2) page->private is used to track the window of dirty data within a page, but it's also used by the storage code to track what's being written, being cleared by the completion notification. Ownership can't be relinquished by the storage code until completion because, if a store fails, the data must be remarked dirty.

Tracing shows something like the following (edited):

    x86_64-linux-gn-15940 [1] afs_page_dirty: vn=ffff8800bef33800 9c75 begin 0-125
       kworker/u8:3-114   [2] afs_page_dirty: vn=ffff8800bef33800 9c75 store+ 0-125
    x86_64-linux-gn-15940 [1] afs_page_dirty: vn=ffff8800bef33800 9c75 begin 0-2052
       kworker/u8:3-114   [2] afs_page_dirty: vn=ffff8800bef33800 9c75 clear 0-2052
       kworker/u8:3-114   [2] afs_page_dirty: vn=ffff8800bef33800 9c75 store 0-0
       kworker/u8:3-114   [2] afs_page_dirty: vn=ffff8800bef33800 9c75 WARN 0-0

The clear (completion) corresponding to the store+ (store continuation from a previous page) happens between the second begin (afs_write_begin) and the store corresponding to that. This results in the second store not seeing any data to write back, leading to the following warning:

    WARNING: CPU: 2 PID: 114 at ../fs/afs/write.c:403 afs_write_back_from_locked_page+0x19d/0x76c [kafs]
    Modules linked in: kafs(E)
    CPU: 2 PID: 114 Comm: kworker/u8:3 Tainted: G E 4.14.0-fscache+ #242
    Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014
    Workqueue: writeback wb_workfn (flush-afs-2)
    task: ffff8800cad72600 task.stack: ffff8800cad44000
    RIP: 0010:afs_write_back_from_locked_page+0x19d/0x76c [kafs]
    RSP: 0018:ffff8800cad47aa0 EFLAGS: 00010246
    RAX: 0000000000000001 RBX: ffff8800bef33a20 RCX: 0000000000000000
    RDX: 000000000000000f RSI: ffffffff81c5d0e0 RDI: ffff8800cad72e78
    RBP: ffff8800d31ea1e8 R08: ffff8800c1358000 R09: ffff8800ca00e400
    R10: ffff8800cad47a38 R11: ffff8800c5d9e400 R12: 0000000000000000
    R13: ffffea0002d9df00 R14: ffffffffa0023c1c R15: 0000000000007fdf
    FS:  0000000000000000(0000) GS:ffff8800ca700000(0000) knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 00007f85ac6c4000 CR3: 0000000001c10001 CR4: 00000000001606e0
    Call Trace:
     ? clear_page_dirty_for_io+0x23a/0x267
     afs_writepages_region+0x1be/0x286 [kafs]
     afs_writepages+0x60/0x127 [kafs]
     do_writepages+0x36/0x70
     __writeback_single_inode+0x12f/0x635
     writeback_sb_inodes+0x2cc/0x452
     __writeback_inodes_wb+0x68/0x9f
     wb_writeback+0x208/0x470
     ? wb_workfn+0x22b/0x565
     wb_workfn+0x22b/0x565
     ? worker_thread+0x230/0x2ac
     process_one_work+0x2cc/0x517
     ? worker_thread+0x230/0x2ac
     worker_thread+0x1d4/0x2ac
     ? rescuer_thread+0x29b/0x29b
     kthread+0x15d/0x165
     ? kthread_create_on_node+0x3f/0x3f
     ? call_usermodehelper_exec_async+0x118/0x11f
     ret_from_fork+0x24/0x30

Signed-off-by: David Howells <dhowells@redhat.com>
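The fix boils down to waiting for writeback of the page to finish before afs_write_begin() starts tracking a new dirty window in page->private; a sketch using the generic page-flag helper:

    /* Wait for any in-flight store of this page to complete before reusing
     * page->private to track a new window of dirty data. */
    wait_on_page_writeback(page);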
-
- 17 November 2017, 1 commit
-
-
Committed by David Howells

Fix the AFS file locking, which was broken when the use of the big kernel lock (which could be slept with) was replaced by a spinlock (which couldn't). The problem is that the AFS code was doing stuff inside the critical section that might call schedule(), so this is a broken transformation.

Fix this by the following means:

 (1) Use a state machine with a proper state that can only be changed under the spinlock rather than using a collection of bit flags.

 (2) Cache the key used for the lock and the lock type in the afs_vnode struct so that the manager work function doesn't have to refer to a file_lock struct that's been dequeued. This makes signal handling safer.

 (4) Move the unlock from afs_do_unlk() to afs_fl_release_private(), which means that unlock is achieved in other circumstances too.

 (5) Unlock the file on the server before taking the next conflicting lock.

Also change:

 (1) Check the permits on a file before actually trying the lock.

 (2) fsync the file before effecting an explicit unlock operation. We don't fsync if the lock is erased otherwise, as we might not be in a context where we can actually do that.

Further fixes:

 (1) Fixed-fileserver address rotation is made to work. It's only used by the locking functions, so it couldn't be tested before.

Fixes: 72f98e72 ("locks: turn lock_flocks into a spinlock")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: jlayton@redhat.com
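Item (1) above amounts to a small state machine guarded by the vnode's lock spinlock; an illustrative sketch of such a state set (the exact names and states in kAFS may differ):

    /* Illustrative lock state machine; transitions happen only while
     * holding the vnode's lock spinlock, replacing the old bit flags. */
    enum afs_lock_state {
            AFS_VNODE_LOCK_NONE,            /* no lock held or wanted */
            AFS_VNODE_LOCK_WAITING_FOR_CB,  /* waiting for a callback break */
            AFS_VNODE_LOCK_SETTING,         /* asking the server for the lock */
            AFS_VNODE_LOCK_GRANTED,         /* server granted the lock */
            AFS_VNODE_LOCK_EXTENDING,       /* extending the server lock */
            AFS_VNODE_LOCK_NEED_UNLOCK,     /* must release the server lock */
            AFS_VNODE_LOCK_UNLOCKING,       /* releasing the server lock */
    };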
-
- 16 November 2017, 1 commit
-
-
Committed by Mel Gorman

Every pagevec_init user claims the pages being released are hot even in cases where it is unlikely the pages are hot. As no one cares about the hotness of pages being released to the allocator, just ditch the parameter.

No performance impact is expected as the overhead is marginal. The parameter is removed simply because it is a bit stupid to have a useless parameter copied everywhere.

Link: http://lkml.kernel.org/r/20171018075952.10627-6-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
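For callers (including those in fs/afs) the conversion is mechanical; a before/after sketch, assuming the caller previously passed 0 for the hint:

    struct pagevec pvec;

    /* Before: the second argument was the "cold" hotness hint. */
    pagevec_init(&pvec, 0);

    /* After: the hint is gone, because nothing consumed it. */
    pagevec_init(&pvec);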
-