- 22 Aug 2019, 1 commit
-
-
Committed by Marc Dionne

The afs_lookup trace event can cause the following:

    [ 216.576777] BUG: kernel NULL pointer dereference, address: 000000000000023b
    [ 216.576803] #PF: supervisor read access in kernel mode
    [ 216.576813] #PF: error_code(0x0000) - not-present page
    ...
    [ 216.576913] RIP: 0010:trace_event_raw_event_afs_lookup+0x9e/0x1c0 [kafs]

If the inode from afs_do_lookup() is an error other than ENOENT, or if it is ENOENT and afs_try_auto_mntpt() returns an error, the trace event will try to dereference the error pointer as a valid pointer.

Use IS_ERR_OR_NULL to only pass a valid pointer for the trace, or NULL.

Ideally the trace would include the error value, but for now just avoid the oops.

Fixes: 80548b03 ("afs: Add more tracepoints")
Signed-off-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
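
As a rough illustration of the guard described above (not the upstream diff; the helper name is invented for this sketch), the idea is to strip ERR_PTR() values before anything dereferences them for tracing:

    #include <linux/err.h>
    #include <linux/fs.h>

    /*
     * Hypothetical helper: afs_do_lookup()/afs_try_auto_mntpt() may hand back
     * an ERR_PTR() rather than a real inode, so never let the trace event
     * dereference it - substitute NULL instead.
     */
    static inline struct inode *trace_safe_inode(struct inode *inode)
    {
            return IS_ERR_OR_NULL(inode) ? NULL : inode;
    }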
-
- 30 Jul 2019, 3 commits
-
-
Committed by David Howells

In the in-kernel afs filesystem, the d_fsdata dentry field is used to hold the data version of the parent directory when it was created or when d_revalidate() last caused it to be updated. This is compared to the ->invalid_before field in the directory inode, rather than the actual data version number, thereby allowing changes due to local edits to be ignored. Only if the server data version gets bumped unexpectedly (eg. by a competing client), do we need to revalidate stuff.

However, the d_fsdata field should also be updated if an rpc op is performed that modifies that particular dentry. Such ops return the revised data version of the directory(ies) involved, so we should use that.

This is particularly problematic for rename, since a dentry from one directory may be moved directly into another directory (ie. mv a/x b/x). It would then be sporting the wrong data version - and if this is in the future, for the destination directory, revalidations would be missed, leading to foreign renames and hard-link deletion being missed.

Fix this by the following means:

 (1) Return the data version number from operations that read the directory contents - if they issue the read. This starts in afs_dir_iterate() and is used, ignored or passed back by its callers.

 (2) In afs_lookup*(), set the dentry version to the version returned by (1) before d_splice_alias() is called and the dentry published.

 (3) In afs_d_revalidate(), set the dentry version to that returned from (1) if an rpc call was issued. This means that if a parallel procedure, such as mkdir(), modifies the directory, we won't accidentally use the data version from that.

 (4) In afs_{mkdir,create,link,symlink}(), set the new dentry's version to the directory data version before d_instantiate() is called.

 (5) In afs_{rmdir,unlink}, update the target dentry's version to the directory data version as soon as we've updated the directory inode.

 (6) In afs_rename(), we need to unhash the old dentry before we start so that we don't get afs_d_revalidate() reverting the version change in cross-directory renames.

     We then need to set both the old and the new dentry versions to the data version of the new directory before we call d_move() as d_move() will rehash them.

Fixes: 1da177e4 ("Linux-2.6.12-rc2")
Signed-off-by: David Howells <dhowells@redhat.com>
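
A minimal sketch of point (2), assuming a dir_version value handed back by the directory read in point (1); the function below is illustrative, not the upstream code:

    #include <linux/dcache.h>
    #include <linux/fs.h>

    /*
     * Stamp the dentry with the directory data version obtained from the
     * read before d_splice_alias() publishes it, so that afs_d_revalidate()
     * later compares against the right baseline.
     */
    static struct dentry *lookup_and_publish(struct dentry *dentry,
                                             struct inode *inode,
                                             unsigned long dir_version)
    {
            dentry->d_fsdata = (void *)dir_version;
            return d_splice_alias(inode, dentry);
    }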
-
Committed by David Howells

In the in-kernel afs filesystem, d_fsdata is set with the data version of the parent directory. afs_d_revalidate() will update this to the current directory version, but it shouldn't do this if the value it read from d_fsdata is already the same, as no lock is held and cmpxchg() is not used.

Fix the code to only change the value if it is different from the current directory version.

Fixes: 260a9803 ("[AFS]: Add "directory write" support.")
Signed-off-by: David Howells <dhowells@redhat.com>
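
A hedged sketch of that write-avoidance pattern (the helper and its arguments are assumptions, not the fs/afs code):

    #include <linux/dcache.h>

    /*
     * Skip the store entirely when the value is unchanged - no lock is held
     * and no cmpxchg() is used, so an unconditional write could clobber a
     * concurrent, more up-to-date update.
     */
    static void maybe_update_dentry_version(struct dentry *dentry,
                                            unsigned long dir_version)
    {
            void *dvers = (void *)dir_version;

            if (dentry->d_fsdata != dvers)
                    dentry->d_fsdata = dvers;
    }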
-
Committed by David Howells

When afs_rename() calculates the expected data version of the target directory in a cross-directory rename, it doesn't increment it as it should, so it always thinks that the target inode is unexpectedly modified on the server.

Fixes: a58823ac ("afs: Fix application of status and callback to be under same lock")
Signed-off-by: David Howells <dhowells@redhat.com>
-
- 21 Jun 2019, 3 commits
-
-
Committed by Zhengyuan Liu

As Gustavo said in other patches doing the same replacement, we can now use the new struct_size() helper to avoid leaving these allocations open-coded and prone to type mistakes.

Signed-off-by: Zhengyuan Liu <liuzhengyuan@kylinos.cn>
Signed-off-by: David Howells <dhowells@redhat.com>
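
For reference, a sketch of the struct_size() idiom the commit switches to; the struct and field names are illustrative rather than the actual fs/afs types:

    #include <linux/mm_types.h>
    #include <linux/overflow.h>
    #include <linux/slab.h>

    struct example_read {
            unsigned int nr_pages;
            struct page *pages[];   /* flexible array member */
    };

    static struct example_read *example_alloc_read(unsigned int nr)
    {
            struct example_read *req;

            /*
             * Replaces the open-coded, overflow-prone form:
             *   kzalloc(sizeof(*req) + sizeof(struct page *) * nr, GFP_KERNEL)
             */
            req = kzalloc(struct_size(req, pages, nr), GFP_KERNEL);
            if (req)
                    req->nr_pages = nr;
            return req;
    }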
-
Committed by David Howells

Add a couple of tracepoints to track callback management:

 (1) afs_cb_miss - Logs when we were unable to apply a callback, either due to the inode being discarded or due to a competing thread applying a callback first.

 (2) afs_cb_break - Logs when we attempted to clear the noted callback promise, either due to the server explicitly breaking the callback, the callback promise lapsing or a local event obsoleting it.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Don't check that dentry->d_inode is valid in afs_unlink(). We should be able to take that as given. This caused Smatch to issue the following warning:

    fs/afs/dir.c:1392 afs_unlink() error: we previously assumed 'vnode' could be null (see line 1375)

Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David Howells <dhowells@redhat.com>
-
- 31 May 2019, 1 commit
-
-
Committed by Thomas Gleixner

Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version

extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 3029 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 17 May 2019, 4 commits
-
-
Committed by David Howells

Fix afs_do_lookup() such that when it does an inline bulk status fetch op, it will update inodes that are already extant (something that afs_iget() doesn't do) and cache permits for each inode created (thereby avoiding a follow-up FS.FetchStatus call to determine this).

Extant inodes need looking up in advance so that their cb_break counters before and after the operation can be compared. To this end, the inode pointers are cached so that they don't need looking up again after the op.

Fixes: 5cf9dd55 ("afs: Prospectively look up extra files when doing a single lookup")
Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Pass the server and volume break counts from before the status fetch operation that queried the attributes of a file into afs_iget5_set() so that the new vnode's break counters can be initialised appropriately.

This allows detection of a volume or server break that happened whilst we were fetching the status or setting up the vnode.

Fixes: c435ee34 ("afs: Overhaul the callback handling")
Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Make use of the status update for the target file that the YFS.RemoveFile2 RPC op returns to correctly update the vnode as to whether the file was actually deleted or just had nlink reduced.

Fixes: 30062bd1 ("afs: Implement YFS support in the fs client")
Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Use RCU-based freeing for afs_cb_interest struct objects and use RCU on vnode->cb_interest. Use that change to allow afs_check_validity() to use read_seqbegin_or_lock() instead of read_seqlock_excl().

This also requires the caller of afs_check_validity() to hold the RCU read lock across the call.

Signed-off-by: David Howells <dhowells@redhat.com>
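
The read_seqbegin_or_lock() pattern referred to here looks roughly like the following; the structure and validity test are stand-ins, not the actual afs_check_validity():

    #include <linux/jiffies.h>
    #include <linux/seqlock.h>

    struct example_vnode {
            seqlock_t cb_lock;
            unsigned long cb_expires_at;    /* in jiffies, for this sketch */
    };

    /* Stand-in for the real validity test done under the lock. */
    static bool __example_still_valid(const struct example_vnode *vnode)
    {
            return time_before(jiffies, vnode->cb_expires_at);
    }

    /*
     * The first pass is lockless (seq starts even); if a writer raced with
     * us, need_seqretry() sends us round again with the seqlock held
     * exclusively.  The caller holds the RCU read lock, as the commit
     * requires.
     */
    static bool example_check_validity(struct example_vnode *vnode)
    {
            bool valid;
            int seq = 0;

            do {
                    read_seqbegin_or_lock(&vnode->cb_lock, &seq);
                    valid = __example_still_valid(vnode);
            } while (need_seqretry(&vnode->cb_lock, seq));
            done_seqretry(&vnode->cb_lock, seq);

            return valid;
    }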
-
- 16 May 2019, 3 commits
-
-
Committed by David Howells

When applying the status and callback in the response of an operation, apply them in the same critical section so that there's no race between checking the callback state and checking status-dependent state (such as the data version).

Fix this by:

 (1) Allocating a joint {status,callback} record (afs_status_cb) before calling the RPC function for each vnode for which the RPC reply contains a status or a status plus a callback. A flag is set in the record to indicate if a callback was actually received.

 (2) These records are passed into the RPC functions to be filled in. The afs_decode_status() and yfs_decode_status() functions are removed and the cb_lock is no longer taken.

 (3) xdr_decode_AFSFetchStatus() and xdr_decode_YFSFetchStatus() no longer update the vnode.

 (4) xdr_decode_AFSCallBack() and xdr_decode_YFSCallBack() no longer update the vnode.

 (5) vnodes, expected data-version numbers and callback break counters (cb_break) no longer need to be passed to the reply delivery functions.

     Note that, for the moment, the file locking functions still need access to both the call and the vnode at the same time.

 (6) afs_vnode_commit_status() is now given the cb_break value and the expected data_version, and the task of applying the status and the callback to the vnode is now done here.

     This is done under a single taking of vnode->cb_lock.

 (7) afs_pages_written_back() is now called by afs_store_data() rather than by the reply delivery function.

     afs_pages_written_back() has been moved to before the call point and is now given the first and last page numbers rather than a pointer to the call.

 (8) The indicator from YFS.RemoveFile2 as to whether the target file actually got removed (status.abort_code == VNOVNODE) rather than merely dropping a link is now checked in afs_unlink rather than in xdr_decode_YFSFetchStatus().

Supplementary fixes:

 (*) afs_cache_permit() now gets the caller_access mask from the afs_status_cb object rather than picking it out of the vnode's status record. afs_fetch_status() returns caller_access through its argument list for this purpose also.

 (*) afs_inode_init_from_status() now uses a write lock on cb_lock rather than a read lock and now sets the callback inside the same critical section.

Fixes: c435ee34 ("afs: Overhaul the callback handling")
Signed-off-by: David Howells <dhowells@redhat.com>
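
A compressed sketch of the idea in point (6), with invented types standing in for afs_status_cb and afs_vnode:

    #include <linux/seqlock.h>
    #include <linux/time64.h>
    #include <linux/types.h>

    struct ex_status { u64 data_version; };
    struct ex_callback { time64_t expires_at; };

    struct ex_status_cb {
            struct ex_status status;
            struct ex_callback callback;
            bool have_cb;           /* callback actually received */
    };

    struct ex_vnode {
            seqlock_t cb_lock;
            unsigned int cb_break;  /* bumped on callback break */
            struct ex_status status;
            time64_t cb_expires_at;
    };

    /*
     * Apply the returned status and (optionally) the callback promise inside
     * one critical section, and only if no callback break raced with the RPC
     * (the cb_break snapshot taken before the call still matches).
     */
    static void ex_vnode_commit_status(struct ex_vnode *vnode,
                                       const struct ex_status_cb *scb,
                                       unsigned int cb_break)
    {
            write_seqlock(&vnode->cb_lock);
            if (cb_break == vnode->cb_break) {
                    vnode->status = scb->status;
                    if (scb->have_cb)
                            vnode->cb_expires_at = scb->callback.expires_at;
            }
            write_sequnlock(&vnode->cb_lock);
    }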
-
Committed by David Howells

afs_do_lookup() will do an order-1 allocation to allocate status records if there are more than 39 vnodes to stat.

Fix this by allocating an array of {status,callback} records for each vnode we want to examine, using vmalloc() if it's larger than a page.

This not only gets rid of the order-1 allocation, but makes it easier to grow beyond 50 records for YFS servers. It also allows us to move to {status,callback} tuples for other calls too and makes it easier to lock across the application of the status and the callback to the vnode.

Fixes: 5cf9dd55 ("afs: Prospectively look up extra files when doing a single lookup")
Signed-off-by: David Howells <dhowells@redhat.com>
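
The "small arrays from kmalloc, bigger than a page from vmalloc" policy is what the kvmalloc helpers encapsulate; a sketch with a stand-in record type (the real one is afs_status_cb), not necessarily what the commit itself uses:

    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    struct ex_record {
            u64 data_version;       /* stand-in for the full status + callback */
            bool have_cb;
    };

    /*
     * One {status,callback} record per vnode to be examined.  kvcalloc()
     * uses kmalloc() for small arrays and falls back to vmalloc() for large
     * ones, so no high-order allocation is needed; free with kvfree().
     */
    static struct ex_record *ex_alloc_status_records(unsigned int nr_vnodes)
    {
            return kvcalloc(nr_vnodes, sizeof(struct ex_record), GFP_KERNEL);
    }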
-
Committed by David Howells

Make certain RPC operations non-interruptible, including:

 (*) Set attributes
 (*) Store data

     We don't want to get interrupted during a flush on close, flush on unlock, writeback or an inode update, leaving us in a state where we still need to do the writeback or update.

 (*) Extend lock
 (*) Release lock

     We don't want to get lock extension interrupted as the file locks on the server are time-limited. Interruption during lock release is less of an issue since the lock is time-limited, but it's better to complete the release to avoid a several-minute wait to recover it.

     *Setting* the lock isn't a problem if it's interrupted since we can just return to the user and tell them they were interrupted - at which point they can elect to retry.

 (*) Silly unlink

     We want to remove silly unlink files if we can, rather than leaving them for the salvager to clear up.

Note that whilst these calls are no longer interruptible, they do have timeouts on them, so if the server stops responding the call will fail with something like ETIME or ECONNRESET.

Without this, the following:

    kAFS: Unexpected error from FS.StoreData -512

appears in dmesg when a pending store data gets interrupted, and some processes may just hang.

Additionally, make the code that checks/updates the server record ignore failure due to interruption if the main call is uninterruptible and if the server has an address list. The next op will check it again since the expiration time on the old list has passed.

Fixes: d2ddc776 ("afs: Overhaul volume and server record caching and fileserver rotation")
Reported-by: Jonathan Billings <jsbillings@jsbillings.org>
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
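
A rough sketch of what "non-interruptible" means at the wait site (generic waitqueue code, not the actual afs/rxrpc wait loop; note that -512 is -ERESTARTSYS):

    #include <linux/errno.h>
    #include <linux/sched.h>
    #include <linux/sched/signal.h>
    #include <linux/wait.h>

    struct ex_call {
            wait_queue_head_t waitq;
            bool interruptible;
            bool done;
            int error;
    };

    /*
     * Non-interruptible calls sleep in TASK_UNINTERRUPTIBLE so a signal
     * can't abandon, say, a store-on-close half way through; interruptible
     * calls can still bail out with -ERESTARTSYS.
     */
    static int ex_wait_for_call(struct ex_call *call)
    {
            DEFINE_WAIT(wait);

            for (;;) {
                    prepare_to_wait(&call->waitq, &wait,
                                    call->interruptible ? TASK_INTERRUPTIBLE
                                                        : TASK_UNINTERRUPTIBLE);
                    if (call->done)
                            break;
                    if (call->interruptible && signal_pending(current)) {
                            call->error = -ERESTARTSYS;
                            break;
                    }
                    schedule();
            }
            finish_wait(&call->waitq, &wait);
            return call->error;
    }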
-
- 07 May 2019, 1 commit
-
-
Committed by David Howells

Log more information when "kAFS: AFS vnode with undefined type\n" is displayed due to a vnode record being retrieved from the server that appears to have a duff file type (usually 0). This prints more information to try and help pin down the problem.

Signed-off-by: David Howells <dhowells@redhat.com>
-
- 25 Apr 2019, 4 commits
-
-
Committed by David Howells

Add four more tracepoints:

 (1) afs_make_fs_call1 - Split from afs_make_fs_call but takes a filename to log also.

 (2) afs_make_fs_call2 - Like the above but takes two filenames to log.

 (3) afs_lookup - Log the result of doing a successful lookup, including a negative result (fid 0:0).

 (4) afs_get_tree - Log the set up of a volume for mounting.

It also extends the name buffer on the afs_edit_dir tracepoint to 24 chars and puts quotes around the filename in the text representation.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Implement sillyrename for AFS unlink and rename, using the NFS variant implementation as a basis.

Note that the asynchronous file locking extender/releaser has to be notified with a state change to stop it complaining if there's a race between that and the actual file deletion.

A tracepoint, afs_silly_rename, is also added to note the silly rename and the cleanup. The afs_edit_dir tracepoint is given some extra reason indicators and the afs_flock_ev tracepoint is given a silly-delete file lock cancellation indicator.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Add a tracepoint (afs_reload_dir) to indicate when a directory is being reloaded.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Improve the content of directory check failure reports from:

    kAFS: afs_dir_check_page(6d57): bad magic 1/2 is 0000

to dump more information about the individual blocks in a directory page.

Signed-off-by: David Howells <dhowells@redhat.com>
-
- 30 Nov 2018, 1 commit
-
-
Committed by David Howells

Use d_instantiate() rather than d_add() and don't d_drop() in afs_vnode_new_inode(). The dentry shouldn't be removed as it's not changing its name.

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
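
Schematically (the function name is invented; only the d_drop() + d_add() to d_instantiate() switch reflects the commit):

    #include <linux/dcache.h>
    #include <linux/fs.h>

    /*
     * After a successful create, attach the new inode to the still-named,
     * still-hashed dentry.  Previously this was d_drop() + d_add(), which
     * needlessly unhashed a dentry whose name isn't changing.
     */
    static void example_new_inode(struct dentry *new_dentry, struct inode *inode)
    {
            d_instantiate(new_dentry, inode);
    }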
-
- 24 Oct 2018, 5 commits
-
-
Committed by David Howells

Implement support for talking to YFS-variant fileservers in the cache manager and the filesystem client. These implement upgraded services on the same port as their AFS services.

YFS fileservers provide expanded capabilities over AFS.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Get the target vnode in afs_rmdir() and validate it before we attempt the deletion. The vnode pointer will be passed through to the delivery function in a later patch so that the delivery function can mark it deleted.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Call the function to commit the status on a new file, dir or symlink so that the access rights for the caller's key are cached for that object.

Without this, the next access to the file will cause a FetchStatus operation to be emitted to retrieve the access rights.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Increase the sizes of the volume ID to 64 bits and the vnode ID (inode number equivalent) to 96 bits to allow the support of YFS.

This requires the iget comparator to check the vnode->fid rather than i_ino and i_generation, as i_ino is not sufficiently capacious. It also requires this data to be placed into the vnode cache key for fscache.

For the moment, just discard the top 32 bits of the vnode ID when returning it through stat.

Signed-off-by: David Howells <dhowells@redhat.com>
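
A sketch of what "the iget comparator checks the fid" means in practice; the types below are illustrative, not the real struct afs_fid or struct afs_vnode:

    #include <linux/fs.h>
    #include <linux/kernel.h>
    #include <linux/types.h>

    struct ex_fid {
            u64 vid;        /* volume ID, now 64 bits */
            u64 vnode;      /* low 64 bits of the vnode ID */
            u32 vnode_hi;   /* top 32 bits of the vnode ID */
            u32 unique;     /* uniquifier */
    };

    struct ex_inode {
            struct ex_fid fid;
            struct inode vfs_inode;
    };

    static inline struct ex_inode *EX_I(struct inode *inode)
    {
            return container_of(inode, struct ex_inode, vfs_inode);
    }

    /*
     * iget5_locked() test callback: an unsigned long i_ino can't hold a
     * 96-bit vnode ID, so match on the whole fid instead.
     */
    static int ex_iget5_test(struct inode *inode, void *opaque)
    {
            const struct ex_fid *fid = opaque;
            const struct ex_fid *cur = &EX_I(inode)->fid;

            return cur->vid == fid->vid && cur->vnode == fid->vnode &&
                   cur->vnode_hi == fid->vnode_hi && cur->unique == fid->unique;
    }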
-
Committed by David Howells

Add a couple of tracepoints to log the production of I/O errors within the AFS filesystem.

Signed-off-by: David Howells <dhowells@redhat.com>
-
- 06 Aug 2018, 2 commits
-
-
Committed by Al Viro

Simpler logic in the callers that way.

Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro

->lookup() methods can (and should) use d_splice_alias() instead of d_add(). Even if they are not going to be hit by open_by_handle(), code does get copied around; besides, d_splice_alias() has better calling conventions for use in ->lookup(), so the code gets simpler.

Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
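
The calling-convention point can be seen in a skeletal ->lookup(); ex_do_lookup() is an invented stand-in for the filesystem's actual lookup work:

    #include <linux/dcache.h>
    #include <linux/fs.h>

    /*
     * Stand-in: a real filesystem would resolve the name to an inode here,
     * returning NULL for a negative entry or an ERR_PTR() on failure.
     */
    static struct inode *ex_do_lookup(struct inode *dir, struct dentry *dentry)
    {
            return NULL;
    }

    /*
     * d_splice_alias() copes with NULL and ERR_PTR() inodes, reuses an
     * existing alias if there is one, and returns exactly what ->lookup()
     * should return - no separate d_add() plus error juggling needed.
     */
    static struct dentry *ex_lookup(struct inode *dir, struct dentry *dentry,
                                    unsigned int flags)
    {
            return d_splice_alias(ex_do_lookup(dir, dentry), dentry);
    }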
-
- 14 May 2018, 2 commits
-
-
Committed by David Howells

It's possible for an AFS file server to issue a whole-volume notification that callbacks on all the vnodes in the volume have been broken. This is done for R/O and backup volumes (which don't have per-file callbacks) and for things like a volume being taken offline.

Fix callback handling to detect whole-volume notifications, to track them across operations and to check them during inode validation.

Fixes: c435ee34 ("afs: Overhaul the callback handling")
Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

The afs directory loading code (primarily afs_read_dir()) locks all the pages that hold a directory's content blob to defend against getdents/getdents races and getdents/lookup races where the competitors issue conflicting reads on the same data. As the reads will complete consecutively, they may retrieve different versions of the data and one may overwrite the data that the other is busy parsing.

Fix this by not locking the pages at all, but rather by turning the validation lock into an rwsem and getting an exclusive lock on it whilst reading the data or validating the attributes, and a shared lock whilst parsing the data. Sharing the attribute validation lock should be fine as the data fetch will retrieve the attributes also.

The individual page locks aren't needed at all as the only place they're being used is to serialise data loading.

Without this patch, the:

    if (!test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags)) {
            ...
    }

part of afs_read_dir() may be skipped, leaving the pages unlocked when we hit the success: clause - in which case we try to unlock the not-locked pages, leading to the following oops:

    page:ffffe38b405b4300 count:3 mapcount:0 mapping:ffff98156c83a978 index:0x0
    flags: 0xfffe000001004(referenced|private)
    raw: 000fffe000001004 ffff98156c83a978 0000000000000000 00000003ffffffff
    raw: dead000000000100 dead000000000200 0000000000000001 ffff98156b27c000
    page dumped because: VM_BUG_ON_PAGE(!PageLocked(page))
    page->mem_cgroup:ffff98156b27c000
    ------------[ cut here ]------------
    kernel BUG at mm/filemap.c:1205!
    ...
    RIP: 0010:unlock_page+0x43/0x50
    ...
    Call Trace:
     afs_dir_iterate+0x789/0x8f0 [kafs]
     ? _cond_resched+0x15/0x30
     ? kmem_cache_alloc_trace+0x166/0x1d0
     ? afs_do_lookup+0x69/0x490 [kafs]
     ? afs_do_lookup+0x101/0x490 [kafs]
     ? key_default_cmp+0x20/0x20
     ? request_key+0x3c/0x80
     ? afs_lookup+0xf1/0x340 [kafs]
     ? __lookup_slow+0x97/0x150
     ? lookup_slow+0x35/0x50
     ? walk_component+0x1bf/0x490
     ? path_lookupat.isra.52+0x75/0x200
     ? filename_lookup.part.66+0xa0/0x170
     ? afs_end_vnode_operation+0x41/0x60 [kafs]
     ? __check_object_size+0x9c/0x171
     ? strncpy_from_user+0x4a/0x170
     ? vfs_statx+0x73/0xe0
     ? __do_sys_newlstat+0x39/0x70
     ? __x64_sys_getdents+0xc9/0x140
     ? __x64_sys_getdents+0x140/0x140
     ? do_syscall_64+0x5b/0x160
     ? entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: f3ddee8d ("afs: Fix directory handling")
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
-
- 10 Apr 2018, 10 commits
-
-
Committed by David Howells

Processes like ld that do lots of small writes that aren't necessarily contiguous result in a lot of small StoreData operations to the server, the idea being that if someone else changes the data on the server, we only write our changes over that and not the space between. Further, we don't want to write back empty space if we can avoid it to make it easier for the server to do sparse files.

However, making lots of tiny RPC ops is a lot less efficient for the server than one big one because each op requires allocation of resources and the taking of locks, so we want to compromise a bit.

Reduce the load by the following:

 (1) If a file is just created locally or has just been truncated with O_TRUNC locally, allow subsequent writes to the file to be merged with intervening space if that space doesn't cross an entire intervening page.

 (2) Don't flush the file on ->flush() but rather on ->release() if the file was open for writing.

Just linking vmlinux.o, without this patch, looking in /proc/fs/afs/stats:

    file-wr : n=441 nb=513581204

and after the patch:

    file-wr : n=62 nb=513668555

there were 379 fewer StoreData RPC operations at the expense of an extra 87K being written.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Locally edit the contents of an AFS directory upon a successful inode operation that modifies that directory (such as mkdir, create and unlink) so that we can avoid the current practice of re-downloading the directory after each change.

This is viable provided that the directory version number we get back from the modifying RPC op is exactly incremented by 1 from what we had previously. The data in the directory contents is in a defined format that we have to parse locally to perform lookups and readdir, so modifying isn't a problem.

If the edit fails, we just clear the VALID flag on the directory and it will be reloaded next time it is needed.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Adjust the AFS directory XDR structures in a number of superficial ways:

 (1) Rename them to all begin afs_xdr_.

 (2) Use u8 instead of uint8_t.

 (3) Mark the structures as __packed so they don't get rearranged by the compiler.

 (4) Rename the hdr member of afs_xdr_dir_block to meta.

 (5) Rename the pagehdr member of afs_xdr_dir_block to hdr.

Signed-off-by: David Howells <dhowells@redhat.com>
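
To illustrate points (2) and (3) in general terms (the layout below is made up and is not the actual afs_xdr_dir_block/dirent definition):

    #include <linux/compiler.h>
    #include <linux/types.h>

    /*
     * Wire-format structures use fixed-width kernel types (u8, __be16,
     * __be32, ...) and are marked __packed so the compiler cannot insert
     * padding that would break the expected on-the-wire layout.
     */
    struct ex_xdr_dirent {
            u8      valid;
            u8      unused;
            __be16  hash_next;
            __be32  vnode;
            __be32  unique;
            u8      name[16];
    } __packed;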
-
Committed by David Howells

Split the directory content definitions into a header file so that they can be used by multiple .c files.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

AFS directories are structured blobs that are downloaded just like files and then parsed by the lookup and readdir code and, as such, are currently handled in the pagecache like any other file, with the entire directory content being thrown away each time the directory changes.

However, since the blob is a known structure and since the data version counter on a directory increases by exactly one for each change committed to that directory, we can actually edit the directory locally rather than fetching it from the server after each locally-induced change.

What we can't do, though, is mix data from the server and data from the client since the server is technically at liberty to rearrange or compress a directory if it sees fit, provided it updates the data version number when it does so and breaks the callback (ie. sends a notification).

Further, lookup with lookup-ahead, readdir and, when it arrives, local editing are likely to want to scan the whole of a directory.

So directory handling needs to be improved to maintain the coherency of the directory blob prior to permitting local directory editing. To this end:

 (1) If any directory page gets discarded, invalidate and reread the entire directory.

 (2) If readpage notes, when it fetches a single page, that the version number has changed, the entire directory is flagged for invalidation.

 (3) Read as much of the directory in one go as we can.

Note that this removes local caching of directories in fscache for the moment as we can't pass the pages to fscache_read_or_alloc_pages() since page->lru is in use by the LRU.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Split the AFS dynamic root stuff out of the main directory handling file and into its own file as they share little in common. The dynamic root code also gets its own dentry and inode ops tables.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Each afs dentry is tagged with the version that the parent directory was at the last time it was validated and, currently, if this differs, the directory is scanned and the dentry is refreshed.

However, this leads to an excessive amount of revalidation on directories that get modified on the client without conflict with another client. We know there's no conflict because the parent directory's data version number got incremented by exactly 1 on any create, mkdir, unlink, etc., therefore we can trust the current state of the unaffected dentries when we perform a local directory modification.

Optimise by keeping track of the last version of the parent directory that was changed outside of the client in the parent directory's vnode and using that to validate the dentries rather than the current version.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Introduce a proc file that displays a bunch of statistics for the AFS filesystem in the current network namespace.

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Implement @cell substitution handling such that if @cell is seen as a name in a dynamic root mount, then the name of the root cell for that network namespace will be substituted for @cell during lookup.

The substitution of @cell for the current net namespace is set by writing the cell name to /proc/fs/afs/rootcell. The value can be obtained by reading the file. For example:

    # mount -t afs none /kafs -o dyn
    # echo grand.central.org >/proc/fs/afs/rootcell
    # ls /kafs/@cell
    archive/  cvs/  doc/  local/  project/  service/  software/  user/  www/
    # cat /proc/fs/afs/rootcell
    grand.central.org

Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells

Implement the AFS feature by which @sys at the end of a pathname component may be substituted for one of a list of values, typically naming the operating system. Up to 16 alternatives may be specified and these are tried in turn until one works. Each network namespace has[*] a separate independent list.

Upon creation of a new network namespace, the list of values is initialised[*] to a single OpenAFS-compatible string representing arch type plus "_linux26". For example, on x86_64, the sysname is "amd64_linux26".

[*] Or will, once network namespace support is finalised in kAFS.

The list may be set by:

    # for i in foo bar linux-x86_64; do echo $i; done >/proc/fs/afs/sysname

for which separate writes to the same fd are amalgamated and applied on close. The LF character may be used as a separator to specify multiple items in the same write() call.

The list may be cleared by:

    # echo >/proc/fs/afs/sysname

and read by:

    # cat /proc/fs/afs/sysname
    foo
    bar
    linux-x86_64

Signed-off-by: David Howells <dhowells@redhat.com>
-