- 13 Aug 2008, 1 commit
-
-
By David Chinner

A lot of code has been converted away from semaphores, but there are still comments that reference semaphore behaviour. The log code is the worst offender. Update the comments to reflect what the code really does now.

SGI-PV: 981498
SGI-Modid: xfs-linux-melb:xfs-kern:31814a
Signed-off-by: David Chinner <david@fromorbit.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
- 28 Jul 2008, 3 commits
-
-
By Matthew Wilcox

The l_flushsema doesn't exactly have completion semantics, nor mutex semantics. It's used as a list of tasks which are waiting to be notified that a flush has completed. It was also being used in a way that was potentially racy, depending on the semaphore implementation. By using an sv_t instead of a semaphore we avoid the need for a separate counter, since we know we just need to wake everything on the queue. Original waitqueue implementation from Matthew Wilcox. Cleanup and conversion to sv_t by Christoph Hellwig.

SGI-PV: 981507
SGI-Modid: xfs-linux-melb:xfs-kern:31059a
Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
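For readers unfamiliar with the pattern, a minimal sketch of waking all flush waiters with a wait queue (generic Linux waitqueue API rather than the XFS sv_t wrapper; the names log_flush_waitq, flush_done and the two functions are illustrative, not the actual fields):

    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(log_flush_waitq);   /* illustrative */
    static int flush_done;          /* set under the log state lock */

    /* Waiter: no counter to maintain, just sleep on the queue. */
    static void wait_for_flush(void)
    {
            wait_event(log_flush_waitq, flush_done);
    }

    /* Completion: wake everything that queued up, in one go. */
    static void flush_completed(void)
    {
            flush_done = 1;
            wake_up_all(&log_flush_waitq);
    }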
-
By Michael Nishimoto

We found this while experimenting with 2GiB xfs logs. The previous code never assumed that xfs logs would ever get so large.

SGI-PV: 981502
SGI-Modid: xfs-linux-melb:xfs-kern:31058a
Signed-off-by: Michael Nishimoto <miken@agami.com>
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
By Denys Vlasenko

The kmem_free() function takes (ptr, size) arguments but doesn't actually use the second one. This patch removes the size argument from all call sites.

SGI-PV: 981498
SGI-Modid: xfs-linux-melb:xfs-kern:31050a
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
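The conversion is mechanical; a sketch of what each call site change looks like (ptr stands in for whichever XFS allocation is being freed):

    /* Before: a size argument was passed but ignored by kmem_free(). */
    kmem_free(ptr, sizeof(*ptr));

    /* After: only the pointer is needed. */
    kmem_free(ptr);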
-
- 12 Jul 2008, 1 commit
-
-
By Dave Chinner

When we release the iclog, we do an atomic_dec_and_lock to determine if we are the last reference and need to trigger update of log headers and writeout. However, in xlog_state_get_iclog_space() we also need to check if we have the last reference count there. If we do, we release the log buffer, otherwise we decrement the reference count. But the compare and decrement in xlog_state_get_iclog_space() is not atomic, so both places can see a reference count of 2 and neither will release the iclog. That leads to a filesystem hang. Close the race by replacing the atomic_read() and atomic_dec() pair with atomic_add_unless() to ensure that they are executed atomically.

Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Tim Shimmin <tes@sgi.com>
Tested-by: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
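A sketch of the racy pattern and the atomic replacement described above (close to, but not verbatim, the actual diff):

    /* Racy: two tasks can both read 2, both take the else branch,
     * and the iclog is never released, hanging the filesystem. */
    if (atomic_read(&iclog->ic_refcnt) == 1)
            xlog_state_release_iclog(log, iclog);
    else
            atomic_dec(&iclog->ic_refcnt);

    /* Atomic: decrement unless the count is 1; if it was 1, we held
     * the last reference and must release the iclog ourselves. */
    if (!atomic_add_unless(&iclog->ic_refcnt, -1, 1))
            xlog_state_release_iclog(log, iclog);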
-
- 18 Apr 2008, 9 commits
-
-
By David Chinner

Unmounting the log can fail. Unlikely, but it can. Catch all the error conditions and make sure they are propagated upwards.

SGI-PV: 980084
SGI-Modid: xfs-linux-melb:xfs-kern:30833a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Niv Sardi <xaiki@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
By David Chinner

xfs_log_force() is declared to return an error, but we almost never check it. We don't need to check it in most cases; if there's a log I/O error then we'll be shutting down the filesystem anyway, and that means we'll catch the error somewhere else. However, on certain calls we should be returning an error - sync transactions, fsync, sync writes, etc. - so this isn't a pure black-and-white distinction. Hence make xfs_log_force() a void function that issues a warning to the syslog on error, and call _xfs_log_force() in all the places where we actually care about the error status returned.

SGI-PV: 980084
SGI-Modid: xfs-linux-melb:xfs-kern:30832a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Niv Sardi <xaiki@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
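The resulting split looks roughly like this (a sketch of the pattern only; the prototypes and the warning text are illustrative, not the exact kernel code):

    /* Callers that care about failure use the underscore variant. */
    int _xfs_log_force(struct xfs_mount *mp, xfs_lsn_t lsn,
                       uint flags, int *log_flushed);

    /* Everyone else gets a void wrapper that only warns on error. */
    void xfs_log_force(struct xfs_mount *mp, xfs_lsn_t lsn, uint flags)
    {
            int error = _xfs_log_force(mp, lsn, flags, NULL);
            if (error)
                    printk(KERN_WARNING
                           "xfs_log_force: error %d returned\n", error);
    }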
-
By Harvey Harrison

__FUNCTION__ is gcc-specific; use the standard __func__ instead.

SGI-PV: 976035
SGI-Modid: xfs-linux-melb:xfs-kern:30775a
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
By David Chinner

Recent changes to xlog_state_release_iclog() placed the grant_lock inside the icloglock. Forced unmount of the log takes these locks in the opposite order, but does not depend on the ordering for correct operation. Fix the inversion by changing the order in which the locks are taken in xfs_log_force_umount().

SGI-PV: 979661
SGI-Modid: xfs-linux-melb:xfs-kern:30773a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
By David Chinner

To reduce contention on the log on large CPU count machines, separate out the different parts of the xlog_t structure onto different cachelines. Move each lock onto its own cacheline along with all the members that are accessed or modified while that lock is held. Also, move the debugging code into its own debug-only section.

SGI-PV: 978729
SGI-Modid: xfs-linux-melb:xfs-kern:30772a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
By David Chinner

The ticket allocator is just a simple slab implementation internal to the log. It requires the icloglock to be held when manipulating it, and this contributes to contention on that lock. Just kill the entire allocator and use a memory zone instead. While there, allow us to gracefully fail allocation with ENOMEM.

SGI-PV: 978729
SGI-Modid: xfs-linux-melb:xfs-kern:30771a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
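A sketch of the replacement using the generic slab-cache API (XFS wraps this in its own kmem_zone helpers; the cache name, the ticket type and the code fragments are illustrative, not the actual patch):

    static struct kmem_cache *xlog_ticket_cache;    /* illustrative */

    /* One-time setup, e.g. during log initialisation. */
    xlog_ticket_cache = kmem_cache_create("xfs_log_ticket",
                                          sizeof(struct xlog_ticket),
                                          0, 0, NULL);

    /* Allocation no longer needs the icloglock and can fail cleanly. */
    struct xlog_ticket *tic = kmem_cache_zalloc(xlog_ticket_cache, GFP_NOFS);
    if (!tic)
            return ENOMEM;          /* caller handles ENOMEM gracefully */

    /* Free path. */
    kmem_cache_free(xlog_ticket_cache, tic);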
-
By David Chinner

Rather than use the icloglock for protecting the iclog completion callback chain, use a new per-iclog lock so that walking the callback chain doesn't require holding a global lock. This reduces contention on the icloglock during transaction commit and log I/O completion by reducing the number of times we need to hold the global icloglock during these operations.

SGI-PV: 978729
SGI-Modid: xfs-linux-melb:xfs-kern:30770a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
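In outline, the callback walk moves from the global lock to a per-iclog lock (a sketch; the field names follow the description above but are illustrative):

    /* Before: the global icloglock covered every iclog's callback list. */
    spin_lock(&log->l_icloglock);
    /* ... walk and run iclog->ic_callback entries ... */
    spin_unlock(&log->l_icloglock);

    /* After: each iclog serialises only its own callback chain. */
    spin_lock(&iclog->ic_callback_lock);
    /* ... walk and run iclog->ic_callback entries ... */
    spin_unlock(&iclog->ic_callback_lock);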
-
By David Chinner

Now that we update the log tail LSN less frequently on transaction completion, we pass the contention straight to the global log state lock (l_iclog_lock) during transaction completion. We currently have to take this lock to decrement the iclog reference count. There is a reference count on each iclog, so we need to take the global lock for all refcount changes. When large numbers of processes are all doing small transactions, the iclog reference counts will be quite high, and the state change that absolutely requires the l_iclog_lock is the exception rather than the norm. Change the reference counting on the iclogs to use atomic_inc/dec so that we can use atomic_dec_and_lock during transaction completion and avoid the need for grabbing the l_iclog_lock for every reference count decrement except the one that matters - the last.

SGI-PV: 975671
SGI-Modid: xfs-linux-melb:xfs-kern:30505a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
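The hot path then avoids the global lock for every reference drop except the final one; a sketch of the idiom (field names approximate):

    /* Returns true, with the lock held, only when the count hits zero. */
    if (atomic_dec_and_lock(&iclog->ic_refcnt, &log->l_icloglock)) {
            /* Last reference: do the state change that genuinely
             * needs the global lock, then release it. */
            spin_unlock(&log->l_icloglock);
    }
    /* All other decrements never touch the global lock at all. */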
-
By David Chinner

When hundreds of processors attempt to commit transactions at the same time, they can contend on the AIL lock when updating the tail LSN held in the in-core log structure. At the moment, the tail LSN is only needed when actually writing out an iclog, so it really does not need to be updated on every single transaction completion - only on those that result in switching iclogs and flushing them to disk. The result is that we reduce the number of times we need to grab the AIL lock and the log grant lock by up to two orders of magnitude on large processor count machines. The problem had previously been hidden by the AIL lock contention from walking the AIL list; that was recently solved, which uncovered this issue.

SGI-PV: 975671
SGI-Modid: xfs-linux-melb:xfs-kern:30504a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
- 10 Apr 2008, 1 commit
-
-
By Eric Sandeen

Remove macro-to-small-function indirection from xfs_sb.h, and remove some macros which are completely unused.

SGI-PV: 976035
SGI-Modid: xfs-linux-melb:xfs-kern:30528a
Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: Donald Douwsma <donaldd@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
- 14 Feb 2008, 1 commit
-
-
By Marcin Slusarz

Remove the beX_add functions and replace all uses with beX_add_cpu.

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: Mark Fasheh <mark.fasheh@oracle.com>
Reviewed-by: Dave Chinner <dgc@sgi.com>
Cc: Timothy Shimmin <tes@sgi.com>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
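A usage sketch of the new helper (the field shown is an arbitrary big-endian on-disk counter, used here only for illustration):

    /* Old XFS-local helper, removed by this patch. */
    be32_add(&agf->agf_freeblks, 1);

    /* Generic replacement: the addend is explicitly CPU-endian. */
    be32_add_cpu(&agf->agf_freeblks, 1);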
-
- 07 Feb 2008, 10 commits
-
-
By David Chinner

When many hundreds to thousands of threads all try to do simultaneous transactions and the log is in a tail-pushing situation (i.e. full), we can get multiple threads walking the AIL list and contending on the AIL lock. The AIL push is, in effect, a simple I/O dispatch algorithm complicated by the ordering constraints placed on it by the transaction subsystem. It really does not need multiple threads to push on it - even when only a single CPU is pushing the AIL, it can push the I/O out far faster than pretty much any disk subsystem can handle. So, to avoid contention problems stemming from multiple list walkers, move the list walk off into another thread and simply provide a "target" to push to. When a thread requires a push, it sets the target and wakes the push thread, then goes to sleep waiting for the required amount of space to become available in the log. This mechanism should also be a lot fairer under heavy load, as the waiters will queue in arrival order rather than in "who completed a push first" order. Also, by moving the pushing to a separate thread we can do overload detection and prevention more effectively, as we can keep context from loop iteration to loop iteration. That is, we can push only part of the list each loop and not have to loop back to the start of the list every time we run. This should also help by reducing the number of times we try to lock and/or push items that we cannot move. Note that this patch is not intended to solve the inefficiencies in the AIL structure and the associated issues with extremely large list contents. That needs to be addressed separately; parallel access would cause problems for any new structure as well, so I'm only aiming to isolate the structure from unbounded parallelism here.

SGI-PV: 972759
SGI-Modid: xfs-linux-melb:xfs-kern:30371a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
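The shape of the mechanism is a single kernel thread that sleeps until someone publishes a push target (a simplified sketch; the ail_pusher structure and function names are illustrative, not the actual xfsaild code):

    struct ail_pusher {                     /* illustrative */
            spinlock_t              lock;
            xfs_lsn_t               target;   /* push AIL items below this LSN */
            wait_queue_head_t       waitq;
            struct task_struct      *task;
    };

    /* Threads needing log space: publish a target, wake the pusher. */
    static void ail_push(struct ail_pusher *ap, xfs_lsn_t target)
    {
            spin_lock(&ap->lock);
            if (XFS_LSN_CMP(target, ap->target) > 0)
                    ap->target = target;
            spin_unlock(&ap->lock);
            wake_up(&ap->waitq);
    }

    /* The single pusher thread walks part of the AIL per iteration. */
    static int ail_push_thread(void *data)
    {
            struct ail_pusher *ap = data;

            while (!kthread_should_stop()) {
                    wait_event_interruptible(ap->waitq,
                            ap->target != 0 || kthread_should_stop());
                    /* Walk the AIL from where we last stopped, issue
                     * I/O for items below ap->target, and remember the
                     * resume point for the next iteration. */
            }
            return 0;
    }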
-
By Tim Shimmin

The BPCSHIFT-based macros btoc*, ctob*, offtoc* and ctooff are either not used or don't need to be used. The NDPP and NBBY macros don't need to be used either, and are instead replaced directly by PAGE_SIZE and PAGE_CACHE_SIZE where appropriate. Initial patch and motivation from Nicolas Kaiser.

SGI-PV: 971186
SGI-Modid: xfs-linux-melb:xfs-kern:30096a
Signed-off-by: Tim Shimmin <tes@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
By Niv Sardi

This assert is bogus. We can have a forced shutdown occur between the check for XLOG_FORCED_SHUTDOWN and the ASSERT. Also, the logging system shouldn't care about the state of XFS_FORCED_SHUTDOWN; it should only check XLOG_FORCED_SHUTDOWN. The logging system has its own forced shutdown flag, so for the case of a forced shutdown that's not due to a logging error, we can flush the log.

SGI-PV: 972985
SGI-Modid: xfs-linux-melb:xfs-kern:30029a
Signed-off-by: Niv Sardi <xaiki@sgi.com>
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
By David Chinner

These are mostly locking annotations, marking things static, adding casts where needed, and declaring stuff in header files.

SGI-PV: 971186
SGI-Modid: xfs-linux-melb:xfs-kern:30002a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
-
By Christoph Hellwig

Mostly a trivial conversion, with one exception: h_num_logops was kept in native endianness previously and only converted to big endian in xlog_sync, but we always keep it big endian now. With today's CPUs' fast byteswap instructions that's not an issue, and the new variant keeps the code clean and maintainable.

SGI-PV: 971186
SGI-Modid: xfs-linux-melb:xfs-kern:29821a
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
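Concretely, the on-disk field is declared and kept big-endian, with explicit conversions at the points of use (a sketch; the assignment shown is illustrative rather than a specific call site):

    __be32  h_num_logops;   /* declared big-endian in the header struct */

    /* Writes convert from CPU order at the point of use... */
    iclog->ic_header.h_num_logops = cpu_to_be32(record_cnt);

    /* ...and reads convert back, instead of one swap in xlog_sync(). */
    count = be32_to_cpu(iclog->ic_header.h_num_logops);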
-
By Christoph Hellwig

- The various assign-lsn macros are replaced by a single inline, xlog_assign_lsn, which is equivalent to ASSIGN_ANY_LSN_HOST except for a saner calling convention. ASSIGN_LSN_DISK is replaced by xlog_assign_lsn plus a manual byteswap, and ASSIGN_LSN by the same, except we pass the cycle and block arguments explicitly instead of a log parameter. The latter two variants only had two users and one user, respectively.
- GET_CYCLE is replaced by an xlog_get_cycle inline with exactly the same calling convention.
- GET_CLIENT_ID is replaced by xlog_get_client_id, which drops the unused arch argument. Instead of conditional definitions depending on host endianness we now do an unconditional swap and shift, which generates equal code.
- The unused XLOG_SET macro is removed.

SGI-PV: 971186
SGI-Modid: xfs-linux-melb:xfs-kern:29820a
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
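The replacement inlines are roughly as follows (a sketch reconstructed from the description above; an XFS LSN packs the cycle into the high 32 bits and the block number into the low 32 bits):

    /* xlog_assign_lsn: cycle/block pair -> 64-bit LSN. */
    static inline xfs_lsn_t xlog_assign_lsn(uint cycle, uint block)
    {
            return ((xfs_lsn_t)cycle << 32) | block;
    }

    /* xlog_get_client_id: unconditional swap and shift, no arch argument. */
    static inline uint xlog_get_client_id(__be32 i)
    {
            return be32_to_cpu(i) >> 24;
    }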
-
By Christoph Hellwig

- The various assign-lsn macros are replaced by a single inline, xlog_assign_lsn, which is equivalent to ASSIGN_ANY_LSN_HOST except for a saner calling convention. ASSIGN_LSN_DISK is replaced by xlog_assign_lsn plus a manual byteswap, and ASSIGN_LSN by the same, except we pass the cycle and block arguments explicitly instead of a log parameter. The latter two variants only had two users and one user, respectively.
- GET_CYCLE is replaced by an xlog_get_cycle inline with exactly the same calling convention.
- GET_CLIENT_ID is replaced by xlog_get_client_id, which drops the unused arch argument. Instead of conditional definitions depending on host endianness we now do an unconditional swap and shift, which generates equal code.
- The unused XLOG_SET macro is removed.

SGI-PV: 971186
SGI-Modid: xfs-linux-melb:xfs-kern:29819a
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
By Eric Sandeen

Remove the spinlock init abstraction macro in spin.h, remove the callers, and remove the file. Move the no-op spinlock_destroy to xfs_linux.h. Clean up spinlock locals in xfs_mount.c.

SGI-PV: 970382
SGI-Modid: xfs-linux-melb:xfs-kern:29751a
Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: Donald Douwsma <donaldd@sgi.com>
Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
By Eric Sandeen

Un-obfuscate GRANT_LOCK: remove the GRANT_LOCK->mutex_lock->spin_lock macros, call spin_lock directly, remove the extraneous cookie holdover from old xfs code, and change the lock type to spinlock_t.

SGI-PV: 970382
SGI-Modid: xfs-linux-melb:xfs-kern:29741a
Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: Donald Douwsma <donaldd@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
By Eric Sandeen

Un-obfuscate LOG_LOCK: remove the LOG_LOCK->mutex_lock->spin_lock macros, call spin_lock directly, remove the extraneous cookie holdover from old xfs code, and change the lock type to spinlock_t.

SGI-PV: 970382
SGI-Modid: xfs-linux-melb:xfs-kern:29740a
Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: Donald Douwsma <donaldd@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
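Before/after flavour of the un-obfuscation (a sketch):

    /* Before: macro indirection returning an opaque "cookie". */
    s = LOG_LOCK(log);
    /* ... */
    LOG_UNLOCK(log, s);

    /* After: a plain spinlock_t member, locked directly. */
    spin_lock(&log->l_icloglock);
    /* ... */
    spin_unlock(&log->l_icloglock);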
-
- 16 Oct 2007, 2 commits
-
-
By Christoph Hellwig

... or, in the case of XLOG_TIC_ADD_OPHDR, remove a useless macro entirely.

SGI-PV: 968563
SGI-Modid: xfs-linux-melb:xfs-kern:29511a
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
By Christoph Hellwig

All flags are added to xfs_mount's m_flag instead. Note that the 32-bit inode flag was duplicated in both of them, but was only cleared in the mount when it was not necessary due to the filesystem being small enough. Two flags are still required here - one to indicate the mount option setting, and one to indicate whether it applies or not.

SGI-PV: 969608
SGI-Modid: xfs-linux-melb:xfs-kern:29507a
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
- 15 Oct 2007, 2 commits
-
-
By Eric Sandeen

Remove the sizing of logbuf size & count based on physical memory; this was never a very good gauge, as it looks at global memory while deciding on sizing per-filesystem - no account is made of the total number of filesystems, for example. For now just take the largest "default" case, as was set for machines with >400MB: 8 x 32k buffers. This can always be tuned higher or lower with mount options if necessary. Removes one more user of xfs_physmem.

SGI-PV: 968563
SGI-Modid: xfs-linux-melb:xfs-kern:29323a
Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
By David Chinner

If the underlying block device suddenly stops supporting barriers, we need to handle the -EOPNOTSUPP error in a sane manner rather than shutting down the filesystem. If we get this error, clear the barrier flag, reissue the I/O, and tell the world that bad things are occurring.

SGI-PV: 964544
SGI-Modid: xfs-linux-melb:xfs-kern:28568a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Tim Shimmin <tes@sgi.com>
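The recovery path amounts to the following shape (a sketch of the behaviour described above; the buf_* helpers are hypothetical, and the flag name and warning text are illustrative):

    /* In the log I/O completion handler, when the device rejects
     * the barrier: warn, stop using barriers, and retry the write. */
    if (error == EOPNOTSUPP && buf_is_ordered(bp)) {    /* hypothetical helper */
            printk(KERN_WARNING
                   "XFS: log I/O barrier failed, disabling barriers\n");
            mp->m_flags &= ~XFS_MOUNT_BARRIER;  /* no more barrier requests */
            buf_clear_ordered(bp);              /* hypothetical helper */
            buf_resubmit(bp);                   /* reissue without the barrier */
    }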
-
- 05 Sep 2007, 1 commit
-
-
By Christoph Hellwig

Sparse now warns about comparing pointers to 0, so change all instances where that happens to use NULL instead.

SGI-PV: 968555
SGI-Modid: xfs-linux-melb:xfs-kern:29308a
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
- 14 Jul 2007, 4 commits
-
-
By David Chinner

When we have a couple of hundred transactions on the fly at once, they all typically modify the on-disk superblock in some way: create/unlink/mkdir/rmdir modify inode counts, allocation/freeing modify free block counts. When these counts are modified in a transaction, they must eventually lock the superblock buffer and apply the mods. The buffer then remains locked until the transaction is committed into the incore log buffer. The result of this is that with enough transactions on the fly, the incore superblock buffer becomes a bottleneck. The result of contention on the incore superblock buffer is that transaction rates fall - the more pressure that is put on the superblock buffer, the slower things go.

The key to removing the contention is to not require the superblock fields in question to be locked. We do that by not marking the superblock dirty in the transaction. IOWs, we modify the incore superblock but do not modify the cached superblock buffer. In short, we do not log superblock modifications to critical fields in the superblock on every transaction. In fact, we only do it just before we write the superblock to disk every sync period, or just before unmount.

This creates an interesting problem - if we don't log or write out the fields in every transaction, then how do the values get recovered after a crash? The answer is simple - we keep enough duplicate, logged information in other structures that we can reconstruct the correct count after log recovery has been performed. It is the AGF and AGI structures that contain the duplicate information; after recovery, we walk every AGI and AGF and sum their individual counters to get the correct value, and we do a transaction into the log to correct them. An optimisation of this is that if we have a clean unmount record, we know the value in the superblock is correct, so we can avoid the summation walk under normal conditions, and so mount/recovery times do not change under normal operation.

One wrinkle that was discovered during development was that the blocks used in the freespace btrees are never accounted for in the AGF counters. This was once a valid optimisation to make; when the filesystem is full, the free space btrees are empty and consume no space. Hence when it matters, the "accounting" is correct. But that means that when we do the AGF summations, we would not have a correct count, and xfs_check would complain. Hence a new counter was added to track the number of blocks used by the free space btrees. This is an *on-disk format change*.

As a result of this, lazy superblock counters are a mkfs option, and at the moment on linux there is no way to convert an old filesystem. This is possible - xfs_db can be used to twiddle the right bits and then xfs_repair will do the format conversion for you. Similarly, you can convert backwards as well. At some point we'll add functionality to xfs_admin to do the bit twiddling easily.

SGI-PV: 964999
SGI-Modid: xfs-linux-melb:xfs-kern:28652a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
By David Chinner

When setting the length of the iclogbuf to write out, we should just be changing the desired byte count rather than completely reassociating the buffer memory with the buffer. Reassociating the buffer memory changes the apparent length of the buffer, and hence when we free the buffer we don't free all the vmap()d space we originally allocated.

SGI-PV: 964983
SGI-Modid: xfs-linux-melb:xfs-kern:28640a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
By David Chinner

Don't reference the log buffer after running the callbacks, as a callback can trigger the log buffers to be freed during unmount.

SGI-PV: 964545
SGI-Modid: xfs-linux-melb:xfs-kern:28567a
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
By Christoph Hellwig

Many block drivers (aoe, iscsi) really want refcountable pages in bios, which is what almost everyone sends down. XFS unfortunately has a few places where it sends down buffers that may come from kmalloc, which breaks them. Fix the places that use kmalloc()d buffers.

SGI-PV: 964546
SGI-Modid: xfs-linux-melb:xfs-kern:28562a
Signed-off-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: David Chinner <dgc@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
- 28 Sep 2006, 3 commits
-
-
By Tim Shimmin

space for the unmount record - which becomes a problem in the freeze/thaw scenario.

SGI-PV: 942533
SGI-Modid: xfs-linux-melb:xfs-kern:26815a
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
By Nathan Scott

one page.

SGI-PV: 955302
SGI-Modid: xfs-linux-melb:xfs-kern:26800a
Signed-off-by: Nathan Scott <nathans@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
By Nathan Scott

ramdisks.

SGI-PV: 954802
SGI-Modid: xfs-linux-melb:xfs-kern:26627a
Signed-off-by: Nathan Scott <nathans@sgi.com>
Signed-off-by: Tim Shimmin <tes@sgi.com>
-
- 28 Jul 2006, 1 commit
-
-
By Nathan Scott

flags from iclog buffers before submitting them for writing.

SGI-PV: 954772
SGI-Modid: xfs-linux-melb:xfs-kern:26605a
Signed-off-by: Nathan Scott <nathans@sgi.com>
-
- 28 Jun 2006, 1 commit
-
-
By Nathan Scott

SGI-PV: 904196
SGI-Modid: xfs-linux-melb:xfs-kern:26366a
Signed-off-by: Nathan Scott <nathans@sgi.com>
-