- 13 June 2009, 2 commits
-
-
Committed by Christoph Hellwig
Indentation got messed up when merging the current_umask changes with the generic ACL support. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
Committed by Christoph Hellwig
Fix warnings about uninitialized dquot variables by making sure xfs_qm_vop_dqalloc touches them even when quotas are disabled. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
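A minimal sketch of the idea described above (the types and the quotas_running flag are placeholders, not the real XFS declarations): define the caller-visible dquot output pointers before any quota-enabled check, so callers never consume an uninitialized value.

```c
#include <linux/stddef.h>

struct xfs_dquot;	/* opaque here; forward declarations only */
struct xfs_inode;

/* Simplified shape of the fix; the real function is xfs_qm_vop_dqalloc(). */
int example_vop_dqalloc(struct xfs_inode *ip, int quotas_running,
			struct xfs_dquot **udqpp, struct xfs_dquot **gdqpp)
{
	if (udqpp)
		*udqpp = NULL;		/* touched even when quotas are disabled */
	if (gdqpp)
		*gdqpp = NULL;

	if (!quotas_running)
		return 0;		/* nothing to allocate, outputs still defined */

	/* ... normal dquot lookup/allocation continues here ... */
	return 0;
}
```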
-
- 12 June 2009, 2 commits
-
-
Committed by Felix Blyakher
Regression from commit 28e21170. Need to free the temporary buffer allocated in xfs_getbmap(). Signed-off-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Hedi Berriche <hedi@sgi.com> Reported-by: Justin Piszcz <jpiszcz@lucidpixels.com> Reviewed-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Committed by Christoph Hellwig
The write_super method is used for (1) writing back the superblock periodically from pdflush and (2) being called just before ->sync_fs for data integrity syncs. We don't need (1) because we have our own periodic writeout through xfssyncd, and we don't need (2) because xfs_fs_sync_fs performs a proper synchronous superblock writeout after all other data and metadata has been written out. Also remove ->s_dirt tracking as it's only used to decide when to call ->write_super. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 10 June 2009, 1 commit
-
-
Committed by Christoph Hellwig
This patch rips out the XFS ACL handling code and uses the generic fs/posix_acl.c code instead. The on-disk format is of course left unchanged. This also introduces the same ACL caching all other Linux filesystems do, by adding pointers to the ACL and default ACL in struct xfs_inode. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
- 08 June 2009, 12 commits
-
-
Committed by Christoph Hellwig
SYNC_BDFLUSH is a leftover from IRIX and rather misnamed for today's code. Make xfs_sync_fsdata and xfs_qm_sync use the SYNC_TRYLOCK flag for not blocking on locks, just as the inode sync code already does. For xfs_sync_fsdata it's a trivial 1:1 replacement, but for xfs_qm_sync I use the opportunity to decouple the non-blocking lock case from the different flushing modes, similar to the inode sync code. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
Committed by Christoph Hellwig
We want to wait for all I/O to finish when we do data integrity syncs, so there is no reason to keep SYNC_WAIT separate from SYNC_IOWAIT. This causes a little change in behaviour for the ENOSPC flushing code, which now does a second submission and wait of buffered I/O, but that should finish ASAP as we already did an asynchronous writeout earlier. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
Committed by Christoph Hellwig
xfs_sync_inodes is used to write back either file data or inode metadata. In general we always do these separately, except for one fishy case in xfs_fs_put_super that does both. So separate xfs_sync_inodes into separate xfs_sync_data and xfs_sync_attr functions. In xfs_fs_put_super we first call the data sync and then the attr sync, as that was the previous order. The moved log force in that path doesn't make a difference because we will force the log again as part of the real unmount process. The filesystem read-only checks are not performed by the new functions but instead moved into the callers, given that most callers already have them further up in the stack. Also add debug checks that we do not pass in incorrect flags to the new xfs_sync_data and xfs_sync_attr functions, and fix the one place that did pass in a wrong flag. Also remove a comment mentioning xfs_sync_inodes that has been incorrect for a while, because we always take either the iolock or ilock in the sync path these days. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
Committed by Christoph Hellwig
Use xfs_inode_ag_iterator instead of open-coding the inode walk in the quota code. Mark xfs_inode_ag_iterator and xfs_sync_inode_valid non-static to allow using them from the quota code. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
Committed by Dave Chinner
Given that we walk across the per-AG inode lists so often, it makes sense to introduce an iterator for this. Convert the sync and reclaim code to use this new iterator; the quota code will follow in the next patch. Also change xfs_reclaim_inode to return -EAGAIN instead of 1 for an inode already under reclaim. This simplifies the AG iterator and doesn't matter for the only other caller. [hch: merged the lookup and execute callbacks back into one to get the pag_ici_lock locking correct and simplify the code flow] Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
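To make the shape of the new helper concrete, here is a rough sketch assuming a callback-driven API; the parameter list is paraphrased from the description above, not copied from the actual source, and the caller is hypothetical.

```c
struct xfs_mount;
struct xfs_inode;
struct xfs_perag;

/*
 * Rough shape of the per-AG inode iterator: walk every allocation group,
 * look up inodes in the per-AG radix tree, validate each one, and hand it
 * to the caller's execute callback.  (Paraphrased; the real prototype may
 * differ.)
 */
int xfs_inode_ag_iterator(struct xfs_mount *mp,
			  int (*execute)(struct xfs_inode *ip,
					 struct xfs_perag *pag, int flags),
			  int flags);

/* Hypothetical caller, e.g. sync code flushing every inode: */
static int example_sync_one_inode(struct xfs_inode *ip,
				  struct xfs_perag *pag, int flags)
{
	/* lock the inode, flush it, return 0 or -EAGAIN if it is busy */
	return 0;
}

static int example_sync_all_inodes(struct xfs_mount *mp, int flags)
{
	return xfs_inode_ag_iterator(mp, example_sync_one_inode, flags);
}
```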
-
Committed by Dave Chinner
The noblock parameter of xfs_reclaim_inodes is only ever set to zero. Remove it and all the conditional code that is never executed. Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
Committed by Dave Chinner
Separate the validation of inodes found by the radix tree walk from the radix tree lookup. Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
Committed by Christoph Hellwig
In many cases we only want to sync inode metadata. Split out the inode flushing into a separate helper to prepare factoring the inode sync code. Based on a patch from Dave Chinner, but redone to keep the current behaviour exactly and leave changes to the flushing logic to another patch. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
Committed by Dave Chinner
In many cases we only want to sync inode data. Start splitting the inode sync into data sync and inode sync by factoring out the inode data flush. [hch: minor cleanups] Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
Committed by Christoph Hellwig
Kill the quota ops function vector and replace it with direct calls or stubs in the CONFIG_XFS_QUOTA=n case. Make sure we check XFS_IS_QUOTA_RUNNING in the right spots. We can remove a number of those checks because the XFS_TRANS_DQ_DIRTY flag can't be set otherwise. This brings us back closer to the way this code worked in IRIX and earlier Linux versions, but we keep a lot of the more useful factoring of common code. Eventually we should also kill xfs_qm_bhv.c, but that's left for a later patch. Reduces the size of the source code by about 250 lines and the size of the XFS module by about 1.5 kilobytes with quotas enabled: text data bss dec hex filename 615957 2960 3848 622765 980ad fs/xfs/xfs.o 617231 3152 3848 624231 98667 fs/xfs/xfs.o.old Fallout: - xfs_qm_dqattach is split into xfs_qm_dqattach_locked, which expects the inode locked, and xfs_qm_dqattach, which does the locking around it, thus removing XFS_QMOPT_ILOCKED. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
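The direct-call-or-stub pattern referred to here generally looks like the sketch below, shown for one representative function; this is illustrative and not the exact macros/stubs from the XFS headers.

```c
#include <linux/types.h>

struct xfs_inode;

#ifdef CONFIG_XFS_QUOTA
/* Real implementation lives in the quota code. */
extern int xfs_qm_dqattach(struct xfs_inode *ip, uint flags);
#else
/* Quotas compiled out: the call collapses to a no-op at compile time. */
static inline int xfs_qm_dqattach(struct xfs_inode *ip, uint flags)
{
	return 0;
}
#endif
```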
-
Committed by Christoph Hellwig
Arkadiusz has seen really strange crashes in xfs_qm_dqcheck that I can only explain by a log item being too small to actually fit the xfs_dqblk_t we're dereferencing all over xfs_qm_dqcheck. So add graceful checks for NULL or too-small quota items to the log recovery code. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
Committed by Christoph Hellwig
Commit a6634fba3dec4a92f0a2c4e30c80b634c0576ad5 in xfsprogs increased the maximum log size supported by mkfs. Merge back the changes to xfs_fs.h so that growfs enforces the same limit and the headers are in sync. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
-
- 02 June 2009, 4 commits
-
-
Committed by Felix Blyakher
It's possible to recurse into the filesystem from a memory allocation, which deadlocks in xfs_qm_shake(). Add a check for __GFP_FS, and bail out if it is not set. Signed-off-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Hedi Berriche <hedi@sgi.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
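The guard is a single check at the top of the shrinker callback. A minimal sketch, with the prototype simplified to the two-argument shrinker style of that era (the real function is xfs_qm_shake):

```c
#include <linux/gfp.h>

/*
 * If the allocation that woke the shrinker is not allowed to recurse into
 * filesystem code (__GFP_FS clear), report that we cannot scan right now
 * instead of taking locks that could deadlock against the allocator.
 */
static int example_qm_shake(int nr_to_scan, gfp_t gfp_mask)
{
	if (!(gfp_mask & __GFP_FS))
		return -1;	/* tell shrink_slab() we cannot make progress here */

	/* ... reclaim unused dquots as before ... */
	return 0;
}
```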
-
Committed by Eric Sandeen
In the case where growing a filesystem would leave the last AG too small, the fixup code has an overflow in the calculation of the new size with one fewer AG, because "nagcount" is a 32-bit number. If the new filesystem has > 2^32 blocks in it this causes a problem resulting in an EINVAL return from growfs: # xfs_io -f -c "truncate 19998630180864" fsfile # mkfs.xfs -f -bsize=4096 -dagsize=76288719b,size=3905982455b fsfile # mount -o loop fsfile /mnt # xfs_growfs /mnt meta-data=/dev/loop0 isize=256 agcount=52, agsize=76288719 blks = sectsz=512 attr=2 data = bsize=4096 blocks=3905982455, imaxpct=5 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 log =internal bsize=4096 blocks=32768, version=2 = sectsz=512 sunit=0 blks, lazy-count=0 realtime =none extsz=4096 blocks=0, rtextents=0 xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Invalid argument Reported-by: richard.ems@cape-horn-eng.com Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
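The arithmetic problem is a classic 32-bit multiplication overflow; a self-contained illustration with numbers close to the report above (the values are for demonstration only):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Roughly the geometry after growing the reported filesystem past 2^32 blocks. */
	uint32_t nagcount = 64;		/* AG count, a 32-bit quantity */
	uint32_t agblocks = 76288719;	/* blocks per AG */

	uint32_t wrapped = nagcount * agblocks;			/* 32-bit product: wraps */
	uint64_t correct = (uint64_t)nagcount * agblocks;	/* widened first: no overflow */

	printf("32-bit product: %u blocks (bogus)\n", wrapped);
	printf("64-bit product: %llu blocks\n", (unsigned long long)correct);
	return 0;
}
```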
-
Committed by Felix Blyakher
Regression from commit ef8f7fc5, which rearranged the code in xfs_swap_extents(), leading to a double unlock of the xfs inode ilock. That resulted in xfs_fsr deadlocking itself on platforms which don't handle a double unlock of an rw_semaphore nicely: it caused the count to go negative, which represents a write holder, without really having one. ia64 is one of the platforms where the deadlock was easily reproduced and the fix was tested. Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Eric Sandeen <sandeen@sandeen.net> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
Committed by Felix Blyakher
It's possible to recurse into the filesystem from a memory allocation, which deadlocks in xfs_qm_shake(). Add a check for __GFP_FS, and bail out if it is not set. Signed-off-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Hedi Berriche <hedi@sgi.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
- 27 May 2009, 1 commit
-
-
Committed by Eric Sandeen
In the case where growing a filesystem would leave the last AG too small, the fixup code has an overflow in the calculation of the new size with one fewer AG, because "nagcount" is a 32-bit number. If the new filesystem has > 2^32 blocks in it this causes a problem resulting in an EINVAL return from growfs: # xfs_io -f -c "truncate 19998630180864" fsfile # mkfs.xfs -f -bsize=4096 -dagsize=76288719b,size=3905982455b fsfile # mount -o loop fsfile /mnt # xfs_growfs /mnt meta-data=/dev/loop0 isize=256 agcount=52, agsize=76288719 blks = sectsz=512 attr=2 data = bsize=4096 blocks=3905982455, imaxpct=5 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 log =internal bsize=4096 blocks=32768, version=2 = sectsz=512 sunit=0 blks, lazy-count=0 realtime =none extsz=4096 blocks=0, rtextents=0 xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Invalid argument Reported-by: richard.ems@cape-horn-eng.com Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
- 23 May 2009, 1 commit
-
-
Committed by Martin K. Petersen
Until now we have had a 1:1 mapping between the storage device physical block size and the logical block size used when addressing the device. With SATA 4KB drives coming out that will no longer be the case. The sector size will be 4KB but the logical block size will remain 512 bytes. Hence we need to distinguish between the physical block size and the logical one. This patch renames hardsect_size to logical_block_size. Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
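For filesystem code the practical effect is that the renamed accessor is what gets queried when validating a sector size against the device. A small sketch (bdev_logical_block_size is the post-rename block layer helper; the surrounding function is illustrative):

```c
#include <linux/blkdev.h>
#include <linux/errno.h>

/*
 * Refuse the mount if the filesystem's sector size is smaller than the
 * smallest unit the device can actually be addressed in.
 */
static int example_check_sector_size(struct block_device *bdev,
				     unsigned int fs_sectorsize)
{
	if (fs_sectorsize < bdev_logical_block_size(bdev))
		return -EINVAL;
	return 0;
}
```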
-
- 08 May 2009, 1 commit
-
-
Committed by Felix Blyakher
Regression from commit ef8f7fc5, which rearranged the code in xfs_swap_extents(), leading to a double unlock of the xfs inode ilock. That resulted in xfs_fsr deadlocking itself on platforms which don't handle a double unlock of an rw_semaphore nicely: it caused the count to go negative, which represents a write holder, without really having one. ia64 is one of the platforms where the deadlock was easily reproduced and the fix was tested. Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Eric Sandeen <sandeen@sandeen.net> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
- 30 April 2009, 5 commits
-
-
Committed by Christoph Hellwig
xfs_getbmap (or rather the formatters called by it) copy out the getbmap structures under the ilock, which can deadlock against mmap. This has been reported via bugzilla a while ago (#717) and has recently also shown up via lockdep. So allocate a temporary buffer to format the kernel getbmap structures into and then copy them out after dropping the locks. A little problem with this is that we limit the number of extents we can copy out by the maximum allocation size, but I see no real way around that. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
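The fix follows a common pattern: stage the records in a kernel buffer while the lock is held, drop the lock, then copy to userspace where a page fault can safely recurse into the filesystem. A simplified sketch of that pattern (the record layout and the commented-out lock calls are placeholders, not the real xfs_getbmap code):

```c
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct example_rec {
	__u64	offset;
	__u64	length;
};

static int example_getbmap(void __user *uarg, unsigned int nrecs)
{
	struct example_rec *buf;
	int error = 0;

	buf = kcalloc(nrecs, sizeof(*buf), GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* xfs_ilock(ip, ...): fill buf[0..nrecs-1] from the extent map */
	/* xfs_iunlock(ip, ...): drop the lock before touching user memory */

	if (copy_to_user(uarg, buf, nrecs * sizeof(*buf)))
		error = -EFAULT;	/* page faults here no longer hold the ilock */

	kfree(buf);
	return error;
}
```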
-
Committed by Christoph Hellwig
- reshuffle various conditionals for data vs attr fork to make the code more readable - do fine-grained goto-based error handling - exit early from conditionals instead of keeping a long else branch around - allow kmem_alloc to fail Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
Committed by Olaf Weber
There have been reports where an xfs filesystem was randomly corrupted with fsfuzzer, and xfs failed to handle it gracefully. This patch fixes a couple of the reported problems by providing additional checks in the superblock validation routine. Signed-off-by: Olaf Weber <olaf@sgi.com> Reviewed-by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
Committed by Lachlan McIlroy
We had some systems crash with this stack: [<a00000010000cb20>] ia64_leave_kernel+0x0/0x280 [<a00000021291ca00>] xfs_bmbt_get_startoff+0x0/0x20 [xfs] [<a0000002129080b0>] xfs_bmap_last_offset+0x210/0x280 [xfs] [<a00000021295b010>] xfs_file_last_byte+0x70/0x1a0 [xfs] [<a00000021295b200>] xfs_itruncate_start+0xc0/0x1a0 [xfs] [<a0000002129935f0>] xfs_inactive_free_eofblocks+0x290/0x460 [xfs] [<a000000212998fb0>] xfs_release+0x1b0/0x240 [xfs] [<a0000002129ad930>] xfs_file_release+0x70/0xa0 [xfs] [<a000000100162ea0>] __fput+0x1a0/0x420 [<a000000100163160>] fput+0x40/0x60 The problem here is that xfs_file_last_byte() does not acquire the inode lock and can therefore race with another thread that is modifying the extent list. While xfs_bmap_last_offset() is trying to look up the last extent, some extents were merged and the extent list shrank, so the index we look up is now beyond the end of the extent list and potentially in a freed buffer. Signed-off-by: Lachlan McIlroy <lmcilroy@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
Committed by Christoph Hellwig
xfs_getbmap (or rather the formatters called by it) copy out the getbmap structures under the ilock, which can deadlock against mmap. This has been reported via bugzilla a while ago (#717) and has recently also shown up via lockdep. So allocate a temporary buffer to format the kernel getbmap structures into and then copy them out after dropping the locks. A little problem with this is that we limit the number of extents we can copy out by the maximum allocation size, but I see no real way around that. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
- 29 April 2009, 3 commits
-
-
Committed by Christoph Hellwig
- reshuffle various conditionals for data vs attr fork to make the code more readable - do fine-grained goto-based error handling - exit early from conditionals instead of keeping a long else branch around - allow kmem_alloc to fail Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
Committed by Olaf Weber
There have been reports where an xfs filesystem was randomly corrupted with fsfuzzer, and xfs failed to handle it gracefully. This patch fixes a couple of the reported problems by providing additional checks in the superblock validation routine. Signed-off-by: Olaf Weber <olaf@sgi.com> Reviewed-by: Josef 'Jeff' Sipek <jeffpc@josefsipek.net> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
Committed by Lachlan McIlroy
We had some systems crash with this stack: [<a00000010000cb20>] ia64_leave_kernel+0x0/0x280 [<a00000021291ca00>] xfs_bmbt_get_startoff+0x0/0x20 [xfs] [<a0000002129080b0>] xfs_bmap_last_offset+0x210/0x280 [xfs] [<a00000021295b010>] xfs_file_last_byte+0x70/0x1a0 [xfs] [<a00000021295b200>] xfs_itruncate_start+0xc0/0x1a0 [xfs] [<a0000002129935f0>] xfs_inactive_free_eofblocks+0x290/0x460 [xfs] [<a000000212998fb0>] xfs_release+0x1b0/0x240 [xfs] [<a0000002129ad930>] xfs_file_release+0x70/0xa0 [xfs] [<a000000100162ea0>] __fput+0x1a0/0x420 [<a000000100163160>] fput+0x40/0x60 The problem here is that xfs_file_last_byte() does not acquire the inode lock and can therefore race with another thread that is modifying the extent list. While xfs_bmap_last_offset() is trying to look up the last extent, some extents were merged and the extent list shrank, so the index we look up is now beyond the end of the extent list and potentially in a freed buffer. Signed-off-by: Lachlan McIlroy <lmcilroy@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Felix Blyakher <felixb@sgi.com> Signed-off-by: Felix Blyakher <felixb@sgi.com>
-
- 21 April 2009, 1 commit
-
-
Committed by Li Zefan
Remove open-coded memdup_user(). Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
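memdup_user() bundles the usual allocate-then-copy_from_user sequence into one helper; a minimal illustration of the before/after shape (the surrounding function is hypothetical):

```c
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uaccess.h>

static int example_copy_ioctl_arg(void __user *uarg, size_t len)
{
	void *buf;

	/*
	 * Open-coded version this replaces:
	 *	buf = kmalloc(len, GFP_KERNEL);
	 *	if (!buf)
	 *		return -ENOMEM;
	 *	if (copy_from_user(buf, uarg, len)) {
	 *		kfree(buf);
	 *		return -EFAULT;
	 *	}
	 */
	buf = memdup_user(uarg, len);
	if (IS_ERR(buf))
		return PTR_ERR(buf);	/* -ENOMEM or -EFAULT */

	/* ... use buf ... */

	kfree(buf);
	return 0;
}
```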
-
- 07 April 2009, 7 commits
-
-
Committed by Dave Chinner
The only thing we need to do now when we get an ENOSPC condition during delayed allocation reservation is flush all the other inodes with delalloc blocks on them and retry without EOF preallocation. Remove the unneeded mess that is xfs_flush_space() and just call xfs_flush_inodes() directly from xfs_iomap_write_delay(). Also, change the location of the retry label to avoid trying to do EOF preallocation, because we don't want to do that at ENOSPC. This enables us to remove the BMAPI_SYNC flag as it is no longer used. Signed-off-by: Dave Chinner <david@fromorbit.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Committed by Dave Chinner
If we are creating lots of small files, we can fail to get a reservation for inode create earlier than we should due to EOF preallocation done during delayed allocation reservation. Hence on the first reservation ENOSPC failure, flush all the delayed allocation blocks out of the system and retry. This fixes the last commonly triggered spurious ENOSPC issue that has been reported. Signed-off-by: Dave Chinner <david@fromorbit.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Committed by Dave Chinner
xfs_flush_inodes() currently uses a magic timeout to wait for some inodes to be flushed before returning. This isn't really reliable but used to be the best that could be done due to the deadlock potential of waiting for the entire flush. Now that the inode flush is safe to execute while we hold page and inode locks, we can wait for all the inodes to flush synchronously. Convert the wait mechanism to a completion to do this efficiently. This should remove all remaining spurious ENOSPC errors from the delayed allocation reservation path. This is extracted almost line for line from a larger patch from Mikulas Patocka. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
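The completion-based handoff generally has the shape sketched below: the caller queues a flush request carrying a struct completion and blocks until the worker signals that the flush is done. This is a simplified illustration of the kernel completion API, not the actual xfssyncd plumbing:

```c
#include <linux/completion.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct flush_request {
	struct work_struct	work;
	struct completion	done;
};

static void example_flush_worker(struct work_struct *work)
{
	struct flush_request *req =
		container_of(work, struct flush_request, work);

	/* ... write back the dirty delalloc inodes ... */

	complete(&req->done);	/* wake the waiter only once the flush has finished */
}

/* Synchronous flush: no magic timeout, wait for the real completion. */
static void example_flush_inodes(struct workqueue_struct *wq)
{
	struct flush_request req;

	INIT_WORK(&req.work, example_flush_worker);
	init_completion(&req.done);

	queue_work(wq, &req.work);
	wait_for_completion(&req.done);
}
```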
-
Committed by Dave Chinner
When we are writing to a single file and hit ENOSPC, we trigger a background flush of the inode and try again. Because we hold page locks and the iolock, the flush won't proceed until after we release these locks. This occurs once we've given up and ENOSPC has been reported. Hence if this one is the only dirty inode in the system, we'll get an ENOSPC prematurely. To fix this, remove the async flush from the allocation routines and move it to the top of the write path where we can do a synchronous flush and retry the write again. Only retry once, as a second ENOSPC indicates that we really are out of space. This avoids a page cache deadlock when trying to do this flush synchronously in the allocation layer that was identified by Mikulas Patocka. Signed-off-by: Dave Chinner <david@fromorbit.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
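The retry-once logic at the top of the write path has roughly this shape; the helpers below are placeholder stubs standing in for the real buffered write and the synchronous delalloc flush:

```c
#include <linux/errno.h>
#include <linux/types.h>

struct example_write_ctx;	/* stands in for the iocb/file state */

static ssize_t example_do_buffered_write(struct example_write_ctx *ctx)
{
	return -ENOSPC;		/* placeholder: pretend the filesystem looks full */
}

static void example_flush_delalloc_inodes(struct example_write_ctx *ctx)
{
	/* placeholder: the real code synchronously writes back delalloc inodes */
}

static ssize_t example_write(struct example_write_ctx *ctx)
{
	ssize_t ret;
	int flushed = 0;

retry:
	ret = example_do_buffered_write(ctx);
	if (ret == -ENOSPC && !flushed) {
		/* Flush everyone's delalloc blocks synchronously, then retry once. */
		example_flush_delalloc_inodes(ctx);
		flushed = 1;
		goto retry;
	}
	return ret;	/* a second ENOSPC means we really are out of space */
}
```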
-
Committed by Dave Chinner
Currently xfs_device_flush calls sync_blockdev(), which is a no-op for XFS as all its metadata is held in a different address space to the one sync_blockdev() works on. Call xfs_sync_inodes() instead to flush all the delayed allocation blocks out. To do this as efficiently as possible, do it via two passes - one to do an async flush of all the dirty blocks and a second to wait for all the IO to complete. This requires some modification to the xfs_sync_inodes_ag() flush code to do efficiently. Signed-off-by: Dave Chinner <david@fromorbit.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Committed by Dave Chinner
When trying to reserve log space, we find the amount of space we need, then go to sleep waiting for space. When we are woken, we try to push the tail of the log forward to make sure we have space available. Unfortunately, this means that if there is no space available and everyone who needs space goes to sleep, there is no-one left to push the tail of the log to make space available. Once we have a thread waiting for space to become available, the others queue up behind it in a FIFO, and none of them push the tail of the log. This can result in everyone going to sleep in xlog_grant_log_space() if the first sleeper races with the last I/O that moves the tail of the log forward. With no further I/O to move the tail of the log, there is nothing to wake the sleepers and hence all transactions just stop. Fix this by making sure the xfsaild will create enough space for the transaction that is about to sleep, by moving the push target far enough forwards to ensure that the current process will have enough space available when it is woken. That is, we push the AIL before we go to sleep. Because we've inserted the log ticket into the queue before we've pushed and gone to sleep, subsequent transactions will wait behind this one. Hence we are guaranteed to have space available when we are woken. Signed-off-by: Dave Chinner <david@fromorbit.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Committed by Dave Chinner
Unwritten extent conversion can recurse back into the filesystem due to memory allocation. Memory reclaim requires I/O completions to be processed to allow the callers to make progress. If the I/O completion workqueue thread is doing the recursion, then we have a deadlock situation. Move unwritten extent completion into its own workqueue so it doesn't block I/O completions for normal delayed allocation or overwrite data. Signed-off-by: Dave Chinner <david@fromorbit.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
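Structurally this is just a second workqueue dedicated to conversion work, so reclaim waiting behind a conversion cannot starve ordinary I/O completion processing. A sketch of the setup (queue names follow the description; error handling simplified):

```c
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

/* One queue for normal data I/O completion, one for unwritten extent conversion. */
static struct workqueue_struct *example_datad_wq;
static struct workqueue_struct *example_convertd_wq;

static int __init example_init_workqueues(void)
{
	example_datad_wq = create_workqueue("xfsdatad");
	if (!example_datad_wq)
		return -ENOMEM;

	example_convertd_wq = create_workqueue("xfsconvertd");
	if (!example_convertd_wq) {
		destroy_workqueue(example_datad_wq);
		return -ENOMEM;
	}
	return 0;
}

/*
 * I/O completion handlers then queue unwritten-extent conversion work to
 * example_convertd_wq and everything else to example_datad_wq.
 */
```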
-